Sample records for integrated computational platform

  1. Boutiques: a flexible framework to integrate command-line applications in computing platforms.

    PubMed

    Glatard, Tristan; Kiar, Gregory; Aumentado-Armstrong, Tristan; Beck, Natacha; Bellec, Pierre; Bernard, Rémi; Bonnet, Axel; Brown, Shawn T; Camarasu-Pop, Sorina; Cervenansky, Frédéric; Das, Samir; Ferreira da Silva, Rafael; Flandin, Guillaume; Girard, Pascal; Gorgolewski, Krzysztof J; Guttmann, Charles R G; Hayot-Sasson, Valérie; Quirion, Pierre-Olivier; Rioux, Pierre; Rousseau, Marc-Étienne; Evans, Alan C

    2018-05-01

    We present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications in the neuroinformatics domain. We expect Boutiques to improve the quality of application integration in computational platforms, to reduce redundancy of effort, to contribute to computational reproducibility, and to foster Open Science.
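
    To make the descriptor idea above concrete, the sketch below assembles a minimal Boutiques-style JSON descriptor in Python and writes it to disk. The field names approximate the published Boutiques schema but are not copied from it, and the tool, inputs, and output template are invented for illustration.

    ```python
    import json

    # Illustrative Boutiques-style descriptor. Field names approximate the
    # published schema; the tool and its parameters are hypothetical.
    descriptor = {
        "name": "example_tool",
        "description": "Smooths a NIfTI image (illustrative only).",
        "tool-version": "1.0.0",
        "schema-version": "0.5",
        "command-line": "smooth [INPUT_FILE] [FWHM]",
        "inputs": [
            {"id": "input_file", "name": "Input file", "type": "File",
             "value-key": "[INPUT_FILE]"},
            {"id": "fwhm", "name": "Smoothing kernel FWHM (mm)", "type": "Number",
             "value-key": "[FWHM]", "optional": True},
        ],
        "output-files": [
            {"id": "smoothed", "name": "Smoothed image",
             "path-template": "smoothed_[INPUT_FILE]"},
        ],
    }

    with open("example_tool.json", "w") as fh:
        json.dump(descriptor, fh, indent=2)
    print("wrote example_tool.json")
    ```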

  2. Boutiques: a flexible framework to integrate command-line applications in computing platforms

    PubMed Central

    Glatard, Tristan; Kiar, Gregory; Aumentado-Armstrong, Tristan; Beck, Natacha; Bellec, Pierre; Bernard, Rémi; Bonnet, Axel; Brown, Shawn T; Camarasu-Pop, Sorina; Cervenansky, Frédéric; Das, Samir; Ferreira da Silva, Rafael; Flandin, Guillaume; Girard, Pascal; Gorgolewski, Krzysztof J; Guttmann, Charles R G; Hayot-Sasson, Valérie; Quirion, Pierre-Olivier; Rioux, Pierre; Rousseau, Marc-Étienne; Evans, Alan C

    2018-01-01

    We present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications in the neuroinformatics domain. We expect Boutiques to improve the quality of application integration in computational platforms, to reduce redundancy of effort, to contribute to computational reproducibility, and to foster Open Science. PMID:29718199

  3. FORCEnet Net Centric Architecture - A Standards View

    DTIC Science & Technology

    2006-06-01

    [Extracted text consists of architecture-diagram labels only: user-facing services, shared services, networking/communications, storage, computing platform, data interchange/integration, data management, application, service platform, and service framework layers.]

  4. An integrated compact airborne multispectral imaging system using embedded computer

    NASA Astrophysics Data System (ADS)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the operation of the filter wheel and the stabilized platform, and the acquisition of image and POS data, and it stores the image and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multi-function capability, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
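
    As a rough illustration of the control flow described above, the sketch below shows an acquisition cycle an embedded controller might run: set the exposure, step through the eight filter positions, capture a frame at each, and tag it with POS data. All device functions are hypothetical stubs, not the system's actual interfaces.

    ```python
    import time

    # Hypothetical stubs standing in for the camera, filter wheel, and POS unit.
    def set_camera_exposure(ms):
        print(f"exposure set to {ms} ms")

    def rotate_filter_wheel(slot):
        print(f"filter wheel -> slot {slot}")

    def capture_frame():
        return b"\x00" * 1024  # placeholder image bytes

    def read_pos():
        return {"lat": 0.0, "lon": 0.0, "alt": 1000.0,
                "roll": 0.0, "pitch": 0.0, "yaw": 0.0}

    def acquire_multispectral_set(exposure_ms=10, n_filters=8):
        """One acquisition cycle: one frame per filter, each tagged with POS data."""
        records = []
        set_camera_exposure(exposure_ms)
        for slot in range(n_filters):
            rotate_filter_wheel(slot)
            time.sleep(0.05)  # allow the wheel to settle
            frame = capture_frame()
            records.append({"filter": slot, "pos": read_pos(), "bytes": len(frame)})
        return records

    if __name__ == "__main__":
        for rec in acquire_multispectral_set():
            print(rec)
    ```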

  5. A Geospatial Information Grid Framework for Geological Survey.

    PubMed

    Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong

    2015-01-01

    The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and an evaluation service is introduced in this paper.

  6. A Geospatial Information Grid Framework for Geological Survey

    PubMed Central

    Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong

    2015-01-01

    The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and an evaluation service is introduced in this paper. PMID:26710255

  7. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrate a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model has been developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although it is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.
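
    The abstract's point about handling large datasets directly can be illustrated with a simple data-parallel pattern. The sketch below fans raster tiles out to worker processes with Python's multiprocessing module; the tile directory and the per-tile computation are hypothetical placeholders, not the authors' GIS module.

    ```python
    import glob
    from multiprocessing import Pool

    def process_tile(path):
        """Placeholder per-tile computation (e.g., a classification or index step)."""
        with open(path, "rb") as fh:
            data = fh.read()
        return path, len(data)  # return something cheap to aggregate

    if __name__ == "__main__":
        tiles = sorted(glob.glob("tiles/*.bin"))  # hypothetical tile layout
        with Pool(processes=8) as pool:
            for path, nbytes in pool.imap_unordered(process_tile, tiles):
                print(f"{path}: {nbytes} bytes processed")
    ```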

  8. Computational toxicology using the OpenTox application programming interface and Bioclipse

    PubMed Central

    2011-01-01

    Background Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications. Findings This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplifying communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources. Conclusions A novel computational toxicity assessment platform was generated from integration of two open science platforms related to toxicology: Bioclipse, that combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets by the use of the Open Standards from the OpenTox Application Programming Interface. This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, taking advantage of the Bioclipse workbench handling the technical layers. PMID:22075173
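
    The call pattern described above, a workbench submitting a query molecule to a remote prediction service, can be sketched as a plain HTTP request. The endpoint URL and JSON payload below are placeholders, not the actual OpenTox API, which defines its own resource URIs and representations.

    ```python
    import requests

    # Placeholder endpoint; the real OpenTox services define their own URIs.
    SERVICE_URL = "https://example.org/opentox/model/toxicity-predictor"

    def predict_toxicity(smiles):
        """Submit a query molecule (as SMILES) and return the service's prediction."""
        resp = requests.post(SERVICE_URL,
                             json={"compound": {"smiles": smiles}},
                             timeout=30)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        print(predict_toxicity("c1ccccc1O"))  # phenol as an example query molecule
    ```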

  9. 20170312 - Computer Simulation of Developmental ...

    EPA Pesticide Factsheets

    Rationale: Recent progress in systems toxicology and synthetic biology has paved the way to new thinking about in vitro/in silico modeling of developmental processes and toxicities, both for embryological and reproductive impacts. Novel in vitro platforms such as 3D organotypic culture models, engineered microscale tissues and complex microphysiological systems (MPS), together with computational models and computer simulation of tissue dynamics, lend themselves to integrated testing strategies for predictive toxicology. As these emergent methodologies continue to evolve, they must be integrally tied to maternal/fetal physiology and toxicity of the developing individual across early lifestage transitions, from fertilization to birth, through puberty and beyond. Scope: This symposium will focus on how the novel technology platforms can help, now and in the future, with in vitro/in silico modeling of complex biological systems for developmental and reproductive toxicity issues, and with translating systems models into integrative testing strategies. The symposium is based on three main organizing principles: (1) that novel in vitro platforms with human cells configured in nascent tissue architectures within native microphysiological environments yield mechanistic understanding of developmental and reproductive impacts of drug/chemical exposures; (2) that novel in silico platforms with high-throughput screening (HTS) data, biologically-inspired computational models of

  10. Computer Simulation of Developmental Processes and ...

    EPA Pesticide Factsheets

    Rationale: Recent progress in systems toxicology and synthetic biology has paved the way to new thinking about in vitro/in silico modeling of developmental processes and toxicities, both for embryological and reproductive impacts. Novel in vitro platforms such as 3D organotypic culture models, engineered microscale tissues and complex microphysiological systems (MPS), together with computational models and computer simulation of tissue dynamics, lend themselves to integrated testing strategies for predictive toxicology. As these emergent methodologies continue to evolve, they must be integrally tied to maternal/fetal physiology and toxicity of the developing individual across early lifestage transitions, from fertilization to birth, through puberty and beyond. Scope: This symposium will focus on how the novel technology platforms can help, now and in the future, with in vitro/in silico modeling of complex biological systems for developmental and reproductive toxicity issues, and with translating systems models into integrative testing strategies. The symposium is based on three main organizing principles: (1) that novel in vitro platforms with human cells configured in nascent tissue architectures within native microphysiological environments yield mechanistic understanding of developmental and reproductive impacts of drug/chemical exposures; (2) that novel in silico platforms with high-throughput screening (HTS) data, biologically-inspired computational models of

  11. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform.

    PubMed

    Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N

    2017-03-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
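
    Jenkins jobs can be started remotely over HTTP, which is one way such a pipeline could be driven from scripts or screening-lab tooling. The host, job name, credentials, and parameter names below are placeholders and do not describe any real installation.

    ```python
    import requests

    # Placeholder values; host, job, credentials, and parameters are site-specific.
    JENKINS = "https://jenkins.example.org"
    JOB = "hcs-image-pipeline"
    AUTH = ("svc_user", "api_token")

    def trigger_pipeline(plate_id, cellprofiler_pipeline):
        """Queue a parameterized build of the image-processing job."""
        url = f"{JENKINS}/job/{JOB}/buildWithParameters"
        resp = requests.post(url, auth=AUTH, timeout=30,
                             params={"PLATE_ID": plate_id,
                                     "CP_PIPELINE": cellprofiler_pipeline})
        resp.raise_for_status()
        return resp.headers.get("Location")  # URL of the queued item

    if __name__ == "__main__":
        print(trigger_pipeline("PLATE_0042", "segmentation_v3.cppipe"))
    ```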

  12. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform

    PubMed Central

    Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.

    2016-01-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692

  13. The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science

    PubMed Central

    Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo

    2008-01-01

    The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570

  14. Network-based drug discovery by integrating systems biology and computational technologies

    PubMed Central

    Leung, Elaine L.; Cao, Zhi-Wei; Jiang, Zhi-Hong; Zhou, Hua

    2013-01-01

    Network-based intervention has been a trend in curing systemic diseases, but it relies on regimen optimization and valid multi-target actions of the drugs. The complex multi-component nature of medicinal herbs may serve as a valuable resource for network-based multi-target drug discovery due to its potential treatment effects by synergy. Recently, robust systems biology platforms have proven powerful for uncovering molecular mechanisms and connections between drugs and the dynamic networks they target. However, optimization methods for drug combinations remain insufficient, owing to the lack of tighter integration across multiple '-omics' databases. Newly developed algorithm- or network-based computational models can tightly integrate '-omics' databases and optimize combinational regimens in drug development, which encourages developing medicinal herbs into a new wave of network-based multi-target drugs. However, challenges to further integrating medicinal herb databases with multiple systems biology platforms for multi-target drug optimization remain, owing to the uncertain reliability of individual data sets and the variable width, depth, and degree of standardization of herbal medicine. Standardization of the methodology and terminology of systems biology and herbal databases would facilitate this integration, as would enhancing publicly accessible databases and increasing the number of studies applying systems biology platforms to herbal medicine. Further integration across various '-omics' platforms and computational tools would accelerate development of network-based drug discovery and network medicine. PMID:22877768

  15. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same situation arises in utilizing a cluster of Intel many-integrated-core (MIC) processors, such as the Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirements of general computation, field-programmable gate arrays (FPGAs) may be a better solution for energy efficiency when their computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.
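
    One small piece of the idea, dispatching a kernel to whichever backend suits it, can be sketched in a few lines. The example below falls back from a GPU array library (CuPy) to NumPy depending on availability and problem size; it illustrates backend selection only and is not the architecture proposed in the abstract.

    ```python
    import numpy as np

    try:
        import cupy as cp  # optional GPU backend
        HAVE_GPU = True
    except ImportError:
        HAVE_GPU = False

    def moving_average(series, window, prefer_gpu=True):
        """Run one spatiotemporal kernel on a GPU or CPU backend."""
        if prefer_gpu and HAVE_GPU and series.size > 1_000_000:
            xp, data = cp, cp.asarray(series)
        else:
            xp, data = np, np.asarray(series)
        c = xp.cumsum(data)
        sums = c[window - 1:].copy()
        sums[1:] -= c[:-window]
        result = sums / window
        return cp.asnumpy(result) if HAVE_GPU and xp is cp else result

    if __name__ == "__main__":
        print(moving_average(np.random.rand(10_000), window=24)[:5])
    ```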

  16. A high performance scientific cloud computing environment for materials simulations

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
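
    The "virtual cluster" step can be sketched with the AWS SDK for Python. The AMI ID, key pair, region, and instance type below are placeholders, and this is not the authors' SCC toolset, only the kind of underlying EC2 call it would need to make.

    ```python
    import boto3

    # Placeholder values; the AMI would contain the scientific codes and toolset.
    AMI_ID = "ami-0123456789abcdef0"
    NODE_TYPE = "c5.xlarge"
    N_NODES = 4

    def launch_virtual_cluster():
        """Request a small set of identical EC2 instances and wait until they run."""
        ec2 = boto3.client("ec2", region_name="us-east-1")
        resp = ec2.run_instances(ImageId=AMI_ID, InstanceType=NODE_TYPE,
                                 MinCount=N_NODES, MaxCount=N_NODES,
                                 KeyName="scc-keypair")
        ids = [inst["InstanceId"] for inst in resp["Instances"]]
        ec2.get_waiter("instance_running").wait(InstanceIds=ids)
        return ids

    if __name__ == "__main__":
        print("cluster nodes:", launch_virtual_cluster())
    ```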

  17. ENFIN--A European network for integrative systems biology.

    PubMed

    Kahlem, Pascal; Clegg, Andrew; Reisinger, Florian; Xenarios, Ioannis; Hermjakob, Henning; Orengo, Christine; Birney, Ewan

    2009-11-01

    Integration of biological data of various types and the development of adapted bioinformatics tools represent critical objectives to enable research at the systems level. The European Network of Excellence ENFIN is engaged in developing an adapted infrastructure to connect databases and platforms, to enable both the generation of new bioinformatics tools and the experimental validation of computational predictions. With the aim of bridging the gap between standard wet laboratories and bioinformatics, the ENFIN Network runs integrative research projects to bring the latest computational techniques to bear directly on questions dedicated to systems biology in the wet laboratory environment. The Network maintains close internal collaboration between experimental and computational research, enabling a permanent cycle of experimental validation and improvement of computational prediction methods. The computational work includes the development of a database infrastructure (EnCORE), bioinformatics analysis methods, and FuncNet, a novel platform for protein function analysis.

  18. Research on application information system integration platform in medicine manufacturing enterprise.

    PubMed

    Deng, Wu; Zhao, Huimin; Zou, Li; Li, Yuanyuan; Li, Zhengguang

    2012-08-01

    Computer and information technology has become widespread in medicine manufacturing enterprises because of its potential to improve working efficiency and service quality. In response to the explosive growth of data and information in the application systems of current medicine manufacturing enterprises, we propose a novel application information system integration platform for the medicine manufacturing enterprise, based on a combination of RFID technology and SOA, to implement information sharing and exchange. The platform invokes the RFID middleware across a service interface layer. Loose coupling in the integration solution is realized through Web services. The key techniques of RFID event components and an expanded role-based security access mechanism are studied in detail. Finally, a case study is implemented and tested to demonstrate the application system integration platform in a medicine manufacturing enterprise.

  19. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both geospatial raster database management system and raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need of tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  20. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.

    PubMed

    Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code hosted on the GitHub platform with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.
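
    The JSON-over-REST idea can be illustrated with a minimal client that posts a calculation record to a web API. The endpoint path and JSON layout below are invented for illustration and do not reproduce the platform's actual schema; the energy value is likewise only illustrative.

    ```python
    import requests

    # Placeholder endpoint and record layout; the real platform defines its own.
    API = "https://chemdata.example.org/api/v1/calculations"

    record = {
        "molecule": {"formula": "H2O",
                     "atoms": [{"element": "O", "xyz": [0.000, 0.000, 0.117]},
                               {"element": "H", "xyz": [0.000, 0.757, -0.467]},
                               {"element": "H", "xyz": [0.000, -0.757, -0.467]}]},
        "code": "NWChem",
        "theory": {"method": "B3LYP", "basis": "6-31G*"},
        "properties": {"total_energy_hartree": -76.41},  # illustrative value
    }

    resp = requests.post(API, json=record, timeout=30)
    resp.raise_for_status()
    print("stored record id:", resp.json().get("id"))
    ```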

  21. Integration of a neuroimaging processing pipeline into a pan-Canadian computing grid

    NASA Astrophysics Data System (ADS)

    Lavoie-Courchesne, S.; Rioux, P.; Chouinard-Decorte, F.; Sherif, T.; Rousseau, M.-E.; Das, S.; Adalat, R.; Doyon, J.; Craddock, C.; Margulies, D.; Chu, C.; Lyttelton, O.; Evans, A. C.; Bellec, P.

    2012-02-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  22. MeDICi Software Superglue for Data Analysis Pipelines

    ScienceCinema

    Ian Gorton

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework is an integrated middleware platform developed to meet the data analysis and processing needs of scientists across many domains. MeDICi is scalable, easily modified, and robust across multiple languages, protocols, and hardware platforms, and it is in use today by PNNL scientists for bioinformatics, power grid failure analysis, and text analysis.

  23. Integrated Computer System of Management in Logistics

    NASA Astrophysics Data System (ADS)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  24. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    DOE PAGES

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction - connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code hosted on the GitHub platform with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web - going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.

  25. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction - connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code hosted on the GitHub platform with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web - going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.

  26. Applications integration in a hybrid cloud computing environment: modelling and platform

    NASA Astrophysics Data System (ADS)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds along with their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds and in intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed to improve the feasibility of ISs under hybrid cloud computing environments.

  27. ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.

    PubMed

    Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi

    2017-08-01

    With the growing interest in advanced image-guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as the robot operating system (ROS) in robotics and 3D Slicer in medical image computing, could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between the ROS-based devices and the medical image computing platforms. Performance tests demonstrated that the bridge could stream transforms, strings, points, and images at 30 fps in both directions successfully. The data transfer latency was <1.2 ms for transforms, strings and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D Slicer and Lego Mindstorms with ROS, as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enabled cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of advanced image-based planning/navigation offered by medical image computing software such as 3D Slicer into ROS-based surgical robot systems.
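
    A greatly simplified version of the bridging idea, forwarding bytes received on one TCP endpoint to another, is sketched below. It does not implement the OpenIGTLink message format, which defines its own binary headers; addresses are placeholders, with 18944 being the port conventionally associated with OpenIGTLink.

    ```python
    import socket

    # Simplified relay: forward raw bytes from one TCP connection to another.
    # The real bridge speaks the OpenIGTLink binary protocol, not reproduced here.
    LISTEN_ADDR = ("0.0.0.0", 18944)      # conventional OpenIGTLink port
    FORWARD_ADDR = ("127.0.0.1", 18945)   # hypothetical downstream consumer

    def relay_once():
        with socket.create_server(LISTEN_ADDR) as server:
            conn, _ = server.accept()
            with conn, socket.create_connection(FORWARD_ADDR) as downstream:
                while True:
                    chunk = conn.recv(4096)
                    if not chunk:
                        break
                    downstream.sendall(chunk)

    if __name__ == "__main__":
        relay_once()
    ```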

  28. Tele-Medicine Applications of an ISDN-Based Tele-Working Platform

    DTIC Science & Technology

    2001-10-25

    developed over the Hellenic Integrated Services Digital Network (ISDN), is based on user terminals (personal computers), networking apparatus, and a...key infrastructure, ready to offer enhanced message switching and translation in response to market trends [8]. Three (3) years ago, the Hellenic PTT...should outcome to both an integrated Tele- Working platform, a main central database (completed with maintenance facilities), and a ready-to-be

  29. An integrated biotechnology platform for developing sustainable chemical processes.

    PubMed

    Barton, Nelson R; Burgard, Anthony P; Burk, Mark J; Crater, Jason S; Osterhout, Robin E; Pharkya, Priti; Steer, Brian A; Sun, Jun; Trawick, John D; Van Dien, Stephen J; Yang, Tae Hoon; Yim, Harry

    2015-03-01

    Genomatica has established an integrated computational/experimental metabolic engineering platform to design, create, and optimize novel high performance organisms and bioprocesses. Here we present our platform and its use to develop E. coli strains for production of the industrial chemical 1,4-butanediol (BDO) from sugars. A series of examples are given to demonstrate how a rational approach to strain engineering, including carefully designed diagnostic experiments, provided critical insights about pathway bottlenecks, byproducts, expression balancing, and commercial robustness, leading to a superior BDO production strain and process.

  30. Power Efficient Hardware Architecture of SHA-1 Algorithm for Trusted Mobile Computing

    NASA Astrophysics Data System (ADS)

    Kim, Mooseop; Ryou, Jaecheol

    The Trusted Mobile Platform (TMP) is developed and promoted by the Trusted Computing Group (TCG), an industry standards body working to enhance the security of the mobile computing environment. The built-in SHA-1 engine in TMP is one of the most important circuit blocks and contributes to the performance of the whole platform because it is used as a key primitive supporting platform integrity and command authentication. Mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore, special architecture and design methods for a low-power SHA-1 circuit are required. In this paper, we present a novel and efficient hardware architecture of a low-power SHA-1 design for TMP. Our low-power SHA-1 hardware can compute a 512-bit data block using fewer than 7,000 gates and has a power consumption of about 1.1 mA on a 0.25 μm CMOS process.
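
    As a software reference point for what the hardware engine computes, the standard library's SHA-1 can be applied to one 512-bit (64-byte) block of data. This is ordinary host-side code, not the low-power TMP circuit described in the abstract.

    ```python
    import hashlib

    # SHA-1 over one 512-bit (64-byte) block of example data.
    block = bytes(range(64))
    digest = hashlib.sha1(block).hexdigest()
    print(f"SHA-1 of a {len(block) * 8}-bit block: {digest}")
    ```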

  31. Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.

    PubMed

    Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu

    2015-01-01

    The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data required to be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It also provides bioinformatics functionalities including sequence alignment, active site pose prediction and protein-ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem-solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users, using container-based virtualization with OpenVZ.
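
    The property-prediction step of a lead-optimisation workflow can be sketched with an open-source cheminformatics toolkit. RDKit is used below purely as a stand-in; the abstract does not state which engine ChemInfoCloud uses, and the molecules are arbitrary examples.

    ```python
    from rdkit import Chem
    from rdkit.Chem import Descriptors

    # Generic property-prediction step over a tiny candidate set.
    candidates = {
        "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
        "caffeine": "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
    }

    for name, smiles in candidates.items():
        mol = Chem.MolFromSmiles(smiles)
        print(name,
              "MolWt:", round(Descriptors.MolWt(mol), 1),
              "LogP:", round(Descriptors.MolLogP(mol), 2))
    ```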

  32. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating-system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer-based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms is replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.

  33. SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications

    PubMed Central

    Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.

    2018-01-01

    The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069

  34. SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications.

    PubMed

    Kalinin, Alexandr A; Palanimalai, Selvam; Dinov, Ivo D

    2017-04-01

    The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis.

  35. PNNL Data-Intensive Computing for a Smarter Energy Grid

    ScienceCinema

    Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform for data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling the development of visualizations of grid operations and vulnerabilities, with the goal of near real-time analysis to aid operators in preventing and mitigating grid failures.

  36. Sharing Health Big Data for Research - A Design by Use Cases: The INSHARE Platform Approach.

    PubMed

    Bouzillé, Guillaume; Westerlynck, Richard; Defossez, Gautier; Bouslimi, Dalel; Bayat, Sahar; Riou, Christine; Busnel, Yann; Le Guillou, Clara; Cauvin, Jean-Michel; Jacquelinet, Christian; Pladys, Patrick; Oger, Emmanuel; Stindel, Eric; Ingrand, Pierre; Coatrieux, Gouenou; Cuggia, Marc

    2017-01-01

    Sharing and exploiting Health Big Data (HBD) raises several challenges: data protection and governance must take legal, ethical, and deontological aspects into account to enable a trusted, transparent, and win-win relationship between researchers, citizens, and data providers; and interoperability is lacking, with compartmentalized and syntactically/semantically heterogeneous data. The INSHARE project explores, through an experimental proof of concept, how recent technologies can overcome such issues. Involving six data providers, the platform is designed in three steps: (1) analyze use cases, needs, and requirements; (2) define the data sharing governance and secure access to the platform; and (3) define the platform specifications. Three use cases, drawn from 5 studies and 11 data sources, were analyzed for the platform design. The governance, derived from the SCANNER model, was adapted to data sharing. The platform architecture integrates a data repository and hosting, semantic integration services, data processing, aggregate computing, data quality and integrity monitoring, ID linking, a multi-source query builder, visualization and data export services, data governance, a study management service, and security including data watermarking.

  37. System Architecture Development for Energy and Water Infrastructure Data Management and Geovisual Analytics

    NASA Astrophysics Data System (ADS)

    Berres, A.; Karthik, R.; Nugent, P.; Sorokine, A.; Myers, A.; Pang, H.

    2017-12-01

    Building an integrated data infrastructure that can meet the needs of sustainable energy-water resource management requires a robust data management and geovisual analytics platform, capable of cross-domain scientific discovery and knowledge generation. Such a platform can facilitate the investigation of diverse complex research and policy questions for emerging priorities in Energy-Water Nexus (EWN) science areas. Using advanced data analytics, machine learning techniques, multi-dimensional statistical tools, and interactive geovisualization components, such a multi-layered federated platform, the Energy-Water Nexus Knowledge Discovery Framework (EWN-KDF), is being developed. This platform utilizes several enterprise-grade software design concepts and standards, such as extensible service-oriented architecture, open standard protocols, an event-driven programming model, an enterprise service bus, and adaptive user interfaces, to provide strategic value to the integrative computational and data infrastructure. EWN-KDF is built on the Compute and Data Environment for Science (CADES) at Oak Ridge National Laboratory (ORNL).

  38. Integrative structure modeling with the Integrative Modeling Platform.

    PubMed

    Webb, Benjamin; Viswanath, Shruthi; Bonomi, Massimiliano; Pellarin, Riccardo; Greenberg, Charles H; Saltzberg, Daniel; Sali, Andrej

    2018-01-01

    Building models of a biological system that are consistent with the myriad data available is one of the key challenges in biology. Modeling the structure and dynamics of macromolecular assemblies, for example, can give insights into how biological systems work, evolved, might be controlled, and even designed. Integrative structure modeling casts the building of structural models as a computational optimization problem, for which information about the assembly is encoded into a scoring function that evaluates candidate models. Here, we describe our open source software suite for integrative structure modeling, Integrative Modeling Platform (https://integrativemodeling.org), and demonstrate its use. © 2017 The Protein Society.
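
    The core idea, encoding data as restraints in a scoring function and optimizing candidate models against it, can be shown with a toy example in plain NumPy. This does not use the Integrative Modeling Platform's own API; the "data" are synthetic pairwise distances and the optimizer is a deliberately crude random search.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: noisy pairwise distances between four 3D points, used as restraints.
    true_xyz = rng.normal(size=(4, 3))
    pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
    observed = {(i, j): np.linalg.norm(true_xyz[i] - true_xyz[j]) + rng.normal(0, 0.05)
                for i, j in pairs}

    def score(xyz):
        """Sum of squared violations of the distance restraints."""
        return sum((np.linalg.norm(xyz[i] - xyz[j]) - d) ** 2
                   for (i, j), d in observed.items())

    # Crude optimization: random perturbations, keep improvements.
    model = rng.normal(size=(4, 3))
    best = score(model)
    for _ in range(20000):
        trial = model + rng.normal(0, 0.05, size=model.shape)
        s = score(trial)
        if s < best:
            model, best = trial, s
    print("final restraint score:", round(best, 4))
    ```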

  39. Monolithic silicon-photonic platforms in state-of-the-art CMOS SOI processes [Invited].

    PubMed

    Stojanović, Vladimir; Ram, Rajeev J; Popović, Milos; Lin, Sen; Moazeni, Sajjad; Wade, Mark; Sun, Chen; Alloatti, Luca; Atabaki, Amir; Pavanello, Fabio; Mehta, Nandish; Bhargava, Pavan

    2018-05-14

    Integrating photonics with advanced electronics leverages transistor performance, process fidelity and package integration, to enable a new class of systems-on-a-chip for a variety of applications ranging from computing and communications to sensing and imaging. Monolithic silicon photonics is a promising solution to meet the energy efficiency, sensitivity, and cost requirements of these applications. In this review paper, we take a comprehensive view of the performance of the silicon-photonic technologies developed to date for photonic interconnect applications. We also present the latest performance and results of our "zero-change" silicon photonics platforms in 45 nm and 32 nm SOI CMOS. The results indicate that the 45 nm and 32 nm processes provide a "sweet-spot" for adding photonic capability and enhancing integrated system applications beyond the Moore-scaling, while being able to offload major communication tasks from more deeply-scaled compute and memory chips without complicated 3D integration approaches.

  40. Digital imaging of root traits (DIRT): a high-throughput computing and collaboration platform for field-based root phenomics.

    PubMed

    Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander

    2015-01-01

    Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate the progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual and low-throughput. Here, we present an open-source phenomics platform, "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons", enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field-based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high-volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots. It enables scientists to store, manage and share crop root images with metadata and to compute RSA traits from thousands of images in parallel. It makes high-throughput RSA trait computation available to the community with just a few button clicks. As such, it enables plant scientists to spend more time on science rather than on technology. All stored and computed data are easily accessible to the public and the broader scientific community. We hope that easy data accessibility will attract new tool developers and spur creative data usage that may even be applied to other fields of science.

  1. A platform for population-based weight management: description of a health plan-based integrated systems approach.

    PubMed

    Pronk, Nicolaas P; Boucher, Jackie L; Gehling, Eve; Boyle, Raymond G; Jeffery, Robert W

    2002-10-01

    To describe an integrated, operational platform from which mail- and telephone-based health promotion programs are implemented and to specifically relate this approach to weight management programming in a managed care setting. In-depth description of essential systems structures, including people, computer technology, and decision-support protocols. The roles of support staff, counselors, a librarian, and a manager in delivering a weight management program are described. Information availability using computer technology is a critical component in making this system effective and is presented according to its architectural layout and design. Protocols support counselors and administrative support staff in decision making, and a detailed flowchart presents the layout of this part of the system. This platform is described in the context of a weight management program, and we present baseline characteristics of 1801 participants, their behaviors, self-reported medical conditions, and initial pattern of enrollment in the various treatment options. Considering the prevalence and upward trend of overweight and obesity in the United States, a need exists for robust intervention platforms that can systematically support multiple types of programs. Weight management interventions implemented using this platform are scalable to the population level and are sustainable over time despite the limits of defined resources and budgets. The present article describes an innovative approach to reaching a large population with effective programs in an integrated, coordinated, and systematic manner. This comprehensive, robust platform represents an example of how obesity prevention and treatment research may be translated into the applied setting.

  2. Toward an integrated software platform for systems pharmacology

    PubMed Central

    Ghosh, Samik; Matsuoka, Yukiko; Asai, Yoshiyuki; Hsin, Kun-Yi; Kitano, Hiroaki

    2013-01-01

    Understanding complex biological systems requires the extensive support of computational tools. This is particularly true for systems pharmacology, which aims to understand the action of drugs and their interactions in a systems context. Computational models play an important role as they can be viewed as an explicit representation of biological hypotheses to be tested. A range of software and data resources is used for model development, verification, and exploration of system behaviors that would not be possible, or not cost-effective, to study by experiments alone. Software platforms play a dominant role in supporting creativity and productivity and have transformed many industries; the same techniques can be applied to biology as well. Establishing an integrated software platform will be the next important step in the field. © 2013 The Authors. Biopharmaceutics & Drug Disposition published by John Wiley & Sons, Ltd. PMID:24150748

  3. Surviving sepsis--a 3D integrative educational simulator.

    PubMed

    Ježek, Filip; Tribula, Martin; Kulhánek, Tomáš; Mateják, Marek; Privitzer, Pavol; Šilar, Jan; Kofránek, Jiří; Lhotská, Lenka

    2015-08-01

    Computer technology offers greater educational possibilities, notably simulation and virtual reality. This paper presents a technology that integrates multiple modalities, namely 3D virtual reality, a node-based simulator, the Physiomodel explorer, and explanatory physiological simulators, employing the Modelica language and the Unity3D platform. This emerging tool chain should allow the authors to concentrate more on educational content instead of application development. The technology is demonstrated through a Surviving Sepsis educational scenario targeted at the Microsoft Windows Store platform.

  4. A generic, cost-effective, and scalable cell lineage analysis platform

    PubMed Central

    Biezuner, Tamir; Spiro, Adam; Raz, Ofir; Amir, Shiran; Milo, Lilach; Adar, Rivka; Chapal-Ilani, Noa; Berman, Veronika; Fried, Yael; Ainbinder, Elena; Cohen, Galit; Barr, Haim M.; Halaban, Ruth; Shapiro, Ehud

    2016-01-01

    Advances in single-cell genomics enable commensurate improvements in methods for uncovering lineage relations among individual cells. Current sequencing-based methods for cell lineage analysis depend on low-resolution bulk analysis or rely on extensive single-cell sequencing, which is not scalable and could be biased by functional dependencies. Here we show an integrated biochemical-computational platform for generic single-cell lineage analysis that is retrospective, cost-effective, and scalable. It consists of a biochemical-computational pipeline that inputs individual cells, produces targeted single-cell sequencing data, and uses it to generate a lineage tree of the input cells. We validated the platform by applying it to cells sampled from an ex vivo grown tree and analyzed its feasibility landscape by computer simulations. We conclude that the platform may serve as a generic tool for lineage analysis and thus pave the way toward large-scale human cell lineage discovery. PMID:27558250
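    As an illustration of the computational half of such a pipeline, the sketch below clusters cells by the distance between hypothetical per-cell mutation profiles and prints the leaf order of the resulting tree. The profiles, the city-block distance, and average-linkage clustering are assumptions for the illustration, not the authors' algorithm.

      # Sketch: reconstructing a lineage tree from per-cell mutation profiles.
      # Profiles and the use of average-linkage clustering are illustrative only.
      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.cluster.hierarchy import linkage, dendrogram

      # Rows = cells, columns = targeted loci; values = observed repeat-length shifts.
      profiles = np.array([
          [0, 1, 0, 2, 0],   # cell A
          [0, 1, 0, 2, 1],   # cell B (close to A)
          [3, 0, 1, 0, 0],   # cell C
          [3, 0, 2, 0, 0],   # cell D (close to C)
      ])

      dist = pdist(profiles, metric="cityblock")    # pairwise distances between cells
      tree = linkage(dist, method="average")        # hierarchical lineage tree
      print(dendrogram(tree, no_plot=True)["ivl"])  # leaf order of the inferred tree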

  5. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures currently are undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  6. Time Triggered Protocol (TTP) for Integrated Modular Avionics

    NASA Technical Reports Server (NTRS)

    Motzet, Guenter; Gwaltney, David A.; Bauer, Guenther; Jakovljevic, Mirko; Gagea, Leonard

    2006-01-01

    Traditional avionics computing systems are federated, with each system provided on a number of dedicated hardware units. Federated applications are physically separated from one another and analysis of the systems is undertaken individually. Integrated Modular Avionics (IMA) takes these federated functions and integrates them on a common computing platform in a tightly deterministic distributed real-time network of computing modules in which the different applications can run. IMA supports different levels of criticality in the same computing resource and provides a platform for implementation of fault tolerance through hardware and application redundancy. Modular implementation has distinct benefits in design, testing and system maintainability. This paper covers the requirements for fault tolerant bus systems used to provide reliable communication between IMA computing modules. An overview of the Time Triggered Protocol (TTP) specification and implementation as a reliable solution for IMA systems is presented. Application examples in aircraft avionics and a development system for future space application are covered. The commercially available TTP controller can also be implemented in an FPGA, and the results from implementation studies are covered. Finally, future directions for the application of TTP and related development activities are presented.
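    TTP communication is driven by a static, globally known TDMA schedule: each node transmits only in its pre-assigned slot of the cluster cycle, which is what makes the bus temporally deterministic. The toy schedule below illustrates the idea; slot lengths and node names are invented and are not taken from the TTP specification, where the schedule is defined offline in each node's message descriptor list.

      # Toy time-triggered (TDMA) communication round. Slot lengths and node
      # names are illustrative; real TTP schedules are defined offline.
      SLOT_TABLE = [            # (node, slot length in microseconds)
          ("guidance", 500),
          ("navigation", 500),
          ("control", 750),
          ("io_module", 250),
      ]

      def cluster_cycle(start_us=0):
          """Yield (node, window_start, window_end) for one TDMA round."""
          t = start_us
          for node, length in SLOT_TABLE:
              yield node, t, t + length
              t += length

      for node, t0, t1 in cluster_cycle():
          print(f"{node:10s} may transmit only in [{t0:5d}, {t1:5d}) us")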

  7. Educational process in modern climatology within the web-GIS platform "Climate"

    NASA Astrophysics Data System (ADS)

    Gordova, Yulia; Gorbatenko, Valentina; Gordov, Evgeny; Martynova, Yulia; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara

    2013-04-01

    The problem of training scientists in the environmental sciences, common these days to all scientific fields, is exacerbated by the need to develop new computational and information technology skills for work in distributed multi-disciplinary teams. To address this and other pressing problems of Earth system sciences, a software infrastructure for information support of integrated research in the geosciences was created based on modern information and computational technologies, and a software and hardware platform "Climate" (http://climate.scert.ru/) was developed. In addition to the direct analysis of geophysical data archives, the platform is aimed at teaching the basics of the study of changes in regional climate. The educational component of the platform includes a series of lectures on climate, environmental and meteorological modeling and laboratory work cycles on the basics of analysis of current and potential future regional climate change, using the territory of Siberia as an example. The educational process within the platform is implemented using the distance learning system Moodle (www.moodle.org). This work is partially supported by the Ministry of education and science of the Russian Federation (contract #8345), SB RAS project VIII.80.2.1, RFBR grant #11-05-01190a, and integrated project SB RAS #131.

  8. Large-scale quantum photonic circuits in silicon

    NASA Astrophysics Data System (ADS)

    Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk

    2016-08-01

    Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from ~30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards large-scale source integration. Finally, we review monolithic integration strategies for single-photon detectors and their essential role in on-chip feed-forward operations.
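    The classical hardness invoked here stems from the fact that each output probability of an n-photon linear-optical interferometer is proportional to the squared modulus of the permanent of an n x n submatrix of the interferometer's unitary, and no efficient classical algorithm for the permanent is known. The brute-force sketch below makes the factorial scaling explicit for small n; it is a pedagogical illustration, not a simulation of the cited experiments.

      # Brute-force permanent of an n x n matrix: a sum over n! permutations,
      # which is why sampling from ~30-photon interference strains classical machines.
      import itertools
      import numpy as np

      def permanent(M):
          n = M.shape[0]
          return sum(
              np.prod([M[i, p[i]] for i in range(n)])
              for p in itertools.permutations(range(n))
          )

      # Random unitary via QR decomposition; the probability of one output pattern
      # is proportional to |Perm(submatrix)|^2 (collision-free case assumed).
      n = 4
      Q, _ = np.linalg.qr(np.random.randn(n, n) + 1j * np.random.randn(n, n))
      print(abs(permanent(Q)) ** 2)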

  9. The application of a Web-geographic information system for improving urban water cycle modelling.

    PubMed

    Mair, M; Mikovits, C; Sengthaler, M; Schöpf, M; Kinzel, H; Urich, C; Kleidorfer, M; Sitzenfrei, R; Rauch, W

    2014-01-01

    Research in urban water management has experienced a transition from traditional model applications to modelling water cycles as an integrated part of urban areas. This includes the interlinking of models of many research areas (e.g. urban development, socio-economy, urban water management). The integration and simulation are realized in newly developed frameworks (e.g. DynaMind and OpenMI) and often assume a high level of programming knowledge. This work presents a Web-based urban water management modelling platform which simplifies the setup and usage of complex integrated models. The platform is demonstrated with a small application example on a case study within the Alpine region. The model used is a DynaMind model that benchmarks the impact of newly connected catchments on the flooding behaviour of an existing combined sewer system. The user's workflow within a Web browser is demonstrated and benchmark results are shown. The presented platform hides implementation-specific aspects behind Web-services-based technologies so that users can focus on their main aim, which is urban water management modelling and benchmarking. Moreover, this platform offers centralized data management, automatic software updates and access to high-performance computers from desktop computers and mobile devices.

  10. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. Data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for data storage, computing and analysis. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed based on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build a multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that can be applied to other domains with a spatial component. We tested the performance of the platform based on taxi trajectory analysis. Results suggested that GISpark achieves excellent run time performance in spatiotemporal big data applications.
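    GISpark's own interfaces are not reproduced here, but the Spark programming model it builds on can be illustrated with a small PySpark job that bins taxi GPS points into a spatial grid, in the spirit of the trajectory benchmark mentioned above. The CSV path, the column names (lon, lat) and the 0.01-degree cell size are assumptions for the sketch.

      # Spark-style spatiotemporal aggregation (illustrative; not GISpark's API):
      # bin taxi GPS points into a 0.01-degree grid and count points per cell.
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("taxi-grid-demo").getOrCreate()

      points = spark.read.csv("taxi_trajectories.csv", header=True, inferSchema=True)

      grid_counts = (
          points
          .withColumn("cell_x", F.floor(F.col("lon") / 0.01))
          .withColumn("cell_y", F.floor(F.col("lat") / 0.01))
          .groupBy("cell_x", "cell_y")
          .count()
          .orderBy(F.desc("count"))
      )

      grid_counts.show(10)   # ten busiest grid cells
      spark.stop()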

  11. Cloud Computing for DoD

    DTIC Science & Technology

    2012-05-01

    Excerpt: NASA Nebula Platform - a cloud computing pilot program at NASA Ames that integrates open-source components into a seamless, self-[...]; mission support; education and public outreach (NASA Nebula, 2010). NSF-supported cloud research - support for cloud computing in [...]. References: Mell, P. & Grance, T. (2011). The NIST Definition of Cloud Computing. NIST Special Publication 800-145; NASA Nebula (2010). Retrieved from [...]

  12. The NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform to Support the Analysis of Petascale Environmental Data Collections

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.

    2014-12-01

    The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within an HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections - in particular the climate, observational data and geoscientific domains. This paper examines the infrastructure, innovation and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large valuable data assets, new opportunities have arisen by harmonising the data collections, making for a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment: a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected, large-scale, high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity alone. This ensures that software is both upgradable and maintainable, can be readily reused within complex integrated systems, and can become part of the growing global set of trusted community tools for cross-disciplinary research.

  13. Wolfram technologies as an integrated scalable platform for interactive learning

    NASA Astrophysics Data System (ADS)

    Kaurov, Vitaliy

    2012-02-01

    We rely on technology profoundly with the prospect of even greater integration in the future. Well known challenges in education are a technology-inadequate curriculum and many software platforms that are difficult to scale or interconnect. We'll review an integrated technology, much of it free, that addresses these issues for individuals and small schools as well as for universities. Topics include: Mathematica, a programming environment that offers a diverse range of functionality; natural language programming for getting started quickly and accessing data from Wolfram|Alpha; quick and easy construction of interactive courseware and scientific applications; partnering with publishers to create interactive e-textbooks; course assistant apps for mobile platforms; the computable document format (CDF); teacher-student and student-student collaboration on interactive projects and web publishing at the Wolfram Demonstrations site.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trujillo, Angelina Michelle

    Strategy, planning, acquiring: very large-scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by three years; procurement can take another year or more. Integration: after acquisition, machines must be integrated into the computing environments at LANL, connected to scalable storage via large-scale storage networking, and verified for correct and secure operation. Management and utilization: ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.

  15. Final Report. Center for Scalable Application Development Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellor-Crummey, John

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  16. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics

    PubMed Central

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe

    2015-01-01

    Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org PMID:25527831

  17. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics.

    PubMed

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A; Caron, Christophe

    2015-05-01

    The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). contact@workflow4metabolomics.org. © The Author 2014. Published by Oxford University Press.

  18. The Design of Modular Web-Based Collaboration

    NASA Astrophysics Data System (ADS)

    Intapong, Ploypailin; Settapat, Sittapong; Kaewkamnerdpong, Boonserm; Achalakul, Tiranee

    Online collaborative systems are popular communication channels as the systems allow people from various disciplines to interact and collaborate with ease. The systems provide communication tools and services that can be integrated on the web; consequently, the systems are more convenient to use and easier to install. Nevertheless, most of the currently available systems are designed according to some specific requirements and cannot be straightforwardly integrated into various applications. This paper provides the design of a new collaborative platform, which is component-based and re-configurable. The platform is called the Modular Web-based Collaboration (MWC). MWC shares the same concept as computer supported collaborative work (CSCW) and computer-supported collaborative learning (CSCL), but it provides configurable tools for online collaboration. Each tool module can be integrated into users' web applications freely and easily. This makes the collaborative system flexible, adaptable and suitable for online collaboration.

  19. Atomdroid: a computational chemistry tool for mobile platforms.

    PubMed

    Feldt, Jonas; Mata, Ricardo A; Dieterich, Johannes M

    2012-04-23

    We present the implementation of a new molecular mechanics program designed for use in mobile platforms, the first specifically built for these devices. The software is designed to run on Android operating systems and is compatible with several modern tablet-PCs and smartphones available on the market. It includes molecular viewer/builder capabilities with integrated routines for geometry optimizations and Monte Carlo simulations. These functionalities allow it to work as a stand-alone tool. We discuss some particular development aspects, as well as the overall feasibility of using computational chemistry software packages in mobile platforms. Benchmark calculations show that, through efficient implementation techniques, even hand-held devices can be used to simulate midsized systems using force fields.
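    To give a feel for the force-field Monte Carlo workloads such a mobile code handles, the sketch below runs a Metropolis Monte Carlo loop on a small Lennard-Jones cluster in reduced units. The cluster size, step count and displacement are illustrative and unrelated to Atomdroid's internals.

      # Metropolis Monte Carlo on a small Lennard-Jones cluster (reduced units).
      # Illustrative of a force-field MC workload; not Atomdroid's implementation.
      import numpy as np

      rng = np.random.default_rng(0)

      def lj_energy(coords, eps=1.0, sigma=1.0):
          """Total pairwise Lennard-Jones energy."""
          e = 0.0
          for i in range(len(coords)):
              for j in range(i + 1, len(coords)):
                  r = np.linalg.norm(coords[i] - coords[j])
                  e += 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
          return e

      coords = rng.uniform(0.0, 2.0, size=(8, 3))      # 8 atoms in a small box
      energy, kT = lj_energy(coords), 1.0

      for step in range(100):
          trial = coords.copy()
          trial[rng.integers(len(coords))] += rng.uniform(-0.1, 0.1, size=3)
          e_trial = lj_energy(trial)
          if e_trial < energy or rng.random() < np.exp(-(e_trial - energy) / kT):
              coords, energy = trial, e_trial           # accept the move

      print(f"final energy: {energy:.3f} (reduced units)")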

  20. Development of a Computer-Assisted Instrumentation Curriculum for Physics Students: Using LabVIEW and Arduino Platform

    ERIC Educational Resources Information Center

    Kuan, Wen-Hsuan; Tseng, Chi-Hung; Chen, Sufen; Wong, Ching-Chang

    2016-01-01

    We propose an integrated curriculum to establish essential abilities of computer programming for the freshmen of a physics department. The implementation of the graphical-based interfaces from Scratch to LabVIEW then to LabVIEW for Arduino in the curriculum "Computer-Assisted Instrumentation in the Design of Physics Laboratories" brings…

  1. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  2. From WSN towards WoT: Open API Scheme Based on oneM2M Platforms.

    PubMed

    Kim, Jaeho; Choi, Sung-Chan; Ahn, Il-Yeup; Sung, Nak-Myoung; Yun, Jaeseok

    2016-10-06

    Conventional computing systems have been able to be integrated into daily objects and connected to each other due to advances in computing and network technologies, such as wireless sensor networks (WSNs), forming a global network infrastructure, called the Internet of Things (IoT). To support the interconnection and interoperability between heterogeneous IoT systems, the availability of standardized, open application programming interfaces (APIs) is one of the key features of common software platforms for IoT devices, gateways, and servers. In this paper, we present a standardized way of extending previously-existing WSNs towards IoT systems, building the world of the Web of Things (WoT). Based on the oneM2M software platforms developed in the previous project, we introduce a well-designed open API scheme and device-specific thing adaptation software (TAS) enabling WSN elements, such as a wireless sensor node, to be accessed in a standardized way on a global scale. Three pilot services are implemented (i.e., a WiFi-enabled smart flowerpot, voice-based control for ZigBee-connected home appliances, and WiFi-connected AR.Drone control) to demonstrate the practical usability of the open API scheme and TAS modules. Full details on the method of integrating WSN elements into three example systems are described at the programming code level, which is expected to help future researchers in integrating their WSN systems in IoT platforms, such as oneM2M. We hope that the flexibly-deployable, easily-reusable common open API scheme and TAS-based integration method working with the oneM2M platforms will help the conventional WSNs in diverse industries evolve into the emerging WoT solutions.
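    In the oneM2M HTTP binding, a sensor reading is typically pushed as a contentInstance resource under a container hosted by the common services entity. The sketch below shows roughly what such a call from a TAS module might look like; the CSE base URL, resource path, originator and request-identifier values are assumptions, and exact header conventions vary between oneM2M platform implementations, so the target platform's documentation is authoritative.

      # Hedged sketch of pushing a sensor reading over a oneM2M-style HTTP binding.
      # Host, resource tree, originator and request ID are hypothetical; consult
      # the target platform for its exact header and payload conventions.
      import json
      import requests

      CSE_BASE = "http://onem2m.example.org:7579/cse-base"       # assumed CSE base
      CONTAINER = f"{CSE_BASE}/flowerpot_ae/soil_moisture"       # assumed container

      headers = {
          "X-M2M-Origin": "S_flowerpot",            # originator (assumed AE identifier)
          "X-M2M-RI": "req-0001",                   # request identifier
          "Content-Type": "application/json;ty=4",  # ty=4 -> contentInstance
      }
      body = {"m2m:cin": {"con": json.dumps({"moisture": 41.2, "unit": "%"})}}

      resp = requests.post(CONTAINER, headers=headers, json=body, timeout=5)
      print(resp.status_code, resp.text)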

  3. From WSN towards WoT: Open API Scheme Based on oneM2M Platforms

    PubMed Central

    Kim, Jaeho; Choi, Sung-Chan; Ahn, Il-Yeup; Sung, Nak-Myoung; Yun, Jaeseok

    2016-01-01

    Conventional computing systems have been able to be integrated into daily objects and connected to each other due to advances in computing and network technologies, such as wireless sensor networks (WSNs), forming a global network infrastructure, called the Internet of Things (IoT). To support the interconnection and interoperability between heterogeneous IoT systems, the availability of standardized, open application programming interfaces (APIs) is one of the key features of common software platforms for IoT devices, gateways, and servers. In this paper, we present a standardized way of extending previously-existing WSNs towards IoT systems, building the world of the Web of Things (WoT). Based on the oneM2M software platforms developed in the previous project, we introduce a well-designed open API scheme and device-specific thing adaptation software (TAS) enabling WSN elements, such as a wireless sensor node, to be accessed in a standardized way on a global scale. Three pilot services are implemented (i.e., a WiFi-enabled smart flowerpot, voice-based control for ZigBee-connected home appliances, and WiFi-connected AR.Drone control) to demonstrate the practical usability of the open API scheme and TAS modules. Full details on the method of integrating WSN elements into three example systems are described at the programming code level, which is expected to help future researchers in integrating their WSN systems in IoT platforms, such as oneM2M. We hope that the flexibly-deployable, easily-reusable common open API scheme and TAS-based integration method working with the oneM2M platforms will help the conventional WSNs in diverse industries evolve into the emerging WoT solutions. PMID:27782058

  4. Integration of lyoplate based flow cytometry and computational analysis for standardized immunological biomarker discovery.

    PubMed

    Villanova, Federica; Di Meglio, Paola; Inokuma, Margaret; Aghaeepour, Nima; Perucha, Esperanza; Mollon, Jennifer; Nomura, Laurel; Hernandez-Fuentes, Maria; Cope, Andrew; Prevost, A Toby; Heck, Susanne; Maino, Vernon; Lord, Graham; Brinkman, Ryan R; Nestle, Frank O

    2013-01-01

    Discovery of novel immune biomarkers for monitoring of disease prognosis and response to therapy in immune-mediated inflammatory diseases is an important unmet clinical need. Here, we establish a novel framework for immunological biomarker discovery, comparing a conventional (liquid) flow cytometry platform (CFP) and a unique lyoplate-based flow cytometry platform (LFP) in combination with advanced computational data analysis. We demonstrate that LFP had higher sensitivity compared to CFP, with increased detection of cytokines (IFN-γ and IL-10) and activation markers (Foxp3 and CD25). Fluorescent intensity of cells stained with lyophilized antibodies was increased compared to cells stained with liquid antibodies. LFP, using a plate loader, allowed medium-throughput processing of samples with comparable intra- and inter-assay variability between platforms. Automated computational analysis identified novel immunophenotypes that were not detected with manual analysis. Our results establish a new flow cytometry platform for standardized and rapid immunological biomarker discovery with wide application to immune-mediated diseases.

  5. Integration of Lyoplate Based Flow Cytometry and Computational Analysis for Standardized Immunological Biomarker Discovery

    PubMed Central

    Villanova, Federica; Di Meglio, Paola; Inokuma, Margaret; Aghaeepour, Nima; Perucha, Esperanza; Mollon, Jennifer; Nomura, Laurel; Hernandez-Fuentes, Maria; Cope, Andrew; Prevost, A. Toby; Heck, Susanne; Maino, Vernon; Lord, Graham; Brinkman, Ryan R.; Nestle, Frank O.

    2013-01-01

    Discovery of novel immune biomarkers for monitoring of disease prognosis and response to therapy in immune-mediated inflammatory diseases is an important unmet clinical need. Here, we establish a novel framework for immunological biomarker discovery, comparing a conventional (liquid) flow cytometry platform (CFP) and a unique lyoplate-based flow cytometry platform (LFP) in combination with advanced computational data analysis. We demonstrate that LFP had higher sensitivity compared to CFP, with increased detection of cytokines (IFN-γ and IL-10) and activation markers (Foxp3 and CD25). Fluorescent intensity of cells stained with lyophilized antibodies was increased compared to cells stained with liquid antibodies. LFP, using a plate loader, allowed medium-throughput processing of samples with comparable intra- and inter-assay variability between platforms. Automated computational analysis identified novel immunophenotypes that were not detected with manual analysis. Our results establish a new flow cytometry platform for standardized and rapid immunological biomarker discovery with wide application to immune-mediated diseases. PMID:23843942

  6. Framework Programmable Platform for the Advanced Software Development Workstation (FPP/ASDW). Demonstration framework document. Volume 1: Concepts and activity descriptions

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paul S.; Crump, John W.; Ackley, Keith A.

    1992-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at effectively combining tool and data integration mechanisms with a model of the software development process to provide an intelligent integrated software development environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The Advanced Software Development Workstation (ASDW) program is conducting research into development of advanced technologies for Computer Aided Software Engineering (CASE).

  7. COBALT: Development of a Platform to Flight Test Lander GN&C Technologies on Suborbital Rockets

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Seubert, Carl R.; Amzajerdian, Farzin; Bergh, Chuck; Kourchians, Ara; Restrepo, Carolina I.; Villapando, Carlos Y.; O'Neal, Travis V.; Robertson, Edward A.; Pierrottet, Diego

    2017-01-01

    The NASA COBALT Project (CoOperative Blending of Autonomous Landing Technologies) is developing and integrating new precision-landing Guidance, Navigation and Control (GN&C) technologies, along with developing a terrestrial flight-test platform for Technology Readiness Level (TRL) maturation. The current technologies include a third-generation Navigation Doppler Lidar (NDL) sensor for ultra-precise velocity and line-of-sight (LOS) range measurements, and the Lander Vision System (LVS) that provides passive-optical Terrain Relative Navigation (TRN) estimates of map-relative position. The COBALT platform is self-contained and includes the NDL and LVS sensors, blending filter, a custom compute element, power unit, and communication system. The platform incorporates a structural frame that has been designed to integrate with the payload frame onboard the new Masten Xodiac vertical take-off, vertical landing (VTVL) terrestrial rocket vehicle. Ground integration and testing is underway, and terrestrial flight testing onboard Xodiac is planned for 2017 with two flight campaigns: one open-loop and one closed-loop.

  8. BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations

    NASA Astrophysics Data System (ADS)

    Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I.; Strydis, Christos

    2017-12-01

    Objective. The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit for a homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, a NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload’s performance characteristics. Main results. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. Significance. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.
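    Because BrainFrame exposes PyNN, a user-facing experiment script looks like any other PyNN model; the platform, not the script, selects the accelerator back-end. The minimal example below uses the standard PyNN API with the NEST back-end as a stand-in, since the abstract does not name a BrainFrame-specific back-end module, and the network parameters are arbitrary.

      # Minimal PyNN network; in BrainFrame the back-end (Xeon-Phi, GPU or
      # Dataflow Engine) would be chosen by the platform. 'pyNN.nest' is a stand-in.
      import pyNN.nest as sim

      sim.setup(timestep=0.1)                          # ms

      # 100 conductance-based integrate-and-fire neurons driven by Poisson noise.
      neurons = sim.Population(100, sim.IF_cond_exp(tau_m=20.0), label="excitatory")
      noise = sim.Population(100, sim.SpikeSourcePoisson(rate=50.0))

      sim.Projection(noise, neurons,
                     sim.FixedProbabilityConnector(p_connect=0.1),
                     synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0))

      neurons.record("spikes")
      sim.run(500.0)                                   # ms

      data = neurons.get_data("spikes")
      print(data.segments[0].spiketrains[:3])          # first three spike trains
      sim.end()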

  9. BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations.

    PubMed

    Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I; Strydis, Christos

    2017-12-01

    The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit for a homogeneous acceleration platform to effectively address the complete array of modeling requirements. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, a NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload's performance characteristics. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.

  10. jAMVLE, a New Integrated Molecular Visualization Learning Environment

    ERIC Educational Resources Information Center

    Bottomley, Steven; Chandler, David; Morgan, Eleanor; Helmerhorst, Erik

    2006-01-01

    A new computer-based molecular visualization tool has been developed for teaching, and learning, molecular structure. This java-based jmol Amalgamated Molecular Visualization Learning Environment (jAMVLE) is platform-independent, integrated, and interactive. It has an overall graphical user interface that is intuitive and easy to use. The…

  11. Hardware platforms for MEMS gyroscope tuning based on evolutionary computation using open-loop and closed -loop frequency response

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Ferguson, Michael I.; Fink, Wolfgang; Oks, Boris; Peay, Chris; Terrile, Richard; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David

    2005-01-01

    We propose a tuning method for MEMS gyroscopes based on evolutionary computation that efficiently increases the sensitivity of MEMS gyroscopes through tuning. The tuning method was tested for the second-generation JPL/Boeing Post-resonator MEMS gyroscope using the measurement of the frequency response of the MEMS device in open-loop operation. We also report on the development of a hardware platform for integrated tuning and closed-loop operation of MEMS gyroscopes. The control of this device is implemented through a digital design on a Field Programmable Gate Array (FPGA). The hardware platform easily transitions to an embedded solution that allows for the miniaturization of the system to a single chip.
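    In essence, the bias-voltage vector is treated as a genome and an evolutionary loop minimizes the split between the drive- and sense-mode resonant frequencies extracted from the measured frequency response. The sketch below substitutes a toy quadratic model for the real instrument, so the fitness function, population size and voltage ranges are purely illustrative.

      # Toy evolutionary search for bias voltages that minimize the frequency split
      # between drive and sense modes. 'measure_split' stands in for the real
      # open-loop frequency-response measurement.
      import numpy as np

      rng = np.random.default_rng(1)
      TARGET = np.array([3.2, -1.1, 0.7, 2.5])          # unknown "ideal" voltages

      def measure_split(voltages):
          """Pretend frequency split (Hz) as a function of four bias voltages."""
          return float(np.sum((voltages - TARGET) ** 2))

      pop = rng.uniform(-5.0, 5.0, size=(40, 4))        # 40 candidate voltage sets

      for generation in range(60):
          fitness = np.array([measure_split(v) for v in pop])
          parents = pop[np.argsort(fitness)[:10]]        # keep the 10 best
          children = parents[rng.integers(10, size=30)] + rng.normal(0.0, 0.2, (30, 4))
          pop = np.vstack([parents, children])           # elitism + mutated offspring

      best = pop[np.argmin([measure_split(v) for v in pop])]
      print("best voltages:", np.round(best, 2), "split:", round(measure_split(best), 4))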

  12. Cloudgene: A graphical execution platform for MapReduce programs on private and public clouds

    PubMed Central

    2012-01-01

    Background The MapReduce framework enables scalable processing and analysis of large datasets by distributing the computational load across connected computer nodes, referred to as a cluster. In Bioinformatics, MapReduce has already been adopted in various case scenarios such as mapping next-generation sequencing data to a reference genome, finding SNPs from short-read data or matching strings in genotype files. Nevertheless, tasks like installing and maintaining MapReduce on a cluster system, importing data into its distributed file system or executing MapReduce programs require advanced knowledge in computer science and could thus prevent scientists from using currently available and useful software solutions. Results Here we present Cloudgene, a freely available platform to improve the usability of MapReduce programs in Bioinformatics by providing a graphical user interface for the execution, the import and export of data and the reproducibility of workflows on in-house (private clouds) and rented clusters (public clouds). The aim of Cloudgene is to build a standardized graphical execution environment for currently available and future MapReduce programs, which can all be integrated by using its plug-in interface. Since Cloudgene can be executed on private clusters, sensitive datasets can be kept in house at all times and data transfer times are therefore minimized. Conclusions Our results show that MapReduce programs can be integrated into Cloudgene with little effort and without adding any computational overhead to existing programs. This platform gives developers the opportunity to focus on the actual implementation task and provides scientists a platform with the aim of hiding the complexity of MapReduce. In addition to MapReduce programs, Cloudgene can also be used to launch predefined systems (e.g. Cloud BioLinux, RStudio) in public clouds. Currently, five different bioinformatic programs using MapReduce and two systems are integrated and have been successfully deployed. Cloudgene is freely available at http://cloudgene.uibk.ac.at. PMID:22888776
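    Independently of Cloudgene's plug-in interface, whose format is not described here, the MapReduce model it wraps can be shown in a few lines: a map step that emits key-value pairs per record and a reduce step that aggregates values per key. The genotype records below are invented for the illustration.

      # The MapReduce model in miniature: count genotype calls per SNP.
      # This illustrates the programming model Cloudgene wraps, not its plug-in API.
      from collections import defaultdict

      records = [                      # hypothetical (snp_id, genotype) records
          ("rs123", "AA"), ("rs123", "AG"), ("rs456", "GG"),
          ("rs123", "AA"), ("rs456", "AG"),
      ]

      def map_phase(record):
          snp, genotype = record
          yield (snp, genotype), 1     # emit one key-value pair per call

      def reduce_phase(pairs):
          counts = defaultdict(int)
          for key, value in pairs:
              counts[key] += value     # aggregate all values sharing a key
          return dict(counts)

      intermediate = [kv for rec in records for kv in map_phase(rec)]
      print(reduce_phase(intermediate))
      # {('rs123', 'AA'): 2, ('rs123', 'AG'): 1, ('rs456', 'GG'): 1, ('rs456', 'AG'): 1}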

  13. Scalable, Lightweight, Integrated and Quick-to-Assemble (SLIQ) Hyperdrives for Functional Circuit Dissection.

    PubMed

    Liang, Li; Oline, Stefan N; Kirk, Justin C; Schmitt, Lukas Ian; Komorowski, Robert W; Remondes, Miguel; Halassa, Michael M

    2017-01-01

    Independently adjustable multielectrode arrays are routinely used to interrogate neuronal circuit function, enabling chronic in vivo monitoring of neuronal ensembles in freely behaving animals at single-cell, single-spike resolution. Despite the importance of this approach, its widespread use is limited by highly specialized design and fabrication methods. To address this, we have developed a Scalable, Lightweight, Integrated and Quick-to-assemble multielectrode array platform. This platform additionally integrates optical fibers with independently adjustable electrodes to allow simultaneous single-unit recordings and circuit-specific optogenetic targeting and/or manipulation. In current designs, the fully assembled platforms are scalable from 2 to 32 microdrives, and yet weigh only 1-3 g, light enough for small animals. Here, we describe the design process starting from intent in computer-aided design, parameter testing through finite element analysis and experimental means, and implementation of various applications across mice and rats. Combined, our methods may expand the utility of multielectrode recordings and their continued integration with other tools enabling functional dissection of intact neural circuits.

  14. Leveraging Computer-Mediated Communication Technologies to Enhance Interactions in Online Learning

    ERIC Educational Resources Information Center

    Wright, Linda J.

    2011-01-01

    Computer-mediated communication (CMC) technologies have been an integral part of distance education for many years. They are found in both synchronous and asynchronous platforms and are intended to enhance the learning experience for students. CMC technologies add an interactive element to the online learning environment. The findings from this…

  15. Fourier transform spectrometer controller for partitioned architectures

    NASA Astrophysics Data System (ADS)

    Tamas-Selicean, D.; Keymeulen, D.; Berisford, D.; Carlson, R.; Hand, K.; Pop, P.; Wadsworth, W.; Levy, R.

    The current trend in spacecraft computing is to integrate applications of different criticality levels on the same platform using no separation. This approach increases the complexity of the development, verification and integration processes, with an impact on the whole system life cycle. Researchers at ESA and NASA advocated for the use of partitioned architecture to reduce this complexity. Partitioned architectures rely on platform mechanisms to provide robust temporal and spatial separation between applications. Such architectures have been successfully implemented in several industries, such as avionics and automotive. In this paper we investigate the challenges of developing and the benefits of integrating a scientific instrument, namely a Fourier Transform Spectrometer, in such a partitioned architecture.
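    Robust temporal separation in such architectures typically comes from a statically defined major frame in which each partition owns fixed time windows, so a misbehaving application cannot consume processor time budgeted to a more critical one. The toy scheduler below illustrates the idea only; partition names and window lengths are invented, and in a flight system the windows are enforced by the kernel or hypervisor, not by application code.

      # Toy time-partitioned major frame: each partition runs only inside its
      # statically assigned window. Names and durations are illustrative.
      MAJOR_FRAME = [                 # (partition, window length in ms)
          ("fts_instrument", 20),     # the Fourier transform spectrometer application
          ("housekeeping", 10),
          ("comms", 20),
      ]

      def partition_workload(name, budget_ms):
          print(f"  {name}: ran within its {budget_ms} ms window")

      def run_major_frame(frame_index):
          print(f"major frame {frame_index}:")
          elapsed = 0
          for partition, length in MAJOR_FRAME:
              # Spatial separation (private memory) is assumed; only the
              # temporal window is modelled here.
              partition_workload(partition, length)
              elapsed += length
          print(f"  total frame length: {elapsed} ms")

      for i in range(2):
          run_major_frame(i)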

  16. Information-computational platform for collaborative multidisciplinary investigations of regional climatic changes and their impacts

    NASA Astrophysics Data System (ADS)

    Gordov, Evgeny; Lykosov, Vasily; Krupchatnikov, Vladimir; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara

    2013-04-01

    Analysis of the growing volume of climate-change-related data from sensors and model outputs requires collaborative multidisciplinary efforts of researchers. Doing this in a timely and reliable way requires a modern information-computational infrastructure supporting integrated studies in the field of environmental sciences. The recently developed experimental software and hardware platform "Climate" (http://climate.scert.ru/) provides the required environment for regional climate change related investigations. The platform combines a modern web 2.0 approach, GIS functionality and capabilities to run climate and meteorological models, process large geophysical datasets and support relevant analysis. It also supports joint software development by distributed research groups, and the organization of thematic education for students and post-graduate students. In particular, the platform software includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Runs of the integrated WRF and "Planet Simulator" models, as well as preprocessing and visualization of modeling results, are also supported. All functions of the platform are accessible to a user through a web portal using a common graphical web browser in the form of an interactive graphical user interface which provides, in particular, capabilities for selecting a geographical region of interest (pan and zoom), manipulating data layers (order, enable/disable, feature extraction) and visualizing results. The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes in the framework of different multidisciplinary studies. Using it, even a user without specific knowledge can perform reliable computational processing and visualization of large meteorological, climatic and satellite monitoring datasets through a unified graphical web interface. Partial support of RF Ministry of Education and Science grant 8345, SB RAS Program VIII.80.2 and Projects 69, 131, 140 and APN CBA2012-16NSY project is acknowledged.

  17. Analysis of outcomes in radiation oncology: An integrated computational platform

    PubMed Central

    Liu, Dezhi; Ajlouni, Munther; Jin, Jian-Yue; Ryu, Samuel; Siddiqui, Farzan; Patel, Anushka; Movsas, Benjamin; Chetty, Indrin J.

    2009-01-01

    Radiotherapy research and outcome analyses are essential for evaluating new methods of radiation delivery and for assessing the benefits of a given technology on locoregional control and overall survival. In this article, a computational platform is presented to facilitate radiotherapy research and outcome studies in radiation oncology. This computational platform consists of (1) an infrastructural database that stores patient diagnosis, IMRT treatment details, and follow-up information, (2) an interface tool that is used to import and export IMRT plans in DICOM RT and AAPM/RTOG formats from a wide range of planning systems to facilitate reproducible research, (3) a graphical data analysis and programming tool that visualizes all aspects of an IMRT plan including dose, contour, and image data to aid the analysis of treatment plans, and (4) a software package that calculates radiobiological models to evaluate IMRT treatment plans. Given the limited number of general-purpose computational environments for radiotherapy research and outcome studies, this computational platform represents a powerful and convenient tool that is well suited for analyzing dose distributions biologically and correlating them with the delivered radiation dose distributions and other patient-related clinical factors. In addition the database is web-based and accessible by multiple users, facilitating its convenient application and use. PMID:19544785
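    A typical radiobiological model evaluated on such a platform is the generalized equivalent uniform dose, gEUD = (sum_i v_i d_i^a)^(1/a), computed from a differential dose-volume histogram (DVH), where v_i is the fractional volume receiving dose d_i and a is a structure-specific parameter. The sketch below implements that standard formula; the example DVH bins and the choice a = -10 (a tumour-like setting) are illustrative only and are not taken from the article.

      # Generalized equivalent uniform dose (gEUD) from a differential DVH:
      #   gEUD = (sum_i v_i * d_i**a) ** (1/a), with v_i normalized to sum to 1.
      # Example DVH values and the parameter a are illustrative.
      import numpy as np

      def geud(dose_bins_gy, fractional_volumes, a):
          v = np.asarray(fractional_volumes, dtype=float)
          v = v / v.sum()                       # normalize volumes
          d = np.asarray(dose_bins_gy, dtype=float)
          return float(np.sum(v * d ** a) ** (1.0 / a))

      dose = [58.0, 60.0, 62.0, 64.0]            # Gy, bin centres of a target DVH
      volume = [0.10, 0.40, 0.40, 0.10]          # fraction of the structure per bin

      print(f"gEUD (a = -10): {geud(dose, volume, a=-10):.2f} Gy")   # tumour-like
      print(f"mean dose (a = 1): {geud(dose, volume, a=1):.2f} Gy")  # reduces to the mean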

  18. The Effects of Integrating Social Learning Environment with Online Learning

    ERIC Educational Resources Information Center

    Raspopovic, Miroslava; Cvetanovic, Svetlana; Medan, Ivana; Ljubojevic, Danijela

    2017-01-01

    The aim of this paper is to present the learning and teaching styles using the Social Learning Environment (SLE), which was developed based on the computer supported collaborative learning approach. To avoid burdening learners with multiple platforms and tools, SLE was designed and developed in order to integrate existing systems, institutional…

  19. LLNL Partners with IBM on Brain-Like Computing Chip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Essen, Brian

    Lawrence Livermore National Laboratory (LLNL) will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery – a mere 2.5 watts of power. The brain-like, neural network design of the IBM Neuromorphic System is able to infer complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips.

  20. LLNL Partners with IBM on Brain-Like Computing Chip

    ScienceCinema

    Van Essen, Brian

    2018-06-25

    Lawrence Livermore National Laboratory (LLNL) will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery – a mere 2.5 watts of power. The brain-like, neural network design of the IBM Neuromorphic System is able to infer complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lingerfelt, Eric J; Endeve, Eirik; Hui, Yawei

    Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales, many spectroscopic modes, and now--with the rise of multimodal acquisition systems and the associated processing capability--the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation and manage uploaded data files via an intuitive, cross-platform client user interface. This framework delivers authenticated, "push-button" execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing compute-and-data cloud infrastructures and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF).

  2. The Cyborg Astrobiologist: testing a novelty detection algorithm on two mobile exploration systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Gross, C.; Wendt, L.; Bonnici, A.; Souza-Egipsy, V.; Ormö, J.; Díaz-Martínez, E.; Foing, B. H.; Bose, R.; Walter, S.; Oesker, M.; Ontrup, J.; Haschke, R.; Ritter, H.

    2010-01-01

    In previous work, a platform was developed for testing computer-vision algorithms for robotic planetary exploration. This platform consisted of a digital video camera connected to a wearable computer for real-time processing of images at geological and astrobiological field sites. The real-time processing included image segmentation and the generation of interest points based upon uncommonness in the segmentation maps. Also in previous work, this platform for testing computer-vision algorithms has been ported to a more ergonomic alternative platform, consisting of a phone camera connected via the Global System for Mobile Communications (GSM) network to a remote-server computer. The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon colour, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colours to test this algorithm. The algorithm robustly recognized previously observed units by their colour, while requiring only a single image or a few images to learn colours as familiar, demonstrating its fast learning capability.
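
    The colour-based novelty detection can be illustrated independently of the Hopfield network reported in the paper. The following Python sketch is a deliberately simplified histogram-based stand-in: it remembers which hue bins have appeared in previously learned images and flags pixels falling into unseen bins as novel; the images and bin count are hypothetical.

      import numpy as np

      class ColorNoveltyDetector:
          """Toy novelty detector: remember which hue bins have been seen before."""
          def __init__(self, n_bins=32):
              self.n_bins = n_bins
              self.seen = np.zeros(n_bins, dtype=bool)

          def _bins(self, hue_image):
              # hue_image: 2-D array of hue values in [0, 1)
              return np.clip((hue_image * self.n_bins).astype(int), 0, self.n_bins - 1)

          def learn(self, hue_image):
              self.seen[np.unique(self._bins(hue_image))] = True

          def novelty_map(self, hue_image):
              # True where the hue bin was never seen in any learned image
              return ~self.seen[self._bins(hue_image)]

      rng = np.random.default_rng(0)
      familiar = rng.uniform(0.0, 0.3, size=(64, 64))     # e.g. reddish outcrop hues
      detector = ColorNoveltyDetector()
      detector.learn(familiar)

      new_scene = familiar.copy()
      new_scene[20:30, 20:30] = 0.55                      # e.g. a greenish lichen patch
      print(detector.novelty_map(new_scene).sum(), "pixels flagged as novel")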

  3. A decision support model for investment on P2P lending platform.

    PubMed

    Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao

    2017-01-01

    Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges in making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Based on the bipartite graph model, we built an iterative computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two models. The experimental results of the hybrid classification model demonstrate that the Logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the Logistic classification model) is more efficient and stable than either individual model alone.
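
    One way to read the iterative computation on the lender-loan bipartite graph is as a HITS-style mutual-reinforcement update, in which loan scores and lender scores are alternately propagated through the adjacency matrix until they stabilize. The Python sketch below is only an illustration under that reading; the adjacency matrix, normalization and number of iterations are hypothetical and not the authors' exact update rule.

      import numpy as np

      # Hypothetical bipartite adjacency: A[i, j] = 1 if lender i invested in loan j
      A = np.array([[1, 1, 0, 0],
                    [0, 1, 1, 0],
                    [0, 0, 1, 1],
                    [1, 0, 0, 1]], dtype=float)

      lender_score = np.ones(A.shape[0])
      for _ in range(50):                       # iterate until the scores stabilize
          loan_score = A.T @ lender_score       # loans backed by good lenders look better
          loan_score /= np.linalg.norm(loan_score)
          lender_score = A @ loan_score         # lenders holding good loans look better
          lender_score /= np.linalg.norm(lender_score)

      print("loan scores:  ", np.round(loan_score, 3))
      print("lender scores:", np.round(lender_score, 3))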

  4. A decision support model for investment on P2P lending platform

    PubMed Central

    Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao

    2017-01-01

    Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges in making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Based on the bipartite graph model, we built an iterative computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two models. The experimental results of the hybrid classification model demonstrate that the Logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the Logistic classification model) is more efficient and stable than either individual model alone. PMID:28877234

  5. Industrial Cloud: Toward Inter-enterprise Integration

    NASA Astrophysics Data System (ADS)

    Wlodarczyk, Tomasz Wiktor; Rong, Chunming; Thorsen, Kari Anne Haaland

    The industrial cloud is introduced as a new inter-enterprise integration concept in cloud computing. The characteristics of an industrial cloud are given by its definition and architecture and compared with other general cloud concepts. The concept is then demonstrated by a practical use case, based on Integrated Operations (IO) on the Norwegian Continental Shelf (NCS), showing how an industrial digital information integration platform gives a competitive advantage to the companies involved. Further research and development challenges are also discussed.

  6. Silicon photonics integrated circuits: a manufacturing platform for high density, low power optical I/O's.

    PubMed

    Absil, Philippe P; Verheyen, Peter; De Heyn, Peter; Pantouvaki, Marianna; Lepage, Guy; De Coster, Jeroen; Van Campenhout, Joris

    2015-04-06

    Silicon photonics integrated circuits are considered to enable future computing systems with optical input-outputs co-packaged with CMOS chips to circumvent the limitations of electrical interfaces. In this paper we present the recent progress made to enable dense multiplexing by exploiting the integration advantage of silicon photonics integrated circuits. We also discuss the manufacturability of such circuits, a key factor for a wide adoption of this technology.

  7. Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform

    NASA Astrophysics Data System (ADS)

    Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian

    2017-04-01

    The constellation of observational satellites orbiting around Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge from it. Sentinel-2 satellites, part of the Copernicus Earth Observation program, aim to be used in agriculture, forestry and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites but is limited to the resources provided by a stand-alone computer. In this paper we present a cloud based software platform that makes use of this toolbox together with other remote sensing software applications to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task could rely on a different software technology (such as Grass GIS and ESA's SNAP) in order to process the input data. One important feature of the BIGEARTH platform comes from this possibility of interconnection and integration, throughout the same flow of processing, of the various well known software technologies. All this integration is transparent from the user perspective. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic one the software platform runs as a standalone application inside a virtual machine. Obviously in this case the computational resources are limited but it will give an overview of the functionalities of the software platform, and also the possibility to define the flow of processing and later on to execute it on a more complex infrastructure. The most complex and robust configuration is based on cloud computing and allows the installation on a private or public cloud infrastructure. In this configuration, the processing resources can be dynamically allocated and the execution time can be considerably improved by the available virtual resources and the number of parallelizable sequences in the processing flow. The presentation highlights the benefits and issues of the proposed solution by analyzing some significant experimental use cases. Main references for further information: [1] BigEarth project, http://cgis.utcluj.ro/projects/bigearth [2] Constantin Nandra, Dorian Gorgan: "Defining Earth data batch processing tasks by means of a flexible workflow description language", ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-4, 59-66, (2016). [3] Victor Bacu, Teodor Stefanut, Dorian Gorgan, "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp.444-454, (2015).
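
    The chain-of-tasks idea behind the processing flow can be sketched in a few lines of Python: each task consumes the previous task's output, and tasks may in principle be backed by different tools. The task names, the dictionary hand-off and the product name below are hypothetical illustrations, not the WorDeL language or the BIGEARTH API.

      # Minimal chain-of-tasks sketch: each step consumes the previous step's output.
      def ingest_product(path):
          return {"source": path, "bands": ["B02", "B03", "B04", "B08"]}

      def atmospheric_correction(product):
          return {**product, "corrected": True}

      def compute_ndvi(product):
          # NDVI = (NIR - RED) / (NIR + RED); recorded symbolically here
          return {**product, "ndvi": "(B08 - B04) / (B08 + B04)"}

      workflow = [ingest_product, atmospheric_correction, compute_ndvi]

      result = "S2A_MSIL1C_example.SAFE"        # hypothetical Sentinel-2 product name
      for task in workflow:
          result = task(result)
      print(result)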

  8. III-V/Si Nanoscale Lasers and Their Integration with Silicon Photonics

    NASA Astrophysics Data System (ADS)

    Bondarenko, Olesya

    The rapidly evolving global information infrastructure requires ever faster data transfer within computer networks and stations. Integrated chip-scale photonics can pave the way to accelerated signal manipulation and boost the bandwidth capacity of optical interconnects in a compact and ergonomic arrangement. A key building block for integrated photonic circuits is an on-chip laser. In this dissertation we explore ways to reduce the physical footprint of semiconductor lasers and make them suitable for high-density integration on silicon, a standard material platform for today's integrated circuits. We demonstrated the first room-temperature metallo-dielectric nanolaser, sub-wavelength in all three dimensions. Next, we demonstrated a nanolaser on silicon, showing the feasibility of its integration with this platform. We also designed and realized an ultracompact feedback laser with an edge-emitting structure, amenable to in-plane coupling with a standard silicon waveguide. Finally, we discuss the challenges and propose solutions for improvement of the device performance and practicality.

  9. IAServ: an intelligent home care web services platform in a cloud for aging-in-place.

    PubMed

    Su, Chuan-Jun; Chiang, Chang-Yu

    2013-11-12

    As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients' needs, has to be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform) to provide personalized healthcare service ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. The IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economic, scalable, and robust healthcare services over the Internet.

  10. IAServ: An Intelligent Home Care Web Services Platform in a Cloud for Aging-in-Place

    PubMed Central

    Su, Chuan-Jun; Chiang, Chang-Yu

    2013-01-01

    As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients’ needs, has to be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform) to provide personalized healthcare service ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. The IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economic, scalable, and robust healthcare services over the Internet. PMID:24225647

  11. Integration of the HTC Vive into the medical platform MeVisLab

    NASA Astrophysics Data System (ADS)

    Egger, Jan; Gall, Markus; Wallner, Jürgen; de Almeida Germano Boechat, Pedro; Hann, Alexander; Li, Xing; Chen, Xiaojun; Schmalstieg, Dieter

    2017-03-01

    Virtual Reality (VR) is an immersive technology that replicates an environment via computer-simulated reality. VR gets a lot of attention in computer games but also has great potential in other areas, like the medical domain. Examples are planning, simulations and training of medical interventions, for example facial surgeries, where an aesthetic outcome is important. However, importing medical data into VR devices is not trivial, especially when a direct connection and visualization from your own application is needed. Furthermore, most researchers don't build their medical applications from scratch; rather, they use platforms like MeVisLab, Slicer or MITK. These platforms have in common that they integrate and build upon libraries like ITK and VTK, further providing a more convenient graphical interface to them for the user. In this contribution, we demonstrate the usage of a VR device for medical data under MeVisLab. To this end, we integrated the OpenVR library into MeVisLab as a dedicated module. This enables the direct and uncomplicated usage of head-mounted displays, like the HTC Vive, under MeVisLab. In summary, medical data from other MeVisLab modules can directly be connected per drag-and-drop to our VR module and will be rendered inside the HTC Vive for an immersive inspection.

  12. Citizen Sensors for SHM: Towards a Crowdsourcing Platform

    PubMed Central

    Ozer, Ekin; Feng, Maria Q.; Feng, Dongming

    2015-01-01

    This paper presents an innovative structural health monitoring (SHM) platform in terms of how it integrates smartphone sensors, the web, and crowdsourcing. The ubiquity of smartphones has provided an opportunity to create low-cost sensor networks for SHM. Crowdsourcing has given rise to citizen initiatives becoming a vast source of inexpensive, valuable but heterogeneous data. Previously, the authors have investigated the reliability of smartphone accelerometers for vibration-based SHM. This paper takes a step further to integrate mobile sensing and web-based computing for a prospective crowdsourcing-based SHM platform. An iOS application was developed to enable citizens to measure structural vibration and upload the data to a server with smartphones. A web-based platform was developed to collect and process the data automatically and store the processed data, such as modal properties of the structure, for long-term SHM purposes. Finally, the integrated mobile and web-based platforms were tested to collect the low-amplitude ambient vibration data of a bridge structure. Possible sources of uncertainties related to citizens were investigated, including the phone location, coupling conditions, and sampling duration. The field test results showed that the vibration data acquired by smartphones operated by citizens without expertise are useful for identifying structural modal properties with high accuracy. This platform can be further developed into an automated, smart, sustainable, cost-free system for long-term monitoring of structural integrity of spatially distributed urban infrastructure. Citizen Sensors for SHM will be a novel participatory sensing platform in the way that it offers hybrid solutions to transitional crowdsourcing parameters. PMID:26102490
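
    Extracting a modal property from a smartphone acceleration record can be illustrated with a basic frequency-domain peak pick. In the Python sketch below the sampling rate, record length and single dominant mode are hypothetical assumptions; the platform's actual processing of citizen data is more elaborate.

      import numpy as np

      fs = 100.0                                   # assumed smartphone sampling rate (Hz)
      t = np.arange(0, 60, 1 / fs)                 # a 60 s ambient-vibration record
      true_freq = 2.2                              # hypothetical first bridge mode (Hz)
      rng = np.random.default_rng(1)
      accel = np.sin(2 * np.pi * true_freq * t) + 0.5 * rng.normal(size=t.size)

      spectrum = np.abs(np.fft.rfft(accel - accel.mean()))
      freqs = np.fft.rfftfreq(accel.size, d=1 / fs)
      dominant = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
      print(f"estimated natural frequency: {dominant:.2f} Hz")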

  13. Bio-jETI: a service integration, design, and provisioning platform for orchestrated bioinformatics processes.

    PubMed

    Margaria, Tiziana; Kubczak, Christian; Steffen, Bernhard

    2008-04-25

    With Bio-jETI, we introduce a service platform for interdisciplinary work on biological application domains and illustrate its use in a concrete application concerning statistical data processing in R and xcms for an LC/MS analysis of FAAH gene knockout. Bio-jETI uses the jABC environment for service-oriented modeling and design as a graphical process modeling tool and the jETI service integration technology for remote tool execution. As a service definition and provisioning platform, Bio-jETI has the potential to become a core technology in interdisciplinary service orchestration and technology transfer. Domain experts, like biologists not trained in computer science, directly define complex service orchestrations as process models and use efficient and complex bioinformatics tools in a simple and intuitive way.

  14. Radiation and scattering from printed antennas on cylindrically conformal platforms

    NASA Technical Reports Server (NTRS)

    Kempel, Leo C.; Volakis, John L.; Bindiganavale, Sunil

    1994-01-01

    The goal was to develop suitable methods and software for the analysis of antennas on coated and uncoated cylindrical platforms. Specifically, the finite element boundary integral and finite element ABC methods were employed successfully and associated software was developed for the analysis and design of wraparound and discrete cavity-backed arrays situated on cylindrical platforms. This work led to the successful implementation of analysis software for such antennas. Developments which played a role in this respect are the efficient implementation of the 3D Green's function for a metallic cylinder, the incorporation of the fast Fourier transform in computing the matrix-vector products executed in the solver of the finite element-boundary integral system, and the development of a new absorbing boundary condition for terminating the finite element mesh on cylindrical surfaces.
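
    The FFT acceleration mentioned for the matrix-vector products applies whenever the discretized integral operator is translation invariant, so that the matrix is Toeplitz and the product reduces to a circular convolution after circulant embedding. The Python sketch below is a generic 1-D illustration with a hypothetical decaying kernel, not the cylindrical Green's function used in the reported software.

      import numpy as np

      def toeplitz_matvec_fft(first_col, first_row, x):
          """O(n log n) Toeplitz matrix-vector product via circulant embedding and the FFT."""
          n = len(x)
          c = np.concatenate([first_col, [0.0], first_row[1:][::-1]])   # circulant first column
          y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
          return y[:n].real

      n = 256
      k = 1.0 / (1.0 + np.arange(n))          # hypothetical decaying interaction kernel
      x = np.random.default_rng(0).normal(size=n)

      fast = toeplitz_matvec_fft(k, k, x)     # symmetric Toeplitz: first row equals first column
      dense = np.array([[k[abs(i - j)] for j in range(n)] for i in range(n)]) @ x
      print(np.allclose(fast, dense))         # True: the FFT product matches the dense product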

  15. Extending IPsec for Efficient Remote Attestation

    NASA Astrophysics Data System (ADS)

    Sadeghi, Ahmad-Reza; Schulz, Steffen

    When establishing a VPN to connect different sites of a network, the integrity of the involved VPN endpoints is often a major security concern. Based on the Trusted Platform Module (TPM), available in many computing platforms today, remote attestation mechanisms can be used to evaluate the internal state of remote endpoints automatically. However, existing protocols and extensions are either unsuited for use with IPsec or impose considerable additional implementation complexity and protocol overhead.

  16. Mapping an Emergent Field of "Computational Education Policy": Policy Rationalities, Prediction and Data in the Age of Artificial Intelligence

    ERIC Educational Resources Information Center

    Gulson, Kalervo N.; Webb, P. Taylor

    2017-01-01

    Contemporary education policy involves the integration of novel forms of data and the creation of new data platforms, in addition to the infusion of business principles into school governance networks, and intensification of socio-technical relations. In this paper, we examine how "computational rationality" may be understood as…

  17. Integrated-optics heralded controlled-NOT gate for polarization-encoded qubits

    NASA Astrophysics Data System (ADS)

    Zeuner, Jonas; Sharma, Aditya N.; Tillmann, Max; Heilmann, René; Gräfe, Markus; Moqanaki, Amir; Szameit, Alexander; Walther, Philip

    2018-03-01

    Recent progress in integrated-optics technology has made photonics a promising platform for quantum networks and quantum computation protocols. Integrated optical circuits are characterized by small device footprints and unrivalled intrinsic interferometric stability. Here, we take advantage of femtosecond-laser-written waveguides' ability to process polarization-encoded qubits and present an implementation of a heralded controlled-NOT gate on chip. We evaluate the gate performance in the computational basis and a superposition basis, showing that the gate can create polarization entanglement between two photons. Transmission through the integrated device is optimized using thermally expanded core fibers and adiabatically reduced mode-field diameters at the waveguide facets. This demonstration underlines the feasibility of integrated quantum gates for all-optical quantum networks and quantum repeaters.
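
    The entangling action of the gate can be checked numerically: applying a Hadamard to the control qubit and then an ideal CNOT to |00> yields a maximally entangled Bell state, whose one-qubit reduced density matrix is maximally mixed. The Python sketch below is a textbook matrix calculation, not a model of the photonic implementation.

      import numpy as np

      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard on the control qubit
      I2 = np.eye(2)
      CNOT = np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 0, 1],
                       [0, 0, 1, 0]], dtype=float)

      ket00 = np.array([1.0, 0.0, 0.0, 0.0])            # |00>
      state = CNOT @ np.kron(H, I2) @ ket00             # H on control, then CNOT
      print(np.round(state, 3))                         # (|00> + |11>)/sqrt(2)

      # Entanglement check: the reduced state of the control qubit is maximally mixed
      rho = np.outer(state, state.conj())
      rho_control = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # trace out the target
      print(np.round(rho_control, 3))                   # 0.5 * identity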

  18. Flexible workflow sharing and execution services for e-scientists

    NASA Astrophysics Data System (ADS)

    Kacsuk, Péter; Terstyanszky, Gábor; Kiss, Tamas; Sipos, Gergely

    2013-04-01

    The sequence of computational and data manipulation steps required to perform a specific scientific analysis is called a workflow. Workflows that orchestrate data and/or compute intensive applications on Distributed Computing Infrastructures (DCIs) recently became standard tools in e-science. At the same time, the broad and fragmented landscape of workflows and DCIs slows down the uptake of workflow-based work. The development, sharing, integration and execution of workflows is still a challenge for many scientists. The FP7 "Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs" (SHIWA) project significantly improved the situation, with a simulation platform that connects different workflow systems, different workflow languages, different DCIs and workflows into a single, interoperable unit. The SHIWA Simulation Platform is a service package, already used by various scientific communities, and used as a tool by the recently started ER-flow FP7 project to expand the use of workflows among European scientists. The presentation will introduce the SHIWA Simulation Platform and the services that ER-flow provides based on the platform to space and earth science researchers. The SHIWA Simulation Platform includes: 1. SHIWA Repository: A database where workflows and meta-data about workflows can be stored. The database is a central repository to discover and share workflows within and among communities. 2. SHIWA Portal: A web portal that is integrated with the SHIWA Repository and includes a workflow executor engine that can orchestrate various types of workflows on various grid and cloud platforms. 3. SHIWA Desktop: A desktop environment that provides similar access capabilities to the SHIWA Portal; however, it runs on the users' desktops/laptops instead of a portal server. 4. Workflow engines: the ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflow engines are already integrated with the execution engine of the SHIWA Portal. Other engines can be added when required. Through the SHIWA Portal one can define and run simulations on the SHIWA Virtual Organisation, an e-infrastructure that gathers computing and data resources from various DCIs, including the European Grid Infrastructure. Through third-party workflow engines, the Portal provides support for the most widely used academic workflow engines, and it can be extended with other engines on demand. Such extensions translate between workflow languages and facilitate the nesting of workflows into larger workflows even when those are written in different languages and require different interpreters for execution. Through the workflow repository and the portal, individual scientists and scientific collaborations can share and offer workflows for reuse and execution. Given the integrated nature of the SHIWA Simulation Platform, the shared workflows can be executed online, without installing any special client environment or downloading workflows. The FP7 "Building a European Research Community through Interoperable Workflows and Data" (ER-flow) project disseminates the achievements of the SHIWA project and uses these achievements to build workflow user communities across Europe. ER-flow provides application support to research communities within and beyond the project consortium to develop, share and run workflows with the SHIWA Simulation Platform.

  19. CILogon: An Integrated Identity and Access Management Platform for Science

    NASA Astrophysics Data System (ADS)

    Basney, J.

    2016-12-01

    When scientists work together, they use web sites and other software to share their ideas and data. To ensure the integrity of their work, these systems require the scientists to log in and verify that they are part of the team working on a particular science problem. Too often, the identity and access verification process is a stumbling block for the scientists. Scientific research projects are forced to invest time and effort into developing and supporting Identity and Access Management (IAM) services, distracting them from the core goals of their research collaboration. CILogon provides an IAM platform that enables scientists to work together to meet their IAM needs more effectively so they can allocate more time and effort to their core mission of scientific research. The CILogon platform enables federated identity management and collaborative organization management. Federated identity management enables researchers to use their home organization identities to access cyberinfrastructure, rather than requiring yet another username and password to log on. Collaborative organization management enables research projects to define user groups for authorization to collaboration platforms (e.g., wikis, mailing lists, and domain applications). CILogon's IAM platform serves the unique needs of research collaborations, namely the need to dynamically form collaboration groups across organizations and countries, sharing access to data, instruments, compute clusters, and other resources to enable scientific discovery. CILogon provides a software-as-a-service platform to ease integration with cyberinfrastructure, while making all software components publicly available under open source licenses to enable re-use. Figure 1 illustrates the components and interfaces of this platform. CILogon has been operational since 2010 and has been used by over 7,000 researchers from more than 170 identity providers to access cyberinfrastructure including Globus, LIGO, Open Science Grid, SeedMe, and XSEDE. The "CILogon 2.0" platform, launched in 2016, adds support for virtual organization (VO) membership management, identity linking, international collaborations, and standard integration protocols, through integration with the Internet2 COmanage collaboration software.

  20. Multiphysics and multiscale modelling, data-model fusion and integration of organ physiology in the clinic: ventricular cardiac mechanics.

    PubMed

    Chabiniok, Radomir; Wang, Vicky Y; Hadjicharalambous, Myrianthi; Asner, Liya; Lee, Jack; Sermesant, Maxime; Kuhl, Ellen; Young, Alistair A; Moireau, Philippe; Nash, Martyn P; Chapelle, Dominique; Nordsletten, David A

    2016-04-06

    With heart and cardiovascular diseases continually challenging healthcare systems worldwide, translating basic research on cardiac (patho)physiology into clinical care is essential. Exacerbating this already extensive challenge is the complexity of the heart, relying on its hierarchical structure and function to maintain cardiovascular flow. Computational modelling has been proposed and actively pursued as a tool for accelerating research and translation. Allowing exploration of the relationships between physics, multiscale mechanisms and function, computational modelling provides a platform for improving our understanding of the heart. Further integration of experimental and clinical data through data assimilation and parameter estimation techniques is bringing computational models closer to use in routine clinical practice. This article reviews developments in computational cardiac modelling and how their integration with medical imaging data is providing new pathways for translational cardiac modelling.

  1. Design of sensor node platform for wireless biomedical sensor networks.

    PubMed

    Xijun, Chen; -H Meng, Max; Hongliang, Ren

    2005-01-01

    The design of a low-cost, miniature, lightweight, ultra-low-power, flexible sensor platform capable of customization and seamless integration into a wireless biomedical sensor network (WBSN) for health monitoring applications is one of the most challenging tasks. In this paper, we propose a WBSN node platform featuring an ultra-low-power microcontroller, an IEEE 802.15.4-compatible transceiver, and a flexible expansion connector. The proposed solution promises a cost-effective, flexible platform that allows easy customization and energy-efficient computation and communication. The development of a common platform for multiple physical sensors will increase reuse and reduce the costs of transitioning to a new generation of sensors. As a case study, we present an implementation of an ECG (electrocardiogram) sensor.

  2. A cell-phone-based brain-computer interface for communication in daily life

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Te; Wang, Yijun; Jung, Tzyy-Ping

    2011-04-01

    Moving a brain-computer interface (BCI) system from a laboratory demonstration to real-life applications still poses severe challenges to the BCI community. This study aims to integrate a mobile and wireless electroencephalogram (EEG) system and a signal-processing platform based on a cell phone into a truly wearable and wireless online BCI. Its practicality and implications in a routine BCI are demonstrated through the realization and testing of a steady-state visual evoked potential (SSVEP)-based BCI. This study implemented and tested online signal processing methods in both time and frequency domains for detecting SSVEPs. The results of this study showed that the performance of the proposed cell-phone-based platform was comparable, in terms of the information transfer rate, with other BCI systems using bulky commercial EEG systems and personal computers. To the best of our knowledge, this study is the first to demonstrate a truly portable, cost-effective and miniature cell-phone-based platform for online BCIs.
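
    The frequency-domain part of the SSVEP detection can be illustrated by comparing spectral power at the candidate stimulus frequencies. In the Python sketch below the sampling rate, flicker frequencies and synthetic single-channel EEG are hypothetical; the cell-phone platform also implements time-domain methods not shown here.

      import numpy as np

      fs = 256.0                                    # assumed EEG sampling rate (Hz)
      stim_freqs = [9.0, 10.0, 11.0, 12.0]          # hypothetical flicker frequencies (Hz)
      t = np.arange(0, 4, 1 / fs)                   # 4 s analysis window

      # Synthetic single-channel EEG: the user attends the 11 Hz target, buried in noise
      rng = np.random.default_rng(2)
      eeg = np.sin(2 * np.pi * 11.0 * t) + rng.normal(scale=2.0, size=t.size)

      spectrum = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2
      freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)

      def band_power(f0, half_width=0.25):
          band = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
          return spectrum[band].sum()

      detected = max(stim_freqs, key=band_power)    # strongest response wins
      print(f"detected SSVEP target: {detected} Hz")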

  3. A cell-phone-based brain-computer interface for communication in daily life.

    PubMed

    Wang, Yu-Te; Wang, Yijun; Jung, Tzyy-Ping

    2011-04-01

    Moving a brain-computer interface (BCI) system from a laboratory demonstration to real-life applications still poses severe challenges to the BCI community. This study aims to integrate a mobile and wireless electroencephalogram (EEG) system and a signal-processing platform based on a cell phone into a truly wearable and wireless online BCI. Its practicality and implications in a routine BCI are demonstrated through the realization and testing of a steady-state visual evoked potential (SSVEP)-based BCI. This study implemented and tested online signal processing methods in both time and frequency domains for detecting SSVEPs. The results of this study showed that the performance of the proposed cell-phone-based platform was comparable, in terms of the information transfer rate, with other BCI systems using bulky commercial EEG systems and personal computers. To the best of our knowledge, this study is the first to demonstrate a truly portable, cost-effective and miniature cell-phone-based platform for online BCIs.

  4. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Yier

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  5. Real-time computing platform for spiking neurons (RT-spike).

    PubMed

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
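
    The input-driven conductance synapse with a synaptic time constant, the feature that makes this model costly for event-driven software, can be written as two coupled update equations integrated on a fixed time grid. The Python sketch below uses hypothetical parameter values and a hand-placed presynaptic spike train; the hardware implementation parallelizes these stages across neurons.

      dt, t_end = 0.1e-3, 0.1                  # 0.1 ms step, 100 ms of simulated time (s)
      tau_syn, tau_m = 5e-3, 20e-3             # synaptic and membrane time constants (s)
      e_syn, v_rest, v_thresh = 0.0, -70e-3, -54e-3   # reversal, rest and threshold (V)
      g_max, c_m = 5e-9, 200e-12               # conductance step per spike (S), capacitance (F)

      spike_steps = {200, 250, 300}            # presynaptic spikes at 20, 25 and 30 ms

      g, v, v_peak, out_spikes = 0.0, v_rest, v_rest, []
      for k in range(int(t_end / dt)):
          if k in spike_steps:
              g += g_max                       # each presynaptic spike adds conductance
          g -= dt * g / tau_syn                # conductance decays, injecting charge gradually
          i_syn = g * (e_syn - v)
          v += dt * ((v_rest - v) / tau_m + i_syn / c_m)
          v_peak = max(v_peak, v)
          if v >= v_thresh:                    # threshold crossing emits a spike and resets
              out_spikes.append(round(k * dt, 4))
              v = v_rest

      print(f"peak depolarization: {1e3 * (v_peak - v_rest):.1f} mV, output spikes at {out_spikes} s")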

  6. Bringing Web 2.0 to bioinformatics.

    PubMed

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  7. Integration of the virtual model of a Stewart platform with the avatar of a vehicle in a virtual reality

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2016-08-01

    The development of computer-aided design and engineering methods allows virtual tests to be conducted, among others concerning the motion simulation of technical means. The paper presents a method of integrating an object in the form of a virtual model of a Stewart platform with an avatar of a vehicle moving in a virtual environment. The problem area includes issues related to the fidelity with which the operation of the analyzed technical means is mapped. The main object of investigation is a 3D model of a Stewart platform, which is a subsystem of a simulator designed for driving instruction for disabled persons. The analyzed model of the platform, prepared for motion simulation, was created in the “Motion Simulation” module of the CAD/CAE-class system Siemens PLM NX, whereas the virtual environment, in which the avatar of the passenger car moves, was elaborated in the VR-class system EON Studio. The element integrating both of the mentioned software environments is a developed application that reads information from the virtual reality (VR) concerning the current position of the car avatar. Then, based on the accepted algorithm, it sends control signals to the respective joints of the model of the Stewart platform (CAD).
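
    Turning a pose received from the VR scene into joint commands for a Stewart platform is, at its core, an inverse-kinematics computation: each actuator length is the distance between a base anchor and the corresponding platform anchor transformed by the commanded pose. The Python sketch below uses hypothetical anchor geometry and a hypothetical pose, not the simulator's actual dimensions or control signals.

      import numpy as np

      def rotation(roll, pitch, yaw):
          """Rotation matrix from roll/pitch/yaw (rad), applied in z-y-x order."""
          cr, sr = np.cos(roll), np.sin(roll)
          cp, sp = np.cos(pitch), np.sin(pitch)
          cy, sy = np.cos(yaw), np.sin(yaw)
          rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
          ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
          rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
          return rz @ ry @ rx

      # Hypothetical hexapod geometry: anchors on two circles (base radius 1.0 m, platform 0.6 m)
      angles = np.deg2rad([0, 60, 120, 180, 240, 300])
      base = np.stack([np.cos(angles), np.sin(angles), np.zeros(6)], axis=1)
      plat = np.stack([0.6 * np.cos(angles + 0.3), 0.6 * np.sin(angles + 0.3), np.zeros(6)], axis=1)

      def leg_lengths(translation, roll, pitch, yaw):
          """Inverse kinematics: actuator lengths for a commanded platform pose."""
          moved = (rotation(roll, pitch, yaw) @ plat.T).T + np.asarray(translation)
          return np.linalg.norm(moved - base, axis=1)

      # Pose as it might arrive from the vehicle avatar (hypothetical values)
      print(np.round(leg_lengths([0.0, 0.0, 1.2], roll=0.05, pitch=-0.08, yaw=0.0), 3))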

  8. IVAG: An Integrative Visualization Application for Various Types of Genomic Data Based on R-Shiny and the Docker Platform.

    PubMed

    Lee, Tae-Rim; Ahn, Jin Mo; Kim, Gyuhee; Kim, Sangsoo

    2017-12-01

    Next-generation sequencing (NGS) technology has become a trend in the genomics research area. There are many software programs and automated pipelines to analyze NGS data, which can ease the pain for traditional scientists who are not familiar with computer programming. However, downstream analyses, such as finding differentially expressed genes or visualizing linkage disequilibrium maps and genome-wide association study (GWAS) data, still remain a challenge. Here, we introduce a dockerized web application written in R using the Shiny platform to visualize pre-analyzed RNA sequencing and GWAS data. In addition, we have integrated a genome browser based on the JBrowse platform and an automated intermediate parsing process required for custom track construction, so that users can easily build and navigate their personal genome tracks with in-house datasets. This application will help scientists perform a series of downstream analyses and obtain a more integrative understanding of various types of genomic data by interactively visualizing them with customizable options.

  9. Run Environment and Data Management for Earth System Models

    NASA Astrophysics Data System (ADS)

    Widmann, H.; Lautenschlager, M.; Fast, I.; Legutke, S.

    2009-04-01

    The Integrating Model and Data Infrastructure (IMDI) developed and maintained by the Model and Data Group (M&D) comprises the Standard Compile Environment (SCE) and the Standard Run Environment (SRE). The IMDI software has a modular design, which allows combining and coupling a suite of model components as well as executing the tasks independently and on various platforms. Furthermore, the modular structure enables extension to new model combinations and new platforms. The SRE presented here enables the configuration and execution of earth system model experiments, from model integration up to storage and visualization of data. We focus on recently implemented tasks such as synchronous database filling, graphical monitoring and automatic generation of metadata in XML forms at run time. We also address the capability to run experiments in heterogeneous IT environments with different computing systems for model integration, data processing and storage. These features are demonstrated for model configurations and on platforms used in current or upcoming projects, e.g. MILLENNIUM or IPCC AR5.

  10. Integrated Modular Avionics for Spacecraft: Earth Observation Use Case Demonstrator

    NASA Astrophysics Data System (ADS)

    Deredempt, Marie-Helene; Rossignol, Alain; Hyounet, Philippe

    2013-08-01

    Integrated Modular Avionics (IMA) for Space, a European Space Agency initiative, aimed to make time and space partitioning concepts, and particularly the ARINC 653 standard [1][2], applicable to the space domain. Expected benefits of such an approach are development flexibility, the capability to provide differential V&V for functionalities of different criticality levels, and the ability to integrate late or in-orbit deliveries. This development flexibility could improve software subcontracting, industrial organization and software reuse. The time and space partitioning technique facilitates the integration of software functions as black boxes and the integration of decentralized functions, such as a star tracker, into the On Board Computer to save mass and power by limiting electronics resources. In the aeronautical domain, the Integrated Modular Avionics architecture is based on a network of LRUs (Line Replaceable Units) interconnected by AFDX (Avionics Full DupleX). The time and space partitioning concept is applied to each LRU and provides independent partitions which intercommunicate using ARINC 653 communication ports. Using the End System (an LRU component), intercommunication between LRUs is managed in the same way as intercommunication between partitions within an LRU. In such an architecture, an application developed using only communication ports can be integrated in one LRU or another without impacting the global architecture. In the space domain, a redundant On Board Computer monitors (ground monitoring, TM) and controls (ground command, TC) the platform in terms of power, solar array deployment, attitude, orbit, thermal control, maintenance, and failure detection, isolation and recovery. In addition, payload units and platform units such as the RIU, PCDU and AOCS units (star trackers, reaction wheels) are considered in this architecture. Interfaces are mainly realized through MIL-STD-1553B buses and SpaceWire, and this can be considered the main constraint for IMA implementation in the space domain. During the first phase of the IMA SP project, the impact of ARINC 653 was analyzed. Requirements and an architecture for the space domain were defined [3][4], and System Executive platforms (based on Xtratum, Pike OS, and AIR) were developed with RTEMS as the guest OS. This paper focuses on the demonstrator developed by Astrium as part of the IMA SP project. The objective of this demonstrator is to confirm the feasibility of partitioning operational software on top of the Xtratum System Executive Platform with acceptable CPU overhead.

  11. Annual Industrial Capabilities Report to Congress

    DTIC Science & Technology

    2013-10-01

    platform concepts for airframe, propulsion, sensors, weapons integration, avionics, and active and passive survivability features will all be explored ... for full integration into the National Airspace System. Greater computing power, combined with developments in miniaturization, sensors, and ... the design engineering skills for missile propulsion systems is at risk. The Department relies on the viability of a small number of SRM and turbine

  12. Towards a Service-Oriented Enterprise: The Design of a Cloud Business Integration Platform in a Medium-Sized Manufacturing Enterprise

    ERIC Educational Resources Information Center

    Stamas, Paul J.

    2013-01-01

    This case study research followed the two-year transition of a medium-sized manufacturing firm towards a service-oriented enterprise. A service-oriented enterprise is an emerging architecture of the firm that leverages the paradigm of services computing to integrate the capabilities of the firm with the complementary competencies of business…

  13. Bio-jETI: a service integration, design, and provisioning platform for orchestrated bioinformatics processes

    PubMed Central

    Margaria, Tiziana; Kubczak, Christian; Steffen, Bernhard

    2008-01-01

    Background With Bio-jETI, we introduce a service platform for interdisciplinary work on biological application domains and illustrate its use in a concrete application concerning statistical data processing in R and xcms for an LC/MS analysis of FAAH gene knockout. Methods Bio-jETI uses the jABC environment for service-oriented modeling and design as a graphical process modeling tool and the jETI service integration technology for remote tool execution. Conclusions As a service definition and provisioning platform, Bio-jETI has the potential to become a core technology in interdisciplinary service orchestration and technology transfer. Domain experts, like biologists not trained in computer science, directly define complex service orchestrations as process models and use efficient and complex bioinformatics tools in a simple and intuitive way. PMID:18460173

  14. BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way

    NASA Astrophysics Data System (ADS)

    Wu, Wenjing; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo; Kan, Wenxiao; Urquijo, Phillip

    2017-10-01

    The exploitation of volunteer computing resources has become a popular practice in the HEP computing community because of the huge amount of potential computing power it provides. In recent HEP experiments, grid middleware has been used to organize the services and the resources; however, it relies heavily on X.509 authentication, which contradicts the untrusted nature of volunteer computing resources. One big challenge in utilizing volunteer computing resources is therefore how to integrate them into the grid middleware in a secure way. The DIRAC interware, which is commonly used as the major component of the grid computing infrastructure for several HEP experiments, poses an even bigger challenge to this paradox, as its pilot is more closely coupled with operations requiring X.509 authentication than the pilot implementations in its peer grid interware. The Belle II experiment is a B-factory experiment at KEK, and it uses DIRAC for its distributed computing. In the BelleII@home project, in order to integrate volunteer computing resources into the Belle II distributed computing platform in a secure way, we adopted a new approach which detaches the payload execution from the Belle II DIRAC pilot, a customized pilot that pulls and processes jobs from the Belle II distributed computing platform, so that the payload can run on volunteer computers without requiring any X.509 authentication. In this approach we developed a gateway service running on a trusted server which handles all the operations requiring X.509 authentication. So far, we have developed and deployed the prototype of BelleII@home and tested its full workflow, which proves the feasibility of this approach. This approach can also be applied to HPC systems whose worker nodes do not have outbound connectivity to interact with the DIRAC system.

  15. Remote Video Monitor of Vehicles in Cooperative Information Platform

    NASA Astrophysics Data System (ADS)

    Qin, Guofeng; Wang, Xiaoguo; Wang, Li; Li, Yang; Li, Qiyan

    Detection of vehicles plays an important role in modern intelligent traffic management, and pattern recognition is a hot issue in computer vision. An auto-recognition system in a cooperative information platform is studied. In the cooperative platform, 3G wireless networks, including GPS, GPRS (CDMA), Internet (Intranet), remote video monitoring and M-DMB networks, are integrated. The remote video information is taken from the terminals and sent to the cooperative platform, where it is processed by the auto-recognition system. The images are pretreated and segmented, followed by feature extraction, template matching and pattern recognition. The system identifies different vehicle models and derives vehicular traffic statistics. Finally, the implementation of the system is introduced.
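
    The template-matching step of such a recognition pipeline can be illustrated with normalized cross-correlation: the template is slid over the frame and the location of the highest correlation score is taken as the match. The Python sketch below assumes OpenCV (cv2) is available and uses a synthetic frame and template as hypothetical stand-ins for real vehicle images.

      import numpy as np
      import cv2

      # Hypothetical grayscale frame with a bright rectangular "vehicle" region plus noise
      frame = np.zeros((240, 320), dtype=np.float32)
      frame[100:140, 180:260] = 1.0
      frame += 0.05 * np.random.default_rng(3).random(frame.shape).astype(np.float32)

      template = frame[100:140, 180:260].copy()    # template cut from a reference image

      # Normalized cross-correlation; the peak of the score map is the best match
      scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
      _, max_val, _, max_loc = cv2.minMaxLoc(scores)
      print(f"match at (x={max_loc[0]}, y={max_loc[1]}), score={max_val:.2f}")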

  16. An open source platform for multi-scale spatially distributed simulations of microbial ecosystems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segre, Daniel

    2014-08-14

    The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.
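
    The flux balance analysis at the heart of each COMETS update is a linear program: maximize a growth objective subject to steady-state mass balance S·v = 0 and flux bounds. The Python sketch below solves a toy, hypothetical three-reaction network with scipy's generic LP solver; it is not a COMETS model.

      import numpy as np
      from scipy.optimize import linprog

      # Toy network: uptake (v0) -> conversion (v1) -> biomass (v2); rows are metabolites A, B
      S = np.array([[1, -1,  0],     # A: produced by uptake, consumed by conversion
                    [0,  1, -1]])    # B: produced by conversion, consumed by biomass
      bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capped at 10 mmol/gDW/h

      c = np.zeros(3)
      c[2] = -1.0                    # linprog minimizes, so negate the biomass objective

      res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
      print("optimal fluxes:", res.x)   # all three fluxes are pinned to the uptake limit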

  17. CMS Connect

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled in batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores and another 50K to 100K CPU cores from opportunistic resources for these kinds of tasks. Even though production and event processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting final-stage Condor-like analysis jobs, familiar to Tier-3 or local computing facility users, into these distributed resources in a friendly way that is integrated with other CMS services. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS Physics community, focusing on these kinds of Condor analysis jobs. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideInWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform in order to integrate the service with CMS-specific needs, including site-specific submission, accounting of jobs and automated reporting to standard CMS monitoring resources in a way that is effortless for its users.

  18. Global Software Development with Cloud Platforms

    NASA Astrophysics Data System (ADS)

    Yara, Pavan; Ramachandran, Ramaseshan; Balasubramanian, Gayathri; Muthuswamy, Karthik; Chandrasekar, Divya

    Offshore and outsourced distributed software development models and processes are facing previously unknown challenges with respect to computing capacity, bandwidth, storage, security, complexity, reliability, and business uncertainty. Clouds promise to address these challenges by adopting recent advances in virtualization, parallel and distributed systems, utility computing, and software services. In this paper, we envision a cloud-based platform that addresses some of these core problems. We outline a generic cloud architecture, its design and our first implementation results for three cloud forms - a compute cloud, a storage cloud and a cloud-based software service - in the context of global distributed software development (GSD). Our "compute cloud" provides computational services such as continuous code integration and a compile server farm, the "storage cloud" offers storage (block or file-based) services with an on-line virtual storage service, whereas the on-line virtual labs represent a useful cloud service. We note some of the use cases for clouds in GSD and the lessons learned with our prototypes, and identify challenges that must be conquered before realizing the full business benefits. We believe that in the future, software practitioners will focus more on these cloud computing platforms and see clouds as a means of supporting an ecosystem of clients, developers and other key stakeholders.

  19. Geneious Basic: An integrated and extendable desktop software platform for the organization and analysis of sequence data

    PubMed Central

    Kearse, Matthew; Moir, Richard; Wilson, Amy; Stones-Havas, Steven; Cheung, Matthew; Sturrock, Shane; Buxton, Simon; Cooper, Alex; Markowitz, Sidney; Duran, Chris; Thierer, Tobias; Ashton, Bruce; Meintjes, Peter; Drummond, Alexei

    2012-01-01

    Summary: The two main functions of bioinformatics are the organization and analysis of biological data using computational resources. Geneious Basic has been designed to be an easy-to-use and flexible desktop software application framework for the organization and analysis of biological data, with a focus on molecular sequences and related data types. It integrates numerous industry-standard discovery analysis tools, with interactive visualizations to generate publication-ready images. One key contribution to researchers in the life sciences is the Geneious public application programming interface (API) that affords the ability to leverage the existing framework of the Geneious Basic software platform for virtually unlimited extension and customization. The result is an increase in the speed and quality of development of computation tools for the life sciences, due to the functionality and graphical user interface available to the developer through the public API. Geneious Basic represents an ideal platform for the bioinformatics community to leverage existing components and to integrate their own specific requirements for the discovery, analysis and visualization of biological data. Availability and implementation: Binaries and public API freely available for download at http://www.geneious.com/basic, implemented in Java and supported on Linux, Apple OSX and MS Windows. The software is also available from the Bio-Linux package repository at http://nebc.nerc.ac.uk/news/geneiousonbl. Contact: peter@biomatters.com PMID:22543367

  20. Geneious Basic: an integrated and extendable desktop software platform for the organization and analysis of sequence data.

    PubMed

    Kearse, Matthew; Moir, Richard; Wilson, Amy; Stones-Havas, Steven; Cheung, Matthew; Sturrock, Shane; Buxton, Simon; Cooper, Alex; Markowitz, Sidney; Duran, Chris; Thierer, Tobias; Ashton, Bruce; Meintjes, Peter; Drummond, Alexei

    2012-06-15

    The two main functions of bioinformatics are the organization and analysis of biological data using computational resources. Geneious Basic has been designed to be an easy-to-use and flexible desktop software application framework for the organization and analysis of biological data, with a focus on molecular sequences and related data types. It integrates numerous industry-standard discovery analysis tools, with interactive visualizations to generate publication-ready images. One key contribution to researchers in the life sciences is the Geneious public application programming interface (API) that affords the ability to leverage the existing framework of the Geneious Basic software platform for virtually unlimited extension and customization. The result is an increase in the speed and quality of development of computation tools for the life sciences, due to the functionality and graphical user interface available to the developer through the public API. Geneious Basic represents an ideal platform for the bioinformatics community to leverage existing components and to integrate their own specific requirements for the discovery, analysis and visualization of biological data. Binaries and public API freely available for download at http://www.geneious.com/basic, implemented in Java and supported on Linux, Apple OSX and MS Windows. The software is also available from the Bio-Linux package repository at http://nebc.nerc.ac.uk/news/geneiousonbl.

  1. Convergence Is Real

    ERIC Educational Resources Information Center

    Enyeart, Mike; Staman, E. Michael; Valdes, Jose J., Jr.

    2007-01-01

    The concept of convergence has evolved significantly during recent years. Today, "convergence" refers to the integration of the communications and computing resources and services that seamlessly traverse multiple infrastructures and deliver content to multiple platforms or appliances. Convergence is real. Those in higher education, and especially…

  2. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  3. Time and Space Partition Platform for Safe and Secure Flight Software

    NASA Astrophysics Data System (ADS)

    Esquinas, Angel; Zamorano, Juan; de la Puente, Juan A.; Masmano, Miguel; Crespo, Alfons

    2012-08-01

    A number of research and development activities are exploring Time and Space Partitioning (TSP) to implement safe and secure flight software. This approach makes it possible to execute real-time applications with different levels of criticality on the same computer board. In order to do that, flight applications must be isolated from each other in the temporal and spatial domains. This paper presents the first results of a partitioning platform based on the Open Ravenscar Kernel (ORK+) and the XtratuM hypervisor. ORK+ is a small, reliable real-time kernel supporting the Ada Ravenscar computational model that is central to the ASSERT development process. XtratuM supports multiple virtual machines, i.e. partitions, on a single computer and is being used in the Integrated Modular Avionics for Space study. ORK+ executes in an XtratuM partition, enabling Ada applications to share the computer board with other applications.

  4. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from the unified framework of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  5. Integrating photonics with silicon nanoelectronics for the next generation of systems on a chip.

    PubMed

    Atabaki, Amir H; Moazeni, Sajjad; Pavanello, Fabio; Gevorgyan, Hayk; Notaros, Jelena; Alloatti, Luca; Wade, Mark T; Sun, Chen; Kruger, Seth A; Meng, Huaiyu; Al Qubaisi, Kenaish; Wang, Imbert; Zhang, Bohan; Khilo, Anatol; Baiocco, Christopher V; Popović, Miloš A; Stojanović, Vladimir M; Ram, Rajeev J

    2018-04-01

    Electronic and photonic technologies have transformed our lives-from computing and mobile devices, to information technology and the internet. Our future demands in these fields require innovation in each technology separately, but also depend on our ability to harness their complementary physics through integrated solutions [1,2]. This goal is hindered by the fact that most silicon nanotechnologies-which enable our processors, computer memory, communications chips and image sensors-rely on bulk silicon substrates, a cost-effective solution with an abundant supply chain, but with substantial limitations for the integration of photonic functions. Here we introduce photonics into bulk silicon complementary metal-oxide-semiconductor (CMOS) chips using a layer of polycrystalline silicon deposited on silicon oxide (glass) islands fabricated alongside transistors. We use this single deposited layer to realize optical waveguides and resonators, high-speed optical modulators and sensitive avalanche photodetectors. We integrated this photonic platform with a 65-nanometre-transistor bulk CMOS process technology inside a 300-millimetre-diameter-wafer microelectronics foundry. We then implemented integrated high-speed optical transceivers in this platform that operate at ten gigabits per second, composed of millions of transistors, and arrayed on a single optical bus for wavelength division multiplexing, to address the demand for high-bandwidth optical interconnects in data centres and high-performance computing [3,4]. By decoupling the formation of photonic devices from that of transistors, this integration approach can achieve many of the goals of multi-chip solutions [5], but with the performance, complexity and scalability of 'systems on a chip' [1,6-8]. As transistors smaller than ten nanometres across become commercially available [9], and as new nanotechnologies emerge [10,11], this approach could provide a way to integrate photonics with state-of-the-art nanoelectronics.

  6. Hybrid graphene/silicon integrated optical isolators with photonic spin–orbit interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jingwen; Sun, Xiankai, E-mail: xksun@cuhk.edu.hk; Shun Hing Institute of Advanced Engineering, The Chinese University of Hong Kong, Shatin, New Territories

    2016-04-11

    Optical isolators are an important building block in photonic computation and communication. In traditional optics, isolators are realized with magneto-optical garnets. However, it remains challenging to incorporate such materials on an integrated platform because of the difficulty in material growth and bulky device footprint. Here, we propose an ultracompact integrated isolator by exploiting graphene's magneto-optical property on a silicon-on-insulator platform. The photonic nonreciprocity is achieved because the cyclotrons in graphene experiencing different optical spins exhibit different responses to counterpropagating light. Taking advantage of cavity resonance effects, we have numerically optimized a device design, which shows excellent isolation performance with the extinction ratio over 45 dB and the insertion loss around 12 dB at a wavelength near 1.55 μm. Featuring graphene's CMOS compatibility and substantially reduced device footprint, our proposal sheds light on monolithic integration of nonreciprocal photonic devices.

  7. TU-AB-303-08: GPU-Based Software Platform for Efficient Image-Guided Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, S; Robinson, A; McNutt, T

    2015-06-15

    Purpose: In this study, we develop an integrated software platform for adaptive radiation therapy (ART) that combines fast and accurate image registration, segmentation, and dose computation/accumulation methods. Methods: The proposed system consists of three key components: 1) deformable image registration (DIR), 2) automatic segmentation, and 3) dose computation/accumulation. The computationally intensive modules, including DIR and dose computation, have been implemented on a graphics processing unit (GPU). All required patient-specific data, including the planning CT (pCT) with contours, daily cone-beam CTs, and the treatment plan, are automatically queried and retrieved from their own databases. To improve the accuracy of DIR between pCT and CBCTs, we use the double force demons DIR algorithm in combination with iterative CBCT intensity correction by local intensity histogram matching. Segmentation of the daily CBCT is then obtained by propagating contours from the pCT. The daily dose delivered to the patient is computed on the registered pCT by a GPU-accelerated superposition/convolution algorithm. Finally, computed daily doses are accumulated to show the total delivered dose to date. Results: Since the accuracy of DIR critically affects the quality of the other processes, we first evaluated our DIR method on eight head-and-neck cancer cases and compared its performance. Normalized mutual information (NMI) and normalized cross-correlation (NCC) were computed as similarity measures; our method produced an overall NMI of 0.663 and NCC of 0.987, outperforming conventional methods by 3.8% and 1.9%, respectively. Experimental results show that our registration method is more consistent and robust than existing algorithms, and also computationally efficient. Computation time at each fraction was around one minute (30–50 seconds for registration and 15–25 seconds for dose computation). Conclusion: We developed an integrated GPU-accelerated software platform that enables accurate and efficient DIR, auto-segmentation, and dose computation, thus supporting an efficient ART workflow. This work was supported by NIH/NCI under grant R42CA137886.
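
    For reference, the two similarity measures quoted above can be computed directly from voxel intensities. The NumPy sketch below is an illustration only (not the authors' GPU implementation); since several normalizations of mutual information are in common use, the variant shown is one reasonable choice rather than necessarily the one used in the study.

        # Normalized cross-correlation (NCC) and one common form of normalized
        # mutual information (NMI) between two image volumes; illustration only.
        import numpy as np

        def ncc(a, b):
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return float((a * b).mean())

        def nmi(a, b, bins=64):
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = hist / hist.sum()
            px, py = p.sum(axis=1), p.sum(axis=0)
            hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
            hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
            hxy = -np.sum(p[p > 0] * np.log(p[p > 0]))
            return float(2.0 * (hx + hy - hxy) / (hx + hy))  # lies in [0, 1]

        # Stand-ins for the planning CT and a registered daily CBCT.
        fixed = np.random.rand(64, 64, 32)
        moving = 0.9 * fixed + 0.1 * np.random.rand(64, 64, 32)
        print(ncc(fixed, moving), nmi(fixed, moving))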

  8. Lane Detection on the iPhone

    NASA Astrophysics Data System (ADS)

    Ren, Feixiang; Huang, Jinsheng; Terauchi, Mutsuhiro; Jiang, Ruyi; Klette, Reinhard

    A robust and efficient lane detection system is an essential component of Lane Departure Warning Systems, which are commonly used in many vision-based Driver Assistance Systems (DAS) in intelligent transportation. Various computation platforms have been proposed in the past few years for the implementation of driver assistance systems (e.g., PC, laptop, integrated chips, PlayStation, and so on). In this paper, we propose a new platform for the implementation of lane detection, based on a mobile phone (the iPhone). Due to the physical limitations of the iPhone with respect to memory and computing power, a simple and efficient lane detection algorithm using a Hough transform is developed and implemented on the iPhone, as existing algorithms developed for the PC platform are currently not suitable for mobile phone devices. Experiments with the lane detection algorithm were carried out both on a PC and on the iPhone.
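
    As a rough illustration of the edge-detection-plus-Hough-transform pipeline described above, the OpenCV sketch below finds candidate lane segments in a single road image; the thresholds, region of interest and file names are illustrative assumptions, not the parameters of the authors' iPhone implementation.

        # Minimal Canny + probabilistic Hough lane-detection sketch using OpenCV.
        # Thresholds and the region of interest are illustrative values only.
        import cv2
        import numpy as np

        def detect_lanes(bgr_frame):
            gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)
            h, w = edges.shape
            mask = np.zeros_like(edges)
            roi = np.array([[(0, h), (w // 2, h // 2), (w, h)]], dtype=np.int32)
            cv2.fillPoly(mask, roi, 255)        # keep only the lower road region
            lines = cv2.HoughLinesP(cv2.bitwise_and(edges, mask),
                                    rho=1, theta=np.pi / 180, threshold=40,
                                    minLineLength=30, maxLineGap=20)
            return [] if lines is None else [l[0] for l in lines]  # (x1, y1, x2, y2)

        frame = cv2.imread("road.jpg")          # placeholder input image
        for x1, y1, x2, y2 in detect_lanes(frame):
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.imwrite("road_lanes.jpg", frame)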

  9. Federated and Cloud Enabled Resources for Data Management and Utilization

    NASA Astrophysics Data System (ADS)

    Rankin, R.; Gordon, M.; Potter, R. G.; Satchwill, B.

    2011-12-01

    The emergence of cloud computing over the past three years has led to a paradigm shift in how data can be managed, processed and made accessible. Building on the federated data management system offered through the Canadian Space Science Data Portal (www.cssdp.ca), we demonstrate how heterogeneous and geographically distributed data sets and modeling tools have been integrated to form a virtual data center and computational modeling platform that has services for data processing and visualization embedded within it. We also discuss positive and negative experiences in utilizing Eucalyptus and OpenStack cloud applications, and job scheduling facilitated by Condor and Star Cluster. We summarize our findings by demonstrating use of these technologies in the Cloud Enabled Space Weather Data Assimilation and Modeling Platform CESWP (www.ceswp.ca), which is funded through Canarie's (canarie.ca) Network Enabled Platforms program in Canada.

  10. Thermal and Power Challenges in High Performance Computing Systems

    NASA Astrophysics Data System (ADS)

    Natarajan, Venkat; Deshpande, Anand; Solanki, Sudarshan; Chandrasekhar, Arun

    2009-05-01

    This paper provides an overview of the thermal and power challenges in emerging high performance computing platforms. The advent of new sophisticated applications in highly diverse areas such as health, education, finance, entertainment, etc. is driving the platform and device requirements for future systems. The key ingredients of future platforms are vertically integrated (3D) die-stacked devices which provide the required performance characteristics with the associated form factor advantages. Two of the major challenges to the design of through silicon via (TSV) based 3D stacked technologies are (i) effective thermal management and (ii) efficient power delivery mechanisms. Some of the key challenges that are articulated in this paper include hot-spot superposition and intensification in a 3D stack, design/optimization of thermal through silicon vias (TTSVs), non-uniform power loading of multi-die stacks, efficient on-chip power delivery, minimization of electrical hotspots etc.

  11. Cloud Based Earth Observation Data Exploitation Platforms

    NASA Astrophysics Data System (ADS)

    Romeo, A.; Pinto, S.; Loekken, S.; Marin, A.

    2017-12-01

    In the last few years the data produced daily by several private and public Earth Observation (EO) satellites has reached the order of tens of terabytes, representing for scientists and commercial application developers both a big opportunity for their exploitation and a challenge for their management. New IT technologies, such as Big Data and cloud computing, enable the creation of web-accessible data exploitation platforms, which offer to scientists and application developers the means to access and use EO data in a quick and cost effective way. RHEA Group is particularly active in this sector, supporting the European Space Agency (ESA) in the Exploitation Platforms (EP) initiative, developing technology to build multi cloud platforms for the processing and analysis of Earth Observation data, and collaborating with larger European initiatives such as the European Plate Observing System (EPOS) and the European Open Science Cloud (EOSC). An EP is a virtual workspace, providing a user community with access to (i) large volumes of data, (ii) an algorithm development and integration environment, (iii) processing software and services (e.g. toolboxes, visualization routines), (iv) computing resources, (v) collaboration tools (e.g. forums, wiki, etc.). When an EP is dedicated to a specific Theme, it becomes a Thematic Exploitation Platform (TEP). Currently, ESA has seven TEPs in a pre-operational phase dedicated to geo-hazards monitoring and prevention, coastal zones, forestry areas, hydrology, polar regions, urban areas and food security. On the technology development side, solutions like the multi cloud EO data processing platform provide the technology to integrate ICT resources and EO data from different vendors in a single platform. In particular it offers (i) Multi-cloud data discovery, (ii) Multi-cloud data management and access and (iii) Multi-cloud application deployment. This platform has been demonstrated with the EGI Federated Cloud, Innovation Platform Testbed Poland and the Amazon Web Services cloud. This work will present an overview of the TEPs and the multi-cloud EO data processing platform, and discuss their main achievements and their impacts in the context of distributed Research Infrastructures such as EPOS and EOSC.

  12. Dynamic VM Provisioning for TORQUE in a Cloud Environment

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Boland, L.; Coddington, P.; Sevior, M.

    2014-06-01

    Cloud computing, also known as an Infrastructure-as-a-Service (IaaS), is attracting more interest from the commercial and educational sectors as a way to provide cost-effective computational infrastructure. It is an ideal platform for researchers who must share common resources but need to be able to scale up to massive computational requirements for specific periods of time. This paper presents the tools and techniques developed to allow the open source TORQUE distributed resource manager and Maui cluster scheduler to dynamically integrate OpenStack cloud resources into existing high throughput computing clusters.
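
    The core of such dynamic provisioning is a simple control loop: watch the batch queue and boot (or retire) cloud worker VMs as demand changes. The sketch below illustrates that loop; the CLI invocations, image and flavor names, and the qstat output parsing are assumptions for illustration, not the tools developed by the authors.

        # High-level sketch of an elastic-provisioning loop: watch the TORQUE
        # queue and boot OpenStack worker VMs when jobs are waiting. CLI calls,
        # image/flavor names and output parsing are illustrative assumptions.
        import subprocess
        import time

        MAX_WORKERS, IMAGE, FLAVOR = 20, "torque-worker", "m1.large"
        workers = 0

        def queued_jobs():
            # Count jobs in the 'Q' (queued) state in `qstat` output; the exact
            # column layout is an assumption about the local TORQUE setup.
            out = subprocess.run(["qstat"], capture_output=True, text=True).stdout
            return sum(1 for line in out.splitlines() if " Q " in line)

        def boot_worker(name):
            subprocess.run(["openstack", "server", "create",
                            "--image", IMAGE, "--flavor", FLAVOR, name], check=True)

        while True:
            if queued_jobs() > 0 and workers < MAX_WORKERS:
                workers += 1
                boot_worker(f"worker-{workers:03d}")  # node joins the cluster at boot
            time.sleep(60)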

  13. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which offers several new features, including an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
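
    The kernel being accelerated is the standard Lloyd k-means iteration, whose distance and centroid-update steps dominate the work. The compact NumPy version below is a serial, in-memory illustration of those two steps, not the authors' distributed manycore implementation; the data array is a stand-in for gridded climate or phenology records.

        # Compact NumPy sketch of the Lloyd k-means iteration; serial and
        # in-memory, for illustration only.
        import numpy as np

        def kmeans(points, k, iters=50, seed=0):
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), k, replace=False)]
            for _ in range(iters):
                # Distance step: squared Euclidean distance to every center.
                d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
                labels = d2.argmin(axis=1)
                # Update step: each center moves to the mean of its members.
                new_centers = np.array([points[labels == j].mean(axis=0)
                                        if np.any(labels == j) else centers[j]
                                        for j in range(k)])
                if np.allclose(new_centers, centers):
                    break
                centers = new_centers
            return centers, labels

        data = np.random.rand(10000, 12)   # stand-in for gridded observation records
        centers, labels = kmeans(data, k=8)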

  14. Integrated long-range UAV/UGV collaborative target tracking

    NASA Astrophysics Data System (ADS)

    Moseley, Mark B.; Grocholsky, Benjamin P.; Cheung, Carol; Singh, Sanjiv

    2009-05-01

    Coordinated operations between unmanned air and ground assets allow leveraging of multi-domain sensing and increase opportunities for improving line of sight communications. While numerous military missions would benefit from coordinated UAV-UGV operations, foundational capabilities that integrate stove-piped tactical systems and share available sensor data are required and not yet available. iRobot, AeroVironment, and Carnegie Mellon University are working together, partially SBIR-funded through ARDEC's small unit network lethality initiative, to develop collaborative capabilities for surveillance, targeting, and improved communications based on PackBot UGV and Raven UAV platforms. We integrate newly available technologies into computational, vision, and communications payloads and develop sensing algorithms to support vision-based target tracking. We first simulated and then applied onto real tactical platforms an implementation of Decentralized Data Fusion, a novel technique for fusing track estimates from PackBot and Raven platforms for a moving target in an open environment. In addition, system integration with AeroVironment's Digital Data Link onto both air and ground platforms has extended our capabilities in communications range to operate the PackBot as well as in increased video and data throughput. The system is brought together through a unified Operator Control Unit (OCU) for the PackBot and Raven that provides simultaneous waypoint navigation and traditional teleoperation. We also present several recent capability accomplishments toward PackBot-Raven coordinated operations, including single OCU display design and operation, early target track results, and Digital Data Link integration efforts, as well as our near-term capability goals.
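
    A basic building block of decentralized data fusion is combining Gaussian track estimates in information (inverse-covariance) form, where the contributions of independent estimates simply add. The sketch below shows only that fusion step with hypothetical numbers; the full DDF algorithm used on the platforms must also account for common information shared between nodes, which this illustration omits.

        # Fusing two independent Gaussian track estimates in information form,
        # a basic building block of decentralized data fusion (illustration only).
        import numpy as np

        def fuse(mean_a, cov_a, mean_b, cov_b):
            info_a, info_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
            info = info_a + info_b                       # information matrices add
            mean = np.linalg.solve(info, info_a @ mean_a + info_b @ mean_b)
            return mean, np.linalg.inv(info)

        # Hypothetical 2-D position estimates of one target from UGV and UAV.
        ugv = (np.array([10.0, 4.0]), np.diag([4.0, 4.0]))
        uav = (np.array([11.0, 3.5]), np.diag([1.0, 9.0]))
        mean, cov = fuse(*ugv, *uav)
        print(mean, np.diag(cov))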

  15. Fiji: an open-source platform for biological-image analysis.

    PubMed

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.

  16. Waggle: A Framework for Intelligent Attentive Sensing and Actuation

    NASA Astrophysics Data System (ADS)

    Sankaran, R.; Jacob, R. L.; Beckman, P. H.; Catlett, C. E.; Keahey, K.

    2014-12-01

    Advances in sensor-driven computation and computationally steered sensing will greatly enable future research in fields including environmental and atmospheric sciences. We will present "Waggle," an open-source hardware and software infrastructure developed with two goals: (1) reducing the separation and latency between sensing and computing and (2) improving the reliability and longevity of sensing-actuation platforms in challenging and costly deployments. Inspired by "deep-space probe" systems, the Waggle platform design includes features that can support longitudinal studies, deployments with varying communication links, and remote management capabilities. Waggle lowers the barrier for scientists to incorporate real-time data from their sensors into their computations and to manipulate the sensors or provide feedback through actuators. A standardized software and hardware design allows quick addition of new sensors/actuators and associated software in the nodes and enables them to be coupled with computational codes both in situ and on external compute infrastructure. The Waggle framework currently drives the deployment of two observational systems - a portable and self-sufficient weather platform for study of small-scale effects in Chicago's urban core and an open-ended distributed instrument in Chicago that aims to support several research pursuits across a broad range of disciplines including urban planning, microbiology and computer science. Built around open-source software, hardware, and Linux OS, the Waggle system comprises two components - the Waggle field-node and the Waggle cloud-computing infrastructure. The Waggle field-node affords a modular, scalable, fault-tolerant, secure, and extensible platform for hosting sensors and actuators in the field. It supports in situ computation and data storage, and integration with cloud-computing infrastructure. The Waggle cloud infrastructure is designed with the goal of scaling to several hundreds of thousands of Waggle nodes. It supports aggregating data from sensors hosted by the nodes, staging computation, relaying feedback to the nodes and serving data to end-users. We will discuss the Waggle design principles and their applicability to various observational research pursuits, and demonstrate its capabilities.

  17. Grace: A cross-platform micromagnetic simulator on graphics processing units

    NASA Astrophysics Data System (ADS)

    Zhu, Ru

    2015-12-01

    A micromagnetic simulator running on graphics processing units (GPUs) is presented. Different from GPU implementations of other research groups, which predominantly run on NVidia's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware platform independent. It runs on GPUs from vendors including NVidia, AMD and Intel, and achieves a significant performance boost as compared to previous central processing unit (CPU) simulators, up to two orders of magnitude. The simulator paved the way for running large-scale micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.

  18. A Privacy-Preserving Platform for User-Centric Quantitative Benchmarking

    NASA Astrophysics Data System (ADS)

    Herrmann, Dominik; Scheuer, Florian; Feustel, Philipp; Nowey, Thomas; Federrath, Hannes

    We propose a centralised platform for quantitative benchmarking of key performance indicators (KPI) among mutually distrustful organisations. Our platform offers users the opportunity to request an ad-hoc benchmarking for a specific KPI within a peer group of their choice. Architecture and protocol are designed to provide anonymity to its users and to hide the sensitive KPI values from other clients and the central server. To this end, we integrate user-centric peer group formation, exchangeable secure multi-party computation protocols, short-lived ephemeral key pairs as pseudonyms, and attribute certificates. We show by empirical evaluation of a prototype that the performance is acceptable for reasonably sized peer groups.
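
    One of the simplest secure multi-party computation primitives that fits this setting is additive secret sharing: each organisation splits its KPI value into random shares so that only aggregates (such as the peer-group mean) are ever reconstructed. The toy sketch below illustrates that general idea with hypothetical KPI values; it is not the exchangeable protocol suite proposed in the paper.

        # Toy additive secret-sharing sketch: each organisation splits its KPI
        # into random shares so that only the sum (here, the peer-group mean) is
        # revealed. Illustrates the general MPC idea, not the paper's protocol.
        import random

        PRIME = 2**61 - 1          # arithmetic is done modulo a large prime
        SCALE = 100                # fixed-point scaling for two decimal places

        def share(value, n_parties):
            shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
            shares.append((int(round(value * SCALE)) - sum(shares)) % PRIME)
            return shares

        kpis = [37.5, 42.1, 29.9, 51.3]            # hypothetical per-company KPIs
        n = len(kpis)
        all_shares = [share(v, n) for v in kpis]

        # Party j sums the j-th share of every participant; only these partial
        # sums (and their total) are exchanged, never the individual KPIs.
        partial = [sum(row[j] for row in all_shares) % PRIME for j in range(n)]
        total = sum(partial) % PRIME
        print("peer-group mean KPI:", (total / SCALE) / n)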

  19. The Personal Motion Platform

    NASA Technical Reports Server (NTRS)

    Park, Brian Vandellyn

    1993-01-01

    The Neutral Body Posture experienced in microgravity creates a biomechanical equilibrium by enabling the internal forces within the body to find their own balance. A patented reclining chair based on this posture provides a minimal stress environment for interfacing with computer systems for extended periods. When the chair is mounted on a 3 or 6 axis motion platform, a generic motion simulator for simulated digital environments is created. The Personal Motion Platform provides motional feedback to the occupant in synchronization with their movements inside the digital world which enhances the simulation experience. Existing HMD based simulation systems can be integrated to the turnkey system. Future developments are discussed.

  20. Genomics Portals: integrative web-platform for mining genomics data.

    PubMed

    Shinde, Kaustubh; Phatak, Mukta; Johannes, Freudenberg M; Chen, Jing; Li, Qian; Vineet, Joshi K; Hu, Zhen; Ghosh, Krishnendu; Meller, Jaroslaw; Medvedovic, Mario

    2010-01-13

    A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold a tremendous potential for gaining new insights into functioning of living systems. Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc), and the integration with an extensive knowledge base that can be used in such analysis. The integrated access to primary genomics data, functional knowledge and analytical tools makes Genomics Portals platform a unique tool for interpreting results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals backend databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org.

  1. Genomics Portals: integrative web-platform for mining genomics data

    PubMed Central

    2010-01-01

    Background A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold a tremendous potential for gaining new insights into functioning of living systems. Results Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc), and the integration with an extensive knowledge base that can be used in such analysis. Conclusion The integrated access to primary genomics data, functional knowledge and analytical tools makes Genomics Portals platform a unique tool for interpreting results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals backend databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org. PMID:20070909

  2. G-DOC Plus - an integrative bioinformatics platform for precision medicine.

    PubMed

    Bhuvaneshwar, Krithika; Belouali, Anas; Singh, Varun; Johnson, Robert M; Song, Lei; Alaoui, Adil; Harris, Michael A; Clarke, Robert; Weiner, Louis M; Gusev, Yuriy; Madhavan, Subha

    2016-04-30

    G-DOC Plus is a data integration and bioinformatics platform that uses cloud computing and other advanced computational tools to handle a variety of biomedical BIG DATA including gene expression arrays, NGS and medical images so that they can be analyzed in the full context of other omics and clinical information. G-DOC Plus currently holds data from over 10,000 patients selected from private and public resources including Gene Expression Omnibus (GEO), The Cancer Genome Atlas (TCGA) and the recently added datasets from REpository for Molecular BRAin Neoplasia DaTa (REMBRANDT), caArray studies of lung and colon cancer, ImmPort and the 1000 genomes data sets. The system allows researchers to explore clinical-omic data one sample at a time, as a cohort of samples; or at the level of population, providing the user with a comprehensive view of the data. G-DOC Plus tools have been leveraged in cancer and non-cancer studies for hypothesis generation and validation; biomarker discovery and multi-omics analysis, to explore somatic mutations and cancer MRI images; as well as for training and graduate education in bioinformatics, data and computational sciences. Several of these use cases are described in this paper to demonstrate its multifaceted usability. G-DOC Plus can be used to support a variety of user groups in multiple domains to enable hypothesis generation for precision medicine research. The long-term vision of G-DOC Plus is to extend this translational bioinformatics platform to stay current with emerging omics technologies and analysis methods to continue supporting novel hypothesis generation, analysis and validation for integrative biomedical research. By integrating several aspects of the disease and exposing various data elements, such as outpatient lab workup, pathology, radiology, current treatments, molecular signatures and expected outcomes over a web interface, G-DOC Plus will continue to strengthen precision medicine research. G-DOC Plus is available at: https://gdoc.georgetown.edu .

  3. A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers

    NASA Technical Reports Server (NTRS)

    Hatay, Ferhat F.; Jespersen, Dennis C.; Guruswamy, Guru P.; Rizk, Yehia M.; Byun, Chansup; Gee, Ken; VanDalsem, William R. (Technical Monitor)

    1997-01-01

    The integration of high-fidelity Computational Fluid Dynamics (CFD) analysis tools with the industrial design process benefits greatly from robust implementations that are portable across a wide range of computer architectures. In the present work, a hybrid domain-decomposition and parallelization concept was developed and implemented in the widely used NASA multi-block Computational Fluid Dynamics (CFD) packages ENSAERO and OVERFLOW. The new parallel solver concept, PENS (Parallel Euler Navier-Stokes Solver), employs both fine and coarse granularity in data partitioning as well as data coalescing to obtain the desired load-balance characteristics on the available computer platforms. This multi-level parallelism implementation itself introduces no changes to the numerical results, hence the original fidelity of the packages is preserved. The present implementation uses the Message Passing Interface (MPI) library for interprocessor message passing and memory access. By choosing an appropriate combination of the available partitioning and coalescing capabilities only at execution time, the PENS solver becomes adaptable to different computer architectures, from shared-memory to distributed-memory platforms with varying degrees of parallelism. The PENS implementation on the IBM SP2 distributed-memory environment at the NASA Ames Research Center obtains 85 percent scalable parallel performance using fine-grain partitioning of single-block CFD domains on up to 128 wide computational nodes. Multi-block CFD simulations of complete aircraft achieve 75 percent load-balanced execution using data coalescing and the two levels of parallelism. The SGI PowerChallenge, SGI Origin 2000, and a cluster of workstations are the other platforms on which the robustness of the implementation is tested. The performance behavior on the other computer platforms with a variety of realistic problems will be included as this ongoing study progresses.

  4. Xyce parallel electronic simulator : users' guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.

  5. Geo-spatial Service and Application based on National E-government Network Platform and Cloud

    NASA Astrophysics Data System (ADS)

    Meng, X.; Deng, Y.; Li, H.; Yao, L.; Shi, J.

    2014-04-01

    With the acceleration of China's informatization process, our party and government have taken substantive strides in advancing the development and application of digital technology, which promotes the evolution of e-government and its informatization. Meanwhile, as a service model built on innovative resources, cloud computing can connect huge resource pools to provide a variety of IT services, and has become a relatively mature technical pattern supported by further studies and massive practical applications. Based on cloud computing technology and the national e-government network platform, the "National Natural Resources and Geospatial Database (NRGD)" project integrated and transformed natural resources and geospatial information dispersed across various sectors and regions, established a logically unified and physically dispersed fundamental database, and developed a national integrated information database system supporting main e-government applications. Cross-sector e-government applications and services are realized to provide long-term, stable and standardized natural resources and geospatial fundamental information products and services for national e-government and public users.

  6. a Hadoop-Based Distributed Framework for Efficient Managing and Processing Big Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.

    2015-07-01

    Various sensors from airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other applications. However, it is challenging to efficiently store, query and process such big data due to its data- and computing-intensive nature. In this paper, a Hadoop-based framework is proposed to manage and process big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be fetched directly from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo toolbox and MapReduce, these remote sensing images can be processed directly in parallel in a scalable computing environment. The experimental results show that the proposed framework can efficiently manage and process such big remote sensing data.
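
    A common way to run per-image processing of this kind inside MapReduce is a Hadoop Streaming job whose mapper receives image paths and shells out to an Orfeo Toolbox application. The sketch below illustrates that pattern; the otbcli application name, its flags, the band-math expression and the file naming are illustrative assumptions, not the authors' integration code. It would be launched with the standard hadoop-streaming jar, passing this script as the mapper.

        #!/usr/bin/env python3
        # Hadoop Streaming mapper sketch: each input line is an image path, and
        # the mapper runs one Orfeo Toolbox application on it. The otbcli name,
        # flags and NDVI expression are illustrative assumptions.
        import subprocess
        import sys

        for line in sys.stdin:
            image = line.strip()
            if not image:
                continue
            output = image.rsplit(".", 1)[0] + "_ndvi.tif"
            result = subprocess.run(
                ["otbcli_BandMath", "-il", image, "-out", output,
                 "-exp", "(im1b4 - im1b3) / (im1b4 + im1b3 + 1e-6)"],
                capture_output=True, text=True)
            # Emit a tab-separated key/value pair for the reducer (path -> status).
            print(f"{image}\t{'ok' if result.returncode == 0 else 'failed'}")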

  7. ENFIN a network to enhance integrative systems biology.

    PubMed

    Kahlem, Pascal; Birney, Ewan

    2007-12-01

    Integration of biological data of various types and development of adapted bioinformatics tools represent critical objectives to enable research at the systems level. The European Network of Excellence ENFIN is engaged in developing both an adapted infrastructure to connect databases and platforms to enable the generation of new bioinformatics tools as well as the experimental validation of computational predictions. We will give an overview of the projects tackled within ENFIN and discuss the challenges associated with integration for systems biology.

  8. Key Technology Research on Open Architecture for The Sharing of Heterogeneous Geographic Analysis Models

    NASA Astrophysics Data System (ADS)

    Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.

    2013-10-01

    In recent years, the rapid development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture tends to provide centralized solutions to end users, while all the required resources are often offered by large enterprises or special agencies; it is therefore a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can conveniently package and deploy their models into the cloud, while model users can search, access and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies, namely a unified cloud interface strategy, a sharing platform based on cloud services, and a computing platform based on cloud services, are discussed in detail, and related experiments are conducted for further verification.

  9. Slow Computing Simulation of Bio-plausible Control

    DTIC Science & Technology

    2012-03-01

    [Abstract garbled in the source record. The legible fragments indicate that neuromorphic chips would become necessary for such information networks, that small unstable flying platforms currently require RTK GPS or Vicon closed-circuit sensing, and that the work concerns FPGA/ASIC neuromorphic chip simulation with visual and IR sensing for quad-rotor and robotic-insect platforms.]

  10. TERRA REF: Advancing phenomics with high resolution, open access sensor and genomics data

    NASA Astrophysics Data System (ADS)

    LeBauer, D.; Kooper, R.; Burnette, M.; Willis, C.

    2017-12-01

    Automated plant measurement has the potential to improve understanding of genetic and environmental controls on plant traits (phenotypes). The application of sensors and software in the automation of high throughput phenotyping reflects a fundamental shift from labor intensive hand measurements to drone, tractor, and robot mounted sensing platforms. These tools are expected to speed the rate of crop improvement by enabling plant breeders to more accurately select plants with improved yields, resource use efficiency, and stress tolerance. However, there are many challenges facing high throughput phenomics: sensors and platforms are expensive, currently there are few standard methods of data collection and storage, and the analysis of large data sets requires high performance computers and automated, reproducible computing pipelines. To overcome these obstacles and advance the science of high throughput phenomics, the TERRA Phenotyping Reference Platform (TERRA-REF) team is developing an open-access database of high resolution sensor data. TERRA REF is an integrated field and greenhouse phenotyping system that includes: a reference field scanner with fifteen sensors that can generate terabytes of data each day at mm resolution; UAV, tractor, and fixed field sensing platforms; and an automated controlled-environment scanner. These platforms will enable investigation of diverse sensing modalities, and the investigation of traits under controlled and field environments. It is the goal of TERRA REF to lower the barrier to entry for academic and industry researchers by providing high-resolution data, open source software, and online computing resources. Our project is unique in that all data will be made fully public in November 2018, and are already available to early adopters through the beta-user program. We will describe the datasets and how to use them as well as the databases and computing pipeline and how these can be reused and remixed in other phenomics pipelines. Finally, we will describe the National Data Service workbench, a cloud computing platform that can access the petabyte scale data while supporting reproducible research.

  11. Electrical Design and Evaluation of Asynchronous Serial Bus Communication Network of 48 Sensor Platform LSIs with Single-Ended I/O for Integrated MEMS-LSI Sensors.

    PubMed

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Muroyama, Masanori

    2018-01-15

    For installing many sensors in a limited space with a limited computing resource, the digitization of the sensor output at the site of sensation has advantages such as a small amount of wiring, low signal interference and high scalability. For this purpose, we have developed a dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) (referred to as "sensor platform LSI") for bus-networked Micro-Electro-Mechanical-Systems (MEMS)-LSI integrated sensors. In this LSI, collision avoidance, adaptation and event-driven functions are simply implemented to relieve data collision and congestion in asynchronous serial bus communication. In this study, we developed a network system with 48 sensor platform LSIs based on Printed Circuit Board (PCB) in a backbone bus topology with the bus length being 2.4 m. We evaluated the serial communication performance when 48 LSIs operated simultaneously with the adaptation function. The number of data packets received from each LSI was almost identical, and the average sampling frequency of 384 capacitance channels (eight for each LSI) was 73.66 Hz.

  12. Integrated Approach to Reconstruction of Microbial Regulatory Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodionov, Dmitry A; Novichkov, Pavel S

    2013-11-04

    This project had the goal of developing an integrated bioinformatics platform for genome-scale inference and visualization of transcriptional regulatory networks (TRNs) in bacterial genomes. The work was done at the Sanford-Burnham Medical Research Institute (SBMRI, P.I. D.A. Rodionov) and Lawrence Berkeley National Laboratory (LBNL, co-P.I. P.S. Novichkov). The developed computational resources include: (1) the RegPredict web platform for TRN inference and regulon reconstruction in microbial genomes, and (2) the RegPrecise database for collection, visualization and comparative analysis of transcriptional regulons reconstructed by comparative genomics. These analytical resources were selected as key components of the DOE Systems Biology KnowledgeBase (SBKB). The high-quality data accumulated in RegPrecise will provide essential datasets of reference regulons in diverse microbes to enable automatic reconstruction of draft TRNs in newly sequenced genomes. We outline our progress toward the three aims of this grant proposal, which were: to develop an integrated platform for genome-scale regulon reconstruction; to infer regulatory annotations in several groups of bacteria and build reference collections of microbial regulons; and to develop a KnowledgeBase on microbial transcriptional regulation.

  13. Study on application of dynamic monitoring of land use based on mobile GIS technology

    NASA Astrophysics Data System (ADS)

    Tian, Jingyi; Chu, Jian; Guo, Jianxing; Wang, Lixin

    2006-10-01

    Land use dynamic monitoring is an important means of keeping land use data up to date in real time. Mobile GIS technology integrates GIS, GPS and the Internet. It can update historical data in real time with data collected on site and realize large-scale data updates with high precision. Monitoring methods for land use change data based on mobile GIS technology are discussed. A mobile GIS terminal was developed in-house for this study with a GPS-25 OEM board and a notebook computer. The RTD (real-time differential) operation mode was selected. A mobile GIS system for dynamic monitoring of land use was developed with Visual C++ as the operating platform, the MapObjects control as the graphics platform and the MSComm control as the communication platform, realizing the organic integration of GPS, GPRS and GIS. This system has the following basic functions: data processing, graphic display, graphic editing, attribute query and navigation. Qinhuangdao city was selected as the experimental area. The study results show that the mobile GIS integration system for dynamic monitoring of land use developed in this study has practical application value.

  14. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Qin, H.

    2016-12-01

    The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resources to be synchronized and burst between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g. images, port numbers, usable cloud capacity, etc.) of each project in advance based on communications between ECITE and the participating projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents or data without worrying about the heterogeneity in structure and operations among different cloud platforms.

  15. Strategic Integration of Multiple Bioinformatics Resources for System Level Analysis of Biological Networks.

    PubMed

    D'Souza, Mark; Sulakhe, Dinanath; Wang, Sheng; Xie, Bing; Hashemifar, Somaye; Taylor, Andrew; Dubchak, Inna; Conrad Gilliam, T; Maltsev, Natalia

    2017-01-01

    Recent technological advances in genomics allow the production of biological data at unprecedented tera- and petabyte scales. Efficient mining of these vast and complex datasets for the needs of biomedical research critically depends on a seamless integration of the clinical, genomic, and experimental information with prior knowledge about genotype-phenotype relationships. Such experimental data accumulated in publicly available databases should be accessible to a variety of algorithms and analytical pipelines that drive computational analysis and data mining. We present an integrated computational platform Lynx (Sulakhe et al., Nucleic Acids Res 44:D882-D887, 2016) ( http://lynx.cri.uchicago.edu ), a web-based database and knowledge extraction engine. It provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization. It gives public access to the Lynx integrated knowledge base (LynxKB) and its analytical tools via user-friendly web services and interfaces. The Lynx service-oriented architecture supports annotation and analysis of high-throughput experimental data. Lynx tools assist the user in extracting meaningful knowledge from LynxKB and experimental data, and in the generation of weighted hypotheses regarding the genes and molecular mechanisms contributing to human phenotypes or conditions of interest. The goal of this integrated platform is to support the end-to-end analytical needs of various translational projects.

  16. Final Scientific/Technical Report for "Enabling Exascale Hardware and Software Design through Scalable System Virtualization"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinda, Peter August

    2015-03-17

    This report describes the activities, findings, and products of the Northwestern University component of the "Enabling Exascale Hardware and Software Design through Scalable System Virtualization" project. The purpose of this project has been to extend the state of the art of systems software for high-end computing (HEC) platforms, and to use systems software to better enable the evaluation of potential future HEC platforms, for example exascale platforms. Such platforms, and their systems software, have the goal of providing scientific computation at new scales, thus enabling new research in the physical sciences and engineering. Over time, the innovations in systems software for such platforms also become applicable to more widely used computing clusters, data centers, and clouds. This was a five-institution project, centered on the Palacios virtual machine monitor (VMM) systems software, a project begun at Northwestern, and originally developed in a previous collaboration between Northwestern University and the University of New Mexico. In this project, Northwestern (including via our subcontract to the University of Pittsburgh) contributed to the continued development of Palacios, along with other team members. We took the leadership role in (1) continued extension of support for emerging Intel and AMD hardware, (2) integration and performance enhancement of overlay networking, (3) connectivity with architectural simulation, (4) binary translation, and (5) support for modern Non-Uniform Memory Access (NUMA) hosts and guests. We also took a supporting role in support for specialized hardware for I/O virtualization, profiling, configurability, and integration with configuration tools. The efforts we led (1-5) were largely successful and executed as expected, with code and papers resulting from them. The project demonstrated the feasibility of a virtualization layer for HEC computing, similar to such layers for cloud or datacenter computing. For effort (3), although a prototype connecting Palacios with the GEM5 architectural simulator was demonstrated, our conclusion was that such a platform was less useful for design space exploration than anticipated due to inherent complexity of the connection between the instruction set architecture level and the microarchitectural level. For effort (4), we found that a code injection approach proved to be more fruitful. The results of our efforts are publicly available in the open source Palacios codebase and published papers, all of which are available from the project web site, v3vee.org. Palacios is currently one of the two codebases (the other being Sandia's Kitten lightweight kernel) that underlies the node operating system for the DOE Hobbes Project, one of two projects tasked with building a systems software prototype for the national exascale computing effort.

  17. Missile signal processing common computer architecture for rapid technology upgrade

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain is comprised of two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of the processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to respond rapidly to new threats. A new design approach is made possible by three developments: Moore's-Law-driven improvement in computational throughput, a newly introduced vector computing capability in general-purpose processors, and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. The use of standardized development tools and third-party software upgrades is enabled, as is rapid upgrade of processing components as improved algorithms are developed. The resulting weapon system will have superior processing capability over a custom approach at the time of deployment as a result of shorter development cycles and the use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system and can migrate between weapon system variants thanks to the simplicity of modification. This paper presents a reference design using the new approach that utilizes an AltiVec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS) and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.
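    The front-end steps named above are straightforward to express numerically; the following NumPy sketch (not the reference design itself) illustrates two of them, two-point non-uniformity correction and frame integration, on synthetic calibration tables and frames.

```python
# Hedged sketch (not the reference design): two of the front-end video steps the
# paper lists, expressed with NumPy. Gain/offset tables and frame data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
gain = rng.normal(1.0, 0.05, size=(64, 64))     # per-pixel responsivity (calibration table)
offset = rng.normal(0.0, 2.0, size=(64, 64))    # per-pixel dark offset

def nuc(frame: np.ndarray) -> np.ndarray:
    """Two-point non-uniformity correction: remove per-pixel gain and offset."""
    return (frame - offset) / gain

def integrate(frames: list) -> np.ndarray:
    """Frame integration: average corrected frames to raise SNR before detection."""
    return np.mean([nuc(f) for f in frames], axis=0)

raw = [gain * 100.0 + offset + rng.normal(0.0, 5.0, size=(64, 64)) for _ in range(8)]
print(integrate(raw).mean())   # close to the true scene level of 100
```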

  18. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computer resources and contribute to running advanced hydrological models and simulations. The web-based approach allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational units. A relational database system is used for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
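    As a minimal sketch of the relational queue management described (assumed table and column names, not the authors' schema), the snippet below stores pending work units in SQLite and hands the next unit to a volunteer node.

```python
# Minimal sketch, assuming a simple relational task queue like the one described:
# work units are rows, and each volunteer browser fetches the next pending unit.
# Table and column names are illustrative, not the authors' schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, cell TEXT, status TEXT)")
db.executemany("INSERT INTO tasks (cell, status) VALUES (?, 'pending')",
               [(f"subbasin-{i}",) for i in range(10)])

def next_task(conn: sqlite3.Connection):
    """Hand one pending work unit to a volunteer node and mark it as running."""
    row = conn.execute(
        "SELECT id, cell FROM tasks WHERE status = 'pending' LIMIT 1").fetchone()
    if row is None:
        return None
    conn.execute("UPDATE tasks SET status = 'running' WHERE id = ?", (row[0],))
    conn.commit()
    return row

print(next_task(db))  # e.g. (1, 'subbasin-0')
```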

  19. I3Mote: An Open Development Platform for the Intelligent Industrial Internet

    PubMed Central

    Martinez, Borja; Vilajosana, Xavier; Kim, Il Han; Zhou, Jianwei; Tuset-Peiró, Pere; Xhafa, Ariton; Poissonnier, Dominique; Lu, Xiaolin

    2017-01-01

    In this article we present the Intelligent Industrial Internet (I3) Mote, an open hardware platform targeting industrial connectivity and sensing deployments. The I3Mote features the most advanced low-power components to tackle sensing, on-board computing and wireless/wired connectivity for demanding industrial applications. The platform has been designed to fill the gap in the industrial prototyping and early deployment market with a compact form factor and a low-cost, robust industrial design. I3Mote is an advanced and compact prototyping system integrating the components required for deployment as a product, reducing the need for adopting industries to build their own tailored solutions. This article describes the platform design and its firmware and software ecosystem, and characterizes its performance in terms of energy consumption. PMID:28452945

  20. Comprehensive Solar-Terrestrial Environment Model (COSTEM) for Space Weather Predictions

    DTIC Science & Technology

    2007-07-01

    Research in data assimilation methodologies applicable to the space environment, as well as "threat adaptive" grid computing technologies... The Space Weather Modeling Framework (SWMF) [29, 43] was designed in 2001 and has been developed to integrate and couple several models as its components. The SWMF is tested nightly on several computer/compiler platforms and its system tests are documented. The main design goals of the SWMF were to minimize...

  1. Integration of hybrid wireless networks in cloud services oriented enterprise information systems

    NASA Astrophysics Data System (ADS)

    Li, Shancang; Xu, Lida; Wang, Xinheng; Wang, Jue

    2012-05-01

    This article presents a hybrid wireless network integration scheme for cloud services-based enterprise information systems (EISs). With emerging hybrid wireless networks and cloud computing technologies, it is necessary to develop a scheme that can seamlessly integrate these new technologies into existing EISs. By combining hybrid wireless networks and cloud computing in EISs, a new framework is proposed which includes a frontend layer, a middle layer and backend layers connected to IP-based EISs. Based on a collaborative architecture, a cloud services management framework and process diagram are presented. As a key feature, the proposed approach integrates access control functionalities within the hybrid framework, providing users with filtered views of available cloud services based on cloud service access requirements and user security credentials. In future work, we will implement the proposed framework over the SwanMesh platform by integrating the UPnP standard into an enterprise information system.

  2. Cloud Computing for Geosciences--GeoCloud for standardized geospatial service platforms (Invited)

    NASA Astrophysics Data System (ADS)

    Nebert, D. D.; Huang, Q.; Yang, C.

    2013-12-01

    Twenty-first century geoscience faces the challenges of Big Data, spikes in computing requirements (e.g., when a natural disaster happens), and sharing resources through cyberinfrastructure across different organizations (Yang et al., 2011). With the flexibility and cost-efficiency of computing resources a primary concern, cloud computing emerges as a promising solution to provide core capabilities to address these challenges. Many governmental and federal agencies are adopting cloud technologies to cut costs and make federal IT operations more efficient (Huang et al., 2010). However, it is still difficult for geoscientists to take advantage of the benefits of cloud computing to facilitate scientific research and discovery. This presentation uses GeoCloud to illustrate the process and strategies used in building a common platform for geoscience communities to enable the sharing and integration of geospatial data, information and knowledge across different domains. GeoCloud is an annual incubator project coordinated by the Federal Geographic Data Committee (FGDC) in collaboration with the U.S. General Services Administration (GSA) and the Department of Health and Human Services. It is designed as a staging environment to test and document the deployment of a common GeoCloud community platform that can be implemented by multiple agencies. With these standardized virtual geospatial servers, a variety of government geospatial applications can be quickly migrated to the cloud. To achieve this objective, multiple projects are nominated each year by federal agencies from existing public-facing geospatial data services. From the initial candidate projects, a set of common operating system and software requirements was identified as the baseline for platform as a service (PaaS) packages. Based on these common platform packages, each project deploys and monitors its web application, develops best practices, and documents cost and performance information. This paper presents the background, architectural design, and activities of GeoCloud in support of the Geospatial Platform Initiative. System security strategies, approval processes for migrating federal geospatial data, information, and applications into the cloud, and cost estimation for cloud operations are covered. Finally, some lessons learned from the GeoCloud project are discussed as a reference for geoscientists to consider in the adoption of cloud computing.

  3. Examination of District Technology Coordinators in South Central Texas

    ERIC Educational Resources Information Center

    Egeolu, Charity Nnenna

    2013-01-01

    The profusion of computers and educational technologies in schools has precipitated the need for staff with technological skill sets necessary for the integration and support of educational technology infrastructures across multiple platforms at schools and district levels. The purpose of the quantitative survey study was to explore technology…

  4. Engaging Language Learners through Technology Integration: Theory, Applications, and Outcomes

    ERIC Educational Resources Information Center

    Li, Shuai, Ed.; Swanson, Peter, Ed.

    2014-01-01

    Web 2.0 technologies, open source software platforms, and mobile applications have transformed teaching and learning of second and foreign languages. Language teaching has transitioned from a teacher-centered approach to a student-centered approach through the use of Computer-Assisted Language Learning (CALL) and new teaching approaches.…

  5. Computational embryology as an integrative platform for predictive DART (45th Conf of Europ Teratology Society)

    EPA Science Inventory

    Chemical regulation is challenged by the large number of chemicals requiring assessment for potential human health and environmental impacts. For example, the USEPA lists more than 85,000 chemicals on its inventory of substances that fall under the Toxic Substances Control Act (T...

  6. Public-Private Consortium Aims to Cut Preclinical Cancer Drug Discovery from Six Years to Just One | Frederick National Laboratory for Cancer Research

    Cancer.gov

    Scientists from two U.S. national laboratories, industry, and academia today launched an unprecedented effort to transform the way cancer drugs are discovered by creating an open and sharable platform that integrates high-performance computing, share

  7. Teaching and Learning in the Mixed-Reality Science Classroom

    ERIC Educational Resources Information Center

    Tolentino, Lisa; Birchfield, David; Megowan-Romanowicz, Colleen; Johnson-Glenberg, Mina C.; Kelliher, Aisling; Martinez, Christopher

    2009-01-01

    As emerging technologies become increasingly inexpensive and robust, there is an exciting opportunity to move beyond general purpose computing platforms to realize a new generation of K-12 technology-based learning environments. Mixed-reality technologies integrate real world components with interactive digital media to offer new potential to…

  8. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    PubMed Central

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process with fast iterations between consecutive versions are examples of benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerators that need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. In addition, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241

  9. A Hierarchical Visualization Analysis Model of Power Big Data

    NASA Astrophysics Data System (ADS)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed to target different abstraction modules such as transaction, engine, computation, control and storage. The traditionally separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  10. Platform for efficient switching between multiple devices in the intensive care unit.

    PubMed

    De Backere, F; Vanhove, T; Dejonghe, E; Feys, M; Herinckx, T; Vankelecom, J; Decruyenaere, J; De Turck, F

    2015-01-01

    This article is part of the Focus Theme of METHODS of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". Handheld computers, such as tablets and smartphones, are becoming more and more accessible in the clinical care setting and in Intensive Care Units (ICUs). By making the most useful and appropriate data available on multiple devices and facilitating the switching between those devices, staff members can efficiently integrate them into their workflow, allowing for faster and more accurate decisions. This paper addresses the design of a platform for the efficient switching between multiple devices in the ICU. The key functionalities of the platform are the integration of the platform into the workflow of the medical staff and the provision of tailored and dynamic information at the point of care. The platform is designed based on a 3-tier architecture with a focus on extensibility, scalability and an optimal user experience. After identification to a device using Near Field Communication (NFC), the appropriate medical information is shown on the selected device. The visualization of the data is adapted to the type of device. A web-centric approach was used to enable extensibility and portability. A prototype of the platform was thoroughly evaluated; its scalability, performance and user experience were assessed. Performance tests show that the response time of the system scales linearly with the amount of data. Measurements with up to 20 devices have shown no performance loss due to the concurrent use of multiple devices. The platform provides a scalable and responsive solution to enable the efficient switching between multiple devices. Due to the web-centric approach, new devices can easily be integrated. The performance and scalability of the platform have been evaluated, and it was shown that the response time and scalability of the platform were within an acceptable range.

  11. Integrative Utilization of Microenvironments, Biomaterials and Computational Techniques for Advanced Tissue Engineering.

    PubMed

    Shamloo, Amir; Mohammadaliha, Negar; Mohseni, Mina

    2015-10-20

    This review aims to propose the integrative implementation of microfluidic devices, biomaterials, and computational methods that can lead to significant progress in tissue engineering and regenerative medicine research. Simultaneous implementation of multiple techniques can be very helpful in addressing biological processes. Providing controllable biochemical and biomechanical cues within an artificial extracellular matrix similar to in vivo conditions is crucial in tissue engineering and regenerative medicine research. Microfluidic devices provide precise spatial and temporal control over the cell microenvironment. Moreover, generation of accurate and controllable spatial and temporal gradients of biochemical factors is attainable inside microdevices. Since biomaterials with tunable properties are a worthwhile option to construct an artificial extracellular matrix, in vitro platforms that simultaneously utilize natural, synthetic, or engineered biomaterials inside microfluidic devices are phenomenally advantageous to experimental studies in the field of tissue engineering. Additionally, collaboration between experimental and computational methods is a useful way to predict and understand mechanisms responsible for complex biological phenomena. Computational results can be verified by using experimental platforms. Computational methods can also broaden the understanding of the mechanisms behind the biological phenomena observed during experiments. Furthermore, computational methods are powerful tools to optimize the fabrication of microfluidic devices and biomaterials with specific features. Here we present a succinct review of the benefits of microfluidic devices, biomaterials, and computational methods for tissue engineering and regenerative medicine. Furthermore, some breakthroughs in biological phenomena, including neuronal axon development, cancerous cell migration and blood vessel formation via angiogenesis, achieved by virtue of the aforementioned approaches are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Air Force highly integrated photonics program: development and demonstration of an optically transparent fiber optic network for avionics applications

    NASA Astrophysics Data System (ADS)

    Whaley, Gregory J.; Karnopp, Roger J.

    2010-04-01

    The goal of the Air Force Highly Integrated Photonics (HIP) program is to develop and demonstrate single photonic chip components which support a single mode fiber network architecture for use on mobile military platforms. We propose an optically transparent, broadcast and select fiber optic network as the next generation interconnect on avionics platforms. In support of this network, we have developed three principal, single-chip photonic components: a tunable laser transmitter, a 32x32 port star coupler, and a 32 port multi-channel receiver which are all compatible with demanding avionics environmental and size requirements. The performance of the developed components will be presented as well as the results of a demonstration system which integrates the components into a functional network representative of the form factor used in advanced avionics computing and signal processing applications.

  13. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  14. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.
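    As a hedged illustration of the bright-object detection underlying such nighttime vehicle sensing (not the VIDASS implementation, which runs on an ARM-DSP platform), the OpenCV sketch below thresholds a synthetic nighttime frame and reports blob bounding boxes that could correspond to headlights or taillights.

```python
# Hedged illustration (not the VIDASS implementation): detecting bright blobs such
# as headlights/taillights in a nighttime frame by thresholding and contour analysis.
# The frame here is synthetic; a real system would read CCD frames instead.
import cv2
import numpy as np

frame = np.zeros((240, 320), dtype=np.uint8)          # dark road scene
cv2.circle(frame, (100, 150), 6, 255, -1)             # synthetic headlight
cv2.circle(frame, (130, 150), 6, 255, -1)

_, mask = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) > 20:                        # ignore small noise specks
        x, y, w, h = cv2.boundingRect(c)
        print("bright object at", (x, y, w, h))
```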

  15. IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.

    This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high performance computing (HPC) platforms to offer unprecedented resolution from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D[TM]). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price responsive load scenarios.
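    A minimal sketch of an MPI-based hierarchical co-simulation step, assuming a toy price/load exchange rather than the actual IGMS interfaces: rank 0 stands in for the bulk-system coordinator and the remaining ranks for distribution feeders. Run with, e.g., `mpiexec -n 4 python step.py` (file name hypothetical).

```python
# Sketch under assumptions (not the IGMS code): one step of an MPI-based
# hierarchical co-simulation in which rank 0 plays the bulk-system coordinator
# and the remaining ranks play distribution feeders.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Coordinator publishes a price signal to all feeders.
price = comm.bcast(42.0 if rank == 0 else None, root=0)

# Each feeder computes its aggregate load response to the price (toy model here).
feeder_load = 0.0 if rank == 0 else 100.0 / price * rank

# Coordinator sums the feeder loads before the next market/power-flow step.
total_load = comm.reduce(feeder_load, op=MPI.SUM, root=0)
if rank == 0:
    print("aggregate distribution load:", total_load)
```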

  16. Organomatics and organometrics: Novel platforms for long-term whole-organ culture

    PubMed Central

    Bruinsma, Bote G.; Yarmush, Martin L.; Uygun, Korkut

    2014-01-01

    Organ culture systems are instrumental as experimental whole-organ models of physiology and disease, as well as preservation modalities facilitating organ replacement therapies such as transplantation. Nevertheless, a coordinated system of machine perfusion components and integrated regulatory control has yet to be fully developed to achieve long-term maintenance of organ function ex vivo. Here we outline current strategies for organ culture, or organomatics, and how these systems can be regulated by means of computational algorithms, or organometrics, to achieve the organ culture platforms anticipated in modern-day biomedicine. PMID:25035864

  17. Ubiquitous Computing for Remote Cardiac Patient Monitoring: A Survey

    PubMed Central

    Kumar, Sunil; Kambhatla, Kashyap; Hu, Fei; Lifson, Mark; Xiao, Yang

    2008-01-01

    New wireless technologies, such as wireless LAN and sensor networks, for telecardiology purposes give new possibilities for monitoring vital parameters with wearable biomedical sensors, and give patients the freedom to be mobile and still be under continuous monitoring and thereby better quality of patient care. This paper will detail the architecture and quality-of-service (QoS) characteristics in integrated wireless telecardiology platforms. It will also discuss the current promising hardware/software platforms for wireless cardiac monitoring. The design methodology and challenges are provided for realistic implementation. PMID:18604301

  18. Ubiquitous computing for remote cardiac patient monitoring: a survey.

    PubMed

    Kumar, Sunil; Kambhatla, Kashyap; Hu, Fei; Lifson, Mark; Xiao, Yang

    2008-01-01

    New wireless technologies, such as wireless LAN and sensor networks, for telecardiology purposes give new possibilities for monitoring vital parameters with wearable biomedical sensors, and give patients the freedom to be mobile and still be under continuous monitoring and thereby better quality of patient care. This paper will detail the architecture and quality-of-service (QoS) characteristics in integrated wireless telecardiology platforms. It will also discuss the current promising hardware/software platforms for wireless cardiac monitoring. The design methodology and challenges are provided for realistic implementation.

  19. SenSyF Experience on Integration of EO Services in a Generic, Cloud-Based EO Exploitation Platform

    NASA Astrophysics Data System (ADS)

    Almeida, Nuno; Catarino, Nuno; Gutierrez, Antonio; Grosso, Nuno; Andrade, Joao; Caumont, Herve; Goncalves, Pedro; Villa, Guillermo; Mangin, Antoine; Serra, Romain; Johnsen, Harald; Grydeland, Tom; Emsley, Stephen; Jauch, Eduardo; Moreno, Jose; Ruiz, Antonio

    2016-08-01

    SenSyF is a cloud-based data processing framework for EO-based services. It has been a pioneer in addressing Big Data issues from the Earth Observation point of view, and is a precursor of several of the technologies and methodologies that will be deployed in ESA's Thematic Exploitation Platforms and other related systems. The SenSyF system focuses on developing fully automated data management, together with access to a processing and exploitation framework, including Earth Observation specific tools. SenSyF is both a development and validation platform for data-intensive applications using Earth Observation data. With SenSyF, scientific, institutional or commercial institutions developing EO-based applications and services can take advantage of distributed computational and storage resources tailored for applications dependent on big Earth Observation data, without resorting to deep infrastructure and technological investments. This paper describes the integration process and the experience gathered from different EO service providers during the project.

  20. Polymer waveguides for electro-optical integration in data centers and high-performance computers.

    PubMed

    Dangel, Roger; Hofrichter, Jens; Horst, Folkert; Jubin, Daniel; La Porta, Antonio; Meier, Norbert; Soganci, Ibrahim Murat; Weiss, Jonas; Offrein, Bert Jan

    2015-02-23

    To satisfy the intra- and inter-system bandwidth requirements of future data centers and high-performance computers, low-cost low-power high-throughput optical interconnects will become a key enabling technology. To tightly integrate optics with the computing hardware, particularly in the context of CMOS-compatible silicon photonics, optical printed circuit boards using polymer waveguides are considered as a formidable platform. IBM Research has already demonstrated the essential silicon photonics and interconnection building blocks. A remaining challenge is electro-optical packaging, i.e., the connection of the silicon photonics chips with the system. In this paper, we present a new single-mode polymer waveguide technology and a scalable method for building the optical interface between silicon photonics chips and single-mode polymer waveguides.

  1. A Perspective on Implementing a Quantitative Systems Pharmacology Platform for Drug Discovery and the Advancement of Personalized Medicine

    PubMed Central

    Stern, Andrew M.; Schurdak, Mark E.; Bahar, Ivet; Berg, Jeremy M.; Taylor, D. Lansing

    2016-01-01

    Drug candidates exhibiting well-defined pharmacokinetic and pharmacodynamic profiles that are otherwise safe often fail to demonstrate proof-of-concept in phase II and III trials. Innovation in drug discovery and development has been identified as a critical need for improving the efficiency of drug discovery, especially through collaborations between academia, government agencies, and industry. To address the innovation challenge, we describe a comprehensive, unbiased, integrated, and iterative quantitative systems pharmacology (QSP)–driven drug discovery and development strategy and platform that we have implemented at the University of Pittsburgh Drug Discovery Institute. Intrinsic to QSP is its integrated use of multiscale experimental and computational methods to identify mechanisms of disease progression and to test predicted therapeutic strategies likely to achieve clinical validation for appropriate subpopulations of patients. The QSP platform can address biological heterogeneity and anticipate the evolution of resistance mechanisms, which are major challenges for drug development. The implementation of this platform is dedicated to gaining an understanding of mechanism(s) of disease progression to enable the identification of novel therapeutic strategies as well as repurposing drugs. The QSP platform will help promote the paradigm shift from reactive population-based medicine to proactive personalized medicine by focusing on the patient as the starting and the end point. PMID:26962875

  2. HpQTL: a geometric morphometric platform to compute the genetic architecture of heterophylly.

    PubMed

    Sun, Lidan; Wang, Jing; Zhu, Xuli; Jiang, Libo; Gosik, Kirk; Sang, Mengmeng; Sun, Fengsuo; Cheng, Tangren; Zhang, Qixiang; Wu, Rongling

    2017-02-15

    Heterophylly, i.e. morphological changes in leaves along the axis of an individual plant, is regarded as a strategy used by plants to cope with environmental change. However, little is known of the extent to which heterophylly is controlled by genes and how each underlying gene exerts its effect on heterophyllous variation. We described a geometric morphometric model that can quantify heterophylly in plants and further constructed an R-based computing platform by integrating this model into a genetic mapping and association setting. The platform, named HpQTL, allows specific quantitative trait loci mediating heterophyllous variation to be mapped throughout the genome. The statistical properties of HpQTL were examined and validated via computer simulation. Its biological relevance was demonstrated by results from a real data analysis of heterophylly in a woody plant, mei (Prunus mume). HpQTL provides a powerful tool to analyze heterophylly and its underlying genetic architecture in a quantitative manner. It also contributes a new approach for genome-wide association studies aimed at dissecting the programmed regulation of plant development and evolution. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
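    A purely hypothetical sketch of the idea of quantifying heterophylly, not the HpQTL geometric morphometric model (which is implemented in R): it summarizes leaf-shape change along the shoot axis as the slope of a simple length/width descriptor over node position, using synthetic measurements.

```python
# Hypothetical sketch, not the HpQTL model: summarizing heterophylly as the trend
# of a simple leaf-shape descriptor (length/width ratio) along the shoot axis.
import numpy as np

node_position = np.arange(1, 11)                       # leaf node index along the axis
aspect_ratio = 1.5 + 0.08 * node_position + np.random.default_rng(1).normal(0, 0.02, 10)

slope, intercept = np.polyfit(node_position, aspect_ratio, 1)
print(f"heterophylly index (shape change per node): {slope:.3f}")
```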

  3. Research on digital city geographic information common services platform

    NASA Astrophysics Data System (ADS)

    Chen, Dequan; Wu, Qunyong; Wang, Qinmin

    2008-10-01

    The traditional GIS (Geographic Information System) software development mode exposes many defects that largely slow down a city's informatization progress. There is an urgent need to build a common application infrastructure for informatization projects in order to speed up the pace of digital city development. The advent of service-oriented architecture (SOA) has motivated the adoption of GIS functionality portals that can be executed in a distributed computing environment. Following the SOA principle, we propose and design a digital city geographic information common services platform that provides application development service interfaces for domain users, which can be further extended into relevant business applications. Finally, a public-oriented WebGIS is developed based on the platform to help public users query geographic information in their daily lives, showing that the platform can be conveniently integrated by other applications.

  4. Technical Note: scuda: A software platform for cumulative dose assessment.

    PubMed

    Park, Seyoun; McNutt, Todd; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2016-10-01

    Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (scuda) that can be seamlessly integrated into the clinical workflow. scuda consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring the treatment quality and detecting significant dosimetric variations that are keys to successful ART.
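    As a hedged two-dimensional sketch of the dose-accumulation step (not the scuda code, which uses GPU-accelerated DIR and superposition/convolution dose computation), the snippet below maps a synthetic daily dose grid onto the planning geometry through an assumed deformation field and sums it over fractions.

```python
# Hedged 2-D sketch (not the scuda code): accumulating a daily dose grid onto the
# planning geometry by sampling it through a deformation field, the step the
# platform performs after deformable registration. Field and doses are synthetic.
import numpy as np
from scipy.ndimage import map_coordinates

shape = (64, 64)
yy, xx = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")

# Deformation field mapping planning-CT voxels to daily-CBCT voxels (here a 2-voxel shift).
def_y, def_x = yy + 2.0, xx + 0.0

daily_dose = np.zeros(shape)
daily_dose[20:40, 20:40] = 2.0                          # 2 Gy delivered in a square region

accumulated = np.zeros(shape)
for _ in range(35):                                     # 35 fractions
    accumulated += map_coordinates(daily_dose, [def_y, def_x], order=1, mode="nearest")

print(accumulated.max())                                # ~70 Gy in the mapped target
```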

  5. Technical Note: SCUDA: A software platform for cumulative dose assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seyoun; McNutt, Todd; Quon, Harry

    Purpose: Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (SCUDA) that can be seamlessly integrated into the clinical workflow. Methods: SCUDA consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. Results: The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. Conclusions: The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring the treatment quality and detecting significant dosimetric variations that are keys to successful ART.

  6. Integrating an Educational Game in Moodle LMS

    ERIC Educational Resources Information Center

    Minovic, Miroslav; Milovanovic, Milos; Minovic, Jelena; Starcevic, Dusan

    2012-01-01

    The authors present a learning platform based on a computer game. Learning games combine two industries: education and entertainment, which is often called "Edutainment." The game is realized as a strategic game (similar to Risk[TM]), implemented as a module for Moodle CMS, utilizing Java Applet technology. Moodle is an open-source course…

  7. Dynamic Systems for Individual Tracking via Heterogeneous Information Integration and Crowd Source Distributed Simulation

    DTIC Science & Technology

    2015-12-04

    ...simulations executing on mobile computing platforms, an area not widely studied to date in the distributed simulation research community... These initial studies focused on two conservative synchronization algorithms widely used in the distributed simulation field

  8. Using Mathematica to Teach Process Units: A Distillation Case Study

    ERIC Educational Resources Information Center

    Rasteiro, Maria G.; Bernardo, Fernando P.; Saraiva, Pedro M.

    2005-01-01

    The question addressed here is how to integrate computational tools, namely interactive general-purpose platforms, in the teaching of process units. Mathematica has been selected as a complementary tool to teach distillation processes, with the main objective of leading students to achieve a better understanding of the physical phenomena involved…

  9. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.

    PubMed

    Simonyan, Vahan; Mazumder, Raja

    2014-09-30

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  10. Agent-Based Intelligent Interface for Wheelchair Movement Control

    PubMed Central

    Barriuso, Alberto L.; De Paz, Juan F.

    2018-01-01

    People who suffer from any kind of motor difficulty face serious complications in moving autonomously in their daily lives. However, a growing number of research projects proposing different powered wheelchair control systems are arising. Despite the interest of the research community in the area, there is no platform that allows an easy integration of various control methods that make use of heterogeneous sensors and computationally demanding algorithms. In this work, an architecture based on virtual organizations of agents is proposed that makes use of a flexible and scalable communication protocol allowing the deployment of embedded agents in computationally limited devices. In order to validate the proper functioning of the proposed system, it has been integrated into a conventional wheelchair, and a set of alternative control interfaces has been developed and deployed, including a portable electroencephalography system, a voice interface and a specifically designed smartphone application. A set of tests was conducted to assess both the adequacy of the platform and the accuracy and ease of use of the proposed control systems, yielding positive results that can be useful in further wheelchair interface design and implementation. PMID:29751603

  11. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis

    PubMed Central

    Simonyan, Vahan; Mazumder, Raja

    2014-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis. PMID:25271953

  12. A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging

    NASA Astrophysics Data System (ADS)

    Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc

    2015-06-01

    High-speed X-ray imaging applications play a crucial role in non-destructive investigations of dynamics in materials science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to more efficient use of beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated into a new custom experiment control system called Concert that provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger, which records the temporal evolution of physical events at increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform, was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.
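    A minimal sketch of an image-based trigger criterion, assuming a simple frame-difference test rather than the FPGA logic actually used: recording starts when the mean absolute difference between consecutive frames exceeds a threshold, while the full field of view is preserved.

```python
# Illustrative sketch only (the real trigger runs in FPGA logic): an image-based
# trigger that fires when the mean absolute frame difference exceeds a threshold.
import numpy as np

def triggered(prev: np.ndarray, curr: np.ndarray, threshold: float = 5.0) -> bool:
    """Fire when the scene changes enough between consecutive frames."""
    return float(np.mean(np.abs(curr.astype(np.int16) - prev.astype(np.int16)))) > threshold

rng = np.random.default_rng(2)
quiet = rng.integers(100, 110, size=(2, 512, 512), dtype=np.uint8)
event = quiet[1].copy()
event[150:350, 150:350] += 80                           # a physical event appears in the scene

print(triggered(quiet[0], quiet[1]))   # False: static scene, keep waiting
print(triggered(quiet[1], event))      # True: start recording the event
```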

  13. Electrical Design and Evaluation of Asynchronous Serial Bus Communication Network of 48 Sensor Platform LSIs with Single-Ended I/O for Integrated MEMS-LSI Sensors

    PubMed Central

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki

    2018-01-01

    For installing many sensors in a limited space with a limited computing resource, the digitization of the sensor output at the site of sensation has advantages such as a small amount of wiring, low signal interference and high scalability. For this purpose, we have developed a dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) (referred to as “sensor platform LSI”) for bus-networked Micro-Electro-Mechanical-Systems (MEMS)-LSI integrated sensors. In this LSI, collision avoidance, adaptation and event-driven functions are simply implemented to relieve data collision and congestion in asynchronous serial bus communication. In this study, we developed a network system with 48 sensor platform LSIs based on Printed Circuit Board (PCB) in a backbone bus topology with the bus length being 2.4 m. We evaluated the serial communication performance when 48 LSIs operated simultaneously with the adaptation function. The number of data packets received from each LSI was almost identical, and the average sampling frequency of 384 capacitance channels (eight for each LSI) was 73.66 Hz. PMID:29342923

  14. The SCEC Broadband Platform: A Collaborative Open-Source Software Package for Strong Ground Motion Simulation and Validation

    NASA Astrophysics Data System (ADS)

    Silva, F.; Maechling, P. J.; Goulet, C. A.; Somerville, P.; Jordan, T. H.

    2014-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving geoscientists, earthquake engineers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform (BBP) is open-source scientific software that can generate broadband (0-100Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms for a well-observed historical earthquake. Then, the BBP calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, and several new data products, such as map and distance-based goodness of fit plots. As the number and complexity of scenarios simulated using the Broadband Platform increases, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
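    As a hedged sketch of the flavor of goodness-of-fit used in validation mode (not the Broadband Platform's own modules), the snippet below scores simulated against observed motions as the mean and spread of the natural-log residual ln(obs/sim) over stations; the peak-ground-acceleration values are synthetic.

```python
# Hedged sketch (not the BBP code): a simple validation-style goodness-of-fit,
# the mean and spread of the natural-log residual ln(obs/sim) across stations.
import numpy as np

observed_pga = np.array([0.21, 0.35, 0.12, 0.08, 0.27])    # g, recorded at 5 stations (synthetic)
simulated_pga = np.array([0.18, 0.40, 0.10, 0.09, 0.30])   # g, from a platform run (synthetic)

residual = np.log(observed_pga / simulated_pga)
print(f"model bias: {residual.mean():+.3f}, sigma: {residual.std(ddof=1):.3f}")
```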

  15. Long-term real-time structural health monitoring using wireless smart sensor

    NASA Astrophysics Data System (ADS)

    Jang, Shinae; Mensah-Bonsu, Priscilla O.; Li, Jingcheng; Dahal, Sushil

    2013-04-01

    Improving the safety and security of civil infrastructure has been a critical issue for decades, since civil infrastructure plays a central role in the economics and politics of a modern society. Structural health monitoring of civil infrastructure using wireless smart sensor networks has recently emerged as a promising solution to increase structural reliability, enhance inspection quality, and reduce maintenance costs. Though the hardware and software frameworks for wireless smart sensors are well developed, long-term real-time health monitoring strategies are still not available due to the lack of a systematic interface. In this paper, the Imote2 smart sensor platform is employed, and a graphical user interface for long-term real-time structural health monitoring has been developed in Matlab for the Imote2 platform. This computer-aided engineering platform enables control, visualization of measured data, and a safety alarm feature based on modal property fluctuations. A new decision-making strategy for checking safety is also developed and integrated into this software. Laboratory validation of the computer-aided engineering platform for the Imote2 on a truss bridge and a building structure has shown the potential of the interface for long-term real-time structural health monitoring.
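    A minimal sketch of the alarm idea, assuming FFT peak picking of a dominant modal frequency rather than the actual Imote2 toolsuite: an alarm is raised when the identified frequency drifts beyond a tolerance from its healthy baseline. All signals and thresholds below are synthetic.

```python
# Minimal sketch, assuming the alarm logic described (not the Imote2 toolsuite):
# estimate the dominant modal frequency by FFT peak picking and alarm on drift.
import numpy as np

fs, baseline_hz, tol = 100.0, 2.50, 0.05                # sampling rate, healthy mode, 5% tolerance
t = np.arange(0, 60, 1 / fs)
accel = np.sin(2 * np.pi * 2.30 * t) + 0.1 * np.random.default_rng(3).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]           # skip the DC bin

if abs(dominant - baseline_hz) / baseline_hz > tol:
    print(f"ALARM: dominant frequency {dominant:.2f} Hz drifted from {baseline_hz:.2f} Hz")
```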

  16. The Development of GIS Educational Resources Sharing among Central Taiwan Universities

    NASA Astrophysics Data System (ADS)

    Chou, T.-Y.; Yeh, M.-L.; Lai, Y.-C.

    2011-09-01

    Using GIS in the classroom enhances students' computer skills and broadens their range of knowledge. The paper highlights GIS integration on an e-learning platform and introduces a variety of educational resources. This research project demonstrates tools for the e-learning environment and delivers case studies of learning interaction from central Taiwan universities. Feng Chia University (FCU) obtained an academic project subsidized by the Ministry of Education and developed an e-learning platform for excellence in teaching/learning programs among central Taiwan's universities. The aim of the project is to integrate the educational resources of 13 universities in central Taiwan, with FCU serving as the hub university. To overcome the problem of distance, e-platforms have been established to create experiences with collaboration-enhanced learning. The e-platforms coordinate web service access across the educational community and deliver GIS educational resources. Most GIS-related courses cover the development of GIS, principles of cartography, spatial data analysis and overlay, terrain analysis, buffer analysis, 3D GIS applications, Remote Sensing, GPS technology, WebGIS, MobileGIS, and ArcGIS operation. In each GIS case study, students are taught to understand geographic meaning, collect spatial data, and then use ArcGIS software to analyze the spatial data. One of the e-learning platforms provides lesson plans and presentation slides, so students can learn ArcGIS online. As they analyze spatial data, they can connect to the GIS hub to obtain the data they need, including satellite images, aerial photos, and vector data. Moreover, the e-learning platforms provide solutions and resources, and different levels of image scale have been integrated into the systems. Multi-scale spatial development and analyses in central Taiwan integrate academic research resources among CTTLRC partners, thus establishing a decision-making support mechanism in teaching and learning and accelerating communication, cooperation and sharing among academic units.

  17. Sensor4PRI: A Sensor Platform for the Protection of Railway Infrastructures

    PubMed Central

    Cañete, Eduardo; Chen, Jaime; Díaz, Manuel; Llopis, Luis; Rubio, Bartolomé

    2015-01-01

    Wireless Sensor Networks constitute pervasive and distributed computing systems and are potentially one of the most important technologies of this century. They have been specifically identified as a good candidate to become an integral part of the protection of critical infrastructures. In this paper we focus on railway infrastructure protection and we present the details of a sensor platform designed to be integrated into a slab track system in order to carry out both installation and maintenance monitoring activities. In the installation phase, the platform helps operators to install the slab tracks in the right position. In the maintenance phase, the platform collects information about the structural health and behavior of the infrastructure when a train travels along it and relays the readings to a base station. The base station uses trains as data mules to upload the information to the internet. The use of a train as a data mule is especially suitable for collecting information from remote or inaccessible places which do not have a direct connection to the internet and require less network infrastructure. The overall aim of the system is to deploy a permanent economically viable monitoring system to improve the safety of railway infrastructures. PMID:25734648

  18. Uncovering novel repositioning opportunities using the Open Targets platform.

    PubMed

    Khaladkar, Mugdha; Koscielny, Gautier; Hasan, Samiul; Agarwal, Pankaj; Dunham, Ian; Rajpal, Deepak; Sanseau, Philippe

    2017-12-01

    The recently developed Open Targets platform consolidates a wide range of comprehensive evidence associating known and potential drug targets with human diseases. We have harnessed the integrated data from this platform for novel drug repositioning opportunities. Our computational workflow systematically mines data from various evidence categories and presents potential repositioning opportunities for drugs that are marketed or being investigated in ongoing human clinical trials, based on evidence strength on target-disease pairing. We classified these novel target-disease opportunities in several ways: (i) number of independent counts of evidence; (ii) broad therapy area of origin; and (iii) repositioning within or across therapy areas. Finally, we elaborate on one example that was identified by this approach. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Computing health quality measures using Informatics for Integrating Biology and the Bedside.

    PubMed

    Klann, Jeffrey G; Murphy, Shawn N

    2013-04-19

    The Health Quality Measures Format (HQMF) is a Health Level 7 (HL7) standard for expressing computable Clinical Quality Measures (CQMs). Creating tools to process HQMF queries in clinical databases will become increasingly important as the United States moves forward with its Health Information Technology Strategic Plan to Stages 2 and 3 of the Meaningful Use incentive program (MU2 and MU3). Informatics for Integrating Biology and the Bedside (i2b2) is one of the analytical databases used as part of the Office of the National Coordinator (ONC)'s Query Health platform to move toward this goal. Our goal is to integrate i2b2 with the Query Health HQMF architecture, to prepare for other HQMF use-cases (such as MU2 and MU3), and to articulate the functional overlap between i2b2 and HQMF. Therefore, we analyze the structure of HQMF, and then we apply this understanding to HQMF computation on the i2b2 clinical analytical database platform. Specifically, we develop a translator between two query languages, HQMF and i2b2, so that the i2b2 platform can compute HQMF queries. We use the HQMF structure of queries for aggregate reporting, which define clinical data elements and the temporal and logical relationships between them. We use the i2b2 XML format, which allows flexible querying of a complex clinical data repository in an easy-to-understand domain-specific language. The translator can represent nearly any i2b2-XML query as HQMF and execute in i2b2 nearly any HQMF query expressible in i2b2-XML. This translator is part of the freely available reference implementation of the QueryHealth initiative. We analyze limitations of the conversion and find it covers many, but not all, of the complex temporal and logical operators required by quality measures. HQMF is an expressive language for defining quality measures, and it will be important to understand and implement for CQM computation, in both meaningful use and population health. However, its current form might allow complexity that is intractable for current database systems (both in terms of implementation and computation). Our translator, which supports the subset of HQMF currently expressible in i2b2-XML, may represent the beginnings of a practical compromise. It is being pilot-tested in two Query Health demonstration projects, and it can be further expanded to balance computational tractability with the advanced features needed by measure developers.
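    To make the flavor of the HQMF-to-i2b2 mapping concrete, the toy Python sketch below rewrites a simplified AND/OR group of clinical data criteria into i2b2-style panels (panels combine conjunctively, items within a panel disjunctively). The dictionary layout is a hypothetical stand-in, not the real HQMF or i2b2-XML schema, and the temporal operators discussed above are omitted.

    ```python
    # Simplified stand-ins for HQMF criteria groups and i2b2 panels.
    def criteria_to_panels(criteria_group):
        """Convert a flat AND/OR criteria group into a list of panels.

        An AND of items becomes one panel per item (panels are conjunctive);
        an OR of items becomes a single panel listing all items (items within
        a panel are disjunctive). Nested and temporal constructs would need
        the fuller handling discussed in the text above.
        """
        op = criteria_group["operator"]
        items = criteria_group["items"]
        if op == "AND":
            return [{"items": [item]} for item in items]
        if op == "OR":
            return [{"items": list(items)}]
        raise ValueError(f"unsupported operator: {op}")

    group = {"operator": "AND",
             "items": ["diagnosis:diabetes", "lab:HbA1c>9%"]}
    print(criteria_to_panels(group))
    ```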

  20. Computing Health Quality Measures Using Informatics for Integrating Biology and the Bedside

    PubMed Central

    Murphy, Shawn N

    2013-01-01

    Background The Health Quality Measures Format (HQMF) is a Health Level 7 (HL7) standard for expressing computable Clinical Quality Measures (CQMs). Creating tools to process HQMF queries in clinical databases will become increasingly important as the United States moves forward with its Health Information Technology Strategic Plan to Stages 2 and 3 of the Meaningful Use incentive program (MU2 and MU3). Informatics for Integrating Biology and the Bedside (i2b2) is one of the analytical databases used as part of the Office of the National Coordinator (ONC)’s Query Health platform to move toward this goal. Objective Our goal is to integrate i2b2 with the Query Health HQMF architecture, to prepare for other HQMF use-cases (such as MU2 and MU3), and to articulate the functional overlap between i2b2 and HQMF. Therefore, we analyze the structure of HQMF, and then we apply this understanding to HQMF computation on the i2b2 clinical analytical database platform. Specifically, we develop a translator between two query languages, HQMF and i2b2, so that the i2b2 platform can compute HQMF queries. Methods We use the HQMF structure of queries for aggregate reporting, which define clinical data elements and the temporal and logical relationships between them. We use the i2b2 XML format, which allows flexible querying of a complex clinical data repository in an easy-to-understand domain-specific language. Results The translator can represent nearly any i2b2-XML query as HQMF and execute in i2b2 nearly any HQMF query expressible in i2b2-XML. This translator is part of the freely available reference implementation of the QueryHealth initiative. We analyze limitations of the conversion and find it covers many, but not all, of the complex temporal and logical operators required by quality measures. Conclusions HQMF is an expressive language for defining quality measures, and it will be important to understand and implement for CQM computation, in both meaningful use and population health. However, its current form might allow complexity that is intractable for current database systems (both in terms of implementation and computation). Our translator, which supports the subset of HQMF currently expressible in i2b2-XML, may represent the beginnings of a practical compromise. It is being pilot-tested in two Query Health demonstration projects, and it can be further expanded to balance computational tractability with the advanced features needed by measure developers. PMID:23603227

  1. Note: computer controlled rotation mount for large diameter optics.

    PubMed

    Rakonjac, Ana; Roberts, Kris O; Deb, Amita B; Kjærgaard, Niels

    2013-02-01

    We describe the construction of a motorized optical rotation mount with a 40 mm clear aperture. The device is used to remotely control the power of large diameter laser beams for a magneto-optical trap. A piezo-electric ultrasonic motor on a printed circuit board provides rotation with a precision better than 0.03° and allows for a very compact design. The rotation unit is controlled from a computer via serial communication, making integration into most software control platforms straightforward.
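    The abstract notes only that the unit is driven over a serial link; a minimal Python sketch of such remote control might look as follows, assuming the pyserial package and a hypothetical "MOVE &lt;degrees&gt;" command syntax (the actual firmware protocol is not specified here).

    ```python
    import serial  # pyserial

    # Hypothetical command syntax and port name, for illustration only.
    def set_rotation_angle(port, angle_deg):
        """Rotate the mount to an absolute angle over a serial link."""
        with serial.Serial(port, baudrate=9600, timeout=1.0) as link:
            link.write(f"MOVE {angle_deg:.2f}\n".encode("ascii"))
            return link.readline().decode("ascii").strip()  # e.g. an "OK" reply

    reply = set_rotation_angle("/dev/ttyUSB0", 45.00)
    print(reply)
    ```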

  2. A Cloud-Based Internet of Things Platform for Ambient Assisted Living

    PubMed Central

    Cubo, Javier; Nieto, Adrián; Pimentel, Ernesto

    2014-01-01

    A common feature of ambient intelligence is that many objects are inter-connected and act in unison, which is also a challenge in the Internet of Things. There has been a shift in research towards integrating both concepts, considering the Internet of Things as representing the future of computing and communications. However, the efficient combination and management of heterogeneous things or devices in the ambient intelligence domain is still a tedious task, and it presents crucial challenges. Therefore, to appropriately manage the inter-connection of diverse devices in these systems requires: (1) specifying and efficiently implementing the devices (e.g., as services); (2) handling and verifying their heterogeneity and composition; and (3) standardizing and managing their data, so as to tackle large numbers of systems together, avoiding standalone applications on local servers. To overcome these challenges, this paper proposes a platform to manage the integration and behavior-aware orchestration of heterogeneous devices as services, stored and accessed via the cloud, with the following contributions: (i) we describe a lightweight model to specify the behavior of devices, to determine the order of the sequence of exchanged messages during the composition of devices; (ii) we define a common architecture using a service-oriented standard environment, to integrate heterogeneous devices by means of their interfaces, via a gateway, and to orchestrate them according to their behavior; (iii) we design a framework based on cloud computing technology, connecting the gateway in charge of acquiring the data from the devices with a cloud platform, to remotely access and monitor the data at run-time and react to emergency situations; and (iv) we implement and generate a novel cloud-based IoT platform of behavior-aware devices as services for ambient intelligence systems, validating the whole approach in real scenarios related to a specific ambient assisted living application. PMID:25093343

  3. A cloud-based Internet of Things platform for ambient assisted living.

    PubMed

    Cubo, Javier; Nieto, Adrián; Pimentel, Ernesto

    2014-08-04

    A common feature of ambient intelligence is that many objects are inter-connected and act in unison, which is also a challenge in the Internet of Things. There has been a shift in research towards integrating both concepts, considering the Internet of Things as representing the future of computing and communications. However, the efficient combination and management of heterogeneous things or devices in the ambient intelligence domain is still a tedious task, and it presents crucial challenges. Therefore, to appropriately manage the inter-connection of diverse devices in these systems requires: (1) specifying and efficiently implementing the devices (e.g., as services); (2) handling and verifying their heterogeneity and composition; and (3) standardizing and managing their data, so as to tackle large numbers of systems together, avoiding standalone applications on local servers. To overcome these challenges, this paper proposes a platform to manage the integration and behavior-aware orchestration of heterogeneous devices as services, stored and accessed via the cloud, with the following contributions: (i) we describe a lightweight model to specify the behavior of devices, to determine the order of the sequence of exchanged messages during the composition of devices; (ii) we define a common architecture using a service-oriented standard environment, to integrate heterogeneous devices by means of their interfaces, via a gateway, and to orchestrate them according to their behavior; (iii) we design a framework based on cloud computing technology, connecting the gateway in charge of acquiring the data from the devices with a cloud platform, to remotely access and monitor the data at run-time and react to emergency situations; and (iv) we implement and generate a novel cloud-based IoT platform of behavior-aware devices as services for ambient intelligence systems, validating the whole approach in real scenarios related to a specific ambient assisted living application.

  4. SSTAC/ARTS review of the draft Integrated Technology Plan (ITP). Volume 6: Controls and guidance

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Viewgraphs of briefings from the Space Systems and Technology Advisory Committee (SSTAC)/ARTS review of the draft Integrated Technology Plan (ITP) on controls and guidance are included. Topics covered include: strategic avionics technology planning and bridging programs; avionics technology plan; vehicle health management; spacecraft guidance research; autonomous rendezvous and docking; autonomous landing; computational control; fiberoptic rotation sensors; precision instrument and telescope pointing; microsensors and microinstruments; micro guidance and control initiative; and earth-orbiting platforms controls-structures interaction.

  5. CompatPM: enabling energy efficient multimedia workloads for distributed mobile platforms

    NASA Astrophysics Data System (ADS)

    Nathuji, Ripal; O'Hara, Keith J.; Schwan, Karsten; Balch, Tucker

    2007-01-01

    The computation and communication abilities of modern platforms are enabling increasingly capable cooperative distributed mobile systems. An example is distributed multimedia processing of sensor data in robots deployed for search and rescue, where a system manager can exploit the application's cooperative nature to optimize the distribution of roles and tasks in order to successfully accomplish the mission. Because of limited battery capacities, a critical task a manager must perform is online energy management. While support for power management has become common for the components that populate mobile platforms, what is lacking is integration and explicit coordination across the different management actions performed in a variety of system layers. This paper develops an integration approach for distributed multimedia applications, where a global manager specifies both a power operating point and a workload for a node to execute. Surprisingly, when jointly considering power and QoS, experimental evaluations show that using a simple deadline-driven approach to assigning frequencies can be non-optimal. These trends are further affected by certain characteristics of underlying power management mechanisms, which, in our research, are identified as groupings that classify component power management as "compatible" (VFC) or "incompatible" (VFI) with voltage and frequency scaling. We build on these findings to develop CompatPM, a vertically integrated control strategy for power management in distributed mobile systems. Experimental evaluations of CompatPM indicate average energy improvements of 8% when platform resources are managed jointly rather than independently, demonstrating that previous attempts to maximize battery life by simply minimizing frequency are inappropriate from a platform-level perspective.

  6. Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Gorelick, Noel

    2013-04-01

    The Google Earth Engine platform is a system designed to enable petabyte-scale, scientific analysis and visualization of geospatial datasets. Earth Engine provides a consolidated environment including a massive data catalog co-located with thousands of computers for analysis. The user-friendly front-end provides a workbench environment to allow interactive data and algorithm development and exploration, and provides a convenient mechanism for scientists to share data, visualizations and analytic algorithms via URLs. The Earth Engine data catalog contains a wide variety of popular, curated datasets, including the world's largest online collection of Landsat scenes (> 2.0M), numerous MODIS collections, and many vector-based data sets. The platform provides a uniform access mechanism to a variety of data types, independent of their bands, projection, bit-depth, resolution, etc., facilitating easy multi-sensor analysis. Additionally, a user is able to add and curate their own data and collections. Using a just-in-time, distributed computation model, Earth Engine can rapidly process enormous quantities of geospatial data. All computation is performed lazily; nothing is computed until it is required either for output or as input to another step. This model allows real-time feedback and preview during algorithm development, supporting a rapid algorithm development, test, and improvement cycle that scales seamlessly to large-scale production data processing. Through integration with a variety of other services, Earth Engine is able to bring to bear considerable analytic and technical firepower in a transparent fashion, including: AI-based classification via integration with Google's machine learning infrastructure, publishing and distribution at Google scale through integration with the Google Maps API, Maps Engine and Google Earth, and support for in-the-field activities such as validation, ground-truthing, crowd-sourcing and citizen science through the Android Open Data Kit.
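    This lazy, server-side computation model is exposed through client libraries; the sketch below uses the Earth Engine Python API (assuming an authenticated account and a Landsat 8 Collection 2 TOA catalog ID) to build a median composite — no pixels are processed until getInfo() requests a result.

    ```python
    import ee

    ee.Initialize()  # assumes prior authentication to Earth Engine

    # Build a median composite lazily; nothing is processed until a
    # result (here, a small regional statistic) is actually requested.
    point = ee.Geometry.Point(8.55, 47.37)
    collection = (ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
                  .filterDate("2020-06-01", "2020-09-01")
                  .filterBounds(point))
    composite = collection.median().select(["B4", "B3", "B2"])

    stats = composite.reduceRegion(
        reducer=ee.Reducer.mean(),
        geometry=point.buffer(1000),
        scale=30)
    print(stats.getInfo())  # triggers the server-side computation
    ```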

  7. Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Gorelick, N.

    2012-12-01

    The Google Earth Engine platform is a system designed to enable petabyte-scale, scientific analysis and visualization of geospatial datasets. Earth Engine provides a consolidated environment including a massive data catalog co-located with thousands of computers for analysis. The user-friendly front-end provides a workbench environment to allow interactive data and algorithm development and exploration, and provides a convenient mechanism for scientists to share data, visualizations and analytic algorithms via URLs. The Earth Engine data catalog contains a wide variety of popular, curated datasets, including the world's largest online collection of Landsat scenes (> 2.0M), numerous MODIS collections, and many vector-based data sets. The platform provides a uniform access mechanism to a variety of data types, independent of their bands, projection, bit-depth, resolution, etc., facilitating easy multi-sensor analysis. Additionally, a user is able to add and curate their own data and collections. Using a just-in-time, distributed computation model, Earth Engine can rapidly process enormous quantities of geospatial data. All computation is performed lazily; nothing is computed until it is required either for output or as input to another step. This model allows real-time feedback and preview during algorithm development, supporting a rapid algorithm development, test, and improvement cycle that scales seamlessly to large-scale production data processing. Through integration with a variety of other services, Earth Engine is able to bring to bear considerable analytic and technical firepower in a transparent fashion, including: AI-based classification via integration with Google's machine learning infrastructure, publishing and distribution at Google scale through integration with the Google Maps API, Maps Engine and Google Earth, and support for in-the-field activities such as validation, ground-truthing, crowd-sourcing and citizen science through the Android Open Data Kit.

  8. Teach-Discover-Treat (TDT): Collaborative Computational Drug Discovery for Neglected Diseases

    PubMed Central

    Jansen, Johanna M.; Cornell, Wendy; Tseng, Y. Jane; Amaro, Rommie E.

    2012-01-01

    Teach – Discover – Treat (TDT) is an initiative to promote the development and sharing of computational tools solicited through a competition with the aim to impact education and collaborative drug discovery for neglected diseases. Collaboration, multidisciplinary integration, and innovation are essential for successful drug discovery. This requires a workforce that is trained in state-of-the-art workflows and equipped with the ability to collaborate on platforms that are accessible and free. The TDT competition solicits high quality computational workflows for neglected disease targets, using freely available, open access tools. PMID:23085175

  9. Efficient Parallel Engineering Computing on Linux Workstations

    NASA Technical Reports Server (NTRS)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).
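    The module itself is written in C, but the transparent fork-join usage pattern it provides can be sketched in Python with the standard multiprocessing pool; the worker count and the toy per-segment task below are purely illustrative.

    ```python
    from multiprocessing import Pool
    import os

    def analyze_segment(segment):
        """Stand-in for a per-segment engineering computation."""
        return sum(x * x for x in segment)

    def parallel_analyze(data, n_workers=None):
        """Split the work across worker processes and join the results,
        mirroring the fork-join usage pattern described above."""
        n_workers = n_workers or os.cpu_count()
        chunk = max(1, len(data) // n_workers)
        segments = [data[i:i + chunk] for i in range(0, len(data), chunk)]
        with Pool(processes=n_workers) as pool:
            partials = pool.map(analyze_segment, segments)
        return sum(partials)

    if __name__ == "__main__":
        print(parallel_analyze(list(range(1_000_000))))
    ```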

  10. Design of strength characteristics on the example of a mining support

    NASA Astrophysics Data System (ADS)

    Gwiazda, A.; Sękala, A.; Banaś, W.; Topolska, S.; Foit, K.; Monica, Z.

    2017-08-01

    Certain design approaches form a special group that can be characterized as "design for X". These specific design methodologies, which take the requirements of the product life cycle into account, are collectively described by the acronym DfX. DfX denotes an integrated, computing-platform-based approach to design that binds together the area of design knowledge and the area of computer systems. In this perspective, computer systems are responsible for linking design requirements with the subject of the project and for filtering the information circulating throughout the project. Together, the DfX methodologies form an approach that integrates different functional areas of an industrial organization. Among its internal elements one can distinguish the structure of the project team, the people composing it, the design process itself, the design control system, and the tools supporting this process. Among the outcomes obtained with this approach are higher operating efficiency, professionalism, the ability to innovate, incremental project progress, and an appropriate focus of the project team. Attempts to integrate specific areas of action in design methodology have been made before, for example in economically driven Design for Manufacture, an approach characteristic of European industry, which developed into a methodology that can be defined as Design to/for Cost. The article presents the idea of an integrated design approach related to the DfX approach. The results are described on the basis of a virtual 3D model of a mining support, elaborated in an advanced engineering platform, Siemens PLM NX.

  11. Spectral Factorization and Homogenization Methods for Modeling and Control of Flexible Structures.

    DTIC Science & Technology

    1986-12-15

    ...to the computation of hybrid, state-space modeling of an integrated space platform. Throughout this effort we have focused on the potential for... models can provide an effective tool for analysis of the dynamics of vibrations and their effect on small-angle motions for complex space platforms. In this...

  12. Parallel computing in experimental mechanics and optical measurement: A review (II)

    NASA Astrophysics Data System (ADS)

    Wang, Tianyi; Kemao, Qian

    2018-05-01

    With advantages such as non-destructiveness, high sensitivity, and high accuracy, optical techniques have been successfully applied to measure various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, in the pursuit of higher image resolution for higher accuracy, the computational burden of these optical techniques has become much heavier. Therefore, in recent years, heterogeneous platforms composed of hardware such as CPUs and GPUs have been widely employed to accelerate these techniques because of their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high-speed computation. Next, we review the effects of CPU and GPU parallel computing across a broad scope of EM and OM applications, including digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.
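    As a minimal illustration of the data-parallel pattern shared by many of these applications, the sketch below distributes independent image tiles (loosely analogous to per-subset digital image correlation) across a CPU process pool; the tile size and the simple correlation metric are placeholders rather than any of the surveyed implementations.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def zncc(pair):
        """Zero-normalized cross-correlation of a reference/deformed tile pair,
        a simplified stand-in for a per-subset correlation computation."""
        ref, cur = pair
        ref = (ref - ref.mean()) / (ref.std() + 1e-12)
        cur = (cur - cur.mean()) / (cur.std() + 1e-12)
        return float((ref * cur).mean())

    def split_tiles(img_a, img_b, tile=64):
        h, w = img_a.shape
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                yield img_a[y:y + tile, x:x + tile], img_b[y:y + tile, x:x + tile]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        a = rng.random((512, 512))
        b = a + 0.01 * rng.random((512, 512))
        with Pool() as pool:
            scores = pool.map(zncc, list(split_tiles(a, b)))
        print(len(scores), round(min(scores), 3))
    ```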

  13. Development and Characteristics of a Mobile, Semi-Autonomous Floating Platform for in situ Lake Measurements

    NASA Astrophysics Data System (ADS)

    Barry, D.; Lemmin, U.; Le Dantec, N.; Zulliger, L.; Rusterholz, M.; Bolay, M.; Rossier, J.; Kangur, K.

    2013-12-01

    In the development of sustainable management strategies for lakes, more insight into their physical, chemical, and ecological dynamics is needed. Field data obtained from various types of sensors, with adequate spatial and temporal sampling rates, are essential to better understand the processes that govern fluxes and pathways of water masses and transported compounds, whether for model validation or for monitoring purposes. One advantage of unmanned platforms is that they limit the disturbances that typically affect the quality of data collected on small vessels, including perturbations caused by the movements of the onboard crew. We have developed a mobile, semi-autonomous floating platform with 8 h of power autonomy based on a 5 m long by 2.5 m wide catamaran. Our approach focused on modularity and high payload capacity in order to accommodate a large number of sensors, in terms of both electronic (power and data) and mechanical integration constraints. The software architecture and onboard electronics use National Instruments technology to simplify and standardize the integration of sensors, actuators, and communication. Piecewise-movable deck sections allow the platform's stability to be optimized depending on the payload. The entire system is controlled by a remote computer located on an accompanying vessel and connected via a wireless link with a range of over 1 km. Real-time transmission of GPS-stamped measurements allows immediate modifications of the survey plan if needed. The displacement of the platform is semi-autonomous, with the options of either an autopilot mode following a pre-planned course specified by waypoints or remote manual control from the accompanying vessel. Maintaining permanent control over the platform's displacement is required for safety reasons with respect to other users of the lake. Currently, the sensor payload comprises an array of fast temperature probes, a bottom-tracking ADCP, and atmospheric sensors including a radiometer. A towed CTD with additional water quality sensors, operated from a remotely controlled winch, is presently being integrated. Field tests have shown that the platform is reliable, capable of collecting long transects of 2D lake data and collocated atmospheric boundary layer data, and adaptable to the integration of new sensors.

  14. A framework for development of an intelligent system for design and manufacturing of stamping dies

    NASA Astrophysics Data System (ADS)

    Hussein, H. M. A.; Kumar, S.

    2014-07-01

    An integration of computer-aided design (CAD), computer-aided process planning (CAPP) and computer-aided manufacturing (CAM) is required for the development of an intelligent system to design and manufacture stamping dies in sheet metal industries. In this paper, a framework for the development of an intelligent system for the design and manufacturing of stamping dies is proposed. In the proposed framework, the intelligent system is structured in the form of various expert system modules for the different activities of die design and manufacturing. All system modules are integrated with each other. The proposed system takes its input in the form of a CAD file of the sheet metal part, and the system modules then automate all tasks related to the design and manufacturing of stamping dies. The modules are coded using Visual Basic (VB) and developed on the platform of AutoCAD software.

  15. High-Throughput Computing on High-Performance Platforms: A Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleynik, D; Panitkin, S; Matteo, Turilli

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan -- a DOE leadership facility -- in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

  16. Systems Medicine: The Future of Medical Genomics, Healthcare, and Wellness.

    PubMed

    Saqi, Mansoor; Pellet, Johann; Roznovat, Irina; Mazein, Alexander; Ballereau, Stéphane; De Meulder, Bertrand; Auffray, Charles

    2016-01-01

    Recent advances in genomics have led to the rapid and relatively inexpensive collection of patient molecular data including multiple types of omics data. The integration of these data with clinical measurements has the potential to impact on our understanding of the molecular basis of disease and on disease management. Systems medicine is an approach to understanding disease through an integration of large patient datasets. It offers the possibility for personalized strategies for healthcare through the development of a new taxonomy of disease. Advanced computing will be an important component in effectively implementing systems medicine. In this chapter we describe three computational challenges associated with systems medicine: disease subtype discovery using integrated datasets, obtaining a mechanistic understanding of disease, and the development of an informatics platform for the mining, analysis, and visualization of data emerging from translational medicine studies.

  17. Investigating the Primary School Teachers' Perspectives on the Use of Education Platforms in Teaching

    ERIC Educational Resources Information Center

    Uredi, Lütfi; Akbasli, Sait; Ulum, Hakan

    2016-01-01

    Technology plays an important role in educational activities in Turkey, largely because of the Fatih Project, which was recently introduced into the country. The Fatih Project is a project of the Turkish government which seeks to integrate computer technology into the country's public education system. Education Informatics Network is one of…

  18. The EPA CompTox Dashboard and Underpinning Software Architecture – a platform for data integration for environmental chemistry data (ACS Fall Meeting 7 of 12)

    EPA Science Inventory

    The CompTox Dashboard was developed by the Environmental Protection Agency’s National Center for Computational Toxicology. This dashboard has been architected in a manner that allows for the deployment of multiple “applications”, both as publicly available databases, and for dep...

  19. A New Cloud Architecture of Virtual Trusted Platform Modules

    NASA Astrophysics Data System (ADS)

    Liu, Dongxi; Lee, Jack; Jang, Julian; Nepal, Surya; Zic, John

    We propose and implement a cloud architecture of virtual Trusted Platform Modules (TPMs) to improve the usability of TPMs. In this architecture, virtual TPMs can be obtained from the TPM cloud on demand. Hence, the TPM functionality is available to applications that do not have physical TPMs in their local platforms. Moreover, the TPM cloud allows users to access their keys and data in the same virtual TPM even if they move to untrusted platforms. The TPM cloud is easy to access for applications in different languages since cloud computing delivers services in standard protocols. The functionality of the TPM cloud is demonstrated by applying it to implement the Needham-Schroeder public-key protocol for web authentication, such that the strong security provided by TPMs is integrated into high-level applications. The chain of trust based on the TPM cloud is discussed, and the security properties of the virtual TPMs in the cloud are analyzed.
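    The message flow of the Needham-Schroeder public-key protocol referenced above can be sketched as follows; the encrypt/decrypt helpers are toy stand-ins for the asymmetric operations that, in the described system, would be backed by keys held in the (virtual) TPMs.

    ```python
    import os

    # Toy stand-ins for asymmetric operations (same label used for both key halves).
    def encrypt(pubkey, message):
        return ("enc", pubkey, message)

    def decrypt(privkey, ciphertext):
        tag, pubkey, message = ciphertext
        assert tag == "enc" and pubkey == privkey  # toy key pairing
        return message

    def needham_schroeder(pk_a, pk_b):
        """Original Needham-Schroeder public-key exchange, simplified."""
        na = os.urandom(8)                                 # A's nonce
        msg1 = encrypt(pk_b, (na, "A"))                    # A -> B: {Na, A}_pkB
        na_b, _ = decrypt(pk_b, msg1)
        nb = os.urandom(8)                                 # B's nonce
        msg2 = encrypt(pk_a, (na_b, nb))                   # B -> A: {Na, Nb}_pkA
        na_a, nb_a = decrypt(pk_a, msg2)
        assert na_a == na                                  # A checks its nonce
        msg3 = encrypt(pk_b, nb_a)                         # A -> B: {Nb}_pkB
        assert decrypt(pk_b, msg3) == nb                   # B checks its nonce
        return True

    print(needham_schroeder("pk_A", "pk_B"))
    ```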

  20. Module-based multiscale simulation of angiogenesis in skeletal muscle

    PubMed Central

    2011-01-01

    Background Mathematical modeling of angiogenesis has been gaining momentum as a means to shed new light on the biological complexity underlying blood vessel growth. A variety of computational models have been developed, each focusing on different aspects of the angiogenesis process and occurring at different biological scales, ranging from the molecular to the tissue levels. Integration of models at different scales is a challenging and currently unsolved problem. Results We present an object-oriented module-based computational integration strategy to build a multiscale model of angiogenesis that links currently available models. As an example case, we use this approach to integrate modules representing microvascular blood flow, oxygen transport, vascular endothelial growth factor transport and endothelial cell behavior (sensing, migration and proliferation). Modeling methodologies in these modules include algebraic equations, partial differential equations and agent-based models with complex logical rules. We apply this integrated model to simulate exercise-induced angiogenesis in skeletal muscle. The simulation results compare capillary growth patterns between different exercise conditions for a single bout of exercise. Results demonstrate how the computational infrastructure can effectively integrate multiple modules by coordinating their connectivity and data exchange. Model parameterization offers simulation flexibility and a platform for performing sensitivity analysis. Conclusions This systems biology strategy can be applied to larger scale integration of computational models of angiogenesis in skeletal muscle, or other complex processes in other tissues under physiological and pathological conditions. PMID:21463529
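    A bare-bones sketch of this module-coordination idea (not the authors' code) is shown below: each module exposes a common step interface, and an integrator advances the modules in turn while passing shared state between them; the two toy modules and their rules are illustrative only.

    ```python
    class Module:
        """Minimal interface each model component is assumed to implement."""
        def step(self, state, dt):
            raise NotImplementedError

    class BloodFlow(Module):
        def step(self, state, dt):
            # Oxygen supply grows with the number of capillaries (placeholder rule).
            state["oxygen_supply"] = 1.0 + 0.1 * state.get("capillaries", 0)
            return state

    class CellBehavior(Module):
        def step(self, state, dt):
            # Sprout a capillary when supply lags demand (placeholder rule).
            if state["oxygen_supply"] < state.get("oxygen_demand", 1.5):
                state["capillaries"] = state.get("capillaries", 0) + 1
            return state

    def integrate(modules, state, dt, n_steps):
        """Advance all modules in sequence, exchanging state each step."""
        for _ in range(n_steps):
            for module in modules:
                state = module.step(state, dt)
        return state

    print(integrate([BloodFlow(), CellBehavior()],
                    {"oxygen_demand": 1.5}, dt=1.0, n_steps=5))
    ```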

  1. Integrating Commercial Off-The-Shelf (COTS) graphics and extended memory packages with CLIPS

    NASA Technical Reports Server (NTRS)

    Callegari, Andres C.

    1990-01-01

    This paper addresses the question of how to combine CLIPS with graphics and how to overcome the PC's memory limitations by using the extended memory available in the computer. By adding graphics and extended memory capabilities, CLIPS can be converted into a complete and powerful system development tool on one of the most economical and popular computer platforms. New models of PCs have impressive processing capabilities and graphics resolutions that cannot be ignored and should be used to the fullest. CLIPS is a powerful expert system development tool, but it cannot be complete without the support of a graphics package needed to create user interfaces and general-purpose graphics, or without enough memory to handle large knowledge bases. A well-known limitation on PCs is the use of real memory, which restricts CLIPS to only 640 KB; that problem can now be solved by developing a version of CLIPS that uses extended memory. The user has access to up to 16 MB of memory on 80286-based computers and to practically all the available memory (4 GB) on computers that use the 80386 processor. If we give CLIPS a self-configuring graphics package that automatically detects the graphics hardware and pointing device present in the computer, and we add the extended memory that exists in the computer (with no special hardware needed), the user will be able to create more powerful systems at a fraction of the cost, on the most popular, portable, and economical platform available: the PC.

  2. Computer-aided drug design at Boehringer Ingelheim

    NASA Astrophysics Data System (ADS)

    Muegge, Ingo; Bergner, Andreas; Kriegl, Jan M.

    2017-03-01

    Computer-Aided Drug Design (CADD) is an integral part of the drug discovery endeavor at Boehringer Ingelheim (BI). CADD contributes to the evaluation of new therapeutic concepts, identifies small molecule starting points for drug discovery, and develops strategies for optimizing hit and lead compounds. The CADD scientists at BI benefit from the global use and development of both software platforms and computational services. A number of computational techniques developed in-house have significantly changed the way early drug discovery is carried out at BI. In particular, virtual screening in vast chemical spaces, which can be accessed by combinatorial chemistry, has added a new option for the identification of hits in many projects. Recently, a new framework has been implemented allowing fast, interactive predictions of relevant on and off target endpoints and other optimization parameters. In addition to the introduction of this new framework at BI, CADD has been focusing on the enablement of medicinal chemists to independently perform an increasing amount of molecular modeling and design work. This is made possible through the deployment of MOE as a global modeling platform, allowing computational and medicinal chemists to freely share ideas and modeling results. Furthermore, a central communication layer called the computational chemistry framework provides broad access to predictive models and other computational services.

  3. Display integration for ground combat vehicles

    NASA Astrophysics Data System (ADS)

    Busse, David J.

    1998-09-01

    The United States Army's requirement to employ high-resolution target acquisition sensors and information warfare to increase its dominance over enemy forces has led to the need to integrate advanced display devices into ground combat vehicle crew stations. The Army's force structure requires the integration of advanced displays on both existing and emerging ground combat vehicle systems. The fielding of second-generation target acquisition sensors, color digital terrain maps, and high-volume digital command and control information networks on these platforms defines display performance requirements. The greatest challenge facing the system integrator is the development and integration of advanced displays that meet operational, vehicle, and human-computer interface performance requirements for the ground combat vehicle fleet. This paper addresses those challenges: operational and vehicle performance, non-soldier-centric crew station configurations, display performance limitations related to human-computer interfaces and vehicle physical environments, display technology limitations, and Department of Defense (DOD) acquisition reform initiatives. How the ground combat vehicle Program Manager and system integrator are addressing these challenges is discussed through the integration of displays on fielded, current, and future close combat vehicle applications.

  4. Indoor Map Aided Wi-Fi Integrated Lbs on Smartphone Platforms

    NASA Astrophysics Data System (ADS)

    Yu, C.; El-Sheimy, N.

    2017-09-01

    In this research, an indoor-map-aided INS/Wi-Fi integrated location-based services (LBS) application is proposed and implemented on smartphone platforms. Indoor map information, together with measurements from an inertial measurement unit (IMU) and Received Signal Strength Indicator (RSSI) values from Wi-Fi, is collected to obtain an accurate, continuous, and low-cost position solution. The main challenge of this research is to make effective use of various measurements that complement each other without increasing the computational burden of the system. The integrated system in this paper includes three modules: INS, Wi-Fi (if a signal is available), and indoor maps. A cascaded Particle/Kalman filter framework is applied to combine the different modules. First, the INS position and the Wi-Fi fingerprint position are integrated through a Kalman filter to estimate the position. Then, indoor map information is applied through a particle filter to correct the error of the INS/Wi-Fi estimated position. Indoor tests show that the proposed method can effectively reduce the accumulated positioning errors of stand-alone INS systems and provide a stable, continuous, and reliable indoor location service.
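    For intuition, the first (Kalman) stage can be reduced to a one-dimensional sketch in which an INS-propagated position is corrected by a Wi-Fi fingerprint fix, weighted by their respective uncertainties; the map-matching particle-filter stage is omitted and all numbers are illustrative.

    ```python
    def kalman_fuse(ins_pos, ins_var, wifi_pos, wifi_var):
        """One scalar predict/update cycle: the INS solution acts as the
        prediction, the Wi-Fi fingerprint position as the measurement."""
        gain = ins_var / (ins_var + wifi_var)
        fused_pos = ins_pos + gain * (wifi_pos - ins_pos)
        fused_var = (1.0 - gain) * ins_var
        return fused_pos, fused_var

    # INS says 12.0 m along the corridor but has drifted (variance 4.0 m^2);
    # the Wi-Fi fix of 10.5 m is noisier per epoch (variance 9.0 m^2).
    pos, var = kalman_fuse(12.0, 4.0, 10.5, 9.0)
    print(round(pos, 2), round(var, 2))  # estimate pulled toward the Wi-Fi fix
    ```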

  5. The Common Data Acquisition Platform in the Helmholtz Association

    NASA Astrophysics Data System (ADS)

    Kaever, P.; Balzer, M.; Kopmann, A.; Zimmer, M.; Rongen, H.

    2017-04-01

    Various centres of the German Helmholtz Association (HGF) started in 2012 to develop a modular data acquisition (DAQ) platform, covering the entire range from detector readout to data transfer into parallel computing environments. This platform integrates generic hardware components like the multi-purpose HGF-Advanced Mezzanine Card or a smart scientific camera framework, adding user value with Linux drivers and board support packages. Technically the scope comprises the DAQ-chain from FPGA-modules to computing servers, notably frontend-electronics-interfaces, microcontrollers and GPUs with their software plus high-performance data transmission links. The core idea is a generic and component-based approach, enabling the implementation of specific experiment requirements with low effort. This so called DTS-platform will support standards like MTCA.4 in hard- and software to ensure compatibility with commercial components. Its capability to deploy on other crate standards or FPGA-boards with PCI express or Ethernet interfaces remains an essential feature. Competences of the participating centres are coordinated in order to provide a solid technological basis for both research topics in the Helmholtz Programme ``Matter and Technology'': ``Detector Technology and Systems'' and ``Accelerator Research and Development''. The DTS-platform aims at reducing costs and development time and will ensure access to latest technologies for the collaboration. Due to its flexible approach, it has the potential to be applied in other scientific programs.

  6. Pulseq-Graphical Programming Interface: Open source visual environment for prototyping pulse sequences and integrated magnetic resonance imaging algorithm development.

    PubMed

    Ravi, Keerthi Sravan; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam

    2018-03-11

    To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. We integrated the "Pulseq" platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three variants of implementation were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor-supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all three implementations were fast (a few seconds). The software is capable of user-interface-based development and/or command-line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Rate and Peripheral Nerve Stimulation computations. Copyright © 2018 Elsevier Inc. All rights reserved.
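    Outside the graphical environment, the Pulseq format can also be scripted directly from Python; assuming the community pypulseq port is installed, a minimal block-structured sequence might be assembled roughly as follows (function names and parameters are from that port, simplified, and are not part of Pulseq-GPI itself).

    ```python
    import math
    import pypulseq as pp  # community Python port of the Pulseq format (assumed installed)

    # A minimal FID-like block structure; all parameters are purely illustrative.
    seq = pp.Sequence()
    rf = pp.make_block_pulse(flip_angle=math.pi / 2, duration=1e-3)   # 90-degree excitation
    adc = pp.make_adc(num_samples=256, duration=3.2e-3)               # readout window

    seq.add_block(rf)
    seq.add_block(pp.make_delay(5e-3))
    seq.add_block(adc)
    seq.write("demo_fid.seq")  # vendor-independent .seq file for the scanner
    ```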

  7. The Image Data Resource: A Bioimage Data Integration and Publication Platform.

    PubMed

    Williams, Eleanor; Moore, Josh; Li, Simon W; Rustici, Gabriella; Tarkowska, Aleksandra; Chessel, Anatole; Leo, Simone; Antal, Bálint; Ferguson, Richard K; Sarkans, Ugis; Brazma, Alvis; Salas, Rafael E Carazo; Swedlow, Jason R

    2017-08-01

    Access to primary research data is vital for the advancement of science. To extend the data types supported by community repositories, we built a prototype Image Data Resource (IDR) that collects and integrates imaging data acquired across many different imaging modalities. IDR links data from several imaging modalities, including high-content screening, super-resolution and time-lapse microscopy, digital pathology, public genetic or chemical databases, and cell and tissue phenotypes expressed using controlled ontologies. Using this integration, IDR facilitates the analysis of gene networks and reveals functional interactions that are inaccessible to individual studies. To enable re-analysis, we also established a computational resource based on Jupyter notebooks that allows remote access to the entire IDR. IDR is also an open source platform that others can use to publish their own image data. Thus IDR provides both a novel on-line resource and a software infrastructure that promotes and extends publication and re-analysis of scientific image data.

  8. Platform-dependent optimization considerations for mHealth applications

    NASA Astrophysics Data System (ADS)

    Kaghyan, Sahak; Akopian, David; Sarukhanyan, Hakob

    2015-03-01

    Modern mobile devices contain integrated sensors that enable a multitude of applications in fields such as mobile health (mHealth), entertainment, and sports. Human physical activity monitoring is one such emerging application. A range of challenges relates to activity monitoring tasks, in particular finding optimal solutions and architectures for the corresponding mobile software applications; some of these aspects are discussed in this paper. This work addresses mobile computations related to the integrated sensors that can be used for activity monitoring, such as accelerometers, gyroscopes, the integrated global positioning system (GPS), and WLAN-based positioning. Each of these sensing data sources has its own characteristics, such as specific data formats, data rates, and signal acquisition durations, and these specifications affect energy consumption. Energy consumption also varies significantly as sensor data acquisition is followed by data analysis, including various transformations and signal processing algorithms. This paper addresses several aspects of more optimal activity monitoring implementations that exploit the state-of-the-art capabilities of modern platforms.

  9. HTC Vive MeVisLab integration via OpenVR for medical applications

    PubMed Central

    Egger, Jan; Gall, Markus; Wallner, Jürgen; Boechat, Pedro; Hann, Alexander; Li, Xing; Chen, Xiaojun; Schmalstieg, Dieter

    2017-01-01

    Virtual Reality (VR), an immersive technology that replicates an environment via computer-simulated reality, gets a lot of attention in the entertainment industry. However, VR also has great potential in other areas, such as the medical domain; examples are intervention planning, training, and simulation. This is especially useful for medical operations where the aesthetic outcome is important, such as facial surgery. Alas, importing medical data into Virtual Reality devices is not necessarily trivial, in particular when a direct connection to a proprietary application is desired. Moreover, most researchers do not build their medical applications from scratch, but rather leverage platforms like MeVisLab, MITK, OsiriX or 3D Slicer. These platforms have in common that they use libraries like ITK and VTK, and provide a convenient graphical interface. However, ITK and VTK do not support Virtual Reality directly. In this study, the use of a Virtual Reality device for medical data under the MeVisLab platform is presented. The OpenVR library is integrated into the MeVisLab platform, allowing direct and uncomplicated use of the HTC Vive head-mounted display inside MeVisLab. Medical data coming from other MeVisLab modules can be connected directly per drag-and-drop to the Virtual Reality module, rendering the data inside the HTC Vive for immersive virtual reality inspection. PMID:28323840

  10. A Perspective on Implementing a Quantitative Systems Pharmacology Platform for Drug Discovery and the Advancement of Personalized Medicine.

    PubMed

    Stern, Andrew M; Schurdak, Mark E; Bahar, Ivet; Berg, Jeremy M; Taylor, D Lansing

    2016-07-01

    Drug candidates exhibiting well-defined pharmacokinetic and pharmacodynamic profiles that are otherwise safe often fail to demonstrate proof-of-concept in phase II and III trials. Innovation in drug discovery and development has been identified as a critical need for improving the efficiency of drug discovery, especially through collaborations between academia, government agencies, and industry. To address the innovation challenge, we describe a comprehensive, unbiased, integrated, and iterative quantitative systems pharmacology (QSP)-driven drug discovery and development strategy and platform that we have implemented at the University of Pittsburgh Drug Discovery Institute. Intrinsic to QSP is its integrated use of multiscale experimental and computational methods to identify mechanisms of disease progression and to test predicted therapeutic strategies likely to achieve clinical validation for appropriate subpopulations of patients. The QSP platform can address biological heterogeneity and anticipate the evolution of resistance mechanisms, which are major challenges for drug development. The implementation of this platform is dedicated to gaining an understanding of mechanism(s) of disease progression to enable the identification of novel therapeutic strategies as well as repurposing drugs. The QSP platform will help promote the paradigm shift from reactive population-based medicine to proactive personalized medicine by focusing on the patient as the starting and the end point. © 2016 Society for Laboratory Automation and Screening.

  11. HTC Vive MeVisLab integration via OpenVR for medical applications.

    PubMed

    Egger, Jan; Gall, Markus; Wallner, Jürgen; Boechat, Pedro; Hann, Alexander; Li, Xing; Chen, Xiaojun; Schmalstieg, Dieter

    2017-01-01

    Virtual Reality (VR), an immersive technology that replicates an environment via computer-simulated reality, gets a lot of attention in the entertainment industry. However, VR also has great potential in other areas, such as the medical domain; examples are intervention planning, training, and simulation. This is especially useful for medical operations where the aesthetic outcome is important, such as facial surgery. Alas, importing medical data into Virtual Reality devices is not necessarily trivial, in particular when a direct connection to a proprietary application is desired. Moreover, most researchers do not build their medical applications from scratch, but rather leverage platforms like MeVisLab, MITK, OsiriX or 3D Slicer. These platforms have in common that they use libraries like ITK and VTK, and provide a convenient graphical interface. However, ITK and VTK do not support Virtual Reality directly. In this study, the use of a Virtual Reality device for medical data under the MeVisLab platform is presented. The OpenVR library is integrated into the MeVisLab platform, allowing direct and uncomplicated use of the HTC Vive head-mounted display inside MeVisLab. Medical data coming from other MeVisLab modules can be connected directly per drag-and-drop to the Virtual Reality module, rendering the data inside the HTC Vive for immersive virtual reality inspection.

  12. Simulation of Conformal Spiral Slot Antennas on Composite Platforms

    NASA Technical Reports Server (NTRS)

    Volakis, J. L.; Nurnberger, M. W.; Ozdemir,T.

    1998-01-01

    During the course of the grant, we wrote and distributed about 12 reports and an equal number of journal papers supported fully or in part by this grant. The list of reports (titles and abstracts) and papers is given in Appendices A and B. This grant has indeed been instrumental in developing a robust hybrid finite element method for the analysis of complex broadband antennas on doubly curved platforms. Before the grant, our capability was limited to simple printed patch antennas on mostly planar platforms. More specifically: (1) mixed element formulations were developed and new edge-based prisms were introduced; (2) these elements were important in permitting flexibility in geometry gridding for most antennas of interest; (3) new perfectly matched absorbers were introduced for mesh truncations associated with highly curved surfaces; (4) fast integral algorithms were introduced for boundary integral truncations, reducing CPU time from O(N^2) down to O(N^1.5) or less; (5) frequency extrapolation schemes were developed for efficient broadband performance evaluations, an activity that has been successfully continued by NASA researchers; (6) computer codes were developed and extensively tested for several broadband configurations, including FEMA-CYL, FEMA-PRISM and FEMA-TETRA, written by L. Kempel, T. Ozdemir and J. Gong, respectively; (7) a new infinite balun feed was designed with nearly constant impedance over the 800-3000 MHz operational band; (8) a complete slot spiral antenna was developed, fabricated and tested at NASA Langley. This new design is a culmination of the project's goals and integrates the computational and experimental efforts. The antenna design resulted in a U.S. patent and was revised three times to achieve the desired bandwidth and gain requirements from 800-3000 MHz.

  13. Simultaneous electrical recording of cardiac electrophysiology and contraction on chip

    DOE PAGES

    Qian, Fang; Huang, Chao; Lin, Yi-Dong; ...

    2017-04-18

    Prevailing commercialized cardiac platforms for in vitro drug development utilize planar microelectrode arrays to map action potentials, or impedance sensing to record contraction in real time, but cannot record both functions on the same chip with high spatial resolution. We report a novel cardiac platform that can record cardiac tissue adhesion, electrophysiology, and contractility on the same chip. The platform integrates two independent yet interpenetrating sensor arrays: a microelectrode array for field potential readouts and an interdigitated electrode array for impedance readouts. Together, these arrays provide real-time, non-invasive data acquisition of both cardiac electrophysiology and contractility under physiological conditions and under drug stimuli. Furthermore, we cultured human induced pluripotent stem cell-derived cardiomyocytes as a model system and used them to validate the platform with an excitation-contraction decoupling chemical. Preliminary data using the platform to investigate the effect of the drug norepinephrine are combined with computational efforts. Finally, this platform provides a quantitative and predictive assay system that can potentially be used for comprehensive assessment of cardiac toxicity earlier in the drug discovery process.

  14. Simultaneous electrical recording of cardiac electrophysiology and contraction on chip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Fang; Huang, Chao; Lin, Yi-Dong

    Prevailing commercialized cardiac platforms for in vitro drug development utilize planar microelectrode arrays to map action potentials, or impedance sensing to record contraction in real time, but cannot record both functions on the same chip with high spatial resolution. We report a novel cardiac platform that can record cardiac tissue adhesion, electrophysiology, and contractility on the same chip. The platform integrates two independent yet interpenetrating sensor arrays: a microelectrode array for field potential readouts and an interdigitated electrode array for impedance readouts. Together, these arrays provide real-time, non-invasive data acquisition of both cardiac electrophysiology and contractility under physiological conditions and under drug stimuli. Furthermore, we cultured human induced pluripotent stem cell-derived cardiomyocytes as a model system and used them to validate the platform with an excitation–contraction decoupling chemical. Preliminary data using the platform to investigate the effect of the drug norepinephrine are combined with computational efforts. Finally, this platform provides a quantitative and predictive assay system that can potentially be used for comprehensive assessment of cardiac toxicity earlier in the drug discovery process.

  15. Earth system modelling on system-level heterogeneous architectures: EMAC (version 2.42) on the Dynamical Exascale Entry Platform (DEEP)

    NASA Astrophysics Data System (ADS)

    Christou, Michalis; Christoudias, Theodoros; Morillo, Julián; Alvarez, Damian; Merx, Hendrik

    2016-09-01

    We examine an alternative approach to heterogeneous cluster computing in the many-core era for Earth system models, using the European Centre for Medium-Range Weather Forecasts Hamburg (ECHAM)/Modular Earth Submodel System (MESSy) Atmospheric Chemistry (EMAC) model as a pilot application on the Dynamical Exascale Entry Platform (DEEP). A set of interconnected autonomous coprocessors, called the Booster, complements a conventional HPC Cluster and increases its computing performance, offering extra flexibility to expose multiple levels of parallelism and achieve better scalability. The EMAC model atmospheric chemistry code (Module Efficiently Calculating the Chemistry of the Atmosphere (MECCA)) was taskified with an offload mechanism implemented using OmpSs directives. The model was ported to the MareNostrum 3 supercomputer to allow testing with Intel Xeon Phi accelerators on a production-size machine. The changes proposed in this paper are expected to contribute to the eventual adoption of the Cluster-Booster division and Many Integrated Core (MIC) accelerated architectures in presently available implementations of Earth system models, towards exploiting the potential of a fully Exascale-capable platform.

  16. Integrating CAD modules in a PACS environment using a wide computing infrastructure.

    PubMed

    Suárez-Cuenca, Jorge J; Tilve, Amara; López, Ricardo; Ferro, Gonzalo; Quiles, Javier; Souto, Miguel

    2017-04-01

    The aim of this paper is to describe a project designed to achieve a total integration of different CAD algorithms into the PACS environment by using a wide computing infrastructure. The goal is to build a system for the entire region of Galicia, Spain, to make CAD accessible to multiple hospitals employing different PACSs and clinical workstations. The new CAD model seeks to connect different devices (CAD systems, acquisition modalities, workstations and PACS) by means of networking based on a platform that will offer different CAD services. This paper describes some aspects related to the health services of the region where the project was developed, the CAD algorithms that were either employed or selected for inclusion in the project, and several technical aspects and results. We have built a standards-based platform with which users can request a CAD service and receive the results in their local PACS. The process runs through a web interface that allows sending data to the different CAD services. A DICOM SR object is received with the results of the algorithms and stored inside the original study in the proper folder with the original images. As a result, a homogeneous service will be offered to the different hospitals of the region. End users will benefit from a homogeneous workflow and a standardised integration model to request and obtain results from CAD systems in any modality, not dependent on commercial integration models. This new solution will foster the deployment of these technologies in the entire region of Galicia.

  17. Implementation of a Big Data Accessing and Processing Platform for Medical Records in Cloud.

    PubMed

    Yang, Chao-Tung; Liu, Jung-Chun; Chen, Shuo-Tsung; Lu, Hsin-Wen

    2017-08-18

    Big Data analysis has become a key factor of being innovative and competitive. Along with worldwide population growth and the aging trend in developed countries, national medical care usage has been increasing. Because individual medical data are usually scattered across different institutions and stored in varied formats, integrating these ever-growing data is challenging. To give such data platforms scalable load capacity, they must be built on a sound platform architecture. Several issues must be considered in order to use cloud computing to quickly integrate big medical data into a database for easy analysis, searching, and filtering, so that valuable information can be obtained. This work builds a cloud storage system with HBase of Hadoop for storing and analyzing big data of medical records and improves the performance of importing data into the database. The data of medical records are stored in the HBase database platform for big data analysis. The system performs distributed processing of medical records through Hadoop MapReduce programming, and provides functions including keyword search, data filtering, and basic statistics on the HBase database. The system uses the Put with single-threaded method and the CompleteBulkload mechanism to import medical data. From the experimental results, we find that when the file size is less than 300 MB, the Put with single-threaded method performs better, and when the file size is larger than 300 MB, the CompleteBulkload mechanism improves the performance of data import into the database. The system provides a web interface that allows users to search data, filter out meaningful information through the web, and analyze and convert data into suitable forms that will be helpful for medical staff and institutions.
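
    A minimal sketch of the file-size policy described above, assuming a hypothetical HBase table named medical_records reached through the Python happybase client; the large-file branch only indicates where HBase's CompleteBulkload tooling would take over, it does not reproduce that pipeline.

    ```python
    import os
    import happybase

    SIZE_THRESHOLD = 300 * 1024 * 1024  # 300 MB, the crossover found experimentally

    def import_medical_file(path, host="hbase-master", table_name="medical_records"):
        """Import one CSV-like file of medical records, choosing the import path
        by file size: row-by-row Put below the threshold, bulk load above it."""
        if os.path.getsize(path) < SIZE_THRESHOLD:
            connection = happybase.Connection(host)
            table = connection.table(table_name)
            with open(path, encoding="utf-8") as fh:
                for line in fh:
                    record_id, payload = line.rstrip("\n").split(",", 1)
                    # Single-threaded Put: one row per record in column family "cf".
                    table.put(record_id.encode(), {b"cf:record": payload.encode()})
            connection.close()
        else:
            # Large files go through HBase's CompleteBulkload mechanism instead:
            # HFiles are generated by a MapReduce job and handed to the region
            # servers; that step runs on the cluster and is outside this sketch.
            raise NotImplementedError("delegate to the CompleteBulkload pipeline")
    ```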

  18. seismo-live: Training in Computational Seismology using Jupyter Notebooks

    NASA Astrophysics Data System (ADS)

    Igel, H.; Krischer, L.; van Driel, M.; Tape, C.

    2016-12-01

    Practical training in computational methodologies is still underrepresented in Earth science curricula despite the increasing use of sometimes highly sophisticated simulation technologies in research projects. At the same time, well-engineered community codes make it easy to produce simulation-based results, with the danger that the inherent traps of numerical solutions are not well understood. It is our belief that training with highly simplified numerical solutions (here, to the equations describing elastic wave propagation) built from carefully chosen elementary ingredients of simulation technology (e.g., finite differencing, function interpolation, spectral derivatives, numerical integration) could substantially improve this situation. For this purpose we have initiated a community platform (www.seismo-live.org) where Python-based Jupyter notebooks can be accessed and run without any downloads or local software installations. The increasingly popular Jupyter notebooks allow combining markup language, graphics and equations with interactive, executable Python code. We demonstrate the potential with training notebooks for the finite-difference method, pseudospectral methods, finite/spectral element methods, the finite-volume method and the discontinuous Galerkin method. The platform already includes general Python training, an introduction to the ObsPy library for seismology, as well as seismic data processing and noise analysis. Submission of Jupyter notebooks for general seismology is encouraged. The platform can be used for complementary teaching in Earth science courses on compute-intensive research areas.
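
    In the spirit of the training notebooks described above, a minimal finite-difference solution of the 1-D acoustic wave equation fits in a few lines of Python; the grid size, velocity, time step and source below are illustrative choices, not material taken from the platform.

    ```python
    import numpy as np

    # 1-D acoustic wave equation p_tt = c^2 * p_xx, solved with a
    # second-order centred finite-difference scheme in space and time.
    nx, nt = 1000, 1500          # grid points, time steps
    dx, c = 1.0, 2500.0          # grid spacing (m), velocity (m/s)
    dt = 0.8 * dx / c            # time step chosen to satisfy the CFL criterion
    isrc = nx // 2               # source location (grid index)

    t = np.arange(nt) * dt
    f0 = 25.0                    # dominant source frequency (Hz)
    src = -8.0 * (t - 1.0 / f0) * f0 * np.exp(-(4.0 * f0) ** 2 * (t - 1.0 / f0) ** 2)

    p_old = np.zeros(nx)         # pressure at the previous time step
    p = np.zeros(nx)             # pressure at the current time step

    for it in range(nt):
        d2p = np.zeros(nx)
        d2p[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx ** 2   # second space derivative
        p_new = 2.0 * p - p_old + (c * dt) ** 2 * d2p            # time extrapolation
        p_new[isrc] += src[it] * dt ** 2                         # source injection
        p_old, p = p, p_new

    print("peak |p| after %d steps: %.3e" % (nt, np.abs(p).max()))
    ```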

  19. Computational, Integrative, and Comparative Methods for the Elucidation of Genetic Coexpression Networks

    DOE PAGES

    Baldwin, Nicole E.; Chesler, Elissa J.; Kirov, Stefan; ...

    2005-01-01

    Gene expression microarray data can be used for the assembly of genetic coexpression network graphs. Using mRNA samples obtained from recombinant inbred Mus musculus strains, it is possible to integrate allelic variation with molecular and higher-order phenotypes. The depth of quantitative genetic analysis of microarray data can be vastly enhanced by utilizing this mouse resource in combination with powerful computational algorithms, platforms, and data repositories. The resulting network graphs transect many levels of biological scale. This approach is illustrated with the extraction of cliques of putatively co-regulated genes and their annotation using gene ontology analysis and cis-regulatory element discovery. The causal basis for co-regulation is detected through the use of quantitative trait locus mapping.
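
    A hedged sketch of the clique-extraction step: starting from a gene-by-gene correlation matrix (random data here, standing in for microarray-derived coexpression values), genes are connected whenever their correlation exceeds a threshold and maximal cliques are read off the resulting graph with NetworkX. The threshold and data are illustrative, not values from the study.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)

    # Stand-in for a coexpression matrix: correlations of 50 gene expression
    # profiles measured across 30 samples (e.g. recombinant inbred strains).
    expr = rng.normal(size=(50, 30))
    corr = np.corrcoef(expr)

    # Connect gene pairs whose absolute correlation exceeds a threshold.
    threshold = 0.5
    adjacency = (np.abs(corr) >= threshold) & ~np.eye(corr.shape[0], dtype=bool)
    graph = nx.from_numpy_array(adjacency.astype(int))

    # Maximal cliques are candidate sets of putatively co-regulated genes,
    # which would then go to gene-ontology and cis-regulatory element analysis.
    cliques = [c for c in nx.find_cliques(graph) if len(c) >= 3]
    print(f"{len(cliques)} cliques of size >= 3 found")
    ```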

  20. Integrated computer-aided design using minicomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), a highly interactive software system, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum of 4800 bits/sec transfer rate to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting and finite element analysis capability, CAD/CAM provides options to produce an automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.

  1. Singularity: Scientific containers for mobility of compute.

    PubMed

    Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W

    2017-01-01

    Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.

  2. Singularity: Scientific containers for mobility of compute

    PubMed Central

    Kurtzer, Gregory M.; Bauer, Michael W.

    2017-01-01

    Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science. PMID:28494014

  3. MPHASYS: a mouse phenotype analysis system

    PubMed Central

    Calder, R Brent; Beems, Rudolf B; van Steeg, Harry; Mian, I Saira; Lohman, Paul HM; Vijg, Jan

    2007-01-01

    Background Systematic, high-throughput studies of mouse phenotypes have been hampered by the inability to analyze individual animal data from a multitude of sources in an integrated manner. Studies generally make comparisons at the level of genotype or treatment thereby excluding associations that may be subtle or involve compound phenotypes. Additionally, the lack of integrated, standardized ontologies and methodologies for data exchange has inhibited scientific collaboration and discovery. Results Here we introduce a Mouse Phenotype Analysis System (MPHASYS), a platform for integrating data generated by studies of mouse models of human biology and disease such as aging and cancer. This computational platform is designed to provide a standardized methodology for working with animal data; a framework for data entry, analysis and sharing; and ontologies and methodologies for ensuring accurate data capture. We describe the tools that currently comprise MPHASYS, primarily ones related to mouse pathology, and outline its use in a study of individual animal-specific patterns of multiple pathology in mice harboring a specific germline mutation in the DNA repair and transcription-specific gene Xpd. Conclusion MPHASYS is a system for analyzing multiple data types from individual animals. It provides a framework for developing data analysis applications, and tools for collecting and distributing high-quality data. The software is platform independent and freely available under an open-source license [1]. PMID:17553167

  4. Multiple advanced logic gates made of DNA-Ag nanocluster and the application for intelligent detection of pathogenic bacterial genes.

    PubMed

    Lin, Xiaodong; Liu, Yaqing; Deng, Jiankang; Lyu, Yanlong; Qian, Pengcheng; Li, Yunfei; Wang, Shuo

    2018-02-21

    The integration of multiple DNA logic gates on a universal platform to implement advanced logic functions is a critical challenge for DNA computing. Herein, a straightforward and powerful strategy, in which a guanine-rich DNA sequence lights up a silver nanocluster and a fluorophore, was developed to construct a library of logic gates on a simple DNA-templated silver nanocluster (DNA-AgNCs) platform. This library included the basic logic gates YES, AND, OR, INHIBIT, and XOR, which were further integrated into complex logic circuits to implement diverse advanced arithmetic/non-arithmetic functions including half-adder, half-subtractor, multiplexer, and demultiplexer. Under UV irradiation, all the logic functions could be instantly visualized, confirming excellent repeatability. The logic operations were entirely based on DNA hybridization under enzyme-free and label-free conditions, avoiding waste accumulation and reducing cost. Interestingly, a DNA-AgNCs-based multiplexer was, for the first time, used as an intelligent biosensor to identify pathogenic genes, the E. coli and S. aureus genes, with high sensitivity. The investigation provides a prototype for the wireless integration of multiple devices on even the simplest single-strand DNA platform to perform diverse complex functions in a straightforward and cost-effective way.
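
    The arithmetic functions implemented on the DNA-AgNCs platform follow the standard Boolean definitions; the short sketch below (plain Python, unrelated to the wet-lab realisation) enumerates the half-adder and half-subtractor truth tables that the fluorescence outputs encode.

    ```python
    from itertools import product

    def half_adder(a, b):
        """Sum = A XOR B, Carry = A AND B."""
        return a ^ b, a & b

    def half_subtractor(a, b):
        """Difference = A XOR B, Borrow = (NOT A) AND B."""
        return a ^ b, (1 - a) & b

    print("A B | Sum Carry | Diff Borrow")
    for a, b in product((0, 1), repeat=2):
        s, c = half_adder(a, b)
        d, w = half_subtractor(a, b)
        print(f"{a} {b} |  {s}    {c}    |   {d}     {w}")
    ```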

  5. CGI: Java Software for Mapping and Visualizing Data from Array-based Comparative Genomic Hybridization and Expression Profiling

    PubMed Central

    Gu, Joyce Xiuweu-Xu; Wei, Michael Yang; Rao, Pulivarthi H.; Lau, Ching C.; Behl, Sanjiv; Man, Tsz-Kwong

    2007-01-01

    With the increasing application of various genomic technologies in biomedical research, there is a need to integrate these data to correlate candidate genes/regions that are identified by different genomic platforms. Although there are tools that can analyze data from individual platforms, essential software for integration of genomic data is still lacking. Here, we present a novel Java-based program called CGI (Cytogenetics-Genomics Integrator) that matches the BAC clones from array-based comparative genomic hybridization (aCGH) to genes from RNA expression profiling datasets. The matching is computed via a fast, backend MySQL database containing UCSC Genome Browser annotations. This program also provides an easy-to-use graphical user interface for visualizing and summarizing the correlation of DNA copy number changes and RNA expression patterns from a set of experiments. In addition, CGI uses a Java applet to display the copy number values of a specific BAC clone in aCGH experiments side by side with the expression levels of genes that are mapped back to that BAC clone from the microarray experiments. The CGI program is built on top of extensible, reusable graphic components specifically designed for biologists. It is cross-platform compatible and the source code is freely available under the General Public License. PMID:19936083

  6. CGI: Java software for mapping and visualizing data from array-based comparative genomic hybridization and expression profiling.

    PubMed

    Gu, Joyce Xiuweu-Xu; Wei, Michael Yang; Rao, Pulivarthi H; Lau, Ching C; Behl, Sanjiv; Man, Tsz-Kwong

    2007-10-06

    With the increasing application of various genomic technologies in biomedical research, there is a need to integrate these data to correlate candidate genes/regions that are identified by different genomic platforms. Although there are tools that can analyze data from individual platforms, essential software for integration of genomic data is still lacking. Here, we present a novel Java-based program called CGI (Cytogenetics-Genomics Integrator) that matches the BAC clones from array-based comparative genomic hybridization (aCGH) to genes from RNA expression profiling datasets. The matching is computed via a fast, backend MySQL database containing UCSC Genome Browser annotations. This program also provides an easy-to-use graphical user interface for visualizing and summarizing the correlation of DNA copy number changes and RNA expression patterns from a set of experiments. In addition, CGI uses a Java applet to display the copy number values of a specific BAC clone in aCGH experiments side by side with the expression levels of genes that are mapped back to that BAC clone from the microarray experiments. The CGI program is built on top of extensible, reusable graphic components specifically designed for biologists. It is cross-platform compatible and the source code is freely available under the General Public License.

  7. Single-chip photonic transceiver based on bulk-silicon, as a chip-level photonic I/O platform for optical interconnects.

    PubMed

    Kim, Gyungock; Park, Hyundai; Joo, Jiho; Jang, Ki-Seok; Kwack, Myung-Joon; Kim, Sanghoon; Kim, In Gyoo; Oh, Jin Hyuk; Kim, Sun Ae; Park, Jaegyu; Kim, Sanggi

    2015-06-10

    When silicon photonic integrated circuits (PICs), designed for transmitting and receiving optical data, are successfully monolithically integrated into major silicon electronic chips as chip-level optical I/Os (inputs/outputs), they will bring innovative changes to data computing and communications. Here, we propose a new photonic integration scheme: a single-chip optical transceiver based on a monolithically integrated vertical photonic I/O device set, including the light source, on bulk silicon. This scheme can solve the major issues which impede practical implementation of silicon-based chip-level optical interconnects. We demonstrated a prototype of a single-chip photonic transceiver with monolithically integrated vertical-illumination Ge-on-Si photodetectors and VCSELs-on-Si on the same bulk-silicon substrate, operating up to 50 Gb/s and 20 Gb/s, respectively. The prototype realized 20 Gb/s low-power chip-level optical interconnects at λ ~ 850 nm between fabricated chips. This approach can have a significant impact on practical electronic-photonic integration in high-performance computers (HPC), CPU-memory interfaces, hybrid memory cubes, and LAN, SAN, data center and network applications.

  8. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
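
    Performance-per-watt comparisons of this kind come down to a simple ratio of sustained throughput to power draw; the sketch below uses made-up figures (not the paper's measurements) purely to show how the metric is formed.

    ```python
    # Hypothetical benchmark results: events processed per second and average
    # power draw. The numbers are placeholders, not measurements of the
    # X-Gene or Xeon Phi systems discussed above.
    systems = {
        "ARMv8 SoC server":   {"events_per_s": 120.0, "watts": 80.0},
        "MIC co-processor":   {"events_per_s": 450.0, "watts": 300.0},
        "x86 reference node": {"events_per_s": 400.0, "watts": 350.0},
    }

    for name, r in systems.items():
        print(f"{name:20s} {r['events_per_s'] / r['watts']:6.2f} events/s/W")
    ```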

  9. Protecting Digital Evidence Integrity by Using Smart Cards

    NASA Astrophysics Data System (ADS)

    Saleem, Shahzad; Popov, Oliver

    RFC 3227 provides general guidelines for digital evidence collection and archiving, while the International Organization on Computer Evidence offers guidelines for best practice in digital forensic examination. In the light of these guidelines we analyze the integrity protection mechanism provided by EnCase and FTK, which is mainly based on Message Digest Codes (MDCs). MDCs used for integrity protection are not tamper-proof and can therefore be forged. With the proposed model for protecting digital evidence integrity by using smart cards (PIDESC), which establishes a secure platform for digitally signing the MDC (and, in general, for a whole range of cryptographic services) in combination with Public Key Cryptography (PKC), we show that this weakness can be overcome.
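
    The central idea, digitally signing the message digest so that it cannot be silently replaced, can be sketched in software with Python's cryptography package. In PIDESC the private key would reside on the smart card rather than in host memory, so the in-memory RSA key below is only a functional stand-in, and the evidence file name is hypothetical.

    ```python
    import hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Message digest code (MDC) of an acquired evidence image.
    evidence = open("disk.img", "rb").read()          # hypothetical evidence file
    mdc = hashlib.sha256(evidence).digest()

    # In PIDESC the signature is produced inside the smart card; an in-memory
    # RSA key stands in for the card so the flow can be shown end to end.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(mdc, pss, hashes.SHA256())

    # Anyone holding the public key can later check that the MDC is unaltered;
    # verify() raises InvalidSignature if either the MDC or signature changed.
    private_key.public_key().verify(signature, mdc, pss, hashes.SHA256())
    print("MDC signature verified")
    ```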

  10. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays.

    PubMed

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A; Wetzstein, Gordon

    2017-02-28

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
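
    A toy calculation of the adaptive-focus idea: the display should present image vergence matching the distance the user is looking at, shifted by that user's spherical refractive error. This thin-lens simplification is for illustration only and is not the prototypes' actual calibration or control loop; the example user and distances are hypothetical.

    ```python
    def virtual_image_vergence_dpt(gaze_distance_m, refractive_error_dpt=0.0):
        """Vergence (diopters, negative = divergent light) the display optics
        should present so content appears at `gaze_distance_m`, shifted by the
        user's spherical refractive error (negative for myopia)."""
        scene_vergence = -1.0 / gaze_distance_m
        return scene_vergence + refractive_error_dpt

    # Hypothetical user with -2 D of myopia looking at objects at 0.5 m and 4 m.
    for d in (0.5, 4.0):
        v = virtual_image_vergence_dpt(d, refractive_error_dpt=-2.0)
        print(f"gaze at {d:>3} m -> present image at {abs(1.0 / v):.2f} m optical distance")
    ```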

  11. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays

    NASA Astrophysics Data System (ADS)

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A.; Wetzstein, Gordon

    2017-02-01

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.

  12. Seismic waveform modeling over cloud

    NASA Astrophysics Data System (ADS)

    Luo, Cong; Friederich, Wolfgang

    2016-04-01

    With fast-growing computational technologies, numerical simulation of seismic wave propagation has achieved huge successes. Obtaining synthetic waveforms through numerical simulation receives an increasing amount of attention from seismologists. However, computational seismology is a data-intensive research field, and the numerical packages usually come with a steep learning curve. Users are expected to master a considerable amount of computer knowledge and data processing skills. Training users to use the numerical packages and to correctly access and utilize the computational resources is a troublesome task. In addition, access to HPC is also a common difficulty for many users. To solve these problems, a cloud-based solution dedicated to shallow seismic waveform modeling has been developed with state-of-the-art web technologies. It is a web platform integrating both software and hardware in a multilayer architecture: a well-designed SQL database serves as the data layer, while HPC and a dedicated pipeline for it form the business layer. Through this platform, users no longer need to compile and manipulate various packages on a local machine within a local network to perform a simulation. By providing professional access to the computational code through its interfaces and delivering our computational resources to users over the cloud, the platform lets users customize simulations at expert level and submit and run jobs through it.

  13. Video Editing System

    NASA Technical Reports Server (NTRS)

    Schlecht, Leslie E.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This is a proposal for a general-use system, based on the SGI IRIS workstation platform, for recording computer animation to videotape. In addition, this system would provide features for simple editing and enhancement. Described here are a list of requirements for the system and a proposed configuration including the SGI VideoLab Integrator, VideoMedia VLAN animation controller and the Pioneer rewritable laserdisc recorder.

  14. Integrated Environment for Development and Assurance

    DTIC Science & Technology

    2015-01-26

    Presentation slides (Carnegie Mellon University, 26 January 2015); only fragments of the text are recoverable. The briefing notes that we rely on software for safe aircraft operation, that embedded software systems introduce a new class of system-level problems spanning the developer, compute platform, runtime architecture, application software and embedded software system engineering, with data stream characteristics such as latency jitter among the concerns, and it asks why system-level failures still occur despite fault-tolerance techniques being deployed, pointing to embedded software systems as a major source of such failures.

  15. Towards an Open, Distributed Software Architecture for UxS Operations

    NASA Technical Reports Server (NTRS)

    Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Allen, B. Danette

    2015-01-01

    To address the growing need to evaluate, test, and certify an ever expanding ecosystem of UxS platforms in preparation of cultural integration, NASA Langley Research Center's Autonomy Incubator (AI) has taken on the challenge of developing a software framework in which UxS platforms developed by third parties can be integrated into a single system which provides evaluation and testing, mission planning and operation, and out-of-the-box autonomy and data fusion capabilities. This software framework, named AEON (Autonomous Entity Operations Network), has two main goals. The first goal is the development of a cross-platform, extensible, onboard software system that provides autonomy at the mission execution and course-planning level, a highly configurable data fusion framework sensitive to the platform's available sensor hardware, and plug-and-play compatibility with a wide array of computer systems, sensors, software, and controls hardware. The second goal is the development of a ground control system that acts as a test-bed for integration of the proposed heterogeneous fleet, and allows for complex mission planning, tracking, and debugging capabilities. The ground control system should also be highly extensible and allow plug-and-play interoperability with third party software systems. In order to achieve these goals, this paper proposes an open, distributed software architecture which utilizes at its core the Data Distribution Service (DDS) standards, established by the Object Management Group (OMG), for inter-process communication and data flow. The design decisions proposed herein leverage the advantages of existing robotics software architectures and the DDS standards to develop software that is scalable, high-performance, fault tolerant, modular, and readily interoperable with external platforms and software.

  16. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  17. A reconfigurable cryogenic platform for the classical control of quantum processors

    NASA Astrophysics Data System (ADS)

    Homulle, Harald; Visser, Stefan; Patra, Bishnu; Ferrari, Giorgio; Prati, Enrico; Sebastiano, Fabio; Charbon, Edoardo

    2017-04-01

    The implementation of a classical control infrastructure for large-scale quantum computers is challenging due to the need for integration and processing time, which is constrained by coherence time. We propose a cryogenic reconfigurable platform as the heart of the control infrastructure implementing the digital error-correction control loop. The platform is implemented on a field-programmable gate array (FPGA) that supports the functionality required by several qubit technologies and that can operate close to the physical qubits over a temperature range from 4 K to 300 K. This work focuses on the extensive characterization of the electronic platform over this temperature range. All major FPGA building blocks (such as look-up tables (LUTs), carry chains (CARRY4), mixed-mode clock manager (MMCM), phase-locked loop (PLL), block random access memory, and IDELAY2 (programmable delay element)) operate correctly and the logic speed is very stable. The logic speed of LUTs and CARRY4 changes by less than 5%, whereas the jitter of MMCM and PLL clock managers is reduced by 20%. The stability is finally demonstrated by operating an integrated 1.2 GSa/s analog-to-digital converter (ADC) with a relatively stable performance over temperature. The ADC's effective number of bits drops from 6 to 4.5 bits when operating at 15 K.
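
    The effective-number-of-bits figures quoted above map directly onto the converter's signal-to-noise-and-distortion ratio through the standard ENOB relation; the generic sketch below (not taken from the paper) converts between the two, showing that the reported drop from 6 to 4.5 bits corresponds to roughly a 9 dB loss in SINAD.

    ```python
    def enob_from_sinad(sinad_db):
        """Effective number of bits: ENOB = (SINAD - 1.76 dB) / 6.02."""
        return (sinad_db - 1.76) / 6.02

    def sinad_from_enob(enob_bits):
        """Inverse relation: SINAD (dB) implied by a given ENOB."""
        return enob_bits * 6.02 + 1.76

    for bits in (6.0, 4.5):
        print(f"ENOB {bits:>3} bits -> SINAD {sinad_from_enob(bits):5.2f} dB")
    ```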

  18. A reconfigurable cryogenic platform for the classical control of quantum processors.

    PubMed

    Homulle, Harald; Visser, Stefan; Patra, Bishnu; Ferrari, Giorgio; Prati, Enrico; Sebastiano, Fabio; Charbon, Edoardo

    2017-04-01

    The implementation of a classical control infrastructure for large-scale quantum computers is challenging due to the need for integration and processing time, which is constrained by coherence time. We propose a cryogenic reconfigurable platform as the heart of the control infrastructure implementing the digital error-correction control loop. The platform is implemented on a field-programmable gate array (FPGA) that supports the functionality required by several qubit technologies and that can operate close to the physical qubits over a temperature range from 4 K to 300 K. This work focuses on the extensive characterization of the electronic platform over this temperature range. All major FPGA building blocks (such as look-up tables (LUTs), carry chains (CARRY4), mixed-mode clock manager (MMCM), phase-locked loop (PLL), block random access memory, and IDELAY2 (programmable delay element)) operate correctly and the logic speed is very stable. The logic speed of LUTs and CARRY4 changes by less than 5%, whereas the jitter of MMCM and PLL clock managers is reduced by 20%. The stability is finally demonstrated by operating an integrated 1.2 GSa/s analog-to-digital converter (ADC) with a relatively stable performance over temperature. The ADC's effective number of bits drops from 6 to 4.5 bits when operating at 15 K.

  19. Rodent motor and neuropsychological behaviour measured in home cages using the integrated modular platform SmartCage™

    PubMed Central

    Khroyan, Taline V; Zhang, Jingxi; Yang, Liya; Zou, Bende; Xie, James; Pascual, Conrado; Malik, Adam; Xie, Julian; Zaveri, Nurulain T; Vazquez, Jacqueline; Polgar, Willma; Toll, Lawrence; Fang, Jidong; Xie, Xinmin

    2017-01-01

    SUMMARY To facilitate investigation of diverse rodent behaviours in rodents’ home cages, we have developed an integrated modular platform, the SmartCage™ system (AfaSci, Inc. Burlingame, CA, USA), which enables automated neurobehavioural phenotypic analysis and in vivo drug screening in a relatively higher-throughput and more objective manner. The individual platform consists of an infrared array, a vibration floor sensor and a variety of modular devices. One computer can simultaneously operate up to 16 platforms via USB cables. The SmartCage™ detects drug-induced increases and decreases in activity levels, as well as changes in movement patterns. Wake and sleep states of mice can be detected using the vibration floor sensor. The arousal state classification achieved up to 98% accuracy compared with results obtained by electroencephalography and electromyography. More complex behaviours, including motor coordination, anxiety-related behaviours and social approach behaviour, can be assessed using appropriate modular devices, and the results obtained are comparable with results obtained using conventional methods. In conclusion, the SmartCage™ system provides an automated and accurate tool to quantify various rodent behaviours in a ‘stress-free’ environment. This system, combined with the validated testing protocols, offers a powerful tool kit for transgenic phenotyping and in vivo drug screening. PMID:22540540

  20. Combining a Multi-Agent System and Communication Middleware for Smart Home Control: A Universal Control Platform Architecture

    PubMed Central

    Zheng, Song; Zhang, Qi; Zheng, Rong; Huang, Bi-Qin; Song, Yi-Lin; Chen, Xin-Chu

    2017-01-01

    In recent years, the smart home field has gained wide attention for its broad application prospects. However, families using smart home systems must usually adopt various heterogeneous smart devices, including sensors, which makes it more difficult to manage and control their home system. How to design a unified control platform to deal with the collaborative control problem of heterogeneous smart devices is one of the greatest challenges in the current smart home field. The main contribution of this paper is to propose a universal smart home control platform architecture (IAPhome) based on a multi-agent system and communication middleware, which shows significant adaptability and advantages in many aspects, including heterogeneous device connectivity, collaborative control, human-computer interaction and user self-management. The communication middleware is an important foundation for designing and implementing this architecture, which makes it possible to integrate heterogeneous smart devices in a flexible way. A concrete method of applying the multi-agent software technique to solve the integrated control problem of the smart home system is also presented. The proposed platform architecture has been tested in a real smart home environment, and the results indicate the effectiveness of our approach in solving the collaborative control problem of different smart devices. PMID:28926957

  1. Combining a Multi-Agent System and Communication Middleware for Smart Home Control: A Universal Control Platform Architecture.

    PubMed

    Zheng, Song; Zhang, Qi; Zheng, Rong; Huang, Bi-Qin; Song, Yi-Lin; Chen, Xin-Chu

    2017-09-16

    In recent years, the smart home field has gained wide attention for its broad application prospects. However, families using smart home systems must usually adopt various heterogeneous smart devices, including sensors, which makes it more difficult to manage and control their home system. How to design a unified control platform to deal with the collaborative control problem of heterogeneous smart devices is one of the greatest challenges in the current smart home field. The main contribution of this paper is to propose a universal smart home control platform architecture (IAPhome) based on a multi-agent system and communication middleware, which shows significant adaptability and advantages in many aspects, including heterogeneous device connectivity, collaborative control, human-computer interaction and user self-management. The communication middleware is an important foundation for designing and implementing this architecture, which makes it possible to integrate heterogeneous smart devices in a flexible way. A concrete method of applying the multi-agent software technique to solve the integrated control problem of the smart home system is also presented. The proposed platform architecture has been tested in a real smart home environment, and the results indicate the effectiveness of our approach in solving the collaborative control problem of different smart devices.

  2. Support for Taverna workflows in the VPH-Share cloud platform.

    PubMed

    Kasztelnik, Marek; Coto, Ernesto; Bubak, Marian; Malawski, Maciej; Nowakowski, Piotr; Arenas, Juan; Saglimbeni, Alfredo; Testi, Debora; Frangi, Alejandro F

    2017-07-01

    To address the increasing need for collaborative endeavours within the Virtual Physiological Human (VPH) community, the VPH-Share collaborative cloud platform allows researchers to expose and share sequences of complex biomedical processing tasks in the form of computational workflows. The Taverna Workflow System is a very popular tool for orchestrating complex biomedical and bioinformatics processing tasks in the VPH community. This paper describes the VPH-Share components that support the building and execution of Taverna workflows, and explains how they interact with other VPH-Share components to improve the capabilities of the VPH-Share platform. Taverna workflow support is delivered by the Atmosphere cloud management platform and the VPH-Share Taverna plugin. These components are explained in detail, along with the two main procedures that were developed to enable this seamless integration: workflow composition and execution. The main outcomes are: 1) seamless integration of VPH-Share with other components and systems; 2) an extended range of different tools for workflows; 3) successful integration of scientific workflows from other VPH projects; 4) execution speed improvements for medical applications. The presented workflow integration provides VPH-Share users with a wide range of different possibilities to compose and execute workflows, such as desktop or online composition, online batch execution, multithreading, remote execution, etc. The specific advantages of each supported tool are presented, as are the roles of Atmosphere and the VPH-Share plugin within the VPH-Share project. The combination of the VPH-Share plugin and Atmosphere endows the VPH-Share infrastructure with far more flexible, powerful and usable capabilities for the VPH-Share community. As both components can continue to evolve and improve independently, we acknowledge that further improvements are still to be developed and described. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  4. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  5. A community proposal to integrate proteomics activities in ELIXIR.

    PubMed

    Vizcaíno, Juan Antonio; Walzer, Mathias; Jiménez, Rafael C; Bittremieux, Wout; Bouyssié, David; Carapito, Christine; Corrales, Fernando; Ferro, Myriam; Heck, Albert J R; Horvatovich, Peter; Hubalek, Martin; Lane, Lydie; Laukens, Kris; Levander, Fredrik; Lisacek, Frederique; Novak, Petr; Palmblad, Magnus; Piovesan, Damiano; Pühler, Alfred; Schwämmle, Veit; Valkenborg, Dirk; van Rijswijk, Merlijn; Vondrasek, Jiri; Eisenacher, Martin; Martens, Lennart; Kohlbacher, Oliver

    2017-01-01

    Computational approaches have been major drivers behind the progress of proteomics in recent years. The aim of this white paper is to provide a framework for integrating computational proteomics into ELIXIR in the near future, and thus to broaden the portfolio of omics technologies supported by this European distributed infrastructure. This white paper is the direct result of a strategy meeting on 'The Future of Proteomics in ELIXIR' that took place in March 2017 in Tübingen (Germany), and involved representatives of eleven ELIXIR nodes. These discussions led to a list of priority areas in computational proteomics that would complement existing activities and close gaps in the portfolio of tools and services offered by ELIXIR so far. We provide some suggestions on how these activities could be integrated into ELIXIR's existing platforms, and how it could lead to a new ELIXIR use case in proteomics. We also highlight connections to the related field of metabolomics, where similar activities are ongoing. This white paper could thus serve as a starting point for the integration of computational proteomics into ELIXIR. Over the next few months we will be working closely with all stakeholders involved, and in particular with other representatives of the proteomics community, to further refine this paper.

  6. A community proposal to integrate proteomics activities in ELIXIR

    PubMed Central

    Vizcaíno, Juan Antonio; Walzer, Mathias; Jiménez, Rafael C.; Bittremieux, Wout; Bouyssié, David; Carapito, Christine; Corrales, Fernando; Ferro, Myriam; Heck, Albert J.R.; Horvatovich, Peter; Hubalek, Martin; Lane, Lydie; Laukens, Kris; Levander, Fredrik; Lisacek, Frederique; Novak, Petr; Palmblad, Magnus; Piovesan, Damiano; Pühler, Alfred; Schwämmle, Veit; Valkenborg, Dirk; van Rijswijk, Merlijn; Vondrasek, Jiri; Eisenacher, Martin; Martens, Lennart; Kohlbacher, Oliver

    2017-01-01

    Computational approaches have been major drivers behind the progress of proteomics in recent years. The aim of this white paper is to provide a framework for integrating computational proteomics into ELIXIR in the near future, and thus to broaden the portfolio of omics technologies supported by this European distributed infrastructure. This white paper is the direct result of a strategy meeting on ‘The Future of Proteomics in ELIXIR’ that took place in March 2017 in Tübingen (Germany), and involved representatives of eleven ELIXIR nodes. These discussions led to a list of priority areas in computational proteomics that would complement existing activities and close gaps in the portfolio of tools and services offered by ELIXIR so far. We provide some suggestions on how these activities could be integrated into ELIXIR’s existing platforms, and how it could lead to a new ELIXIR use case in proteomics. We also highlight connections to the related field of metabolomics, where similar activities are ongoing. This white paper could thus serve as a starting point for the integration of computational proteomics into ELIXIR. Over the next few months we will be working closely with all stakeholders involved, and in particular with other representatives of the proteomics community, to further refine this paper. PMID:28713550

  7. Nephele: a cloud platform for simplified, standardized and reproducible microbiome data analysis.

    PubMed

    Weber, Nick; Liou, David; Dommer, Jennifer; MacMenamin, Philip; Quiñones, Mariam; Misner, Ian; Oler, Andrew J; Wan, Joe; Kim, Lewis; Coakley McCarthy, Meghan; Ezeji, Samuel; Noble, Karlynn; Hurt, Darrell E

    2018-04-15

    Widespread interest in the study of the microbiome has resulted in data proliferation and the development of powerful computational tools. However, many scientific researchers lack the time, training, or infrastructure to work with large datasets or to install and use command line tools. The National Institute of Allergy and Infectious Diseases (NIAID) has created Nephele, a cloud-based microbiome data analysis platform with standardized pipelines and a simple web interface for transforming raw data into biological insights. Nephele integrates common microbiome analysis tools as well as valuable reference datasets like the healthy human subjects cohort of the Human Microbiome Project (HMP). Nephele is built on the Amazon Web Services cloud, which provides centralized and automated storage and compute capacity, thereby reducing the burden on researchers and their institutions. Availability and implementation: https://nephele.niaid.nih.gov and https://github.com/niaid/Nephele. Contact: darrell.hurt@nih.gov.

  8. Nephele: a cloud platform for simplified, standardized and reproducible microbiome data analysis

    PubMed Central

    Weber, Nick; Liou, David; Dommer, Jennifer; MacMenamin, Philip; Quiñones, Mariam; Misner, Ian; Oler, Andrew J; Wan, Joe; Kim, Lewis; Coakley McCarthy, Meghan; Ezeji, Samuel; Noble, Karlynn; Hurt, Darrell E

    2018-01-01

    Abstract Motivation Widespread interest in the study of the microbiome has resulted in data proliferation and the development of powerful computational tools. However, many scientific researchers lack the time, training, or infrastructure to work with large datasets or to install and use command line tools. Results The National Institute of Allergy and Infectious Diseases (NIAID) has created Nephele, a cloud-based microbiome data analysis platform with standardized pipelines and a simple web interface for transforming raw data into biological insights. Nephele integrates common microbiome analysis tools as well as valuable reference datasets like the healthy human subjects cohort of the Human Microbiome Project (HMP). Nephele is built on the Amazon Web Services cloud, which provides centralized and automated storage and compute capacity, thereby reducing the burden on researchers and their institutions. Availability and implementation https://nephele.niaid.nih.gov and https://github.com/niaid/Nephele Contact darrell.hurt@nih.gov PMID:29028892

  9. The Ettention software package.

    PubMed

    Dahmen, Tim; Marsalek, Lukas; Marniok, Nico; Turoňová, Beata; Bogachev, Sviatoslav; Trampert, Patrick; Nickels, Stefan; Slusallek, Philipp

    2016-02-01

    We present a novel software package for the problem "reconstruction from projections" in electron microscopy. The Ettention framework consists of a set of modular building blocks for tomographic reconstruction algorithms. The well-known block-iterative reconstruction method based on the Kaczmarz algorithm is implemented using these building blocks, including adaptations specific to electron tomography. Ettention simultaneously features (1) a modular, object-oriented software design, (2) optimized access to high-performance computing (HPC) platforms such as graphics processing units (GPUs) or many-core architectures like Xeon Phi, and (3) accessibility to microscopy end-users via integration in the IMOD package and eTomo user interface. We also provide developers with a clean and well-structured application programming interface (API) that allows for extending the software easily and thus makes it an ideal platform for algorithmic research while hiding most of the technical details of high-performance computing. Copyright © 2015 Elsevier B.V. All rights reserved.
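
    The block-iterative reconstruction at the heart of Ettention builds on the Kaczmarz iteration, which has a compact reference form; the NumPy sketch below shows the classical row-by-row variant on a small dense system. Ettention itself applies the idea blockwise to huge sparse tomographic projection operators on GPUs and many-core devices, which this toy code does not attempt to reproduce.

    ```python
    import numpy as np

    def kaczmarz(A, b, n_sweeps=100, relaxation=1.0):
        """Classical Kaczmarz iteration for A @ x = b: each step projects the
        current estimate onto the hyperplane defined by one row of A."""
        m, n = A.shape
        x = np.zeros(n)
        row_norms = np.einsum("ij,ij->i", A, A)      # squared row norms
        for _ in range(n_sweeps):
            for i in range(m):
                if row_norms[i] == 0.0:
                    continue
                residual = b[i] - A[i] @ x
                x += relaxation * residual / row_norms[i] * A[i]
        return x

    # Small consistent test system.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(30, 10))
    x_true = rng.normal(size=10)
    x_est = kaczmarz(A, A @ x_true)
    print("max reconstruction error:", np.abs(x_est - x_true).max())
    ```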

  10. The Materials Commons: A Collaboration Platform and Information Repository for the Global Materials Community

    NASA Astrophysics Data System (ADS)

    Puchala, Brian; Tarcea, Glenn; Marquis, Emmanuelle. A.; Hedstrom, Margaret; Jagadish, H. V.; Allison, John E.

    2016-08-01

    Accelerating the pace of materials discovery and development requires new approaches and means of collaborating and sharing information. To address this need, we are developing the Materials Commons, a collaboration platform and information repository for use by the structural materials community. The Materials Commons has been designed to be a continuous, seamless part of the scientific workflow process. Researchers upload the results of experiments and computations as they are performed, automatically where possible, along with the provenance information describing the experimental and computational processes. The Materials Commons website provides an easy-to-use interface for uploading and downloading data and data provenance, as well as for searching and sharing data. This paper provides an overview of the Materials Commons. Concepts are also outlined for integrating the Materials Commons with the broader Materials Information Infrastructure that is evolving to support the Materials Genome Initiative.

  11. Efficient Sensor Integration on Platforms (NeXOS)

    NASA Astrophysics Data System (ADS)

    Memè, S.; Delory, E.; Del Rio, J.; Jirka, S.; Toma, D. M.; Martinez, E.; Frommhold, L.; Barrera, C.; Pearlman, J.

    2016-12-01

    In-situ ocean observing platforms provide power and information transmission capability to sensors. Ocean observing platforms can be mobile, such as ships, autonomous underwater vehicles, drifters and profilers, or fixed, such as buoys, moorings and cabled observatories. The process of integrating sensors on platforms can require substantial engineering time and resources. Constraints range from stringent mechanical requirements to proprietary communication and control firmware. In NeXOS, the implementation of a PUCK plug-and-play capability is being carried out with applications to multiple sensors and platforms. This is complemented with a sensor web enablement that addresses the flow of information from sensor to user. Open standards are being tested in order to assess their costs and benefits in existing and future observing systems. Part of the testing involved open-source coding and hardware prototyping of specific control devices, in particular for closed commercial platforms where firmware upgrading is not straightforward or possible without prior agreements or service fees. Some platform manufacturers, such as the European companies ALSEAMAR [1] and NKE Instruments [2], are currently upgrading their control and communication firmware as part of their activities in NeXOS. The sensor development companies Sensorlab [3], SMID [4] and TRIOS [5] upgraded their firmware with this plug-and-play functionality. Other industrial players in Europe and the US have been sent NeXOS sensor emulators to test the new protocol on their platforms. We are currently demonstrating that, with little effort, it is also possible to have such middleware implemented on very low-cost compact computers such as the open Raspberry Pi [6], and to have a full end-to-end interoperable communication path from sensor to user with sensor plug-and-play capability. The result is an increase in sensor integration cost-efficiency, and the demonstration will be used to highlight the benefit to users and ocean observatory operators. [1] http://www.alseamar-alcen.com [2] http://www.nke-instrumentation.com [3] http://sensorlab.es [4] http://www.smidtechnology.it/ [5] http://www.trios.de/en/products/ [6] Raspberry Pi is a trademark of the Raspberry Pi Foundation

  12. An integrated system for land resources supervision based on the IoT and cloud computing

    NASA Astrophysics Data System (ADS)

    Fang, Shifeng; Zhu, Yunqiang; Xu, Lida; Zhang, Jinqu; Zhou, Peiji; Luo, Kan; Yang, Jie

    2017-01-01

    Integrated information systems are important safeguards for the utilisation and development of land resources. Information technologies, including the Internet of Things (IoT) and cloud computing, are inevitable requirements for the quality and efficiency of land resources supervision tasks. In this study, an economical and highly efficient supervision system for land resources has been established based on IoT and cloud computing technologies; a novel online and offline integrated system with synchronised internal and field data that includes the entire process of 'discovering breaches, analysing problems, verifying fieldwork and investigating cases' was constructed. The system integrates key technologies, such as the automatic extraction of high-precision information based on remote sensing, semantic ontology-based technology to excavate and discriminate public sentiment on the Internet that is related to illegal incidents, high-performance parallel computing based on MapReduce, uniform storing and compressing (bitwise) technology, global positioning system data communication and data synchronisation mode, intelligent recognition and four-level ('device, transfer, system and data') safety control technology. The integrated system based on a 'One Map' platform has been officially implemented by the Department of Land and Resources of Guizhou Province, China, and was found to significantly increase the efficiency and level of land resources supervision. The system promoted the overall development of informatisation in fields related to land resource management.
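
    The MapReduce component mentioned above can be illustrated with a small, self-contained sketch of the map and reduce phases; the record layout and the per-district aggregation are hypothetical and show only the programming pattern, not the production system deployed in Guizhou Province.

        # Toy illustration of the MapReduce pattern: count suspected land-use
        # violations per district. Record fields are hypothetical.
        from collections import defaultdict

        records = [
            {"district": "A", "suspected_violation": True},
            {"district": "B", "suspected_violation": False},
            {"district": "A", "suspected_violation": True},
        ]

        def map_phase(record):
            # emit (key, value) pairs
            if record["suspected_violation"]:
                yield record["district"], 1

        def reduce_phase(pairs):
            totals = defaultdict(int)
            for key, value in pairs:
                totals[key] += value
            return dict(totals)

        pairs = (pair for record in records for pair in map_phase(record))
        print(reduce_phase(pairs))   # {'A': 2}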

  13. Web-GIS platform for monitoring and forecasting of regional climate and ecological changes

    NASA Astrophysics Data System (ADS)

    Gordov, E. P.; Krupchatnikov, V. N.; Lykosov, V. N.; Okladnikov, I.; Titov, A. G.; Shulgina, T. M.

    2012-12-01

    Growing volume of environmental data from sensors and model outputs makes development of based on modern information-telecommunication technologies software infrastructure for information support of integrated scientific researches in the field of Earth sciences urgent and important task (Gordov et al, 2012, van der Wel, 2005). It should be considered that original heterogeneity of datasets obtained from different sources and institutions not only hampers interchange of data and analysis results but also complicates their intercomparison leading to a decrease in reliability of analysis results. However, modern geophysical data processing techniques allow combining of different technological solutions for organizing such information resources. Nowadays it becomes a generally accepted opinion that information-computational infrastructure should rely on a potential of combined usage of web- and GIS-technologies for creating applied information-computational web-systems (Titov et al, 2009, Gordov et al. 2010, Gordov, Okladnikov and Titov, 2011). Using these approaches for development of internet-accessible thematic information-computational systems, and arranging of data and knowledge interchange between them is a very promising way of creation of distributed information-computation environment for supporting of multidiscipline regional and global research in the field of Earth sciences including analysis of climate changes and their impact on spatial-temporal vegetation distribution and state. Experimental software and hardware platform providing operation of a web-oriented production and research center for regional climate change investigations which combines modern web 2.0 approach, GIS-functionality and capabilities of running climate and meteorological models, large geophysical datasets processing, visualization, joint software development by distributed research groups, scientific analysis and organization of students and post-graduate students education is presented. Platform software developed (Shulgina et al, 2012, Okladnikov et al, 2012) includes dedicated modules for numerical processing of regional and global modeling results for consequent analysis and visualization. Also data preprocessing, run and visualization of modeling results of models WRF and «Planet Simulator» integrated into the platform is provided. All functions of the center are accessible by a user through a web-portal using common graphical web-browser in the form of an interactive graphical user interface which provides, particularly, capabilities of visualization of processing results, selection of geographical region of interest (pan and zoom) and data layers manipulation (order, enable/disable, features extraction). Platform developed provides users with capabilities of heterogeneous geophysical data analysis, including high-resolution data, and discovering of tendencies in climatic and ecosystem changes in the framework of different multidisciplinary researches (Shulgina et al, 2011). Using it even unskilled user without specific knowledge can perform computational processing and visualization of large meteorological, climatological and satellite monitoring datasets through unified graphical web-interface.

  14. Implementation of Arithmetic and Nonarithmetic Functions on a Label-free and DNA-based Platform

    NASA Astrophysics Data System (ADS)

    Wang, Kun; He, Mengqi; Wang, Jin; He, Ronghuan; Wang, Jianhua

    2016-10-01

    A series of complex logic gates were constructed based on graphene oxide and DNA-templated silver nanoclusters to perform both arithmetic and nonarithmetic functions. For the purpose of satisfying the requirements of progressive computational complexity and cost-effectiveness, a label-free and universal platform was developed by integration of various functions, including half adder, half subtractor, multiplexer and demultiplexer. The label-free system avoided laborious modification of biomolecules. The designed DNA-based logic gates can be implemented with readout of near-infrared fluorescence, and exhibit great potential applications in the field of bioimaging as well as disease diagnosis.
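
    For reference, the arithmetic functions implemented by the reported gates follow the standard Boolean definitions; the sketch below only restates that logic in software and says nothing about the graphene oxide/silver nanocluster chemistry itself.

        # Boolean definitions of the arithmetic functions realized by the DNA logic gates.
        def half_adder(a, b):
            # sum = a XOR b, carry = a AND b
            return a ^ b, a & b

        def half_subtractor(a, b):
            # difference = a XOR b, borrow = (NOT a) AND b
            return a ^ b, (1 - a) & b

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, half_adder(a, b), half_subtractor(a, b))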

  15. HERA: A New Platform for Embedding Agents in Heterogeneous Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Alonso, Ricardo S.; de Paz, Juan F.; García, Óscar; Gil, Óscar; González, Angélica

    Ambient Intelligence (AmI) based systems require the development of innovative solutions that integrate distributed intelligent systems with context-aware technologies. In this sense, Multi-Agent Systems (MAS) and Wireless Sensor Networks (WSN) are two key technologies for developing distributed systems based on AmI scenarios. This paper presents the new HERA (Hardware-Embedded Reactive Agents) platform, which allows the use of dynamic and self-adaptable heterogeneous WSNs in which agents are embedded directly on the wireless nodes. This approach facilitates the inclusion of context-aware capabilities in AmI systems to gather data from their surrounding environments, achieving a higher level of ubiquitous and pervasive computing.

  16. Development of fast wireless detection system for fixed offshore platform

    NASA Astrophysics Data System (ADS)

    Li, Zhigang; Yu, Yan; Jiao, Dong; Wang, Jie; Li, Zhirui; Ou, Jinping

    2011-04-01

    The security of offshore platforms has been a concern since the 1950s and 1960s, and in the early 1980s some important specifications and standards were established, providing the technical basis for fixed platform design, construction, installation and evaluation. With more and more platforms serving beyond their design life, research on evaluation and detection technology for offshore platforms has become a hotspot, especially underwater detection and assessment methods based on finite element calculation. For fixed platform structure detection, conventional NDT methods, such as eddy current, magnetic particle, penetrant, X-ray and ultrasonic testing, are generally used. These techniques are mature and intuitive, but underwater detection requires an underwater robot, the necessary supporting tools and auxiliary equipment, and a trained professional team; the resources and cost involved are therefore considerable, and the installation time of the test equipment is long. This project presents a new fast wireless detection and damage diagnosis system for fixed offshore platforms using wireless sensor networks: wireless sensor nodes can be deployed quickly on the offshore platform, the global status of the platform structure is detected via wireless communication, and a diagnosis is then made. The system is simple to operate and suitable for rapid assessment of the integrity state of offshore platforms. The designed system consists of intelligent acquisition equipment and 8 wireless collection nodes; the whole system has 64 collection channels, i.e., every wireless collection node has eight 16-bit A/D channels. Each wireless collection node, integrating a vibration sensing unit, an embedded low-power micro-processing unit, a wireless transceiver unit, a large-capacity power unit and a GPS time synchronization unit, performs functions such as vibration data collection, initial analysis, data storage and wireless data transmission. The intelligent acquisition equipment, integrating a high-performance computation unit, a wireless transceiver unit, a mobile power unit and embedded data analysis software, controls the wireless collection nodes, receives and analyzes data, and performs parameter identification. Data are transmitted over a 2.4 GHz wireless communication channel; every sensing data channel in charge of data transmission uses a stable frequency band, while the control channel responsible for controlling power parameters uses a public frequency band. Initial tests of the designed system show that it has good application prospects and practical value, with fast deployment, high sampling rate, high resolution and low-frequency detection capability.
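
    As an illustration of the kind of time-stamped, multi-channel sample frame such a node might transmit, a short sketch follows; the field layout is hypothetical and is not the system's actual packet format.

        # Hypothetical wireless-node sample frame: node id, GPS-synchronized
        # timestamp and eight 16-bit A/D readings, packed for transmission.
        import struct
        import time

        NODE_ID = 3
        samples = [512, 498, 1023, 0, 256, 768, 300, 45]   # eight 16-bit channel values

        # big-endian: uint16 node id, float64 timestamp, eight uint16 samples
        frame = struct.pack(">Hd8H", NODE_ID, time.time(), *samples)
        print(len(frame), "bytes:", frame.hex())

        node_id, timestamp, *values = struct.unpack(">Hd8H", frame)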

  17. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    PubMed

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

    The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in the life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed, as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.

  18. ART/Ada design project, phase 1. Task 3 report: Test plan

    NASA Technical Reports Server (NTRS)

    Allen, Bradley P.

    1988-01-01

    The plan for the integrated testing and benchmarking of the Phase 1 Ada-based ESBT Design Research Project is described. The integration testing is divided into two phases: (1) the modules that do not rely on the Ada code generated by the Ada Generator are tested before the Ada Generator is implemented; and (2) all modules are integrated and tested with the Ada code generated by the Ada Generator. Its performance and size, as well as its functionality, are verified in this phase. The target platform is a DEC Ada compiler on VAX mini-computers and VAX stations running the VMS operating system.

  19. Building A Community Focused Data and Modeling Collaborative platform with Hardware Virtualization Technology

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Wang, W.; Melton, F. S.; Votava, P.; Milesi, C.; Hashimoto, H.; Nemani, R. R.; Hiatt, S. H.

    2009-12-01

    As the length and diversity of the global earth observation data records grow, modeling and analyses of biospheric conditions increasingly require multiple terabytes of data from a diversity of models and sensors. With network bandwidth beginning to flatten, transmission of these data from centralized data archives presents an increasing challenge, and costs associated with local storage and management of data and compute resources are often significant for individual research and application development efforts. Sharing community-valued intermediary data sets, results and codes from individual efforts with others not in directly funded collaboration can also be a challenge with respect to time, cost and expertise. We propose a modeling, data and knowledge center, named the Ecosystem Modeling Center (EMC), that houses NASA satellite data, climate data and ancillary data, and where a focused community may come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform. With the recent development of new technologies for secure hardware virtualization, an opportunity exists to create specific modeling, analysis and compute environments that are customizable, "archivable" and transferable. Allowing users to instantiate such environments on large compute infrastructures that are directly connected to large data archives may significantly reduce the costs and time associated with scientific efforts by relieving users from redundantly retrieving and integrating data sets and building modeling analysis codes. The EMC platform also allows users to receive indirect assistance from experts through prefabricated compute environments, potentially reducing study "ramp up" times.

  20. Provenance based data integrity checking and verification in cloud environments

    PubMed Central

    Haq, Inam Ul; Jan, Bilal; Khan, Fakhri Alam; Ahmad, Awais

    2017-01-01

    Cloud computing is a recent trend in IT that moves computing and data away from desktop and hand-held devices into large-scale processing hubs and data centers. It has been proposed as an effective solution for data outsourcing and on-demand computing to control the rising cost of IT setups and management in enterprises. However, with Cloud platforms the user’s data is moved into remotely located storage, so that users lose control over their data. This unique feature of the Cloud raises many security and privacy challenges which need to be clearly understood and resolved. One of the important concerns that needs to be addressed is providing proof of data integrity, i.e., the correctness of the user’s data stored in the Cloud storage. The data in Clouds is physically not accessible to the users. Therefore, a mechanism is required through which users can check whether the integrity of their valuable data is maintained or compromised. For this purpose, methods such as mirroring, checksumming and the use of third-party auditors, amongst others, have been proposed. However, these methods either use extra storage space by maintaining multiple copies of data or require the presence of a third-party verifier. In this paper, we address the problem of proving data integrity in Cloud computing by proposing a scheme through which users are able to check the integrity of their data stored in Clouds. In addition, users can track violations of data integrity if they occur. For this purpose, we utilize a relatively new concept in Cloud computing called “Data Provenance”. Our scheme reduces the need for third-party services, additional hardware support and client-side replication of data items for integrity checking. PMID:28545151

  1. Provenance based data integrity checking and verification in cloud environments.

    PubMed

    Imran, Muhammad; Hlavacs, Helmut; Haq, Inam Ul; Jan, Bilal; Khan, Fakhri Alam; Ahmad, Awais

    2017-01-01

    Cloud computing is a recent trend in IT that moves computing and data away from desktop and hand-held devices into large-scale processing hubs and data centers. It has been proposed as an effective solution for data outsourcing and on-demand computing to control the rising cost of IT setups and management in enterprises. However, with Cloud platforms the user's data is moved into remotely located storage, so that users lose control over their data. This unique feature of the Cloud raises many security and privacy challenges which need to be clearly understood and resolved. One of the important concerns that needs to be addressed is providing proof of data integrity, i.e., the correctness of the user's data stored in the Cloud storage. The data in Clouds is physically not accessible to the users. Therefore, a mechanism is required through which users can check whether the integrity of their valuable data is maintained or compromised. For this purpose, methods such as mirroring, checksumming and the use of third-party auditors, amongst others, have been proposed. However, these methods either use extra storage space by maintaining multiple copies of data or require the presence of a third-party verifier. In this paper, we address the problem of proving data integrity in Cloud computing by proposing a scheme through which users are able to check the integrity of their data stored in Clouds. In addition, users can track violations of data integrity if they occur. For this purpose, we utilize a relatively new concept in Cloud computing called "Data Provenance". Our scheme reduces the need for third-party services, additional hardware support and client-side replication of data items for integrity checking.
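
    A minimal sketch of how provenance records can support integrity checking is shown below, assuming a hypothetical append-only provenance log in which every entry stores the hash of the current data version and of the previous entry; it illustrates the general idea of provenance-chained hashes, not the specific scheme proposed in the paper.

        # Toy provenance chain: each entry records the hash of the stored data
        # version and of the previous entry, so tampering breaks the chain.
        import hashlib
        import json

        def sha256(data: bytes) -> str:
            return hashlib.sha256(data).hexdigest()

        def append_entry(log, data: bytes, action: str):
            prev = log[-1]["entry_hash"] if log else ""
            entry = {"action": action, "data_hash": sha256(data), "prev": prev}
            entry["entry_hash"] = sha256(json.dumps(entry, sort_keys=True).encode())
            log.append(entry)

        def verify(log, current_data: bytes) -> bool:
            prev = ""
            for entry in log:
                body = {k: entry[k] for k in ("action", "data_hash", "prev")}
                if entry["prev"] != prev or entry["entry_hash"] != sha256(
                        json.dumps(body, sort_keys=True).encode()):
                    return False
                prev = entry["entry_hash"]
            return log[-1]["data_hash"] == sha256(current_data)

        log = []
        append_entry(log, b"v1 of the file", "create")
        append_entry(log, b"v2 of the file", "update")
        print(verify(log, b"v2 of the file"))   # True
        print(verify(log, b"tampered copy"))    # False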

  2. Bringing Legacy Visualization Software to Modern Computing Devices via Application Streaming

    NASA Astrophysics Data System (ADS)

    Fisher, Ward

    2014-05-01

    Planning software compatibility across forthcoming generations of computing platforms is a problem commonly encountered in software engineering and development. While this problem can affect any class of software, data analysis and visualization programs are particularly vulnerable. This is due in part to their inherent dependency on specialized hardware and computing environments. A number of strategies and tools have been designed to aid software engineers with this task. While generally embraced by developers at 'traditional' software companies, these methodologies are often dismissed by the scientific software community as unwieldy, inefficient and unnecessary. As a result, many important and storied scientific software packages can struggle to adapt to a new computing environment; for example, one in which much work is carried out on sub-laptop devices (such as tablets and smartphones). Rewriting these packages for a new platform often requires significant investment in terms of development time and developer expertise. In many cases, porting older software to modern devices is neither practical nor possible. As a result, replacement software must be developed from scratch, wasting resources better spent on other projects. Enabled largely by the rapid rise and adoption of cloud computing platforms, 'Application Streaming' technologies allow legacy visualization and analysis software to be operated wholly from a client device (be it laptop, tablet or smartphone) while retaining full functionality and interactivity. It mitigates much of the developer effort required by other more traditional methods while simultaneously reducing the time it takes to bring the software to a new platform. This work will provide an overview of Application Streaming and how it compares against other technologies which allow scientific visualization software to be executed from a remote computer. We will discuss the functionality and limitations of existing application streaming frameworks and how a developer might prepare their software for application streaming. We will also examine the secondary benefits realized by moving legacy software to the cloud. Finally, we will examine the process by which a legacy Java application, the Integrated Data Viewer (IDV), is to be adapted for tablet computing via Application Streaming.

  3. A semantic problem solving environment for integrative parasite research: identification of intervention targets for Trypanosoma cruzi.

    PubMed

    Parikh, Priti P; Minning, Todd A; Nguyen, Vinh; Lalithsena, Sarasi; Asiaee, Amir H; Sahoo, Satya S; Doshi, Prashant; Tarleton, Rick; Sheth, Amit P

    2012-01-01

    Research on the biology of parasites requires a sophisticated and integrated computational platform to query and analyze large volumes of data, representing both unpublished (internal) and public (external) data sources. Effective analysis of an integrated data resource using knowledge discovery tools would significantly aid biologists in conducting their research, for example, through identifying various intervention targets in parasites and in deciding the future direction of ongoing as well as planned projects. A key challenge in achieving this objective is the heterogeneity between the internal lab data, usually stored as flat files, Excel spreadsheets or custom-built databases, and the external databases. Reconciling the different forms of heterogeneity and effectively integrating data from disparate sources is a nontrivial task for biologists and requires a dedicated informatics infrastructure. Thus, we developed an integrated environment using Semantic Web technologies that may provide biologists the tools for managing and analyzing their data, without the need for acquiring in-depth computer science knowledge. We developed a semantic problem-solving environment (SPSE) that uses ontologies to integrate internal lab data with external resources in a Parasite Knowledge Base (PKB), which has the ability to query across these resources in a unified manner. The SPSE includes Web Ontology Language (OWL)-based ontologies, experimental data with its provenance information represented using the Resource Description Format (RDF), and a visual querying tool, Cuebee, that features integrated use of Web services. We demonstrate the use and benefit of SPSE using example queries for identifying gene knockout targets of Trypanosoma cruzi for vaccine development. Answers to these queries involve looking up multiple sources of data, linking them together and presenting the results. The SPSE facilitates parasitologists in leveraging the growing, but disparate, parasite data resources by offering an integrative platform that utilizes Semantic Web techniques, while keeping their workload increase minimal.
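
    A small sketch of the kind of ontology-backed query such an environment supports is given below, using the rdflib Python library over a toy RDF graph; the namespace and property names are hypothetical stand-ins, not the actual PKB vocabulary or Cuebee interface.

        # Toy RDF graph and SPARQL query in the spirit of the SPSE/PKB approach.
        # The namespace and predicates below are hypothetical, not the real ontology.
        from rdflib import Graph, Literal, Namespace, URIRef

        EX = Namespace("http://example.org/parasite/")
        g = Graph()
        gene = URIRef(EX["gene/Tc00.1"])
        g.add((gene, EX.expressedInStage, EX.amastigote))
        g.add((gene, EX.hasPhenotype, Literal("reduced growth on knockout")))

        query = """
        PREFIX ex: <http://example.org/parasite/>
        SELECT ?gene ?phenotype WHERE {
            ?gene ex:expressedInStage ex:amastigote .
            ?gene ex:hasPhenotype ?phenotype .
        }
        """
        for row in g.query(query):
            print(row.gene, row.phenotype)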

  4. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    NASA Astrophysics Data System (ADS)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power in the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation undertaken to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLOPS (FLoating-point Operations Per Second), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
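
    A common way to obtain such a FLOPS figure is to time a dense floating-point kernel and divide the operation count by the elapsed time. The sketch below does this for a NumPy matrix multiplication; it is only meant to make the metric concrete and, as the abstract argues, says nothing about memory, disk or network behaviour.

        # Rough FLOPS estimate from a dense matrix multiplication (about 2*n^3
        # floating-point operations). This measures one kernel only.
        import time
        import numpy as np

        n = 2048
        a = np.random.rand(n, n)
        b = np.random.rand(n, n)

        start = time.perf_counter()
        c = a @ b
        elapsed = time.perf_counter() - start

        flops = 2.0 * n ** 3 / elapsed
        print(f"{flops / 1e9:.1f} GFLOPS in {elapsed:.3f} s")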

  5. EPOS Thematic Core Service ANTHROPOGENIC HAZARDS (TCS AH) - development of e-research platform

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata

    2017-04-01

    TCS AH is based on the IS-EPOS Platform. The Platform facilitates research on anthropogenic hazards and is available online, free of charge, at https://tcs.ah-epos.eu/. The Platform is the final product of the IS-EPOS project, funded by the national programme POIG and implemented in 2013-2015 (POIG.02.03.00-14-090/13-00). The Platform is the result of joint work by the scientific community and industrial partners. Currently, the development of TCS AH is carried out under the EPOS IP project (H2020-INFRADEV-1-2015-1, INFRADEV-3-2015). The Platform is an open virtual access point for researchers and Ph.D. students interested in anthropogenic seismicity and related hazards. This environment is designed to give researchers the maximum possible freedom for experimentation by providing a virtual laboratory in which they can design their own processing streams and process the data integrated on the Platform. TCS AH integrates data and specific high-level services. Data are gathered in so-called "episodes", comprehensively describing a geophysical process induced or triggered by human technological activity which, under certain circumstances, can become hazardous for people, infrastructure and the environment. Seven sets of seismic, geological and technological data have been made available on the Platform. The data come from Poland, Germany, the UK and Vietnam, and refer to underground mining, reservoir impoundment, shale gas exploitation and geothermal energy production. At least 19 further episodes, related to conventional hydrocarbon extraction, reservoir treatment, underground mining and geothermal energy production, are being integrated within the framework of the EPOS IP project. The heterogeneous multi-disciplinary data (seismic, displacement, geomechanical, production data, etc.) are transformed into unified structures to form integrated and validated datasets. To handle these diverse data, problem-oriented services were designed and implemented; particular attention in service preparation was devoted to methods analyzing correlations between technology, geophysical response and the resulting hazard. TCS AH contains a number of computing and data visualization services, which give the opportunity to make graphical presentations of the available data. Further development of the Platform, besides the integration of new episodes covering all types of anthropogenic hazards, will gradually cover the implementation of new services. The TCS AH platform is open to the whole research community. The Platform is also designed to be used in research projects; for example, it serves the "Shale gas exploration and exploitation induced risks (SHEER)" project (Horizon 2020, call LCE 16-2014). In addition, it is also meant to serve the public sector with expert knowledge and background information. To fulfil this aim, services for outreach, dissemination and communication will be implemented. TCS AH has been used as a teaching tool in Ph.D. education within the IG PAS seismology course for Ph.D. candidates and the Interdisciplinary Polar Studies programme, as well as in several workshops for Polish and international students. Additionally, the Platform is used within the educational project ERIS (Exploitation of Research results In School practice), aimed at junior high and high schools and funded with support from the European Commission within the ERASMUS+ Programme.

  6. Simulator platform for fast reactor operation and safety technology demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, R. B.; Park, Y. S.; Grandy, C.

    2012-07-30

    A simulator platform for visualization and demonstration of innovative concepts in fast reactor technology is described. The objective is to make more accessible the workings of fast reactor technology innovations and to do so in a human factors environment that uses state-of-the-art visualization technologies. In this work the computer codes in use at Argonne National Laboratory (ANL) for the design of fast reactor systems are being integrated to run on this platform. This includes linking reactor systems codes with mechanical structures codes and using advanced graphics to depict the thermo-hydraulic-structure interactions that give rise to an inherently safe response to upsets. It also includes visualization of mechanical systems operation including advanced concepts that make use of robotics for operations, in-service inspection, and maintenance.

  7. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
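
    The integral-image structure FIIAT relies on can be summarized in a few lines: each cell holds the sum of all pixels above and to the left, so any rectangular sum needs only four table lookups. A small NumPy sketch of that idea follows (an illustration of the technique, not FIIAT's C API).

        # Integral image: cumulative sums along both axes; a rectangle sum then
        # needs only four table lookups regardless of the rectangle size.
        import numpy as np

        def integral_image(img):
            return img.cumsum(axis=0).cumsum(axis=1)

        def rect_sum(ii, r0, c0, r1, c1):
            """Sum of img[r0:r1+1, c0:c1+1] using the integral image ii."""
            total = ii[r1, c1]
            if r0 > 0:
                total -= ii[r0 - 1, c1]
            if c0 > 0:
                total -= ii[r1, c0 - 1]
            if r0 > 0 and c0 > 0:
                total += ii[r0 - 1, c0 - 1]
            return total

        img = np.arange(16, dtype=np.int64).reshape(4, 4)
        ii = integral_image(img)
        assert rect_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()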

  8. Engineering integrated digital circuits with allosteric ribozymes for scaling up molecular computation and diagnostics.

    PubMed

    Penchovsky, Robert

    2012-10-19

    Here we describe molecular implementations of integrated digital circuits, including a three-input AND logic gate, a two-input multiplexer, and 1-to-2 decoder using allosteric ribozymes. Furthermore, we demonstrate a multiplexer-decoder circuit. The ribozymes are designed to seek-and-destroy specific RNAs with a certain length by a fully computerized procedure. The algorithm can accurately predict one base substitution that alters the ribozyme's logic function. The ability to sense the length of RNA molecules enables single ribozymes to be used as platforms for multiple interactions. These ribozymes can work as integrated circuits with the functionality of up to five logic gates. The ribozyme design is universal since the allosteric and substrate domains can be altered to sense different RNAs. In addition, the ribozymes can specifically cleave RNA molecules with triplet-repeat expansions observed in genetic disorders such as oculopharyngeal muscular dystrophy. Therefore, the designer ribozymes can be employed for scaling up computing and diagnostic networks in the fields of molecular computing and diagnostics and RNA synthetic biology.
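
    The circuit functions named above correspond to standard Boolean definitions, restated below purely as a software reference for readers; the sketch says nothing about the ribozyme chemistry or the sequence design algorithm itself.

        # Boolean reference for the circuit functions realized with ribozymes.
        def and3(a, b, c):
            return a & b & c

        def mux2(select, d0, d1):
            # 2-input multiplexer: route d1 when select is 1, else d0
            return d1 if select else d0

        def demux1to2(select, d):
            # 1-to-2 decoder/demultiplexer: route d to output 0 or output 1
            return (d, 0) if select == 0 else (0, d)

        # a multiplexer feeding a demultiplexer, as in the reported combined circuit
        print(demux1to2(1, mux2(0, d0=1, d1=0)))   # (0, 1)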

  9. Digital video applications in radiologic education: theory, technique, and applications.

    PubMed

    Hennessey, J G; Fishman, E K; Ney, D R

    1994-05-01

    Computer-assisted instruction (CAI) has great potential in medical education. The recent explosion of multimedia platforms provides an environment for the seamless integration of text, images, and sound into a single program. This article discusses the role of digital video in the current educational environment as well as its future potential. An in-depth review of the technical decisions involved in this new technology is also presented.

  10. Design and Empirical Validation of Effectiveness of LANGA, an Online Game-Based Platform for Second Language Learning

    ERIC Educational Resources Information Center

    Usai, Francesco; O'Neil, Kiera G. R.; Newman, Aaron J.

    2018-01-01

    Computer and smartphone-based applications for second language (L2) learning have become popular tools, being integrated in many classroom-based courses and adopted by the public at large. Yet, despite a significant body of research that suggests that individuals differ in their ability to learn L2, it is still unclear what factors predict…

  11. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  12. Model-driven analysis of experimentally determined growth phenotypes for 465 yeast gene deletion mutants under 16 different conditions

    PubMed Central

    Snitkin, Evan S; Dudley, Aimée M; Janse, Daniel M; Wong, Kaisheen; Church, George M; Segrè, Daniel

    2008-01-01

    Background: Understanding the response of complex biochemical networks to genetic perturbations and environmental variability is a fundamental challenge in biology. Integration of high-throughput experimental assays and genome-scale computational methods is likely to produce insight otherwise unreachable, but specific examples of such integration have only begun to be explored. Results: In this study, we measured growth phenotypes of 465 Saccharomyces cerevisiae gene deletion mutants under 16 metabolically relevant conditions and integrated them with the corresponding flux balance model predictions. We first used discordance between experimental results and model predictions to guide a stage of experimental refinement, which resulted in a significant improvement in the quality of the experimental data. Next, we used discordance still present in the refined experimental data to assess the reliability of yeast metabolism models under different conditions. In addition to estimating predictive capacity based on growth phenotypes, we sought to explain these discordances by examining predicted flux distributions visualized through a new, freely available platform. This analysis led to insight into the glycerol utilization pathway and the potential effects of metabolic shortcuts on model results. Finally, we used model predictions and experimental data to discriminate between alternative raffinose catabolism routes. Conclusions: Our study demonstrates how a new level of integration between high throughput measurements and flux balance model predictions can improve understanding of both experimental and computational results. The added value of a joint analysis is a more reliable platform for specific testing of biological hypotheses, such as the catabolic routes of different carbon sources. PMID:18808699
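
    Flux balance analysis reduces to a linear program: maximize a growth (biomass) flux subject to steady-state mass balance S·v = 0 and flux bounds. A toy sketch with SciPy is shown below; the three-reaction network is hypothetical and is not the yeast genome-scale model used in the study.

        # Toy flux balance analysis: maximize the biomass flux subject to
        # steady-state mass balance S @ v = 0 and flux bounds.
        import numpy as np
        from scipy.optimize import linprog

        # one metabolite (A), three reactions: uptake -> A, A -> biomass, A -> byproduct
        S = np.array([[1.0, -1.0, -1.0]])           # stoichiometric matrix (1 x 3)
        bounds = [(0, 10), (0, None), (0, None)]    # uptake limited to 10 units
        c = [0.0, -1.0, 0.0]                        # maximize v_biomass == minimize -v_biomass

        res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds, method="highs")
        print("fluxes:", res.x)                     # expected ~[10, 10, 0]
        print("predicted growth:", -res.fun)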

  13. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    NASA Astrophysics Data System (ADS)

    Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration

    2017-10-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.

  14. Automated platform for designing multiple robot work cells

    NASA Astrophysics Data System (ADS)

    Osman, N. S.; Rahman, M. A. A.; Rahman, A. A. Abdul; Kamsani, S. H.; Bali Mohamad, B. M.; Mohamad, E.; Zaini, Z. A.; Rahman, M. F. Ab; Mohamad Hatta, M. N. H.

    2017-06-01

    Designing multiple robot work cells is a very knowledge-intensive, intricate, and time-consuming process. This paper elaborates on the development of a computer-aided design program for generating multiple robot work cells through a user-friendly interface. The primary purpose of this work is to provide a fast and easy platform with lower cost and less human involvement, requiring minimal trial-and-error adjustments. The automated platform is constructed based on the variant-shaped configuration concept and its mathematical model. The robot work cell layout, the system components, and the construction procedure of the automated platform are discussed in this paper; the integration of these items automatically provides the optimum robot work cell design according to the information set by the user. This system is implemented on top of CATIA V5 software and utilises its Part Design, Assembly Design, and Macro tools. The current outcomes of this work provide a basis for future investigation into developing a flexible configuration system for multiple robot work cells.

  15. The architecture of a virtual grid GIS server

    NASA Astrophysics Data System (ADS)

    Wu, Pengfei; Fang, Yu; Chen, Bin; Wu, Xi; Tian, Xiaoting

    2008-10-01

    Grid computing technology provides a service-oriented architecture for distributed applications. The virtual Grid GIS server is a distributed and interoperable enterprise GIS application architecture running in the grid environment, which integrates heterogeneous GIS platforms. All sorts of legacy GIS platforms join the grid as members of a GIS virtual organization. Based on a microkernel, we design the ESB and portal GIS service layer, which together compose Microkernel GIS. Through web portals, portal GIS services and the mediation of the service bus, and following the principle of SoC, we separate business logic from implementation logic. Microkernel GIS greatly reduces the degree of coupling between applications and GIS platforms. Enterprise applications become independent of particular GIS platforms, allowing application developers to concentrate on the business logic. Via configuration and orchestration of a set of fine-grained services, the system creates a GIS Business, which acts as a whole WebGIS request when activated. In this way, the system satisfies a business workflow directly and simply, with little or no new code.

  16. Chem/bio sensing with non-classical light and integrated photonics.

    PubMed

    Haas, J; Schwartz, M; Rengstl, U; Jetter, M; Michler, P; Mizaikoff, B

    2018-01-29

    Modern quantum technology currently experiences extensive advances in applicability in communications, cryptography, computing, metrology and lithography. Harnessing this technology platform for chem/bio sensing scenarios is an appealing opportunity enabling ultra-sensitive detection schemes. This is further facilitated by the progress in fabrication, miniaturization and integration of visible and infrared quantum photonics. In particular, the combination of efficient single-photon sources with waveguiding/sensing structures serving as active optical transducers, together with advanced detector materials, promises integrated quantum photonic chem/bio sensors. Besides the intrinsic molecular selectivity and non-destructive character of visible and infrared light based sensing schemes, chem/bio sensors taking advantage of non-classical light sources promise sensitivities beyond the standard quantum limit. In the present review, recent achievements towards on-chip chem/bio quantum photonic sensing platforms based on N00N states are discussed along with appropriate recognition chemistries, facilitating the detection of relevant (bio)analytes at ultra-trace concentration levels. After evaluating recent developments in this field, a perspective for a potentially promising sensor testbed is discussed for reaching integrated quantum sensing with two fiber-coupled GaAs chips together with semiconductor quantum dots serving as single-photon sources.
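
    For orientation, "beyond the standard quantum limit" refers to the phase sensitivity attainable in interferometric read-out. In the idealized (lossless) textbook picture, a classical probe with N photons is limited by shot noise, whereas a N00N state, (|N,0> + |0,N>)/sqrt(2), accumulates phase N times faster and reaches Heisenberg scaling:

        \Delta\phi_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}, \qquad \Delta\phi_{\mathrm{N00N}} \sim \frac{1}{N}

    This idealization is included here only to make the sensitivity claim concrete; losses and detector imperfections degrade the advantage in practice.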

  17. A collaborative platform for consensus sessions in pathology over Internet.

    PubMed

    Zapletal, Eric; Le Bozec, Christel; Degoulet, Patrice; Jaulent, Marie-Christine

    2003-01-01

    The design of valid databases in pathology faces the problem of diagnostic disagreement between pathologists. Organizing consensus sessions between experts to reduce this variability is a difficult task. The TRIDEM platform addresses the issue of organizing consensus sessions in pathology over the Internet. In this paper, we present the basis for such a collaborative platform. On the one hand, the platform integrates the functionalities of the IDEM consensus module, which alleviates the consensus task by presenting preliminary computed consensus to pathologists through ergonomic interfaces (automatic step). On the other hand, a set of lightweight interaction tools, such as vocal annotations, is implemented to ease the communication between experts as they discuss a case (interactive step). The architecture of the TRIDEM platform is based on a JavaServer Pages web server that communicates with the ObjectStore PSE/PRO database used for object storage. The HTML pages generated by the web server run Java applets to perform the different steps (automatic and interactive) of the consensus. The current limitation of the platform is that it only handles a synchronous process. Moreover, improvements such as re-writing the consensus workflow with a protocol such as BPML are already foreseen.

  18. Framework Design of Unified Cross-Authentication Based on the Fourth Platform Integrated Payment

    NASA Astrophysics Data System (ADS)

    Yong, Xu; Yujin, He

    The essay proposes a unified authentication scheme based on the fourth-platform integrated payment. The research aims at improving the compatibility of authentication in electronic business and providing a reference for the establishment of a credit system by seeking a way to carry out standard unified authentication on an integrated payment platform. The essay introduces the concept of the fourth integrated payment platform and finally puts forward its overall structure and components. The main focus of the essay is the design of the credit system of the fourth integrated payment platform and the design of its PKI/CA structure.

  19. Integration of USB and firewire cameras in machine vision applications

    NASA Astrophysics Data System (ADS)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper briefly describes a couple of the consumer digital camera standards, and then discusses some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  20. Earth Observation oriented teaching materials development based on OGC Web services and Bashyt generated reports

    NASA Astrophysics Data System (ADS)

    Stefanut, T.; Gorgan, D.; Giuliani, G.; Cau, P.

    2012-04-01

    Creating e-Learning materials in the Earth Observation domain is a difficult task, especially for non-technical specialists who have to deal with distributed repositories, large amounts of information and intensive processing requirements. Furthermore, due to the lack of specialized applications for developing teaching resources, technical knowledge is also required for defining data presentation structures or for developing and customizing user interaction techniques for better teaching results. As a response to these issues, during the GiSHEO FP7 project [1] and later in the EnviroGRIDS FP7 project [2], we have developed the eGLE e-Learning Platform [3], a tool-based application that provides dedicated functionalities to Earth Observation specialists for developing teaching materials. The proposed architecture is built around a client-server design that provides the core functionalities (e.g. user management, tool integration, teaching material settings, etc.) and has been extended with a distributed component implemented through the tools that are integrated into the platform, as described further. Our approach to dealing with multiple transfer protocol types, heterogeneous data formats and various user interaction techniques involves the development and integration of very specialized elements (tools) that can be customized by the trainers in a visual manner through simple user interfaces. In our concept, each tool is dedicated to a specific data type, implementing optimized mechanisms for searching, retrieving, visualizing and interacting with it. At the same time, any number of tools can be integrated into each learning resource through drag-and-drop interaction, allowing the teacher to retrieve pieces of data of various types (e.g. images, charts, tables, text, videos, etc.) from different sources (e.g. OGC web services, charts created through the Bashyt application, etc.) through different protocols (e.g. WMS, Bashyt API, FTP, HTTP, etc.) and to display them all together in a unitary manner using the same visual structure [4]. Addressing the high-performance computation requirements encountered when processing environmental data, our platform can be easily extended through tools that connect to GRID infrastructures, WCS web services, the Bashyt API (for creating specialized hydrological reports) or any other specialized services (e.g. graphics cluster visualization) reachable over the Internet. At run time, on the trainee's computer each tool is launched in an asynchronous running mode and connects to the data source established by the teacher, retrieving and displaying the information to the user. The data transfer is accomplished directly between the trainee's computer and the corresponding services (e.g. OGC, Bashyt API, etc.) without passing through the core platform server. In this manner, the eGLE application can provide better and more responsive connections to a large number of users.
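
    A small sketch of the kind of OGC WMS request such a tool issues to retrieve a map layer is shown below; the server URL, layer name and bounding box are hypothetical placeholders, while the parameters follow the standard WMS 1.1.1 GetMap operation.

        # Fetch a map image from an OGC Web Map Service (WMS 1.1.1 GetMap).
        # The base URL, layer name and bounding box are hypothetical placeholders.
        import requests

        params = {
            "service": "WMS",
            "version": "1.1.1",
            "request": "GetMap",
            "layers": "precipitation_mean",      # hypothetical layer name
            "styles": "",
            "srs": "EPSG:4326",
            "bbox": "27.0,40.0,42.0,48.0",       # hypothetical lon/lat box
            "width": 800,
            "height": 450,
            "format": "image/png",
        }
        resp = requests.get("http://gis.example.org/wms", params=params, timeout=30)
        with open("layer.png", "wb") as f:
            f.write(resp.content)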

  1. The MaxQuant computational platform for mass spectrometry-based shotgun proteomics.

    PubMed

    Tyanova, Stefka; Temu, Tikira; Cox, Juergen

    2016-12-01

    MaxQuant is one of the most frequently used platforms for mass-spectrometry (MS)-based proteomics data analysis. Since its first release in 2008, it has grown substantially in functionality and can be used in conjunction with more MS platforms. Here we present an updated protocol covering the most important basic computational workflows, including those designed for quantitative label-free proteomics, MS1-level labeling and isobaric labeling techniques. This protocol presents a complete description of the parameters used in MaxQuant, as well as of the configuration options of its integrated search engine, Andromeda. This protocol update describes an adaptation of an existing protocol that substantially modifies the technique. Important concepts of shotgun proteomics and their implementation in MaxQuant are briefly reviewed, including different quantification strategies and the control of false-discovery rates (FDRs), as well as the analysis of post-translational modifications (PTMs). The MaxQuant output tables, which contain information about quantification of proteins and PTMs, are explained in detail. Furthermore, we provide a short version of the workflow that is applicable to data sets with simple and standard experimental designs. The MaxQuant algorithms are efficiently parallelized on multiple processors and scale well from desktop computers to servers with many cores. The software is written in C# and is freely available at http://www.maxquant.org.

  2. pyPaSWAS: Python-based multi-core CPU and GPU sequence alignment.

    PubMed

    Warris, Sven; Timal, N Roshan N; Kempenaar, Marcel; Poortinga, Arne M; van de Geest, Henri; Varbanescu, Ana L; Nap, Jan-Peter

    2018-01-01

    Our previously published CUDA-only application PaSWAS for Smith-Waterman (SW) sequence alignment of any type of sequence on NVIDIA-based GPUs is platform-specific and therefore adopted less widely than it could be. The OpenCL language is supported more widely and allows use on a variety of hardware platforms. Moreover, there is a need to promote the adoption of parallel computing in bioinformatics by making its use and extension simpler through more and better application of high-level languages commonly used in bioinformatics, such as Python. The novel application pyPaSWAS presents the parallel SW sequence alignment code fully packed in Python. It is a generic SW implementation running on several hardware platforms with multi-core systems and/or GPUs that provides accurate sequence alignments that can also be inspected for alignment details. Additionally, pyPaSWAS supports affine gap penalties. Python libraries are used for automated system configuration, I/O and logging. This way, the Python environment will stimulate further extension and use of pyPaSWAS. pyPaSWAS presents an easy Python-based environment for accurate and retrievable parallel SW sequence alignments on GPUs and multi-core systems. The strategy of integrating Python with high-performance parallel compute languages to create a developer- and user-friendly environment should be considered for other computationally intensive bioinformatics algorithms.
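
    For readers unfamiliar with the underlying algorithm, a compact (and deliberately unoptimized) Smith-Waterman scoring recurrence with a linear gap penalty is sketched below; it illustrates the computation that pyPaSWAS parallelizes, not its actual CUDA/OpenCL kernels, and pyPaSWAS itself additionally supports affine gaps.

        # Plain Smith-Waterman local alignment score (linear gap penalty for brevity).
        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            rows, cols = len(a) + 1, len(b) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    score = match if a[i - 1] == b[j - 1] else mismatch
                    H[i][j] = max(0,
                                  H[i - 1][j - 1] + score,    # (mis)match
                                  H[i - 1][j] + gap,          # gap in b
                                  H[i][j - 1] + gap)          # gap in a
                    best = max(best, H[i][j])
            return best

        print(smith_waterman("ACACACTA", "AGCACACA"))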

  3. TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Nelson, J.; Jones, N.; Ames, D. P.

    2015-12-01

    Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leverage these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, that have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
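
    To make the batch-scheduling step concrete, the sketch below writes a plain HTCondor submit description and hands it to condor_submit via subprocess; this is the kind of step that tools like CondorPy automate, and the executable, arguments and file names here are hypothetical.

        # Write a minimal HTCondor submit description and submit it.
        # The executable and file names are hypothetical placeholders.
        import subprocess
        from textwrap import dedent

        submit_description = dedent("""\
            executable = run_model.sh
            arguments  = --scenario baseline
            output     = model.out
            error      = model.err
            log        = model.log
            request_cpus = 4
            queue 1
            """)

        with open("model.submit", "w") as f:
            f.write(submit_description)

        subprocess.run(["condor_submit", "model.submit"], check=True)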

  4. FPGA-Based High-Performance Embedded Systems for Adaptive Edge Computing in Cyber-Physical Systems: The ARTICo³ Framework.

    PubMed

    Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo

    2018-06-08

    Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.

  5. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks

    PubMed Central

    Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.

    2017-01-01

    Modeling and simulating the neural structures which make up our central nervous system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity. PMID:28223930
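
    As a concrete illustration of the time-driven family, the following minimal sketch integrates a single leaky integrate-and-fire neuron with a fixed forward-Euler step. Parameter values are illustrative, and the fixed step is exactly what the bi-fixed-step method proposed above improves upon by adapting the step size around spikes.

```python
# Time-driven simulation of one leaky integrate-and-fire (LIF) neuron with
# a fixed step; parameter values are illustrative only.
import numpy as np

def simulate_lif(i_ext, dt=0.1e-3, tau_m=20e-3, v_rest=-70e-3,
                 v_thresh=-50e-3, v_reset=-65e-3, r_m=10e6):
    v = v_rest
    spikes, trace = [], []
    for step, i in enumerate(i_ext):
        dv = (-(v - v_rest) + r_m * i) / tau_m   # membrane equation
        v += dt * dv                             # forward Euler update
        if v >= v_thresh:                        # threshold crossing
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

current = np.full(5000, 2.5e-9)      # 0.5 s of constant 2.5 nA input
v_trace, spike_times = simulate_lif(current)
print(len(spike_times), "spikes")
```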

  6. Crops In Silico: Generating Virtual Crops Using an Integrative and Multi-scale Modeling Platform.

    PubMed

    Marshall-Colon, Amy; Long, Stephen P; Allen, Douglas K; Allen, Gabrielle; Beard, Daniel A; Benes, Bedrich; von Caemmerer, Susanne; Christensen, A J; Cox, Donna J; Hart, John C; Hirst, Peter M; Kannan, Kavya; Katz, Daniel S; Lynch, Jonathan P; Millar, Andrew J; Panneerselvam, Balaji; Price, Nathan D; Prusinkiewicz, Przemyslaw; Raila, David; Shekar, Rachel G; Shrivastava, Stuti; Shukla, Diwakar; Srinivasan, Venkatraman; Stitt, Mark; Turk, Matthew J; Voit, Eberhard O; Wang, Yu; Yin, Xinyou; Zhu, Xin-Guang

    2017-01-01

    Multi-scale models can facilitate whole plant simulations by linking gene networks, protein synthesis, metabolic pathways, physiology, and growth. Whole plant models can be further integrated with ecosystem, weather, and climate models to predict how various interactions respond to environmental perturbations. These models have the potential to fill in missing mechanistic details and generate new hypotheses to prioritize directed engineering efforts. Outcomes will potentially accelerate improvement of crop yield, sustainability, and increase future food security. It is time for a paradigm shift in plant modeling, from largely isolated efforts to a connected community that takes advantage of advances in high performance computing and mechanistic understanding of plant processes. Tools for guiding future crop breeding and engineering, understanding the implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem responses to the environment are urgently needed. The purpose of this perspective is to introduce Crops in silico (cropsinsilico.org), an integrative and multi-scale modeling platform, as one solution that combines isolated modeling efforts toward the generation of virtual crops, which is open and accessible to the entire plant biology community. The major challenges involved both in the development and deployment of a shared, multi-scale modeling platform, which are summarized in this prospectus, were recently identified during the first Crops in silico Symposium and Workshop.

  7. Crops In Silico: Generating Virtual Crops Using an Integrative and Multi-scale Modeling Platform

    PubMed Central

    Marshall-Colon, Amy; Long, Stephen P.; Allen, Douglas K.; Allen, Gabrielle; Beard, Daniel A.; Benes, Bedrich; von Caemmerer, Susanne; Christensen, A. J.; Cox, Donna J.; Hart, John C.; Hirst, Peter M.; Kannan, Kavya; Katz, Daniel S.; Lynch, Jonathan P.; Millar, Andrew J.; Panneerselvam, Balaji; Price, Nathan D.; Prusinkiewicz, Przemyslaw; Raila, David; Shekar, Rachel G.; Shrivastava, Stuti; Shukla, Diwakar; Srinivasan, Venkatraman; Stitt, Mark; Turk, Matthew J.; Voit, Eberhard O.; Wang, Yu; Yin, Xinyou; Zhu, Xin-Guang

    2017-01-01

    Multi-scale models can facilitate whole plant simulations by linking gene networks, protein synthesis, metabolic pathways, physiology, and growth. Whole plant models can be further integrated with ecosystem, weather, and climate models to predict how various interactions respond to environmental perturbations. These models have the potential to fill in missing mechanistic details and generate new hypotheses to prioritize directed engineering efforts. Outcomes will potentially accelerate improvement of crop yield, sustainability, and increase future food security. It is time for a paradigm shift in plant modeling, from largely isolated efforts to a connected community that takes advantage of advances in high performance computing and mechanistic understanding of plant processes. Tools for guiding future crop breeding and engineering, understanding the implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem responses to the environment are urgently needed. The purpose of this perspective is to introduce Crops in silico (cropsinsilico.org), an integrative and multi-scale modeling platform, as one solution that combines isolated modeling efforts toward the generation of virtual crops, which is open and accessible to the entire plant biology community. The major challenges involved both in the development and deployment of a shared, multi-scale modeling platform, which are summarized in this prospectus, were recently identified during the first Crops in silico Symposium and Workshop. PMID:28555150

  8. Arsenic removal from contaminated groundwater by membrane-integrated hybrid plant: optimization and control using Visual Basic platform.

    PubMed

    Chakrabortty, S; Sen, M; Pal, P

    2014-03-01

    Simulation software (ARRPA) has been developed on the Microsoft Visual Basic platform for optimization and control of a novel membrane-integrated arsenic separation plant, in the absence of any such software. The user-friendly, menu-driven software is based on a dynamic linearized mathematical model developed for the hybrid treatment scheme. The model captures the chemical kinetics in the pre-treating chemical reactor and the separation and transport phenomena involved in nanofiltration. The software has been validated through extensive experimental investigations. The agreement between the outputs of the computer simulation program and the experimental findings is excellent and consistent under varying operating conditions, reflecting a high degree of accuracy and reliability of the software. High values of the overall correlation coefficient (R² = 0.989) and Willmott d-index (0.989) are indicators of the capability of the software in analyzing the performance of the plant. The software permits pre-analysis and manipulation of input data, helps in optimization, and exhibits the performance of an integrated plant visually on a graphical platform. Performance analysis of the whole system as well as the individual units is possible using the tool. The software, the first of its kind in its domain and in the well-known Microsoft Excel environment, is likely to be very useful in successful design, optimization and operation of an advanced hybrid treatment plant for removal of arsenic from contaminated groundwater.

  9. Biomaterial science meets computational biology.

    PubMed

    Hutmacher, Dietmar W; Little, J Paige; Pettet, Graeme J; Loessner, Daniela

    2015-05-01

    There is a pressing need for a predictive tool capable of revealing a holistic understanding of fundamental elements in the normal and pathological cell physiology of organoids in order to decipher the mechanoresponse of cells. Therefore, the integration of a systems bioengineering approach into a validated mathematical model is necessary to develop a new simulation tool. This tool can only be innovative by combining biomaterials science with computational biology. Systems-level and multi-scale experimental data are incorporated into a single framework, thus representing both single cells and collective cell behaviour. Such a computational platform needs to be validated in order to discover key mechano-biological factors associated with cell-cell and cell-niche interactions.

  10. Cots Correlator Platform

    NASA Astrophysics Data System (ADS)

    Schaaf, Kjeld; Overeem, Ruud

    2004-06-01

    Moore’s law is best exploited by using consumer market hardware. In particular, the gaming industry pushes the limit of processor performance, thus reducing the cost per raw flop even faster than Moore’s law predicts. Next to the cost benefits of Commercial Off-The-Shelf (COTS) processing resources, there is a rapidly growing experience pool in cluster-based processing. Typical Beowulf cluster-of-PCs supercomputers are well known. Multiple examples exist of specialised cluster computers based on more advanced server nodes or even gaming stations. All these cluster machines build upon the same knowledge about cluster software management, scheduling, middleware libraries and mathematical libraries. In this study, we have integrated COTS processing resources and cluster nodes into a very high performance processing platform suitable for streaming data applications, in particular to implement a correlator. The required processing power for the correlator in modern radio telescopes is in the range of the larger supercomputers, which motivates the usage of supercomputer technology. Raw processing power is provided by graphics processors and is combined with an Infiniband host bus adapter with integrated data stream handling logic. With this processing platform, a scalable correlator can be built with continuously growing processing power at consumer market prices.
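
    To make the streaming kernel concrete, the sketch below implements the core of an FX correlator in NumPy: channelize each antenna stream with an FFT, then cross-multiply and accumulate all antenna pairs. It is a toy illustration with synthetic data, not the GPU implementation described above.

```python
# Core of an FX correlator: FFT-channelize each antenna stream, then
# cross-multiply and integrate all antenna pairs. Array sizes and the
# random test data are placeholders.
import numpy as np

def fx_correlate(voltages, n_chan=256):
    # voltages: (n_antennas, n_samples) real-valued stream
    n_ant, n_samp = voltages.shape
    n_spectra = n_samp // n_chan
    spectra = np.fft.rfft(
        voltages[:, :n_spectra * n_chan].reshape(n_ant, n_spectra, n_chan),
        axis=-1)
    # cross-multiply every pair (i, j) and integrate over time
    vis = np.einsum("itc,jtc->ijc", spectra, np.conj(spectra))
    return vis  # (n_ant, n_ant, n_chan//2 + 1) visibility matrix

data = np.random.randn(4, 1 << 16)      # 4 antennas of fake samples
print(fx_correlate(data).shape)
```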

  11. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    PubMed

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
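
    A hedged illustration of the PyNN description mentioned above is sketched below using PyNN 0.8-style syntax. The backend module imported here (pyNN.nest) is only a stand-in for whatever backend is installed; NeuroFlow would supply its own backend, and exact parameter names can vary between PyNN versions.

```python
# Approximate PyNN-style network description (PyNN >= 0.8 syntax).
# The backend import is a placeholder; substitute the installed backend.
import pyNN.nest as sim

sim.setup(timestep=1.0)

# two populations of integrate-and-fire neurons
exc = sim.Population(800, sim.IF_curr_exp(tau_m=20.0, v_thresh=-50.0))
inh = sim.Population(200, sim.IF_curr_exp(tau_m=20.0, v_thresh=-50.0))

# sparse random connectivity with static synapses
conn = sim.FixedProbabilityConnector(p_connect=0.1)
sim.Projection(exc, inh, conn,
               synapse_type=sim.StaticSynapse(weight=0.05, delay=1.0))

exc.record("spikes")
sim.run(1000.0)          # simulate one second
print(exc.get_data())    # retrieve recorded spike trains
sim.end()
```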

  12. Precision instrument placement using a 4-DOF robot with integrated fiducials for minimally invasive interventions

    NASA Astrophysics Data System (ADS)

    Stenzel, Roland; Lin, Ralph; Cheng, Peng; Kronreif, Gernot; Kornfeld, Martin; Lindisch, David; Wood, Bradford J.; Viswanathan, Anand; Cleary, Kevin

    2007-03-01

    Minimally invasive procedures are increasingly attractive to patients and medical personnel because they can reduce operative trauma, recovery times, and overall costs. However, during these procedures, the physician has a very limited view of the interventional field and the exact position of surgical instruments. We present an image-guided platform for precision placement of surgical instruments based upon a small four degree-of-freedom robot (B-RobII; ARC Seibersdorf Research GmbH, Vienna, Austria). This platform includes a custom instrument guide with an integrated spiral fiducial pattern as the robot's end-effector, and it uses intra-operative computed tomography (CT) to register the robot to the patient directly before the intervention. The physician can then use a graphical user interface (GUI) to select a path for percutaneous access, and the robot will automatically align the instrument guide along this path. Potential anatomical targets include the liver, kidney, prostate, and spine. This paper describes the robotic platform, workflow, software, and algorithms used by the system. To demonstrate the algorithmic accuracy and suitability of the custom instrument guide, we also present results from experiments as well as estimates of the maximum error between target and instrument tip.

  13. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    PubMed Central

    Cheung, Kit; Schultz, Simon R.; Luk, Wayne

    2016-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542

  14. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays

    PubMed Central

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Wetzstein, Gordon

    2017-01-01

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one. PMID:28193871

  15. GABBs: Cyberinfrastructure for Self-Service Geospatial Data Exploration, Computation, and Sharing

    NASA Astrophysics Data System (ADS)

    Song, C. X.; Zhao, L.; Biehl, L. L.; Merwade, V.; Villoria, N.

    2016-12-01

    Geospatial data are present everywhere today with the proliferation of location-aware computing devices. This is especially true in the scientific community where large amounts of data are driving research and education activities in many domains. Collaboration over geospatial data, for example, in modeling, data analysis and visualization, must still overcome the barriers of specialized software and expertise among other challenges. In addressing these needs, the Geospatial data Analysis Building Blocks (GABBs) project aims at building geospatial modeling, data analysis and visualization capabilities in an open source web platform, HUBzero. Funded by NSF's Data Infrastructure Building Blocks initiative, GABBs is creating a geospatial data architecture that integrates spatial data management, mapping and visualization, and interfaces in the HUBzero platform for scientific collaborations. The geo-rendering-enabled Rappture toolkit, a generic Python mapping library, geospatial data exploration and publication tools, and an integrated online geospatial data management solution are among the software building blocks from the project. The GABBs software will be available through Amazon's AWS Marketplace as VM images and as open source. Hosting services are also available to the user community. The outcome of the project will enable researchers and educators to self-manage their scientific data, rapidly create GIS-enabled tools, share geospatial data and tools on the web, and build dynamic workflows connecting data and tools, all without requiring significant software development skills, GIS expertise or IT administrative privileges. This presentation will describe the GABBs architecture, toolkits and libraries, and showcase the scientific use cases that utilize GABBs capabilities, as well as the challenges and solutions for GABBs to interoperate with other cyberinfrastructure platforms.

  16. Low Cost Electroencephalographic Acquisition Amplifier to serve as Teaching and Research Tool

    PubMed Central

    Jain, Ankit; Kim, Insoo; Gluckman, Bruce J.

    2012-01-01

    We describe the development and testing of a low-cost, easily constructed electroencephalographic acquisition amplifier for noninvasive Brain Computer Interface (BCI) education and research. The acquisition amplifier is constructed from newly available off-the-shelf integrated circuit components, and readily sends a 24-bit data stream via a USB bus to a computer platform. We demonstrate here the hardware’s use in the analysis of a visually evoked P300 paradigm for a choose-one-of-eight task. This clearly shows the applicability of this system as a low-cost teaching and research tool. PMID:22254699

  17. Single-chip photonic transceiver based on bulk-silicon, as a chip-level photonic I/O platform for optical interconnects

    PubMed Central

    Kim, Gyungock; Park, Hyundai; Joo, Jiho; Jang, Ki-Seok; Kwack, Myung-Joon; Kim, Sanghoon; Gyoo Kim, In; Hyuk Oh, Jin; Ae Kim, Sun; Park, Jaegyu; Kim, Sanggi

    2015-01-01

    When silicon photonic integrated circuits (PICs), defined for transmitting and receiving optical data, are successfully monolithically integrated into major silicon electronic chips as chip-level optical I/Os (inputs/outputs), it will bring innovative changes in data computing and communications. Here, we propose a new photonic integration scheme: a single-chip optical transceiver based on a monolithically integrated vertical photonic I/O device set, including a light source, on bulk silicon. This scheme can solve the major issues which impede practical implementation of silicon-based chip-level optical interconnects. We demonstrated a prototype of a single-chip photonic transceiver with monolithically integrated vertical-illumination-type Ge-on-Si photodetectors and VCSELs-on-Si on the same bulk-silicon substrate operating up to 50 Gb/s and 20 Gb/s, respectively. The prototype realized 20 Gb/s low-power chip-level optical interconnects for λ ~ 850 nm between fabricated chips. This approach can have a significant impact on practical electronic-photonic integration in high-performance computers (HPC), CPU-memory interfaces, hybrid memory cubes, and LAN, SAN, data center and network applications. PMID:26061463

  18. Analysis of lipid experiments (ALEX): a software framework for analysis of high-resolution shotgun lipidomics data.

    PubMed

    Husen, Peter; Tarasov, Kirill; Katafiasz, Maciej; Sokol, Elena; Vogt, Johannes; Baumgart, Jan; Nitsch, Robert; Ekroos, Kim; Ejsing, Christer S

    2013-01-01

    Global lipidomics analysis across large sample sizes produces high-content datasets that require dedicated software tools supporting lipid identification and quantification, efficient data management and lipidome visualization. Here we present a novel software-based platform for streamlined data processing, management and visualization of shotgun lipidomics data acquired using high-resolution Orbitrap mass spectrometry. The platform features the ALEX framework designed for automated identification and export of lipid species intensity directly from proprietary mass spectral data files, and an auxiliary workflow using database exploration tools for integration of sample information, computation of lipid abundance and lipidome visualization. A key feature of the platform is the organization of lipidomics data in "database table format" which provides the user with an unsurpassed flexibility for rapid lipidome navigation using selected features within the dataset. To demonstrate the efficacy of the platform, we present a comparative neurolipidomics study of cerebellum, hippocampus and somatosensory barrel cortex (S1BF) from wild-type and knockout mice devoid of the putative lipid phosphate phosphatase PRG-1 (plasticity related gene-1). The presented framework is generic, extendable to processing and integration of other lipidomic data structures, can be interfaced with post-processing protocols supporting statistical testing and multivariate analysis, and can serve as an avenue for disseminating lipidomics data within the scientific community. The ALEX software is available at www.msLipidomics.info.
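
    The "database table format" idea can be illustrated with a tiny pandas example: one row per lipid species per sample, which makes filtering by any feature and pivoting for per-sample comparison straightforward. The column names and values below are hypothetical and are not the actual ALEX schema.

```python
# Toy "database table format" example: one row per lipid species per
# sample. Column names and values are invented for illustration.
import pandas as pd

rows = [
    {"sample": "cerebellum_wt",  "lipid_class": "PC", "species": "PC 34:1", "intensity": 1.8e6},
    {"sample": "cerebellum_ko",  "lipid_class": "PC", "species": "PC 34:1", "intensity": 1.1e6},
    {"sample": "hippocampus_wt", "lipid_class": "PE", "species": "PE 38:4", "intensity": 7.5e5},
    {"sample": "hippocampus_ko", "lipid_class": "PE", "species": "PE 38:4", "intensity": 9.0e5},
]
table = pd.DataFrame(rows)

# filter by any feature of the dataset, then pivot for per-sample comparison
pc_only = table[table["lipid_class"] == "PC"]
abundance = table.pivot_table(index="species", columns="sample",
                              values="intensity", aggfunc="sum")
print(abundance)
```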

  19. Rodent motor and neuropsychological behaviour measured in home cages using the integrated modular platform SmartCage™.

    PubMed

    Khroyan, Taline V; Zhang, Jingxi; Yang, Liya; Zou, Bende; Xie, James; Pascual, Conrado; Malik, Adam; Xie, Julian; Zaveri, Nurulain T; Vazquez, Jacqueline; Polgar, Willma; Toll, Lawrence; Fang, Jidong; Xie, Xinmin

    2012-07-01

    1. To facilitate investigation of diverse rodent behaviours in rodents' home cages, we have developed an integrated modular platform, the SmartCage™ system (AfaSci, Inc. Burlingame, CA, USA), which enables automated neurobehavioural phenotypic analysis and in vivo drug screening in a relatively higher-throughput and more objective manner. 2. The individual platform consists of an infrared array, a vibration floor sensor and a variety of modular devices. One computer can simultaneously operate up to 16 platforms via USB cables. 3. The SmartCage™ detects drug-induced increases and decreases in activity levels, as well as changes in movement patterns. Wake and sleep states of mice can be detected using the vibration floor sensor. The arousal state classification achieved up to 98% accuracy compared with results obtained by electroencephalography and electromyography. More complex behaviours, including motor coordination, anxiety-related behaviours and social approach behaviour, can be assessed using appropriate modular devices, and the results obtained are comparable with results obtained using conventional methods. 4. In conclusion, the SmartCage™ system provides an automated and accurate tool to quantify various rodent behaviours in a 'stress-free' environment. This system, combined with the validated testing protocols, offers a powerful tool kit for transgenic phenotyping and in vivo drug screening.

  20. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets.

  1. Tunable quantum interference in a 3D integrated circuit.

    PubMed

    Chaboyer, Zachary; Meany, Thomas; Helt, L G; Withford, Michael J; Steel, M J

    2015-04-27

    Integrated photonics promises solutions to questions of stability, complexity, and size in quantum optics. Advances in tunable and non-planar integrated platforms, such as laser-inscribed photonics, continue to bring the realisation of quantum advantages in computation and metrology ever closer, perhaps most easily seen in multi-path interferometry. Here we demonstrate control of two-photon interference in a chip-scale 3D multi-path interferometer, showing a reduced periodicity and enhanced visibility compared to single photon measurements. Observed non-classical visibilities are widely tunable, and explained well by theoretical predictions based on classical measurements. With these predictions we extract Fisher information approaching a theoretical maximum. Our results open a path to quantum enhanced phase measurements.

  2. The tracking performance of distributed recoverable flight control systems subject to high intensity radiated fields

    NASA Astrophysics Data System (ADS)

    Wang, Rui

    It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, either from natural or man-made sources, can change data values on digital buses and memory and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting nearly-simultaneously multiple fault containment regions, and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus will be on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results for scenarios that could not be physically tested is also presented.

  3. Rapid Reconstitution Packages (RRPs) implemented by integration of computational fluid dynamics (CFD) and 3D printed microfluidics.

    PubMed

    Chi, Albert; Curi, Sebastian; Clayton, Kevin; Luciano, David; Klauber, Kameron; Alexander-Katz, Alfredo; D'hers, Sebastian; Elman, Noel M

    2014-08-01

    Rapid Reconstitution Packages (RRPs) are portable platforms that integrate microfluidics for rapid reconstitution of lyophilized drugs. Rapid reconstitution of lyophilized drugs using standard vials and syringes is an error-prone process. RRPs were designed using computational fluid dynamics (CFD) techniques to optimize fluidic structures for rapid mixing and integrating physical properties of targeted drugs and diluents. Devices were manufactured using stereo lithography 3D printing for micrometer structural precision and rapid prototyping. Tissue plasminogen activator (tPA) was selected as the initial model drug to test the RRPs as it is unstable in solution. tPA is a thrombolytic drug, stored in lyophilized form, required in emergency settings for which rapid reconstitution is of critical importance. RRP performance and drug stability were evaluated by high-performance liquid chromatography (HPLC) to characterize release kinetics. In addition, enzyme-linked immunosorbent assays (ELISAs) were performed to test for drug activity after the RRPs were exposed to various controlled temperature conditions. Experimental results showed that RRPs provided effective reconstitution of tPA that strongly correlated with CFD results. Simulation and experimental results show that release kinetics can be adjusted by tuning the device structural dimensions and diluent drug physical parameters. The design of RRPs can be tailored for a number of applications by taking into account physical parameters of the active pharmaceutical ingredients (APIs), excipients, and diluents. RRPs are portable platforms that can be utilized for reconstitution of emergency drugs in time-critical therapies.

  4. [Exploiture and application of an internet-based Computation Platform for Integrative Pharmacology of Traditional Chinese Medicine].

    PubMed

    Xu, Hai-Yu; Liu, Zhen-Ming; Fu, Yan; Zhang, Yan-Qiong; Yu, Jian-Jun; Guo, Fei-Fei; Tang, Shi-Huan; Lv, Chuan-Yu; Su, Jin; Cui, Ru-Yi; Yang, Hong-Jun

    2017-09-01

    Recently, integrative pharmacology (IP) has become a pivotal paradigm for the modernization of traditional Chinese medicine (TCM) and combinatorial drug discovery. IP is an interdisciplinary science that establishes the in vitro and in vivo correlation between the absorption, distribution, metabolism, and excretion/pharmacokinetic (ADME/PK) profiles of TCM and the molecular networks of disease by integrating multi-disciplinary, multi-stage knowledge. In the present study, an internet-based Computation Platform for IP of TCM (TCM-IP, www.tcmip.cn) is established to promote the development of this emerging discipline. TCM big-data resources form an important foundation for TCM-IP, including the Chinese Medicine Formula Database, Chinese Medical Herbs Database, Chemical Database of Chinese Medicine, and Target Database for Diseases and Symptoms. Meanwhile, data mining and bioinformatics approaches are critical technologies for TCM-IP, including identification of TCM constituents, ADME prediction, target prediction for TCM constituents, and network construction and analysis. Furthermore, network beautification and customization features are provided to meet users' requirements. We believe that TCM-IP is a useful tool for identifying the active constituents of TCM and their potential molecular mechanisms of therapeutic action, and that it can be widely applied in quality evaluation, clinical repositioning, scientific discovery based on original thinking, prescription compatibility, and new TCM drug development.
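
    As a hedged illustration of the network construction and analysis step mentioned above, the sketch below builds a toy herb-compound-target-disease graph with networkx and ranks nodes by degree centrality. The edge list is invented for illustration and is not data from TCM-IP.

```python
# Toy herb-compound-target-disease network, ranked by degree centrality.
# The edges are made up for illustration only.
import networkx as nx

edges = [
    ("Herb A", "Compound 1"), ("Herb A", "Compound 2"),
    ("Compound 1", "Target TNF"), ("Compound 2", "Target IL6"),
    ("Compound 2", "Target TNF"), ("Target TNF", "Disease X"),
    ("Target IL6", "Disease X"),
]
g = nx.Graph()
g.add_edges_from(edges)

# degree centrality highlights hub targets shared by many constituents
centrality = nx.degree_centrality(g)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:12s} {score:.2f}")
```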

  5. Description of the NCAR Community Climate Model (CCM3). Technical note

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiehl, J.T.; Hack, J.J.; Bonan, G.B.

    This report presents the details of the governing equations, physical parameterizations, and numerical algorithms defining the version of the NCAR Community Climate Model designated CCM3. The material provides an overview of the major model components, and the way in which they interact as the numerical integration proceeds. This version of the CCM incorporates significant improvements to the physics package, new capabilities such as the incorporation of a slab ocean component, and a number of enhancements to the implementation (e.g., the ability to integrate the model on parallel distributed-memory computational platforms).

  6. PaaS for web applications with OpenShift Origin

    NASA Astrophysics Data System (ADS)

    Lossent, A.; Rodriguez Peon, A.; Wagner, A.

    2017-10-01

    The CERN Web Frameworks team has deployed OpenShift Origin to facilitate deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented toward web applications. We will review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.

  7. Health care informatics research implementation of the VA-DHCP Spanish version for Latin America.

    PubMed Central

    Samper, R.; Marin, C. J.; Ospina, J. A.; Varela, C. A.

    1992-01-01

    The VA DHCP hospital computer program represents an integral solution to the complex clinical and administrative functions of any hospital worldwide. Developed by the Veterans Administration, it has until recently run exclusively on mainframe platforms. The recent implementation on PCs opens the opportunity for use in Latin America. A detailed description of the strategy for the Spanish-language, local implementation in Colombia is given. PMID:1482994

  8. Health care informatics research implementation of the VA-DHCP Spanish version for Latin America.

    PubMed

    Samper, R; Marin, C J; Ospina, J A; Varela, C A

    1992-01-01

    The VA DHCP hospital computer program represents an integral solution to the complex clinical and administrative functions of any hospital worldwide. Developed by the Veterans Administration, it has until recently run exclusively on mainframe platforms. The recent implementation on PCs opens the opportunity for use in Latin America. A detailed description of the strategy for the Spanish-language, local implementation in Colombia is given.

  9. The BioExtract Server: a web-based bioinformatic workflow platform

    PubMed Central

    Lushbough, Carol M.; Jennewein, Douglas M.; Brendel, Volker P.

    2011-01-01

    The BioExtract Server (bioextract.org) is an open, web-based system designed to aid researchers in the analysis of genomic data by providing a platform for the creation of bioinformatic workflows. Scientific workflows are created within the system by recording tasks performed by the user. These tasks may include querying multiple, distributed data sources, saving query results as searchable data extracts, and executing local and web-accessible analytic tools. The series of recorded tasks can then be saved as a reproducible, sharable workflow available for subsequent execution with the original or modified inputs and parameter settings. Integrated data resources include interfaces to the National Center for Biotechnology Information (NCBI) nucleotide and protein databases, the European Molecular Biology Laboratory (EMBL-Bank) non-redundant nucleotide database, the Universal Protein Resource (UniProt), and the UniProt Reference Clusters (UniRef) database. The system offers access to numerous preinstalled, curated analytic tools and also provides researchers with the option of selecting computational tools from a large list of web services including the European Molecular Biology Open Software Suite (EMBOSS), BioMoby, and the Kyoto Encyclopedia of Genes and Genomes (KEGG). The system further allows users to integrate local command line tools residing on their own computers through a client-side Java applet. PMID:21546552

  10. MRMer, an interactive open source and cross-platform system for data extraction and visualization of multiple reaction monitoring experiments.

    PubMed

    Martin, Daniel B; Holzman, Ted; May, Damon; Peterson, Amelia; Eastham, Ashley; Eng, Jimmy; McIntosh, Martin

    2008-11-01

    Multiple reaction monitoring (MRM) mass spectrometry identifies and quantifies specific peptides in a complex mixture with very high sensitivity and speed and thus has promise for the high-throughput screening of clinical samples for candidate biomarkers. We have developed an interactive software platform, called MRMer, for managing highly complex MRM-MS experiments, including quantitative analyses using heavy/light isotopic peptide pairs. MRMer parses and extracts information from MS files encoded in the platform-independent mzXML data format. It extracts and infers precursor-product ion transition pairings, computes integrated ion intensities, and permits rapid visual curation for analyses exceeding 1000 precursor-product pairs. Results can be easily output for quantitative comparison of consecutive runs. Additionally, MRMer incorporates features that permit the quantitative analysis of experiments including heavy and light isotopic peptide pairs. MRMer is open source and provided under the Apache 2.0 license.
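
    The integrated ion intensities mentioned above are essentially areas under transition chromatograms. The sketch below computes such an area with trapezoidal integration and forms a heavy/light ratio; the Gaussian traces are synthetic and the calculation is a generic illustration, not MRMer's internal code.

```python
# Integrate the chromatographic trace of a precursor->product transition
# and form a heavy/light ratio. The traces below are synthetic peaks.
import numpy as np

def integrated_intensity(times, intensities):
    return np.trapz(intensities, times)   # area under the transition trace

t = np.linspace(0, 60, 601)                     # retention time, seconds
light = 1e5 * np.exp(-((t - 30) / 3.0) ** 2)    # light peptide trace
heavy = 4e4 * np.exp(-((t - 30) / 3.0) ** 2)    # heavy (isotope) trace

ratio = integrated_intensity(t, heavy) / integrated_intensity(t, light)
print(f"heavy/light = {ratio:.2f}")
```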

  11. Reconfigurable intelligent sensors for health monitoring: a case study of pulse oximeter sensor.

    PubMed

    Jovanov, E; Milenkovic, A; Basham, S; Clark, D; Kelley, D

    2004-01-01

    Design of low-cost, miniature, lightweight, ultra low-power, intelligent sensors capable of customization and seamless integration into a body area network for health monitoring applications presents one of the most challenging tasks for system designers. To answer this challenge we propose a reconfigurable intelligent sensor platform featuring a low-power microcontroller, a low-power programmable logic device, a communication interface, and a signal conditioning circuit. The proposed solution promises a cost-effective, flexible platform that allows easy customization, run-time reconfiguration, and energy-efficient computation and communication. The development of a common platform for multiple physical sensors and a repository of both software procedures and soft intellectual property cores for hardware acceleration will increase reuse and alleviate costs of transition to a new generation of sensors. As a case study, we present an implementation of a reconfigurable pulse oximeter sensor.
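
    For context, the sketch below shows the classic "ratio of ratios" computation a pulse oximeter performs on its red and infrared photoplethysmogram signals. The linear calibration (110 - 25*R) is a textbook approximation; a real sensor platform such as the one described here would use device-specific calibration curves.

```python
# Ratio-of-ratios SpO2 estimate from red and infrared PPG signals.
# The linear calibration is an illustrative textbook approximation.
import numpy as np

def spo2_estimate(red, infrared):
    def ac_dc(signal):
        dc = np.mean(signal)
        ac = np.ptp(signal)            # peak-to-peak pulsatile component
        return ac / dc
    r = ac_dc(red) / ac_dc(infrared)   # ratio of ratios
    return 110.0 - 25.0 * r            # illustrative empirical calibration

t = np.linspace(0, 5, 500)             # 5 s of synthetic PPG at 100 Hz
red = 1.00 + 0.02 * np.sin(2 * np.pi * 1.2 * t)
ir  = 1.00 + 0.03 * np.sin(2 * np.pi * 1.2 * t)
print(f"SpO2 ~ {spo2_estimate(red, ir):.1f}%")
```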

  12. Patterns across multiple memories are identified over time.

    PubMed

    Richards, Blake A; Xia, Frances; Santoro, Adam; Husse, Jana; Woodin, Melanie A; Josselyn, Sheena A; Frankland, Paul W

    2014-07-01

    Memories are not static but continue to be processed after encoding. This is thought to allow the integration of related episodes via the identification of patterns. Although this idea lies at the heart of contemporary theories of systems consolidation, it has yet to be demonstrated experimentally. Using a modified water-maze paradigm in which platforms are drawn stochastically from a spatial distribution, we found that mice were better at matching platform distributions 30 d compared to 1 d after training. Post-training time-dependent improvements in pattern matching were associated with increased sensitivity to new platforms that conflicted with the pattern. Increased sensitivity to pattern conflict was reduced by pharmacogenetic inhibition of the medial prefrontal cortex (mPFC). These results indicate that pattern identification occurs over time, which can lead to conflicts between new information and existing knowledge that must be resolved, in part, by computations carried out in the mPFC.

  13. Robust integration schemes for junction-based modulators in a 200mm CMOS compatible silicon photonic platform (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Szelag, Bertrand; Abraham, Alexis; Brision, Stéphane; Gindre, Paul; Blampey, Benjamin; Myko, André; Olivier, Segolene; Kopp, Christophe

    2017-05-01

    Silicon photonics is becoming a reality for next-generation communication systems addressing the increasing needs of HPC (High Performance Computing) systems and datacenters. CMOS-compatible photonic platforms are being developed in many foundries, integrating passive and active devices. The use of existing, qualified microelectronics processes guarantees cost-efficient and mature photonic technologies. Meanwhile, photonic devices have their own fabrication constraints, different from those of CMOS devices, which can affect their performance. In this paper, we address the integration of a PN-junction Mach-Zehnder modulator in a 200 mm CMOS-compatible photonic platform. Implantation-based device characteristics are affected by many process variations, among them screening layer thickness, dopant diffusion, and implantation mask overlay. CMOS devices are generally quite robust with respect to these processes thanks to dedicated design rules. For photonic devices the situation is different since, most of the time, doped areas must be carefully located within waveguides, and CMOS solutions like self-alignment to the gate cannot be applied. In this work, we present different robust integration solutions for junction-based modulators. A simulation setup has been built in order to optimize the process conditions. It consists of a Matlab interface coupling process and device electro-optic simulators in order to run many iterations. Variations of modulator characteristics with process parameters are illustrated using this simulation setup. Parameters under study are, for instance, X- and Y-direction lithography shifts and screening oxide and slab thicknesses. A robust process and design approach leading to a PN-junction Mach-Zehnder modulator insensitive to lithography misalignment is then proposed. Simulation results are compared with experimental data. Indeed, various modulators have been fabricated with different process conditions and integration schemes. Extensive electro-optic characterization of these components will be presented.

  14. Integrating Urban Infrastructure and Health System Impact Modeling for Disasters and Mass-Casualty Events

    NASA Astrophysics Data System (ADS)

    Balbus, J. M.; Kirsch, T.; Mitrani-Reiser, J.

    2017-12-01

    Over recent decades, natural disasters and mass-casualty events in United States have repeatedly revealed the serious consequences of health care facility vulnerability and the subsequent ability to deliver care for the affected people. Advances in predictive modeling and vulnerability assessment for health care facility failure, integrated infrastructure, and extreme weather events have now enabled a more rigorous scientific approach to evaluating health care system vulnerability and assessing impacts of natural and human disasters as well as the value of specific interventions. Concurrent advances in computing capacity also allow, for the first time, full integration of these multiple individual models, along with the modeling of population behaviors and mass casualty responses during a disaster. A team of federal and academic investigators led by the National Center for Disaster Medicine and Public Health (NCDMPH) is developing a platform for integrating extreme event forecasts, health risk/impact assessment and population simulations, critical infrastructure (electrical, water, transportation, communication) impact and response models, health care facility-specific vulnerability and failure assessments, and health system/patient flow responses. The integration of these models is intended to develop much greater understanding of critical tipping points in the vulnerability of health systems during natural and human disasters and build an evidence base for specific interventions. Development of such a modeling platform will greatly facilitate the assessment of potential concurrent or sequential catastrophic events, such as a terrorism act following a severe heat wave or hurricane. This presentation will highlight the development of this modeling platform as well as applications not just for the US health system, but also for international science-based disaster risk reduction efforts, such as the Sendai Framework and the WHO SMART hospital project.

  15. Energy Consumption Management of Virtual Cloud Computing Platform

    NASA Astrophysics Data System (ADS)

    Li, Lin

    2017-11-01

    Research on energy consumption management for virtual cloud computing platforms requires a deeper understanding of how virtual machines and the underlying cloud platform consume energy; only then can the problems of energy consumption management be solved. The key challenge lies in data centers with high energy consumption, which calls for new scientific techniques. Virtualization technology and cloud computing have become powerful tools in everyday life, work and production because of their many advantages: they are developing rapidly and offer very high resource utilization rates, making them indispensable in the continually evolving information age. This paper summarizes, explains and further analyzes the energy consumption management issues of virtual cloud computing platforms, providing a clearer understanding of energy consumption management and offering help to various aspects of daily life and work.

  16. Digital Radiography and Computed Tomography Project -- Fully Integrated Linear Detector Array Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tim Roney; Robert Seifert; Bob Pink

    2011-09-01

    The field-portable Digital Radiography and Computed Tomography (DRCT) x-ray inspection systems developed for the Project Manager for NonStockpile Chemical Materiel (PMNSCM) over the past 13 years have used linear diode detector arrays from two manufacturers: Thomson and Thales. These two manufacturers no longer produce this type of detector. In the interest of ensuring the long-term viability of the portable DRCT single munitions inspection systems and to improve the imaging capabilities, this project has been investigating improved, commercially available detectors. During FY-10, detectors were evaluated and one in particular, manufactured by Detection Technologies (DT), Inc., was acquired for possible integration into the DRCT systems. The remainder of this report describes the work performed in FY-11 to complete evaluations and fully integrate the detector onto a representative DRCT platform.

  17. Web-based Tsunami Early Warning System with instant Tsunami Propagation Calculations in the GPU Cloud

    NASA Astrophysics Data System (ADS)

    Hammitzsch, M.; Spazier, J.; Reißland, S.

    2014-12-01

    Usually, tsunami early warning and mitigation systems (TWS or TEWS) are based on several software components deployed in a client-server based infrastructure. The vast majority of systems importantly include desktop-based clients with a graphical user interface (GUI) for the operators in early warning centers. However, in times of cloud computing and ubiquitous computing, the use of concepts and paradigms introduced by continuously evolving approaches in information and communications technology (ICT) has to be considered even for early warning systems (EWS). Based on the experiences and the knowledge gained in three research projects - 'German Indonesian Tsunami Early Warning System' (GITEWS), 'Distant Early Warning System' (DEWS), and 'Collaborative, Complex, and Critical Decision-Support in Evolving Crises' (TRIDEC) - new technologies are exploited to implement a cloud-based and web-based prototype to open up new prospects for EWS. This prototype, named 'TRIDEC Cloud', merges several complementary external and in-house cloud-based services into one platform for automated background computation with graphics processing units (GPU), for web-mapping of hazard-specific geospatial data, and for serving relevant functionality to handle, share, and communicate threat-specific information in a collaborative and distributed environment. The prototype in its current version addresses tsunami early warning and mitigation. The integration of GPU-accelerated tsunami simulation computations has been an integral part of this prototype to foster early warning with on-demand tsunami predictions based on actual source parameters. However, the platform is meant for researchers around the world to make use of the cloud-based GPU computation to analyze other types of geohazards and natural hazards and react upon the computed situation picture with a web-based GUI in a web browser at remote sites. The current website is an early alpha version for demonstration purposes to give the concept a whirl and to shape science's future. Further functionality, improvements and possible profound changes have to be implemented successively based on the users' evolving needs.

  18. Cloud Computing-based Platform for Drought Decision-Making using Remote Sensing and Modeling Products: Preliminary Results for Brazil

    NASA Astrophysics Data System (ADS)

    Vivoni, E.; Mascaro, G.; Shupe, J. W.; Hiatt, C.; Potter, C. S.; Miller, R. L.; Stanley, J.; Abraham, T.; Castilla-Rubio, J.

    2012-12-01

    Droughts and their hydrological consequences are a major threat to food security throughout the world. In arid and semiarid regions dependent on irrigated agriculture, prolonged droughts lead to significant and recurring economic and social losses. In this contribution, we present preliminary results on integrating a set of multi-resolution drought indices into a cloud computing-based visualization platform. We focused our initial efforts on Brazil due to a severe, on-going drought in a large agricultural area in the northeastern part of the country. The online platform includes drought products developed from: (1) a MODIS-based water stress index (WSI) based on inferences from normalized difference vegetation index and land surface temperature fields, (2) a volumetric water content (VWC) index obtained from application of the NASA CASA model, and (3) a set of AVHRR-based vegetation health indices obtained from NOAA/NESDIS. The drought indices are also presented in terms of anomalies with respect to a baseline period. Since our main objective is to engage stakeholders and decision-makers in Brazil, we incorporated other relevant geospatial data into the platform, including irrigation areas, dams and reservoirs, administrative units and annual climate information. We will also present a set of use cases developed to help stakeholders explore, query and provide feedback that allowed fine-tuning of the drought product delivery, presentation and analysis tools. Finally, we discuss potential next steps in development of the online platform, including applications at finer resolutions in specific basins and at a coarser global scale.
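
    The index arithmetic behind such drought products can be illustrated with a short sketch: NDVI from red and near-infrared reflectance, and a standardized anomaly against a baseline climatology. The arrays below are synthetic stand-ins for MODIS fields, and the exact WSI formulation used by the platform is not reproduced here.

```python
# NDVI from red/near-infrared reflectance and a standardized anomaly
# against a baseline climatology. Arrays are synthetic stand-ins.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def standardized_anomaly(current, baseline_stack):
    mean = baseline_stack.mean(axis=0)
    std = baseline_stack.std(axis=0) + 1e-9
    return (current - mean) / std        # negative values indicate stress

rng = np.random.default_rng(0)
baseline = rng.uniform(0.3, 0.7, size=(10, 50, 50))   # 10-year NDVI baseline
nir = rng.uniform(0.3, 0.5, (50, 50))
red = rng.uniform(0.05, 0.15, (50, 50))
current_ndvi = ndvi(nir, red)
print(standardized_anomaly(current_ndvi, baseline).mean())
```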

  19. iSPHERE - A New Approach to Collaborative Research and Cloud Computing

    NASA Astrophysics Data System (ADS)

    Al-Ubaidi, T.; Khodachenko, M. L.; Kallio, E. J.; Harry, A.; Alexeev, I. I.; Vázquez-Poletti, J. L.; Enke, H.; Magin, T.; Mair, M.; Scherf, M.; Poedts, S.; De Causmaecker, P.; Heynderickx, D.; Congedo, P.; Manolescu, I.; Esser, B.; Webb, S.; Ruja, C.

    2015-10-01

    The project iSPHERE (integrated Scientific Platform for HEterogeneous Research and Engineering) that has been proposed for Horizon 2020 (EINFRA-9-2015, [1]) aims at creating a next-generation Virtual Research Environment (VRE) that embraces existing and emerging technologies and standards in order to provide a versatile platform for scientific investigations and collaboration. The presentation will introduce the large project consortium, provide a comprehensive overview of iSPHERE's basic concepts and approaches and outline general user requirements that the VRE will strive to satisfy. An overview of the envisioned architecture will be given, focusing on the adapted Service Bus concept, i.e. the "Scientific Service Bus" as it is called in iSPHERE. The bus will act as a central hub for all communication and user access, and will be implemented in the course of the project. The agile approach [2] that has been chosen for detailed elaboration and documentation of user requirements, as well as for the actual implementation of the system, will be outlined and its motivation and basic structure will be discussed. The presentation will show which user communities will benefit and which concrete problems that scientific investigations face today will be tackled by the system. Another focus of the presentation is iSPHERE's seamless integration of cloud computing resources and how these will benefit scientific modeling teams by providing a reliable and web-based environment for cloud-based model execution, storage of results, and comparison with measurements, including fully web-based tools for data mining, analysis and visualization. Also the envisioned creation of a dedicated data model for experimental plasma physics will be discussed. It will be shown why the Scientific Service Bus provides an ideal basis to integrate a number of data models and communication protocols and to provide mechanisms for data exchange across multiple and even multidisciplinary platforms.

  20. LXtoo: an integrated live Linux distribution for the bioinformatics community

    PubMed Central

    2012-01-01

    Background Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356

  1. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    PubMed

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  2. The Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  3. Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services

    NASA Astrophysics Data System (ADS)

    Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.

    Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves to near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that will track satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing CDMs outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. The migration of SpaceNav tools and algorithms into the Google Cloud Platform will be discussed, including the trials and tribulations involved. Information will be shared on how and why certain cloud products were used as well as integration techniques that were implemented. Key items to be presented are: (1) scientific algorithms and SpaceNav tools integrated into a scalable architecture: (a) maneuver planning, (b) parallel processing, (c) Monte Carlo simulations, (d) optimization algorithms, (e) software application development/integration into the Google Cloud Platform; and (2) Compute Engine processing: (a) Application Engine automated processing, (b) performance testing and performance scalability, (c) Cloud MySQL databases and database scalability, (d) cloud data storage, (e) redundancy and availability.
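
    The sketch below (Python, standard library plus NumPy) gives a toy flavor of the parallel Monte Carlo item above: a collision probability is estimated by sampling a perturbed miss vector across several worker processes. All numbers are illustrative, and the code does not represent SpaceNav's algorithms or CDM processing.

        import numpy as np
        from multiprocessing import Pool

        # Illustrative parameters (not real conjunction data): 1-sigma relative position
        # uncertainty per axis in metres, nominal miss vector, and combined hard-body radius.
        SIGMA = np.array([120.0, 350.0, 90.0])
        MISS_VECTOR = np.array([400.0, 150.0, 80.0])
        HARD_BODY_RADIUS = 20.0

        def mc_batch(args):
            """Count samples whose perturbed miss distance falls below the hard-body radius."""
            n_samples, seed = args
            rng = np.random.default_rng(seed)
            offsets = rng.normal(0.0, SIGMA, size=(n_samples, 3))
            miss = np.linalg.norm(MISS_VECTOR + offsets, axis=1)
            return int((miss < HARD_BODY_RADIUS).sum())

        if __name__ == "__main__":
            batches = [(250_000, seed) for seed in range(8)]
            with Pool(processes=4) as pool:
                hits = sum(pool.map(mc_batch, batches))
            total = sum(n for n, _ in batches)
            print(f"estimated collision probability: {hits / total:.2e}")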

  4. MicroScope in 2017: an expanding and evolving integrated resource for community expertise of microbial genomes

    PubMed Central

    Vallenet, David; Calteau, Alexandra; Cruveiller, Stéphane; Gachet, Mathieu; Lajus, Aurélie; Josso, Adrien; Mercier, Jonathan; Renaux, Alexandre; Rollin, Johan; Rouy, Zoe; Roche, David; Scarpelli, Claude; Médigue, Claudine

    2017-01-01

    The annotation of genomes from NGS platforms needs to be automated and fully integrated. However, maintaining consistency and accuracy in genome annotation is a challenging problem because millions of protein database entries are not assigned reliable functions. This shortcoming limits the knowledge that can be extracted from genomes and metabolic models. Launched in 2005, the MicroScope platform (http://www.genoscope.cns.fr/agc/microscope) is an integrative resource that supports systematic and efficient revision of microbial genome annotation, data management and comparative analysis. Effective comparative analysis requires a consistent and complete view of biological data, and therefore, support for reviewing the quality of functional annotation is critical. MicroScope allows users to analyze microbial (meta)genomes together with post-genomic experiment results, if any (e.g. transcriptomics, re-sequencing of evolved strains, mutant collections, phenotype data). It combines tools and graphical interfaces to analyze genomes and to perform the expert curation of gene functions in a comparative context. Starting with a short overview of the MicroScope system, this paper focuses on some major improvements of the Web interface, mainly for the submission of genomic data, and on original tools and pipelines that have been developed and integrated in the platform: computation of pan-genomes and prediction of biosynthetic gene clusters. Today the resource contains data for more than 6000 microbial genomes, and among the 2700 personal accounts (65% of which are now from foreign countries), 14% of the users are performing expert annotations, on at least a weekly basis, contributing to improving the quality of microbial genome annotations. PMID:27899624

  5. COSBID-M3: a platform for multimodal monitoring, data collection, and research in neurocritical care.

    PubMed

    Wilson, J Adam; Shutter, Lori A; Hartings, Jed A

    2013-01-01

    Neuromonitoring in patients with severe brain trauma and stroke is often limited to intracranial pressure (ICP); advanced neuroscience intensive care units may also monitor brain oxygenation (partial pressure of brain tissue oxygen, PbtO2), electroencephalogram (EEG), cerebral blood flow (CBF), or neurochemistry. For example, cortical spreading depolarizations (CSDs) recorded by electrocorticography (ECoG) are associated with delayed cerebral ischemia after subarachnoid hemorrhage and are an attractive target for novel therapeutic approaches. However, to better understand pathophysiologic relations and realize the potential of multimodal monitoring, a common platform for data collection and integration is needed. We have developed a multimodal system that integrates clinical, research, and imaging data into a single research and development (R&D) platform. Our system is adapted from the widely used BCI2000, a brain-computer interface tool written in C++ that supports over 20 data acquisition systems. It is optimized for real-time analysis of multimodal data using advanced time and frequency domain analyses and is extensible for research development using a combination of C++, MATLAB, and Python languages. Continuous streams of raw and processed data, including BP (blood pressure), ICP, PbtO2, CBF, ECoG, EEG, and patient video, are stored in an open binary data format. Selected events identified in raw (e.g., ICP) or processed (e.g., CSD) measures are displayed graphically, can trigger alarms, or can be sent to researchers or clinicians via text message. For instance, algorithms for automated detection of CSD have been incorporated, and processed ECoG signals are projected onto three-dimensional (3D) brain models based on patient magnetic resonance imaging (MRI) and computed tomographic (CT) scans, allowing real-time correlation of pathoanatomy and cortical function. This platform will provide clinicians and researchers with an advanced tool to investigate pathophysiologic relationships and novel measures of cerebral status, as well as implement treatment algorithms based on such multimodal measures.

  6. Artificial Neuron Based on Integrated Semiconductor Quantum Dot Mode-Locked Lasers

    NASA Astrophysics Data System (ADS)

    Mesaritakis, Charis; Kapsalis, Alexandros; Bogris, Adonis; Syvridis, Dimitris

    2016-12-01

    Neuro-inspired implementations have attracted strong interest as a power efficient and robust alternative to the digital model of computation with a broad range of applications. In particular, neuro-mimetic systems able to produce and process spike-encoding schemes can offer merits like high noise-resiliency and increased computational efficiency. Towards this direction, integrated photonics can be an auspicious platform due to its multi-GHz bandwidth, its high wall-plug efficiency and the strong similarity of its dynamics under excitation with biological spiking neurons. Here, we propose an integrated all-optical neuron based on an InAs/InGaAs semiconductor quantum-dot passively mode-locked laser. The multi-band emission capabilities of these lasers allow, through waveband switching, the emulation of the excitation and inhibition modes of operation. Frequency-response effects, similar to biological neural circuits, are observed just as in a typical two-section excitable laser. The demonstrated optical building block can pave the way for high-speed photonic integrated systems able to address tasks ranging from pattern recognition to cognitive spectrum management and multi-sensory data processing.

  7. Artificial Neuron Based on Integrated Semiconductor Quantum Dot Mode-Locked Lasers

    PubMed Central

    Mesaritakis, Charis; Kapsalis, Alexandros; Bogris, Adonis; Syvridis, Dimitris

    2016-01-01

    Neuro-inspired implementations have attracted strong interest as a power efficient and robust alternative to the digital model of computation with a broad range of applications. In particular, neuro-mimetic systems able to produce and process spike-encoding schemes can offer merits like high noise-resiliency and increased computational efficiency. Towards this direction, integrated photonics can be an auspicious platform due to its multi-GHz bandwidth, its high wall-plug efficiency and the strong similarity of its dynamics under excitation with biological spiking neurons. Here, we propose an integrated all-optical neuron based on an InAs/InGaAs semiconductor quantum-dot passively mode-locked laser. The multi-band emission capabilities of these lasers allow, through waveband switching, the emulation of the excitation and inhibition modes of operation. Frequency-response effects, similar to biological neural circuits, are observed just as in a typical two-section excitable laser. The demonstrated optical building block can pave the way for high-speed photonic integrated systems able to address tasks ranging from pattern recognition to cognitive spectrum management and multi-sensory data processing. PMID:27991574

  8. Web services as applications' integration tool: QikProp case study.

    PubMed

    Laoui, Abdel; Polyakov, Valery R

    2011-07-15

    Web services are a technology that enables the integration of applications running on different platforms, primarily by using XML to enable communication among different computers over the Internet. A large number of applications were designed as stand-alone systems before the concept of Web services was introduced, and it is a challenge to integrate them into larger computational networks. A generally applicable method of wrapping stand-alone applications as Web services was developed and is described. To test the technology, it was applied to QikProp for DOS (Windows). Although performance of the application did not change when it was delivered as a Web service, this form of deployment offered several advantages, such as simplified and centralized maintenance, a smaller number of licenses, and practically no training for the end user. Because almost any legacy application can be wrapped as a Web service using the described approach, this form of delivery may be recommended as a global alternative to traditional deployment solutions. Copyright © 2011 Wiley Periodicals, Inc.
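
    A minimal sketch of the general wrapping idea, using only the Python standard library, is shown below: an HTTP endpoint feeds a request body to a stand-alone command-line program and returns its output. The executable name is hypothetical, and the published work used a different (XML-based Web service) stack; this illustrates the pattern, not the authors' implementation.

        import json
        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer

        # 'legacy_tool' is a stand-in for any stand-alone command-line application;
        # it is hypothetical and would be replaced by the real executable path.
        LEGACY_COMMAND = ["legacy_tool", "--input", "-"]

        class WrapperHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                payload = self.rfile.read(int(self.headers.get("Content-Length", 0)))
                # Run the legacy program, feeding the request body to its stdin.
                result = subprocess.run(LEGACY_COMMAND, input=payload,
                                        capture_output=True, timeout=60)
                body = json.dumps({"returncode": result.returncode,
                                   "stdout": result.stdout.decode(errors="replace")}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), WrapperHandler).serve_forever()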

  9. The EPA CompTox Chemistry Dashboard - an online resource ...

    EPA Pesticide Factsheets

    The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data driven approaches that integrate chemistry, exposure and biological data. As an outcome of these efforts the National Center for Computational Toxicology (NCCT) has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences including high-throughput in vitro screening data, in vivo and functional use data, exposure models and chemical databases with associated properties. A series of software applications and databases have been produced over the past decade to deliver these data. Recent work has focused on the development of a new architecture that assembles the resources into a single platform. With a focus on delivering access to Open Data streams, web service integration accessibility and a user-friendly web application the CompTox Dashboard provides access to data associated with ~720,000 chemical substances. These data include research data in the form of bioassay screening data associated with the ToxCast program, experimental and predicted physicochemical properties, product and functional use information and related data of value to environmental scientists. This presentation will provide an overview of the CompTox Dashboard and its va

  10. Development of process control capability through the Browns Ferry Integrated Computer System using Reactor Water Clanup System as an example. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, J.; Mowrey, J.

    1995-12-01

    This report describes the design, development and testing of process controls for selected system operations in the Browns Ferry Nuclear Plant (BFNP) Reactor Water Cleanup System (RWCU) using a Computer Simulation Platform which simulates the RWCU System and the BFNP Integrated Computer System (ICS). This system was designed to demonstrate the feasibility of the soft control (video touch screen) of nuclear plant systems through an operator console. The BFNP Integrated Computer System, which has recently been installed at BFNP Unit 2, was simulated to allow for operator control functions of the modeled RWCU system. The BFNP Unit 2 RWCU system was simulated using the RELAP5 Thermal/Hydraulic Simulation Model, which provided the steady-state and transient RWCU process variables and simulated the response of the system to control system inputs. Descriptions of the hardware and software developed are also included in this report, as are the testing and acceptance program and results. A discussion of potential installation of an actual RWCU process control system in BFNP Unit 2 is included. Finally, this report contains a section on industry issues associated with installation of process control systems in nuclear power plants.

  11. Using e-Learning Platforms for Mastery Learning in Developmental Mathematics Courses

    ERIC Educational Resources Information Center

    Boggs, Stacey; Shore, Mark; Shore, JoAnna

    2004-01-01

    Many colleges and universities have adopted e-learning platforms to utilize computers as an instructional tool in developmental (i.e., beginning and intermediate algebra) mathematics courses. An e-learning platform is a computer program used to enhance course instruction via computers and the Internet. Allegany College of Maryland is currently…

  12. wft4galaxy: a workflow testing tool for galaxy.

    PubMed

    Piras, Marco Enrico; Pireddu, Luca; Zanetti, Gianluigi

    2017-12-01

    Workflow managers for scientific analysis provide a high-level programming platform facilitating standardization, automation, collaboration and access to sophisticated computing resources. The Galaxy workflow manager provides a prime example of this type of platform. As compositions of simpler tools, workflows effectively comprise specialized computer programs implementing often very complex analysis procedures. To date, no simple way to automatically test Galaxy workflows and ensure their correctness has appeared in the literature. With wft4galaxy we offer a tool to bring automated testing to Galaxy workflows, making it feasible to bring continuous integration to their development and ensuring that defects are detected promptly. wft4galaxy can be easily installed as a regular Python program or launched directly as a Docker container-the latter reducing installation effort to a minimum. Available at https://github.com/phnmnl/wft4galaxy under the Academic Free License v3.0. marcoenrico.piras@crs4.it. © The Author 2017. Published by Oxford University Press.

  13. Arkas: Rapid reproducible RNAseq analysis

    PubMed Central

    Colombo, Anthony R.; J. Triche Jr, Timothy; Ramsingh, Giridharan

    2017-01-01

    The recently introduced Kallisto pseudoaligner has radically simplified the quantification of transcripts in RNA-sequencing experiments. We offer the cloud-scale RNAseq pipelines Arkas-Quantification and Arkas-Analysis, available within Illumina's BaseSpace cloud application platform, which expedite Kallisto preparatory routines, reliably calculate differential expression, and perform gene-set enrichment of REACTOME pathways. Due to inherent inefficiencies of scale, Illumina's BaseSpace computing platform offers a massively parallel distributed environment, improving data management services and data importing. Arkas-Quantification deploys Kallisto for parallel cloud computations and is conveniently integrated downstream from the BaseSpace Sequence Read Archive (SRA) import/conversion application titled SRA Import. Arkas-Analysis annotates the Kallisto results by extracting structured information directly from source FASTA files with per-contig metadata, and calculates differential expression and gene-set enrichment analysis on both coding genes and transcripts. The Arkas cloud pipeline supports ENSEMBL transcriptomes and can be used downstream from SRA Import, facilitating raw sequencing importing, SRA FASTQ conversion, RNA quantification and analysis steps. PMID:28868134

  14. Patient-Centered e-Health Record over the Cloud.

    PubMed

    Koumaditis, Konstantinos; Themistocleous, Marinos; Vassilacopoulos, George; Prentza, Andrianna; Kyriazis, Dimosthenis; Malamateniou, Flora; Maglaveras, Nicos; Chouvarda, Ioanna; Mourouzis, Alexandros

    2014-01-01

    The purpose of this paper is to introduce the conceptual aspects of Patient-Centered e-Health (PCEH) alongside a multidisciplinary project that combines state-of-the-art technologies like cloud computing. The project, by combining several aspects of PCEH, such as: (a) electronic Personal Healthcare Record (e-PHR), (b) homecare telemedicine technologies, (c) e-prescribing, e-referral, e-learning, with advanced technologies like cloud computing and Service Oriented Architecture (SOA), will lead to an innovative integrated e-health platform with many benefits for society, the economy, industry, and the research community. To achieve this, a consortium of experts, both from industry (two companies, one hospital and one healthcare organization) and academia (three universities), was set up to investigate, analyse, design, build and test the new platform. This paper provides insights into the PCEH concept and into the current stage of the project. In doing so, we aim to increase awareness of this important endeavor and to share the lessons learned so far throughout our work.

  15. Arkheia: Data Management and Communication for Open Computational Neuroscience

    PubMed Central

    Antolík, Ján; Davison, Andrew P.

    2018-01-01

    Two trends have been unfolding in computational neuroscience during the last decade. First, a shift of focus to increasingly complex and heterogeneous neural network models, with a concomitant increase in the level of collaboration within the field (whether direct or in the form of building on top of existing tools and results). Second, a general trend in science toward more open communication, both internally, with other potential scientific collaborators, and externally, with the wider public. This multi-faceted development toward more integrative approaches and more intense communication within and outside of the field poses major new challenges for modelers, as currently there is a severe lack of tools to help with automatic communication and sharing of all aspects of a simulation workflow to the rest of the community. To address this important gap in the current computational modeling software infrastructure, here we introduce Arkheia. Arkheia is a web-based open science platform for computational models in systems neuroscience. It provides an automatic, interactive, graphical presentation of simulation results, experimental protocols, and interactive exploration of parameter searches, in a web browser-based application. Arkheia is focused on automatic presentation of these resources with minimal manual input from users. Arkheia is written in a modular fashion with a focus on future development of the platform. The platform is designed in an open manner, with a clearly defined and separated API for database access, so that any project can write its own backend translating its data into the Arkheia database format. Arkheia is not a centralized platform, but allows any user (or group of users) to set up their own repository, either for public access by the general population, or locally for internal use. Overall, Arkheia provides users with an automatic means to communicate information about not only their models but also individual simulation results and the entire experimental context in an approachable graphical manner, thus facilitating the user's ability to collaborate in the field and outreach to a wider audience. PMID:29556187

  16. Arkheia: Data Management and Communication for Open Computational Neuroscience.

    PubMed

    Antolík, Ján; Davison, Andrew P

    2018-01-01

    Two trends have been unfolding in computational neuroscience during the last decade. First, a shift of focus to increasingly complex and heterogeneous neural network models, with a concomitant increase in the level of collaboration within the field (whether direct or in the form of building on top of existing tools and results). Second, a general trend in science toward more open communication, both internally, with other potential scientific collaborators, and externally, with the wider public. This multi-faceted development toward more integrative approaches and more intense communication within and outside of the field poses major new challenges for modelers, as currently there is a severe lack of tools to help with automatic communication and sharing of all aspects of a simulation workflow to the rest of the community. To address this important gap in the current computational modeling software infrastructure, here we introduce Arkheia. Arkheia is a web-based open science platform for computational models in systems neuroscience. It provides an automatic, interactive, graphical presentation of simulation results, experimental protocols, and interactive exploration of parameter searches, in a web browser-based application. Arkheia is focused on automatic presentation of these resources with minimal manual input from users. Arkheia is written in a modular fashion with a focus on future development of the platform. The platform is designed in an open manner, with a clearly defined and separated API for database access, so that any project can write its own backend translating its data into the Arkheia database format. Arkheia is not a centralized platform, but allows any user (or group of users) to set up their own repository, either for public access by the general population, or locally for internal use. Overall, Arkheia provides users with an automatic means to communicate information about not only their models but also individual simulation results and the entire experimental context in an approachable graphical manner, thus facilitating the user's ability to collaborate in the field and outreach to a wider audience.

  17. The SCEC Broadband Platform: A Collaborative Open-Source Software Package for Strong Ground Motion Simulation and Validation

    NASA Astrophysics Data System (ADS)

    Silva, F.; Maechling, P. J.; Goulet, C.; Somerville, P.; Jordan, T. H.

    2013-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving SCEC researchers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform is open-source scientific software that can generate broadband (0-100Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Broadband Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms of a historical earthquake for which observed strong ground motion data is available. Also in validation mode, the Broadband Platform calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. During the past year, we have modified the software to enable the addition of a large number of historical events, and we are now adding validation simulation inputs and observational data for 23 historical events covering the Eastern and Western United States, Japan, Taiwan, Turkey, and Italy. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. By establishing an interface between scientific modules with a common set of input and output files, the Broadband Platform facilitates the addition of new scientific methods, which are written by earth scientists in a number of languages such as C, C++, Fortran, and Python. The Broadband Platform's modular design also supports the reuse of existing software modules as building blocks to create new scientific methods. Additionally, the Platform implements a wrapper around each scientific module, converting input and output files to and from the specific formats required (or produced) by individual scientific codes. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes the addition of 3 new simulation methods and several new data products, such as map and distance-based goodness of fit plots. Finally, as the number and complexity of scenarios simulated using the Broadband Platform increase, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
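
    As a toy illustration of the goodness-of-fit idea mentioned above, the sketch below (Python/NumPy, synthetic numbers) computes a simple period-by-period bias between observed and simulated spectral values; the Broadband Platform's actual goodness-of-fit measures are more elaborate and are not reproduced here.

        import numpy as np

        def gof_bias(observed, simulated):
            """Period-by-period model bias, ln(obs/sim), and its mean over periods;
            a simplified stand-in for the Platform's goodness-of-fit measures."""
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            residuals = np.log(observed / simulated)
            return residuals, residuals.mean()

        # Toy spectral accelerations (g) at a few periods for one station.
        periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
        obs = np.array([0.42, 0.55, 0.31, 0.18, 0.07])
        sim = np.array([0.40, 0.62, 0.28, 0.20, 0.08])

        resid, bias = gof_bias(obs, sim)
        for T, r in zip(periods, resid):
            print(f"T={T:4.1f} s  ln(obs/sim)={r:+.3f}")
        print(f"mean bias = {bias:+.3f}")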

  18. Apparatus, method and system to control accessibility of platform resources based on an integrity level

    DOEpatents

    Jenkins, Chris; Pierson, Lyndon G.

    2016-10-25

    Techniques and mechanisms to selectively provide resource access to a functional domain of a platform. In an embodiment, the platform includes both a report domain to monitor the functional domain and a policy domain to identify, based on such monitoring, a transition of the functional domain from a first integrity level to a second integrity level. In response to a change in integrity level, the policy domain may configure the enforcement domain to enforce against the functional domain one or more resource accessibility rules corresponding to the second integrity level. In another embodiment, the policy domain automatically initiates operations in aid of transitioning the platform from the second integrity level to a higher integrity level.
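
    The sketch below (Python, with invented rule names) illustrates the lookup-and-enforce pattern described in the abstract: on an integrity-level transition, a policy step selects the rule set that the enforcement step applies. It is a toy model, not the patented mechanism.

        # Toy model of integrity-level based resource access rules (names are illustrative).
        ACCESS_RULES = {
            "high":   {"network": True,  "usb": True,  "secrets_store": True},
            "medium": {"network": True,  "usb": False, "secrets_store": False},
            "low":    {"network": False, "usb": False, "secrets_store": False},
        }

        class EnforcementDomain:
            def __init__(self):
                # Start at the highest-trust rule set purely for illustration.
                self.rules = ACCESS_RULES["high"]

            def configure(self, integrity_level: str) -> None:
                # The policy step pushes the rule set corresponding to the new level.
                self.rules = ACCESS_RULES[integrity_level]

            def allow(self, resource: str) -> bool:
                return self.rules.get(resource, False)

        enforcement = EnforcementDomain()
        enforcement.configure("medium")        # e.g. after monitoring flags a transition
        print(enforcement.allow("usb"))        # False under the 'medium' rule set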

  19. The Geohazards Exploitation Platform: an advanced cloud-based environment for the Earth Science community

    NASA Astrophysics Data System (ADS)

    Manunta, Michele; Casu, Francesco; Zinno, Ivana; De Luca, Claudio; Pacini, Fabrizio; Caumont, Hervé; Brito, Fabrice; Blanco, Pablo; Iglesias, Ruben; López, Álex; Briole, Pierre; Musacchio, Massimo; Buongiorno, Fabrizia; Stumpf, Andre; Malet, Jean-Philippe; Brcic, Ramon; Rodriguez Gonzalez, Fernando; Elias, Panagiotis

    2017-04-01

    The idea to create advanced platforms for the Earth Observation community, where users can find data but also state-of-the-art algorithms, processing tools, computing facilities, and instruments for dissemination and sharing, was launched several years ago. The initiatives developed in this context have been supported firstly by the Framework Programmes of the European Commission and the European Space Agency (ESA) and, progressively, by the Copernicus programme. In particular, ESA created and supported the Grid Processing on Demand (G-POD) environment, where users can access advanced processing tools implemented in a GRID environment, satellite data and computing facilities. All these components are located in the same datacentre, making the time needed to move satellite data from the archive negligible. The experience with G-POD gave rise to ESA's idea of an ecosystem of Thematic Exploitation Platforms (TEPs) focused on the integration of Ground Segment capabilities and ICT technologies to maximize the exploitation of EO data from past and future missions. A TEP refers to a computing platform that deals with a set of user scenarios involving scientists, data providers and ICT developers, aggregated around an Earth Science thematic area. Among these, the Geohazards Exploitation Platform (GEP) aims at providing on-demand and systematic processing services to address the need of the geohazards community for common information layers and to integrate newly developed processors for scientists and other expert users. Within GEP, the community benefits from a cloud-based environment, specifically designed for the advanced exploitation of EO data. A partner can bring its own tools and processing chains, but also has access in the same workspace to large satellite datasets and shared data processing tools. GEP is currently in the pre-operations phase under a consortium led by Terradue Srl, and six pilot projects concerning different EO applications have been selected: time-series stereo-photogrammetric processing using optical images for landslide and tectonic movement monitoring with CNRS/EOST (FR), an optical-based processing method for volcanic hazard monitoring with INGV (IT), systematic generation of deformation time-series with Sentinel-1 data with CNR-IREA (IT), systematic processing of Sentinel-1 interferometric imagery with DLR (DE), terrain motion velocity map generation based on PSI processing by TRE-ALTAMIRA (ES), and a campaign to test and employ GEP applications with the Corinth Rift EPOS Near Fault Observatory. Finally, GEP is significantly contributing to the development of the satellite component of the European Plate Observing System (EPOS), a long-term plan to facilitate the integrated use of data, data products, and facilities from distributed research infrastructures for solid Earth science in Europe. In particular, GEP has been identified as the gateway for the Thematic Core Service "Satellite Data" of EPOS, namely the platform through which the satellite EPOS services will be delivered. In the current work, the latest activities and achievements of GEP, including its impact in the context of distributed Research Infrastructures such as EPOS, will be presented and discussed.

  20. PRANAS: A New Platform for Retinal Analysis and Simulation.

    PubMed

    Cessac, Bruno; Kornprobst, Pierre; Kraria, Selim; Nasser, Hassan; Pamplona, Daniela; Portelli, Geoffrey; Viéville, Thierry

    2017-01-01

    The retina encodes visual scenes by trains of action potentials that are sent to the brain via the optic nerve. In this paper, we describe new free-access user-end software that allows this coding to be better understood. It is called PRANAS (https://pranas.inria.fr), standing for Platform for Retinal ANalysis And Simulation. PRANAS targets neuroscientists and modelers by providing a unique set of retina-related tools. PRANAS integrates a retina simulator, allowing large-scale simulations while keeping strong biological plausibility, and a toolbox for the analysis of spike train population statistics. The statistical method (entropy maximization under constraints) takes into account both spatial and temporal correlations as constraints, allowing the effects of memory on statistics to be analyzed. PRANAS also integrates a tool for computing and representing receptive fields in 3D (time-space). All these tools are accessible through a friendly graphical user interface. The most CPU-costly of them have been implemented to run in parallel.
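
    As a minimal illustration of entropy maximization under constraints, the sketch below (Python/SciPy) fits a one-constraint maximum-entropy distribution over a few discrete states by solving for a single Lagrange multiplier; PRANAS handles far richer spatio-temporal constraints, so this is only a conceptual toy.

        import numpy as np
        from scipy.optimize import brentq

        # Toy maximum-entropy fit: discrete states x_i, one constraint <x> = target.
        x = np.arange(5)           # possible values of the observable
        target_mean = 1.2          # empirical mean the model should reproduce

        def model_mean(lam):
            w = np.exp(lam * x)
            return (x * w).sum() / w.sum()

        # The max-entropy distribution is p_i proportional to exp(lam * x_i); solve for lam.
        lam = brentq(lambda l: model_mean(l) - target_mean, -10.0, 10.0)
        p = np.exp(lam * x)
        p /= p.sum()
        print("lambda =", round(lam, 4), "fitted mean =", round((x * p).sum(), 4))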

  1. Coordinating complex decision support activities across distributed applications

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1994-01-01

    Knowledge-based technologies have been applied successfully to automate planning and scheduling in many problem domains. Automation of decision support can be increased further by integrating task-specific applications with supporting database systems, and by coordinating interactions between such tools to facilitate collaborative activities. Unfortunately, the technical obstacles that must be overcome to achieve this vision of transparent, cooperative problem-solving are daunting. Intelligent decision support tools are typically developed for standalone use, rely on incompatible, task-specific representational models and application programming interfaces (API's), and run on heterogeneous computing platforms. Getting such applications to interact freely calls for platform independent capabilities for distributed communication, as well as tools for mapping information across disparate representations. Symbiotics is developing a layered set of software tools (called NetWorks!) for integrating and coordinating heterogeneous distributed applications. The top layer of tools consists of an extensible set of generic, programmable coordination services. Developers access these services via high-level API's to implement the desired interactions between distributed applications.

  2. Gameplay as a source of intrinsic motivation in a randomized controlled trial of auditory training for tinnitus.

    PubMed

    Hoare, Derek J; Van Labeke, Nicolas; McCormack, Abby; Sereda, Magdalena; Smith, Sandra; Al Taher, Hala; Kowalkowski, Victoria L; Sharples, Mike; Hall, Deborah A

    2014-01-01

    Previous studies of frequency discrimination training (FDT) for tinnitus used repetitive task-based training programmes relying on extrinsic factors to motivate participation. Studies reported limited improvement in tinnitus symptoms. The aim was to evaluate FDT exploiting intrinsic motivation by integrating training with computer gameplay. Sixty participants were randomly assigned to train on either a conventional task-based training programme or one of two interactive game-based training platforms over six weeks. Outcomes included assessment of motivation, tinnitus handicap, and performance on tests of attention. Participants reported greater intrinsic motivation to train on the interactive game-based platforms, yet compliance of all three groups was similar (∼70%) and changes in self-reported tinnitus severity were not significant. There was no difference between groups in terms of change in tinnitus severity or performance on measures of attention. FDT can be integrated within an intrinsically motivating game. Whilst this may improve participant experience, in this instance it did not translate to additional compliance or therapeutic benefit. ClinicalTrials.gov NCT02095262.

  3. GPU-Accelerated Large-Scale Electronic Structure Theory on Titan with a First-Principles All-Electron Code

    NASA Astrophysics Data System (ADS)

    Huhn, William Paul; Lange, Björn; Yu, Victor; Blum, Volker; Lee, Seyong; Yoon, Mina

    Density-functional theory has been well established as the dominant quantum-mechanical computational method in the materials community. Large accurate simulations become very challenging on small to mid-scale computers and require high-performance compute platforms to succeed. GPU acceleration is one promising approach. In this talk, we present a first implementation of all-electron density-functional theory in the FHI-aims code for massively parallel GPU-based platforms. Special attention is paid to the update of the density and to the integration of the Hamiltonian and overlap matrices, realized in a domain decomposition scheme on non-uniform grids. The initial implementation scales well across nodes on ORNL's Titan Cray XK7 supercomputer (8 to 64 nodes, 16 MPI ranks/node) and shows an overall runtime speedup of 1.4x from utilization of the K20X Tesla GPUs on each Titan node, with the charge density update showing a speedup of 2x. Further acceleration opportunities will be discussed. Work supported by the LDRD Program of ORNL managed by UT-Battelle, LLC, for the U.S. DOE and by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  4. Computer aided system engineering for space construction

    NASA Technical Reports Server (NTRS)

    Racheli, Ugo

    1989-01-01

    This viewgraph presentation covers the following topics. Construction activities envisioned for the assembly of large platforms in space (as well as interplanetary spacecraft and bases on extraterrestrial surfaces) require computational tools that exceed the capability of conventional construction management programs. The Center for Space Construction is investigating the requirements for new computational tools and, at the same time, suggesting the expansion of graduate and undergraduate curricula to include proficiency in Computer Aided Engineering (CAE) through design courses and individual or team projects in advanced space systems design. In the center's research, special emphasis is placed on problems of constructability and of the interruptability of planned activity sequences to be carried out by crews operating under hostile environmental conditions. The departure point for the planned work is the acquisition of the MCAE I-DEAS software, developed by the Structural Dynamics Research Corporation (SDRC), and its expansion to the level of capability denoted by the acronym IDEAS**2 currently used for configuration maintenance on Space Station Freedom. In addition to improving proficiency in the use of I-DEAS and IDEAS**2, it is contemplated that new software modules will be developed to expand the architecture of IDEAS**2. Such modules will deal with those analyses that require the integration of a space platform's configuration with a breakdown of planned construction activities and with a failure modes analysis to support computer aided system engineering (CASE) applied to space construction.

  5. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    NASA Astrophysics Data System (ADS)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we will explore the feasibility of porting a Particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  6. Systems Engineering Building Advances Power Grid Research

    ScienceCinema

    Virden, Jud; Huang, Henry; Skare, Paul; Dagle, Jeff; Imhoff, Carl; Stoustrup, Jakob; Melton, Ron; Stiles, Dennis; Pratt, Rob

    2018-01-16

    Researchers and industry are now better equipped to tackle the nation’s most pressing energy challenges through PNNL’s new Systems Engineering Building – including challenges in grid modernization, buildings efficiency and renewable energy integration. This lab links real-time grid data, software platforms, specialized laboratories and advanced computing resources for the design and demonstration of new tools to modernize the grid and increase buildings energy efficiency.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huff, Kathryn D.

    Component level and system level abstraction of detailed computational geologic repository models have resulted in four rapid computational models of hydrologic radionuclide transport at varying levels of detail. Those models are described, as is their implementation in Cyder, a software library of interchangeable radionuclide transport models appropriate for representing natural and engineered barrier components of generic geology repository concepts. A proof of principle demonstration was also conducted in which these models were used to represent the natural and engineered barrier components of a repository concept in a reducing, homogeneous, generic geology. This base case demonstrates integration of the Cyder open source library with the Cyclus computational fuel cycle systems analysis platform to facilitate calculation of repository performance metrics with respect to fuel cycle choices. (authors)

  8. The transformative potential of an integrative approach to pregnancy.

    PubMed

    Eidem, Haley R; McGary, Kriston L; Capra, John A; Abbot, Patrick; Rokas, Antonis

    2017-09-01

    Complex traits typically involve diverse biological pathways and are shaped by numerous genetic and environmental factors. Pregnancy-associated traits and pathologies are further complicated by extensive communication across multiple tissues in two individuals, interactions between two genomes-maternal and fetal-that obscure causal variants and lead to genetic conflict, and rapid evolution of pregnancy-associated traits across mammals and in the human lineage. Given the multi-faceted complexity of human pregnancy, integrative approaches that synthesize diverse data types and analyses harbor tremendous promise to identify the genetic architecture and environmental influences underlying pregnancy-associated traits and pathologies. We review current research that addresses the extreme complexities of traits and pathologies associated with human pregnancy. We find that successful efforts to address the many complexities of pregnancy-associated traits and pathologies often harness the power of many and diverse types of data, including genome-wide association studies, evolutionary analyses, multi-tissue transcriptomic profiles, and environmental conditions. We propose that understanding of pregnancy and its pathologies will be accelerated by computational platforms that provide easy access to integrated data and analyses. By simplifying the integration of diverse data, such platforms will provide a comprehensive synthesis that transcends many of the inherent challenges present in studies of pregnancy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Route Sanitizer: Connected Vehicle Trajectory De-Identification Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Jason M; Ferber, Aaron E

    Route Sanitizer is ORNL's connected vehicle moving object database de-identification tool and a graphical user interface to ORNL's connected vehicle de-identification algorithm. It uses the Google Chrome (soon to be Electron) platform so it will run on different computing platforms. The basic de-identification strategy is record redaction: portions of a vehicle trajectory (e.g. sequences of precise temporal spatial records) are removed. It does not alter retained records. The algorithm uses custom techniques to find areas within trajectories that may be considered private, then it suppresses those in addition to enough of the trajectory surrounding those locations to protect against "inference attacks" in a mathematically sound way. Map data is integrated into the process to make this possible.
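
    A much simplified sketch of the record-redaction idea is shown below (Python/NumPy, planar toy data): points near a sensitive location are suppressed together with a fixed number of neighbouring points. ORNL's algorithm uses more sophisticated, mathematically grounded suppression and map data, none of which is reproduced here.

        import numpy as np

        def redact(points, sensitive, radius_m, pad=5):
            """Drop points within radius_m of any sensitive location, plus `pad`
            neighbouring points on each side of every suppressed point (toy planar version)."""
            pts = np.asarray(points, dtype=float)
            drop = np.zeros(len(pts), dtype=bool)
            for centre in np.asarray(sensitive, dtype=float):
                drop |= np.linalg.norm(pts - centre, axis=1) < radius_m
            # Expand each suppressed index by `pad` positions to blunt inference attacks.
            for i in np.flatnonzero(drop):
                drop[max(0, i - pad): i + pad + 1] = True
            return pts[~drop]

        trajectory = [(i * 10.0, 0.0) for i in range(100)]       # toy straight-line path (metres)
        kept = redact(trajectory, sensitive=[(500.0, 0.0)], radius_m=30.0)
        print(len(trajectory), "->", len(kept), "points retained")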

  10. Internet of things for an age-friendly healthcare.

    PubMed

    Konstantinidis, Evdokimos I; Bamparopoulos, Giorgos; Billis, Antonis; Bamidis, Panagiotis D

    2015-01-01

    In healthcare applications, a large cohort of recent implementations utilises IoT-oriented infrastructures (XMPP) as well as smart mobile devices as communication gateways. IoT characteristics such as Communication/Connectivity, Pervasive Computing and Ambient Intelligence are all highly related to Active and Healthy Aging environments. This paper presents a new idea, that of IoT-enabled devices which are directly connected to the IoT (a glucose meter is used as an example herein), comply with the XMPP messaging protocol, and incorporate a recently released Controller Application Communication (CAC) framework for distributed, cross-platform communication. A web-based exergaming platform and a disease management tool provide the vehicles for demonstrating the feasibility and the successful implementation and integration of the aforementioned infrastructure.

  11. BioAcoustica: a free and open repository and analysis platform for bioacoustics

    PubMed Central

    Baker, Edward; Price, Ben W.; Rycroft, S. D.; Smith, Vincent S.

    2015-01-01

    We describe an online open repository and analysis platform, BioAcoustica (http://bio.acousti.ca), for recordings of wildlife sounds. Recordings can be annotated using a crowdsourced approach, allowing voice introductions and sections with extraneous noise to be removed from analyses. This system is based on the Scratchpads virtual research environment, the BioVeL portal and the Taverna workflow management tool, which allows for analysis of recordings using a grid computing service. At present the analyses include spectrograms, oscillograms and dominant frequency analysis. Further analyses can be integrated to meet the needs of specific researchers or projects. Researchers can upload and annotate their recordings to supplement traditional publication. Database URL: http://bio.acousti.ca PMID:26055102
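
    As a toy illustration of dominant frequency analysis, the sketch below (Python/NumPy) locates the spectral peak of a synthetic tone; the platform itself performs such analyses through Taverna workflows on a grid service, which this snippet does not attempt to reproduce.

        import numpy as np

        # Synthetic recording: a 3.2 kHz tone with noise, sampled at 44.1 kHz.
        fs = 44_100
        t = np.arange(0, 1.0, 1.0 / fs)
        signal = np.sin(2 * np.pi * 3200 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)

        # Dominant frequency: peak of the magnitude spectrum (ignoring the DC bin).
        spectrum = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
        dominant = freqs[1:][np.argmax(spectrum[1:])]
        print(f"dominant frequency: {dominant:.1f} Hz")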

  12. Xyce Parallel Electronic Simulator : users' guide, version 2.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont

    2004-06-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability the current state-of-the-art in the following areas: {sm_bullet} Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers. {sm_bullet} Improved performance for allmore » numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. {sm_bullet} Device models which are specifically tailored to meet Sandia's needs, including many radiation-aware devices. {sm_bullet} A client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI). {sm_bullet} Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing of computing platforms. These include serial, shared-memory and distributed-memory parallel implementation - which allows it to run efficiently on the widest possible number parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce These input formats include standard analytical models, behavioral models look-up Parallel Electronic Simulator is designed to support a variety of device model inputs. tables, and mesh-level PDE device models. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important feature of Xyce is in providing a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Ultimately, these capabilities are migrated to end users.« less

  13. Neuromechanic: a computational platform for simulation and analysis of the neural control of movement

    PubMed Central

    Bunderson, Nathan E.; Bingham, Jeffrey T.; Sohn, M. Hongchul; Ting, Lena H.; Burkholder, Thomas J.

    2015-01-01

    Neuromusculoskeletal models solve the basic problem of determining how the body moves under the influence of external and internal forces. Existing biomechanical modeling programs often emphasize dynamics with the goal of finding a feed-forward neural program to replicate experimental data or of estimating force contributions of individual muscles. The computation of rigid-body dynamics, muscle forces, and activation of the muscles are often performed separately. We have developed an intrinsically forward computational platform (Neuromechanic, www.neuromechanic.com) that explicitly represents the interdependencies among rigid body dynamics, frictional contact, muscle mechanics, and neural control modules. This formulation has significant advantages for optimization and forward simulation, particularly with application to neural controllers with feedback or regulatory features. Explicit inclusion of all state dependencies allows calculation of system derivatives with respect to kinematic states as well as muscle and neural control states, thus affording a wealth of analytical tools, including linearization, stability analyses and calculation of initial conditions for forward simulations. In this review, we describe our algorithm for generating state equations and explain how they may be used in integration, linearization and stability analysis tools to provide structural insights into the neural control of movement. PMID:23027632
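
    One generic way to obtain the linearization mentioned above is a finite-difference Jacobian of the state equations, sketched below (Python/NumPy) for a toy damped pendulum; this is a standard numerical illustration, not Neuromechanic's own derivative machinery.

        import numpy as np

        def numerical_jacobian(f, x0, eps=1e-6):
            """Finite-difference Jacobian of dx/dt = f(x) about the state x0,
            giving the A matrix of the linearized system d(dx)/dt ~ A @ dx."""
            x0 = np.asarray(x0, dtype=float)
            f0 = np.asarray(f(x0), dtype=float)
            A = np.zeros((f0.size, x0.size))
            for j in range(x0.size):
                dx = np.zeros_like(x0)
                dx[j] = eps
                A[:, j] = (np.asarray(f(x0 + dx)) - f0) / eps
            return A

        # Toy example: a damped pendulum, state = [angle, angular velocity].
        def pendulum(state, g=9.81, length=1.0, damping=0.2):
            theta, omega = state
            return np.array([omega, -(g / length) * np.sin(theta) - damping * omega])

        A = numerical_jacobian(pendulum, x0=[0.0, 0.0])
        print("eigenvalues of the linearization:", np.round(np.linalg.eigvals(A), 3))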

  14. Neuromechanic: a computational platform for simulation and analysis of the neural control of movement.

    PubMed

    Bunderson, Nathan E; Bingham, Jeffrey T; Sohn, M Hongchul; Ting, Lena H; Burkholder, Thomas J

    2012-10-01

    Neuromusculoskeletal models solve the basic problem of determining how the body moves under the influence of external and internal forces. Existing biomechanical modeling programs often emphasize dynamics with the goal of finding a feed-forward neural program to replicate experimental data or of estimating force contributions of individual muscles. The computation of rigid-body dynamics, muscle forces, and activation of the muscles are often performed separately. We have developed an intrinsically forward computational platform (Neuromechanic, www.neuromechanic.com) that explicitly represents the interdependencies among rigid body dynamics, frictional contact, muscle mechanics, and neural control modules. This formulation has significant advantages for optimization and forward simulation, particularly with application to neural controllers with feedback or regulatory features. Explicit inclusion of all state dependencies allows calculation of system derivatives with respect to kinematic states and muscle and neural control states, thus affording a wealth of analytical tools, including linearization, stability analyses and calculation of initial conditions for forward simulations. In this review, we describe our algorithm for generating state equations and explain how they may be used in integration, linearization, and stability analysis tools to provide structural insights into the neural control of movement. Copyright © 2012 John Wiley & Sons, Ltd.

  15. Molecular deconstruction, detection, and computational prediction of microenvironment-modulated cellular responses to cancer therapeutics

    PubMed Central

    LaBarge, Mark A; Parvin, Bahram; Lorens, James B

    2014-01-01

    The field of bioengineering has pioneered the application of new precision fabrication technologies to model the different geometric, physical or molecular components of tissue microenvironments on solid-state substrata. Tissue engineering approaches building on these advances are used to assemble multicellular mimetic-tissues where cells reside within defined spatial contexts. The functional responses of cells in fabricated microenvironments have revealed a rich interplay between the genome and extracellular effectors in determining cellular phenotypes, and in a number of cases have revealed the dominance of microenvironment over genotype. Precision bioengineered substrata are limited to a few aspects, whereas cell/tissue-derived microenvironments have many undefined components. Thus, introducing a computational module may serve to integrate these types of platforms to create reasonable models of drug responses in human tissues. This review discusses how combinatorial microenvironment microarrays and other biomimetic microenvironments have revealed emergent properties of cells in particular microenvironmental contexts, the platforms that can measure phenotypic changes within those contexts, and the computational tools that can unify the microenvironment-imposed functional phenotypes with underlying constellations of proteins and genes. Ultimately we propose that a merger of these technologies will enable more accurate pre-clinical drug discovery. PMID:24582543

  16. Generation of multiphoton entangled quantum states by means of integrated frequency combs.

    PubMed

    Reimer, Christian; Kues, Michael; Roztocki, Piotr; Wetzel, Benjamin; Grazioso, Fabio; Little, Brent E; Chu, Sai T; Johnston, Tudor; Bromberg, Yaron; Caspani, Lucia; Moss, David J; Morandotti, Roberto

    2016-03-11

    Complex optical photon states with entanglement shared among several modes are critical to improving our fundamental understanding of quantum mechanics and have applications for quantum information processing, imaging, and microscopy. We demonstrate that optical integrated Kerr frequency combs can be used to generate several bi- and multiphoton entangled qubits, with direct applications for quantum communication and computation. Our method is compatible with contemporary fiber and quantum memory infrastructures and with chip-scale semiconductor technology, enabling compact, low-cost, and scalable implementations. The exploitation of integrated Kerr frequency combs, with their ability to generate multiple, customizable, and complex quantum states, can provide a scalable, practical, and compact platform for quantum technologies. Copyright © 2016, American Association for the Advancement of Science.

  17. ICECAP: an integrated, general-purpose, automation-assisted IC50/EC50 assay platform.

    PubMed

    Li, Ming; Chou, Judy; King, Kristopher W; Jing, Jing; Wei, Dong; Yang, Liyu

    2015-02-01

    IC50 and EC50 values are commonly used to evaluate drug potency. Mass spectrometry (MS)-centric bioanalytical and biomarker labs are now conducting IC50/EC50 assays, which, if done manually, are tedious and error-prone. Existing bioanalytical sample preparation automation systems cannot meet IC50/EC50 assay throughput demand. A general-purpose, automation-assisted IC50/EC50 assay platform was developed to automate the calculation of the spiking-solution and matrix-solution preparation schemes, the actual spiking and matrix solution preparations, as well as the flexible sample extraction procedures after incubation. In addition, the platform also automates the data extraction, nonlinear regression curve fitting, computation of IC50/EC50 values, graphing, and reporting. The automation-assisted IC50/EC50 assay platform can process a whole class of assays with varying assay conditions. In each run, the system can handle up to 32 compounds and up to 10 concentration levels per compound, and it greatly improves IC50/EC50 assay experimental productivity and data processing efficiency. © 2014 Society for Laboratory Automation and Screening.
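
    The nonlinear regression step that the platform automates can be sketched in a few lines; the four-parameter logistic form is a common choice for IC50 estimation, and the concentration-response values below are made up for illustration rather than taken from ICECAP.

        # Illustrative only: estimate an IC50 by fitting a four-parameter logistic
        # (4PL) curve to concentration-response data with SciPy.
        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(conc, bottom, top, ic50, hill):
            return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

        conc = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100])   # hypothetical concentrations
        resp = np.array([98, 95, 90, 75, 55, 32, 15, 8, 5])          # hypothetical % response

        popt, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 100.0, 1.0, 1.0])
        bottom, top, ic50, hill = popt
        print(f"estimated IC50 = {ic50:.2f} (same units as conc)")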

  18. Theoretical modeling of multiprotein complexes by iSPOT: Integration of small-angle X-ray scattering, hydroxyl radical footprinting, and computational docking.

    PubMed

    Huang, Wei; Ravikumar, Krishnakumar M; Parisien, Marc; Yang, Sichun

    2016-12-01

    Structural determination of protein-protein complexes such as multidomain nuclear receptors has been challenging for high-resolution structural techniques. Here, we present a combined use of multiple biophysical methods, termed iSPOT, an integration of shape information from small-angle X-ray scattering (SAXS), protection factors probed by hydroxyl radical footprinting, and a large series of computationally docked conformations from rigid-body or molecular dynamics (MD) simulations. Specifically tested on two model systems, the power of iSPOT is demonstrated to accurately predict the structures of a large protein-protein complex (TGFβ-FKBP12) and a multidomain nuclear receptor homodimer (HNF-4α), based on the structures of individual components of the complexes. Although neither SAXS nor footprinting alone can yield an unambiguous picture for each complex, the combination of both, seamlessly integrated in iSPOT, narrows down the best-fit structures to about 3.2 Å and 4.2 Å in RMSD from their corresponding crystal structures, respectively. Furthermore, this proof-of-principle study based on data synthetically derived from available crystal structures shows that iSPOT, using either rigid-body or MD-based flexible docking, is capable of overcoming the shortcomings of standalone computational methods, especially for HNF-4α. By taking advantage of the integration of SAXS-based shape information and footprinting-based protection/accessibility as well as computational docking, the iSPOT platform is set to be a powerful approach towards accurate integrated modeling of many challenging multiprotein complexes. Copyright © 2016 Elsevier Inc. All rights reserved.
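
    The integration idea, scoring each docked conformation jointly against the SAXS profile and the footprinting protection data, can be sketched as a simple weighted sum; the weights, field names, and example numbers below are hypothetical and do not reproduce the published iSPOT scoring scheme.

        # Schematic sketch of combined scoring and ranking of docked poses.
        def combined_score(pose, w_saxs=1.0, w_fp=1.0):
            # Each pose is assumed to carry precomputed discrepancy values:
            #   "saxs_chi2": misfit of the pose's computed SAXS profile to experiment
            #   "fp_rmsd":   mismatch between computed and measured protection factors
            return w_saxs * pose["saxs_chi2"] + w_fp * pose["fp_rmsd"]

        poses = [
            {"name": "dock_001", "saxs_chi2": 2.1, "fp_rmsd": 0.8},
            {"name": "dock_002", "saxs_chi2": 1.3, "fp_rmsd": 1.5},
            {"name": "dock_003", "saxs_chi2": 1.1, "fp_rmsd": 0.6},
        ]
        best_first = sorted(poses, key=combined_score)
        print([p["name"] for p in best_first])   # lowest combined score ranks first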

  19. Multiple advanced logic gates made of DNA-Ag nanocluster and the application for intelligent detection of pathogenic bacterial genes

    PubMed Central

    Lin, Xiaodong; Deng, Jiankang; Lyu, Yanlong; Qian, Pengcheng; Li, Yunfei

    2018-01-01

    The integration of multiple DNA logic gates on a universal platform to implement advanced logic functions is a critical challenge for DNA computing. Herein, a straightforward and powerful strategy, in which a guanine-rich DNA sequence lights up a silver nanocluster and a fluorophore, was developed to construct a library of logic gates on a simple DNA-templated silver nanoclusters (DNA-AgNCs) platform. This library included basic logic gates, YES, AND, OR, INHIBIT, and XOR, which were further integrated into complex logic circuits to implement diverse advanced arithmetic/non-arithmetic functions including a half-adder, a half-subtractor, a multiplexer, and a demultiplexer. Under UV irradiation, all the logic functions could be instantly visualized, confirming excellent repeatability. The logic operations were entirely based on DNA hybridization under enzyme-free and label-free conditions, avoiding waste accumulation and reducing cost. Interestingly, a DNA-AgNCs-based multiplexer was, for the first time, used as an intelligent biosensor to identify pathogenic genes, E. coli and S. aureus genes, with high sensitivity. The investigation provides a prototype for the wireless integration of multiple devices on even the simplest single-strand DNA platform to perform diverse complex functions in a straightforward and cost-effective way. PMID:29675221
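
    As a minimal point of reference, the half-adder mentioned above is the composition of an XOR gate (sum bit) and an AND gate (carry bit); the sketch below models only the Boolean behaviour, not the DNA-AgNCs chemistry that implements it.

        # Boolean half-adder built from the XOR and AND gates named in the abstract.
        def half_adder(a, b):
            sum_bit = a ^ b      # XOR gate
            carry = a & b        # AND gate
            return sum_bit, carry

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "->", half_adder(a, b))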

  20. OWLS as platform technology in OPTOS satellite

    NASA Astrophysics Data System (ADS)

    Rivas Abalo, J.; Martínez Oter, J.; Arruego Rodríguez, I.; Martín-Ortega Rico, A.; de Mingo Martín, J. R.; Jiménez Martín, J. J.; Martín Vodopivec, B.; Rodríguez Bustabad, S.; Guerrero Padrón, H.

    2017-12-01

    The aim of this work is to show the Optical Wireless Link to intraSpacecraft Communications (OWLS) technology as a platform technology for space missions, and more specifically its use within the On-Board Communication system of the OPTOS satellite. OWLS technology was proposed by Instituto Nacional de Técnica Aeroespacial (INTA) at the end of the 1990s and developed over 10 years through a number of ground demonstrations, technological developments and in-orbit experiments. Its main benefits are: mass reduction, flexibility, and simplification of the Assembly, Integration and Tests phases. The final step was to go from an experimental technology to a platform one. This step was carried out in the OPTOS satellite, which makes use of optical wireless links in a distributed network based on an OWLS implementation of the CAN bus. OPTOS is the first fully wireless satellite. It is based on the triple configuration (3U) of the popular Cubesat standard, and was completely built at INTA. It was conceived to provide a fast-development, low-cost, yet reliable platform to the Spanish scientific community, acting as a test bed for spaceborne science and technology. OPTOS presents a distributed OBDH architecture in which all of the satellite's subsystems and payloads incorporate a small Distributed On-Board Computer (OBC) Terminal (DOT). All DOTs (7 in total) communicate with each other by means of the OWLS-CAN, which enables full data sharing capabilities. This collaboration allows them to perform all tasks that would normally be carried out by a centralized On-Board Computer.

  1. Seismo-Live: Training in Seismology with Jupyter Notebooks

    NASA Astrophysics Data System (ADS)

    Krischer, Lion; Tape, Carl; Igel, Heiner

    2016-04-01

    Seismological training tends to occur within the isolation of a particular institution with a limited set of tools (codes, libraries) that are often not transferable outside. Here, we propose to overcome these limitations with a community-driven library of Jupyter notebooks dedicated to training on any aspect of seismology for purposes of education and outreach, on-site or archived tutorials for codes, classroom instruction, and research. A Jupyter notebook (jupyter.org) is an open-source interactive computational environment that allows combining code execution, rich text, mathematics, and plotting. It can be considered a platform that supports reproducible research, as all inputs and outputs may be stored. Text, external graphics, and equations can be handled using Markdown (incl. LaTeX) format. Jupyter notebooks are driven by standard web browsers, can be easily exchanged in text format, or converted to other documents (e.g. PDF, slide shows). They provide an ideal format for practical training in seismology. A pilot platform was set up with a dedicated server such that the Jupyter notebooks can be run in any browser (PC, notepad, smartphone). We show the functionalities of the Seismo-Live platform with examples from computational seismology, seismic data access and processing using the ObsPy library, seismic inverse problems, and others. The current examples all use the Python programming language, but any free language can be used. Potentially, such community platforms could be integrated with the EPOS-IT infrastructure and extended to other fields of Earth sciences.
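
    A typical Seismo-Live style notebook cell using the ObsPy library mentioned above might look like the following; this assumes a standard ObsPy installation, and the filter band is arbitrary.

        # Minimal notebook-style cell: load example waveforms, filter, and plot.
        from obspy import read

        st = read()                                    # with no argument, ObsPy loads bundled example waveforms
        print(st)                                      # summary of the traces in the stream
        st.filter("bandpass", freqmin=1.0, freqmax=10.0)
        st.plot()                                      # renders inline in a Jupyter notebook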

  2. Digital Image Correlation Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Dan; Crozier, Paul; Reu, Phil

    DICe is an open source digital image correlation (DIC) tool intended for use as a module in an external application or as a standalone analysis code. Its primary capability is computing full-field displacements and strains from sequences of digital images. These images are typically of a material sample undergoing a materials characterization experiment, but DICe is also useful for other applications (for example, trajectory tracking). DICe is machine portable (Windows, Linux and Mac) and can be effectively deployed on a high performance computing platform. Capabilities from DICe can be invoked through a library interface, via source code integration of DICe classes, or through a graphical user interface.

  3. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute-long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
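
    The reported scaling behaviour can be summarised by fitting the inverse power model mentioned above; in the sketch below only the 1-node (53 min) and 20-node (3.11 min) runtimes come from the study, while the intermediate points are invented for illustration.

        # Illustrative fit of runtime ~ a * n**(-b) versus cluster size n.
        import numpy as np
        from scipy.optimize import curve_fit

        def inverse_power(n, a, b):
            return a * n ** (-b)

        nodes = np.array([1, 2, 4, 8, 16, 20])
        runtime = np.array([53.0, 27.5, 14.2, 7.4, 3.9, 3.11])   # minutes

        (a, b), _ = curve_fit(inverse_power, nodes, runtime, p0=[50.0, 1.0])
        print(f"runtime ~ {a:.1f} * n^(-{b:.2f}) minutes")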

  4. Portable Computer Technology (PCT) Research and Development Program Phase 2

    NASA Technical Reports Server (NTRS)

    Castillo, Michael; McGuire, Kenyon; Sorgi, Alan

    1995-01-01

    This project report focuses on: (1) the design and development of two Advanced Portable Workstation 2 (APW 2) units. These units incorporate advanced technology features such as a low power Pentium processor, a high resolution color display, National Television Standards Committee (NTSC) video handling capabilities, a Personal Computer Memory Card International Association (PCMCIA) interface, and Small Computer System Interface (SCSI) and ethernet interfaces. (2) the use of these units to integrate and demonstrate advanced wireless network and portable video capabilities. (3) the qualification of the APW 2 systems for use in specific experiments aboard the Mir Space Station. A major objective of the PCT Phase 2 program was to help guide future choices in computing platforms and techniques for meeting National Aeronautics and Space Administration (NASA) mission objectives. The focus was on the development of optimal configurations of computing hardware, software applications, and network technologies for use on NASA missions.

  5. TTEthernet for Integrated Spacecraft Networks

    NASA Technical Reports Server (NTRS)

    Loveless, Andrew

    2015-01-01

    Aerospace projects have traditionally employed federated avionics architectures, in which each computer system is designed to perform one specific function (e.g. navigation). There are obvious downsides to this approach, including excessive weight (from so much computing hardware), and inefficient processor utilization (since modern processors are capable of performing multiple tasks). There has therefore been a push for integrated modular avionics (IMA), in which common computing platforms can be leveraged for different purposes. This consolidation of multiple vehicle functions to shared computing platforms can significantly reduce spacecraft cost, weight, and design complexity. However, the application of IMA principles introduces significant challenges, as the data network must accommodate traffic of mixed criticality and performance levels - potentially all related to the same shared computer hardware. Because individual network technologies are rarely so competent, the development of truly integrated network architectures often proves unreasonable. Several different types of networks are utilized - each suited to support a specific vehicle function. Critical functions are typically driven by precise timing loops, requiring networks with strict guarantees regarding message latency (i.e. determinism) and fault-tolerance. Alternatively, non-critical systems generally employ data networks prioritizing flexibility and high performance over reliable operation. Switched Ethernet has seen widespread success filling this role in terrestrial applications. Its high speed, flexibility, and the availability of inexpensive commercial off-the-shelf (COTS) components make it desirable for inclusion in spacecraft platforms. Basic Ethernet configurations have been incorporated into several preexisting aerospace projects, including both the Space Shuttle and International Space Station (ISS). However, classical switched Ethernet cannot provide the high level of network determinism required by real-time spacecraft applications. Even with modern advancements, the uncoordinated (i.e. event-driven) nature of Ethernet communication unavoidably leads to message contention within network switches. The arbitration process used to resolve such conflicts introduces variation in the time it takes for messages to be forwarded. TTEthernet introduces decentralized clock synchronization to switched Ethernet, enabling message transmission according to a time-triggered (TT) paradigm. A network planning tool is used to allocate each device a finite amount of time in which it may transmit a frame. Each time slot is repeated sequentially to form a periodic communication schedule that is then loaded onto each TTEthernet device (e.g. switches and end systems). Each network participant references the synchronized time in order to dispatch messages at predetermined instances. This schedule guarantees that no contention exists between time-triggered Ethernet frames in the network switches, therefore eliminating the need for arbitration (and the timing variation it causes). Besides time-triggered messaging, TTEthernet networks may provide two additional traffic classes to support communication of different criticality levels. In the rate-constrained (RC) traffic class, the frame payload size and rate of transmission along each communication channel are limited to predetermined maximums.
The network switches can therefore be configured to accommodate the known worst-case traffic pattern, and buffer overflows can be eliminated. The best-effort (BE) traffic class behaves akin to classical Ethernet. No guarantees are provided regarding transmission latency or successful message delivery. TTEthernet coordinates transmission of all three traffic classes over the same physical connections, therefore accommodating the full spectrum of traffic criticality levels required in IMA architectures. Common computing platforms (e.g. LRUs) can share networking resources in such a way that failures in non-critical systems (using BE or RC communication modes) cannot impact flight-critical functions (using TT communication). Furthermore, TTEthernet hardware (e.g. switches, cabling) can be shared by both TTEthernet and classical Ethernet traffic.
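
    The scheduling idea described above can be caricatured in a few lines: each time-triggered flow receives a fixed, non-overlapping dispatch offset inside a repeating cycle, so frames never contend in the switches. Real TTEthernet network planning tools solve a much richer constraint problem; the cycle length, slot size, and flow names below are arbitrary.

        # Toy sketch of building a periodic time-triggered dispatch schedule.
        CYCLE_US = 10000          # schedule cycle length in microseconds
        SLOT_US = 250             # time reserved per time-triggered frame

        def build_schedule(flows):
            """Assign each flow a non-overlapping dispatch offset within the cycle."""
            schedule = {}
            offset = 0
            for flow in flows:
                if offset + SLOT_US > CYCLE_US:
                    raise ValueError("cycle too short for the requested traffic")
                schedule[flow] = offset
                offset += SLOT_US
            return schedule

        print(build_schedule(["nav_data", "fdir_status", "thermal_ctrl"]))
        # e.g. {'nav_data': 0, 'fdir_status': 250, 'thermal_ctrl': 500}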

  6. Research and Application of Autodesk Fusion360 in Industrial Design

    NASA Astrophysics Data System (ADS)

    Song, P. P.; Qi, Y. M.; Cai, D. C.

    2018-05-01

    In 2016, Autodesk introduced Fusion 360, a product integrating industrial design, structural design, mechanical simulation, and CAM into a design platform that supports collaboration and sharing both across platforms and via the cloud. In previous products, design and manufacturing used to be isolated. In the course of design, research, and development, communication between designers and engineers passed through different software products, tool commands, and even industry terms, and it was also difficult to reconcile design intent with machining strategies. Such a cumbersome product design and R&D process naturally leads to a noticeable gap between the design model and the actual product. A complete product development process tends to cover several major areas, such as industrial design, mechanical design, rendering and animation, computer-aided engineering (CAE), and computer-aided manufacturing (CAM). Fusion 360 solves the technical problems of cross-platform data exchange, enables effective control of cross-regional collaboration, and breaks down the barriers between art and manufacturing and between design and processing. The "Eco-development of Fusion 360 Industrial Chain" is both a significant means for and an inevitable trend toward innovation by manufacturers and industrial designers in China.

  7. Analytical investigation of the dynamics of tethered constellations in earth orbit

    NASA Technical Reports Server (NTRS)

    Lorenzini, Enrico C.; Gullahorn, Gordon E.; Estes, Robert D.

    1988-01-01

    This Quarterly Report on Tethering in Earth Orbit deals with three topics: (1) Investigation of the propagation of longitudinal and transverse waves along the upper tether. Specifically, the upper tether is modeled as three massive platforms connected by two perfectly elastic continua (tether segments). The tether attachment point to the station is assumed to vibrate both longitudinally and transversely at a given frequency. Longitudinal and transverse waves propagate along the tethers affecting the acceleration levels at the elevator and at the upper platform. The displacement and acceleration frequency-response functions at the elevator and at the upper platform are computed for both longitudinal and transverse waves. An analysis to optimize the damping time of the longitudinal dampers is also carried out in order to select optimal parameters. The analytical evaluation of the performance of tuned vs. detuned longitudinal dampers is also part of this analysis. (2) The use of the Shuttle primary Reaction Control System (RCS) thrusters for blowing away a recoiling broken tether is discussed. A microcomputer system was set up to support this operation. (3) Most of the effort in the tether plasma physics study was devoted to software development. A particle simulation code has been integrated into the Macintosh II computer system and will be utilized for studying the physics of hollow cathodes.

  8. Integrated Component-based Data Acquisition Systems for Aerospace Test Facilities

    NASA Technical Reports Server (NTRS)

    Ross, Richard W.

    2001-01-01

    The Multi-Instrument Integrated Data Acquisition System (MIIDAS), developed by the NASA Langley Research Center, uses commercial off-the-shelf (COTS) products, integrated with custom software, to provide a broad range of capabilities at a low cost throughout the system's entire life cycle. MIIDAS combines data acquisition capabilities with online and post-test data reduction computations. COTS products lower purchase and maintenance costs by reducing the level of effort required to meet system requirements. Object-oriented methods are used to enhance modularity, encourage reusability, and to promote adaptability, reducing software development costs. Using only COTS products and custom software supported on multiple platforms reduces the cost of porting the system to other platforms. The post-test data reduction capabilities of MIIDAS have been installed at four aerospace testing facilities at NASA Langley Research Center. The systems installed at these facilities provide a common user interface, reducing the training time required for personnel that work across multiple facilities. The techniques employed by MIIDAS enable NASA to build a system with a lower initial purchase price and reduced sustaining maintenance costs. With MIIDAS, NASA has built a highly flexible next generation data acquisition and reduction system for aerospace test facilities that meets customer expectations.

  9. Study of a Satellite Attitude Control System Using Integrating Gyros as Torque Sources

    NASA Technical Reports Server (NTRS)

    White, John S.; Hansen, Q. Marion

    1961-01-01

    This report considers the use of single-degree-of-freedom integrating gyros as torque sources for precise control of satellite attitude. Some general design criteria are derived and applied to the specific example of the Orbiting Astronomical Observatory. The results of the analytical design are compared with the results of an analog computer study and also with experimental results from a low-friction platform. The steady-state and transient behavior of the system, as determined by the analysis, by the analog study, and by the experimental platform agreed quite well. The results of this study show that systems using integrating gyros for precise satellite attitude control can be designed to have a reasonably rapid and well-damped transient response, as well as very small steady-state errors. Furthermore, it is shown that the gyros act as rate sensors, as well as torque sources, so that no rate stabilization networks are required, and when no error sensor is available, the vehicle is still rate stabilized. Hence, it is shown that a major advantage of a gyro control system is that when the target is occulted, an alternate reference is not required.

  10. Status report of the end-to-end ASKAP software system: towards early science operations

    NASA Astrophysics Data System (ADS)

    Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew

    2016-08-01

    The Australian SKA Pathfinder (ASKAP) is a novel centimetre radio synthesis telescope currently in the commissioning phase and located in the midwest region of Western Australia. It comprises 36 x 12 m diameter reflector antennas, each equipped with state-of-the-art, award-winning Phased Array Feed (PAF) technology. The PAFs provide a wide, 30 square degree field-of-view by forming up to 36 separate dual-polarisation beams at once. This results in a high data rate: 70 TB of correlated visibilities in an 8-hour observation, requiring custom-written, high-performance software running in dedicated High Performance Computing (HPC) facilities. The first six antennas equipped with first-generation PAF technology (Mark I), named the Boolardy Engineering Test Array (BETA), have been in use since 2014 as a platform to test PAF calibration and imaging techniques, and along the way it has been producing some great science results. Commissioning of the ASKAP Array Release 1, that is, the first six antennas with second-generation PAFs (Mark II), is currently under way. An integral part of the instrument is the Central Processor platform hosted at the Pawsey Supercomputing Centre in Perth, which executes custom-written software pipelines, designed specifically to meet the ASKAP imaging requirements of wide field of view and high dynamic range. There are three key hardware components of the Central Processor: the ingest nodes (16 x node cluster), the fast temporary storage (1 PB Lustre file system) and the processing supercomputer (200 TFlop system). This High-Performance Computing (HPC) platform is managed and supported by the Pawsey support team. Due to the limited amount of data generated by BETA and the first ASKAP Array Release, the Central Processor platform has been running in a more "traditional" or user-interactive mode. But this is about to change: integration and verification of the online ingest pipeline starts in early 2016, which is required to support the full 300 MHz bandwidth for Array Release 1; followed by the deployment of the real-time data processing components. In addition to the Central Processor, the first production release of the CSIRO ASKAP Science Data Archive (CASDA) has also been deployed in one of the Pawsey Supercomputing Centre facilities and it is integrated into the end-to-end ASKAP data flow system. This paper describes the current status of the "end-to-end" data flow software system from preparing observations to data acquisition, processing and archiving; and the challenges of integrating an HPC facility as a key part of the instrument. It also shares some lessons learned since the start of integration activities and the challenges ahead in preparation for the start of the Early Science program.

  11. LK Scripting Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The LK scripting language is a simple and fast computer programming language designed for easy integration with existing software to enable automation of tasks. The LK language is used by NREL's System Advisor Model (SAM), the SAM Software Development Kit (SDK), and SolTrace products. LK is easily extensible and adaptable to new software due to its small footprint and is designed to be statically linked into other software. It is written in standard C++, is cross-platform (Windows, Linux, and OSX), and includes optional portions that enable direct integration with graphical user interfaces written in the open source C++ wxWidgets Version 3.0+ toolkit.

  12. Grid-wide neuroimaging data federation in the context of the NeuroLOG project

    PubMed Central

    Michel, Franck; Gaignard, Alban; Ahmad, Farooq; Barillot, Christian; Batrancourt, Bénédicte; Dojat, Michel; Gibaud, Bernard; Girard, Pascal; Godard, David; Kassel, Gilles; Lingrand, Diane; Malandain, Grégoire; Montagnat, Johan; Pélégrini-Issac, Mélanie; Pennec, Xavier; Rojas Balderrama, Javier; Wali, Bacem

    2010-01-01

    Grid technologies are appealing to deal with the challenges raised by computational neurosciences and support multi-centric brain studies. However, core grid middleware hardly copes with the complex neuroimaging data representation and multi-layer data federation needs. Moreover, legacy neuroscience environments need to be preserved and cannot be simply superseded by grid services. This paper describes the NeuroLOG platform design and implementation, shedding light on its Data Management Layer. It addresses the integration of brain image files, associated relational metadata and neuroscience semantic data in a heterogeneous distributed environment, integrating legacy data managers through a mediation layer. PMID:20543431

  13. A Lightweight Remote Parallel Visualization Platform for Interactive Massive Time-varying Climate Data Analysis

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhang, T.; Huang, Q.; Liu, Q.

    2014-12-01

    Today's climate datasets feature large volumes, a high degree of spatiotemporal complexity, and rapid evolution over time. Because visualizing large, distributed climate datasets is computationally intensive, traditional desktop-based visualization applications cannot handle the load. Recently, scientists have developed remote visualization techniques to address the computational issue. Remote visualization techniques usually leverage server-side parallel computing capabilities to perform visualization tasks and deliver visualization results to clients through the network. In this research, we aim to build a remote parallel visualization platform for visualizing and analyzing massive climate data. Our visualization platform was built based on Paraview, which is one of the most popular open source remote visualization and analysis applications. To further enhance the scalability and stability of the platform, we have employed cloud computing techniques to support the deployment of the platform. In this platform, all climate datasets are regular grid data which are stored in NetCDF format. Three types of data access methods are supported in the platform: accessing remote datasets provided by OpenDAP servers, accessing datasets hosted on the web visualization server and accessing local datasets. Regardless of the data access method, all visualization tasks are completed at the server side to reduce the workload of clients. As a proof of concept, we have implemented a set of scientific visualization methods to show the feasibility of the platform. Preliminary results indicate that the framework can address the computational limitations of desktop-based visualization applications.
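
    A client-side session against such a platform might look like the following pvpython-style sketch, assuming a standard ParaView server is reachable; the host, port, and file path are placeholders, and the actual platform wraps these steps behind a web interface.

        # Minimal server-side visualization sketch with ParaView's Python API.
        from paraview.simple import Connect, OpenDataFile, Show, Render

        Connect("viz-server.example.org", 11111)                 # filtering and rendering run on the server
        reader = OpenDataFile("/data/climate/tas_monthly.nc")    # hypothetical NetCDF regular-grid dataset
        Show(reader)
        Render()                                                  # only rendered results travel back to the client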

  14. Resealable, optically accessible, PDMS-free fluidic platform for ex vivo interrogation of pancreatic islets.

    PubMed

    Lenguito, Giovanni; Chaimov, Deborah; Weitz, Jonathan R; Rodriguez-Diaz, Rayner; Rawal, Siddarth A K; Tamayo-Garcia, Alejandro; Caicedo, Alejandro; Stabler, Cherie L; Buchwald, Peter; Agarwal, Ashutosh

    2017-02-28

    We report the design and fabrication of a robust fluidic platform built out of inert plastic materials and micromachined features that promote optimized convective fluid transport. The platform is tested for perfusion interrogation of rodent and human pancreatic islets, dynamic secretion of hormones, concomitant live-cell imaging, and optogenetic stimulation of genetically engineered islets. A coupled quantitative computational model of glucose-stimulated insulin secretion and fluid dynamics was first utilized to design device geometries that are optimal for complete perfusion of three-dimensional islets, effective collection of secreted insulin, and minimization of system volumes and associated delays. Fluidic devices were then fabricated through rapid prototyping techniques, such as micromilling and laser engraving, as two interlocking parts from materials that are non-absorbent and inert. Finally, the assembly was tested for performance using both rodent and human islets with multiple assays conducted in parallel, such as dynamic perfusion, staining and optogenetics on standard microscopes, as well as for integration with commercial perfusion machines. The optimized design of convective fluid flows, use of bio-inert and non-absorbent materials, reversible assembly, manual access for loading and unloading of islets, and straightforward integration with commercial imaging and fluid handling systems proved to be critical for perfusion assays, and particularly suited for time-resolved optogenetics studies.

  15. The Osseus platform: a prototype for advanced web-based distributed simulation

    NASA Astrophysics Data System (ADS)

    Franceschini, Derrick; Riecken, Mark

    2016-05-01

    Recent technological advances in web-based distributed computing and database technology have made possible a deeper and more transparent integration of some modeling and simulation applications. Despite these advances towards true integration of capabilities, disparate systems, architectures, and protocols will remain in the inventory for some time to come. These disparities present interoperability challenges for distributed modeling and simulation whether the application is training, experimentation, or analysis. Traditional approaches call for building gateways to bridge between disparate protocols and retaining interoperability specialists. Challenges in reconciling data models also persist. These challenges and their traditional mitigation approaches directly contribute to higher costs, schedule delays, and frustration for the end users. Osseus is a prototype software platform originally funded as a research project by the Defense Modeling & Simulation Coordination Office (DMSCO) to examine interoperability alternatives using modern, web-based technology and taking inspiration from the commercial sector. Osseus provides tools and services for nonexpert users to connect simulations, targeting the time and skillset needed to successfully connect disparate systems. The Osseus platform presents a web services interface to allow simulation applications to exchange data using modern techniques efficiently over Local or Wide Area Networks. Further, it provides Service Oriented Architecture capabilities such that finer granularity components such as individual models can contribute to simulation with minimal effort.

  16. Advances in the TRIDEC Cloud

    NASA Astrophysics Data System (ADS)

    Hammitzsch, Martin; Spazier, Johannes; Reißland, Sven

    2016-04-01

    The TRIDEC Cloud is a platform that merges several complementary cloud-based services for instant tsunami propagation calculations and automated background computation with graphics processing units (GPU), for web-mapping of hazard specific geospatial data, and for serving relevant functionality to handle, share, and communicate threat specific information in a collaborative and distributed environment. The platform offers a modern web-based graphical user interface so that operators in warning centres and stakeholders of other involved parties (e.g. CPAs, ministries) just need a standard web browser to access a full-fledged early warning and information system with unique interactive features such as Cloud Messages and Shared Maps. Furthermore, the TRIDEC Cloud can be accessed in different modes, e.g. the monitoring mode, which provides important functionality required to act in a real event, and the exercise-and-training mode, which enables training and exercises with virtual scenarios re-played by a scenario player. The software system architecture and open interfaces facilitate global coverage so that the system is applicable for any region in the world and allow the integration of different sensor systems as well as the integration of other hazard types and use cases different to tsunami early warning. Current advances of the TRIDEC Cloud platform will be summarized in this presentation.

  17. Low-Loss Photonic Reservoir Computing with Multimode Photonic Integrated Circuits.

    PubMed

    Katumba, Andrew; Heyvaert, Jelle; Schneider, Bendix; Uvin, Sarah; Dambre, Joni; Bienstman, Peter

    2018-02-08

    We present a numerical study of a passive integrated photonics reservoir computing platform based on multimodal Y-junctions. We propose a novel design of this junction where the level of adiabaticity is carefully tailored to capture the radiation loss in higher-order modes, while at the same time providing additional mode mixing that increases the richness of the reservoir dynamics. With this design, we report an overall average combination efficiency of 61% compared to the standard 50% for the single-mode case. We demonstrate that with this design, much more power is able to reach the distant nodes of the reservoir, leading to increased scaling prospects. We use the example of a header recognition task to confirm that such a reservoir can be used for bit-level processing tasks. The design itself is CMOS-compatible and can be fabricated through the known standard fabrication procedures.
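
    Independently of the photonic hardware, reservoir computing of this kind trains only a linear readout on the detected node states; the sketch below uses ridge regression on random stand-in data and is not derived from the paper's simulations.

        # Generic reservoir-computing readout: ridge regression on node states.
        import numpy as np

        rng = np.random.default_rng(0)
        states = rng.normal(size=(1000, 32))      # detected responses: samples x reservoir nodes
        targets = rng.integers(0, 2, size=1000)   # e.g. header-recognition labels (stand-in data)

        lam = 1e-2                                 # ridge regularisation strength
        X = np.hstack([states, np.ones((states.shape[0], 1))])   # append a bias column
        W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ targets)

        predictions = (X @ W > 0.5).astype(int)
        print("training accuracy:", (predictions == targets).mean())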

  18. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    PubMed

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect network in high performance computing systems. The proposed switch exploits optical wavelength parallelism and wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.
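
    The wavelength-routing property the switch relies on is often idealised as a cyclic mapping from (input port, wavelength) to output port, so contention is resolved by wavelength selection rather than arbitration; the modulo form below is that common idealisation, and a fabricated device's exact port mapping may differ.

        # Idealised cyclic routing of an N x N arrayed waveguide grating router.
        N = 8   # 8 x 8 AWGR, matching the prototype scale

        def output_port(input_port, wavelength_channel):
            return (input_port + wavelength_channel) % N

        # A node at input port 2 can reach any output by choosing one of N wavelengths:
        print({ch: output_port(2, ch) for ch in range(N)})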

  19. Consolidation of cloud computing in ATLAS

    NASA Astrophysics Data System (ADS)

    Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration

    2017-10-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

  20. Development of a Computer-Assisted Instrumentation Curriculum for Physics Students: Using LabVIEW and Arduino Platform

    NASA Astrophysics Data System (ADS)

    Kuan, Wen-Hsuan; Tseng, Chi-Hung; Chen, Sufen; Wong, Ching-Chang

    2016-06-01

    We propose an integrated curriculum to establish essential abilities of computer programming for the freshmen of a physics department. The implementation of the graphical-based interfaces from Scratch to LabVIEW then to LabVIEW for Arduino in the curriculum `Computer-Assisted Instrumentation in the Design of Physics Laboratories' brings rigorous algorithm and syntax protocols together with imagination, communication, scientific applications and experimental innovation. The effectiveness of the curriculum was evaluated via statistical analysis of questionnaires, interview responses, the increase in student numbers majoring in physics, and performance in a competition. The results provide quantitative support that the curriculum removed huge barriers to programming which occur in text-based environments, helped students gain knowledge of programming and instrumentation, and increased the students' confidence and motivation to learn physics and computer languages.

  1. A service platform architecture design towards a light integration of heterogeneous systems in the wellbeing domain.

    PubMed

    Yang, Yaojin; Ahtinen, Aino; Lahteenmaki, Jaakko; Nyman, Petri; Paajanen, Henrik; Peltoniemi, Teijo; Quiroz, Carlos

    2007-01-01

    System integration is one of the major challenges in building wellbeing- or healthcare-related information systems. In this paper, we share our experiences of designing a service platform, the Nuadu service platform, for providing integrated services in occupational health promotion and health risk management through two heterogeneous systems. Our design aims for a light integration covering the layers from data through service up to presentation, while maintaining the integrity of the underlying systems.

  2. S-Genius, a universal software platform with versatile inverse problem resolution for scatterometry

    NASA Astrophysics Data System (ADS)

    Fuard, David; Troscompt, Nicolas; El Kalyoubi, Ismael; Soulan, Sébastien; Besacier, Maxime

    2013-05-01

    S-Genius is a new universal scatterometry platform, which gathers all the LTM-CNRS know-how regarding the rigorous electromagnetic computation and several inverse problem solver solutions. This software platform is built to be a user-friendly, light, swift, accurate, user-oriented scatterometry tool, compatible with any ellipsometric measurements to fit and any types of pattern. It aims to combine a set of inverse problem solver capabilities (via adapted Levenberg-Marquardt optimization, Kriging, and Neural Network solutions) that greatly improve the reliability and the velocity of the solution determination. Furthermore, as the model solution is mainly vulnerable to materials' optical properties, S-Genius may be coupled with an innovative determination of material refractive indices. This paper focuses in a little more detail on the modified Levenberg-Marquardt optimization, one of the indirect-method solvers built up in parallel with the overall S-Genius software coding by yours truly. This modified Levenberg-Marquardt optimization corresponds to a Newton algorithm with a damping parameter adapted to the definition domains of the optimized parameters. Currently, S-Genius is technically ready for scientific collaboration, python-powered, multi-platform (Windows/Linux/macOS), multi-core, ready for 2D- (infinite features along the direction perpendicular to the incident plane), conical, and 3D-feature computation, compatible with all kinds of input data from any possible ellipsometers (angle or wavelength resolved) or reflectometers, and widely used in our laboratory for resist trimming studies, characterization of etched features (such as complex stacks), or nano-imprint lithography measurements, for instance. The work on the kriging solver, the neural network solver and material refractive index determination has been done (or is about to be) by other LTM members and is about to be integrated into the S-Genius platform.
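
    For reference, a Levenberg-Marquardt step of the kind referred to above is commonly written as the damped normal equations below (in LaTeX notation); the specific adaptation of the damping parameter to the parameter definition domains described in the abstract is particular to S-Genius and is not reproduced here.

        % Reference form of the damped Levenberg-Marquardt update, where J is the
        % Jacobian of the residual vector r(p) with respect to the parameters p:
        \[
        \bigl(J^{\top} J + \lambda\,\operatorname{diag}(J^{\top} J)\bigr)\,\delta p
            \;=\; -\,J^{\top} r(p),
        \qquad p_{k+1} = p_k + \delta p .
        \]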

  3. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  4. MicroScope in 2017: an expanding and evolving integrated resource for community expertise of microbial genomes.

    PubMed

    Vallenet, David; Calteau, Alexandra; Cruveiller, Stéphane; Gachet, Mathieu; Lajus, Aurélie; Josso, Adrien; Mercier, Jonathan; Renaux, Alexandre; Rollin, Johan; Rouy, Zoe; Roche, David; Scarpelli, Claude; Médigue, Claudine

    2017-01-04

    The annotation of genomes from NGS platforms needs to be automated and fully integrated. However, maintaining consistency and accuracy in genome annotation is a challenging problem because millions of protein database entries are not assigned reliable functions. This shortcoming limits the knowledge that can be extracted from genomes and metabolic models. Launched in 2005, the MicroScope platform (http://www.genoscope.cns.fr/agc/microscope) is an integrative resource that supports systematic and efficient revision of microbial genome annotation, data management and comparative analysis. Effective comparative analysis requires a consistent and complete view of biological data, and therefore, support for reviewing the quality of functional annotation is critical. MicroScope allows users to analyze microbial (meta)genomes together with post-genomic experiment results if any (i.e. transcriptomics, re-sequencing of evolved strains, mutant collections, phenotype data). It combines tools and graphical interfaces to analyze genomes and to perform the expert curation of gene functions in a comparative context. Starting with a short overview of the MicroScope system, this paper focuses on some major improvements of the Web interface, mainly for the submission of genomic data and on original tools and pipelines that have been developed and integrated in the platform: computation of pan-genomes and prediction of biosynthetic gene clusters. Today the resource contains data for more than 6000 microbial genomes, and among the 2700 personal accounts (65% of which are now from foreign countries), 14% of the users are performing expert annotations, on at least a weekly basis, contributing to improve the quality of microbial genome annotations. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
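
    The pan-genome computation mentioned above reduces, at its simplest, to splitting gene families into core and accessory sets from a presence/absence table across genomes; the sketch below illustrates only that idea and is not MicroScope's actual pipeline.

        # Generic core/accessory split from gene-family presence across genomes.
        presence = {
            "famA": {"genome1", "genome2", "genome3"},
            "famB": {"genome1", "genome3"},
            "famC": {"genome2"},
        }
        genomes = {"genome1", "genome2", "genome3"}

        core = {f for f, g in presence.items() if g == genomes}
        accessory = {f for f, g in presence.items() if g != genomes}
        print(f"pan={len(presence)}  core={len(core)}  accessory={len(accessory)}")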

  5. BrainBrowser: distributed, web-based neurological data visualization.

    PubMed

    Sherif, Tarek; Kassis, Nicolas; Rousseau, Marc-Étienne; Adalat, Reza; Evans, Alan C

    2014-01-01

    Recent years have seen massive, distributed datasets become the norm in neuroimaging research, and the methodologies used to analyze them have, in response, become more collaborative and exploratory. Tools and infrastructure are continuously being developed and deployed to facilitate research in this context: grid computation platforms to process the data, distributed data stores to house and share them, high-speed networks to move them around and collaborative, often web-based, platforms to provide access to and sometimes manage the entire system. BrainBrowser is a lightweight, high-performance JavaScript visualization library built to provide easy-to-use, powerful, on-demand visualization of remote datasets in this new research environment. BrainBrowser leverages modern web technologies, such as WebGL, HTML5 and Web Workers, to visualize 3D surface and volumetric neuroimaging data in any modern web browser without requiring any browser plugins. It is thus trivial to integrate BrainBrowser into any web-based platform. BrainBrowser is simple enough to produce a basic web-based visualization in a few lines of code, while at the same time being robust enough to create full-featured visualization applications. BrainBrowser can dynamically load the data required for a given visualization, so no network bandwidth needs to be wasted on data that will not be used. BrainBrowser's integration into the standardized web platform also allows users to consider using 3D data visualization in novel ways, such as for data distribution, data sharing and dynamic online publications. BrainBrowser is already being used in two major online platforms, CBRAIN and LORIS, and has been used to make the 1TB MACACC dataset openly accessible.

  6. BrainBrowser: distributed, web-based neurological data visualization

    PubMed Central

    Sherif, Tarek; Kassis, Nicolas; Rousseau, Marc-Étienne; Adalat, Reza; Evans, Alan C.

    2015-01-01

    Recent years have seen massive, distributed datasets become the norm in neuroimaging research, and the methodologies used to analyze them have, in response, become more collaborative and exploratory. Tools and infrastructure are continuously being developed and deployed to facilitate research in this context: grid computation platforms to process the data, distributed data stores to house and share them, high-speed networks to move them around and collaborative, often web-based, platforms to provide access to and sometimes manage the entire system. BrainBrowser is a lightweight, high-performance JavaScript visualization library built to provide easy-to-use, powerful, on-demand visualization of remote datasets in this new research environment. BrainBrowser leverages modern web technologies, such as WebGL, HTML5 and Web Workers, to visualize 3D surface and volumetric neuroimaging data in any modern web browser without requiring any browser plugins. It is thus trivial to integrate BrainBrowser into any web-based platform. BrainBrowser is simple enough to produce a basic web-based visualization in a few lines of code, while at the same time being robust enough to create full-featured visualization applications. BrainBrowser can dynamically load the data required for a given visualization, so no network bandwidth needs to be wasted on data that will not be used. BrainBrowser's integration into the standardized web platform also allows users to consider using 3D data visualization in novel ways, such as for data distribution, data sharing and dynamic online publications. BrainBrowser is already being used in two major online platforms, CBRAIN and LORIS, and has been used to make the 1TB MACACC dataset openly accessible. PMID:25628562

  7. Formal design and verification of a reliable computing platform for real-time control. Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Divito, Ben L.; Butler, Ricky W.; Caldwell, James L.

    1990-01-01

    A high-level design is presented for a reliable computing platform for real-time control applications. Design tradeoffs and analyses related to the development of the fault-tolerant computing platform are discussed. The architecture is formalized and shown to satisfy a key correctness property. The reliable computing platform uses replicated processors and majority voting to achieve fault tolerance. Under the assumption of a majority of processors working in each frame, it is shown that the replicated system computes the same results as a single processor system not subject to failures. Sufficient conditions are obtained to establish that the replicated system recovers from transient faults within a bounded amount of time. Three different voting schemes are examined and proved to satisfy the bounded recovery time conditions.
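
    The voting step at the heart of the platform can be caricatured as follows; this toy sketch only compares per-frame outputs from the replicas and adopts any value held by a majority, without modelling frame scheduling, transient-fault recovery, or the formal proofs described above.

        # Toy majority voter over replicated per-frame outputs.
        from collections import Counter

        def majority_vote(replica_outputs):
            value, count = Counter(replica_outputs).most_common(1)[0]
            if count > len(replica_outputs) // 2:
                return value
            raise RuntimeError("no majority: too many faulty replicas this frame")

        print(majority_vote([42, 42, 17]))   # -> 42 despite one faulty replica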

  8. The Benefits and Complexities of Operating Geographic Information Systems (GIS) in a High Performance Computing (HPC) Environment

    NASA Astrophysics Data System (ADS)

    Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, to include NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting edge infrastructure, to include an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis - thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. Additionally, key applications and scientific analyses will be explained, to include the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the NASA GLC Viewer discovery and analysis tool, the DigitalGlobe/NGA Data Discovery Tool, the NASA Disaster Response Group Mapping Platform (https://maps.disasters.nasa.gov), and support for NASA's Arctic - Boreal Vulnerability Experiment (ABoVE).

  9. An evolving computational platform for biological mass spectrometry: workflows, statistics and data mining with MASSyPup64.

    PubMed

    Winkler, Robert

    2015-01-01

    In biological mass spectrometry, crude instrumental data need to be converted into meaningful theoretical models. Several data processing and data evaluation steps are required to come to the final results. These operations are often difficult to reproduce, because of overly specific computing platforms. This effect, known as 'workflow decay', can be diminished by using a standardized informatic infrastructure. Thus, we compiled an integrated platform, which contains ready-to-use tools and workflows for mass spectrometry data analysis. Apart from general unit operations, such as peak picking and identification of proteins and metabolites, we put a strong emphasis on the statistical validation of results and Data Mining. MASSyPup64 includes, e.g., the OpenMS/TOPPAS framework, the Trans-Proteomic-Pipeline programs, the ProteoWizard tools, X!Tandem, Comet and SpiderMass. The statistical computing language R is installed with packages for MS data analyses, such as XCMS/metaXCMS and MetabR. The R package Rattle provides user-friendly access to multiple Data Mining methods. Further, we added the non-conventional spreadsheet program teapot for editing large data sets and a command line tool for transposing large matrices. Individual programs, console commands and modules can be integrated using the Workflow Management System (WMS) taverna. We explain the useful combination of the tools by practical examples: (1) A workflow for protein identification and validation, with subsequent Association Analysis of peptides, (2) Cluster analysis and Data Mining in targeted Metabolomics, and (3) Raw data processing, Data Mining and identification of metabolites in untargeted Metabolomics. Association Analyses reveal relationships between variables across different sample sets. We present their application for finding co-occurring peptides, which can be used for targeted proteomics, the discovery of alternative biomarkers and protein-protein interactions. Data Mining-derived models displayed a higher robustness and accuracy for classifying sample groups in targeted Metabolomics than cluster analyses. Random Forest models not only provide predictive models, which can be deployed for new data sets, but also the variable importance. We demonstrate that the latter is especially useful for tracking down significant signals and affected pathways in untargeted Metabolomics. Thus, Random Forest modeling supports the unbiased search for relevant biological features in Metabolomics. Our results clearly manifest the importance of Data Mining methods to disclose non-obvious information in biological mass spectrometry. The application of a Workflow Management System and the integration of all required programs and data in a consistent platform makes the presented data analysis strategies reproducible for non-expert users. The simple remastering process and the Open Source licenses of MASSyPup64 (http://www.bioprocess.org/massypup/) enable the continuous improvement of the system.
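
    The Random Forest use described above, a predictive model plus variable importances, can be sketched with scikit-learn on synthetic data; MASSyPup64 itself drives this kind of analysis through R/Rattle and taverna workflows, so the snippet is illustrative only.

        # Schematic Random Forest classification with variable importances.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 20))        # 60 samples x 20 metabolite features (synthetic)
        y = np.repeat([0, 1], 30)            # two sample groups
        X[y == 1, 3] += 2.0                  # make feature 3 discriminative

        model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
        print("most important feature:", int(np.argmax(model.feature_importances_)))   # expect 3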

  10. CAD-centric Computation Management System for a Virtual TBM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakanth Munipalli; K.Y. Szema; P.Y. Huang

    HyPerComp Inc., in research collaboration with TEXCEL, has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor and plasma facing components in a fusion environment. Physical phenomena to be considered in a VTBM will include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics and electromagnetics. We seek to integrate well-established (third-party) simulation software in the various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes, which are different for each problem), the VTBM will have a well-developed CAD interface, governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation), and CAD-based data interpolation. In Phase-I, we built the CAD-hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer) and the regeneration of CAD models based upon computed deflections are among the other highlights of Phase-I activity.

  11. Mapping Depositional Facies on Great Bahama Bank: An Integration of Groundtruthing and Remote Sensing Methods

    NASA Astrophysics Data System (ADS)

    Hariss, M.; Purkis, S.; Ellis, J. M.; Swart, P. K.; Reijmer, J.

    2013-12-01

    Great Bahama Bank (GBB) has been used in many models to illustrate depositional facies variation across flat-topped, isolated carbonate platforms. Such models have served as subsurface analogs at a variety of scales. In this presentation we have integrated Landsat TM imagery, a refined bathymetric digital elevation model, and seafloor sample data compiled into ArcGIS and analyzed with eCognition to develop a depositional facies map that is more robust than previous versions. For the portion of the GBB lying to the west of Andros Island, the facies map was generated by pairing an extensive set of GPS-constrained field observations and samples (n=275) (Reijmer et al., 2009, IAS Spec Pub 41) with computer and manual interpretation of the Landsat imagery. For the remainder of the platform, which lacked such rigorous ground-control, the Landsat imagery was segmented into lithotopes - interpreted to be distinct bodies of uniform sediment - using a combination of edge detection, spectral and textural analysis, and manual editing. A map was then developed by assigning lithotopes to facies classes on the basis of lessons derived from the portion of the platform for which we had rigorous conditioning. The new analysis reveals that GBB is essentially a very grainy platform with muddier accumulations only in the lee of substantial island barriers; in this regard Andros Island, which is the largest island on GBB, exerts a direct control over the muddiest portion of GBB. Mudstones, wackestones, and mud-rich packstones cover 7%, 6%, and 15%, respectively, of the GBB platform top. By contrast, mud-poor packstones, grainstones, and rudstones account for 19%, 44%, and 3%, respectively. Of the 44% of the platform-top classified as grainstone, only 4% is composed of 'high-energy' deposits characterized by the development of sandbar complexes. The diversity and size of facies bodies is broadly the same on the eastern and western limb of the GBB platform, though the narrower eastern limb, the New Providence Platform, hosts a higher prevalence of high energy grainstones. The most abrupt lateral facies changes are observed leeward of islands, areas which also hold the highest diversity in facies type.

  12. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads.

    PubMed

    Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-05-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.

  13. Bringing education to your virtual doorstep

    NASA Astrophysics Data System (ADS)

    Kaurov, Vitaliy

    2013-03-01

    We currently witness significant migration of academic resources towards online CMS, social networking, and high-end computerized education. This happens for traditional academic programs as well as for outreach initiatives. The talk will go over a set of innovative integrated technologies, many of which are free. These were developed by Wolfram Research in order to facilitate and enhance the learning process in mathematical and physical sciences. Topics include: cloud computing with Mathematica Online; natural language programming; interactive educational resources and web publishing at the Wolfram Demonstrations Project; the computational knowledge engine Wolfram Alpha; Computable Document Format (CDF) and self-publishing with interactive e-books; course assistant apps for mobile platforms. We will also discuss outreach programs where such technologies are extensively used, such as the Wolfram Science Summer School and the Mathematica Summer Camp.

  14. MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control

    NASA Astrophysics Data System (ADS)

    Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming

    2017-09-01

    The increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches under dynamic production conditions. An integrated approach combining Bayesian networks and big data analytics for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the high-volume, high-variety quality-related data generated during manufacturing can be processed. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce step. Relying on the ability of Bayesian networks to handle dynamic and uncertain problems and on the parallel computing power of MapReduce, a Bayesian network of factors influencing quality is built from prior probability distributions and updated with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed scales almost linearly with the number of computing nodes. The results also show that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and making intelligent decisions for precision problem solving. The integration of big data analytics and the Bayesian network method offers a whole new perspective on manufacturing quality control.
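
    The map/reduce split described above can be illustrated with a toy example. The sketch below uses plain Python rather than Hadoop, and simple conditional-probability counting stands in for the Bayesian-network learning embedded in the Reduce step; the quality-record fields are hypothetical.

        # Toy map/reduce split: mappers emit (factor, outcome) pairs from quality
        # records; the reducer estimates P(defect | factor). Fields are made up.
        from collections import defaultdict

        records = [
            {"machine": "M1", "shift": "night", "defect": 1},
            {"machine": "M1", "shift": "day",   "defect": 0},
            {"machine": "M2", "shift": "night", "defect": 0},
            {"machine": "M2", "shift": "night", "defect": 1},
        ]

        def map_phase(record):
            for factor in ("machine", "shift"):
                yield (factor, record[factor]), record["defect"]

        def reduce_phase(pairs):
            counts = defaultdict(lambda: [0, 0])          # key -> [defects, total]
            for key, defect in pairs:
                counts[key][0] += defect
                counts[key][1] += 1
            return {key: d / n for key, (d, n) in counts.items()}

        pairs = (kv for rec in records for kv in map_phase(rec))
        print(reduce_phase(pairs))   # e.g. P(defect | shift=night)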

  15. Linear and passive silicon diodes, isolators, and logic gates

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Yuan

    2013-12-01

    Silicon photonic integrated devices and circuits offer a promising means to revolutionize information processing and computing technologies. One important reason is that these devices are compatible with conventional complementary metal oxide semiconductor (CMOS) processing technology, which dominates the current microelectronics industry. Yet, the dream of building optical computers cannot be realized without breakthroughs in several key elements, including optical diodes, isolators, and logic gates with low power, high signal contrast, and large bandwidth. Photonic crystals have great power to mold the flow of light at the micrometer/nanometer scale and are a promising platform for optical integration. In this paper we present our recent efforts in the design, fabrication, and characterization of ultracompact, linear, passive on-chip optical diodes, isolators and logic gates based on silicon two-dimensional photonic crystal slabs. Both simulation and experimental results show the high performance of these newly designed devices. These linear and passive silicon devices have the unique properties of a small footprint, low power requirements, large bandwidth, fast response speed, ease of fabrication, and compatibility with CMOS technology. Further improving their performance would open a road towards photonic logic and optical computing and help to construct nanophotonic on-chip processor architectures for future optical computers.

  16. TRIDEC Cloud - a Web-based Platform for Tsunami Early Warning tested with NEAMWave14 Scenarios

    NASA Astrophysics Data System (ADS)

    Hammitzsch, Martin; Spazier, Johannes; Reißland, Sven; Necmioglu, Ocal; Comoglu, Mustafa; Ozer Sozdinler, Ceren; Carrilho, Fernando; Wächter, Joachim

    2015-04-01

    In times of cloud computing and ubiquitous computing the use of concepts and paradigms introduced by information and communications technology (ICT) have to be considered even for early warning systems (EWS). Based on the experiences and the knowledge gained in research projects new technologies are exploited to implement a cloud-based and web-based platform - the TRIDEC Cloud - to open up new prospects for EWS. The platform in its current version addresses tsunami early warning and mitigation. It merges several complementary external and in-house cloud-based services for instant tsunami propagation calculations and automated background computation with graphics processing units (GPU), for web-mapping of hazard specific geospatial data, and for serving relevant functionality to handle, share, and communicate threat specific information in a collaborative and distributed environment. The TRIDEC Cloud can be accessed in two different modes, the monitoring mode and the exercise-and-training mode. The monitoring mode provides important functionality required to act in a real event. So far, the monitoring mode integrates historic and real-time sea level data and latest earthquake information. The integration of sources is supported by a simple and secure interface. The exercise and training mode enables training and exercises with virtual scenarios. This mode disconnects real world systems and connects with a virtual environment that receives virtual earthquake information and virtual sea level data re-played by a scenario player. Thus operators and other stakeholders are able to train skills and prepare for real events and large exercises. The GFZ German Research Centre for Geosciences (GFZ), the Kandilli Observatory and Earthquake Research Institute (KOERI), and the Portuguese Institute for the Sea and Atmosphere (IPMA) have used the opportunity provided by NEAMWave14 to test the TRIDEC Cloud as a collaborative activity based on previous partnership and commitments at the European scale. The TRIDEC Cloud has not been involved officially in Part B of the NEAMWave14 scenarios. However, the scenarios have been used by GFZ, KOERI, and IPMA for testing in exercise runs on October 27-28, 2014. Additionally, the Greek NEAMWave14 scenario has been tested in an exercise run by GFZ only on October 29, 2014 (see ICG/NEAMTWS-XI/13). The exercise runs demonstrated that operators in warning centres and stakeholders of other involved parties just need a standard web browser to access a full-fledged TEWS. The integration of GPU accelerated tsunami simulation computations have been an integral part to foster early warning with on-demand tsunami predictions based on actual source parameters. Thus tsunami travel times, estimated times of arrival and estimated wave heights are available immediately for visualization and for further analysis and processing. The generation of warning messages is based on internationally agreed message structures and includes static and dynamic information based on earthquake information, instant computations of tsunami simulations, and actual measurements. Generated messages are served for review, modification, and addressing in one simple form for dissemination via Cloud Messages, Shared Maps, e-mail, FTP/GTS, SMS, and FAX. Cloud Messages and Shared Maps are complementary channels and integrate interactive event and simulation data. Thus recipients are enabled to interact dynamically with a map and diagrams beyond traditional text information.

  17. Cyber-workstation for computational neuroscience.

    PubMed

    Digiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C; Fortes, Jose; Sanchez, Justin C

    2010-01-01

    A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.
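
    The abstract cites a recursive least-squares regressor as an example model block. A minimal, generic RLS update is sketched below with NumPy; it is a textbook formulation for illustration only, not the CW middleware or its block-diagram interface.

        # Minimal recursive least-squares (RLS) regressor sketch (textbook update).
        import numpy as np

        class RLS:
            def __init__(self, n_features, lam=0.99, delta=100.0):
                self.w = np.zeros(n_features)          # weight estimate
                self.P = np.eye(n_features) * delta    # inverse correlation matrix
                self.lam = lam                         # forgetting factor

            def update(self, x, d):
                """One sample: features x (e.g. binned spike counts), target d."""
                Px = self.P @ x
                k = Px / (self.lam + x @ Px)           # gain vector
                e = d - self.w @ x                     # a-priori error
                self.w += k * e
                self.P = (self.P - np.outer(k, Px)) / self.lam
                return self.w @ x                      # a-posteriori prediction

        rls = RLS(n_features=8)
        rng = np.random.default_rng(1)
        for _ in range(200):
            x = rng.normal(size=8)
            rls.update(x, x @ (np.arange(8) * 0.1))    # synthetic target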

  18. Cyber-Workstation for Computational Neuroscience

    PubMed Central

    DiGiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C.; Fortes, Jose; Sanchez, Justin C.

    2009-01-01

    A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface. PMID:20126436

  19. Development of CCSDS DCT to Support Spacecraft Dynamic Events

    NASA Technical Reports Server (NTRS)

    Sidhwa, Anahita F

    2011-01-01

    This report discusses the development of the Consultative Committee for Space Data Systems (CCSDS) Design Control Table (DCT) to support spacecraft dynamic events. The CCSDS Design Control Table is a versatile link calculation tool for analyzing different kinds of radio frequency links. It started out as an Excel-based program, and is now evolving into a Mathematica-based link analysis tool. The Mathematica platform offers a rich set of advanced analysis capabilities, and can be easily extended to a web-based architecture. Last year the CCSDS DCT uplink, downlink, two-way, and ranging models were developed, as well as the corresponding input and output interfaces. Another significant accomplishment is the integration of the NAIF SPICE library into the Mathematica computation platform.
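
    To illustrate the kind of link calculation such a Design Control Table tabulates, the sketch below computes free-space path loss and received power for a simplified one-way link. It is written in Python for illustration (the DCT itself is Excel/Mathematica-based) and the numbers are purely illustrative.

        # Simplified one-way RF link calculation (illustrative values only).
        import math

        def free_space_path_loss_db(distance_m, freq_hz):
            c = 299_792_458.0
            return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

        eirp_dbw   = 20.0        # transmitter EIRP
        rx_gain_db = 40.0        # receive antenna gain
        distance_m = 4.0e8       # roughly lunar distance
        freq_hz    = 8.4e9       # X-band downlink

        fspl = free_space_path_loss_db(distance_m, freq_hz)
        received_dbw = eirp_dbw - fspl + rx_gain_db
        print(f"FSPL = {fspl:.1f} dB, received power = {received_dbw:.1f} dBW")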

  20. RayPlus: a Web-Based Platform for Medical Image Processing.

    PubMed

    Yuan, Rong; Luo, Ming; Sun, Zhi; Shi, Shuyue; Xiao, Peng; Xie, Qingguo

    2017-04-01

    Medical images can provide valuable information for preclinical research, clinical diagnosis, and treatment. With the widespread use of digital medical imaging, many researchers are developing medical image processing algorithms and systems to deliver better results to the clinical community, including accurate clinical parameters and processed images derived from the original images. In this paper, we propose a web-based platform to present and process medical images. Using Internet and modern database technologies, authorized users can easily access medical images and run their processing workflows on powerful server-side computing resources without any installation. We implement a series of image processing and visualization algorithms in the initial version of RayPlus. The integrated system offers great flexibility and convenience for both the research and clinical communities.

  1. Autonomous integrated GPS/INS navigation experiment for OMV. Phase 1: Feasibility study

    NASA Technical Reports Server (NTRS)

    Upadhyay, Triveni N.; Priovolos, George J.; Rhodehamel, Harley

    1990-01-01

    The phase 1 research focused on the experiment definition. A tightly integrated Global Positioning System/Inertial Navigation System (GPS/INS) navigation filter design was analyzed and was shown, via detailed computer simulation, to provide precise position, velocity, and attitude (alignment) data to support navigation and attitude control requirements of future NASA missions. The application of the integrated filter was also shown to provide the opportunity to calibrate inertial instrument errors which is particularly useful in reducing INS error growth during times of GPS outages. While the Orbital Maneuvering Vehicle (OMV) provides a good target platform for demonstration and for possible flight implementation to provide improved capability, a successful proof-of-concept ground demonstration can be obtained using any simulated mission scenario data, such as Space Transfer Vehicle, Shuttle-C, Space Station.
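
    The core of such a tightly integrated filter is a Kalman update that blends the INS-propagated state with GPS-derived measurements. The following one-dimensional position/velocity sketch illustrates only that generic update, not the flight filter analyzed in the study; all numbers are synthetic.

        # One-dimensional Kalman filter sketch: INS-style prediction corrected by
        # GPS position fixes. Illustrative only; not the GPS/INS flight filter.
        import numpy as np

        x = np.array([0.0, 0.0])            # state: [position, velocity]
        P = np.eye(2) * 10.0                # state covariance
        F = np.array([[1.0, 1.0],           # constant-velocity propagation, dt = 1 s
                      [0.0, 1.0]])
        Q = np.diag([0.01, 0.01])           # process noise (INS drift)
        H = np.array([[1.0, 0.0]])          # GPS measures position only
        R = np.array([[4.0]])               # GPS measurement noise (m^2)

        for z in [1.1, 2.3, 2.9, 4.2]:      # synthetic GPS position fixes
            x = F @ x                       # prediction
            P = F @ P @ F.T + Q
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (np.array([z]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        print(x)                            # fused position/velocity estimate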

  2. Decentralized State Estimation and Remedial Control Action for Minimum Wind Curtailment Using Distributed Computing Platform

    DOE PAGES

    Liu, Ren; Srivastava, Anurag K.; Bakken, David E.; ...

    2017-08-17

    The intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep line flows within limits. A Remedial Action Scheme (RAS) offers a quick control action mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE-118 bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.
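
    At the heart of a linear state estimator is a weighted least-squares solve of the form x = (HᵀWH)⁻¹HᵀWz. The sketch below shows that step on a made-up three-state system; it is not the IEEE-118 bus case or the paper's DLSE implementation.

        # Toy weighted least-squares state estimation step (made-up system).
        import numpy as np

        H = np.array([[1.0,  0.0,  0.0],     # linear measurement model
                      [0.0,  1.0,  0.0],
                      [1.0, -1.0,  0.0],
                      [0.0,  1.0, -1.0]])
        z = np.array([1.02, 0.99, 0.035, 0.012])     # measurements
        W = np.diag([100.0, 100.0, 50.0, 50.0])      # weights = 1 / variance

        x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
        print("estimated states:", x_hat)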

  3. NASA's Integrated Instrument Simulator Suite for Atmospheric Remote Sensing from Spaceborne Platforms (ISSARS) and Its Role for the ACE and GPM Missions

    NASA Technical Reports Server (NTRS)

    Tanelli, Simone; Tao, Wei-Kuo; Hostetler, Chris; Kuo, Kwo-Sen; Matsui, Toshihisa; Jacob, Joseph C.; Niamsuwam, Noppasin; Johnson, Michael P.; Hair, John; Butler, Carolyn

    2011-01-01

    Forward simulation is an indispensable tool for the evaluation of precipitation retrieval algorithms as well as for studying snow/ice microphysics and their radiative properties. The main challenge of the implementation arises due to the size of the problem domain. To overcome this hurdle, assumptions need to be made to simplify complex cloud microphysics. It is important that these assumptions are applied consistently throughout the simulation process. ISSARS addresses this issue by providing a computationally efficient and modular framework that can integrate currently existing models and is also capable of expanding for future development. ISSARS is designed to accommodate the simulation needs of the Aerosol/Clouds/Ecosystems (ACE) mission and the Global Precipitation Measurement (GPM) mission: radars, microwave radiometers, and optical instruments such as lidars and polarimeters. ISSARS's computation is performed in three stages: input reconditioning (IRM), electromagnetic properties (scattering/emission/absorption) calculation (SEAM), and instrument simulation (ISM). The computation is implemented as a web service, while its configuration can be accessed through a web-based interface.

  4. SOCR: Statistics Online Computational Resource

    PubMed Central

    Dinov, Ivo D.

    2011-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build student’s intuition and enhance their learning. PMID:21451741

  5. Role of Open Source Tools and Resources in Virtual Screening for Drug Discovery.

    PubMed

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2015-01-01

    Advances in chemoinformatics research, in parallel with the availability of high-performance computing platforms, have made it easier to handle large-scale, multi-dimensional scientific data for high-throughput drug discovery. In this study we have explored publicly available molecular databases with the help of integrated, open-source-based in-house molecular informatics tools for virtual screening. The virtual screening literature of the past decade has been extensively investigated and thoroughly analyzed to reveal interesting patterns with respect to the drug, target, scaffold and disease space. The review also focuses on integrated chemoinformatics tools that are capable of harvesting chemical data from textual literature and transforming it into truly computable chemical structures; identifying unique fragments and scaffolds from a class of compounds; automatically generating focused virtual libraries; computing molecular descriptors for structure-activity relationship studies; and applying conventional lead-discovery filters along with in-house developed exhaustive PTC (Pharmacophore, Toxicophores and Chemophores) filters and machine learning tools for the design of potential disease-specific inhibitors. A case study on kinase inhibitors is provided as an example.
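
    Descriptor calculation and lead-like filtering of the kind mentioned above can be sketched with the open-source RDKit toolkit. The review's in-house tools (e.g. the PTC filters) are separate software, so the snippet below only illustrates the general workflow with a standard Lipinski-style rule-of-five filter.

        # Sketch: compute descriptors and apply a Lipinski-style filter with RDKit.
        from rdkit import Chem
        from rdkit.Chem import Descriptors, Lipinski

        def passes_rule_of_five(smiles):
            mol = Chem.MolFromSmiles(smiles)
            if mol is None:
                return False
            return (Descriptors.MolWt(mol) <= 500
                    and Descriptors.MolLogP(mol) <= 5
                    and Lipinski.NumHDonors(mol) <= 5
                    and Lipinski.NumHAcceptors(mol) <= 10)

        library = ["CC(=O)Oc1ccccc1C(=O)O",                  # aspirin
                   "CCN(CC)CCCC(C)Nc1ccnc2cc(Cl)ccc12"]      # chloroquine
        print([s for s in library if passes_rule_of_five(s)])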

  6. Decentralized State Estimation and Remedial Control Action for Minimum Wind Curtailment Using Distributed Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ren; Srivastava, Anurag K.; Bakken, David E.

    The intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep line flows within limits. A Remedial Action Scheme (RAS) offers a quick control action mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE-118 bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.

  7. Development of a computer model to predict platform station keeping requirements in the Gulf of Mexico using remote sensing data

    NASA Technical Reports Server (NTRS)

    Barber, Bryan; Kahn, Laura; Wong, David

    1990-01-01

    Offshore operations such as oil drilling and radar monitoring require semisubmersible platforms to remain stationary at specific locations in the Gulf of Mexico. Ocean currents, wind, and waves in the Gulf of Mexico tend to move platforms away from their desired locations. A computer model was created to predict the station keeping requirements of a platform. The computer simulation uses remote sensing data from satellites and buoys as input. A background of the project, alternate approaches to the project, and the details of the simulation are presented.

  8. Structurally Integrated Photoluminescent Chemical and Biological Sensors: An Organic Light-Emitting Diode-Based Platform

    NASA Astrophysics Data System (ADS)

    Shinar, J.; Shinar, R.

    The chapter describes the development, advantages, challenges, and potential of an emerging, compact photoluminescence-based sensing platform for chemical and biological analytes, including multiple analytes. In this platform, the excitation source is an array of organic light-emitting device (OLED) pixels that is structurally integrated with the sensing component. Steps towards advanced integration with additionally a thin-film-based photodetector are also described. The performance of the OLED-based sensing platform is examined for gas-phase and dissolved oxygen, glucose, lactate, ethanol, hydrazine, and anthrax lethal factor.

  9. DEEP SPACE: High Resolution VR Platform for Multi-user Interactive Narratives

    NASA Astrophysics Data System (ADS)

    Kuka, Daniela; Elias, Oliver; Martins, Ronald; Lindinger, Christopher; Pramböck, Andreas; Jalsovec, Andreas; Maresch, Pascal; Hörtner, Horst; Brandl, Peter

    DEEP SPACE is a large-scale platform for interactive, stereoscopic and high-resolution content. The spatial and system design of DEEP SPACE faces the constraints of CAVE-like systems with respect to multi-user interactive storytelling. To serve both as a research platform and as a public exhibition space for many people, DEEP SPACE is capable of processing interactive, stereoscopic applications on two projection walls, each 16 by 9 meters in size with a resolution of four times 1080p (4K). The processed applications range from Virtual Reality (VR) environments to 3D movies to computationally intensive 2D productions. In this paper, we describe DEEP SPACE as an experimental VR platform for multi-user interactive storytelling. We focus on the system design relevant to the platform, including the integration of Apple iPod Touch technology as a VR control, and on a case study that demonstrates our research efforts in the field of multi-user interactive storytelling. The described case study, entitled "Papyrate's Island", provides a prototypical scenario of how physical drawings may impact digital narratives. In this special case, DEEP SPACE helps us to explore the hypothesis that drawing, a primordial human creative skill, gives us access to entirely new creative possibilities in the domain of interactive storytelling.

  10. OSCAR: A Compact, Powerful and Versatile On Board Computer Based on LEON3 Core

    NASA Astrophysics Data System (ADS)

    Poupat, Jean-Luc; Lefevre, Aurelien; Koebel, Franck

    2011-08-01

    Satellites are controlled via a platform On-Board Computer (OBC) that manages different parameters (attitude, orbit, modes, temperatures, ...) with respect to the payload mission (telecommunication, Earth observation, scientific mission). The platform OBC is connected to the satellite and to ground control via digital links, and executes on-board software. The main functions of a platform OBC are to provide the satellite flight segment with the following features: processing resources for the flight mission software; TM/TC services and interfaces with the RF communication chain; general communication services with the avionics and payload equipment through an on-board communication bus based on the MIL-1553B standard or CAN; time synchronization and distribution; and a failure-tolerant architecture based on the use of redundant reconfiguration units and redundancy implementation. From a hardware point of view, the OBC groups together many digital functions usually dispatched across numerous chips (processor, co-processor, digital link IPs, ...). In order to reach an ultimate level of integration, Astrium has designed an ASIC gathering all the required digital functions on a single chip: the SCOC3 ASIC. Astrium has developed an OBC based on this SCOC3 ASIC: the OSCAR (Optimized Spacecraft Computer Architecture with Reconfiguration). It is now available off-the-shelf as Astrium's new OBC product family. This paper presents the major innovations introduced by Astrium in SCOC3 and OSCAR, with the objective of saving cost and mass through a solution compatible with any quality class of project, using a single software development environment for the user.

  11. Traffic information computing platform for big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying, E-mail: ztduan@chd.edu.cn; Zheng, Xibin, E-mail: ztduan@chd.edu.cn

    The big data environment creates the data conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. In the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and enables more intelligent and personalized traffic information services for users.

  12. SERS diagnostic platforms, methods and systems microarrays, biosensors and biochips

    DOEpatents

    Vo-Dinh, Tuan [Knoxville, TN]

    2007-09-11

    A Raman integrated sensor system for the detection of targets including biotargets includes at least one sampling platform, at least one receptor probe disposed on the sampling platform, and an integrated circuit detector system communicably connected to the receptor. The sampling platform is preferably a Raman active surface-enhanced scattering (SERS) platform, wherein the Raman sensor is a SERS sensor. The receptors can include at least one protein receptor and at least one nucleic acid receptor.

  13. Faster than Real-Time Dynamic Simulation for Large-Size Power System with Detailed Dynamic Models using High-Performance Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Jin, Shuangshuang; Chen, Yousu

    This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-size power system dynamic simulation. It was developed on the GridPACK™ high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.

  14. Electrolytic synthesis of aqueous aluminum nanoclusters and in situ characterization by femtosecond Raman spectroscopy and computations

    PubMed Central

    Wang, Wei; Liu, Weimin; Chang, I-Ya; Wills, Lindsay A.; Zakharov, Lev N.; Boettcher, Shannon W.; Cheong, Paul Ha-Yeon; Fang, Chong; Keszler, Douglas A.

    2013-01-01

    The selective synthesis and in situ characterization of aqueous Al-containing clusters is a long-standing challenge. We report a newly developed integrated platform that combines (i) a selective, atom-economical, step-economical, scalable synthesis of Al-containing nanoclusters in water via precision electrolysis with strict pH control and (ii) an improved femtosecond stimulated Raman spectroscopic method covering a broad spectral range of ca. 350–1,400 cm−1 with high sensitivity, aided by ab initio computations, to elucidate Al aqueous cluster structures and formation mechanisms in real time. Using this platform, a unique view of flat [Al13(μ3-OH)6(μ2-OH)18(H2O)24](NO3)15 nanocluster formation is observed in water, in which three distinct reaction stages are identified. The initial stage involves the formation of an [Al7(μ3-OH)6(μ2-OH)6(H2O)12]9+ cluster core as an important intermediate toward the flat Al13 aqueous cluster. PMID:24167254

  15. NASA Computational Mobility

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This blue sky study was conducted to study the feasibility and scope of applying the notion of Computational Mobility to potential NASA applications such as the control of multiple robotic platforms. The study was started on July 1st, 2003 and concluded on September 30th, 2004. During the course of that period, four meetings were held for the participants to meet and discuss the concept, its viability, and potential applications. The study involved, at various stages, the following personnel: James Allen (IHMC), Alberto Canas (IHMC), Daniel Cooke (Texas Tech), Kenneth Ford (IHMC - PI), Patrick Hayes (IHMC), Butler Hine (NASA), Robert Morris (NASA), Liam Pedersen (NASA), Jerry Pratt (IHMC), Raul Saavedra (IHMC), Niranjan Suri (IHMC), and Milind Tambe (USC). A white paper describing the notion of a Process Integrated Mechanism (PIM) was generated as a result of this study. The white paper is attached to this report. In addition, a number of presentations were generated during the four meetings, which are included in this report. Finally, an execution platform and a simulation environment were developed, which are available upon request from Niranjan Suri (nsuri@ihmc.us).

  16. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    NASA Astrophysics Data System (ADS)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

    We compare several modifications to the open-source wave optics package, WavePy, intended to improve execution time. Specifically, we compare the relative performance of Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully-featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility for extensive improvement in terms of efficiency compared to a fully-featured workstation.
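
    In the spirit of the FFT substitution described above, the small benchmark below times a 2-D FFT (the core operation of angular-spectrum wave propagation) with two interchangeable backends, NumPy and multi-threaded SciPy. It is a back-of-the-envelope sketch, not the WavePy code or the MKL/OpenCV/GPU builds evaluated in the paper.

        # Sketch: time a 2-D FFT with two interchangeable backends.
        import time
        import numpy as np
        import scipy.fft

        field = np.random.rand(2048, 2048) + 1j * np.random.rand(2048, 2048)

        def bench(name, fft2):
            t0 = time.perf_counter()
            fft2(field)
            print(f"{name:>6}: {time.perf_counter() - t0:.3f} s")

        bench("numpy", np.fft.fft2)
        bench("scipy", lambda x: scipy.fft.fft2(x, workers=-1))   # multi-threaded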

  17. An Interdisciplinary Approach Between Medical Informatics and Social Sciences to Transdisciplinary Requirements Engineering for an Integrated Care Setting.

    PubMed

    Vielhauer, Jan; Böckmann, Britta

    2017-01-01

    Requirements engineering for software products for elderly people faces special challenges in ensuring maximum user acceptance. Within the scope of a research project, a web-based platform and a mobile app are being developed to enable people to live in their own homes as long as possible. This paper presents a method of interdisciplinary requirements engineering developed by a team of social scientists in cooperation with computer scientists.

  18. A Web-based, secure, light weight clinical multimedia data capture and display system.

    PubMed

    Wang, S S; Starren, J

    2000-01-01

    Computer-based patient records are traditionally composed of textual data, and integration of multimedia data has historically been slow. Multimedia data such as images, audio, and video have traditionally been more difficult to handle. An implementation of a clinical system for multimedia data is discussed. The implementation uses Java, Secure Sockets Layer (SSL), and Oracle 8i. The system is built on top of the Internet, so it is architecture-independent, cross-platform, cross-vendor, and secure. Design and implementation issues are discussed.

  19. Integrating electronic patient records into a multi-media clinic-based simulation center using a PC blade platform: a foundation for a new pedagogy in dentistry.

    PubMed

    Taylor, David; Valenza, John A; Spence, James M; Baber, Randolph H

    2007-10-11

    Simulation has been used for many years in dental education, but the educational context is typically a laboratory divorced from the clinical setting, which impairs the transfer of learning. Here we report on a true simulation clinic with multimedia communication from a central teaching station. Each of the 43 fully-functioning student operatories includes a thin-client networked computer with access to an Electronic Patient Record (EPR).

  20. From systems biology to dynamical neuropharmacology: proposal for a new methodology.

    PubMed

    Erdi, P; Kiss, T; Tóth, J; Ujfalussy, B; Zalányi, L

    2006-07-01

    The concepts and methods of systems biology are extended to neuropharmacology in order to test and design drugs for the treatment of neurological and psychiatric disorders. Computational modelling that integrates compartmental neural modelling techniques with detailed kinetic descriptions of the pharmacological modulation of transmitter-receptor interaction is offered as a method to test the electrophysiological and behavioural effects of putative drugs. Furthermore, an inverse method is suggested for controlling a neural system to realise a prescribed temporal pattern. In particular, as an application of the proposed new methodology, a computational platform is offered to analyse the generation and pharmacological modulation of the theta rhythm related to anxiety.

  1. NCI's Transdisciplinary High Performance Scientific Data Platform

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called, the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access to this data, through the NCI supercomputer; a private cloud that supports both domain focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as its future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and additional communities practices, and a foundation for new exploratory developments. To that end, NCI is already participating in numerous current and emerging collaborations internationally including the Earth System Grid Federation (ESGF); Climate and Weather Data from International agencies such as NASA, NOAA, and UK Met Office; Remotely Sensed Satellite Earth Imaging through collaborations through GEOS and CEOS; EU-led Ocean Data Interoperability Platform (ODIP) and Horizon2020 Earth Server2 project; as well as broader data infrastructure community activities such as Research Data Alliance (RDA). Each research community is heavily engaged in international standards such as ISO, OGC and W3C, adopting community-led conventions for data, supporting improved data organisation such as controlled vocabularies, and creating workflows that use mature APIs and data services. NCI is engaging with these communities on NERDIP to ensure that such standards are applied uniformly and tested in practice by working with the variety of data and technologies. This includes benchmarking exemplar cases from individual communities, documenting their use of standards, and evaluating their practical use of the different technologies. Such a process fully establishes the functionality and performance, and is required to safely transition when improvements or rationalisation is required. 
Work is now underway to extend the NERDIP platform for better utilisation in the subsurface geophysical community, including maximizing national uptake, as well as better integration with international science platforms.

  2. Integrating interactive computational modeling in biology curricula.

    PubMed

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  3. Integrated platform and API for electrophysiological data

    PubMed Central

    Sobolev, Andrey; Stoewer, Adrian; Leonhardt, Aljoscha; Rautenberg, Philipp L.; Kellner, Christian J.; Garbers, Christian; Wachtler, Thomas

    2014-01-01

    Recent advancements in technology and methodology have led to growing amounts of increasingly complex neuroscience data recorded from various species, modalities, and levels of study. The rapid data growth has made efficient data access and flexible, machine-readable data annotation a crucial requisite for neuroscientists. Clear and consistent annotation and organization of data is not only an important ingredient for reproducibility of results and re-use of data, but also essential for collaborative research and data sharing. In particular, efficient data management and interoperability requires a unified approach that integrates data and metadata and provides a common way of accessing this information. In this paper we describe GNData, a data management platform for neurophysiological data. GNData provides a storage system based on a data representation that is suitable to organize data and metadata from any electrophysiological experiment, with a functionality exposed via a common application programming interface (API). Data representation and API structure are compatible with existing approaches for data and metadata representation in neurophysiology. The API implementation is based on the Representational State Transfer (REST) pattern, which enables data access integration in software applications and facilitates the development of tools that communicate with the service. Client libraries that interact with the API provide direct data access from computing environments like Matlab or Python, enabling integration of data management into the scientist's experimental or analysis routines. PMID:24795616
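
    Since GNData exposes its functionality through a REST-style API, a typical analysis script talks to it over HTTP. The sketch below shows that pattern with the Python requests library; the base URL, resource paths, and JSON fields are hypothetical placeholders, as the abstract does not specify the actual GNData routes.

        # Hedged sketch of REST-style access from an analysis script.
        # The endpoint and field names are placeholders, not the real GNData API.
        import requests

        BASE = "https://gnode.example.org/api"            # placeholder endpoint
        session = requests.Session()
        session.headers["Authorization"] = "Bearer <token>"

        # list recording blocks for one experiment (hypothetical resource names)
        blocks = session.get(f"{BASE}/electrophysiology/blocks",
                             params={"experiment": "exp-42"}).json()

        for block in blocks:
            # fetch each block's analog signals and hand them to local analysis code
            signals = session.get(
                f"{BASE}/electrophysiology/blocks/{block['id']}/signals").json()
            print(block["id"], len(signals))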

  4. Influenza Virus Database (IVDB): an integrated information resource and analysis platform for influenza virus research.

    PubMed

    Chang, Suhua; Zhang, Jiajie; Liao, Xiaoyun; Zhu, Xinxing; Wang, Dahai; Zhu, Jiang; Feng, Tao; Zhu, Baoli; Gao, George F; Wang, Jian; Yang, Huanming; Yu, Jun; Wang, Jing

    2007-01-01

    Frequent outbreaks of highly pathogenic avian influenza and the increasing data available for comparative analysis require a central database specialized in influenza viruses (IVs). We have established the Influenza Virus Database (IVDB) to integrate information and create an analysis platform for genetic, genomic, and phylogenetic studies of the virus. IVDB hosts complete genome sequences of influenza A virus generated by Beijing Institute of Genomics (BIG) and curates all other published IV sequences after expert annotation. Our Q-Filter system classifies and ranks all nucleotide sequences into seven categories according to sequence content and integrity. IVDB provides a series of tools and viewers for comparative analysis of the viral genomes, genes, genetic polymorphisms and phylogenetic relationships. A search system has been developed for users to retrieve a combination of different data types by setting search options. To facilitate analysis of global viral transmission and evolution, the IV Sequence Distribution Tool (IVDT) has been developed to display the worldwide geographic distribution of chosen viral genotypes and to couple genomic data with epidemiological data. The BLAST, multiple sequence alignment and phylogenetic analysis tools were integrated for online data analysis. Furthermore, IVDB offers instant access to pre-computed alignments and polymorphisms of IV genes and proteins, and presents the results as SNP distribution plots and minor allele distributions. IVDB is publicly available at http://influenza.genomics.org.cn.

  5. Integrated platform and API for electrophysiological data.

    PubMed

    Sobolev, Andrey; Stoewer, Adrian; Leonhardt, Aljoscha; Rautenberg, Philipp L; Kellner, Christian J; Garbers, Christian; Wachtler, Thomas

    2014-01-01

    Recent advancements in technology and methodology have led to growing amounts of increasingly complex neuroscience data recorded from various species, modalities, and levels of study. The rapid data growth has made efficient data access and flexible, machine-readable data annotation a crucial requisite for neuroscientists. Clear and consistent annotation and organization of data is not only an important ingredient for reproducibility of results and re-use of data, but also essential for collaborative research and data sharing. In particular, efficient data management and interoperability requires a unified approach that integrates data and metadata and provides a common way of accessing this information. In this paper we describe GNData, a data management platform for neurophysiological data. GNData provides a storage system based on a data representation that is suitable to organize data and metadata from any electrophysiological experiment, with a functionality exposed via a common application programming interface (API). Data representation and API structure are compatible with existing approaches for data and metadata representation in neurophysiology. The API implementation is based on the Representational State Transfer (REST) pattern, which enables data access integration in software applications and facilitates the development of tools that communicate with the service. Client libraries that interact with the API provide direct data access from computing environments like Matlab or Python, enabling integration of data management into the scientist's experimental or analysis routines.

  6. A Computational Systems Biology Software Platform for Multiscale Modeling and Simulation: Integrating Whole-Body Physiology, Disease Biology, and Molecular Reaction Networks

    PubMed Central

    Eissing, Thomas; Kuepfer, Lars; Becker, Corina; Block, Michael; Coboeken, Katrin; Gaub, Thomas; Goerlitz, Linus; Jaeger, Juergen; Loosen, Roland; Ludewig, Bernd; Meyer, Michaela; Niederalt, Christoph; Sevestre, Michael; Siegmund, Hans-Ulrich; Solodenko, Juri; Thelen, Kirstin; Telle, Ulrich; Weiss, Wolfgang; Wendl, Thomas; Willmann, Stefan; Lippert, Joerg

    2011-01-01

    Today, in silico studies and trial simulations already complement experimental approaches in pharmaceutical R&D and have become indispensable tools for decision making and communication with regulatory agencies. While biology is multiscale by nature, project work, and software tools usually focus on isolated aspects of drug action, such as pharmacokinetics at the organism scale or pharmacodynamic interaction on the molecular level. We present a modeling and simulation software platform consisting of PK-Sim® and MoBi® capable of building and simulating models that integrate across biological scales. A prototypical multiscale model for the progression of a pancreatic tumor and its response to pharmacotherapy is constructed and virtual patients are treated with a prodrug activated by hepatic metabolization. Tumor growth is driven by signal transduction leading to cell cycle transition and proliferation. Free tumor concentrations of the active metabolite inhibit Raf kinase in the signaling cascade and thereby cell cycle progression. In a virtual clinical study, the individual therapeutic outcome of the chemotherapeutic intervention is simulated for a large population with heterogeneous genomic background. Thereby, the platform allows efficient model building and integration of biological knowledge and prior data from all biological scales. Experimental in vitro model systems can be linked with observations in animal experiments and clinical trials. The interplay between patients, diseases, and drugs and topics with high clinical relevance such as the role of pharmacogenomics, drug–drug, or drug–metabolite interactions can be addressed using this mechanistic, insight driven multiscale modeling approach. PMID:21483730

  7. Interactive Computer-Assisted Instruction in Acid-Base Physiology for Mobile Computer Platforms

    ERIC Educational Resources Information Center

    Longmuir, Kenneth J.

    2014-01-01

    In this project, the traditional lecture hall presentation of acid-base physiology in the first-year medical school curriculum was replaced by interactive, computer-assisted instruction designed primarily for the iPad and other mobile computer platforms. Three learning modules were developed, each with ~20 screens of information, on the subjects…

  8. Seqcrawler: biological data indexing and browsing platform.

    PubMed

    Sallou, Olivier; Bretaudeau, Anthony; Roult, Aurelien

    2012-07-24

    Seqcrawler takes its roots in software like SRS or Lucegene. It provides an indexing platform to ease the search of data and meta-data in biological banks, and it can scale to cope with the current flow of data. While many biological bank search tools are available on the Internet, mainly provided by large organizations to search their own data, there is a lack of free and open source solutions for browsing one's own set of data with a flexible query system that can scale from a single computer to a cloud system. A personal index platform helps labs and bioinformaticians to search their meta-data, but also to build a larger information system with custom subsets of data. The software is scalable from a single computer to a cloud-based infrastructure. It has been successfully tested in a private cloud with 3 index shards (pieces of index) hosting ~400 million sequence entries (whole GenBank, UniProt, PDB and others) for a total size of 600 GB in a fault-tolerant (high-availability) architecture. It has also been successfully integrated with software that adds extra meta-data from BLAST results to enhance users' result analysis. Seqcrawler provides a complete open source search and store solution for labs or platforms needing to manage large amounts of data/meta-data with a flexible and customizable web interface. All components (search engine, visualization and data storage), though independent, share a common and coherent data system that can be queried through a simple HTTP interface. The solution scales easily and can also provide a high-availability infrastructure.
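
    The sketch below shows what a query against such a simple HTTP interface could look like from Python; the host, path, and query parameters are hypothetical placeholders and may not match the actual Seqcrawler endpoints.

      # Minimal sketch of a free-text query against an HTTP-exposed index.
      # Host, path, and parameter names are assumptions, not the real Seqcrawler API.
      import requests

      def search_sequences(term, host="http://localhost:8080"):
          """Query the index for entries matching a free-text term."""
          resp = requests.get(f"{host}/search", params={"query": term, "rows": 20}, timeout=30)
          resp.raise_for_status()
          return resp.json()

      if __name__ == "__main__":
          print(search_sequences("hemoglobin"))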

  9. A cloud platform for remote diagnosis of breast cancer in mammography by fusion of machine and human intelligence

    NASA Astrophysics Data System (ADS)

    Jiang, Guodong; Fan, Ming; Li, Lihua

    2016-03-01

    Mammography is the gold standard for breast cancer screening, reducing mortality by about 30%. The application of a computer-aided detection (CAD) system to assist a single radiologist is important to further improve mammographic sensitivity for breast cancer detection. In this study, the design and realization of a prototype remote diagnosis system for mammography based on a cloud platform are proposed. To build this system, technologies including medical image information construction, cloud infrastructure and a human-machine diagnosis model were utilized. Specifically, on one hand, the web platform for remote diagnosis was established with J2EE web technology, and the back-end design was realized with the open-source Hadoop framework. On the other hand, the storage system was built on the Hadoop distributed file system (HDFS), which lets applications over massive data be developed and run easily and exploits the advantages of cloud computing, namely high efficiency, scalability and low cost. In addition, the CAD system was realized through the MapReduce framework. The diagnosis module of this system implements algorithms that fuse machine and human intelligence. Specifically, we combined diagnoses drawn from doctors' experience with traditional CAD results by using a man-machine intelligent fusion model based on Alpha-Integration and a multi-agent algorithm. Finally, the applications of this system at different levels of the platform are also discussed. This diagnosis system will be of great value for balancing health resources, lowering medical expenses and improving diagnostic accuracy in primary medical institutions.
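
    As an illustration of the fusion idea, the sketch below combines a CAD score with radiologists' scores using a weighted alpha-mean, one common form of alpha-integration; the weights, the alpha value, and the score scaling are illustrative assumptions and do not reproduce the paper's trained man-machine fusion model.

      # Minimal sketch of fusing machine and human malignancy scores with a weighted alpha-mean.
      # Weights, alpha, and scores are illustrative assumptions.
      import numpy as np

      def alpha_fuse(scores, weights, alpha=0.0):
          """Combine positive scores in (0, 1] with normalized weights."""
          scores, weights = np.asarray(scores, float), np.asarray(weights, float)
          weights = weights / weights.sum()
          if alpha == 1.0:                       # limit case: weighted geometric mean
              return float(np.exp(np.sum(weights * np.log(scores))))
          f = scores ** ((1.0 - alpha) / 2.0)    # alpha-representation of each score
          return float(np.sum(weights * f) ** (2.0 / (1.0 - alpha)))

      if __name__ == "__main__":
          cad_score, reader_scores = 0.82, [0.6, 0.7]
          fused = alpha_fuse([cad_score] + reader_scores, weights=[0.5, 0.25, 0.25])
          print(f"fused malignancy score: {fused:.3f}")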

  10. A Software Defined Radio Based Airplane Communication Navigation Simulation System

    NASA Astrophysics Data System (ADS)

    He, L.; Zhong, H. T.; Song, D.

    2018-01-01

    Radio communication and navigation systems play an important role in ensuring the safety of civil airplanes in flight. Their function and performance should be tested before these systems are installed on board. Conventionally, a separate transmitter and receiver pair is needed for each system, so the equipment occupies a lot of space and is costly. In this paper, software defined radio technology is applied to design a common-hardware communication and navigation ground simulation system that can host multiple airplane systems with different operating frequencies, such as HF, VHF, VOR, ILS, ADF, etc. We use a broadband analog front-end hardware platform, the universal software radio peripheral (USRP), to transmit/receive signals in different frequency bands. The software is built in LabVIEW on a computer, which interfaces with the USRP through Ethernet and is responsible for communication and navigation signal processing and system control. An integrated testing system is established to perform functional tests and performance verification of the simulated signals, which demonstrates the feasibility of our design. The system is a low-cost, common hardware platform for multiple airplane systems and provides a helpful reference for integrated avionics design.

  11. Bringing your tools to CyVerse Discovery Environment using Docker

    PubMed Central

    Devisetty, Upendra Kumar; Kennedy, Kathleen; Sarando, Paul; Merchant, Nirav; Lyons, Eric

    2016-01-01

    Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse’s Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse’s production environment for public use. This paper will help users bring their tools into the CyVerse Discovery Environment (DE), which not only allows users to integrate their tools with relative ease compared to the earlier method of tool deployment in DE, but also helps users share their apps with collaborators and release them for public use. PMID:27803802

  12. Bringing your tools to CyVerse Discovery Environment using Docker.

    PubMed

    Devisetty, Upendra Kumar; Kennedy, Kathleen; Sarando, Paul; Merchant, Nirav; Lyons, Eric

    2016-01-01

    Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse's Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse's production environment for public use. This paper will help users bring their tools into the CyVerse Discovery Environment (DE), which not only allows users to integrate their tools with relative ease compared to the earlier method of tool deployment in DE, but also helps users share their apps with collaborators and release them for public use.

  13. Gameplay as a Source of Intrinsic Motivation in a Randomized Controlled Trial of Auditory Training for Tinnitus

    PubMed Central

    Hoare, Derek J.; Van Labeke, Nicolas; McCormack, Abby; Sereda, Magdalena; Smith, Sandra; Taher, Hala Al; Kowalkowski, Victoria L.; Sharples, Mike; Hall, Deborah A.

    2014-01-01

    Background Previous studies of frequency discrimination training (FDT) for tinnitus used repetitive task-based training programmes relying on extrinsic factors to motivate participation. Studies reported limited improvement in tinnitus symptoms. Purpose To evaluate FDT exploiting intrinsic motivations by integrating training with computer-gameplay. Methods Sixty participants were randomly assigned to train on either a conventional task-based training, or one of two interactive game-based training platforms over six weeks. Outcomes included assessment of motivation, tinnitus handicap, and performance on tests of attention. Results Participants reported greater intrinsic motivation to train on the interactive game-based platforms, yet compliance of all three groups was similar (∼70%) and changes in self-reported tinnitus severity were not significant. There was no difference between groups in terms of change in tinnitus severity or performance on measures of attention. Conclusion FDT can be integrated within an intrinsically motivating game. Whilst this may improve participant experience, in this instance it did not translate to additional compliance or therapeutic benefit. Trial Registration ClinicalTrials.gov NCT02095262 PMID:25215617

  14. Nonclassical light sources for silicon photonics

    NASA Astrophysics Data System (ADS)

    Bajoni, Daniele; Galli, Matteo

    2017-09-01

    Quantum photonics has recently attracted a lot of attention for its disruptive potential in emerging technologies like quantum cryptography, quantum communication and quantum computing. Driven by the impressive development in nanofabrication technologies and nanoscale engineering, silicon photonics has rapidly become the platform of choice for on-chip integration of high performing photonic devices, now extending their functionalities towards quantum-based applications. Focusing on quantum Information Technology (qIT) as a key application area, we review recent progress in integrated silicon-based sources of nonclassical states of light. We assess the state of the art in this growing field and highlight the challenges that need to be overcome to make quantum photonics a reliable and widespread technology.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin

    The increasing volume of scientific data and the limited scalability and performance of storage systems currently present a significant limitation for the productivity of scientific workflows running on both high-performance computing (HPC) and cloud platforms. Better integration of storage systems and workflow engines is clearly needed to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules—an in-memory data store—with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.

  16. GeoChronos: An On-line Collaborative Platform for Earth Observation Scientists

    NASA Astrophysics Data System (ADS)

    Gamon, J. A.; Kiddle, C.; Curry, R.; Markatchev, N.; Zonta-Pastorello, G., Jr.; Rivard, B.; Sanchez-Azofeifa, G. A.; Simmonds, R.; Tan, T.

    2009-12-01

    Recent advances in cyberinfrastructure are offering new solutions to the growing challenges of managing and sharing large data volumes. Web 2.0 and social networking technologies provide the means for scientists to collaborate and share information more effectively. Cloud computing technologies can provide scientists with transparent and on-demand access to applications served over the Internet in a dynamic and scalable manner. Semantic Web technologies allow for data to be linked together in a manner understandable by machines, enabling greater automation. Combining all of these technologies can enable the creation of very powerful platforms. GeoChronos (http://geochronos.org/), part of a CANARIE Network Enabled Platforms project, is an online collaborative platform that incorporates these technologies to enable members of the earth observation science community to share data and scientific applications and to collaborate more effectively. The GeoChronos portal is built on an open source social networking platform called Elgg. Elgg provides a full set of social networking functionalities similar to Facebook including blogs, tags, media/document sharing, wikis, friends/contacts, groups, discussions, message boards, calendars, status, activity feeds and more. An underlying cloud computing infrastructure enables scientists to access dynamically provisioned applications via the portal for visualizing and analyzing data. Users are able to access and run the applications from any computer that has a Web browser and Internet connectivity and do not need to manage and maintain the applications themselves. Semantic Web technologies, such as the Resource Description Framework (RDF), are being employed for relating and linking together spectral, satellite, meteorological and other data. Social networking functionality plays an integral part in facilitating the sharing of data and applications. Examples of recent GeoChronos users during the early testing phase have included the IAI International Wireless Sensor Networking Summer School at the University of Alberta, and the IAI Tropi-Dry community. Current GeoChronos activities include the development of a web-based spectral library and related analytical and visualization tools, in collaboration with members of the SpecNet community. The GeoChronos portal will be open to all members of the earth observation science community when the project nears completion at the end of 2010.

  17. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922

  18. Low-cost Citizen Science Balloon Platform for Measuring Air Pollutants to Improve Satellite Retrieval Algorithms

    NASA Astrophysics Data System (ADS)

    Potosnak, M. J.; Beck-Winchatz, B.; Ritter, P.

    2016-12-01

    High-altitude balloons (HABs) are an engaging platform for citizen science and formal and informal STEM education. However, the logistics of launching, chasing and recovering a payload on a 1200 g or 1500 g balloon can be daunting for many novice school groups and citizen scientists, and the cost can be prohibitive. In addition, there are many interesting scientific applications that do not require reaching the stratosphere, including measuring atmospheric pollutants in the planetary boundary layer. With a large number of citizen scientist flights, these data can be used to constrain satellite retrieval algorithms. In this poster presentation, we discuss a novel approach based on small (30 g) balloons that are cheap and easy to handle, and low-cost tracking devices (SPOT trackers for hikers) that do not require a radio license. Our scientific goal is to measure air quality in the lower troposphere. For example, particulate matter (PM) is an air pollutant that varies on small spatial scales and has sources in rural areas like biomass burning and farming practices such as tilling. Our HAB platform test flight incorporates an optical PM sensor, an integrated single board computer that records the PM sensor signal in addition to flight parameters (pressure, location and altitude), and a low-cost tracking system. Our goal is for the entire platform to cost less than $500. While the datasets generated by these flights are typically small, integrating a network of flight data from citizen scientists into a form usable for comparison to satellite data will require big data techniques.
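
    A minimal sketch of the onboard logging loop such a platform might run is shown below: sample the PM sensor together with flight parameters and append each record to a CSV file on the single-board computer. The sensor- and GPS-reading functions are hypothetical stubs standing in for real hardware drivers.

      # Minimal sketch of onboard PM + flight-parameter logging to CSV.
      # read_pm_ug_m3() and read_flight_params() are stubs for real hardware drivers.
      import csv, time, random

      def read_pm_ug_m3():
          return random.uniform(5, 25)                 # stub: optical PM reading, ug/m^3

      def read_flight_params():
          return {"pressure_hpa": random.uniform(300, 1013),
                  "lat": 41.88, "lon": -87.63,
                  "alt_m": random.uniform(0, 3000)}    # stub: pressure/GPS readings

      def log_flight(path="flight_log.csv", samples=10, interval_s=1.0):
          with open(path, "a", newline="") as f:
              writer = csv.writer(f)
              for _ in range(samples):
                  p = read_flight_params()
                  writer.writerow([time.time(), read_pm_ug_m3(),
                                   p["pressure_hpa"], p["lat"], p["lon"], p["alt_m"]])
                  time.sleep(interval_s)

      if __name__ == "__main__":
          log_flight(samples=5)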

  19. Cross-calibrating interferon-γ detection by using eletrochemical impedance spectroscopy and paraboloidal mirror enabled surface plasmon resonance interferometer

    NASA Astrophysics Data System (ADS)

    Liu, Meng-Wei; Chang, Hao-Jung; Lee, Shu-sheng; Lee, Chih-Kung

    2016-03-01

    Tuberculosis is a highly contagious disease; the number of latently infected patients worldwide may be as high as one third of the world population. Currently, latent tuberculosis is diagnosed by stimulating T cells to produce the biomarker of tuberculosis, i.e., interferon-γ. In this paper, we developed a paraboloidal mirror enabled surface plasmon resonance (SPR) interferometer that has the potential to also integrate ellipsometry to analyze antibody-antigen reactions. To examine the feasibility of developing a platform for cross-calibrating the performance and detection limit of various bio-detection techniques, an electrochemical impedance spectroscopy (EIS) method was also implemented on a biochip that can be incorporated into this newly developed platform. The microfluidic channel of the biochip was functionalized by coating it with the interferon-γ antibody so as to enhance detection specificity. To facilitate the processing steps needed for using the biochip to detect antigens at vastly different concentrations, a kinematic mount was also developed to guarantee the biochip re-positioning accuracy whenever the biochip was removed and placed back for another round of detection. Alongside EIS, SPR was also used to observe the real-time signals on the computer in order to verify the success of each biochip processing step, such as functionalization, washing, etc. Finally, the EIS results and the optical signals obtained from the newly developed optical detection platform were cross-calibrated. Preliminary experimental results demonstrate the accuracy and performance of the SPR and EIS measurements performed on the newly integrated platform.

  20. Autonomous self-organizing resource manager for multiple networked platforms

    NASA Astrophysics Data System (ADS)

    Smith, James F., III

    2002-08-01

    A fuzzy logic based expert system for resource management has been developed that automatically allocates electronic attack (EA) resources in real-time over many dissimilar autonomous naval platforms defending their group against attackers. The platforms can be very general, e.g., ships, planes, robots, land based facilities, etc. Potential foes the platforms deal with can also be general. This paper provides an overview of the resource manager including the four fuzzy decision trees that make up the resource manager; the fuzzy EA model; genetic algorithm based optimization; co-evolutionary data mining through gaming; and mathematical, computational and hardware based validation. Methods of automatically designing new multi-platform EA techniques are considered. The expert system runs on each defending platform, rendering it an autonomous system requiring no human intervention. There is no commanding platform. Instead the platforms work cooperatively as a function of battlespace geometry; sensor data such as range, bearing, ID, uncertainty measures for sensor output; intelligence reports; etc. Computational experiments will show the defending networked platforms' ability to self-organize. The platforms' ability to self-organize is illustrated through the output of the scenario generator, a software package that automates the underlying data mining problem and creates a computer movie of the platforms' interaction for evaluation.
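
    As a toy illustration of the kind of fuzzy scoring a node of such a decision tree might perform, the sketch below combines triangular membership functions over sensed range and identification uncertainty with a fuzzy AND; the membership breakpoints and the rule itself are invented for illustration and are not taken from the paper.

      # Minimal sketch of a fuzzy rule: priority = AND("threat is close", "ID is confident").
      # Breakpoints and the rule are illustrative assumptions only.
      def tri(x, a, b, c):
          """Triangular membership function peaking at b over [a, c]."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def ea_priority(range_km, id_uncertainty):
          close = tri(range_km, 0.0, 10.0, 40.0)           # "threat is close"
          confident = tri(id_uncertainty, -0.1, 0.0, 0.5)   # "identification is confident"
          return min(close, confident)                      # fuzzy AND of the two premises

      if __name__ == "__main__":
          print(f"priority = {ea_priority(range_km=15.0, id_uncertainty=0.2):.2f}")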

  1. Executable research compendia in geoscience research infrastructures

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel

    2017-04-01

    From generation through analysis and collaboration to communication, scientific research requires the right tools. Scientists create their own software using third party libraries and platforms. Cloud computing, Open Science, public data infrastructures, and Open Source provide scientists with unprecedented opportunities, nowadays often in a field "Computational X" (e.g. computational seismology) or X-informatics (e.g. geoinformatics) [0]. This increases complexity and generates more innovation, e.g. Environmental Research Infrastructures (environmental RIs [1]). Researchers in Computational X write their software relying on both source code (e.g. from https://github.com) and binary libraries (e.g. from package managers such as APT, https://wiki.debian.org/Apt, or CRAN, https://cran.r-project.org/). They download data from domain specific (cf. https://re3data.org) or generic (e.g. https://zenodo.org) data repositories, and deploy computations remotely (e.g. European Open Science Cloud). The results themselves are archived, given persistent identifiers, connected to other works (e.g. using https://orcid.org/), and listed in metadata catalogues. A single researcher, intentionally or not, interacts with all sub-systems of RIs: data acquisition, data access, data processing, data curation, and community support [3]. To preserve computational research, [3] proposes the Executable Research Compendium (ERC), a container format closing the gap of dependency preservation by encapsulating the runtime environment. ERCs and RIs can be integrated for different uses: (i) Coherence: ERC services validate completeness, integrity and results; (ii) Metadata: ERCs connect the different parts of a piece of research and facilitate discovery; (iii) Exchange and Preservation: ERCs as usable building blocks are the shared and archived entities; (iv) Self-consistency: ERCs remove dependence on ephemeral sources; (v) Execution: ERC services create and execute a packaged analysis but integrate with existing platforms for display and control. These integrations are vital for capturing workflows in RIs and connect key stakeholders (scientists, publishers, librarians). They are demonstrated using developments by the DFG-funded project Opening Reproducible Research (http://o2r.info). Semi-automatic creation of ERCs based on research workflows is a core goal of the project. References [0] Tony Hey, Stewart Tansley, Kristin Tolle (eds), 2009. The Fourth Paradigm: Data-Intensive Scientific Discovery. Microsoft Research. [1] P. Martin et al., Open Information Linking for Environmental Research Infrastructures, 2015 IEEE 11th International Conference on e-Science, Munich, 2015, pp. 513-520. doi: 10.1109/eScience.2015.66 [2] Y. Chen et al., Analysis of Common Requirements for Environmental Science Research Infrastructures, The International Symposium on Grids and Clouds (ISGC) 2013, Taipei, 2013, http://pos.sissa.it/archive/conferences/179/032/ISGC [3] Opening Reproducible Research, Geophysical Research Abstracts Vol. 18, EGU2016-7396, 2016, http://meetingorganizer.copernicus.org/EGU2016/EGU2016-7396.pdf

  2. Uncover the Cloud for Geospatial Sciences and Applications to Adopt Cloud Computing

    NASA Astrophysics Data System (ADS)

    Yang, C.; Huang, Q.; Xia, J.; Liu, K.; Li, J.; Xu, C.; Sun, M.; Bambacus, M.; Xu, Y.; Fay, D.

    2012-12-01

    Cloud computing is emerging as the future infrastructure for providing computing resources to support and enable scientific research, engineering development, and application construction, as well as workforce education. On the other hand, there is a lot of doubt about the readiness of cloud computing to support a variety of scientific research, development and education. This research is a project funded by NASA SMD to investigate, through holistic studies, how ready cloud computing is to support geosciences. Four applications with different computing characteristics, including data, computing, concurrent, and spatiotemporal intensities, are used to test the readiness of cloud computing to support geosciences. Three popular and representative cloud platforms, Amazon EC2, Microsoft Azure, and NASA Nebula, as well as a traditional cluster, are utilized in the study. Results illustrate that the cloud is ready to some degree, but more research needs to be done to fully deliver the cloud benefits as advertised by many vendors and defined by NIST. Specifically, 1) most cloud platforms can stand up new computing instances, i.e., a new computer, in a few minutes as envisioned, and are therefore ready to support most computing needs in an on-demand fashion; 2) load balance and elasticity, a defining characteristic, is ready in some cloud platforms, such as Amazon EC2, to support bigger jobs, e.g., those needing responses in minutes, while others do not support elasticity and load balance well; all cloud platforms need further research and development to support real-time applications at the subminute level; 3) the user interface and functionality of cloud platforms vary a lot; some of them are very professional and well supported/documented, such as Amazon EC2, while others need significant improvement before the general public can adopt cloud computing without professional training or knowledge about computing infrastructure; 4) security is a big concern in cloud computing platforms; with the sharing spirit of cloud computing, it is very hard to ensure higher-level security unless a private cloud is built for a specific organization without public access; public cloud platforms do not support the FISMA medium level yet and may never be able to support the FISMA high level; 5) the HPC needs of cloud computing are not well supported, and only Amazon EC2 supports them well. The research is being used by NASA and other agencies to consider cloud computing adoption. We hope the publication of the research will also help the public adopt cloud computing.

  3. Towards Guided Underwater Survey Using Light Visual Odometry

    NASA Astrophysics Data System (ADS)

    Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.

    2017-02-01

    A light, distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured by a portable stereo rig attached to the embedded system. Captured images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit to compute the odometry. Building on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point matching scheme that relies on the fast Harris operator and on template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief following the law of light divergence over distance. The rough depth is used to limit the point-correspondence search zone, as it linearly depends on the disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in computation time w.r.t. other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.
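
    The sketch below illustrates the low-level matching step described above, assuming OpenCV is available: Harris corners are detected in the left image and searched for in the right image by normalized template matching inside a band whose width is bounded by a rough depth prior (disparity ≈ focal length × baseline / depth). The focal length, baseline, thresholds, and window sizes are illustrative assumptions, not the paper's calibration.

      # Minimal sketch: Harris corners + depth-bounded normalized template matching.
      # Camera parameters and thresholds are illustrative assumptions.
      import cv2
      import numpy as np

      def match_corners(left, right, focal_px=700.0, baseline_m=0.12, rough_depth_m=2.0,
                        patch=11, band=8):
          max_disp = int(focal_px * baseline_m / max(rough_depth_m, 0.1))   # disparity ~ f*B/Z
          corners = cv2.goodFeaturesToTrack(left, maxCorners=200, qualityLevel=0.01,
                                            minDistance=7, useHarrisDetector=True)
          if corners is None:
              return []
          half, matches = patch // 2, []
          h, w = left.shape
          for x, y in corners.reshape(-1, 2).astype(int):
              if y - half < 0 or y + half + 1 > h or x - half < 0 or x + half + 1 > w:
                  continue
              tmpl = left[y - half:y + half + 1, x - half:x + half + 1]
              y0, x0 = max(y - band, 0), max(x - max_disp - half, 0)
              strip = right[y0:y + band + 1, x0:x + half + 1]
              if strip.shape[0] < patch or strip.shape[1] < patch:
                  continue
              score = cv2.matchTemplate(strip, tmpl, cv2.TM_CCOEFF_NORMED)
              _, best, _, loc = cv2.minMaxLoc(score)
              if best > 0.8:                       # keep only confident correspondences
                  matches.append(((x, y), (x0 + loc[0] + half, y0 + loc[1] + half)))
          return matches

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          left = (rng.random((240, 320)) * 255).astype(np.uint8)
          right = np.roll(left, -5, axis=1)        # synthetic 5-pixel horizontal disparity
          print(len(match_corners(left, right)), "tentative correspondences")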

  4. Molecular deconstruction, detection, and computational prediction of microenvironment-modulated cellular responses to cancer therapeutics.

    PubMed

    Labarge, Mark A; Parvin, Bahram; Lorens, James B

    2014-04-01

    The field of bioengineering has pioneered the application of new precision fabrication technologies to model the different geometric, physical or molecular components of tissue microenvironments on solid-state substrata. Tissue engineering approaches building on these advances are used to assemble multicellular mimetic-tissues where cells reside within defined spatial contexts. The functional responses of cells in fabricated microenvironments have revealed a rich interplay between the genome and extracellular effectors in determining cellular phenotypes and in a number of cases have revealed the dominance of microenvironment over genotype. Precision bioengineered substrata are limited to a few aspects, whereas cell/tissue-derived microenvironments have many undefined components. Thus, introducing a computational module may serve to integrate these types of platforms to create reasonable models of drug responses in human tissues. This review discusses how combinatorial microenvironment microarrays and other biomimetic microenvironments have revealed emergent properties of cells in particular microenvironmental contexts, the platforms that can measure phenotypic changes within those contexts, and the computational tools that can unify the microenvironment-imposed functional phenotypes with underlying constellations of proteins and genes. Ultimately we propose that a merger of these technologies will enable more accurate pre-clinical drug discovery. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. MPPhys—A many-particle simulation package for computational physics education

    NASA Astrophysics Data System (ADS)

    Müller, Thomas

    2014-03-01

    In a first course on classical mechanics, elementary physical processes like elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same and although there is strong motivation, for high-school students in particular, because of the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solving the equations of motion numerically, which could be illustrated, for example, by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a basic idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual exploration. Catalogue identifier: AERR_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 111327. No. of bytes in distributed program, including test data, etc.: 608411. Distribution format: tar.gz. Programming language: C++, OpenGL, GLSL, OpenCL. Computer: Linux and Windows platforms with OpenGL support. Operating system: Linux and Windows. RAM: source code 4.5 MB, complete package 242 MB. Classification: 14, 16.9. External routines: OpenGL, OpenCL. Nature of problem: integrate N-body simulations and mass-spring models. Solution method: numerical integration of N-body simulations, 3D rendering via OpenGL. Running time: problem dependent.
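
    The numerical idea the package is built around can be illustrated with a few lines of Python: advance a gravitational N-body system with explicit Euler steps. Units, the softening length, and the random initial conditions are illustrative assumptions; MPPhys itself is a C++/OpenGL/OpenCL package, so this is only a pedagogical stand-in.

      # Minimal sketch of explicit-Euler integration of a gravitational N-body system.
      # Units, softening, and initial conditions are illustrative assumptions.
      import numpy as np

      def euler_nbody(pos, vel, mass, g=1.0, dt=1e-3, steps=1000, softening=1e-2):
          """Integrate N gravitating particles with explicit Euler steps."""
          for _ in range(steps):
              diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]        # (N, N, 3) separations
              dist3 = (np.sum(diff**2, axis=-1) + softening**2) ** 1.5     # softened |r|^3
              acc = g * np.sum(mass[np.newaxis, :, np.newaxis] * diff / dist3[..., np.newaxis], axis=1)
              pos = pos + vel * dt
              vel = vel + acc * dt
          return pos, vel

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          n = 100
          pos, vel = rng.normal(size=(n, 3)), np.zeros((n, 3))
          mass = np.full(n, 1.0 / n)
          pos, vel = euler_nbody(pos, vel, mass)
          print("center of mass:", pos.mean(axis=0))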

  6. AEGIS: a wildfire prevention and management information system

    NASA Astrophysics Data System (ADS)

    Kalabokidis, K.; Ager, A.; Finney, M.; Athanasis, N.; Palaiologou, P.; Vasilakos, C.

    2015-10-01

    A Web-GIS wildfire prevention and management platform (AEGIS) was developed as an integrated and easy-to-use decision support tool (http://aegis.aegean.gr). The AEGIS platform assists with early fire warning, fire planning, fire control and coordination of firefighting forces by providing access to information that is essential for wildfire management. Databases were created with spatial and non-spatial data to support key system functionalities. Updated land use/land cover maps were produced by combining field inventory data with high resolution multispectral satellite images (RapidEye) to be used as inputs in fire propagation modeling with the Minimum Travel Time algorithm. End users provide a minimum number of inputs such as fire duration, ignition point and weather information to conduct a fire simulation. AEGIS offers three types of simulations, i.e., single-fire propagations, conditional burn probabilities and simulations at the landscape level, similar to the FlamMap fire behavior modeling software. Artificial neural networks (ANN) were utilized for wildfire ignition risk assessment based on various parameters, training methods, activation functions, pre-processing methods and network structures. The combination of ANNs and expected burned area maps produced an integrated output map for fire danger prediction. The system also incorporates weather measurements from remote automatic weather stations and weather forecast maps. The structure of the algorithms relies on parallel processing techniques (i.e. High Performance Computing and Cloud Computing) that ensure computational power and speed. All AEGIS functionalities are accessible to authorized end users through a web-based graphical user interface. An innovative mobile application, AEGIS App, acts as a complementary tool to the web-based version of the system.

  7. Space Science Cloud: a Virtual Space Science Research Platform Based on Cloud Model

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoyan; Tong, Jizhou; Zou, Ziming

    Through independent and cooperative science missions, the Strategic Pioneer Program (SPP) on Space Science, the new space science initiative in China approved by CAS and implemented by the National Space Science Center (NSSC), is dedicated to seeking new discoveries and new breakthroughs in space science, thus deepening the understanding of the universe and planet Earth. In the framework of this program, in order to support the operations of space science missions and satisfy the demand of related research activities for e-Science, NSSC is developing a virtual space science research platform based on a cloud model, namely the Space Science Cloud (SSC). To support mission demonstration, SSC integrates an interactive satellite orbit design tool, a satellite structure and payload layout design tool, a payload observation coverage analysis tool, etc., to help scientists analyze and verify space science mission designs. Another important function of SSC is supporting mission operations, which run through the space satellite data pipelines. Mission operators can acquire and process observation data, then distribute the data products to other systems or publish the data and archives with the services of SSC. In addition, SSC provides useful data, tools and models for space researchers. Several databases in the field of space science are integrated and an efficient retrieval system is being developed. Common tools for data visualization, deep processing (e.g., smoothing and filtering tools), analysis (e.g., FFT analysis tool and minimum variance analysis tool) and mining (e.g., proton event correlation analysis tool) are also integrated to help researchers better utilize the data. The space weather models on SSC include a magnetic storm forecast model, a multi-station middle and upper atmospheric climate model, a solar energetic particle propagation model and so on. All the above-mentioned services are based on the e-Science infrastructures of CAS, e.g., cloud storage and cloud computing. At the same time, SSC provides its users with self-service storage and computing resources. At present, the prototyping of SSC is underway and the platform is expected to be put into trial operation in August 2014. We hope that as SSC develops, our vision of Digital Space may come true someday.

  8. SPAIDE: A Real-time Research Platform for the Clarion CII/90K Cochlear Implant

    NASA Astrophysics Data System (ADS)

    Van Immerseel, L.; Peeters, S.; Dykmans, P.; Vanpoucke, F.; Bracke, P.

    2005-12-01

    SPAIDE (sound-processing algorithm integrated development environment) is a real-time platform of Advanced Bionics Corporation (Sylmar, Calif, USA) to facilitate advanced research on sound-processing and electrical-stimulation strategies with the Clarion CII and 90K implants. The platform is meant for testing in the laboratory. SPAIDE is conceptually based on a clear separation of the sound-processing and stimulation strategies and, specifically, on the distinction between sound-processing and stimulation channels and electrode contacts. The development environment has a user-friendly interface to specify sound-processing and stimulation strategies, and includes the possibility to simulate the electrical stimulation. SPAIDE allows for real-time sound capturing from file or audio input on a PC, sound processing and application of the stimulation strategy, and streaming of the results to the implant. The platform is able to cover a broad range of research applications: from noise reduction and mimicking of normal hearing, through complex (simultaneous) stimulation strategies, to psychophysics. The hardware setup consists of a personal computer, an interface board, and a speech processor. The software is both expandable and to a great extent reusable in other applications.

  9. gProcess and ESIP Platforms for Satellite Imagery Processing over the Grid

    NASA Astrophysics Data System (ADS)

    Bacu, Victor; Gorgan, Dorian; Rodila, Denisa; Pop, Florin; Neagu, Gabriel; Petcu, Dana

    2010-05-01

    The Environment oriented Satellite Data Processing Platform (ESIP) is developed through the SEE-GRID-SCI project (SEE-GRID eInfrastructure for regional eScience), co-funded by the European Commission through FP7 [1]. The gProcess Platform [2] is a set of tools and services supporting the development and execution over the Grid of workflow-based processing, particularly satellite imagery processing. ESIP [3], [4] is built on top of the gProcess platform by adding a set of satellite image processing software modules and meteorological algorithms. Satellite images can reveal and supply important information on earth surface parameters, climate data, pollution levels and weather conditions that can be used in different research areas. Generally, the processing algorithms for satellite images can be decomposed into a set of modules that forms a graph representation of the processing workflow. Two types of workflows can be defined in the gProcess platform: the abstract workflow (PDG - Process Description Graph), in which the user defines the algorithm conceptually, and the instantiated workflow (iPDG - instantiated PDG), which is the mapping of the PDG pattern onto particular satellite image and meteorological data [5]. The gProcess platform allows the definition of complex workflows by combining data resources, operators, services and sub-graphs. The gProcess platform is developed for the gLite middleware that is available in the EGEE and SEE-GRID infrastructures [6]. gProcess exposes its functionality through web services [7]. The Editor Web Service retrieves information on available resources that are used to develop complex workflows (available operators, sub-graphs, services, supported resources, etc.). The Manager Web Service deals with resource management (uploading new resources such as workflows, operators, services, data, etc.) and in addition retrieves information on workflows. The Executor Web Service manages the execution of the instantiated workflows on the Grid infrastructure. In addition, this web service monitors the execution and generates statistical data that are important to evaluate performance and to optimize execution. The Viewer Web Service allows access to input and output data. To prove and validate the utility of the gProcess and ESIP platforms, the GreenView and GreenLand applications were developed. The GreenView-related functionality includes the refinement of some meteorological data such as temperature, and the calibration of the satellite images based on field measurements. The GreenLand application performs the classification of the satellite images by using a set of vegetation indices. The gProcess and ESIP platforms are also used in the GiSHEO project [8] to support the processing of Earth Observation data over the Grid in eGLE (GiSHEO eLearning Environment). Performance assessment experiments revealed that workflow-based execution can improve the execution time of a satellite image processing algorithm [9]. Executing all the workflow nodes on different machines is not always a reliable solution: some nodes are more time consuming than others and will slow down the overall execution, so it is important to balance the workflow nodes correctly. Based on an optimization strategy, the workflow nodes can be grouped horizontally, vertically or in a hybrid approach. In this way, grouped operators are executed on one machine and the data transfer between workflow nodes is reduced. The dynamic nature of the Grid infrastructure makes it more exposed to the occurrence of failures. These failures can occur at the level of worker nodes, service availability, storage elements, etc. Currently gProcess supports some basic error prevention and error management solutions. In the future, more advanced error prevention and management solutions will be integrated into the gProcess platform. References [1] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ [2] Bacu V., Stefanut T., Rodila D., Gorgan D., Process Description Graph Composition by gProcess Platform. HiPerGRID - 3rd International Workshop on High Performance Grid Middleware, 28 May, Bucharest. Proceedings of CSCS-17 Conference, Vol.2., ISSN 2066-4451, pp. 423-430, (2009). [3] ESIP Platform, http://wiki.egee-see.org/index.php/JRA1_Commonalities [4] Gorgan D., Bacu V., Rodila D., Pop Fl., Petcu D., Experiments on ESIP - Environment oriented Satellite Data Processing Platform. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN: 978-975-403-510-0, pp. 157-166 (2009). [5] Radu, A., Bacu, V., Gorgan, D., Diagrammatic Description of Satellite Image Processing Workflow. Workshop on Grid Computing Applications Development (GridCAD) at the SYNASC Symposium, 28 September 2007, Timisoara, IEEE Computer Press, ISBN 0-7695-3078-8, 2007, pp. 341-348 (2007). [6] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on "Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications", 21-23 September, Cosenza, Italy, IEEE Published in Computer Press, 247-252 (2009). [7] Rodila D., Bacu V., Gorgan D., Integration of Satellite Image Operators as Workflows in the gProcess Application. Proceedings of ICCP2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27-29 Aug, 2009 Cluj-Napoca. ISBN: 978-1-4244-5007-7, pp. 355-358 (2009). [8] GiSHEO consortium, Project site, http://gisheo.info.uvt.ro [9] Bacu V., Gorgan D., Graph Based Evaluation of Satellite Imagery Processing over Grid. ISPDC 2008 - 7th International Symposium on Parallel and Distributed Computing, July 1-5, 2008, Krakow, Poland. IEEE Computer Society 2008, ISBN: 978-0-7695-3472-5, pp. 147-154.

  10. Design and implementation of a UNIX based distributed computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Love, J.S.; Michael, M.W.

    1994-12-31

    We have designed, implemented, and are running a corporate-wide distributed processing batch queue on a large number of networked workstations using the UNIX® operating system. Atlas Wireline researchers and scientists have used the system for over a year. The large increase in available computer power has greatly reduced the time required for nuclear and electromagnetic tool modeling. Use of remote distributed computing has simultaneously reduced computation costs and increased usable computer time. The system integrates equipment from different manufacturers, using various CPU architectures, distinct operating system revisions, and even multiple processors per machine. Various differences between the machines have to be accounted for in the master scheduler. These differences include shells, command sets, swap spaces, memory sizes, CPU sizes, and OS revision levels. Remote processing across a network must be performed in a manner that is seamless from the users' perspective. The system currently uses IBM RISC System/6000®, SPARCstation™, HP9000s700, HP9000s800, and DEC Alpha AXP™ machines. Each CPU in the network has its own speed rating, allowed working hours, and workload parameters. The system is designed so that all of the computers in the network can be optimally scheduled without adversely impacting the primary users of the machines. The increase in the total usable computational capacity by means of distributed batch computing can change corporate computing strategy. The integration of disparate computer platforms eliminates the need to buy one type of computer for computations, another for graphics, and yet another for day-to-day operations. It might be possible, for example, to meet all research and engineering computing needs with existing networked computers.

  11. Rotating Desk for Collaboration by Two Computer Programmers

    NASA Technical Reports Server (NTRS)

    Riley, John Thomas

    2005-01-01

    A special-purpose desk has been designed to facilitate collaboration by two computer programmers sharing one desktop computer or computer terminal. The impetus for the design is a trend toward what is known in the software industry as extreme programming, an approach intended to ensure high quality without sacrificing the quantity of computer code produced. Programmers working in pairs is a major feature of extreme programming. The present desk design minimizes the stress of the collaborative work environment. It supports both quality and work flow by making it unnecessary for programmers to get in each other's way. The desk (see figure) includes a rotating platform that supports a computer video monitor, keyboard, and mouse. The desk enables one programmer to work on the keyboard for any amount of time and then the other programmer to take over without breaking the train of thought. The rotating platform is supported by a turntable bearing that, in turn, is supported by a weighted base. The platform contains weights to improve its balance. The base includes a stand for a computer, and is shaped and dimensioned to provide adequate foot clearance for both users. The platform includes an adjustable stand for the monitor, a surface for the keyboard and mouse, and spaces for work papers, drinks, and snacks. The heights of the monitor, keyboard, and mouse are set to minimize stress. The platform can be rotated through an angle of 40° to give either user a straight-on view of the monitor and full access to the keyboard and mouse. Magnetic latches keep the platform preferentially at either of the two extremes of rotation. To switch between users, one simply grabs the edge of the platform and pulls it around. The magnetic latch is easily released, allowing the platform to rotate freely to the position of the other user.

  12. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built as low-power computing clusters.

  13. Continuous measurement of breast tumor hormone receptor expression: a comparison of two computational pathology platforms

    PubMed Central

    Ahern, Thomas P.; Beck, Andrew H.; Rosner, Bernard A.; Glass, Ben; Frieling, Gretchen; Collins, Laura C.; Tamimi, Rulla M.

    2017-01-01

    Background Computational pathology platforms incorporate digital microscopy with sophisticated image analysis to permit rapid, continuous measurement of protein expression. We compared two computational pathology platforms on their measurement of breast tumor estrogen receptor (ER) and progesterone receptor (PR) expression. Methods Breast tumor microarrays from the Nurses’ Health Study were stained for ER (n=592) and PR (n=187). One expert pathologist scored cases as positive if ≥1% of tumor nuclei exhibited stain. ER and PR were then measured with the Definiens Tissue Studio (automated) and Aperio Digital Pathology (user-supervised) platforms. Platform-specific measurements were compared using boxplots, scatter plots, and correlation statistics. Classification of ER and PR positivity by platform-specific measurements was evaluated with areas under receiver operating characteristic curves (AUC) from univariable logistic regression models, using expert pathologist classification as the standard. Results Both platforms showed considerable overlap in continuous measurements of ER and PR between positive and negative groups classified by expert pathologist. Platform-specific measurements were strongly and positively correlated with one another (rho ≥ 0.77). The user-supervised Aperio workflow performed slightly better than the automated Definiens workflow at classifying ER positivity (AUC Aperio = 0.97; AUC Definiens = 0.90; difference = 0.07, 95% CI: 0.05, 0.09) and PR positivity (AUC Aperio = 0.94; AUC Definiens = 0.87; difference = 0.07, 95% CI: 0.03, 0.12). Conclusion Paired hormone receptor expression measurements from two different computational pathology platforms agreed well with one another. The user-supervised workflow yielded better classification accuracy than the automated workflow. Appropriately validated computational pathology algorithms enrich molecular epidemiology studies with continuous protein expression data and may accelerate tumor biomarker discovery. PMID:27729430
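
    The classification comparison reported above can be reproduced in outline with a few lines of Python: score each platform's continuous measurement against the pathologist's binary call using ROC AUCs. The arrays below are synthetic stand-ins for the study data, and scikit-learn is assumed to be available.

      # Minimal sketch of comparing two continuous measurements against expert binary labels.
      # Data are synthetic stand-ins, not the study's measurements.
      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      pathologist_positive = rng.integers(0, 2, size=500)               # expert 0/1 calls
      platform_a = pathologist_positive * 0.6 + rng.random(500) * 0.4   # "supervised" workflow
      platform_b = pathologist_positive * 0.4 + rng.random(500) * 0.6   # "automated" workflow

      auc_a = roc_auc_score(pathologist_positive, platform_a)
      auc_b = roc_auc_score(pathologist_positive, platform_b)
      print(f"AUC supervised = {auc_a:.2f}, AUC automated = {auc_b:.2f}, difference = {auc_a - auc_b:.2f}")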

  14. VA's Integrated Imaging System on three platforms.

    PubMed

    Dayhoff, R E; Maloney, D L; Majurski, W J

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability.

  15. VA's Integrated Imaging System on three platforms.

    PubMed Central

    Dayhoff, R. E.; Maloney, D. L.; Majurski, W. J.

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability. PMID:1482983

  16. Study on the application of mobile internet cloud computing platform

    NASA Astrophysics Data System (ADS)

    Gong, Songchun; Fu, Songyin; Chen, Zheng

    2012-04-01

    The innovative development of computer technology promotes the application of the cloud computing platform, which is essentially a substitution and exchange of resource service models and, after adjustments in multiple aspects, meets users' needs for the utilization of different resources. "Cloud computing" offers advantages in many respects: it not only reduces the difficulty of using the operating system but also makes it easy for users to search, acquire and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in its operation. The popularization and promotion of computer technology drive people to create digital library models, whose core idea is to strengthen the optimal management of library resource information through computers and to construct a high-performance inquiry and search platform, allowing users to access the necessary information resources at any time. Cloud computing, moreover, can distribute the computations of a single computer across a large number of distributed computers and hence implement a connected service over multiple computers. Digital libraries, as a typical representative of cloud computing applications, can therefore be used to analyze the key technologies of cloud computing.

  17. DOVIS: an implementation for high-throughput virtual screening using AutoDock.

    PubMed

    Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques

    2008-02-27

    Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to in silico screen millions of compounds in a reasonable time. To meet this challenge, it is necessary to use high performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS, the docking session is fully integrated and automated in that the inputs are specified via a graphical user interface, the calculations are fully integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users to use this software in a Linux cluster platform.

  18. AEGIS: a wildfire prevention and management information system

    NASA Astrophysics Data System (ADS)

    Kalabokidis, Kostas; Ager, Alan; Finney, Mark; Athanasis, Nikos; Palaiologou, Palaiologos; Vasilakos, Christos

    2016-03-01

    We describe a Web-GIS wildfire prevention and management platform (AEGIS) developed as an integrated and easy-to-use decision support tool to manage wildland fire hazards in Greece (http://aegis.aegean.gr). The AEGIS platform assists with early fire warning, fire planning, fire control, and coordination of firefighting forces by providing online access to information that is essential for wildfire management. The system uses a number of spatial and non-spatial data sources to support key system functionalities. Land use/land cover maps were produced by combining field inventory data with high-resolution multispectral satellite images (RapidEye). These data support wildfire simulation tools that allow the users to examine potential fire behavior and hazard with the Minimum Travel Time fire spread algorithm. End-users provide a minimum number of inputs, such as fire duration, ignition point, and weather information, to conduct a fire simulation. AEGIS offers three types of simulations, i.e., single-fire propagation, point-scale calculation of potential fire behavior, and burn probability analysis, similar to the FlamMap fire behavior modeling software. Artificial neural networks (ANNs) were utilized for wildfire ignition risk assessment based on various parameters, training methods, activation functions, pre-processing methods, and network structures. The combination of ANNs and expected burned-area maps is used to generate an integrated fire hazard prediction map. The system also incorporates weather information obtained from remote automatic weather stations and weather forecast maps. The system and associated computation algorithms leverage parallel processing techniques (i.e., High Performance Computing and Cloud Computing) that provide the computational power required for real-time application. All AEGIS functionalities are accessible to authorized end-users through a web-based graphical user interface. An innovative smartphone application, AEGIS App, also provides mobile access to the web-based version of the system.
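
    A minimal sketch of the kind of ANN-based ignition risk assessment described above is shown below; the feature set (temperature, humidity, wind, fuel dryness), the synthetic labels, the network size, and the use of scikit-learn are illustrative assumptions rather than details of the AEGIS implementation.

        # Minimal sketch (not AEGIS code): ANN-based ignition risk from tabular inputs.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)

        # Hypothetical training data: [temperature C, relative humidity %, wind m/s, fuel dryness].
        X = rng.uniform([5, 10, 0, 0], [40, 95, 15, 1], size=(500, 4))
        y = (X[:, 0] > 25) & (X[:, 3] > 0.5)          # synthetic "ignition occurred" label

        model = make_pipeline(
            StandardScaler(),                          # pre-processing step
            MLPClassifier(hidden_layer_sizes=(16, 8), activation="relu",
                          solver="adam", max_iter=2000, random_state=0),
        )
        model.fit(X, y)

        # Evaluate the trained network for one grid cell's forecast conditions.
        today = np.array([[34.0, 25.0, 8.0, 0.8]])
        print("ignition probability:", model.predict_proba(today)[0, 1])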

  19. [The Key Technology Study on Cloud Computing Platform for ECG Monitoring Based on Regional Internet of Things].

    PubMed

    Yang, Shu; Qiu, Yuyan; Shi, Bo

    2016-09-01

    This paper explores methods of building a regional internet of things for ECG monitoring, focusing on the implementation of an ECG monitoring center based on a cloud computing platform. It analyzes the implementation principles of automatic identification of arrhythmia types. It also studies the system architecture and key techniques of the cloud computing platform, including server load-balancing technology, reliable storage of massive numbers of small files, and the implementation of a fast search function.
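
    The "massive small files" problem mentioned above is commonly handled by packing many small records into a few large container files with an offset index; the sketch below illustrates only that general idea, since the paper's actual storage design is not detailed in the abstract (the file layout, names, and JSON index are assumptions).

        # Minimal sketch (assumed design, not from the paper): pack many small ECG
        # record files into one container plus an offset index, so the storage layer
        # sees a few large objects instead of millions of tiny ones.
        import json
        from pathlib import Path

        def pack(records: dict[str, bytes], container: Path) -> dict[str, tuple[int, int]]:
            """Append each record to 'container'; return {record_id: (offset, length)}."""
            index = {}
            with container.open("wb") as out:
                for record_id, payload in records.items():
                    index[record_id] = (out.tell(), len(payload))
                    out.write(payload)
            container.with_suffix(".idx").write_text(json.dumps(index))  # persist the index
            return index

        def read(container: Path, index: dict[str, tuple[int, int]], record_id: str) -> bytes:
            """Random access to one packed record via its (offset, length) entry."""
            offset, length = index[record_id]
            with container.open("rb") as f:
                f.seek(offset)
                return f.read(length)

        # Example with two tiny synthetic "ECG" records.
        idx = pack({"patient-001": b"\x00\x01\x02", "patient-002": b"\x03\x04"}, Path("ecg.pack"))
        print(read(Path("ecg.pack"), idx, "patient-002"))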

  20. System and Method for an Integrated Satellite Platform

    NASA Technical Reports Server (NTRS)

    Starin, Scott R. (Inventor); Sheikh, Salman I. (Inventor); Hesse, Michael (Inventor); Clagett, Charles E. (Inventor); Santos Soto, Luis H. (Inventor); Hesh, Scott V. (Inventor); Paschalidis, Nikolaos (Inventor); Ericsson, Aprille J. (Inventor); Johnson, Michael A. (Inventor)

    2018-01-01

    A system, method, and computer-readable storage devices for a 6U CubeSat with a magnetometer boom. The example 6U CubeSat can include an on-board computing device connected to an electrical power system, wherein the electrical power system receives power from at least one of a battery and at least one solar panel, a first fluxgate sensor attached to an extendable boom, a release mechanism for extending the extendable boom, at least one second fluxgate sensor fixed within the satellite, an ion neutral mass spectrometer, and a relativistic electron/proton telescope. The on-board computing device can receive data from the first fluxgate sensor, the at least one second fluxgate sensor, the ion neutral mass spectrometer, and the relativistic electron/proton telescope via the bus, and can then process the data via an algorithm to deduce a geophysical signal.
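
    The boom-mounted and body-mounted fluxgate pair suggests a standard dual-magnetometer (gradiometer) correction, in which the spacecraft-generated field is estimated and removed to recover the geophysical signal. The sketch below shows that textbook correction under a dipolar (1/r^3) spacecraft-field assumption; the sensor distances and readings are hypothetical, and the algorithm actually flown is not specified in the abstract.

        # Minimal sketch of a standard dual-magnetometer (gradiometer) correction;
        # the algorithm actually flown on the satellite is not given in the abstract.
        import numpy as np

        def ambient_field(b_boom: np.ndarray, b_body: np.ndarray,
                          r_boom: float, r_body: float) -> np.ndarray:
            """Estimate the geophysical field from boom (outboard) and body (inboard)
            fluxgate readings, assuming the spacecraft field is dipolar (~1/r^3)."""
            k = (r_body / r_boom) ** 3   # spacecraft-field attenuation at the boom sensor
            return (b_boom - k * b_body) / (1.0 - k)

        # Hypothetical readings in nT; boom sensor 0.6 m out, body sensor at 0.15 m.
        print(ambient_field(np.array([105.0, -3.0, 42.0]),
                            np.array([180.0, -9.0, 60.0]),
                            r_boom=0.6, r_body=0.15))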

  1. Promoting scientific collaboration and research through integrated social networking capabilities within the OpenTopography Portal

    NASA Astrophysics Data System (ADS)

    Nandigam, V.; Crosby, C. J.; Baru, C.

    2009-04-01

    LiDAR (Light Detection And Ranging) topography data offer earth scientists the opportunity to study the earth's surface at very high resolutions. As a result, the popularity of these data is growing dramatically. However, the management, distribution, and analysis of community LiDAR data sets is a challenge due to their massive size (multi-billion point, multi-terabyte). We have also found that many earth science users of these data sets lack the computing resources and expertise required to process these data. We have developed the OpenTopography Portal to democratize access to these large and computationally challenging data sets. The OpenTopography Portal uses cyberinfrastructure technology developed by the GEON project to provide access to LiDAR data in a variety of formats. LiDAR data products available range from simple Google Earth visualizations of LiDAR-derived hillshades to 1 km2 tiles of standard digital elevation model (DEM) products as well as LiDAR point cloud data and user-generated custom DEMs. We have found that the wide spectrum of LiDAR users has variable scientific applications, computing resources, and technical experience, and thus requires a data system with multiple distribution mechanisms and platforms to serve a broader range of user communities. Because the volume of LiDAR topography data available is rapidly expanding, and data analysis techniques are evolving, there is a need for the user community to be able to communicate and interact to share knowledge and experiences. To address this need, the OpenTopography Portal enables social networking capabilities through a variety of collaboration tools, web 2.0 technologies, and customized usage pattern tracking. Fundamentally, these tools offer users the ability to communicate, to access and share documents, to participate in discussions, and to keep up to date on upcoming events and emerging technologies. The OpenTopography Portal achieves the social networking capabilities by integrating various software technologies and platforms. These include the Expression Engine Content Management System (CMS), which comes with pre-packaged collaboration tools like blogs and wikis; the Gridsphere portal framework, which contains the primary GEON LiDAR System portlet with user job monitoring capabilities; and a Java web-based discussion forum (Jforums) application, all seamlessly integrated under one portal. The OpenTopography Portal also provides an integrated authentication mechanism between the various CMS collaboration tools and the core Gridsphere-based portlets. The integration of these various technologies allows for enhanced user interaction capabilities within the portal. By integrating popular collaboration tools like discussion forums and blogs we can promote conversation and openness among users. The ability to ask questions and share expertise in forum discussions allows users to easily find information and interact with users facing similar challenges. The OpenTopography Blog enables our domain experts to post ideas, news items, commentary, and other resources in order to foster discussion and information sharing. The content management capabilities of the portal allow for easy updates to information in the form of publications, documents, and news articles. Access to the most current information fosters better decision-making. As has become the standard for web 2.0 technologies, the OpenTopography Portal is fully RSS enabled to allow users of the portal to keep track of news items, forum discussions, blog updates, and system outages. We are currently exploring how the information captured by the user and job monitoring components of the Gridsphere-based GEON LiDAR System can be harnessed to provide a recommender system that will help users to identify appropriate processing parameters and to locate related documents and data. By seamlessly integrating the various platforms and technologies under one single portal, we can take advantage of popular online collaboration tools that are either stand-alone or restricted to a particular software platform. The availability of these collaboration tools along with the data will foster more community interaction and increase the strength and vibrancy of the LiDAR topography user community.

  2. All-optical SR flip-flop based on SOA-MZI switches monolithically integrated on a generic InP platform

    NASA Astrophysics Data System (ADS)

    Pitris, St.; Vagionas, Ch.; Kanellos, G. T.; Kisacik, R.; Tekin, T.; Broeke, R.; Pleros, N.

    2016-03-01

    At the dawn of the exaflop era, high-performance computers are expected to exploit integrated all-optical elements to overcome the speed limitations imposed by their electronic counterparts. Motivated by the well-known memory-wall limitation, which imposes a performance gap between processor and memory speeds, research has focused on developing ultra-fast latching devices and all-optical memory elements capable of delivering buffering and switching functionalities at unprecedented bit rates. Following the master-slave configuration of electronic flip-flops, coupled SOA-MZI-based switches have been theoretically shown to exceed 40 Gb/s operation, provided the coupling waveguide is short. However, this flip-flop architecture had previously been realized only through hybrid integration on silica-on-silicon technology, with a total footprint of 45x12 mm2 and an intra-flip-flop coupling waveguide of 2.5 cm, limiting operation to 5 Gb/s. Monolithic integration offers the possibility of fabricating multiple active and passive photonic components on a single chip in close proximity to each other, holding promise for fast all-optical memories. Here, we present for the first time a monolithically integrated all-optical SR flip-flop with coupled master-slave SOA-MZI switches. The photonic chip is integrated on a 6x2 mm2 die as part of a multi-project wafer run using library-based components of a generic InP platform, fiber-pigtailed and fully packaged on a temperature-controlled ceramic submount module with electrical contacts. The intra-flip-flop coupling waveguide is 5 mm long, reducing the total footprint by two orders of magnitude. Successful flip-flop functionality is demonstrated at 10 Gb/s with a clear, open eye diagram, achieving error-free operation with a power penalty of 4 dB.

  3. Experiences Integrating Transmission and Distribution Simulations for DERs with the Integrated Grid Modeling System (IGMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias

    2016-08-11

    This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.
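
    At its core, transmission-distribution co-simulation iterates an exchange of boundary conditions: the bulk model supplies substation voltages, the feeder models return their resulting loads, and the loop repeats until the two agree. The toy sketch below shows only that exchange pattern; IGMS couples full production-grade market, power-flow, and feeder tools rather than the simplified models used here, and the numbers are invented.

        # Toy fixed-point exchange between a bulk (transmission) model and feeder
        # (distribution) models; illustrative only, with invented numbers and models.
        def transmission_solve(feeder_loads_mw):
            """Toy bulk-system model: substation voltage sags slightly with total load."""
            return 1.05 - 0.0005 * sum(feeder_loads_mw)      # per-unit voltage

        def distribution_solve(voltage_pu, base_load_mw):
            """Toy feeder model: voltage-dependent (exponential) load."""
            return base_load_mw * voltage_pu ** 1.5

        base_loads = [40.0, 60.0, 30.0]                      # MW, hypothetical feeders
        loads = list(base_loads)
        for iteration in range(50):                          # co-simulation exchange loop
            voltage = transmission_solve(loads)              # transmission -> distribution
            new_loads = [distribution_solve(voltage, p) for p in base_loads]
            converged = max(abs(a - b) for a, b in zip(new_loads, loads)) < 1e-6
            loads = new_loads                                # distribution -> transmission
            if converged:
                break
        print(f"V = {voltage:.4f} pu, feeder loads = {[round(p, 2) for p in loads]} MW")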

  4. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 1: HARP introduction and user's guide

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.

  5. A self-assembled nanoscale robotic arm controlled by electric fields

    NASA Astrophysics Data System (ADS)

    Kopperger, Enzo; List, Jonathan; Madhira, Sushi; Rothfischer, Florian; Lamb, Don C.; Simmel, Friedrich C.

    2018-01-01

    The use of dynamic, self-assembled DNA nanostructures in the context of nanorobotics requires fast and reliable actuation mechanisms. We therefore created a 55-nanometer–by–55-nanometer DNA-based molecular platform with an integrated robotic arm of length 25 nanometers, which can be extended to more than 400 nanometers and actuated with externally applied electrical fields. Precise, computer-controlled switching of the arm between arbitrary positions on the platform can be achieved within milliseconds, as demonstrated with single-pair Förster resonance energy transfer experiments and fluorescence microscopy. The arm can be used for electrically driven transport of molecules or nanoparticles over tens of nanometers, which is useful for the control of photonic and plasmonic processes. Application of piconewton forces by the robot arm is demonstrated in force-induced DNA duplex melting experiments.

  6. A model-driven approach to information security compliance

    NASA Astrophysics Data System (ADS)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity, and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems in an extended network of business partners, vendors, customers, and other stakeholders. This paper addresses the conception and implementation of information security systems, conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation-independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model the mandatory rules for attaining ISO/IEC 27001 conformance, a platform-independent model is derived. Finally, a platform-specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.
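
    The model-driven chain described above (a computation-independent model, then a platform-independent model with mandatory conformance rules embedded, then a platform-specific model) can be pictured with a deliberately tiny sketch. The dictionaries, rule, and checklist below are placeholders; the paper's actual metamodels and transformation rules are not reproduced.

        # Illustrative toy of the CIM -> PIM -> PSM chain; the paper's actual
        # metamodels and transformation rules are not reproduced here.
        cim = {  # computation-independent model: ISO/IEC 27001 vocabulary only
            "assets": ["customer database", "mail server"],
            "policies": ["access control"],
        }

        def to_pim(cim):
            """Derive the platform-independent model, embedding a mandatory conformance rule."""
            return {"assets": [{"name": a, "risk_assessed": True} for a in cim["assets"]],
                    "controls": cim["policies"] + ["risk treatment plan"]}  # mandatory rule

        def to_psm(pim, platform="linux"):
            """Derive a platform-specific model (here: a toy audit checklist per asset)."""
            return [{"asset": a["name"], "platform": platform,
                     "checks": ["file permissions", "audit logging"]} for a in pim["assets"]]

        print(to_psm(to_pim(cim)))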

  7. Electrochemical impedimetric sensor based on molecularly imprinted polymers/sol-gel chemistry for methidathion organophosphorous insecticide recognition.

    PubMed

    Bakas, Idriss; Hayat, Akhtar; Piletsky, Sergey; Piletska, Elena; Chehimi, Mohamed M; Noguer, Thierry; Rouillon, Régis

    2014-12-01

    We report here a novel method to detect methidathion organophosphorous insecticides. The sensing platform was architected by the combination of molecularly imprinted polymers and sol-gel technique on inexpensive, portable and disposable screen printed carbon electrodes. Electrochemical impedimetric detection technique was employed to perform the label free detection of the target analyte on the designed MIP/sol-gel integrated platform. The selection of the target specific monomer by electrochemical impedimetric methods was consistent with the results obtained by the computational modelling method. The prepared electrochemical MIP/sol-gel based sensor exhibited a high recognition capability toward methidathion, as well as a broad linear range and a low detection limit under the optimized conditions. Satisfactory results were also obtained for the methidathion determination in waste water samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. The Requirements and Design of the Rapid Prototyping Capabilities System

    NASA Astrophysics Data System (ADS)

    Haupt, T. A.; Moorhead, R.; O'Hara, C.; Anantharaj, V.

    2006-12-01

    The Rapid Prototyping Capabilities (RPC) system will provide the capability to rapidly evaluate innovative methods of linking science observations. To this end, the RPC will provide the capability to integrate the software components and tools needed to evaluate the use of a wide variety of current and future NASA sensors, numerical models, and research results, model outputs, and knowledge, collectively referred to as "resources". It is assumed that the resources are geographically distributed, and thus RPC will provide the support for the location transparency of the resources. The RPC system must provide support for: (1) discovery, semantic understanding, secure access, and transport mechanisms for data products available from the known data providers; (2) data assimilation and geo-processing tools for all data transformations needed to match given data products to the model input requirements; (3) model management, including catalogs of models and model metadata, and mechanisms for creating environments for model execution; and (4) tools for model output analysis and model benchmarking. The challenge involves developing a cyberinfrastructure for a coordinated aggregate of software, hardware, and other technologies, necessary to facilitate RPC experiments, as well as human expertise to provide an integrated, "end-to-end" platform to support the RPC objectives. Such aggregation is to be achieved through a horizontal integration of loosely coupled services. The cyberinfrastructure comprises several software layers. At the bottom, the Grid fabric encompasses network protocols, optical networks, computational resources, storage devices, and sensors. At the top, applications use workload managers to coordinate their access to physical resources. Applications are not tightly bound to a single physical resource. Instead, they bind dynamically to resources (i.e., they are provisioned) via a common grid infrastructure layer. For the RPC system, the cyberinfrastructure must support organizing computations (or "data transformations" in general) into complex workflows with resource discovery, automatic resource allocation, monitoring, and provenance preservation, as well as the aggregation of heterogeneous, distributed data into knowledge databases. Such service orchestration is the responsibility of the "collective services" layer. For RPC, this layer will be based on the Java Business Integration (JBI, JSR-208) specification, which is a standards-based integration platform that combines messaging, web services, data transformation, and intelligent routing to reliably connect and coordinate the interaction of significant numbers of diverse applications (plug-in components) across organizational boundaries. The JBI concept is a new approach to integration that can provide the underpinnings for a loosely coupled, highly distributed integration network that can scale beyond the limits of currently used hub-and-spoke brokers. This presentation discusses the requirements, design, and early prototype of the NASA-sponsored RPC system under development at Mississippi State University, demonstrating the integration of data provisioning mechanisms, data transformation tools, and computational models into a single interoperable system enabling rapid execution of RPC experiments.

  9. Cloud computing for comparative genomics with windows azure platform.

    PubMed

    Kim, Insik; Jung, Jae-Yoon; Deluca, Todd F; Nelson, Tristan H; Wall, Dennis P

    2012-01-01

    Cloud computing services have emerged as a cost-effective alternative for cluster systems as the number of genomes and required computation power to analyze them increased in recent years. Here we introduce the Microsoft Azure platform with detailed execution steps and a cost comparison with Amazon Web Services.

  10. Cloud Computing for Comparative Genomics with Windows Azure Platform

    PubMed Central

    Kim, Insik; Jung, Jae-Yoon; DeLuca, Todd F.; Nelson, Tristan H.; Wall, Dennis P.

    2012-01-01

    Cloud computing services have emerged as a cost-effective alternative for cluster systems as the number of genomes and required computation power to analyze them increased in recent years. Here we introduce the Microsoft Azure platform with detailed execution steps and a cost comparison with Amazon Web Services. PMID:23032609

  11. Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive and time-consuming. One of the main contributors to the high cost and lengthy time is the need to perform many large-scale hardware tests and the inability to integrate all appropriate subsystems early in the design process. The NASA Glenn Research Center is developing the technologies required to enable simulations of full aerospace propulsion systems in sufficient detail to resolve critical design issues early in the design process before hardware is built. This concept, called the Numerical Propulsion System Simulation (NPSS), is focused on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer with computing and communication technologies to capture complex physical processes in a timely and cost-effective manner. The vision for NPSS is to be a "numerical test cell" that enables full engine simulation overnight on cost-effective computing platforms. There are several key elements within NPSS that are required to achieve this capability: 1) clear data interfaces through the development and/or use of data exchange standards, 2) modular and flexible program construction through the use of object-oriented programming, 3) integrated multiple-fidelity analysis (zooming) techniques that capture the appropriate physics at the appropriate fidelity for the engine systems, 4) multidisciplinary coupling techniques, and finally 5) high-performance parallel and distributed computing. The current state of development in these five areas focuses on air-breathing gas turbine engines and is reported in this paper. However, many of the technologies are generic and can be readily applied to rocket-based systems and combined cycles currently being considered for low-cost access-to-space applications. Recent accomplishments include: (1) the development of an industry-standard engine cycle analysis program and plug 'n play architecture, called NPSS Version 1; (2) a full engine simulation that combines a 3D low-pressure subsystem with a 0D high-pressure core simulation, demonstrating the ability to integrate analyses at different levels of detail and to aerodynamically couple components, the fan/booster and low-pressure turbine, through a 3D computational fluid dynamics simulation; (3) simulation of all of the turbomachinery in a modern turbofan engine on a parallel computing platform for rapid and cost-effective execution; this capability can also be used to generate a full compressor map, requiring both design and off-design simulations; (4) three levels of coupling that characterize the multidisciplinary analysis under NPSS: loosely coupled, process coupled, and tightly coupled. The loosely coupled and process coupled approaches require a common geometry definition to link CAD to analysis tools. The tightly coupled approach is currently validating the use of an arbitrary Lagrangian/Eulerian formulation for rotating turbomachinery. The validation includes both centrifugal and axial compression systems. The results of the validation will be reported in the paper. (5) The demonstration of significant computing cost/performance reduction for turbine engine applications using PC clusters. The NPSS Project is supported under the NASA High Performance Computing and Communications Program.
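
    The "plug 'n play", object-oriented construction mentioned in element (2) can be pictured as interchangeable components that each transform a flow state and are chained into a cycle. The sketch below is a toy illustration under constant-property ideal-gas assumptions with invented numbers, not NPSS code or its actual interfaces.

        # Toy sketch of an object-oriented, "plug 'n play" cycle-analysis structure;
        # illustrative only, not NPSS code (constant-property ideal gas, invented numbers).
        GAMMA = 1.4   # ratio of specific heats for air, assumed constant

        class Compressor:
            def __init__(self, pressure_ratio, efficiency):
                self.pr, self.eta = pressure_ratio, efficiency
            def run(self, s):
                s = dict(s)
                t_ideal = s["Tt"] * self.pr ** ((GAMMA - 1) / GAMMA)   # isentropic exit temperature
                s["Tt"] += (t_ideal - s["Tt"]) / self.eta              # actual temperature rise
                s["Pt"] *= self.pr
                return s

        class Burner:
            def __init__(self, exit_temperature, pressure_loss=0.05):
                self.t4, self.dp = exit_temperature, pressure_loss
            def run(self, s):
                s = dict(s)
                s["Tt"], s["Pt"] = self.t4, s["Pt"] * (1.0 - self.dp)
                return s

        def run_cycle(components, inlet_state):
            """Chain interchangeable components, each transforming the flow state."""
            state = inlet_state
            for component in components:
                state = component.run(state)
            return state

        engine = [Compressor(pressure_ratio=20.0, efficiency=0.88),
                  Burner(exit_temperature=1600.0)]
        print(run_cycle(engine, {"Tt": 288.15, "Pt": 101325.0}))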

  12. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
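
    The scalability analysis mentioned above rests on Amdahl's law, S(n) = 1 / ((1 - f) + f/n), where f is the parallelizable fraction of the work and n the number of cores. The sketch below simply evaluates that formula; the value of f used is an assumed example, not one reported in the paper.

        # Amdahl's-law sketch for symmetric multicore scaling; the parallel fraction
        # below is an assumed example value, not one reported in the paper.
        def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
            """S(n) = 1 / ((1 - f) + f / n) for parallel fraction f on n cores."""
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

        for n in (1, 2, 4, 12, 48):
            print(n, round(amdahl_speedup(0.97, n), 2))
        # With f = 0.97 the speedup saturates near 1 / (1 - f) ~ 33 as n grows,
        # so the achievable scalability hinges on the remaining serial fraction.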

  13. Implementation of a Message Passing Interface into a Cloud-Resolving Model for Massively Parallel Computing

    NASA Technical Reports Server (NTRS)

    Juang, Hann-Ming Henry; Tao, Wei-Kuo; Zeng, Xi-Ping; Shie, Chung-Lin; Simpson, Joanne; Lang, Steve

    2004-01-01

    The capability for massively parallel programming (MPP) using a message passing interface (MPI) has been implemented into a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The design for the MPP with MPI uses the concept of maintaining a similar code structure for the whole domain and for the portions after decomposition. Hence the model follows the same integration for single and multiple tasks (CPUs). Also, it provides for minimal changes to the original code, so it is easily modified and/or managed by the model developers and users who have little knowledge of MPP. The entire model domain can be sliced into a one- or two-dimensional decomposition with a halo region, which is overlaid on the partial domains. The halo region requires that no data be fetched across tasks during the computational stage, but it must be updated before the next computational stage through data exchange via MPI. For reproducibility, transposing data among tasks is required for the spectral transform (Fast Fourier Transform, FFT), which is used in the anelastic version of the model for solving the pressure equation. The performance of the MPI-implemented codes (i.e., the compressible and anelastic versions) was tested on three different computing platforms. The major results are: 1) both versions have speedups of about 99% up to 256 tasks but not for 512 tasks; 2) the anelastic version has better speedup and efficiency because it requires more computations than the compressible version; 3) equal or approximately equal numbers of slices between the x- and y-directions provide the fastest integration due to fewer data exchanges; and 4) one-dimensional slices in the x-direction result in the slowest integration due to the need for more memory relocation for computation.
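
    The halo-exchange step described above (each task refreshes its halo cells from neighbouring tasks before the next computational stage) can be sketched for a one-dimensional decomposition with mpi4py; the array sizes, the mpi4py binding, and the stencil update are illustrative assumptions, not the GCE model's code.

        # Sketch of halo exchange for a 1-D domain decomposition with mpi4py
        # (array sizes and the stencil update are illustrative assumptions).
        # Run with, e.g.:  mpirun -n 4 python halo_demo.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        nx_local = 8                        # interior cells per task (assumed)
        u = np.zeros(nx_local + 2)          # +2 halo cells, one on each side
        u[1:-1] = rank                      # fill the interior with rank-specific data

        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        # Update halos before the next computational stage: swap edge cells with neighbours.
        comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
        comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)

        # The halo cells now hold neighbour data, so a stencil update needs no remote fetches.
        u_interior_new = u[1:-1] + 0.1 * (u[2:] - 2.0 * u[1:-1] + u[:-2])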

  14. Technical integration of hippocampus, Basal Ganglia and physical models for spatial navigation.

    PubMed

    Fox, Charles; Humphries, Mark; Mitchinson, Ben; Kiss, Tamas; Somogyvari, Zoltan; Prescott, Tony

    2009-01-01

    Computational neuroscience is increasingly moving beyond modeling individual neurons or neural systems to consider the integration of multiple models, often constructed by different research groups. We report on our preliminary technical integration of recent hippocampal formation, basal ganglia and physical environment models, together with visualisation tools, as a case study in the use of Python across the modelling tool-chain. We do not present new modeling results here. The architecture incorporates leaky-integrator and rate-coded neurons, a 3D environment with collision detection and tactile sensors, 3D graphics and 2D plots. We found Python to be a flexible platform, offering a significant reduction in development time, without a corresponding significant increase in execution time. We illustrate this by implementing a part of the model in various alternative languages and coding styles, and comparing their execution times. For very large-scale system integration, communication with other languages and parallel execution may be required, which we demonstrate using the BRAHMS framework's Python bindings.
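
    A leaky-integrator, rate-coded unit of the kind mentioned above can be written in a few lines of Python; the time constant, gain, and rectified output nonlinearity below are generic textbook choices, not parameters from the integrated model.

        # Generic leaky-integrator, rate-coded unit (textbook parameters, not the
        # integrated model's own code).
        import numpy as np

        def simulate_leaky_integrator(inputs, tau=0.02, dt=0.001, gain=1.0):
            """Euler-integrate tau * da/dt = -a + gain * input; output is rectified."""
            a = 0.0
            rates = np.empty(len(inputs))
            for t, x in enumerate(inputs):
                a += dt / tau * (-a + gain * x)   # one Euler step of the membrane equation
                rates[t] = max(a, 0.0)            # piecewise-linear (rectified) output
            return rates

        # Example: step input switched on halfway through a 200 ms simulation.
        drive = np.concatenate([np.zeros(100), np.ones(100)])
        print(simulate_leaky_integrator(drive)[[99, 120, 199]])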

  15. Mems: Platform for Large-Scale Integrated Vacuum Electronic Circuits

    DTIC Science & Technology

    2017-03-20

    The objective of the LIVEC advanced study project was to develop a platform for large-scale integrated vacuum electronic circuits. Final report: MEMS Platform for Large-Scale Integrated Vacuum Electronic Circuits (LIVEC), Contract No. W911NF-14-C-0093, U.S. Army Research Office; reporting period 1 July 2014 to 30 June 2015. Approved for public release; distribution unlimited.

  16. Potential for integrated optical circuits in advanced aircraft with fiber optic control and monitoring systems

    NASA Astrophysics Data System (ADS)

    Baumbick, Robert J.

    1991-02-01

    Fiber optic technology is expected to be used in future advanced weapons platforms as well as commercial aerospace applications. Fiber optic waveguides will be used to transmit noise-free, high-speed data between a multitude of computers, as well as audio and video information to the flight crew. Passive optical sensors connected to control computers with optical fiber interconnects will serve both control and monitoring functions. Implementation of fiber optic technology has already begun. Both the military and NASA have several programs in place. A cooperative program called FOCSI (Fiber Optic Control System Integration) between NASA Lewis and the Navy, to build, environmentally test, and flight demonstrate sensor systems for propulsion and flight control systems, is currently underway. Integrated Optical Circuits (IOCs) are also being given serious consideration for use in advanced aircraft systems. IOCs will result in miniaturization and localization of the components that generate and detect optical signals and process them for use by the control computers. In some complex systems, IOCs may be required to perform calculations optically, if the technology is ready, replacing some of the electronic systems used today. IOCs are attractive because they will result in rugged components capable of withstanding the severe environments of advanced aerospace vehicles. Manufacturing technology developed for microelectronic integrated circuits, applied to IOCs, will result in cost-effective manufacturing. This paper reviews the current FOCSI program and describes the role of IOCs in FOCSI applications.

  17. A review of existing and emerging digital technologies to combat the global trade in fake medicines.

    PubMed

    Mackey, Tim K; Nayyar, Gaurvika

    2017-05-01

    The globalization of the pharmaceutical supply chain has introduced new challenges, chief among them, fighting the international criminal trade in fake medicines. As the manufacture, supply, and distribution of drugs becomes more complex, so does the need for innovative technology-based solutions to protect patients globally. Areas covered: We conducted a multidisciplinary review of the science/health, information technology, computer science, and general academic literature with the aim of identifying cutting-edge existing and emerging 'digital' solutions to combat fake medicines. Our review identified five distinct categories of technology including mobile, radio frequency identification, advanced computational methods, online verification, and blockchain technology. Expert opinion: Digital fake medicine solutions are unifying platforms that integrate different types of anti-counterfeiting technologies as complementary solutions, improve information sharing and data collection, and are designed to overcome existing barriers of adoption and implementation. Investment in this next generation technology is essential to ensure the future security and integrity of the global drug supply chain.

  18. Tier-2 Optimisation for Computational Density/Diversity and Big Data

    NASA Astrophysics Data System (ADS)

    Fay, R. B.; Bland, J.

    2014-06-01

    As the number of cores per chip continues to trend upwards and new CPU architectures emerge, increasing CPU density and diversity presents multiple challenges to site administrators. These include scheduling for massively multi-core systems (potentially including Graphical Processing Units (GPUs), both integrated and dedicated, and Many Integrated Core (MIC) coprocessors) to ensure a balanced throughput of jobs while preserving overall cluster throughput, as well as the increasing complexity of developing for these heterogeneous platforms and the challenge of managing this more complex mix of resources. In addition, meeting data demands as both dataset sizes increase and the rate of demand scales with increased computational power requires additional performance from the associated storage elements. In this report, we evaluate one emerging technology, Solid State Drive (SSD) caching for RAID controllers, with consideration of its potential to assist in meeting evolving demand. We also briefly consider the broader developing trends outlined above in order to identify issues that may develop and assess what actions should be taken in the immediate term to address them.

  19. Modelling brain emergent behaviours through coevolution of neural agents.

    PubMed

    Maniadakis, Michail; Trahanias, Panos

    2006-06-01

    Recently, many research efforts have focused on modelling partial brain areas, with the long-term goal of supporting the cognitive abilities of artificial organisms. Existing models usually suffer from heterogeneity, which makes their integration very difficult. The present work introduces a computational framework to address brain modelling tasks, emphasizing the integrative performance of substructures. Moreover, the implemented models are embedded in a robotic platform to support its behavioural capabilities. We follow an agent-based approach in the design of substructures to support the autonomy of partial brain structures. Agents are formulated to allow the emergence of a desired behaviour after a certain amount of interaction with the environment. An appropriate collaborative coevolutionary algorithm, able to emphasize both the speciality of brain areas and their cooperative performance, is employed to support the design specification of agent structures. The effectiveness of the proposed approach is illustrated through the implementation of computational models for the motor cortex and hippocampus, which are successfully tested on a simulated mobile robot.

  20. Cloud Computing Services for Seismic Networks

    NASA Astrophysics Data System (ADS)

    Olson, Michael

    This thesis describes a compositional framework for developing situation awareness applications: applications that provide ongoing information about a user's changing environment. The thesis describes how the framework is used to develop a situation awareness application for earthquakes. The applications are implemented as Cloud computing services connected to sensors and actuators. The architecture and design of the Cloud services are described and measurements of performance metrics are provided. The thesis includes results of experiments on earthquake monitoring conducted over a year. The applications developed by the framework are (1) the CSN---the Community Seismic Network---which uses relatively low-cost sensors deployed by members of the community, and (2) SAF---the Situation Awareness Framework---which integrates data from multiple sources, including the CSN, CISN---the California Integrated Seismic Network, a network consisting of high-quality seismometers deployed carefully by professionals in the CISN organization and spread across Southern California---and prototypes of multi-sensor platforms that include carbon monoxide, methane, dust and radiation sensors.
