Sample records for reliable computing platform

  1. Design Strategy for a Formally Verified Reliable Computing Platform

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Caldwell, James L.; DiVito, Ben L.

    1991-01-01

    This paper presents a high-level design for a reliable computing platform for real-time control applications. The design tradeoffs and analyses related to the development of a formally verified reliable computing platform are discussed. The design strategy advocated in this paper requires the use of techniques that can be completely characterized mathematically as opposed to more powerful or more flexible algorithms whose performance properties can only be analyzed by simulation and testing. The need for accurate reliability models that can be related to the behavior models is also stressed. Tradeoffs between reliability and voting complexity are explored. In particular, the transient recovery properties of the system are found to be fundamental to both the reliability analysis as well as the "correctness" models.

  2. Formal design and verification of a reliable computing platform for real-time control. Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Divito, Ben L.; Butler, Ricky W.; Caldwell, James L.

    1990-01-01

    A high-level design is presented for a reliable computing platform for real-time control applications. Design tradeoffs and analyses related to the development of the fault-tolerant computing platform are discussed. The architecture is formalized and shown to satisfy a key correctness property. The reliable computing platform uses replicated processors and majority voting to achieve fault tolerance. Under the assumption of a majority of processors working in each frame, it is shown that the replicated system computes the same results as a single processor system not subject to failures. Sufficient conditions are obtained to establish that the replicated system recovers from transient faults within a bounded amount of time. Three different voting schemes are examined and proved to satisfy the bounded recovery time conditions.
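
    The frame-synchronous majority voting described above is easy to illustrate. Below is a minimal Python sketch (not the RCP design or its formal specification) of how one frame's outputs from replicated processors could be voted, assuming a majority of replicas are working in that frame:

```python
from collections import Counter

def majority_vote(frame_outputs):
    """Return the majority value among replicated processor outputs for one frame.

    frame_outputs: list of values, one per replica (e.g. actuator commands).
    Returns the value held by a strict majority, or None if no majority exists
    (i.e. the working-majority assumption is violated in this frame).
    """
    counts = Counter(frame_outputs)
    value, count = counts.most_common(1)[0]
    return value if count > len(frame_outputs) // 2 else None

# Example: 4 replicas, one suffering a transient fault in this frame.
replica_outputs = [42, 42, 17, 42]
print(majority_vote(replica_outputs))  # -> 42, the faulty replica is out-voted
```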

  3. Formal design and verification of a reliable computing platform for real-time control. Phase 2: Results

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Divito, Ben L.

    1992-01-01

The design and formal verification of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications, is presented. The RCP uses N-Modular Redundant (NMR) style redundancy to mask faults and internal majority voting to flush the effects of transient faults. The system is formally specified and verified using the Ehdm verification system. A major goal of this work is to provide the system with significant capability to withstand the effects of High Intensity Radiated Fields (HIRF).

  4. [The Key Technology Study on Cloud Computing Platform for ECG Monitoring Based on Regional Internet of Things].

    PubMed

    Yang, Shu; Qiu, Yuyan; Shi, Bo

    2016-09-01

This paper explores methods of building a regional internet of things for ECG monitoring, focusing on the implementation of an ECG monitoring center based on a cloud computing platform. It analyzes the implementation principles of automatic identification of arrhythmia types. It also studies the system architecture and key techniques of the cloud computing platform, including server load balancing technology, reliable storage of massive numbers of small files, and the implementation of a quick search function.

  5. Application verification research of cloud computing technology in the field of real time aerospace experiment

    NASA Astrophysics Data System (ADS)

    Wan, Junwei; Chen, Hongyan; Zhao, Jing

    2017-08-01

According to the real-time, reliability and safety requirements of aerospace experiments, a single-center cloud computing application verification platform is constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on the analysis of the test results, a preliminary conclusion is obtained: a cloud computing platform can be applied to compute-intensive aerospace experiment workloads, while for I/O-intensive workloads the traditional physical machine is recommended.

  6. Formal design and verification of a reliable computing platform for real-time control (phase 3 results)

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Divito, Ben L.; Holloway, C. Michael

    1994-01-01

    In this paper the design and formal verification of the lower levels of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications, are presented. The RCP uses NMR-style redundancy to mask faults and internal majority voting to flush the effects of transient faults. Two new layers of the RCP hierarchy are introduced: the Minimal Voting refinement (DA_minv) of the Distributed Asynchronous (DA) model and the Local Executive (LE) Model. Both the DA_minv model and the LE model are specified formally and have been verified using the Ehdm verification system. All specifications and proofs are available electronically via the Internet using anonymous FTP or World Wide Web (WWW) access.

  7. Computational toxicology using the OpenTox application programming interface and Bioclipse

    PubMed Central

    2011-01-01

    Background Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications. Findings This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplifying communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources. Conclusions A novel computational toxicity assessment platform was generated from integration of two open science platforms related to toxicology: Bioclipse, that combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets by the use of the Open Standards from the OpenTox Application Programming Interface. This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, taking advantage of the Bioclipse workbench handling the technical layers. PMID:22075173

  8. Cloud Computing for the Grid: GridControl: A Software Platform to Support the Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

GENI Project: Cornell University is creating a new software platform for grid operators called GridControl that will utilize cloud computing to more efficiently control the grid. In a cloud computing system, there are minimal hardware and software demands on users. The user can tap into a network of computers that is housed elsewhere (the cloud) and the network runs computer applications for the user. The user only needs interface software to access all of the cloud’s data resources, which can be as simple as a web browser. Cloud computing can reduce costs, facilitate innovation through sharing, empower users, and improve the overall reliability of a dispersed system. Cornell’s GridControl will focus on 4 elements: delivering the state of the grid to users quickly and reliably; building networked, scalable grid-control software; tailoring services to emerging smart grid uses; and simulating smart grid behavior under various conditions.

  9. MSIX - A general and user-friendly platform for RAM analysis

    NASA Astrophysics Data System (ADS)

    Pan, Z. J.; Blemel, Peter

The authors present a CAD (computer-aided design) platform supporting RAM (reliability, availability, and maintainability) analysis with efficient system description and alternative evaluation. The design concepts, implementation techniques, and application results are described. This platform is user-friendly because of its graphic environment, drawing facilities, object orientation, self-tutoring, and access to the operating system. The programs' independence and portability make them generally applicable to various analysis tasks.

  10. The Advance of Computing from the Ground to the Cloud

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2009-01-01

    A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…

  11. Time Triggered Protocol (TTP) for Integrated Modular Avionics

    NASA Technical Reports Server (NTRS)

    Motzet, Guenter; Gwaltney, David A.; Bauer, Guenther; Jakovljevic, Mirko; Gagea, Leonard

    2006-01-01

Traditional avionics computing systems are federated, with each system provided on a number of dedicated hardware units. Federated applications are physically separated from one another and analysis of the systems is undertaken individually. Integrated Modular Avionics (IMA) takes these federated functions and integrates them on a common computing platform in a tightly deterministic distributed real-time network of computing modules in which the different applications can run. IMA supports different levels of criticality in the same computing resource and provides a platform for implementation of fault tolerance through hardware and application redundancy. Modular implementation has distinct benefits in design, testing and system maintainability. This paper covers the requirements for fault tolerant bus systems used to provide reliable communication between IMA computing modules. An overview of the Time Triggered Protocol (TTP) specification and implementation as a reliable solution for IMA systems is presented. Application examples in aircraft avionics and a development system for future space application are covered. The commercially available TTP controller can also be implemented in an FPGA and the results from implementation studies are covered. Finally, future directions for the application of TTP and related development activities are presented.
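
    The time-triggered communication underlying TTP statically allocates each module a transmission window in a repeating round. The sketch below, with made-up node names and slot length, shows how such a TDMA round could be laid out; it is illustrative only and not the TTP cluster schedule format:

```python
def build_tdma_round(node_ids, slot_length_us):
    """Assign each node a fixed transmission window within one TDMA round.

    Returns a list of (node_id, start_us, end_us) tuples; the round repeats
    with period len(node_ids) * slot_length_us.
    """
    schedule = []
    for i, node in enumerate(node_ids):
        start = i * slot_length_us
        schedule.append((node, start, start + slot_length_us))
    return schedule

# Hypothetical modules sharing the bus, each with a 250 microsecond slot.
for entry in build_tdma_round(["FCC-A", "FCC-B", "FCC-C", "IO-1"], slot_length_us=250):
    print(entry)
```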

  12. Information-computational platform for collaborative multidisciplinary investigations of regional climatic changes and their impacts

    NASA Astrophysics Data System (ADS)

    Gordov, Evgeny; Lykosov, Vasily; Krupchatnikov, Vladimir; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara

    2013-04-01

Analysis of the growing volume of climate change related data from sensors and model outputs requires collaborative multidisciplinary efforts by researchers. To do this in a timely and reliable way, a modern information-computational infrastructure supporting integrated studies in the environmental sciences is needed. The recently developed experimental software and hardware platform Climate (http://climate.scert.ru/) provides the required environment for regional climate change investigations. The platform combines a modern web 2.0 approach, GIS functionality, and capabilities to run climate and meteorological models, process large geophysical datasets and support the relevant analysis. It also supports joint software development by distributed research groups and the organization of thematic education for students and post-graduate students. In particular, the platform software includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Runs of the WRF and «Planet Simulator» models integrated into the platform, together with preprocessing and visualization of modeling results, are also provided. All functions of the platform are accessible to a user through a web portal using a common graphical web browser, in the form of an interactive graphical user interface which provides, in particular, selection of the geographical region of interest (pan and zoom), data layer manipulation (order, enable/disable, feature extraction) and visualization of results. The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes within the framework of different multidisciplinary research efforts. Using it, even an unskilled user without specific knowledge can perform reliable computational processing and visualization of large meteorological, climatic and satellite monitoring datasets through a unified graphical web interface. Partial support of RF Ministry of Education and Science grant 8345, SB RAS Program VIII.80.2, Projects 69, 131 and 140, and the APN CBA2012-16NSY project is acknowledged.

  13. Photonic elements in smart systems for use in aerospace platforms

    NASA Astrophysics Data System (ADS)

    Adamovsky, Grigory; Baumbick, Robert J.; Tabib-Azar, Massood

    1998-07-01

To compete globally in the next millennium, designers of new transportation vehicles will have to be innovative. Keen competition will reward innovative concepts that are developed and proven first. In order to improve reliability of aerospace platforms and reduce operating costs, new technologies must be exploited to produce autonomous systems, based on highly distributed, smart systems, which can be treated as line replaceable units. These technologies include photonics, which provide sensing and information transfer functions, and micro electro mechanical systems that will produce the actuation and, in some cases, may even provide a computing capability that resembles the hydro-mechanical control system used in older aircraft systems. The combination of these technologies will provide unique systems that will enable achieving the reliability and cost goals dictated by the global market. In this article we review some of these issues and discuss the role of photonics in smart systems for aerospace platforms.

  14. Provable Transient Recovery for Frame-Based, Fault-Tolerant Computing Systems

    NASA Technical Reports Server (NTRS)

    DiVito, Ben L.; Butler, Ricky W.

    1992-01-01

    We present a formal verification of the transient fault recovery aspects of the Reliable Computing Platform (RCP), a fault-tolerant computing system architecture for digital flight control applications. The RCP uses NMR-style redundancy to mask faults and internal majority voting to purge the effects of transient faults. The system design has been formally specified and verified using the EHDM verification system. Our formalization accommodates a wide variety of voting schemes for purging the effects of transients.

  15. The tracking performance of distributed recoverable flight control systems subject to high intensity radiated fields

    NASA Astrophysics Data System (ADS)

    Wang, Rui

    It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, either from natural or man-made sources, can change data values on digital buses and memory and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting nearly-simultaneously multiple fault containment regions, and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus will be on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results for scenarios that could not be physically tested is also presented.

  16. Time and Space Partition Platform for Safe and Secure Flight Software

    NASA Astrophysics Data System (ADS)

    Esquinas, Angel; Zamorano, Juan; de la Puente, Juan A.; Masmano, Miguel; Crespo, Alfons

    2012-08-01

There are a number of research and development activities that are exploring Time and Space Partition (TSP) to implement safe and secure flight software. This approach allows different real-time applications with different levels of criticality to be executed on the same computer board. In order to do that, flight applications must be isolated from each other in the temporal and spatial domains. This paper presents the first results of a partitioning platform based on the Open Ravenscar Kernel (ORK+) and the XtratuM hypervisor. ORK+ is a small, reliable real-time kernel supporting the Ada Ravenscar Computational model that is central to the ASSERT development process. XtratuM supports multiple virtual machines, i.e. partitions, on a single computer and is being used in the Integrated Modular Avionics for Space study. ORK+ executes in an XtratuM partition enabling Ada applications to share the computer board with other applications.

  17. Global Software Development with Cloud Platforms

    NASA Astrophysics Data System (ADS)

    Yara, Pavan; Ramachandran, Ramaseshan; Balasubramanian, Gayathri; Muthuswamy, Karthik; Chandrasekar, Divya

Offshore and outsourced distributed software development models and processes are facing challenges, previously unknown, with respect to computing capacity, bandwidth, storage, security, complexity, reliability, and business uncertainty. Clouds promise to address these challenges by adopting recent advances in virtualization, parallel and distributed systems, utility computing, and software services. In this paper, we envision a cloud-based platform that addresses some of these core problems. We outline a generic cloud architecture, its design and our first implementation results for three cloud forms - a compute cloud, a storage cloud and a cloud-based software service - in the context of global distributed software development (GSD). Our "compute cloud" provides computational services such as continuous code integration and a compile server farm, the "storage cloud" offers storage (block or file-based) services with an on-line virtual storage service, whereas the on-line virtual labs represent a useful cloud service. We note some of the use cases for clouds in GSD, the lessons learned with our prototypes and identify challenges that must be conquered before realizing the full business benefits. We believe that in the future, software practitioners will focus more on these cloud computing platforms and see clouds as a means of supporting an ecosystem of clients, developers and other key stakeholders.

  18. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses

    PubMed Central

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-01-01

    Due to the upcoming data deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analyses tools, efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600

  19. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    PubMed

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the upcoming data deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analyses tools, efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Formal Techniques for Synchronized Fault-Tolerant Systems

    NASA Technical Reports Server (NTRS)

    DiVito, Ben L.; Butler, Ricky W.

    1992-01-01

    We present the formal verification of synchronizing aspects of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications. The RCP uses NMR-style redundancy to mask faults and internal majority voting to purge the effects of transient faults. The system design has been formally specified and verified using the EHDM verification system. Our formalization is based on an extended state machine model incorporating snapshots of local processors clocks.

  1. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 1: HARP introduction and user's guide

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested, within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.
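
    HiRel's engines solve much richer fault-tree and Markov models, but the basic quantity they predict, the mission reliability of a redundant architecture, can be illustrated with a closed-form example. The sketch below assumes independent replicas with constant failure rates and perfect majority voting; these are illustrative assumptions, not HARP's modeling:

```python
from math import comb, exp

def nmr_reliability(n, lam, t):
    """Reliability of an N-modular-redundant system with perfect majority voting.

    Assumes n independent replicas, each with constant failure rate lam (per hour),
    and mission time t (hours); the system survives while a majority is non-faulty.
    """
    r = exp(-lam * t)              # single-replica reliability
    need = n // 2 + 1              # minimum number of working replicas
    return sum(comb(n, k) * r**k * (1 - r)**(n - k) for k in range(need, n + 1))

# Example: triplex (TMR) vs. simplex for a 10-hour mission, lambda = 1e-4 per hour.
print(nmr_reliability(3, 1e-4, 10))   # ~0.999997 (equals 3R^2 - 2R^3)
print(exp(-1e-4 * 10))                # simplex ~0.999
```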

  2. Loosely Coupled GPS-Aided Inertial Navigation System for Range Safety

    NASA Technical Reports Server (NTRS)

    Heatwole, Scott; Lanzi, Raymond J.

    2010-01-01

    The Autonomous Flight Safety System (AFSS) aims to replace the human element of range safety operations, as well as reduce reliance on expensive, downrange assets for launches of expendable launch vehicles (ELVs). The system consists of multiple navigation sensors and flight computers that provide a highly reliable platform. It is designed to ensure that single-event failures in a flight computer or sensor will not bring down the whole system. The flight computer uses a rules-based structure derived from range safety requirements to make decisions whether or not to destroy the rocket.

  3. A specialized plug-in software module for computer-aided quantitative measurement of medical images.

    PubMed

    Wang, Q; Zeng, Y J; Huo, P; Hu, J L; Zhang, J H

    2003-12-01

This paper presents a specialized system for quantitative measurement of medical images. Using Visual C++, we developed computer-aided software based on Image-Pro Plus (IPP), a software development platform. When transferred to the hard disk of a computer by an MVPCI-V3A frame grabber, medical images can be automatically processed by our own IPP plug-in for immunohistochemical analysis, cytomorphological measurement and blood vessel segmentation. In 34 clinical studies, the system has shown its high stability, reliability and ease of use.

  4. Use of nonlinear design optimization techniques in the comparison of battery discharger topologies for the space platform

    NASA Technical Reports Server (NTRS)

    Sable, Dan M.; Cho, Bo H.; Lee, Fred C.

    1990-01-01

    A detailed comparison of a boost converter, a voltage-fed, autotransformer converter, and a multimodule boost converter, designed specifically for the space platform battery discharger, is performed. Computer-based nonlinear optimization techniques are used to facilitate an objective comparison. The multimodule boost converter is shown to be the optimum topology at all efficiencies. The margin is greatest at 97 percent efficiency. The multimodule, multiphase boost converter combines the advantages of high efficiency, light weight, and ample margin on the component stresses, thus ensuring high reliability.
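
    As an illustration of the computer-based nonlinear optimization approach mentioned above (not the paper's actual converter models), the sketch below minimizes a toy converter mass model subject to an efficiency constraint using SciPy's SLSQP solver; the design variables, model equations, bounds and limits are all invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical design variables: switching frequency f (kHz) and inductance L (uH).
def converter_mass(x):
    f, L = x
    return 0.5 + 20.0 / f + 0.01 * L        # magnetics shrink with frequency (toy model)

def efficiency(x):
    f, L = x
    return 0.99 - 1e-4 * f - 0.5 / L        # switching losses grow with f (toy model)

result = minimize(
    converter_mass,
    x0=[50.0, 100.0],
    method="SLSQP",
    bounds=[(10.0, 500.0), (10.0, 1000.0)],
    constraints=[{"type": "ineq", "fun": lambda x: efficiency(x) - 0.97}],  # eta >= 97%
)
print(result.x, converter_mass(result.x), efficiency(result.x))
```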

  5. Three-Dimensional Space to Assess Cloud Interoperability

    DTIC Science & Technology

    2013-03-01

Portability and Mobility... a collection of network-enabled services that guarantees to provide a scalable, easily accessible, reliable, and personalized computing infrastructure... used in research to describe cloud models, such as SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service).

  6. Online Studies on Variation in Orthopedic Surgery: Computed Tomography in MPEG4 Versus DICOM Format.

    PubMed

    Mellema, Jos J; Mallee, Wouter H; Guitton, Thierry G; van Dijk, C Niek; Ring, David; Doornberg, Job N

    2017-10-01

    The purpose of this study was to compare the observer participation and satisfaction as well as interobserver reliability between two online platforms, Science of Variation Group (SOVG) and Traumaplatform Study Collaborative, for the evaluation of complex tibial plateau fractures using computed tomography in MPEG4 and DICOM format. A total of 143 observers started with the online evaluation of 15 complex tibial plateau fractures via either the SOVG or Traumaplatform Study Collaborative websites using MPEG4 videos or a DICOM viewer, respectively. Observers were asked to indicate the absence or presence of four tibial plateau fracture characteristics and to rate their satisfaction with the evaluation as provided by the respective online platforms. The observer participation rate was significantly higher in the SOVG (MPEG4 video) group compared to that in the Traumaplatform Study Collaborative (DICOM viewer) group (75 and 43%, respectively; P < 0.001). The median observer satisfaction with the online evaluation was seven (range, 0-10) using MPEG4 video compared to six (range, 1-9) using DICOM viewer (P = 0.11). The interobserver reliability for recognition of fracture characteristics in complex tibial plateau fractures was higher for the evaluation using MPEG4 video. In conclusion, observer participation and interobserver reliability for the characterization of tibial plateau fractures was greater with MPEG4 videos than with a standard DICOM viewer, while there was no difference in observer satisfaction. Future reliability studies should account for the method of delivering images.
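
    Interobserver reliability in studies like this is typically summarized with Cohen's kappa. Below is a minimal sketch of the computation for two observers rating a binary fracture characteristic; the ratings are invented, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same cases (any hashable labels)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Example: presence (1) / absence (0) of a fracture characteristic on 10 cases.
a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(a, b), 3))  # agreement corrected for chance
```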

  7. Waggle: A Framework for Intelligent Attentive Sensing and Actuation

    NASA Astrophysics Data System (ADS)

    Sankaran, R.; Jacob, R. L.; Beckman, P. H.; Catlett, C. E.; Keahey, K.

    2014-12-01

Advances in sensor-driven computation and computationally steered sensing will greatly enable future research in fields including environmental and atmospheric sciences. We will present "Waggle," an open-source hardware and software infrastructure developed with two goals: (1) reducing the separation and latency between sensing and computing and (2) improving the reliability and longevity of sensing-actuation platforms in challenging and costly deployments. Inspired by "deep-space probe" systems, the Waggle platform design includes features that can support longitudinal studies, deployments with varying communication links, and remote management capabilities. Waggle lowers the barrier for scientists to incorporate real-time data from their sensors into their computations and to manipulate the sensors or provide feedback through actuators. A standardized software and hardware design allows quick addition of new sensors/actuators and associated software in the nodes and enables them to be coupled with computational codes both in situ and on external compute infrastructure. The Waggle framework currently drives the deployment of two observational systems - a portable and self-sufficient weather platform for study of small-scale effects in Chicago's urban core and an open-ended distributed instrument in Chicago that aims to support several research pursuits across a broad range of disciplines including urban planning, microbiology and computer science. Built around open-source software, hardware, and Linux OS, the Waggle system comprises two components - the Waggle field-node and Waggle cloud-computing infrastructure. The Waggle field-node affords a modular, scalable, fault-tolerant, secure, and extensible platform for hosting sensors and actuators in the field. It supports in situ computation and data storage, and integration with cloud-computing infrastructure. The Waggle cloud infrastructure is designed with the goal of scaling to several hundreds of thousands of Waggle nodes. It supports aggregating data from sensors hosted by the nodes, staging computation, relaying feedback to the nodes and serving data to end-users. We will discuss the Waggle design principles and their applicability to various observational research pursuits, and demonstrate its capabilities.

  8. The Development, Validity, and Reliability of Communication Satisfaction in an Online Asynchronous Discussion Scale

    ERIC Educational Resources Information Center

    Hung, Min-Ling; Chou, Chien

    2014-01-01

    The purpose of this study was to identify dimensions of students' communication satisfaction in an asynchronous discussion forum. An asynchronous discussion may be defined as text-based human-to-human communication via computer networks that provides a platform for the participants to interact with one another to exchange ideas, insights, and…

  9. An Analysis of Failure Handling in Chameleon, A Framework for Supporting Cost-Effective Fault Tolerant Services

    NASA Technical Reports Server (NTRS)

    Haakensen, Erik Edward

    1998-01-01

The desire for low-cost reliable computing is increasing. Most current fault tolerant computing solutions are not very flexible, i.e., they cannot adapt to reliability requirements of newly emerging applications in business, commerce, and manufacturing. It is important that users have a flexible, reliable platform to support both critical and noncritical applications. Chameleon, under development at the Center for Reliable and High-Performance Computing at the University of Illinois, is a software framework for supporting cost-effective, adaptable, networked fault-tolerant services. This thesis details a simulation of fault injection, detection, and recovery in Chameleon. The simulation was written in C++ using the DEPEND simulation library. The results obtained from the simulation included the amount of overhead incurred by the fault detection and recovery mechanisms supported by Chameleon. In addition, information about fault scenarios from which Chameleon cannot recover was gained. The results of the simulation showed that both critical and noncritical applications can be executed in the Chameleon environment with a fairly small amount of overhead. No single point of failure from which Chameleon could not recover was found. Chameleon was also found to be capable of recovering from several multiple failure scenarios.

  10. Validation of tablet-based evaluation of color fundus images

    PubMed Central

    Christopher, Mark; Moga, Daniela C.; Russell, Stephen R.; Folk, James C.; Scheetz, Todd; Abràmoff, Michael D.

    2012-01-01

    Purpose To compare diabetic retinopathy (DR) referral recommendations made by viewing fundus images using a tablet computer to recommendations made using a standard desktop display. Methods A tablet computer (iPad) and a desktop PC with a high-definition color display were compared. For each platform, two retinal specialists independently rated 1200 color fundus images from patients at risk for DR using an annotation program, Truthseeker. The specialists determined whether each image had referable DR, and also how urgently each patient should be referred for medical examination. Graders viewed and rated the randomly presented images independently and were masked to their ratings on the alternative platform. Tablet- and desktop display-based referral ratings were compared using cross-platform, intra-observer kappa as the primary outcome measure. Additionally, inter-observer kappa, sensitivity, specificity, and area under ROC (AUC) were determined. Results A high level of cross-platform, intra-observer agreement was found for the DR referral ratings between the platforms (κ=0.778), and for the two graders, (κ=0.812). Inter-observer agreement was similar for the two platforms (κ=0.544 and κ=0.625 for tablet and desktop, respectively). The tablet-based ratings achieved a sensitivity of 0.848, a specificity of 0.987, and an AUC of 0.950 compared to desktop display-based ratings. Conclusions In this pilot study, tablet-based rating of color fundus images for subjects at risk for DR was consistent with desktop display-based rating. These results indicate that tablet computers can be reliably used for clinical evaluation of fundus images for DR. PMID:22495326
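
    The sensitivity, specificity, and AUC reported above compare tablet-based ratings against the desktop-based reference. The sketch below shows how these quantities can be computed from binary referral decisions and ordinal urgency scores (using a rank-based AUC); the example data are invented, not the study's:

```python
def sensitivity_specificity(reference, test):
    """reference/test: lists of 0/1 referral decisions (1 = refer)."""
    tp = sum(r == 1 and t == 1 for r, t in zip(reference, test))
    tn = sum(r == 0 and t == 0 for r, t in zip(reference, test))
    fp = sum(r == 0 and t == 1 for r, t in zip(reference, test))
    fn = sum(r == 1 and t == 0 for r, t in zip(reference, test))
    return tp / (tp + fn), tn / (tn + fp)

def auc(reference, scores):
    """Rank-based AUC: probability a referable case scores higher than a non-referable one."""
    pos = [s for r, s in zip(reference, scores) if r == 1]
    neg = [s for r, s in zip(reference, scores) if r == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

reference = [1, 0, 1, 1, 0, 0, 1, 0]   # desktop display-based referral decisions
tablet    = [1, 0, 1, 0, 0, 0, 1, 0]   # tablet-based referral decisions
urgency   = [4, 1, 3, 2, 1, 2, 4, 1]   # tablet urgency scores (higher = more urgent)
print(sensitivity_specificity(reference, tablet))
print(auc(reference, urgency))
```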

  11. An MRI-compatible platform for one-dimensional motion management studies in MRI.

    PubMed

    Nofiele, Joris; Yuan, Qing; Kazem, Mohammad; Tatebe, Ken; Torres, Quinn; Sawant, Amit; Pedrosa, Ivan; Chopra, Rajiv

    2016-08-01

Abdominal MRI remains challenging because of respiratory motion. Motion compensation strategies are difficult to compare clinically because of the variability across human subjects. The goal of this study was to evaluate a programmable system for one-dimensional motion management MRI research. A system comprised of a programmable motorized linear stage and computer was assembled and tested in the MRI environment. Tests of the mutual interference between the platform and a whole-body MRI were performed. Organ trajectories generated from a high-temporal resolution scan of a healthy volunteer were used in phantom tests to evaluate the effects of motion on image quality and quantitative MRI measurements. No interference between the motion platform and the MRI was observed, and reliable motion could be produced across a wide range of imaging conditions. Motion-related artifacts commensurate with motion amplitude, frequency, and waveform were observed. T2 measurement of a kidney lesion in an abdominal phantom showed that its value decreased by 67% with physiologic motion, but could be partially recovered with navigator-based motion-compensation. The motion platform can produce reliable linear motion within a whole-body MRI. The system can serve as a foundation for a research platform to investigate and develop motion management approaches for MRI. Magn Reson Med 76:702-712, 2016. © 2015 Wiley Periodicals, Inc.

  12. Long-term real-time structural health monitoring using wireless smart sensor

    NASA Astrophysics Data System (ADS)

    Jang, Shinae; Mensah-Bonsu, Priscilla O.; Li, Jingcheng; Dahal, Sushil

    2013-04-01

Improving the safety and security of civil infrastructure has been a critical issue for decades, since civil infrastructure plays a central role in the economics and politics of a modern society. Structural health monitoring of civil infrastructure using wireless smart sensor networks has recently emerged as a promising solution to increase structural reliability, enhance inspection quality, and reduce maintenance costs. Though hardware and software frameworks are well prepared for wireless smart sensors, a long-term real-time health monitoring strategy is still not available due to the lack of a systematic interface. In this paper, the Imote2 smart sensor platform is employed, and a graphical user interface for long-term real-time structural health monitoring has been developed based on Matlab for the Imote2 platform. This computer-aided engineering platform enables control and visualization of measured data as well as a safety alarm feature based on modal property fluctuation. A new decision making strategy to check safety is also developed and integrated into this software. Laboratory validation of the computer-aided engineering platform for the Imote2 on a truss bridge and a building structure has shown the potential of the interface for long-term real-time structural health monitoring.
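
    The safety-alarm idea based on modal property fluctuation can be sketched as a simple threshold check on identified natural frequencies. The baseline values, current values, and 5% threshold below are assumptions for illustration only, not the paper's decision-making strategy:

```python
def modal_shift_alarm(baseline_hz, current_hz, threshold=0.05):
    """Flag modes whose identified natural frequency drifts more than `threshold`
    (relative change) away from the healthy baseline."""
    alarms = []
    for mode, (f0, f) in enumerate(zip(baseline_hz, current_hz), start=1):
        drift = abs(f - f0) / f0
        if drift > threshold:
            alarms.append((mode, f0, f, round(drift, 3)))
    return alarms

baseline = [2.41, 7.83, 15.20]   # Hz, identified on the undamaged structure
current  = [2.39, 7.21, 15.05]   # Hz, from the latest monitoring cycle
print(modal_shift_alarm(baseline, current))  # mode 2 drifts ~7.9% -> alarm
```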

  13. Network-based drug discovery by integrating systems biology and computational technologies

    PubMed Central

    Leung, Elaine L.; Cao, Zhi-Wei; Jiang, Zhi-Hong; Zhou, Hua

    2013-01-01

Network-based intervention has become a trend for treating systemic diseases, but it relies on regimen optimization and valid multi-target actions of the drugs. The complex multi-component nature of medicinal herbs may serve as a valuable resource for network-based multi-target drug discovery due to its potential synergistic treatment effects. Recently, multiple robust systems biology platforms have proven powerful for uncovering molecular mechanisms and connections between drugs and their targeted dynamic networks. However, optimization methods for drug combinations remain insufficient, owing to the lack of tighter integration across multiple ‘-omics’ databases. Newly developed algorithm- or network-based computational models can tightly integrate ‘-omics’ databases and optimize combinational regimens of drug development, which encourages developing medicinal herbs into a new wave of network-based multi-target drugs. However, challenges to further integration of medicinal herb databases with multiple systems biology platforms for multi-target drug optimization remain, owing to the uncertain reliability of individual data sets and the width, depth and degree of standardization of herbal medicine. Standardization of the methodology and terminology of multiple systems biology and herbal databases would facilitate the integration. Enhancing publicly accessible databases and increasing the number of studies using systems biology platforms on herbal medicine would be helpful. Further integration across various ‘-omics’ platforms and computational tools would accelerate the development of network-based drug discovery and network medicine. PMID:22877768

  14. Cloud@Home: A New Enhanced Computing Paradigm

    NASA Astrophysics Data System (ADS)

    Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco

Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet Computing ("…a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)) and Green computing (a new frontier of Ethical computing starting from the assumption that in the near future energy costs will be related to environmental pollution).

  15. The Army/Air Force RAMCAD (Reliability and Maintainability Computer-Aided Design) Program Progress Report Through September 1989

    DTIC Science & Technology

    1990-01-01

...deal with platforms and their weapons. Two approaches emerged from this effort. The first plan was to address the benefits and the problems involved in... ...demonstrated the benefits to be gained in time and training cost of a common method for operating different computer programs. It is this common mode of...

  16. Monitor weather conditions for cloud seeding control. [Colorado River Basin

    NASA Technical Reports Server (NTRS)

    Kahan, A. M. (Principal Investigator)

    1973-01-01

The author has identified the following significant results. The near real-time DCS platform data transfer to the time-share computer is a working reality. Six stations are now being automatically monitored and displayed with a system delay of 3 to 8 hours from time of data transmission to time of data accessibility on the computer. The DCS platform system has proven itself a valuable tool for near real-time monitoring of mountain precipitation. Data from Wolf Creek Pass were an important input in making the decision when to suspend seeding operations to avoid exceeding suspension criteria in that area. The DCS platforms, as deployed in this investigation, have proven themselves to be reliable, weather-resistant systems for winter mountain environments in the southern Colorado mountains.

  17. Assessment in health care education - modelling and implementation of a computer supported scoring process.

    PubMed

    Alfredsson, Jayne; Plichart, Patrick; Zary, Nabil

    2012-01-01

Research on computer supported scoring of assessments in health care education has mainly focused on automated scoring. Little attention has been given to how informatics can support the currently predominant human-based grading approach. This paper reports steps taken to develop a model for a computer supported scoring process that focuses on optimizing a task that was previously undertaken without computer support. The model was also implemented in the open source assessment platform TAO in order to study its benefits. The ability to score test takers anonymously, analytics on the graders' reliability and a more time-efficient process are examples of observed benefits. Computer supported scoring will increase the quality of the assessment results.

  18. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 3: HARP Graphics Oriented (GO) input user's guide

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.

  19. Sensory System for Implementing a Human—Computer Interface Based on Electrooculography

    PubMed Central

    Barea, Rafael; Boquete, Luciano; Rodriguez-Ascariz, Jose Manuel; Ortega, Sergio; López, Elena

    2011-01-01

    This paper describes a sensory system for implementing a human–computer interface based on electrooculography. An acquisition system captures electrooculograms and transmits them via the ZigBee protocol. The data acquired are analysed in real time using a microcontroller-based platform running the Linux operating system. The continuous wavelet transform and neural network are used to process and analyse the signals to obtain highly reliable results in real time. To enhance system usability, the graphical interface is projected onto special eyewear, which is also used to position the signal-capturing electrodes. PMID:22346579

  20. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 4: HARP Output (HARPO) graphics display user's guide

    NASA Technical Reports Server (NTRS)

    Sproles, Darrell W.; Bavuso, Salvatore J.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.

  1. Computing rank-revealing QR factorizations of dense matrices.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C. H.; Quintana-Orti, G.; Mathematics and Computer Science

    1998-06-01

We develop algorithms and implementations for computing rank-revealing QR (RRQR) factorizations of dense matrices. First, we develop an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy, aided by incremental condition estimation. Second, we develop efficiently implementable variants of guaranteed reliable RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang. We suggest algorithmic improvements with respect to condition estimation, termination criteria, and Givens updating. By combining the block algorithm with one of the triangular postprocessing steps, we arrive at an efficient and reliable algorithm for computing an RRQR factorization of a dense matrix. Experimental results on IBM RS/6000 and SGI R8000 platforms show that this approach performs up to three times faster than the less reliable QR factorization with column pivoting as it is currently implemented in LAPACK, and comes within 15% of the performance of the LAPACK block algorithm for computing a QR factorization without any column exchanges. Thus, we expect this routine to be useful in many circumstances where numerical rank deficiency cannot be ruled out, but currently has been ignored because of the computational cost of dealing with it.
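
    The paper's windowed block RRQR algorithms are not available as such in common Python libraries, but the rank-revealing idea can be illustrated with SciPy's column-pivoted QR and a diagonal-threshold estimate of numerical rank. A sketch on a deliberately rank-deficient matrix (illustrative only, not the authors' algorithm):

```python
import numpy as np
from scipy.linalg import qr

def numerical_rank_rrqr(A, tol=None):
    """Estimate numerical rank from a column-pivoted QR factorization A P = Q R.

    The diagonal magnitudes of R are (approximately) decreasing; columns whose
    |R[k, k]| falls below tol * |R[0, 0]| are treated as numerically dependent.
    """
    Q, R, piv = qr(A, mode="economic", pivoting=True)
    if tol is None:
        tol = max(A.shape) * np.finfo(A.dtype).eps
    d = np.abs(np.diag(R))
    return int(np.sum(d > tol * d[0])), piv

# Example: a 100 x 50 matrix of numerical rank 30.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30)) @ rng.standard_normal((30, 50))
rank, piv = numerical_rank_rrqr(A)
print(rank)  # -> 30
```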

  2. A simplified method in comparison with comprehensive interaction incremental dynamic analysis to assess seismic performance of jacket-type offshore platforms

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.

    2015-12-01

The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. Therefore, in this paper, a simplified method is proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOP) in regions ranging from near-elastic response to global collapse. The simplified method exploits the good agreement between the static pushover (SPO) curve and the entire summarized comprehensive interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome the challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. Then, an existing JTOP in the Persian Gulf is presented to illustrate the procedure, and finally a comparison is made between the simplified method and CI-IDA results. The simplified method is very informative and practical for current engineering purposes. It is able to predict seismic performance from elasticity to global dynamic instability with reasonable accuracy and little computational effort.

  3. The Clinical Relevance of Force Platform Measures in Multiple Sclerosis: A Review

    PubMed Central

    Prosperini, Luca; Pozzilli, Carlo

    2013-01-01

Balance impairment and falls are frequent in patients with multiple sclerosis (PwMS), and they may occur even at the earliest stage of the disease and in minimally impaired patients. The introduction of computer-based force platform measures (i.e., static and dynamic posturography) has provided an objective and sensitive tool to document both deficits and improvements in balance. By using more challenging test conditions, force platform measures can also reveal subtle balance disorders undetectable by common clinical scales. Furthermore, posturographic techniques may also make it possible to reliably identify PwMS who are at risk of accidental falls. Although force platform measures offer several theoretical advantages, only a few studies have extensively investigated their role in better managing PwMS. Standardised procedures, as well as the clinical relevance of changes detected by static or dynamic posturography, are still lacking. In this review, we summarized studies which investigated balance deficit by means of force platform measures, focusing on their ability to detect patients at high risk of falls and to estimate rehabilitation-induced changes, highlighting the pros and the cons with respect to clinical scales. PMID:23766910
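
    Static posturography reduces a force-platform center-of-pressure (CoP) trace to scalar sway measures. The sketch below computes a few generic ones (path length, mean velocity, sway ranges) from a sampled CoP trajectory; the simulated trace and units are illustrative and not taken from the reviewed studies:

```python
import numpy as np

def cop_sway_measures(cop_xy, fs):
    """Basic posturographic measures from a center-of-pressure trace.

    cop_xy: (N, 2) array of CoP coordinates in cm; fs: sampling rate in Hz.
    """
    steps = np.diff(cop_xy, axis=0)
    path_length = np.sum(np.linalg.norm(steps, axis=1))      # cm
    duration = (len(cop_xy) - 1) / fs                         # s
    return {
        "path_length_cm": path_length,
        "mean_velocity_cm_s": path_length / duration,
        "ap_range_cm": np.ptp(cop_xy[:, 1]),                  # anterior-posterior
        "ml_range_cm": np.ptp(cop_xy[:, 0]),                  # medio-lateral
    }

# Example: 30 s of simulated quiet standing sampled at 100 Hz.
t = np.arange(0, 30, 0.01)
cop = np.column_stack([0.3 * np.sin(0.4 * t), 0.5 * np.sin(0.25 * t + 1.0)])
print(cop_sway_measures(cop, fs=100))
```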

  4. A Conceptual Design for a Reliable Optical Bus (ROBUS)

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Malekpour, Mahyar; Torres, Wilfredo

    2002-01-01

    The Scalable Processor-Independent Design for Electromagnetic Resilience (SPIDER) is a new family of fault-tolerant architectures under development at NASA Langley Research Center (LaRC). The SPIDER is a general-purpose computational platform suitable for use in ultra-reliable embedded control applications. The design scales from a small configuration supporting a single aircraft function to a large distributed configuration capable of supporting several functions simultaneously. SPIDER consists of a collection of simplex processing elements communicating via a Reliable Optical Bus (ROBUS). The ROBUS is an ultra-reliable, time-division multiple access broadcast bus with strictly enforced write access (no babbling idiots) providing basic fault-tolerant services using formally verified fault-tolerance protocols including Interactive Consistency (Byzantine Agreement), Internal Clock Synchronization, and Distributed Diagnosis. The conceptual design of the ROBUS is presented in this paper including requirements, topology, protocols, and the block-level design. Verification activities, including the use of formal methods, are also discussed.
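
    The Interactive Consistency service mentioned above is classically achieved by an oral-messages exchange followed by majority voting at each receiver. Below is a simplified sketch of that idea (not the ROBUS protocol or its formally verified design), assuming only relays can be faulty; the corrupted value "X" is a stand-in for arbitrary faulty behavior:

```python
from collections import Counter

def interactive_consistency_om1(sent_values, faulty=frozenset()):
    """One round of oral-messages exchange among n nodes (simplified model).

    sent_values[i] is the private value node i broadcasts. Each node relays every
    value it received; a faulty relay forwards a corrupted value instead. Each
    receiver decides on the majority of the copies it sees (relayed plus direct),
    so with n = 4 and at most one faulty relay all correct nodes obtain the same vector.
    """
    n = len(sent_values)
    decided = []
    for receiver in range(n):
        vector = []
        for source in range(n):
            copies = []
            for relay in range(n):
                if relay == receiver or relay == source:
                    continue
                value = sent_values[source]
                if relay in faulty:
                    value = "X"                      # arbitrary wrong value from a faulty relay
                copies.append(value)
            copies.append(sent_values[source])       # the copy received directly from the source
            vector.append(Counter(copies).most_common(1)[0][0])
        decided.append(vector)
    return decided

# Four nodes, node 3 faulty as a relay: the correct nodes still agree on one vector.
print(interactive_consistency_om1(["a", "b", "c", "d"], faulty={3}))
```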

  5. Scientific Services on the Cloud

    NASA Astrophysics Data System (ADS)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

Scientific Computing was one of the first ever applications of parallel and distributed computation. To this day, scientific applications remain some of the most compute intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reason as businesses and other professionals. The hardware is provided, maintained, and administered by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and by far the easiest high performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part of the scientific computing initiative.

  6. Investigation of Pearl River data collection system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The reliability of employing NASA-developed remote sensing for in situ, near-real-time monitoring of water quality in the Pearl River is evaluated. The placement, operation and maintenance of a number of NASA-developed data collection platforms (DCPs) on the Pearl River are described. The reception, processing, and retransmission of water quality data from an ERTS satellite to the Mississippi Air and Water Pollution Control Commission (MAWPCC) via computer linkup are assessed.

  7. Prevention of Low Back Pain in The Military. A Randomized Clinical Trial

    DTIC Science & Technology

    2011-07-01

    the web-based follow-up survey. These measures were collected at baseline using a variety of commonly utilized and previously validated self-report...computer literacy, which could certainly influence response rates given the web-based platform utilized to assess follow-up in the POLM trial. Previous...measures of reliability. As research continues to explore the changes in rectus abdominis musculature associated with pregnancy and its potential

  8. Reliability and Accuracy of Static Parameters Obtained From Ink and Pressure Platform Footprints.

    PubMed

    Zuil-Escobar, Juan Carlos; Martínez-Cepa, Carmen Belén; Martín-Urrialde, Jose Antonio; Gómez-Conesa, Antonia

    2016-09-01

    The purpose of this study was to evaluate the accuracy and the intrarater reliability of arch angle (AA), Staheli Index (SI), and Chippaux-Smirak Index (CSI) obtained from ink and pressure platform footprints. We obtained AA, SI, and CSI measurements from ink pedigraph footprints and pressure platform footprints in 40 healthy participants (aged 25.65 ± 5.187 years). Intrarater reliability was calculated for all parameters obtained using the 2 methods. Standard error of measurement and minimal detectable change were also calculated. A repeated-measure analysis of variance was used to identify differences between ink and pressure platform footprints. Intraclass correlation coefficient and Bland and Altman plots were used to assess similar parameters obtained using different methods. Intrarater reliability was >0.9 for all parameters and was slightly higher for the ink footprints. No statistical difference was reported in repeated-measure analysis of variance for any of the parameters. Intraclass correlation coefficient values from AA, SI, and CSI that were obtained using ink footprints and pressure platform footprints were excellent, ranging from 0.797 to 0.829. However, pressure platform overestimated AA and underestimated SI and CSI. Our study revealed that AA, SI, and CSI were similar regardless of whether the ink or pressure platform method was used. In addition, the parameters indicated high intrarater reliability and were reproducible. Copyright © 2016. Published by Elsevier Inc.
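    A minimal sketch of how ICC(2,1), the standard error of measurement (SEM) and the minimal detectable change (MDC) reported above can be computed from a subjects-by-trials matrix; this is a generic Shrout-Fleiss formulation assumed for illustration, not the authors' statistical code.

        import numpy as np

        def icc_2_1(x):
            # ICC(2,1): two-way random effects, absolute agreement, single measure.
            # x is an (n_subjects, k_trials) array of the footprint parameter of interest.
            x = np.asarray(x, dtype=float)
            n, k = x.shape
            grand = x.mean()
            ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
            ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # trials
            resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
            ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                         + k * (ms_cols - ms_err) / n)

        def sem_and_mdc95(x, icc):
            # SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * SEM * sqrt(2).
            sem = np.std(x, ddof=1) * np.sqrt(1.0 - icc)
            return sem, 1.96 * sem * np.sqrt(2.0)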

  9. A data management system to enable urgent natural disaster computing

    NASA Astrophysics Data System (ADS)

    Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton

    2014-05-01

    Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and reducing casualties. Computer simulations can generate information and provide predictions to facilitate this decision-making process. Getting the data to the required resources is a critical requirement to enable the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to effectively carry out data activities within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare the required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resource. Additional requirements include the need to manage deadlines and huge volumes of data, fault tolerance, reliability, flexibility to change, ease of use, etc. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that the deadlines can be met, a fault tolerance manager to increase the reliability of the platform and a data manager to initiate and perform the data activities. These managers will enable the selection of the most appropriate resource, transfer protocol, etc. such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associate two types of deadlines [2] with an urgent computing system. Soft-firm deadline: missing a soft-firm deadline renders the computation less useful, resulting in a cost that can have severe consequences. Hard deadline: missing a hard deadline renders the computation useless and results in fully catastrophic consequences. A prototype of this system has a REST-based service manager. The REST-based implementation provides a uniform interface that is easy to use. New and upcoming file transfer protocols can easily be added and accessed via the service manager. The service manager interacts with the other four managers to coordinate the data activities so that the fundamental requirement of natural disaster urgent computing, i.e. the deadline, can be fulfilled in a reliable manner. A data activity can include data staging, data archiving and data storing. Reliability is ensured by the choice of a network-of-managers organisation model [1], the configuration manager and the fault tolerance manager. With this proposed design, an easy-to-use, resource-independent data management system that can support and fulfil the computation of a natural disaster prediction within stipulated deadlines can thus be realised. References [1] H. G. Hegering, S. Abeck, and B. Neumair, Integrated management of networked systems - concepts, architectures, and their operational application, Morgan Kaufmann Publishers, 340 Pine Street, Sixth Floor, San Francisco, CA 94104-3205, USA, 1999. [2] H. Kopetz, Real-time systems design principles for distributed embedded applications, second edition, Springer, LLC, 233 Spring Street, New York, NY 10013, USA, 2011. [3] S. H. Leong, A. Frank, and D. Kranzlmüller, Leveraging e-infrastructures for urgent computing, Procedia Computer Science 18 (2013), no. 0, 2177-2186, 2013 International Conference on Computational Science. [4] N. Trebon, Enabling urgent computing within the existing distributed computing infrastructure, Ph.D. thesis, University of Chicago, August 2011, http://people.cs.uchicago.edu/~ntrebon/docs/dissertation.pdf.
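    As a rough illustration of the deadline-driven selection described above, the sketch below picks the fastest transfer option whose estimated completion time fits the stipulated deadline; the field names and structure are assumptions for illustration, not the platform's actual interfaces.

        def pick_transfer_option(options, data_size_bytes, deadline_s):
            # options: list of dicts such as
            #   {"protocol": "gridftp", "setup_s": 2.0, "bandwidth_Bps": 5e8}
            # Return the fastest option whose estimate meets the deadline, or None
            # so the caller can escalate (e.g. reduce data volume or abort).
            feasible = []
            for opt in options:
                estimate_s = opt["setup_s"] + data_size_bytes / opt["bandwidth_Bps"]
                if estimate_s <= deadline_s:
                    feasible.append((estimate_s, opt))
            return min(feasible, key=lambda t: t[0])[1] if feasible else None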

  10. openECA Platform and Analytics Alpha Test Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Russell

    The objective of the Open and Extensible Control and Analytics (openECA) Platform for Phasor Data project is to develop an open source software platform that significantly accelerates the production, use, and ongoing development of real-time decision support tools, automated control systems, and off-line planning systems that (1) incorporate high-fidelity synchrophasor data and (2) enhance system reliability while enabling the North American Electric Reliability Corporation (NERC) operating functions of reliability coordinator, transmission operator, and/or balancing authority to be executed more effectively.

  11. openECA Platform and Analytics Beta Demonstration Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Russell

    The objective of the Open and Extensible Control and Analytics (openECA) Platform for Phasor Data project is to develop an open source software platform that significantly accelerates the production, use, and ongoing development of real-time decision support tools, automated control systems, and off-line planning systems that (1) incorporate high-fidelity synchrophasor data and (2) enhance system reliability while enabling the North American Electric Reliability Corporation (NERC) operating functions of reliability coordinator, transmission operator, and/or balancing authority to be executed more effectively.

  12. Exploring the Potential of the iPad and Xbox Kinect for Cognitive Science Research.

    PubMed

    Rolle, Camarin E; Voytek, Bradley; Gazzaley, Adam

    2015-06-01

    Many studies have validated consumer-facing hardware platforms as efficient, cost-effective, and accessible data collection instruments. However, there are few reports that have assessed the reliability of these platforms as assessment tools compared with traditional data collection platforms. Here we evaluated performance on a spatial attention paradigm obtained by our standard in-lab data collection platform, the personal computer (PC), and compared performance with that of two widely adopted, consumer technology devices: the Apple (Cupertino, CA) iPad(®) 2 and Microsoft (Redmond, WA) Xbox(®) Kinect(®). The task assessed spatial attention, a fundamental ability that we use to navigate the complex sensory input we face daily in order to effectively engage in goal-directed activities. Participants were presented with a central spatial cue indicating where on the screen a stimulus would appear. We manipulated spatial cueing such that, on a given trial, the cue presented one of four levels of information indicating the upcoming target location. Based on previous research, we hypothesized that as information of the cued spatial area decreased (i.e., larger area of possible target location) there would be a parametric decrease in performance, as revealed by slower response times and lower accuracies. Identical paradigm parameters were used for each of the three platforms, and testing was performed in a single session with a counterbalanced design. We found that performance on the Kinect and iPad showed a stronger parametric effect across the cued-information levels than that on the PC. Our results suggest that not only can the Kinect and iPad be reliably used as assessment tools to yield research-quality behavioral data, but that these platforms exploit mechanics that could be useful in building more interactive, and therefore effective, cognitive assessment and training designs. We include a discussion on the possible contributing factors to the differential effects between platforms, as well as potential confounds of the study.

  13. Arkas: Rapid reproducible RNAseq analysis

    PubMed Central

    Colombo, Anthony R.; J. Triche Jr, Timothy; Ramsingh, Giridharan

    2017-01-01

    The recently introduced Kallisto pseudoaligner has radically simplified the quantification of transcripts in RNA-sequencing experiments. We offer the cloud-scale RNAseq pipelines Arkas-Quantification and Arkas-Analysis, available within Illumina's BaseSpace cloud application platform, which expedite Kallisto preparatory routines, reliably calculate differential expression, and perform gene-set enrichment of REACTOME pathways. Because of the inherent inefficiencies of scale, Illumina's BaseSpace computing platform offers a massively parallel distributed environment, improving data management services and data importing. Arkas-Quantification deploys Kallisto for parallel cloud computations and is conveniently integrated downstream from the BaseSpace Sequence Read Archive (SRA) import/conversion application titled SRA Import. Arkas-Analysis annotates the Kallisto results by extracting structured information directly from source FASTA files with per-contig metadata, and calculates differential expression and gene-set enrichment analysis on both coding genes and transcripts. The Arkas cloud pipeline supports ENSEMBL transcriptomes and can be used downstream from SRA Import, facilitating the raw sequence importing, SRA FASTQ conversion, RNA quantification and analysis steps. PMID:28868134

  14. Hydrological Monitoring System Design and Implementation Based on IOT

    NASA Astrophysics Data System (ADS)

    Han, Kun; Zhang, Dacheng; Bo, Jingyi; Zhang, Zhiguang

    In this article, an embedded system development platform based on GSM communication is proposed. Through its application in hydrological monitoring management, the authors discuss communication reliability and lightning protection, suggest detailed solutions, and analyze the design and realization of the host computer software. Finally, the communication program is given. A hydrological monitoring system built on a wireless communication network is a typical practical application of an embedded system, and it makes hydrological monitoring management intelligent, modern, efficient and networked.

  15. Decentralized State Estimation and Remedial Control Action for Minimum Wind Curtailment Using Distributed Computing Platform

    DOE PAGES

    Liu, Ren; Srivastava, Anurag K.; Bakken, David E.; ...

    2017-08-17

    Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep the line flow within the limit. A Remedial Action Scheme (RAS) offers a quick control action mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE 118-bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.

  16. Decentralized State Estimation and Remedial Control Action for Minimum Wind Curtailment Using Distributed Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ren; Srivastava, Anurag K.; Bakken, David E.

    Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep the line flow within the limit. A Remedial Action Scheme (RAS) offers a quick control action mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE 118-bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.

  17. Quantitative method for gait pattern detection based on fiber Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Ding, Lei; Tong, Xinglin; Yu, Lie

    2017-03-01

    This paper presents a method that uses fiber Bragg grating (FBG) sensors to distinguish the temporal gait patterns in gait cycles. Unlike most conventional methods, which rely on electronic sensors to collect physical quantities (i.e., strains, forces, pressures, displacements, velocities, and accelerations), the proposed method utilizes the backreflected peak wavelength of FBG sensors to describe the motion characteristics of human walking. Specifically, FBG sensors are sensitive to external strain, so their backreflected peak wavelength shifts according to the applied strain. Therefore, when subjects walk with different gait patterns, the strains on the FBG sensors differ, and the magnitude of the backreflected peak wavelength varies accordingly. To test the reliability of the FBG sensor platform for gait pattern detection, the gold-standard method of using force-sensitive resistors (FSRs) to define gait patterns is introduced as a reference platform. The reliability of the FBG sensor platform is determined by comparing the detection results between the FBG and FSR platforms. The experimental results show that the FBG sensor platform is reliable in gait pattern detection and achieves high reliability when compared with the reference platform.
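    For reference, the standard FBG response relation that underlies the peak-wavelength shifts described above is (a textbook relation assumed here, not quoted from the paper):

        \lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda, \qquad
        \frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon + (\alpha + \xi)\,\Delta T

    where \lambda_B is the Bragg wavelength, n_{\mathrm{eff}} the effective refractive index, \Lambda the grating period, p_e the effective photo-elastic coefficient (about 0.22 for silica fibre), \varepsilon the axial strain, and \alpha and \xi the thermal expansion and thermo-optic coefficients; at constant temperature the peak shift is essentially proportional to the applied strain.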

  18. Assessment of the National Combustion Code

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey; Iannetti, Anthony; Shih, Tsan-Hsing

    2007-01-01

    The advancements made during the last decade in the areas of combustion modeling, numerical simulation, and computing platforms have greatly facilitated the use of CFD-based tools in the development of combustion technology. Further development of verification, validation and uncertainty quantification will have a profound impact on the reliability and utility of these CFD-based tools. The objectives of the present effort are to establish a baseline for the National Combustion Code (NCC) relative to experimental data, as well as to document current capabilities and identify gaps for further improvements.

  19. Peer-to-peer architectures for exascale computing : LDRD final report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vorobeychik, Yevgeniy; Mayo, Jackson R.; Minnich, Ronald G.

    2010-09-01

    The goal of this research was to investigate the potential for employing dynamic, decentralized software architectures to achieve reliability in future high-performance computing platforms. These architectures, inspired by peer-to-peer networks such as botnets that already scale to millions of unreliable nodes, hold promise for enabling scientific applications to run usefully on next-generation exascale platforms (~10^18 operations per second). Traditional parallel programming techniques suffer rapid deterioration of performance scaling with growing platform size, as the work of coping with increasingly frequent failures dominates over useful computation. Our studies suggest that new architectures, in which failures are treated as ubiquitous and their effects are considered as simply another controllable source of error in a scientific computation, can remove such obstacles to exascale computing for certain applications. We have developed a simulation framework, as well as a preliminary implementation in a large-scale emulation environment, for exploration of these 'fault-oblivious computing' approaches. High-performance computing (HPC) faces a fundamental problem of increasing total component failure rates due to increasing system sizes, which threaten to degrade system reliability to an unusable level by the time the exascale range is reached (~10^18 operations per second, requiring of order millions of processors). As computer scientists seek a way to scale system software for next-generation exascale machines, it is worth considering peer-to-peer (P2P) architectures that are already capable of supporting 10^6-10^7 unreliable nodes. Exascale platforms will require a different way of looking at systems and software because the machine will likely not be available in its entirety for a meaningful execution time. Realistic estimates of failure rates range from a few times per day to more than once per hour for these platforms. P2P architectures give us a starting point for crafting applications and system software for exascale. In the context of the Internet, P2P applications (e.g., file sharing, botnets) have already solved this problem for 10^6-10^7 nodes. Usually based on a fractal distributed hash table structure, these systems have proven robust in practice to constant and unpredictable outages, failures, and even subversion. For example, a recent estimate of botnet turnover (i.e., the number of machines leaving and joining) is about 11% per week. Nonetheless, P2P networks remain effective despite these failures: The Conficker botnet has grown to ~5 x 10^6 peers. Unlike today's system software and applications, those for next-generation exascale machines cannot assume a static structure and, to be scalable over millions of nodes, must be decentralized. P2P architectures achieve both, and provide a promising model for 'fault-oblivious computing'. This project aimed to study the dynamics of P2P networks in the context of a design for exascale systems and applications. Having no single point of failure, the most successful P2P architectures are adaptive and self-organizing. While there has been some previous work applying P2P to message passing, little attention has been previously paid to the tightly coupled exascale domain. Typically, the per-node footprint of P2P systems is small, making them ideal for HPC use. The implementation on each peer node cooperates en masse to 'heal' disruptions rather than relying on a controlling 'master' node. Understanding this cooperative behavior from a complex systems viewpoint is essential to predicting useful environments for the inextricably unreliable exascale platforms of the future. We sought to obtain theoretical insight into the stability and large-scale behavior of candidate architectures, and to work toward leveraging Sandia's Emulytics platform to test promising candidates in a realistic (ultimately >= 10^7 nodes) setting. Our primary example applications are drawn from linear algebra: a Jacobi relaxation solver for the heat equation, and the closely related technique of value iteration in optimization. We aimed to apply P2P concepts in designing implementations capable of surviving an unreliable machine of 10^6 nodes.
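    A toy sketch of the 'fault-oblivious' flavour of the Jacobi relaxation example mentioned above: updates belonging to currently failed peers are simply skipped and treated as extra numerical error rather than halting the computation. This is an assumed illustration, not the project's code.

        import numpy as np

        def jacobi_sweep(u, f, h, alive):
            # One Jacobi relaxation sweep for the 2-D Poisson/heat problem.
            # 'alive' marks which (peer-owned) interior points get updated this sweep;
            # points held by failed peers keep their old value and are corrected later.
            new = u.copy()
            stencil = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                              + u[1:-1, :-2] + u[1:-1, 2:]
                              - h * h * f[1:-1, 1:-1])
            new[1:-1, 1:-1] = np.where(alive[1:-1, 1:-1], stencil, u[1:-1, 1:-1])
            return new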

  20. Software life cycle methodologies and environments

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest

    1991-01-01

    Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety, and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer-Aided Software Engineering (CASE) technology in the form of environments, such as the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, an intelligent user interface for cost avoidance in setting up operational computer runs, the Framework programmable platform for defining process and software development work flow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada language for developing expert systems; and methodologies, such as a method for developing fault-tolerant, distributed systems and a method for developing systems for common-sense reasoning and for solving expert system problems when only approximate truths are known.

  1. Efficient digital implementation of a conductance-based globus pallidus neuron and the dynamics analysis

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Wei, Xile; Deng, Bin; Liu, Chen; Li, Huiyan; Wang, Jiang

    2018-03-01

    Balancing the biological plausibility of dynamical activities against computational efficiency is one of the challenging problems in computational neuroscience and neural system engineering. This paper proposes a set of efficient methods for the hardware realization of conductance-based neuron models with the relevant dynamics, targeting the reproduction of biological behaviors with a low-cost implementation on a digital programmable platform; the methods can be applied to a wide range of conductance-based neuron models. Modified globus pallidus (GP) neuron models for efficient hardware implementation are presented to reproduce reliable pallidal dynamics, which decode the information of the basal ganglia and regulate movement-disorder-related voluntary activities. Implementation results on a field-programmable gate array (FPGA) demonstrate that the proposed techniques and models reduce the resource cost significantly and reproduce the biological dynamics accurately. In addition, biological behaviors under weak network coupling are explored on the proposed platform, and a theoretical analysis is made to investigate the biological characteristics of the structured pallidal oscillator and network. The implementation techniques provide an essential step towards large-scale neural networks for exploring dynamical mechanisms in real time. Furthermore, the proposed methodology makes the FPGA-based system a powerful platform for the investigation of neurodegenerative diseases and the real-time control of bio-inspired neuro-robotics.

  2. OWLS as platform technology in OPTOS satellite

    NASA Astrophysics Data System (ADS)

    Rivas Abalo, J.; Martínez Oter, J.; Arruego Rodríguez, I.; Martín-Ortega Rico, A.; de Mingo Martín, J. R.; Jiménez Martín, J. J.; Martín Vodopivec, B.; Rodríguez Bustabad, S.; Guerrero Padrón, H.

    2017-12-01

    The aim of this work is to present the Optical Wireless Link to intra-Spacecraft Communications (OWLS) technology as a platform technology for space missions, and more specifically its use within the on-board communication system of the OPTOS satellite. OWLS technology was proposed by the Instituto Nacional de Técnica Aeroespacial (INTA) at the end of the 1990s and developed over 10 years through a number of ground demonstrations, technological developments and in-orbit experiments. Its main benefits are mass reduction, flexibility, and simplification of the Assembly, Integration and Test phases. The final step was to go from an experimental technology to a platform one. This step was carried out in the OPTOS satellite, which makes use of optical wireless links in a distributed network based on an OWLS implementation of the CAN bus. OPTOS is the first fully wireless satellite. It is based on the triple configuration (3U) of the popular CubeSat standard, and was completely built at INTA. It was conceived to provide a fast-development, low-cost, yet reliable platform for the Spanish scientific community, acting as a test bed for space-borne science and technology. OPTOS presents a distributed OBDH architecture in which all the satellite's subsystems and payloads incorporate a small Distributed On-Board Computer (OBC) Terminal (DOT). All DOTs (7 in total) communicate with each other by means of the OWLS-CAN, which enables full data-sharing capabilities. This collaboration allows them to perform all tasks that would normally be carried out by a centralized On-Board Computer.

  3. An image processing and analysis tool for identifying and analysing complex plant root systems in 3D soil using non-destructive analysis: Root1.

    PubMed

    Flavel, Richard J; Guppy, Chris N; Rabbi, Sheikh M R; Young, Iain M

    2017-01-01

    The objective of this study was to develop a flexible and free image processing and analysis solution, based on the public domain ImageJ platform, for the segmentation and analysis of complex biological plant root systems in soil from X-ray tomography 3D images. Contrasting root architectures from wheat, barley and chickpea root systems were grown in soil and scanned using a high-resolution micro-tomography system. A macro (Root1) was developed that reliably identified complex root systems with good to high accuracy (10% overestimation for chickpea, 1% underestimation for wheat, 8% underestimation for barley) and provided analysis of root length and angle. In-built flexibility allows the user to (a) amend any aspect of the macro to account for specific preferences, and (b) take account of the computational limitations of the platform. The platform is free, flexible and accurate in analysing root system metrics.

  4. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be reduced significantly to clinically feasible levels without the sizable investment of a local high-performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute-long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and become more accessible, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
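    The reported scaling (about 53 min on one node down to 3.11 min on 20 nodes) can be checked against the inverse power model mentioned above with a short fit; the intermediate timings below are invented placeholders, only the two end points come from the abstract.

        import numpy as np
        from scipy.optimize import curve_fit

        nodes = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
        runtime_min = np.array([53.0, 27.5, 11.8, 6.2, 3.11])   # only 1 and 20 nodes are reported

        power_model = lambda n, a, b: a * n ** (-b)   # T(n) = a * n^-b
        (a, b), _ = curve_fit(power_model, nodes, runtime_min, p0=(53.0, 1.0))
        print(f"T(n) ~= {a:.1f} * n^-{b:.2f} minutes")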

  5. Reliability of center of pressure measures within and between sessions in individuals post-stroke and healthy controls.

    PubMed

    Gray, Vicki L; Ivanova, Tanya D; Garland, S Jayne

    2014-01-01

    Knowing the reliability of the center of pressure (COP) is important for interpreting balance deficits post-stroke, especially when the balance deficits can necessitate the use of short-duration trials. The novel aspect of this reliability study was to examine center of pressure measures using two adjacent force platforms, between and within sessions, in individuals post-stroke and in controls. After stroke, it is important to understand the contribution of the paretic and non-paretic leg to the motor control of standing balance. Because there is a considerable body of knowledge on COP reliability on a single platform, we chose to examine reliability using two adjacent platforms, which has not been examined previously in stroke. Twenty participants post-stroke and 22 controls performed an arm raise, load drop and quiet stance balance task while standing on two adjacent force platforms, on two separate days. Intraclass correlation coefficients (ICC2,1) and percentage standard error of measurement (SEM%) were calculated for COP velocity, ellipse area, anterior-posterior (AP) displacement, and medial-lateral (ML) displacement. Between sessions, COP velocity was the most reliable, with high ICCs and low SEM% across groups and tasks, whereas ellipse area was less reliable, with low ICCs across groups and tasks. COP measures were less reliable during the arm raise than the load drop post-stroke. Within-session reliability was high for COP velocity and ML displacement, requiring no more than six trials across tasks. COP velocity was the most reliable measure, with high ICCs between sessions, and this high reliability was achieved with fewer trials in both groups in a single session. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. S-Genius, a universal software platform with versatile inverse problem resolution for scatterometry

    NASA Astrophysics Data System (ADS)

    Fuard, David; Troscompt, Nicolas; El Kalyoubi, Ismael; Soulan, Sébastien; Besacier, Maxime

    2013-05-01

    S-Genius is a new universal scatterometry platform, which gathers all the LTM-CNRS know-how regarding rigorous electromagnetic computation and several inverse problem solver solutions. This software platform is built to be a user-friendly, light, swift, accurate, user-oriented scatterometry tool, compatible with any ellipsometric measurements to fit and any type of pattern. It aims to combine a set of inverse problem solver capabilities — via adapted Levenberg-Marquardt optimization, Kriging, and neural network solutions — that greatly improve the reliability and the velocity of the solution determination. Furthermore, as the model solution is mainly sensitive to material optical properties, S-Genius may be coupled with an innovative determination of material refractive indices. This paper focuses in more detail on the modified Levenberg-Marquardt optimization, one of the indirect-method solvers built in parallel with the overall S-Genius software. This modified Levenberg-Marquardt optimization corresponds to a Newton algorithm with a damping parameter adapted to the definition domains of the optimized parameters. Currently, S-Genius is technically ready for scientific collaboration: Python-powered, multi-platform (Windows/Linux/macOS), multi-core, ready for 2D (infinite features along the direction perpendicular to the incident plane), conical, and 3D feature computation, compatible with all kinds of input data from any possible ellipsometer (angle- or wavelength-resolved) or reflectometer, and widely used in our laboratory for resist trimming studies, etching feature characterization (such as complex stacks) or nano-imprint lithography measurements, for instance. The work on the Kriging solver, the neural network solver and material refractive index determination has been completed (or is about to be) by other LTM members and is about to be integrated into the S-Genius platform.
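    The damped Gauss-Newton (Levenberg-Marquardt) step described above, kept inside the parameters' definition domains, can be sketched as follows; this is a generic formulation under stated assumptions, not the S-Genius solver itself.

        import numpy as np

        def lm_step(residual, jacobian, p, lam, lower, upper):
            # residual(p): measured minus simulated ellipsometric signature (vector)
            # jacobian(p): its Jacobian with respect to the model parameters
            r = residual(p)
            J = jacobian(p)
            JtJ = J.T @ J
            step = np.linalg.solve(JtJ + lam * np.diag(np.diag(JtJ)), -J.T @ r)
            # keep the candidate inside the physical definition domain of each parameter
            return np.clip(p + step, lower, upper)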

  7. Grid Task Execution

    NASA Technical Reports Server (NTRS)

    Hu, Chaumin

    2007-01-01

    IPG Execution Service is a framework that reliably executes complex jobs on a computational grid, and is part of the IPG service architecture designed to support location-independent computing. The new grid service enables users to describe the platform on which they need a job to run, which allows the service to locate the desired platform, configure it for the required application, and execute the job. After a job is submitted, users can monitor it through periodic notifications, or through queries. Each job consists of a set of tasks that performs actions such as executing applications and managing data. Each task is executed based on a starting condition that is an expression of the states of other tasks. This formulation allows tasks to be executed in parallel, and also allows a user to specify tasks to execute when other tasks succeed, fail, or are canceled. The two core components of the Execution Service are the Task Database, which stores tasks that have been submitted for execution, and the Task Manager, which executes tasks in the proper order, based on the user-specified starting conditions, and avoids overloading local and remote resources while executing tasks.
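    A compact sketch of the starting-condition idea described above, in which each task becomes runnable when a Boolean expression over the states of other tasks holds; the task names and fields are illustrative, not the Execution Service API.

        tasks = {
            "stage_input": {"state": "pending", "ready": lambda s: True},
            "run_app":     {"state": "pending",
                            "ready": lambda s: s["stage_input"] == "succeeded"},
            "cleanup":     {"state": "pending",
                            "ready": lambda s: s["run_app"] in ("succeeded", "failed", "canceled")},
        }

        def runnable(tasks):
            # A task may start once its starting condition over the other tasks' states is true.
            states = {name: t["state"] for name, t in tasks.items()}
            return [name for name, t in tasks.items()
                    if t["state"] == "pending" and t["ready"](states)]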

  8. Development of jacket platform tsunami risk rating system in waters offshore North Borneo

    NASA Astrophysics Data System (ADS)

    Lee, H. E.; Liew, M. S.; Mardi, N. H.; Na, K. L.; Toloue, Iraj; Wong, S. K.

    2016-09-01

    This work details the simulation of tsunami waves generated by seaquakes in the Manila Trench and their effect on fixed oil and gas jacket platforms in waters offshore North Borneo. For this study, a four-leg living quarter jacket platform located in a water depth of 63 m is modelled in SACS v5.3. Malaysia has traditionally been perceived to be safe from the hazards of earthquakes and tsunamis. Local design practices tend to neglect tsunami waves and include no such provisions. In 2004, a 9.3 Mw seaquake occurred off the northwest coast of Aceh, which generated tsunami waves that caused destruction in Malaysia totalling US$25 million and 68 deaths. This event prompted an awareness of the need to study the reliability of fixed offshore platforms scattered throughout Malaysian waters. In this paper, we present a review of research on the seismicity of the Manila Trench, which is perceived to pose a high risk to Southeast Asia. From the tsunami numerical model TUNA-M2, we extract computer-simulated tsunami waves at prescribed grid points in the vicinity of the platforms in the region. Using the wave heights as input, we simulate the tsunami loading in SACS v5.3, a structural analysis software package for offshore platforms that is widely accepted by the industry. We employ nonlinear solitary wave theory in our tsunami loading calculations for the platforms, and formulate a platform-specific risk quantification system. We then perform an intensive structural sensitivity analysis and derive a corresponding platform-specific risk rating model.
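    For reference, the first-order solitary wave commonly used for such loading calculations has the surface profile and celerity below (standard expressions assumed here, not reproduced from the paper):

        \eta(x,t) = H \,\mathrm{sech}^2\!\left[\sqrt{\frac{3H}{4d^3}}\,(x - ct)\right],
        \qquad c = \sqrt{g\,(d + H)}

    where H is the wave height, d the water depth and g the gravitational acceleration; the kinematics derived from this profile drive the wave loads applied to the jacket members.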

  9. Computer-assisted upper extremity training using interactive biking exercise (iBikE) platform.

    PubMed

    Jeong, In Cheol; Finkelstein, Joseph

    2012-01-01

    Upper extremity exercise training has been shown to improve clinical outcomes in different chronic health conditions. Arm-operated bicycles are frequently used to facilitate upper extremity training; however, effective use of these devices in patients' homes is hampered by a lack of remote connectivity with the clinical rehabilitation team, an inability to monitor exercise progress in real time using simple graphical representation, and the absence of an alert system that would prevent exertion levels from exceeding those approved by the clinical rehabilitation team. We developed an interactive biking exercise (iBikE) platform aimed at addressing these limitations. The platform uses a miniature wireless 3-axis accelerometer mounted on the patient's wrist that transmits the cycling acceleration data to a laptop. The laptop screen presents an exercise dashboard to the patient in real time, allowing easy graphical visualization of exercise progress and presentation of exercise parameters in relation to prescribed targets. The iBikE platform is programmed to alert the patient when exercise intensity exceeds the levels recommended by the patient's care provider. The iBikE platform has been tested in 7 healthy volunteers (age range: 26-50 years) and shown to reliably reflect exercise progress and to generate alerts at preset levels. Implementation of remote connectivity with the patient rehabilitation team is warranted for future extension and evaluation efforts.

  10. A novel approach for discovering condition-specific correlations of gene expressions within biological pathways by using cloud computing technology.

    PubMed

    Chang, Tzu-Hao; Wu, Shih-Lin; Wang, Wei-Jen; Horng, Jorng-Tzong; Chang, Cheng-Wei

    2014-01-01

    Microarrays are widely used to assess gene expressions. Most microarray studies focus primarily on identifying differential gene expressions between conditions (e.g., cancer versus normal cells), for discovering the major factors that cause diseases. Because previous studies have not identified the correlations of differential gene expression between conditions, crucial but abnormal regulations that cause diseases might have been disregarded. This paper proposes an approach for discovering the condition-specific correlations of gene expressions within biological pathways. Because analyzing gene expression correlations is time consuming, an Apache Hadoop cloud computing platform was implemented. Three microarray data sets of breast cancer were collected from the Gene Expression Omnibus, and pathway information from the Kyoto Encyclopedia of Genes and Genomes was applied for discovering meaningful biological correlations. The results showed that adopting the Hadoop platform considerably decreased the computation time. Several correlations of differential gene expressions were discovered between the relapse and nonrelapse breast cancer samples, and most of them were involved in cancer regulation and cancer-related pathways. The results showed that breast cancer recurrence might be highly associated with the abnormal regulations of these gene pairs, rather than with their individual expression levels. The proposed method was computationally efficient and reliable, and stable results were obtained when different data sets were used. The proposed method is effective in identifying meaningful biological regulation patterns between conditions.
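    The core quantity being distributed over Hadoop here, the correlation of a gene pair computed separately per condition and then compared, can be written compactly as below; this single-machine sketch is an assumption about the computation, not the authors' MapReduce code.

        import numpy as np

        def per_condition_correlation(gene_a, gene_b, labels):
            # gene_a, gene_b: expression vectors across samples; labels: condition of each sample
            gene_a = np.asarray(gene_a, dtype=float)
            gene_b = np.asarray(gene_b, dtype=float)
            labels = np.asarray(labels)
            return {c: np.corrcoef(gene_a[labels == c], gene_b[labels == c])[0, 1]
                    for c in np.unique(labels)}

        # A gene pair is condition-specific when, for example, the relapse and
        # non-relapse correlations differ strongly in magnitude or sign.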

  11. A self-assembled nanoscale robotic arm controlled by electric fields

    NASA Astrophysics Data System (ADS)

    Kopperger, Enzo; List, Jonathan; Madhira, Sushi; Rothfischer, Florian; Lamb, Don C.; Simmel, Friedrich C.

    2018-01-01

    The use of dynamic, self-assembled DNA nanostructures in the context of nanorobotics requires fast and reliable actuation mechanisms. We therefore created a 55-nanometer–by–55-nanometer DNA-based molecular platform with an integrated robotic arm of length 25 nanometers, which can be extended to more than 400 nanometers and actuated with externally applied electrical fields. Precise, computer-controlled switching of the arm between arbitrary positions on the platform can be achieved within milliseconds, as demonstrated with single-pair Förster resonance energy transfer experiments and fluorescence microscopy. The arm can be used for electrically driven transport of molecules or nanoparticles over tens of nanometers, which is useful for the control of photonic and plasmonic processes. Application of piconewton forces by the robot arm is demonstrated in force-induced DNA duplex melting experiments.

  12. A Reliability-Based Particle Filter for Humanoid Robot Self-Localization in RoboCup Standard Platform League

    PubMed Central

    Sánchez, Eduardo Munera; Alcobendas, Manuel Muñoz; Noguera, Juan Fco. Blanes; Gilabert, Ginés Benet; Simó Ten, José E.

    2013-01-01

    This paper deals with the problem of humanoid robot localization and proposes a new method for position estimation that has been developed for the RoboCup Standard Platform League environment. Firstly, a complete vision system has been implemented in the Nao robot platform that enables the detection of relevant field markers. The detection of field markers provides some estimation of distances for the current robot position. To reduce errors in these distance measurements, extrinsic and intrinsic camera calibration procedures have been developed and described. To validate the localization algorithm, experiments covering many of the typical situations that arise during RoboCup games have been developed: ranging from degradation in position estimation to total loss of position (due to falls, ‘kidnapped robot’, or penalization). The self-localization method developed is based on the classical particle filter algorithm. The main contribution of this work is a new particle selection strategy. Our approach reduces the CPU computing time required for each iteration and so eases the limited resource availability problem that is common in robot platforms such as Nao. The experimental results show the quality of the new algorithm in terms of localization and CPU time consumption. PMID:24193098
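    For context, one iteration of the classical particle filter that the method builds on is sketched below (predict, weight by the field-marker observation likelihood, resample); the paper's new particle selection strategy is not reproduced here.

        import numpy as np

        def particle_filter_step(particles, weights, motion, likelihood, rng):
            # particles: (N, 3) array of candidate poses (x, y, heading); weights: (N,)
            particles = motion(particles, rng)              # predict with odometry plus noise
            weights = weights * likelihood(particles)       # weight with observed field markers
            weights = weights / weights.sum()
            idx = rng.choice(len(particles), size=len(particles), p=weights)   # resample
            return particles[idx], np.full(len(particles), 1.0 / len(particles))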

  13. MSD-MAP: A Network-Based Systems Biology Platform for Predicting Disease-Metabolite Links.

    PubMed

    Wathieu, Henri; Issa, Naiem T; Mohandoss, Manisha; Byers, Stephen W; Dakshanamurthy, Sivanesan

    2017-01-01

    Cancer-associated metabolites result from cell-wide mechanisms of dysregulation. The field of metabolomics has sought to identify these aberrant metabolites as disease biomarkers, clues to understanding disease mechanisms, or even as therapeutic agents. This study was undertaken to reliably predict metabolites associated with colorectal, esophageal, and prostate cancers. Metabolite and disease biological action networks were compared in a computational platform called MSD-MAP (Multi Scale Disease-Metabolite Association Platform). Using differential gene expression analysis with patient-based RNAseq data from The Cancer Genome Atlas, genes up- or down-regulated in cancer compared to normal tissue were identified. Relational databases were used to map biological entities, including pathways, functions, and interacting proteins, to those differential disease genes. Similar relational maps were built for metabolites, stemming from known and in silico predicted metabolite-protein associations. The hypergeometric test was used to find statistically significant relationships between disease and metabolite biological signatures at each tier, and metabolites were assessed for multi-scale association with each cancer. Metabolite networks were also directly associated with various other diseases using a disease functional perturbation database. Our platform recapitulated metabolite-disease links that have been empirically verified in the scientific literature, with network-based mapping of jointly-associated biological activity also matching known disease mechanisms. This was true for colorectal, esophageal, and prostate cancers, using metabolite action networks stemming from both predicted and known functional protein associations. By employing systems biology concepts, MSD-MAP reliably predicted known cancer-metabolite links, and may serve as a predictive tool to streamline conventional metabolomic profiling methodologies. Copyright © Bentham Science Publishers.
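    The hypergeometric over-representation test named above, applied to the overlap between a disease signature and a metabolite signature at one tier, reduces to a one-liner with SciPy; the set names below are illustrative assumptions, not the MSD-MAP pipeline.

        from scipy.stats import hypergeom

        def overlap_pvalue(universe, disease_set, metabolite_set):
            # Probability of observing at least the actual overlap by chance.
            N = len(universe)                                  # annotated entities at this tier
            K = len(disease_set & universe)                    # tied to the disease signature
            n = len(metabolite_set & universe)                 # tied to the metabolite signature
            k = len(disease_set & metabolite_set & universe)   # shared entities
            return hypergeom.sf(k - 1, N, K, n)                # P(X >= k)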

  14. Free Global Dsm Assessment on Large Scale Areas Exploiting the Potentialities of the Innovative Google Earth Engine Platform

    NASA Astrophysics Data System (ADS)

    Nascetti, A.; Di Rita, M.; Ravanelli, R.; Amicuzi, M.; Esposito, S.; Crespi, M.

    2017-05-01

    The high-performance cloud-computing platform Google Earth Engine has been developed for global-scale analysis based on Earth observation data. In particular, in this work, the geometric accuracy of the two most widely used nearly-global free DSMs (SRTM and ASTER) has been evaluated over the territories of four American states (Colorado, Michigan, Nevada, Utah) and one Italian region (Trentino Alto-Adige, Northern Italy), exploiting the potential of this platform. These are large areas characterized by different terrain morphology, land cover and slopes. The assessment has been performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. The DSMs' accuracy has been evaluated through the computation of standard statistical parameters, both at global scale (considering the whole state/region) and as a function of the terrain morphology using several slope classes. The geometric accuracy in terms of standard deviation and NMAD ranges, for SRTM, from 2-3 meters in the first slope class to about 45 meters in the last one, whereas for ASTER the values range from 5-6 to 30 meters. In general, the performed analysis shows a better accuracy for SRTM in the flat areas, whereas the ASTER GDEM is more reliable in the steep areas, where the slopes increase. These preliminary results highlight the GEE potential for performing DSM assessment on a global scale.
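    For reference, the robust NMAD statistic used above alongside the standard deviation can be computed from the DSM height differences as in this short sketch (a standard definition, assumed here rather than quoted from the paper).

        import numpy as np

        def nmad(height_diff):
            # Normalized median absolute deviation of DSM-minus-reference height differences.
            # The factor 1.4826 makes NMAD comparable to sigma for normally distributed errors.
            dh = np.asarray(height_diff, dtype=float)
            return 1.4826 * np.median(np.abs(dh - np.median(dh)))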

  15. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    PubMed

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

    The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the biological data amount is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. At last we discuss the open issues in big biological data analytics.

  16. High-performance computing for airborne applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  17. The development and evaluation of a novel repurposing of a peripheral gaming device for the acquisition of forces applied to a hydraulic treatment plinth.

    PubMed

    Cooper, Darren; Bevins, Joe; Corbett, Mark

    2018-01-13

    This technical note details the stages taken to create an instrumented hydraulic treatment plinth for the measurement of applied forces in the vertical axis. The modification used a widely available low-cost peripheral gaming device and required only basic construction and computer skills. The instrumented treatment plinth was validated against a laboratory-grade force platform across a range of applied masses from 0.5-15 kg, mock Gr I-IV vertebral mobilisations and a dynamic response test. Intraclass correlation coefficients demonstrated poor reliability (0.46) for the lowest mass of 0.5 kg, improving to excellent for larger masses up to 15 kg; excellent to good reliability (0.97-0.86) for the mock mobilisations; and moderate reliability (0.51) for the dynamic response test. The study demonstrates how a cheap peripheral gaming device can be repurposed so that forces applied to a hydraulic treatment plinth can be collected reliably when applied in a clinically reasoned manner. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Towards New Metrics for High-Performance Computing Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Ashraf, Rizwan A; Engelmann, Christian

    Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.

  19. Toward a web-based real-time radiation treatment planning system in a cloud computing environment.

    PubMed

    Na, Yong Hum; Suh, Tae-Suk; Kapp, Daniel S; Xing, Lei

    2013-09-21

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ-specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (an m2.xlarge instance containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an 'on-demand' basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer's constraints. The output plan file from the EC2 is sent to the Simple Storage Service. Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm²) from the Varian TrueBeam™ STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are identical to PC-based IMRT and VMAT plans, confirming the reliability of the cloud computing platform. This cloud computing infrastructure has been established for radiation treatment planning. It substantially improves the speed of inverse planning and makes future on-treatment adaptive re-planning possible.

  20. Toward a web-based real-time radiation treatment planning system in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Hum Na, Yong; Suh, Tae-Suk; Kapp, Daniel S.; Xing, Lei

    2013-09-01

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ-specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (an m2.xlarge instance containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an 'on-demand' basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer's constraints. The output plan file from the EC2 is sent to the Simple Storage Service. Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm²) from the Varian TrueBeam™ STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are identical to PC-based IMRT and VMAT plans, confirming the reliability of the cloud computing platform. This cloud computing infrastructure has been established for radiation treatment planning. It substantially improves the speed of inverse planning and makes future on-treatment adaptive re-planning possible.

  1. Discrete event command and control for networked teams with multiple missions

    NASA Astrophysics Data System (ADS)

    Lewis, Frank L.; Hudas, Greg R.; Pang, Chee Khiang; Middleton, Matthew B.; McMurrough, Christopher

    2009-05-01

    During mission execution in military applications, the TRADOC Pamphlet 525-66 Battle Command and Battle Space Awareness capabilities prescribe expectations that networked teams will perform in a reliable manner under changing mission requirements, varying resource availability and reliability, and resource faults. In this paper, a Command and Control (C2) structure is presented that allows for computer-aided execution of the networked team decision-making process, control of force resources, shared resource dispatching, and adaptability to change based on battlefield conditions. A mathematically justified networked computing environment, called the Discrete Event Control (DEC) Framework, is provided. DEC provides the logical connectivity among all team participants, including mission planners, field commanders, war-fighters, and robotic platforms. The proposed data management tools are developed and demonstrated in a simulation study and in an implementation on a distributed wireless sensor network. The results show that the tasks of multiple missions are correctly sequenced in real time, and that shared resources are suitably assigned to competing tasks under dynamically changing conditions without conflicts and bottlenecks.
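
    The matrix formulation behind such a framework can be illustrated with a toy dispatcher: Boolean matrices encode which tasks require which prerequisite tasks and which shared resources, and a task is enabled only when both sets of conditions are met. The matrices and task layout below are invented for illustration and are not the published DEC formulation.

    ```python
    # Toy sketch of matrix-based discrete-event dispatching: a task may fire when all
    # of its prerequisite tasks are complete and all of its required resources are free.
    # Matrix names and sizes are illustrative, not the published DEC formulation.
    import numpy as np

    # rows = tasks, columns = prerequisite tasks / required resources
    Fv = np.array([[0, 0, 0],          # task0 has no prerequisites
                   [1, 0, 0],          # task1 needs task0
                   [1, 1, 0]], bool)   # task2 needs task0 and task1
    Fr = np.array([[1, 0],             # task0 needs resource0
                   [0, 1],             # task1 needs resource1
                   [1, 1]], bool)      # task2 needs both resources

    completed = np.array([True, False, False])   # task0 already done
    available = np.array([True, True])           # both resources free

    def enabled_tasks(completed, available):
        """Return the Boolean vector of tasks whose preconditions are all satisfied."""
        prereqs_met = ~np.any(Fv & ~completed, axis=1)
        resources_ok = ~np.any(Fr & ~available, axis=1)
        return prereqs_met & resources_ok & ~completed

    print(enabled_tasks(completed, available))   # -> [False  True False]: only task1 can start
    ```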

  2. Reducing audio stimulus presentation latencies across studies, laboratories, and hardware and operating system configurations.

    PubMed

    Babjack, Destiny L; Cernicky, Brandon; Sobotka, Andrew J; Basler, Lee; Struthers, Devon; Kisic, Richard; Barone, Kimberly; Zuccolotto, Anthony P

    2015-09-01

    Using differing computer platforms and audio output devices to deliver audio stimuli often introduces (1) substantial variability across labs and (2) variable time between the intended and actual sound delivery (the sound onset latency). Fast, accurate audio onset latencies are particularly important when audio stimuli need to be delivered precisely as part of studies that depend on accurate timing (e.g., electroencephalographic, event-related potential, or multimodal studies), or in multisite studies in which standardization and strict control over the computer platforms used is not feasible. This research describes the variability introduced by using differing configurations and introduces a novel approach to minimizing audio sound latency and variability. A stimulus presentation and latency assessment approach is presented using E-Prime and Chronos (a new multifunction, USB-based data presentation and collection device). The present approach reliably delivers audio stimuli with low latencies that vary by ≤1 ms, independent of hardware and Windows operating system (OS)/driver combinations. The Chronos audio subsystem adopts a buffering, aborting, querying, and remixing approach to the delivery of audio, to achieve a consistent 1-ms sound onset latency for single-sound delivery, and precise delivery of multiple sounds that achieves standard deviations of 1/10th of a millisecond without the use of advanced scripting. Chronos's sound onset latencies are small, reliable, and consistent across systems. Testing of standard audio delivery devices and configurations highlights the need for careful attention to consistency between labs, experiments, and multiple study sites in their hardware choices, OS selections, and adoption of audio delivery systems designed to sidestep the audio latency variability issue.

  3. Model-driven analysis of experimentally determined growth phenotypes for 465 yeast gene deletion mutants under 16 different conditions

    PubMed Central

    Snitkin, Evan S; Dudley, Aimée M; Janse, Daniel M; Wong, Kaisheen; Church, George M; Segrè, Daniel

    2008-01-01

    Background Understanding the response of complex biochemical networks to genetic perturbations and environmental variability is a fundamental challenge in biology. Integration of high-throughput experimental assays and genome-scale computational methods is likely to produce insight otherwise unreachable, but specific examples of such integration have only begun to be explored. Results In this study, we measured growth phenotypes of 465 Saccharomyces cerevisiae gene deletion mutants under 16 metabolically relevant conditions and integrated them with the corresponding flux balance model predictions. We first used discordance between experimental results and model predictions to guide a stage of experimental refinement, which resulted in a significant improvement in the quality of the experimental data. Next, we used discordance still present in the refined experimental data to assess the reliability of yeast metabolism models under different conditions. In addition to estimating predictive capacity based on growth phenotypes, we sought to explain these discordances by examining predicted flux distributions visualized through a new, freely available platform. This analysis led to insight into the glycerol utilization pathway and the potential effects of metabolic shortcuts on model results. Finally, we used model predictions and experimental data to discriminate between alternative raffinose catabolism routes. Conclusions Our study demonstrates how a new level of integration between high throughput measurements and flux balance model predictions can improve understanding of both experimental and computational results. The added value of a joint analysis is a more reliable platform for specific testing of biological hypotheses, such as the catabolic routes of different carbon sources. PMID:18808699
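
    The model-driven side of such an analysis can be illustrated with a toy flux balance calculation: maximize growth subject to steady-state mass balance, then close one reaction to mimic a gene deletion and compare the predicted growth change. The two-metabolite network below is made up; the study used genome-scale yeast metabolic models.

    ```python
    # Toy flux-balance analysis: maximize biomass flux subject to steady-state mass
    # balance (S v = 0), then close one reaction to mimic a gene deletion. The
    # two-metabolite network is made up; the study used genome-scale yeast models.
    import numpy as np
    from scipy.optimize import linprog

    S = np.array([[1, -1,  0],    # metabolite A: made by uptake, used by conversion
                  [0,  1, -1]])   # metabolite B: made by conversion, used by biomass
    c = np.array([0.0, 0.0, -1.0])             # linprog minimizes, so negate biomass
    bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capacity capped at 10 units

    def max_growth(bounds):
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        return res.x[2]                        # optimal biomass flux

    print("wild type growth:", max_growth(bounds))          # expect 10.0
    knockout = list(bounds)
    knockout[1] = (0, 0)                                     # "delete" the conversion gene
    print("knockout growth :", max_growth(knockout))         # expect 0.0
    ```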

  4. Design of a Parachute Canopy Instrumentation Platform

    NASA Technical Reports Server (NTRS)

    Alshahin, Wahab M.; Daum, Jared S.; Holley, James J.; Litteken, Douglas A.; Vandewalle, Michael T.

    2015-01-01

    This paper discusses the current technology available to design and develop a reliable and compact instrumentation platform for parachute system data collection and command actuation. Wireless communication with a parachute canopy will be an advancement to the state of the art of parachute design, development, and testing. Embedded instrumentation of the parachute canopy will provide reefing line tension, skirt position data, parachute health monitoring, and other telemetry, further validating computer models and giving engineering insight into parachute dynamics for both Earth and Mars entry that is currently unavailable. This will allow for more robust designs which are more optimally designed in terms of structural loading, less susceptible to adverse dynamics, and may eventually pave the way to currently unattainable advanced concepts of operations. The development of this technology has dual use potential for a variety of other applications including inflatable habitats, aerodynamic decelerators, heat shields, and other high stress environments.

  5. Telescope Array Control System Based on Wireless Touch Screen Platform

    NASA Astrophysics Data System (ADS)

    Fu, Xia-nan; Huang, Lei; Wei, Jian-yan

    2017-10-01

    The Ground-based Wide Angle Cameras (GWAC) are the ground-based observational facility for the SVOM (Space Variable Object Monitor) astronomical satellite of Sino-French cooperation, and Mini-GWAC is the pathfinder and supplement of GWAC. In the context of the Mini-GWAC telescope array, this paper introduces the design and implementation of a telescope array control system based on a wireless touch screen platform. We describe the development and implementation of the system in detail in terms of the control system principle, system hardware structure, software design, experiments, and testing. The system uses a touch-control PC running Windows CE as the upper computer, with a wireless transceiver module and a PLC (Programmable Logic Controller) as the system kernel. It has the advantages of low cost, reliable data transmission, and simple operation, and it has been applied successfully to Mini-GWAC.

  6. Simulating the dynamic behavior of a vertical axis wind turbine operating in unsteady conditions

    NASA Astrophysics Data System (ADS)

    Battisti, L.; Benini, E.; Brighenti, A.; Soraperra, G.; Raciti Castelli, M.

    2016-09-01

    The present work aims at assessing the reliability of a simulation tool capable of computing the unsteady rotational motion and the associated tower oscillations of a variable speed VAWT immersed in a coherent turbulent wind. Since the dynamic behaviour of a variable speed turbine strongly depends on unsteady wind conditions (wind gusts), a steady-state approach cannot accurately capture transient-related effects. The simulation platform proposed here is implemented using a lumped-mass approach: the drive train is described by the polar inertia and the angular position of the rotating parts, together with their speed and acceleration, while the rotor aerodynamics is based on steady experimental curves. The ultimate objective of the presented numerical platform is the simulation of transient phenomena, driven by turbulence, occurring during rotor operation, with the aim of supporting the implementation of efficient and robust control algorithms.
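
    The lumped-mass idea can be sketched as a single rotational degree of freedom driven by aerodynamic and load torques and integrated against a gusting wind. All curves and constants below are illustrative placeholders, not the parameters of the simulation platform described above.

    ```python
    # Minimal lumped-mass drive-train sketch: J * domega/dt = Q_aero(omega, V) - Q_gen(omega),
    # driven by a gusting wind speed V(t). All curves and constants are illustrative.
    import numpy as np
    from scipy.integrate import solve_ivp

    J = 150.0                      # polar inertia of the rotating parts [kg m^2]
    R, rho = 1.5, 1.225            # rotor radius [m], air density [kg/m^3]
    A = 2 * R * 3.0                # swept area of an H-type VAWT (2R x height) [m^2]

    def cp(lam):                   # toy power-coefficient curve vs tip-speed ratio
        return np.clip(0.4 - 0.05 * (lam - 3.5) ** 2, 0.0, None)

    def wind(t):                   # mean wind plus a coherent gust
        return 8.0 + 2.0 * np.sin(2 * np.pi * t / 15.0)

    def domega_dt(t, y):
        omega = y[0]
        V = wind(t)
        lam = omega * R / V
        q_aero = 0.5 * rho * A * V**3 * cp(lam) / max(omega, 1e-3)  # P_aero / omega
        q_gen = 4.0 * omega        # toy generator/load torque proportional to speed
        return [(q_aero - q_gen) / J]

    sol = solve_ivp(domega_dt, (0, 60), [5.0], max_step=0.05)
    print("rotor speed range [rad/s]:", sol.y[0].min(), sol.y[0].max())
    ```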

  7. High-throughput GPU-based LDPC decoding

    NASA Astrophysics Data System (ADS)

    Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin

    2010-08-01

    Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
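
    As a rough illustration of the iterative decoding that GPU implementations parallelize, the sketch below runs a hard-decision bit-flipping decoder on a tiny sparse parity-check matrix. It is a simplification for illustration only; the paper uses the sum-product algorithm on standard code sizes with CUDA and asynchronous transfers.

    ```python
    # Toy hard-decision bit-flipping decoder for a tiny sparse parity-check matrix.
    # It illustrates the iterative check/variable updates that GPU decoders run in
    # parallel; it is not the sum-product algorithm or the authors' CUDA code.
    import numpy as np

    H = np.array([[1, 1, 1, 0, 0, 0],   # 4 checks x 6 bits, every bit in 2 checks
                  [1, 0, 0, 1, 1, 0],
                  [0, 1, 0, 1, 0, 1],
                  [0, 0, 1, 0, 1, 1]])

    def bit_flip_decode(r, max_iter=20):
        r = r.copy()
        for _ in range(max_iter):
            syndrome = H @ r % 2              # which parity checks fail
            if not syndrome.any():
                break                         # all checks satisfied
            unsat = syndrome @ H              # failed-check count per bit
            r[np.argmax(unsat)] ^= 1          # flip the most-suspect bit
        return r

    received = np.array([0, 0, 0, 1, 0, 0])   # all-zero codeword with one bit error
    print("decoded:", bit_flip_decode(received))   # -> [0 0 0 0 0 0]
    ```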

  8. Citizen Sensors for SHM: Towards a Crowdsourcing Platform

    PubMed Central

    Ozer, Ekin; Feng, Maria Q.; Feng, Dongming

    2015-01-01

    This paper presents an innovative structural health monitoring (SHM) platform in terms of how it integrates smartphone sensors, the web, and crowdsourcing. The ubiquity of smartphones has provided an opportunity to create low-cost sensor networks for SHM. Crowdsourcing has given rise to citizen initiatives becoming a vast source of inexpensive, valuable but heterogeneous data. Previously, the authors have investigated the reliability of smartphone accelerometers for vibration-based SHM. This paper takes a step further to integrate mobile sensing and web-based computing for a prospective crowdsourcing-based SHM platform. An iOS application was developed to enable citizens to measure structural vibration and upload the data to a server with smartphones. A web-based platform was developed to collect and process the data automatically and store the processed data, such as modal properties of the structure, for long-term SHM purposes. Finally, the integrated mobile and web-based platforms were tested to collect the low-amplitude ambient vibration data of a bridge structure. Possible sources of uncertainties related to citizens were investigated, including the phone location, coupling conditions, and sampling duration. The field test results showed that the vibration data acquired by smartphones operated by citizens without expertise are useful for identifying structural modal properties with high accuracy. This platform can be further developed into an automated, smart, sustainable, cost-free system for long-term monitoring of structural integrity of spatially distributed urban infrastructure. Citizen Sensors for SHM will be a novel participatory sensing platform in the way that it offers hybrid solutions to transitional crowdsourcing parameters. PMID:26102490
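
    The kind of server-side processing described above can be approximated by estimating dominant modal frequencies from an acceleration record with a Welch power spectral density and peak picking, as in the sketch below. The signal and mode frequencies are synthetic placeholders; the platform's actual identification pipeline is not reproduced here.

    ```python
    # Sketch: estimate dominant modal frequencies from an ambient-vibration record
    # by peak-picking a Welch power spectral density. The signal is synthetic.
    import numpy as np
    from scipy.signal import welch, find_peaks

    fs = 100.0                                  # accelerometer sampling rate [Hz]
    t = np.arange(0, 120, 1 / fs)               # 2 minutes of ambient data
    rng = np.random.default_rng(1)
    accel = (0.02 * np.sin(2 * np.pi * 2.1 * t)      # first bridge mode ~2.1 Hz
             + 0.01 * np.sin(2 * np.pi * 5.4 * t)    # second mode ~5.4 Hz
             + 0.05 * rng.standard_normal(t.size))   # measurement noise

    freqs, psd = welch(accel, fs=fs, nperseg=2048)
    peaks, _ = find_peaks(psd, prominence=np.max(psd) * 0.05)
    print("identified modal frequencies [Hz]:", np.round(freqs[peaks], 2))
    ```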

  9. Desiderata for computable representations of electronic health records-driven phenotype algorithms

    PubMed Central

    Mo, Huan; Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Jiang, Guoqian; Kiefer, Richard; Zhu, Qian; Xu, Jie; Montague, Enid; Carrell, David S; Lingren, Todd; Mentch, Frank D; Ni, Yizhao; Wehbe, Firas H; Peissig, Peggy L; Tromp, Gerard; Larson, Eric B; Chute, Christopher G; Pathak, Jyotishman; Speltz, Peter; Kho, Abel N; Jarvik, Gail P; Bejan, Cosmin A; Williams, Marc S; Borthwick, Kenneth; Kitchner, Terrie E; Roden, Dan M; Harris, Paul A

    2015-01-01

    Background Electronic health records (EHRs) are increasingly used for clinical and translational research through the creation of phenotype algorithms. Currently, phenotype algorithms are most commonly represented as noncomputable descriptive documents and knowledge artifacts that detail the protocols for querying diagnoses, symptoms, procedures, medications, and/or text-driven medical concepts, and are primarily meant for human comprehension. We present desiderata for developing a computable phenotype representation model (PheRM). Methods A team of clinicians and informaticians reviewed common features for multisite phenotype algorithms published in PheKB.org and existing phenotype representation platforms. We also evaluated well-known diagnostic criteria and clinical decision-making guidelines to encompass a broader category of algorithms. Results We propose 10 desired characteristics for a flexible, computable PheRM: (1) structure clinical data into queryable forms; (2) recommend use of a common data model, but also support customization for the variability and availability of EHR data among sites; (3) support both human-readable and computable representations of phenotype algorithms; (4) implement set operations and relational algebra for modeling phenotype algorithms; (5) represent phenotype criteria with structured rules; (6) support defining temporal relations between events; (7) use standardized terminologies and ontologies, and facilitate reuse of value sets; (8) define representations for text searching and natural language processing; (9) provide interfaces for external software algorithms; and (10) maintain backward compatibility. Conclusion A computable PheRM is needed for true phenotype portability and reliability across different EHR products and healthcare systems. These desiderata are a guide to inform the establishment and evolution of EHR phenotype algorithm authoring platforms and languages. PMID:26342218
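
    A toy illustration of desiderata (1), (4), and (6), that is structured queryable data, set operations between cohorts, and a temporal relation between events, is sketched below with pandas. The table layouts, codes, and time window are invented and are not part of any proposed PheRM standard.

    ```python
    # Toy phenotype query: structured tables, set operations between cohorts, and a
    # temporal relation between events. Layouts and codes are invented examples.
    import pandas as pd

    diagnoses = pd.DataFrame({
        "patient_id": [1, 2, 3, 4],
        "code":       ["E11.9", "E11.9", "I10", "E11.9"],      # ICD-10-style codes
        "date":       pd.to_datetime(["2021-01-10", "2021-03-01",
                                      "2021-02-15", "2021-05-20"]),
    })
    medications = pd.DataFrame({
        "patient_id": [1, 2, 3],
        "drug":       ["metformin", "metformin", "lisinopril"],
        "date":       pd.to_datetime(["2021-01-20", "2021-02-01", "2021-02-20"]),
    })

    # Set operation: patients with a diabetes code AND a metformin order.
    dm = set(diagnoses.loc[diagnoses.code == "E11.9", "patient_id"])
    met = set(medications.loc[medications.drug == "metformin", "patient_id"])
    candidates = dm & met

    # Temporal relation: the medication must follow the diagnosis within 90 days.
    merged = diagnoses.merge(medications, on="patient_id", suffixes=("_dx", "_rx"))
    delta = merged.date_rx - merged.date_dx
    cases = set(merged.loc[delta.between(pd.Timedelta(0), pd.Timedelta(days=90)),
                           "patient_id"]) & candidates
    print("phenotype cases:", sorted(cases))    # -> [1]
    ```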

  10. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
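
    The core idea of such randomized methods, sampling the range of the matrix with a Gaussian test matrix and then computing a small exact SVD in that subspace, can be sketched in a few lines of NumPy. This is a generic textbook-style sketch, not the authors' MATLAB implementation.

    ```python
    # Minimal randomized low-rank SVD (range finder + projection). NumPy-only sketch
    # of the general technique, not the MATLAB implementation described above.
    import numpy as np

    def randomized_svd(A, k, n_oversample=10, n_power_iter=2, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        Omega = rng.standard_normal((n, k + n_oversample))  # Gaussian test matrix
        Y = A @ Omega                                       # sample the range of A
        for _ in range(n_power_iter):                       # power iterations sharpen the basis
            Y = A @ (A.T @ Y)
        Q, _ = np.linalg.qr(Y)                              # orthonormal range basis
        B = Q.T @ A                                         # small projected matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :k], s[:k], Vt[:k]

    rng = np.random.default_rng(1)
    A = rng.standard_normal((1000, 20)) @ rng.standard_normal((20, 400))  # rank ~20
    A += 1e-3 * rng.standard_normal(A.shape)                              # small noise
    U, s, Vt = randomized_svd(A, k=20)
    print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
    ```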

  11. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency of SPECT imaging simulations.
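
    The embarrassingly parallel sampling that makes GPUs attractive for such simulations can be illustrated with a minimal kernel: sample exponential photon free paths through a homogeneous slab and compare the uncollided transmitted fraction with the analytic Beer-Lambert value. The attenuation coefficient and geometry are placeholders, and full SPECT codes such as GATE and GGEMS model far more physics.

    ```python
    # Tiny Monte Carlo kernel: sample photon free paths through a homogeneous slab
    # and compare the transmitted (uncollided) fraction with the analytic value.
    # Only illustrates the independent sampling that GPUs accelerate.
    import numpy as np

    mu = 0.15          # linear attenuation coefficient [1/cm] (illustrative value)
    thickness = 10.0   # slab thickness [cm]
    n_photons = 1_000_000

    rng = np.random.default_rng(0)
    free_paths = rng.exponential(scale=1.0 / mu, size=n_photons)
    transmitted = np.mean(free_paths > thickness)

    print("Monte Carlo :", transmitted)
    print("Analytic    :", np.exp(-mu * thickness))   # exp(-1.5), about 0.223
    ```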

  12. The Turkish Version of Web-Based Learning Platform Evaluation Scale: Reliability and Validity Study

    ERIC Educational Resources Information Center

    Dag, Funda

    2016-01-01

    The purpose of this study is to determine the language equivalence and the validity and reliability of the Turkish version of the "Web-Based Learning Platform Evaluation Scale" ("Web Tabanli Ögrenme Ortami Degerlendirme Ölçegi" [WTÖODÖ]) used in the selection and evaluation of web-based learning environments. Within this scope,…

  13. Bridging the provenance gap: opportunities and challenges tracking in and ex silico provenance in sUAS workflows

    NASA Astrophysics Data System (ADS)

    Thomer, A.

    2017-12-01

    Data provenance - the record of the varied processes that went into the creation of a dataset, as well as the relationships between resulting data objects - is necessary to support the reusability, reproducibility and reliability of earth science data. In sUAS-based research, capturing provenance can be particularly challenging because of the breadth and distributed nature of the many platforms used to collect, process and analyze data. In any given project, multiple drones, controllers, computers, software systems, sensors, cameras, image processing algorithms and data processing workflows are used over sometimes long periods of time. These platforms and processing result in dozens - if not hundreds - of data products in varying stages of readiness-for-analysis and sharing. Provenance tracking mechanisms are needed to make the relationships between these many data products explicit, and therefore more reusable and shareable. In this talk, I discuss opportunities and challenges in tracking provenance in sUAS-based research, and identify gaps in current workflow-capture technologies. I draw on prior work conducted as part of the IMLS-funded Site-Based Data Curation project in which we developed methods of documenting in and ex silico (that is, computational and non-computational) workflows, and demonstrate this approach's applicability to research with sUASes. I conclude with a discussion of ontologies and other semantic technologies that have potential application in sUAS research.

  14. Energy Consumption Management of Virtual Cloud Computing Platform

    NASA Astrophysics Data System (ADS)

    Li, Lin

    2017-11-01

    Research on energy consumption management for virtual cloud computing platforms requires a deeper understanding of how energy is consumed by virtual machines and by the cloud platform itself; only then can the problems facing energy consumption management be solved. The key issue is the high energy consumption of data centers, which calls for new scientific techniques. Virtualization and cloud computing have become powerful tools in everyday life, work and production because of their many advantages: they are developing rapidly and achieve very high resource utilization rates, which makes them essential in the continually developing information age. This paper summarizes, explains and further analyzes the energy consumption management questions of the virtual cloud computing platform, giving a clearer understanding of energy consumption management for such platforms and supporting its application in many aspects of daily life and work.

  15. MC-GenomeKey: a multicloud system for the detection and annotation of genomic variants.

    PubMed

    Elshazly, Hatem; Souilmi, Yassine; Tonellato, Peter J; Wall, Dennis P; Abouelhoda, Mohamed

    2017-01-20

    Next-generation genome sequencing techniques have become affordable for massive sequencing efforts devoted to the clinical characterization of human diseases. However, the cost of providing cloud-based data analysis of the mounting datasets remains a concerning bottleneck for providing cost-effective clinical services. To address this computational problem, it is important to optimize the variant analysis workflow and the analysis tools used in order to reduce the overall computational processing time and, concomitantly, the processing cost. Furthermore, it is important to capitalize on recent developments in the cloud computing market, which has witnessed more providers competing in terms of products and prices. In this paper, we present a new package called MC-GenomeKey (Multi-Cloud GenomeKey) that efficiently executes the variant analysis workflow for detecting and annotating mutations using cloud resources from different commercial cloud providers. Our package supports Amazon, Google, and Azure clouds, as well as any other cloud platform based on OpenStack. Our package allows different scenarios of execution with different levels of sophistication, up to the one where a workflow can be executed using a cluster whose nodes come from different clouds. MC-GenomeKey also supports scenarios to exploit the spot instance model of Amazon in combination with the use of other cloud platforms to provide significant cost reduction. To the best of our knowledge, this is the first solution that optimizes the execution of the workflow using computational resources from different cloud providers. MC-GenomeKey provides an efficient multicloud-based solution to detect and annotate mutations. The package can run in different commercial cloud platforms, which enables the user to seize the best offers. The package also provides a reliable means to make use of the low-cost spot instance model of Amazon, as it provides an efficient solution to the sudden termination of spot machines as a result of a sudden price increase. The package has a web interface and it is available for free for academic use.
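
    For orientation, the sketch below shows a minimal spot-instance request with the standard boto3 client, the Amazon pricing model the package exploits for cost reduction. The AMI ID, instance type, key name, and bid price are placeholders, and this is not MC-GenomeKey's own provisioning code.

    ```python
    # Minimal sketch of requesting an Amazon EC2 spot instance with boto3, the
    # low-cost pricing model MC-GenomeKey exploits. The AMI ID, instance type, key
    # name and bid price below are placeholders, not values used by the package.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.request_spot_instances(
        SpotPrice="0.10",                 # maximum bid in USD/hour (placeholder)
        InstanceCount=1,
        Type="one-time",
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",   # placeholder AMI with the pipeline
            "InstanceType": "m5.xlarge",          # placeholder worker size
            "KeyName": "my-keypair",              # placeholder SSH key
        },
    )
    request_id = response["SpotInstanceRequests"][0]["SpotInstanceRequestId"]
    print("spot request submitted:", request_id)
    ```

    Because spot capacity can be reclaimed when the market price rises, a production workflow needs checkpointing or failover to other nodes, which is the sudden-termination problem the package addresses.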

  16. FORCEnet Net Centric Architecture - A Standards View

    DTIC Science & Technology

    2006-06-01

    Extracted briefing-diagram text (FORCEnet service framework layers): user-facing services; shared services; networking/communications; storage; computing platform; data interchange/integration; data management; applications; service platform; service framework.

  17. Formulation Predictive Dissolution (fPD) Testing to Advance Oral Drug Product Development: an Introduction to the US FDA Funded '21st Century BA/BE' Project.

    PubMed

    Hens, Bart; Sinko, Patrick; Job, Nicholas; Dean, Meagan; Al-Gousous, Jozef; Salehi, Niloufar; Ziff, Robert M; Tsume, Yasuhiro; Bermejo, Marival; Paixão, Paulo; Brasseur, James G; Yu, Alex; Talattof, Arjang; Benninghoff, Gail; Langguth, Peter; Lennernäs, Hans; Hasler, William L; Marciani, Luca; Dickens, Joseph; Shedden, Kerby; Sun, Duxin; Amidon, Gregory E; Amidon, Gordon L

    2018-06-23

    Over the past decade, formulation predictive dissolution (fPD) testing has gained increasing attention. Another mindset is pushed forward where scientists in our field are more confident to explore the in vivo behavior of an oral drug product by performing predictive in vitro dissolution studies. Similarly, there is an increasing interest in the application of modern computational fluid dynamics (CFD) frameworks and high-performance computing platforms to study the local processes underlying absorption within the gastrointestinal (GI) tract. In that way, CFD and computing platforms both can inform future PBPK-based in silico frameworks and determine the GI-motility-driven hydrodynamic impacts that should be incorporated into in vitro dissolution methods for in vivo relevance. Current compendial dissolution methods are not always reliable to predict the in vivo behavior, especially not for biopharmaceutics classification system (BCS) class 2/4 compounds suffering from a low aqueous solubility. Developing a predictive dissolution test will be more reliable, cost-effective and less time-consuming as long as the predictive power of the test is sufficiently strong. There is a need to develop a biorelevant, predictive dissolution method that can be applied by pharmaceutical drug companies to facilitate marketing access for generic and novel drug products. In 2014, Prof. Gordon L. Amidon and his team initiated a far-ranging research program designed to integrate (1) in vivo studies in humans in order to further improve the understanding of the intraluminal processing of oral dosage forms and dissolved drug along the gastrointestinal (GI) tract, (2) advancement of in vitro methodologies that incorporates higher levels of in vivo relevance and (3) computational experiments to study the local processes underlying dissolution, transport and absorption within the intestines performed with a new unique CFD based framework. Of particular importance is revealing the physiological variables determining the variability in in vivo dissolution and GI absorption from person to person in order to address (potential) in vivo BE failures. This paper provides an introduction to this multidisciplinary project, informs the reader about current achievements and outlines future directions. Copyright © 2018. Published by Elsevier B.V.

  18. Grids, virtualization, and clouds at Fermilab

    DOE PAGES

    Timm, S.; Chadwick, K.; Garzoglio, G.; ...

    2014-06-11

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  19. Grids, virtualization, and clouds at Fermilab

    NASA Astrophysics Data System (ADS)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  20. Osteochondritis dissecans of the humeral capitellum: reliability of four classification systems using radiographs and computed tomography.

    PubMed

    Claessen, Femke M A P; van den Ende, Kimberly I M; Doornberg, Job N; Guitton, Thierry G; Eygendaal, Denise; van den Bekerom, Michel P J

    2015-10-01

    The radiographic appearance of osteochondritis dissecans (OCD) of the humeral capitellum varies according to the stage of the lesion. It is important to evaluate the stage of OCD lesion carefully to guide treatment. We compared the interobserver reliability of currently used classification systems for OCD of the humeral capitellum to identify the most reliable classification system. Thirty-two musculoskeletal radiologists and orthopaedic surgeons specialized in elbow surgery from several countries evaluated anteroposterior and lateral radiographs and corresponding computed tomography (CT) scans of 22 patients to classify the stage of OCD of the humeral capitellum according to the classification systems developed by (1) Minami, (2) Berndt and Harty, (3) Ferkel and Sgaglione, and (4) Anderson on a Web-based study platform including a Digital Imaging and Communications in Medicine viewer. Magnetic resonance imaging was not evaluated as part of this study. We measured agreement among observers using the Siegel and Castellan multirater κ. All OCD classification systems, except for Berndt and Harty, which had poor agreement among observers (κ = 0.20), had fair interobserver agreement: κ was 0.27 for the Minami, 0.23 for Anderson, and 0.22 for Ferkel and Sgaglione classifications. The Minami Classification was significantly more reliable than the other classifications (P < .001). The Minami Classification was the most reliable for classifying different stages of OCD of the humeral capitellum. However, it is unclear whether radiographic evidence of OCD of the humeral capitellum, as categorized by the Minami Classification, guides treatment in clinical practice as a result of this fair agreement. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  1. Developing Reliable Telemedicine Platforms with Unreliable and Limited Communication Bandwidth

    DTIC Science & Technology

    2017-10-01

    …hospital health care, the benefit of high-resolution medical data is greatly limited in battlefield or natural disaster areas, where communication to…sampling rate. For high-frequency data like waveforms, the downsampling approach could directly reduce the amount of data. Therefore, it could be used… (Report AFRL-SA-WP-TR-2017-0019)

  2. Using e-Learning Platforms for Mastery Learning in Developmental Mathematics Courses

    ERIC Educational Resources Information Center

    Boggs, Stacey; Shore, Mark; Shore, JoAnna

    2004-01-01

    Many colleges and universities have adopted e-learning platforms to utilize computers as an instructional tool in developmental (i.e., beginning and intermediate algebra) mathematics courses. An e-learning platform is a computer program used to enhance course instruction via computers and the Internet. Allegany College of Maryland is currently…

  3. Reliability and validity of the Wii Balance Board for assessment of standing balance: A systematic review.

    PubMed

    Clark, Ross A; Mentiplay, Benjamin F; Pua, Yong-Hao; Bower, Kelly J

    2018-03-01

    The use of force platform technologies to assess standing balance is common across a range of clinical areas. Numerous researchers have evaluated the low-cost Wii Balance Board (WBB) for its utility in assessing balance, with variable findings. This review aimed to systematically evaluate the reliability and concurrent validity of the WBB for assessment of static standing balance. Articles were retrieved from six databases (Medline, SCOPUS, EMBASE, CINAHL, Web of Science, Inspec) from 2007 to 2017. After independent screening by two reviewers, 25 articles were included. Two reviewers performed the data extraction and quality assessment. Test-retest reliability was investigated in 12 studies, with intraclass correlation coefficients or Pearson's correlation values showing a range from poor to excellent reliability (range: 0.27 to 0.99). Concurrent validity (i.e. comparison with another force platform) was examined in 21 studies, and was generally found to be excellent in studies examining the association between the same outcome measures collected on both devices. For studies reporting predominantly poor to moderate validity, potentially influential factors included the choice of 1) criterion reference (e.g. not a common force platform), 2) test duration (e.g. <30 s for double leg), 3) outcome measure (e.g. comparing a centre of pressure variable from the WBB with a summary score from the force platform), 4) data acquisition platform (studies using Apple iOS reported predominantly moderate validity), and 5) low sample size. In conclusion, evidence suggests that the WBB can be used as a reliable and valid tool for assessing standing balance. Protocol registration number: PROSPERO 2017: CRD42017058122. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Boutiques: a flexible framework to integrate command-line applications in computing platforms.

    PubMed

    Glatard, Tristan; Kiar, Gregory; Aumentado-Armstrong, Tristan; Beck, Natacha; Bellec, Pierre; Bernard, Rémi; Bonnet, Axel; Brown, Shawn T; Camarasu-Pop, Sorina; Cervenansky, Frédéric; Das, Samir; Ferreira da Silva, Rafael; Flandin, Guillaume; Girard, Pascal; Gorgolewski, Krzysztof J; Guttmann, Charles R G; Hayot-Sasson, Valérie; Quirion, Pierre-Olivier; Rioux, Pierre; Rousseau, Marc-Étienne; Evans, Alan C

    2018-05-01

    We present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications in the neuroinformatics domain. We expect Boutiques to improve the quality of application integration in computational platforms, to reduce redundancy of effort, to contribute to computational reproducibility, and to foster Open Science.

  5. A high performance scientific cloud computing environment for materials simulations

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  6. Report on the Second ARM Mobile Facility (AMF2) Stabilization Platform: Control Strategy and Implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coulter, Richard J.; Martin, Timothy J.

    One of the primary objectives of the U.S. Department of Energy’s Atmospheric Radiation Measurement (ARM) Climate Research Facility’s second Mobile Facility (AMF2) is to obtain reliable measurements from ocean-going vessels. A pillar of the AMF2 strategy in this effort is the use of a stable platform for those instruments that 1) need to look directly at, or be shaded from, direct sunlight or 2) require a truly vertical orientation. Some ARM instruments that fall into these categories include the Multi-Filter Rotating Shadow Band Radiometer (MFRSR) and the Total Sky Imager (TSI), both of which have a shadow band mechanism, upward-looking radiometry that should be exposed only to the sky, a Microwave Radiometer (MWR) that looks vertically and at specified tilt angles, and vertically pointing radars, for which the vertical component of motion is critically important. During the design and construction phase of AMF2, an inexpensive stable platform was purchased to perform the stabilization tasks for some of these instruments. Computer programs were developed to communicate with the platform controller and with an inertial measurements platform that measures true ship motion components (roll, pitch, yaw, surge, sway, and heave). The platform was then tested on a 3-day cruise aboard the RV Connecticut during June 16-18, 2010, off the east coast of the United States. This initial test period was followed by continued development of the platform control strategy and implementation as time permitted. This is a report of the results of these efforts and the critical points in moving forward.

  7. New Tools for New Research in Psychiatry: A Scalable and Customizable Platform to Empower Data Driven Smartphone Research.

    PubMed

    Torous, John; Kiang, Mathew V; Lorme, Jeanette; Onnela, Jukka-Pekka

    2016-05-05

    A longstanding barrier to progress in psychiatry, both in clinical settings and research trials, has been the persistent difficulty of accurately and reliably quantifying disease phenotypes. Mobile phone technology combined with data science has the potential to offer medicine a wealth of additional information on disease phenotypes, but the large majority of existing smartphone apps are not intended for use as biomedical research platforms and, as such, do not generate research-quality data. Our aim is not the creation of yet another app per se but rather the establishment of a platform to collect research-quality smartphone raw sensor and usage pattern data. Our ultimate goal is to develop statistical, mathematical, and computational methodology to enable us and others to extract biomedical and clinical insights from smartphone data. We report on the development and early testing of Beiwe, a research platform featuring a study portal, smartphone app, database, and data modeling and analysis tools designed and developed specifically for transparent, customizable, and reproducible biomedical research use, in particular for the study of psychiatric and neurological disorders. We also outline a proposed study using the platform for patients with schizophrenia. We demonstrate the passive data capabilities of the Beiwe platform and early results of its analytical capabilities. Smartphone sensors and phone usage patterns, when coupled with appropriate statistical learning tools, are able to capture various social and behavioral manifestations of illnesses, in naturalistic settings, as lived and experienced by patients. The ubiquity of smartphones makes this type of moment-by-moment quantification of disease phenotypes highly scalable and, when integrated within a transparent research platform, presents tremendous opportunities for research, discovery, and patient health.

  8. New Tools for New Research in Psychiatry: A Scalable and Customizable Platform to Empower Data Driven Smartphone Research

    PubMed Central

    Torous, John; Kiang, Mathew V; Lorme, Jeanette

    2016-01-01

    Background A longstanding barrier to progress in psychiatry, both in clinical settings and research trials, has been the persistent difficulty of accurately and reliably quantifying disease phenotypes. Mobile phone technology combined with data science has the potential to offer medicine a wealth of additional information on disease phenotypes, but the large majority of existing smartphone apps are not intended for use as biomedical research platforms and, as such, do not generate research-quality data. Objective Our aim is not the creation of yet another app per se but rather the establishment of a platform to collect research-quality smartphone raw sensor and usage pattern data. Our ultimate goal is to develop statistical, mathematical, and computational methodology to enable us and others to extract biomedical and clinical insights from smartphone data. Methods We report on the development and early testing of Beiwe, a research platform featuring a study portal, smartphone app, database, and data modeling and analysis tools designed and developed specifically for transparent, customizable, and reproducible biomedical research use, in particular for the study of psychiatric and neurological disorders. We also outline a proposed study using the platform for patients with schizophrenia. Results We demonstrate the passive data capabilities of the Beiwe platform and early results of its analytical capabilities. Conclusions Smartphone sensors and phone usage patterns, when coupled with appropriate statistical learning tools, are able to capture various social and behavioral manifestations of illnesses, in naturalistic settings, as lived and experienced by patients. The ubiquity of smartphones makes this type of moment-by-moment quantification of disease phenotypes highly scalable and, when integrated within a transparent research platform, presents tremendous opportunities for research, discovery, and patient health. PMID:27150677

  9. A Resource Service Model in the Industrial IoT System Based on Transparent Computing.

    PubMed

    Li, Weimin; Wang, Bin; Sheng, Jinfang; Dong, Ke; Li, Zitong; Hu, Yixiang

    2018-03-26

    The Internet of Things (IoT) has received a lot of attention, especially in industrial scenarios. One of the typical applications is the intelligent mine, which actually constructs the Six-Hedge underground systems with IoT platforms. Based on a case study of the Six Systems in the underground metal mine, this paper summarizes the main challenges of industrial IoT from the aspects of heterogeneity in devices and resources, security, reliability, deployment and maintenance costs. Then, a novel resource service model for the industrial IoT applications based on Transparent Computing (TC) is presented, which supports centralized management of all resources including operating system (OS), programs and data on the server-side for the IoT devices, thus offering an effective, reliable, secure and cross-OS IoT service and reducing the costs of IoT system deployment and maintenance. The model has five layers: sensing layer, aggregation layer, network layer, service and storage layer and interface and management layer. We also present a detailed analysis on the system architecture and key technologies of the model. Finally, the efficiency of the model is shown by an experiment prototype system.

  10. A Resource Service Model in the Industrial IoT System Based on Transparent Computing

    PubMed Central

    Wang, Bin; Sheng, Jinfang; Dong, Ke; Li, Zitong; Hu, Yixiang

    2018-01-01

    The Internet of Things (IoT) has received a lot of attention, especially in industrial scenarios. One of the typical applications is the intelligent mine, which actually constructs the Six-Hedge underground systems with IoT platforms. Based on a case study of the Six Systems in the underground metal mine, this paper summarizes the main challenges of industrial IoT from the aspects of heterogeneity in devices and resources, security, reliability, deployment and maintenance costs. Then, a novel resource service model for the industrial IoT applications based on Transparent Computing (TC) is presented, which supports centralized management of all resources including operating system (OS), programs and data on the server-side for the IoT devices, thus offering an effective, reliable, secure and cross-OS IoT service and reducing the costs of IoT system deployment and maintenance. The model has five layers: sensing layer, aggregation layer, network layer, service and storage layer and interface and management layer. We also present a detailed analysis on the system architecture and key technologies of the model. Finally, the efficiency of the model is shown by an experiment prototype system. PMID:29587450

  11. 30 CFR 250.903 - What records must I keep?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... SULPHUR OPERATIONS IN THE OUTER CONTINENTAL SHELF Platforms and Structures General Requirements for... platform safety, structural reliability, or operating capabilities. Items such as steel brackets, deck...

  12. Desiderata for computable representations of electronic health records-driven phenotype algorithms.

    PubMed

    Mo, Huan; Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Jiang, Guoqian; Kiefer, Richard; Zhu, Qian; Xu, Jie; Montague, Enid; Carrell, David S; Lingren, Todd; Mentch, Frank D; Ni, Yizhao; Wehbe, Firas H; Peissig, Peggy L; Tromp, Gerard; Larson, Eric B; Chute, Christopher G; Pathak, Jyotishman; Denny, Joshua C; Speltz, Peter; Kho, Abel N; Jarvik, Gail P; Bejan, Cosmin A; Williams, Marc S; Borthwick, Kenneth; Kitchner, Terrie E; Roden, Dan M; Harris, Paul A

    2015-11-01

    Electronic health records (EHRs) are increasingly used for clinical and translational research through the creation of phenotype algorithms. Currently, phenotype algorithms are most commonly represented as noncomputable descriptive documents and knowledge artifacts that detail the protocols for querying diagnoses, symptoms, procedures, medications, and/or text-driven medical concepts, and are primarily meant for human comprehension. We present desiderata for developing a computable phenotype representation model (PheRM). A team of clinicians and informaticians reviewed common features for multisite phenotype algorithms published in PheKB.org and existing phenotype representation platforms. We also evaluated well-known diagnostic criteria and clinical decision-making guidelines to encompass a broader category of algorithms. We propose 10 desired characteristics for a flexible, computable PheRM: (1) structure clinical data into queryable forms; (2) recommend use of a common data model, but also support customization for the variability and availability of EHR data among sites; (3) support both human-readable and computable representations of phenotype algorithms; (4) implement set operations and relational algebra for modeling phenotype algorithms; (5) represent phenotype criteria with structured rules; (6) support defining temporal relations between events; (7) use standardized terminologies and ontologies, and facilitate reuse of value sets; (8) define representations for text searching and natural language processing; (9) provide interfaces for external software algorithms; and (10) maintain backward compatibility. A computable PheRM is needed for true phenotype portability and reliability across different EHR products and healthcare systems. These desiderata are a guide to inform the establishment and evolution of EHR phenotype algorithm authoring platforms and languages. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  13. Web-GIS platform for monitoring and forecasting of regional climate and ecological changes

    NASA Astrophysics Data System (ADS)

    Gordov, E. P.; Krupchatnikov, V. N.; Lykosov, V. N.; Okladnikov, I.; Titov, A. G.; Shulgina, T. M.

    2012-12-01

    The growing volume of environmental data from sensors and model outputs makes the development of software infrastructure, based on modern information and telecommunication technologies, for the information support of integrated research in the Earth sciences an urgent and important task (Gordov et al, 2012; van der Wel, 2005). The heterogeneity of datasets obtained from different sources and institutions not only hampers the interchange of data and analysis results but also complicates their intercomparison, reducing the reliability of analysis results. Modern geophysical data processing techniques, however, allow different technological solutions to be combined when organizing such information resources. It is now generally accepted that an information-computational infrastructure should rely on the combined use of web and GIS technologies for creating applied information-computational web systems (Titov et al, 2009; Gordov et al, 2010; Gordov, Okladnikov and Titov, 2011). Using these approaches to develop internet-accessible thematic information-computational systems, and arranging data and knowledge interchange between them, is a promising way to create a distributed information-computational environment supporting multidisciplinary regional and global research in the Earth sciences, including the analysis of climate changes and their impact on the spatial-temporal distribution and state of vegetation. An experimental software and hardware platform is presented that provides the operation of a web-oriented production and research center for regional climate change investigations, combining a modern Web 2.0 approach, GIS functionality and capabilities for running climate and meteorological models, processing large geophysical datasets, visualization, joint software development by distributed research groups, scientific analysis, and the education of students and post-graduate students. The platform software developed (Shulgina et al, 2012; Okladnikov et al, 2012) includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Data preprocessing, runs, and visualization of results of the WRF and Planet Simulator models integrated into the platform are also provided. All functions of the center are accessible through a web portal using a common graphical web browser, via an interactive graphical user interface that provides, in particular, visualization of processing results, selection of a geographical region of interest (pan and zoom) and manipulation of data layers (order, enable/disable, feature extraction). The platform gives users the ability to analyze heterogeneous geophysical data, including high-resolution data, and to discover tendencies in climatic and ecosystem changes within different multidisciplinary studies (Shulgina et al, 2011). With it, even a user without specific expertise can perform computational processing and visualization of large meteorological, climatological and satellite monitoring datasets through a unified graphical web interface.
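
    As an example of the kind of server-side numerical processing such a platform exposes through its web interface, the sketch below fits a linear trend to annual-mean temperatures at every cell of a small grid. The data are synthetic placeholders; in the platform the inputs would come from the archived model outputs and observational datasets.

    ```python
    # Toy per-grid-cell trend analysis: fit a linear trend to annual-mean
    # temperature at every grid cell. Data here are synthetic; real inputs would
    # come from NetCDF archives of model output or observations.
    import numpy as np

    years = np.arange(1980, 2021)                       # 41 annual means
    ny, nx = 20, 30                                     # toy regional grid
    rng = np.random.default_rng(0)
    true_trend = 0.03 * np.ones((ny, nx))               # 0.03 K/year warming
    temps = (280.0
             + true_trend[None, :, :] * (years - years[0])[:, None, None]
             + 0.5 * rng.standard_normal((years.size, ny, nx)))   # interannual noise

    # Least-squares slope per cell: vectorized polyfit over the flattened grid.
    coeffs = np.polyfit(years, temps.reshape(years.size, -1), deg=1)
    trend_map = coeffs[0].reshape(ny, nx)               # K/year at each grid cell
    print("median estimated trend [K/decade]:", round(np.median(trend_map) * 10, 3))
    ```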

  14. [Reliability of static posturography in elderly persons].

    PubMed

    Bauer, C M; Gröger, I; Rupprecht, R; Tibesku, C O; Gassmann, K G

    2010-08-01

    Static posturography is used to quantify body sway. It is used to assess the balance of elderly persons who are prone to falls. There is still no general opinion concerning the reliability of force platform measurements. The aim of this study was to test the reliability of force platform parameters when measuring elderly persons. The reliability of 11 force platform parameters was tested measuring 30 elderly persons. The following parameters were calculated: mean speed of center of pressure displacement in mm/s, length of sway in mm, sway area in mm², amplitudes of center of pressure movement, the axis of oscillation in degrees and the person's angles of inclination in degrees. Three measurements were taken on the same day, with a resting period of 2 min. Four different test conditions were used: normal standing and narrow stand with eyes open and eyes closed, respectively. Reliability was determined by using intraclass correlation coefficients. Six parameters had excellent reliability with a correlation coefficient of >0.9: mean speed of center of pressure movement during narrow stand, area of sway during narrow stand, length of sway during normal and narrow stand, and the angle of inclination in the sagittal plane during normal stand and narrow stand. The condition "narrow stand eyes closed" proved to be the most reliable test position. Six parameters proved to have excellent reliability and are recommended to be used in further investigations. Narrow stand with eyes closed should be used as the test position. The tested protocol proved to be reliable. Whether these parameters can be used to predict falls in elderly persons remains to be investigated.
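
    For reference, the sketch below computes three of the parameters named above (length of sway, mean speed of center of pressure displacement, and sway area) from a synthetic center-of-pressure trace. The sampling rate, trial length, and signal are illustrative and do not reproduce the study protocol.

    ```python
    # Sketch of standard static-posturography parameters: sway path length, mean
    # centre-of-pressure (COP) speed, and sway area (convex hull), computed from a
    # synthetic COP trace. Protocol details are not taken from the paper.
    import numpy as np
    from scipy.spatial import ConvexHull

    fs = 100.0                                   # force-platform sampling rate [Hz]
    t = np.arange(0, 30, 1 / fs)                 # one 30-s quiet-standing trial
    rng = np.random.default_rng(0)
    cop = np.column_stack([                      # anterior-posterior / medio-lateral [mm]
        np.cumsum(0.05 * rng.standard_normal(t.size)),
        np.cumsum(0.05 * rng.standard_normal(t.size)),
    ])

    steps = np.diff(cop, axis=0)
    path_length = np.sum(np.linalg.norm(steps, axis=1))      # length of sway [mm]
    mean_speed = path_length / t[-1]                         # mean COP speed [mm/s]
    sway_area = ConvexHull(cop).volume                       # 2-D hull "volume" is area [mm^2]

    print(f"path length {path_length:.1f} mm, "
          f"mean speed {mean_speed:.1f} mm/s, sway area {sway_area:.1f} mm^2")
    ```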

  15. Boutiques: a flexible framework to integrate command-line applications in computing platforms

    PubMed Central

    Glatard, Tristan; Kiar, Gregory; Aumentado-Armstrong, Tristan; Beck, Natacha; Bellec, Pierre; Bernard, Rémi; Bonnet, Axel; Brown, Shawn T; Camarasu-Pop, Sorina; Cervenansky, Frédéric; Das, Samir; Ferreira da Silva, Rafael; Flandin, Guillaume; Girard, Pascal; Gorgolewski, Krzysztof J; Guttmann, Charles R G; Hayot-Sasson, Valérie; Quirion, Pierre-Olivier; Rioux, Pierre; Rousseau, Marc-Étienne; Evans, Alan C

    2018-01-01

    We present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications in the neuroinformatics domain. We expect Boutiques to improve the quality of application integration in computational platforms, to reduce redundancy of effort, to contribute to computational reproducibility, and to foster Open Science. PMID:29718199

  16. A Lightweight Remote Parallel Visualization Platform for Interactive Massive Time-varying Climate Data Analysis

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhang, T.; Huang, Q.; Liu, Q.

    2014-12-01

    Today's climate datasets are characterized by large volume, a high degree of spatiotemporal complexity, and fast evolution over time. Because visualizing large, distributed climate datasets is computationally intensive, traditional desktop-based visualization applications fail to handle the computational load. Recently, scientists have developed remote visualization techniques to address this issue. Remote visualization techniques usually leverage server-side parallel computing capabilities to perform visualization tasks and deliver the results to clients over the network. In this research, we aim to build a remote parallel visualization platform for visualizing and analyzing massive climate data. Our visualization platform was built on ParaView, one of the most popular open-source remote visualization and analysis applications. To further enhance the scalability and stability of the platform, we employed cloud computing techniques to support its deployment. In this platform, all climate datasets are regular grid data stored in NetCDF format. Three types of data access are supported: accessing remote datasets provided by OPeNDAP servers, accessing datasets hosted on the web visualization server, and accessing local datasets. Regardless of the data access method, all visualization tasks are completed on the server side to reduce the workload of clients. As a proof of concept, we have implemented a set of scientific visualization methods to show the feasibility of the platform. Preliminary results indicate that the framework can address the computational limitations of desktop-based visualization applications.
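
    Since the platform reads regular-grid NetCDF data either locally or through OPeNDAP, the short sketch below illustrates the typical client-side access pattern with the netCDF4 Python library; the URL and variable name are placeholders, and a netCDF4 build with DAP support is assumed.

        from netCDF4 import Dataset  # handles both local files and OPeNDAP URLs

        # Placeholder endpoint; a local NetCDF path works through the same interface.
        url = "http://example.org/opendap/air_temperature.nc"

        with Dataset(url) as ds:
            temps = ds.variables["air"]      # variable name is dataset-specific (assumed here)
            first_slice = temps[0, :, :]     # request one time slice so the transfer stays small
            print(first_slice.shape)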

  17. Mobile healthcare applications: system design review, critical issues and challenges.

    PubMed

    Baig, Mirza Mansoor; GholamHosseini, Hamid; Connolly, Martin J

    2015-03-01

    Mobile phones are becoming increasingly important in the monitoring and delivery of healthcare interventions. They are often considered pocket computers, due to their advanced computing features, enhanced preferences and diverse capabilities. Their sophisticated sensors and complex software applications make mobile healthcare (m-health) applications more feasible and innovative. In a number of scenarios the user-friendliness, convenience and effectiveness of these systems have been acknowledged by both patients and healthcare providers. M-health technology employs advanced concepts and techniques from the multidisciplinary fields of electrical engineering, computer science, biomedical engineering and medicine, carrying the innovations of these fields over to healthcare systems. This paper deals with two important aspects of current mobile phone based sensor applications in healthcare. Firstly, it critically reviews advanced applications such as vital sign monitoring, blood glucose monitoring and built-in camera based smartphone sensor applications. Secondly, it investigates challenges and critical issues related to the use of smartphones in healthcare, including reliability, efficiency, mobile phone platform variability, cost effectiveness, energy usage, user interface, quality of medical data, and security and privacy. It was found that mobile based applications have been widely developed in recent years, with fast growing deployment by healthcare professionals and patients. However, despite the advantages of smartphones in patient monitoring, education, and management, there are critical issues and challenges related to security and privacy of data, acceptability, reliability and cost that need to be addressed.

  18. Easy and effective--web-based information systems designed and maintained by physicians: experience with two gynecological projects.

    PubMed

    Kupka, M S; Dorn, C; Richter, O; van der Ven, H; Baur, M

    2003-08-01

    It is well established that medical information sources are developing continuously from printed media to digital online sources. To demonstrate the effectiveness and feasibility of decentrally developed and maintained web-based information sources for health professionals, two projects are described. The information platform of the German Working Group for Information Technologies in Gynecology and Obstetrics (AIG) and the information source concerning the German Registry for in vitro fertilization (DIR) were implemented using ordinary software and standard computer equipment. Only minimal resources and training were necessary to run safe and reliable web-based information sources that were highly effective in terms of both cost and time.

  19. Transaction-based building controls framework, Volume 2: Platform descriptive model and requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akyol, Bora A.; Haack, Jereme N.; Carpenter, Brandon J.

    Transaction-based Building Controls (TBC) offer a control systems platform that provides an agent execution environment that meets the growing requirements for security, resource utilization, and reliability. This report outlines the requirements for a platform to meet these needs and describes an illustrative/exemplary implementation.

  20. The Prodiguer Messaging Platform

    NASA Astrophysics Data System (ADS)

    Denvil, S.; Greenslade, M. A.; Carenton, N.; Levavasseur, G.; Raciazek, J.

    2015-12-01

    CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French global climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output are some of the complexities that CONVERGENCE aims to resolve. At any one moment in time, researchers affiliated with the Institut Pierre Simon Laplace (IPSL) climate modeling group are running hundreds of global climate simulations. These simulations execute upon a heterogeneous set of French High Performance Computing (HPC) environments. The IPSL's simulation execution runtime libIGCM (library for IPSL Global Climate Modeling group) has recently been enhanced to support hitherto impossible real-time use cases such as simulation monitoring, data publication, metrics collection, simulation control and visualization. At the core of this enhancement is Prodiguer: an AMQP (Advanced Message Queuing Protocol) based, event-driven, asynchronous distributed messaging platform. libIGCM now dispatches copious amounts of information, in the form of messages, to the platform for remote processing by Prodiguer software agents at IPSL servers in Paris. Such processing takes several forms: persisting message content to databases; launching rollback jobs upon simulation failure; notifying downstream applications; and automating visualization pipelines. We will describe and/or demonstrate the platform's technical implementation, inherent ease of scalability, inherent adaptiveness with respect to supervising simulations, and a web portal receiving simulation notifications in real time.
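
    As a concrete illustration of the AMQP pattern described above, the sketch below publishes a small monitoring event to a topic exchange with the widely used pika client; the broker host, exchange, and routing key are illustrative placeholders, not the actual Prodiguer topology.

        import json

        import pika  # standard Python AMQP 0-9-1 client

        # Push one simulation-monitoring event to the broker for asynchronous
        # processing by downstream agents (names below are assumptions).
        connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
        channel = connection.channel()
        channel.exchange_declare(exchange="simulation.monitoring",
                                 exchange_type="topic", durable=True)

        event = {"simulation_id": "sim-0001", "state": "running", "timestep": 1200}
        channel.basic_publish(
            exchange="simulation.monitoring",
            routing_key="simulation.state",                    # agents bind queues to keys of interest
            body=json.dumps(event),
            properties=pika.BasicProperties(delivery_mode=2),  # persist the message at the broker
        )
        connection.close()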

  1. ClusterCAD: a computational platform for type I modular polyketide synthase design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eng, Clara H.; Backman, Tyler W H; Bailey, Constance B.

    Here, we present ClusterCAD, a web-based toolkit designed to leverage the collinear structure and deterministic logic of type I modular polyketide synthases (PKSs) for synthetic biology applications. The unique organization of these megasynthases, combined with the diversity of their catalytic domain building blocks, has fueled an interest in harnessing the biosynthetic potential of PKSs for the microbial production of both novel natural product analogs and industrially relevant small molecules. However, a limited theoretical understanding of the determinants of PKS fold and function poses a substantial barrier to the design of active variants, and identifying strategies to reliably construct functional PKS chimeras remains an active area of research. In this work, we formalize a paradigm for the design of PKS chimeras and introduce ClusterCAD as a computational platform to streamline and simplify the process of designing experiments to test strategies for engineering PKS variants. ClusterCAD provides chemical structures with stereochemistry for the intermediates generated by each PKS module, as well as sequence- and structure-based search tools that allow users to identify modules based either on amino acid sequence or on the chemical structure of the cognate polyketide intermediate. ClusterCAD can be accessed at https://clustercad.jbei.org and at http://clustercad.igb.uci.edu.

  2. Satellite Cloud and Radiative Property Processing and Distribution System on the NASA Langley ASDC OpenStack and OpenShift Cloud Platform

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Chee, T.; Palikonda, R.; Smith, W. L., Jr.; Bedka, K. M.; Spangenberg, D.; Vakhnin, A.; Lutz, N. E.; Walter, J.; Kusterer, J.

    2017-12-01

    Cloud Computing offers new opportunities for large-scale scientific data producers to utilize Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) IT resources to process and deliver data products in an operational environment where timely delivery, reliability, and availability are critical. The NASA Langley Research Center Atmospheric Science Data Center (ASDC) is building and testing a private and public facing cloud for users in the Science Directorate to utilize as an everyday production environment. The NASA SatCORPS (Satellite ClOud and Radiation Property Retrieval System) team processes and derives near real-time (NRT) global cloud products from operational geostationary (GEO) satellite imager datasets. To deliver these products, we will utilize the public facing cloud and OpenShift to deploy a load-balanced webserver for data storage, access, and dissemination. The OpenStack private cloud will host data ingest and computational capabilities for SatCORPS processing. This paper will discuss the SatCORPS migration towards, and usage of, the ASDC Cloud Services in an operational environment. Detailed lessons learned from use of prior cloud providers, specifically the Amazon Web Services (AWS) GovCloud and the Government Cloud administered by the Langley Managed Cloud Environment (LMCE) will also be discussed.

  3. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap

    PubMed Central

    Al-Widyan, Khalid

    2017-01-01

    Extrinsic calibration of camera and 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications, for example in the SLAM algorithms used on mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot–world hand–eye calibration (RWHE) problem, proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX=ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure of the calibration environment. The reliability and accuracy of the proposed approach are compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively. PMID:29036905

  4. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap.

    PubMed

    Ahmad Yousef, Khalil M; Mohd, Bassam J; Al-Widyan, Khalid; Hayajneh, Thaier

    2017-10-14

    Extrinsic calibration of camera and 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications, for example in the SLAM algorithms used on mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem, proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure of the calibration environment. The reliability and accuracy of the proposed approach are compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.
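
    For readers unfamiliar with the robot-world hand-eye formulation, the calibration identity AX = ZB over homogeneous transforms splits into the standard rotational and translational constraints shown below; this is the generic decomposition, not this paper's particular solver.

        % AX = ZB written with homogeneous transforms and split into its
        % rotational and translational parts (standard decomposition).
        \begin{aligned}
          A X &= Z B, \qquad
          A = \begin{bmatrix} R_A & t_A \\ 0 & 1 \end{bmatrix}, \;
          X = \begin{bmatrix} R_X & t_X \\ 0 & 1 \end{bmatrix}, \;
          Z = \begin{bmatrix} R_Z & t_Z \\ 0 & 1 \end{bmatrix}, \;
          B = \begin{bmatrix} R_B & t_B \\ 0 & 1 \end{bmatrix} \\
          R_A R_X &= R_Z R_B \\
          R_A t_X + t_A &= R_Z t_B + t_Z
        \end{aligned}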

  5. ClusterCAD: a computational platform for type I modular polyketide synthase design

    DOE PAGES

    Eng, Clara H.; Backman, Tyler W H; Bailey, Constance B.; ...

    2017-10-11

    Here, we present ClusterCAD, a web-based toolkit designed to leverage the collinear structure and deterministic logic of type I modular polyketide synthases (PKSs) for synthetic biology applications. The unique organization of these megasynthases, combined with the diversity of their catalytic domain building blocks, has fueled an interest in harnessing the biosynthetic potential of PKSs for the microbial production of both novel natural product analogs and industrially relevant small molecules. However, a limited theoretical understanding of the determinants of PKS fold and function poses a substantial barrier to the design of active variants, and identifying strategies to reliably construct functional PKS chimeras remains an active area of research. In this work, we formalize a paradigm for the design of PKS chimeras and introduce ClusterCAD as a computational platform to streamline and simplify the process of designing experiments to test strategies for engineering PKS variants. ClusterCAD provides chemical structures with stereochemistry for the intermediates generated by each PKS module, as well as sequence- and structure-based search tools that allow users to identify modules based either on amino acid sequence or on the chemical structure of the cognate polyketide intermediate. ClusterCAD can be accessed at https://clustercad.jbei.org and at http://clustercad.igb.uci.edu.

  6. Automated Work Package: Initial Wireless Communication Platform Design, Development, and Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al Rashdan, Ahmad Yahya Mohammad; Agarwal, Vivek

    The Department of Energy’s Light Water Reactor Sustainability Program is developing the scientific basis to ensure long-term reliability, productivity, safety, and security of the nuclear power industry in the United States. The Instrumentation, Information, and Control (II&C) pathway of the program aims to increase the role of advanced II&C technologies to achieve this objective. One of the pathway efforts at Idaho National Laboratory (INL) is to improve the work packages execution process by replacing the expensive, inefficient, bulky, complex, and error-prone paper-based work orders with automated work packages (AWPs). An AWP is an automated and dynamic presentation of the work package designed to guide the user through the work process. It is loaded on a mobile device, such as a tablet, and is capable of communicating with plant equipment and systems to acquire plant and procedure states. The AWP replaces those functions where a computer is more efficient and reliable than a human. To enable the automatic acquisition of plant data, it is necessary to design and develop a prototype platform for data exchange between the field instruments and the AWP mobile devices. The development of the platform aims to reveal issues and solutions generalizable to large-scale implementation of a similar system. Topics such as bandwidth, robustness, response time, interference, and security are usually associated with wireless communication. These concerns, along with other requirements, are listed in an earlier INL report. Specifically, the targeted issues and performance aspects in this work are relevant to the communication infrastructure from the perspective of promptness, robustness, expandability, and interoperability with different technologies.

  7. Using a Smart Phone as a Standalone Platform for Detection and Monitoring of Pathological Tremors

    PubMed Central

    Daneault, Jean-François; Carignan, Benoit; Codère, Carl Éric; Sadikot, Abbas F.; Duval, Christian

    2013-01-01

    Introduction: Smart phones are becoming ubiquitous and their computing capabilities are ever increasing. Consequently, more attention is geared toward their potential use in research and medical settings. For instance, their built-in hardware can provide quantitative data for different movements. Therefore, the goal of the current study was to evaluate the capabilities of a standalone smart phone platform to characterize tremor. Methods: A smart phone application for tremor quantification and online analysis was developed. Then, smart phone results were compared to those obtained simultaneously with a laboratory accelerometer. Finally, results from the smart phone were compared to clinical tremor assessments. Results: Algorithms for tremor recording and online analysis can be implemented within a smart phone. The smart phone provides reliable time- and frequency-domain tremor characteristics. The smart phone can also provide medically relevant tremor assessments. Discussion: Smart phones have the potential to provide researchers and clinicians with quantitative short- and long-term tremor assessments that are currently not easily available. PMID:23346053
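
    The frequency-domain characteristics mentioned in the results can be illustrated with a few lines of signal processing; the sketch below estimates the dominant tremor frequency from a single accelerometer axis and is a generic illustration under the assumption of a fixed sampling rate, not the application's actual algorithm.

        import numpy as np

        def dominant_tremor_frequency(accel, fs_hz):
            """Dominant frequency of a 1-D accelerometer trace in the 2-15 Hz tremor band."""
            accel = np.asarray(accel, dtype=float)
            accel -= accel.mean()                          # remove gravity/DC offset
            power = np.abs(np.fft.rfft(accel)) ** 2        # one-sided power spectrum
            freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs_hz)
            band = (freqs >= 2.0) & (freqs <= 15.0)        # band where pathological tremor lies
            return freqs[band][np.argmax(power[band])]

        # Example: 10 s of a simulated 5 Hz tremor sampled at 100 Hz
        fs = 100.0
        t = np.arange(0.0, 10.0, 1.0 / fs)
        trace = np.sin(2 * np.pi * 5.0 * t) + 0.3 * np.random.randn(t.size)
        print(dominant_tremor_frequency(trace, fs))        # ~5.0 Hz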

  8. Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models

    NASA Astrophysics Data System (ADS)

    Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.

    2017-12-01

    While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism group (ngdc.noaa.gov/geomag/) develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used in a variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we will describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load balancing, real-time monitoring, and instance cloning. We will also briefly discuss the progress achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop an API interface to our Enhanced Magnetic Model (EMM).

  9. Development and Validation of a Portable Platform for Deploying Decision-Support Algorithms in Prehospital Settings

    PubMed Central

    Reisner, A. T.; Khitrov, M. Y.; Chen, L.; Blood, A.; Wilkins, K.; Doyle, W.; Wilcox, S.; Denison, T.; Reifman, J.

    2013-01-01

    Summary Background Advanced decision-support capabilities for prehospital trauma care may prove effective at improving patient care. Such functionality would be possible if an analysis platform were connected to a transport vital-signs monitor. In practice, there are technical challenges to implementing such a system. Not only must each individual component be reliable, but, in addition, the connectivity between components must be reliable. Objective We describe the development, validation, and deployment of the Automated Processing of Physiologic Registry for Assessment of Injury Severity (APPRAISE) platform, intended to serve as a test bed to help evaluate the performance of decision-support algorithms in a prehospital environment. Methods We describe the hardware selected and the software implemented, and the procedures used for laboratory and field testing. Results The APPRAISE platform met performance goals in both laboratory testing (using a vital-sign data simulator) and initial field testing. After its field testing, the platform has been in use on Boston MedFlight air ambulances since February of 2010. Conclusion These experiences may prove informative to other technology developers and to healthcare stakeholders seeking to invest in connected electronic systems for prehospital as well as in-hospital use. Our experiences illustrate two sets of important questions: are the individual components reliable (e.g., physical integrity, power, core functionality, and end-user interaction) and is the connectivity between components reliable (e.g., communication protocols and the metadata necessary for data interpretation)? While all potential operational issues cannot be fully anticipated and eliminated during development, thoughtful design and phased testing steps can reduce, if not eliminate, technical surprises. PMID:24155791

  10. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms

    PubMed Central

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications like those in medical care and industrial processes. This technique in medical care has the ability to display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphic processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one way of achieving real-time image processing. Edge detection is an early stage in most of the image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts’ Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner. This has been achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2–100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms. PMID:28487831

  11. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms.

    PubMed

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications like those in medical care and industrial processes. This technique in medical care has the ability to display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphic processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one way of achieving real-time image processing. Edge detection is an early stage in most of the image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner. This has been achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2-100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms.
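
    As a minimal reference point for the detectors compared above, the sketch below runs the Canny and Sobel operators on the CPU with OpenCV's Python bindings; the thresholds and the synthetic image are illustrative, and the GPU variants would come from an OpenCV build with CUDA modules, which is not shown here.

        import cv2
        import numpy as np

        image = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in for an OCT image

        # Canny with illustrative hysteresis thresholds (tuned per dataset in practice)
        canny_edges = cv2.Canny(image, threshold1=50, threshold2=150)

        # Sobel gradients in x and y, combined into a gradient-magnitude map
        gx = cv2.Sobel(image, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(image, cv2.CV_32F, 0, 1, ksize=3)
        sobel_magnitude = cv2.magnitude(gx, gy)

        print(canny_edges.shape, sobel_magnitude.dtype)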

  12. In vitro fatigue tests and in silico finite element analysis of dental implants with different fixture/abutment joint types using computer-aided design models.

    PubMed

    Yamaguchi, Satoshi; Yamanishi, Yasufumi; Machado, Lucas S; Matsumoto, Shuji; Tovar, Nick; Coelho, Paulo G; Thompson, Van P; Imazato, Satoshi

    2018-01-01

    The aim of this study was to evaluate fatigue resistance of dental fixtures with two different fixture-abutment connections by in vitro fatigue testing and in silico three-dimensional finite element analysis (3D FEA) using original computer-aided design (CAD) models. Dental implant fixtures with external connection (EX) or internal connection (IN) abutments were fabricated from original CAD models using grade IV titanium and step-stress accelerated life testing was performed. Fatigue cycles and loads were assessed by Weibull analysis, and fatigue cracking was observed by micro-computed tomography and a stereomicroscope with high dynamic range software. Using the same CAD models, displacement vectors of implant components were also analyzed by 3D FEA. Angles of the fractured line occurring at fixture platforms in vitro and of displacement vectors corresponding to the fractured line in silico were compared by two-way ANOVA. Fatigue testing showed significantly greater reliability for IN than EX (p<0.001). Fatigue crack initiation was primarily observed at implant fixture platforms. FEA demonstrated that crack lines of both implant systems in vitro were observed in the same direction as displacement vectors of the implant fixtures in silico. In silico displacement vectors in the implant fixture are insightful for geometric development of dental implants to reduce complex interactions leading to fatigue failure. Copyright © 2017 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  13. Build and Execute Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Qiang

    At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.
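
    To make the container idea concrete, the sketch below launches one containerized analysis step entirely in user space with the Docker CLI; the image, mount path and script are placeholders, and a full BEE-style system would wrap this call with provenance capture, checkpointing, and VM-level isolation.

        import subprocess

        # Run one workflow step inside a container; paths and image are placeholders.
        result = subprocess.run(
            [
                "docker", "run", "--rm",
                "-v", "/data/run-001:/workdir",   # share the run directory with the container
                "python:3.11-slim",               # any image providing the step's software stack
                "python", "/workdir/analyze.py",  # hypothetical analysis routine inside the container
            ],
            capture_output=True,
            text=True,
            check=False,
        )
        print(result.returncode, result.stdout[-200:])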

  14. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads.

    PubMed

    Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-05-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.

  15. Network challenges for cyber physical systems with tiny wireless devices: a case study on reliable pipeline condition monitoring.

    PubMed

    Ali, Salman; Qaisar, Saad Bin; Saeed, Husnain; Khan, Muhammad Farhan; Naeem, Muhammad; Anpalagan, Alagan

    2015-03-25

    The synergy of computational and physical network components leading to the Internet of Things, Data and Services has been made feasible by the use of Cyber Physical Systems (CPSs). CPS engineering promises to impact system condition monitoring for a diverse range of fields from healthcare, manufacturing, and transportation to aerospace and warfare. CPS for environment monitoring applications completely transforms human-to-human, human-to-machine and machine-to-machine interactions with the use of Internet Cloud. A recent trend is to gain assistance from mergers between virtual networking and physical actuation to reliably perform all conventional and complex sensing and communication tasks. Oil and gas pipeline monitoring provides a novel example of the benefits of CPS, providing a reliable remote monitoring platform to leverage environment, strategic and economic benefits. In this paper, we evaluate the applications and technical requirements for seamlessly integrating CPS with sensor network plane from a reliability perspective and review the strategies for communicating information between remote monitoring sites and the widely deployed sensor nodes. Related challenges and issues in network architecture design and relevant protocols are also provided with classification. This is supported by a case study on implementing reliable monitoring of oil and gas pipeline installations. Network parameters like node-discovery, node-mobility, data security, link connectivity, data aggregation, information knowledge discovery and quality of service provisioning have been reviewed.

  16. Network Challenges for Cyber Physical Systems with Tiny Wireless Devices: A Case Study on Reliable Pipeline Condition Monitoring

    PubMed Central

    Ali, Salman; Qaisar, Saad Bin; Saeed, Husnain; Farhan Khan, Muhammad; Naeem, Muhammad; Anpalagan, Alagan

    2015-01-01

    The synergy of computational and physical network components leading to the Internet of Things, Data and Services has been made feasible by the use of Cyber Physical Systems (CPSs). CPS engineering promises to impact system condition monitoring for a diverse range of fields from healthcare, manufacturing, and transportation to aerospace and warfare. CPS for environment monitoring applications completely transforms human-to-human, human-to-machine and machine-to-machine interactions with the use of Internet Cloud. A recent trend is to gain assistance from mergers between virtual networking and physical actuation to reliably perform all conventional and complex sensing and communication tasks. Oil and gas pipeline monitoring provides a novel example of the benefits of CPS, providing a reliable remote monitoring platform to leverage environment, strategic and economic benefits. In this paper, we evaluate the applications and technical requirements for seamlessly integrating CPS with sensor network plane from a reliability perspective and review the strategies for communicating information between remote monitoring sites and the widely deployed sensor nodes. Related challenges and issues in network architecture design and relevant protocols are also provided with classification. This is supported by a case study on implementing reliable monitoring of oil and gas pipeline installations. Network parameters like node-discovery, node-mobility, data security, link connectivity, data aggregation, information knowledge discovery and quality of service provisioning have been reviewed. PMID:25815444

  17. Intersession reliability of self-selected and narrow stance balance testing in older adults.

    PubMed

    Riemann, Bryan L; Piersol, Kelsey

    2017-10-01

    Despite the common practice of using force platforms to assess balance of older adults, few investigations have examined the reliability of postural screening tests in this population. We sought to determine the test-retest reliability of self-selected and narrow stance balance testing with eyes open and eyes closed in healthy older adults. Thirty older adults (>65 years) completed 45 s trials of eyes open and eyes closed stability tests using self-selected and narrow stances on two separate days (1.9 ± .7 days). Average medial-lateral center of pressure velocity was computed. The ICC results ranged from .74 to .86, and no significant systematic changes (P < .05) occurred between the testing sessions for any of the tests. The standard error of measurement ranged from 15.9 to 23.6%. Reliability estimates were similar between the two stances and visual conditions assessed. Slightly higher coefficients were identified for the self-selected stances compared to the narrow stances under both visual conditions; however, there were negligible differences between the sessions. The within subject session-to-session variability provides a basis for further research to consider differences between fallers and non-fallers. Reliability for eyes open and closed balance testing using self-selected and narrow stances in older adults was established which should provide a foundation for the development of fall risk screening tests.

  18. What computational non-targeted mass spectrometry-based metabolomics can gain from shotgun proteomics.

    PubMed

    Hamzeiy, Hamid; Cox, Jürgen

    2017-02-01

    Computational workflows for mass spectrometry-based shotgun proteomics and untargeted metabolomics share many steps. Despite the similarities, untargeted metabolomics is lagging behind in terms of reliable fully automated quantitative data analysis. We argue that metabolomics will strongly benefit from the adaptation of successful automated proteomics workflows to metabolomics. MaxQuant is a popular platform for proteomics data analysis and is widely considered to be superior in achieving high precursor mass accuracies through advanced nonlinear recalibration, usually leading to five- to ten-fold better accuracy in complex LC-MS/MS runs. This translates to a sharp decrease in the number of peptide candidates per measured feature, thereby strongly improving the coverage of identified peptides. We argue that similar strategies can be applied to untargeted metabolomics, leading to equivalent improvements in metabolite identification. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  19. In situ single-atom array synthesis using dynamic holographic optical tweezers

    PubMed Central

    Kim, Hyosub; Lee, Woojun; Lee, Han-gyeol; Jo, Hanlae; Song, Yunheung; Ahn, Jaewook

    2016-01-01

    Establishing a reliable method to form scalable neutral-atom platforms is an essential cornerstone for quantum computation, quantum simulation and quantum many-body physics. Here we demonstrate a real-time transport of single atoms using holographic microtraps controlled by a liquid-crystal spatial light modulator. For this, an analytical design approach to flicker-free microtrap movement is devised and cold rubidium atoms are simultaneously rearranged with 2N motional degrees of freedom, representing unprecedented space controllability. We also accomplish an in situ feedback control for single-atom rearrangements with the high success rate of 99% for up to 10 μm translation. We hope this proof-of-principle demonstration of high-fidelity atom-array preparations will be useful for deterministic loading of N single atoms, especially on arbitrary lattice locations, and also for real-time qubit shuttling in high-dimensional quantum computing architectures. PMID:27796372

  20. NORTICA—a new code for cyclotron analysis

    NASA Astrophysics Data System (ADS)

    Gorelov, D.; Johnson, D.; Marti, F.

    2001-12-01

    The new package NORTICA (Numerical ORbit Tracking In Cyclotrons with Analysis) of computer codes for beam dynamics simulations is under development at NSCL. The package was started as a replacement for the code MONSTER [1] developed in the laboratory in the past. The new codes are capable of beam dynamics simulations in both CCF (Coupled Cyclotron Facility) accelerators, the K500 and K1200 superconducting cyclotrons. The general purpose of this package is assisting in setting and tuning the cyclotrons taking into account the main field and extraction channel imperfections. The computer platform for the package is Alpha Station with UNIX operating system and X-Windows graphic interface. A multiple programming language approach was used in order to combine the reliability of the numerical algorithms developed over the long period of time in the laboratory and the friendliness of modern style user interface. This paper describes the capability and features of the codes in the present state.

  1. Wii Balance Board: Reliability and Clinical Use in Assessment of Balance in Healthy Elderly Women.

    PubMed

    Monteiro-Junior, Renato Sobral; Ferreira, Arthur Sá; Puell, Vivian Neiva; Lattari, Eduardo; Machado, Sérgio; Otero Vaghetti, César Augusto; da Silva, Elirez Bezerra

    2015-01-01

    The force plate is considered the gold standard tool to assess body balance. However, the Wii Balance Board (WBB) platform is trustworthy equipment for assessing stabilometric components in young people. Thus, we aimed to examine the reliability of center of pressure measures obtained with the WBB in healthy elderly women. Twenty-one healthy and physically active women were enrolled in the study (age: 64 ± 7 years; body mass index: 29 ± 5 kg/m²). The WBB was used to assess the center of pressure measures in the individuals. Pressure was linearly applied to different points to test the platform precision. Three assessments were performed, with two of them held on the same day at a 5- to 10-minute interval, and the third one performed 48 h later. A linear regression analysis was used to establish linearity, while the intraclass correlation coefficient was used to assess reliability. The platform precision was adequate (R² = 0.997, P = 0.01). Center of pressure measures showed excellent reliability (all intraclass correlation coefficient values were > 0.90; p < 0.01). The WBB is a precise and reliable tool for the quantitative measurement of body stability in healthy active elderly women and its use should be encouraged in clinical settings.

  2. Development of a computer model to predict platform station keeping requirements in the Gulf of Mexico using remote sensing data

    NASA Technical Reports Server (NTRS)

    Barber, Bryan; Kahn, Laura; Wong, David

    1990-01-01

    Offshore operations such as oil drilling and radar monitoring require semisubmersible platforms to remain stationary at specific locations in the Gulf of Mexico. Ocean currents, wind, and waves in the Gulf of Mexico tend to move platforms away from their desired locations. A computer model was created to predict the station keeping requirements of a platform. The computer simulation uses remote sensing data from satellites and buoys as input. A background of the project, alternate approaches to the project, and the details of the simulation are presented.

  3. Traffic information computing platform for big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying; Zheng, Xibin

    The big data environment creates the data conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the nature and technological characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. In the big data environment, this type of traffic atomic information computing architecture helps guarantee traffic safety and efficient operation, and enables more intelligent and personalized traffic information services for traffic information users.

  4. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    NASA Astrophysics Data System (ADS)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

    We compare several modifications to the open-source wave optics package, WavePy, intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility for extensive improvement in terms of efficiency compared to a fully featured workstation.
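
    The FFT whose backend the paper benchmarks dominates wave-optics runtime because each propagation step is essentially a pair of 2-D transforms; the sketch below shows a generic angular-spectrum propagation step written with NumPy, as an illustration of the workload rather than WavePy's own API.

        import numpy as np

        def angular_spectrum_propagate(u0, wavelength, dx, z):
            """Propagate a sampled complex field u0 over a distance z (all lengths in metres)."""
            n = u0.shape[0]
            fx = np.fft.fftfreq(n, d=dx)
            fxx, fyy = np.meshgrid(fx, fx)
            # Free-space transfer function; evanescent components are suppressed
            arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
            kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            transfer = np.exp(1j * kz * z) * (arg > 0)
            return np.fft.ifft2(np.fft.fft2(u0) * transfer)

        # Example: plane wave through a 1 mm circular aperture, propagated 10 cm
        n, dx = 512, 10e-6
        x = (np.arange(n) - n / 2) * dx
        xx, yy = np.meshgrid(x, x)
        aperture = (xx ** 2 + yy ** 2 < (1e-3) ** 2).astype(complex)
        u1 = angular_spectrum_propagate(aperture, wavelength=633e-9, dx=dx, z=0.1)
        print(np.abs(u1).max())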

  5. Study of thermal management for space platform applications

    NASA Technical Reports Server (NTRS)

    Oren, J. A.

    1980-01-01

    Techniques for the management of the thermal energy of large space platforms using many hundreds of kilowatts over a 10 year life span were evaluated. Concepts for heat rejection, heat transport within the vehicle, and interfacing were analyzed and compared. The heat rejection systems were parametrically weight optimized over conditions for heat pipe and pumped fluid approaches. Two approaches to achieve reliability were compared for: performance, weight, volume, projected area, reliability, cost, and operational characteristics. Technology needs are assessed and technology advancement recommendations are made.

  6. Interactive Computer-Assisted Instruction in Acid-Base Physiology for Mobile Computer Platforms

    ERIC Educational Resources Information Center

    Longmuir, Kenneth J.

    2014-01-01

    In this project, the traditional lecture hall presentation of acid-base physiology in the first-year medical school curriculum was replaced by interactive, computer-assisted instruction designed primarily for the iPad and other mobile computer platforms. Three learning modules were developed, each with ~20 screens of information, on the subjects…

  7. Automation of diagnostic genetic testing: mutation detection by cyclic minisequencing.

    PubMed

    Alagrund, Katariina; Orpana, Arto K

    2014-01-01

    The rising role of nucleic acid testing in clinical decision making is creating a need for efficient and automated diagnostic nucleic acid test platforms. Clinical use of nucleic acid testing places demands for shorter turnaround times (TATs), lower production costs, and robust, reliable methods that can easily adopt new test panels and run rare tests on a random-access principle. Here we present a novel home-brew laboratory automation platform for diagnostic mutation testing. This platform is based on cyclic minisequencing (cMS) and two-color near-infrared (NIR) detection. Pipetting is automated using Tecan Freedom EVO pipetting robots and all assays are performed in 384-well microplate format. The automation platform includes a data processing system controlling all procedures, and automated patient result reporting to the hospital information system. We have found automated cMS to be a reliable, inexpensive and robust method for nucleic acid testing across a wide variety of diagnostic tests. The platform is currently in clinical use for over 80 mutations or polymorphisms. In addition to tests performed on blood samples, the system also performs an epigenetic test for the methylation of the MGMT gene promoter, and companion diagnostic tests for the analysis of KRAS and BRAF gene mutations from formalin-fixed and paraffin-embedded tumor samples. Automation of genetic test reporting has been found reliable and efficient, decreasing the workload of academic personnel.

  8. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922

  9. Autonomous self-organizing resource manager for multiple networked platforms

    NASA Astrophysics Data System (ADS)

    Smith, James F., III

    2002-08-01

    A fuzzy logic based expert system for resource management has been developed that automatically allocates electronic attack (EA) resources in real-time over many dissimilar autonomous naval platforms defending their group against attackers. The platforms can be very general, e.g., ships, planes, robots, land based facilities, etc. Potential foes the platforms deal with can also be general. This paper provides an overview of the resource manager including the four fuzzy decision trees that make up the resource manager; the fuzzy EA model; genetic algorithm based optimization; co-evolutionary data mining through gaming; and mathematical, computational and hardware based validation. Methods of automatically designing new multi-platform EA techniques are considered. The expert system runs on each defending platform, rendering it an autonomous system requiring no human intervention. There is no commanding platform. Instead the platforms work cooperatively as a function of battlespace geometry; sensor data such as range, bearing, ID, and uncertainty measures for sensor output; intelligence reports; etc. Computational experiments show the defending networked platforms' ability to self-organize. This ability is illustrated through the output of the scenario generator, a software package that automates the underlying data mining problem and creates a computer movie of the platforms' interaction for evaluation.

  10. Uncover the Cloud for Geospatial Sciences and Applications to Adopt Cloud Computing

    NASA Astrophysics Data System (ADS)

    Yang, C.; Huang, Q.; Xia, J.; Liu, K.; Li, J.; Xu, C.; Sun, M.; Bambacus, M.; Xu, Y.; Fay, D.

    2012-12-01

    Cloud computing is emerging as the future infrastructure for providing computing resources to support and enable scientific research, engineering development, and application construction, as well as workforce education. On the other hand, there is considerable doubt about the readiness of cloud computing to support a variety of scientific research, development and education activities. This research, a project funded by NASA SMD, investigates through holistic studies how ready cloud computing is to support the geosciences. Four applications with different computing characteristics, including data, computing, concurrent, and spatiotemporal intensities, are used to test this readiness. Three popular and representative cloud platforms, Amazon EC2, Microsoft Azure, and NASA Nebula, as well as a traditional cluster, are utilized in the study. The results illustrate that the cloud is ready to some degree, but more research needs to be done to fully realize the cloud benefits as advertised by many vendors and defined by NIST. Specifically: 1) most cloud platforms can stand up a new computing instance, in effect a new computer, in a few minutes as envisioned, and are therefore ready to support most computing needs in an on-demand fashion; 2) load balancing and elasticity, a defining characteristic, is ready in some cloud platforms, such as Amazon EC2, to support bigger jobs, e.g., those needing a response in minutes, while others are not yet ready to support elasticity and load balancing well, and all cloud platforms need further research and development to support real-time applications at the sub-minute level; 3) the user interface and functionality of cloud platforms vary widely; some are professional and well supported/documented, such as Amazon EC2, while others need significant improvement before the general public can adopt cloud computing without professional training or knowledge of the computing infrastructure; 4) security is a big concern in cloud computing; given its sharing spirit it is very hard to ensure a higher level of security unless a private cloud is built for a specific organization without public access, and public cloud platforms do not yet support the FISMA medium level and may never be able to support the FISMA high level; 5) HPC jobs are not well supported in cloud computing, and only Amazon EC2 supports them well. The results are being used by NASA and other agencies as they consider cloud computing adoption. We hope the publication of this research will also help the public to adopt cloud computing.

  11. Nonclassical light sources for silicon photonics

    NASA Astrophysics Data System (ADS)

    Bajoni, Daniele; Galli, Matteo

    2017-09-01

    Quantum photonics has recently attracted a lot of attention for its disruptive potential in emerging technologies like quantum cryptography, quantum communication and quantum computing. Driven by the impressive development in nanofabrication technologies and nanoscale engineering, silicon photonics has rapidly become the platform of choice for on-chip integration of high performing photonic devices, now extending their functionalities towards quantum-based applications. Focusing on quantum Information Technology (qIT) as a key application area, we review recent progress in integrated silicon-based sources of nonclassical states of light. We assess the state of the art in this growing field and highlight the challenges that need to be overcome to make quantum photonics a reliable and widespread technology.

  12. Dependable control systems with Internet of Things.

    PubMed

    Tran, Tri; Ha, Q P

    2015-11-01

    This paper presents an Internet of Things (IoT)-enabled dependable control system (DepCS) for continuous processes. In a DepCS, an actuator and a transmitter form a regulatory control loop. Each processor inside such an actuator or transmitter is designed as a computational platform that implements the feedback control algorithm. The connections between actuators and transmitters via IoT create a reliable backbone for a DepCS. A centralized input-output marshaling system is not required in DepCSs. A state feedback control synthesis method for DepCSs applying the self-recovery constraint is presented in the second part of the paper. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Automatic and accurate reconstruction of distal humerus contours through B-Spline fitting based on control polygon deformation.

    PubMed

    Mostafavi, Kamal; Tutunea-Fatan, O Remus; Bordatchev, Evgueni V; Johnson, James A

    2014-12-01

    The strong advent of computer-assisted technologies in modern orthopedic surgery calls for the expansion of computationally efficient techniques built on the broad base of readily available computer-aided engineering tools. However, one of the common challenges faced during the current developmental phase remains the lack of reliable frameworks that allow a fast and precise conversion of the anatomical information acquired through computed tomography to a format acceptable to computer-aided engineering software. To address this, this study proposes an integrated and automatic framework capable of extracting and then postprocessing the original imaging data into a common planar and closed B-Spline representation. The core of the developed platform relies on the approximation of the discrete computed tomography data by means of an original two-step B-Spline fitting technique based on successive deformations of the control polygon. In addition to its rapidity and robustness, the developed fitting technique was validated to produce accurate representations that do not deviate by more than 0.2 mm with respect to alternate representations of the bone geometry obtained through different contact-based data acquisition or data processing methods. © IMechE 2014.

  14. Rotating Desk for Collaboration by Two Computer Programmers

    NASA Technical Reports Server (NTRS)

    Riley, John Thomas

    2005-01-01

    A special-purpose desk has been designed to facilitate collaboration by two computer programmers sharing one desktop computer or computer terminal. The impetus for the design is a trend toward what is known in the software industry as extreme programming, an approach intended to ensure high quality without sacrificing the quantity of computer code produced. Programmers working in pairs is a major feature of extreme programming. The present desk design minimizes the stress of the collaborative work environment. It supports both quality and work flow by making it unnecessary for programmers to get in each other's way. The desk (see figure) includes a rotating platform that supports a computer video monitor, keyboard, and mouse. The desk enables one programmer to work on the keyboard for any amount of time and then the other programmer to take over without breaking the train of thought. The rotating platform is supported by a turntable bearing that, in turn, is supported by a weighted base. The platform contains weights to improve its balance. The base includes a stand for a computer, and is shaped and dimensioned to provide adequate foot clearance for both users. The platform includes an adjustable stand for the monitor, a surface for the keyboard and mouse, and spaces for work papers, drinks, and snacks. The heights of the monitor, keyboard, and mouse are set to minimize stress. The platform can be rotated through an angle of 40° to give either user a straight-on view of the monitor and full access to the keyboard and mouse. Magnetic latches keep the platform preferentially at either of the two extremes of rotation. To switch between users, one simply grabs the edge of the platform and pulls it around. The magnetic latch is easily released, allowing the platform to rotate freely to the position of the other user.

  15. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be used to build low-power computing clusters.

  16. Evaluation of wind field statistics near and inside clouds using a coherent Doppler lidar

    NASA Astrophysics Data System (ADS)

    Lottman, Brian Todd

    1998-09-01

    This work proposes advanced techniques for measuring the spatial wind field statistics near and inside clouds using a vertically pointing solid state coherent Doppler lidar on a fixed ground based platform. The coherent Doppler lidar is an ideal instrument for high spatial and temporal resolution velocity estimates. The basic parameters of lidar are discussed, including a complete statistical description of the Doppler lidar signal. This description is extended to cases with simple functional forms for aerosol backscatter and velocity. An estimate for the mean velocity over a sensing volume is produced by estimating the mean spectra. There are many traditional spectral estimators, which are useful for conditions with slowly varying velocity and backscatter. A new class of estimators (novel) is introduced that produces reliable velocity estimates for conditions with large variations in aerosol backscatter and velocity with range, such as cloud conditions. Performance of traditional and novel estimators is computed for a variety of deterministic atmospheric conditions using computer simulated data. Wind field statistics are produced for actual data for a cloud deck, and for multi-layer clouds. Unique results include detection of possible spectral signatures for rain, estimates for the structure function inside a cloud deck, reliable velocity estimation techniques near and inside thin clouds, and estimates for simple wind field statistics between cloud layers.

  17. Continuous measurement of breast tumor hormone receptor expression: a comparison of two computational pathology platforms

    PubMed Central

    Ahern, Thomas P.; Beck, Andrew H.; Rosner, Bernard A.; Glass, Ben; Frieling, Gretchen; Collins, Laura C.; Tamimi, Rulla M.

    2017-01-01

    Background Computational pathology platforms incorporate digital microscopy with sophisticated image analysis to permit rapid, continuous measurement of protein expression. We compared two computational pathology platforms on their measurement of breast tumor estrogen receptor (ER) and progesterone receptor (PR) expression. Methods Breast tumor microarrays from the Nurses’ Health Study were stained for ER (n=592) and PR (n=187). One expert pathologist scored cases as positive if ≥1% of tumor nuclei exhibited stain. ER and PR were then measured with the Definiens Tissue Studio (automated) and Aperio Digital Pathology (user-supervised) platforms. Platform-specific measurements were compared using boxplots, scatter plots, and correlation statistics. Classification of ER and PR positivity by platform-specific measurements was evaluated with areas under receiver operating characteristic curves (AUC) from univariable logistic regression models, using expert pathologist classification as the standard. Results Both platforms showed considerable overlap in continuous measurements of ER and PR between positive and negative groups classified by expert pathologist. Platform-specific measurements were strongly and positively correlated with one another (rho≥0.77). The user-supervised Aperio workflow performed slightly better than the automated Definiens workflow at classifying ER positivity (AUC_Aperio=0.97; AUC_Definiens=0.90; difference=0.07, 95% CI: 0.05, 0.09) and PR positivity (AUC_Aperio=0.94; AUC_Definiens=0.87; difference=0.07, 95% CI: 0.03, 0.12). Conclusion Paired hormone receptor expression measurements from two different computational pathology platforms agreed well with one another. The user-supervised workflow yielded better classification accuracy than the automated workflow. Appropriately validated computational pathology algorithms enrich molecular epidemiology studies with continuous protein expression data and may accelerate tumor biomarker discovery. PMID:27729430
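
    As a hedged illustration of the AUC evaluation described above (not the study's actual pipeline), the sketch below scores a continuous platform measurement against a binary pathologist classification using scikit-learn; the arrays are synthetic stand-ins for the ER data.

      # Illustrative AUC computation for a continuous receptor measurement against a
      # binary expert classification (synthetic data; scikit-learn assumed available).
      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 200
      er_positive = rng.integers(0, 2, size=n)        # pathologist call: 1 = ER+, 0 = ER-
      # Continuous platform score: higher on average for ER+ tumors, with overlap.
      platform_score = rng.normal(loc=0.3 + 0.4 * er_positive, scale=0.2, size=n)

      auc = roc_auc_score(er_positive, platform_score)
      print(f"AUC = {auc:.2f}")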

  18. Using a dry electrode EEG device during balance tasks in healthy young-adult males: Test-retest reliability analysis.

    PubMed

    Collado-Mateo, Daniel; Adsuar, Jose C; Olivares, Pedro R; Cano-Plasencia, Ricardo; Gusi, Narcis

    2015-01-01

    The analysis of brain activity during balance is an important topic in different fields of science. Given that all measurements involve an error that is caused by different agents, like the instrument, the researcher, or the natural human variability, a test-retest reliability evaluation of the electroencephalographic assessment is a needed starting point. However, there is a lack of information about the reliability of electroencephalographic measurements, especially in a new wireless device with dry electrodes. The current study aims to analyze the reliability of electroencephalographic measurements from a wireless device using dry electrodes during two different balance tests. Seventeen healthy male volunteers performed two different static balance tasks on a Biodex Balance Platform: (a) with two feet on the platform and (b) with one foot on the platform. Electroencephalographic data was recorded using Enobio (Neuroelectrics). The mean power spectrum of the alpha band of the central and frontal channels was calculated. Relative and absolute indices of reliability were also calculated. In general terms, the intraclass correlation coefficient (ICC) values of all the assessed channels can be classified as excellent (>0.90). The percentage standard error of measurement oscillated from 0.54% to 1.02% and the percentage smallest real difference ranged from 1.50% to 2.82%. Electroencephalographic assessment through an Enobio device during balance tasks has an excellent reliability. However, its utility was not demonstrated because responsiveness was not assessed.
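
    The reliability indices reported above can be reproduced on any subjects-by-trials matrix with a few lines of NumPy. The sketch below assumes an ICC(3,1) consistency model and synthetic alpha-band power values, so it illustrates the formulas rather than the study's exact analysis.

      # Hedged sketch: ICC(3,1), %SEM, and %SRD for a subjects-by-trials matrix
      # (synthetic data; the specific ICC form is an assumption).
      import numpy as np

      rng = np.random.default_rng(1)
      n, k = 17, 2                                   # subjects, repeated trials
      true_level = rng.normal(10.0, 2.0, size=(n, 1))
      data = true_level + rng.normal(0.0, 0.1, size=(n, k))

      grand = data.mean()
      ss_rows = k * np.sum((data.mean(axis=1) - grand) ** 2)   # between-subjects
      ss_cols = n * np.sum((data.mean(axis=0) - grand) ** 2)   # between-trials
      ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols

      ms_rows = ss_rows / (n - 1)
      ms_err = ss_err / ((n - 1) * (k - 1))

      icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)  # ICC(3,1), consistency
      sem = data.std(ddof=1) * np.sqrt(1 - icc)                # standard error of measurement
      srd = 1.96 * np.sqrt(2) * sem                            # smallest real difference

      print(f"ICC = {icc:.3f}, %SEM = {100 * sem / grand:.2f}%, %SRD = {100 * srd / grand:.2f}%")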

  19. Study on the application of mobile internet cloud computing platform

    NASA Astrophysics Data System (ADS)

    Gong, Songchun; Fu, Songyin; Chen, Zheng

    2012-04-01

    The innovative development of computer technology promotes the application of the cloud computing platform, which is in essence a new resource service model that, after adjustments in multiple aspects, meets users' needs for different resources. Cloud computing offers advantages in many respects: it not only reduces the difficulty of operating the system but also makes it easy for users to search, acquire, and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in the operation process. The popularization and promotion of computer technology have driven the creation of digital library models, whose core idea is to strengthen the management of library resource information through computers and to construct a high-performance inquiry and search platform that allows users to access the necessary information resources at any time. Cloud computing distributes computations across a large number of distributed computers and thereby implements a connection service among multiple computers. Digital libraries, as a typical representative of cloud computing applications, can be used to analyze the key technologies of cloud computing.

  20. WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research.

    PubMed

    Nazir, Sajid; Newey, Scott; Irvine, R Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; Wal, René van der

    2017-01-01

    The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (i.e. radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.

  1. WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research

    PubMed Central

    Nazir, Sajid; Newey, Scott; Irvine, R. Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; van der Wal, René

    2017-01-01

    The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named ‘WiseEye’, designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (i.e. radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management. PMID:28076444

  2. Enabling the environmentally clean air transportation of the future: a vision of computational fluid dynamics in 2030

    PubMed Central

    Slotnick, Jeffrey P.; Khodadoust, Abdollah; Alonso, Juan J.; Darmofal, David L.; Gropp, William D.; Lurie, Elizabeth A.; Mavriplis, Dimitri J.; Venkatakrishnan, Venkat

    2014-01-01

    As global air travel expands rapidly to meet demand generated by economic growth, it is essential to continue to improve the efficiency of air transportation to reduce its carbon emissions and address concerns about climate change. Future transports must be ‘cleaner’ and designed to include technologies that will continue to lower engine emissions and reduce community noise. The use of computational fluid dynamics (CFD) will be critical to enable the design of these new concepts. In general, the ability to simulate aerodynamic and reactive flows using CFD has progressed rapidly during the past several decades and has fundamentally changed the aerospace design process. Advanced simulation capabilities not only enable reductions in ground-based and flight-testing requirements, but also provide added physical insight, and enable superior designs at reduced cost and risk. In spite of considerable success, reliable use of CFD has remained confined to a small region of the operating envelope due, in part, to the inability of current methods to reliably predict turbulent, separated flows. Fortunately, the advent of much more powerful computing platforms provides an opportunity to overcome a number of these challenges. This paper summarizes the findings and recommendations from a recent NASA-funded study that provides a vision for CFD in the year 2030, including an assessment of critical technology gaps and needed development, and identifies the key CFD technology advancements that will enable the design and development of much cleaner aircraft in the future. PMID:25024413

  3. Cloud computing for comparative genomics with windows azure platform.

    PubMed

    Kim, Insik; Jung, Jae-Yoon; Deluca, Todd F; Nelson, Tristan H; Wall, Dennis P

    2012-01-01

    Cloud computing services have emerged as a cost-effective alternative for cluster systems as the number of genomes and required computation power to analyze them increased in recent years. Here we introduce the Microsoft Azure platform with detailed execution steps and a cost comparison with Amazon Web Services.

  4. Cloud Computing for Comparative Genomics with Windows Azure Platform

    PubMed Central

    Kim, Insik; Jung, Jae-Yoon; DeLuca, Todd F.; Nelson, Tristan H.; Wall, Dennis P.

    2012-01-01

    Cloud computing services have emerged as a cost-effective alternative for cluster systems as the number of genomes and required computation power to analyze them increased in recent years. Here we introduce the Microsoft Azure platform with detailed execution steps and a cost comparison with Amazon Web Services. PMID:23032609

  5. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
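
    The Amdahl's-law scalability analysis mentioned above reduces to a single formula; the sketch below evaluates the predicted speedup for an assumed 1% serial fraction, a value chosen only for illustration. The measured 12-fold speedup on 12 cores reported above implies a near-zero effective serial fraction for these workloads.

      # Amdahl's law: predicted speedup on n cores for a program with serial fraction s.
      def amdahl_speedup(n_cores: int, serial_fraction: float) -> float:
          return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

      # Illustration with an assumed 1% serial fraction (not a value from the paper).
      for n in (12, 48, 256):
          print(n, round(amdahl_speedup(n, serial_fraction=0.01), 1))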

  6. The APL Balloonborne High Altitude Research Platform (HARP)

    NASA Astrophysics Data System (ADS)

    Adams, D.; Arnold, S.; Bernasconi, P.

    2015-09-01

    The Johns Hopkins University Applied Physics Laboratory (APL) has developed and demonstrated a multi-purpose stratospheric balloonborne gondola known as the High Altitude Research Platform (HARP). HARP provides the power, mechanical supports, thermal control, and data transmission for multiple forms of high-altitude scientific research equipment. The platform has been used for astronomy, cosmology and heliophysics experiments but can also be applied to atmospheric studies, space weather and other forms of high altitude research. HARP has executed five missions. The first was Flare Genesis from Antarctica in 1993 and the most recent was the Balloon Observation Platform for Planetary Science (BOPPS) from New Mexico in 2014. HARP will next be used to fly the Stratospheric Terahertz Observatory mission again, a mission it first performed in 2009. The structure, composed of an aluminum framework, is designed for easy transport and field assembly while providing ready access to the payload and supporting avionics. A light-weighted structure, capable of supporting Ultra-Long Duration Balloon (ULDB) flights that can last more than 100 days, is available. Scientific research payloads as heavy as 600 kg (1322 pounds) and requiring up to 800 Watts of electrical power can be supported. The platform comprises all subsystems required to support and operate the science payload, including both line-of-sight (LOS) and over-the-horizon (OTH) telecommunications, the latter provided by Iridium Pilot. Electrical power is produced by solar panels for multi-day missions and batteries for single-day missions. The avionics design is primarily single-string; however, the use of ruggedized industrial components provides high reliability. The avionics features a Command and Control (C&C) computer and a Pointing Control System (PCS) computer housed within a common unpressurized unit. The avionics operates from ground pressure to 2 Torr and over a temperature range from -30 °C to +85 °C. Science data is stored on-board and also flows through the C&C computer, where it is packetized for real-time downlink. The telecommunications system is capable of LOS downlink up to 3000 kbps and OTH downlink up to 120 kbps. The PCS provides three-axis attitude stability to 1 arcsec and can be used to aim at a fixed point for science observations, to perform science scans, and to track an object ephemeris. This paper provides a description of HARP, summarizes its performance on prior flights, describes its use on upcoming missions, and outlines the characteristics that can be customized to meet the needs of the high altitude research community to support future missions.

  7. Cloud Based Applications and Platforms (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodt-Giles, D.

    2014-05-15

    Presentation to the Cloud Computing East 2014 Conference, where we are highlighting our cloud computing strategy, describing the platforms on the cloud (including Smartgrid.gov), and defining our process for implementing cloud based applications.

  8. Batching System for Superior Service

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Veridian's Portable Batch System (PBS) was the recipient of the 1997 NASA Space Act Award for outstanding software. A batch system is a set of processes for managing queues and jobs. Without a batch system, it is difficult to manage the workload of a computer system. By bundling the enterprise's computing resources, the PBS technology offers users a single coherent interface, resulting in efficient management of the batch services. Users choose which information to package into "containers" for system-wide use. PBS also provides detailed system usage data, a procedure not easily executed without this software. PBS operates on networked, multi-platform UNIX environments. Veridian's new version, PBS Pro™, has additional features and enhancements, including support for additional operating systems. Veridian distributes the original version of PBS as Open Source software via the PBS website. Customers can register and download the software at no cost. PBS Pro is also available via the web and offers additional features such as increased stability, reliability, and fault tolerance. A company using PBS can expect a significant increase in the effective management of its computing resources. Tangible benefits include increased utilization of costly resources and enhanced understanding of computational requirements and user needs.

  9. A cloud computing based platform for sleep behavior and chronic diseases collaborative research.

    PubMed

    Kuo, Mu-Hsing; Borycki, Elizabeth; Kushniruk, Andre; Huang, Yueh-Min; Hung, Shu-Hui

    2014-01-01

    The objective of this study is to propose a Cloud Computing based platform for sleep behavior and chronic disease collaborative research. The platform consists of two main components: (1) a sensing bed sheet with textile sensors to automatically record a patient's sleep behaviors and vital signs, and (2) a service-oriented cloud computing architecture (SOCCA) that provides a data repository and allows for sharing and analysis of collected data. We also describe our systematic approach to implementing the SOCCA. We believe that the new cloud-based platform can provide nurse researchers and other health professional researchers located in differing geographic locations with a cost-effective, flexible, secure, and privacy-preserving research environment.

  10. A Case for Soft Error Detection and Correction in Computational Chemistry.

    PubMed

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

    High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
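
    As a hedged, generic illustration of detection and correction in an iterative optimizer (not the paper's Hartree-Fock implementation), the sketch below runs gradient descent on a simple quadratic, injects a large corruption into the iterate, and uses a bounded-change check plus a checkpoint to detect the fault and roll the step back.

      # Hedged sketch: detect and correct a large soft error during iterative optimization.
      # Gradient descent on f(x) = 0.5 * ||x||^2; a bit-flip-like corruption is injected.
      import numpy as np

      def grad(x):
          return x                                # gradient of 0.5 * ||x||^2

      rng = np.random.default_rng(2)
      x = rng.normal(size=8)
      step, tol = 0.1, 1e-8
      checkpoint = x.copy()

      for it in range(200):
          x_new = x - step * grad(x)

          if it == 50:                            # inject a large soft error (silent corruption)
              x_new[3] += 1e6

          # Detection: a healthy descent step cannot increase the objective,
          # so a large jump marks a corrupted update.
          if 0.5 * x_new @ x_new > 0.5 * x @ x + 1e-3:
              x_new = checkpoint - step * grad(checkpoint)   # correction: redo from checkpoint
          else:
              checkpoint = x.copy()               # keep a recent known-good iterate

          x = x_new
          if 0.5 * x @ x < tol:
              break

      print(f"converged after {it + 1} iterations, f(x) = {0.5 * x @ x:.2e}")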

  11. A Correction Equation for Jump Height Measured Using the Just Jump System.

    PubMed

    McMahon, John J; Jones, Paul A; Comfort, Paul

    2016-05-01

    To determine the concurrent validity and reliability of the popular Just Jump system (JJS) for determining jump height and, if necessary, provide a correction equation for future reference. Eighteen male college athletes performed 3 bilateral countermovement jumps (CMJs) on 2 JJSs (alternative method) that were placed on top of a force platform (criterion method). Two JJSs were used to establish consistency between systems. Jump height was calculated from flight time obtained from the JJS and force platform. Intraclass correlation coefficients (ICCs) demonstrated excellent within-session reliability of the CMJ height measurement derived from both the JJS (ICC = .96, P < .001) and the force platform (ICC = .96, P < .001). Dependent t tests revealed that the JJS yielded a significantly greater CMJ jump height (0.46 ± 0.09 m vs 0.33 ± 0.08 m) than the force platform (P < .001, Cohen d = 1.39, power = 1.00). There was, however, an excellent relationship between CMJ heights derived from the JJS and force platform (r = .998, P < .001, power = 1.00), with a coefficient of determination (R²) of .995. Therefore, the following correction equation was produced: Criterion jump height = (0.8747 × alternative jump height) - 0.0666. The JJS provides a reliable but overestimated measure of jump height. It is suggested, therefore, that practitioners who use the JJS as part of future work apply the correction equation presented in this study to resultant jump-height values.
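
    Both systems derive jump height from flight time, and the correction equation is given explicitly above, so the calculation is easy to sketch: the snippet below applies the standard flight-time formula (h = g·t²/8, assuming equal ascent and descent times) followed by the published correction; the flight-time value is illustrative.

      # Jump height from flight time, then the correction equation reported above.
      G = 9.81  # m/s^2

      def height_from_flight_time(t_flight: float) -> float:
          return G * t_flight ** 2 / 8.0

      def corrected_height(jjs_height: float) -> float:
          # Criterion jump height = 0.8747 * alternative (JJS) jump height - 0.0666
          return 0.8747 * jjs_height - 0.0666

      t = 0.61                                   # illustrative flight time in seconds
      h_jjs = height_from_flight_time(t)         # ~0.46 m, the overestimated JJS value
      print(f"JJS height: {h_jjs:.2f} m, corrected: {corrected_height(h_jjs):.2f} m")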

  12. Computer calculation of device, circuit, equipment, and system reliability.

    NASA Technical Reports Server (NTRS)

    Crosby, D. R.

    1972-01-01

    A grouping into four classes is proposed for all reliability computations related to electronic equipment. Examples are presented of reliability computations in three of these four classes. Each of the three specific reliability tasks described was originally undertaken to satisfy an engineering need for reliability data. The form and interpretation of the printout of the specific reliability computations are presented. The justification for the costs of these computations is indicated. The skills of the personnel used to conduct the analysis, the interfaces between the personnel, and the timing of the projects are discussed.

  13. CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research

    PubMed Central

    Sherif, Tarek; Rioux, Pierre; Rousseau, Marc-Etienne; Kassis, Nicolas; Beck, Natacha; Adalat, Reza; Das, Samir; Glatard, Tristan; Evans, Alan C.

    2014-01-01

    The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource-interoperability in a transparent manner for the end-user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, and Multiple Sclerosis, as well as on normal brain structure and development. This technical report presents the CBRAIN Platform, its current deployment and usage, and future directions. PMID:24904400

  14. CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research.

    PubMed

    Sherif, Tarek; Rioux, Pierre; Rousseau, Marc-Etienne; Kassis, Nicolas; Beck, Natacha; Adalat, Reza; Das, Samir; Glatard, Tristan; Evans, Alan C

    2014-01-01

    The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource-interoperability in a transparent manner for the end-user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, and Multiple Sclerosis, as well as on normal brain structure and development. This technical report presents the CBRAIN Platform, its current deployment and usage, and future directions.

  15. Acceleration of Cherenkov angle reconstruction with the new Intel Xeon/FPGA compute platform for the particle identification in the LHCb Upgrade

    NASA Astrophysics Data System (ADS)

    Faerber, Christian

    2017-10-01

    The LHCb experiment at the LHC will upgrade its detector by 2018/2019 to a ‘triggerless’ readout scheme, in which all the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the Event Filter farm to 40 TBit/s, which also has to be processed to select the interesting proton-proton collisions for later storage. The architecture of such a computing farm, which can process this amount of data as efficiently as possible, is a challenging task, and several compute accelerator technologies are being considered for use inside the new Event Filter farm. In the high performance computing sector, more and more FPGA compute accelerators are used to improve compute performance and reduce power consumption (e.g. in the Microsoft Catapult project and the Bing search engine). For the LHCb upgrade, the usage of an experimental FPGA-accelerated computing platform in the Event Building or in the Event Filter farm is also being considered and therefore tested. This platform from Intel hosts a general CPU and a high performance FPGA linked via a high speed link, which for this platform is a QPI link. An accelerator is implemented on the FPGA. The system used is a two-socket platform from Intel with a Xeon CPU and an FPGA. The FPGA has cache-coherent memory access to the main memory of the server and can collaborate with the CPU. As a first step, a compute-intensive algorithm to reconstruct Cherenkov angles for the LHCb RICH particle identification was successfully ported in Verilog to the Intel Xeon/FPGA platform and accelerated by a factor of 35. The same algorithm was then ported to the Intel Xeon/FPGA platform with OpenCL. The implementation work and the performance are compared. Another FPGA accelerator, the Nallatech 385 PCIe accelerator with the same Stratix V FPGA, was also tested for performance. The results show that Intel Xeon/FPGA platforms, which are built in general for high performance computing, are also very interesting for the High Energy Physics community.

  16. Design and research on the platform of network manufacture product electronic trading

    NASA Astrophysics Data System (ADS)

    Zhou, Zude; Liu, Quan; Jiang, Xuemei

    2003-09-01

    With the rapid globalization of markets and business, E-trading affects every manufacturing enterprise, and the security of network manufacturing products transmitted over the Internet is very important. In this paper we discuss a fair exchange protocol and a platform for E-trading of network manufacture products based on the fair exchange protocol and digital watermarking techniques. The platform achieves reliable trading and copyright protection.

  17. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience

    PubMed Central

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac

    2017-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning across all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes for the same. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Also such releases seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to the management to improve the software development process. As NASA has moved towards a product line engineering for its flight software development, software for future space missions will be developed in a similar manner and hence the analysis results for this mission can be considered as a baseline for future flight software missions. PMID:29278255

  18. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience.

    PubMed

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S; Mcginnis, Issac

    2016-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning across all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes for the same. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Also such releases seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to the management to improve the software development process. As NASA has moved towards a product line engineering for its flight software development, software for future space missions will be developed in a similar manner and hence the analysis results for this mission can be considered as a baseline for future flight software missions.

  19. The Cyborg Astrobiologist: testing a novelty detection algorithm on two mobile exploration systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Gross, C.; Wendt, L.; Bonnici, A.; Souza-Egipsy, V.; Ormö, J.; Díaz-Martínez, E.; Foing, B. H.; Bose, R.; Walter, S.; Oesker, M.; Ontrup, J.; Haschke, R.; Ritter, H.

    2010-01-01

    In previous work, a platform was developed for testing computer-vision algorithms for robotic planetary exploration. This platform consisted of a digital video camera connected to a wearable computer for real-time processing of images at geological and astrobiological field sites. The real-time processing included image segmentation and the generation of interest points based upon uncommonness in the segmentation maps. Also in previous work, this platform for testing computer-vision algorithms has been ported to a more ergonomic alternative platform, consisting of a phone camera connected via the Global System for Mobile Communications (GSM) network to a remote-server computer. The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon colour, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colours to test this algorithm. The algorithm robustly recognized previously observed units by their colour, while requiring only a single image or a few images to learn colours as familiar, demonstrating its fast learning capability.

  20. Platform-independent method for computer aided schematic drawings

    DOEpatents

    Vell, Jeffrey L [Slingerlands, NY; Siganporia, Darius M [Clifton Park, NY; Levy, Arthur J [Fort Lauderdale, FL

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  1. A Big Data Platform for Storing, Accessing, Mining and Learning Geospatial Data

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Bambacus, M.; Duffy, D.; Little, M. M.

    2017-12-01

    Big Data is becoming a norm in geoscience domains. A platform that is capable of efficiently managing, accessing, analyzing, mining, and learning from big data to extract new information and knowledge is desired. This paper introduces our latest effort to develop such a platform based on our past years' experience with cloud and high performance computing, analyzing big data, comparing big data containers, and mining big geospatial data for new information. The platform includes four layers: a) the bottom layer is a computing infrastructure with proper network, computer, and storage systems; b) the 2nd layer is a cloud computing layer based on virtualization that provides on-demand computing services for the upper layers; c) the 3rd layer consists of big data containers that are customized for dealing with different types of data and functionalities; d) the 4th layer is a big data presentation layer that supports the efficient management, access, analysis, mining, and learning of big geospatial data.

  2. The Efficacy of the Internet-Based Blackboard Platform in Developmental Writing Classes

    ERIC Educational Resources Information Center

    Shudooh, Yusuf M.

    2016-01-01

    The application of computer-assisted platforms in writing classes is a relatively new paradigm in education. The adoption of computer-assisted writing classes is gaining ground in many western and non-western universities. Numerous issues can be addressed when conducting computer-assisted classes (CAC). However, a few studies conducted to assess…

  3. UPC BarcelonaTech Platform. Innovative aerobatic parabolic flights for life sciences experiments.

    NASA Astrophysics Data System (ADS)

    Perez-Poch, Antoni; Gonzalez, Daniel

    We present an innovative method of performing parabolic flights with aerobatic single-engine planes. A parabolic platform has been established at Sabadell Airport (Barcelona, Spain) to provide an infrastructure ready to allow Life Sciences reduced gravity experiments to be conducted in parabolic flights. Test flights have demonstrated that up to 8 seconds of reduced gravity can be achieved by using a two-seat CAP10B aircraft, with a gravity range between 0.1 and 0.01g in the three axes. A parabolic flight campaign may be implemented with a significant reduction in budget compared to conventional parabolic flight campaigns, and with a very short time-to-access to the platform. Operational skills and proficiency of the pilot controlling the aircraft during the maneuver, sensitivity to wind gusts, and aircraft balance are the key issues that make a parabola successful. Efforts are focused on improving the total “zero-g” time and the quality of reduced gravity achieved, as well as providing more space for experiments. We report results of test flights that have been conducted in order to optimize the quality and total microgravity time. Computer software has been developed and implemented to help the pilot optimize his or her performance. Finally, we summarize the life science experiments that have been conducted on this platform. Specific focus is given to the very successful 'Barcelona ZeroG Challenge', this year in its third edition. This educational contest gives undergraduate and graduate students worldwide the opportunity to design their research within our platform and test it in flight, thus becoming real researchers. We conclude that aerobatic parabolic flights have proven to be a safe, inexpensive and reliable way to conduct life sciences reduced gravity experiments.

  4. Smart Web-Based Platform to Support Physical Rehabilitation.

    PubMed

    Rybarczyk, Yves; Kleine Deters, Jan; Cointe, Clément; Esparza, Danilo

    2018-04-26

    The enhancement of ubiquitous and pervasive computing brings new perspectives in medical rehabilitation. In that sense, the present study proposes a smart, web-based platform to promote the reeducation of patients after hip replacement surgery. This project focuses on two fundamental aspects in the development of a suitable tele-rehabilitation application, which are: (i) being based on an affordable technology, and (ii) providing the patients with a real-time assessment of the correctness of their movements. A probabilistic approach based on the development and training of ten Hidden Markov Models (HMMs) is used to discriminate in real time the main faults in the execution of the therapeutic exercises. Two experiments are designed to evaluate the precision of the algorithm for classifying movements performed in the laboratory and clinical settings, respectively. A comparative analysis of the data shows that the models are as reliable as the physiotherapists to discriminate and identify the motion errors. The results are discussed in terms of the required setup for a successful application in the field and further implementations to improve the accuracy and usability of the system.
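
    A hedged sketch of the classification idea described above (one HMM per movement class, with the highest log-likelihood model winning) is shown below; it assumes the hmmlearn package and synthetic joint-angle sequences, and is not the authors' trained models or feature set.

      # Hedged sketch: classify an exercise repetition by scoring it against one
      # Gaussian HMM per movement class (hmmlearn assumed; data are synthetic).
      import numpy as np
      from hmmlearn import hmm

      rng = np.random.default_rng(3)

      def synthetic_reps(offset, n_reps=20, length=50):
          """Fake 2-D joint-angle sequences for one movement class."""
          return [offset + np.cumsum(rng.normal(0, 0.1, size=(length, 2)), axis=0)
                  for _ in range(n_reps)]

      classes = {"correct": 0.0, "fault_hip_drop": 1.5, "fault_knee_in": -1.5}
      models = {}
      for name, offset in classes.items():
          reps = synthetic_reps(offset)
          X = np.vstack(reps)
          lengths = [len(r) for r in reps]
          m = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                              n_iter=20, random_state=0)
          m.fit(X, lengths)
          models[name] = m

      # Classify a new repetition by maximum log-likelihood across the per-class HMMs.
      test = synthetic_reps(1.5, n_reps=1)[0]
      scores = {name: m.score(test) for name, m in models.items()}
      print(max(scores, key=scores.get), scores)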

  5. Indoor Map Aided Wi-Fi Integrated Lbs on Smartphone Platforms

    NASA Astrophysics Data System (ADS)

    Yu, C.; El-Sheimy, N.

    2017-09-01

    In this research, an indoor map aided INS/Wi-Fi integrated location based services (LBS) application is proposed and implemented on smartphone platforms. Indoor map information, together with measurements from an inertial measurement unit (IMU) and Received Signal Strength Indicator (RSSI) values from Wi-Fi, is collected to obtain an accurate, continuous, and low-cost position solution. The main challenge of this research is to make effective use of various measurements that complement each other without increasing the computational burden of the system. The integrated system in this paper includes three modules: INS, Wi-Fi (if a signal is available), and indoor maps. A cascade-structure Particle/Kalman filter framework is applied to combine the different modules. First, the INS position and the Wi-Fi fingerprint position are integrated through a Kalman filter to estimate positioning information. Then, indoor map information is applied through a particle filter to correct the error of the INS/Wi-Fi estimated position. Indoor tests show that the proposed method can effectively reduce the accumulated positioning errors of stand-alone INS systems, and provide a stable, continuous and reliable indoor location service.
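
    As a hedged, minimal version of the first stage of the cascade described above (INS prediction corrected by a Wi-Fi fingerprint fix through a Kalman filter), the sketch below tracks a 2-D position with synthetic noise levels and trajectory; the map-aided particle-filter stage is omitted.

      # Hedged sketch: 2-D position Kalman filter fusing INS step displacements
      # (prediction) with noisy Wi-Fi fingerprint fixes (update). Synthetic data.
      import numpy as np

      rng = np.random.default_rng(4)
      n_steps = 100
      true_step = np.array([0.6, 0.2])                  # true per-step displacement (m)

      x = np.zeros(2)                                   # state: estimated position
      P = np.eye(2) * 1.0                               # state covariance
      Q = np.eye(2) * 0.05                              # INS (process) noise
      R = np.eye(2) * 4.0                               # Wi-Fi fingerprint noise

      truth = np.zeros(2)
      for k in range(n_steps):
          truth = truth + true_step
          ins_step = true_step + rng.normal(0, 0.1, 2)  # drifting INS increment
          wifi_fix = truth + rng.normal(0, 2.0, 2)      # coarse Wi-Fi position

          # Predict with the INS displacement.
          x = x + ins_step
          P = P + Q

          # Update with the Wi-Fi fingerprint position (measurement matrix H = I).
          K = P @ np.linalg.inv(P + R)
          x = x + K @ (wifi_fix - x)
          P = (np.eye(2) - K) @ P

      print("final error (m):", np.linalg.norm(x - truth).round(2))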

  6. Automated saccharification assay for determination of digestibility in plant materials.

    PubMed

    Gomez, Leonardo D; Whitehead, Caragh; Barakate, Abdellah; Halpin, Claire; McQueen-Mason, Simon J

    2010-10-27

    Cell wall resistance represents the main barrier for the production of second generation biofuels. The deconstruction of lignocellulose can provide sugars for the production of fuels or other industrial products through fermentation. Understanding the biochemical basis of the recalcitrance of cell walls to digestion will allow development of more effective and cost efficient ways to produce sugars from biomass. One approach is to identify plant genes that play a role in biomass recalcitrance, using association genetics. Such an approach requires a robust and reliable high throughput (HT) assay for biomass digestibility, which can be used to screen the large numbers of samples involved in such studies. We developed a HT saccharification assay based on a robotic platform that can carry out in a 96-well plate format the enzymatic digestion and quantification of the released sugars. The handling of the biomass powder for weighing and formatting into 96 wells is performed by a robotic station, where the plant material is ground, delivered to the desired well in the plates and weighed with a precision of 0.1 mg. Once the plates are loaded, an automated liquid handling platform delivers an optional mild pretreatment (< 100°C) followed by enzymatic hydrolysis of the biomass. Aliquots from the hydrolysis are then analyzed for the release of reducing sugar equivalents. The same platform can be used for the comparative evaluation of different enzymes and enzyme cocktails. The sensitivity and reliability of the platform was evaluated by measuring the saccharification of stems from lignin modified tobacco plants, and the results of automated and manual analyses compared. The automated assay systems are sensitive, robust and reliable. The system can reliably detect differences in the saccharification of plant tissues, and is able to process large number of samples with a minimum amount of human intervention. The automated system uncovered significant increases in the digestibility of certain lignin modified lines in a manner compatible with known effects of lignin modification on cell wall properties. We conclude that this automated assay platform is of sufficient sensitivity and reliability to undertake the screening of the large populations of plants necessary for mutant identification and genetic association studies.

  7. Reliability and Validity of Kinetic and Kinematic Parameters Determined With Force Plates Embedded Under a Soil-Filled Baseball Mound.

    PubMed

    Yanai, Toshimasa; Matsuo, Akifumi; Maeda, Akira; Nakamoto, Hiroki; Mizutani, Mirai; Kanehisa, Hiroaki; Fukunaga, Tetsuo

    2017-08-01

    We developed a force measurement system in a soil-filled mound for measuring ground reaction forces (GRFs) acting on baseball pitchers and examined the reliability and validity of kinetic and kinematic parameters determined from the GRFs. Three soil-filled trays of dimensions that satisfied the official baseball rules were fixed onto 3 force platforms. Eight collegiate pitchers wearing baseball shoes with metal cleats were asked to throw 5 fastballs with maximum effort from the mound toward a catcher. The reliability of each parameter was determined for each subject as the coefficient of variation across the 5 pitches. The validity of the measurements was tested by comparing the outcomes either with the true values or the corresponding values computed from a motion capture system. The coefficients of variation in the repeated measurements of the peak forces ranged from 0.00 to 0.17, and were smaller for the pivot foot than the stride foot. The mean absolute errors in the impulses determined over the entire duration of pitching motion were 5.3 N·s, 1.9 N·s, and 8.2 N·s for the X-, Y-, and Z-directions, respectively. These results suggest that the present method is reliable and valid for determining selected kinetic and kinematic parameters for analyzing pitching performance.
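
    The within-subject reliability index used above is simply the coefficient of variation across the five pitches; a hedged one-liner version with synthetic peak-force values is shown below.

      # Coefficient of variation of peak GRF across repeated pitches (synthetic values).
      import numpy as np

      peak_forces = np.array([1210.0, 1185.0, 1230.0, 1198.0, 1221.0])  # N, illustrative
      cv = peak_forces.std(ddof=1) / peak_forces.mean()
      print(f"CV = {cv:.3f}")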

  8. Towards Autonomous Inspection of Space Systems Using Mobile Robotic Sensor Platforms

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Saad, Ashraf; Litt, Jonathan S.

    2007-01-01

    The space transportation systems required to support NASA's Exploration Initiative will demand a high degree of reliability to ensure mission success. This reliability can be realized through autonomous fault/damage detection and repair capabilities. It is crucial that such capabilities are incorporated into these systems since it will be impractical to rely upon Extra-Vehicular Activity (EVA), visual inspection or tele-operation due to the costly, labor-intensive and time-consuming nature of these methods. One approach to achieving this capability is through the use of an autonomous inspection system comprised of miniature mobile sensor platforms that will cooperatively perform high confidence inspection of space vehicles and habitats. This paper will discuss the efforts to develop a small scale demonstration test-bed to investigate the feasibility of using autonomous mobile sensor platforms to perform inspection operations. Progress will be discussed in technology areas including: the hardware implementation and demonstration of robotic sensor platforms, the implementation of a hardware test-bed facility, and the investigation of collaborative control algorithms.

  9. Power Efficient Hardware Architecture of SHA-1 Algorithm for Trusted Mobile Computing

    NASA Astrophysics Data System (ADS)

    Kim, Mooseop; Ryou, Jaecheol

    The Trusted Mobile Platform (TMP) is developed and promoted by the Trusted Computing Group (TCG), an industry standards body working to enhance the security of the mobile computing environment. The built-in SHA-1 engine in TMP is one of the most important circuit blocks and contributes to the performance of the whole platform because it is used as a key primitive supporting platform integrity and command authentication. Mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore, special architecture and design methods for a low power SHA-1 circuit are required. In this paper, we present a novel and efficient hardware architecture of a low power SHA-1 design for TMP. Our low power SHA-1 hardware can compute a 512-bit data block using fewer than 7,000 gates and consumes about 1.1 mA on a 0.25μm CMOS process.
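
    For reference, the single-512-bit-block case that such an engine computes corresponds in software to hashing a message of at most 55 bytes, since SHA-1 padding (a 0x80 marker plus an 8-byte length field) brings it to exactly 64 bytes; the hedged sketch below uses Python's hashlib to produce the digest a correct hardware engine would output for the same block.

      # Software reference for a single-block SHA-1 computation (Python hashlib).
      # Messages of <= 55 bytes pad to exactly one 512-bit block.
      import hashlib

      message = b"trusted mobile platform integrity measurement"   # 45 bytes, one block
      assert len(message) <= 55

      digest = hashlib.sha1(message).hexdigest()
      print(digest)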

  10. An Evaluation of Architectural Platforms for Parallel Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Jayasimha, D. N.; Hayder, M. E.; Pillay, S. K.

    1996-01-01

    We study the computational, communication, and scalability characteristics of a computational fluid dynamics application, which solves the time accurate flow field of a jet using the compressible Navier-Stokes equations, on a variety of parallel architecture platforms. The platforms chosen for this study are a cluster of workstations (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), and distributed memory multiprocessors with different topologies - the IBM SP and the Cray T3D. We investigate the impact of various networks connecting the cluster of workstations on the performance of the application and the overheads induced by popular message passing libraries used for parallelization. The work also highlights the importance of matching the memory bandwidth to the processor speed for good single processor performance. By studying the performance of an application on a variety of architectures, we are able to point out the strengths and weaknesses of each of the example computing platforms.
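
    The message-passing overheads discussed above can be probed with a simple two-process ping-pong test. The sketch below uses mpi4py (assumed available) and is a minimal illustration, not the authors' code.

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    buf = np.zeros(1 << 20, dtype=np.float64)   # 8 MiB message
    reps = 10

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)       # send to partner, then wait for the echo
            comm.Recv(buf, source=1, tag=1)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=1)
    t1 = MPI.Wtime()

    if rank == 0:
        moved = 2 * reps * buf.nbytes           # bytes crossing the network both ways
        print(f"effective bandwidth: {moved / (t1 - t0) / 1e6:.1f} MB/s")
    ```

    Run with, for example, `mpirun -n 2 python pingpong.py`; repeating the test over different interconnects exposes the network-dependent overheads the study measures.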

  11. Parallelizing Navier-Stokes Computations on a Variety of Architectural Platforms

    NASA Technical Reports Server (NTRS)

    Jayasimha, D. N.; Hayder, M. E.; Pillay, S. K.

    1997-01-01

    We study the computational, communication, and scalability characteristics of a Computational Fluid Dynamics application, which solves the time accurate flow field of a jet using the compressible Navier-Stokes equations, on a variety of parallel architectural platforms. The platforms chosen for this study are a cluster of workstations (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), and distributed memory multiprocessors with different topologies - the IBM SP and the Cray T3D. We investigate the impact of various networks connecting the cluster of workstations on the performance of the application and the overheads induced by popular message passing libraries used for parallelization. The work also highlights the importance of matching the memory bandwidth to the processor speed for good single processor performance. By studying the performance of an application on a variety of architectures, we are able to point out the strengths and weaknesses of each of the example computing platforms.

  12. A Web Tool for Research in Nonlinear Optics

    NASA Astrophysics Data System (ADS)

    Prikhod'ko, Nikolay V.; Abramovsky, Viktor A.; Abramovskaya, Natalia V.; Demichev, Andrey P.; Kryukov, Alexandr P.; Polyakov, Stanislav P.

    2016-02-01

    This paper presents a project to develop a web platform, called WebNLO, for computer modeling of nonlinear optics phenomena. We discuss the general scheme of the platform and a model for interaction between the platform modules. The platform is built as a set of interacting RESTful web services (a SaaS approach). Users can interact with the platform through a web browser or a command-line interface. Such a resource has no analogue in the field of nonlinear optics and is being created for the first time; it will allow researchers to access high-performance computing resources and significantly reduce the cost of the research and development process.

  13. Continuous measurement of breast tumour hormone receptor expression: a comparison of two computational pathology platforms.

    PubMed

    Ahern, Thomas P; Beck, Andrew H; Rosner, Bernard A; Glass, Ben; Frieling, Gretchen; Collins, Laura C; Tamimi, Rulla M

    2017-05-01

    Computational pathology platforms incorporate digital microscopy with sophisticated image analysis to permit rapid, continuous measurement of protein expression. We compared two computational pathology platforms on their measurement of breast tumour oestrogen receptor (ER) and progesterone receptor (PR) expression. Breast tumour microarrays from the Nurses' Health Study were stained for ER (n=592) and PR (n=187). One expert pathologist scored cases as positive if ≥1% of tumour nuclei exhibited stain. ER and PR were then measured with the Definiens Tissue Studio (automated) and Aperio Digital Pathology (user-supervised) platforms. Platform-specific measurements were compared using boxplots, scatter plots and correlation statistics. Classification of ER and PR positivity by platform-specific measurements was evaluated with areas under receiver operating characteristic curves (AUC) from univariable logistic regression models, using expert pathologist classification as the standard. Both platforms showed considerable overlap in continuous measurements of ER and PR between positive and negative groups classified by the expert pathologist. Platform-specific measurements were strongly and positively correlated with one another (r≥0.77). The user-supervised Aperio workflow performed slightly better than the automated Definiens workflow at classifying ER positivity (AUC(Aperio) = 0.97; AUC(Definiens) = 0.90; difference = 0.07, 95% CI 0.05 to 0.09) and PR positivity (AUC(Aperio) = 0.94; AUC(Definiens) = 0.87; difference = 0.07, 95% CI 0.03 to 0.12). Paired hormone receptor expression measurements from two different computational pathology platforms agreed well with one another. The user-supervised workflow yielded better classification accuracy than the automated workflow. Appropriately validated computational pathology algorithms enrich molecular epidemiology studies with continuous protein expression data and may accelerate tumour biomarker discovery.
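
    Because the AUC of a univariable logistic regression with a single continuous predictor equals the ROC AUC of that predictor, the comparison described above can be sketched as follows (Python with scikit-learn; the labels and scores are simulated stand-ins, not the study's data).

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    truth = rng.integers(0, 2, size=200)            # pathologist calls (1 = receptor positive)
    score_a = 2.0 * truth + rng.normal(size=200)    # continuous scores from "platform A"
    score_b = 1.2 * truth + rng.normal(size=200)    # continuous scores from "platform B"

    auc_a = roc_auc_score(truth, score_a)
    auc_b = roc_auc_score(truth, score_b)
    print(f"AUC A = {auc_a:.2f}, AUC B = {auc_b:.2f}, difference = {auc_a - auc_b:.2f}")
    ```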

  14. Design of platform for removing screws from LCD display shields

    NASA Astrophysics Data System (ADS)

    Tu, Zimei; Qin, Qin; Dou, Jianfang; Zhu, Dongdong

    2017-11-01

    Removing the screws on the sides of the shield is a necessary step in disassembling a computer LCD display. To automate this step, a platform has been designed for removing the screws from display shields. The platform uses virtual instrument technology, with LabVIEW as the development environment, and combines motion control, human-computer interaction and target recognition with the mechanical structure design. It removes the screws from the sides of an LCD display shield mechanically, enabling subsequent separation and recycling.

  15. Supporting research sites in resource-limited settings: Challenges in implementing IT infrastructure

    PubMed Central

    Whalen, Christopher; Donnell, Deborah; Tartakovsky, Michael

    2014-01-01

    As Information and Communication Technology infrastructure becomes more reliable, new methods of Electronic Data Capture (EDC), datamarts/Data warehouses, and mobile computing provide platforms for rapid coordination of international research projects and multisite studies. However, despite the increasing availability of internet connectivity and communication systems in remote regions of the world, there are still significant obstacles. Sites with poor infrastructure face serious challenges participating in modern clinical and basic research, particularly that relying on EDC and internet communication technologies. This report discusses our experiences in supporting research in resource-limited settings (RLS). We describe examples of the practical and ethical/regulatory challenges raised by use of these newer technologies for data collection in multisite clinical studies. PMID:24321986

  16. Supporting research sites in resource-limited settings: challenges in implementing information technology infrastructure.

    PubMed

    Whalen, Christopher J; Donnell, Deborah; Tartakovsky, Michael

    2014-01-01

    As information and communication technology infrastructure becomes more reliable, new methods of electronic data capture, data marts/data warehouses, and mobile computing provide platforms for rapid coordination of international research projects and multisite studies. However, despite the increasing availability of Internet connectivity and communication systems in remote regions of the world, there are still significant obstacles. Sites with poor infrastructure face serious challenges participating in modern clinical and basic research, particularly that relying on electronic data capture and Internet communication technologies. This report discusses our experiences in supporting research in resource-limited settings. We describe examples of the practical and ethical/regulatory challenges raised by the use of these newer technologies for data collection in multisite clinical studies.

  17. Visual Odometry for Autonomous Deep-Space Navigation

    NASA Technical Reports Server (NTRS)

    Robinson, Shane; Pedrotty, Sam

    2016-01-01

    Visual Odometry fills two critical needs shared by all future exploration architectures considered by NASA: Autonomous Rendezvous and Docking (AR&D), and autonomous navigation during loss of comm. To do this, a camera is combined with cutting-edge algorithms (collectively called Visual Odometry) into a unit that provides an accurate relative pose between the camera and the object in the imagery. Recent simulation analyses have demonstrated the ability of this new technology to reliably, accurately, and quickly compute a relative pose. This project advances this technology by both preparing the system to process flight imagery and creating an activity to capture said imagery. This technology can provide a pioneering optical navigation platform capable of supporting a wide variety of future mission scenarios: deep-space rendezvous, asteroid exploration, and loss-of-comm navigation.

  18. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been carried out to compare the performance of the mobile platforms involved: the Nokia N900, LG Optimus One, and Samsung Galaxy SII.
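
    Two of the tasks listed above, keypoint extraction and face detection, can be prototyped in a few lines with OpenCV before porting to a handset. The sketch below is a desktop-side illustration with a hypothetical image file, not the benchmark code used in the paper.

    ```python
    import cv2

    img = cv2.imread("frame.jpg")                   # hypothetical frame from a phone camera
    assert img is not None, "frame.jpg not found"
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Keypoint extraction with ORB, a detector light enough for mobile-class CPUs.
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    # Face detection with a pretrained Haar cascade shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    print(f"{len(keypoints)} keypoints, {len(faces)} face(s) detected")
    ```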

  19. Report on the Second ARM Mobile Facility (AMF2) Roll, Pitch, and Heave (RPH) Stabilization Platform: Design and Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coulter, Richard L.; Martin, Timothy J.

    One of the primary objectives of the U.S. Department of Energy’s Atmospheric Radiation Measurement (ARM) Climate Research Facility’s second Mobile Facility (AMF2) is to obtain reliable measurements of solar, surface, and atmospheric radiation, as well as cloud and atmospheric properties, from ocean-going vessels. To ensure that these climatic measurements are representative and accurate, many AMF2 instrument systems are designed to collect data in a zenith orientation. A pillar of the AMF2 strategy in this effort is the use of a stable platform. The purpose of the platform is to 1) mitigate vessel motion for instruments that require a truly vertical orientation and keep them pointed in the zenith direction, and 2) allow for accurate positioning for viewing or shading of the sensors from direct sunlight. Numerous ARM instruments fall into these categories, but perhaps the most important are the vertically pointing cloud radars, for which vertical motions are a critical parameter. During the design and construction phase of AMF2, an inexpensive stable platform was purchased to perform the stabilization tasks for some of these instruments. The first table compensated for roll, pitch, and yaw (RPY) and was reported upon in a previous technical report (Kafle and Coulter, 2012). Subsequently, a second table was purchased specifically for operation with the Marine W-band cloud radar (MWACR). Computer programs originally developed for RPY were modified to communicate with the new platform controller and with an inertial measurements platform that measures true ship motion components (roll, pitch, yaw, surge, sway, and heave). This platform could not be tested dynamically for RPY because of time constraints requiring its deployment aboard the container ship Horizon Spirit in September 2013. Hence the initial motion tests were conducted on the initial cruise. Subsequent cruises provided additional test results. The platform, as tested, meets all the design and performance criteria established for its use. This is a report of the results of those efforts and the critical points in moving forward.

  20. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds for utilizing a cluster of Intel many-integrated-core (MIC) processors, or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy-efficiency requirement of general computation, a Field Programmable Gate Array (FPGA) may offer better energy efficiency when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  1. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
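
    A minimal Monte Carlo sketch of the stress-strength model (Python with NumPy, arbitrary assumed distributions) illustrates the sensitivity described above: replacing a normal strength distribution with a visually similar heavy-tailed one changes the estimated failure probability appreciably.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000

    # Assumed populations (arbitrary units): stress ~ N(300, 20), strength ~ N(400, 25).
    stress = rng.normal(300.0, 20.0, n)
    strength_normal = rng.normal(400.0, 25.0, n)

    # A nearly indistinguishable alternative: Student-t strength scaled to the same sd.
    strength_heavy = 400.0 + 25.0 * rng.standard_t(df=5, size=n) / np.sqrt(5.0 / 3.0)

    r_normal = np.mean(strength_normal > stress)    # reliability R = P(strength > stress)
    r_heavy = np.mean(strength_heavy > stress)

    print(f"R (normal strength):       {r_normal:.6f}")
    print(f"R (heavy-tailed strength): {r_heavy:.6f}")
    print(f"failure-probability ratio: {(1 - r_heavy) / (1 - r_normal):.2f}")
    ```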

  2. Unified Geophysical Cloud Platform (UGCP) for Seismic Monitoring and other Geophysical Applications.

    NASA Astrophysics Data System (ADS)

    Synytsky, R.; Starovoit, Y. O.; Henadiy, S.; Lobzakov, V.; Kolesnikov, L.

    2016-12-01

    We present the Unified Geophysical Cloud Platform (UGCP), or UniGeoCloud, an innovative approach to geophysical data processing in the cloud with the ability to run any type of data-processing software in an isolated environment within a single cloud platform. We have developed a simple and quick method for installing several widely known open-source seismic software packages (SeisComp3, Earthworm, Geotool, MSNoise) that does not require knowledge of system administration, configuration, OS compatibility issues and other details that often waste time on system configuration work. The installation process is reduced to a mouse click on the selected software package in the cloud marketplace. The main objective of the developed capability is a set of software tools with which users can quickly design and install their own highly reliable and highly available virtual IT infrastructure for seismic (and, in the future, other geophysical) data processing, for either research or monitoring purposes. These tools provide access to any seismic station data available in open IP configuration from the different networks affiliated with different institutions and organizations. They also allow users to set up their own network by selecting either regionally deployed stations or a worldwide network based on station selection from the global map. The processing software, products and research results can be monitored from anywhere using a variety of user devices, from desktop computers to mobile gadgets. Current efforts of the development team are directed at achieving Scalability, Reliability and Sustainability (SRS) of the proposed solutions, allowing any user to run their applications with confidence of no data loss and no failure of the monitoring or research software components. The system is suitable for quick rollout of the NDC-in-Box software package developed for State Signatories and aimed at promoting the processing of data collected by the IMS Network.

  3. JINR cloud infrastructure evolution

    NASA Astrophysics Data System (ADS)

    Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.

    2016-09-01

    To fulfil JINR commitments in different national and international projects related to the use of modern information technologies such as cloud and grid computing, as well as to provide a modern tool for JINR users in their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to local needs: a web form in the cloud web interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was re-designed to cover users' increasing needs for capacity, availability and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.

  4. Dynamic online surveys and experiments with the free open-source software dynQuest.

    PubMed

    Rademacher, Jens D M; Lippke, Sonia

    2007-08-01

    With computers and the World Wide Web widely available, collecting data through Web browsers is an attractive method utilized by the social sciences. In this article, conducting PC- and Web-based trials with the software package dynQuest is described. The software manages dynamic questionnaire-based trials over the Internet or on single computers, possibly as randomized control trials (RCT), if two or more groups are involved. The choice of follow-up questions can depend on previous responses, as needed for matched interventions. Data are collected in a simple text-based database that can be imported easily into other programs for postprocessing and statistical analysis. The software consists of platform-independent scripts written in the programming language PERL that use the common gateway interface between Web browser and server for submission of data through HTML forms. Advantages of dynQuest are parsimony, simplicity in use and installation, transparency, and reliability. The program is available as open-source freeware from the authors.

  5. Solving lattice QCD systems of equations using mixed precision solvers on GPUs

    NASA Astrophysics Data System (ADS)

    Clark, M. A.; Babich, R.; Barros, K.; Brower, R. C.; Rebbi, C.

    2010-09-01

    Modern graphics hardware is designed for highly parallel numerical tasks and promises significant cost and performance benefits for many scientific applications. One such application is lattice quantum chromodynamics (lattice QCD), where the main computational challenge is to efficiently solve the discretized Dirac equation in the presence of an SU(3) gauge field. Using NVIDIA's CUDA platform we have implemented a Wilson-Dirac sparse matrix-vector product that performs at up to 40, 135 and 212 Gflops for double, single and half precision respectively on NVIDIA's GeForce GTX 280 GPU. We have developed a new mixed precision approach for Krylov solvers using reliable updates which allows for full double precision accuracy while using only single or half precision arithmetic for the bulk of the computation. The resulting BiCGstab and CG solvers run in excess of 100 Gflops and, in terms of iterations until convergence, perform better than the usual defect-correction approach for mixed precision.
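
    The general mixed-precision idea, cheap low-precision inner solves corrected by high-precision residuals, can be sketched in a few lines of NumPy. This is plain iterative refinement on a small dense test system, not the reliable-update BiCGstab/CG solvers described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
    b = rng.standard_normal(n)

    A32 = A.astype(np.float32)                        # low-precision copy used for the solves
    x = np.zeros(n)

    for it in range(10):
        r = b - A @ x                                 # residual in double precision
        if np.linalg.norm(r) / np.linalg.norm(b) < 1e-12:
            break
        dx = np.linalg.solve(A32, r.astype(np.float32))   # correction in single precision
        x = x + dx.astype(np.float64)

    rel = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
    print(f"iterations: {it}, relative residual: {rel:.2e}")
    ```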

  6. Discovery and analysis of time delay sources in the USGS personal computer data collection platform (PCDCP) system

    USGS Publications Warehouse

    White, Timothy C.; Sauter, Edward A.; Stewart, Duff C.

    2014-01-01

    Intermagnet is an international oversight group which exists to establish a global network for geomagnetic observatories. This group establishes data standards and standard operating procedures for members and prospective members. Intermagnet has proposed a new One-Second Data Standard, for that emerging geomagnetic product. The standard specifies that all data collected must have a time stamp accuracy of ±10 milliseconds of the top-of-the-second Coordinated Universal Time. Therefore, the U.S. Geological Survey Geomagnetism Program has designed and executed several tests on its current data collection system, the Personal Computer Data Collection Platform. Tests are designed to measure the time shifts introduced by individual components within the data collection system, as well as to measure the time shift introduced by the entire Personal Computer Data Collection Platform. Additional testing designed for Intermagnet will be used to validate further such measurements. Current results of the measurements showed a 5.0–19.9 millisecond lag for the vertical channel (Z) of the Personal Computer Data Collection Platform and a 13.0–25.8 millisecond lag for horizontal channels (H and D) of the collection system. These measurements represent a dynamically changing delay introduced within the U.S. Geological Survey Personal Computer Data Collection Platform.
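
    The component time shifts described above are, in essence, lags between a reference timing signal and the recorded channel. One common way to estimate such a lag is via cross-correlation, as in the sketch below (Python with NumPy, synthetic signals rather than USGS data).

    ```python
    import numpy as np

    fs = 1000.0                                      # 1 kHz sampling, i.e. 1 ms resolution
    t = np.arange(0.0, 2.0, 1.0 / fs)
    reference = np.exp(-((t - 1.0) / 0.05) ** 2)     # hypothetical timing/calibration pulse

    true_lag = 18                                    # simulate an 18 ms acquisition delay
    rng = np.random.default_rng(0)
    recorded = np.roll(reference, true_lag) + 0.02 * rng.standard_normal(t.size)

    # The delay estimate is the shift that maximizes the cross-correlation.
    corr = np.correlate(recorded, reference, mode="full")
    lags = np.arange(-(t.size - 1), t.size)
    print(f"estimated lag: {lags[np.argmax(corr)] / fs * 1e3:.1f} ms")
    ```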

  7. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, X; Liu, L; Xing, L

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and ability of data sharing and software update. We present a web-based image processing and planning evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. The independent servers communicate with each other through HTTP requests. The web server is the key component that provides visualizations and user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, Javascript, and jQuery. The image server is based on the open-source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform is running on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation running the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibited potential for future cloud-based radiotherapy.
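
    The relay pattern described above, in which the web-server backend forwards user requests to a computation server over HTTP, might look roughly like the sketch below (Python with Flask and requests; the endpoint names, port numbers and payload fields are hypothetical, and this is not the WIPPEP code).

    ```python
    from flask import Flask, jsonify, request
    import requests

    app = Flask(__name__)
    COMPUTE_URL = "http://compute-server:8081/segment"   # hypothetical computation service

    @app.route("/api/segment", methods=["POST"])
    def segment():
        payload = request.get_json()                     # e.g. {"patient_id": ..., "series_uid": ...}
        reply = requests.post(COMPUTE_URL, json=payload, timeout=300)
        return jsonify(reply.json())                     # relay the result back to the browser

    if __name__ == "__main__":
        app.run(port=8080)
    ```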

  8. Enabling the environmentally clean air transportation of the future: a vision of computational fluid dynamics in 2030.

    PubMed

    Slotnick, Jeffrey P; Khodadoust, Abdollah; Alonso, Juan J; Darmofal, David L; Gropp, William D; Lurie, Elizabeth A; Mavriplis, Dimitri J; Venkatakrishnan, Venkat

    2014-08-13

    As global air travel expands rapidly to meet demand generated by economic growth, it is essential to continue to improve the efficiency of air transportation to reduce its carbon emissions and address concerns about climate change. Future transports must be 'cleaner' and designed to include technologies that will continue to lower engine emissions and reduce community noise. The use of computational fluid dynamics (CFD) will be critical to enable the design of these new concepts. In general, the ability to simulate aerodynamic and reactive flows using CFD has progressed rapidly during the past several decades and has fundamentally changed the aerospace design process. Advanced simulation capabilities not only enable reductions in ground-based and flight-testing requirements, but also provide added physical insight, and enable superior designs at reduced cost and risk. In spite of considerable success, reliable use of CFD has remained confined to a small region of the operating envelope due, in part, to the inability of current methods to reliably predict turbulent, separated flows. Fortunately, the advent of much more powerful computing platforms provides an opportunity to overcome a number of these challenges. This paper summarizes the findings and recommendations from a recent NASA-funded study that provides a vision for CFD in the year 2030, including an assessment of critical technology gaps and needed development, and identifies the key CFD technology advancements that will enable the design and development of much cleaner aircraft in the future.

  9. Citizen science: A new perspective to evaluate spatial patterns in hydrology.

    NASA Astrophysics Data System (ADS)

    Koch, J.; Stisen, S.

    2016-12-01

    Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, a simple visual comparison of spatial patterns is a task where humans are often considered more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which is inevitable given benefits such as speed and the possibility of automating processes. This study highlights the integration of this generally underused human resource into hydrology. We established a citizen science project on the Zooniverse platform entitled Pattern Perception. The aim is to employ human perception to rate similarity and dissimilarity between simulated spatial patterns of a hydrological catchment model. In total, more than 2,800 users provided over 46,000 classifications of 1,095 individual subjects within 64 days of the launch. Each subject displays simulated spatial patterns of land-surface variables for a baseline model and six modelling scenarios. The citizen science data yield a numeric pattern similarity score for each of the scenarios with respect to the reference. We investigate the capability of a set of innovative statistical performance metrics to mimic the human perception of similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but they clearly provide flexibility and auxiliary information that is valuable for model diagnostics. The metrics clearly differ in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric.

  10. Digital imaging of root traits (DIRT): a high-throughput computing and collaboration platform for field-based root phenomics.

    PubMed

    Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander

    2015-01-01

    Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate the progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual low-throughput. Here, we present an open-source phenomics platform "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons" enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots. It enables scientists to store, manage and share crop root images with metadata and compute RSA traits from thousands of images in parallel. It makes high-throughput RSA trait computation available to the community with just a few button clicks. As such it enables plant scientists to spend more time on science rather than on technology. All stored and computed data is easily accessible to the public and broader scientific community. We hope that easy data accessibility will attract new tool developers and spur creative data usage that may even be applied to other fields of science.

  11. Multidisciplinary unmanned technology teammate (MUTT)

    NASA Astrophysics Data System (ADS)

    Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark

    2013-01-01

    The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close to natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant and relevant to real world applications.

  12. Arsenic removal from contaminated groundwater by membrane-integrated hybrid plant: optimization and control using Visual Basic platform.

    PubMed

    Chakrabortty, S; Sen, M; Pal, P

    2014-03-01

    Simulation software (ARRPA) has been developed on the Microsoft Visual Basic platform for optimization and control of a novel membrane-integrated arsenic separation plant, against the backdrop of the absence of such software. The user-friendly, menu-driven software is based on a dynamic linearized mathematical model developed for the hybrid treatment scheme. The model captures the chemical kinetics in the pre-treatment chemical reactor and the separation and transport phenomena involved in nanofiltration. The software has been validated through extensive experimental investigations. The agreement between the outputs of the computer simulation program and the experimental findings is excellent and consistent under varying operating conditions, reflecting the high degree of accuracy and reliability of the software. High values of the overall correlation coefficient (R² = 0.989) and the Willmott d-index (0.989) indicate the capability of the software in analyzing the performance of the plant. The software permits pre-analysis and manipulation of input data, helps in optimization, and exhibits the performance of an integrated plant visually on a graphical platform. Performance analysis of the whole system as well as the individual units is possible using the tool. The software, the first of its kind in its domain and in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization and operation of an advanced hybrid treatment plant for removal of arsenic from contaminated groundwater.
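
    The two agreement statistics quoted above are straightforward to reproduce for any observed/predicted series. The sketch below (Python with NumPy; the concentrations are hypothetical stand-ins, not the plant data) computes R² and Willmott's d-index.

    ```python
    import numpy as np

    def willmott_d(obs, pred):
        """Willmott's index of agreement d (0 to 1, with 1 meaning perfect agreement)."""
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        num = np.sum((obs - pred) ** 2)
        den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
        return 1.0 - num / den

    # Hypothetical observed vs. simulated permeate arsenic concentrations.
    obs = np.array([12.0, 9.5, 7.8, 6.1, 4.9, 3.7])
    pred = np.array([11.6, 9.8, 7.5, 6.3, 4.6, 3.9])

    r2 = np.corrcoef(obs, pred)[0, 1] ** 2
    print(f"R^2 = {r2:.3f}, Willmott d = {willmott_d(obs, pred):.3f}")
    ```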

  13. GenomicTools: a computational platform for developing high-throughput analytics in genomics.

    PubMed

    Tsirigos, Aristotelis; Haiminen, Niina; Bilal, Erhan; Utro, Filippo

    2012-01-15

    Recent advances in sequencing technology have resulted in the dramatic increase of sequencing data, which, in turn, requires efficient management of computational resources, such as computing time, memory requirements as well as prototyping of computational pipelines. We present GenomicTools, a flexible computational platform, comprising both a command-line set of tools and a C++ API, for the analysis and manipulation of high-throughput sequencing data such as DNA-seq, RNA-seq, ChIP-seq and MethylC-seq. GenomicTools implements a variety of mathematical operations between sets of genomic regions thereby enabling the prototyping of computational pipelines that can address a wide spectrum of tasks ranging from pre-processing and quality control to meta-analyses. Additionally, the GenomicTools platform is designed to analyze large datasets of any size by minimizing memory requirements. In practical applications, where comparable, GenomicTools outperforms existing tools in terms of both time and memory usage. The GenomicTools platform (version 2.0.0) was implemented in C++. The source code, documentation, user manual, example datasets and scripts are available online at http://code.google.com/p/ibm-cbc-genomic-tools.

  14. Designing and evaluating Brain Powered Games for cognitive training and rehabilitation in at-risk African children.

    PubMed

    Giordani, B; Novak, B; Sikorskii, A; Bangirana, P; Nakasujja, N; Winn, B M; Boivin, M J

    2015-01-01

    Valid, reliable, accessible, and cost-effective computer-training approaches can be important components in scaling up educational support across resource-poor settings, such as sub-Saharan Africa. The goal of the current study was to develop a computer-based training platform, the Michigan State University Games for Entertainment and Learning laboratory's Brain Powered Games (BPG) package that would be suitable for use with at-risk children within a rural Ugandan context and then complete an initial field trial of that package. After game development was completed with the use of local stimuli and sounds to match the context of the games as closely as possible to the rural Ugandan setting, an initial field study was completed with 33 children (mean age = 8.55 ± 2.29 years, range 6-12 years of age) with HIV in rural Uganda. The Test of Variables of Attention (TOVA), CogState computer battery, and the Non-Verbal Index from the Kaufman Assessment Battery for Children, 2nd edition (KABC-II) were chosen as the outcome measures for pre- and post-intervention testing. The children received approximately 45 min of BPG training several days per week for 2 months (24 sessions). Although some improvements in test scores were evident prior to BPG training, following training, children demonstrated clinically significant changes (significant repeated-measures outcomes with moderate to large effect sizes) on specific TOVA and CogState measures reflecting processing speed, attention, visual-motor coordination, maze learning, and problem solving. Results provide preliminary support for the acceptability, feasibility, and neurocognitive benefit of BPG and its utility as a model platform for computerized cognitive training in cross-cultural low-resource settings.

  15. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    PubMed

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services.

  16. A Platform-Independent Plugin for Navigating Online Radiology Cases.

    PubMed

    Balkman, Jason D; Awan, Omer A

    2016-06-01

    Software methods that enable navigation of radiology cases on various digital platforms differ between handheld devices and desktop computers. This has resulted in poor compatibility of online radiology teaching files across mobile smartphones, tablets, and desktop computers. A standardized, platform-independent, or "agnostic" approach for presenting online radiology content was produced in this work by leveraging modern hypertext markup language (HTML) and JavaScript web software technology. We describe the design and evaluation of this software, demonstrate its use across multiple viewing platforms, and make it publicly available as a model for future development efforts.

  17. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozacik, Stephen

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  18. Applications of the MapReduce programming framework to clinical big data analysis: current landscape and future trends

    PubMed Central

    2014-01-01

    The emergence of massive datasets in a clinical setting presents both challenges and opportunities in data storage and analysis. This so-called “big data” challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital that big data solutions be multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g. grid computing and graphical processing unit (GPU)), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing by replicating the computing tasks, and cloning the data chunks on different computing nodes across the computing cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. This paper is concluded by summarizing the potential usage of the MapReduce programming framework and Hadoop platform to process huge volumes of clinical data in medical health informatics related fields. PMID:25383096
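
    The Map and Reduce tasks described above can be illustrated with a toy, single-process word count (plain Python over synthetic records); a real deployment would run the same logic as Hadoop jobs over data stored in HDFS.

    ```python
    from collections import defaultdict
    from functools import reduce

    records = ["fever cough", "cough fatigue", "fever fever headache"]

    # Map: emit a (key, 1) pair for every token in every input record.
    mapped = [(token, 1) for record in records for token in record.split()]

    # Shuffle: group the intermediate pairs by key.
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)

    # Reduce: aggregate the values for each key.
    counts = {key: reduce(lambda a, b: a + b, values) for key, values in groups.items()}
    print(counts)   # {'fever': 3, 'cough': 2, 'fatigue': 1, 'headache': 1}
    ```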

  19. Applications of the MapReduce programming framework to clinical big data analysis: current landscape and future trends.

    PubMed

    Mohammed, Emad A; Far, Behrouz H; Naugler, Christopher

    2014-01-01

    The emergence of massive datasets in a clinical setting presents both challenges and opportunities in data storage and analysis. This so-called "big data" challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital that big data solutions be multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g. grid computing and graphical processing unit (GPU)), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing by replicating the computing tasks, and cloning the data chunks on different computing nodes across the computing cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. This paper is concluded by summarizing the potential usage of the MapReduce programming framework and Hadoop platform to process huge volumes of clinical data in medical health informatics related fields.

  20. Understanding the Sequence Preference of Recurrent RNA Building Blocks using Quantum Chemistry: The Intrastrand RNA Dinucleotide Platform.

    PubMed

    Mládek, Arnošt; Sponer, Judit E; Kulhánek, Petr; Lu, Xiang-Jun; Olson, Wilma K; Sponer, Jiří

    2012-01-10

    Folded RNA molecules are shaped by an astonishing variety of highly conserved noncanonical molecular interactions and backbone topologies. The dinucleotide platform is a widespread recurrent RNA modular building submotif formed by the side-by-side pairing of bases from two consecutive nucleotides within a single strand, with highly specific sequence preferences. This unique arrangement of bases is cemented by an intricate network of noncanonical hydrogen bonds and facilitated by a distinctive backbone topology. The present study investigates the gas-phase intrinsic stabilities of the three most common RNA dinucleotide platforms - 5'-GpU-3', ApA, and UpC - via state-of-the-art quantum-chemical (QM) techniques. The mean stability of base-base interactions decreases with sequence in the order GpU > ApA > UpC. Bader's atoms-in-molecules analysis reveals that the N2(G)…O4(U) hydrogen bond of the GpU platform is stronger than the corresponding hydrogen bonds in the other two platforms. The mixed-pucker sugar-phosphate backbone conformation found in most GpU platforms, in which the 5'-ribose sugar (G) is in the C2'-endo form and the 3'-sugar (U) in the C3'-endo form, is intrinsically more stable than the standard A-RNA backbone arrangement, partially as a result of a favorable O2'…O2P intra-platform interaction. Our results thus validate the hypothesis of Lu et al. (Lu Xiang-Jun, et al. Nucleic Acids Res. 2010, 38, 4868-4876), that the superior stability of GpU platforms is partially mediated by the strong O2'…O2P hydrogen bond. In contrast, ApA and especially UpC platform-compatible backbone conformations are rather diverse and do not display any characteristic structural features. The average stabilities of ApA and UpC derived backbone conformers are also lower than those of GpU platforms. Thus, the observed structural and evolutionary patterns of the dinucleotide platforms can be accounted for, to a large extent, by their intrinsic properties as described by modern QM calculations. In contrast, we show that the dinucleotide platform is not properly described in the course of atomistic explicit-solvent simulations. Our work also gives methodological insights into QM calculations of experimental RNA backbone geometries. Such calculations are inherently complicated by rather large data and refinement uncertainties in the available RNA experimental structures, which often preclude reliable energy computations.

  1. A forward view on reliable computers for flight control

    NASA Technical Reports Server (NTRS)

    Goldberg, J.; Wensley, J. H.

    1976-01-01

    The requirements for fault-tolerant computers for flight control of commercial aircraft are examined; it is concluded that the reliability requirements far exceed those typically quoted for space missions. Examination of circuit technology and alternative computer architectures indicates that the desired reliability can be achieved with several different computer structures, though there are obvious advantages to those that are more economic, more reliable, and, very importantly, more certifiable as to fault tolerance. Progress in this field is expected to bring about better computer systems that are more rigorously designed and analyzed even though computational requirements are expected to increase significantly.

  2. Turbomachine Sealing and Secondary Flows - Part 3. Part 3; Review of Power-Stream Support, Unsteady Flow Systems, Seal and Disk Cavity Flows, Engine Externals, and Life and Reliability Issues

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.; Steinetz, B. M.; Zaretsky, E. V.; Athavale, M. M.; Przekwas, A. J.

    2004-01-01

    The issues and components supporting the engine power stream are reviewed. It is essential that companies pay close attention to engine sealing issues, particularly on the high-pressure spool or high-pressure pumps. Small changes in these systems are reflected throughout the entire engine. Although cavity, platform, and tip sealing are complex and have a significant effect on component and engine performance, computational tools (e.g., NASA-developed INDSEAL, SCISEAL, and ADPAC) are available to help guide the designer and the experimenter. Gas turbine engine and rocket engine externals must all function efficiently with a high degree of reliability in order for the engine to run but often receive little attention until they malfunction. Within the open literature statistically significant data for critical engine components are virtually nonexistent; the classic approach is deterministic. Studies show that variations with loading can have a significant effect on component performance and life. Without validation data they are just studies. These variations and deficits in statistical databases require immediate attention.

  3. University Students Use of Computers and Mobile Devices for Learning and Their Reading Speed on Different Platforms

    ERIC Educational Resources Information Center

    Mpofu, Bongeka

    2016-01-01

    This research was aimed at the investigation of mobile device and computer use at a higher learning institution. The goal was to determine the current use of computers and mobile devices for learning and the students' reading speed on different platforms. The research was contextualised in a sample of students at the University of South Africa.…

  4. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    PubMed

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research: GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  5. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.

    PubMed

    Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code is hosted on the GitHub platform, with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data are stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and to send data to a JavaScript-based web client.

  6. Test Platforms for Model-Based Flight Research

    NASA Astrophysics Data System (ADS)

    Dorobantu, Andrei

    Demonstrating the reliability of flight control algorithms is critical to integrating unmanned aircraft systems into the civilian airspace. For many potential applications, design and certification of these algorithms will rely heavily on mathematical models of the aircraft dynamics. Therefore, the aerospace community must develop flight test platforms to support the advancement of model-based techniques. The University of Minnesota has developed a test platform dedicated to model-based flight research for unmanned aircraft systems. This thesis provides an overview of the test platform and its research activities in the areas of system identification, model validation, and closed-loop control for small unmanned aircraft.

  7. gProcess and ESIP Platforms for Satellite Imagery Processing over the Grid

    NASA Astrophysics Data System (ADS)

    Bacu, Victor; Gorgan, Dorian; Rodila, Denisa; Pop, Florin; Neagu, Gabriel; Petcu, Dana

    2010-05-01

    The Environment oriented Satellite Data Processing Platform (ESIP) is developed through the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) project, co-funded by the European Commission through FP7 [1]. The gProcess Platform [2] is a set of tools and services supporting the development and the execution over the Grid of workflow-based processing, and particularly satellite imagery processing. The ESIP [3], [4] is built on top of the gProcess platform by adding a set of satellite image processing software modules and meteorological algorithms. Satellite images can reveal and supply important information on earth surface parameters, climate data, pollution levels and weather conditions that can be used in different research areas. Generally, the processing algorithms for satellite images can be decomposed into a set of modules that form a graph representation of the processing workflow. Two types of workflows can be defined in the gProcess platform: the abstract workflow (PDG - Process Description Graph), in which the user defines the algorithm conceptually, and the instantiated workflow (iPDG - instantiated PDG), which is the mapping of the PDG pattern onto particular satellite image and meteorological data [5]. The gProcess platform allows the definition of complex workflows by combining data resources, operators, services and sub-graphs. The gProcess platform is developed for the gLite middleware that is available in the EGEE and SEE-GRID infrastructures [6]. gProcess exposes its functionality through web services [7]. The Editor Web Service retrieves information on available resources that are used to develop complex workflows (available operators, sub-graphs, services, supported resources, etc.). The Manager Web Service deals with resource management (uploading new resources such as workflows, operators, services, data, etc.) and in addition retrieves information on workflows. The Executor Web Service manages the execution of the instantiated workflows on the Grid infrastructure. In addition, this web service monitors the execution and generates statistical data that are important for evaluating performance and optimizing execution. The Viewer Web Service allows access to input and output data. To prove and validate the utility of the gProcess and ESIP platforms, the GreenView and GreenLand applications were developed. The GreenView functionality includes the refinement of meteorological data such as temperature, and the calibration of satellite images based on field measurements. The GreenLand application performs the classification of satellite images by using a set of vegetation indices. The gProcess and ESIP platforms are also used in the GiSHEO project [8] to support the processing of Earth Observation data over the Grid in eGLE (the GiSHEO eLearning Environment). Performance assessment experiments have revealed that workflow-based execution can improve the execution time of a satellite image processing algorithm [9]. Executing every workflow node on a different machine is not always efficient: some nodes take considerably longer to execute than others and slow down the overall run, so the total execution time suffers unless the workflow nodes are balanced correctly. Based on an optimization strategy, the workflow nodes can be grouped horizontally, vertically or in a hybrid approach. In this way, grouped operators are executed on one machine and the data transfer between workflow nodes is reduced. The dynamic nature of the Grid infrastructure makes it more exposed to failures, which can occur at the level of worker nodes, service availability, storage elements, etc. Currently, gProcess supports some basic error prevention and error management solutions; more advanced solutions will be integrated into the gProcess platform in the future. References [1] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ [2] Bacu V., Stefanut T., Rodila D., Gorgan D., Process Description Graph Composition by gProcess Platform. HiPerGRID - 3rd International Workshop on High Performance Grid Middleware, 28 May, Bucharest. Proceedings of CSCS-17 Conference, Vol.2., ISSN 2066-4451, pp. 423-430 (2009). [3] ESIP Platform, http://wiki.egee-see.org/index.php/JRA1_Commonalities [4] Gorgan D., Bacu V., Rodila D., Pop Fl., Petcu D., Experiments on ESIP - Environment oriented Satellite Data Processing Platform. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN: 978-975-403-510-0, pp. 157-166 (2009). [5] Radu, A., Bacu, V., Gorgan, D., Diagrammatic Description of Satellite Image Processing Workflow. Workshop on Grid Computing Applications Development (GridCAD) at the SYNASC Symposium, 28 September 2007, Timisoara, IEEE Computer Press, ISBN 0-7695-3078-8, pp. 341-348 (2007). [6] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on "Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications", 21-23 September, Cosenza, Italy, IEEE Computer Press, pp. 247-252 (2009). [7] Rodila D., Bacu V., Gorgan D., Integration of Satellite Image Operators as Workflows in the gProcess Application. Proceedings of ICCP2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27-29 Aug 2009, Cluj-Napoca, ISBN: 978-1-4244-5007-7, pp. 355-358 (2009). [8] GiSHEO consortium, Project site, http://gisheo.info.uvt.ro [9] Bacu V., Gorgan D., Graph Based Evaluation of Satellite Imagery Processing over Grid. ISPDC 2008 - 7th International Symposium on Parallel and Distributed Computing, July 1-5, 2008, Krakow, Poland. IEEE Computer Society 2008, ISBN: 978-0-7695-3472-5, pp. 147-154.
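
    The PDG/iPDG distinction described above can be pictured with a minimal sketch: an abstract workflow graph of operators whose data slots are later bound to concrete satellite and meteorological files. The class, operator and file names below are illustrative assumptions and not part of the gProcess API.

        # Minimal sketch of the PDG / iPDG idea: an abstract workflow graph of operators,
        # and its instantiation on concrete satellite image inputs. Class and operator
        # names are illustrative assumptions, not the gProcess API.
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str                                      # operator name, e.g. "calibrate"
            inputs: list = field(default_factory=list)     # upstream node names or data slots

        @dataclass
        class PDG:
            """Abstract workflow: operators wired together, data slots left unbound."""
            nodes: dict = field(default_factory=dict)

            def add(self, node):
                self.nodes[node.name] = node
                return self

            def instantiate(self, bindings):
                """Produce an iPDG by binding abstract data slots to concrete files."""
                return {"graph": self.nodes, "bindings": dict(bindings)}

        # Abstract graph: calibrate an image, then compute a vegetation index.
        pdg = PDG().add(Node("calibrate", ["raw_image", "field_measurements"])) \
                   .add(Node("ndvi", ["calibrate"]))

        # iPDG: the same pattern mapped onto particular satellite and meteorological data.
        ipdg = pdg.instantiate({"raw_image": "modis_tile_042.tif",
                                "field_measurements": "station_temps.csv"})
        print(ipdg["bindings"])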

  8. Coalescent: an open-science framework for importance sampling in coalescent theory.

    PubMed

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve the statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks for importance sampling, researchers often struggle to translate new sampling schemes computationally or to benchmark them against different schemes in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature and discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2⁸ programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
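
    As a generic illustration of the importance-sampling estimator such frameworks are built around (not the coalescent proposals themselves), the sketch below estimates an expectation under a target distribution by sampling from a proposal and reports the effective sample size computed from the importance weights. The toy target and proposal densities are placeholders.

        # Generic importance-sampling sketch (not the framework's actual coalescent
        # proposals): estimate E_p[f(X)] by sampling x from a proposal q and averaging
        # the weighted contributions w(x) * f(x) with w(x) = p(x) / q(x).
        import math
        import random

        def is_estimate(sample_proposal, proposal_pdf, target_pdf, contribution, n=10000):
            """Return (estimate, effective sample size)."""
            weights, values = [], []
            for _ in range(n):
                x = sample_proposal()
                w = target_pdf(x) / proposal_pdf(x)      # importance weight p(x)/q(x)
                weights.append(w)
                values.append(w * contribution(x))
            estimate = sum(values) / n
            ess = sum(weights) ** 2 / sum(w * w for w in weights)   # effective sample size
            return estimate, ess

        # Toy usage: estimate E_p[X] for p = Exp(1), f(x) = x, using proposal q = Exp(0.5).
        est, ess = is_estimate(
            sample_proposal=lambda: random.expovariate(0.5),
            proposal_pdf=lambda x: 0.5 * math.exp(-0.5 * x),
            target_pdf=lambda x: math.exp(-x),
            contribution=lambda x: x,
        )
        print(round(est, 3), round(ess))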

  9. Embedded-Based Graphics Processing Unit Cluster Platform for Multiple Sequence Alignments

    PubMed Central

    Wei, Jyh-Da; Cheng, Hui-Jun; Lin, Chun-Yuan; Ye, Jin; Yeh, Kuan-Yu

    2017-01-01

    High-end graphics processing units (GPUs), such as the NVIDIA Tesla/Fermi/Kepler series cards with thousands of cores per chip, have been widely applied to high-performance computing fields over the past decade. These desktop GPU cards must be installed in personal computers/servers with desktop CPUs, and the cost and power consumption of constructing a GPU cluster platform are very high. In recent years, NVIDIA released an embedded board, called Jetson Tegra K1 (TK1), which contains 4 ARM Cortex-A15 CPUs and 192 Compute Unified Device Architecture cores (belonging to the Kepler GPU family). The Jetson Tegra K1 has several advantages, such as low cost, low power consumption, and high applicability, and it has been applied to several specific applications. In our previous work, a bioinformatics platform with a single TK1 (STK platform) was constructed, and that work also showed that Web and mobile services can be implemented on the STK platform with a good cost-performance ratio by comparing the STK platform with desktop CPUs and GPUs. In this work, an embedded GPU cluster platform is constructed with multiple TK1s (MTK platform). Complex system installation and setup are necessary procedures at first. Then, 2 job assignment modes are designed for the MTK platform to provide services for users. Finally, ClustalW v2.0.11 and ClustalWtk are ported to the MTK platform. The experimental results showed that the speedup ratios reached 5.5 and 4.8 times for ClustalW v2.0.11 and ClustalWtk, respectively, when comparing 6 TK1s with a single TK1. The MTK platform is proven to be useful for multiple sequence alignments. PMID:28835734

  10. Embedded-Based Graphics Processing Unit Cluster Platform for Multiple Sequence Alignments.

    PubMed

    Wei, Jyh-Da; Cheng, Hui-Jun; Lin, Chun-Yuan; Ye, Jin; Yeh, Kuan-Yu

    2017-01-01

    High-end graphics processing units (GPUs), such as the NVIDIA Tesla/Fermi/Kepler series cards with thousands of cores per chip, have been widely applied to high-performance computing fields over the past decade. These desktop GPU cards must be installed in personal computers/servers with desktop CPUs, and the cost and power consumption of constructing a GPU cluster platform are very high. In recent years, NVIDIA released an embedded board, called Jetson Tegra K1 (TK1), which contains 4 ARM Cortex-A15 CPUs and 192 Compute Unified Device Architecture cores (belonging to the Kepler GPU family). The Jetson Tegra K1 has several advantages, such as low cost, low power consumption, and high applicability, and it has been applied to several specific applications. In our previous work, a bioinformatics platform with a single TK1 (STK platform) was constructed, and that work also showed that Web and mobile services can be implemented on the STK platform with a good cost-performance ratio by comparing the STK platform with desktop CPUs and GPUs. In this work, an embedded GPU cluster platform is constructed with multiple TK1s (MTK platform). Complex system installation and setup are necessary procedures at first. Then, 2 job assignment modes are designed for the MTK platform to provide services for users. Finally, ClustalW v2.0.11 and ClustalWtk are ported to the MTK platform. The experimental results showed that the speedup ratios reached 5.5 and 4.8 times for ClustalW v2.0.11 and ClustalWtk, respectively, when comparing 6 TK1s with a single TK1. The MTK platform is proven to be useful for multiple sequence alignments.
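
    A quick back-of-the-envelope reading of the reported numbers: the parallel efficiency implied by the measured speedups on 6 TK1 boards, as sketched below.

        # Parallel efficiency implied by the reported speedups on 6 TK1 boards.
        speedups = {"ClustalW v2.0.11": 5.5, "ClustalWtk": 4.8}
        n_boards = 6
        for app, s in speedups.items():
            print(f"{app}: speedup {s}x on {n_boards} TK1s -> efficiency {s / n_boards:.0%}")
        # ClustalW v2.0.11: ~92% efficiency; ClustalWtk: ~80% efficiency.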

  11. GPU-based High-Performance Computing for Radiation Therapy

    PubMed Central

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B.

    2014-01-01

    Recent developments in radiation therapy demand high computation power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this article, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. PMID:24486639

  12. OpenACC performance for simulating 2D radial dambreak using FVM HLLE flux

    NASA Astrophysics Data System (ADS)

    Gunawan, P. H.; Pahlevi, M. R.

    2018-03-01

    The aim of this paper is to investigate the performance of the OpenACC platform for computing a 2D radial dambreak. Here, the shallow water equations are used to describe and simulate the 2D radial dambreak with the finite volume method (FVM) using the HLLE flux. OpenACC is a directive-based parallel programming platform that offloads computation to GPU cores. In this research, the platform is used to minimize the computational time of the numerical scheme. The results show that using OpenACC, the computational time is reduced. For the dry and wet radial dambreak simulations using 2048 grids, the parallel computational times obtained are 575.984 s and 584.830 s, respectively. These results show the effectiveness of OpenACC when compared with the serial times of the dry and wet radial dambreak simulations, which are 28047.500 s and 29269.40 s, respectively.
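
    As a minimal illustration of the HLLE flux used by the finite volume scheme (shown here serially and in one dimension only, not the authors' 2D OpenACC implementation), the sketch below evaluates the numerical flux between two wet shallow-water states; dry cells would need special treatment.

        import numpy as np

        g = 9.81  # gravitational acceleration (m/s^2)

        def physical_flux(h, hu):
            """Exact flux of the 1D shallow water equations for state (h, hu)."""
            u = hu / h
            return np.array([hu, hu * u + 0.5 * g * h * h])

        def hlle_flux(hL, huL, hR, huR):
            """HLLE numerical flux between a left and a right cell state (wet cells only)."""
            uL, uR = huL / hL, huR / hR
            cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
            sL = min(uL - cL, uR - cR, 0.0)   # leftmost wave speed estimate (clipped at 0)
            sR = max(uL + cL, uR + cR, 0.0)   # rightmost wave speed estimate (clipped at 0)
            FL, FR = physical_flux(hL, huL), physical_flux(hR, huR)
            UL, UR = np.array([hL, huL]), np.array([hR, huR])
            return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

        # Dam-break-like interface: deep water at rest on the left, shallow on the right.
        print(hlle_flux(2.0, 0.0, 1.0, 0.0))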

  13. MACBenAbim: A Multi-platform Mobile Application for searching keyterms in Computational Biology and Bioinformatics.

    PubMed

    Oluwagbemi, Olugbenga O; Adewumi, Adewole; Esuruoso, Abimbola

    2012-01-01

    Computational biology and bioinformatics are gradually gaining ground in Africa and other developing nations of the world. However, in these countries, some of the challenges of computational biology and bioinformatics education are inadequate infrastructure and a lack of readily available complementary and motivational tools to support learning as well as research. This has lowered the morale of many promising undergraduates, postgraduates and researchers who might otherwise aspire to undertake future study in these fields. In this paper, we developed and described MACBenAbim (Multi-platform Mobile Application for Computational Biology and Bioinformatics), a flexible user-friendly tool to search for, define and describe the meanings of keyterms in computational biology and bioinformatics, thus expanding the frontiers of knowledge of the users. This tool also has the capability of visualizing results in a mobile multi-platform context. MACBenAbim is available from the authors for non-commercial purposes.

  14. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
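
    The sketch below is not CASRE or SMERFS code; it only illustrates the kind of model such tools execute on a selected failure data set, here a least-squares fit of the Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) to synthetic cumulative failure counts.

        # Not CASRE/SMERFS code: a hedged sketch of the kind of model such tools execute.
        # Fit the Goel-Okumoto mean value function m(t) = a * (1 - exp(-b t)) to
        # cumulative failure counts using least squares.
        import numpy as np
        from scipy.optimize import curve_fit

        def goel_okumoto(t, a, b):
            return a * (1.0 - np.exp(-b * t))

        # Synthetic failure data: test time (hours) and cumulative failures observed.
        t = np.array([10, 20, 40, 80, 160, 320], dtype=float)
        failures = np.array([8, 14, 22, 30, 36, 39], dtype=float)

        (a, b), _ = curve_fit(goel_okumoto, t, failures, p0=[40.0, 0.01])
        print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.4f}")
        print(f"expected residual faults after 320 h: {a - goel_okumoto(320, a, b):.1f}")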

  15. Workload Characterization of CFD Applications Using Partial Differential Equation Solvers

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Workload characterization is used for modeling and evaluating computing systems at different levels of detail. We present workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: the SGI Origin2000, the IBM SP-2, and a cluster of Intel Pentium Pro based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which result in the workload characterization. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied for performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high performance computing platforms and is useful for tuning these applications.

  16. Improvement in the amine glass platform by bubbling method for a DNA microarray

    PubMed Central

    Jee, Seung Hyun; Kim, Jong Won; Lee, Ji Hyeong; Yoon, Young Soo

    2015-01-01

    A glass platform with high sensitivity for a sexually transmitted disease microarray is described here. An amino-silane-based self-assembled monolayer was coated on the surface of a glass platform using a novel bubbling method. The optimized surface of the glass platform had highly uniform surface modifications using this method, as well as improved hybridization properties with capture probes in the DNA microarray. On the basis of these results, the improved glass platform serves as a highly reliable and optimal material for the DNA microarray. Moreover, in this study, we demonstrated that our glass platform, manufactured by utilizing the bubbling method, had higher uniformity, shorter processing time, lower background signal, and higher spot signal than the platforms manufactured by the general dipping method. The DNA microarray manufactured with a glass platform prepared using the bubbling method can be used as a clinical diagnostic tool. PMID:26468293

  17. Improvement in the amine glass platform by bubbling method for a DNA microarray.

    PubMed

    Jee, Seung Hyun; Kim, Jong Won; Lee, Ji Hyeong; Yoon, Young Soo

    2015-01-01

    A glass platform with high sensitivity for a sexually transmitted disease microarray is described here. An amino-silane-based self-assembled monolayer was coated on the surface of a glass platform using a novel bubbling method. The optimized surface of the glass platform had highly uniform surface modifications using this method, as well as improved hybridization properties with capture probes in the DNA microarray. On the basis of these results, the improved glass platform serves as a highly reliable and optimal material for the DNA microarray. Moreover, in this study, we demonstrated that our glass platform, manufactured by utilizing the bubbling method, had higher uniformity, shorter processing time, lower background signal, and higher spot signal than the platforms manufactured by the general dipping method. The DNA microarray manufactured with a glass platform prepared using the bubbling method can be used as a clinical diagnostic tool.

  18. A neotropical Miocene pollen database employing image-based search and semantic modeling.

    PubMed

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-08-01

    Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.
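
    A minimal sketch of the content-based retrieval step mentioned above: each image is reduced to a feature vector and database entries are ranked by cosine similarity to the query. The feature values and specimen names are placeholders, not data from the pollen database.

        # Hedged sketch of content-based image retrieval: each image is reduced to a
        # feature vector (e.g., shape/texture descriptors) and database entries are
        # ranked by cosine similarity to the query. Feature values here are placeholders.
        import numpy as np

        def cosine_rank(query, database):
            """Return database entries sorted by cosine similarity to the query vector."""
            q = query / np.linalg.norm(query)
            scores = {name: float(vec @ q / np.linalg.norm(vec)) for name, vec in database.items()}
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        database = {
            "specimen_A": np.array([0.8, 0.1, 0.3]),
            "specimen_B": np.array([0.2, 0.9, 0.4]),
            "specimen_C": np.array([0.7, 0.2, 0.2]),
        }
        query = np.array([0.75, 0.15, 0.25])
        for name, score in cosine_rank(query, database):
            print(f"{name}: similarity {score:.3f}")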

  19. An overview of wireless structural health monitoring for civil structures.

    PubMed

    Lynch, Jerome Peter

    2007-02-15

    Wireless monitoring has emerged in recent years as a promising technology that could greatly impact the field of structural monitoring and infrastructure asset management. This paper is a summary of research efforts that have resulted in the design of numerous wireless sensing unit prototypes explicitly intended for implementation in civil structures. Wireless sensing units integrate wireless communications and mobile computing with sensors to deliver a relatively inexpensive sensor platform. A key design feature of wireless sensing units is the collocation of computational power and sensors; the tight integration of computing with a wireless sensing unit provides sensors with the opportunity to self-interrogate measurement data. In particular, there is strong interest in using wireless sensing units to build structural health monitoring systems that interrogate structural data for signs of damage. After the hardware and the software designs of wireless sensing units are completed, the Alamosa Canyon Bridge in New Mexico is utilized to validate their accuracy and reliability. To improve the ability of low-cost wireless sensing units to detect the onset of structural damage, the wireless sensing unit paradigm is extended to include the capability to command actuators and active sensors.

  20. Use of Soft Computing Technologies For Rocket Engine Control

    NASA Technical Reports Server (NTRS)

    Trevino, Luis C.; Olcmen, Semih; Polites, Michael

    2003-01-01

    The problem addressed in this paper is to explore how Soft Computing Technologies (SCT) could be employed to further improve overall engine system reliability and performance. Specifically, this will be presented by enhancing rocket engine control and engine health management (EHM) using SCT coupled with conventional control technologies, and sound software engineering practices used in Marshall's Flight Software Group. The principal goals are to improve software management, software development time and maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control and EHM methodologies, but to provide alternative design choices for control, EHM, implementation, performance, and sustaining engineering. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion, software engineering for embedded systems, and soft computing technologies (i.e., neural networks, fuzzy logic, and Bayesian belief networks), much of which is presented in this paper. The first targeted demonstration rocket engine platform is the MC-1 (formerly FASTRAC Engine), which is simulated with hardware and software in the Marshall Avionics & Software Testbed laboratory that

  1. Development and Characteristics of a Mobile, Semi-Autonomous Floating Platform for in situ Lake Measurements

    NASA Astrophysics Data System (ADS)

    Barry, D.; Lemmin, U.; Le Dantec, N.; Zulliger, L.; Rusterholz, M.; Bolay, M.; Rossier, J.; Kangur, K.

    2013-12-01

    In the development of sustainable management strategies for lakes, more insight into their physical, chemical and ecological dynamics is needed. Field data obtained from various types of sensors with adequate spatial and temporal sampling rates are essential to better understand the processes that govern fluxes and pathways of water masses and transported compounds, whether for model validation or for monitoring purposes. One advantage of unmanned platforms is that they limit the disturbances typically affecting the quality of data collected on small vessels, including perturbations caused by movements of the onboard crew. We have developed a mobile, semi-autonomous floating platform with 8 h power autonomy using a 5 m long by 2.5 m wide catamaran. Our approach focused on modularity and high payload capacity in order to accommodate a large number of sensors in terms of both the electronic (power and data) and the mechanical constraints of integration. The software architecture and onboard electronics use National Instruments technology to simplify and standardize the integration of sensors, actuators and communication. Piecewise-movable deck sections allow optimizing platform stability depending on the payload. The entire system is controlled by a remote computer located on an accompanying vessel and connected via a wireless link with a range of over 1 km. Real-time transmission of GPS-stamped measurements allows immediate modifications of the survey plan if needed. The displacement of the platform is semi-autonomous, with the options of either an autopilot mode following a pre-planned course specified by waypoints or remote manual control from the accompanying vessel. Maintenance of permanent control over the platform displacement is required for safety reasons with respect to other users of the lake. Currently, the sensor payload comprises an array of fast temperature probes, a bottom-tracking ADCP and atmospheric sensors including a radiometer. A towed CTD with additional water quality sensors operated from a remotely controlled winch is presently being integrated. Field tests have shown that the platform is reliable, capable of collecting long transects of 2D lake data and collocated atmospheric boundary layer data, and adaptable for integrating new sensors.

  2. A methodology for secure recovery of spacecrafts based on a trusted hardware platform

    NASA Astrophysics Data System (ADS)

    Juliato, Marcio; Gebotys, Catherine

    2017-02-01

    This paper proposes a methodology for the secure recovery of spacecraft and of their cryptographic capabilities in emergency scenarios resulting from major unintentional failures and malicious attacks. The proposed approach employs trusted modules to achieve higher reliability and security levels in space missions due to the presence of integrity check capabilities as well as secure recovery mechanisms. Additionally, several recovery protocols are thoroughly discussed and analyzed against a wide variety of attacks. Exhaustive search attacks are examined in a wide variety of contexts and are shown to be infeasible and totally independent of the computational power of attackers. Experimental results have shown that the proposed methodology allows for the fast and secure recovery of spacecraft, demanding minimal implementation area, power consumption and bandwidth.

  3. Reliability modeling of fault-tolerant computer based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1987-01-01

    Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing credible reliability assessments a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, computational power, and ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state spaces, in the Markov sense, which result from replicated redundant hardware, and the modeling of factors that can reduce reliability without a concomitant depletion of hardware. Advanced fault-handling models are described and methods of acquiring and measuring parameters for these models are delineated.
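
    A small worked example of the redundancy trade-off such models capture (deliberately omitting coverage, voter failures, recovery and repair, which the advanced Markov models discussed above include): closed-form reliability of a simplex processor versus a triple-modular-redundant, majority-voted system with exponentially distributed failures.

        # Worked example of a basic redundancy trade-off (coverage, recovery and voter
        # failures are deliberately omitted): reliability of a simplex processor versus
        # a triple-modular-redundant (TMR) majority-voted system with failure rate lam.
        import math

        def r_simplex(lam, t):
            return math.exp(-lam * t)

        def r_tmr(lam, t):
            r = math.exp(-lam * t)                 # reliability of one channel
            return 3 * r**2 - 2 * r**3             # at least 2 of 3 channels working

        lam = 1e-4   # failures per hour (illustrative)
        for t in (10, 100, 1000, 10000):
            print(f"t={t:>6} h  simplex={r_simplex(lam, t):.6f}  TMR={r_tmr(lam, t):.6f}")
        # TMR wins at short mission times; once lam*t grows large it falls below the
        # simplex curve, which is one reason transient recovery and reconfiguration matter.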

  4. An integrated compact airborne multispectral imaging system using embedded computer

    NASA Astrophysics Data System (ADS)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expandability, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the operation of the filter wheel and stabilized platform, and image and POS data acquisition, and it stores the images and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expandability. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  5. 77 FR 28387 - Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-14

    ... practice recommendations on emergency alerting systems such as promoting E9-1-1 reliability and alerting platforms--Emergency Alert System and Common Alerting Protocol. DATES: June 6, 2012. ADDRESSES: Federal... Advisory Committee that will provide recommendations to the FCC regarding best practices and actions the...

  6. A measure of the signal-to-noise ratio of microarray samples and studies using gene correlations.

    PubMed

    Venet, David; Detours, Vincent; Bersini, Hugues

    2012-01-01

    The quality of gene expression data can vary dramatically from platform to platform, study to study, and sample to sample. As reliable statistical analysis rests on reliable data, determining such quality is of the utmost importance. Quality measures to spot problematic samples exist, but they are platform-specific, and cannot be used to compare studies. As a proxy for quality, we propose a signal-to-noise ratio for microarray data, the "Signal-to-Noise Applied to Gene Expression Experiments", or SNAGEE. SNAGEE is based on the consistency of gene-gene correlations. We applied SNAGEE to a compendium of 80 large datasets on 37 platforms, for a total of 24,380 samples, and assessed the signal-to-noise ratio of studies and samples. This allowed us to discover serious issues with three studies. We show that signal-to-noise ratios of both studies and samples are linked to the statistical significance of the biological results. We showed that SNAGEE is an effective way to measure data quality for most types of gene expression studies, and that it often outperforms existing techniques. Furthermore, SNAGEE is platform-independent and does not require raw data files. The SNAGEE R package is available in BioConductor.
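
    The sketch below is a simplified illustration of the underlying idea (not the SNAGEE algorithm itself): score a study by how well its gene-gene correlation matrix agrees with a reference correlation matrix, so that noisier data yields a lower score. The synthetic data and dimensions are arbitrary.

        # Simplified illustration (not the SNAGEE algorithm itself): score a study by how
        # well its gene-gene correlation matrix agrees with a reference correlation matrix
        # built from other data; noisier data yields lower agreement.
        import numpy as np

        rng = np.random.default_rng(0)

        def gene_correlations(expr):
            """Gene-gene Pearson correlation matrix for a genes x samples matrix."""
            return np.corrcoef(expr)

        def agreement(study_expr, reference_corr):
            """Correlation between the off-diagonal entries of two correlation matrices."""
            c = gene_correlations(study_expr)
            iu = np.triu_indices_from(c, k=1)
            return float(np.corrcoef(c[iu], reference_corr[iu])[0, 1])

        # Latent structure shared by all studies, plus study-specific noise.
        genes, samples = 50, 40
        latent = rng.normal(size=(genes, 5)) @ rng.normal(size=(5, samples))
        reference = gene_correlations(latent + 0.5 * rng.normal(size=(genes, samples)))
        clean_study = latent + 0.5 * rng.normal(size=(genes, samples))
        noisy_study = latent + 3.0 * rng.normal(size=(genes, samples))

        print("clean study agreement:", round(agreement(clean_study, reference), 3))
        print("noisy study agreement:", round(agreement(noisy_study, reference), 3))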

  7. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both a geospatial raster database management system and a raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need for tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  8. The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science

    PubMed Central

    Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo

    2008-01-01

    The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570

  9. Reliable computation from contextual correlations

    NASA Astrophysics Data System (ADS)

    Oestereich, André L.; Galvão, Ernesto F.

    2017-12-01

    An operational approach to the study of computation based on correlations considers black boxes with one-bit inputs and outputs, controlled by a limited classical computer capable only of performing sums modulo two. In this setting, it was shown that noncontextual correlations do not provide any extra computational power, while contextual correlations were found to be necessary for the deterministic evaluation of nonlinear Boolean functions. Here we investigate the requirements for reliable computation in this setting; that is, the evaluation of any Boolean function with success probability bounded away from 1/2. We show that bipartite CHSH quantum correlations suffice for reliable computation. We also prove that an arbitrarily small violation of a multipartite Greenberger-Horne-Zeilinger noncontextuality inequality also suffices for reliable computation.
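
    A hedged numeric illustration of the claim, assuming the standard mapping (due to Anders and Browne) in which a parity, XOR-only control computer evaluates an AND/NAND gate with success probability equal to the probability of winning the CHSH game: quantum correlations reach cos²(π/8) ≈ 0.854, which is bounded away from 1/2 and above the noncontextual bound of 3/4.

        # Hedged numeric illustration, assuming the standard mapping (Anders and Browne)
        # in which the probability of correctly evaluating an AND/NAND gate on a parity
        # (XOR-only) control computer equals the probability of winning the CHSH game.
        import math

        p_classical = 3 / 4                      # best noncontextual (local) CHSH success
        p_quantum = math.cos(math.pi / 8) ** 2   # Tsirelson-bound CHSH success ~ 0.854
        p_pr_box = 1.0                           # hypothetical PR box: deterministic NAND

        for name, p in [("noncontextual", p_classical), ("quantum CHSH", p_quantum),
                        ("PR box", p_pr_box)]:
            margin = p - 0.5
            print(f"{name:>14}: gate success {p:.4f}, bounded away from 1/2 by {margin:.4f}")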

  10. Social Computing as Next-Gen Learning Paradigm: A Platform and Applications

    NASA Astrophysics Data System (ADS)

    Margherita, Alessandro; Taurino, Cesare; Del Vecchio, Pasquale

    As a field at the intersection of computer science and human behavior, social computing can contribute significantly to the endeavor of innovating how individuals and groups interact for learning and working purposes. In particular, the generation of Internet applications tagged as Web 2.0 provides an opportunity to create new “environments” where people can exchange knowledge and experience, create new knowledge and learn together. This chapter illustrates the design and application of a prototypal platform which embeds tools such as blogs, wikis, folksonomies and RSS in a unique web-based system. This platform has been developed to support a case-based and project-driven learning strategy for the development of business and technology management competencies in undergraduate and graduate education programs. A set of illustrative scenarios is described to show how a learning community can be promoted, created, and sustained through the technological platform.

  11. A generic, cost-effective, and scalable cell lineage analysis platform

    PubMed Central

    Biezuner, Tamir; Spiro, Adam; Raz, Ofir; Amir, Shiran; Milo, Lilach; Adar, Rivka; Chapal-Ilani, Noa; Berman, Veronika; Fried, Yael; Ainbinder, Elena; Cohen, Galit; Barr, Haim M.; Halaban, Ruth; Shapiro, Ehud

    2016-01-01

    Advances in single-cell genomics enable commensurate improvements in methods for uncovering lineage relations among individual cells. Current sequencing-based methods for cell lineage analysis depend on low-resolution bulk analysis or rely on extensive single-cell sequencing, which is not scalable and could be biased by functional dependencies. Here we show an integrated biochemical-computational platform for generic single-cell lineage analysis that is retrospective, cost-effective, and scalable. It consists of a biochemical-computational pipeline that inputs individual cells, produces targeted single-cell sequencing data, and uses it to generate a lineage tree of the input cells. We validated the platform by applying it to cells sampled from an ex vivo grown tree and analyzed its feasibility landscape by computer simulations. We conclude that the platform may serve as a generic tool for lineage analysis and thus pave the way toward large-scale human cell lineage discovery. PMID:27558250

  12. An interactive parallel programming environment applied in atmospheric science

    NASA Technical Reports Server (NTRS)

    vonLaszewski, G.

    1996-01-01

    This article introduces an interactive parallel programming environment (IPPE) that simplifies the generation and execution of parallel programs. One of the tasks of the environment is to generate message-passing parallel programs for homogeneous and heterogeneous computing platforms. The parallel programs are represented by using visual objects. This is accomplished with the help of a graphical programming editor that is implemented in Java and enables portability to a wide variety of computer platforms. In contrast to other graphical programming systems, reusable parts of the programs can be stored in a program library to support rapid prototyping. In addition, runtime performance data on different computing platforms is collected in a database. A selection process determines dynamically the software and the hardware platform to be used to solve the problem in minimal wall-clock time. The environment is currently being tested on a Grand Challenge problem, the NASA four-dimensional data assimilation system.

  13. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms

    NASA Astrophysics Data System (ADS)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
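
    One simple device-level load-balancing strategy of the kind alluded to above (an illustrative sketch, not the authors' implementation) is to split the total photon count across devices in proportion to throughput measured in a short calibration run; the device names and rates below are assumed values.

        # Hedged sketch of one device-level load-balancing strategy: split the total
        # photon count across devices in proportion to throughput measured in a short
        # calibration run. Device names and rates are illustrative, not measured values.
        def partition_photons(total_photons, throughput):
            """Assign photons per device proportionally to photons/second throughput."""
            total_rate = sum(throughput.values())
            shares = {dev: int(total_photons * rate / total_rate)
                      for dev, rate in throughput.items()}
            # Give any remainder from integer truncation to the fastest device.
            fastest = max(throughput, key=throughput.get)
            shares[fastest] += total_photons - sum(shares.values())
            return shares

        throughput = {"gpu0": 9.0e6, "gpu1": 7.5e6, "cpu": 1.2e6}   # photons per second
        print(partition_photons(10_000_000, throughput))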

  14. Cellular computational platform and neurally inspired elements thereof

    DOEpatents

    Okandan, Murat

    2016-11-22

    A cellular computational platform is disclosed that includes a multiplicity of functionally identical, repeating computational hardware units that are interconnected electrically and optically. Each computational hardware unit includes a reprogrammable local memory and has interconnections to other such units that have reconfigurable weights. Each computational hardware unit is configured to transmit signals into the network for broadcast in a protocol-less manner to other such units in the network, and to respond to protocol-less broadcast messages that it receives from the network. Each computational hardware unit is further configured to reprogram the local memory in response to incoming electrical and/or optical signals.

  15. A simple and reliable health monitoring system for shoulder health: proposal.

    PubMed

    Liu, Shuo-Fang; Lee, Yann-Long

    2014-02-26

    The current health care system is complex and inefficient. A simple and reliable health monitoring system that can help patients perform medical self-diagnosis is seldom readily available. Because the medical system is vast and complex, it has hampered or delayed patients in seeking medical advice or treatment in a timely manner, which may affect a patient's chances of recovery, especially for those with severe illnesses such as cancer and heart disease. The purpose of this paper is to propose a methodology for designing a simple, low-cost, Internet-based health-screening platform. This health-screening platform will enable patients to perform medical self-diagnosis over the Internet. Historical data have shown the importance of early detection in ensuring that patients receive proper treatment and recover quickly. The platform is designed with special emphasis on the user interface. A standard Web-based user-interface design is adopted so the user feels at ease operating in a familiar Web environment. In addition, graphics such as charts and graphs are used generously to help users visualize and understand the diagnostic results. The system is developed using the hypertext preprocessor (PHP) programming language. One important feature of this system platform is that it is built as a stand-alone platform, which tends to provide better user privacy and security. The prototype system platform was developed by the National Cheng Kung University Ergonomic and Design Laboratory. The completed prototype of this system platform was submitted to the Taiwan Medical Institute for evaluation. The evaluation by 120 participants showed that this platform system is a highly effective tool for health-screening applications and has great potential for improving the quality of medical care for the general public.

  16. A computational platform for MALDI-TOF mass spectrometry data: application to serum and plasma samples.

    PubMed

    Mantini, Dante; Petrucci, Francesca; Pieragostino, Damiana; Del Boccio, Piero; Sacchetta, Paolo; Candiano, Giovanni; Ghiggeri, Gian Marco; Lugaresi, Alessandra; Federici, Giorgio; Di Ilio, Carmine; Urbani, Andrea

    2010-01-03

    Mass spectrometry (MS) is becoming the gold standard for biomarker discovery. Several MS-based bioinformatics methods have been proposed for this application, but the divergence of the findings by different research groups on the same MS data suggests that a reliable method has not yet been established. In this work, we propose an integrated software platform, MASCAP, intended for comparative biomarker detection from MALDI-TOF MS data. MASCAP integrates denoising and feature extraction algorithms, which have already been shown to provide consistent peaks across mass spectra; furthermore, it relies on statistical analysis and graphical tools to compare the results between groups. The effectiveness in mass spectrum processing is demonstrated using MALDI-TOF data, as well as SELDI-TOF data. The usefulness in detecting potential protein biomarkers is shown by comparing MALDI-TOF mass spectra collected from serum and plasma samples belonging to the same clinical population. The analysis approach implemented in MASCAP may simplify biomarker detection by assisting the recognition of proteomic expression signatures of the disease. A MATLAB implementation of the software and the data used for its validation are available at http://www.unich.it/proteomica/bioinf.
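
    The sketch below is not MASCAP itself; it only illustrates the smooth-then-pick-peaks step that feature extraction for MS spectra typically involves, applied to a synthetic spectrum with SciPy.

        # Not MASCAP itself: a minimal sketch of the smooth-then-pick-peaks step that
        # feature extraction for MS spectra typically involves, on a synthetic spectrum.
        import numpy as np
        from scipy.signal import savgol_filter, find_peaks

        rng = np.random.default_rng(1)
        mz = np.linspace(1000, 10000, 5000)                      # m/z axis
        spectrum = sum(a * np.exp(-0.5 * ((mz - c) / 8.0) ** 2)  # three Gaussian peaks
                       for a, c in [(50, 2500), (80, 4600), (35, 7200)])
        spectrum += rng.normal(scale=2.0, size=mz.size)          # additive noise

        smoothed = savgol_filter(spectrum, window_length=21, polyorder=3)  # denoising
        peaks, props = find_peaks(smoothed, height=10, prominence=5)       # peak picking

        for i in peaks:
            print(f"peak at m/z ~ {mz[i]:.0f}, intensity {smoothed[i]:.1f}")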

  17. Development of Distributed Research Center for analysis of regional climatic and environmental changes

    NASA Astrophysics Data System (ADS)

    Gordov, E.; Shiklomanov, A.; Okladnikov, I.; Prusevich, A.; Titov, A.

    2016-11-01

    We present an approach and the first results of a collaborative project being carried out by a joint team of researchers from the Institute of Monitoring of Climatic and Ecological Systems, Russia, and the Earth Systems Research Center, UNH, USA. Its main objective is the development of a hardware and software platform prototype of a Distributed Research Center (DRC) for monitoring and projecting regional climatic and environmental changes in the northern extratropical areas. The DRC should provide specialists working in climate-related sciences and decision-makers with accurate and detailed climatic characteristics for the selected area, and with reliable and affordable tools for their in-depth statistical analysis and for studies of the effects of climate change. Within the framework of the project, new approaches to cloud processing and analysis of large geospatial datasets (big geospatial data) inherent to climate change studies are developed and deployed on the technical platforms of both institutions. We discuss here the state of the art in this domain, describe web-based information-computational systems developed by the partners, justify the methods chosen to reach the project goal, and briefly list the results obtained so far.

  18. Care 3 model overview and user's guide, first revision

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Petersen, P. L.

    1985-01-01

    A manual was written to introduce the CARE III (Computer-Aided Reliability Estimation) capability to reliability and design engineers who are interested in predicting the reliability of highly reliable fault-tolerant systems. It was also structured to serve as a quick-look reference manual for more experienced users. The guide covers CARE III modeling and reliability predictions for execution on the CDC Cyber 170 series computers, the DEC VAX-11/700 series computers, and most machines that compile ANSI Standard FORTRAN 77.

  19. The Effect of In-Service Training of Computer Science Teachers on Scratch Programming Language Skills Using an Electronic Learning Platform on Programming Skills and the Attitudes towards Teaching Programming

    ERIC Educational Resources Information Center

    Alkaria, Ahmed; Alhassan, Riyadh

    2017-01-01

    This study was conducted to examine the effect of in-service training of computer science teachers in Scratch language using an electronic learning platform on acquiring programming skills and attitudes towards teaching programming. The sample of this study consisted of 40 middle school computer science teachers. They were assigned into two…

  20. Superior model for fault tolerance computation in designing nano-sized circuit systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, N. S. S., E-mail: narinderjit@petronas.com.my; Muthuvalu, M. S., E-mail: msmuthuvalu@gmail.com; Asirvadam, V. S., E-mail: vijanth-sagayan@petronas.com.my

    2014-10-24

    As CMOS technology scales nanometrically, reliability turns out to be a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability quickly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper first looks into the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Secondly, using the developed automated tool, the paper presents a comparative study of reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than the reliability measure by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
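
    A hedged sketch of the probabilistic-gate idea behind PGM-style evaluation (treating signals as independent, the usual PGM approximation, and using an illustrative two-gate circuit rather than any benchmark from the paper): propagate the probability that each line carries a logic 1 through gates that flip their output with probability ε, then average agreement with the fault-free output over all input patterns.

        # Hedged sketch of the probabilistic-gate idea behind PGM-style tools: propagate
        # the probability that each line is logic 1 through gates that flip their output
        # with probability eps, then compare with the fault-free output for each input
        # pattern. Signals are treated as independent (the usual PGM approximation).
        from itertools import product

        def noisy(p_ideal_one, eps):
            """Probability the gate outputs 1 when its fault-free output is 1 w.p. p."""
            return p_ideal_one * (1 - eps) + (1 - p_ideal_one) * eps

        def nand(p_a, p_b, eps):
            return noisy(1 - p_a * p_b, eps)

        def circuit(p_a, p_b, p_c, eps):
            """Small illustrative circuit: out = NAND(NAND(a, b), c)."""
            return nand(nand(p_a, p_b, eps), p_c, eps)

        def reliability(eps):
            """Average probability that the noisy output matches the fault-free output."""
            total = 0.0
            for a, b, c in product([0, 1], repeat=3):
                ideal = circuit(a, b, c, eps=0.0)            # exact 0 or 1 for exact inputs
                p_one = circuit(a, b, c, eps=eps)            # noisy probability of a 1
                total += p_one if ideal == 1 else 1 - p_one
            return total / 8

        for eps in (0.001, 0.01, 0.05):
            print(f"gate error {eps}: circuit reliability {reliability(eps):.4f}")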

  1. iSPHERE - A New Approach to Collaborative Research and Cloud Computing

    NASA Astrophysics Data System (ADS)

    Al-Ubaidi, T.; Khodachenko, M. L.; Kallio, E. J.; Harry, A.; Alexeev, I. I.; Vázquez-Poletti, J. L.; Enke, H.; Magin, T.; Mair, M.; Scherf, M.; Poedts, S.; De Causmaecker, P.; Heynderickx, D.; Congedo, P.; Manolescu, I.; Esser, B.; Webb, S.; Ruja, C.

    2015-10-01

    The project iSPHERE (integrated Scientific Platform for HEterogeneous Research and Engineering), which has been proposed for Horizon 2020 (EINFRA-9-2015, [1]), aims at creating a next generation Virtual Research Environment (VRE) that embraces existing and emerging technologies and standards in order to provide a versatile platform for scientific investigations and collaboration. The presentation will introduce the large project consortium, provide a comprehensive overview of iSPHERE's basic concepts and approaches and outline general user requirements that the VRE will strive to satisfy. An overview of the envisioned architecture will be given, focusing on the adapted Service Bus concept, i.e. the "Scientific Service Bus" as it is called in iSPHERE. The bus will act as a central hub for all communication and user access, and will be implemented in the course of the project. The agile approach [2] that has been chosen for detailed elaboration and documentation of user requirements, as well as for the actual implementation of the system, will be outlined and its motivation and basic structure will be discussed. The presentation will show which user communities will benefit and which concrete problems, faced by scientific investigations today, will be tackled by the system. Another focus of the presentation is iSPHERE's seamless integration of cloud computing resources and how these will benefit scientific modeling teams by providing a reliable and web based environment for cloud based model execution, storage of results, and comparison with measurements, including fully web based tools for data mining, analysis and visualization. The envisioned creation of a dedicated data model for experimental plasma physics will also be discussed. It will be shown why the Scientific Service Bus provides an ideal basis for integrating a number of data models and communication protocols and for providing mechanisms for data exchange across multiple and even multidisciplinary platforms.

  2. Environmental Detectives--The Development of an Augmented Reality Platform for Environmental Simulations

    ERIC Educational Resources Information Center

    Klopfer, Eric; Squire, Kurt

    2008-01-01

    The form factors of handheld computers make them increasingly popular among K-12 educators. Although some compelling examples of educational software for handhelds exist, we believe that the potential of this platform is just being discovered. This paper reviews innovative applications of mobile computing for both education and entertainment…

  3. Beam Dynamics Simulation Platform and Studies of Beam Breakup in Dielectric Wakefield Structures

    NASA Astrophysics Data System (ADS)

    Schoessow, P.; Kanareykin, A.; Jing, C.; Kustov, A.; Altmark, A.; Gai, W.

    2010-11-01

    A particle-Green's function beam dynamics code (BBU-3000) to study beam breakup effects is incorporated into a parallel computing framework based on the Boinc software environment, and supports both task farming on a heterogeneous cluster and local grid computing. User access to the platform is through a web browser.

  4. Assessing the Decision Process towards Bring Your Own Device

    ERIC Educational Resources Information Center

    Koester, Richard F.

    2017-01-01

    Information technology continues to evolve to the point where mobile technologies--such as smart phones, tablets, and ultra-mobile computers--have the embedded flexibility and power to be a ubiquitous platform to fulfill all of a user's computing needs. Mobile technology users view these platforms as adaptable enough to be the single solution for…

  5. Multivariate Gradient Analysis for Evaluating and Visualizing a Learning System Platform for Computer Programming

    ERIC Educational Resources Information Center

    Mather, Richard

    2015-01-01

    This paper explores the application of canonical gradient analysis to evaluate and visualize student performance and acceptance of a learning system platform. The subject of evaluation is a first year BSc module for computer programming. This uses "Ceebot," an animated and immersive game-like development environment. Multivariate…

  6. The community FabLab platform: applications and implications in biomedical engineering.

    PubMed

    Stephenson, Makeda K; Dow, Douglas E

    2014-01-01

    Skill development in science, technology, engineering and math (STEM) education presents one of the most formidable challenges of modern society. The Community FabLab platform presents a viable solution. Each FabLab contains a suite of modern computer numerical control (CNC) equipment, electronics and computing hardware, and design, programming, computer-aided design (CAD) and computer-aided machining (CAM) software. FabLabs are community and educational resources and are open to the public. Development of STEM-based workforce skills such as digital fabrication and advanced manufacturing can be enhanced using this platform. Particularly notable is the potential of the FabLab platform in STEM education. The active learning environment engages and supports a diversity of learners, while the iterative learning that is supported by the FabLab rapid prototyping platform facilitates depth of understanding, creativity, innovation and mastery. The product- and project-based learning that occurs in FabLabs develops in the student a personal sense of accomplishment, self-awareness, and command of the material and technology. This helps build the interest and confidence necessary to excel in STEM and throughout life. Finally, the introduction and use of relevant technologies at every stage of the education process ensures technical familiarity and the broad knowledge base needed for work in STEM-based fields. Biomedical engineering education strives to cultivate broad technical adeptness, creativity, interdisciplinary thought, and an ability to form deep conceptual understanding of complex systems. The FabLab platform is well designed to enhance biomedical engineering education.

  7. Comparison of scientific computing platforms for MCNP4A Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.; Brockhoff, R.C.

    1994-04-01

    The performance of seven computer platforms is evaluated with the widely used and internationally available MCNP4A Monte Carlo radiation transport code. All results are reproducible and are presented in such a way as to enable comparison with computer platforms not in the study. The authors observed that the HP/9000-735 workstation runs MCNP 50% faster than the Cray YMP 8/64. Compared with the Cray YMP 8/64, the IBM RS/6000-560 is 68% as fast, the Sun Sparc10 is 66% as fast, the Silicon Graphics ONYX is 90% as fast, the Gateway 2000 model 4DX2-66V personal computer is 27% as fast, and the Sun Sparc2 is 24% as fast. In addition to comparing the timing performance of the seven platforms, the authors observe that changes in compilers and software over the past 2 yr have resulted in only modest performance improvements, hardware improvements have enhanced performance by less than a factor of approximately 3, timing studies are very problem dependent, and MCNP4A runs about as fast as MCNP4.

  8. Homomorphic encryption experiments on IBM's cloud quantum computing platform

    NASA Astrophysics Data System (ADS)

    Huang, He-Liang; Zhao, You-Wei; Li, Tan; Li, Feng-Guang; Du, Yu-Tao; Fu, Xiang-Qun; Zhang, Shuo; Wang, Xiang; Bao, Wan-Su

    2017-02-01

    Quantum computing has undergone rapid development in recent years. Owing to limitations on scalability, personal quantum computers still seem slightly unrealistic in the near future. The first practical quantum computer for ordinary users is likely to be on the cloud. However, the adoption of cloud computing is possible only if security is ensured. Homomorphic encryption is a cryptographic protocol that allows computation to be performed on encrypted data without decrypting them, so it is well suited to cloud computing. Here, we first applied homomorphic encryption on IBM's cloud quantum computer platform. In our experiments, we successfully implemented a quantum algorithm for linear equations while protecting our privacy. This demonstration opens a feasible path to the next stage of development of cloud quantum information technology.

  9. A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2013-01-01

    Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real-time or near real-time performance when applied to critical clinical applications like image-assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduce a non-regular data partition algorithm that uses K-means clustering to group the landmarks according to the number of available processing cores, which optimizes memory usage and data transfer. We tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform, and the results demonstrated a significant speedup over the sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by design, so the parallel algorithm can be extended to other computing platforms as well as to other point matching related applications. PMID:24308014
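    The record describes partitioning landmarks with K-means so that each processing core receives a spatially coherent group. The sketch below illustrates that idea under stated assumptions (a basic NumPy k-means, 2-D landmarks, one cluster per core); it is not the authors' Cell/B.E. implementation.

```python
# Illustrative sketch only (not the authors' code): group landmark points into
# one cluster per available core with a basic k-means, so that nearby landmarks
# are matched on the same core and data transfer stays local.
import numpy as np

def kmeans_partition(points: np.ndarray, n_cores: int, n_iter: int = 50, seed: int = 0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), n_cores, replace=False)]
    for _ in range(n_iter):
        # Assign each landmark to its nearest cluster center.
        labels = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        # Move each center to the mean of its assigned landmarks.
        for k in range(n_cores):
            if np.any(labels == k):
                centers[k] = points[labels == k].mean(axis=0)
    return labels  # labels[i] = core that will process landmark i

landmarks = np.random.rand(1000, 2)               # hypothetical 2-D landmark coordinates
core_assignment = kmeans_partition(landmarks, n_cores=8)
```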

  10. A game-based platform for crowd-sourcing biomedical image diagnosis and standardized remote training and education of diagnosticians

    NASA Astrophysics Data System (ADS)

    Feng, Steve; Woo, Minjae; Chandramouli, Krithika; Ozcan, Aydogan

    2015-03-01

    Over the past decade, crowd-sourcing complex image analysis tasks to a human crowd has emerged as an alternative to energy-inefficient and difficult-to-implement computational approaches. Following this trend, we have developed a mathematical framework for statistically combining human crowd-sourcing of biomedical image analysis and diagnosis through games. Using a web-based smart game (BioGames), we demonstrated this platform's effectiveness for telediagnosis of malaria from microscopic images of individual red blood cells (RBCs). After public release in early 2012 (http://biogames.ee.ucla.edu), more than 3000 gamers (experts and non-experts) used this BioGames platform to diagnose over 2800 distinct RBC images, marking them as positive (infected) or negative (non-infected). Furthermore, we asked expert diagnosticians to tag the same set of cells with labels of positive, negative, or questionable (insufficient information for a reliable diagnosis) and statistically combined their decisions to generate a gold standard malaria image library. Our framework utilized minimally trained gamers' diagnoses to generate a set of statistical labels with an accuracy that is within 98% of our gold standard image library, demonstrating the "wisdom of the crowd". Using the same image library, we have recently launched a web-based malaria training and educational game allowing diagnosticians to compare their performance with their peers. After diagnosing a set of ~500 cells per game, diagnosticians can compare their quantified scores against a leaderboard and view their misdiagnosed cells. Using this platform, we aim to expand our gold standard library with new RBC images and provide a quantified digital tool for measuring and improving diagnostician training globally.
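    As a much-simplified illustration of combining repeated crowd labels per image, the sketch below uses a plain majority vote and compares the result against hypothetical expert labels; the BioGames framework itself uses a more elaborate statistical combination.

```python
# Simplified illustration (not the authors' statistical framework): combine
# repeated gamer labels per red-blood-cell image by majority vote and compare
# the result with hypothetical expert "gold standard" labels.
from collections import defaultdict

def combine_labels(votes):
    """votes: list of (image_id, label) with label in {'positive', 'negative'}."""
    tally = defaultdict(lambda: {"positive": 0, "negative": 0})
    for image_id, label in votes:
        tally[image_id][label] += 1
    return {img: max(counts, key=counts.get) for img, counts in tally.items()}

crowd_votes = [("rbc_001", "positive"), ("rbc_001", "positive"), ("rbc_001", "negative"),
               ("rbc_002", "negative"), ("rbc_002", "negative")]
gold = {"rbc_001": "positive", "rbc_002": "negative"}   # hypothetical expert labels
combined = combine_labels(crowd_votes)
accuracy = sum(combined[i] == gold[i] for i in gold) / len(gold)
print(combined, accuracy)
```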

  11. Development of a UAV system for VNIR-TIR acquisitions in precision agriculture

    NASA Astrophysics Data System (ADS)

    Misopolinos, L.; Zalidis, Ch.; Liakopoulos, V.; Stavridou, D.; Katsigiannis, P.; Alexandridis, T. K.; Zalidis, G.

    2015-06-01

    Adoption of precision agriculture techniques requires the development of specialized tools that provide spatially distributed information. Both flying platforms and airborne sensors are continuously evolving to cover the needs of plant and soil sensing at affordable cost. Due to payload restrictions, flying platforms are usually limited to carrying a single sensor on board. The aim of this work is to present the development of a vertical take-off and landing autonomous unmanned aerial vehicle (VTOL UAV) system for the simultaneous acquisition of high resolution vertical images at visible, near infrared (VNIR) and thermal infrared (TIR) wavelengths. A system was developed that can trigger two cameras simultaneously in a fully automated process with no pilot intervention. A commercial unmanned hexacopter UAV platform was optimized to increase reliability, ease of operation and automation. The system's communication platform is based on a reduced instruction set computing (RISC) processor running Linux OS with custom-developed drivers, keeping cost and weight to a minimum. Special software was also developed for automated image capture, data processing and on-board data and metadata storage. The system was tested over a kiwifruit field in northern Greece, at flying heights of 70 and 100 m above the ground. The acquired images were mosaicked and geo-corrected. Images from both flying heights were of good quality and revealed unprecedented detail within the field. The normalized difference vegetation index (NDVI) was calculated along with the thermal image in order to locate stressors accurately and derive other parameters related to crop productivity. Compared to other available sources of data, this system can provide low cost, high resolution and easily repeatable information to cover the requirements of precision agriculture.
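    For reference, the NDVI mentioned above is computed per pixel as (NIR - Red) / (NIR + Red) from co-registered bands. The sketch below shows that standard calculation; the array names are placeholders and do not reflect the authors' processing chain.

```python
# Sketch of the standard NDVI computation, NDVI = (NIR - Red) / (NIR + Red),
# applied per pixel to co-registered near-infrared and red bands (the arrays
# here are placeholders, not the authors' imagery).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    denom[denom == 0] = np.nan          # avoid division by zero over no-data pixels
    return (nir - red) / denom

nir_band = np.random.randint(0, 255, (100, 100))   # placeholder imagery
red_band = np.random.randint(0, 255, (100, 100))
ndvi_map = ndvi(nir_band, red_band)                # values in [-1, 1]
```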

  12. Portability and Cross-Platform Performance of an MPI-Based Parallel Polygon Renderer

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1999-01-01

    Visualizing the results of computations performed on large-scale parallel computers is a challenging problem, due to the size of the datasets involved. One approach is to perform the visualization and graphics operations in place, exploiting the available parallelism to obtain the necessary rendering performance. Over the past several years, we have been developing algorithms and software to support visualization applications on NASA's parallel supercomputers. Our results have been incorporated into a parallel polygon rendering system called PGL. PGL was initially developed on tightly-coupled distributed-memory message-passing systems, including Intel's iPSC/860 and Paragon, and IBM's SP2. Over the past year, we have ported it to a variety of additional platforms, including the HP Exemplar, SGI Origin2000, Cray T3E, and clusters of Sun workstations. In implementing PGL, we have had two primary goals: cross-platform portability and high performance. Portability is important because (1) our manpower resources are limited, making it difficult to develop and maintain multiple versions of the code, and (2) NASA's complement of parallel computing platforms is diverse and subject to frequent change. Performance is important in delivering adequate rendering rates for complex scenes and ensuring that parallel computing resources are used effectively. Unfortunately, these two goals are often at odds. In this paper we report on our experiences with portability and performance of the PGL polygon renderer across a range of parallel computing platforms.

  13. Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud

    PubMed Central

    Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew

    2015-01-01

    Background: Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. Results: We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. Conclusions: This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation. PMID:26501966

  14. Performance of MCNP4A on seven computing platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.; Brockhoff, R.C.

    1994-12-31

    The performance of seven computer platforms has been evaluated with the MCNP4A Monte Carlo radiation transport code. For the first time we report timing results using MCNP4A and its new test set and libraries. Comparisons are made on platforms not available to us in previous MCNP timing studies. By using MCNP4A and its 25-problem test set, a widely used and readily available physics production code is used; the timing comparison is not limited to a single "typical" problem, demonstrating the problem dependence of timing results; the results are reproducible at the more than 100 installations around the world using MCNP; comparison of the performance of other computer platforms to the ones tested in this study is possible because we present raw data rather than normalized results; and a measure of the increase in performance of computer hardware and software over the past two years is possible. The computer platforms reported are the Cray-YMP 8/64, IBM RS/6000-560, Sun Sparc10, Sun Sparc2, HP/9000-735, 4-processor 100 MHz Silicon Graphics ONYX, and Gateway 2000 model 4DX2-66V PC. In 1991 a timing study of MCNP4, the predecessor to MCNP4A, was conducted using ENDF/B-V cross-section libraries, which are export protected. The new study is based upon the new MCNP 25-problem test set, which uses internationally available data. MCNP4A, its test problems and the test data library are available from the Radiation Shielding and Information Center in Oak Ridge, Tennessee, or from the NEA Data Bank in Saclay, France. Anyone with the same workstation and compiler can get the same test problem sets, the same library files, and the same MCNP4A code from RSIC or NEA and replicate our results. And, because we report raw data, comparison of the performance of other computer platforms and compilers can be made.
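    Because the study publishes raw timings rather than normalized results, readers can derive relative speeds against any chosen reference machine. The short sketch below illustrates that normalization step with made-up numbers; it does not reproduce the paper's data.

```python
# Normalization of raw timing data (hypothetical numbers, not the paper's data):
# relative speed of each platform with respect to a chosen reference machine.
raw_minutes = {                 # hypothetical wall-clock totals over a test set
    "Reference": 100.0,
    "Workstation A": 66.0,
    "PC B": 370.0,
}
reference = raw_minutes["Reference"]
relative_speed = {name: reference / t for name, t in raw_minutes.items()}
for name, speed in relative_speed.items():
    print(f"{name}: {speed:.2f}x the reference speed")
```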

  15. Particle Identification on an FPGA Accelerated Compute Platform for the LHCb Upgrade

    NASA Astrophysics Data System (ADS)

    Färber, Christian; Schwemmer, Rainer; Machen, Jonathan; Neufeld, Niko

    2017-07-01

    The current LHCb readout system will be upgraded in 2018 to a “triggerless” readout of the entire detector at the Large Hadron Collider collision rate of 40 MHz. The corresponding bandwidth from the detector down to the foreseen dedicated computing farm (event filter farm), which acts as the trigger, has to be increased by a factor of almost 100, from currently 500 Gb/s up to 40 Tb/s. The event filter farm will preanalyze the data and will select the events on an event-by-event basis. This will reduce the bandwidth down to a manageable size for writing the interesting physics data to tape. The design of such a system is a challenging task, which is why different new technologies are considered and have to be investigated for the different parts of the system. For use in the event building farm or in the event filter farm (trigger), an experimental field programmable gate array (FPGA) accelerated computing platform is considered and tested. FPGA compute accelerators are used more and more in standard servers, for example for Microsoft Bing search or Baidu search. The platform we use hosts a general Intel CPU and a high-performance FPGA linked via the high-speed Intel QuickPath Interconnect. An accelerator is implemented on the FPGA. It is very likely that these platforms, which are built in general for high-performance computing, are also very interesting for the high-energy physics community. First, the performance results of smaller test cases performed at the beginning are presented. Afterward, a part of the existing LHCb RICH particle identification is ported to the experimental FPGA accelerated platform and tested. We have compared the performance of the LHCb RICH particle identification running on a normal CPU with the performance of the same algorithm running on the Xeon-FPGA compute accelerator platform.

  16. Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud.

    PubMed

    Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew

    2015-01-01

    Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation.

  17. Flight Simulator Platform Motion and Air Transport Pilot Training

    NASA Technical Reports Server (NTRS)

    Lee, Alfred T.; Bussolari, Steven R.

    1989-01-01

    The influence of flight simulator platform motion on pilot training and performance was examined in two studies utilizing a B-727-200 aircraft simulator. The simulator, located at Ames Research Center, is certified by the FAA for upgrade and transition training in air carrier operations. Subjective ratings and objective performance of experienced B-727 pilots did not reveal any reliable effects of wide variations in platform motion design. Motion platform variations did, however, affect the acquisition of control skill by pilots with no prior heavy aircraft flying experience. The effect was limited to pitch attitude control inputs during the early phase of landing training. Implications for the definition of platform motion requirements in air transport pilot training are discussed.

  18. Semi-physical Simulation Platform of a Parafoil Nonlinear Dynamic System

    NASA Astrophysics Data System (ADS)

    Gao, Hai-Tao; Yang, Sheng-Bo; Zhu, Er-Lin; Sun, Qing-Lin; Chen, Zeng-Qiang; Kang, Xiao-Feng

    2013-11-01

    Focusing on the problems encountered in simulation and experiments on a parafoil nonlinear dynamic system, such as limited methods, high cost and low efficiency, we present a semi-physical simulation platform. It is designed by connecting parts of the physical objects to a computer, and remedies the defect that a pure computer simulation is completely divorced from the real environment. The main components of the platform and their functions, as well as the simulation flow, are introduced. The feasibility and validity are verified through a simulation experiment. The experimental results show that the platform is valuable for improving the quality of the parafoil fixed-point airdrop system, shortening the development cycle and saving cost.

  19. Load monitoring of aerospace structures utilizing micro-electro-mechanical systems for static and quasi-static loading conditions

    NASA Astrophysics Data System (ADS)

    Martinez, M.; Rocha, B.; Li, M.; Shi, G.; Beltempo, A.; Rutledge, R.; Yanishevsky, M.

    2012-11-01

    The National Research Council Canada (NRC) has worked on the development of structural health monitoring (SHM) test platforms for assessing the performance of sensor systems for load monitoring applications. The first SHM platform consists of a 5.5 m cantilever aluminum beam that provides an optimal scenario for evaluating the ability of a load monitoring system to measure bending, torsion and shear loads. The second SHM platform contains an added level of structural complexity, consisting of aluminum skins with bonded/riveted stringers, typical of an aircraft lower wing structure. These two load monitoring platforms are well characterized and documented, providing loading conditions similar to those encountered during service. In this study, a micro-electro-mechanical system (MEMS) for acquiring data from triads of gyroscopes, accelerometers and magnetometers is described. The system was used to compute changes in angles at discrete stations along the platforms. The angles obtained from the MEMS were used to compute a second-, third- or fourth-order polynomial surface from which displacements at every point could be computed. The use of a new Kalman filter was evaluated for angle estimation, from which displacements in the structure were computed. The outputs of the newly developed algorithms were then compared to the displacements obtained from the linear variable displacement transducers connected to the platforms. The displacement curves were subsequently post-processed, either analytically or with the help of a finite element model of the structure, to estimate strains and loads. The estimated strains were compared with the baseline strain gauge instrumentation installed on the platforms. This new approach to load monitoring was able to provide accurate estimates of applied strains and shear loads.
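    The general idea of turning discrete angle measurements into a displacement curve can be sketched as follows, under simplifying assumptions (small angles, a cantilever clamped at the root so slope and deflection vanish there); this is only an illustration, not the NRC algorithms or Kalman filter.

```python
# Simplified sketch of the idea (assumptions: small angles, cantilever clamped at
# x = 0 so slope and deflection are zero there): fit a polynomial to the angles
# measured at discrete stations and integrate it to obtain the deflection curve,
# from which displacements at any point can be read.
import numpy as np

stations = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])               # sensor positions [m]
angles_rad = np.array([0.0, 0.002, 0.005, 0.009, 0.014, 0.020])   # hypothetical MEMS angles

slope_poly = np.polyfit(stations, angles_rad, deg=3)   # slope dw/dx ~ angle for small angles
deflect_poly = np.polyint(slope_poly)                  # integrate slope to get deflection w(x)

x = np.linspace(0.0, 5.5, 56)
deflection = np.polyval(deflect_poly, x)               # estimated displacement along the beam
```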

  20. Assessing the Reliability and the Accuracy of Attitude Extracted from Visual Odometry for LIDAR Data Georeferencing

    NASA Astrophysics Data System (ADS)

    Leroux, B.; Cali, J.; Verdun, J.; Morel, L.; He, H.

    2017-08-01

    Airborne LiDAR systems require Direct Georeferencing (DG) in order to compute the coordinates of surveyed points in the mapping frame. A UAV platform does not escape this need, but its payload has to be lighter than that installed on board conventional airborne platforms, so an alternative to heavy sensors and navigation systems is needed. For the georeferencing of these data, a possible solution is to replace the Inertial Measurement Unit (IMU) by a camera and record the optical flow. The individual frames are then processed photogrammetrically so as to extract the External Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drift, and the option of using Ground Control Points (GCPs), as in airborne photogrammetry surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from VO outputs. The test bench consists of a trolley that carries a GNSS receiver, an IMU sensor and a camera. The LiDAR is replaced by a tacheometer in order to survey control points whose coordinates are already known. We have also developed a methodology, applied to this test bench, for the calibration of the external parameters and the computation of the surveyed point coordinates. Several tests have revealed a difference of about 2-3 centimeters between the measured control point coordinates and their known values.
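    For context, direct georeferencing combines the platform position, the estimated attitude (here supplied by visual odometry rather than an IMU) and the sensor calibration. A common textbook form of the relation is shown below; it is stated as general background rather than the exact formulation used in the paper.

```latex
% A common textbook form of the direct-georeferencing equation (general background,
% not necessarily the paper's exact formulation): mapping-frame coordinates of a
% surveyed point from the GNSS position, the attitude, the boresight/lever-arm
% calibration, and the range vector measured by the ranging sensor.
\[
\mathbf{r}^{m}_{P} \;=\; \mathbf{r}^{m}_{GNSS}(t)
  \;+\; \mathbf{R}^{m}_{b}(t)\,\bigl(\mathbf{R}^{b}_{s}\,\mathbf{r}^{s}_{P} \;+\; \mathbf{a}^{b}\bigr)
\]
% r^m_P: point in the mapping frame; R^m_b: body-to-mapping rotation from the estimated
% attitude; R^b_s, a^b: sensor-to-body boresight rotation and lever arm; r^s_P: measured
% range vector in the sensor frame.
```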

  1. A Dedicated Computational Platform for Cellular Monte Carlo T-CAD Software Tools

    DTIC Science & Technology

    2015-07-14

    computer that establishes an encrypted Virtual Private Network ( OpenVPN [44]) based on the Secure Socket Layer (SSL) paradigm. Each user is given a...security certificate for each device used to connect to the computing nodes. Stable OpenVPN clients are available for Linux, Microsoft Windows, Apple OSX...platform is granted by an encrypted connection base on the Secure Socket Layer (SSL) protocol, and implemented in the OpenVPN Virtual Personal Network

  2. MEGA X: Molecular Evolutionary Genetics Analysis across Computing Platforms.

    PubMed

    Kumar, Sudhir; Stecher, Glen; Li, Michael; Knyaz, Christina; Tamura, Koichiro

    2018-06-01

    The Molecular Evolutionary Genetics Analysis (Mega) software implements many analytical methods and tools for phylogenomics and phylomedicine. Here, we report a transformation of Mega to enable cross-platform use on Microsoft Windows and Linux operating systems. Mega X does not require virtualization or emulation software and provides a uniform user experience across platforms. Mega X has additionally been upgraded to use multiple computing cores for many molecular evolutionary analyses. Mega X is available in two interfaces (graphical and command line) and can be downloaded from www.megasoftware.net free of charge.

  3. Wireless sensor platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, Pooran C.; Killough, Stephen M.; Kuruganti, Phani Teja

    A wireless sensor platform and methods of manufacture are provided. The platform involves providing a plurality of wireless sensors, where each of the sensors is fabricated on flexible substrates using printing techniques and low temperature curing. Each of the sensors can include planar sensor elements and planar antennas defined using the printing and curing. Further, each of the sensors can include a communications system configured to encode the data from the sensors into a spread spectrum code sequence that is transmitted to a central computer(s) for use in monitoring an area associated with the sensors.
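    The record states that sensor data are encoded into a spread-spectrum code sequence before transmission. The sketch below illustrates the general direct-sequence idea with a made-up pseudo-noise chip sequence; it is not the platform's actual encoder.

```python
# Conceptual sketch of direct-sequence spreading (chip sequence and length are
# made up): each data bit is spread by a pseudo-noise chip sequence before
# transmission; the receiver despreads by correlating with the same sequence.
import numpy as np

pn_chips = np.array([1, -1, 1, 1, -1, -1, 1, -1])        # hypothetical PN sequence (+/-1 chips)

def spread(bits):
    symbols = np.where(np.asarray(bits) == 1, 1, -1)     # map bits {0,1} -> {-1,+1}
    return np.concatenate([b * pn_chips for b in symbols])

def despread(chips):
    frames = chips.reshape(-1, len(pn_chips))
    correlation = frames @ pn_chips                      # correlate each frame with the PN code
    return (correlation > 0).astype(int)

tx = spread([1, 0, 1, 1])
rx_bits = despread(tx)                                   # recovers [1, 0, 1, 1]
```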

  4. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    DOE PAGES

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction - connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code hosted on the GitHub platformmore » with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web - going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances, that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.« less

  5. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction - connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code hosted on the GitHub platformmore » with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web - going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances, that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.« less

  6. Testing and refining the Science in Risk Assessment and Policy (SciRAP) web-based platform for evaluating the reliability and relevance of in vivo toxicity studies.

    PubMed

    Beronius, Anna; Molander, Linda; Zilliacus, Johanna; Rudén, Christina; Hanberg, Annika

    2018-05-28

    The Science in Risk Assessment and Policy (SciRAP) web-based platform was developed to promote and facilitate structure and transparency in the evaluation of ecotoxicity and toxicity studies for hazard and risk assessment of chemicals. The platform includes sets of criteria and a colour-coding tool for evaluating the reliability and relevance of individual studies. The SciRAP method for evaluating in vivo toxicity studies was first published in 2014 and the aim of the work presented here was to evaluate and develop that method further. Toxicologists and risk assessors from different sectors and geographical areas were invited to test the SciRAP criteria and tool on a specific set of in vivo toxicity studies and to provide feedback concerning the scientific soundness and user-friendliness of the SciRAP approach. The results of this expert assessment were used to refine and improve both the evaluation criteria and the colour-coding tool. It is expected that the SciRAP web-based platform will continue to be developed and enhanced to keep up to date with the needs of end-users.

  7. The Relationship between Chief Information Officer Transformational Leadership and Computing Platform Operating Systems

    ERIC Educational Resources Information Center

    Anderson, George W.

    2010-01-01

    The purpose of this study was to relate the strength of Chief Information Officer (CIO) transformational leadership behaviors to 1 of 5 computing platform operating systems (OSs) that may be selected for a firm's Enterprise Resource Planning (ERP) business system. Research shows executive leader behaviors may promote innovation through the use of…

  8. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    PubMed

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been proposed for predicting software reliability, but each is restricted to particular methodologies and a limited number of parameters. A range of techniques and methodologies may be used for reliability prediction, and careful attention must be paid to the parameters considered during estimation, since the reliability of a system may increase or decrease depending on the parameters selected. It is therefore necessary to identify the factors that most heavily affect system reliability. Reusability, now widely studied across many research areas, is the basis of Component-Based Systems (CBS), and Component-Based Software Engineering (CBSE) concepts save cost, time and human effort. CBSE metrics may be used to assess which techniques are most suitable for estimating system reliability. Soft computing is used for small- as well as large-scale problems where uncertainty or randomness makes accurate results difficult to obtain, and several soft computing techniques can be applied to medical problems: clinical medicine makes significant use of fuzzy logic and neural network methods, while basic medical science most frequently and preferably combines neural networks with genetic algorithms, and medical scientists have shown strong interest in applying soft computing methodologies in genetics, physiology, radiology, cardiology and neurology. CBSE encourages users to reuse past and existing software when building new products, providing quality while saving time, memory space and money. This paper assesses commonly used soft computing techniques, including Genetic Algorithms (GA), Neural Networks (NN), Fuzzy Logic, Support Vector Machines (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC); it presents how these techniques work and evaluates their use for predicting reliability. The parameters considered when estimating and predicting reliability are also discussed. This study can be used for estimating and predicting the reliability of instruments used in medical systems, software engineering, computer engineering and mechanical engineering, and the concepts can be applied to both software and hardware to predict reliability using CBSE.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.; Yu, G.; Wang, K.

    The physical designs of new concept reactors, which have complex structures, various materials and a broad neutron energy spectrum, have greatly raised the requirements on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computation in reactor physics. Because of their natural parallel characteristics, CPU-FPGA architectures are often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in the numerical calculation of reactor physics through practical examples. A neutron diffusion module designed for the CPU-FPGA architecture achieves a speedup factor of 11.2, showing that it is feasible to apply this kind of heterogeneous platform to reactor physics. (authors)

  10. Modular HPC I/O characterization with Darshan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Shane; Carns, Philip; Harms, Kevin

    2016-11-13

    Contemporary high-performance computing (HPC) applications encompass a broad range of distinct I/O strategies and are often executed on a number of different compute platforms in their lifetime. These large-scale HPC platforms employ increasingly complex I/O subsystems to provide a suitable level of I/O performance to applications. Tuning I/O workloads for such a system is nontrivial, and the results generally are not portable to other HPC systems. I/O profiling tools can help to address this challenge, but most existing tools only instrument specific components within the I/O subsystem, providing a limited perspective on I/O performance. The increasing diversity of scientific applications and computing platforms calls for greater flexibility and scope in I/O characterization.

  11. Research on private cloud computing based on analysis on typical opensource platform: a case study with Eucalyptus and Wavemaker

    NASA Astrophysics Data System (ADS)

    Yu, Xiaoyuan; Yuan, Jian; Chen, Shi

    2013-03-01

    Cloud computing is one of the most popular topics in the IT industry and is being adopted by many companies. It has four deployment models: public cloud, community cloud, hybrid cloud and private cloud. Among these, a private cloud can be implemented within a private network and delivers some of the benefits of cloud computing while avoiding some of its pitfalls. This paper compares typical open source platforms through which a private cloud can be implemented. After this comparison, we choose Eucalyptus and Wavemaker for a case study on the private cloud. We also perform some performance estimation of cloud platform services and develop prototype software delivered as cloud services.

  12. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    PubMed Central

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

    This paper makes further investigations into facilitating large-scale scientific computing on grid and desktop grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and anticipated data migration. The block-based Gauss-Jordan algorithm is used as a real example of large-scale scientific computing to evaluate the issues presented above. The results show that the high-level program interface makes complex scientific applications on large-scale platforms easier to build, though a little overhead is unavoidable. Also, the anticipated data migration mechanism can improve the efficiency of the platform when processing big-data scientific applications. PMID:24574931
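    To make the example concrete, the sketch below shows a plain, sequential block-based Gauss-Jordan inversion in NumPy, assuming nonsingular pivot blocks; in the paper the algorithm is deployed on grid and desktop grid platforms through the high-level program interface, which is not reproduced here.

```python
# Sequential sketch of block-based Gauss-Jordan inversion (assumes every diagonal
# pivot block is nonsingular; no pivoting); only the block structure is illustrated.
import numpy as np

def block_gauss_jordan_inverse(A: np.ndarray, block_size: int) -> np.ndarray:
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])        # augmented system [A | I]
    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        pivot = M[start:end, start:end].copy()
        # Normalize the pivot block rows: left-multiply them by pivot^{-1}.
        M[start:end, :] = np.linalg.solve(pivot, M[start:end, :])
        # Eliminate the pivot-column blocks from every other block row.
        for other in range(0, n, block_size):
            if other == start:
                continue
            o_end = min(other + block_size, n)
            factor = M[other:o_end, start:end].copy()
            M[other:o_end, :] -= factor @ M[start:end, :]
    return M[:, n:]                                     # right half now holds A^{-1}

A = np.random.rand(6, 6) + 6 * np.eye(6)                # diagonally dominant test matrix
A_inv = block_gauss_jordan_inverse(A, block_size=2)
print(np.allclose(A @ A_inv, np.eye(6)))                # True up to round-off
```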

  13. Reliability of Computer Systems ODRA 1305 and R-32,

    DTIC Science & Technology

    1983-03-25

    RELIABILITY OF COMPUTER SYSTEMS ODRA 1305 AND R-32 By: Wit Drewniak English pages: 12 Source: Informatyka, Vol. 14, Nr. 7, 1979, pp. 5-8 Country of...JS EMC computers installed in ZETO, Katowice", Informatyka, No. 7-8/78, deals with various reliability classes within the family of the machines of

  14. Feasibility and reliability of a virtual reality oculus platform to measure sensory integration for postural control in young adults.

    PubMed

    Lubetzky, Anat V; Kary, Erinn E; Harel, Daphna; Hujsak, Bryan; Perlin, Ken

    2018-01-24

    Using Unity for the Oculus Development-Kit 2, we have developed an affordable, portable virtual reality platform that targets the visuomotor domain, a missing link in current clinical assessments of postural control. Here, we describe the design and technical development as well as report its feasibility with regards to cybersickness and test-retest reliability in healthy young adults. Our virtual reality paradigm includes two functional scenes ('City' and 'Park') and four moving dots scenes. Twenty-one healthy young adults were tested twice, one to two weeks apart. They completed a simulator sickness questionnaire several times per session. Their postural sway response was recorded from a forceplate underneath their feet while standing on the floor, stability trainers, or a Both Sides Up (BOSU) ball. Sample entropy, postural displacement, velocity, and excursion were calculated and compared between sessions given the visual and surface conditions. Participants reported slight-to-moderate transient side effects. Intra-Class Correlation values mostly ranged from 0.5 to 0.7 for displacement and velocity, were above 0.5 (stability trainer conditions) and above 0.4 (floor mediolateral conditions) for sample entropy, and minimal for excursion. Our novel portable VR platform was found to be feasible and reliable in healthy young adults.

  15. The validity and reliability of an iPhone app for measuring vertical jump performance.

    PubMed

    Balsalobre-Fernández, Carlos; Glaister, Mark; Lockey, Richard Anthony

    2015-01-01

    The purpose of this investigation was to analyse the concurrent validity and reliability of an iPhone app (called My Jump) for measuring vertical jump performance. Twenty recreationally active healthy men (age: 22.1 ± 3.6 years) completed five maximal countermovement jumps, which were evaluated using a force platform (time-in-the-air method) and a specially designed iPhone app. My Jump was developed to calculate jump height from flight time using the high-speed video recording facility of the iPhone 5s. The jump heights of the 100 jumps measured with both devices were compared using the intraclass correlation coefficient, Pearson product moment correlation coefficient (r), Cronbach's alpha (α), coefficient of variation and Bland-Altman plots. There was almost perfect agreement between the force platform and My Jump for countermovement jump height (intraclass correlation coefficient = 0.997, P < 0.001; Bland-Altman bias = 1.1 ± 0.5 cm, P < 0.001). In comparison with the force platform, My Jump showed good validity for CMJ height (r = 0.995, P < 0.001). The results of the present study show that CMJ height can be easily, accurately and reliably evaluated using a specially developed iPhone 5s app.
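    For reference, apps of this kind derive jump height from flight time using the standard kinematic relation below (general physics, not the app's source code).

```latex
% Standard flight-time relation: with flight time t_f and symmetric take-off and
% landing posture, the rise height follows from free fall over half the flight time.
\[
h \;=\; \frac{g\,t_f^{2}}{8}, \qquad g \approx 9.81\ \mathrm{m/s^2}
\]
% Example: t_f = 0.50\,\mathrm{s} gives h \approx 9.81 \times 0.25 / 8 \approx 0.31\,\mathrm{m}.
```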

  16. Analysis of outcomes in radiation oncology: An integrated computational platform

    PubMed Central

    Liu, Dezhi; Ajlouni, Munther; Jin, Jian-Yue; Ryu, Samuel; Siddiqui, Farzan; Patel, Anushka; Movsas, Benjamin; Chetty, Indrin J.

    2009-01-01

    Radiotherapy research and outcome analyses are essential for evaluating new methods of radiation delivery and for assessing the benefits of a given technology on locoregional control and overall survival. In this article, a computational platform is presented to facilitate radiotherapy research and outcome studies in radiation oncology. This computational platform consists of (1) an infrastructural database that stores patient diagnosis, IMRT treatment details, and follow-up information, (2) an interface tool that is used to import and export IMRT plans in DICOM RT and AAPM/RTOG formats from a wide range of planning systems to facilitate reproducible research, (3) a graphical data analysis and programming tool that visualizes all aspects of an IMRT plan including dose, contour, and image data to aid the analysis of treatment plans, and (4) a software package that calculates radiobiological models to evaluate IMRT treatment plans. Given the limited number of general-purpose computational environments for radiotherapy research and outcome studies, this computational platform represents a powerful and convenient tool that is well suited for analyzing dose distributions biologically and correlating them with the delivered radiation dose distributions and other patient-related clinical factors. In addition the database is web-based and accessible by multiple users, facilitating its convenient application and use. PMID:19544785

  17. Superconducting Optoelectronic Circuits for Neuromorphic Computing

    NASA Astrophysics Data System (ADS)

    Shainline, Jeffrey M.; Buckley, Sonia M.; Mirin, Richard P.; Nam, Sae Woo

    2017-03-01

    Neural networks have proven effective for solving many difficult computational problems, yet implementing complex neural networks in software is computationally expensive. To explore the limits of information processing, it is necessary to implement new hardware platforms with large numbers of neurons, each with a large number of connections to other neurons. Here we propose a hybrid semiconductor-superconductor hardware platform for the implementation of neural networks and large-scale neuromorphic computing. The platform combines semiconducting few-photon light-emitting diodes with superconducting-nanowire single-photon detectors to behave as spiking neurons. These processing units are connected via a network of optical waveguides, and variable weights of connection can be implemented using several approaches. The use of light as a signaling mechanism overcomes fanout and parasitic constraints on electrical signals while simultaneously introducing physical degrees of freedom which can be employed for computation. The use of supercurrents achieves the low power density (1 mW/cm^2 at a 20-MHz firing rate) necessary to scale to systems with enormous entropy. Estimates comparing the proposed hardware platform to a human brain show that with the same number of neurons (10^11) and 700 independent connections per neuron, the hardware presented here may achieve an order of magnitude improvement in synaptic events per second per watt.
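    A quick back-of-the-envelope check of the scale quoted above (10^11 neurons, 700 connections per neuron, 20-MHz firing rate) gives the total synaptic event rate; the abstract does not give an energy per event, so none is computed here.

```python
# Order-of-magnitude arithmetic using only the figures quoted in the abstract:
# total synaptic events per second for 10^11 neurons, 700 connections per neuron,
# all firing at 20 MHz.
neurons = 1e11
connections_per_neuron = 700
firing_rate_hz = 20e6

synaptic_events_per_second = neurons * connections_per_neuron * firing_rate_hz
print(f"{synaptic_events_per_second:.1e} synaptic events per second")  # 1.4e+21
```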

  18. Synchronous wearable wireless body sensor network composed of autonomous textile nodes.

    PubMed

    Vanveerdeghem, Peter; Van Torre, Patrick; Stevens, Christiaan; Knockaert, Jos; Rogier, Hendrik

    2014-10-09

    A novel, fully-autonomous, wearable, wireless sensor network is presented, where each flexible textile node performs cooperative synchronous acquisition and distributed event detection. Computationally efficient situational-awareness algorithms are implemented on the low-power microcontroller present on each flexible node. The detected events are wirelessly transmitted to a base station, directly, as well as forwarded by other on-body nodes. For each node, a dual-polarized textile patch antenna serves as a platform for the flexible electronic circuitry. Therefore, the system is particularly suitable for comfortable and unobtrusive integration into garments. In the meantime, polarization diversity can be exploited to improve the reliability and energy-efficiency of the wireless transmission. Extensive experiments in realistic conditions have demonstrated that this new autonomous, body-centric, textile-antenna, wireless sensor network is able to correctly detect different operating conditions of a firefighter during an intervention. By relying on four network nodes integrated into the protective garment, this functionality is implemented locally, on the body, and in real time. In addition, the received sensor data are reliably transferred to a central access point at the command post, for more detailed and more comprehensive real-time visualization. This information provides coordinators and commanders with situational awareness of the entire rescue operation. A statistical analysis of measured on-body node-to-node, as well as off-body person-to-person channels is included, confirming the reliability of the communication system.

  19. Synchronous Wearable Wireless Body Sensor Network Composed of Autonomous Textile Nodes

    PubMed Central

    Vanveerdeghem, Peter; Van Torre, Patrick; Stevens, Christiaan; Knockaert, Jos; Rogier, Hendrik

    2014-01-01

    A novel, fully-autonomous, wearable, wireless sensor network is presented, where each flexible textile node performs cooperative synchronous acquisition and distributed event detection. Computationally efficient situational-awareness algorithms are implemented on the low-power microcontroller present on each flexible node. The detected events are wirelessly transmitted to a base station, directly, as well as forwarded by other on-body nodes. For each node, a dual-polarized textile patch antenna serves as a platform for the flexible electronic circuitry. Therefore, the system is particularly suitable for comfortable and unobtrusive integration into garments. In the meantime, polarization diversity can be exploited to improve the reliability and energy-efficiency of the wireless transmission. Extensive experiments in realistic conditions have demonstrated that this new autonomous, body-centric, textile-antenna, wireless sensor network is able to correctly detect different operating conditions of a firefighter during an intervention. By relying on four network nodes integrated into the protective garment, this functionality is implemented locally, on the body, and in real time. In addition, the received sensor data are reliably transferred to a central access point at the command post, for more detailed and more comprehensive real-time visualization. This information provides coordinators and commanders with situational awareness of the entire rescue operation. A statistical analysis of measured on-body node-to-node, as well as off-body person-to-person channels is included, confirming the reliability of the communication system. PMID:25302808

  20. Rater reliability and concurrent validity of the Keyboard Personal Computer Style instrument (K-PeCS).

    PubMed

    Baker, Nancy A; Cook, James R; Redfern, Mark S

    2009-01-01

    This paper describes the inter-rater and intra-rater reliability, and the concurrent validity of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficients (ICC)=.90; intra-rater: ICC=.92]. Most individual items on the K-PeCS had from good to excellent reliability, although six items fell below ICC=.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.

  1. TDat: An Efficient Platform for Processing Petabyte-Scale Whole-Brain Volumetric Images.

    PubMed

    Li, Yuxin; Gong, Hui; Yang, Xiaoquan; Yuan, Jing; Jiang, Tao; Li, Xiangning; Sun, Qingtao; Zhu, Dan; Wang, Zhenyu; Luo, Qingming; Li, Anan

    2017-01-01

    Three-dimensional imaging of whole mammalian brains at single-neuron resolution has generated terabyte (TB)- and even petabyte (PB)-sized datasets. Due to their size, processing these massive image datasets can be hindered by the computer hardware and software typically found in biological laboratories. To fill this gap, we have developed an efficient platform named TDat, which adopts a novel data reformatting strategy by reading cuboid data and employing parallel computing. In data reformatting, TDat is more efficient than any other software. In data accessing, we adopted parallelization to fully explore the capability for data transmission in computers. We applied TDat in large-volume data rigid registration and neuron tracing in whole-brain data with single-neuron resolution, which has never been demonstrated in other studies. We also showed its compatibility with various computing platforms, image processing software and imaging systems.

  2. An accelerated exposure and testing apparatus for building joint sealants

    NASA Astrophysics Data System (ADS)

    White, C. C.; Hunston, D. L.; Tan, K. T.; Hettenhouser, J.; Garver, J. D.

    2013-09-01

    The design, fabrication, and implementation of a computer-controlled exposure and testing apparatus for building joint sealants are described in this paper. This apparatus is unique in its ability to independently control and monitor temperature, relative humidity, ultraviolet (UV) radiation, and mechanical deformation. Each of these environmental factors can be controlled precisely over a wide range of conditions during periods of a month or more. Moreover, as controlled mechanical deformations can be generated, in situ mechanical characterization tests can be performed without removing specimens from the chamber. Temperature and humidity were controlled during our experiments via a precision temperature regulator and proportional mixing of dry and moisture-saturated air; while highly uniform UV radiation was attained by attaching the chamber to an integrating sphere-based radiation source. A computer-controlled stepper motor and a transmission system were used to provide precise movement control. The reliability and effectiveness of the apparatus were demonstrated on a model sealant material. The results clearly show that this apparatus provides an excellent platform to study the long-term durability of building joint sealants.

  3. Development of a surgical navigation system based on 3D Slicer for intraoperative implant placement surgery

    PubMed Central

    Chen, Xiaojun; Xu, Lu; Wang, Huixiang; Wang, Fang; Wang, Qiugen; Kikinis, Ron

    2017-01-01

    Implant placement has been widely used in various kinds of surgery. However, accurate intraoperative drilling performance is essential to avoid injury to adjacent structures. Although some commercially-available surgical navigation systems have been approved for clinical applications, these systems are expensive and the source code is not available to researchers. 3D Slicer is a free, open source software platform for the research community of computer-aided surgery. In this study, a loadable module based on Slicer has been developed and validated to support surgical navigation. This research module allows reliable calibration of the surgical drill, point-based registration and surface matching registration, so that the position and orientation of the surgical drill can be tracked and displayed on the computer screen in real time, aiming at reducing risks. In accuracy verification experiments, the mean target registration error (TRE) for point-based and surface-based registration was 0.31 ± 0.06 mm and 1.01 ± 0.06 mm, respectively, which should meet clinical requirements. Both phantom and cadaver experiments demonstrated the feasibility of our surgical navigation software module. PMID:28109564

  4. Good practices in free-energy calculations.

    PubMed

    Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christophe

    2010-08-19

    As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in a wide range of research areas. Yet, the reliability of these calculations can often be improved significantly if a number of precepts, or good practices, are followed. Although the theory upon which these good practices rely has largely been known for many years, it is often overlooked or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. In this contribution, the current best practices for carrying out free-energy calculations using free energy perturbation and nonequilibrium work methods are discussed, demonstrating that at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. Monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway, and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision.
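
    One of the precepts mentioned above, monitoring and performing the calculation bidirectionally, is easy to illustrate with the basic exponential-averaging (Zwanzig) free energy perturbation estimator. The sketch below is not tied to any particular simulation package; the temperature, the synthetic energy differences and the sample sizes are illustrative assumptions.

    ```python
    # Minimal sketch of the Zwanzig free-energy-perturbation estimator, computed in both
    # directions so the forward/reverse discrepancy (hysteresis) gives a rough consistency check.
    import numpy as np

    BETA = 1.0 / 0.596  # 1/kT in (kcal/mol)^-1 near 300 K (illustrative value)


    def fep_delta_f(delta_u, beta=BETA):
        """Zwanzig estimator: dF = -kT * ln < exp(-beta * dU) >, evaluated stably."""
        x = -beta * np.asarray(delta_u)
        log_avg = np.logaddexp.reduce(x) - np.log(len(x))   # log-mean-exp for stability
        return -log_avg / beta


    # Synthetic energy differences "sampled" in the forward (0->1) and reverse (1->0) ensembles.
    rng = np.random.default_rng(1)
    du_forward = rng.normal(loc=2.0, scale=1.0, size=5000)
    du_reverse = rng.normal(loc=-2.0, scale=1.0, size=5000)

    df_fwd = fep_delta_f(du_forward)
    df_rev = -fep_delta_f(du_reverse)       # reverse-direction estimate of the same dF
    print(f"forward: {df_fwd:.3f}  reverse: {df_rev:.3f}  hysteresis: {abs(df_fwd - df_rev):.3f}")
    ```

    A large hysteresis between the two estimates is one of the warning signs the authors recommend watching for; the synthetic data here are deliberately not self-consistent, so the check fires.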

  5. A structural equation modeling approach for the adoption of cloud computing to enhance the Malaysian healthcare sector.

    PubMed

    Ratnam, Kalai Anand; Dominic, P D D; Ramayah, T

    2014-08-01

    The investments and costs of infrastructure, communication, medical-related equipments, and software within the global healthcare ecosystem portray a rather significant increase. The emergence of this proliferation is then expected to grow. As a result, information and cross-system communication became challenging due to the detached independent systems and subsystems which are not connected. The overall model fit expending over a sample size of 320 were tested with structural equation modelling (SEM) using AMOS 20.0 as the modelling tool. SPSS 20.0 is used to analyse the descriptive statistics and dimension reliability. Results of the study show that system utilisation and system impact dimension influences the overall level of services of the healthcare providers. In addition to that, the findings also suggest that systems integration and security plays a pivotal role for IT resources in healthcare organisations. Through this study, a basis for investigation on the need to improvise the Malaysian healthcare ecosystem and the introduction of a cloud computing platform to host the national healthcare information exchange has been successfully established.

  6. ALFA: The new ALICE-FAIR software framework

    NASA Astrophysics Data System (ADS)

    Al-Turany, M.; Buncic, P.; Hristov, P.; Kollegger, T.; Kouzinopoulos, C.; Lebedev, A.; Lindenstruth, V.; Manafov, A.; Richter, M.; Rybalchenko, A.; Vande Vyvre, P.; Winckler, N.

    2015-12-01

    The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of large parts of a common software framework in an experiment-independent way. The FairRoot project has already shown the feasibility of such an approach for the FAIR experiments and of extending it beyond FAIR to experiments at other facilities [1, 2]. The ALFA framework is a joint development between the ALICE Online-Offline (O2) and FairRoot teams. ALFA is designed as a flexible, elastic system, which balances reliability and ease of development with performance using multi-processing and multi-threading. A message-based approach has been adopted; such an approach will support the use of the software on different hardware platforms, including heterogeneous systems. Each process in ALFA assumes limited communication and reliance on other processes. Such a design will add horizontal scaling (multiple processes) to the vertical scaling provided by multiple threads to meet computing and throughput demands. ALFA does not dictate any application protocols. Potentially, any content-based processor or any source can change the application protocol. The framework supports different serialization standards for data exchange between different hardware and software languages.
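
    The message-based, process-isolated style described here can be illustrated with a generic two-process example. The sketch below uses pyzmq PUSH/PULL sockets and JSON serialization purely for illustration; it is not the FairMQ/ALFA API, and the endpoint and message layout are assumptions.

    ```python
    # Generic message-passing sketch in the spirit of the design above: independent
    # processes exchanging serialized messages over a socket (pyzmq, not FairMQ).
    import multiprocessing as mp

    import zmq

    ENDPOINT = "tcp://127.0.0.1:5555"   # hypothetical endpoint


    def producer(n_messages):
        ctx = zmq.Context()
        sock = ctx.socket(zmq.PUSH)
        sock.bind(ENDPOINT)
        for i in range(n_messages):
            sock.send_json({"event": i, "payload": [i, i * i]})  # any serialization could be used
        sock.send_json({"event": None})                          # end-of-stream marker
        sock.close()
        ctx.term()


    def consumer():
        ctx = zmq.Context()
        sock = ctx.socket(zmq.PULL)
        sock.connect(ENDPOINT)
        while True:
            msg = sock.recv_json()
            if msg["event"] is None:
                break
            print("processed event", msg["event"])
        sock.close()
        ctx.term()


    if __name__ == "__main__":
        c = mp.Process(target=consumer)
        c.start()
        producer(5)
        c.join()
    ```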

  7. An accelerated exposure and testing apparatus for building joint sealants.

    PubMed

    White, C C; Hunston, D L; Tan, K T; Hettenhouser, J; Garver, J D

    2013-09-01

    The design, fabrication, and implementation of a computer-controlled exposure and testing apparatus for building joint sealants are described in this paper. This apparatus is unique in its ability to independently control and monitor temperature, relative humidity, ultraviolet (UV) radiation, and mechanical deformation. Each of these environmental factors can be controlled precisely over a wide range of conditions during periods of a month or more. Moreover, as controlled mechanical deformations can be generated, in situ mechanical characterization tests can be performed without removing specimens from the chamber. Temperature and humidity were controlled during our experiments via a precision temperature regulator and proportional mixing of dry and moisture-saturated air; while highly uniform UV radiation was attained by attaching the chamber to an integrating sphere-based radiation source. A computer-controlled stepper motor and a transmission system were used to provide precise movement control. The reliability and effectiveness of the apparatus were demonstrated on a model sealant material. The results clearly show that this apparatus provides an excellent platform to study the long-term durability of building joint sealants.

  8. A Reliable TTP-Based Infrastructure with Low Sensor Resource Consumption for the Smart Home Multi-Platform

    PubMed Central

    Kang, Jungho; Kim, Mansik; Park, Jong Hyuk

    2016-01-01

    With ICT technology making great progress in the smart home environment, the ubiquitous environment is rapidly emerging all over the world, but problems such as multi-platform heterogeneity and new security threats are also increasing in proportion to the rapid growth of the smart home market. In addition, smart home sensors have such limited computing resources that they cannot perform the complicated computations required to create a proper security environment. A service provider also faces overhead in processing data from a rapidly increasing number of sensors. This paper proposes a scheme to build an infrastructure in which communication entities can securely authenticate one another and establish secure channels using physically unclonable functions (PUFs) and a trusted third party (TTP) that smart home communication entities can rely on. In addition, we analyze and evaluate the proposed scheme for security and performance and prove that it can build secure channels with low resources. Finally, we expect that the proposed scheme can be helpful for secure communication with low resources in future smart home multi-platforms. PMID:27399699

  9. Noncomparative scaling of aromaticity through electron itinerancy

    NASA Astrophysics Data System (ADS)

    Paul, Satadal; Goswami, Tamal; Misra, Anirban

    2015-10-01

    Aromaticity is a multidimensional concept and not a directly observable quantity. These facts have always stood in the way of developing an appropriate theoretical framework for the scaling of aromaticity. In the present work, a quantitative account of aromaticity is developed on the basis of cyclic delocalization of π-electrons, the phenomenon that leads to the unique features of aromatic molecules. The stabilization in molecular energy caused by delocalization of π-electrons is obtained as a second-order perturbation energy for archetypal aromatic systems. The final expression parameterizes the aromatic stabilization energy in terms of the atom-to-atom charge transfer integral, the on-site repulsion energy and the population of spin orbitals at each site among the delocalized π-electrons. An appropriate computational platform is framed to compute each individual parameter in the derived equation. The numerical values of aromatic stabilization energies obtained for various aromatic molecules are found to be in close agreement with available theoretical and experimental reports. The reliable estimate of aromaticity through the proposed formalism thus renders it a useful tool for the direct assessment of aromaticity, which has been a long-standing problem in chemistry.
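
    The paper's exact parameterization is not reproduced in the abstract; a generic second-order perturbative form consistent with the quantities it names (charge-transfer integral t_ij, on-site repulsion U_j, spin-orbital populations n_iσ) would look like the following illustrative expression, which is stated as an assumption rather than the authors' equation.

    ```latex
    % Illustrative second-order form (an assumption, not the paper's exact expression):
    % stabilization from cyclic pi-electron delocalization with hopping t_{ij}, on-site
    % repulsion U_j, and spin-orbital populations n_{i\sigma}.
    \begin{equation}
      E_{\mathrm{stab}} \;\approx\; -\sum_{\langle i,j \rangle}\sum_{\sigma}
      \frac{\lvert t_{ij} \rvert^{2}\, n_{i\sigma}\,\bigl(1 - n_{j\sigma}\bigr)}{U_{j}}
    \end{equation}
    ```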

  10. A Reliable TTP-Based Infrastructure with Low Sensor Resource Consumption for the Smart Home Multi-Platform.

    PubMed

    Kang, Jungho; Kim, Mansik; Park, Jong Hyuk

    2016-07-05

    With ICT technology making great progress in the smart home environment, the ubiquitous environment is rapidly emerging all over the world, but problems such as multi-platform heterogeneity and new security threats are also increasing in proportion to the rapid growth of the smart home market. In addition, smart home sensors have such limited computing resources that they cannot perform the complicated computations required to create a proper security environment. A service provider also faces overhead in processing data from a rapidly increasing number of sensors. This paper proposes a scheme to build an infrastructure in which communication entities can securely authenticate one another and establish secure channels using physically unclonable functions (PUFs) and a trusted third party (TTP) that smart home communication entities can rely on. In addition, we analyze and evaluate the proposed scheme for security and performance and prove that it can build secure channels with low resources. Finally, we expect that the proposed scheme can be helpful for secure communication with low resources in future smart home multi-platforms.

  11. Interfacing HTCondor-CE with OpenStack

    NASA Astrophysics Data System (ADS)

    Bockelman, B.; Caballero Bejar, J.; Hover, J.

    2017-10-01

    Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of jobs requirements and a more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world is lacking diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the technologies, allowing users to interact with computing sites through one of the standard Computing Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system will re-route, in a transparent way, end user jobs into dynamically-launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This will allow cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.

  12. Seven R's for the Student Teacher.

    ERIC Educational Resources Information Center

    Newson, Graham

    1983-01-01

    Briefly discussed are seven qualities which provide a significant basic platform for teaching mathematics effectively: reinforcement, responsiveness, reliability, realism, reversibility, range, and radicalism. (MNS)

  13. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    PubMed

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

    Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines that are distributed pre-packaged with pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lower the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.

  14. Implementation of a SOA-Based Service Deployment Platform with Portal

    NASA Astrophysics Data System (ADS)

    Yang, Chao-Tung; Yu, Shih-Chi; Lai, Chung-Che; Liu, Jung-Chun; Chu, William C.

    In this paper we propose a Service Oriented Architecture (SOA) to provide a flexible and serviceable environment. SOA grew out of commercial requirements; over more than ten years it has integrated many techniques to bridge different platforms, programming languages and users. SOA provides the connection, through a protocol, between service providers and service users. The performance and reliability problems are then reviewed. Finally we apply SOA to our Grid and Hadoop platform. A service acts as an interface in front of the Resource Broker in the Grid, and the Resource Broker is middleware that provides functions for developers. Hadoop has a file replication feature to ensure file reliability. Services provided on the Grid and Hadoop are centralized. We design a portal in which users can use services directly or register services through the service provider. The portal also offers a service workflow function so that users can customize services according to the needs of their jobs.

  15. ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance.

    PubMed

    Chen, Xinfeng; Li, Haohong

    2017-01-01

    Studying animal behavior in the lab requires reliable delivery of stimuli and monitoring of responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform) that is an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consists of an Arduino UNO board and a simple drive circuit. As for software, ArControl provides a stand-alone and intuitive GUI (graphical user interface) application that does not require users to master scripts. The experiment data are automatically recorded with the built-in DAQ (data acquisition) function. ArControl also allows the behavioral schedule to be entirely stored in and operated on the Arduino chip. This makes ArControl a genuine real-time system with high temporal resolution (<1 ms). We tested ArControl with strict performance measurements and two mouse behavioral experiments. The results showed that ArControl is an adaptive and reliable system suitable for behavioral research.

  16. ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance

    PubMed Central

    Chen, Xinfeng; Li, Haohong

    2017-01-01

    Studying animal behavior in the lab requires reliable delivery of stimuli and monitoring of responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform) that is an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consists of an Arduino UNO board and a simple drive circuit. As for software, ArControl provides a stand-alone and intuitive GUI (graphical user interface) application that does not require users to master scripts. The experiment data are automatically recorded with the built-in DAQ (data acquisition) function. ArControl also allows the behavioral schedule to be entirely stored in and operated on the Arduino chip. This makes ArControl a genuine real-time system with high temporal resolution (<1 ms). We tested ArControl with strict performance measurements and two mouse behavioral experiments. The results showed that ArControl is an adaptive and reliable system suitable for behavioral research. PMID:29321735

  17. Non-Pilot Protection of the HVDC Grid

    NASA Astrophysics Data System (ADS)

    Badrkhani Ajaei, Firouz

    This thesis develops a non-pilot protection system for the next generation power transmission system, the High-Voltage Direct Current (HVDC) grid. The HVDC grid protection system is required to be (i) adequately fast to prevent damages and/or converter blocking and (ii) reliable to minimize the impacts of faults. This study is mainly focused on the Modular Multilevel Converter (MMC) -based HVDC grid since the MMC is considered as the building block of the future HVDC systems. The studies reported in this thesis include (i) developing an enhanced equivalent model of the MMC to enable accurate representation of its DC-side fault response, (ii) developing a realistic HVDC-AC test system that includes a five-terminal MMC-based HVDC grid embedded in a large interconnected AC network, (iii) investigating the transient response of the developed test system to AC-side and DC-side disturbances in order to determine the HVDC grid protection requirements, (iv) investigating the fault surge propagation in the HVDC grid to determine the impacts of the DC-side fault location on the measured signals at each relay location, (v) designing a protection algorithm that detects and locates DC-side faults reliably and sufficiently fast to prevent relay malfunction and unnecessary blocking of the converters, and (vi) performing hardware-in-the-loop tests on the designed relay to verify its potential to be implemented in hardware. The results of the off-line time domain transients studies in the PSCAD software platform and the real-time hardware-in-the-loop tests using an enhanced version of the RTDS platform indicate that the developed HVDC grid relay meets all technical requirements including speed, dependability, security, selectivity, and robustness. Moreover, the developed protection algorithm does not impose considerable computational burden on the hardware.

  18. Multi-hop routing mechanism for reliable sensor computing.

    PubMed

    Chen, Jiann-Liang; Ma, Yi-Wei; Lai, Chia-Ping; Hu, Chia-Cheng; Huang, Yueh-Min

    2009-01-01

    Current research on routing in wireless sensor computing concentrates on increasing service lifetime, enabling scalability for large numbers of sensors and supporting fault tolerance for battery exhaustion and broken nodes. A sensor node is naturally exposed to various sources of unreliable communication channels and node failures. Sensor nodes have many failure modes, and each failure degrades the network performance. This work develops a novel mechanism, called the Reliable Routing Mechanism (RRM), based on a hybrid cluster-based routing protocol, to determine the most reliable routing path for sensor computing. Table-driven intra-cluster routing and on-demand inter-cluster routing are combined by changing the relationship between clusters. Applying a reliable routing mechanism in sensor computing can improve routing reliability, maintain low packet loss, minimize management overhead and reduce energy consumption. Simulation results indicate that the reliability of the proposed RRM mechanism is around 25% higher than that of the Dynamic Source Routing (DSR) and ad hoc On-demand Distance Vector routing (AODV) mechanisms.
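
    The notion of a "most reliable routing path" can be illustrated generically. The sketch below is not the RRM protocol: it simply picks the route that maximizes the product of per-link delivery probabilities by running Dijkstra on -log(p) edge weights; the topology and probabilities are made up.

    ```python
    # Generic illustration: choose the most reliable multi-hop route when each link has an
    # independent delivery probability, by minimizing the sum of -log(p) over links
    # (equivalent to maximizing the product of link reliabilities).
    import heapq
    import math


    def most_reliable_path(links, src, dst):
        """links: dict {(u, v): p} with p = link delivery probability (links assumed symmetric,
        and src and dst assumed to be connected)."""
        graph = {}
        for (u, v), p in links.items():
            w = -math.log(p)
            graph.setdefault(u, []).append((v, w))
            graph.setdefault(v, []).append((u, w))

        dist, prev = {src: 0.0}, {}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, math.inf):
                continue
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, math.inf):
                    dist[v] = d + w
                    prev[v] = u
                    heapq.heappush(heap, (d + w, v))

        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return list(reversed(path)), math.exp(-dist[dst])


    links = {("A", "B"): 0.95, ("B", "C"): 0.9, ("A", "D"): 0.8, ("D", "C"): 0.99}
    print(most_reliable_path(links, "A", "C"))   # expected: (['A', 'B', 'C'], ~0.855)
    ```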

  19. Cloud Computing with iPlant Atmosphere.

    PubMed

    McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos

    2013-10-15

    Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. Copyright © 2013 John Wiley & Sons, Inc.

  20. Optical RISC computer

    NASA Astrophysics Data System (ADS)

    Guilfoyle, Peter S.; Stone, Richard V.; Hessenbruch, John M.; Zeise, Frederick F.

    1993-07-01

    A second-generation digital optical computer (DOC II) has been developed which utilizes a RISC-based operating system as its host. This 32-bit, high-performance (12.8 GByte/sec) computing platform demonstrates a number of basic principles that are inherent to parallel free-space optical interconnects, such as speed (up to 10^12 bit operations per second) and low power (1.2 fJ per bit). Although DOC II is a general purpose machine, special purpose applications have been developed and are currently being evaluated on the optical platform.

  1. SOI layout decomposition for double patterning lithography on high-performance computer platforms

    NASA Astrophysics Data System (ADS)

    Verstov, Vladimir; Zinchenko, Lyudmila; Makarchuk, Vladimir

    2014-12-01

    In this paper, silicon-on-insulator layout decomposition algorithms for double patterning lithography on high-performance computing platforms are discussed. Our approach is based on the use of a contradiction graph and a modified concurrent breadth-first search algorithm. We evaluate our technique on the 45 nm Nangate Open Cell Library, including non-Manhattan geometry. Experimental results show that our soft computing algorithms decompose the layout successfully and that the minimal distance between polygons in the layout is increased.
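
    The core idea, coloring a conflict graph with a breadth-first search so that features closer than the coloring distance land on different masks, can be sketched in a few lines. This is a minimal two-coloring illustration, not the paper's concurrent algorithm, and the polygon identifiers are hypothetical.

    ```python
    # Minimal sketch of the core idea: two-color a layout conflict graph with BFS so that
    # polygons closer than the coloring distance get different masks; an odd conflict
    # cycle means no legal two-mask decomposition without stitches or layout changes.
    from collections import deque


    def decompose(conflict_edges):
        """conflict_edges: iterable of (a, b) polygon-id pairs that must get different masks."""
        adj = {}
        for a, b in conflict_edges:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)

        mask = {}
        for start in adj:
            if start in mask:
                continue
            mask[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in mask:
                        mask[v] = 1 - mask[u]
                        queue.append(v)
                    elif mask[v] == mask[u]:
                        return None   # odd conflict cycle detected
        return mask


    edges = [("p1", "p2"), ("p2", "p3"), ("p3", "p4")]
    print(decompose(edges))                                   # {'p1': 0, 'p2': 1, 'p3': 0, 'p4': 1}
    print(decompose(edges + [("p4", "p1"), ("p1", "p3")]))    # None (triangle p1-p2-p3)
    ```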

  2. Transitioning ISR architecture into the cloud

    NASA Astrophysics Data System (ADS)

    Lash, Thomas D.

    2012-06-01

    Emerging cloud computing platforms offer an ideal opportunity for Intelligence, Surveillance, and Reconnaissance (ISR) intelligence analysis. Cloud computing platforms help overcome challenges and limitations of traditional ISR architectures. Modern ISR architectures can benefit from examining commercial cloud applications, especially as they relate to user experience, usage profiling, and transformational business models. This paper outlines legacy ISR architectures and their limitations, presents an overview of cloud technologies and their applications to the ISR intelligence mission, and presents an idealized ISR architecture implemented with cloud computing.

  3. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program

    PubMed Central

    Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.

    2010-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339

  4. High-speed multiple sequence alignment on a reconfigurable platform.

    PubMed

    Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf

    2006-01-01

    Progressive alignment is a widely used approach to compute multiple sequence alignments (MSAs). However, aligning several hundred sequences with popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases, biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.
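
    For reference, the pairwise-distance stage that the systolic array accelerates can be written as an ordinary dynamic program in software. The sketch below computes an edit-distance matrix over a few toy sequences; it is a plain-CPU illustration of the recurrence, not the FPGA design.

    ```python
    # Plain-software reference for the pairwise-distance stage of progressive alignment:
    # dynamic-programming edit distance between every pair of sequences, producing the
    # distance matrix that later guides the guide tree.
    import numpy as np


    def edit_distance(a, b):
        """Classic O(len(a)*len(b)) dynamic program with a rolling row."""
        dp = np.arange(len(b) + 1)
        for i, ca in enumerate(a, start=1):
            prev_diag, dp[0] = dp[0], i
            for j, cb in enumerate(b, start=1):
                prev_diag, dp[j] = dp[j], min(dp[j] + 1,                 # deletion
                                              dp[j - 1] + 1,             # insertion
                                              prev_diag + (ca != cb))    # match/mismatch
        return int(dp[-1])


    seqs = ["GATTACA", "GACTATA", "ATTACCA"]   # toy sequences
    n = len(seqs)
    dist = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = edit_distance(seqs[i], seqs[j])
    print(dist)
    ```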

  5. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    NASA Technical Reports Server (NTRS)

    Pham, Long; Chen, Aijun; Kempler, Steven; Lynnes, Christopher; Theobald, Michael; Asghar, Esfandiari; Campino, Jane; Vollmer, Bruce

    2011-01-01

    Cloud Computing has been implemented in several commercial arenas. The NASA Nebula Cloud Computing platform is an Infrastructure as a Service (IaaS) built in 2008 at NASA Ames Research Center and in 2010 at GSFC. Nebula is an open source Cloud platform intended to: a) make NASA realize significant cost savings through efficient resource utilization, reduced energy consumption, and reduced labor costs; b) provide an easier way for NASA scientists and researchers to efficiently explore and share large and complex data sets; and c) allow customers to provision, manage, and decommission computing capabilities on an as-needed basis.

  6. A three-dimensional computerized isometric strength measurement system.

    PubMed

    Black, Nancy L; Das, Biman

    2007-05-01

    The three-dimensional Computerized Isometric Strength Measurement System (CISMS) reliably and accurately measures isometric pull and push strengths in the work spaces of paraplegic populations, while also allowing comparative studies with other populations. The main elements of the system were: an extendable arm, a vertical supporting track, a rotating platform, a force transducer, stability sensors and a computerized data collection interface. The CISMS, with minor modification, was successfully used to measure isometric push-up and pull-down strengths of paraplegics and isometric push, pull, push-up and pull-down strengths in work spaces for seated and standing able-bodied populations. The instrument has satisfied criteria of versatility, safety and comfort, ease of operation, and durability. Results are accurate within 2 N for aligned forces. Costing approximately $1,500 (US) including the computer, the system is affordable and accurate for aligned isometric strength measurements.

  7. Integration of lyoplate based flow cytometry and computational analysis for standardized immunological biomarker discovery.

    PubMed

    Villanova, Federica; Di Meglio, Paola; Inokuma, Margaret; Aghaeepour, Nima; Perucha, Esperanza; Mollon, Jennifer; Nomura, Laurel; Hernandez-Fuentes, Maria; Cope, Andrew; Prevost, A Toby; Heck, Susanne; Maino, Vernon; Lord, Graham; Brinkman, Ryan R; Nestle, Frank O

    2013-01-01

    Discovery of novel immune biomarkers for monitoring of disease prognosis and response to therapy in immune-mediated inflammatory diseases is an important unmet clinical need. Here, we establish a novel framework for immunological biomarker discovery, comparing a conventional (liquid) flow cytometry platform (CFP) and a unique lyoplate-based flow cytometry platform (LFP) in combination with advanced computational data analysis. We demonstrate that LFP had higher sensitivity compared to CFP, with increased detection of cytokines (IFN-γ and IL-10) and activation markers (Foxp3 and CD25). Fluorescent intensity of cells stained with lyophilized antibodies was increased compared to cells stained with liquid antibodies. LFP, using a plate loader, allowed medium-throughput processing of samples with comparable intra- and inter-assay variability between platforms. Automated computational analysis identified novel immunophenotypes that were not detected with manual analysis. Our results establish a new flow cytometry platform for standardized and rapid immunological biomarker discovery with wide application to immune-mediated diseases.

  8. Integration of Lyoplate Based Flow Cytometry and Computational Analysis for Standardized Immunological Biomarker Discovery

    PubMed Central

    Villanova, Federica; Di Meglio, Paola; Inokuma, Margaret; Aghaeepour, Nima; Perucha, Esperanza; Mollon, Jennifer; Nomura, Laurel; Hernandez-Fuentes, Maria; Cope, Andrew; Prevost, A. Toby; Heck, Susanne; Maino, Vernon; Lord, Graham; Brinkman, Ryan R.; Nestle, Frank O.

    2013-01-01

    Discovery of novel immune biomarkers for monitoring of disease prognosis and response to therapy in immune-mediated inflammatory diseases is an important unmet clinical need. Here, we establish a novel framework for immunological biomarker discovery, comparing a conventional (liquid) flow cytometry platform (CFP) and a unique lyoplate-based flow cytometry platform (LFP) in combination with advanced computational data analysis. We demonstrate that LFP had higher sensitivity compared to CFP, with increased detection of cytokines (IFN-γ and IL-10) and activation markers (Foxp3 and CD25). Fluorescent intensity of cells stained with lyophilized antibodies was increased compared to cells stained with liquid antibodies. LFP, using a plate loader, allowed medium-throughput processing of samples with comparable intra- and inter-assay variability between platforms. Automated computational analysis identified novel immunophenotypes that were not detected with manual analysis. Our results establish a new flow cytometry platform for standardized and rapid immunological biomarker discovery with wide application to immune-mediated diseases. PMID:23843942

  9. Making Spatial Statistics Service Accessible On Cloud Platform

    NASA Astrophysics Data System (ADS)

    Mu, X.; Wu, J.; Li, T.; Zhong, Y.; Gao, X.

    2014-04-01

    Web services can bring together applications running on diverse platforms; users can access and share various data, information and models more effectively and conveniently from a web service platform. Cloud computing has emerged as a paradigm of Internet computing in which dynamic, scalable and often virtualized resources are provided as services. With the rapid growth of massive data and the restrictions of the network, traditional web service platforms face prominent problems in development, such as computational efficiency, maintenance cost and data security. In this paper, we offer a spatial statistics service based on the Microsoft cloud. An experiment was carried out to evaluate the availability and efficiency of this service. The results show that this spatial statistics service is conveniently accessible to the public, with high processing efficiency.

  10. High performance GPU processing for inversion using uniform grid searches

    NASA Astrophysics Data System (ADS)

    Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios

    2017-04-01

    Many geophysical problems are described by systems of redundant, highly non-linear ordinary equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems is based on numerical optimization methods, on Monte Carlo sampling or on exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid-search-based technique in the R^n space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming for common CPU-based computers. An alternative is to use a computing platform based on a GPU, which nowadays is affordable to the research community and provides much higher computing performance. Using the CUDA programming language to implement TOPINV allows the investigation of the attained speedup in execution time on such a high-performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems were solved on both platforms for several different sizes of search grids (up to 10^12 gridpoints) and numbers of unknown variables, and execution time as a function of the grid dimension for each problem was recorded. Results indicate an average speedup in calculations by a factor of 100 on the GPU platform; for example, problems with 10^12 gridpoints require less than two hours instead of several days on conventional desktop computers. Such a speedup encourages the application of TOPINV on high-performance platforms, such as GPUs, in cases where nearly real-time decisions are necessary, for example finite fault modeling to identify possible tsunami sources.
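
    The inequality-based grid scan described above can be sketched on a toy two-parameter problem. The sketch below is a simplified CPU/NumPy illustration, not the TOPINV/CUDA code: the forward model, grid ranges and k schedule are all assumptions, chosen only to show the "keep gridpoints with |residual_i| <= k*sigma_i, shrink k, take moments" pattern.

    ```python
    # Simplified illustration of the grid-search idea (CPU-only, not the TOPINV code):
    # keep gridpoints whose forward-model residuals satisfy |r_i| <= k * sigma_i,
    # shrink k, and report the surviving cluster's mean and covariance.
    import numpy as np


    def forward(params, x):
        """Hypothetical forward model: y = a * exp(-b * x)."""
        a, b = params
        return a * np.exp(-b * x)


    rng = np.random.default_rng(2)
    x_obs = np.linspace(0.0, 4.0, 25)
    sigma = 0.05
    y_obs = forward((2.0, 0.7), x_obs) + rng.normal(0.0, sigma, x_obs.size)

    # Uniform search grid over the two unknowns.
    a_grid, b_grid = np.meshgrid(np.linspace(1.0, 3.0, 400), np.linspace(0.1, 1.5, 400))
    grid = np.column_stack([a_grid.ravel(), b_grid.ravel()])
    pred = grid[:, :1] * np.exp(-np.outer(grid[:, 1], x_obs))   # all gridpoints at once

    for k in (5.0, 4.0, 3.0):
        keep = np.all(np.abs(pred - y_obs) <= k * sigma, axis=1)
        cluster = grid[keep]
        if cluster.shape[0] == 0:
            print(f"k={k}: no gridpoints satisfy all inequalities")
            continue
        mean = cluster.mean(axis=0)
        print(f"k={k}: {cluster.shape[0]} gridpoints survive, cluster mean = {np.round(mean, 3)}")
        if cluster.shape[0] > 1:
            print("      covariance diagonal:", np.round(np.diag(np.cov(cluster.T)), 6))
    ```

    Because every gridpoint is evaluated independently, the same loop maps naturally onto GPU threads, which is where the reported factor-of-100 speedup comes from.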

  11. Analytic and Computational Perspectives of Multi-Scale Theory for Homogeneous, Laminated Composite, and Sandwich Beams and Plates

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Gherlone, Marco; Versino, Daniele; DiSciuva, Marco

    2012-01-01

    This paper reviews the theoretical foundation and computational mechanics aspects of the recently developed shear-deformation theory, called the Refined Zigzag Theory (RZT). The theory is based on a multi-scale formalism in which an equivalent single-layer plate theory is refined with a robust set of zigzag local layer displacements that are free of the usual deficiencies found in common plate theories with zigzag kinematics. In the RZT, first-order shear-deformation plate theory is used as the equivalent single-layer plate theory, which represents the overall response characteristics. Local piecewise-linear zigzag displacements are used to provide corrections to these overall response characteristics that are associated with the plate heterogeneity and the relative stiffnesses of the layers. The theory does not rely on shear correction factors and is equally accurate for homogeneous, laminated composite, and sandwich beams and plates. Regardless of the number of material layers, the theory maintains only seven kinematic unknowns that describe the membrane, bending, and transverse shear plate-deformation modes. Derived from the virtual work principle, RZT is well-suited for developing computationally efficient, C^0-continuous finite elements; formulations of several RZT-based elements are highlighted. The theory and its finite element approximations thus provide a unified and reliable computational platform for the analysis and design of high-performance load-bearing aerospace structures.
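
    For orientation, the zigzag-refined kinematic assumption usually quoted for RZT is written below; it is stated from general knowledge of the theory as background, not taken from this abstract, and the symbol names are conventional rather than the paper's notation. The seven unknowns u_1, u_2, w, θ_1, θ_2, ψ_1, ψ_2 match the "seven kinematic unknowns" mentioned above.

    ```latex
    % Commonly quoted RZT plate kinematics (background form, conventional notation):
    % first-order shear deformation terms plus a layerwise zigzag correction.
    \begin{align}
      u_{\alpha}(x_1, x_2, z) &= u_{\alpha}(x_1, x_2) + z\,\theta_{\alpha}(x_1, x_2)
        + \phi_{\alpha}^{(k)}(z)\,\psi_{\alpha}(x_1, x_2), \qquad \alpha = 1, 2,\\
      u_{z}(x_1, x_2, z) &= w(x_1, x_2),
    \end{align}
    % where \phi_{\alpha}^{(k)}(z) are piecewise-linear zigzag functions defined per layer k.
    ```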

  12. Analytic and Computational Perspectives of Multi-Scale Theory for Homogeneous, Laminated Composite, and Sandwich Beams and Plates

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Gherlone, Marco; Versino, Daniele; Di Sciuva, Marco

    2012-01-01

    This paper reviews the theoretical foundation and computational mechanics aspects of the recently developed shear-deformation theory, called the Refined Zigzag Theory (RZT). The theory is based on a multi-scale formalism in which an equivalent single-layer plate theory is refined with a robust set of zigzag local layer displacements that are free of the usual deficiencies found in common plate theories with zigzag kinematics. In the RZT, first-order shear-deformation plate theory is used as the equivalent single-layer plate theory, which represents the overall response characteristics. Local piecewise-linear zigzag displacements are used to provide corrections to these overall response characteristics that are associated with the plate heterogeneity and the relative stiffnesses of the layers. The theory does not rely on shear correction factors and is equally accurate for homogeneous, laminated composite, and sandwich beams and plates. Regardless of the number of material layers, the theory maintains only seven kinematic unknowns that describe the membrane, bending, and transverse shear plate-deformation modes. Derived from the virtual work principle, RZT is well-suited for developing computationally efficient, C0-continuous finite elements; formulations of several RZT-based elements are highlighted. The theory and its finite elements provide a unified and reliable computational platform for the analysis and design of high-performance load-bearing aerospace structures.

  13. Reliability of a computer and Internet survey (Computer User Profile) used by adults with and without traumatic brain injury (TBI).

    PubMed

    Kilov, Andrea M; Togher, Leanne; Power, Emma

    2015-01-01

    To determine the test-retest reliability of the 'Computer User Profile' (CUP) in people with and without TBI. The CUP was administered on two occasions to people with and without TBI. The CUP investigated the nature and frequency of participants' computer and Internet use. Intra-class correlation coefficients and kappa coefficients were calculated to measure the reliability of individual CUP items. Descriptive statistics were used to summarize the content of responses. Sixteen adults with TBI and 40 adults without TBI were included in the study. All participants were reliable in reporting demographic information, frequency of social communication and leisure activities and computer/Internet habits and usage. Adults with TBI were reliable in 77% of their responses to survey items. Adults without TBI were reliable in 88% of their responses to survey items. The CUP was practical and valuable in capturing information about the social, leisure, communication and computer/Internet habits of people with and without TBI. Adults without TBI scored more items with satisfactory reliability overall in their surveys. Future studies may include larger samples and could also include an exploration of how people with/without TBI use other digital communication technologies. This may provide further information on determining technology readiness for people with TBI in therapy programmes.

  14. Assessment of spare reliability for multi-state computer networks within tolerable packet unreliability

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Huang, Cheng-Fu

    2015-04-01

    From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.

  15. [Reliability theory based on quality risk network analysis for Chinese medicine injection].

    PubMed

    Li, Zheng; Kang, Li-Yuan; Fan, Xiao-Hui

    2014-08-01

    A new risk analysis method based upon reliability theory is introduced in this paper for the quality risk management of Chinese medicine injection manufacturing plants. The risk events, including both cause and effect events, were derived in the framework as nodes with a Bayesian network analysis approach. It thus transforms the risk analysis results from failure mode and effect analysis (FMEA) onto a Bayesian network platform. With its structure and parameters determined, the network can be used to evaluate the system reliability quantitatively with probabilistic analytical approaches. Using network analysis tools such as GeNie and AgenaRisk, we are able to find the nodes that are most critical to the system reliability. The importance of each node to the system can be quantitatively evaluated by calculating the effect of the node on the overall risk, and a mitigation plan can be determined accordingly to reduce the nodes' influence and improve the system reliability. Using the Shengmai injection manufacturing plant of SZYY Ltd as a user case, we analyzed the quality risk with both static FMEA analysis and dynamic Bayesian network analysis. The potential risk factors for the quality of Shengmai injection manufacturing were identified with the network analysis platform. Quality assurance actions were further defined to reduce the risk and improve the product quality.
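
    The "effect of a node on the overall risk" criterion can be illustrated with a tiny hand-rolled example. The sketch below is not the paper's network and does not use GeNie or AgenaRisk: two hypothetical cause nodes feed a noisy-OR effect node, and each cause is ranked by how much the overall defect probability drops when that cause is forced to its non-failed state. All probabilities are made up.

    ```python
    # Tiny hand-rolled Bayesian-network-style calculation: propagate failure probabilities
    # of two hypothetical cause nodes through a noisy-OR effect node, and rank each cause
    # by the risk reduction obtained when it is fixed (forced to "ok").
    from itertools import product

    priors = {"sterilization_deviation": 0.02, "filling_line_fault": 0.05}  # hypothetical
    link = {"sterilization_deviation": 0.7, "filling_line_fault": 0.4}      # noisy-OR strengths
    LEAK = 0.01   # defect probability when no cause has failed


    def p_defect(forced_ok=()):
        """Marginal defect probability, optionally forcing some causes to the 'ok' state."""
        total = 0.0
        causes = list(priors)
        for states in product([0, 1], repeat=len(causes)):      # 1 = cause failed
            p_state, p_no_defect = 1.0, 1.0 - LEAK
            for c, s in zip(causes, states):
                p_fail = 0.0 if c in forced_ok else priors[c]
                p_state *= p_fail if s else 1.0 - p_fail
                if s:
                    p_no_defect *= 1.0 - link[c]
            total += p_state * (1.0 - p_no_defect)
        return total


    baseline = p_defect()
    print(f"baseline defect probability: {baseline:.4f}")
    for cause in priors:
        reduction = baseline - p_defect(forced_ok=(cause,))
        print(f"fixing {cause}: risk reduction {reduction:.4f}")
    ```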

  16. A Set of Free Cross-Platform Authoring Programs for Flexible Web-Based CALL Exercises

    ERIC Educational Resources Information Center

    O'Brien, Myles

    2012-01-01

    The Mango Suite is a set of three freely downloadable cross-platform authoring programs for flexible network-based CALL exercises. They are Adobe Air applications, so they can be used on Windows, Macintosh, or Linux computers, provided the freely-available Adobe Air has been installed on the computer. The exercises which the programs generate are…

  17. Development of a Cloud Computing-Based Pier Type Port Structure Stability Evaluation Platform Using Fiber Bragg Grating Sensors.

    PubMed

    Jo, Byung Wan; Jo, Jun Ho; Khan, Rana Muhammad Asad; Kim, Jung Hoon; Lee, Yun Sung

    2018-05-23

    Structural health monitoring is a topic of great interest for port structures due to the ageing of the structures and the limitations of existing evaluation methods. This paper presents a cloud computing-based stability evaluation platform for a pier-type port structure using Fiber Bragg Grating (FBG) sensors, in a system consisting of an FBG strain sensor, an FBG displacement gauge, an FBG angle meter, a gateway, and a cloud computing-based web server. The sensors were installed on core components of the structure and measurements were taken to evaluate the structure. The measurement values were transmitted to the web server via the gateway, where all data were analyzed and visualized to evaluate the structure based on the safety evaluation index (SEI). The stability evaluation platform for pier-type port structures enables efficient monitoring of the structures, which can be carried out easily anytime and anywhere by converging new technologies such as cloud computing and FBG sensors. In addition, the platform has been successfully implemented at "Maryang Harbor", situated in Maryang-Meyon of Korea, to test its durability.

  18. 20170312 - Computer Simulation of Developmental ...

    EPA Pesticide Factsheets

    Rationale: Recent progress in systems toxicology and synthetic biology has paved the way to new thinking about in vitro/in silico modeling of developmental processes and toxicities, both for embryological and reproductive impacts. Novel in vitro platforms such as 3D organotypic culture models, engineered microscale tissues and complex microphysiological systems (MPS), together with computational models and computer simulation of tissue dynamics, lend themselves to integrated testing strategies for predictive toxicology. As these emergent methodologies continue to evolve, they must be integrally tied to maternal/fetal physiology and toxicity of the developing individual across early lifestage transitions, from fertilization to birth, through puberty and beyond. Scope: This symposium will focus on how the novel technology platforms can help, now and in the future, with in vitro/in silico modeling of complex biological systems for developmental and reproductive toxicity issues, and with translating systems models into integrative testing strategies. The symposium is based on three main organizing principles: (1) that novel in vitro platforms with human cells configured in nascent tissue architectures with native microphysiological environments yield mechanistic understanding of developmental and reproductive impacts of drug/chemical exposures; (2) that novel in silico platforms with high-throughput screening (HTS) data, biologically-inspired computational models of

  19. Computer Simulation of Developmental Processes and ...

    EPA Pesticide Factsheets

    Rationale: Recent progress in systems toxicology and synthetic biology has paved the way to new thinking about in vitro/in silico modeling of developmental processes and toxicities, both for embryological and reproductive impacts. Novel in vitro platforms such as 3D organotypic culture models, engineered microscale tissues and complex microphysiological systems (MPS), together with computational models and computer simulation of tissue dynamics, lend themselves to integrated testing strategies for predictive toxicology. As these emergent methodologies continue to evolve, they must be integrally tied to maternal/fetal physiology and toxicity of the developing individual across early lifestage transitions, from fertilization to birth, through puberty and beyond. Scope: This symposium will focus on how the novel technology platforms can help, now and in the future, with in vitro/in silico modeling of complex biological systems for developmental and reproductive toxicity issues, and with translating systems models into integrative testing strategies. The symposium is based on three main organizing principles: (1) that novel in vitro platforms with human cells configured in nascent tissue architectures with native microphysiological environments yield mechanistic understanding of developmental and reproductive impacts of drug/chemical exposures; (2) that novel in silico platforms with high-throughput screening (HTS) data, biologically-inspired computational models of

  20. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondence has achieved pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
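
    The detect-describe-match-filter pipeline can be sketched with off-the-shelf tools. The sketch below is not the paper's code: it uses OpenCV's ORB detector/descriptor as a stand-in for the adaptive Harris detector and LATCH descriptors, a brute-force Hamming matcher, and a crude relative-distance consistency filter standing in for the paper's distance/angle arrangement test. The input image paths are hypothetical and the two patches are assumed to be at comparable resolution.

    ```python
    # Illustrative detect-describe-match pipeline (stand-in components, not the paper's code).
    import cv2
    import numpy as np

    mls_ortho = cv2.imread("mlspc_ortho_patch.png", cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
    aerial = cv2.imread("aerial_patch.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(mls_ortho, None)
    kp2, des2 = orb.detectAndCompute(aerial, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Keep matches whose pairwise distances are roughly preserved between the two images
    # (a crude stand-in for the relative distance/angle arrangement filter).
    keep = []
    for i in range(len(matches)):
        d1 = np.linalg.norm(pts1 - pts1[i], axis=1)
        d2 = np.linalg.norm(pts2 - pts2[i], axis=1)
        ratio = np.abs(d1 - d2) / np.maximum(d1, 1e-6)
        keep.append(np.median(ratio) < 0.05)
    inliers = [m for m, ok in zip(matches, keep) if ok]
    print(f"{len(inliers)} geometrically consistent matches out of {len(matches)}")
    ```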

  1. Facial Aesthetic Outcomes of Cleft Surgery: Assessment of Discrete Lip and Nose Images Compared with Digital Symmetry Analysis.

    PubMed

    Deall, Ciara E; Kornmann, Nirvana S S; Bella, Husam; Wallis, Katy L; Hardwicke, Joseph T; Su, Ting-Li; Richard, Bruce M

    2016-10-01

    High-quality aesthetic outcomes are of paramount importance to children growing up after cleft lip and palate surgery. Establishing a validated and reliable assessment tool for cleft professionals and families will allow cleft units, surgeons, techniques, and protocols to be audited and compared with greater confidence. This study used exemplar images across a five-point aesthetic scale, identified in a pilot project, to score lips and noses as separate units, and compared these human scores with computer-based SymNose symmetry scores. Forty-five assessors (17 cleft surgeons nationally and 28 other cleft professionals from the UK South West Tri-centre units) scored 25 standardized photographs, uploaded randomly onto a Web-based platform, twice. Each photograph was shown in three forms: lip and nose together, and separately cropped images of nose only and lip only. The same images were analyzed using the SymNose software program. Scoring lips gave the best intrarater and interrater reliabilities. Nose scores were more variable. Lip scoring associated most closely with the whole-image score. SymNose ranking of the lip images related highly to the same ranking by humans (p = 0.001). The exemplar images maintained their previously established ranking. Images illustrating the aesthetic outcome grades are confirmed. The lip score is reliable and seems to dominate the whole-image score. Noses are much harder to score reliably. It appears that SymNose can score lip images very effectively by symmetry. Further use of SymNose will be investigated, and families of children with cleft will trial the scoring system. Therapeutic, III.

  2. Reliability model of a monopropellant auxiliary propulsion system

    NASA Technical Reports Server (NTRS)

    Greenberg, J. S.

    1971-01-01

    A mathematical model and associated computer code has been developed which computes the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to and utilized by the reliability model which establishes the probability of successfully accomplishing the orbit corrections.
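
    The NASA model couples the reliability calculation to a detailed performance simulation of the blowdown system; a heavily reduced sketch under a constant-failure-rate assumption still shows the general structure: the probability of accomplishing each successive orbit correction is the probability that the propulsion system survives through the end of that burn. The failure rate and burn schedule below are hypothetical.

    ```python
    # Reduced sketch (constant-failure-rate assumption, not the NASA model): probability of
    # completing each orbit correction = probability of surviving to the end of that burn.
    import math

    FAILURE_RATE = 2.0e-6    # hypothetical failures per hour for the thruster/valve string
    burn_end_times = [100.0, 800.0, 2500.0, 6000.0, 12000.0]   # hours after launch (hypothetical)

    for i, t in enumerate(burn_end_times, start=1):
        reliability = math.exp(-FAILURE_RATE * t)   # P(no failure by time t)
        print(f"correction {i}: completed by t = {t:>7.0f} h, P(success) = {reliability:.5f}")
    ```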

  3. Solving optimization problems on computational grids.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, S. J.; Mathematics and Computer Science

    2001-05-01

    Multiprocessor computing platforms, which have become more and more widely available since the mid-1980s, are now heavily used by organizations that need to solve very demanding computational problems. Parallel computing is now central to the culture of many research communities. Novel parallel approaches were developed for global optimization, network optimization, and direct-search methods for nonlinear optimization. Activity was particularly widespread in parallel branch-and-bound approaches for various problems in combinatorial and network optimization. As the cost of personal computers and low-end workstations has continued to fall, while the speed and capacity of processors and networks have increased dramatically, 'cluster' platforms have become popular in many settings. A somewhat different type of parallel computing platform known as a computational grid (alternatively, metacomputer) has arisen in comparatively recent times. Broadly speaking, this term refers not to a multiprocessor with identical processing nodes but rather to a heterogeneous collection of devices that are widely distributed, possibly around the globe. The advantage of such platforms is obvious: they have the potential to deliver enormous computing power. Just as obviously, however, the complexity of grids makes them very difficult to use. The Condor team, headed by Miron Livny at the University of Wisconsin, were among the pioneers in providing infrastructure for grid computations. More recently, the Globus project has developed technologies to support computations on geographically distributed platforms consisting of high-end computers, storage and visualization devices, and other scientific instruments. In 1997, we started the metaneos project as a collaborative effort between optimization specialists and the Condor and Globus groups. Our aim was to address complex, difficult optimization problems in several areas, designing and implementing the algorithms and the software infrastructure needed to solve these problems on computational grids. This article describes some of the results we have obtained during the first three years of the metaneos project. Our efforts have led to the development of the runtime support library MW for implementing algorithms with a master-worker control structure on Condor platforms. This work is discussed here, along with work on algorithms and codes for integer linear programming, the quadratic assignment problem, and stochastic linear programming. Our experiences in the metaneos project have shown that cheap, powerful computational grids can be used to tackle large optimization problems of various types. In an industrial or commercial setting, the results demonstrate that one may not have to buy powerful computational servers to solve many of the large problems arising in areas such as scheduling, portfolio optimization, or logistics; the idle time on employee workstations (or, at worst, an investment in a modest cluster of PCs) may do the job. For the optimization research community, our results motivate further work on parallel, grid-enabled algorithms for solving very large problems of other types. The fact that very large problems can be solved cheaply allows researchers to better understand issues of 'practical' complexity and of the role of heuristics.
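
    The master-worker control structure mentioned for MW can be illustrated on a single machine. The sketch below uses Python multiprocessing rather than MW/Condor: the master farms out independent subproblems to workers and keeps the best incumbent returned so far; the toy objective and the number of workers are assumptions.

    ```python
    # Local-machine sketch of the master-worker pattern (not the MW/Condor library):
    # the master distributes independent subproblems and keeps the best result so far.
    from multiprocessing import Pool
    import random


    def solve_subproblem(seed):
        """Hypothetical worker task: random-restart local search on a toy 1-D objective."""
        rng = random.Random(seed)
        x = rng.uniform(-10.0, 10.0)
        for _ in range(1000):                          # crude hill-climbing toward the minimum
            step = rng.uniform(-0.1, 0.1)
            if (x + step - 3.0) ** 2 < (x - 3.0) ** 2:
                x += step
        return (x - 3.0) ** 2, x


    if __name__ == "__main__":
        best = (float("inf"), None)
        with Pool(processes=4) as pool:                # workers; MW would use Condor nodes instead
            for value, x in pool.imap_unordered(solve_subproblem, range(32)):
                if value < best[0]:
                    best = (value, x)                  # master keeps the incumbent
        print(f"best objective {best[0]:.6f} at x = {best[1]:.4f}")
    ```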

  4. Clean access platform for orbiter

    NASA Technical Reports Server (NTRS)

    Morrison, H.; Harris, J.

    1990-01-01

    The design of the Clean Access Platform at the Kennedy Space Center is described, beginning with the design requirements and tracing the effort through development and manufacturing. Also examined are: (1) a system description; (2) testing requirements and conclusions; (3) safety and reliability features; (4) major problems experienced during the project; and (5) lessons learned, including features necessary for the effective design of mechanisms used in clean systems.

  5. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
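    A minimal, non-adaptive sketch of importance sampling for a failure probability is given below, with the sampling density simply recentered at an assumed design point; the limit state and all numbers are illustrative, and the adaptive, incremental growth of the sampling domain described above is omitted.

      import numpy as np

      rng = np.random.default_rng(0)

      def limit_state(x):
          # Failure when g(x) <= 0; here g = 4.5 - (x1 + x2) with x ~ N(0, I).
          return 4.5 - x[:, 0] - x[:, 1]

      def std_normal_pdf(u):
          return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

      n = 200_000
      shift = np.array([2.25, 2.25])                 # assumed approximate design point
      samples = rng.standard_normal((n, 2)) + shift  # sampling density centered at the shift

      fail = limit_state(samples) <= 0.0
      # Importance weights: true joint density over sampling density, per dimension.
      weights = np.prod(std_normal_pdf(samples) / std_normal_pdf(samples - shift), axis=1)

      p_f = np.mean(fail * weights)
      print(f"estimated failure probability: {p_f:.2e}")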

  6. Power in the loop real time simulation platform for renewable energy generation

    NASA Astrophysics Data System (ADS)

    Li, Yang; Shi, Wenhui; Zhang, Xing; He, Guoqing

    2018-02-01

    Nowadays, renewable energy sources are being connected to the power system on a large scale, and real-time simulation platforms are widely used to carry out research on integration control algorithms, power system stability, etc. Compared to traditional pure digital simulation and hardware-in-the-loop simulation, power-in-the-loop simulation offers higher accuracy and reliability. In this paper, a power-in-the-loop analog-digital hybrid simulation platform has been built; it can be used not only for a single generation unit connected to the grid but also for multiple renewable generation units. A wind generator inertia control experiment was carried out on the platform. The structure of the inertia control setup was studied, and the results verify that the platform meets the requirements of power-in-the-loop real-time simulation for renewable energy.

  7. Computer-Aided Reliability Estimation

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.

    1986-01-01

    CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate reliability of complex, redundant, fault-tolerant systems. Program specifically designed for evaluation of fault-tolerant avionics systems. However, CARE III general enough for use in evaluation of other systems as well.

  8. Megatux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-25

    The Megatux platform enables the emulation of large scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows for multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware, but run actual software enabling large scale without sacrificing fidelity.

  9. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures currently are undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  10. Implementation and performance test of cloud platform based on Hadoop

    NASA Astrophysics Data System (ADS)

    Xu, Jingxian; Guo, Jianhong; Ren, Chunlan

    2018-01-01

    Hadoop, as an open source project of the Apache foundation, is a distributed computing framework that deals with large amounts of data and has been widely used in the Internet industry. Therefore, it is worthwhile to study how to build a Hadoop platform and how to test its performance. This paper presents a method for implementing a Hadoop platform and a method for testing platform performance. Experimental results show that the proposed performance-testing method is effective and can characterize the performance of a Hadoop platform.

  11. Reliability history of the Apollo guidance computer

    NASA Technical Reports Server (NTRS)

    Hall, E. C.

    1972-01-01

    The Apollo guidance computer was designed to provide the computation necessary for guidance, navigation and control of the command module and the lunar landing module of the Apollo spacecraft. The computer was designed using the technology of the early 1960's and the production was completed by 1969. During the development, production, and operational phase of the program, the computer has accumulated a very interesting history which is valuable for evaluating the technology, production methods, system integration, and the reliability of the hardware. The operational experience in the Apollo guidance systems includes 17 computers which flew missions and another 26 flight type computers which are still in various phases of prelaunch activity including storage, system checkout, prelaunch spacecraft checkout, etc. These computers were manufactured and maintained under very strict quality control procedures with requirements for reporting and analyzing all indications of failure. Probably no other computer or electronic equipment with equivalent complexity has been as well documented and monitored. Since it has demonstrated a unique reliability history, it is important to evaluate the techniques and methods which have contributed to the high reliability of this computer.

  12. Advanced information processing system: The Army Fault-Tolerant Architecture detailed design overview

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Babikyan, Carol A.; Butler, Bryan P.; Clasen, Robert J.; Harris, Chris H.; Lala, Jaynarayan H.; Masotto, Thomas K.; Nagle, Gail A.; Prizant, Mark J.; Treadwell, Steven

    1994-01-01

    The Army Avionics Research and Development Activity (AVRADA) is pursuing programs that would enable effective and efficient management of the large amounts of situational data that occur during tactical rotorcraft missions. The Computer Aided Low Altitude Night Helicopter Flight Program has identified automated Terrain Following/Terrain Avoidance, Nap of the Earth (TF/TA, NOE) operation as a key enabling technology for advanced tactical rotorcraft to enhance mission survivability and mission effectiveness. The processing of critical information at low altitudes with short reaction times is life-critical and mission-critical, necessitating an ultra-reliable, high-throughput computing platform for dependable service for flight control, fusion of sensor data, route planning, near-field/far-field navigation, and obstacle avoidance operations. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and developed. This computer system is based upon the Fault Tolerant Parallel Processor (FTPP) developed by Charles Stark Draper Labs (CSDL). AFTA is a hard real-time, Byzantine fault-tolerant parallel processor programmed in the Ada language. This document describes the results of the Detailed Design (Phases 2 and 3 of a 3-year project) of the AFTA development. This document contains detailed descriptions of the program objectives, the TF/TA NOE application requirements, architecture, hardware design, operating systems design, systems performance measurements, and analytical models.

  13. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Jibonananda; Fugate, David L.; Woodworth, Ken

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurements and control of hardware devices. These units are designed for reasonable durability and operations in a wide range of operating conditions. Some embedded systems support real-time operations and can demonstrate high levels of reliability. Many have failsafe mechanisms built to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report also details preliminary investigation of the execution performance of Volttron on these devices.

  14. A neotropical Miocene pollen database employing image-based search and semantic modeling

    PubMed Central

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-01-01

    • Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648

  15. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
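    The style of analysis described above can be sketched for a hypothetical tool whose execution visits three components in sequence, each with an assumed per-visit reliability; the probability of ending in the success state follows from the absorbing discrete-time Markov chain (component names and reliabilities below are invented).

      import numpy as np

      # Transient states: components C1, C2, C3. Absorbing states: Failure, Success.
      r = {"C1": 0.999, "C2": 0.995, "C3": 0.990}   # assumed component reliabilities

      # Q: transitions among transient states; R: transitions into absorbing states.
      Q = np.array([[0.0, r["C1"], 0.0],
                    [0.0, 0.0,     r["C2"]],
                    [0.0, 0.0,     0.0]])
      R = np.array([[1.0 - r["C1"], 0.0],           # columns: Failure, Success
                    [1.0 - r["C2"], 0.0],
                    [1.0 - r["C3"], r["C3"]]])

      # Absorption probabilities B = (I - Q)^-1 R; row 0 corresponds to starting in C1.
      B = np.linalg.solve(np.eye(3) - Q, R)
      print(f"P(tool run completes successfully) = {B[0, 1]:.6f}")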

  16. Apps seeking theories: results of a study on the use of health behavior change theories in cancer survivorship mobile apps.

    PubMed

    Vollmer Dahlke, Deborah; Fair, Kayla; Hong, Y Alicia; Beaudoin, Christopher E; Pulczinski, Jairus; Ory, Marcia G

    2015-03-27

    Thousands of mobile health apps are now available for use on mobile phones for a variety of uses and conditions, including cancer survivorship. Many of these apps appear to deliver health behavior interventions but may fail to consider design considerations based in human computer interface and health behavior change theories. This study is designed to assess the presence of and manner in which health behavior change and health communication theories are applied in mobile phone cancer survivorship apps. The research team selected a set of criteria-based health apps for mobile phones and assessed each app using qualitative coding methods to assess the application of health behavior change and communication theories. Each app was assessed using a coding derived from the taxonomy of 26 health behavior change techniques by Abraham and Michie with a few important changes based on the characteristics of mHealth apps that are specific to information processing and human computer interaction such as control theory and feedback systems. A total of 68 mobile phone apps and games built on the iOS and Android platforms were coded, with 65 being unique. Using a Cohen's kappa analysis statistic, the inter-rater reliability for the iOS apps was 86.1 (P<.001) and for the Android apps, 77.4 (P<.001). For the most part, the scores for inclusion of theory-based health behavior change characteristics in the iOS platform cancer survivorship apps were consistently higher than those of the Android platform apps. For personalization and tailoring, 67% of the iOS apps (24/36) had these elements as compared to 38% of the Android apps (12/32). In the area of prompting for intention formation, 67% of the iOS apps (34/36) indicated these elements as compared to 16% (5/32) of the Android apps. Mobile apps are rapidly emerging as a way to deliver health behavior change interventions that can be tailored or personalized for individuals. As these apps and games continue to evolve and include interactive and adaptive sensors and other forms of dynamic feedback, their content and interventional elements need to be grounded in human computer interface design and health behavior and communication theory and practice.
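    For reference, the inter-rater statistic reported above can be computed as in the following sketch, which derives Cohen's kappa for two raters coding the presence or absence of a single behavior change technique across a set of apps (the ratings are invented for illustration).

      # Cohen's kappa for two raters and binary codes (1 = technique present).
      rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
      rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]

      n = len(rater_a)
      p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

      # Chance agreement from each rater's marginal proportions.
      p_a1 = sum(rater_a) / n
      p_b1 = sum(rater_b) / n
      p_chance = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)

      kappa = (p_observed - p_chance) / (1 - p_chance)
      print(f"Cohen's kappa = {kappa:.3f}")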

  17. Apps Seeking Theories: Results of a Study on the Use of Health Behavior Change Theories in Cancer Survivorship Mobile Apps

    PubMed Central

    Fair, Kayla; Hong, Y Alicia; Beaudoin, Christopher E; Pulczinski, Jairus; Ory, Marcia G

    2015-01-01

    Background Thousands of mobile health apps are now available for use on mobile phones for a variety of uses and conditions, including cancer survivorship. Many of these apps appear to deliver health behavior interventions but may fail to consider design considerations based in human computer interface and health behavior change theories. Objective This study is designed to assess the presence of and manner in which health behavior change and health communication theories are applied in mobile phone cancer survivorship apps. Methods The research team selected a set of criteria-based health apps for mobile phones and assessed each app using qualitative coding methods to assess the application of health behavior change and communication theories. Each app was assessed using a coding derived from the taxonomy of 26 health behavior change techniques by Abraham and Michie with a few important changes based on the characteristics of mHealth apps that are specific to information processing and human computer interaction such as control theory and feedback systems. Results A total of 68 mobile phone apps and games built on the iOS and Android platforms were coded, with 65 being unique. Using a Cohen’s kappa analysis statistic, the inter-rater reliability for the iOS apps was 86.1 (P<.001) and for the Android apps, 77.4 (P<.001). For the most part, the scores for inclusion of theory-based health behavior change characteristics in the iOS platform cancer survivorship apps were consistently higher than those of the Android platform apps. For personalization and tailoring, 67% of the iOS apps (24/36) had these elements as compared to 38% of the Android apps (12/32). In the area of prompting for intention formation, 67% of the iOS apps (34/36) indicated these elements as compared to 16% (5/32) of the Android apps. Conclusions Mobile apps are rapidly emerging as a way to deliver health behavior change interventions that can be tailored or personalized for individuals. As these apps and games continue to evolve and include interactive and adaptive sensors and other forms of dynamic feedback, their content and interventional elements need to be grounded in human computer interface design and health behavior and communication theory and practice. PMID:25830810

  18. TERRA REF: Advancing phenomics with high resolution, open access sensor and genomics data

    NASA Astrophysics Data System (ADS)

    LeBauer, D.; Kooper, R.; Burnette, M.; Willis, C.

    2017-12-01

    Automated plant measurement has the potential to improve understanding of genetic and environmental controls on plant traits (phenotypes). The application of sensors and software in the automation of high throughput phenotyping reflects a fundamental shift from labor intensive hand measurements to drone, tractor, and robot mounted sensing platforms. These tools are expected to speed the rate of crop improvement by enabling plant breeders to more accurately select plants with improved yields, resource use efficiency, and stress tolerance. However, there are many challenges facing high throughput phenomics: sensors and platforms are expensive, currently there are few standard methods of data collection and storage, and the analysis of large data sets requires high performance computers and automated, reproducible computing pipelines. To overcome these obstacles and advance the science of high throughput phenomics, the TERRA Phenotyping Reference Platform (TERRA-REF) team is developing an open-access database of high resolution sensor data. TERRA REF is an integrated field and greenhouse phenotyping system that includes: a reference field scanner with fifteen sensors that can generate terabytes of data each day at mm resolution; UAV, tractor, and fixed field sensing platforms; and an automated controlled-environment scanner. These platforms will enable investigation of diverse sensing modalities, and the investigation of traits under controlled and field environments. It is the goal of TERRA REF to lower the barrier to entry for academic and industry researchers by providing high-resolution data, open source software, and online computing resources. Our project is unique in that all data will be made fully public in November 2018, and is already available to early adopters through the beta-user program. We will describe the datasets and how to use them as well as the databases and computing pipeline and how these can be reused and remixed in other phenomics pipelines. Finally, we will describe the National Data Service workbench, a cloud computing platform that can access the petabyte scale data while supporting reproducible research.

  19. TTEthernet for Integrated Spacecraft Networks

    NASA Technical Reports Server (NTRS)

    Loveless, Andrew

    2015-01-01

    Aerospace projects have traditionally employed federated avionics architectures, in which each computer system is designed to perform one specific function (e.g. navigation). There are obvious downsides to this approach, including excessive weight (from so much computing hardware), and inefficient processor utilization (since modern processors are capable of performing multiple tasks). There has therefore been a push for integrated modular avionics (IMA), in which common computing platforms can be leveraged for different purposes. This consolidation of multiple vehicle functions to shared computing platforms can significantly reduce spacecraft cost, weight, and design complexity. However, the application of IMA principles introduces significant challenges, as the data network must accommodate traffic of mixed criticality and performance levels - potentially all related to the same shared computer hardware. Because individual network technologies are rarely so competent, the development of truly integrated network architectures often proves unreasonable. Several different types of networks are utilized - each suited to support a specific vehicle function. Critical functions are typically driven by precise timing loops, requiring networks with strict guarantees regarding message latency (i.e. determinism) and fault-tolerance. Alternatively, non-critical systems generally employ data networks prioritizing flexibility and high performance over reliable operation. Switched Ethernet has seen widespread success filling this role in terrestrial applications. Its high speed, flexibility, and the availability of inexpensive commercial off-the-shelf (COTS) components make it desirable for inclusion in spacecraft platforms. Basic Ethernet configurations have been incorporated into several preexisting aerospace projects, including both the Space Shuttle and International Space Station (ISS). However, classical switched Ethernet cannot provide the high level of network determinism required by real-time spacecraft applications. Even with modern advancements, the uncoordinated (i.e. event-driven) nature of Ethernet communication unavoidably leads to message contention within network switches. The arbitration process used to resolve such conflicts introduces variation in the time it takes for messages to be forwarded. TTEthernet1 introduces decentralized clock synchronization to switched Ethernet, enabling message transmission according to a time-triggered (TT) paradigm. A network planning tool is used to allocate each device a finite amount of time in which it may transmit a frame. Each time slot is repeated sequentially to form a periodic communication schedule that is then loaded onto each TTEthernet device (e.g. switches and end systems). Each network participant references the synchronized time in order to dispatch messages at predetermined instances. This schedule guarantees that no contention exists between time-triggered Ethernet frames in the network switches, therefore eliminating the need for arbitration (and the timing variation it causes). Besides time-triggered messaging, TTEthernet networks may provide two additional traffic classes to support communication of different criticality levels. In the rate-constrained (RC) traffic class, the frame payload size and rate of transmission along each communication channel are limited to predetermined maximums. 
The network switches can therefore be configured to accommodate the known worst-case traffic pattern, and buffer overflows can be eliminated. The best-effort (BE) traffic class behaves akin to classical Ethernet. No guarantees are provided regarding transmission latency or successful message delivery. TTEthernet coordinates transmission of all three traffic classes over the same physical connections, therefore accommodating the full spectrum of traffic criticality levels required in IMA architectures. Common computing platforms (e.g. LRUs) can share networking resources in such a way that failures in non-critical systems (using BE or RC communication modes) cannot impact flight-critical functions (using TT communication). Furthermore, TTEthernet hardware (e.g. switches, cabling) can be shared by both TTEthernet and classical Ethernet traffic.
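    A toy sketch of the kind of check a network planning tool performs on a time-triggered schedule follows: each end system is given a transmission offset and window inside a shared cluster cycle, and the tool verifies that no two windows overlap. The device names, cycle length, and slot table are invented for illustration.

      # Each entry: (device, offset_us, window_us) within one cluster cycle.
      CYCLE_US = 10_000
      slots = [
          ("flight_ctrl", 0, 500),
          ("nav", 1_000, 500),
          ("sensor_hub", 2_000, 1_000),
          ("payload", 4_000, 2_000),
      ]

      def schedule_is_contention_free(slots, cycle_us):
          # Sort by offset, then confirm each window ends before the next one
          # begins and before the cycle wraps around.
          ordered = sorted(slots, key=lambda s: s[1])
          for (_, off, win), (_, next_off, _) in zip(ordered, ordered[1:]):
              if off + win > next_off:
                  return False
          _, last_off, last_win = ordered[-1]
          return last_off + last_win <= cycle_us

      print("contention-free:", schedule_is_contention_free(slots, CYCLE_US))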

  20. Cpu/gpu Computing for AN Implicit Multi-Block Compressible Navier-Stokes Solver on Heterogeneous Platform

    NASA Astrophysics Data System (ADS)

    Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin

    2016-06-01

    CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double precision alternating direction implicit (ADI) solver for three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software onto a heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove a lot of redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize the performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern MPI-OpenMP-CUDA that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap the computation with communication using the advanced features of CUDA and MPI programming. We obtain speedups of 6.0 for the ADI solver on one Tesla M2050 GPU compared with two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on the heterogeneous platform.
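    At the heart of each ADI sweep is an independent tridiagonal solve along every grid line, which is what a "one-thread-one-line" decomposition parallelizes. The sketch below is a plain sequential version of that per-line solve (the Thomas algorithm), not the authors' GPU implementation; the coefficients are illustrative.

      import numpy as np

      def thomas_solve(a, b, c, d):
          # Solve a tridiagonal system: a = sub-diagonal, b = main diagonal,
          # c = super-diagonal, d = right-hand side (a[0] and c[-1] are unused).
          n = len(d)
          cp = np.zeros(n)
          dp = np.zeros(n)
          cp[0] = c[0] / b[0]
          dp[0] = d[0] / b[0]
          for i in range(1, n):
              denom = b[i] - a[i] * cp[i - 1]
              if i < n - 1:
                  cp[i] = c[i] / denom
              dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
          x = np.zeros(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # Example line: diagonally dominant system typical of an implicit sweep.
      n = 8
      a = np.full(n, -1.0)
      b = np.full(n, 4.0)
      c = np.full(n, -1.0)
      d = np.ones(n)
      print(thomas_solve(a, b, c, d))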

  1. Cloud Computing for Geosciences--GeoCloud for standardized geospatial service platforms (Invited)

    NASA Astrophysics Data System (ADS)

    Nebert, D. D.; Huang, Q.; Yang, C.

    2013-12-01

    The 21st century geoscience faces challenges of Big Data, spike computing requirements (e.g., when natural disaster happens), and sharing resources through cyberinfrastructure across different organizations (Yang et al., 2011). With flexibility and cost-efficiency of computing resources a primary concern, cloud computing emerges as a promising solution to provide core capabilities to address these challenges. Many governmental and federal agencies are adopting cloud technologies to cut costs and to make federal IT operations more efficient (Huang et al., 2010). However, it is still difficult for geoscientists to take advantage of the benefits of cloud computing to facilitate the scientific research and discoveries. This presentation reports using GeoCloud to illustrate the process and strategies used in building a common platform for geoscience communities to enable the sharing, integration of geospatial data, information and knowledge across different domains. GeoCloud is an annual incubator project coordinated by the Federal Geographic Data Committee (FGDC) in collaboration with the U.S. General Services Administration (GSA) and the Department of Health and Human Services. It is designed as a staging environment to test and document the deployment of a common GeoCloud community platform that can be implemented by multiple agencies. With these standardized virtual geospatial servers, a variety of government geospatial applications can be quickly migrated to the cloud. In order to achieve this objective, multiple projects are nominated each year by federal agencies as existing public-facing geospatial data services. From the initial candidate projects, a set of common operating system and software requirements was identified as the baseline for platform as a service (PaaS) packages. Based on these developed common platform packages, each project deploys and monitors its web application, develops best practices, and documents cost and performance information. This paper presents the background, architectural design, and activities of GeoCloud in support of the Geospatial Platform Initiative. System security strategies and approval processes for migrating federal geospatial data, information, and applications into cloud, and cost estimation for cloud operations are covered. Finally, some lessons learned from the GeoCloud project are discussed as reference for geoscientists to consider in the adoption of cloud computing.

  2. Inter- and intraobserver reliability of the vertebral, local and segmental kyphosis in 120 traumatic lumbar and thoracic burst fractures: evaluation in lateral X-rays and sagittal computed tomographies

    PubMed Central

    Brunner, Alexander; Gühring, Markus; Schmälzle, Traude; Weise, Kuno; Badke, Andreas

    2009-01-01

    Evaluation of the kyphosis angle in thoracic and lumbar burst fractures is often used to indicate surgical procedures. The kyphosis angle can be measured as vertebral, segmental and local kyphosis according to the method of Cobb. The vertebral, segmental and local kyphosis according to the method of Cobb were measured on 120 lateral X-rays and sagittal computed tomographies of 60 thoracic and 60 lumbar burst fractures by 3 independent observers on 2 separate occasions. Osteoporotic fractures were excluded. The intra- and interobserver reliabilities of these angles in X-ray and computed tomography were evaluated using the intraclass correlation coefficient (ICC). The segmental kyphosis showed the highest reproducibility, followed by the vertebral kyphosis. For thoracic fractures, the segmental kyphosis showed “excellent” inter- and intraobserver reliabilities in X-ray (ICC = 0.826, 0.802), and for lumbar fractures “good” to “excellent” inter- and intraobserver reliabilities (ICC = 0.790, 0.803). In computed tomography, the segmental kyphosis showed “excellent” inter- and intraobserver reliabilities (ICC = 0.824, 0.801) for thoracic and “excellent” inter- and intraobserver reliabilities (ICC = 0.874, 0.835) for the lumbar fractures. Regarding both diagnostic work-ups (X-ray and computed tomography), significant differences were found in interobserver reliabilities for vertebral kyphosis measured in lumbar fracture X-rays (p = 0.035) and in interobserver reliabilities for local kyphosis measured in thoracic fracture X-rays (p = 0.010). Regarding both fracture localizations (thoracic and lumbar fractures), significant differences were found only in interobserver reliabilities for the local kyphosis measured in computed tomographies (p = 0.045) and in intraobserver reliabilities for the vertebral kyphosis measured in X-rays (p = 0.024). “Good” to “excellent” inter- and intraobserver reliabilities for vertebral, segmental and local kyphosis in X-ray make these angles a helpful tool for indicating surgical procedures. For practical use in lateral X-ray, we emphasize determination of the segmental kyphosis, because this angle has the highest reproducibility. “Good” to “excellent” inter- and intraobserver reliabilities for these three angles were also found in computed tomography. Therefore, the use of these three angles also appears to be generally feasible in computed tomography. For a direct correlation of the results in lateral X-ray and in computed tomography, further studies are needed. PMID:19953277
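    As a reminder of how such coefficients are obtained, the sketch below computes one common form, the two-way random-effects, single-measure ICC(2,1) of Shrout and Fleiss, from a subjects-by-observers table of angle measurements; the angle values are invented, not data from the study.

      import numpy as np

      # Rows: fractures (subjects); columns: observers. Angles in degrees (invented).
      x = np.array([[12.0, 13.5, 12.5],
                    [22.0, 21.0, 23.0],
                    [ 8.0,  9.0,  8.5],
                    [17.5, 18.0, 16.5],
                    [30.0, 29.0, 31.0]])
      n, k = x.shape
      grand = x.mean()

      ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)
      ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)
      ss_total = np.sum((x - grand) ** 2)
      ss_err = ss_total - ss_rows - ss_cols

      ms_rows = ss_rows / (n - 1)
      ms_cols = ss_cols / (k - 1)
      ms_err = ss_err / ((n - 1) * (k - 1))

      # ICC(2,1): two-way random effects, absolute agreement, single measurement.
      icc_2_1 = (ms_rows - ms_err) / (
          ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
      print(f"ICC(2,1) = {icc_2_1:.3f}")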

  3. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    PubMed Central

    2011-01-01

    Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lower the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing. PMID:21878105

  4. 32 CFR 34.42 - Retention and access requirements for records.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... procedures shall maintain the integrity, reliability, and security of the original computer data. Recipients... (such as documents related to computer usage chargeback rates), along with their supporting records... this section is maintained on a computer, recipients shall retain the computer data on a reliable...

  5. 32 CFR 34.42 - Retention and access requirements for records.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... procedures shall maintain the integrity, reliability, and security of the original computer data. Recipients... (such as documents related to computer usage chargeback rates), along with their supporting records... this section is maintained on a computer, recipients shall retain the computer data on a reliable...

  6. 32 CFR 34.42 - Retention and access requirements for records.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... procedures shall maintain the integrity, reliability, and security of the original computer data. Recipients... (such as documents related to computer usage chargeback rates), along with their supporting records... this section is maintained on a computer, recipients shall retain the computer data on a reliable...

  7. Circuit design advances for ultra-low power sensing platforms

    NASA Astrophysics Data System (ADS)

    Wieckowski, Michael; Dreslinski, Ronald G.; Mudge, Trevor; Blaauw, David; Sylvester, Dennis

    2010-04-01

    This paper explores the recent advances in circuit structures and design methodologies that have enabled ultra-low power sensing platforms and opened up a host of new applications. Central to this theme is the development of Near Threshold Computing (NTC) as a viable design space for low power sensing platforms. In this paradigm, the system's supply voltage is approximately equal to the threshold voltage of its transistors. Operating in this "near-threshold" region provides much of the energy savings previously demonstrated for subthreshold operation while offering more favorable performance and variability characteristics. This makes NTC applicable to a broad range of power-constrained computing segments including energy constrained sensing platforms. This paper explores the barriers to the adoption of NTC and describes current work aimed at overcoming these obstacles in the circuit design space.

  8. Balloon platform for extended-life astronomy research

    NASA Technical Reports Server (NTRS)

    Ostwald, L. T.

    1974-01-01

    A configuration has been developed for a long-life balloon platform to carry pointing telescopes weighing as much as 80 pounds (36 kg) to point at selected celestial targets. A platform of this configuration weighs about 375 pounds (170 kg) gross and can be suspended from a high altitude super pressure balloon for a lifetime of several months. The balloon platform contains a solar array and storage batteries for electrical power, up and down link communications equipment, and navigational and attitude control systems for orienting the scientific instrument. A biaxial controller maintains the telescope attitude in response to look-angle data stored in an on-board computer memory which is updated periodically by ground command. Gimbal angles are computed by using location data derived by an on-board navigational receiver.

  9. A multilevel control approach for a modular structured space platform

    NASA Technical Reports Server (NTRS)

    Chichester, F. D.; Borelli, M. T.

    1981-01-01

    A three-axis mathematical representation of a modularly assembled space platform consisting of interconnected discrete masses, including a deployable truss module, was derived for digital computer simulation. The platform attitude control system was developed to provide multilevel control, utilizing the Gauss-Seidel second-level formulation along with an extended form of linear quadratic regulator techniques. The objectives of the multilevel control are to decouple the space platform's spatial axes and to accommodate the modification of the platform's configuration for each of the decoupled axes.
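    As background for the regulator design mentioned above, a standard (single-level, non-extended) discrete-time LQR gain can be computed as in the sketch below; the double-integrator model of one attitude axis and the weighting matrices are illustrative, not the platform model from the study.

      import numpy as np
      from scipy.linalg import solve_discrete_are

      dt = 0.1
      # Double-integrator model of one attitude axis: state = [angle, rate], input = torque.
      A = np.array([[1.0, dt],
                    [0.0, 1.0]])
      B = np.array([[0.5 * dt**2],
                    [dt]])
      Q = np.diag([10.0, 1.0])   # assumed state weighting
      R = np.array([[0.1]])      # assumed control weighting

      # Solve the discrete algebraic Riccati equation and form the feedback gain (u = -K x).
      P = solve_discrete_are(A, B, Q, R)
      K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
      print("LQR gain K =", K)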

  10. Architecture and Initial Development of a Digital Library Platform for Computable Knowledge Objects for Health.

    PubMed

    Flynn, Allen J; Bahulekar, Namita; Boisvert, Peter; Lagoze, Carl; Meng, George; Rampton, James; Friedman, Charles P

    2017-01-01

    Throughout the world, biomedical knowledge is routinely generated and shared through primary and secondary scientific publications. However, there is too much latency between publication of knowledge and its routine use in practice. To address this latency, what is actionable in scientific publications can be encoded to make it computable. We have created a purpose-built digital library platform to hold, manage, and share actionable, computable knowledge for health called the Knowledge Grid Library. Here we present it with its system architecture.

  11. A Geospatial Information Grid Framework for Geological Survey.

    PubMed

    Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong

    2015-01-01

    The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and an evaluation service is introduced in this paper.

  12. A Geospatial Information Grid Framework for Geological Survey

    PubMed Central

    Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong

    2015-01-01

    The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and an evaluation service is introduced in this paper. PMID:26710255

  13. Fog-computing concept usage as means to enhance information and control system reliability

    NASA Astrophysics Data System (ADS)

    Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya

    2018-05-01

    This paper focuses on the reliability issue of information and control systems (ICS). The authors propose using elements of the fog-computing concept to enhance the reliability function. The key idea of fog computing is to shift computations to the fog layer of the network, and thus to decrease the workload of the communication environment and data processing components. As for ICS, the workload can also be distributed among sensors, actuators and network infrastructure facilities near the sources of data. The authors simulated typical workload distribution situations for the “traditional” ICS architecture and for one using elements of the fog-computing concept. The paper contains some models, selected simulation results and conclusions about the prospects of fog computing as a means to enhance ICS reliability.

  14. Doppler Lidar Vector Retrievals and Atmospheric Data Visualization in Mixed/Augmented Reality

    NASA Astrophysics Data System (ADS)

    Cherukuru, Nihanth Wagmi

    Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive, high spatial and temporal measurement capabilities. While early lidar applications relied on radial velocity measurements alone, most practical applications in wind farm control and short-term wind prediction require knowledge of the vector wind field. Over the past couple of years, multiple works on lidars have explored three primary methods of retrieving wind vectors: using a homogeneous wind-field assumption, computationally intensive variational methods, and the use of multiple Doppler lidars. Building on prior research, the current three-part study first demonstrates the capabilities of single and dual Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona's Barringer Meteor Crater as a part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind farm control applications, a novel 2D vector retrieval based on a variational formulation was developed and applied to lidar scans from an offshore wind farm and validated with data from a cup and vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using Mixed Reality (MR)/Augmented Reality (AR) technology is presented to visualize data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors like Doppler lidars). A methodology using modern game development platforms is presented and demonstrated with lidar-retrieved wind fields. In the current study, the possibility of using this technology to visualize data from atmospheric sensors in mixed reality is explored and demonstrated with lidar-retrieved wind fields as well as a few earth science datasets for education and outreach activities.
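    The dual-Doppler step rests on a simple geometric relation: each lidar measures the projection of the horizontal wind onto its beam, so two beams at different azimuths yield a 2x2 linear system for the wind components (u, v). The sketch below uses invented azimuths and radial velocities and ignores beam elevation.

      import numpy as np

      # Radial velocity model: v_r = u*sin(azimuth) + v*cos(azimuth),
      # with u eastward, v northward, and azimuth measured clockwise from north.
      az1, az2 = np.radians(30.0), np.radians(100.0)   # beam azimuths (invented)
      vr1, vr2 = 3.2, -1.5                             # measured radial velocities, m/s (invented)

      A = np.array([[np.sin(az1), np.cos(az1)],
                    [np.sin(az2), np.cos(az2)]])
      u, v = np.linalg.solve(A, np.array([vr1, vr2]))

      speed = np.hypot(u, v)
      bearing = (np.degrees(np.arctan2(u, v)) + 360.0) % 360.0  # direction the wind blows toward
      print(f"u = {u:.2f} m/s, v = {v:.2f} m/s, speed = {speed:.2f} m/s, bearing = {bearing:.1f} deg")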

  15. Design of a modular digital computer system, DRL 4. [for meeting future requirements of spaceborne computers

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design is reported of an advanced modular computer system designated the Automatically Reconfigurable Modular Multiprocessor System, which anticipates requirements for higher computing capacity and reliability for future spaceborne computers. Subjects discussed include: an overview of the architecture, mission analysis, synchronous and nonsynchronous scheduling control, reliability, and data transmission.

  16. Computer Usage and the Validity of Self-Assessed Computer Competence among First-Year Business Students

    ERIC Educational Resources Information Center

    Ballantine, Joan A.; McCourt Larres, Patricia; Oyelere, Peter

    2007-01-01

    This study evaluates the reliability of self-assessment as a measure of computer competence. This evaluation is carried out in response to recent research which has employed self-reported ratings as the sole indicator of students' computer competence. To evaluate the reliability of self-assessed computer competence, the scores achieved by students…

  17. Characterizing Information Processing With a Mobile Device: Measurement of Simple and Choice Reaction Time.

    PubMed

    Burke, Daniel; Linder, Susan; Hirsch, Joshua; Dey, Tanujit; Kana, Daniel; Ringenbach, Shannon; Schindler, David; Alberts, Jay

    2017-10-01

    Information processing is typically evaluated using simple reaction time (SRT) and choice reaction time (CRT) paradigms in which a specific response is initiated following a given stimulus. The measurement of reaction time (RT) has evolved from monitoring the timing of mechanical switches to computerized paradigms. The proliferation of mobile devices with touch screens makes them a natural next technological approach to assess information processing. The aims of this study were to determine the validity and reliability of using a mobile device (Apple iPad or iTouch) to accurately measure RT. Sixty healthy young adults completed SRT and CRT tasks using a traditional test platform and mobile platforms on two occasions. The SRT was similar across test modality: 300, 287, and 280 milliseconds (ms) for the traditional, iPad, and iTouch, respectively. The CRT was similar within mobile devices, though slightly faster on the traditional: 359, 408, and 384 ms for traditional, iPad, and iTouch, respectively. Intraclass correlation coefficients ranged from 0.79 to 0.85 for SRT and from 0.75 to 0.83 for CRT. The similarity and reliability of SRT across platforms and consistency of SRT and CRT across test conditions indicate that mobile devices provide the next generation of assessment platforms for information processing.

  18. We-Measure: Toward a low-cost portable posturography for patients with multiple sclerosis using the commercial Wii balance board.

    PubMed

    Castelli, Letizia; Stocchi, Luca; Patrignani, Maurizio; Sellitto, Giovanni; Giuliani, Manuela; Prosperini, Luca

    2015-12-15

    This study was aimed at investigating whether postural sway measures derived from a standard force platform were similar to those generated by custom-written software ("We-Measure") acquiring and processing data from a commercial Nintendo balance board (BB). For this purpose, 90 patients with multiple sclerosis (MS) and 50 healthy controls (HC) were tested in a single-day session with a reference standard force platform and a BB-based system. Despite its acceptable between-device agreement (tested by visual evaluation of the Bland-Altman plot), the low-cost BB-based system tended to overestimate postural sway when compared to the reference standard force platform in both MS and HC groups (on average +30% and +54%, respectively). Between-device reliability was just adequate (MS: 66%, HC: 47%), while test-retest reliability was excellent (MS: 84%, HC: 88%). Concurrent validity evaluation showed similar performance between the reference standard force platform and the BB-based system in discriminating fallers and non-fallers among patients with MS. All these findings may encourage the use of this new balance-board-based device in longitudinal studies, rather than in cross-sectional designs, thus providing a potentially useful tool for multicenter settings. Copyright © 2015 Elsevier B.V. All rights reserved.
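    The kind of summary measure being compared across devices can be computed directly from a center-of-pressure time series; the sketch below derives sway path length and mean sway velocity from an invented trace, and is not the We-Measure code.

      import numpy as np

      fs = 50.0                          # assumed sampling rate, Hz
      rng = np.random.default_rng(1)
      # Invented 30-second center-of-pressure trace (x = medio-lateral, y = antero-posterior), in mm.
      t = np.arange(0.0, 30.0, 1.0 / fs)
      cop = np.cumsum(rng.normal(scale=0.2, size=(t.size, 2)), axis=0)

      steps = np.diff(cop, axis=0)
      path_length = np.sum(np.hypot(steps[:, 0], steps[:, 1]))   # total sway path, mm
      mean_velocity = path_length / (t[-1] - t[0])                # mm/s

      print(f"sway path length = {path_length:.1f} mm, mean sway velocity = {mean_velocity:.2f} mm/s")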

  19. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

    Two methodologies for the reliability assessment of fault tolerant digital computer based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is based on the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.

  20. A Compact Forearm Crutch Based on Force Sensors for Aided Gait: Reliability and Validity.

    PubMed

    Chamorro-Moriana, Gema; Sevillano, José Luis; Ridao-Fernández, Carmen

    2016-06-21

    Frequently, patients who suffer injuries to a lower limb require forearm crutches in order to partially unload weight-bearing. These lesions cause pain during lower-limb unloading, and their progression should be controlled objectively to avoid significant errors in accuracy and, consequently, complications and after-effects of the lesions. The design of a new and feasible tool that allows us to control and improve the accuracy of loads exerted on crutches during aided gait is necessary, so as to unburden the lower limbs. In this paper, we describe such a system based on a force sensor, which we have named the GCH System 2.0. Furthermore, we determine the validity and reliability of measurements obtained using this tool via a comparison with the validated AMTI (Advanced Mechanical Technology, Inc., Watertown, MA, USA) OR6-7-2000 Platform. An intra-class correlation coefficient demonstrated excellent agreement between the AMTI Platform and the GCH System. A regression line was fitted to determine the predictive ability of the GCH System relative to the AMTI Platform, yielding a precision of 99.3%. A detailed statistical analysis is presented for all the measurements and also segregated for several requested loads on the crutches (10%, 25% and 50% of body weight). Our results show that our system, designed for assessing loads exerted by patients on forearm crutches during assisted gait, provides valid and reliable measurements of loads.

  1. A Compact Forearm Crutch Based on Force Sensors for Aided Gait: Reliability and Validity

    PubMed Central

    Chamorro-Moriana, Gema; Sevillano, José Luis; Ridao-Fernández, Carmen

    2016-01-01

    Frequently, patients who suffer injuries to a lower limb require forearm crutches in order to partially unload weight-bearing. These lesions cause pain during lower-limb unloading, and their progression should be controlled objectively to avoid significant errors in accuracy and, consequently, complications and after-effects of the lesions. The design of a new and feasible tool that allows us to control and improve the accuracy of loads exerted on crutches during aided gait is necessary, so as to unburden the lower limbs. In this paper, we describe such a system based on a force sensor, which we have named the GCH System 2.0. Furthermore, we determine the validity and reliability of measurements obtained using this tool via a comparison with the validated AMTI (Advanced Mechanical Technology, Inc., Watertown, MA, USA) OR6-7-2000 Platform. An intra-class correlation coefficient demonstrated excellent agreement between the AMTI Platform and the GCH System. A regression line was fitted to determine the predictive ability of the GCH System relative to the AMTI Platform, yielding a precision of 99.3%. A detailed statistical analysis is presented for all the measurements and also segregated for several requested loads on the crutches (10%, 25% and 50% of body weight). Our results show that our system, designed for assessing loads exerted by patients on forearm crutches during assisted gait, provides valid and reliable measurements of loads. PMID:27338396

  2. 32 CFR 32.53 - Retention and access requirements for records.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., reliability, and security of the original computer data. Recipients shall also maintain an audit trail... group of costs is chargeable (such as computer usage chargeback rates or composite fringe benefit rates... maintained on a computer, recipients shall retain the computer data on a reliable medium for the time periods...

  3. 32 CFR 32.53 - Retention and access requirements for records.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., reliability, and security of the original computer data. Recipients shall also maintain an audit trail... group of costs is chargeable (such as computer usage chargeback rates or composite fringe benefit rates... maintained on a computer, recipients shall retain the computer data on a reliable medium for the time periods...

  4. 32 CFR 32.53 - Retention and access requirements for records.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., reliability, and security of the original computer data. Recipients shall also maintain an audit trail... group of costs is chargeable (such as computer usage chargeback rates or composite fringe benefit rates... maintained on a computer, recipients shall retain the computer data on a reliable medium for the time periods...

  5. Identification of species by multiplex analysis of variable-length sequences

    PubMed Central

    Pereira, Filipe; Carneiro, João; Matthiesen, Rune; van Asch, Barbara; Pinto, Nádia; Gusmão, Leonor; Amorim, António

    2010-01-01

    The quest for a universal and efficient method of identifying species has been a longstanding challenge in biology. Here, we show that accurate identification of species in all domains of life can be accomplished by multiplex analysis of variable-length sequences containing multiple insertion/deletion variants. The new method, called SPInDel, is able to discriminate 93.3% of eukaryotic species from 18 taxonomic groups. We also demonstrate that the identification of prokaryotic and viral species with numeric profiles of fragment lengths is generally straightforward. A computational platform is presented to facilitate the planning of projects and includes a large data set with nearly 1800 numeric profiles for species in all domains of life (1556 for eukaryotes, 105 for prokaryotes and 130 for viruses). Finally, a SPInDel profiling kit for discrimination of 10 mammalian species was successfully validated on highly processed food products with species mixtures and proved to be easily adaptable to multiple screening procedures routinely used in molecular biology laboratories. These results suggest that SPInDel is a reliable and cost-effective method for broad-spectrum species identification that is appropriate for use in suboptimal samples and is amenable to different high-throughput genotyping platforms without the need for DNA sequencing. PMID:20923781
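
    A toy sketch of the core idea of identification by numeric profiles of fragment lengths: an observed profile is matched against a reference table within a small length tolerance. The reference profiles and species below are invented for illustration and are not taken from the SPInDel data set.

    ```python
    # Toy identification by numeric profiles of fragment lengths (lengths in bp are invented).
    REFERENCE_PROFILES = {
        "Bos taurus":    (112, 98, 141, 87, 120),
        "Sus scrofa":    (110, 96, 143, 87, 118),
        "Gallus gallus": (118, 101, 139, 90, 125),
    }

    def identify(observed, references, tolerance=1):
        """Return species whose profile matches every fragment length within the tolerance (bp)."""
        hits = []
        for species, profile in references.items():
            if len(profile) == len(observed) and all(
                abs(o - p) <= tolerance for o, p in zip(observed, profile)
            ):
                hits.append(species)
        return hits

    print(identify((110, 96, 143, 87, 118), REFERENCE_PROFILES))   # ['Sus scrofa']
    ```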

  6. Electrolyte-free Amperometric Immunosensor using a Dendritic Nanotip†

    PubMed Central

    Kim, Jong-Hoon; Hiraiwa, Morgan; Lee, Hyun-Boo; Lee, Kyong-Hoon; Cangelosi, Gerard A.; Chung, Jae-Hyun

    2013-01-01

    Electric detection using a nanocomponent may lead to platforms for rapid and simple biosensing. Sensors composed of nanotips or nanodots have been described for highly sensitive amperometry enabled by confined geometry. However, both fabrication and use of nanostructured sensors remain challenging. This paper describes a dendritic nanotip used as an amperometric biosensor for highly sensitive detection of target bacteria. A dendritic nanotip is structured by Si nanowires coated with single-walled carbon nanotubes (SWCNTs) for generation of a high electric field. For reliable measurement using the dendritic structure, Si nanowires were uniformly fabricated by ultraviolet (UV) lithography and etching. The dendritic structure effectively increased the electric current density near the terminal end of the nanotip according to numerical computation. The electrical characteristics of a dendritic nanotip with additional protein layers were studied by cyclic voltammetry and I–V measurement in deionized (DI) water. When the target bacteria dielectrophoretically captured onto a nanotip were bound with fluorescent antibodies, the electric current through DI water decreased. Measurement results were consistent with fluorescence and electron microscopy. The sensitivity of the amperometry was 10 cfu/sample volume (10³ cfu/mL), which was equivalent to the more laborious fluorescence measurement method. The simple configuration of a dendritic nanotip can potentially offer an electrolyte-free detection platform for sensitive and rapid biosensors. PMID:23585927

  7. Electrolyte-free Amperometric Immunosensor using a Dendritic Nanotip.

    PubMed

    Kim, Jong-Hoon; Hiraiwa, Morgan; Lee, Hyun-Boo; Lee, Kyong-Hoon; Cangelosi, Gerard A; Chung, Jae-Hyun

    2013-01-01

    Electric detection using a nanocomponent may lead to platforms for rapid and simple biosensing. Sensors composed of nanotips or nanodots have been described for highly sensitive amperometry enabled by confined geometry. However, both fabrication and use of nanostructured sensors remain challenging. This paper describes a dendritic nanotip used as an amperometric biosensor for highly sensitive detection of target bacteria. A dendritic nanotip is structured by Si nanowires coated with single-walled carbon nanotubes (SWCNTs) for generation of a high electric field. For reliable measurement using the dendritic structure, Si nanowires were uniformly fabricated by ultraviolet (UV) lithography and etching. The dendritic structure effectively increased the electric current density near the terminal end of the nanotip according to numerical computation. The electrical characteristics of a dendritic nanotip with additional protein layers were studied by cyclic voltammetry and I-V measurement in deionized (DI) water. When the target bacteria dielectrophoretically captured onto a nanotip were bound with fluorescent antibodies, the electric current through DI water decreased. Measurement results were consistent with fluorescence and electron microscopy. The sensitivity of the amperometry was 10 cfu/sample volume (10³ cfu/mL), which was equivalent to the more laborious fluorescence measurement method. The simple configuration of a dendritic nanotip can potentially offer an electrolyte-free detection platform for sensitive and rapid biosensors.

  8. CBESW: sequence alignment on the Playstation 3.

    PubMed

    Wirawan, Adrianto; Kwoh, Chee Keong; Hieu, Nim Tri; Schmidt, Bertil

    2008-09-17

    The exponential growth of available biological data has caused bioinformatics to be rapidly moving towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing exponentially as well. The recent emergence of accelerator technologies has made it possible to achieve an excellent improvement in execution time for many bioinformatics applications, compared to current general-purpose platforms. In this paper, we demonstrate how the PlayStation 3, powered by the Cell Broadband Engine, can be used as a computational platform to accelerate the Smith-Waterman algorithm. For large datasets, our implementation on the PlayStation 3 provides a significant improvement in running time compared to other implementations such as SSEARCH, Striped Smith-Waterman and CUDA. Our implementation achieves a peak performance of up to 3,646 MCUPS. The results from our experiments demonstrate that the PlayStation 3 console can be used as an efficient low cost computational platform for high performance sequence alignment applications.
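
    For reference, the recurrence that the Cell/PlayStation 3 implementation accelerates is the standard Smith-Waterman local alignment; a plain, unoptimized Python version (linear gap penalty, arbitrary scoring parameters) serves as a correctness baseline.

    ```python
    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        """Plain O(len(a)*len(b)) Smith-Waterman local alignment score (linear gap penalty)."""
        rows, cols = len(a) + 1, len(b) + 1
        h = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                # Local alignment: scores never drop below zero.
                h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
                best = max(best, h[i][j])
        return best

    print(smith_waterman("ACACACTA", "AGCACACA"))
    ```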

  9. CBESW: Sequence Alignment on the Playstation 3

    PubMed Central

    Wirawan, Adrianto; Kwoh, Chee Keong; Hieu, Nim Tri; Schmidt, Bertil

    2008-01-01

    Background The exponential growth of available biological data has caused bioinformatics to be rapidly moving towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing exponentially as well. The recent emergence of accelerator technologies has made it possible to achieve an excellent improvement in execution time for many bioinformatics applications, compared to current general-purpose platforms. In this paper, we demonstrate how the PlayStation® 3, powered by the Cell Broadband Engine, can be used as a computational platform to accelerate the Smith-Waterman algorithm. Results For large datasets, our implementation on the PlayStation® 3 provides a significant improvement in running time compared to other implementations such as SSEARCH, Striped Smith-Waterman and CUDA. Our implementation achieves a peak performance of up to 3,646 MCUPS. Conclusion The results from our experiments demonstrate that the PlayStation® 3 console can be used as an efficient low cost computational platform for high performance sequence alignment applications. PMID:18798993

  10. Computer-operated analytical platform for the determination of nutrients in hydroponic systems.

    PubMed

    Rius-Ruiz, F Xavier; Andrade, Francisco J; Riu, Jordi; Rius, F Xavier

    2014-03-15

    Hydroponics is a water, energy, space, and cost efficient system for growing plants in constrained spaces or land exhausted areas. Precise control of hydroponic nutrients is essential for growing healthy plants and producing high yields. In this article we report for the first time on a new computer-operated analytical platform which can be readily used for the determination of essential nutrients in hydroponic growing systems. The liquid-handling system uses inexpensive components (i.e., peristaltic pump and solenoid valves), which are discretely computer-operated to automatically condition, calibrate and clean a multi-probe of solid-contact ion-selective electrodes (ISEs). These ISEs, which are based on carbon nanotubes, offer high portability, robustness and easy maintenance and storage. With this new computer-operated analytical platform we performed automatic measurements of K(+), Ca(2+), NO3(-) and Cl(-) during tomato plants growth in order to assure optimal nutritional uptake and tomato production. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. CFD and Neutron codes coupling on a computational platform

    NASA Astrophysics Data System (ADS)

    Cerroni, D.; Da Vià, R.; Manservisi, S.; Menghini, F.; Scardovelli, R.

    2017-01-01

    In this work we investigate the thermal-hydraulic behavior of a PWR nuclear reactor core, evaluating the power generation distribution while taking into account the local temperature field. The temperature field, evaluated using a self-developed CFD module, is exchanged with a neutron code, DONJON-DRAGON, which updates the macroscopic cross sections and evaluates the new neutron flux. From the updated neutron flux the new peak factor is evaluated and the new temperature field is computed. The exchange of data between the two codes is achieved by including both in the computational platform SALOME, an open-source tool developed by the collaborative project NURESAFE. The MEDmem numerical libraries, included in the SALOME platform, are used in this work to project computational fields from one problem to the other. The two problems are driven by a common supervisor that can access the computational fields of both systems: in every time step, the temperature field is extracted from the CFD problem and passed to the neutron problem. After this iteration the new power peak factor is projected back into the CFD problem and the new time step can be computed. Several computational examples, in which both neutron and thermal-hydraulic quantities are parametrized, are finally reported in this work.
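
    A schematic sketch of the supervisor-driven exchange described above, with the CFD and neutronics solvers replaced by toy relaxation functions; it does not use the SALOME or MEDmem APIs, and every coefficient is invented for illustration.

    ```python
    import numpy as np

    # Stub solvers: the real CFD and neutronics updates are replaced by toy relaxations.
    def cfd_update_temperature(power, t_coolant=560.0, k=0.05):
        """Toy CFD step: temperature rises with the local power density (K)."""
        return t_coolant + k * power

    def neutronics_update_power(temperature, p_nominal=1500.0, alpha=-0.8):
        """Toy neutronics step: Doppler-like feedback lowers power as the fuel heats up."""
        return p_nominal + alpha * (temperature - 600.0)

    power = np.full(10, 1500.0)          # axial power profile (arbitrary units)
    for step in range(20):               # supervisor loop: exchange fields every "time step"
        temperature = cfd_update_temperature(power)       # CFD uses the latest power field
        power = neutronics_update_power(temperature)      # neutronics uses the latest temperatures
    print("converged peak power:", power.max())
    ```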

  12. Comparison of Two Commercial Automated Nucleic Acid Extraction and Integrated Quantitation Real-Time PCR Platforms for the Detection of Cytomegalovirus in Plasma

    PubMed Central

    Tsai, Huey-Pin; Tsai, You-Yuan; Lin, I-Ting; Kuo, Pin-Hwa; Chen, Tsai-Yun; Chang, Kung-Chao; Wang, Jen-Ren

    2016-01-01

    Quantitation of cytomegalovirus (CMV) viral load in transplant patients has become standard practice for monitoring the response to antiviral therapy. The cut-off values of CMV viral load assays for preemptive therapy differ because of the various assay designs employed. To establish a sensitive and reliable diagnostic assay for preemptive therapy of CMV infection, two commercial automated platforms, the Abbott m2000sp extraction system integrated with the Abbott RealTime assay (m2000rt) and the Roche COBAS AmpliPrep extraction system integrated with the COBAS TaqMan assay (CAP/CTM), were evaluated using WHO international CMV standards and 110 plasma specimens from transplant patients. The performance characteristics, correlation, and workflow of the two platforms were investigated. The Abbott RealTime assay correlated well with the Roche CAP/CTM assay (R² = 0.9379, P<0.01). The Abbott RealTime assay exhibited higher sensitivity for the detection of CMV viral load, and viral load values measured with the Abbott RealTime assay were on average 0.76 log10 IU/mL higher than those measured with the Roche CAP/CTM assay (P<0.0001). In a workflow analysis of small single-run batches, the Roche CAP/CTM platform required less hands-on time than the Abbott RealTime platform. In conclusion, these two assays can provide reliable data for different purposes in a clinical virology laboratory setting. PMID:27494707
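
    The type of comparison reported above (correlation on a logarithmic scale and a mean log10 difference between platforms) can be sketched in a few lines of numpy; the paired viral-load values below are hypothetical, not study data.

    ```python
    import numpy as np

    # Hypothetical paired CMV viral loads (IU/mL) from two platforms on the same specimens.
    abbott = np.array([1.2e3, 5.4e4, 8.9e2, 2.1e5, 3.3e3, 7.6e4])
    roche  = np.array([2.5e2, 9.1e3, 1.6e2, 4.0e4, 6.0e2, 1.4e4])

    log_a, log_r = np.log10(abbott), np.log10(roche)
    r = np.corrcoef(log_a, log_r)[0, 1]
    print(f"R^2 between platforms (log10 scale): {r**2:.3f}")
    print(f"mean difference (Abbott - Roche): {np.mean(log_a - log_r):.2f} log10 IU/mL")
    ```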

  13. Development and Initial Validation of an Instrument to Measure Physicians' Use of, Knowledge about, and Attitudes Toward Computers

    PubMed Central

    Cork, Randy D.; Detmer, William M.; Friedman, Charles P.

    1998-01-01

    This paper describes details of four scales of a questionnaire, "Computers in Medical Care," measuring attributes of computer use, self-reported computer knowledge, computer feature demand, and computer optimism of academic physicians. The reliability (i.e., precision, or degree to which the scale's result is reproducible) and validity (i.e., accuracy, or degree to which the scale actually measures what it is supposed to measure) of each scale were examined by analysis of the responses of 771 full-time academic physicians across four departments at five academic medical centers in the United States. The objectives of this paper were to define the psychometric properties of the scales as the basis for a future demonstration study and, pending the results of further validity studies, to provide the questionnaire and scales to the medical informatics community as a tool for measuring the attitudes of health care providers. Methodology: The dimensionality of each scale and degree of association of each item with the attribute of interest were determined by principal components factor analysis with orthogonal varimax rotation. Weakly associated items (factor loading <.40) were deleted. The reliability of each resultant scale was computed using Cronbach's alpha coefficient. Content validity was addressed during scale construction; construct validity was examined through factor analysis and by correlational analyses. Results: Attributes of computer use, computer knowledge, and computer optimism were unidimensional, with the corresponding scales having reliabilities of .79, .91, and .86, respectively. The computer-feature demand attribute differentiated into two dimensions: the first reflecting demand for high-level functionality with a reliability of .81 and the second demand for usability with a reliability of .69. There were significant positive correlations between computer use, computer knowledge, and computer optimism scale scores and respondents' hands-on computer use, computer training, and self-reported computer sophistication. In addition, items posited on the computer knowledge scale to be more difficult generated significantly lower scores. Conclusion: The four scales of the questionnaire appear to measure with adequate reliability five attributes of academic physicians' attitudes toward computers in medical care: computer use, self-reported computer knowledge, demand for computer functionality, demand for computer usability, and computer optimism. Results of initial validity studies are positive, but further validation of the scales is needed. The URL of a downloadable HTML copy of the questionnaire is provided. PMID:9524349
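
    Cronbach's alpha, used above to compute the scale reliabilities, reduces to a short formula over the item variances; a minimal numpy sketch on hypothetical Likert-style responses:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: array of shape (n_respondents, n_items) of scored responses."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)          # variance of each item across respondents
        total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 5-point Likert responses from 6 respondents to a 4-item scale.
    responses = np.array([
        [4, 5, 4, 4],
        [2, 2, 3, 2],
        [5, 4, 5, 5],
        [3, 3, 3, 4],
        [1, 2, 1, 2],
        [4, 4, 5, 4],
    ])
    print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
    ```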

  14. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME.

    PubMed

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2016-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
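
    PyURDME targets spatial (reaction-diffusion) models; as a minimal, non-spatial illustration of the underlying stochastic simulation idea, the sketch below runs a plain Gillespie SSA for a birth-death process in pure Python. It does not use the PyURDME or MOLNs APIs, and the rate constants are arbitrary.

    ```python
    import random

    def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0, seed=1):
        """Exact SSA trajectory of a birth-death process: 0 -> X (k_birth), X -> 0 (k_death*X)."""
        random.seed(seed)
        t, x, trajectory = 0.0, x0, [(0.0, x0)]
        while t < t_end:
            a1, a2 = k_birth, k_death * x          # reaction propensities
            a0 = a1 + a2
            t += random.expovariate(a0)            # exponentially distributed time to next event
            x += 1 if random.random() < a1 / a0 else -1
            trajectory.append((t, x))
        return trajectory

    traj = gillespie_birth_death()
    print("final copy number:", traj[-1][1], "(steady-state mean is k_birth/k_death = 100)")
    ```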

  15. Development of embedded real-time and high-speed vision platform

    NASA Astrophysics Data System (ADS)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, traditional high-speed vision platforms rely on a personal computer (PC) for human-computer interaction, and its large size makes it unsuitable for compact systems. Therefore, this paper develops an embedded real-time and high-speed vision platform, ER-HVP Vision, which is able to work entirely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP and FPGA board is developed to implement parallel image algorithms in the FPGA and sequential image algorithms in the DSP. Hence, ER-HVP Vision delivers this capability in a more compact package measuring 320 mm x 250 mm x 87 mm. Experimental results are also given to show that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels are feasible on this newly developed vision platform.

  16. Geostationary multipurpose platforms

    NASA Technical Reports Server (NTRS)

    Bekey, I.; Bowman, R. M.

    1981-01-01

    In addition to the advantages generally associated with orbital platforms, such as improved reliability, economies of scale, simple connectivity of elements, reduced tracking demands and the restraint of orbital object population growth, geostationary platforms yield: (1) continuous access by fixed ground antennas for communications services; (2) continuous monitoring of phenomena over chosen regions of the earth's surface; (3) a preferred location for many solar-terrestrial physics experiments. The geostationary platform also offers a low-risk and economical solution to the impending saturation of the orbital arc/frequency spectrum, maximizing the capacity of individual slots and increasing the utility of the entire arc. It also allows the use of many small, simple and inexpensive earth stations through complexity inversion and high power per beam. Block diagram and operational flowcharts are provided.

  17. MicroScope in 2017: an expanding and evolving integrated resource for community expertise of microbial genomes

    PubMed Central

    Vallenet, David; Calteau, Alexandra; Cruveiller, Stéphane; Gachet, Mathieu; Lajus, Aurélie; Josso, Adrien; Mercier, Jonathan; Renaux, Alexandre; Rollin, Johan; Rouy, Zoe; Roche, David; Scarpelli, Claude; Médigue, Claudine

    2017-01-01

    The annotation of genomes from NGS platforms needs to be automated and fully integrated. However, maintaining consistency and accuracy in genome annotation is a challenging problem because millions of protein database entries are not assigned reliable functions. This shortcoming limits the knowledge that can be extracted from genomes and metabolic models. Launched in 2005, the MicroScope platform (http://www.genoscope.cns.fr/agc/microscope) is an integrative resource that supports systematic and efficient revision of microbial genome annotation, data management and comparative analysis. Effective comparative analysis requires a consistent and complete view of biological data, and therefore, support for reviewing the quality of functional annotation is critical. MicroScope allows users to analyze microbial (meta)genomes together with post-genomic experiment results if any (i.e. transcriptomics, re-sequencing of evolved strains, mutant collections, phenotype data). It combines tools and graphical interfaces to analyze genomes and to perform the expert curation of gene functions in a comparative context. Starting with a short overview of the MicroScope system, this paper focuses on some major improvements of the Web interface, mainly for the submission of genomic data and on original tools and pipelines that have been developed and integrated in the platform: computation of pan-genomes and prediction of biosynthetic gene clusters. Today the resource contains data for more than 6000 microbial genomes, and among the 2700 personal accounts (65% of which are now from foreign countries), 14% of the users are performing expert annotations, on at least a weekly basis, contributing to improve the quality of microbial genome annotations. PMID:27899624

  18. MDWiZ: a platform for the automated translation of molecular dynamics simulations.

    PubMed

    Rusu, Victor H; Horta, Vitor A C; Horta, Bruno A C; Lins, Roberto D; Baron, Riccardo

    2014-03-01

    A variety of popular molecular dynamics (MD) simulation packages were independently developed in the last decades to reach diverse scientific goals. However, such non-coordinated development of software, force fields, and analysis tools for molecular simulations gave rise to an array of software formats and arbitrary conventions for routine preparation and analysis of simulation input and output data. Different formats and/or parameter definitions are used at each stage of the modeling process despite largely containing redundant information across alternative software tools. Such a Babel of languages that cannot be easily and univocally translated one into another poses one of the major technical obstacles to the preparation, translation, and comparison of molecular simulation data that users face on a daily basis. Here, we present the MDWiZ platform, a freely accessible online portal designed to aid the fast and reliable preparation and conversion of file formats, allowing researchers to reproduce or generate data from MD simulations using different setups, including force fields and models with different underlying potential forms. The general structure of MDWiZ is presented, the features of version 1.0 are detailed, and an extensive validation based on GROMACS to LAMMPS conversion is presented. We believe that MDWiZ will be largely useful to the molecular dynamics community. Such fast format and force field exchange for a given system allows tailoring the chosen system to a given computer platform and/or taking advantage of specific capabilities offered by different software engines. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Fractal dimension approach in postural control of subjects with Prader-Willi Syndrome

    PubMed Central

    2011-01-01

    Background Static posturography is a user-friendly technique suitable for the study of the centre of pressure (CoP) trajectory. However, the utility of static posturography in clinical practice is somewhat limited, and there is a need for reliable approaches to extract physiologically meaningful information from stabilograms. The aim of this study was to quantify the postural strategy of Prader-Willi patients with the fractal dimension technique in addition to the CoP trajectory analysis in the time and frequency domains. Methods 11 adult patients affected by Prader-Willi Syndrome (PWS) and 20 age-matched individuals (Control group: CG) were included in this study. Postural acquisitions were conducted by means of a force platform and the participants were required to stand barefoot on the platform with eyes open and heels at standardized distance and position for 30 seconds. Platform data were analysed in the time and frequency domains. The Fractal Dimension (FD) was also computed. Results The analysis of CoP vs. time showed that in PWS participants all the parameters were statistically different from CG, with greater displacements along both the antero-posterior and medio-lateral directions and longer CoP tracks. As for the frequency analysis, our data showed no significant differences between PWS and CG. FD showed that PWS individuals were characterized by greater values in comparison with CG. Conclusions Our data showed that while the analysis in the frequency domain did not seem to explain the postural deficit in PWS, the FD method appears to provide a more informative description of it and to complement and integrate the time domain analysis. PMID:21854639
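
    One common fractal dimension estimator for a CoP time series is Higuchi's method; the sketch below implements it with numpy on a synthetic random-walk track. This is offered as a generic illustration and is not necessarily the exact FD estimator used in the study above.

    ```python
    import numpy as np

    def higuchi_fd(x, kmax=10):
        """Higuchi fractal dimension of a 1-D series (e.g., CoP displacement over time)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        log_k, log_l = [], []
        for k in range(1, kmax + 1):
            lengths = []
            for m in range(k):
                idx = np.arange(m, n, k)
                if len(idx) < 2:
                    continue
                dist = np.abs(np.diff(x[idx])).sum()
                norm = (n - 1) / ((len(idx) - 1) * k)    # Higuchi normalization factor
                lengths.append(dist * norm / k)
            log_k.append(np.log(1.0 / k))
            log_l.append(np.log(np.mean(lengths)))
        slope, _ = np.polyfit(log_k, log_l, 1)           # FD is the slope of log L(k) vs log(1/k)
        return slope

    rng = np.random.default_rng(0)
    cop_ml = np.cumsum(rng.normal(0, 0.1, 3000))   # synthetic medio-lateral CoP track (30 s at 100 Hz)
    print(f"Higuchi FD: {higuchi_fd(cop_ml):.2f}")  # Brownian-like signals give values near 1.5
    ```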

  20. Final Report. Center for Scalable Application Development Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellor-Crummey, John

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  1. Structural reliability assessment capability in NESSUS

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.

    1992-01-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
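
    As a generic illustration of probabilistic structural response analysis (not the NESSUS formulation itself), a crude Monte Carlo estimate of the failure probability for a simple resistance-minus-stress limit state can be written as follows; the distributions are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000
    # Hypothetical limit state g = resistance - stress; failure when g < 0.
    resistance = rng.normal(loc=320.0, scale=25.0, size=n)   # MPa
    stress     = rng.normal(loc=250.0, scale=20.0, size=n)   # MPa
    p_fail = np.mean(resistance - stress < 0)
    print(f"estimated failure probability: {p_fail:.2e}")
    print(f"reliability: {1 - p_fail:.6f}")
    ```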

  2. Structural reliability assessment capability in NESSUS

    NASA Astrophysics Data System (ADS)

    Millwater, H.; Wu, Y.-T.

    1992-07-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  3. A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers

    NASA Technical Reports Server (NTRS)

    Hatay, Ferhat F.; Jespersen, Dennis C.; Guruswamy, Guru P.; Rizk, Yehia M.; Byun, Chansup; Gee, Ken; VanDalsem, William R. (Technical Monitor)

    1997-01-01

    The integration of high-fidelity Computational Fluid Dynamics (CFD) analysis tools with the industrial design process benefits greatly from robust implementations that are transportable across a wide range of computer architectures. In the present work, a hybrid domain-decomposition and parallelization concept was developed and implemented into the widely-used NASA multi-block Computational Fluid Dynamics (CFD) packages ENSAERO and OVERFLOW. The new parallel solver concept, PENS (Parallel Euler Navier-Stokes Solver), employs both fine and coarse granularity in data partitioning as well as data coalescing to obtain the desired load-balance characteristics on the available computer platforms. This multi-level parallelism implementation itself introduces no changes to the numerical results, hence the original fidelity of the packages is identically preserved. The present implementation uses the Message Passing Interface (MPI) library for interprocessor message passing and memory accessing. By choosing an appropriate combination of the available partitioning and coalescing capabilities only during the execution stage, the PENS solver becomes adaptable to different computer architectures, from shared-memory to distributed-memory platforms with varying degrees of parallelism. The PENS implementation on the IBM SP2 distributed memory environment at the NASA Ames Research Center obtains 85 percent scalable parallel performance using fine-grain partitioning of single-block CFD domains on up to 128 wide computational nodes. Multi-block CFD simulations of complete aircraft achieve 75 percent of perfect load balance using data coalescing and the two levels of parallelism. SGI PowerChallenge, SGI Origin 2000, and a cluster of workstations are the other platforms where the robustness of the implementation is tested. The performance behavior on the other computer platforms with a variety of realistic problems will be included as this on-going study progresses.

  4. A System Engineering Study and Concept Development for a Humanitarian Aid and Disaster Relief Operations Management Platform

    DTIC Science & Technology

    2016-09-01

    and network. The computing and network hardware are identified and include routers, servers, firewalls, laptops, backup hard drives, smart phones...deployable hardware units will be necessary. This includes the use of ruggedized laptops and desktop computers, a projector system, communications system...ENGINEERING STUDY AND CONCEPT DEVELOPMENT FOR A HUMANITARIAN AID AND DISASTER RELIEF OPERATIONS MANAGEMENT PLATFORM by Julie A. Reed September

  5. ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.

    PubMed

    Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi

    2017-08-01

    With the growing interest in advanced image-guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as the robot operating system (ROS) in robotics and 3D Slicer in medical image computing, could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between the ROS-based devices and the medical image computing platforms. Performance tests demonstrated that the bridge could stream transforms, strings, points, and images at 30 fps in both directions successfully. The data transfer latency was <1.2 ms for transforms, strings and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D Slicer and Lego Mindstorms with ROS, as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enabled cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of advanced image-based planning/navigation offered by medical image computing software such as 3D Slicer into ROS-based surgical robot systems.

  6. On the Impact of Execution Models: A Case Study in Computational Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram

    2015-05-25

    Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
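
    A toy simulation of the scheduling contrast discussed above: statically assigning equal-sized blocks of irregular tasks versus dynamically assigning each task to the least-loaded worker (an idealized stand-in for work sharing or stealing). It is not the paper's implementation; task costs and worker counts are arbitrary.

    ```python
    import random

    random.seed(0)
    tasks = [random.expovariate(1.0) for _ in range(400)]   # irregular task costs
    workers = 8

    # Static schedule: contiguous blocks with an equal number of tasks per worker.
    block = len(tasks) // workers
    static_makespan = max(sum(tasks[i * block:(i + 1) * block]) for i in range(workers))

    # Dynamic schedule: each task goes to the currently least-loaded worker.
    loads = [0.0] * workers
    for cost in tasks:
        loads[loads.index(min(loads))] += cost
    dynamic_makespan = max(loads)

    print(f"static makespan : {static_makespan:.1f}")
    print(f"dynamic makespan: {dynamic_makespan:.1f}")
    ```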

  7. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803

  8. Informatics in radiology (infoRAD): free DICOM image viewing and processing software for the Macintosh computer: what's available and what it can do for you.

    PubMed

    Escott, Edward J; Rubinstein, David

    2004-01-01

    It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.
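
    A minimal sketch of the kind of conversion these programs perform, assuming the pydicom and Pillow packages are installed; the file names are hypothetical and no windowing beyond a simple min-max rescale is attempted.

    ```python
    import numpy as np
    import pydicom                      # assumes pydicom and Pillow are installed
    from PIL import Image

    def dicom_to_jpeg(dicom_path, jpeg_path):
        """Convert a single-frame grayscale DICOM image to an 8-bit JPEG."""
        ds = pydicom.dcmread(dicom_path)
        pixels = ds.pixel_array.astype(float)
        # Rescale the full dynamic range into 0-255 for presentation use.
        scaled = 255.0 * (pixels - pixels.min()) / max(pixels.ptp(), 1.0)
        Image.fromarray(scaled.astype(np.uint8)).save(jpeg_path, quality=90)

    dicom_to_jpeg("slice001.dcm", "slice001.jpg")   # hypothetical file names
    ```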

  9. Utilizing Internet Technologies in Observatory Control Systems

    NASA Astrophysics Data System (ADS)

    Cording, Dean

    2002-12-01

    The 'Internet boom' of the past few years has spurred the development of a number of technologies to provide services such as secure communications, reliable messaging, information publishing and application distribution for commercial applications. Over the same period, a new generation of computer languages has also emerged to provide object oriented design and development, improved reliability, and cross platform compatibility. Whilst the business models of the 'dot.com' era proved to be largely unviable, the technologies that they were based upon have survived and have matured to the point where they can now be utilized to build secure, robust and complete observatory control systems. This paper will describe how Electro Optic Systems has utilized these technologies in the development of its third generation Robotic Observatory Control System (ROCS). ROCS provides an extremely flexible configuration capability within a control system structure to provide truly autonomous robotic observatory operation including observation scheduling. ROCS was built using Internet technologies such as Java, Java Message Service (JMS), Lightweight Directory Access Protocol (LDAP), Secure Sockets Layer (SSL), eXtensible Markup Language (XML), Hypertext Transport Protocol (HTTP) and Java WebStart. ROCS was designed to be capable of controlling all aspects of an observatory and be able to be reconfigured to handle changing equipment configurations or user requirements without the need for an expert computer programmer. ROCS consists of many small components, each designed to perform a specific task, with the configuration of the system specified using a simple meta language. The use of small components facilitates testing and makes it possible to prove that the system is correct.

  10. Google Earth Engine: a new cloud-computing platform for global-scale earth observation data and analysis

    NASA Astrophysics Data System (ADS)

    Moore, R. T.; Hansen, M. C.

    2011-12-01

    Google Earth Engine is a new technology platform that enables monitoring and measurement of changes in the earth's environment, at planetary scale, on a large catalog of earth observation data. The platform offers intrinsically-parallel computational access to thousands of computers in Google's data centers. Initial efforts have focused primarily on global forest monitoring and measurement, in support of REDD+ activities in the developing world. The intent is to put this platform into the hands of scientists and developing world nations, in order to advance the broader operational deployment of existing scientific methods, and strengthen the ability for public institutions and civil society to better understand, manage and report on the state of their natural resources. Earth Engine currently hosts online nearly the complete historical Landsat archive of L5 and L7 data collected over more than twenty-five years. Newly-collected Landsat imagery is downloaded from USGS EROS Center into Earth Engine on a daily basis. Earth Engine also includes a set of historical and current MODIS data products. The platform supports generation, on-demand, of spatial and temporal mosaics, "best-pixel" composites (for example to remove clouds and gaps in satellite imagery), as well as a variety of spectral indices. Supervised learning methods are available over the Landsat data catalog. The platform also includes a new application programming framework, or "API", that allows scientists access to these computational and data resources, to scale their current algorithms or develop new ones. Under the covers of the Google Earth Engine API is an intrinsically-parallel image-processing system. Several forest monitoring applications powered by this API are currently in development and expected to be operational in 2011. Combining science with massive data and technology resources in a cloud-computing framework can offer advantages of computational speed, ease-of-use and collaboration, as well as transparency in data and methods. Methods developed for global processing of MODIS data to map land cover are being adopted for use with Landsat data. Specifically, the MODIS Vegetation Continuous Field product methodology has been applied for mapping forest extent and change at national scales using Landsat time-series data sets. Scaling this method to continental and global scales is enabled by Google Earth Engine computing capabilities. By combining the supervised learning VCF approach with the Landsat archive and cloud computing, unprecedented monitoring of land cover dynamics is enabled.

  11. SU-E-T-628: A Cloud Computing Based Multi-Objective Optimization Method for Inverse Treatment Planning.

    PubMed

    Na, Y; Suh, T; Xing, L

    2012-06-01

    Multi-objective (MO) plan optimization entails generation of an enormous number of IMRT or VMAT plans constituting the Pareto surface, which presents a computationally challenging task. The purpose of this work is to overcome the hurdle by developing an efficient MO method using an emerging cloud computing platform. As the backbone of cloud computing for optimizing inverse treatment planning, Amazon Elastic Compute Cloud with a master node (17.1 GB memory, 2 virtual cores, 420 GB instance storage, 64-bit platform) is used. The master node is able to scale seamlessly a number of working-group instances, called workers, based on user-defined settings that account for MO functions in a clinical setting. Each worker solves the objective function with an efficient sparse decomposition method. The workers are automatically terminated when their tasks are finished. The optimized plans are archived to the master node to generate the Pareto solution set. Three clinical cases have been planned using the developed MO IMRT and VMAT planning tools to demonstrate the advantages of the proposed method. The target dose coverage and critical structure sparing of plans obtained using the cloud computing platform are identical to those obtained using a desktop PC (Intel Xeon® CPU 2.33 GHz, 8 GB memory). It is found that MO planning substantially speeds up the generation of the Pareto set for both types of plans. The speedup scales approximately linearly with the number of nodes used for computing. With the use of N nodes, the computational time follows the fitted model 0.2 + 2.3/N, with r² > 0.99, averaged over the cases, making real-time MO planning possible. A cloud computing infrastructure is developed for MO optimization. The algorithm substantially improves the speed of inverse plan optimization. The platform is valuable for both MO planning and future off- or on-line adaptive re-planning. © 2012 American Association of Physicists in Medicine.
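
    The fitted timing model quoted above, T(N) = 0.2 + 2.3/N in relative units, can be tabulated directly to see how the speedup saturates as nodes are added:

    ```python
    def relative_time(n_nodes, serial=0.2, parallel=2.3):
        """Fitted timing model reported above: T(N) = 0.2 + 2.3/N (relative units)."""
        return serial + parallel / n_nodes

    for n in (1, 2, 4, 8, 16, 32):
        t = relative_time(n)
        print(f"N={n:2d}  T={t:.3f}  speedup vs N=1: {relative_time(1)/t:.1f}x")
    ```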

  12. Impact of coverage on the reliability of a fault tolerant computer

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1975-01-01

    A mathematical reliability model is established for a reconfigurable fault tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities and the method for computing values for the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
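
    As a small numerical illustration of why coverage dominates such models (using a textbook standby-sparing formula rather than the paper's own coverage model), reliability can be evaluated for several coverage values:

    ```python
    import math

    def standby_pair_reliability(lam, t, coverage):
        """Two-unit standby system with detection/recovery coverage c:
        R(t) = exp(-lam*t) * (1 + c*lam*t)  (exponential failures, ideal spare)."""
        return math.exp(-lam * t) * (1 + coverage * lam * t)

    lam = 1e-4          # failures per hour
    t = 10.0            # hours (a flight-length mission)
    for c in (1.0, 0.999, 0.99, 0.9):
        print(f"coverage={c:5.3f}  R(t)={standby_pair_reliability(lam, t, c):.9f}")
    ```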

  13. 10 CFR 600.342 - Retention and access requirements for records.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., reliability, and security of the original computer data. Recipients must also maintain an audit trail... related to computer usage chargeback rates), along with their supporting records, must be retained for a 3... maintained on a computer, recipients must retain the computer data on a reliable medium for the time periods...

  14. 10 CFR 600.342 - Retention and access requirements for records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., reliability, and security of the original computer data. Recipients must also maintain an audit trail... related to computer usage chargeback rates), along with their supporting records, must be retained for a 3... maintained on a computer, recipients must retain the computer data on a reliable medium for the time periods...

  15. Design of a modular digital computer system

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A design tradeoff study is reported for a modular spaceborne computer system that is responsive to many mission types and phases. The computer uses redundancy to maximize reliability, and multiprocessing to maximize processing capacity. Fault detection and recovery features provide optimal reliability.

  16. Telescope Array Control System Based on Wireless Touch Screen Platform

    NASA Astrophysics Data System (ADS)

    Fu, X. N.; Huang, L.; Wei, J. Y.

    2016-07-01

    GWAC (Ground-based Wide Angle Cameras) are the ground-based observational instruments of the Sino-French SVOM (Space Variable Objects Monitor) astronomical satellite mission, and Mini-GWAC is a pathfinder for and supplement to GWAC. In the context of the Mini-GWAC telescope array, this paper introduces the design and implementation of a telescope array control system that communicates over wireless serial interface modules. We describe the development and implementation of the system in detail in terms of control principles, hardware structure, software design, experiments, and tests. The system uses a touch-control PC running Windows CE as the upper computer, with a wireless transceiver module and a PLC (Programmable Logic Controller) at its core. It has the advantages of low cost, reliable data transmission, and simple operation. So far, the control system has been applied successfully to Mini-GWAC.

  17. Impact of VLSI/VHSIC on satellite on-board signal processing

    NASA Astrophysics Data System (ADS)

    Aanstoos, J. V.; Ruedger, W. H.; Snyder, W. E.; Kelly, W. L.

    Forecasted improvements in IC fabrication techniques, such as the use of X-ray lithography, are expected to yield submicron circuit feature sizes within the decade of the 1980s. As dimensions decrease, improvements in reliability, cost, speed, power consumption and density will be realized that have a significant impact on the capabilities of onboard spacecraft signal processing functions. This will in turn result in increases in the intelligence that may be deployed on spaceborne remote sensing platforms. Among programs oriented toward such goals are the silicon-based Very High Speed Integrated Circuit (VHSIC) research sponsored by the U.S. Department of Defense, and efforts toward the development of GaAs devices which will compete with silicon VLSI technology for future applications. GaAs has an electron mobility which is five to six times that of silicon, and promises commensurate computation speed increases under low-field conditions.

  18. An intelligent detecting system for permeability prediction of MBR.

    PubMed

    Han, Honggui; Zhang, Shuo; Qiao, Junfei; Wang, Xiaoshuang

    2018-01-01

    The membrane bioreactor (MBR) has been widely used to purify wastewater in wastewater treatment plants. However, a critical difficulty of the MBR is membrane fouling. To reduce membrane fouling, in this work, an intelligent detecting system is developed to evaluate the performance of the MBR by predicting the membrane permeability. This intelligent detecting system consists of two main parts. First, a soft computing method, based on the partial least squares method and the recurrent fuzzy neural network, is designed to find the nonlinear relations between the membrane permeability and the other variables. Second, a completely new platform connecting the sensors and the software is built, in order to enable the intelligent detecting system to handle complex algorithms. Finally, the simulation and experimental results demonstrate the reliability and effectiveness of the proposed intelligent detecting system, underlining its potential for online membrane permeability prediction and, hence, for detecting membrane fouling in MBRs.
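
    The paper combines partial least squares with a recurrent fuzzy neural network; the sketch below shows only a PLS regression step on synthetic process data using scikit-learn, as a hedged illustration of the soft-computing part rather than the authors' model.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    # Synthetic process variables (stand-ins for quantities such as pressure, flux, temperature).
    X = rng.normal(size=(200, 6))
    true_w = np.array([0.8, -0.5, 0.3, 0.0, 0.1, -0.2])
    permeability = X @ true_w + rng.normal(scale=0.1, size=200)   # synthetic target

    pls = PLSRegression(n_components=3)
    pls.fit(X[:150], permeability[:150])            # train on the first 150 samples
    pred = pls.predict(X[150:]).ravel()             # predict the held-out 50 samples
    rmse = np.sqrt(np.mean((pred - permeability[150:]) ** 2))
    print(f"hold-out RMSE: {rmse:.3f}")
    ```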

  19. Software-defined optical network for metro-scale geographically distributed data centers.

    PubMed

    Samadi, Payman; Wen, Ke; Xu, Junjie; Bergman, Keren

    2016-05-30

    The emergence of cloud computing and big data has rapidly increased the deployment of small and mid-sized data centers. Enterprises and cloud providers require an agile network among these data centers to empower application reliability and flexible scalability. We present a software-defined inter data center network to enable on-demand scale out of data centers on a metro-scale optical network. The architecture consists of a combined space/wavelength switching platform and a Software-Defined Networking (SDN) control plane equipped with a wavelength and routing assignment module. It enables establishing transparent and bandwidth-selective connections from L2/L3 switches, on-demand. The architecture is evaluated in a testbed consisting of 3 data centers, 5-25 km apart. We successfully demonstrated end-to-end bulk data transfer and Virtual Machine (VM) migrations across data centers with less than 100 ms connection setup time and close to full link capacity utilization.

  20. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    PubMed

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
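
    THESEUS replaces the ordinary least-squares criterion with a maximum-likelihood one; for context, the conventional LS (Kabsch) superposition that ML methods improve upon can be sketched with numpy as follows, using randomly generated coordinates.

    ```python
    import numpy as np

    def kabsch_superpose(mobile, target):
        """Ordinary least-squares (Kabsch) superposition of two (N, 3) coordinate sets."""
        mob_c = mobile - mobile.mean(axis=0)
        tgt_c = target - target.mean(axis=0)
        u, _, vt = np.linalg.svd(mob_c.T @ tgt_c)
        d = np.sign(np.linalg.det(u @ vt))            # guard against reflections
        rot_t = u @ np.diag([1.0, 1.0, d]) @ vt       # transpose of the optimal rotation
        aligned = mob_c @ rot_t + target.mean(axis=0)
        rmsd = np.sqrt(np.mean(np.sum((aligned - target) ** 2, axis=1)))
        return aligned, rmsd

    rng = np.random.default_rng(3)
    target = rng.normal(size=(50, 3))
    angle = np.deg2rad(30)
    rotation = np.array([[np.cos(angle), -np.sin(angle), 0],
                         [np.sin(angle),  np.cos(angle), 0],
                         [0, 0, 1]])
    mobile = target @ rotation.T + np.array([5.0, -2.0, 1.0])   # rotated + translated copy
    _, rmsd = kabsch_superpose(mobile, target)
    print(f"post-superposition RMSD: {rmsd:.6f}")               # ~0 for a noise-free copy
    ```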

  1. Trust models for efficient communication in Mobile Cloud Computing and their applications to e-Commerce

    NASA Astrophysics Data System (ADS)

    Pop, Florin; Dobre, Ciprian; Mocanu, Bogdan-Costel; Citoteanu, Oana-Maria; Xhafa, Fatos

    2016-11-01

    Managing the large dimensions of data processed in distributed systems that are formed by datacentres and mobile devices has become a challenging issue with an important impact on the end-user. Therefore, the management of such systems can be achieved efficiently by using uniform overlay networks, interconnected through secure and efficient routing protocols. The aim of this article is to advance our previous work with a novel trust model based on a reputation metric that actively uses the social links between users and the model of interaction between them. We present and evaluate an adaptive model for trust management in structured overlay networks, based on a Mobile Cloud architecture and considering a honeycomb overlay. Such a model can be useful for supporting advanced mobile market-share e-Commerce platforms, where users collaborate and exchange reliable information about, for example, products of interest, and for supporting ad-hoc business campaigns.

  2. Ultra-wide Range Gamma Detector System for Search and Locate Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odell, D. Mackenzie; Harpring, Larry J.; Moore, Frank S. Jr.

    2005-10-26

    Collecting debris samples following a nuclear event requires that operations be conducted from a considerable stand-off distance. An ultra-wide range gamma detector system has been constructed to accomplish both long range radiation search and close range hot sample collection functions. Constructed and tested on a REMOTEC Andros platform, the system has demonstrated reliable operation over six orders of magnitude of gamma dose from 100's of uR/hr to over 100 R/hr. Functional elements include a remotely controlled variable collimator assembly, a NaI(Tl)/photomultiplier tube detector, a proprietary digital radiation instrument, a coaxially mounted video camera, a digital compass, and both local and remote control computers with a user interface designed for long range operations. Long range sensitivity and target location, as well as close range sample selection performance are presented.

  3. Interaction and Impact Studies for Distributed Energy Resource, Transactive Energy, and Electric Grid, using High Performance Computing-based Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelley, B. M.

    The electric utility industry is undergoing significant transformations in its operation model, including a greater emphasis on automation, monitoring technologies, and distributed energy resource management systems (DERMS). While these changes and new technologies drive greater efficiencies and reliability, the new models may introduce new vectors of cyber attack. The appropriate cybersecurity controls to address and mitigate these newly introduced attack vectors and potential vulnerabilities are still widely unknown, and the performance of such controls is difficult to vet. This proposal argues that modeling and simulation (M&S) is a necessary tool to address and better understand these problems introduced by emerging technologies for the grid. M&S will provide electric utilities a platform to model their transmission and distribution systems and run various simulations against the model to better understand the operational impact and performance of cybersecurity controls.

  4. Scientific Data Storage for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Readey, J.

    2014-12-01

    Traditionally data storage used for geophysical software systems has centered on file-based systems and libraries such as NetCDF and HDF5. In contrast cloud based infrastructure providers such as Amazon AWS, Microsoft Azure, and the Google Cloud Platform generally provide storage technologies based on an object based storage service (for large binary objects) complemented by a database service (for small objects that can be represented as key-value pairs). These systems have been shown to be highly scalable, reliable, and cost effective. We will discuss a proposed system that leverages these cloud-based storage technologies to provide an API-compatible library for traditional NetCDF and HDF5 applications. This system will enable cloud storage suitable for geophysical applications that can scale up to petabytes of data and thousands of users. We'll also cover other advantages of this system such as enhanced metadata search.

  5. Designing Robust and Reliable Timestamps for Remote Patient Monitoring.

    PubMed

    Clarke, Malcolm; Schluter, Paul; Reinhold, Barry; Reinhold, Brian

    2015-09-01

    Having timestamps that are robust and reliable is essential for remote patient monitoring in order for patient data to have context and to be correlated with other data. However, unlike hospital systems for which guidelines on timestamps are currently provided by HL7 and IHE, remote patient monitoring platforms are operated in environments where it can be difficult to synchronize with reliable time sources, include devices with simple or no clocks, and may store data spanning significant periods before being able to upload. Existing guidelines prove inadequate. This paper analyzes the requirements and the operating scenarios of remote patient monitoring platforms and defines a framework to convey information on the conditions under which observations were made by the device and forwarded by the gateway in order for data to be managed appropriately and to include both reference to local time and an underlying continuous reference timeline. We define the timestamp formats of HL7 to denote the different conditions of operation and describe extensions to the existing definition of the HL7 timestamp to differentiate between time local to GMT (+0000) and universal coordinated time or network time protocol time where no geographic time zone is implied (-0000). We further describe how timestamps from devices having only simple or no clocks might be managed reliably by a gateway to provide timestamps that are referenced to local time and an underlying continuous reference timeline. We extend the HL7 message to include information to permit a subsequent receiver of the data to understand the quality of the timestamp and how it has been translated. We present an evaluation from deploying a platform for 12 months.
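    As a minimal illustration of the local-time versus continuous-reference-timeline distinction described above, the sketch below formats an HL7-style timestamp (YYYYMMDDhhmmss±ZZZZ) either with the local UTC offset or with the -0000 suffix indicating that no geographic time zone is implied. The paper's actual extensions carry additional quality information not shown here.

      # Illustrative only: HL7-style timestamps with a local offset vs. a "-0000"
      # continuous-reference form, mirroring the distinction described above.
      from datetime import datetime, timezone, timedelta

      def hl7_local(dt: datetime) -> str:
          """Timestamp carrying the local civil-time offset, e.g. ...+0200."""
          if dt.tzinfo is None:
              raise ValueError("expected an offset-aware datetime")
          return dt.strftime("%Y%m%d%H%M%S%z")

      def hl7_reference(dt: datetime) -> str:
          """UTC/NTP-style timestamp with no geographic zone implied (-0000)."""
          return dt.astimezone(timezone.utc).strftime("%Y%m%d%H%M%S") + "-0000"

      measured = datetime(2015, 3, 14, 9, 26, 53, tzinfo=timezone(timedelta(hours=2)))
      print(hl7_local(measured))      # 20150314092653+0200
      print(hl7_reference(measured))  # 20150314072653-0000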

  6. A Platform for Scalable Satellite and Geospatial Data Analysis

    NASA Astrophysics Data System (ADS)

    Beneke, C. M.; Skillman, S.; Warren, M. S.; Kelton, T.; Brumby, S. P.; Chartrand, R.; Mathis, M.

    2017-12-01

    At Descartes Labs, we use the commercial cloud to run global-scale machine learning applications over satellite imagery. We have processed over 5 Petabytes of public and commercial satellite imagery, including the full Landsat and Sentinel archives. By combining open-source tools with a FUSE-based filesystem for cloud storage, we have enabled a scalable compute platform that has demonstrated reading over 200 GB/s of satellite imagery into cloud compute nodes. In one application, we generated global 15m Landsat-8, 20m Sentinel-1, and 10m Sentinel-2 composites from 15 trillion pixels, using over 10,000 CPUs. We recently created a public open-source Python client library that can be used to query and access preprocessed public satellite imagery from within our platform, and made this platform available to researchers for non-commercial projects. In this session, we will describe how you can use the Descartes Labs Platform for rapid prototyping and scaling of geospatial analyses and demonstrate examples in land cover classification.

  7. Ultrasound estimates of muscle quality in older adults: reliability and comparison of Photoshop and ImageJ for the grayscale analysis of muscle echogenicity

    PubMed Central

    Seamon, Bryant A.; Teixeira, Carla; Ismail, Catheeja

    2016-01-01

    Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimate of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R2). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97–.99, p < .001). The mean difference between the echogenicity estimates obtained with the RMT and FHT methods was .87 grayscale levels (95% CI [.54–1.21], p < .0001) using data obtained with both programs. The SEM for Photoshop was .97 and 1.05 grayscale levels when using the RMT and FHT ROI selection methods, respectively. Comparatively, the SEM values were .72 and .81 grayscale levels, respectively, when using the RMT and FHT ROI selection methods in ImageJ. Uniform coefficients of determination (R2 = .96–.99, p < .001) indicate strong positive associations among the grayscale histogram analysis measurement conditions independent of the ROI selection methods and imaging platform. Conclusion. Our method for evaluating muscle echogenicity demonstrated a high degree of intrarater and interrater reliability using both the RMT and FHT methods across 2 common image analysis platforms. The minimal measurement error exhibited by the examiners demonstrates that the ROI selection methods used with Photoshop and ImageJ are suitable for the post-acquisition image analysis of tissue echogenicity in older adults. PMID:26925339
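    The echogenicity estimate itself reduces to a mean grayscale value over the selected region of interest; the sketch below computes that quantity for a rectangular ROI, analogous to a Rectangular Marquee Tool selection. The array layout, ROI coordinates, and synthetic image are illustrative assumptions, not data from the study.

      # Minimal sketch of a grayscale (echogenicity) estimate over a rectangular ROI.
      import numpy as np

      def mean_echogenicity(image: np.ndarray, top: int, left: int,
                            height: int, width: int) -> float:
          """Mean 8-bit grayscale value (0-255) within a rectangular ROI."""
          roi = image[top:top + height, left:left + width]
          return float(roi.mean())

      rng = np.random.default_rng(0)
      scan = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)  # fake B-mode frame
      print(round(mean_echogenicity(scan, 100, 200, 64, 128), 2))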

  8. Validity and reliability of balance assessment software using the Nintendo Wii balance board: usability and validation

    PubMed Central

    2014-01-01

    Background A balance test provides important information such as the standard to judge an individual’s functional recovery or make the prediction of falls. The development of a tool for a balance test that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but there is little software used in balance tests, and there are few studies on reliability and validity. Thus, we developed a balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Methods Twenty healthy adults participated in our study. The participants participated in the test for inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with balance assessment software using the Nintendo Wii balance board and a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. The inter-rater reliability, the intra-rater reliability, and concurrent validity were analyzed by an intraclass correlation coefficient (ICC) value and a standard error of measurement (SEM). Results The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. Conclusion The balance assessment software incorporating the Nintendo Wii balance board was used in our study and was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for the balance assessment. PMID:24912769

  9. Validity and reliability of balance assessment software using the Nintendo Wii balance board: usability and validation.

    PubMed

    Park, Dae-Sung; Lee, GyuChang

    2014-06-10

    A balance test provides important information such as the standard to judge an individual's functional recovery or make the prediction of falls. The development of a tool for a balance test that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but there is little software used in balance tests, and there are few studies on reliability and validity. Thus, we developed a balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Twenty healthy adults participated in our study. The participants participated in the test for inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with balance assessment software using the Nintendo Wii balance board and a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. The inter-rater reliability, the intra-rater reliability, and concurrent validity were analyzed by an intraclass correlation coefficient (ICC) value and a standard error of measurement (SEM). The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. The balance assessment software incorporating the Nintendo Wii balance board was used in our study and was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for the balance assessment.
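    The two outcome measures reported above can be computed directly from a sampled centre-of-pressure trajectory; a minimal sketch follows. The sampling rate, units, and the toy trace are assumptions rather than values from the study.

      # Sketch of the two reported measures: COP path length (sum of distances
      # between consecutive samples) and mean velocity (path length / duration).
      import math

      def cop_path_length(xs, ys):
          """Total excursion of the COP, in the same unit as the input coordinates."""
          return sum(math.hypot(x1 - x0, y1 - y0)
                     for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])))

      def cop_mean_velocity(xs, ys, sample_rate_hz):
          duration = (len(xs) - 1) / sample_rate_hz
          return cop_path_length(xs, ys) / duration

      xs = [0.0, 0.1, 0.3, 0.2, 0.4]   # cm, fake 5-sample trace
      ys = [0.0, 0.2, 0.1, 0.3, 0.2]
      print(round(cop_path_length(xs, ys), 3), round(cop_mean_velocity(xs, ys, 100.0), 3))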

  10. FPGA platform for prototyping and evaluation of neural network automotive applications

    NASA Technical Reports Server (NTRS)

    Aranki, N.; Tawel, R.

    2002-01-01

    In this paper we present an FPGA-based reconfigurable computing platform for the prototyping and evaluation of advanced neural-network-based applications for control and diagnostics in automotive sub-systems.

  11. Reliability and concurrent validity of the computer workstation checklist.

    PubMed

    Baker, Nancy A; Livengood, Heather; Jacobs, Karen

    2013-01-01

    Self-report checklists are used to assess computer workstation setup, typically by workers not trained in ergonomic assessment or checklist interpretation. Though many checklists exist, few have been evaluated for reliability and validity. This study examined the reliability and validity of the Computer Workstation Checklist (CWC) in identifying workers' self-reported workstation problems. The CWC was completed at baseline and at 1 month to establish reliability. Validity was determined with CWC baseline data compared to an onsite workstation evaluation conducted by an expert in computer workstation assessment. Reliability ranged from fair to near perfect (prevalence-adjusted bias-adjusted kappa, 0.38-0.93); items with the strongest agreement were related to the input device, monitor, computer table, and document holder. The CWC had greater specificity (11 of 16 items) than sensitivity (3 of 16 items). The positive predictive value was greater than the negative predictive value for all questions. The CWC has strong reliability. Sensitivity and specificity suggested workers often indicated no problems with workstation setup when problems existed. The evidence suggests that while the CWC may not be valid when used alone, it may be a suitable adjunct to an ergonomic assessment completed by professionals.
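    For a single yes/no checklist item, the statistics named above reduce to simple 2x2-table formulas; the sketch below computes prevalence-adjusted bias-adjusted kappa (PABAK = 2*p_o - 1), sensitivity, and specificity against the expert evaluation. The example counts are hypothetical.

      # Agreement statistics for one yes/no item; counts are illustrative only.

      def pabak(a: int, b: int, c: int, d: int) -> float:
          """a = both 'problem', d = both 'no problem', b/c = disagreements."""
          observed_agreement = (a + d) / (a + b + c + d)
          return 2 * observed_agreement - 1

      def sensitivity_specificity(a: int, b: int, c: int, d: int):
          """Rows: worker report; columns: expert finding (a = true positive)."""
          sens = a / (a + c) if (a + c) else float("nan")
          spec = d / (b + d) if (b + d) else float("nan")
          return sens, spec

      print(round(pabak(6, 2, 5, 27), 2), sensitivity_specificity(6, 2, 5, 27))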

  12. The development of data acquisition and processing application system for RF ion source

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodan; Wang, Xiaoying; Hu, Chundong; Jiang, Caichao; Xie, Yahong; Zhao, Yuanzhe

    2017-07-01

    As the key ion source component of nuclear fusion auxiliary heating devices, the radio frequency (RF) ion source is being developed and gradually applied to provide a source plasma with the advantages of ease of control and high reliability. In addition, it easily achieves long-pulse steady-state operation. During the development and testing of the RF ion source, a large amount of original experimental data is generated. Therefore, it is necessary to develop a stable and reliable computer data acquisition and processing application system that provides data acquisition, storage, access, and real-time monitoring. In this paper, the development of a data acquisition and processing application system for the RF ion source is presented. The hardware platform is based on the PXI system and the software is programmed in the LabVIEW development environment. The key technologies used in the implementation of this software mainly include long-pulse data acquisition, multi-threaded processing, the transmission control communication protocol, and the Lempel-Ziv-Oberhumer data compression algorithm. This design has been tested and applied on the RF ion source, and the test results show that it works reliably and steadily. With the help of this design, the stable plasma discharge data of the RF ion source are collected, stored, accessed, and monitored in real time. The design has practical significance for RF ion source experiments.

  13. Why the Survivability Onion Should Include Reliability, Availability and Maintainability (RAM)

    DTIC Science & Technology

    2013-09-01

    a ¾ Ton truck, a Jeep, or a Toyota pickup. However, assume for a moment it is a Stryker (armored, wheeled platform). The Stryker that has been...nations (Pakistan, Iran, Turkmenistan, Uzbekistan, Tajikistan, and China), so to move a massive platform, you basically have two options: over land...quickly becoming unattractive in our current fiscal environment. What are needed are systems that will cause less life cycle burdens, while providing

  14. Automation of reliability evaluation procedures through CARE - The computer-aided reliability estimation program.

    NASA Technical Reports Server (NTRS)

    Mathur, F. P.

    1972-01-01

    Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
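    CARE's internal equation repository is not reproduced in the abstract; as a simple textbook example of the kind of redundancy formula such a repository interrelates, the sketch below evaluates the reliability of an N-modular-redundant (majority-voting) arrangement from the reliability of a single module.

      # Generic NMR majority-vote reliability; not CARE's own model repository.
      from math import comb

      def nmr_reliability(module_reliability: float, n_modules: int) -> float:
          """Probability that a majority of identical, independent modules survives."""
          majority = n_modules // 2 + 1
          return sum(comb(n_modules, k)
                     * module_reliability ** k
                     * (1 - module_reliability) ** (n_modules - k)
                     for k in range(majority, n_modules + 1))

      # Triple modular redundancy with R = 0.95 per module: 3R^2 - 2R^3 = 0.99275
      print(round(nmr_reliability(0.95, 3), 5))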

  15. Examples of Nonconservatism in the CARE 3 Program

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1988-01-01

    This paper presents parameter regions in the CARE 3 (Computer-Aided Reliability Estimation version 3) computer program where the program overestimates the reliability of a modeled system without warning the user. Five simple models of fault-tolerant computer systems are analyzed; and, the parameter regions where reliability is overestimated are given. The source of the error in the reliability estimates for models which incorporate transient fault occurrences was not readily apparent. However, the source of much of the error for models with permanent and intermittent faults can be attributed to the choice of values for the run-time parameters of the program.

  16. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrate a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model has been developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussion on the integration of GIS on high performance computing platforms.

  17. Reprocessing Multiyear GPS Data from Continuously Operating Reference Stations on Cloud Computing Platform

    NASA Astrophysics Data System (ADS)

    Yoon, S.

    2016-12-01

    To define a geodetic reference frame using GPS data collected by the Continuously Operating Reference Stations (CORS) network, historical GPS data needs to be reprocessed regularly. Reprocessing GPS data collected by up to 2,000 CORS sites over the last two decades requires substantial computational resources. At the National Geodetic Survey (NGS), one reprocessing was completed in 2011, and the second reprocessing is currently under way. For the first reprocessing effort, in-house computing resources were utilized. In the current second reprocessing effort, an outsourced cloud computing platform is being utilized. In this presentation, the data processing strategy at NGS is outlined, as well as the effort to parallelize the data processing procedure in order to maximize the benefit of the cloud computing. The time and cost savings realized by utilizing the cloud computing approach will also be discussed.

  18. Whole-body concentrations of elements in three fish species from offshore oil platforms and natural areas in the Southern California Bight, USA

    USGS Publications Warehouse

    Love, Milton S.; Saiki, Michael K.; May, Thomas W.; Yee, Julie L.

    2013-01-01

    elements. Forty-two elements were excluded from statistical comparisons as they (1) consisted of major cations that were unlikely to accumulate to potentially toxic concentrations; (2) were not detected by the analytical procedures; or (3) were detected at concentrations too low to yield reliable quantitative measurements. The remaining 21 elements consisted of aluminum, arsenic, barium, cadmium, chromium, cobalt, copper, gallium, iron, lead, lithium, manganese, mercury, nickel, rubidium, selenium, strontium, tin, titanium, vanadium, and zinc. Statistical comparisons of these elements indicated that none consistently exhibited higher concentrations at oil platforms than at natural areas. However, the concentrations of copper, selenium, titanium, and vanadium in Pacific sanddab were unusual because small individuals exhibited either no differences between oil platforms and natural areas or significantly lower concentrations at oil platforms than at natural areas, whereas large individuals exhibited significantly higher concentrations at oil platforms than at natural areas.

  19. Validation of MCNP6 Version 1.0 with the ENDF/B-VII.1 Cross Section Library for Plutonium Metals, Oxides, and Solutions on the High Performance Computing Platform Moonlight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Bryan Scott; Gough, Sean T.

    This report documents a validation of the MCNP6 Version 1.0 computer code on the high performance computing platform Moonlight, for operations at Los Alamos National Laboratory (LANL) that involve plutonium metals, oxides, and solutions. The validation is conducted using the ENDF/B-VII.1 continuous energy group cross section library at room temperature. The results are for use by nuclear criticality safety personnel in performing analysis and evaluation of various facility activities involving plutonium materials.

  20. Development of a Very Dense Liquid Cooled Compute Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Phillip N.; Lipp, Robert J.

    2013-12-10

    The objective of this project was to design and develop a prototype very energy efficient high density compute platform with 100% pumped refrigerant liquid cooling using commodity components and high volume manufacturing techniques. Testing at SLAC has indicated that we achieved a DCIE of 0.93 against our original goal of 0.85. This number includes both cooling and power supply and was achieved employing some of the highest wattage processors available.

  1. Design Tools for Accelerating Development and Usage of Multi-Core Computing Platforms

    DTIC Science & Technology

    2014-04-01

    ...multicore PDSP platforms. The GPU-based capabilities of TDIF are currently oriented towards NVIDIA GPUs, based on the Compute Unified Device Architecture (CUDA) programming language [NVIDIA 2007], which can be viewed as an extension of C. The multicore PDSP capabilities currently in TDIF are oriented

  2. Technical Note: scuda: A software platform for cumulative dose assessment.

    PubMed

    Park, Seyoun; McNutt, Todd; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2016-10-01

    Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (scuda) that can be seamlessly integrated into the clinical workflow. scuda consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring the treatment quality and detecting significant dosimetric variations that are keys to successful ART.
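    As a rough illustration of the dose-accumulation step only, the sketch below warps each fraction's dose grid back to the planning geometry with its deformation field and sums the results. The 2-D arrays, toy displacement fields, and use of scipy interpolation are assumptions; scuda's GPU-accelerated implementation is not shown.

      # Illustrative dose accumulation via deformation fields (2-D toy example).
      import numpy as np
      from scipy.ndimage import map_coordinates

      def accumulate_dose(daily_doses, deformations):
          """daily_doses: list of 2-D dose grids (Gy); deformations: matching list of
          (dy, dx) displacement fields mapping planning voxels to daily voxels."""
          total = np.zeros_like(daily_doses[0])
          yy, xx = np.meshgrid(np.arange(total.shape[0]),
                               np.arange(total.shape[1]), indexing="ij")
          for dose, (dy, dx) in zip(daily_doses, deformations):
              coords = np.array([yy + dy, xx + dx])
              total += map_coordinates(dose, coords, order=1, mode="nearest")
          return total

      doses = [np.full((4, 4), 2.0) for _ in range(3)]                       # three 2 Gy fractions
      shifts = [(np.zeros((4, 4)), np.full((4, 4), 0.5)) for _ in range(3)]  # toy displacements
      print(accumulate_dose(doses, shifts))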

  3. Technical Note: SCUDA: A software platform for cumulative dose assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seyoun; McNutt, Todd; Quon, Harry

    Purpose: Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (SCUDA) that can be seamlessly integrated into the clinical workflow. Methods: SCUDA consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. Results: The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. Conclusions: The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring the treatment quality and detecting significant dosimetric variations that are keys to successful ART.

  4. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. Data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies of data storing, computing and analyzing. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed based on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build a multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark in conjunction with the scientific computing environment, exploratory spatial data analysis tools, temporal data management and analysis systems make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides spatiotemporal computational models and advanced geospatial visualization tools that deal with other domains involving spatial properties. We tested the performance of the platform based on taxi trajectory analysis. Results suggested that GISpark achieves excellent run time performance in spatiotemporal big data applications.

  5. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common mode failure modeling is also included.
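    A plain, non-variance-reduced Monte Carlo version of the same idea is sketched below: Weibull failure times are sampled for each component and the fraction of trials in which a 2-of-3 majority survives the mission time is counted. MC-HARP's variance-reduction techniques and HARP fault/error-handling models are not reproduced; the parameters are illustrative.

      # Crude Monte Carlo reliability estimate for a 2-of-3 system with Weibull
      # failure times; not MC-HARP's variance-reduced algorithm.
      import random

      def tmr_reliability_mc(mission_time, scale, shape, trials=50_000, seed=1):
          """Fraction of trials in which at least 2 of 3 components outlive the mission."""
          rng = random.Random(seed)
          successes = 0
          for _ in range(trials):
              survivors = sum(rng.weibullvariate(scale, shape) > mission_time
                              for _ in range(3))
              successes += survivors >= 2
          return successes / trials

      # Components with characteristic life 1000 h, shape 1.5, over a 100 h mission
      print(tmr_reliability_mc(100.0, 1000.0, 1.5))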

  6. Climate@Home: Crowdsourcing Climate Change Research

    NASA Astrophysics Data System (ADS)

    Xu, C.; Yang, C.; Li, J.; Sun, M.; Bambacus, M.

    2011-12-01

    Climate change deeply impacts human wellbeing. Significant amounts of resources have been invested in building super-computers that are capable of running advanced climate models, which help scientists understand climate change mechanisms, and predict its trend. Although climate change influences all human beings, the general public is largely excluded from the research. On the other hand, scientists are eagerly seeking communication mediums for effectively enlightening the public on climate change and its consequences. The Climate@Home project is devoted to connecting the two ends with an innovative solution: crowdsourcing climate computing to the general public by harvesting volunteered computing resources from the participants. A distributed web-based computing platform will be built to support climate computing, and the general public can 'plug-in' their personal computers to participate in the research. People contribute the spare computing power of their computers to run a computer model, which is used by scientists to predict climate change. Traditionally, only super-computers could handle such a large computing processing load. By orchestrating massive amounts of personal computers to perform atomized data processing tasks, investments in new super-computers, energy consumed by super-computers, and carbon release from super-computers are reduced. Meanwhile, the platform forms a social network of climate researchers and the general public, which may be leveraged to raise climate awareness among the participants. A portal is to be built as the gateway to the climate@home project. Three types of roles and the corresponding functionalities are designed and supported. The end users include the citizen participants, climate scientists, and project managers. Citizen participants connect their computing resources to the platform by downloading and installing a computing engine on their personal computers. Computer climate models are defined at the server side. Climate scientists configure computer model parameters through the portal user interface. After model configuration, scientists then launch the computing task. Next, data is atomized and distributed to computing engines that are running on citizen participants' computers. Scientists will receive notifications on the completion of computing tasks, and examine modeling results via visualization modules of the portal. Computing tasks, computing resources, and participants are managed by project managers via portal tools. A portal prototype has been built for proof of concept. Three forums have been set up for different groups of users to share information on the science, technology, and educational outreach aspects. A Facebook account has been set up to distribute messages via the most popular social networking platform. New threads are synchronized from the forums to Facebook. A mapping tool displays geographic locations of the participants and the status of tasks on each client node. A group of users have been invited to test functions such as forums, blogs, and computing resource monitoring.

  7. The SCEC Broadband Platform: A Collaborative Open-Source Software Package for Strong Ground Motion Simulation and Validation

    NASA Astrophysics Data System (ADS)

    Silva, F.; Maechling, P. J.; Goulet, C. A.; Somerville, P.; Jordan, T. H.

    2014-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving geoscientists, earthquake engineers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform (BBP) is open-source scientific software that can generate broadband (0-100Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms for a well-observed historical earthquake. Then, the BBP calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, and several new data products, such as map and distance-based goodness of fit plots. As the number and complexity of scenarios simulated using the Broadband Platform increases, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.

  8. Flight simulator platform motion and air transport pilot training

    NASA Technical Reports Server (NTRS)

    Lee, Alfred T.; Bussolari, Steven R.

    1987-01-01

    The effect of flight simulator platform motion on pilot performance and training was evaluated using subjective ratings and objective performance data obtained from experienced B-727 pilots and from pilots with no prior heavy-aircraft flying experience, flying a B-727-200 simulator used by the FAA for upgrade and transition training in air carrier operations. The results for experienced pilots did not reveal any reliable effects of wide variations in platform motion design. In contrast, motion variations significantly affected the behavior of pilots without heavy-aircraft experience; the effect was limited to pitch attitude control inputs during the early phase of landing training.

  9. Single exposure EUV patterning of BEOL metal layers on the IMEC iN7 platform

    NASA Astrophysics Data System (ADS)

    Blanco Carballo, V. M.; Bekaert, J.; Mao, M.; Kutrzeba Kotowska, B.; Larivière, S.; Ciofi, I.; Baert, R.; Kim, R. H.; Gallagher, E.; Hendrickx, E.; Tan, L. E.; Gillijns, W.; Trivkovic, D.; Leray, P.; Halder, S.; Gallagher, M.; Lazzarino, F.; Paolillo, S.; Wan, D.; Mallik, A.; Sherazi, Y.; McIntyre, G.; Dusa, M.; Rusu, P.; Hollink, T.; Fliervoet, T.; Wittebrood, F.

    2017-03-01

    This paper summarizes findings on the iN7 platform (foundry N5 equivalent) for single exposure EUV (SE EUV) of M1 and M2 BEOL layers. Logic structures within these layers have been measured after litho and after etch, and variability was characterized both with conventional CD-SEM measurements and with the Hitachi contouring method. After analyzing the patterning of these layers, the impact of variability on potential interconnect reliability was studied by using Monte Carlo and process emulation simulations to determine if current litho/etch performance would meet success criteria for the given platform design rules.

  10. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    NASA Astrophysics Data System (ADS)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.

    2014-06-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. Openstack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.

  11. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    ERIC Educational Resources Information Center

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  12. Cloud Computing in Support of Applied Learning: A Baseline Study of Infrastructure Design at Southern Polytechnic State University

    ERIC Educational Resources Information Center

    Conn, Samuel S.; Reichgelt, Han

    2013-01-01

    Cloud computing represents an architecture and paradigm of computing designed to deliver infrastructure, platforms, and software as constructible computing resources on demand to networked users. As campuses are challenged to better accommodate academic needs for applications and computing environments, cloud computing can provide an accommodating…

  13. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  14. Modeling of unit operating considerations in generating-capacity reliability evaluation. Volume 1. Mathematical models, computing methods, and results. Final report. [GENESIS, OPCON and OPPLAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Singh, C.

    1982-07-01

    Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects on system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.

  15. Assessment of physical server reliability in multi cloud computing system

    NASA Astrophysics Data System (ADS)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. Spreading cloud deployment across multiple service providers creates space for competitive prices that minimize the burden on an enterprise's spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered, with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and then combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer, present the required algorithms, and explore the steps in the assessment of server reliability.
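    The paper's server-layer algorithms are not given in the abstract; as a generic illustration of combining layer estimates, the sketch below treats the application, virtualization, and server layers as a series system and models the server layer as a parallel block of redundant servers. The independence assumption and the example numbers are illustrative only.

      # Generic layered-reliability combination; not the paper's specific method.

      def parallel_block(reliabilities):
          """At least one of the redundant units must work."""
          unavailability = 1.0
          for r in reliabilities:
              unavailability *= (1.0 - r)
          return 1.0 - unavailability

      def multicloud_reliability(app_r, virt_r, server_rs):
          """Series combination of application, virtualization, and server layers."""
          return app_r * virt_r * parallel_block(server_rs)

      # Two physical servers at 0.95 each behind an application/virtualization stack
      print(round(multicloud_reliability(0.99, 0.98, [0.95, 0.95]), 4))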

  16. [Construction and application of bioinformatic analysis platform for aquatic pathogen based on the MilkyWay-2 supercomputer].

    PubMed

    Fang, Xiang; Li, Ning-qiu; Fu, Xiao-zhe; Li, Kai-bin; Lin, Qiang; Liu, Li-hui; Shi, Cun-bin; Wu, Shu-qin

    2015-07-01

    As a key component of life science, bioinformatics has been widely applied in genomics, transcriptomics, and proteomics. However, the requirement of high-performance computers rather than common personal computers for constructing a bioinformatics platform significantly limited the application of bioinformatics in aquatic science. In this study, we constructed a bioinformatic analysis platform for aquatic pathogens based on the MilkyWay-2 supercomputer. The platform consisted of three functional modules, including genomic and transcriptomic sequencing data analysis, protein structure prediction, and molecular dynamics simulations. To validate the practicability of the platform, we performed bioinformatic analysis on aquatic pathogenic organisms. For example, genes of Flavobacterium johnsoniae M168 were identified and annotated via Blast searches, GO and InterPro annotations. Protein structural models for five small segments of grass carp reovirus HZ-08 were constructed by homology modeling. Molecular dynamics simulations were performed on outer membrane protein A of Aeromonas hydrophila, and the changes in system temperature, total energy, root mean square deviation and conformation of the loops during equilibration were also observed. These results showed that the bioinformatic analysis platform for aquatic pathogens has been successfully built on the MilkyWay-2 supercomputer. This study will provide insights into the construction of bioinformatic analysis platforms for other subjects.

  17. Initial experience with a handheld device digital imaging and communications in medicine viewer: OsiriX mobile on the iPhone.

    PubMed

    Choudhri, Asim F; Radvany, Martin G

    2011-04-01

    Medical imaging is commonly used to diagnose many emergent conditions, as well as plan treatment. Digital images can be reviewed on almost any computing platform. Modern mobile phones and handheld devices are portable computing platforms with robust software programming interfaces, powerful processors, and high-resolution displays. OsiriX mobile, a new Digital Imaging and Communications in Medicine viewing program, is available for the iPhone/iPod touch platform. This raises the possibility of mobile review of diagnostic medical images to expedite diagnosis and treatment planning using a commercial off the shelf solution, facilitating communication among radiologists and referring clinicians.

  18. Atomdroid: a computational chemistry tool for mobile platforms.

    PubMed

    Feldt, Jonas; Mata, Ricardo A; Dieterich, Johannes M

    2012-04-23

    We present the implementation of a new molecular mechanics program designed for use in mobile platforms, the first specifically built for these devices. The software is designed to run on Android operating systems and is compatible with several modern tablet-PCs and smartphones available in the market. It includes molecular viewer/builder capabilities with integrated routines for geometry optimizations and Monte Carlo simulations. These functionalities allow it to work as a stand-alone tool. We discuss some particular development aspects, as well as the overall feasibility of using computational chemistry software packages in mobile platforms. Benchmark calculations show that through efficient implementation techniques even hand-held devices can be used to simulate midsized systems using force fields.
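    Not Atomdroid's code, but as a minimal illustration of the Monte Carlo step such a force-field simulation performs, the sketch below implements a Metropolis acceptance test with energies in reduced units (k_B*T = 1 assumed).

      # Metropolis acceptance criterion in reduced units; illustrative only.
      import math
      import random

      def metropolis_accept(e_old: float, e_new: float, rng: random.Random) -> bool:
          """Accept downhill moves always; uphill moves with Boltzmann probability."""
          if e_new <= e_old:
              return True
          return rng.random() < math.exp(-(e_new - e_old))

      rng = random.Random(42)
      print(metropolis_accept(-5.0, -5.2, rng), metropolis_accept(-5.0, -4.0, rng))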

  19. A Wearable Mobile Sensor Platform to Assist Fruit Grading

    PubMed Central

    Aroca, Rafael V.; Gomes, Rafael B.; Dantas, Rummennigue R.; Calbo, Adonai G.; Gonçalves, Luiz M. G.

    2013-01-01

    Wearable computing is a form of ubiquitous computing that offers flexible and useful tools for users. Specifically, glove-based systems have been used in the last 30 years in a variety of applications, but mostly focusing on sensing people's attributes, such as finger bending and heart rate. In contrast, we propose in this work a novel flexible and reconfigurable instrumentation platform in the form of a glove, which can be used to analyze and measure attributes of fruits by just pointing or touching them with the proposed glove. An architecture for such a platform is designed and its application for intuitive fruit grading is also presented, including experimental results for several fruits. PMID:23666134

  20. Interventional radiology virtual simulator for liver biopsy.

    PubMed

    Villard, P F; Vidal, F P; ap Cenydd, L; Holbrey, R; Pisharody, S; Johnson, S; Bulpitt, A; John, N W; Bello, F; Gould, D

    2014-03-01

    Training in Interventional Radiology currently uses the apprenticeship model, where clinical and technical skills of invasive procedures are learnt during practice in patients. This apprenticeship training method is increasingly limited by regulatory restrictions on working hours, concerns over patient risk through trainees' inexperience and the variable exposure to case mix and emergencies during training. To address this, we have developed a computer-based simulation of visceral needle puncture procedures. A real-time framework has been built that includes: segmentation, physically based modelling, haptics rendering, pseudo-ultrasound generation and the concept of a physical mannequin. It is the result of a close collaboration between different universities, involving computer scientists, clinicians, clinical engineers and occupational psychologists. The technical implementation of the framework is a robust and real-time simulation environment combining a physical platform and an immersive computerized virtual environment. The face, content and construct validation have been previously assessed, showing the reliability and effectiveness of this framework, as well as its potential for teaching visceral needle puncture. A simulator for ultrasound-guided liver biopsy has been developed. It includes functionalities and metrics extracted from cognitive task analysis. This framework can be useful during training, particularly given the known difficulties in gaining significant practice of core skills in patients.

  1. Interface COMSOL-PHREEQC (iCP), an efficient numerical framework for the solution of coupled multiphysics and geochemistry

    NASA Astrophysics Data System (ADS)

    Nardi, Albert; Idiart, Andrés; Trinchero, Paolo; de Vries, Luis Manuel; Molinero, Jorge

    2014-08-01

    This paper presents the development, verification and application of an efficient interface, denoted as iCP, which couples two standalone simulation programs: the general purpose Finite Element framework COMSOL Multiphysics® and the geochemical simulator PHREEQC. The main goal of the interface is to maximize the synergies between the aforementioned codes, providing a numerical platform that can efficiently simulate a wide range of multiphysics problems coupled with geochemistry. iCP is written in Java and uses the IPhreeqc C++ dynamic library and the COMSOL Java-API. Given the large computational requirements of the aforementioned coupled models, special emphasis has been placed on numerical robustness and efficiency. To this end, the geochemical reactions are solved in parallel by balancing the computational load over multiple threads. First, a benchmark exercise is used to test the reliability of iCP regarding flow and reactive transport. Then, a large scale thermo-hydro-chemical (THC) problem is solved to show the code capabilities. The results of the verification exercise are successfully compared with those obtained using PHREEQC and the application case demonstrates the scalability of a large scale model, at least up to 32 threads.

  2. Input-output oriented computation algorithms for the control of large flexible structures

    NASA Technical Reports Server (NTRS)

    Minto, K. D.

    1989-01-01

    An overview is given of work in progress aimed at developing computational algorithms addressing two important aspects in the control of large flexible space structures; namely, the selection and placement of sensors and actuators, and the resulting multivariable control law design problem. The issue of sensor/actuator set selection is particularly crucial to obtaining a satisfactory control design, as clearly a poor choice will inherently limit the degree to which good control can be achieved. With regard to control law design, the researchers are driven by concerns stemming from the practical issues associated with eventual implementation of multivariable control laws, such as reliability, limit protection, multimode operation, sampling rate selection, processor throughput, etc. Naturally, the burden imposed by dealing with these aspects of the problem can be reduced by ensuring that the complexity of the compensator is minimized. Our approach to these problems is based on extensions to input/output oriented techniques that have proven useful in the design of multivariable control systems for aircraft engines. In particular, researchers are exploring the use of relative gain analysis and the condition number as a means of quantifying the process of sensor/actuator selection and placement for shape control of a large space platform.

  3. iLid: Low-power Sensing of Fatigue and Drowsiness Measures on a Computational Eyeglass

    PubMed Central

    ROSTAMINIA, SOHA; MAYBERRY, ADDISON; GANESAN, DEEPAK; MARLIN, BENJAMIN; GUMMESON, JEREMY

    2018-01-01

    The ability to monitor eye closures and blink patterns has long been known to enable accurate assessment of fatigue and drowsiness in individuals. Many measures of the eye are known to be correlated with fatigue including coarse-grained measures like the rate of blinks as well as fine-grained measures like the duration of blinks and the extent of eye closures. Despite a plethora of research validating these measures, we lack wearable devices that can continually and reliably monitor them in the natural environment. In this work, we present a low-power system, iLid, that can continually sense fine-grained measures such as blink duration and Percentage of Eye Closures (PERCLOS) at high frame rates of 100fps. We present a complete solution including design of the sensing, signal processing, and machine learning pipeline; implementation on a prototype computational eyeglass platform; and extensive evaluation under many conditions including illumination changes, eyeglass shifts, and mobility. Our results are very encouraging, showing that we can detect blinks, blink duration, eyelid location, and fatigue-related metrics such as PERCLOS with less than a few percent error. PMID:29417956
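    As an illustration of the fatigue metrics named above, the sketch below computes PERCLOS and blink durations from a per-frame eye-openness signal sampled at 100 fps. The 80% closure threshold follows the common PERCLOS convention and, like the synthetic signal, is an assumption; iLid's own pipeline first estimates eyelid position from the eyeglass camera.

      # PERCLOS and blink durations from a per-frame openness signal (1.0 = open,
      # 0.0 = closed); thresholds and the synthetic trace are assumptions.

      def perclos(openness, closed_threshold=0.2):
          """Fraction of frames in which the eye is at least 80% closed."""
          closed = sum(1 for o in openness if o <= closed_threshold)
          return closed / len(openness)

      def blink_durations(openness, frame_rate_hz=100.0, closed_threshold=0.2):
          """Durations (s) of consecutive runs of closed frames."""
          durations, run = [], 0
          for o in openness:
              if o <= closed_threshold:
                  run += 1
              elif run:
                  durations.append(run / frame_rate_hz)
                  run = 0
          if run:
              durations.append(run / frame_rate_hz)
          return durations

      signal = [1.0] * 50 + [0.1] * 12 + [1.0] * 38   # one 120 ms blink in 1 s at 100 fps
      print(round(perclos(signal), 3), blink_durations(signal))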

  4. Notch filtering the nuclear environment of a spin qubit.

    PubMed

    Malinowski, Filip K; Martins, Frederico; Nissen, Peter D; Barnes, Edwin; Cywiński, Łukasz; Rudner, Mark S; Fallahi, Saeed; Gardner, Geoffrey C; Manfra, Michael J; Marcus, Charles M; Kuemmeth, Ferdinand

    2017-01-01

    Electron spins in gate-defined quantum dots provide a promising platform for quantum computation. In particular, spin-based quantum computing in gallium arsenide takes advantage of the high quality of semiconducting materials, reliability in fabricating arrays of quantum dots and accurate qubit operations. However, the effective magnetic noise arising from the hyperfine interaction with uncontrolled nuclear spins in the host lattice constitutes a major source of decoherence. Low-frequency nuclear noise, responsible for fast (10 ns) inhomogeneous dephasing, can be removed by echo techniques. High-frequency nuclear noise, recently studied via echo revivals, occurs in narrow-frequency bands related to differences in Larmor precession of the three isotopes 69Ga, 71Ga and 75As (refs 15,16,17). Here, we show that both low- and high-frequency nuclear noise can be filtered by appropriate dynamical decoupling sequences, resulting in a substantial enhancement of spin qubit coherence times. Using nuclear notch filtering, we demonstrate a spin coherence time (T2) of 0.87 ms, five orders of magnitude longer than typical exchange gate times, and exceeding the longest coherence times reported to date in Si/SiGe gate-defined quantum dots.

  5. Privacy and information security risks in a technology platform for home-based chronic disease rehabilitation and education.

    PubMed

    Henriksen, Eva; Burkow, Tatjana M; Johnsen, Elin; Vognild, Lars K

    2013-08-09

    Privacy and information security are important for all healthcare services, including home-based services. We have designed and implemented a prototype technology platform for providing home-based healthcare services. It supports a personal electronic health diary and enables secure and reliable communication and interaction with peers and healthcare personnel. The platform runs on a small computer with a dedicated remote control. It is connected to the patient's TV and to a broadband Internet connection. The platform has been tested with home-based rehabilitation and education programs for chronic obstructive pulmonary disease and diabetes. As part of our work, a risk assessment of privacy and security aspects has been performed, to reveal actual risks and to ensure adequate information security in this technical platform. Risk assessment was performed in an iterative manner during the development process. Thus, security solutions have been incorporated into the design from an early stage instead of being included as an add-on to a nearly completed system. We have adapted existing risk management methods to our own environment, thus creating our own method. Our method conforms to ISO's standard for information security risk management. A total of approximately 50 threats and possible unwanted incidents were identified and analysed. Among the threats to the four information security aspects (confidentiality, integrity, availability, and quality), confidentiality threats were identified as the most serious, with one threat given an unacceptable level of High risk. This is because health-related personal information is regarded as sensitive. Availability threats were analysed as low risk, as the aim of the home programmes is to provide education and rehabilitation services, not for use in acute situations or for continuous health monitoring. Most of the identified threats are applicable for healthcare services intended for patients or citizens in their own homes. Confidentiality risks in the home are different from those in a more controlled environment such as a hospital, and electronic equipment located in private homes and communicating via the Internet is more exposed to unauthorised access. By implementing the proposed measures, it has been possible to design a home-based service which ensures the necessary level of information security and privacy.

  6. Precision of lumbar intervertebral measurements: does a computer-assisted technique improve reliability?

    PubMed

    Pearson, Adam M; Spratt, Kevin F; Genuario, James; McGough, William; Kosman, Katherine; Lurie, Jon; Sengupta, Dilip K

    2011-04-01

    Comparison of intra- and interobserver reliability of digitized manual and computer-assisted intervertebral motion measurements and classification of "instability." To determine if computer-assisted measurement of lumbar intervertebral motion on flexion-extension radiographs improves reliability compared with digitized manual measurements. Many studies have questioned the reliability of manual intervertebral measurements, although few have compared the reliability of computer-assisted and manual measurements on lumbar flexion-extension radiographs. Intervertebral rotation, anterior-posterior (AP) translation, and change in anterior and posterior disc height were measured with a digitized manual technique by three physicians and by three other observers using computer-assisted quantitative motion analysis (QMA) software. Each observer measured 30 sets of digital flexion-extension radiographs (L1-S1) twice. Shrout-Fleiss intraclass correlation coefficients for intra- and interobserver reliabilities were computed. The stability of each level was also classified (instability defined as >4 mm AP translation or 10° rotation), and the intra- and interobserver reliabilities of the two methods were compared using adjusted percent agreement (APA). Intraobserver reliability intraclass correlation coefficients were substantially higher for the QMA technique than for the digitized manual technique across all measurements: rotation 0.997 versus 0.870, AP translation 0.959 versus 0.557, change in anterior disc height 0.962 versus 0.770, and change in posterior disc height 0.951 versus 0.283. The same pattern was observed for interobserver reliability (rotation 0.962 vs. 0.693, AP translation 0.862 vs. 0.151, change in anterior disc height 0.862 vs. 0.373, and change in posterior disc height 0.730 vs. 0.300). The QMA technique was also more reliable for the classification of "instability." Intraobserver APAs ranged from 87% to 97% for QMA versus 60% to 73% for digitized manual measurements, while interobserver APAs ranged from 91% to 96% for QMA versus 57% to 63% for digitized manual measurements. The use of QMA software substantially improved the reliability of lumbar intervertebral measurements and the classification of instability based on flexion-extension radiographs.
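
    The reliability figures above are Shrout-Fleiss intraclass correlation coefficients. As a generic illustration (not the authors' analysis code), ICC(3,1) can be computed from a two-way decomposition of a targets-by-measurements matrix; the data below are hypothetical.

    import numpy as np

    def icc_3_1(x):
        """ICC(3,1), two-way mixed, single measurement (Shrout & Fleiss 1979).
        x has shape (n targets, k raters or repeated measurements)."""
        x = np.asarray(x, dtype=float)
        n, k = x.shape
        grand = x.mean()
        ss_total = ((x - grand) ** 2).sum()
        ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between targets
        ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
        ms_rows = ss_rows / (n - 1)
        ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

    # Hypothetical example: five radiographs, rotation measured twice (degrees).
    ratings = np.array([[12.1, 12.3], [8.4, 8.0], [15.2, 15.5], [3.9, 4.2], [10.0, 9.8]])
    print(round(icc_3_1(ratings), 3))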

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kisner, R.; Melin, A.; Burress, T.

    The objective of this project is to demonstrate improved reliability and increased performance made possible by deeply embedding instrumentation and controls (I&C) in nuclear power plant (NPP) components and systems. The project is employing a highly instrumented canned rotor, magnetic bearing, fluoride salt pump as its I&C technology demonstration platform. I&C is intimately part of the basic millisecond-by-millisecond functioning of the system; treating I&C as an integral part of the system design is innovative and will allow significant improvement in capabilities and performance. As systems become more complex and greater performance is required, traditional I&C design techniques become inadequate and more advanced I&C needs to be applied. New I&C techniques enable optimal and reliable performance and tolerance of noise and uncertainties in the system rather than merely monitoring quasistable performance. Traditionally, I&C has been incorporated in NPP components after the design is nearly complete; adequate performance was obtained through over-design. By incorporating I&C at the beginning of the design phase, the control system can provide superior performance and reliability and enable designs that are otherwise impossible. This report describes the progress and status of the project and provides a conceptual design overview for the platform to demonstrate the performance and reliability improvements enabled by advanced embedded I&C.

  8. Design of Control Plane Architecture Based on Cloud Platform and Experimental Network Demonstration for Multi-domain SDON

    NASA Astrophysics Data System (ADS)

    Li, Ming; Yin, Hongxi; Xing, Fangyuan; Wang, Jingchao; Wang, Honghuan

    2016-02-01

    With the features of network virtualization and resource programming, the Software Defined Optical Network (SDON) is considered the future development trend of optical networks, provisioning more flexible, efficient and open network functions and supporting intraconnection and interconnection of data centers. Meanwhile, cloud platforms can provide powerful computing, storage and management capabilities. In this paper, with the coordination of SDON and the cloud platform, a multi-domain SDON architecture based on a cloud control plane is proposed, composed of data centers with database (DB), path computation element (PCE), SDON controller and orchestrator. In addition, the structure of the multi-domain SDON orchestrator and OpenFlow-enabled optical node are proposed to realize an effective management and control platform combining centralized and distributed control. Finally, functional verification and demonstration are performed through our optical experiment network.

  9. Reliable results from stochastic simulation models

    Treesearch

    Donald L., Jr. Gochenour; Leonard R. Johnson

    1973-01-01

    Development of a computer simulation model is usually done without fully considering how long the model should run (e.g. computer time) before the results are reliable. However, construction of confidence intervals (CI) about critical output parameters from the simulation model makes it possible to determine the point where model results are reliable. If the results are...
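
    A minimal sketch of the run-length idea described above, assuming independent replications and a normal approximation (the placeholder model and target precision are illustrative, not from the paper): replications are added until the 95% confidence-interval half-width on the output statistic falls below a chosen tolerance.

    import random, statistics

    def run_replication(seed):
        """Placeholder stochastic model; replace with the real simulation."""
        rng = random.Random(seed)
        return sum(rng.expovariate(1.0) for _ in range(100))   # output statistic

    def replicate_until_precise(target_halfwidth, z=1.96, min_reps=10, max_reps=10000):
        outputs = [run_replication(s) for s in range(min_reps)]
        while True:
            mean = statistics.fmean(outputs)
            half = z * statistics.stdev(outputs) / len(outputs) ** 0.5
            if half <= target_halfwidth or len(outputs) >= max_reps:
                return mean, half, len(outputs)
            outputs.append(run_replication(len(outputs)))

    mean, half, n = replicate_until_precise(target_halfwidth=1.0)
    print(f"estimate {mean:.2f} +/- {half:.2f} after {n} replications")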

  10. A mobile monitoring system of blood pressure for underserved in China by information and communication technology service.

    PubMed

    Jiang, Jiehui; Yan, Zhuangzhi; Kandachar, Prabhu; Freudenthal, Adinda

    2010-05-01

    High blood pressure (BP, hypertension) is a leading chronic condition in China and has become the main risk factor for many high-risk diseases, such as heart attacks. However, the platform for chronic disease measurement and management is still lacking, especially for underserved Chinese. To achieve the early diagnosis of hypertension, one BP monitoring system has been designed. The proposed design consists of three main parts: user domain, server domain, and channel domain. All three units and their materialization, validation tests on reliability, and usability are described in this paper, and the conclusion is that the current design concept is feasible and the system can be developed toward sufficient reliability and affordability with further optimization. This idea might also be extended into one platform for other physiological signals, such as blood sugar and ECG.

  11. Toward a virtual platform for materials processing

    NASA Astrophysics Data System (ADS)

    Schmitz, G. J.; Prahl, U.

    2009-05-01

    Any production is based on materials eventually becoming components of a final product. Material properties, which are determined by the material's microstructure, are thus of utmost importance both for the productivity and reliability of processing during production and for the application and reliability of the product components. A sound prediction of material properties is therefore highly important. Such a prediction requires tracking of microstructure and properties evolution along the entire component life cycle, starting from a homogeneous, isotropic and stress-free melt and eventually ending in failure under operational load. This article will outline ongoing activities at RWTH Aachen University aiming at establishing a virtual platform for materials processing, comprising a virtual, integrative numerical description of processes and of the microstructure evolution along the entire production chain and even extending further toward microstructure and properties evolution under operational conditions.

  12. Using Mechanical Turk for research on cancer survivors.

    PubMed

    Arch, Joanna J; Carr, Alaina L

    2017-10-01

    The successful recruitment and study of cancer survivors within psycho-oncology research can be challenging, time-consuming, and expensive, particularly for key subgroups such as young adult cancer survivors. Online crowdsourcing platforms offer a potential solution that has not yet been investigated with regard to cancer populations. The current study assessed the presence of cancer survivors on Amazon's Mechanical Turk (MTurk) and the feasibility of using MTurk as an efficient, cost-effective, and reliable psycho-oncology recruitment and research platform. During a <4-month period, cancer survivors living in the United States were recruited on MTurk to complete two assessments, spaced 1 week apart, relating to psychosocial and cancer-related functioning. The reliability and validity of responses were investigated. Within a <4-month period, 464 self-identified cancer survivors on MTurk consented to and completed an online assessment. The vast majority (79.09%) provided reliable and valid study data according to multiple indices. The sample was highly diverse in terms of U.S. geography, socioeconomic status, and cancer type, and reflected a particularly strong presence of distressed and young adult cancer survivors (median age = 36 years). A majority of participants (58.19%) responded to a second survey sent one week later. Online crowdsourcing represents a feasible, efficient, and cost-effective recruitment and research platform for cancer survivors, particularly for young adult cancer survivors and those with significant distress. We discuss remaining challenges and future recommendations. Copyright © 2016 John Wiley & Sons, Ltd.

  13. [Reliability and validity of the Occupational Stress Scale for Chinese offshore oil platform workers].

    PubMed

    Chen, Wei-qing; Huang, Zi-hui; Yu, De-xin; Lin, Yan-zu; Ling, Zhi-ming; Tang, Ji-song

    2003-02-01

    To evaluate the validity and reliability of the Occupational Stress Scale (OSS) for Chinese offshore oil platform workers. A 51-item self-administered questionnaire, developed in the light of Cooper's questionnaire and the company's specific situation, was used to investigate 561 subjects. The 51 occupational stress items relating to offshore oil production were subjected to factor analysis, and nine latent factors were identified, which explained 62.5% of the total variance. According to the contents described by the items included in each factor, they were respectively defined as: "the interface between job and family/social life (factor 1)", "career and achievement (factor 2)", "safety (factor 3)", "management problem and relationship with others at work (factor 4)", "physical factors of workplace (factor 5)", "platform living environment (factor 6)", "role in management (factor 7)", "ergonomics (factor 8)" and "organization structure (factor 9)". Significant differences in the scores of five factors were observed among 12 different job categories by analysis of variance. After adjusting for potential confounding factors (age, educational level), hierarchical multiple regression analysis indicated that the score of the OSS was significantly and positively correlated with the poor mental health of the workers (P < 0.01). Internal consistency tests yielded Cronbach's alpha values of 0.72 - 0.91 for the OSS and each factor. The OSS is a valid and reliable tool for measuring occupational stress, and can be used to explore occupational stress and its influence on health and safety problems in offshore oil workers.
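
    The internal-consistency figures quoted above are Cronbach's alpha values. A minimal sketch of that computation follows; the item scores are hypothetical, not data from the survey.

    import numpy as np

    def cronbach_alpha(items):
        """items: array of shape (n respondents, k items) for one factor/scale;
        alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Hypothetical 4-item factor scored by six workers on a 1-5 scale.
    scores = np.array([[4, 4, 5, 4], [2, 3, 2, 2], [5, 5, 4, 5],
                       [3, 3, 3, 4], [1, 2, 2, 1], [4, 3, 4, 4]])
    print(round(cronbach_alpha(scores), 2))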

  14. Airborne and Ground-Based Platforms for Data Collection in Small Vineyards: Examples from the UK and Switzerland

    NASA Astrophysics Data System (ADS)

    Green, David R.; Gómez, Cristina; Fahrentrapp, Johannes

    2015-04-01

    This paper presents an overview of some of the low-cost ground and airborne platforms and technologies now becoming available for data collection in small area vineyards. Low-cost UAV or UAS platforms and cameras are now widely available as the means to collect both vertical and oblique aerial still photography and airborne videography in vineyards. Examples of small aerial platforms include the AR Parrot Drone, the DJI Phantom (1 and 2), and 3D Robotics IRIS+. Both fixed-wing and rotary wings platforms offer numerous advantages for aerial image acquisition including the freedom to obtain high resolution imagery at any time required. Imagery captured can be stored on mobile devices such as an Apple iPad and shared, written directly to a memory stick or card, or saved to the Cloud. The imagery can either be visually interpreted or subjected to semi-automated analysis using digital image processing (DIP) software to extract information about vine status or the vineyard environment. At the ground-level, a radio-controlled 'rugged' model 4x4 vehicle can also be used as a mobile platform to carry a number of sensors (e.g. a Go-Pro camera) around a vineyard, thereby facilitating quick and easy field data collection from both within the vine canopy and rows. For the small vineyard owner/manager with limited financial resources, this technology has a number of distinct advantages to aid in vineyard management practices: it is relatively cheap to purchase; requires a short learning-curve to use and to master; can make use of autonomous ground control units for repetitive coverage enabling reliable monitoring; and information can easily be analysed and integrated within a GIS with minimal expertise. In addition, these platforms make widespread use of familiar and everyday, off-the-shelf technologies such as WiFi, Go-Pro cameras, Cloud computing, and smartphones or tablets as the control interface, all with a large and well established end-user support base. Whilst there are still some limitations which constrain their use, including battery power and flight time, data connectivity, and payload capacity, such platforms nevertheless offer quick, low-cost, easy, and repeatable ways to capture valuable contextual data for small vineyards, complementing other sources of data used in Precision Viticulture (PV) and vineyard management. As these technologies continue to evolve very quickly, and more lightweight sensors become available for the smaller ground and airborne platforms, this will offer even more possibilities for a wider range of information to be acquired to aid in the monitoring, mapping, and management of small vineyards. The paper is illustrated with some examples from the UK and Switzerland.

  15. Portals for Real-Time Earthquake Data and Forecasting: Challenge and Promise (Invited)

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Feltstykket, R.; Donnellan, A.; Glasscoe, M. T.

    2013-12-01

    Earthquake forecasts have been computed by a variety of countries world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. However, recent events clearly demonstrate that mitigating personal risk is becoming the responsibility of individual members of the public. Open access to a variety of web-based forecasts, tools, utilities and information is therefore required. Portals for data and forecasts present particular challenges, and require the development of both apps and the client/server architecture to deliver the basic information in real time. The basic forecast model we consider is the Natural Time Weibull (NTW) method (JBR et al., Phys. Rev. E, 86, 021106, 2012). This model uses small earthquakes ('seismicity-based models') to forecast the occurrence of large earthquakes, via data-mining algorithms combined with the ANSS earthquake catalog. This method computes large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Localizing these forecasts in space so that global forecasts can be computed in real time presents special algorithmic challenges, which we describe in this talk. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we compute real-time global forecasts at a grid scale of 0.1°. We analyze and monitor the performance of these models using the standard tests, which include the Reliability/Attributes and Receiver Operating Characteristic (ROC) tests. It is clear from much of the analysis that data quality is a major limitation on the accurate computation of earthquake probabilities. We discuss the challenges of serving up these datasets over the web on web-based platforms such as those at www.quakesim.org, www.e-decider.org, and www.openhazards.com.
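
    A heavily hedged sketch of the Weibull-in-natural-time idea summarized above (the parameterization and parameter values are assumptions for illustration; the cited paper should be consulted for the actual NTW formulation): the count of small earthquakes since the last large event plays the role of time, and the conditional probability of a large event within the next delta_n small events follows from a Weibull survival function in that count.

    import math

    def conditional_large_eq_prob(n_small, delta_n, scale, shape):
        """P(large event within the next delta_n small events | n_small elapsed),
        assuming a Weibull distribution in natural time (event counts).
        scale, shape: illustrative Weibull parameters, not fitted values."""
        survival = lambda n: math.exp(-((n / scale) ** shape))
        return 1.0 - survival(n_small + delta_n) / survival(n_small)

    # Example: 1200 small events since the last large one, horizon of 100 more.
    print(round(conditional_large_eq_prob(1200, 100, scale=1500.0, shape=1.4), 3))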

  16. Cooperative high-performance storage in the accelerated strategic computing initiative

    NASA Technical Reports Server (NTRS)

    Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark

    1996-01-01

    The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed, storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.

  17. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating-system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer-based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: in effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms is replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.

  18. The Application of COMSOL Multiphysics Package on the Modelling of Complex 3-D Lithospheric Electrical Resistivity Structures - A Case Study from the Proterozoic Orogenic belt within the North China Craton

    NASA Astrophysics Data System (ADS)

    Guo, L.; Yin, Y.; Deng, M.; Guo, L.; Yan, J.

    2017-12-01

    At present, most magnetotelluric (MT) forward modelling and inversion codes are based on the finite difference method, but its structured mesh gridding cannot be well adapted to conditions with arbitrary topography or complex tectonic structures. By contrast, the finite element method is more accurate in calculating complex and irregular 3-D regions and imposes weaker smoothness requirements on the solution. However, the complexity of mesh generation and limitations of computer capacity have restricted its application. COMSOL Multiphysics is a cross-platform finite element analysis, solver and multiphysics coupling simulation package. It achieves highly accurate numerical simulations with high computational performance and outstanding multi-field bi-directional coupling analysis capability. In addition, its AC/DC and RF modules can be used to easily calculate the electromagnetic responses of complex geological structures. Using an adaptive unstructured grid, the calculation is much faster. In order to improve the discretization of the computational domain, we use the combination of Matlab and COMSOL Multiphysics to establish a general procedure for calculating the MT responses of arbitrary resistivity models. The calculated responses include the surface electric and magnetic field components, impedance components, magnetic transfer functions and phase tensors. The reliability of this procedure is then verified by 1-D, 2-D, 3-D and anisotropic forward modelling tests. Finally, we establish the 3-D lithospheric resistivity model for the Proterozoic Wutai-Hengshan Mts. within the North China Craton by fitting the real MT data collected there. The reliability of the model is also verified by induction vectors and phase tensors. Our model shows more details and better resolution compared with the previously published 3-D model based on the finite difference method. In conclusion, the COMSOL Multiphysics package is suitable for modelling 3-D lithospheric resistivity structures under complex tectonic deformation backgrounds, and could be a good complement to existing finite-difference inversion algorithms.
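
    The 1-D verification step mentioned above is typically done against the analytic layered-earth solution. The sketch below is that standard reference computation (the impedance recursion for a 1-D model), written in Python rather than the Matlab/COMSOL workflow of the paper; layer values are illustrative and sign conventions vary between texts.

    import numpy as np

    MU0 = 4e-7 * np.pi

    def mt1d_impedance(freqs, resistivities, thicknesses):
        """Surface impedance of a 1-D layered earth (standard MT recursion).
        resistivities: ohm-m, one per layer (last entry is the half-space);
        thicknesses:   m, one per layer except the half-space."""
        rho = np.asarray(resistivities, dtype=float)
        Z = []
        for f in np.atleast_1d(freqs):
            omega = 2.0 * np.pi * f
            k = np.sqrt(1j * omega * MU0 / rho)      # layer wavenumbers
            zeta = 1j * omega * MU0 / k              # intrinsic impedances
            z = zeta[-1]                             # homogeneous basement
            for j in range(len(thicknesses) - 1, -1, -1):
                t = np.tanh(k[j] * thicknesses[j])
                z = zeta[j] * (z + zeta[j] * t) / (zeta[j] + z * t)
            Z.append(z)
        return np.array(Z)

    freqs = np.logspace(-3, 2, 6)                                    # Hz
    Z = mt1d_impedance(freqs, [100.0, 10.0, 1000.0], [2000.0, 5000.0])
    rho_app = np.abs(Z) ** 2 / (2 * np.pi * freqs * MU0)             # apparent resistivity
    phase = np.degrees(np.angle(Z))                                  # ~45 deg for a half-space
    print(np.round(rho_app, 1), np.round(phase, 1))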

  19. Development of the virtual research environment for analysis, evaluation and prediction of global climate change impacts on the regional environment

    NASA Astrophysics Data System (ADS)

    Okladnikov, Igor; Gordov, Evgeny; Titov, Alexander; Fazliev, Alexander

    2017-04-01

    Description and the first results of the Russian Science Foundation project "Virtual computational information environment for analysis, evaluation and prediction of the impacts of global climate change on the environment and climate of a selected region" are presented. The project is aimed at developing an Internet-accessible computation and information environment providing specialists who are not skilled in numerical modelling and software design, decision-makers and stakeholders with reliable and easy-to-use tools for in-depth statistical analysis of climatic characteristics, and instruments for detailed analysis, assessment and prediction of the impacts of global climate change on the environment and climate of the targeted region. In the framework of the project, approaches to "cloud" processing and analysis of large geospatial datasets will be developed on the technical platform of the leading Russian institution involved in research of climate change and its consequences. The anticipated results will create a pathway for the development and deployment of a thematic international virtual research laboratory focused on interdisciplinary environmental studies. The VRE under development will comprise the best features and functionality of the earlier developed information and computing system CLIMATE (http://climate.scert.ru/), which is widely used in Northern Eurasia environment studies. The Project includes several major directions of research listed below. 1. Preparation of geo-referenced data sets describing the dynamics of the current and possible future climate and environmental changes in detail. 2. Improvement of methods of analysis of climate change. 3. Enhancing the functionality of the VRE prototype in order to create a convenient and reliable tool for the study of regional social, economic and political consequences of climate change. 4. Using the output of the first three tasks, compilation of the VRE prototype, its validation, preparation of an applicable detailed description of climate change in Western Siberia, and dissemination of the Project results. Results of the first stage of the Project implementation are presented. This work is supported by the Russian Science Foundation grant No. 16-19-10257.

  20. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME

    PubMed Central

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2017-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
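
    For readers unfamiliar with the underlying simulations: spatial stochastic solvers of this kind generalize the well-mixed Gillespie algorithm to reactions plus diffusion on a mesh, and the Monte Carlo ensembles they require are what drive the need for distributed computing. The sketch below is a generic well-mixed Gillespie SSA, not the PyURDME or MOLNs API, applied to a hypothetical birth-death model.

    import random

    def gillespie_ssa(x, propensities, stoichiometry, t_end, seed=0):
        """Minimal well-mixed stochastic simulation algorithm.
        x: dict of species counts; propensities: list of functions of x;
        stoichiometry: per-reaction dict of species count changes."""
        rng, t = random.Random(seed), 0.0
        while t < t_end:
            a = [p(x) for p in propensities]
            a0 = sum(a)
            if a0 == 0.0:
                break
            t += rng.expovariate(a0)                 # time to the next reaction
            pick, threshold = 0, rng.uniform(0.0, a0)
            while threshold > a[pick]:               # choose which reaction fired
                threshold -= a[pick]
                pick += 1
            for species, change in stoichiometry[pick].items():
                x[species] += change
        return t, x

    # Hypothetical birth-death model: 0 -> A at rate 10, A -> 0 at rate 0.1 per molecule.
    x0 = {"A": 0}
    propensities = [lambda s: 10.0, lambda s: 0.1 * s["A"]]
    stoichiometry = [{"A": +1}, {"A": -1}]
    print(gillespie_ssa(x0, propensities, stoichiometry, t_end=100.0))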

  1. Structural system reliability calculation using a probabilistic fault tree analysis method

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
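
    As a simplified illustration of evaluating a fault-tree top event (crude Monte Carlo rather than the adaptive importance sampling of the paper, with a toy tree and made-up bottom-event probabilities): in structural applications each bottom event would itself come from an expensive limit-state computation, which is why importance sampling matters.

    import random

    # Toy fault tree (illustrative): TOP = AND(gate1, gate2),
    # gate1 = OR(e1, e2), gate2 = OR(e2, e3).  Probabilities are made up.
    P = {"e1": 0.02, "e2": 0.01, "e3": 0.05}

    def top_event(state):
        gate1 = state["e1"] or state["e2"]
        gate2 = state["e2"] or state["e3"]
        return gate1 and gate2

    def crude_monte_carlo(n_samples=200_000, seed=1):
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_samples):
            state = {e: rng.random() < p for e, p in P.items()}
            hits += top_event(state)
        return hits / n_samples

    print(f"estimated top-event probability: {crude_monte_carlo():.5f}")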

  2. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    PubMed

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us to discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to foresee the three-dimensional arrangement of the atoms of proteins from their sequences. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame mandatory. In this review we present past and current trends regarding protein folding simulations from both perspectives: hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running this kind of Soft Computing technique.

  3. Applications integration in a hybrid cloud computing environment: modelling and platform

    NASA Astrophysics Data System (ADS)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds and their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds and intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed to improve the feasibility of ISs under hybrid cloud computing environments.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lingerfelt, Eric J; Endeve, Eirik; Hui, Yawei

    Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales, many spectroscopic modes, and now--with the rise of multimodal acquisition systems and the associated processing capability--the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation and manage uploaded data files via an intuitive, cross-platform client user interface. This framework delivers authenticated, "push-button" execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing compute-and-data cloud infrastructures and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF).

  5. An Architectural Concept for Intrusion Tolerance in Air Traffic Networks

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Miner, Paul S.

    2003-01-01

    The goal of an intrusion tolerant network is to continue to provide predictable and reliable communication in the presence of a limited number of compromised network components. The behavior of a compromised network component ranges from a node that no longer responds to a node that is under the control of a malicious entity that is actively trying to cause other nodes to fail. Most current data communication networks do not include support for tolerating unconstrained misbehavior of components in the network. However, the fault tolerance community has developed protocols that provide both predictable and reliable communication in the presence of the worst possible behavior of a limited number of nodes in the system. One may view a malicious entity in a communication network as a node that has failed and is behaving in an arbitrary manner. NASA/Langley Research Center has developed one such fault-tolerant computing platform called SPIDER (Scalable Processor-Independent Design for Electromagnetic Resilience). The protocols and interconnection mechanisms of SPIDER may be adapted to large-scale, distributed communication networks such as would be required for future Air Traffic Management systems. The predictability and reliability guarantees provided by the SPIDER protocols have been formally verified. This analysis can be readily adapted to similar network structures.

  6. The role of atomic lines in radiation heating of the experimental space vehicle Fire-II

    NASA Astrophysics Data System (ADS)

    Surzhikov, S. T.

    2015-10-01

    The results of calculating the convective and radiation heating of the Fire-II experimental space vehicle, allowing for the atomic lines of atoms and ions, using the NERAT-ASTEROID computer platform are presented. This computer platform is intended to solve the complete set of equations of radiation gas dynamics of viscous, heat-conductive, and physically and chemically nonequilibrium gas, as well as radiation transfer. The spectral optical properties of high-temperature gases are calculated using ab initio quasi-classical and quantum-mechanical methods. The transfer of selective thermal radiation is calculated with a line-by-line method on specially generated computational grids over the radiation wavelengths, which make it possible to achieve a noticeable saving of computational resources.

  7. A novel medical image data-based multi-physics simulation platform for computational life sciences.

    PubMed

    Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels

    2013-04-06

    Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models.

  8. BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations

    NASA Astrophysics Data System (ADS)

    Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I.; Strydis, Christos

    2017-12-01

    Objective. The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit for a homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, a NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload’s performance characteristics. Main results. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. Significance. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.

  9. BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations.

    PubMed

    Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I; Strydis, Christos

    2017-12-01

    The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit for a homogeneous acceleration platform to effectively address the complete array of modeling requirements. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, a NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload's performance characteristics. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.

  10. Good Practices in Free-energy Calculations

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher

    2013-01-01

    As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices, are followed. For the most part, the theory upon which these good practices rely has been known for many years, but is often overlooked or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations will be reviewed, demonstrating that, at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. In free-energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, could markedly improve the efficiency and accuracy of free-energy calculations without incurring any additional computational expense.
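
    One of the precepts above, performing the calculation bidirectionally, can be illustrated with the exponential-averaging (Zwanzig) estimator: if the forward and reverse estimates disagree markedly, the overlap between states is poor. The energy differences below are synthetic samples, not simulation output.

    import numpy as np

    kT = 0.593  # kcal/mol near 298 K

    def fep_estimate(delta_u, kT=kT):
        """Zwanzig estimator: dF = -kT ln < exp(-dU/kT) >, with dU = U_target - U_reference
        sampled in the reference ensemble; evaluated with log-sum-exp for stability."""
        du = np.asarray(delta_u, dtype=float) / kT
        m = (-du).max()
        return -kT * (m + np.log(np.mean(np.exp(-du - m))))

    rng = np.random.default_rng(0)
    du_forward = rng.normal(2.0, 1.0, 5000)    # synthetic dU for 0 -> 1 (kcal/mol)
    du_reverse = rng.normal(-1.6, 1.0, 5000)   # synthetic dU for 1 -> 0

    dF_fwd = fep_estimate(du_forward)          # estimate of F1 - F0
    dF_rev = -fep_estimate(du_reverse)         # reverse estimate of F1 - F0
    print(f"forward {dF_fwd:.2f}, reverse {dF_rev:.2f} kcal/mol; "
          f"hysteresis {abs(dF_fwd - dF_rev):.2f}")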

  11. SiMon: Simulation Monitor for Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Xuran Qian, Penny; Cai, Maxwell Xu; Portegies Zwart, Simon; Zhu, Ming

    2017-09-01

    Scientific discovery via numerical simulations is important in modern astrophysics. This relatively new branch of astrophysics has become possible due to the development of reliable numerical algorithms and the high performance of modern computing technologies. These enable the analysis of large collections of observational data and the acquisition of new data via simulations at unprecedented accuracy and resolution. Ideally, simulations run until they reach some pre-determined termination condition, but often other factors cause extensive numerical approaches to break down at an earlier stage. In those cases, processes tend to be interrupted by unexpected events in the software or the hardware, and the scientist has to handle the interrupt manually, which is time-consuming and prone to errors. We present the Simulation Monitor (SiMon) to automate the farming of large and extensive simulation processes. Our method is light-weight: it fully automates the entire workflow management, operates concurrently across multiple platforms and can be installed in user space. Inspired by the process of crop farming, we perceive each simulation as a crop in the field, and running simulations becomes analogous to growing crops. With the development of SiMon we relax the technical aspects of simulation management. The initial package was developed for extensive parameter searches in numerical simulations, but it turns out to work equally well for automating the computational processing and reduction of observational data.

  12. Computing Reliabilities Of Ceramic Components Subject To Fracture

    NASA Technical Reports Server (NTRS)

    Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.

    1992-01-01

    CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate reliability of component in presence of inherent surface- and/or volume-type flaws. Computes measure of reliability by use of finite-element mathematical model applicable to multiple materials, in the sense that the model is made a function of the statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs, obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
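
    Fast-fracture reliability analyses of this kind rest on Weibull weakest-link statistics. The sketch below shows a two-parameter, volume-flaw, uniaxial version of that idea summed over finite elements; it is an illustration only, not the CARES formulation (which also handles surface flaws and multiaxial models), and the stresses, volumes and Weibull parameters are made up.

    import math

    def weibull_failure_probability(elements, sigma_0, m):
        """Pf = 1 - exp(-sum_e V_e * (sigma_e / sigma_0)**m), where sigma_0 is the
        characteristic strength (its units absorb the unit-volume scaling) and m
        is the Weibull modulus.  elements: (max principal stress, volume) pairs
        taken from finite-element output."""
        risk = sum(v * (max(s, 0.0) / sigma_0) ** m for s, v in elements)
        return 1.0 - math.exp(-risk)

    # Hypothetical element stresses (MPa) and volumes (mm^3), sigma_0 = 400, m = 10.
    elements = [(250.0, 2.0), (310.0, 1.5), (180.0, 3.0)]
    print(f"Pf = {weibull_failure_probability(elements, sigma_0=400.0, m=10):.4f}")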

  13. Test-retest reliability of computer-based video analysis of general movements in healthy term-born infants.

    PubMed

    Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars

    2015-10-01

    A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion and the variability of the spatial center of motion of the infant, respectively. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively; and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Web Platform for Sharing Modeling Software in the Field of Nonlinear Optics

    NASA Astrophysics Data System (ADS)

    Dubenskaya, Julia; Kryukov, Alexander; Demichev, Andrey

    2018-02-01

    We describe the prototype of a Web platform intended for sharing software programs for computer modeling in the rapidly developing field of the nonlinear optics phenomena. The suggested platform is built on the top of the HUBZero open-source middleware. In addition to the basic HUBZero installation we added to our platform the capability to run Docker containers via an external application server and to send calculation programs to those containers for execution. The presented web platform provides a wide range of features and might be of benefit to nonlinear optics researchers.

  15. A new method for computing the reliability of consecutive k-out-of-n:F systems

    NASA Astrophysics Data System (ADS)

    Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak

    2016-01-01

    Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as those encountered in telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized as logical connections among the components of the systems placed in lines or circles. In the literature, a great deal of attention has been paid to the study of the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R (R-Project) code based on our proposed method to compute the reliability of linear and circular systems with a great number of components.
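
    For comparison with the paper's method (which also covers circular arrangements and is accompanied by R code), the linear case with independent, identical components can be evaluated by a standard dynamic program over the length of the trailing run of failed components; the sketch below is that textbook computation, not the authors' algorithm.

    def linear_consecutive_kofn_F(n, k, p):
        """Reliability of a linear consecutive-k-out-of-n:F system with i.i.d.
        components that each work with probability p: the system fails iff at
        least k consecutive components fail.  dp[t] = P(no failed run of length
        k so far, and the current trailing run of failed components is t)."""
        dp = [0.0] * k
        dp[0] = 1.0
        for _ in range(n):
            new = [0.0] * k
            new[0] = p * sum(dp)                  # component works: run resets
            for t in range(1, k):
                new[t] = (1.0 - p) * dp[t - 1]    # component fails: run grows
            dp = new                              # runs reaching k drop out (system failure)
        return sum(dp)

    # Example: ten components in a line; the system fails on three consecutive failures.
    print(round(linear_consecutive_kofn_F(n=10, k=3, p=0.9), 6))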

  16. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325

  17. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  18. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and, specifically, for the large-scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
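
    The serial kernel that such hybrid MPI/CUDA/OpenACC implementations accelerate is ordinary k-means (Lloyd's algorithm): a distance/assignment step and a centroid-update step, both of which parallelize naturally across ranks and GPUs. The NumPy sketch below shows that serial core on synthetic multivariate observations; it is illustrative, not the authors' code.

    import numpy as np

    def kmeans(x, k, n_iter=50, seed=0):
        """Plain serial Lloyd's algorithm on an (n observations, d variables) array."""
        rng = np.random.default_rng(seed)
        centers = x[rng.choice(len(x), size=k, replace=False)].copy()
        for _ in range(n_iter):
            # assignment step: squared distance of every observation to every center
            d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            # update step: recompute each cluster mean
            for j in range(k):
                members = x[labels == j]
                if len(members):
                    centers[j] = members.mean(axis=0)
        return labels, centers

    # Synthetic stand-in for multivariate spatio-temporal observations.
    x = np.random.default_rng(1).normal(size=(1000, 5))
    labels, centers = kmeans(x, k=4)
    print(np.bincount(labels))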

  19. e-Collaboration for Earth observation (E-CEO): the Cloud4SAR interferometry data challenge

    NASA Astrophysics Data System (ADS)

    Casu, Francesco; Manunta, Michele; Boissier, Enguerran; Brito, Fabrice; Aas, Christina; Lavender, Samantha; Ribeiro, Rita; Farres, Jordi

    2014-05-01

    The e-Collaboration for Earth Observation (E-CEO) project addresses the technologies and architectures needed to provide a collaborative research Platform for automating data mining and processing, and information extraction experiments. The Platform serves for the implementation of Data Challenge Contests focusing on Information Extraction for Earth Observation (EO) applications. The possibility to implement multiple processors within a Common Software Environment facilitates the validation, evaluation and transparent peer comparison among different methodologies, which is one of the main requirements raised by scientists who develop algorithms in the EO field. In this scenario, we set up a Data Challenge, referred to as Cloud4SAR (http://wiki.services.eoportal.org/tiki-index.php?page=ECEO), to foster the deployment of Interferometric SAR (InSAR) processing chains within a Cloud Computing platform. While a large variety of InSAR processing software tools are available, they require a high level of expertise and complex user interaction to be run effectively. Computing a co-seismic interferogram or a 20-year deformation time series over a volcanic area is not an easy task to perform in a fully unsupervised way and/or in a very short time (hours or less). Benefiting from ESA's E-CEO platform, participants can optimise algorithms in a Virtual Sandbox environment without being expert programmers, and compute results on high-performing Cloud platforms. Cloud4SAR requires solving a relatively easy InSAR problem while trying to maximize the exploitation of the processing capabilities provided by a Cloud Computing infrastructure. The proposed challenge offers two different frameworks, each dedicated to participants with different skills, identified as Beginners and Experts. For both of them, the contest mainly resides in the degree of automation of the deployed algorithms, no matter which one is used, as well as in the capability of taking effective benefit from a parallel computing environment.

  20. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation is of work being conducted by the Applied Sciences Program Office at NASA-Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data were developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open-source process code on a local prototype platform, and then transitioning this code, with its associated environment requirements, into an analogous but memory- and processor-enhanced cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions and then applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
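
    The indices named above have standard normalized-difference definitions, e.g. NDVI = (NIR - Red)/(NIR + Red) and NDMI = (NIR - SWIR)/(NIR + SWIR). A minimal sketch of that per-pixel computation on band arrays follows; the reflectance values are made up, and band handling in a real pipeline would come from the imagery reader.

    import numpy as np

    def normalized_difference(a, b):
        """Generic normalized-difference index (a - b)/(a + b) with a guard
        against division by zero."""
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        denom = a + b
        safe = np.where(denom == 0.0, 1.0, denom)
        return np.where(denom == 0.0, 0.0, (a - b) / safe)

    # Hypothetical reflectance tiles.
    nir = np.array([[0.45, 0.50], [0.40, 0.55]])
    red = np.array([[0.10, 0.12], [0.20, 0.08]])
    swir = np.array([[0.25, 0.22], [0.30, 0.18]])

    print(np.round(normalized_difference(nir, red), 2))    # NDVI
    print(np.round(normalized_difference(nir, swir), 2))   # NDMI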
