Sample records for enabling resource sharing

  1. Valuing Local Knowledge: Indigenous People and Intellectual Property Rights.

    ERIC Educational Resources Information Center

    Brush, Stephen B., Ed.; Stabinsky, Doreen, Ed.

    Intellectual property enables individuals to gain financially from sharing unique and useful knowledge. Compensating indigenous people for sharing their knowledge and resources might both validate indigenous knowledge of biological resources and provide an equitable reward for it, and might promote the conservation of those resources. This book contains…

  2. A cross-domain communication resource scheduling method for grid-enabled communication networks

    NASA Astrophysics Data System (ADS)

    Zheng, Xiangquan; Wen, Xiang; Zhang, Yongding

    2011-10-01

    To support a wide range of different grid applications in environments where various heterogeneous communication networks coexist, it is important to enable advanced capabilities for on-demand, dynamic integration and efficient co-sharing of cross-domain heterogeneous communication resources, thus providing communication services that no single communication resource could afford on its own. Based on plug-and-play co-sharing and soft integration of communication resources, a grid-enabled communication network can be flexibly built up to provide on-demand communication services for grid applications with various quality-of-service requirements. Based on an analysis of joint job and communication resource scheduling in grid-enabled communication networks (GECNs), this paper presents a cross-multi-domain cooperative communication resource scheduling method and describes its main processes, such as traffic requirement resolution for communication services, cross-multi-domain negotiation on communication resources, and on-demand communication resource scheduling. The presented method provides communication service capability for cross-domain traffic delivery in GECNs. Further research towards validation and implementation of the presented method is outlined at the end.
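
    The abstract names three main processes: traffic requirement resolution, cross-multi-domain negotiation, and on-demand resource scheduling. The sketch below chains those three steps in a greatly simplified, hypothetical form; all names, data fields, and selection rules are illustrative assumptions, not the paper's method.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrafficRequirement:
    bandwidth_mbps: float
    max_latency_ms: float

@dataclass
class DomainOffer:
    domain: str
    bandwidth_mbps: float
    latency_ms: float

def resolve_requirement(service_request: dict) -> TrafficRequirement:
    # Step 1: translate an application-level service request into traffic terms.
    return TrafficRequirement(service_request["bandwidth_mbps"],
                              service_request["max_latency_ms"])

def negotiate(req: TrafficRequirement, domains: List[DomainOffer]) -> List[DomainOffer]:
    # Step 2: keep only the domains whose offers can satisfy the requirement.
    return [d for d in domains
            if d.bandwidth_mbps >= req.bandwidth_mbps
            and d.latency_ms <= req.max_latency_ms]

def schedule(candidates: List[DomainOffer]) -> Optional[DomainOffer]:
    # Step 3: pick the lowest-latency candidate for on-demand delivery.
    return min(candidates, key=lambda d: d.latency_ms) if candidates else None

offers = [DomainOffer("satcom", 10, 250), DomainOffer("tactical-radio", 20, 40)]
req = resolve_requirement({"bandwidth_mbps": 15, "max_latency_ms": 100})
print(schedule(negotiate(req, offers)))   # -> the tactical-radio offer
```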

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Gail-Joon

    The project seeks an innovative framework to enable users to access and selectively share resources in distributed environments, enhancing the scalability of information sharing. We have investigated secure sharing and assurance approaches for ad-hoc collaboration, focusing on Grid, Cloud, and ad-hoc network environments.

  4. Integrating TRENCADIS components in gLite to share DICOM medical images and structured reports.

    PubMed

    Blanquer, Ignacio; Hernández, Vicente; Salavert, José; Segrelles, Damià

    2010-01-01

    The problem of sharing medical information among different centres has been tackled by many projects. Several of them target the specific problem of sharing DICOM images and structured reports (DICOM-SR), such as the TRENCADIS project. In this paper we propose sharing and organizing DICOM data and DICOM-SR metadata, benefiting from the existing deployed Grid infrastructures compliant with gLite, such as EGEE or the Spanish NGI. These infrastructures contribute a large amount of storage resources for creating knowledge databases and also provide metadata storage resources (such as AMGA) to semantically organize reports in a tree structure. First, we present the extension of the TRENCADIS architecture to use gLite components (LFC, AMGA, SE) for the sake of increasing interoperability. Using the metadata from DICOM-SR, and maintaining its tree structure, enables federating different but compatible diagnostic structures and simplifies the definition of complex queries. This article describes how to do this in AMGA and shows an approach to efficiently code radiology reports to enable the multi-centre federation of data resources.

  5. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    PubMed

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
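
    The revenue-allocation step relies on the Shapley value. Below is a minimal sketch of computing exact Shapley values for a toy coalition of devices pooling spare CPU; the device names, demand, and revenue figures are invented for illustration and this is not the paper's Crowd-funding model.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values by averaging each player's marginal contribution
    over all orderings of the players (practical only for small coalitions)."""
    shares = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = []
        for p in order:
            before = value(frozenset(coalition))
            coalition.append(p)
            shares[p] += value(frozenset(coalition)) - before
    return {p: s / len(orderings) for p, s in shares.items()}

# Hypothetical example: revenue is earned only if the pooled CPU meets a demand.
cpu = {"dev_a": 4, "dev_b": 2, "dev_c": 2}   # spare cores per device (made up)
demand, revenue = 6, 90.0                    # the app needs 6 cores and pays 90

def coalition_value(coalition):
    return revenue if sum(cpu[p] for p in coalition) >= demand else 0.0

print(shapley_values(list(cpu), coalition_value))
# dev_a receives the largest share because more coalitions succeed with it.
```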

  6. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing

    PubMed Central

    Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network. PMID:28030553

  7. Current knowledge and practices related to seed transmission of sugarcane pathogens and movement of seed

    USDA-ARS?s Scientific Manuscript database

    Sugarcane breeding programs benefit from sharing genetic resources. Traditionally, this has been accomplished by exchanging vegetative planting material of clones of interest. Diseases can spread during this process, and quarantines were established to enable continued sharing of germplasm while min...

  8. K-12 Networking: Breaking Down the Walls of the Learning Environment.

    ERIC Educational Resources Information Center

    Epler, Doris, Ed.

    Networks can benefit school libraries by: (1) offering multiple user access to information; (2) managing and distributing information and data; (3) allowing resources to be shared; (4) improving and enabling communications; (5) improving the management of resources; and (6) creating renewed interest in the library and its resources. As a result,…

  9. HydroShare: An online, collaborative environment for the sharing of hydrologic data and models (Invited)

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Arrigo, J.; Hooper, R. P.; Valentine, D. W.; Maidment, D. R.

    2013-12-01

    HydroShare is an online, collaborative system being developed for sharing hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. HydroShare will use the integrated Rule-Oriented Data System (iRODS) to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
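
    The Resource Data Model described above separates system metadata from science metadata and combines elements common to all resources with type-specific elements. A minimal illustrative sketch of that separation follows; the field names and the time-series example are assumptions for illustration, not HydroShare's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SystemMetadata:
    # Elements managed by the system and common to every resource.
    resource_id: str
    owner: str
    created: str                        # ISO 8601 timestamp
    sharing_status: str = "private"     # e.g. "private" or "public"

@dataclass
class ScienceMetadata:
    # Descriptive elements supplied by the scientist.
    title: str
    abstract: str
    keywords: List[str] = field(default_factory=list)

@dataclass
class Resource:
    # Common core plus a bag of type-specific elements
    # (e.g. time-series variables or model execution requirements).
    system: SystemMetadata
    science: ScienceMetadata
    resource_type: str = "GenericResource"
    type_specific: Dict[str, str] = field(default_factory=dict)
    files: List[str] = field(default_factory=list)

ts = Resource(
    system=SystemMetadata("res-001", "dtarb", "2013-10-01T00:00:00Z"),
    science=ScienceMetadata("Logan River stage", "Stage observations", ["hydrology"]),
    resource_type="TimeSeriesResource",
    type_specific={"variable": "stage", "units": "m"},
    files=["logan_stage.csv"],
)
```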

  10. A Bookmarking Service for Organizing and Sharing URLs

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Wolfe, Shawn R.; Chen, James R.; Mathe, Nathalie; Rabinowitz, Joshua L.

    1997-01-01

    Web browser bookmarking facilities predominate as the method of choice for managing URLs. In this paper, we describe some deficiencies of current bookmarking schemes, and examine an alternative to current approaches. We present WebTagger(TM), an implemented prototype of a personal bookmarking service that provides both individuals and groups with a customizable means of organizing and accessing Web-based information resources. In addition, the service enables users to supply feedback on the utility of these resources relative to their information needs, and provides dynamically-updated ranking of resources based on incremental user feedback. Individuals may access the service from anywhere on the Internet, and require no special software. This service greatly simplifies the process of sharing URLs within groups, in comparison with manual methods involving email. The underlying bookmark organization scheme is more natural and flexible than current hierarchical schemes supported by the major Web browsers, and enables rapid access to stored bookmarks.
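
    The service's dynamically updated ranking is driven by incremental user feedback. The toy sketch below shows one way such incremental feedback ranking could work; the additive scoring rule and class names are assumptions, not the WebTagger algorithm.

```python
from collections import defaultdict

class BookmarkRanker:
    """Toy incremental ranking: each useful/not-useful vote nudges a score,
    and bookmarks are listed per category in descending score order."""

    def __init__(self):
        self.scores = defaultdict(float)      # (category, url) -> score
        self.categories = defaultdict(set)    # category -> set of urls

    def add(self, category, url):
        self.categories[category].add(url)

    def feedback(self, category, url, useful, weight=1.0):
        self.scores[(category, url)] += weight if useful else -weight

    def ranked(self, category):
        return sorted(self.categories[category],
                      key=lambda u: self.scores[(category, u)],
                      reverse=True)

r = BookmarkRanker()
r.add("hydrology", "https://example.org/a")
r.add("hydrology", "https://example.org/b")
r.feedback("hydrology", "https://example.org/b", useful=True)
print(r.ranked("hydrology"))   # b now ranks above a after positive feedback
```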

  11. Risk Information Management Resource (RIMR): modeling an approach to defending against military medical information assurance brain drain

    NASA Astrophysics Data System (ADS)

    Wright, Willie E.

    2003-05-01

    As Military Medical Information Assurance organizations confront modern pressures to downsize and outsource, they battle losing knowledgeable people who leave and take with them what they know. This knowledge is increasingly being recognized as an important resource, and organizations are now taking steps to manage it. In addition, as the pressures for globalization (Castells, 1998) increase, collaboration and cooperation are becoming more distributed and international. Knowledge sharing in a distributed international environment is becoming an essential part of Knowledge Management. This is a major shortfall in the current approach to capturing and sharing knowledge in Military Medical Information Assurance. This paper addresses this challenge by exploring the Risk Information Management Resource (RIMR) as a tool for sharing knowledge using the concept of Communities of Practice. RIMR is based on a framework of sharing and using knowledge, realized through three major components: people, process, and technology. The people aspect enables remote collaboration, supports communities of practice, and rewards and recognizes knowledge sharing while encouraging storytelling. The process aspect enhances knowledge capture and manages information. The technology aspect enhances system integration and data mining, utilizes intelligent agents, and exploits expert systems. These components, coupled with the supporting activities of education and training, technology infrastructure, and information security, enable effective information assurance collaboration.

  12. CloudMan as a platform for tool, data, and analysis distribution.

    PubMed

    Afgan, Enis; Chapman, Brad; Taylor, James

    2012-11-27

    Cloud computing provides an infrastructure that facilitates large-scale computational analysis in a scalable, democratized fashion. However, in this context it is difficult to ensure sharing of an analysis environment and associated data in a scalable and precisely reproducible way. CloudMan (usecloudman.org) enables individual researchers to easily deploy, customize, and share their entire cloud analysis environment, including data, tools, and configurations. With the enabled customization and sharing of instances, CloudMan can be used as a platform for collaboration. The presented solution improves accessibility of cloud resources, tools, and data to the level of an individual researcher and contributes toward reproducibility and transparency of research solutions.

  13. Children Creating Multimodal Stories about a Familiar Environment

    ERIC Educational Resources Information Center

    Kervin, Lisa; Mantei, Jessica

    2017-01-01

    Storytelling is a practice that enables children to apply their literacy skills. This article shares a collaborative literacy strategy devised to enable children to create multimodal stories about their familiar school environment. The strategy uses resources, including the children's own drawings, images from Google Maps, and the Puppet Pals…

  14. The Online Learning Academy.

    ERIC Educational Resources Information Center

    Taylor, Suzanne Liebowitz; McKay, Donald P.; Culp, Ann; Baumann, Stephen; Elinich, Karen

    This paper describes the Online Learning Academy (OLLA), a World Wide Web-based presence that supports the use of telecomputing in the classroom by: connecting teachers to each other and Internet educational resources; fostering the use of online resources and collaboration; encouraging and enabling the sharing of classroom experiences; and…

  15. Networked Resources.

    ERIC Educational Resources Information Center

    Nickerson, Gord

    1991-01-01

    Describes the use and applications of the communications program Telnet for remote log-in, a basic interactive resource sharing service that enables users to connect to any machine on the Internet and conduct a session. The Virtual Terminal--the central component of Telnet--is also described, as well as problems with terminals, services…

  16. Eavesdropping on Electronic Guidebooks: Observing Learning Resources in Shared Listening Environments.

    ERIC Educational Resources Information Center

    Woodruff, Allison; Aoki, Paul M.; Grinter, Rebecca E.; Hurst, Amy; Szymanski, Margaret H.; Thornton, James D.

    This paper describes an electronic guidebook, "Sotto Voce," that enables visitors to share audio information by eavesdropping on each other's guidebook activity. The first section discusses the design and implementation of the guidebook device, key aspects of its user interface, the design goals for the audio environment, the eavesdropping…

  17. University Education in Ontario: Shared Goals & Building Blocks.

    ERIC Educational Resources Information Center

    Council of Ontario Universities, Toronto.

    This brochure suggests five goals that are likely to be shared by the people of Ontario, their government, and the province's publicly funded universities for a strong university system, and identifies the building blocks and resource-related commitments that would enable Ontario universities to achieve these goals. The goals are: (1) all…

  18. Dynamic Collaboration Infrastructure for Hydrologic Science

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.

    2016-12-01

    Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data intensive modeling and analysis. It supports the sharing of and collaboration around "resources" which are social objects defined to include both data and models in a structured standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud based computation for the execution of hydrologic models and analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure available that can be accessed from environments like HydroShare is increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast number of data and computing infrastructure without needing to correspondingly learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may require processing a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure to enable the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." In this presentation we discuss the results of this proof-of-concept prototype which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure in addressing big problems in hydrology.

  19. Multi-Dimensional Optimization for Cloud Based Multi-Tier Applications

    ERIC Educational Resources Information Center

    Jung, Gueyoung

    2010-01-01

    Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By being allowed to host many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these…

  20. Collaborative Visualization Project: shared-technology learning environments for science learning

    NASA Astrophysics Data System (ADS)

    Pea, Roy D.; Gomez, Louis M.

    1993-01-01

    Project-enhanced science learning (PESL) provides students with opportunities for `cognitive apprenticeships' in authentic scientific inquiry using computers for data-collection and analysis. Student teams work on projects with teacher guidance to develop and apply their understanding of science concepts and skills. We are applying advanced computing and communications technologies to augment and transform PESL at-a-distance (beyond the boundaries of the individual school), which is limited today to asynchronous, text-only networking and unsuitable for collaborative science learning involving shared access to multimedia resources such as data, graphs, tables, pictures, and audio-video communication. Our work creates user technology (a Collaborative Science Workbench providing PESL design support and shared synchronous document views, program, and data access; a Science Learning Resource Directory for easy access to resources including two-way video links to collaborators, mentors, museum exhibits, media-rich resources such as scientific visualization graphics), and refines enabling technologies (audiovisual and shared-data telephony, networking) for this PESL niche. We characterize participation scenarios for using these resources and we discuss national networked access to science education expertise.

  1. The Biomedical Resource Ontology (BRO) to Enable Resource Discovery in Clinical and Translational Research

    PubMed Central

    Tenenbaum, Jessica D.; Whetzel, Patricia L.; Anderson, Kent; Borromeo, Charles D.; Dinov, Ivo D.; Gabriel, Davera; Kirschner, Beth; Mirel, Barbara; Morris, Tim; Noy, Natasha; Nyulas, Csongor; Rubenson, David; Saxman, Paul R.; Singh, Harpreet; Whelan, Nancy; Wright, Zach; Athey, Brian D.; Becich, Michael J.; Ginsburg, Geoffrey S.; Musen, Mark A.; Smith, Kevin A.; Tarantal, Alice F.; Rubin, Daniel L; Lyster, Peter

    2010-01-01

    The biomedical research community relies on a diverse set of resources, both within their own institutions and at other research centers. In addition, an increasing number of shared electronic resources have been developed. Without effective means to locate and query these resources, it is challenging, if not impossible, for investigators to be aware of the myriad resources available, or to effectively perform resource discovery when the need arises. In this paper, we describe the development and use of the Biomedical Resource Ontology (BRO) to enable semantic annotation and discovery of biomedical resources. We also describe the Resource Discovery System (RDS) which is a federated, inter-institutional pilot project that uses the BRO to facilitate resource discovery on the Internet. Through the RDS framework and its associated Biositemaps infrastructure, the BRO facilitates semantic search and discovery of biomedical resources, breaking down barriers and streamlining scientific research that will improve human health. PMID:20955817
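
    Semantic annotation with an ontology lets a discovery query for a broad term match resources annotated with more specific terms. The toy sketch below illustrates that idea with an invented three-level is-a hierarchy and made-up resource entries; it is not the BRO content or the RDS implementation.

```python
# Toy is-a hierarchy (child -> parent); not actual BRO terms.
IS_A = {
    "Confocal_Microscope": "Microscope",
    "Microscope": "Instrument",
    "Instrument": "Resource",
}

def ancestors(term):
    out = []
    while term in IS_A:
        term = IS_A[term]
        out.append(term)
    return out

# Resources annotated with their most specific ontology term (invented examples).
RESOURCES = {
    "Core Imaging Facility": "Confocal_Microscope",
    "Sequencing Service": "Instrument",
}

def discover(query_term):
    """Return resources whose annotation equals the query term
    or is a descendant of it in the hierarchy."""
    return [name for name, term in RESOURCES.items()
            if term == query_term or query_term in ancestors(term)]

print(discover("Microscope"))   # finds the confocal facility via the hierarchy
```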

  2. A methodology toward manufacturing grid-based virtual enterprise operation platform

    NASA Astrophysics Data System (ADS)

    Tan, Wenan; Xu, Yicheng; Xu, Wei; Xu, Lida; Zhao, Xianhua; Wang, Li; Fu, Liuliu

    2010-08-01

    Virtual enterprises (VEs) have become one of the main types of organisation in the manufacturing sector, through which consortium companies organise their manufacturing activities. To be competitive, a VE relies on complementary core competences among members through resource sharing and agile manufacturing capacity. Manufacturing grid (M-Grid) is a platform in which production resources can be shared. In this article, an M-Grid-based VE operation platform (MGVEOP) is presented, as it enables the sharing of production resources among geographically distributed enterprises. The performance management system of the MGVEOP is based on the balanced scorecard and has the capacity for self-learning. The study shows that an MGVEOP can make a semi-automated process possible for a VE, and the proposed MGVEOP is efficient and agile.

  3. CloudMan as a platform for tool, data, and analysis distribution

    PubMed Central

    2012-01-01

    Background Cloud computing provides an infrastructure that facilitates large-scale computational analysis in a scalable, democratized fashion. However, in this context it is difficult to ensure sharing of an analysis environment and associated data in a scalable and precisely reproducible way. Results CloudMan (usecloudman.org) enables individual researchers to easily deploy, customize, and share their entire cloud analysis environment, including data, tools, and configurations. Conclusions With the enabled customization and sharing of instances, CloudMan can be used as a platform for collaboration. The presented solution improves accessibility of cloud resources, tools, and data to the level of an individual researcher and contributes toward reproducibility and transparency of research solutions. PMID:23181507

  4. Secure key storage and distribution

    DOEpatents

    Agrawal, Punit

    2015-06-02

    This disclosure describes a distributed, fault-tolerant security system that enables the secure storage and distribution of private keys. In one implementation, the security system includes a plurality of computing resources that independently store private keys provided by publishers and encrypted using a single security system public key. To protect against malicious activity, the security system private key necessary to decrypt the publication private keys is not stored at any of the computing resources. Rather, portions, or shares, of the security system private key are stored at each of the computing resources within the security system, and multiple security systems must communicate and share partial decryptions in order to decrypt the stored private key.
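
    The abstract describes splitting a private key into shares held by independent computing resources so that several of them must cooperate to decrypt. One standard way to realize such sharing is Shamir's threshold scheme; the sketch below is an illustrative stand-in under that assumption, not the patented implementation.

```python
import random

PRIME = 2 ** 127 - 1   # field modulus for a toy example

def split(secret, n_shares, threshold):
    """Split an integer secret into n shares; any `threshold` of them reconstruct it.
    (random is fine for a demo; a real system would use a CSPRNG such as `secrets`.)"""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret (Python 3.8+ for pow(-1))."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789
shares = split(key, n_shares=5, threshold=3)
assert reconstruct(shares[:3]) == key      # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == key
```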

  5. Mobile VR in Education: From the Fringe to the Mainstream

    ERIC Educational Resources Information Center

    Cochrane, Thomas

    2016-01-01

    This paper explores the development of virtual reality (VR) use in education and the emergence of mobile VR based content creation and sharing as a platform for enabling learner-generated content and learner-generated contexts. The author argues that an ecology of resources that maps the user content creation and sharing affordances of mobile…

  6. Dephasing-covariant operations enable asymptotic reversibility of quantum resources

    NASA Astrophysics Data System (ADS)

    Chitambar, Eric

    2018-05-01

    We study the power of dephasing-covariant operations in the resource theories of coherence and entanglement. These are quantum operations whose actions commute with a projective measurement. In the resource theory of coherence, we find that any two states are asymptotically interconvertible under dephasing-covariant operations. This provides a rare example of a resource theory in which asymptotic reversibility can be attained without needing the maximal set of resource nongenerating operations. When extended to the resource theory of entanglement, the resultant operations share similarities with local operations and classical communication, such as prohibiting the increase of all Rényi α-entropies of entanglement under pure-state transformations. However, we show these operations are still strong enough to enable asymptotic reversibility between any two maximally correlated mixed states, even in the multipartite setting.
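
    For readers unfamiliar with the term, the defining commutation relation of a dephasing-covariant operation is stated below in standard notation, with the dephasing map taken relative to the fixed incoherent (measurement) basis; this is the usual textbook definition consistent with the abstract's description, not material from the paper itself.

```latex
% A quantum operation \Lambda is dephasing-covariant when it commutes with the
% completely dephasing map \Delta in the fixed incoherent basis \{|i\rangle\}:
\[
  \Delta(\rho) = \sum_{i} |i\rangle\langle i|\,\rho\,|i\rangle\langle i|,
  \qquad
  \Lambda\bigl(\Delta(\rho)\bigr) = \Delta\bigl(\Lambda(\rho)\bigr)
  \quad \text{for all states } \rho .
\]
```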

  7. Genetics Home Reference: keratoconus

    MedlinePlus


  8. The OCHIN community information network: bringing together community health centers, information technology, and data to support a patient-centered medical village.

    PubMed

    Devoe, Jennifer E; Sears, Abigail

    2013-01-01

    Creating integrated, comprehensive care practices requires access to data and informatics expertise. Information technology (IT) resources are not readily available to individual practices. One model of shared IT resources and learning is a "patient-centered medical village." We describe the OCHIN Community Health Information Network as an example of this model; community practices have come together collectively to form an organization that leverages shared IT expertise, resources, and data, providing members with the means to fully capitalize on new technologies that support improved care. This collaborative facilitates the identification of "problem sheds" through surveillance of network-wide data, enables shared learning regarding best practices, and provides a "community laboratory" for practice-based research. As an example of a community of solution, OCHIN uses health IT and data-sharing innovations to enhance partnerships between public health leaders, clinicians in community health centers, informatics experts, and policy makers. OCHIN community partners benefit from the shared IT resource (eg, a linked electronic health record, centralized data warehouse, informatics, and improvement expertise). This patient-centered medical village provides (1) the collective mechanism to build community-tailored IT solutions, (2) "neighbors" to share data and improvement strategies, and (3) infrastructure to support innovations based on electronic health records across communities, using experimental approaches.

  9. Advancing Collaboration through Hydrologic Data and Model Sharing

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Castronova, A. M.; Miles, B.; Li, Z.; Morsy, M. M.

    2015-12-01

    HydroShare is an online, collaborative system for open sharing of hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around "resources" which are defined primarily by standardized metadata, content data models for each resource type, and an overarching resource data model based on the Open Archives Initiative's Object Reuse and Exchange (OAI-ORE) standard and a hierarchical file packaging system called "BagIt". HydroShare expands the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated to include geospatial and multidimensional space-time datasets commonly used in hydrology. HydroShare also includes new capability for sharing models, model components, and analytical tools and will take advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. It also supports web services and server/cloud based computation operating on resources for the execution of hydrologic models and analysis and visualization of hydrologic data. HydroShare uses iRODS as a network file system for underlying storage of datasets and models. Collaboration is enabled by casting datasets and models as "social objects". Social functions include both private and public sharing, formation of collaborative groups of users, and value-added annotation of shared datasets and models. The HydroShare web interface and social media functions were developed using the Django web application framework coupled to iRODS. Data visualization and analysis is supported through the Tethys Platform web GIS software stack. Links to external systems are supported by RESTful web service interfaces to HydroShare's content. This presentation will introduce the HydroShare functionality developed to date and describe ongoing development of functionality to support collaboration and integration of data and models.

  10. Expanding Approaches for Understanding Impact: Integrating Technology, Curriculum, and Open Educational Resources in Science Education

    ERIC Educational Resources Information Center

    Ye, Lei; Recker, Mimi; Walker, Andrew; Leary, Heather; Yuan, Min

    2015-01-01

    This article reports results from a scale-up study of the impact of a software tool designed to support teachers in the digital learning era. This tool, the Curriculum Customization Service (CCS), enables teachers to access open educational resources from multiple providers, customize them for classroom instruction, and share them with other…

  11. Bringing Together Community Health Centers, Information Technology and Data to Support a Patient-Centered Medical Village from the OCHIN community of solutions

    PubMed Central

    DeVoe, Jennifer E.; Sears, Abigail

    2013-01-01

    Creating integrated, comprehensive care practices requires access to data and informatics expertise. Information technology (IT) resources are not readily available to individual practices. One model of shared IT resources and learning is a “patient-centered medical village.” We describe the OCHIN Community Health Information Network as an example of this model where community practices have come together collectively to form an organization which leverages shared IT expertise, resources, and data, providing members with the means to fully capitalize on new technologies that support improved care. This collaborative facilitates the identification of “problem-sheds” through surveillance of network-wide data, enables shared learning regarding best practices, and provides a “community laboratory” for practice-based research. As an example of a Community of Solution, OCHIN utilizes health IT and data-sharing innovations to enhance partnerships between public health leaders, community health center clinicians, informatics experts, and policy makers. OCHIN community partners benefit from the shared IT resource (e.g. a linked electronic health record (EHR), centralized data warehouse, informatics and improvement expertise). This patient-centered medical village provides (1) the collective mechanism to build community tailored IT solutions, (2) “neighbors” to share data and improvement strategies, and (3) infrastructure to support EHR-based innovations across communities, using experimental approaches. PMID:23657695

  12. The UCSC genome browser: what every molecular biologist should know.

    PubMed

    Mangan, Mary E; Williams, Jennifer M; Kuhn, Robert M; Lathe, Warren C

    2009-10-01

    Electronic data resources can enable molecular biologists to query and display many useful features that make benchwork more efficient and drive new discoveries. The UCSC Genome Browser provides a wealth of data and tools that advance one's understanding of genomic context for many species, enable detailed understanding of data, and provide the ability to interrogate regions of interest. Researchers can also supplement the standard display with their own data to query and share with others. Effective use of these resources has become crucial to biological research today, and this unit describes some practical applications of the UCSC Genome Browser.

  13. Controlling Distributed Planning

    NASA Technical Reports Server (NTRS)

    Clement, Bradley; Barrett, Anthony

    2004-01-01

    A system of software implements an extended version of an approach, denoted shared activity coordination (SHAC), to the interleaving of planning and the exchange of plan information among organizations devoted to different missions that normally communicate infrequently except that they need to collaborate on joint activities and/or the use of shared resources. SHAC enables the planning and scheduling systems of the organizations to coordinate by resolving conflicts while optimizing local planning solutions. The present software provides a framework for modeling and executing communication protocols for SHAC. Shared activities are represented in each interacting planning system to establish consensus on joint activities or to inform the other systems of consumption of a common resource or a change in a shared state. The representations of shared activities are extended to include information on (1) the role(s) of each participant, (2) permissions (defined as specifications of which participant controls what aspects of shared activities and scheduling thereof), and (3) constraints on the parameters of shared activities. Also defined in the software are protocols for changing roles, permissions, and constraints during the course of coordination and execution.
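
    The extended shared-activity representation carries roles, permissions (who controls which aspects), and parameter constraints. The sketch below is a hypothetical data structure showing how such a representation might gate proposed changes; the field names and the downlink example are invented for illustration, not the SHAC implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SharedActivity:
    name: str
    # Which organization plays which role in the joint activity.
    roles: Dict[str, str] = field(default_factory=dict)            # participant -> role
    # Which participant may change which aspect (e.g. "start_time").
    permissions: Dict[str, List[str]] = field(default_factory=dict)
    # Allowed ranges for activity parameters.
    constraints: Dict[str, Tuple[float, float]] = field(default_factory=dict)
    parameters: Dict[str, float] = field(default_factory=dict)

    def propose_change(self, participant, parameter, value):
        """Accept a change only if the participant holds permission and the
        value satisfies the shared constraint."""
        if parameter not in self.permissions.get(participant, []):
            return False
        lo, hi = self.constraints.get(parameter, (float("-inf"), float("inf")))
        if not (lo <= value <= hi):
            return False
        self.parameters[parameter] = value
        return True

act = SharedActivity(
    name="downlink_pass",
    roles={"orbiter": "relay", "rover": "user"},
    permissions={"orbiter": ["start_time"], "rover": ["data_volume"]},
    constraints={"start_time": (0, 86400), "data_volume": (0, 500)},
)
assert act.propose_change("rover", "data_volume", 120)     # permitted and in range
assert not act.propose_change("rover", "start_time", 100)  # rover lacks permission
```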

  14. Grid Enabled Geospatial Catalogue Web Service

    NASA Technical Reports Server (NTRS)

    Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush

    2004-01-01

    Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas over the web. Based on Grid technology and the Open Geospatial Consortium (OGC) Catalogue Service Web Information Model, this paper proposes a new information model for a Geospatial Catalogue Web Service, named GCWS, which can securely provide Grid-based publishing, managing and querying of geospatial data and services, and transparent access to replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and refers to the geospatial data metadata standards from ISO 19115, FGDC and NASA EOS Core System, and to service metadata standards from ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, especially query on-demand data in the virtual community and get it back through the data-related services, which provide functions such as subsetting, reformatting, reprojection, etc. This work facilitates geospatial resource sharing and interoperation under the Grid environment, and makes geospatial resources Grid-enabled and Grid technologies geospatial-enabled. It also enables researchers to focus on science, and not on issues with computing ability, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.

  15. Perspectives of the optical coherence tomography community on code and data sharing

    NASA Astrophysics Data System (ADS)

    Lurie, Kristen L.; Mistree, Behram F. T.; Ellerbee, Audrey K.

    2015-03-01

    As optical coherence tomography (OCT) grows to be a mature and successful field, it is important for the research community to develop a stronger practice of sharing code and data. A prolific culture of sharing can enable new and emerging laboratories to enter the field, allow research groups to gain new exposure and notoriety, and enable benchmarking of new algorithms and methods. Our long-term vision is to build tools to facilitate a stronger practice of sharing within this community. In line with this goal, our first aim was to understand the perceptions and practices of the community with respect to sharing research contributions (i.e., as code and data). We surveyed 52 members of the OCT community using an online polling system. Our main findings indicate that while researchers infrequently share their code and data, they are willing to contribute their research resources to a shared repository, and they believe that such a repository would benefit both their research and the OCT community at large. We plan to use the results of this survey to design a platform targeted to the OCT research community - an effort that ultimately aims to facilitate a more prolific culture of sharing.

  16. A novel resource sharing algorithm based on distributed construction for radiant enclosure problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finzell, Peter; Bryden, Kenneth M.

    This study demonstrates a novel approach to solving inverse radiant enclosure problems based on distributed construction. Specifically, the problem of determining the temperature distribution needed on the heater surfaces to achieve a desired design surface temperature profile is recast as a distributed construction problem in which a shared resource, temperature, is distributed by computational agents moving blocks. The sharing of blocks between agents enables them to achieve their desired local state, which in turn achieves the desired global state. Each agent uses the current state of their local environment and a simple set of rules to determine when to exchange blocks, each block representing a discrete unit of temperature change. This algorithm is demonstrated using the established two-dimensional inverse radiation enclosure problem. The temperature profile on the heater surfaces is adjusted to achieve a desired temperature profile on the design surfaces. The resource sharing algorithm was able to determine the needed temperatures on the heater surfaces to obtain the desired temperature distribution on the design surfaces in the nine cases examined.
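
    The algorithm has agents exchange discrete blocks (units of a shared resource such as temperature) with neighbors using only local state and simple rules. The toy one-dimensional sketch below illustrates that local block-sharing idea with a simplified greedy rule; the geometry, rule, and numbers are stand-ins, not the paper's radiant-enclosure model.

```python
def share_blocks(blocks, targets, max_steps=10000):
    """Each cell holds an integer number of blocks. At every step, a cell with
    more blocks than its target passes one block to a neighboring cell that is
    below its own target, using only local information."""
    blocks = list(blocks)
    for _ in range(max_steps):
        moved = False
        for i, b in enumerate(blocks):
            if b <= targets[i]:
                continue
            for j in (i - 1, i + 1):                      # local neighbors only
                if 0 <= j < len(blocks) and blocks[j] < targets[j]:
                    blocks[i] -= 1
                    blocks[j] += 1
                    moved = True
                    break
        if not moved:
            break
    return blocks

# The total resource is conserved; in this small example the greedy local rule
# reaches the target distribution exactly.
print(share_blocks(blocks=[4, 0, 2, 2], targets=[2, 2, 2, 2]))   # -> [2, 2, 2, 2]
```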

  17. A novel resource sharing algorithm based on distributed construction for radiant enclosure problems

    DOE PAGES

    Finzell, Peter; Bryden, Kenneth M.

    2017-03-06

    This study demonstrates a novel approach to solving inverse radiant enclosure problems based on distributed construction. Specifically, the problem of determining the temperature distribution needed on the heater surfaces to achieve a desired design surface temperature profile is recast as a distributed construction problem in which a shared resource, temperature, is distributed by computational agents moving blocks. The sharing of blocks between agents enables them to achieve their desired local state, which in turn achieves the desired global state. Each agent uses the current state of their local environment and a simple set of rules to determine when to exchange blocks, each block representing a discrete unit of temperature change. This algorithm is demonstrated using the established two-dimensional inverse radiation enclosure problem. The temperature profile on the heater surfaces is adjusted to achieve a desired temperature profile on the design surfaces. The resource sharing algorithm was able to determine the needed temperatures on the heater surfaces to obtain the desired temperature distribution on the design surfaces in the nine cases examined.

  18. Sharing the benefits of genetic resources: from biodiversity to human genetics.

    PubMed

    Schroeder, Doris; Lasén-Díaz, Carolina

    2006-12-01

    Benefit sharing aims to achieve an equitable exchange between the granting of access to a genetic resource and the provision of compensation. The Convention on Biological Diversity (CBD), adopted at the 1992 Earth Summit in Rio de Janeiro, is the only international legal instrument setting out obligations for sharing the benefits derived from the use of biodiversity. The CBD excludes human genetic resources from its scope; however, this article considers whether it should be expanded to include those resources, so as to enable research subjects to claim a share of the benefits to be negotiated on a case-by-case basis. Our conclusion on this question is: 'No, the CBD should not be expanded to include human genetic resources.' There are essential differences between human and non-human genetic resources, and, in the context of research on humans, an essentially fair exchange model is already available between the health care industry and research subjects. Those who contribute to research should receive benefits in the form of accessible new health care products and services, suitable for local health needs and linked to economic prosperity (e.g. jobs). When this exchange model does not apply, as is often the case in developing countries, individually negotiated benefit sharing agreements between researchers and research subjects should not be used as 'window dressing'. Instead, national governments should focus their finances on the best economic investment they could make: the investment in population health and health research as outlined by the World Health Organization's Commission on Macroeconomics and Health, whilst international barriers to such spending need to be removed.

  19. Model My Watershed and BiG CZ Data Portal: Interactive geospatial analysis and hydrological modeling web applications that leverage the Amazon cloud for scientists, resource managers and students

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Tarboton, D. G.; Sazib, N. S.; Horsburgh, J. S.; Cheetham, R.

    2016-12-01

    The Model My Watershed Web app (http://wikiwatershed.org/model/) was designed to enable citizens, conservation practitioners, municipal decision-makers, educators, and students to interactively select any area of interest anywhere in the continental USA to: (1) analyze real land use and soil data for that area; (2) model stormwater runoff and water-quality outcomes; and (3) compare how different conservation or development scenarios could modify runoff and water quality. The BiG CZ Data Portal is a web application for scientists that offers intuitive, high-performance map-based discovery, visualization, access and publication of diverse earth and environmental science data via a map-based interface that simultaneously performs geospatial analysis of selected GIS and satellite raster data for a selected area of interest. The two web applications share a common codebase (https://github.com/WikiWatershed and https://github.com/big-cz), a high-performance geospatial analysis engine (http://geotrellis.io/ and https://github.com/geotrellis), and deployment on the Amazon Web Services (AWS) cloud cyberinfrastructure. Users can use "on-the-fly" rapid watershed delineation over the national elevation model to select their watershed or catchment of interest. The two web applications also share the goal of enabling scientists, resource managers and students alike to share data, analyses and model results. We will present these functioning web applications and their potential to substantially lower the bar for studying and understanding our water resources. We will also present work in progress, including a prototype system for enabling citizen-scientists to register open-source sensor stations (http://envirodiy.org/mayfly/) to stream data into these systems, so that they can be reshared using WaterOneFlow web services.

  20. Method for prefetching non-contiguous data structures

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Brewster, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-05-05

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs the subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, and the pointers are used to determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous but repetitive.
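
    The patent's prefetch idea is that each memory line carries a pointer naming the next line to fetch, so non-contiguous but repetitive access patterns can be prefetched without a predictive algorithm. Below is a small Python simulation of that pointer-directed prefetch, used as a software stand-in for the hardware mechanism; the cache model and numbers are illustrative only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MemoryLine:
    data: bytes
    next_line: Optional[int] = None   # embedded pointer to the line to prefetch next

class PrefetchingCache:
    """Toy model of pointer-directed prefetch: loading a line also pulls in the
    line its embedded pointer names, hiding latency for the next access."""

    def __init__(self, memory):
        self.memory = memory          # address -> MemoryLine
        self.cache = {}
        self.demand_misses = 0

    def read(self, addr):
        if addr not in self.cache:
            self.demand_misses += 1                   # the program waits for memory
            self.cache[addr] = self.memory[addr]
        line = self.cache[addr]
        nxt = line.next_line
        if nxt is not None and nxt not in self.cache:
            self.cache[nxt] = self.memory[nxt]        # background prefetch
        return line.data

# A non-contiguous but repetitive traversal: 0 -> 40 -> 8 -> 72.
memory = {
    0:  MemoryLine(b"head", next_line=40),
    40: MemoryLine(b"node", next_line=8),
    8:  MemoryLine(b"node", next_line=72),
    72: MemoryLine(b"tail"),
}
cache = PrefetchingCache(memory)
for addr in (0, 40, 8, 72):
    cache.read(addr)
print(cache.demand_misses)   # 1 -- only the first access missed; the rest were prefetched
```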

  1. Low latency memory access and synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching for non-contiguous data structures is also disclosed. A memory line is redefined so that in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers are used to determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous but repetitive.

  2. Low latency memory access and synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching for non-contiguous data structures is also disclosed. A memory line is redefined so that in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers are used to determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous but repetitive.

  3. The UCSC Genome Browser: What Every Molecular Biologist Should Know

    PubMed Central

    Mangan, Mary E.; Williams, Jennifer M.; Kuhn, Robert M.; Lathe, Warren C.

    2016-01-01

    Electronic data resources can enable molecular biologists to query and display many useful features that make benchwork more efficient and drive new discoveries. The UCSC Genome Browser provides a wealth of data and tools that advance one’s understanding of genomic context for many species, enable detailed understanding of data, and provide the ability to interrogate regions of interest. Researchers can also supplement the standard display with their own data to query and share with others. Effective use of these resources has become crucial to biological research today, and this unit describes some practical applications of the UCSC Genome Browser. PMID:19816931

  4. Obsessive-Compulsive Disorder - Multiple Languages

    MedlinePlus

    HealthReach resources on Obsessive-Compulsive Disorder (OCD) are available in Russian (Русский), Somali (Af-Soomaali), and Spanish (español).

  5. Empowering Groups that Enable Play

    ERIC Educational Resources Information Center

    Wilson, David Sloan; Marshall, Danielle; Iserhott, Hindi

    2011-01-01

    Creating play environments for children usually requires groups of adults working together. An extensive scientific literature describes how groups function to achieve shared goals in general terms, and groups attempting to empower play may find this literature useful. Design principles for managing natural resources, identified by Elinor Ostrom…

  6. Generating community-built tools for data sharing and analysis in environmental networks

    USGS Publications Warehouse

    Read, Jordan S.; Gries, Corinna; Read, Emily K.; Klug, Jennifer; Hanson, Paul C.; Hipsey, Matthew R.; Jennings, Eleanor; O'Reilley, Catherine; Winslow, Luke A.; Pierson, Don; McBride, Christopher G.; Hamilton, David

    2016-01-01

    Rapid data growth in many environmental sectors has necessitated tools to manage and analyze these data. The development of tools often lags behind the proliferation of data, however, which may slow exploratory opportunities and scientific progress. The Global Lake Ecological Observatory Network (GLEON) collaborative model supports an efficient and comprehensive data–analysis–insight life cycle, including implementations of data quality control checks, statistical calculations/derivations, models, and data visualizations. These tools are community-built and openly shared. We discuss the network structure that enables tool development and a culture of sharing, leading to optimized output from limited resources. Specifically, data sharing and a flat collaborative structure encourage the development of tools that enable scientific insights from these data. Here we provide a cross-section of scientific advances derived from global-scale analyses in GLEON. We document enhancements to science capabilities made possible by the development of analytical tools and highlight opportunities to expand this framework to benefit other environmental networks.

  7. Defense Logistics: Army Should Track Financial Benefits Realized from its Logistics Modernization Program

    DTIC Science & Technology

    2013-11-01

    system does not support certain critical requirements, including enabling the Army to generate auditable financial statements by fiscal year 2017 ... current system will not enable the Army to generate auditable financial statements by 2017, the statutory deadline for this goal. Increment 2, which ... fourth quarter of fiscal year 2017, all three of these enterprise resource planning systems are expected to be fully deployed, to share a common set

  8. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    NASA Astrophysics Data System (ADS)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-06-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and to effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by "Big Data" will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers such as HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  9. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  10. Big heart data: advancing health informatics through data sharing in cardiovascular imaging.

    PubMed

    Suinesiaputra, Avan; Medrano-Gracia, Pau; Cowan, Brett R; Young, Alistair A

    2015-07-01

    The burden of heart disease is rapidly worsening due to the increasing prevalence of obesity and diabetes. Data sharing and open database resources for heart health informatics are important for advancing our understanding of cardiovascular function, disease progression and therapeutics. Data sharing enables valuable information, often obtained at considerable expense and effort, to be reused beyond the specific objectives of the original study. Many government funding agencies and journal publishers are requiring data reuse, and are providing mechanisms for data curation and archival. Tools and infrastructure are available to archive anonymous data from a wide range of studies, from descriptive epidemiological data to gigabytes of imaging data. Meta-analyses can be performed to combine raw data from disparate studies to obtain unique comparisons or to enhance statistical power. Open benchmark datasets are invaluable for validating data analysis algorithms and objectively comparing results. This review provides a rationale for increased data sharing and surveys recent progress in the cardiovascular domain. We also highlight the potential of recent large cardiovascular epidemiological studies enabling collaborative efforts to facilitate data sharing, algorithm benchmarking, disease modeling and statistical atlases.

  11. Making proteomics data accessible and reusable: Current state of proteomics databases and repositories

    PubMed Central

    Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-01-01

    Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has been recently developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently and some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. PMID:25158685

  12. Resource sharing of online teaching materials: The lon-capa project

    NASA Astrophysics Data System (ADS)

    Bauer, Wolfgang

    2004-03-01

    The use of information technology resources in conventional lecture-based courses, in distance-learning offerings, as well as in hybrid courses, is increasing. But this may put an additional burden on faculty, who are now asked to deliver this new content. Additionally, it may require the installation of commercial courseware systems, putting colleges and universities into new financial licensing dependencies. To address exactly these two problems, the LON-CAPA system was developed to provide an open-source, GNU Public License-based courseware system that allows for sharing of educational resources across institutional and disciplinary boundaries. This presentation will focus on both aspects of the system: the courseware capabilities that allow for customized environments for individual students, and the educational resources library that enables teachers to take full advantage of the work of their colleagues. Research results on learning effectiveness, resource and system usage patterns, and customization for different learning styles will be shown. Institutional perceptions of and responses to open-source courseware systems will be discussed.

  13. Open-source, community-driven microfluidics with Metafluidics.

    PubMed

    Kong, David S; Thorsen, Todd A; Babb, Jonathan; Wick, Scott T; Gam, Jeremy J; Weiss, Ron; Carr, Peter A

    2017-06-07

    Microfluidic devices have the potential to automate and miniaturize biological experiments, but open-source sharing of device designs has lagged behind sharing of other resources such as software. Synthetic biologists have used microfluidics for DNA assembly, cell-free expression, and cell culture, but a combination of expense, device complexity, and reliance on custom set-ups hampers their widespread adoption. We present Metafluidics, an open-source, community-driven repository that hosts digital design files, assembly specifications, and open-source software to enable users to build, configure, and operate a microfluidic device. We use Metafluidics to share designs and fabrication instructions for both a microfluidic ring-mixer device and a 32-channel tabletop microfluidic controller. This device and controller are applied to build genetic circuits using standard DNA assembly methods including ligation, Gateway, Gibson, and Golden Gate. Metafluidics is intended to enable a broad community of engineers, DIY enthusiasts, and other nontraditional participants with limited fabrication skills to contribute to microfluidic research.

  14. Understanding factors that influence stakeholder trust of natural resource science and institutions.

    PubMed

    Gray, Steven; Shwom, Rachael; Jordan, Rebecca

    2012-03-01

    Building trust between resource users and natural resource institutions is essential when creating conservation policies that rely on stakeholders to be effective. Trust can enable the public and agencies to engage in cooperative behaviors toward shared goals and address shared problems. Despite the increasing attention that trust has received recently in the environmental management literature, the influence that individual cognitive and behavioral factors may play in influencing levels of trust in resource management institutions, and their associated scientific assessments, remains unclear. This paper uses the case of fisheries management in the northeast to explore the relationships between an individual's knowledge of the resource, perceptions of resource health, and participatory experience on levels of trust. Using survey data collected from 244 avid recreational anglers in the Northeast U.S., we test these relationships using structural equation modeling. Results indicate that participation in fisheries management is associated with increased trust across all aspects of fisheries management. In addition, higher ratings of resource health by anglers are associated with higher levels of trust of state and regional institutions, but not federal institutions or scientific methods.
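
    The structural equation model can be summarized schematically; the equation below is an illustrative sketch with placeholder variable names and coefficients, not the authors' fitted model.

        \text{Trust}_i = \beta_1\,\text{Participation}_i + \beta_2\,\text{PerceivedHealth}_i + \beta_3\,\text{Knowledge}_i + \varepsilon_i

    In the reported results, the participation path is positive across all aspects of fisheries management, while the perceived-health path is positive only for state and regional institutions.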

  15. Understanding Factors That Influence Stakeholder Trust of Natural Resource Science and Institutions

    NASA Astrophysics Data System (ADS)

    Gray, Steven; Shwom, Rachael; Jordan, Rebecca

    2012-03-01

    Building trust between resource users and natural resource institutions is essential when creating conservation policies that rely on stakeholders to be effective. Trust can enable the public and agencies to engage in cooperative behaviors toward shared goals and address shared problems. Despite the increasing attention that trust has received recently in the environmental management literature, the influence that individual cognitive and behavioral factors may play in influencing levels of trust in resource management institutions, and their associated scientific assessments, remains unclear. This paper uses the case of fisheries management in the northeast to explore the relationships between an individual's knowledge of the resource, perceptions of resource health, and participatory experience on levels of trust. Using survey data collected from 244 avid recreational anglers in the Northeast U.S., we test these relationships using structural equation modeling. Results indicate that participation in fisheries management is associated with increased trust across all aspects of fisheries management. In addition, higher ratings of resource health by anglers are associated with higher levels of trust of state and regional institutions, but not federal institutions or scientific methods.

  16. Quitting Smoking - Multiple Languages

    MedlinePlus

    HealthReach patient education materials on quitting smoking are provided in multiple languages, including Arabic (العربية), Bengali (Bangla / বাংলা), Bosnian (bosanski), and Chinese; listed topics include avoiding weight gain when quitting.

  17. Timetabling: A Shared Services Model

    ERIC Educational Resources Information Center

    O'Regan, Carmel

    2012-01-01

    This paper identifies common timetabling issues and options as experienced in Australian universities, and develops a rationale to inform management decisions on a suitable system and the associated policies, procedures, management structure and resources at the University of Newcastle, to enable more effective timetabling in line with the needs…

  18. GLIDE: a grid-based light-weight infrastructure for data-intensive environments

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.

    2005-01-01

    The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.

  19. Social Networking Adapted for Distributed Scientific Collaboration

    NASA Technical Reports Server (NTRS)

    Karimabadi, Homa

    2012-01-01

    Sci-Share is a social networking site with novel, specially designed feature sets to enable simultaneous remote collaboration and sharing of large data sets among scientists. The site will include not only the standard features found on popular consumer-oriented social networking sites such as Facebook and Myspace, but also a number of powerful tools to extend its functionality to a science collaboration site. A Virtual Observatory is a promising technology for making data accessible from various missions and instruments through a Web browser. Sci-Share augments services provided by Virtual Observatories by enabling distributed collaboration and sharing of downloaded and/or processed data among scientists. This will, in turn, increase science returns from NASA missions. Sci-Share also enables better utilization of NASA's high-performance computing resources by providing an easy and central mechanism to access and share large files on users' space or those saved on mass storage. The most common means of remote scientific collaboration today remains the trio of e-mail for electronic communication, FTP for file sharing, and personalized Web sites for dissemination of papers and research results. Each of these tools has well-known limitations. Sci-Share transforms the social networking paradigm into a scientific collaboration environment by offering powerful tools for cooperative discourse and digital content sharing. Sci-Share differentiates itself by serving as an online repository for users' digital content with the following unique features: a) Sharing of any file type, any size, from anywhere; b) Creation of projects and groups for controlled sharing; c) Module for sharing files on HPC (High Performance Computing) sites; d) Universal accessibility of staged files as embedded links on other sites (e.g. Facebook) and tools (e.g. e-mail); e) Drag-and-drop transfer of large files, replacing awkward e-mail attachments (and file size limitations); f) Enterprise-level data and messaging encryption; and g) Easy-to-use intuitive workflow.

  20. A Virtual Bioinformatics Knowledge Environment for Early Cancer Detection

    NASA Technical Reports Server (NTRS)

    Crichton, Daniel; Srivastava, Sudhir; Johnsey, Donald

    2003-01-01

    Discovery of disease biomarkers for cancer is a leading focus of early detection. The National Cancer Institute created a network of collaborating institutions focused on the discovery and validation of cancer biomarkers called the Early Detection Research Network (EDRN). Informatics plays a key role in enabling a virtual knowledge environment that provides scientists real-time access to distributed data sets located at research institutions across the nation. The distributed and heterogeneous nature of the collaboration makes data sharing across institutions very difficult. EDRN has developed a comprehensive informatics effort focused on developing a national infrastructure enabling seamless access, sharing and discovery of science data resources across all EDRN sites. This paper will discuss the EDRN knowledge system architecture, its objectives and its accomplishments.

  1. The Importance and Satisfaction of Collaborative Innovation for Strategic Entrepreneurship

    ERIC Educational Resources Information Center

    Tsai, I-Chang; Lei, Han-Sheng

    2016-01-01

    Building on network, learning, resource-based and real options theories, collaborative innovation through the sharing of ideas, knowledge, expertise, and opportunities can enable both small and large firms to successfully engage in strategic entrepreneurship. We use the real case of a research-oriented organization and its incubator for analysis…

  2. Creation of Reusable Open Textbooks: Insights from the Connexions Repository

    ERIC Educational Resources Information Center

    Rodriguez-Solano, Carlos; Sánchez-Alonso, Salvador; Sicilia, Miguel-Angel

    2015-01-01

    Open textbook initiatives have appeared as an alternative to traditional publishing. These initiatives for the production of alternatively copyrighted educational resources provide a way of sharing materials through the Web. While the open model of peer-produced materials enables the global reuse of textbooks, the combination of fragments to…

  3. Immunization - Multiple Languages

    MedlinePlus

    HealthReach patient education materials on immunization are provided in multiple languages, including Arabic (العربية), Bengali (Bangla / বাংলা), and Chinese (Simplified, Mandarin); listed resources include Global TravEpiNet (GTEN) travelers' materials.

  4. Leg Injuries and Disorders - Multiple Languages

    MedlinePlus

    HealthReach patient education materials on leg injuries and disorders are provided in multiple languages, including Arabic (العربية), Chinese, Simplified (Mandarin dialect) (简体中文), and Chinese, Traditional; listed resources include active leg range of motion exercises.

  5. Text mining resources for the life sciences.

    PubMed

    Przybyła, Piotr; Shardlow, Matthew; Aubin, Sophie; Bossy, Robert; Eckart de Castilho, Richard; Piperidis, Stelios; McNaught, John; Ananiadou, Sophia

    2016-01-01

    Text mining is a powerful technology for quickly distilling key information from vast quantities of biomedical literature. However, to harness this power the researcher must be well versed in the availability, suitability, adaptability, interoperability and comparative accuracy of current text mining resources. In this survey, we give an overview of the text mining resources that exist in the life sciences to help researchers, especially those employed in biocuration, to engage with text mining in their own work. We categorize the various resources under three sections: Content Discovery looks at where and how to find biomedical publications for text mining; Knowledge Encoding describes the formats used to represent the different levels of information associated with content that enable text mining, including those formats used to carry such information between processes; Tools and Services gives an overview of workflow management systems that can be used to rapidly configure and compare domain- and task-specific processes, via access to a wide range of pre-built tools. We also provide links to relevant repositories in each section to enable the reader to find resources relevant to their own area of interest. Throughout this work we give a special focus to resources that are interoperable: those that have the crucial ability to share information, enabling smooth integration and reusability. © The Author(s) 2016. Published by Oxford University Press.

  6. Text mining resources for the life sciences

    PubMed Central

    Shardlow, Matthew; Aubin, Sophie; Bossy, Robert; Eckart de Castilho, Richard; Piperidis, Stelios; McNaught, John; Ananiadou, Sophia

    2016-01-01

    Text mining is a powerful technology for quickly distilling key information from vast quantities of biomedical literature. However, to harness this power the researcher must be well versed in the availability, suitability, adaptability, interoperability and comparative accuracy of current text mining resources. In this survey, we give an overview of the text mining resources that exist in the life sciences to help researchers, especially those employed in biocuration, to engage with text mining in their own work. We categorize the various resources under three sections: Content Discovery looks at where and how to find biomedical publications for text mining; Knowledge Encoding describes the formats used to represent the different levels of information associated with content that enable text mining, including those formats used to carry such information between processes; Tools and Services gives an overview of workflow management systems that can be used to rapidly configure and compare domain- and task-specific processes, via access to a wide range of pre-built tools. We also provide links to relevant repositories in each section to enable the reader to find resources relevant to their own area of interest. Throughout this work we give a special focus to resources that are interoperable—those that have the crucial ability to share information, enabling smooth integration and reusability. PMID:27888231

  7. Shared learning in general practice--facilitators and barriers.

    PubMed

    van de Mortel, Thea; Silberberg, Peter; Ahern, Christine

    2013-03-01

    Capacity for teaching in general practice clinics is limited. Shared learning sessions are one form of vertically integrated teaching that may ameliorate capacity constraints. This study sought to understand the perceptions of general practitioner supervisors, learners and practice staff of the facilitators of shared learning in general practice clinics. Using a grounded theory approach, semistructured interviews were conducted and analysed to generate a theory about the topic. Thirty-five stakeholders from nine general practices participated. Facilitators of shared learning included enabling factors such as small group facilitation skills, space, administrative support and technological resources; reinforcing factors such as targeted funding, and predisposing factors such as participant attributes. Views from multiple stakeholders suggest that the implementation of shared learning in general practice clinics would be supported by an ecological approach that addresses all these factors.

  8. Shared resource control between human and computer

    NASA Technical Reports Server (NTRS)

    Hendler, James; Wilson, Reid

    1989-01-01

    The advantages of an AI system that actively monitors human control of a shared resource (such as a telerobotic manipulator) are presented. A system is described in which a simple AI planning program gains efficiency by monitoring human actions and recognizing when those actions change the system's assumed state of the world. This enables the planner to recognize when an interaction occurs between human actions and system goals, to maintain up-to-date knowledge of the state of the world, to inform the operator when a human action would undo a goal achieved by the system or render a system goal unachievable, and to efficiently replan the establishment of goals after human intervention.
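
    The monitoring idea can be sketched in a few lines: track the assumed world state as a set of facts and check how each observed human action affects achieved goals. The facts, actions, and effects below are hypothetical, not the system described in the paper.

        # Illustrative sketch of shared-control monitoring: the planner tracks
        # world state as a set of facts and warns when an observed human action
        # undoes an achieved goal. Facts and action effects are made up.
        achieved_goals = {"arm_stowed"}
        world = {"arm_stowed", "gripper_open"}

        # observed human actions: (facts added, facts removed)
        action_effects = {
            "deploy_arm": ({"arm_deployed"}, {"arm_stowed"}),
            "close_gripper": ({"gripper_closed"}, {"gripper_open"}),
        }

        def observe(action):
            added, removed = action_effects[action]
            world.difference_update(removed)
            world.update(added)
            undone = achieved_goals - world
            if undone:
                print(f"warning: {action} undid achieved goal(s) {undone}")
            return undone

        observe("deploy_arm")   # warns that 'arm_stowed' was undone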

  9. Shared Canadian Curriculum in Family Medicine (SHARC-FM): Creating a national consensus on relevant and practical training for medical students.

    PubMed

    Keegan, David A; Scott, Ian; Sylvester, Michael; Tan, Amy; Horrey, Kathleen; Weston, W Wayne

    2017-04-01

    In 2006, leaders of undergraduate family medicine education programs faced a series of increasing curriculum mandates in the context of limited time and financial resources. Additionally, it became apparent that a hidden curriculum against family medicine as a career choice was active in medical schools. The Shared Canadian Curriculum in Family Medicine was developed by the Canadian Undergraduate Family Medicine Education Directors and supported by the College of Family Physicians of Canada as a national collaborative project to support medical student training in family medicine clerkship. Its key objective is to enable education leaders to meet their educational mandates, while at the same time countering the hidden curriculum and providing a route to scholarship. The Shared Canadian Curriculum in Family Medicine is an open-access, shared, national curriculum (www.sharcfm.ca). It contains 23 core clinical topics (determined through a modified Delphi process) with demonstrable objectives for each. It also includes low- and medium-fidelity virtual patient cases, point-of-care learning resources (clinical cards), and assessment tools, all aligned with the core topics. French translation of the resources is ongoing. The core topics, objectives, and educational resources have been adopted by medical schools across Canada, according to their needs. The lessons learned from mounting this multi-institutional collaborative project will help others develop their own collaborative curricula. Copyright © the College of Family Physicians of Canada.

  10. Combining the CIDOC CRM and MPEG-7 to Describe Multimedia in Museums.

    ERIC Educational Resources Information Center

    Hunter, Jane

    This paper describes a proposal for an interoperable metadata model, based on international standards, that has been designed to enable the description, exchange and sharing of multimedia resources both within and between cultural institutions. Domain-specific ontologies have been developed by two different ISO Working Groups to standardize the…

  11. Distance Learning and Cloud Computing: "Just Another Buzzword or a Major E-Learning Breakthrough?"

    ERIC Educational Resources Information Center

    Romiszowski, Alexander J.

    2012-01-01

    "Cloud computing is a model for the enabling of ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and other services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This…

  12. Taking Our Seat at the Advocacy Table

    ERIC Educational Resources Information Center

    Laverdure, Patricia

    2017-01-01

    The Policy-Advocacy-Leadership (PAL) column is developed to initiate and facilitate important dialogue about health care and educational policy, and develop and share the knowledge, tools, and resources that enable all of us to be effective advocates for our clients and our practice and leaders in our practice settings and profession. In this…

  13. Axiope tools for data management and data sharing.

    PubMed

    Goddard, Nigel H; Cannon, Robert C; Howell, Fred W

    2003-01-01

    Many areas of biological research generate large volumes of very diverse data. Managing this data can be a difficult and time-consuming process, particularly in an academic environment where there are very limited resources for IT support staff such as database administrators. The most economical and efficient solutions are those that enable scientists with minimal IT expertise to control and operate their own desktop systems. Axiope provides one such solution, Catalyzer, which acts as a flexible cataloging system for creating structured records describing digital resources. The user is able to specify both the content and structure of the information included in the catalog. Information and resources can be shared by a variety of means, including automatically generated sets of web pages. Federation and integration of this information, where needed, is handled by Axiope's Mercat server. Where there is a need for standardization or compatibility of the structures used by different researchers, this can be achieved later by applying user-defined mappings in Mercat. In this way, large-scale data sharing can be achieved without imposing unnecessary constraints or interfering with the way in which individual scientists choose to record and catalog their work. We summarize the key technical issues involved in scientific data management and data sharing, describe the main features and functionality of Axiope Catalyzer and Axiope Mercat, and discuss future directions and requirements for an information infrastructure to support large-scale data sharing and scientific collaboration.
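
    The combination of user-defined record structures with later, user-defined mappings can be sketched simply; the field names and mapping below are hypothetical and do not reflect Catalyzer's or Mercat's actual formats.

        # Illustrative sketch: a researcher-defined catalog record plus a
        # user-defined mapping applied later to align it with a shared schema.
        # Field names and the mapping are hypothetical.
        record = {"img_file": "slice_042.tif", "animal": "rat", "stain": "DAPI"}

        mapping = {                 # local field -> shared-schema field
            "img_file": "resource_uri",
            "animal": "subject_species",
            "stain": "protocol_stain",
        }

        def to_shared_schema(rec, field_map):
            return {field_map.get(k, k): v for k, v in rec.items()}

        print(to_shared_schema(record, mapping))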

  14. Making proteomics data accessible and reusable: current state of proteomics databases and repositories.

    PubMed

    Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-03-01

    Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has been recently developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently and some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. © 2014 The Authors. PROTEOMICS published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Enabling Grid Computing resources within the KM3NeT computing model

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that, located at the bottom of the Mediterranean Sea, will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. Most of these experiments adopt computing models consisting of different Tiers with several computing centres, providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements, we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use case.

  16. Renewable Energy Zone (REZ) Transmission Planning Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Nathan

    A REZ is a geographical area that enables the development of profitable, cost-effective, grid-connected renewable energy (RE). The REZ Transmission Planning Process is a proactive approach to planning, approving, and building transmission infrastructure that connects REZs to the power system. It helps increase the share of solar, wind, and other RE resources in the power system while maintaining reliability and economics, and it focuses on large-scale wind and solar resources that can be developed in sufficient quantities to warrant transmission system expansion and upgrades.

  17. Concierge: Personal Database Software for Managing Digital Research Resources

    PubMed Central

    Sakai, Hiroyuki; Aoyama, Toshihiro; Yamaji, Kazutsuna; Usui, Shiro

    2007-01-01

    This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open-source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp). PMID:18974800
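
    The plug-in extensibility described here follows a familiar registry pattern; the sketch below is illustrative only and does not use Concierge's actual API.

        # Minimal sketch of plug-in extensibility: optional modules register a
        # handler with the core application. Names are hypothetical.
        class PluginRegistry:
            def __init__(self):
                self._plugins = {}

            def register(self, name, handler):
                self._plugins[name] = handler

            def run(self, name, *args, **kwargs):
                return self._plugins[name](*args, **kwargs)

        registry = PluginRegistry()
        registry.register("literature", lambda doi: f"fetched metadata for {doi}")
        registry.register("lab_notebook", lambda text: f"stored note: {text!r}")
        print(registry.run("literature", "10.1000/xyz123"))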

  18. Building and Sustaining International Scientific Partnerships Through Data Sharing

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Yoksas, T.

    2008-05-01

    Understanding global environmental processes and their regional linkages has heightened the importance of strong international scientific partnerships. At the same time, the Internet and its myriad manifestations, along with innovative web services, have amply demonstrated the compounding benefits of cyberinfrastructure and the power of networked communities. The increased globalization of science, especially in solving interdisciplinary Earth system science problems, requires that science be conducted collaboratively by distributed teams of investigators, often involving sharing of knowledge and resources like community models and other tools. The climate system, for example, is far too complex a puzzle to be unraveled by individual investigators or nations. Its understanding requires finding, collecting, integrating, and assimilating data from observations and model simulations from diverse fields and across traditional disciplinary boundaries. For the past two decades, the NSF-sponsored Unidata Program Center has been providing the data services, tools, and cyberinfrastructure leadership that advance Earth system science education and research and enable opportunities for broad participation. Beginning as a collection of US-based, mostly atmospheric science departments, the Unidata community now transcends international boundaries and geoscience disciplines. Today, Unidata technologies are used in many countries on all continents in research, education and operational settings, and in many international projects (e.g., IPCC assessments, International Polar Year, and THORPEX). The program places high value on the transformational changes enabled by such international scientific partnerships and continually provides opportunities to share knowledge, data, tools and other resources to advance geoscience research and education. This talk will provide an overview of Unidata's ongoing efforts to foster international scientific partnerships toward building a globally engaged community of educators and researchers in the geosciences. The presentation will discuss how developments in Earth and Space Science Informatics are enabling new approaches to solving geoscientific problems. The presentation will also describe how Unidata resources are being leveraged by broader initiatives in UCAR and elsewhere.

  19. Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences

    NASA Astrophysics Data System (ADS)

    Schissel, D. P.

    2004-11-01

    The SciDAC Initiative is creating a computational grid designed to advance scientific understanding in fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as easy-to-use, network-available services. Access to services is stressed rather than portability. Services share the same basic security infrastructure so that stakeholders can control their own resources, which also helps ensure fair use of resources. The collaborative control room is being developed using the open-source Access Grid software that enables secure group-to-group collaboration with capabilities beyond teleconferencing, including application sharing and control. The ability to effectively integrate off-site scientists into a dynamic control room will be critical to the success of future international projects like ITER. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. The first grid computational service deployed was the transport code TRANSP, which included tools for run preparation, submission, monitoring and management. This approach saves user sites from the laborious effort of maintaining a complex code while at the same time reducing the burden on developers by avoiding the support of a large number of heterogeneous installations. This tutorial will present the philosophy behind an advanced collaborative environment, give specific examples, and discuss its usage beyond FES.

  20. tranSMART: An Open Source and Community-Driven Informatics and Data Sharing Platform for Clinical and Translational Research.

    PubMed

    Athey, Brian D; Braxenthaler, Michael; Haas, Magali; Guo, Yike

    2013-01-01

    tranSMART is an emerging global open-source, public-private partnership community developing a comprehensive informatics-based analysis and data-sharing cloud platform for clinical and translational research. The tranSMART consortium includes pharmaceutical and other companies, not-for-profits, academic entities, patient advocacy groups, and government stakeholders. The tranSMART value proposition relies on the concept that the global community of users, developers, and stakeholders is the best source of innovation for applications and for useful data. Continued development and use of the tranSMART platform will create a means to enable "pre-competitive" data sharing broadly, saving money and potentially accelerating research translation to cures. Significant transformative effects of tranSMART include: 1) allowing all of its user community to benefit from experts globally, 2) capturing the best of innovation in analytic tools, 3) a growing 'big data' resource, 4) convergent standards, and 5) new informatics-enabled translational science in the pharma, academic, and not-for-profit sectors.

  1. The impact of services that offer individualised funds, shared management, person-centred relationships, and self-direction on the lived experiences of consumers with mental illness.

    PubMed

    Peterson, Sunila; Buchanan, Angus; Falkmer, Torbjorn

    2014-01-01

    Mental health service providers across Australia, including Western Australia (WA), have begun to offer individualised funds, shared management, person-centred and self-directed (SPS) services. No research exists on the impact of SPS services on the lived experiences of these particular consumers. This study explored the impact of a SPS service offered for the first time in WA to consumers with mental illness. Data on sixteen consumers' lived experiences were analysed using an abbreviated grounded theory approach. These data had been developed by the consumers, Guides (staff) and an independent evaluator, and most had been collected prior to the commencement of the study. Three over-arching categories, and related subcategories, emerged indicating that 1) access to individualised funds enabled practical and psychological benefits to consumers; 2) consistent contact in shared management and person-centred relationships enhanced the provision of timely and meaningful staff support to consumers; and 3) high quality shared management and person-centred relationships with staff and the opportunity to self-direct enabled consumers' change and growth. SPS services enhanced consumers' lived experiences and enabled staff to provide and consumers to experience timely access to recovery resources, consistent contact, responsive and high quality support, and self-direction of services. In this, consumers changed, grew and achieved desired recovery experiences. The overall impact of the SPS service seemed to be founded on the goodness of fit between person characteristics of staff and consumers, which enabled rich support that provided for corrective emotional experiences. This enabled consumers to build meaningful and hopeful lives where they started to live with, and beyond, their mental illness.

  2. Bridging Hydroinformatics Services Between HydroShare and SWATShare

    NASA Astrophysics Data System (ADS)

    Merwade, V.; Zhao, L.; Song, C. X.; Tarboton, D. G.; Goodall, J. L.; Stealey, M.; Rajib, A.; Morsy, M. M.; Dash, P. K.; Miles, B.; Kim, I. L.

    2016-12-01

    Many cyberinfrastructure systems in the hydrologic and related domains emerged in the past decade, with more being developed to address various data management and modeling needs. Although clearly beneficial to the broad user community, it is a challenging task to build interoperability across these systems due to various obstacles including technological, organizational, semantic, and social issues. This work presents our experience in developing interoperability between two hydrologic cyberinfrastructure systems: SWATShare and HydroShare. HydroShare is a large-scale online system aiming at enabling the hydrologic user community to share their data, models, and analysis online for solving complex hydrologic research questions. On the other hand, SWATShare is a focused effort to allow SWAT (Soil and Water Assessment Tool) modelers to share, execute, and analyze SWAT models using high-performance computing resources. Making these two systems interoperable required common sign-in through OAuth, sharing of models through common metadata standards, and use of standard web services for implementing key import/export functionalities. As a result, users from either community can leverage the resources and services across these systems without having to manually import, export, or process their models. Overall, this use case is an example that can serve as a model for the interoperability among other systems, as no one system can provide all the functionality needed to address large interdisciplinary problems.
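
    The interoperability pattern described here (OAuth-based sign-in plus metadata exchange over standard web services) can be sketched as follows; the endpoint URL, token handling, and response fields are hypothetical placeholders, not the documented HydroShare or SWATShare APIs.

        # Sketch of the pattern: reuse an OAuth bearer token obtained at sign-in
        # to pull a model resource's metadata from one system so the other can
        # import it. Endpoint and fields are placeholders, not the real APIs.
        import requests

        BASE = "https://hydroshare.example.org/api"   # placeholder URL
        TOKEN = "oauth-access-token-obtained-at-sign-in"

        def fetch_model_metadata(resource_id):
            resp = requests.get(
                f"{BASE}/resource/{resource_id}/metadata/",
                headers={"Authorization": f"Bearer {TOKEN}"},
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()    # e.g. title, model type, spatial coverage

        # meta = fetch_model_metadata("abc123")
        # print(meta["title"])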

  3. UCLA's Molecular Screening Shared Resource: enhancing small molecule discovery with functional genomics and new technology.

    PubMed

    Damoiseaux, Robert

    2014-05-01

    The Molecular Screening Shared Resource (MSSR) offers a comprehensive range of leading-edge high throughput screening (HTS) services including drug discovery, chemical and functional genomics, and novel methods for nano and environmental toxicology. The MSSR is an open access environment with investigators from UCLA as well as from the entire globe. Industrial clients are equally welcome as are non-profit entities. The MSSR is a fee-for-service entity and does not retain intellectual property. In conjunction with the Center for Environmental Implications of Nanotechnology, the MSSR is unique in its dedicated and ongoing efforts towards high throughput toxicity testing of nanomaterials. In addition, the MSSR engages in technology development eliminating bottlenecks from the HTS workflow and enabling novel assays and readouts currently not available.

  4. A virtual community and cyberinfrastructure for collaboration in volcano research and risk mitigation

    NASA Astrophysics Data System (ADS)

    Valentine, G. A.

    2012-12-01

    VHub (short for VolcanoHub, and accessible at vhub.org) is an online platform for collaboration in research and training related to volcanoes, the hazards they pose, and risk mitigation. The underlying concept is to provide a mechanism that enables workers to share information with colleagues around the globe; VHub and similar hub technologies could prove very powerful in collaborating and communicating about circum-Pacific volcanic hazards. Collaboration occurs around several different points: (1) modeling and simulation; (2) data sharing; (3) education and training; (4) volcano observatories; and (5) project-specific groups. VHub promotes modeling and simulation in two ways: (1) some models can be implemented on VHub for online execution, which eliminates the need to download and compile a code on a local computer; VHub can provide a central "warehouse" for such models that should result in broader dissemination. (2) VHub also provides a platform that supports more complex CFD models by enabling the sharing of code development and problem-solving knowledge, benchmarking datasets, and the development of validation exercises. VHub also provides a platform for sharing of data and datasets. The VHub development team is implementing the iRODS data sharing middleware (see irods.org). iRODS allows a researcher to access data that are located at participating data sources around the world (a "cloud" of data) as if the data were housed in a single virtual database. Education and training is another important use of the VHub platform. Audio-video recordings of seminars, PowerPoint slide sets, and educational simulations are all items that can be placed onto VHub for use by the community or by selected collaborators. An important point is that the "manager" of a given educational resource (or any other resource, such as a dataset or a model) can control the privacy of that resource, ranging from private (only accessible by, and known to, specific collaborators) to completely public. Materials for use in the classroom can be shared via VHub. VHub is a very useful platform for project-specific collaborations, with a group site where collaborators share documents, datasets, and maps, and have ongoing discussions using the discussion board function. VHub is funded by the U.S. National Science Foundation, and is participating in development of larger earth-science cyberinfrastructure initiatives (EarthCube), as well as supporting efforts such as the Global Volcano Model.

  5. Strengthening Partnerships between Special Education Teacher Educators and Schools. Induction Insights. Supporting Special Education Teachers-Policymakers [PII-3

    ERIC Educational Resources Information Center

    National Center to Inform Policy and Practice in Special Education Professional Development, 2010

    2010-01-01

    Partnerships among institutions of higher education and school districts are desirable. Partnerships enable organizations to leverage their resources as well as expand and enhance their capabilities. They also provide opportunities for personnel with specialized areas of expertise to address shared challenges. Partnerships take considerable time,…

  6. Dialysis - Multiple Languages

    MedlinePlus

    HealthReach patient education materials on dialysis are provided in multiple languages, including Arabic (العربية), Russian (Русский), Somali (Af-Soomaali), and Spanish (español); listed resources include a bilingual Arabic/English PDF on dialysis.

  7. EPA Web Taxonomy

    EPA Pesticide Factsheets

    EPA's Web Taxonomy is a faceted hierarchical vocabulary used to tag web pages with terms from a controlled vocabulary. Tagging enables search and discovery of EPA's Web-based information assets. EPA's Web Taxonomy is being provided in Simple Knowledge Organization System (SKOS) format. SKOS is a standard for sharing and linking knowledge organization systems that promises to make Federal terminology resources more interoperable.
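
    Because the taxonomy is published as SKOS, it can be read with a standard RDF library; the sketch below uses rdflib, and the file name and download format are assumptions.

        # Sketch: load a SKOS vocabulary with rdflib and list preferred labels
        # and broader/narrower links. The file name and RDF/XML format are
        # assumptions about the EPA download.
        from rdflib import Graph
        from rdflib.namespace import SKOS

        g = Graph()
        g.parse("epa_web_taxonomy.rdf", format="xml")   # placeholder file

        for concept, label in g.subject_objects(SKOS.prefLabel):
            print(f"{concept}: {label}")

        for narrower, broader in g.subject_objects(SKOS.broader):
            print(f"{narrower} is narrower than {broader}")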

  8. Developing a Cloud-Based Online Geospatial Information Sharing and Geoprocessing Platform to Facilitate Collaborative Education and Research

    NASA Astrophysics Data System (ADS)

    Yang, Z. L.; Cao, J.; Hu, K.; Gui, Z. P.; Wu, H. Y.; You, L.

    2016-06-01

    Efficiently discovering and applying geospatial information resources (GIRs) online is critical in the Earth Science domain as well as for cross-disciplinary applications. However, achieving it is challenging due to the heterogeneity, complexity, and privacy of online GIRs. In this article, GeoSquare, a collaborative online geospatial information sharing and geoprocessing platform, was developed to tackle this problem. Specifically, (1) GIR registration and multi-view query functions allow users to publish and discover GIRs more effectively. (2) Online geoprocessing and real-time execution status checking help users process data and conduct analysis without pre-installation of cumbersome professional tools on their own machines. (3) A service chain orchestration function enables domain experts to contribute and share their domain knowledge with community members through workflow modeling. (4) User inventory management allows registered users to collect and manage their own GIRs, monitor their execution status, and track their own geoprocessing histories. In addition, to enhance the flexibility and capacity of GeoSquare, distributed storage and cloud computing technologies are employed. To support interactive teaching and training, GeoSquare adopts rich internet application (RIA) technology to create a user-friendly graphical user interface (GUI). Results show that GeoSquare can integrate and foster collaboration between dispersed GIRs, computing resources, and people. Subsequently, educators and researchers can share and exchange resources in an efficient and harmonious way.
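
    The service chain orchestration function can be pictured as composing geoprocessing steps into a reusable workflow; the step names and operations below are hypothetical and are not GeoSquare code.

        # Illustrative sketch of chaining geoprocessing services into a reusable
        # workflow. Step names and operations are hypothetical.
        def clip_to_region(dataset, region):
            return f"{dataset} clipped to {region}"

        def compute_ndvi(dataset):
            return f"NDVI({dataset})"

        def run_chain(dataset, steps):
            result = dataset
            for step in steps:
                result = step(result)
            return result

        chain = [lambda d: clip_to_region(d, "Yangtze basin"), compute_ndvi]
        print(run_chain("landsat_scene_123", chain))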

  9. Participatory Design of Human-Centered Cyberinfrastructure (Invited)

    NASA Astrophysics Data System (ADS)

    Pennington, D. D.; Gates, A. Q.

    2010-12-01

    Cyberinfrastructure, by definition, is about people sharing resources to achieve outcomes that cannot be reached independently. CI depends not just on creating discoverable resources, or tools that allow those resources to be processed, integrated, and visualized, but on human activation of flows of information across those resources. CI must be centered on human activities. Yet for those CI projects that are directed towards observational science, there are few models for organizing collaborative research in ways that align individual research interests into a collective vision of CI-enabled science. Given that the emerging technologies are themselves expected to change the way science is conducted, it is not simply a matter of conducting requirements analysis on how scientists currently work, or building consensus among the scientists on what is needed. Developing effective CI depends on generating a new, creative vision of problem solving within a community based on computational concepts that are, in some cases, still very abstract and theoretical. The computer science theory may (or may not) be well formalized, but the potential for impact on any particular domain is typically ill-defined. In this presentation we will describe approaches being developed and tested at the CyberShARE Center of Excellence at the University of Texas at El Paso for ill-structured problem solving within cross-disciplinary teams of scientists and computer scientists working on data-intensive environmental and geoscience problems. These approaches deal with the challenges associated with sharing and integrating knowledge across disciplines; the challenges of developing effective teamwork skills in a culture that favors independent effort; and the challenges of evolving shared, focused research goals from ill-structured, vague starting points, all issues that must be confronted by every interdisciplinary CI project. We will introduce visual and semantic-based tools that can enable the collaborative research design process and illustrate their application in designing and developing useful end-to-end data solutions for scientists. Lastly, we will outline areas of future investigation within CyberShARE that we believe have the potential for high impact.

  10. Building new computational models to support health behavior change and maintenance: new opportunities in behavioral research.

    PubMed

    Spruijt-Metz, Donna; Hekler, Eric; Saranummi, Niilo; Intille, Stephen; Korhonen, Ilkka; Nilsen, Wendy; Rivera, Daniel E; Spring, Bonnie; Michie, Susan; Asch, David A; Sanna, Alberto; Salcedo, Vicente Traver; Kukakfa, Rita; Pavel, Misha

    2015-09-01

    Adverse and suboptimal health behaviors and habits are responsible for approximately 40 % of preventable deaths, in addition to their unfavorable effects on quality of life and economics. Our current understanding of human behavior is largely based on static "snapshots" of human behavior, rather than ongoing, dynamic feedback loops of behavior in response to ever-changing biological, social, personal, and environmental states. This paper first discusses how new technologies (i.e., mobile sensors, smartphones, ubiquitous computing, and cloud-enabled processing/computing) and emerging systems modeling techniques enable the development of new, dynamic, and empirical models of human behavior that could facilitate just-in-time adaptive, scalable interventions. The paper then describes concrete steps to the creation of robust dynamic mathematical models of behavior including: (1) establishing "gold standard" measures, (2) the creation of a behavioral ontology for shared language and understanding tools that both enable dynamic theorizing across disciplines, (3) the development of data sharing resources, and (4) facilitating improved sharing of mathematical models and tools to support rapid aggregation of the models. We conclude with the discussion of what might be incorporated into a "knowledge commons," which could help to bring together these disparate activities into a unified system and structure for organizing knowledge about behavior.
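
    The kind of dynamic model the authors call for can be illustrated with a toy first-order system in which a behavior responds to an intervention input; the parameters and update rule below are arbitrary and are not drawn from the paper.

        # Toy sketch of a first-order dynamic model of a health behavior y(t)
        # responding to an intervention input u(t):
        #   y[t+1] = y[t] + (dt / tau) * (gain * u[t] - y[t])
        # Parameters are arbitrary; illustrative only, not the authors' model.
        def simulate(u, y0=0.0, tau=5.0, gain=1.0, dt=1.0):
            y = [y0]
            for u_t in u:
                y.append(y[-1] + (dt / tau) * (gain * u_t - y[-1]))
            return y

        # two weeks of daily prompts (u = 1) followed by two weeks without
        trajectory = simulate([1.0] * 14 + [0.0] * 14)
        print([round(v, 2) for v in trajectory])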

  11. Enabling Linked Science in Global Climate Uncertainty Quantification (UQ) Research

    NASA Astrophysics Data System (ADS)

    Elsethagen, T.; Stephan, E.; Lin, G.; Williams, D.; Banks, E.

    2012-12-01

    This paper shares a real-world global climate UQ science use case and illustrates how a linked science application called Provenance Environment (ProvEn), currently being developed, enables scientific teams to publish, share, link, and discover new links over their UQ research results. UQ results include terascale datasets that are published to an Earth Systems Grid Federation (ESGF) repository. ProvEn demonstrates how a scientific team conducting UQ studies can discover dataset links using its domain knowledgebase, allowing them to better understand the UQ study research objectives, the experimental protocol used, the resulting dataset lineage, related analytical findings, and ancillary literature citations, along with the social network of scientists associated with the study. This research claims that this linked science approach will not only allow scientists to greatly benefit from understanding a particular dataset within a knowledge context, but will also enable the cross-referencing of knowledge among the numerous UQ studies stored in ESGF. ProvEn collects native forms of data provenance resources as the UQ study is carried out. The native data provenance resources can be collected from a variety of sources such as scripts, workflow engine logs, simulation log files, scientific team members, etc. Schema alignment is used to translate the native forms of provenance into a set of W3C PROV-O semantic statements used as a common interchange format that also contains URI references back to resources in the UQ study dataset for querying and cross-referencing. ProvEn leverages Fedora Commons' digital object model in a Resource Oriented Architecture (ROA) (i.e. a RESTful framework) to logically organize and partition native and translated provenance resources by UQ study. The ROA also provides scientists the means to search both native and translated forms of provenance.
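
    The PROV-O interchange step can be sketched with rdflib, linking a derived dataset to the simulation activity that generated it; all URIs and resource names below are placeholders, not ProvEn's actual vocabulary.

        # Minimal sketch of emitting W3C PROV-O statements with rdflib: a
        # derived UQ dataset is linked to the run that generated it and to its
        # source data. All URIs are placeholders.
        from rdflib import Graph, Namespace, RDF
        from rdflib.namespace import PROV

        EX = Namespace("http://example.org/uq-study/")
        g = Graph()
        g.bind("prov", PROV)

        g.add((EX.ensemble_output, RDF.type, PROV.Entity))
        g.add((EX.climate_run_42, RDF.type, PROV.Activity))
        g.add((EX.ensemble_output, PROV.wasGeneratedBy, EX.climate_run_42))
        g.add((EX.ensemble_output, PROV.wasDerivedFrom, EX.forcing_dataset))

        print(g.serialize(format="turtle"))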

  12. BingEO: Enable Distributed Earth Observation Data for Environmental Research

    NASA Astrophysics Data System (ADS)

    Wu, H.; Yang, C.; Xu, Y.

    2010-12-01

    Our planet is facing great environmental challenges including global climate change, environmental vulnerability, extreme poverty, and a shortage of clean, cheap energy. To address these problems, scientists are developing various models to analyze, forecast, and simulate geospatial phenomena in support of critical decision making. These models not only challenge our computing technology, but also challenge us to meet their huge demand for earth observation data. Through various policies and programs, open and free sharing of earth observation data is advocated in earth science. Currently, thousands of data sources are freely available online through open standards such as Web Map Service (WMS), Web Feature Service (WFS) and Web Coverage Service (WCS). Seamless sharing of and access to these resources call for a spatial Cyberinfrastructure (CI) to enable the use of spatial data for the advancement of related applied sciences, including environmental research. Based on the Microsoft Bing Search Engine and Bing Maps, a seamlessly integrated visual tool is under development to bridge the gap between researchers/educators and earth observation data providers. With this tool, earth science researchers/educators can easily and visually find the best data sets for their research and education. The tool includes a registry and its related supporting module on the server side and an integrated portal as its client. The proposed portal, Bing Earth Observation (BingEO), is based on Bing Search and Bing Maps to: 1) use Bing Search to discover Web Map Service (WMS) resources available over the internet; 2) develop and maintain a registry to manage all the available WMS resources and constantly monitor their service quality; 3) allow users to manually register data services; and 4) provide a Bing Maps-based Web application to visualize the data on a high-quality and easy-to-manipulate map platform and enable users to select the best data layers online. Given the amount of earth observation data already accumulated and still growing, BingEO will allow these resources to be utilized more widely, intensively, efficiently and economically in earth science applications.
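    The registry's service-quality monitoring is not specified in the record; the following minimal Python sketch shows one plausible probe, a standard WMS GetCapabilities request timed with a simple availability check. The endpoint URL is a placeholder.

      import time
      import requests

      def probe_wms(base_url, timeout=10):
          """Issue a WMS GetCapabilities request and report availability and latency."""
          params = {"service": "WMS", "request": "GetCapabilities", "version": "1.3.0"}
          start = time.time()
          try:
              resp = requests.get(base_url, params=params, timeout=timeout)
              ok = resp.status_code == 200 and b"WMS_Capabilities" in resp.content
          except requests.RequestException:
              ok = False
          return ok, time.time() - start

      available, latency = probe_wms("http://example.org/geoserver/wms")   # placeholder endpoint
      print(f"available={available} latency={latency:.2f}s")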

  13. The Microbial Resource Research Infrastructure MIRRI: Strength through Coordination

    PubMed Central

    Stackebrandt, Erko; Schüngel, Manuela; Martin, Dunja; Smith, David

    2015-01-01

    Microbial resources have been recognized as essential raw materials for the advancement of health and later for biotechnology, agriculture, food technology and for research in the life sciences, as their enormous abundance and diversity offer an unparalleled source of unexplored solutions. Microbial domain biological resource centres (mBRC) provide live cultures and associated data to foster and support the development of basic and applied science in countries worldwide and especially in Europe, where the density of highly advanced mBRCs is high. The not-for-profit and distributed project MIRRI (Microbial Resource Research Infrastructure) aims to coordinate access to hitherto individually managed resources by developing a pan-European platform which takes the interoperability and accessibility of resources and data to a higher level. Providing a wealth of additional information and linking to datasets such as literature, environmental data, sequences and chemistry will enable researchers to select organisms suitable for their research and enable innovative solutions to be developed. The current independent policies and managed processes will be adapted by partner mBRCs to harmonize holdings, services, training, and accession policy and to share expertise. The infrastructure will improve access to enhanced quality microorganisms in an appropriate legal framework and to resource-associated data in a more interoperable way. PMID:27682123

  14. The Microbial Resource Research Infrastructure MIRRI: Strength through Coordination.

    PubMed

    Stackebrandt, Erko; Schüngel, Manuela; Martin, Dunja; Smith, David

    2015-11-18

    Microbial resources have been recognized as essential raw materials for the advancement of health and later for biotechnology, agriculture, food technology and for research in the life sciences, as their enormous abundance and diversity offer an unparalleled source of unexplored solutions. Microbial domain biological resource centres (mBRC) provide live cultures and associated data to foster and support the development of basic and applied science in countries worldwide and especially in Europe, where the density of highly advanced mBRCs is high. The not-for-profit and distributed project MIRRI (Microbial Resource Research Infrastructure) aims to coordinate access to hitherto individually managed resources by developing a pan-European platform which takes the interoperability and accessibility of resources and data to a higher level. Providing a wealth of additional information and linking to datasets such as literature, environmental data, sequences and chemistry will enable researchers to select organisms suitable for their research and enable innovative solutions to be developed. The current independent policies and managed processes will be adapted by partner mBRCs to harmonize holdings, services, training, and accession policy and to share expertise. The infrastructure will improve access to enhanced quality microorganisms in an appropriate legal framework and to resource-associated data in a more interoperable way.

  15. The OOI Ocean Education Portal: Enabling the Development of Online Data Investigations

    NASA Astrophysics Data System (ADS)

    Lichtenwalner, C. S.; McDonnell, J. D.; Crowley, M. F.; deCharon, A.; Companion, C. J.; Glenn, S. M.

    2016-02-01

    The Ocean Observatories Initiative (OOI) was designed to transform ocean science by establishing a long-term, multi-instrument, multi-platform research infrastructure at 7 arrays around the world. This unprecedented investment in ocean observation, funded by the National Science Foundation, provides a rich opportunity to reshape ocean science education as well. As part of the initial construction effort, an online Ocean Education Portal was developed to support the creation and sharing of educational resources by undergraduate faculty at universities and community colleges. The portal includes a suite of tools that enable the development of online activities for use as group or individual projects, which can be used during lectures or as homework assignments. The site includes: 1) a suite of interactive educational data visualization tools that provide simple and targeted interfaces to interact with OOI datasets; 2) a concept map builder that can be used by both educators and students to build networked diagrams of their knowledge; and 3) a "data investigation" builder that allows faculty to assemble resources into coherent learning modules. The site also includes a "vocabulary navigator" that provides a visual way to discover and learn about the OOI's infrastructure and scientific design. The site allows users to browse an ever-growing database of resources created by the community, and likewise, users can share resources they create with others. As the OOI begins its 25-year operational phase, it is our hope that faculty will be able to use the tools and investigations on the Ocean Education Portal to bring real ocean science research to their undergraduate students.

  16. Tasking and sharing sensing assets using controlled natural language

    NASA Astrophysics Data System (ADS)

    Preece, Alun; Pizzocaro, Diego; Braines, David; Mott, David

    2012-06-01

    We introduce an approach to representing intelligence, surveillance, and reconnaissance (ISR) tasks at a relatively high level in controlled natural language. We demonstrate that this facilitates both human interpretation and machine processing of tasks. More specifically, it allows the automatic assignment of sensing assets to tasks, and the informed sharing of tasks between collaborating users in a coalition environment. To enable automatic matching of sensor types to tasks, we created a machine-processable knowledge representation based on the Military Missions and Means Framework (MMF), and implemented a semantic reasoner to match task types to sensor types. We combined this mechanism with a sensor-task assignment procedure based on a well-known distributed protocol for resource allocation. In this paper, we re-formulate the MMF ontology in Controlled English (CE), a type of controlled natural language designed to be readable by a native English speaker whilst representing information in a structured, unambiguous form to facilitate machine processing. We show how CE can be used to describe both ISR tasks (for example, detection, localization, or identification of particular kinds of object) and sensing assets (for example, acoustic, visual, or seismic sensors, mounted on motes or unmanned vehicles). We show how these representations enable an automatic sensor-task assignment process. Where a group of users are cooperating in a coalition, we show how CE task summaries give users in the field a high-level picture of ISR coverage of an area of interest. This allows them to make efficient use of sensing resources by sharing tasks.
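    The following minimal Python sketch illustrates the flavor of task-to-sensor-type matching described above; the capability table is invented for illustration and does not reproduce the MMF ontology, the CE reasoner, or the distributed assignment protocol used by the authors.

      # Illustrative capability matching: a task is served by sensors covering a required modality.
      TASK_REQUIREMENTS = {
          "detect vehicle": {"acoustic", "seismic", "visual"},
          "identify vehicle": {"visual"},
          "localize gunfire": {"acoustic"},
      }

      SENSOR_CAPABILITIES = {
          "mote-acoustic-07": {"acoustic"},
          "uav-camera-02": {"visual"},
          "mote-seismic-11": {"seismic"},
      }

      def matching_sensors(task):
          """Return sensors whose capabilities intersect the task's required modalities."""
          required = TASK_REQUIREMENTS[task]
          return [name for name, caps in SENSOR_CAPABILITIES.items() if caps & required]

      print(matching_sensors("detect vehicle"))   # all three sensors qualify for detection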

  17. The UCSC Genome Browser: What Every Molecular Biologist Should Know

    PubMed Central

    Mangan, Mary E.; Williams, Jennifer M.; Kuhn, Robert M.; Lathe, Warren C.

    2014-01-01

    Electronic data resources can enable molecular biologists to quickly get information from around the world that a decade ago would have been buried in papers scattered throughout the library. The ability to access, query, and display these data make benchwork much more efficient and drive new discoveries. Increasingly, mastery of software resources and corresponding data repositories is required to fully explore the volume of data generated in biomedical and agricultural research, because only small amounts of data are actually found in traditional publications. The UCSC Genome Browser provides a wealth of data and tools that advance understanding of genomic context for many species, enable detailed analysis of data, and provide the ability to interrogate regions of interest across disparate data sets from a wide variety of sources. Researchers can also supplement the standard display with their own data to query and share this with others. Effective use of these resources has become crucial to biological research today, and this unit describes some practical applications of the UCSC Genome Browser. PMID:24984850

  18. Enabling Discoveries in Earth Sciences Through the Geosciences Network (GEON)

    NASA Astrophysics Data System (ADS)

    Seber, D.; Baru, C.; Memon, A.; Lin, K.; Youn, C.

    2005-12-01

    Taking advantage of state-of-the-art information technology resources, GEON researchers are building a cyberinfrastructure designed to enable data sharing, semantic data integration, high-end computations and 4D visualization in easy-to-use web-based environments. The GEON Network currently allows users to search and register Earth science resources such as data sets (GIS layers, GMT files, GeoTIFF images, ASCII files, relational databases, etc.), software applications or ontologies. Portal-based access mechanisms enable developers to build dynamic user interfaces to conduct advanced processing and modeling efforts across distributed computers and supercomputers. Researchers and educators can access the networked resources through the GEON portal and its portlets, which were developed to support better and more comprehensive scientific and educational studies. For example, the SYNSEIS portlet in GEON enables users to access in near-real time seismic waveforms from the IRIS Data Management Center, easily build a 3D geologic model within the area of the seismic station(s) and the epicenter, and perform a 3D synthetic seismogram analysis to understand the lithospheric structure and earthquake source parameters for any given earthquake in the US. Similarly, GEON's workbench area enables users to create their own work environment; copy, visualize and analyze any data sets within the network; and create subsets of the data sets for their own purposes. Since all these resources are built as part of a Service-Oriented Architecture (SOA), they can also be used in other development platforms. One such platform is the Kepler workflow system, which can access web service-based resources and provides users with graphical programming interfaces to build a model to conduct computations and/or visualization efforts using the networked resources. Developments in the area of semantic integration of the networked datasets continue to advance, and prototype studies can be accessed via the GEON portal at www.geongrid.org.

  19. School Partners in ILLINET. Automation Options for School Library Resource Sharing in Illinois. Final Report [and] Partners in ILLINET. Special Report.

    ERIC Educational Resources Information Center

    Howrey, Mary M.

    This study was funded by the Library Services and Construction Act (LSCA) to enable the Illinois School Library Media Association (ISLMA) to plan the automation of the state's school libraries. The research was intended to identify current national programs of interest to ISLMA, identify current automation programs within Illinois library systems,…

  20. Informatics Infrastructure for the Materials Genome Initiative

    NASA Astrophysics Data System (ADS)

    Dima, Alden; Bhaskarla, Sunil; Becker, Chandler; Brady, Mary; Campbell, Carelyn; Dessauw, Philippe; Hanisch, Robert; Kattner, Ursula; Kroenlein, Kenneth; Newrock, Marcus; Peskin, Adele; Plante, Raymond; Li, Sheng-Yen; Rigodiat, Pierre-François; Amaral, Guillaume Sousa; Trautt, Zachary; Schmitt, Xavier; Warren, James; Youssef, Sharief

    2016-08-01

    A materials data infrastructure that enables the sharing and transformation of a wide range of materials data is an essential part of achieving the goals of the Materials Genome Initiative. We describe two high-level requirements of such an infrastructure as well as an emerging open-source implementation consisting of the Materials Data Curation System and the National Institute of Standards and Technology Materials Resource Registry.

  1. Herzberg, Hygiene and the Motivation to Reuse: Towards a Three-Factor Theory to Explain Motivation to Share and Use OER

    ERIC Educational Resources Information Center

    Pegler, Chris

    2012-01-01

    The list of barriers and enablers that influence the use of open educational resources (OER) is extensive. Factors and influences relating to reuse may have been noted within projects, operating within a short time span, or within specific conditions which limit generalizability. Evidence of reuse in practice has often emerged as isolated examples…

  2. Globus | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Globus software services provide secure cancer research data transfer, synchronization, and sharing in distributed environments at large scale. These services can be integrated into applications and research data gateways, leveraging Globus identity management, single sign-on, search, and authorization capabilities. Globus Genomics integrates Globus with the Galaxy genomics workflow engine and Amazon Web Services to enable cancer genomics analysis that can elastically scale compute resources with demand.
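    A minimal sketch of a managed transfer with the globus-sdk Python package, assuming a valid transfer access token and two endpoint UUIDs; the token, UUIDs, and paths are placeholders, and the Globus Genomics/Galaxy integration is not shown.

      import globus_sdk

      TOKEN = "REPLACE_WITH_TRANSFER_ACCESS_TOKEN"      # obtained via a Globus OAuth2 flow
      SRC = "SOURCE-ENDPOINT-UUID"                      # placeholder endpoint identifiers
      DST = "DESTINATION-ENDPOINT-UUID"

      tc = globus_sdk.TransferClient(authorizer=globus_sdk.AccessTokenAuthorizer(TOKEN))
      tdata = globus_sdk.TransferData(tc, SRC, DST, label="cohort sync", sync_level="checksum")
      tdata.add_item("/sequencing/cohort-123/", "/archive/cohort-123/", recursive=True)
      task = tc.submit_transfer(tdata)
      print("submitted transfer task:", task["task_id"])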

  3. Online collaboration and model sharing in volcanology via VHub.org

    NASA Astrophysics Data System (ADS)

    Valentine, G.; Patra, A. K.; Bajo, J. V.; Bursik, M. I.; Calder, E.; Carn, S. A.; Charbonnier, S. J.; Connor, C.; Connor, L.; Courtland, L. M.; Gallo, S.; Jones, M.; Palma Lizana, J. L.; Moore-Russo, D.; Renschler, C. S.; Rose, W. I.

    2013-12-01

    VHub (short for VolcanoHub, and accessible at vhub.org) is an online platform for barrier-free access to high-end modeling and simulation and for collaboration in research and training related to volcanoes, the hazards they pose, and risk mitigation. The underlying concept is to provide a platform, built upon the successful HUBzero software infrastructure (hubzero.org), that enables workers to collaborate online and to easily share information, modeling and analysis tools, and educational materials with colleagues around the globe. Collaboration occurs around several different points: (1) modeling and simulation; (2) data sharing; (3) education and training; (4) volcano observatories; and (5) project-specific groups. VHub promotes modeling and simulation in two ways. First, some models can be implemented on VHub for online execution; VHub can provide a central warehouse for such models, which should result in broader dissemination. Second, VHub provides a platform that supports the more complex CFD models by enabling the sharing of code development and problem-solving knowledge, benchmarking datasets, and the development of validation exercises. VHub also provides a platform for sharing data and datasets. The VHub development team is implementing the iRODS data-sharing middleware (see irods.org). iRODS allows a researcher to access data located at participating data sources around the world (a cloud of data) as if the data were housed in a single virtual database. Projects associated with VHub are also going to introduce data-driven workflow tools to support multistage analysis processes where computing and data are integrated for model validation, hazard analysis, etc. Audio-video recordings of seminars, PowerPoint slide sets, and educational simulations are all items that can be placed onto VHub for use by the community or by selected collaborators. An important point is that the manager of a given educational resource (or any other resource, such as a dataset or a model) can control the privacy of that resource, ranging from private (only accessible by, and known to, specific collaborators) to completely public. VHub is a very useful platform for project-specific collaborations. With a group site on VHub, collaborators share documents, datasets, and maps, and have ongoing discussions using the discussion board function. VHub is funded by the U.S. National Science Foundation, and is participating in the development of larger earth-science cyberinfrastructure initiatives (EarthCube), as well as supporting efforts such as the Global Volcano Model. Emerging VHub-facilitated efforts include model benchmarking, collaborative code development, and growth in online modeling tools.

  4. GeoChronos: An On-line Collaborative Platform for Earth Observation Scientists

    NASA Astrophysics Data System (ADS)

    Gamon, J. A.; Kiddle, C.; Curry, R.; Markatchev, N.; Zonta-Pastorello, G., Jr.; Rivard, B.; Sanchez-Azofeifa, G. A.; Simmonds, R.; Tan, T.

    2009-12-01

    Recent advances in cyberinfrastructure are offering new solutions to the growing challenges of managing and sharing large data volumes. Web 2.0 and social networking technologies provide the means for scientists to collaborate and share information more effectively. Cloud computing technologies can provide scientists with transparent and on-demand access to applications served over the Internet in a dynamic and scalable manner. Semantic Web technologies allow data to be linked together in a manner understandable by machines, enabling greater automation. Combining all of these technologies enables the creation of very powerful platforms. GeoChronos (http://geochronos.org/), part of a CANARIE Network Enabled Platforms project, is an online collaborative platform that incorporates these technologies to enable members of the earth observation science community to share data and scientific applications and to collaborate more effectively. The GeoChronos portal is built on an open-source social networking platform called Elgg. Elgg provides a full set of social networking functionalities similar to Facebook, including blogs, tags, media/document sharing, wikis, friends/contacts, groups, discussions, message boards, calendars, status, activity feeds and more. An underlying cloud computing infrastructure enables scientists to access dynamically provisioned applications via the portal for visualizing and analyzing data. Users are able to access and run the applications from any computer that has a Web browser and Internet connectivity and do not need to manage and maintain the applications themselves. Semantic Web technologies, such as the Resource Description Framework (RDF), are being employed to relate and link together spectral, satellite, meteorological and other data. Social networking functionality plays an integral part in facilitating the sharing of data and applications. Recent GeoChronos users during the early testing phase have included the IAI International Wireless Sensor Networking Summer School at the University of Alberta and the IAI Tropi-Dry community. Current GeoChronos activities include the development of a web-based spectral library and related analytical and visualization tools, in collaboration with members of the SpecNet community. The GeoChronos portal will be open to all members of the earth observation science community when the project nears completion at the end of 2010.

  5. Assessing Public Metabolomics Metadata, Towards Improving Quality.

    PubMed

    Ferreira, João D; Inácio, Bruno; Salek, Reza M; Couto, Francisco M

    2017-12-13

    Public resources need to be appropriately annotated with metadata in order to make them discoverable, reproducible and traceable, further enabling them to be interoperable or integrated with other datasets. While data-sharing policies exist to promote the annotation process by data owners, these guidelines are still largely ignored. In this manuscript, we analyse automatic measures of metadata quality, and suggest their application as a means to encourage data owners to increase the metadata quality of their resources and submissions, thereby contributing to higher quality data, improved data sharing, and the overall accountability of scientific publications. We analyse these metadata quality measures in the context of a real-world repository of metabolomics data (i.e., MetaboLights), including a manual validation of the measures, and an analysis of their evolution over time. Our findings suggest that the proposed measures can be used to mimic a manual assessment of metadata quality.
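    The paper's actual measures are not reproduced here; the sketch below shows the general shape of one automatic metadata-quality measure (field completeness) in Python, with an illustrative field list rather than the MetaboLights schema.

      REQUIRED_FIELDS = ["organism", "organism_part", "instrument", "study_design", "contact_email"]

      def completeness(metadata: dict) -> float:
          """Fraction of required fields that are present and non-empty."""
          filled = sum(1 for field in REQUIRED_FIELDS if str(metadata.get(field, "")).strip())
          return filled / len(REQUIRED_FIELDS)

      record = {"organism": "Homo sapiens", "instrument": "Q Exactive", "contact_email": ""}
      print(f"completeness = {completeness(record):.2f}")   # 0.40 for this partially annotated record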

  6. Optimizing CMS build infrastructure via Apache Mesos

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; Eulisse, Giulio; Mendez, David; Muzaffar, Shahzad

    2015-12-01

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general-use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos-enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  7. Bridging semantics and syntax with graph algorithms—state-of-the-art of extracting biomedical relations

    PubMed Central

    Uzuner, Özlem; Szolovits, Peter

    2017-01-01

    Research on extracting biomedical relations has received growing attention recently, with numerous biological and clinical applications including those in pharmacogenomics, clinical trial screening and adverse drug reaction detection. The ability to accurately capture both semantic and syntactic structures in text expressing these relations becomes increasingly critical to enable deep understanding of scientific papers and clinical narratives. Shared task challenges have been organized by both bioinformatics and clinical informatics communities to assess and advance the state-of-the-art research. Significant progress has been made in algorithm development and resource construction. In particular, graph-based approaches bridge semantics and syntax, often achieving the best performance in shared tasks. However, a number of problems at the frontiers of biomedical relation extraction continue to pose interesting challenges and present opportunities for great improvement and fruitful research. In this article, we place biomedical relation extraction against the backdrop of its versatile applications, present a gentle introduction to its general pipeline and shared resources, review the current state-of-the-art in methodology advancement, discuss limitations and point out several promising future directions. PMID:26851224
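    As a small illustration of the graph idea behind many of the reviewed systems, the sketch below treats a dependency parse as an undirected graph and extracts the shortest path between two entities, a structure commonly fed to relation classifiers. The parse edges are hard-coded for illustration rather than produced by a real parser.

      import networkx as nx

      # Dependency edges for: "Aspirin significantly reduces the risk of stroke."
      edges = [("reduces", "Aspirin"), ("reduces", "significantly"),
               ("reduces", "risk"), ("risk", "the"), ("risk", "of"), ("of", "stroke")]

      graph = nx.Graph(edges)
      path = nx.shortest_path(graph, source="Aspirin", target="stroke")
      print(" -> ".join(path))   # Aspirin -> reduces -> risk -> of -> stroke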

  8. SEEK: a systems biology data and model management platform.

    PubMed

    Wolstencroft, Katherine; Owen, Stuart; Krebs, Olga; Nguyen, Quyen; Stanford, Natalie J; Golebiewski, Martin; Weidemann, Andreas; Bittkowski, Meik; An, Lihua; Shockley, David; Snoep, Jacky L; Mueller, Wolfgang; Goble, Carole

    2015-07-11

    Systems biology research typically involves the integration and analysis of heterogeneous data types in order to model and predict biological processes. Researchers therefore require tools and resources to facilitate the sharing and integration of data, and for linking of data to systems biology models. There are a large number of public repositories for storing biological data of a particular type, for example transcriptomics or proteomics, and there are several model repositories. However, this silo-type storage of data and models is not conducive to systems biology investigations. Interdependencies between multiple omics datasets and between datasets and models are essential. Researchers require an environment that will allow the management and sharing of heterogeneous data and models in the context of the experiments which created them. The SEEK is a suite of tools to support the management, sharing and exploration of data and models in systems biology. The SEEK platform provides an access-controlled, web-based environment for scientists to share and exchange data and models for day-to-day collaboration and for public dissemination. A plug-in architecture allows the linking of experiments, their protocols, data, models and results in a configurable system that is available 'off the shelf'. Tools to run model simulations, plot experimental data and assist with data annotation and standardisation combine to produce a collection of resources that support analysis as well as sharing. Underlying semantic web resources additionally extract and serve SEEK metadata in RDF (Resource Description Framework). SEEK RDF enables rich semantic queries, both within SEEK and between related resources in the web of Linked Open Data. The SEEK platform has been adopted by many systems biology consortia across Europe. It is a data management environment that has a low barrier of uptake and provides rich resources for collaboration. This paper provides an update on the functions and features of the SEEK software, and describes the use of the SEEK in the SysMO consortium (Systems biology for Micro-organisms) and the VLN (Virtual Liver Network), two large systems biology initiatives with different research aims and different scientific communities.

  9. AAS Career Services

    NASA Astrophysics Data System (ADS)

    Marvel, Kevin B.

    2012-08-01

    The American Astronomical Society provides substantial programs in the area of Career Services. Motivated by the Society's mission to enhance and share humanity's understanding of the Universe, the AAS provides a central resource for advertising positions, interviewing opportunities at its annual winter meeting, and information, workshops and networks to enable astronomers to find employment. The programs of the Society in this area are overseen by an active committee on employment and the AAS Council itself. Additional resources that help characterize the field, its growth and facts about employment, such as salaries and the types of jobs available, are regularly summarized and reported on by the American Institute of Physics.

  10. Equivalence between entanglement and the optimal fidelity of continuous variable teleportation.

    PubMed

    Adesso, Gerardo; Illuminati, Fabrizio

    2005-10-07

    We devise the optimal form of Gaussian resource states enabling continuous-variable teleportation with maximal fidelity. We show that a nonclassical optimal fidelity of N-user teleportation networks is necessary and sufficient for N-party entangled Gaussian resources, yielding an estimator of multipartite entanglement. The entanglement of teleportation is equivalent to the entanglement of formation in a two-user protocol, and to the localizable entanglement in a multiuser one. Finally, we show that the continuous-variable tangle, quantifying entanglement sharing in three-mode Gaussian states, is defined operationally in terms of the optimal fidelity of a tripartite teleportation network.
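    For context (a standard result, not restated in the abstract): with a two-mode squeezed vacuum resource of squeezing parameter r and unit gain, the Braunstein-Kimble coherent-state teleportation fidelity is

      \mathcal{F}_{\mathrm{BK}}(r) \;=\; \frac{1}{1 + e^{-2r}} \;>\; \mathcal{F}_{\mathrm{cl}} \;=\; \tfrac{1}{2} \quad \text{for all } r > 0,

    so any nonzero two-mode squeezing beats the classical benchmark, which is the kind of fidelity-entanglement link the abstract formalizes and optimizes.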

  11. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas

    Mass spectrometry imaging (MSI) enables researchers to probe endogenous molecules directly within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet, even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
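    A minimal h5py sketch of the storage-layout ideas credited above for fast selective access (chunking plus compression in HDF5); the dataset name, shape, and chunk sizes are illustrative, not the actual OpenMSI file schema.

      import numpy as np
      import h5py

      nx_pix, ny_pix, ions = 100, 100, 100000        # image grid and m/z bins per spectrum

      with h5py.File("msi_example.h5", "w") as f:
          dset = f.create_dataset(
              "msidata",
              shape=(nx_pix, ny_pix, ions),
              dtype="uint16",
              chunks=(4, 4, 2048),                   # layout serves both image- and spectrum-style reads
              compression="gzip",
              compression_opts=4,
          )
          dset[0, 0, :] = np.random.randint(0, 1000, ions, dtype=np.uint16)

      with h5py.File("msi_example.h5", "r") as f:
          spectrum = f["msidata"][0, 0, :]           # selective reads touch only the chunks they need
          image = f["msidata"][:, :, 5000]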

  12. Architecture for the Interdisciplinary Earth Data Alliance

    NASA Astrophysics Data System (ADS)

    Richard, S. M.

    2016-12-01

    The Interdisciplinary Earth Data Alliance (IEDA) is leading an EarthCube (EC) Integrative Activity to develop a governance structure and technology framework that enables partner data systems to share technology, infrastructure, and practice for documenting, curating, and accessing heterogeneous geoscience data. The IEDA data facility provides capabilities in an extensible framework that enables domain-specific requirements for each partner system in the Alliance to be integrated into standardized cross-domain workflows. The shared technology infrastructure includes a data submission hub, a domain-agnostic file-based repository, an integrated Alliance catalog and a Data Browser for data discovery across all partner holdings, as well as services for registering identifiers for datasets (DOI) and samples (IGSN). The submission hub will be a platform that facilitates acquisition of cross-domain resource documentation and channels users into domain- and resource-specific workflows tailored for each partner community. We are exploring an event-based message bus architecture with a standardized plug-in interface for adding capabilities. This architecture builds on the EC CINERGI metadata pipeline as well as the message-based architecture of the SEAD project. Plug-in components will perform file introspection to match entities to a data type registry (extending EC Digital Crust and Research Data Alliance work), extract standardized keywords (using CINERGI components), and extract location, cruise, personnel and other metadata linkage information (building on GeoLink and existing IEDA partner components). The submission hub will feed submissions to appropriate partner repositories and service endpoints, targeted by domain and resource type, for distribution. The Alliance governance will adopt patterns (vocabularies, operations, resource types) for self-describing data services using standard HTTP protocol for simplified data access (building on EC GeoWS and other 'RESTful' approaches). Exposure of resource descriptions (datasets and service distributions) for harvesting by commercial search engines as well as geoscience-data-focused crawlers (like the EC B-Cube crawler) will increase the discoverability of IEDA resources with minimal effort by curators.
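    A minimal Python sketch of the event-based submission bus with a plug-in interface described above; the event name and plug-ins are hypothetical stand-ins, not IEDA components.

      from collections import defaultdict

      class SubmissionBus:
          def __init__(self):
              self._handlers = defaultdict(list)

          def register(self, event, handler):
              """Plug-in components subscribe to named events."""
              self._handlers[event].append(handler)

          def publish(self, event, payload):
              for handler in self._handlers[event]:
                  handler(payload)

      def extract_keywords(payload):        # stand-in for a CINERGI-style enrichment step
          payload["keywords"] = ["bathymetry"]

      def mint_identifier(payload):         # stand-in for DOI/IGSN registration
          payload["doi"] = "10.0000/placeholder"

      bus = SubmissionBus()
      bus.register("dataset.submitted", extract_keywords)
      bus.register("dataset.submitted", mint_identifier)

      submission = {"file": "cruise_2016_bathy.nc"}
      bus.publish("dataset.submitted", submission)
      print(submission)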

  13. Intercultural Knowledge Flows in Edge Organizations: Trust as an Enabler

    DTIC Science & Technology

    2005-06-01

    that organization will benefit from that knowledge (Szulanski, 2000). Organizations are not always aware of everything that they know. In order...transfer is immediate and seamless (Shannon & Weaver, 1949), and, in a way, fluid. For example, in a review of the benefits of resource sharing, Hansen...organization benefits both individual members who maintain trust relationships with one another, and the organization as a whole (Fine & Holyfield, 1996

  14. On localization attacks against cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Ge, Linqiang; Yu, Wei; Sistani, Mohammad Ali

    2013-05-01

    One of the key characteristics of cloud computing is the device and location independence that enables the user to access systems regardless of their location. Because cloud computing is heavily based on sharing resources, it is vulnerable to cyber attacks. In this paper, we investigate a localization attack that enables the adversary to leverage central processing unit (CPU) resources to localize the physical location of the server used by victims. By increasing and reducing CPU usage through a malicious virtual machine (VM), the response time from the victim VM will increase and decrease correspondingly. In this way, by embedding the probing signal into the CPU usage and correlating the same pattern in the response time from the victim VM, the adversary can find the location of the victim VM. To determine attack accuracy, we investigate features in both the time and frequency domains. We conduct both theoretical and experimental studies to demonstrate the effectiveness of such an attack.
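    A minimal numpy sketch of the correlation step described above: embed a known on/off probe pattern in CPU load and score how strongly the victim VM's response-time trace tracks it. The traces are simulated here purely to illustrate the signal-processing idea.

      import numpy as np

      rng = np.random.default_rng(0)
      probe = np.tile([1, 0], 50)                                        # known CPU-load probe pattern
      co_resident = probe * 0.03 + rng.normal(0.10, 0.01, probe.size)    # shared CPU leaks the pattern
      remote = rng.normal(0.10, 0.01, probe.size)                        # no shared CPU, no leak

      def normalized_corr(pattern, trace):
          p = (pattern - pattern.mean()) / pattern.std()
          t = (trace - trace.mean()) / trace.std()
          return float(np.mean(p * t))

      print("co-resident score:", round(normalized_corr(probe, co_resident), 2))
      print("remote score:     ", round(normalized_corr(probe, remote), 2))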

  15. Exploring Nurse Manager Support of Evidence-Based Practice: Clinical Nurse Perceptions.

    PubMed

    Caramanica, Laura; Spiva, LeeAnna

    2018-05-01

    The study identifies what constitutes nurse manager (NM) support and other resources that enable clinical nurses (CNs) to engage in evidence-based practice (EBP). Clinical nurses report that NM support enables them to use EBP, but what constitutes NM support is still unclear. Nurse managers, CNs, and EBP mentors received specialized education and used a team approach for EBP. Data were collected preintervention, mid-intervention, and postintervention from observations, interviews, journaling, and surveys. Results demonstrate how NMs can perform their role responsibilities and still engage CNs to develop a spirit of inquiry, seek answers to their clinical questions using EBP, and advance their clinical performance to improve patient outcomes. Four NM supportive behaviors emerged: cultivating a shared EBP vision, ensuring use of EBP, communicating the value of EBP, and providing resources for EBP. Through education and support, NMs describe supportive behaviors necessary for the successful conduct of EBP by CNs.

  16. Enabling the Capture and Sharing of NASA Technical Expertise Through Communities of Practice

    NASA Technical Reports Server (NTRS)

    Topousis, Daria E.; Dennehy, Cornelius J.; Lebsock, Kenneth L.

    2011-01-01

    Historically, engineers at the National Aeronautics and Space Administration (NASA) had few opportunities or incentives to share their technical expertise across the Agency. Its center- and project-focused culture often meant that knowledge never crossed organizational and geographic boundaries. With increasingly complex missions, the closeout of the Shuttle Program, and a new generation entering the workforce, developing a knowledge-sharing culture became critical. To address this need, the Office of the Chief Engineer established communities of practice on the NASA Engineering Network. These communities were strategically aligned with NASA's core competencies in such disciplines as avionics, flight mechanics, life support, propulsion, structures, loads and dynamics, human factors, and guidance, navigation, and control. This paper describes the process used to identify and develop communities, from establishing simple websites that compiled discipline-specific resources to fostering a knowledge-sharing environment through collaborative and interactive technologies. It includes qualitative evidence of improved availability and transfer of knowledge. It focuses on pivotal capabilities that increased knowledge exchange, such as a custom-made Ask An Expert system, community contact lists, publication of key resources, and submission forms that allowed any user to propose content for the sites. It discusses the peer relationships that developed through the communities and the leadership and infrastructure that made them possible.

  17. Resource integration and shared outcomes at the watershed scale

    Treesearch

    Eleanor S. Towns

    2000-01-01

    Shared resources are universal resources that are vital for sustaining communities, enhancing our quality of life and preserving ecosystem health. We have a shared responsibility to conserve shared resources and preserve their integrity for future generations. Resource integration is accomplished through ecosystem management, often at a watershed scale. The shared...

  18. ClearedLeavesDB: an online database of cleared plant leaf images

    PubMed Central

    2014-01-01

    Background Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform, nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. Description The Cleared Leaf Image Database (ClearedLeavesDB) is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding metadata, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. Conclusions We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org. PMID:24678985
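    The record mentions an open API for uploading processed images and trait data but does not document it; the sketch below only illustrates the general pattern of such an upload with the Python requests library. The endpoint path, form fields, and authentication header are hypothetical.

      import requests

      API_URL = "http://clearedleavesdb.org/api/upload"          # hypothetical endpoint
      metadata = {"species": "Quercus rubra", "collection": "example-herbarium", "magnification": "5x"}

      with open("cleared_leaf_001.tif", "rb") as image_file:
          resp = requests.post(
              API_URL,
              data=metadata,
              files={"image": image_file},
              headers={"X-API-Key": "REPLACE_ME"},               # hypothetical auth header
              timeout=60,
          )
      print(resp.status_code)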

  19. ClearedLeavesDB: an online database of cleared plant leaf images.

    PubMed

    Das, Abhiram; Bucksch, Alexander; Price, Charles A; Weitz, Joshua S

    2014-03-28

    Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform, nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. The Cleared Leaf Image Database (ClearedLeavesDB) is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding metadata, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.

  20. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network-available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure, which allows for secure authentication and resource authorization and allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool, and capabilities for sharing displays and analysis tools over local and wide-area networks.

  1. DASH, the data and specimen hub of the National Institute of Child Health and Human Development

    PubMed Central

    Hazra, Rohan; Tenney, Susan; Shlionskaya, Alexandra; Samavedam, Rajni; Baxter, Kristin; Ilekis, John; Weck, Jennifer; Willinger, Marian; Grave, Gilman; Tsilou, Katerina; Songco, David

    2018-01-01

    The benefits of data sharing are well-established and an increasing number of policies require that data be shared upon publication of the main study findings. As data sharing becomes the new norm, there is a heightened need for additional resources to drive efficient data reuse. This article describes the development and implementation of the Data and Specimen Hub (DASH) by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) to promote data sharing from NICHD-funded studies and enable researchers to comply with NIH data sharing policies. DASH's flexible architecture is designed to archive diverse data types and formats from NICHD's broad scientific portfolio in a manner that promotes FAIR data sharing principles. Performance of DASH over the two years since launch is promising: the numbers of available studies and data requests are growing, and three manuscripts have been published from data reanalysis, all within two years of access. Critical success factors included NICHD leadership commitment, stakeholder engagement and close coordination between the governance body and technical team. PMID:29557977

  2. Incorporating Brokers within Collaboration Environments

    NASA Astrophysics Data System (ADS)

    Rajasekar, A.; Moore, R.; de Torcy, A.

    2013-12-01

    A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients to the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within the hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assembles the required input files, automates the execution of the workflow, automatically tracks the provenance of the workflow, and shares the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.
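    A minimal sketch of client-side access to an iRODS data grid using the python-irodsclient package; the host, zone, credentials, and logical path are placeholders rather than DataNet Federation Consortium settings.

      from irods.session import iRODSSession

      with iRODSSession(host="data.example.org", port=1247,
                        user="alice", password="secret", zone="tempZone") as session:
          obj = session.data_objects.get("/tempZone/home/alice/gauge_subset.nc")
          with obj.open("r") as handle:
              header = handle.read(64)     # the logical path hides which storage system holds the bytes
      print(len(header), "bytes read")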

  3. Shared protection based virtual network mapping in space division multiplexing optical networks

    NASA Astrophysics Data System (ADS)

    Zhang, Huibin; Wang, Wei; Zhao, Yongli; Zhang, Jie

    2018-05-01

    Space Division Multiplexing (SDM) has been introduced to improve the capacity of optical networks. In SDM optical networks, there are multiple cores/modes in each fiber link, and spectrum resources are multiplexed in both the frequency and core/mode dimensions. Enabled by network virtualization technology, one SDM optical network substrate can be shared by several virtual network operators. As with point-to-point connection services, virtual networks (VNs) also need a certain level of survivability to guard against network failures. Based on customers' heterogeneous requirements on the survivability of their virtual networks, this paper studies the shared-protection-based VN mapping problem and proposes a Minimum Free Frequency Slots (MFFS) mapping algorithm to improve spectrum efficiency. Simulation results show that the proposed algorithm optimizes SDM optical networks significantly in terms of blocking probability and spectrum utilization.
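    The paper's full mapping and shared-protection logic is not reproduced here; the sketch below only illustrates a plausible reading of the MFFS selection rule suggested by its name: among candidate cores that can carry the demand, prefer the one with the fewest free frequency slots so that large contiguous blocks are preserved. The candidate data are invented.

      def select_core(candidates, demand_slots):
          """candidates: {core_id: free_slots}; return the feasible core with minimum free slots."""
          feasible = {core: free for core, free in candidates.items() if free >= demand_slots}
          if not feasible:
              return None                          # blocked: no core can host the virtual link
          return min(feasible, key=feasible.get)

      cores = {"core-1": 40, "core-2": 12, "core-3": 25}
      print(select_core(cores, demand_slots=10))   # core-2: the tightest fit that still satisfies demand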

  4. The UCSC Genome Browser: What Every Molecular Biologist Should Know.

    PubMed

    Mangan, Mary E; Williams, Jennifer M; Kuhn, Robert M; Lathe, Warren C

    2014-07-01

    Electronic data resources can enable molecular biologists to quickly get information from around the world that a decade ago would have been buried in papers scattered throughout the library. The ability to access, query, and display these data makes benchwork much more efficient and drives new discoveries. Increasingly, mastery of software resources and corresponding data repositories is required to fully explore the volume of data generated in biomedical and agricultural research, because only small amounts of data are actually found in traditional publications. The UCSC Genome Browser provides a wealth of data and tools that advance understanding of genomic context for many species, enable detailed analysis of data, and provide the ability to interrogate regions of interest across disparate data sets from a wide variety of sources. Researchers can also supplement the standard display with their own data to query and share this with others. Effective use of these resources has become crucial to biological research today, and this unit describes some practical applications of the UCSC Genome Browser. Copyright © 2014 John Wiley & Sons, Inc.

  5. High-resolution digital brain atlases: a Hubble telescope for the brain.

    PubMed

    Jones, Edward G; Stone, James M; Karten, Harvey J

    2011-05-01

    We describe implementation of a method for digitizing at microscopic resolution brain tissue sections containing normal and experimental data and for making the content readily accessible online. Web-accessible brain atlases and virtual microscopes for online examination can be developed using existing computer and internet technologies. Resulting databases, made up of hierarchically organized, multiresolution images, enable rapid, seamless navigation through the vast image datasets generated by high-resolution scanning. Tools for visualization and annotation of virtual microscope slides enable remote and universal data sharing. Interactive visualization of a complete series of brain sections digitized at subneuronal levels of resolution offers fine grain and large-scale localization and quantification of many aspects of neural organization and structure. The method is straightforward and replicable; it can increase accessibility and facilitate sharing of neuroanatomical data. It provides an opportunity for capturing and preserving irreplaceable, archival neurohistological collections and making them available to all scientists in perpetuity, if resources could be obtained from hitherto uninterested agencies of scientific support. © 2011 New York Academy of Sciences.

  6. Grid enablement of OpenGeospatial Web Services: the G-OWS Working Group

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo

    2010-05-01

    In recent decades, two main paradigms for resource sharing have emerged and reached maturity: the Web and the Grid. Both have proved suitable for building Distributed Computing Infrastructures (DCIs) supporting the coordinated sharing of resources (i.e. data, information, services, etc.) on the Internet. Grid and Web DCIs have much in common as a result of their underlying Internet technology (protocols, models and specifications). However, being based on different requirements and architectural approaches, they show some differences as well. The Web's "major goal was to be a shared information space through which people and machines could communicate" [Berners-Lee 1996]. The success of the Web, and its consequent pervasiveness, made it appealing for building specialized systems like Spatial Data Infrastructures (SDIs). In these systems the introduction of Web-based geo-information technologies enables specialized services for geospatial data sharing and processing. The Grid was born to achieve "flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources" [Foster 2001]. It specifically focuses on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation. In the Earth and Space Sciences (ESS), most of the information handled is geo-referenced (geo-information), since spatial and temporal meta-information is of primary importance in many application domains: Earth Sciences, Disasters Management, Environmental Sciences, etc. On the other hand, several application areas need to run complex models which require the large processing and storage capabilities that Grids are able to provide. Therefore the integration of geo-information and Grid technologies might be a valuable approach to enable advanced ESS applications. Currently both geo-information and Grid technologies have reached a high level of maturity, allowing such an integration to be built on existing solutions. More specifically, the Open Geospatial Consortium (OGC) Web Services (OWS) specifications play a fundamental role in geospatial information sharing (e.g. in INSPIRE Implementing Rules, the GEOSS architecture, GMES Services, etc.). On the Grid side, the gLite middleware, developed in the European EGEE (Enabling Grids for E-sciencE) projects, is widely deployed in Europe and beyond, has proven highly scalable, and is one of the middleware stacks chosen for the future European Grid Infrastructure (EGI) initiative. Therefore convergence between OWS and gLite technologies would be desirable for seamless access to Grid capabilities through OWS-compliant systems. However, to achieve this harmonization there are some obstacles to overcome. Firstly, a semantic mismatch must be addressed: gLite handles low-level (i.e. close to the machine) concepts like "file", "data", "instruments", "job", etc., while geo-information services handle higher-level (closer to the human) concepts like "coverage", "observation", "measurement", "model", etc. Secondly, an architectural mismatch must be addressed: OWS implements a Web service-oriented architecture which is stateless, synchronous and without embedded security (which is delegated to other specifications), while gLite implements the Grid paradigm in an architecture which is stateful, asynchronous (though not fully event-based) and has strong embedded security (based on the VO paradigm).
In recent years many initiatives and projects have worked out possible approaches for implementing Grid-enabled OWS. To mention a few: (i) in 2007 the OGC signed a Memorandum of Understanding with the Open Grid Forum, "a community of users, developers, and vendors leading the global standardization effort for grid computing"; (ii) the OGC identified "WPS Profiles - Conflation; and Grid processing" as one of the tasks in the Geo Processing Workflow theme of OWS Phase 6 (OWS-6); (iii) several national, European and international projects investigated different aspects of this integration, developing demonstrators and proofs-of-concept. In this context, "gLite enablement of OpenGeospatial Web Services" (G-OWS) is an initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII project consortia to collect and coordinate experiences on the enablement of OWS on top of the gLite middleware [GOWS]. Currently G-OWS counts ten member organizations from Europe and beyond and involves four European projects. It has broadened its scope to the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth Science applications and tools. Its operational objectives are the following: i) to contribute to the OGC-OGF initiative; ii) to release a reference implementation as standard gLite APIs (under the gLite software license); iii) to release a reference model (including procedures and guidelines) for OWS Grid-ification, as far as gLite is concerned; iv) to foster and promote the formation of consortia for participation in projects and initiatives aimed at building Grid-enabled SDIs. To achieve these objectives G-OWS bases its activities on two main guiding principles: a) the adoption of a service-oriented architecture based on the information modelling approach, and b) standardization as a means of achieving interoperability (i.e. adoption of standards from ISO TC211, OGC OWS, OGF). In its first year of activity G-OWS designed a general architectural framework stemming from the FP6 CYCLOPS studies and enriched by the outcomes of other projects and initiatives involved (e.g. FP7 GENESI-DR, FP7 DORII, AIST GeoGrid). Several proofs-of-concept have been developed to demonstrate the flexibility and scalability of this architectural framework. The G-OWS WG developed implementations of a gLite-enabled Web Coverage Service (WCS) and Web Processing Service (WPS), and an implementation of Shibboleth authentication for gLite-enabled OWS in order to evaluate the possible integration of Web and Grid security models. The presentation will aim to communicate the G-OWS organization, activities, future plans and means of involving the ESSI community. References: [Berners-Lee 1996] T. Berners-Lee, "WWW: Past, present, and future", IEEE Computer, 29(10), Oct. 1996, pp. 69-77. [Foster 2001] I. Foster, C. Kesselman and S. Tuecke, "The Anatomy of the Grid", The International Journal of High Performance Computing Applications, 15(3):200-222, Fall 2001. [GOWS] G-OWS WG, https://www.g-ows.org/, accessed: 15 January 2010.

  7. Data Grid Management Systems

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.; Jagatheesan, Arun; Rajasekar, Arcot; Wan, Michael; Schroeder, Wayne

    2004-01-01

    The "Grid" is an emerging infrastructure for coordinating access across autonomous organizations to distributed, heterogeneous computation and data resources. Data grids are being built around the world as the next generation data handling systems for sharing, publishing, and preserving data residing on storage systems located in multiple administrative domains. A data grid provides logical namespaces for users, digital entities and storage resources to create persistent identifiers for controlling access, enabling discovery, and managing wide area latencies. This paper introduces data grids and describes data grid use cases. The relevance of data grids to digital libraries and persistent archives is demonstrated, and research issues in data grids and grid dataflow management systems are discussed.

  8. Trusted Data Sharing and Imagery Workflow for Disaster Response in Partnership with the State of California

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Aubrey, A. D.; Rosinski, A.; Morentz, J.; Beilin, P.; Jones, D.

    2016-12-01

    Providing actionable data for situational awareness following an earthquake or other disaster is critical for decision makers to improve their ability to anticipate requirements and provide appropriate resources for response. Key information on the nature, magnitude and scope of damage, or Essential Elements of Information (EEI), necessary to achieve situational awareness is often generated by a wide array of organizations and disciplines, using any number of geospatial and non-geospatial technologies. We have worked in partnership with the California Earthquake Clearinghouse to develop actionable data products for use in their response efforts, particularly in regularly scheduled statewide exercises such as the 2016 Cascadia Rising NLE and the May 2015 Capstone/SoCal NLE/Ardent Sentry exercises, as well as in the August 2014 South Napa earthquake activation; we plan to participate in upcoming exercises with the National Guard (Vigilant Guard 17) and the USGS (Haywired). Our efforts over the past several years have aimed to enable coordination among research scientists, applied scientists and decision makers in order to reduce duplication of effort, maximize information sharing, translate scientific results into actionable information for decision makers, and increase situational awareness. We will present perspectives on developing tools for decision support and data discovery in partnership with the Clearinghouse. Products delivered include map layers as part of the common operational data plan for the Clearinghouse, delivered through XchangeCore Web Service Data Orchestration and the SpotOnResponse field analysis application. We are exploring new capabilities for real-time collaboration using GeoCollaborate®. XchangeCore allows real-time, two-way information sharing, enabling users to create merged datasets from multiple providers; SpotOnResponse provides web-enabled secure information exchange, collaboration, and field analysis for responders; and GeoCollaborate® enables users to access, share, manipulate, and interact across disparate platforms, connecting public and private sector agencies and organizations rapidly on the same map at the same time and allowing improved collaborative decision making on the same datasets simultaneously.

  9. p-BioSPRE—an information and communication technology framework for transnational biomaterial sharing and access

    PubMed Central

    Weiler, Gabriele; Schröder, Christina; Schera, Fatima; Dobkowicz, Matthias; Kiefer, Stephan; Heidtke, Karsten R; Hänold, Stefanie; Nwankwo, Iheanyi; Forgó, Nikolaus; Stanulla, Martin; Eckert, Cornelia; Graf, Norbert

    2014-01-01

    Biobanks represent key resources for clinico-genomic research and are needed to pave the way to personalised medicine. To achieve this goal, it is crucial that scientists can securely access and share high-quality biomaterial and related data. Therefore, there is a growing interest in integrating biobanks into larger biomedical information and communication technology (ICT) infrastructures. The European project p-medicine is currently building an innovative ICT infrastructure to meet this need. This platform provides tools and services for conducting research and clinical trials in personalised medicine. In this paper, we describe one of its main components, the biobank access framework p-BioSPRE (p-medicine Biospecimen Search and Project Request Engine). This generic framework enables and simplifies access to existing biobanks, but also allows institutions to offer their own biomaterial collections to research communities and to manage biobank specimens and related clinical data through the ObTiMA Trial Biomaterial Manager. p-BioSPRE takes into consideration all relevant ethical and legal standards, e.g., safeguarding donors’ personal rights and enabling biobanks to keep control over the donated material and related data. The framework thus enables secure sharing of biomaterial within open and closed research communities, while flexibly integrating related clinical and omics data. Although the development of the framework is mainly driven by user scenarios from the cancer domain, in this case, acute lymphoblastic leukaemia and Wilms tumour, it can be extended to further disease entities. PMID:24567758

  10. Design, Development, and Initial Evaluation of a Terminology for Clinical Decision Support and Electronic Clinical Quality Measurement.

    PubMed

    Lin, Yanhua; Staes, Catherine J; Shields, David E; Kandula, Vijay; Welch, Brandon M; Kawamoto, Kensaku

    2015-01-01

    When coupled with a common information model, a common terminology for clinical decision support (CDS) and electronic clinical quality measurement (eCQM) could greatly facilitate the distributed development and sharing of CDS and eCQM knowledge resources. To enable such scalable knowledge authoring and sharing, we systematically developed an extensible and standards-based terminology for CDS and eCQM in the context of the HL7 Virtual Medical Record (vMR) information model. The development of this terminology entailed three steps: (1) systematic, physician-curated concept identification from sources such as the Health Information Technology Standards Panel (HITSP) and the SNOMED-CT CORE problem list; (2) concept de-duplication leveraging the Unified Medical Language System (UMLS) MetaMap and Metathesaurus; and (3) systematic concept naming using standard terminologies and heuristic algorithms. This process generated 3,046 concepts spanning 68 domains. Evaluation against representative CDS and eCQM resources revealed approximately 50-70% concept coverage, indicating the need for continued expansion of the terminology.
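
    To make the de-duplication step concrete, the sketch below shows one simple way such a step might work: entries that share a UMLS concept identifier, or whose names normalize to the same token set, are collapsed. It is an illustrative stand-in with made-up sample records, not the MetaMap/Metathesaurus pipeline used by the authors.

```python
# Illustrative concept de-duplication: collapse entries that share a UMLS
# CUI or whose names normalize to the same token set. The sample records
# and the normalization rule are hypothetical simplifications.
import re

concepts = [
    {"name": "Diabetes mellitus type 2", "cui": "C0011860"},
    {"name": "Type 2 diabetes mellitus", "cui": "C0011860"},
    {"name": "Myocardial infarction",    "cui": "C0027051"},
]

def normalize(name: str) -> str:
    tokens = re.findall(r"[a-z0-9]+", name.lower())
    return " ".join(sorted(tokens))   # order-insensitive comparison key

deduped: dict[str, dict] = {}
for concept in concepts:
    key = concept["cui"] or normalize(concept["name"])
    deduped.setdefault(key, concept)  # keep the first occurrence per key

print(len(deduped), "unique concepts")   # -> 2 unique concepts
```
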

  11. Optimizing CMS build infrastructure via Apache Mesos

    DOE PAGES

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; ...

    2015-12-23

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. Lastly, we present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.
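
    As a rough illustration of the two-level, offer-based scheduling model the abstract refers to (and not the actual Mesos scheduler API), the sketch below shows a framework accepting resource offers from a shared pool of nodes; all class and function names are invented for the example.

```python
# Conceptual sketch of offer-based scheduling as popularized by Mesos: the
# cluster manager offers free resources to frameworks, and each framework
# decides which offers to accept. Names are invented; this is not Mesos code.
from dataclasses import dataclass

@dataclass
class Offer:
    node: str
    cpus: float
    mem_gb: float

class BuildFramework:
    """A framework (e.g. a CI system) that launches 1-CPU, 2-GB build jobs."""
    def __init__(self, pending_jobs: int):
        self.pending_jobs = pending_jobs

    def accept(self, offer: Offer) -> int:
        """Return how many jobs this framework launches on the offer."""
        fit = int(min(offer.cpus // 1, offer.mem_gb // 2, self.pending_jobs))
        self.pending_jobs -= fit
        return fit

offers = [Offer("node1", 8, 32), Offer("node2", 4, 8)]
ci = BuildFramework(pending_jobs=10)
for offer in offers:
    launched = ci.accept(offer)
    print(f"{offer.node}: launched {launched} build jobs")
```
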

  12. Enabling Interoperable Space Robots With the Joint Technical Architecture for Robotic Systems (JTARS)

    NASA Technical Reports Server (NTRS)

    Bradley, Arthur; Dubowsky, Steven; Quinn, Roger; Marzwell, Neville

    2005-01-01

    Robots that operate independently of one another will not be adequate to accomplish the future exploration tasks of long-distance autonomous navigation, habitat construction, resource discovery, and material handling. Such activities will require that systems widely share information, plan and divide complex tasks, share common resources, and physically cooperate to manipulate objects. Recognizing the need for interoperable robots to accomplish the new exploration initiative, NASA's Office of Exploration Systems Research & Technology recently funded the development of the Joint Technical Architecture for Robotic Systems (JTARS). The JTARS charter is to identify the interface standards necessary to achieve interoperability among space robots. A JTARS working group (JTARS-WG) has been established comprising recognized leaders in the field of space robotics including representatives from seven NASA centers along with academia and private industry. The working group's early accomplishments include addressing key issues required for interoperability, defining which systems are within the project's scope, and framing the JTARS manuals around classes of robotic systems.

  13. Design, Development, and Initial Evaluation of a Terminology for Clinical Decision Support and Electronic Clinical Quality Measurement

    PubMed Central

    Lin, Yanhua; Staes, Catherine J; Shields, David E; Kandula, Vijay; Welch, Brandon M; Kawamoto, Kensaku

    2015-01-01

    When coupled with a common information model, a common terminology for clinical decision support (CDS) and electronic clinical quality measurement (eCQM) could greatly facilitate the distributed development and sharing of CDS and eCQM knowledge resources. To enable such scalable knowledge authoring and sharing, we systematically developed an extensible and standards-based terminology for CDS and eCQM in the context of the HL7 Virtual Medical Record (vMR) information model. The development of this terminology entailed three steps: (1) systematic, physician-curated concept identification from sources such as the Health Information Technology Standards Panel (HITSP) and the SNOMED-CT CORE problem list; (2) concept de-duplication leveraging the Unified Medical Language System (UMLS) MetaMap and Metathesaurus; and (3) systematic concept naming using standard terminologies and heuristic algorithms. This process generated 3,046 concepts spanning 68 domains. Evaluation against representative CDS and eCQM resources revealed approximately 50–70% concept coverage, indicating the need for continued expansion of the terminology. PMID:26958220

  14. Optimizing CMS build infrastructure via Apache Mesos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. Lastly, we present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  15. World Water Online (WWO) Status and Prospects

    NASA Astrophysics Data System (ADS)

    Arctur, David; Maidment, David

    2013-04-01

    Water resources, weather, and natural disasters are not constrained by local, regional or national boundaries. Effective research, planning, and response to major events call for improved coordination and data sharing among many organizations, which requires improved interoperability among the organizations' diverse information systems. For the historical time-series records of surface freshwater resources compiled by U.S. national agencies alone, there are over 23 million distributed datasets available today. Cataloguing and searching efficiently for specific content from this many datasets presents a challenge to current standards and practices for digital geospatial catalogues. This presentation summarizes a new global platform for water resource information discovery and sharing that provides coordinated, interactive access to water resource metadata for the complete holdings of the Global Runoff Data Centre, the U.S. Geological Survey, and other primary sources. In cases where the data holdings are not restricted by national policy, this interface enables direct access to the water resource data, hydrographs, and other derived products. This capability represents a framework in which any number of other services can be integrated into user-accessible workflows, for example to perform watershed delineation from any point on the stream network. World Water Online web services for mapping and metadata have been registered with GEOSS. In addition to summarizing the architecture and capabilities of World Water Online, future plans for integration with GEOSS and EarthCube will be presented.

  16. Optimization of shared autonomy vehicle control architectures for swarm operations.

    PubMed

    Sengstacken, Aaron J; DeLaurentis, Daniel A; Akbarzadeh-T, Mohammad R

    2010-08-01

    The need for greater capacity in automotive transportation (in the midst of constrained resources) and the convergence of key technologies from multiple domains may eventually produce the emergence of a "swarm" concept of operations. The swarm, which is a collection of vehicles traveling at high speeds and in close proximity, will require technology and management techniques to ensure safe, efficient, and reliable vehicle interactions. We propose a shared autonomy control approach, in which the strengths of both human drivers and machines are employed in concert for this management. Building from a fuzzy logic control implementation, optimal architectures for shared autonomy addressing differing classes of drivers (represented by the driver's response time) are developed through a genetic-algorithm-based search for preferred fuzzy rules. Additionally, a form of "phase transition" from a safe to an unsafe swarm architecture, observed as the amount of sensor capability is varied, uncovers key insights into the technology required to enable successful shared autonomy for swarm operations.
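
    The genetic-algorithm-based search for preferred fuzzy rules can be pictured with a toy sketch like the one below, where a bit string encodes which candidate rules are active and a placeholder fitness function stands in for the swarm-safety evaluation used in the paper; all parameters are invented.

```python
# Toy genetic algorithm for selecting a subset of candidate fuzzy rules.
# The fitness function is a placeholder; a real evaluation would score swarm
# safety/efficiency for a given driver response time.
import random

N_RULES, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.05
random.seed(0)

def fitness(genome: list) -> float:
    # Placeholder: reward moderate rule counts; replace with a simulation of
    # the shared-autonomy controller acting on the vehicle swarm.
    active = sum(genome)
    return -abs(active - N_RULES // 2)

def crossover(a: list, b: list) -> list:
    cut = random.randrange(1, N_RULES)
    return a[:cut] + b[cut:]

def mutate(genome: list) -> list:
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

population = [[random.randint(0, 1) for _ in range(N_RULES)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)     # elitist selection
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in parents]
    population = parents + children

best = max(population, key=fitness)
print("active rules in best genome:", sum(best))
```
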

  17. The extent of interorganizational resource sharing among local health departments: the association with organizational characteristics and institutional factors.

    PubMed

    Vest, Joshua R; Shah, Gulzar H

    2012-11-01

    Resource sharing, arrangements between local health departments (LHDs) for joint programs or to share staff, is a growing occurrence. The post-9/11 influx of federal funding and new public health preparedness responsibilities dramatically increased the occurrence of these inter-LHD relationships, and several states have pursued more intrastate collaboration. This article describes the current state of resource sharing among LHDs and identifies the factors associated with resource sharing. Using the National Association of County & City Health Officials' 2010 Profile Survey, we determined the self-reported number of shared programmatic activities and the number of shared organizational functions for a sample of LHDs. Negative binomial regression models described the relationships between factors suggested by interorganizational theory and the counts of sharing activities. We examined the extent of resource sharing using 2 different count variables: (1) number of shared programmatic activities and (2) number of shared organizational functions. About one-half of all LHDs are engaged in resource sharing. The extent of sharing was lower for those serving larger populations, with city jurisdictions, or of larger size. Sharing was more extensive for state-governed LHDs, those covering multiple jurisdictions, states with centralized governance, and in instances of financial constraint. Many LHDs engage in resource sharing to a greater extent than others. Leaders of LHDs can work within the context of these factors to leverage resource sharing to meet their organizational needs.
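
    Negative binomial models of this kind can be reproduced in outline with statsmodels; the sketch below fits a GLM with a negative binomial family to a synthetic count of shared activities. The data, covariates, and dispersion value are invented for illustration and do not reproduce the Profile-survey analysis.

```python
# Hedged sketch of a negative binomial regression on a count outcome
# (number of shared programmatic activities). All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "log_population": rng.normal(10, 1, n),   # jurisdiction size (synthetic)
    "state_governed": rng.integers(0, 2, n),  # governance indicator (synthetic)
})
# Synthetic count outcome loosely tied to the covariates.
mu = np.exp(0.5 - 0.1 * (df["log_population"] - 10) + 0.4 * df["state_governed"])
df["shared_activities"] = rng.poisson(mu)

X = sm.add_constant(df[["log_population", "state_governed"]])
model = sm.GLM(df["shared_activities"], X,
               family=sm.families.NegativeBinomial(alpha=1.0))
result = model.fit()
print(result.summary())
```
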

  18. GeoSci: Practices to Collaboratively Build Online Resources for Geophysics Education

    NASA Astrophysics Data System (ADS)

    Heagy, L. J.; Cockett, R.; Oldenburg, D.

    2016-12-01

    What happens when you apply best practices of software development to the development of educational resources? GeoSci (http://geosci.xyz) is our experiment examining this question. In 2007, a web-based "textbook" resource, Geophysics for Practicing Geoscientists (GPG, https://www.eoas.ubc.ca/courses/eosc350/content/index.htm), was created to serve as the primary resource for an undergraduate applied geophysics course at UBC taken primarily by non-geophysics majors. The web-based resource allowed students to navigate the concepts in a nonlinear way using hyperlinks and enabled interactive content to be embedded. Subsequent to the web-based release for our UBC course, this resource has also seen widespread international use across the geophysical community. The available resources and best practices have advanced significantly since 2007. The format in which the GPG was originally developed (raw html and css) hindered improvements, and thus maintenance and development of the resource were essentially reduced to correcting typos. Bringing this resource to a sustainable state in which it can be built upon, edited and adapted has required looking to other disciplines such as software maintenance and development. By applying leading practices from open source software development, including versioning, testing, and automated deployment, as well as open development practices such as issue tracking and Creative Commons licensing, we have worked to create a revamped GPG (http://gpg.geosci.xyz) that can be collaborated on and extended. The GPG and a companion resource for electromagnetics have been worked on by over 25 people, with much of the development happening in parallel. In this presentation, we will share our experience, identify what we see as some of the key learnings that have enabled collaboration in resource development, and present a vision for how we see these resources being sustained in the future.

  19. Cross-Jurisdictional Resource Sharing in Changing Public Health Landscape: Contributory Factors and Theoretical Explanations.

    PubMed

    Shah, Gulzar H; Badana, Adrian N S; Robb, Claire; Livingood, William C

    2016-01-01

    Local health departments (LHDs) are striving to meet public health needs within their jurisdictions amidst fiscal constraints and a complex, dynamic environment. Resource sharing across jurisdictions is a critical opportunity for LHDs to continue to enhance effectiveness and increase efficiency. This research examines the extent of cross-jurisdictional resource sharing among LHDs, the programmatic areas and organizational functions for which LHDs share resources, and LHD characteristics associated with resource sharing. Data from the National Association of County & City Health Officials' 2013 National Profile of LHDs were used. Descriptive statistics and multinomial logistic regression were performed for the 5 implementation-oriented outcome variables of interest, with 3 levels of implementation. More than 54% of LHDs shared resources such as funding, staff, or equipment with 1 or more other LHDs on a continuous, recurring basis. Results from the multinomial regression analysis indicate that economies of scale (population size and metropolitan status) had significant positive influences (at P ≤ .05) on resource sharing. Engagement in accreditation, community health assessment, community health improvement planning, quality improvement, and use of the Community Guide were associated with lower levels of engagement in resource sharing. A doctoral degree held by the top executive and having 1 or more local boards of health had a positive influence on resource sharing. Cross-jurisdictional resource sharing is a viable and commonly used process to overcome the challenges of new and emerging public health problems within the constraints of restricted budgets. LHDs, particularly smaller LHDs with limited resources, should consider increased resource sharing to address emerging challenges.
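
    A multinomial model for a three-level sharing outcome can be sketched with statsmodels' MNLogit on synthetic data, as below; the variables, levels, and any resulting coefficients are invented and do not reproduce the Profile-study results.

```python
# Hedged sketch of a multinomial logistic regression with a 3-level outcome
# (0 = no sharing, 1 = some sharing, 2 = extensive sharing). Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "log_population": rng.normal(10, 1, n),
    "metropolitan": rng.integers(0, 2, n),
    "accredited": rng.integers(0, 2, n),
})
# Synthetic 3-level outcome with an arbitrary dependence on the covariates.
score = 0.3 * (df["log_population"] - 10) + 0.5 * df["metropolitan"]
df["sharing_level"] = pd.cut(score + rng.normal(0, 1, n),
                             bins=[-np.inf, -0.5, 0.5, np.inf],
                             labels=[0, 1, 2]).astype(int)

X = sm.add_constant(df[["log_population", "metropolitan", "accredited"]])
result = sm.MNLogit(df["sharing_level"], X).fit(disp=False)
print(result.summary())
```
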

  20. Nasa's Experiences Enabling the Capture and Sharing of Technical Expertise Through Communities of Practice

    NASA Astrophysics Data System (ADS)

    Topousis, Daria E.; Dennehy, Cornelius J.; Lebsock, Kenneth L.

    2012-12-01

    Historically, engineers at the National Aeronautics and Space Administration (NASA) had few opportunities or incentives to share their technical expertise across the Agency. Its center- and project-focused culture often meant that knowledge never crossed organizational and geographic boundaries. The need to develop a knowledge-sharing culture became critical as a result of increasingly complex missions, the closeout of the Shuttle Program, and a new generation of engineers entering the workforce. To address this need, the Office of the Chief Engineer established communities of practice on the NASA Engineering Network. These communities were strategically aligned with NASA's core competencies in such disciplines as avionics, flight mechanics, life support, propulsion, structures, loads and dynamics, human factors, and guidance, navigation, and control. This paper is a case study of NASA's implementation of a system that would identify and develop communities, from establishing simple websites that compiled discipline-specific resources to fostering a knowledge-sharing environment through collaborative and interactive technologies. It includes qualitative evidence of improved availability and transfer of knowledge. It focuses on capabilities that increased knowledge exchange, such as a custom-made Ask An Expert system, community contact lists, publication of key resources, and submission forms that allowed any user to propose content for the sites. It discusses the peer relationships that developed through the communities and the leadership and infrastructure that made them possible.

  1. Raising Virtual Laboratories in Australia onto global platforms

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Barker, M.; Fraser, R.; Evans, B. J. K.; Moloney, G.; Proctor, R.; Moise, A. F.; Hamish, H.

    2016-12-01

    Across the globe, Virtual Laboratories (VLs), Science Gateways (SGs), and Virtual Research Environments (VREs) are being developed that enable users who are not co-located to actively work together at various scales to share data, models, tools, software, workflows, best practices, etc. Outcomes range from enabling 'long tail' researchers to more easily access specific data collections, to facilitating complex workflows on powerful supercomputers. In Australia, government funding has facilitated the development of a range of VLs through the National eResearch Collaborative Tools and Resources (NeCTAR) program. The VLs provide highly collaborative, research-domain oriented, integrated software infrastructures that meet user community needs. Twelve VLs have been funded since 2012, including the Virtual Geophysics Laboratory (VGL); Virtual Hazards, Impact and Risk Laboratory (VHIRL); Climate and Weather Science Laboratory (CWSLab); Marine Virtual Laboratory (MarVL); and Biodiversity and Climate Change Virtual Laboratory (BCCVL). These VLs share similar technical challenges, with common issues emerging on the integration of tools and applications and on access to data collections via both cloud-based environments and other distributed resources. While each VL began with a focus on a specific research domain, communities of practice have now formed across the VLs around common issues, facilitating the identification of best-practice case studies and new standards. As a result, tools are now being shared, with the VLs accessing data via data services that use international standards such as ISO, OGC, and W3C. The sharing of these approaches is starting to facilitate re-usability of infrastructure and is a step towards supporting interdisciplinary research. Whilst the focus of the VLs is Australia-centric, by using standards these environments can be extended to analysis of other international datasets. Many VL datasets are subsets of global datasets, and so extension to global coverage is a small (and often requested) step. Similarly, most of the tools, software, and other technologies could be shared across infrastructures globally. Therefore, it is now time to better connect the Australian VLs with similar initiatives elsewhere to create international platforms that can contribute to global research challenges.

  2. A case study: semantic integration of gene-disease associations for type 2 diabetes mellitus from literature and biomedical data resources.

    PubMed

    Rebholz-Schuhmann, Dietrich; Grabmüller, Christoph; Kavaliauskas, Silvestras; Croset, Samuel; Woollard, Peter; Backofen, Rolf; Filsell, Wendy; Clark, Dominic

    2014-07-01

    In the Semantic Enrichment of the Scientific Literature (SESL) project, researchers from academia and from life science and publishing companies collaborated in a pre-competitive way to integrate and share information for type 2 diabetes mellitus (T2DM) in adults. This case study exposes the benefits of semantic interoperability after integrating the scientific literature with biomedical data resources, such as the UniProt Knowledgebase (UniProtKB) and the Gene Expression Atlas (GXA). We annotated scientific documents in a standardized way by applying public terminological resources for diseases and proteins and other text-mining approaches. Finally, we compared the genetic causes of T2DM across the data resources to demonstrate the benefits of the SESL triple store. Our solution enables publishers to distribute their content with little overhead into remote data infrastructures, such as any Virtual Knowledge Broker. Copyright © 2013. Published by Elsevier Ltd.
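
    A query against such a triple store can be sketched in SPARQL; the example below uses SPARQLWrapper against a hypothetical endpoint with invented predicate names, purely to illustrate how gene-disease associations for T2DM might be retrieved. The real SESL store uses its own vocabularies and endpoint.

```python
# Hedged sketch of querying an RDF triple store for gene-disease
# associations. Endpoint URL, graph layout, and predicate names are all
# hypothetical placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://example.org/sparql")  # hypothetical endpoint
endpoint.setQuery("""
    PREFIX ex: <http://example.org/vocab#>
    SELECT ?gene ?evidence
    WHERE {
        ?assoc ex:disease  ex:Type2DiabetesMellitus ;
               ex:gene     ?gene ;
               ex:evidence ?evidence .
    }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["gene"]["value"], "-", row["evidence"]["value"])
```
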

  3. The Study Team for Early Life Asthma Research (STELAR) consortium ‘Asthma e-lab’: team science bringing data, methods and investigators together

    PubMed Central

    Custovic, Adnan; Ainsworth, John; Arshad, Hasan; Bishop, Christopher; Buchan, Iain; Cullinan, Paul; Devereux, Graham; Henderson, John; Holloway, John; Roberts, Graham; Turner, Steve; Woodcock, Ashley; Simpson, Angela

    2015-01-01

    We created Asthma e-Lab, a secure web-based research environment to support consistent recording, description and sharing of data, computational/statistical methods and emerging findings across the five UK birth cohorts. The e-Lab serves as a data repository for our unified dataset and provides the computational resources and a scientific social network to support collaborative research. All activities are transparent, and emerging findings are shared via the e-Lab, linked to explanations of analytical methods, thus enabling knowledge transfer. The e-Lab facilitates the iterative interdisciplinary dialogue between clinicians, statisticians, computer scientists, mathematicians, geneticists and basic scientists, capturing collective thought behind the interpretations of findings. PMID:25805205

  4. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower-priority goal; high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
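
    The strict-priority selection described above can be illustrated with a short greedy sketch: goals are considered in priority order and admitted only if the shared resources they need are still available. The resource model and goal fields below are simplified stand-ins for the onboard VML-based implementation.

```python
# Simplified sketch of strict-priority goal selection under oversubscribed,
# shared resources. Goals and the resource budget are toy stand-ins.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    priority: int        # lower number = higher priority
    demand: dict         # resource name -> amount required

def select_goals(goals: list, capacity: dict) -> list:
    remaining = dict(capacity)
    selected = []
    for goal in sorted(goals, key=lambda g: g.priority):
        if all(goal.demand[r] <= remaining.get(r, 0) for r in goal.demand):
            for r, amount in goal.demand.items():
                remaining[r] -= amount
            selected.append(goal)   # never pre-empted by lower-priority goals
    return selected

goals = [
    Goal("image_target_A", 1, {"power": 40, "downlink": 10}),
    Goal("downlink_pass",  2, {"power": 30, "downlink": 60}),
    Goal("image_target_B", 3, {"power": 50, "downlink": 10}),
]
# With this budget the mid-priority goal does not fit and is skipped.
print([g.name for g in select_goals(goals, {"power": 100, "downlink": 60})])
```
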

  5. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential for revolutionizing the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers with the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  6. Virtual Patients on the Semantic Web: A Proof-of-Application Study

    PubMed Central

    Dafli, Eleni; Antoniou, Panagiotis; Ioannidis, Lazaros; Dombros, Nicholas; Topps, David

    2015-01-01

    Background: Virtual patients are interactive computer simulations that are increasingly used as learning activities in modern health care education, especially in teaching clinical decision making. A key challenge is how to retrieve and repurpose virtual patients as unique types of educational resources between different platforms because of the lack of standardized content-retrieving and repurposing mechanisms. Semantic Web technologies provide the capability, through structured information, for easy retrieval, reuse, repurposing, and exchange of virtual patients between different systems. Objective: An attempt to address this challenge has been made through the mEducator Best Practice Network, which provisioned frameworks for the discovery, retrieval, sharing, and reuse of medical educational resources. We have extended the OpenLabyrinth virtual patient authoring and deployment platform to facilitate the repurposing and retrieval of existing virtual patient material. Methods: A standalone Web distribution and Web interface, which contains an extension for the OpenLabyrinth virtual patient authoring system, was implemented. This extension was designed to semantically annotate virtual patients to facilitate intelligent searches, complex queries, and easy exchange between institutions. The OpenLabyrinth extension enables OpenLabyrinth authors to integrate and share virtual patient case metadata within the mEducator3.0 network. Evaluation included 3 successive steps: (1) expert reviews; (2) evaluation of the ability of health care professionals and medical students to create, share, and exchange virtual patients through specific scenarios in extended OpenLabyrinth (OLabX); and (3) evaluation of the repurposed learning objects that emerged from the procedure. Results: We evaluated 30 repurposed virtual patient cases. The evaluation, with a total of 98 participants, demonstrated the system’s main strength: the core repurposing capacity. The extensive metadata schema presentation facilitated user exploration and filtering of resources. Usability weaknesses were primarily related to standard computer applications’ ease of use provisions. Most evaluators provided positive feedback regarding educational experiences on both content and system usability. Evaluation results replicated across several independent evaluation events. Conclusions: The OpenLabyrinth extension, as part of the semantic mEducator3.0 approach, is a virtual patient sharing approach that builds on a collection of Semantic Web services and federates existing sources of clinical and educational data. It is an effective sharing tool for virtual patients and has been merged into the next version of the app (OpenLabyrinth 3.3). Such tool extensions may enhance the medical education arsenal with capacities of creating simulation/game-based learning episodes, massive open online courses, curricular transformations, and a future robust infrastructure for enabling mobile learning. PMID:25616272

  7. Virtual patients on the semantic Web: a proof-of-application study.

    PubMed

    Dafli, Eleni; Antoniou, Panagiotis; Ioannidis, Lazaros; Dombros, Nicholas; Topps, David; Bamidis, Panagiotis D

    2015-01-22

    Virtual patients are interactive computer simulations that are increasingly used as learning activities in modern health care education, especially in teaching clinical decision making. A key challenge is how to retrieve and repurpose virtual patients as unique types of educational resources between different platforms because of the lack of standardized content-retrieving and repurposing mechanisms. Semantic Web technologies provide the capability, through structured information, for easy retrieval, reuse, repurposing, and exchange of virtual patients between different systems. An attempt to address this challenge has been made through the mEducator Best Practice Network, which provisioned frameworks for the discovery, retrieval, sharing, and reuse of medical educational resources. We have extended the OpenLabyrinth virtual patient authoring and deployment platform to facilitate the repurposing and retrieval of existing virtual patient material. A standalone Web distribution and Web interface, which contains an extension for the OpenLabyrinth virtual patient authoring system, was implemented. This extension was designed to semantically annotate virtual patients to facilitate intelligent searches, complex queries, and easy exchange between institutions. The OpenLabyrinth extension enables OpenLabyrinth authors to integrate and share virtual patient case metadata within the mEducator3.0 network. Evaluation included 3 successive steps: (1) expert reviews; (2) evaluation of the ability of health care professionals and medical students to create, share, and exchange virtual patients through specific scenarios in extended OpenLabyrinth (OLabX); and (3) evaluation of the repurposed learning objects that emerged from the procedure. We evaluated 30 repurposed virtual patient cases. The evaluation, with a total of 98 participants, demonstrated the system's main strength: the core repurposing capacity. The extensive metadata schema presentation facilitated user exploration and filtering of resources. Usability weaknesses were primarily related to standard computer applications' ease of use provisions. Most evaluators provided positive feedback regarding educational experiences on both content and system usability. Evaluation results replicated across several independent evaluation events. The OpenLabyrinth extension, as part of the semantic mEducator3.0 approach, is a virtual patient sharing approach that builds on a collection of Semantic Web services and federates existing sources of clinical and educational data. It is an effective sharing tool for virtual patients and has been merged into the next version of the app (OpenLabyrinth 3.3). Such tool extensions may enhance the medical education arsenal with capacities of creating simulation/game-based learning episodes, massive open online courses, curricular transformations, and a future robust infrastructure for enabling mobile learning.

  8. Is the Organizational Culture of the U.S. Army Congruent with the Professional Development of Its Senior Level Officer Corps

    DTIC Science & Technology

    2010-09-01

    U.S. Army War College. Yeung, A. K. O., Brockbank, J. W., and Ulrich, D. O. (1991), “Organizational Culture and Human Resources Practices: An...organizational members. Accordingly, Martin et al. (1997) emphasize that studies of organizational culture share a common objective, which is “to...actions of organizational members” (Martin et al., 1997, p. 3). An organization’s culture enables its members to work through the basic problems of

  9. Managing effectively in the downsized organization.

    PubMed

    Arnold, Edwin; Pulich, Marcia

    2003-01-01

    Many health care institutions have downsized in recent years for a variety of reasons including cost savings and the need to be proactive in restructuring the organization for more effective performance. In a downsized organization, top management must develop new strategies to enable line managers at all levels to operate effectively. New policies for human resource strategic planning, selective hiring, employee empowerment, training and development, reduction of status distinctions, sharing of appropriate information with employees, and paying for performance must be implemented.

  10. Negotiating designs of multi-purpose reservoir systems in international basins

    NASA Astrophysics Data System (ADS)

    Geressu, Robel; Harou, Julien

    2016-04-01

    Given increasing agricultural and energy demands, coordinated management of multi-reservoir systems could help increase production without further stressing available water resources. However, regional or international disputes about water-use rights pose a challenge to efficient expansion and management of many large reservoir systems. Even when projects are likely to benefit all stakeholders, agreeing on the design, operation, financing, and benefit sharing can be challenging. This is due to the difficulty of considering multiple stakeholder interests in the design of projects and understanding the benefit trade-offs that designs imply. Incommensurate performance metrics, incomplete knowledge of system requirements, lack of objectivity in managing conflict, and difficulty communicating complex issues exacerbate the problem. This work proposes a multi-step hybrid multi-objective optimization and multi-criteria ranking approach for supporting negotiation in water resource systems. The approach uses many-objective optimization to generate alternative efficient designs and reveal the trade-offs between conflicting objectives. This enables informed elicitation of criteria weights for further multi-criteria ranking of alternatives. An ideal design would be ranked as best by all stakeholders. Resource-sharing mechanisms such as power trade and/or cost sharing may help competing stakeholders arrive at designs acceptable to all. Many-objective optimization helps suggest efficient designs (reservoir site, storage size, and operating rule) and coordination levels while considering the perspectives of multiple stakeholders simultaneously. We apply the proposed approach to a proof-of-concept study of the expansion of the Blue Nile transboundary reservoir system.
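
    The many-objective step can be pictured with a minimal Pareto-filter sketch: candidate designs scored on conflicting objectives are reduced to the non-dominated set that would then go to multi-criteria ranking. The design names and objective values below are invented for illustration.

```python
# Minimal Pareto-front filter over candidate reservoir designs, each scored
# on two objectives to be maximized (values are invented): annual energy
# (GWh) and a negated downstream supply deficit.
designs = {
    "site1_small":  (900,  -80),
    "site1_large":  (1500, -120),
    "site2_medium": (1100, -60),
    "site2_large":  (1400, -70),
}

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

pareto = [name for name, score in designs.items()
          if not any(dominates(other, score)
                     for o_name, other in designs.items() if o_name != name)]
print("non-dominated designs:", pareto)   # site1_small is dominated and dropped
```
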

  11. Dataworks for GNSS: Software for Supporting Data Sharing and Federation of Geodetic Networks

    NASA Astrophysics Data System (ADS)

    Boler, F. M.; Meertens, C. M.; Miller, M. M.; Wier, S.; Rost, M.; Matykiewicz, J.

    2015-12-01

    Continuously-operating Global Navigation Satellite System (GNSS) networks are increasingly being installed globally for a wide variety of science and societal applications. GNSS enables Earth science research in areas including tectonic plate interactions, crustal deformation in response to loading by tectonics, magmatism, water and ice, and the dynamics of water - and thereby energy transfer - in the atmosphere at regional scale. The many individual scientists and organizations that set up GNSS stations globally are often open to sharing data, but lack the resources or expertise to deploy systems and software to manage and curate data and metadata and provide user tools that would support data sharing. UNAVCO previously gained experience in facilitating data sharing through the NASA-supported development of the Geodesy Seamless Archive Centers (GSAC) open source software. GSAC provides web interfaces and simple web services for data and metadata discovery and access, supports federation of multiple data centers, and simplifies transfer of data and metadata to long-term archives. The NSF supported the dissemination of GSAC to multiple European data centers forming the European Plate Observing System. To expand upon GSAC to provide end-to-end, instrument-to-distribution capability, UNAVCO developed Dataworks for GNSS with NSF funding to the COCONet project, and deployed this software on systems that are now operating as Regional GNSS Data Centers as part of the NSF-funded TLALOCNet and COCONet projects. Dataworks consists of software modules written in Python and Java for data acquisition, management and sharing. There are modules for GNSS receiver control and data download, a database schema for metadata, tools for metadata handling, ingest software to manage file metadata, data file management scripts, GSAC, scripts for mirroring station data and metadata from partner GSACs, and extensive software and operator documentation. UNAVCO plans to provide a cloud VM image of Dataworks that would allow standing up a Dataworks-enabled GNSS data center without requiring upfront investment in server hardware. By enabling data creators to organize their data and metadata for sharing, Dataworks helps scientists expand their data curation awareness and responsibility, and enhances data access for all.

  12. Electronic Resource Sharing in Community Colleges: A Snapshot of Florida, Wisconsin, Texas, and Louisiana.

    ERIC Educational Resources Information Center

    Mahoney, Brian D.

    2000-01-01

    States that several states are establishing networks for resource sharing. Florida offers these resources through the Florida Distance Learning Library Initiative, Wisconsin has BadgerLink and WISCAT, TexShare provides library resource sharing in Texas, and Louisiana has LOUIS and LLN. These are some of the states successfully demonstrating…

  13. On Representative Spaceflight Instrument and Associated Instrument Sensor Web Framework

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Patel, Umeshkumar; Vootukuru, Meg

    2007-01-01

    Sensor Web-based adaptation and sharing of space flight mission resources, including those of the Space-Ground and Control-User communication segment, could greatly benefit from the utilization of heritage Internet protocols and devices applied for Spaceflight (SpaceIP). This has been successfully demonstrated by a few recent spaceflight experiments. However, while terrestrial applications of Internet protocols are well developed and understood (mostly due to billions of dollars in investments by the military and industry), the spaceflight application of Internet protocols is still in its infancy. Progress in the development of SpaceIP-enabled instrument components will largely determine the utilization of those investments and the acceptance of SpaceIP in years to come. As with SpaceIP, commercial real-time and instrument-colocated computational resources, data compression, and storage can be enabled on board a spacecraft and, in turn, support a powerful application of Sensor Web-based design to a spaceflight instrument. Sensor Web-enabled reconfiguration and adaptation of structures for hardware resources and information systems will put Field Programmable Gate Arrays (FPGAs) and other aerospace programmable logic devices to the use for which this technology was intended. These are a few obvious potential benefits of Sensor Web technologies for spaceflight applications, but they are still waiting to be explored, because a new approach to spaceflight instrumentation is needed to make these mature Sensor Web technologies applicable to spaceflight. In this paper we present an approach to developing related and enabling spaceflight instrument-level technologies based on the new concept of a representative spaceflight Instrument Sensor Web (ISW).

  14. Retrieving and routing quantum information in a quantum network

    NASA Astrophysics Data System (ADS)

    Sazim, S.; Chiranjeevi, V.; Chakrabarty, I.; Srinathan, K.

    2015-12-01

    In extant quantum secret sharing protocols, once the secret is shared in a quantum network (qnet) it cannot be retrieved, even if the dealer wishes that his/her secret no longer be available in the network. For instance, if the dealer is part of two qnets, say Q_1 and Q_2, and he/she subsequently finds that Q_2 is more reliable than Q_1, he/she may wish to transfer all his/her secrets from Q_1 to Q_2. Known protocols are inadequate to address such a revocation. In this work we address this problem by designing a protocol that enables the source/dealer to bring back the information shared in the network, if desired. Unlike classical revocation, the no-cloning theorem automatically ensures that the secret is no longer shared in the network. The implications of our results are multi-fold. One interesting implication of our technique is the possibility of routing qubits in asynchronous qnets. By asynchrony we mean that the requisite data/resources are intermittently available (but not necessarily simultaneously) in the qnet. For example, we show that a source S can send quantum information to a destination R even though (a) S and R share no quantum resource, (b) R's identity is unknown to S at the time of sending the message, but is subsequently decided, (c) S herself can be R at a later date and/or in a different location to bequeath her information ('backed up' in the qnet) and (d) importantly, the path chosen for routing the secret may hit a dead end due to resource constraints, congestion, etc. (therefore the information needs to be back-tracked and sent along an alternate path). Another implication of our technique is the possibility of using insecure resources. For instance, if the quantum memory within an organization is insufficient, it may safely store (using our protocol) its private information with a neighboring organization without (a) revealing critical data to the host and (b) losing control over retrieving the data. Putting the two implications together, namely routing and secure storage, it is possible to envision applications like quantum mail (qmail) as an outsourced service.

  15. Building an International Geosciences Network (i-GEON) for cyberinfrastructure-based Research and Education

    NASA Astrophysics Data System (ADS)

    Seber, D.; Baru, C.

    2007-05-01

    The Geosciences Network (GEON) project is a collaboration among multiple institutions to develop a cyberinfrastructure (CI) platform in support of integrative geoscience research activities. Taking advantage of state-of-the-art information technology resources, GEON researchers are building a cyberinfrastructure designed to enable data sharing, resource discovery, semantic data integration, high-end computation and 4D visualization in an easy-to-use web-based environment. The cyberinfrastructure in GEON is required to support an inherently distributed system, since the scientists, who are users as well as providers of resources, are themselves distributed. International collaborations are a natural extension of GEON; geoscience research requires strong international collaboration. The goals of the i-GEON activities are to collaborate with international partners and jointly build a cyberinfrastructure for the geosciences to enable collaborative work environments. International partners can participate in GEON efforts, establish GEON nodes at their universities, institutes, or agencies, and also contribute data and tools to the network. Via jointly run cyberinfrastructure workshops, the GEON team also introduces students, scientists, and research professionals to the concepts of IT-based geoscience research and education. Currently, joint activities are underway with the Chinese Academy of Sciences in China, the GEO Grid project at AIST in Japan, and the University of Hyderabad in India (where the activity is funded by the Indo-US Science and Technology Forum). Several other potential international partnerships are under consideration. iGEON is open to all international partners who are interested in working towards the goal of data sharing, management and integration via IT-based platforms. Information about GEON and its international activities can be found at http://www.geongrid.org/

  16. Better bioinformatics through usability analysis.

    PubMed

    Bolchini, Davide; Finkelstein, Anthony; Perrone, Vito; Nagl, Sylvia

    2009-02-01

    Improving the usability of bioinformatics resources enables researchers to find, interact with, share, compare and manipulate important information more effectively and efficiently. It thus enables researchers to gain improved insights into biological processes with the potential, ultimately, of yielding new scientific results. Usability 'barriers' can pose significant obstacles to a satisfactory user experience and force researchers to spend unnecessary time and effort to complete their tasks. The number of online biological databases available is growing and there is an expanding community of diverse users. In this context there is an increasing need to ensure the highest standards of usability. Using 'state-of-the-art' usability evaluation methods, we have identified and characterized a sample of usability issues potentially relevant to web bioinformatics resources, in general. These specifically concern the design of the navigation and search mechanisms available to the user. The usability issues we have discovered in our substantial case studies are undermining the ability of users to find the information they need in their daily research activities. In addition to characterizing these issues, specific recommendations for improvements are proposed leveraging proven practices from web and usability engineering. The methods and approach we exemplify can be readily adopted by the developers of bioinformatics resources.

  17. HRSA's collaborative efforts with national organizations to expand primary care for the medically underserved.

    PubMed Central

    Crane, A B

    1991-01-01

    As the Federal agency that provides leadership in expanding access to primary health care, the Health Resources and Services Administration (HRSA) manages some 50 programs directed toward the delivery of services and strengthening the base of national health resources. An enabling element of the agency's strategy is the expansion of partnerships with national associations, private foundations, and other entities that share a concern for the health care of the medically underserved. Cooperative efforts with national organizations are intended to promote the integration of public and private resources and encourage adoption of efficient approaches to organizing and financing health care. Medical education in the primary care specialties, State programs for women and children, involvement of managed care organizations with low-income populations, and programs concerning the uninsured are the foci of some of these collaborative relationships. PMID:1899932

  18. Wireless Shared Resources: Sharing Right-Of-Way For Wireless Telecommunications, Guidance On Legal And Institutional Issues

    DOT National Transportation Integrated Search

    1997-06-06

    Public-private partnerships: Shared resource projects are public-private arrangements that involve sharing public property such as rights-of-way and private resources such as telecommunications capacity and expertise. Typically, private telecommuni...

  19. The HydroShare Collaborative Repository for the Hydrology Community

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Couch, A.; Hooper, R. P.; Dash, P. K.; Stealey, M.; Yi, H.; Bandaragoda, C.; Castronova, A. M.

    2017-12-01

    HydroShare is an online collaboration system for sharing hydrologic data, analytical tools, and models. It supports the sharing of, and collaboration around, "resources", which are defined by standardized content types for data formats and models commonly used in hydrology. With HydroShare you can: Share your data and models with colleagues; Manage who has access to the content that you share; Share, access, visualize and manipulate a broad set of hydrologic data types and models; Use the web services application programming interface (API) to program automated and client access; Publish data and models and obtain a citable digital object identifier (DOI); Aggregate your resources into collections; Discover and access data and models published by others; Use web apps to visualize, analyze and run models on data in HydroShare. This presentation will describe the functionality and architecture of HydroShare, highlighting our approach to making this system easy to use and to serving the needs of the hydrology community represented by the Consortium of Universities for the Advancement of Hydrologic Sciences, Inc. (CUAHSI). Metadata for uploaded files is harvested automatically or captured using easy-to-use web user interfaces. Users are encouraged to add or create resources in HydroShare early in the data life cycle. To encourage this, we allow users to share and collaborate on HydroShare resources privately among individual users or groups, entering metadata while doing the work. HydroShare also provides enhanced functionality for users through web apps that provide tools and computational capability for actions on resources. HydroShare's architecture broadly comprises: (1) resource storage, (2) a resource exploration website, and (3) web apps for actions on resources. System components are loosely coupled and interact through APIs, which enhances robustness, as components can be upgraded and advanced relatively independently. The full power of this paradigm is the extensibility it supports. Web apps are hosted on separate servers, which may be 3rd-party servers. They are registered in HydroShare using a web app resource that configures the connectivity for them to be discovered and launched directly from the resource types they are associated with.
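
    As an example of the web services API mentioned above, the hedged sketch below queries a resource-listing endpoint with Python's requests library; the endpoint path, query parameter, and response field names are assumptions made for illustration, so consult the HydroShare API documentation for the authoritative interface.

```python
# Hedged sketch of querying HydroShare's REST API for public resources.
# The endpoint path, filter parameter, and response layout are assumptions.
import requests

BASE = "https://www.hydroshare.org/hsapi"              # assumed API base path
resp = requests.get(f"{BASE}/resource/",
                    params={"subject": "streamflow"},  # assumed filter name
                    timeout=30)
resp.raise_for_status()

for res in resp.json().get("results", []):             # assumed response layout
    print(res.get("resource_id"), "-", res.get("resource_title"))
```
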

  20. International comparisons of health system performance among OECD countries: opportunities and data privacy protection challenges.

    PubMed

    Oderkirk, Jillian; Ronchi, Elettra; Klazinga, Niek

    2013-09-01

    Health data constitute a significant resource in most OECD countries that could be used to improve health system performance. Well-intended policies to allay concerns about breaches of confidentiality and to reduce potential misuse of personal health information may be limiting data use. A survey of 20 OECD countries explored the extent to which countries have developed and use personal health data and the reasons why data use may be problematic in some. Countries are divided, with one-half engaged regularly in national data linkage studies to monitor health care quality. Country variation is linked to risk management in granting an exemption to patient consent requirements; in sharing identifiable data among government authorities; and in project approvals and granting access to data. The resources required to comply with data protection requirements are a secondary problem. The sharing of person-level data across borders for international comparisons is rarely reported and there were few examples of studies of health system performance. Laws and policies enabling data sharing and data linkage are needed to strengthen national information infrastructure. To develop international studies comparing health care quality and health system performance, actions are needed to address heterogeneity in data protection practices. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  1. Optimizing health information technology's role in enabling comparative effectiveness research.

    PubMed

    Navathe, Amol S; Conway, Patrick H

    2010-12-01

    Health information technology (IT) is a key enabler of comparative effectiveness research (CER). Health IT standards for data sharing are essential to advancing the research data infrastructure, and health IT is critical to the next step of incorporating clinical data into data sources. Four key principles for advancement of CER are (1) utilization of data as a strategic asset, (2) leveraging public-private partnerships, (3) building robust, scalable technology platforms, and (4) coordination of activities across government agencies. To maximize the value of the resources, payers and providers must contribute data to initiatives, engage with government agencies on lessons learned, continue to develop new technologies that address key challenges, and utilize the data to improve patient outcomes and conduct research.

  2. Impacts of globalisation on foodborne parasites.

    PubMed

    Robertson, Lucy J; Sprong, Hein; Ortega, Ynes R; van der Giessen, Joke W B; Fayer, Ron

    2014-01-01

    Globalisation is a manmade phenomenon encompassing the spread and movement of everything, animate and inanimate, material and intangible, around the planet. The intentions of globalisation may be worthy--but may also have unintended consequences. Pathogens may also be spread, enabling their establishment in new niches and exposing new human and animal populations to infection. The plethora of foodborne parasites that could be distributed by globalisation has only recently been acknowledged and will provide challenges for clinicians, veterinarians, diagnosticians, and everyone concerned with food safety. Globalisation may also provide the resources to overcome some of these challenges. It will facilitate sharing of methods and approaches, and establishment of systems and databases that enable control of parasites entering the global food chain. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Metadata based management and sharing of distributed biomedical data

    PubMed Central

    Vergara-Niedermayr, Cristobal; Liu, Peiya

    2014-01-01

    Biomedical research data sharing is becoming increasingly important for researchers to reuse experiments, pool expertise and validate approaches. However, there are many hurdles for data sharing, including the unwillingness to share, lack of flexible data model for providing context information, difficulty to share syntactically and semantically consistent data across distributed institutions, and high cost to provide tools to share the data. SciPort is a web-based collaborative biomedical data sharing platform to support data sharing across distributed organisations. SciPort provides a generic metadata model to flexibly customise and organise the data. To enable convenient data sharing, SciPort provides a central server based data sharing architecture with a one-click data sharing from a local server. To enable consistency, SciPort provides collaborative distributed schema management across distributed sites. To enable semantic consistency, SciPort provides semantic tagging through controlled vocabularies. SciPort is lightweight and can be easily deployed for building data sharing communities. PMID:24834105
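
    The central-server pattern described here, in which local sites publish flexibly structured, semantically tagged metadata to a shared hub, can be sketched generically. The record layout, controlled vocabulary, and /publish endpoint below are hypothetical illustrations of the pattern, not SciPort's actual data model or API.

    ```python
    # Generic illustration of central-server metadata sharing with a controlled
    # vocabulary; the endpoint and record layout are hypothetical, not SciPort's API.
    import json
    import urllib.request

    CONTROLLED_TERMS = {"MRI", "CT", "histology"}   # toy controlled vocabulary

    def make_record(title, modality, site):
        """Build one metadata record, enforcing the controlled vocabulary."""
        if modality not in CONTROLLED_TERMS:
            raise ValueError(f"'{modality}' is not in the controlled vocabulary")
        return {"title": title, "modality": modality, "site": site}

    def publish(record, hub_url="https://central-hub.example.org/publish"):
        """POST one metadata record to the (hypothetical) central server."""
        data = json.dumps(record).encode("utf-8")
        req = urllib.request.Request(hub_url, data=data,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=30) as resp:
            return resp.status

    if __name__ == "__main__":
        rec = make_record("Liver tumor cohort A", "MRI", "Site-03")
        print(json.dumps(rec, indent=2))  # "one-click" sharing would then call publish(rec)
    ```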

  4. A ‘resource allocator’ for transcription based on a highly fragmented T7 RNA polymerase

    PubMed Central

    Segall-Shapiro, Thomas H; Meyer, Adam J; Ellington, Andrew D; Sontag, Eduardo D; Voigt, Christopher A

    2014-01-01

    Synthetic genetic systems share resources with the host, including machinery for transcription and translation. Phage RNA polymerases (RNAPs) decouple transcription from the host and generate high expression. However, they can exhibit toxicity and lack accessory proteins (σ factors and activators) that enable switching between different promoters and modulation of activity. Here, we show that T7 RNAP (883 amino acids) can be divided into four fragments that have to be co-expressed to function. The DNA-binding loop is encoded in a C-terminal 285-aa ‘σ fragment’, and fragments with different specificity can direct the remaining 601-aa ‘core fragment’ to different promoters. Using these parts, we have built a resource allocator that sets the core fragment concentration, which is then shared by multiple σ fragments. Adjusting the concentration of the core fragment sets the maximum transcriptional capacity available to a synthetic system. Further, positive and negative regulation is implemented using a 67-aa N-terminal ‘α fragment’ and a null (inactivated) σ fragment, respectively. The α fragment can be fused to recombinant proteins to make promoters responsive to their levels. These parts provide a toolbox to allocate transcriptional resources via different schemes, which we demonstrate by building a system which adjusts promoter activity to compensate for the difference in copy number of two plasmids. PMID:25080493
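
    As a purely illustrative toy, not the quantitative model from the paper, the allocator can be pictured as a fixed pool of core fragment shared proportionally among whichever σ fragments are present: adjusting the core pool rescales every promoter's output together, while changing one σ fragment's level redistributes a fixed capacity.

    ```python
    # Toy proportional-allocation picture of the fragmented T7 RNAP system.
    # This illustrates the resource-allocator idea only; the numbers and the
    # linear sharing rule are assumptions, not the authors' fitted model.
    def promoter_outputs(core_pool, sigma_levels):
        """Split a fixed core-fragment pool among sigma fragments in proportion
        to their abundance and return per-promoter transcriptional output."""
        total_sigma = sum(sigma_levels.values())
        if total_sigma == 0:
            return {name: 0.0 for name in sigma_levels}
        return {name: core_pool * level / total_sigma
                for name, level in sigma_levels.items()}

    if __name__ == "__main__":
        # Raising sigma_B takes capacity away from sigma_A; raising core_pool
        # raises the ceiling for both.
        print(promoter_outputs(100.0, {"sigma_A": 1.0, "sigma_B": 1.0}))
        print(promoter_outputs(100.0, {"sigma_A": 1.0, "sigma_B": 3.0}))
    ```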

  5. Baseline Suitability Analysis

    DTIC Science & Technology

    2013-07-18

    [Extraction fragment: the abstract consists of table fragments listing DoD components (DFAS, AAFES, DLA, DoDEA, DHRA) and functions such as Human Resources - HR Shared Services (Indianapolis, IN), Personnel Security - HR Shared Services, and Force Protection, each with yes/no suitability entries.]

  6. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems

    PubMed Central

    Wu, Jun; Su, Zhou; Li, Jianhua

    2017-01-01

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential of providing low-latency for the communication from sensing data sources to users. For the objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, the efficient social organization can enable more flexible, secure, and collaborative networking. The aforementioned advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which the services of fog are provisioned based on “friend” relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system based on a social model. Meanwhile, social networking enhances the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems. PMID:28758943

  7. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems.

    PubMed

    Wu, Jun; Su, Zhou; Wang, Shen; Li, Jianhua

    2017-07-30

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential of providing low-latency for the communication from sensing data sources to users. For the objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, the efficient social organization can enable more flexible, secure, and collaborative networking. The aforementioned advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which the services of fog are provisioned based on "friend" relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system based on a social model. Meanwhile, social networking enhances the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems.

  8. A framework for effective collaboration: a case study of collaboration in nursing education in the Western Cape, South Africa.

    PubMed

    Daniels, Felicity M; Khanyile, Thembisile D

    2013-09-01

    A fundamental purpose of mergers between higher education institutions (HEIs) in 2002 was to enable sharing of scarce resources between more advanced universities and those historically disadvantaged by the apartheid system of the South African Government. A common teaching platform for undergraduate nursing education in the Western Cape was established in 2005, in line with the transformation of the higher education system, as a collaborative initiative between three universities. In order to evaluate the common teaching platform, Stufflebeam's context, input, process, product (CIPP) research model was employed. A sample of 108 participants was selected through stratified purposive sampling, and included three deputy vice-chancellors, three deans, three heads of department, 18 lecturers and 81 students. Semi-structured interviews were held with the staff members, whilst the students participated in focus group interviews. Open-ended questions informed by literature and the CIPP evaluation model were developed and used to guide the interviews. This enabled the researcher to obtain a rich description of the participants' experiences. The data were analysed inductively. The results revealed that the main purpose of collaboration was not achieved due to the lack of a common understanding of the concept of collaboration and its purpose; a lack of readiness to collaborate and a lack of sharing of resources. A framework for effective collaboration was developed based on the results. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Exploiting Expertise and Knowledge Sharing Online for the Benefit of NASA's GN&C Community of Practice

    NASA Technical Reports Server (NTRS)

    Topousis, Daria E.; Lebsock, Kenneth L.; Dennehy, Cornelius J.

    2010-01-01

    In 2004, NASA faced major knowledge sharing challenges due to geographically isolated field centers that inhibited engineers from sharing their experiences, expertise, ideas, and lessons learned. The necessity to collaborate on complex development projects and the reality of constrained project resources together drove the need for ensuring that personnel at all NASA centers had comparable skill sets and that engineers could find resources in a timely fashion. Mission failures and new directions for the Agency also demanded better collaborative tools for NASA's engineering workforce. In response to these needs, the online NASA Engineering Network (NEN) was formed by the NASA Office of the Chief Engineer to provide a multi-faceted system for overcoming geographic and cultural barriers. NEN integrates communities of practice with a cross-repository search and the Lessons Learned Information System. This paper describes the features of the GN&C engineering discipline CoP site which went live on NEN in May of 2008 as an online means of gathering input and guidance from practitioners. It allows GN&C discipline expertise captured at one field center to be shared in a collaborative way with the larger discipline CoP spread across the entire Agency. The site enables GN&C engineers to find the information they need quickly, to find solutions to questions from experienced engineers, and to connect with other practitioners regardless of geographic location, thus increasing the probability of project success.

  10. Shared resources : sharing right-of-way for telecommunications : identification, review and analysis of legal and institutional issues

    DOT National Transportation Integrated Search

    1996-04-01

    This report presents the results of research on the institutional and non-technical issues related to shared resource projects. Shared resource projects are a particular form of public-private partnering that may help public agencies underwrite their...

  11. The HVAC Challenges of Upgrading an Old Lab for High-end Light Microscopes

    PubMed Central

    Richard, R.; Martone, P.; Callahan, L.M.

    2014-01-01

    The University of Rochester Medical Center forms the centerpiece of the University of Rochester's health research, teaching, patient care, and community outreach missions. Within this large facility of over 5 million square feet, demolition and remodeling of existing spaces is a constant activity. With more than $145 million in federal research funding, lab space is frequently repurposed and renovated to support this work. The URMC Medical Center Facilities Organization supporting small to medium space renovations is constantly challenged and constrained by the existing mechanical infrastructure and budgets to deliver a renovated space that functions within the equipment environmental parameters. One recent project, sponsored by the URMC Shared Resources Laboratory, demonstrates these points. The URMC Light Microscopy Shared Resource Laboratory requested renovation of a 121 sq. ft. room in a 40 year old building which would enable placement of a laser capture microdissection microscope and a Pascal 5 laser scanning confocal microscope with the instruments separated by a blackout curtain. This poster discusses the engineering approach implemented to bring an older lab into the environmental specifications needed for the proper operation of the high-end light microscopes.

  12. The National Network for Technology Entrepreneurship and Commercialization (N2TEC): Bringing New Technologies to Market

    NASA Astrophysics Data System (ADS)

    Allen, Kathleen

    2003-03-01

    N2TEC, the National Network for Technology Entrepreneurship and Commercialization, is a National Science Foundation "Partnerships for Innovation" initiative designed to raise the level of innovation and technology commercialization in colleges, universities, and communities across the nation. N2TEC is creating a network of people and institutions, and a set of technology tools that will facilitate the pooling of resources and knowledge and enable faculty and students to share those resources and collaborate without regard to geographic boundaries. N2TEC will become the backbone by which educational institutions across the nation can move their technologies into new venture startups. The ultimate goal is to create new wealth and strengthen local, regional and national economies.

  13. Grid computing enhances standards-compatible geospatial catalogue service

    NASA Astrophysics Data System (ADS)

    Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang

    2010-04-01

    A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By referring to the profile of the catalogue service for Web, an innovative information model of a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards—the International Organization for Standardization (ISO)'s 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and their replicas managed by the Grid, the Grid data management service and information service from the Globus Toolkits are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both the Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University/Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services. This service makes it easy to share and interoperate geospatial resources by using Grid technology and extends Grid technology into the geoscience communities.
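
    To make the idea of a standards-compatible catalogue query concrete, the sketch below uses the OWSLib client library to search a CSW endpoint for records matching a free-text term. The endpoint URL is a placeholder, and the Grid-side replica selection described above sits behind such a catalogue rather than in this client code.

    ```python
    # Minimal CSW (Catalogue Service for the Web) query sketch using OWSLib.
    # The endpoint URL is a placeholder; Grid-enabled replica management is
    # not shown here.
    from owslib.csw import CatalogueServiceWeb
    from owslib.fes import PropertyIsLike

    csw = CatalogueServiceWeb("https://catalog.example.org/csw")  # placeholder endpoint

    # Full-text constraint against the csw:AnyText queryable.
    query = PropertyIsLike("csw:AnyText", "%land cover%")
    csw.getrecords2(constraints=[query], maxrecords=10, esn="summary")

    for ident, record in csw.records.items():
        print(ident, "-", record.title)
    ```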

  14. System design and implementation of digital-image processing using computational grids

    NASA Astrophysics Data System (ADS)

    Shen, Zhanfeng; Luo, Jiancheng; Zhou, Chenghu; Huang, Guangyu; Ma, Weifeng; Ming, Dongping

    2005-06-01

    As a special type of digital image, remotely sensed images are playing increasingly important roles in our daily lives. Because of the enormous amounts of data involved, and the difficulties of data processing and transfer, an important issue for current computer and geo-science experts is developing internet technology to implement rapid remotely sensed image processing. Computational grids are able to solve this problem effectively. These networks of computer workstations enable the sharing of data and resources, and are used by computer experts to solve imbalances of network resources and lopsided usage. In China, computational grids combined with spatial-information-processing technology have formed a new technology: namely, spatial-information grids. In the field of remotely sensed images, spatial-information grids work more effectively for network computing, data processing, resource sharing, task cooperation and so on. This paper focuses mainly on the application of computational grids to digital-image processing. Firstly, we describe the architecture of digital-image processing on the basis of computational grids; its implementation is then discussed in detail with respect to the technology of middleware. The whole network-based intelligent image-processing system is evaluated on the basis of the experimental analysis of remotely sensed image-processing tasks; the results confirm the feasibility of the application of computational grids to digital-image processing.
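
    The data-parallel core of such a system, splitting a large scene into tiles and processing them on whatever workers are available, can be illustrated without any grid middleware at all. The sketch below is a toy single-machine illustration of that idea; the tile operation, sizes, and worker pool are placeholders, and the scheduling, data-transfer, and middleware layers the paper describes are deliberately omitted.

    ```python
    # Toy illustration of the data-parallel idea behind grid-based image
    # processing: split a raster into tiles and process them on a worker pool.
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def enhance_tile(tile):
        """Stand-in per-tile operation (simple contrast stretch)."""
        lo, hi = tile.min(), tile.max()
        return (tile - lo) / (hi - lo + 1e-9)

    def process_image(image, tile_size=256, workers=4):
        tiles, slots = [], []
        for r in range(0, image.shape[0], tile_size):
            for c in range(0, image.shape[1], tile_size):
                tiles.append(image[r:r + tile_size, c:c + tile_size])
                slots.append((r, c))
        out = np.empty_like(image, dtype=float)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for (r, c), result in zip(slots, pool.map(enhance_tile, tiles)):
                out[r:r + result.shape[0], c:c + result.shape[1]] = result
        return out

    if __name__ == "__main__":
        scene = np.random.rand(1024, 1024)  # placeholder "remotely sensed" band
        print(process_image(scene).shape)
    ```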

  15. Synchronization of Finite State Shared Resources

    DTIC Science & Technology

    1976-03-01

    [OCR fragment of the report cover page: "Synchronization of Finite State Shared Resources", Edward A. Schneider, Department of Computer Science (AFOSR-TR report); a notice states that a significant number of pages do not reproduce legibly. The abstract begins: "The problem of synchronizing a set of operations defined on a shared resource..."]

  16. Cake: Enabling High-level SLOs on Shared Storage Systems

    DTIC Science & Technology

    2012-11-07

    [OCR fragment of the report front matter: "Cake: Enabling High-level SLOs on Shared Storage Systems", Andrew Wang, Shivaram Venkataraman, Sara Alspaugh, Randy H. Katz, Ion Stoica, Electrical... The remaining text consists of signature-page and reference-list fragments.]

  17. Partnership For Edge Physics Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parashar, Manish

    In this effort, we will extend our prior work as part of CPES (i.e., DART and DataSpaces) to support in-situ tight coupling between application codes that exploits data locality and core-level parallelism to maximize on-chip data exchange and reuse. This will be accomplished by mapping coupled simulations so that the data exchanges are more localized within the nodes. Coupled simulation workflows can more effectively utilize the resources available on emerging HEC platforms if they can be mapped and executed to exploit data locality as well as the communication patterns between application components. Scheduling and running such workflows requires an extended framework that should provide a unified hybrid abstraction to enable coordination and data sharing across computation tasks that run on heterogeneous multi-core-based systems, and develop a data-locality based dynamic task scheduling approach to increase on-chip or intra-node data exchanges and in-situ execution. This effort will extend our prior work as part of CPES (i.e., DART and DataSpaces), which provided a simple virtual shared-space abstraction hosted at the staging nodes, to support application coordination, data sharing and active data processing services. Moreover, it will transparently manage the low-level operations associated with the inter-application data exchange, such as data redistributions, and will enable running coupled simulation workflows on multi-core computing platforms.
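
    The "simple virtual shared-space abstraction" referred to above can be pictured as a coordination space into which coupled codes put and get named, versioned data objects. The sketch below is a toy in-memory stand-in for that pattern; it is not the DataSpaces API, and the variable names and blocking-get behavior are illustrative assumptions only.

    ```python
    # Toy in-memory stand-in for a virtual shared-space coordination abstraction.
    # Not the DataSpaces API: names, versions, and the blocking get illustrate
    # the put/get coupling pattern only.
    import threading

    class SharedSpace:
        def __init__(self):
            self._store = {}
            self._cv = threading.Condition()

        def put(self, name, version, data):
            """Producer publishes a named, versioned data object."""
            with self._cv:
                self._store[(name, version)] = data
                self._cv.notify_all()

        def get(self, name, version, timeout=10.0):
            """Consumer blocks until the requested (name, version) is available."""
            with self._cv:
                ok = self._cv.wait_for(lambda: (name, version) in self._store, timeout)
                if not ok:
                    raise TimeoutError(f"{name} v{version} never arrived")
                return self._store[(name, version)]

    if __name__ == "__main__":
        space = SharedSpace()
        threading.Thread(target=lambda: space.put("edge_density", 1, [0.1, 0.2])).start()
        print(space.get("edge_density", 1))  # coupled code reads what the producer wrote
    ```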

  18. A Cyber Enabled Collaborative Environment for Creating, Sharing and Using Data and Modeling Driven Curriculum Modules for Hydrology Education

    NASA Astrophysics Data System (ADS)

    Merwade, V.; Ruddell, B. L.; Fox, S.; Iverson, E. A. R.

    2014-12-01

    With access to emerging datasets and computational tools, there is a need to bring these capabilities into hydrology classrooms. However, developing curriculum modules using data and models to augment classroom teaching is hindered by a steep technology learning curve, rapid technology turnover, and lack of an organized community cyberinfrastructure (CI) for the dissemination, publication, and sharing of the latest tools and curriculum material for hydrology and geoscience education. The objective of this project is to overcome some of these limitations by developing a cyber enabled collaborative environment for publishing, sharing and adoption of data and modeling driven curriculum modules in hydrology and geosciences classrooms. The CI is based on Carleton College's Science Education Resource Center (SERC) Content Management System. Building on its existing community authoring capabilities, the system is being extended to allow assembly of new teaching activities by drawing on a collection of interchangeable building blocks, each of which represents a step in the modeling process. Currently the system hosts more than 30 modules or steps, which can be combined to create multiple learning units. Two specific units, Unit Hydrograph and Rational Method, have been used in undergraduate hydrology classrooms at Purdue University and Arizona State University. The structure of the CI and the lessons learned from its implementation, including preliminary results from student assessments of learning, will be presented.

  19. Conflict in Protected Areas: Who Says Co-Management Does Not Work?

    PubMed Central

    Arts, Bas; Vranckx, An; Léon-Sicard, Tomas; Van Damme, Patrick

    2015-01-01

    Natural resource-related conflicts can be extremely destructive and undermine environmental protection. Since the 1990s co-management schemes, whereby the management of resources is shared by public and/or private sector stakeholders, have been a main strategy for reducing these conflicts worldwide. Despite initial high hopes, in recent years co-management has been perceived as falling short of expectations. However, systematic assessments of its role in conflict prevention or mitigation are non-existent. Interviews with 584 residents from ten protected areas in Colombia revealed that co-management can be successful in reducing conflict at grassroots level, as long as some critical enabling conditions, such as effective participation in the co-management process, are fulfilled not only on paper but also by praxis. We hope these findings will re-incentivize global efforts to make co-management work in protected areas and other common pool resource contexts, such as fisheries, agriculture, forestry and water management. PMID:26714036

  20. Religion as a resource for positive youth development: religion, social capital, and moral outcomes.

    PubMed

    Ebstyne King, Pamela; Furrow, James L

    2004-09-01

    Although existing literature demonstrates that developmental benefits are associated with religion for adolescents, little is understood about the dynamics of this relationship. Drawing on social capital theory, this study tested a conceptual model exploring socially embedded religious influences on moral outcomes. A three-dimensional model of social capital demonstrated how social interaction, trust, and shared vision enable social ties associated with religiousness to influence moral behavior. Structural equation modeling was used with data gathered from 735 urban youths to test a proposed model of the effects of religiousness on moral outcomes. Results suggested that religiously active youths report higher levels of social capital resources and that the influence of adolescent religiousness on moral outcomes was mediated through social capital resources. Suggestions for further research and implications for faith-based youth development organizations are considered. Copyright 2004 American Psychological Association

  1. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    PubMed

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
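
    As a concrete illustration of treating database queries as an integral part of the analysis, the sketch below stores per-trial fMRI summary values in SQLite and computes a condition contrast with a single SQL statement. The table layout and column names are invented for the example and are not the schema used by the authors.

    ```python
    # Illustration of database-backed analysis: per-trial fMRI summary values
    # in SQLite, with the condition contrast expressed as a SQL query. The
    # schema is invented for the example, not the authors' actual design.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE trials (
                        subject   TEXT,
                        condition TEXT,
                        roi       TEXT,
                        mean_bold REAL)""")
    conn.executemany("INSERT INTO trials VALUES (?, ?, ?, ?)", [
        ("s01", "speech", "STG", 0.82), ("s01", "rest", "STG", 0.11),
        ("s02", "speech", "STG", 0.74), ("s02", "rest", "STG", 0.09),
    ])

    # One query computes the per-condition mean directly in the database.
    for condition, mean_bold in conn.execute(
            "SELECT condition, AVG(mean_bold) FROM trials "
            "WHERE roi = 'STG' GROUP BY condition"):
        print(condition, round(mean_bold, 3))
    ```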

  2. Improving the Analysis, Storage and Sharing of Neuroimaging Data using Relational Databases and Distributed Computing

    PubMed Central

    Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812

  3. Enabling large-scale next-generation sequence assembly with Blacklight

    PubMed Central

    Couger, M. Brian; Pipes, Lenore; Squina, Fabio; Prade, Rolf; Siepel, Adam; Palermo, Robert; Katze, Michael G.; Mason, Christopher E.; Blood, Philip D.

    2014-01-01

    Summary: A variety of extremely challenging biological sequence analyses were conducted on the XSEDE large shared memory resource Blacklight, using current bioinformatics tools and encompassing a wide range of scientific applications. These include genomic sequence assembly, very large metagenomic sequence assembly, transcriptome assembly, and sequencing error correction. The data sets used in these analyses included uncategorized fungal species, reference microbial data, very large soil and human gut microbiome sequence data, and primate transcriptomes, composed of both short-read and long-read sequence data. A new parallel command execution program was developed on the Blacklight resource to handle some of these analyses. These results, initially reported at XSEDE13 and expanded here, represent significant advances for their respective scientific communities. The breadth and depth of the results achieved demonstrate the ease of use, versatility, and unique capabilities of the Blacklight XSEDE resource for scientific analysis of genomic and transcriptomic sequence data, and the power of these resources, together with XSEDE support, in meeting the most challenging scientific problems. PMID:25294974
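
    The parallel command execution program is described only at a high level. A minimal stand-in for the idea, written here with Python's standard library rather than the actual Blacklight tool, takes a list of shell commands and fans them out across workers:

    ```python
    # Minimal stand-in for a parallel command execution utility: run a list
    # of shell commands concurrently and report exit codes. This illustrates
    # the idea only; it is not the program developed on Blacklight.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def run(cmd):
        """Run one shell command, returning (command, exit_code)."""
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return cmd, proc.returncode

    commands = [
        "echo assemble sample_A",   # placeholders for real assembler invocations
        "echo assemble sample_B",
        "echo correct-errors sample_C",
    ]

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=4) as pool:
            for cmd, code in pool.map(run, commands):
                print(f"[exit {code}] {cmd}")
    ```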

  4. Outsourcing. Health care organizations are considering strategic goals in making outsourcing decisions.

    PubMed

    Chin, T L

    1997-08-01

    More health care organizations are outsourcing the management of some or all of their information systems. Executives at many organizations that have tried outsourcing say it enables them to focus on core competencies, better allocate resources, get more information technology at less cost, share risks of implementing information technology with outsourcers and guarantee access to skilled labor. But the information technology outsourcing market remains relatively small in health care because many CIOs still are wary of turning over control of important functions to outsiders.

  5. ZINC: A Free Tool to Discover Chemistry for Biology

    PubMed Central

    2012-01-01

    ZINC is a free public resource for ligand discovery. The database contains over twenty million commercially available molecules in biologically relevant representations that may be downloaded in popular ready-to-dock formats and subsets. The Web site also enables searches by structure, biological activity, physical property, vendor, catalog number, name, and CAS number. Small custom subsets may be created, edited, shared, docked, downloaded, and conveyed to a vendor for purchase. The database is maintained and curated for a high purchasing success rate and is freely available at zinc.docking.org. PMID:22587354

  6. USDA Climate Hubs - delivering usable information and tools to farmers, ranchers and forest land managers - Communication insights from the Regions

    NASA Astrophysics Data System (ADS)

    Johnson, R.; Steele, R.

    2016-12-01

    The USDA Climate Hubs were established in 2014 to develop and deliver science-based, region-specific information and technologies, with USDA agencies and partners, to agricultural and natural resource managers to enable climate-informed decision-making. In the two and a half years of their existence, our regional leads have gained insights into communicating with the agricultural and forestry communities throughout the different regions of the country. Perspectives differ somewhat among regions and sectors. This talk will share those various insights.

  7. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments to utilize the computing power of the millions of computers on the Internet, and use them towards running large scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models, and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large scale hydrological simulations and model runs in an open and integrated environment.
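
    The queue-management role of the relational database mentioned above can be pictured as a small table of pending work units that volunteer nodes claim and complete. The sketch below uses SQLite purely as an illustration; the table layout and the claim/complete helpers are invented, and the platform's browser-side JavaScript workers are not shown.

    ```python
    # Toy work-queue for volunteer computing, backed by a relational database.
    # Table layout and function names are invented for illustration; the real
    # platform's browser-side workers and API are not reproduced here.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE tasks (
                      id       INTEGER PRIMARY KEY,
                      subbasin TEXT,
                      status   TEXT DEFAULT 'pending',
                      result   REAL)""")
    db.executemany("INSERT INTO tasks (subbasin) VALUES (?)",
                   [("upper",), ("middle",), ("lower",)])

    def claim_task():
        """Hand the next pending work unit to a volunteer node."""
        row = db.execute("SELECT id, subbasin FROM tasks "
                         "WHERE status = 'pending' LIMIT 1").fetchone()
        if row:
            db.execute("UPDATE tasks SET status = 'running' WHERE id = ?", (row[0],))
        return row

    def complete_task(task_id, value):
        """Store the result returned by the volunteer node."""
        db.execute("UPDATE tasks SET status = 'done', result = ? WHERE id = ?",
                   (value, task_id))

    if __name__ == "__main__":
        task = claim_task()
        complete_task(task[0], 42.0)        # volunteer node returned a result
        print(db.execute("SELECT * FROM tasks").fetchall())
    ```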

  8. Megatux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-25

    The Megatux platform enables the emulation of large scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows for multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware, but run actual software enabling large scale without sacrificing fidelity.

  9. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    NASA Astrophysics Data System (ADS)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing was first proposed by Google in the United States; it is centered on the Internet and provides a standard and open approach to shared network services. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall far short of actual teaching needs; cloud computing, which uses Internet technology to provide shared resources, has therefore become an important means of sharing digital education resources in current higher education. Based on the cloud computing environment, the paper analyzed the existing problems in the sharing of digital educational resources among independent colleges in Jiangxi Province. Drawing on the sharing characteristics of cloud computing, such as mass storage, efficient operation and low cost, the author explored the design of a sharing model for the digital educational resources of higher education in independent colleges. Finally, the design of the shared model was put into practical application.

  10. Internest food sharing within wood ant colonies: resource redistribution behavior in a complex system

    PubMed Central

    Robinson, Elva J.H.

    2016-01-01

    Resource sharing is an important cooperative behavior in many animals. Sharing resources is particularly important in social insect societies, as division of labor often results in most individuals including, importantly, the reproductives, relying on other members of the colony to provide resources. Sharing resources between individuals is therefore fundamental to the success of social insects. Resource sharing is complicated if a colony inhabits several spatially separated nests, a nesting strategy common in many ant species. Resources must be shared not only between individuals in a single nest but also between nests. We investigated the behaviors facilitating resource redistribution between nests in a dispersed-nesting population of wood ant Formica lugubris. We marked ants, in the field, as they transported resources along the trails between nests of a colony, to investigate how the behavior of individual workers relates to colony-level resource exchange. We found that workers from a particular nest “forage” to other nests in the colony, treating them as food sources. Workers treating other nests as food sources means that simple, pre-existing foraging behaviors are used to move resources through a distributed system. It may be that this simple behavioral mechanism facilitates the evolution of this complex life-history strategy. PMID:27004016

  11. Design of Community Resource Inventories as a Component of Scalable Earth Science Infrastructure: Experience of the Earthcube CINERGI Project

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Richard, S. M.; Valentine, D. W., Jr.; Grethe, J. S.; Hsu, L.; Malik, T.; Bermudez, L. E.; Gupta, A.; Lehnert, K. A.; Whitenack, T.; Ozyurt, I. B.; Condit, C.; Calderon, R.; Musil, L.

    2014-12-01

    EarthCube is envisioned as a cyberinfrastructure that fosters new, transformational geoscience by enabling sharing, understanding and scientifically-sound and efficient re-use of formerly unconnected data resources, software, models, repositories, and computational power. Its purpose is to enable science enterprise and workforce development via an extensible and adaptable collaboration and resource integration framework. A key component of this vision is development of comprehensive inventories supporting resource discovery and re-use across geoscience domains. The goal of the EarthCube CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability) project is to create a methodology and assemble a large inventory of high-quality information resources with standard metadata descriptions and traceable provenance. The inventory is compiled from metadata catalogs maintained by geoscience data facilities, as well as from user contributions. The latter mechanism relies on community resource viewers: online applications that support update and curation of metadata records. Once harvested into CINERGI, metadata records from domain catalogs and community resource viewers are loaded into a staging database implemented in MongoDB, and validated for compliance with ISO 19139 metadata schema. Several types of metadata defects detected by the validation engine are automatically corrected with help of several information extractors or flagged for manual curation. The metadata harvesting, validation and processing components generate provenance statements using W3C PROV notation, which are stored in a Neo4J database. Thus curated metadata, along with the provenance information, is re-published and accessed programmatically and via a CINERGI online application. This presentation focuses on the role of resource inventories in a scalable and adaptable information infrastructure, and on the CINERGI metadata pipeline and its implementation challenges. Key project components are described at the project's website (http://workspace.earthcube.org/cinergi), which also provides access to the initial resource inventory, the inventory metadata model, metadata entry forms and a collection of the community resource viewers.
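
    The pipeline steps named above (staging harvested records, validating them, and recording provenance) can be illustrated with a small sketch. The required-field check stands in for full ISO 19139 schema validation, and the field names, defect codes, and PROV-style statement are placeholders rather than the project's actual implementation.

    ```python
    # Sketch of a harvest -> validate -> provenance step for catalogue metadata.
    # Field names, defect codes, and the PROV-style statement are placeholders;
    # the real pipeline validates against ISO 19139 and stores provenance in a
    # graph database.
    from datetime import datetime, timezone

    REQUIRED_FIELDS = ("title", "abstract", "bbox", "contact")

    def validate(record):
        """Return a list of defects (missing required fields) for one record."""
        return [f"missing:{field}" for field in REQUIRED_FIELDS if not record.get(field)]

    def provenance(record_id, source_catalog, defects):
        """Emit a W3C PROV-flavoured statement describing the processing step."""
        return {
            "prov:entity": record_id,
            "prov:wasDerivedFrom": source_catalog,
            "prov:wasGeneratedBy": "cinergi:metadata-validation",  # placeholder activity name
            "prov:generatedAtTime": datetime.now(timezone.utc).isoformat(),
            "defects": defects,
        }

    if __name__ == "__main__":
        rec = {"id": "rec-001", "title": "Stream gauge sites", "abstract": "",
               "bbox": None, "contact": "data@example.org"}
        print(provenance(rec["id"], "catalog.example.org", validate(rec)))
    ```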

  12. Using Linked Open Data and Semantic Integration to Search Across Geoscience Repositories

    NASA Astrophysics Data System (ADS)

    Mickle, A.; Raymond, L. M.; Shepherd, A.; Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Cheatham, M.; Fils, D.; Hitzler, P.; Janowicz, K.; Jones, M.; Krisnadhi, A.; Lehnert, K. A.; Narock, T.; Schildhauer, M.; Wiebe, P. H.

    2014-12-01

    The MBLWHOI Library is a partner in the OceanLink project, an NSF EarthCube Building Block, applying semantic technologies to enable knowledge discovery, sharing and integration. OceanLink is testing ontology design patterns that link together: two data repositories, Rolling Deck to Repository (R2R) and the Biological and Chemical Oceanography Data Management Office (BCO-DMO); the MBLWHOI Library Institutional Repository (IR), Woods Hole Open Access Server (WHOAS); National Science Foundation (NSF) funded awards; and American Geophysical Union (AGU) conference presentations. The Library is collaborating with scientific users, data managers, DSpace engineers, experts in ontology design patterns, and user interface developers to make WHOAS, a DSpace repository, linked open data enabled. The goal is to allow searching across repositories without any of the information providers having to change how they manage their collections. The tools developed for DSpace will be made available to the community of users. There are 257 registered DSpace repositories in the United States and over 1700 worldwide. Outcomes include: integration of DSpace with the OpenRDF Sesame triple store to provide a SPARQL endpoint for the storage and query of RDF representations of DSpace resources; mapping of DSpace resources to the OceanLink ontology; and a DSpace "data" add-on to provide resolvable linked open data representations of DSpace resources.
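
    Once DSpace resources are exposed through a SPARQL endpoint, cross-repository search reduces to querying that endpoint. The sketch below uses the SPARQLWrapper client against a placeholder endpoint URL with an illustrative Dublin Core graph pattern; the actual OceanLink ontology terms are not reproduced here.

    ```python
    # Query a (placeholder) SPARQL endpoint exposing DSpace resources as RDF.
    # The endpoint URL and the Dublin Core-based graph pattern are illustrative;
    # the actual OceanLink ontology terms are not reproduced here.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://sparql.example.org/whoas")  # placeholder endpoint
    sparql.setQuery("""
        PREFIX dcterms: <http://purl.org/dc/terms/>
        SELECT ?item ?title WHERE {
            ?item dcterms:title ?title .
            FILTER(CONTAINS(LCASE(STR(?title)), "ctd"))
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)

    results = sparql.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["item"]["value"], "-", binding["title"]["value"])
    ```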

  13. Phases of "pre-engagement" capacity building: discovery, exploration, and trial alliance.

    PubMed

    Campbell-Voytal, Kimberly

    2010-01-01

    Academic prevention researchers who engage limited-resource communities may find that organizational or community capacity for prevention is low. Community organizations, neighborhoods, and academic partners may lack shared issue awareness, mutual interests, and interactive skills necessary for collaborative intervention. Existing capacity building models either ignore a 'pre-engagement' phase or acknowledge it without offering strategic detail. An exploratory or developmental phase before active engagement can be achieved through co-located work in a community setting. The construct, "ecology of practice," provides conceptual background for examining how "shared work" introduces and prepares partners for future collaboration consistent with community-based participatory research (CBPR) principles. This paper presents two case studies where pre-engagement capacity building involved partners who were initially unaware, disinterested, or unable to engage in preventive interventions. These cases illustrate how mutual participation in shared "ecologies of practice" enabled an exchange of cultural knowledge, skill, and language that laid the groundwork for future preventive intervention. A trajectory of developmental work in each case occurred over 5 years. Historical timelines, interviews, and personal communications between community and academic leaders were reviewed and common themes identified. A model of "pre-capacity building" emerged. Capacity-building models that detail strategies for developing equitable engagement in under-resourced settings will more effectively move best practices into vulnerable communities. Preventive interventions must be translated equitably if health disparities are to be reduced.

  14. Multi-level meta-workflows: new concept for regularly occurring tasks in quantum chemistry.

    PubMed

    Arshad, Junaid; Hoffmann, Alexander; Gesing, Sandra; Grunzke, Richard; Krüger, Jens; Kiss, Tamas; Herres-Pawlis, Sonja; Terstyanszky, Gabor

    2016-01-01

    In Quantum Chemistry, many tasks recur frequently, e.g. geometry optimizations, benchmarking series, etc. Here, workflows can help to reduce the time of manual job definition and output extraction. These workflows are executed on computing infrastructures and may require large computing and data resources. Scientific workflows hide these infrastructures and the resources needed to run them. It requires significant effort and specific expertise to design, implement and test these workflows. Many of these workflows are complex and monolithic entities that can be used for particular scientific experiments. Hence, their modification is not straightforward and it is almost impossible to share them. To address these issues we propose developing atomic workflows and embedding them in meta-workflows. Atomic workflows deliver a well-defined, research domain specific function. Publishing workflows in repositories enables workflow sharing inside and/or among scientific communities. We formally specify atomic and meta-workflows in order to define data structures to be used in repositories for uploading and sharing them. Additionally, we present a formal description focused on the orchestration of atomic workflows into meta-workflows. We investigated the operations that represent basic functionalities in Quantum Chemistry, developed the relevant atomic workflows and combined them into meta-workflows. Having these workflows, we defined the structure of the Quantum Chemistry workflow library and uploaded these workflows to the SHIWA Workflow Repository. Graphical abstract: Meta-workflows and embedded workflows in the template representation.
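
    The atomic/meta-workflow split described above can be pictured as small single-purpose steps composed into a larger pipeline. The sketch below is a generic composition illustration only; the step names and the compose helper are invented and do not correspond to the SHIWA repository's formal workflow descriptions.

    ```python
    # Generic illustration of composing atomic workflows into a meta-workflow.
    # Step names, placeholder values, and compose() are invented; they do not
    # reflect the SHIWA repository's formal workflow descriptions.
    def build_input(molecule):
        return {"molecule": molecule, "basis": "def2-SVP"}     # placeholder settings

    def geometry_optimization(job):
        return {**job, "geometry": "optimized"}                # stand-in for a QC run

    def extract_energy(job):
        return {"molecule": job["molecule"], "energy_hartree": -76.0}  # dummy value

    def compose(*atomic_steps):
        """Chain atomic workflows into a meta-workflow (plain function composition)."""
        def meta(data):
            for step in atomic_steps:
                data = step(data)
            return data
        return meta

    if __name__ == "__main__":
        benchmark = compose(build_input, geometry_optimization, extract_energy)
        print(benchmark("H2O"))
    ```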

  15. Being Sticker Rich: Numerical Context Influences Children’s Sharing Behavior

    PubMed Central

    Posid, Tasha; Fazio, Allyse; Cordes, Sara

    2015-01-01

    Young children spontaneously share resources with anonymous recipients, but little is known about the specific circumstances that promote or hinder these prosocial tendencies. Children (ages 3–11) received a small (12) or large (30) number of stickers, and were then given the opportunity to share their windfall with either one or multiple anonymous recipients (Dictator Game). Whether a child chose to share or not varied as a function of age, but was uninfluenced by numerical context. Moreover, children’s giving was consistent with a proportion-based account, such that children typically donated a similar proportion (but different absolute number) of the resources given to them, regardless of whether they originally received a small or large windfall. The proportion of resources donated, however, did vary based on the number of recipients with whom they were allowed to share, such that on average, children shared more when there were more recipients available, particularly when they had more resources, suggesting they take others into consideration when making prosocial decisions. Finally, results indicated that a child’s gender also predicted sharing behavior, with males generally sharing more resources than females. Together, findings suggest that the numerical contexts under which children are asked to share, as well as the quantity of resources that they have to share, may interact to promote (or hinder) altruistic behaviors throughout childhood. PMID:26535900

  16. 38 CFR 17.240 - Sharing specialized medical resources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... medical resources. 17.240 Section 17.240 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Sharing of Medical Facilities, Equipment, and Information § 17.240 Sharing specialized medical resources. Subject to such terms and conditions as the Under Secretary for Health shall prescribe...

  17. 38 CFR 17.240 - Sharing specialized medical resources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... medical resources. 17.240 Section 17.240 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Sharing of Medical Facilities, Equipment, and Information § 17.240 Sharing specialized medical resources. Subject to such terms and conditions as the Under Secretary for Health shall prescribe...

  18. Exploratory study on Marine SDI implementation in Malaysia

    NASA Astrophysics Data System (ADS)

    Tarmidi, Zakri; Mohd Shariff, Abdul Rashid; Rodzi Mahmud, Ahmad; Zaiton Ibrahim, Zelina; Halim Hamzah, Abdul

    2016-06-01

    This paper discusses an exploratory study of the implementation of spatial data sharing between Malaysia's marine organisations. The survey method was selected, with a questionnaire as the instrument for data collection and analysis. The aim of the questionnaire was to determine the critical factors in enabling marine spatial data sharing in Malaysia, and the relationship between these indicators. A questionnaire was sent to 48 marine and coastal organisations in Malaysia, with 84.4% of respondents answering the questionnaire. The respondents selected were people who were directly involved with GIS applications in their organisations. The results show there are three main issues in implementing spatial data sharing: (1) GIS planning and implementation in the organisation, (2) spatial data sharing knowledge and implementation in the organisation and (3) collaboration to enable spatial data sharing within and between organisations. To improve GIS implementation, spatial data sharing implementation and collaboration in enabling spatial data sharing, a conceptual collaboration model was proposed with components of marine GIS strategic planning, spatial data sharing strategies and collaboration strategy.

  19. Utilizing Online Connectivity to Combat Reduced Federal Spending

    NASA Astrophysics Data System (ADS)

    Mayall, T.

    2013-12-01

    With a diminishing grant pool and increasing competition for federal funding, utilizing free online resources to collaborate with other scientists, share information and insights, and promote your research is critical to success. As budgets tighten, efficient use of both time and money is becoming more and more important. Tools such as Mendeley, ResearchGate, and Science Exchange enable scientists to promote their own work while gaining valuable connections and collaborations. Additionally, scientists can build their online presence to increase visibility for potential funding. Through intelligent use of these online tools, scientists can increase their chances of funding and minimize wasted time and resources. For this session, I will examine how to adapt to the changing landscape of federal funding through the effective use of social media and online tools.

  20. Mitochondrial Disease Sequence Data Resource (MSeqDR): a global grass-roots consortium to facilitate deposition, curation, annotation, and integrated analysis of genomic data for the mitochondrial disease clinical and research communities.

    PubMed

    Falk, Marni J; Shen, Lishuang; Gonzalez, Michael; Leipzig, Jeremy; Lott, Marie T; Stassen, Alphons P M; Diroma, Maria Angela; Navarro-Gomez, Daniel; Yeske, Philip; Bai, Renkui; Boles, Richard G; Brilhante, Virginia; Ralph, David; DaRe, Jeana T; Shelton, Robert; Terry, Sharon F; Zhang, Zhe; Copeland, William C; van Oven, Mannis; Prokisch, Holger; Wallace, Douglas C; Attimonelli, Marcella; Krotoski, Danuta; Zuchner, Stephan; Gai, Xiaowu

    2015-03-01

    Success rates for genomic analyses of highly heterogeneous disorders can be greatly improved if a large cohort of patient data is assembled to enhance collective capabilities for accurate sequence variant annotation, analysis, and interpretation. Indeed, molecular diagnostics requires the establishment of robust data resources to enable data sharing that informs accurate understanding of genes, variants, and phenotypes. The "Mitochondrial Disease Sequence Data Resource (MSeqDR) Consortium" is a grass-roots effort facilitated by the United Mitochondrial Disease Foundation to identify and prioritize specific genomic data analysis needs of the global mitochondrial disease clinical and research community. A central Web portal (https://mseqdr.org) facilitates the coherent compilation, organization, annotation, and analysis of sequence data from both nuclear and mitochondrial genomes of individuals and families with suspected mitochondrial disease. This Web portal provides users with a flexible and expandable suite of resources to enable variant-, gene-, and exome-level sequence analysis in a secure, Web-based, and user-friendly fashion. Users can also elect to share data with other MSeqDR Consortium members, or even the general public, either by custom annotation tracks or through the use of a convenient distributed annotation system (DAS) mechanism. A range of data visualization and analysis tools are provided to facilitate user interrogation and understanding of genomic, and ultimately phenotypic, data of relevance to mitochondrial biology and disease. Currently available tools for nuclear and mitochondrial gene analyses include an MSeqDR GBrowse instance that hosts optimized mitochondrial disease and mitochondrial DNA (mtDNA) specific annotation tracks, as well as an MSeqDR locus-specific database (LSDB) that curates variant data on more than 1300 genes that have been implicated in mitochondrial disease and/or encode mitochondria-localized proteins. MSeqDR is integrated with a diverse array of mtDNA data analysis tools that are both freestanding and incorporated into an online exome-level dataset curation and analysis resource (GEM.app) that is being optimized to support needs of the MSeqDR community. In addition, MSeqDR supports mitochondrial disease phenotyping and ontology tools, and provides variant pathogenicity assessment features that enable community review, feedback, and integration with the public ClinVar variant annotation resource. A centralized Web-based informed consent process is being developed, with implementation of a Global Unique Identifier (GUID) system to integrate data deposited on a given individual from different sources. Community-based data deposition into MSeqDR has already begun. Future efforts will enhance capabilities to incorporate phenotypic data that enhance genomic data analyses. MSeqDR will fill the existing void in bioinformatics tools and centralized knowledge that are necessary to enable efficient nuclear and mtDNA genomic data interpretation by a range of shareholders across both clinical diagnostic and research settings. Ultimately, MSeqDR is focused on empowering the global mitochondrial disease community to better define and explore mitochondrial diseases. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Mitochondrial Disease Sequence Data Resource (MSeqDR): A global grass-roots consortium to facilitate deposition, curation, annotation, and integrated analysis of genomic data for the mitochondrial disease clinical and research communities

    PubMed Central

    Falk, Marni J.; Shen, Lishuang; Gonzalez, Michael; Leipzig, Jeremy; Lott, Marie T.; Stassen, Alphons P.M.; Diroma, Maria Angela; Navarro-Gomez, Daniel; Yeske, Philip; Bai, Renkui; Boles, Richard G.; Brilhante, Virginia; Ralph, David; DaRe, Jeana T.; Shelton, Robert; Terry, Sharon; Zhang, Zhe; Copeland, William C.; van Oven, Mannis; Prokisch, Holger; Wallace, Douglas C.; Attimonelli, Marcella; Krotoski, Danuta; Zuchner, Stephan; Gai, Xiaowu

    2014-01-01

    Success rates for genomic analyses of highly heterogeneous disorders can be greatly improved if a large cohort of patient data is assembled to enhance collective capabilities for accurate sequence variant annotation, analysis, and interpretation. Indeed, molecular diagnostics requires the establishment of robust data resources to enable data sharing that informs accurate understanding of genes, variants, and phenotypes. The “Mitochondrial Disease Sequence Data Resource (MSeqDR) Consortium” is a grass-roots effort facilitated by the United Mitochondrial Disease Foundation to identify and prioritize specific genomic data analysis needs of the global mitochondrial disease clinical and research community. A central Web portal (https://mseqdr.org) facilitates the coherent compilation, organization, annotation, and analysis of sequence data from both nuclear and mitochondrial genomes of individuals and families with suspected mitochondrial disease. This Web portal provides users with a flexible and expandable suite of resources to enable variant-, gene-, and exome-level sequence analysis in a secure, Web-based, and user-friendly fashion. Users can also elect to share data with other MSeqDR Consortium members, or even the general public, either by custom annotation tracks or through use of a convenient distributed annotation system (DAS) mechanism. A range of data visualization and analysis tools are provided to facilitate user interrogation and understanding of genomic, and ultimately phenotypic, data of relevance to mitochondrial biology and disease. Currently available tools for nuclear and mitochondrial gene analyses include an MSeqDR GBrowse instance that hosts optimized mitochondrial disease and mitochondrial DNA (mtDNA) specific annotation tracks, as well as an MSeqDR locus-specific database (LSDB) that curates variant data on more than 1,300 genes that have been implicated in mitochondrial disease and/or encode mitochondria-localized proteins. MSeqDR is integrated with a diverse array of mtDNA data analysis tools that are both freestanding and incorporated into an online exome-level dataset curation and analysis resource (GEM.app) that is being optimized to support needs of the MSeqDR community. In addition, MSeqDR supports mitochondrial disease phenotyping and ontology tools, and provides variant pathogenicity assessment features that enable community review, feedback, and integration with the public ClinVar variant annotation resource. A centralized Web-based informed consent process is being developed, with implementation of a Global Unique Identifier (GUID) system to integrate data deposited on a given individual from different sources. Community-based data deposition into MSeqDR has already begun. Future efforts will enhance capabilities to incorporate phenotypic data that enhance genomic data analyses. MSeqDR will fill the existing void in bioinformatics tools and centralized knowledge that are necessary to enable efficient nuclear and mtDNA genomic data interpretation by a range of shareholders across both clinical diagnostic and research settings. Ultimately, MSeqDR is focused on empowering the global mitochondrial disease community to better define and explore mitochondrial disease. PMID:25542617
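
    The abstract above mentions a Global Unique Identifier (GUID) system for linking deposits on the same individual from different sources. As a purely illustrative sketch (not MSeqDR's actual implementation; the field set and salting scheme below are assumptions), one common pattern is to derive a stable pseudonymous identifier from salted, normalized identifying fields so that independent submitters produce the same GUID without exchanging the identifiers themselves:

```python
# Illustrative sketch only: a deterministic, salted pseudonymous identifier of the
# kind a GUID system might use to link records on the same individual across
# submitting sites without exposing identifying fields. Not MSeqDR's implementation.
import hashlib
import hmac

def make_guid(first_name: str, last_name: str, dob_iso: str, site_salt: bytes) -> str:
    """Derive a stable pseudonymous ID from normalized identifying fields."""
    normalized = "|".join(
        part.strip().lower() for part in (first_name, last_name, dob_iso)
    )
    digest = hmac.new(site_salt, normalized.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:20]

# Two sites using the same consortium salt derive the same GUID for one individual,
# so their deposits can be merged without sharing the underlying identifiers.
SHARED_SALT = b"consortium-secret"  # hypothetical shared secret
print(make_guid("Ada", "Lovelace", "1815-12-10", SHARED_SALT))
```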

  2. Enabling a Scientific Cloud Marketplace: VGL (Invited)

    NASA Astrophysics Data System (ADS)

    Fraser, R.; Woodcock, R.; Wyborn, L. A.; Vote, J.; Rankine, T.; Cox, S. J.

    2013-12-01

    The Virtual Geophysics Laboratory (VGL) provides a flexible, web based environment where researchers can browse data and use a variety of scientific software packaged into tool kits that run in the Cloud. Both data and tool kits are published by multiple researchers and registered with the VGL infrastructure forming a data and application marketplace. The VGL provides the basic work flow of Discovery and Access to the disparate data sources and a Library for tool kits and scripting to drive the scientific codes. Computation is then performed on the Research or Commercial Clouds. Provenance information is collected throughout the work flow and can be published alongside the results allowing for experiment comparison and sharing with other researchers. VGL's "mix and match" approach to data, computational resources and scientific codes, enables a dynamic approach to scientific collaboration. VGL allows scientists to publish their specific contribution, be it data, code, compute or work flow, knowing the VGL framework will provide other components needed for a complete application. Other scientists can choose the pieces that suit them best to assemble an experiment. The coarse grain workflow of the VGL framework combined with the flexibility of the scripting library and computational toolkits allows for significant customisation and sharing amongst the community. The VGL utilises the cloud computational and storage resources from the Australian academic research cloud provided by the NeCTAR initiative and a large variety of data accessible from national and state agencies via the Spatial Information Services Stack (SISS - http://siss.auscope.org). VGL v1.2 screenshot - http://vgl.auscope.org
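
    As a toy illustration of the "mix and match" marketplace idea described above (hypothetical structures only, not the VGL API), datasets and toolkits can be registered independently and then paired into a cloud job description that carries its own provenance record:

```python
# Toy sketch of the "mix and match" marketplace idea: datasets and toolkits are
# registered independently, and a job is assembled by pairing one of each with a
# compute target. Hypothetical structures only; this is not the VGL API.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    url: str

@dataclass
class Toolkit:
    name: str
    script: str  # entry-point script the cloud VM would run

REGISTRY = {
    "datasets": [Dataset("gravity-grid", "http://example.org/wcs/gravity")],
    "toolkits": [Toolkit("inversion", "run_inversion.py")],
}

def assemble_job(dataset_name: str, toolkit_name: str, cloud: str) -> dict:
    ds = next(d for d in REGISTRY["datasets"] if d.name == dataset_name)
    tk = next(t for t in REGISTRY["toolkits"] if t.name == toolkit_name)
    # Provenance is captured alongside the job description so results can be shared.
    return {"data": ds.url, "entry": tk.script, "cloud": cloud,
            "provenance": {"dataset": ds.name, "toolkit": tk.name}}

print(assemble_job("gravity-grid", "inversion", cloud="nectar"))
```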

  3. The essential nature of sharing in science.

    PubMed

    Fischer, Beth A; Zigmond, Michael J

    2010-12-01

    Advances in science are the combined result of the efforts of a great many scientists, and in many cases, their willingness to share the products of their research. These products include data sets, both small and large, and unique research resources not commercially available, such as cell lines and software programs. The sharing of these resources enhances both the scope and the depth of research, while making more efficient use of time and money. However, sharing is not without costs, many of which are borne by the individual who develops the research resource. Sharing, for example, reduces the uniqueness of the resources available to a scientist, potentially influencing the originator's perceived productivity and ultimately his or her competitiveness for jobs, promotions, and grants. Nevertheless, for most researchers (particularly those using public funds), sharing is no longer optional but must be considered an obligation to science, the funding agency, and ultimately society at large. Most funding agencies, journals, and professional societies now require a researcher who has published work involving a unique resource to make that resource available to other investigators. Changes could be implemented to mitigate some of the costs. The creator of the resource could explore the possibility of collaborating with those who request it. In addition, institutions that employ and fund researchers could change their policies and practices to make sharing a more attractive and viable option. For example, when evaluating an individual's productivity, institutions could provide credit for the impact a researcher has had on their field through the provision of their unique resources to other investigators, regardless of whether that impact is reflected in the researcher's list of publications. In addition, increased funding for the development and maintenance of user-friendly public repositories for data and research resources would also help to reduce barriers to sharing by minimizing the time, effort, and funding needed by individual investigators to comply with requests for their unique resource. Indeed, sharing is an imperative, but it is also essential to find ways to protect both the original owner of the resource and those wishing to share it.

  4. Applications of the pipeline environment for visual informatics and genomics computations

    PubMed Central

    2011-01-01

    Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges: graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment (http://pipeline.loni.ucla.edu) provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators - experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102
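
    To make the tool-wrapping idea concrete, the following is a minimal Python sketch (not the Pipeline's XML module definition language; the two-step alignment protocol and argument templates are assumptions) of describing a command-line tool by its executable plus parameter template and chaining two such modules:

```python
# Minimal sketch of the wrapping idea: a command-line tool is described by its
# executable plus templated input/output parameters, and modules are chained by
# feeding one module's output file to the next. This is an illustration in Python,
# not the LONI Pipeline XML module definition language.
import subprocess
from dataclasses import dataclass, field

@dataclass
class Module:
    executable: str
    args_template: list = field(default_factory=list)

    def run(self, **params) -> None:
        cmd = [self.executable] + [a.format(**params) for a in self.args_template]
        subprocess.run(cmd, check=True)  # raise if the wrapped tool fails

# Hypothetical two-step protocol: align reads, then index the resulting BAM file.
align = Module("bowtie", ["-S", "{index}", "{reads}", "{sam_out}"])
index = Module("samtools", ["index", "{bam}"])

# align.run(index="hg19", reads="sample.fastq", sam_out="sample.sam")
# index.run(bam="sample.bam")
```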

  5. Revenue-sharing clubs provide economic insurance and incentives for sustainability in common-pool resource systems.

    PubMed

    Tilman, Andrew R; Levin, Simon; Watson, James R

    2018-06-05

    Harvesting behaviors of natural resource users, such as farmers, fishermen and aquaculturists, are shaped by season-to-season and day-to-day variability, or in other words risk. Here, we explore how risk-mitigation strategies can lead to sustainable use and improved management of common-pool natural resources. Over-exploitation of unmanaged natural resources, which lowers their long-term productivity, is a central challenge facing societies. While effective top-down management is a possible solution, it is not available if the resource is outside the jurisdictional bounds of any management entity, or if existing institutions cannot effectively impose sustainable-use rules. Under these conditions, alternative approaches to natural resource governance are required. Here, we study revenue-sharing clubs as a mechanism by which resource users can mitigate their income volatility and importantly, as a co-benefit, are also incentivized to reduce their effort, leading to reduced over-exploitation and improved resource governance. We use game theoretic analyses and agent-based modeling to determine the conditions in which revenue-sharing can be beneficial for resource management as well as resource users. We find that revenue-sharing agreements can emerge and lead to improvements in resource management when there is large variability in production/revenue and when this variability is uncorrelated across members of the revenue-sharing club. Further, we show that if members of the revenue-sharing collective can sell their product at a price premium, then the range of ecological and economic conditions under which revenue-sharing can be a tool for management greatly expands. These results have implications for the design of bottom-up management, where resource users themselves are incentivized to operate in ecologically sustainable and economically advantageous ways. Copyright © 2018 Elsevier Ltd. All rights reserved.
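
    A minimal agent-based sketch of the insurance effect, assuming arbitrary revenue values and seasons that are uncorrelated across members (it is not the paper's game-theoretic model), shows how pooling and equally splitting volatile revenues lowers each member's income variance while leaving the mean unchanged:

```python
# Minimal sketch of the insurance effect of a revenue-sharing club: members with
# uncorrelated, volatile revenues pool and split income equally, which lowers each
# member's income variance. Illustrative only; not the paper's model or parameters.
import random
import statistics

def season_revenue() -> float:
    # Volatile individual revenue (good vs. bad season), uncorrelated across members.
    return random.choice([20.0, 100.0])

def simulate(n_members: int = 10, n_seasons: int = 5000, share: bool = True):
    incomes = []
    for _ in range(n_seasons):
        revenues = [season_revenue() for _ in range(n_members)]
        if share:
            incomes.append(sum(revenues) / n_members)  # equal split of pooled revenue
        else:
            incomes.append(revenues[0])                # track a single unpooled member
    return statistics.mean(incomes), statistics.stdev(incomes)

random.seed(1)
print("solo mean/sd:", simulate(share=False))
print("club mean/sd:", simulate(share=True))
```

    Running the sketch gives roughly the same mean income in both cases but a much smaller standard deviation for club members, which is the volatility reduction the abstract identifies as the incentive for joining.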

  6. Lowering the barriers to computational modeling of Earth's surface: coupling Jupyter Notebooks with Landlab, HydroShare, and CyberGIS for research and education.

    NASA Astrophysics Data System (ADS)

    Bandaragoda, C.; Castronova, A. M.; Phuong, J.; Istanbulluoglu, E.; Strauch, R. L.; Nudurupati, S. S.; Tarboton, D. G.; Wang, S. W.; Yin, D.; Barnhart, K. R.; Tucker, G. E.; Hutton, E.; Hobley, D. E. J.; Gasparini, N. M.; Adams, J. M.

    2017-12-01

    The ability to test hypotheses about hydrology, geomorphology and atmospheric processes is invaluable to research in the era of big data. Although community resources are available, there remain significant educational, logistical and time investment barriers to their use. Knowledge infrastructure is an emerging intellectual framework to understand how people are creating, sharing and distributing knowledge - which has been dramatically transformed by Internet technologies. In addition to the technical and social components in a cyberinfrastructure system, knowledge infrastructure considers educational, institutional, and open source governance components required to advance knowledge. We are designing an infrastructure environment that lowers common barriers to reproducing modeling experiments for earth surface investigation. Landlab is an open-source modeling toolkit for building, coupling, and exploring two-dimensional numerical models. HydroShare is an online collaborative environment for sharing hydrologic data and models. CyberGIS-Jupyter is an innovative cyberGIS framework for achieving data-intensive, reproducible, and scalable geospatial analytics using the Jupyter Notebook based on ROGER - the first cyberGIS supercomputer, so that models can be elastically reproduced through cloud computing approaches. Our team of geomorphologists, hydrologists, and computer geoscientists has created a new infrastructure environment that combines these three pieces of software to enable knowledge discovery. Through this novel integration, any user can interactively execute and explore their shared data and model resources. Landlab on HydroShare with CyberGIS-Jupyter supports the modeling continuum from fully developed modeling applications, prototyping new science tools, hands on research demonstrations for training workshops, and classroom applications. Computational geospatial models based on big data and high performance computing can now be more efficiently developed, improved, scaled, and seamlessly reproduced among multidisciplinary users, thereby expanding the active learning curriculum and research opportunities for students in earth surface modeling and informatics.
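
    Landlab is a Python toolkit, so the kind of experiment meant to be shared and re-run in a hosted Jupyter environment can be expressed in a few lines. The sketch below follows my understanding of the Landlab grid and component API (names and signatures may differ between Landlab versions) and uses arbitrary parameter values:

```python
# Notebook-style sketch of the kind of Landlab experiment that can be shared and
# re-run in a hosted Jupyter environment. Component names and signatures follow my
# understanding of the Landlab API and may differ between Landlab versions.
import numpy as np
from landlab import RasterModelGrid
from landlab.components import LinearDiffuser

grid = RasterModelGrid((25, 25), xy_spacing=10.0)        # 25 x 25 nodes, 10 m spacing
z = grid.add_zeros("topographic__elevation", at="node")  # elevation field on nodes
z += np.random.rand(grid.number_of_nodes)                # small random initial relief

diffuser = LinearDiffuser(grid, linear_diffusivity=0.01)  # hillslope diffusion, m^2/yr
for _ in range(200):
    diffuser.run_one_step(dt=100.0)                       # advance 100 yr per step

print("mean elevation after run:", z.mean())
```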

  7. AppVis: Enabling data-rich apps in app inventor

    NASA Astrophysics Data System (ADS)

    Harunani, Farzeen

    MIT App Inventor has enabled middle school students to learn computing while creating their own apps--including apps that serve community needs. However, few resources exist for building apps that gather and share data. There is a need for new tools and instructional materials for students to build data-enabled, community-focused apps. We developed an extension for App Inventor, called AppVis, which allows app-makers to publish and retrieve data from iSENSE, our existing web-based collaborative data visualization platform. We used AppVis and supporting instructional materials in two one-week summer camps attended by a total of 33 middle school students. Based on student interview data and analysis of their final apps, we found that our approach was broadly accessible to a diverse population of students. Students were motivated to build apps that could be used by their own communities. This thesis presents the design of AppVis and results from students' work in summer camps.

  8. A Hierarchical Auction-Based Mechanism for Real-Time Resource Allocation in Cloud Robotic Systems.

    PubMed

    Wang, Lujia; Liu, Ming; Meng, Max Q-H

    2017-02-01

    Cloud computing enables users to share computing resources on-demand. The cloud computing framework cannot be directly mapped to cloud robotic systems with ad hoc networks since cloud robotic systems have additional constraints such as limited bandwidth and dynamic structure. However, most multirobotic applications with cooperative control adopt this decentralized approach to avoid a single point of failure. Robots need to continuously update intensive data to execute tasks in a coordinated manner, which implies real-time requirements. Thus, a resource allocation strategy is required, especially in such resource-constrained environments. This paper proposes a hierarchical auction-based mechanism, namely link quality matrix (LQM) auction, which is suitable for ad hoc networks by introducing a link quality indicator. The proposed algorithm produces a fast and robust method that is accurate and scalable. It reduces both global communication and unnecessary repeated computation. The proposed method is designed for firm real-time resource retrieval for physical multirobot systems. A joint surveillance scenario empirically validates the proposed mechanism by assessing several practical metrics. The results show that the proposed LQM auction outperforms state-of-the-art algorithms for resource allocation.
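
    As a generic illustration of quality-aware, auction-based allocation (a simple greedy auction, not the LQM mechanism the paper defines), bids from robots can be discounted by a per-robot link-quality indicator before the winner of each shared resource is chosen:

```python
# Illustrative sketch of auction-based allocation with a link-quality weight: each
# robot bids for shared cloud resources, and bids are discounted by the quality of
# the robot's network link so poorly connected robots are less likely to win work
# they cannot sustain. A generic greedy auction, not the paper's LQM algorithm.
def allocate(resources, bids, link_quality):
    """bids[robot][resource] -> value; link_quality[robot] in (0, 1]."""
    allocation = {}
    for res in resources:
        scored = [
            (bids[robot].get(res, 0.0) * link_quality[robot], robot)
            for robot in bids
            if robot not in allocation.values()          # at most one resource per robot
        ]
        if scored:
            _, winner = max(scored)
            allocation[res] = winner
    return allocation

bids = {"r1": {"gpu": 5.0, "map": 2.0}, "r2": {"gpu": 4.0, "map": 3.0}}
quality = {"r1": 0.4, "r2": 0.9}  # r2 has the better wireless link
print(allocate(["gpu", "map"], bids, quality))  # r2 wins the GPU despite bidding less
```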

  9. A Case Study Optimizing Human Resources in Rwanda's First Dental School: Three Innovative Management Tools.

    PubMed

    Hackley, Donna M; Mumena, Chrispinus H; Gatarayiha, Agnes; Cancedda, Corrado; Barrow, Jane R

    2018-06-01

    Harvard School of Dental Medicine, University of Maryland School of Dentistry, and the University of Rwanda (UR) are collaborating to create Rwanda's first School of Dentistry as part of the Human Resources for Health (HRH) Rwanda initiative that aims to strengthen the health care system of Rwanda. The HRH oral health team developed three management tools to measure progress in systems-strengthening efforts: 1) the road map is an operations plan for the entire dental school and facilitates delivery of the curriculum and management of human and material resources; 2) each HRH U.S. faculty member develops a work plan with targeted deliverables for his or her rotation, which is facilitated with biweekly flash reports that measure progress and keep the faculty member focused on his or her specific deliverables; and 3) the redesigned HRH twinning model, changed from twinning of an HRH faculty member with a single Rwandan faculty member to twinning with multiple Rwandan faculty members based on shared academic interests and goals, has improved efficiency, heightened engagement of the UR dental faculty, and increased the impact of HRH U.S. faculty members. These new tools enable the team to measure its progress toward the collaborative's goals and understand the successes and challenges in moving toward the planned targets. The tools have been valuable instruments in fostering discussion around priorities and deployment of resources as well as in developing strong relationships, enabling two-way exchange of knowledge, and promoting sustainability.

  10. The National Network of State Perinatal Quality Collaboratives: A Growing Movement to Improve Maternal and Infant Health.

    PubMed

    Henderson, Zsakeba T; Ernst, Kelly; Simpson, Kathleen Rice; Berns, Scott; Suchdev, Danielle B; Main, Elliott; McCaffrey, Martin; Lee, Karyn; Rouse, Tara Bristol; Olson, Christine K

    2018-03-01

    State Perinatal Quality Collaboratives (PQCs) are networks of multidisciplinary teams working to improve maternal and infant health outcomes. To address the shared needs across state PQCs and enable collaboration, Centers for Disease Control and Prevention (CDC), in partnership with March of Dimes and perinatal quality improvement experts from across the country, supported the development and launch of the National Network of Perinatal Quality Collaboratives (NNPQC). This process included assessing the status of PQCs in this country and identifying the needs and resources that would be most useful to support PQC development. National representatives from 48 states gathered for the first meeting of the NNPQC to share best practices for making measurable improvements in maternal and infant health. The number of state PQCs has grown considerably over the past decade, with an active PQC or a PQC in development in almost every state. However, PQCs have some common challenges that need to be addressed. After its successful launch, the NNPQC is positioned to ensure that every state PQC has access to key tools and resources that build capacity to actively improve maternal and infant health outcomes and healthcare quality.

  11. The National Network of State Perinatal Quality Collaboratives: A Growing Movement to Improve Maternal and Infant Health.

    PubMed

    Henderson, Zsakeba T; Ernst, Kelly; Simpson, Kathleen Rice; Berns, Scott D; Suchdev, Danielle B; Main, Elliott; McCaffrey, Martin; Lee, Karyn; Rouse, Tara Bristol; Olson, Christine K

    2018-02-01

    State Perinatal Quality Collaboratives (PQCs) are networks of multidisciplinary teams working to improve maternal and infant health outcomes. To address the shared needs across state PQCs and enable collaboration, the Centers for Disease Control and Prevention, in partnership with March of Dimes and perinatal quality improvement experts from across the country, supported the development and launch of the National Network of Perinatal Quality Collaboratives (NNPQC). This process included assessing the status of PQCs in this country and identifying the needs and resources that would be most useful to support PQC development. National representatives from 48 states gathered for the first meeting of the NNPQC to share best practices for making measurable improvements in maternal and infant health. The number of state PQCs has grown considerably over the past decade, with an active PQC or a PQC in development in almost every state. However, PQCs have some common challenges that need to be addressed. After its successful launch, the NNPQC is positioned to ensure that every state PQC has access to key tools and resources that build capacity to actively improve maternal and infant health outcomes and healthcare quality.

  12. The OSG open facility: A sharing ecosystem

    DOE PAGES

    Jayatilaka, B.; Levshina, T.; Rynge, M.; ...

    2015-12-23

    The Open Science Grid (OSG) ties together individual experiments' computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and increased delivery of Distributed High-Throughput Computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites; this is primarily accomplished by harvesting and organizing the temporarily unused capacity (i.e., opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to make sure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues in expanding this service.

  13. A Belief-based Trust Model for Dynamic Service Selection

    NASA Astrophysics Data System (ADS)

    Ali, Ali Shaikh; Rana, Omer F.

    Provision of services across institutional boundaries has become an active research area. Many such services encode access to computational and data resources (ranging from single machines to computational clusters). Such services can also be informational, and integrate different resources within an institution. Consequently, we envision a service-rich environment in the future, where service consumers can intelligently decide which services to select. If interaction between service providers/users is automated, it is necessary for these service clients to be able to automatically choose between a set of equivalent (or similar) services. In such a scenario trust serves as a benchmark to differentiate between service providers. One might therefore prioritize potential cooperative partners based on the established trust. Although many approaches exist in the literature about trust between online communities, the exact nature of trust for multi-institutional service sharing remains undefined. Therefore, the concept of trust suffers from an imperfect understanding, a plethora of definitions, and informal use in the literature. We present a formalism for describing trust within multi-institutional service sharing, and provide an implementation of this, enabling the agent to make trust-based decisions. We evaluate our formalism through simulation.
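
    One common way to formalize such belief-based trust is a beta-reputation style score, where trust is the expected probability of a good outcome given past interactions. The sketch below is a generic illustration of trust-based service selection, not the specific formalism defined in this paper:

```python
# Generic sketch of a belief-based trust score using a beta-reputation style update:
# trust is the expected probability of a good outcome given past positive/negative
# interactions. This illustrates trust-based service selection in general, not the
# formalism the paper proposes.
from dataclasses import dataclass

@dataclass
class TrustRecord:
    positive: float = 1.0   # pseudo-counts (uniform prior)
    negative: float = 1.0

    def update(self, outcome_good: bool) -> None:
        if outcome_good:
            self.positive += 1.0
        else:
            self.negative += 1.0

    @property
    def trust(self) -> float:
        return self.positive / (self.positive + self.negative)

providers = {"svc-A": TrustRecord(), "svc-B": TrustRecord()}
for good in (True, True, False, True):
    providers["svc-A"].update(good)
providers["svc-B"].update(False)

# Select the equivalent service with the highest expected trust.
best = max(providers, key=lambda name: providers[name].trust)
print(best, providers[best].trust)
```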

  14. MIRASS: medical informatics research activity support system using information mashup network.

    PubMed

    Kiah, M L M; Zaidan, B B; Zaidan, A A; Nabi, Mohamed; Ibraheem, Rabiu

    2014-04-01

    The advancement of information technology has facilitated the automation and feasibility of online information sharing. The second generation of the World Wide Web (Web 2.0) enables the collaboration and sharing of online information through Web-serving applications. Data mashup, which is considered a Web 2.0 platform, plays an important role in information and communication technology applications. However, few ideas have been transformed into education and research domains, particularly in medical informatics. The creation of a friendly environment for medical informatics research requires the removal of certain obstacles in terms of search time, resource credibility, and search result accuracy. This paper considers three glitches that researchers encounter in medical informatics research; these glitches include the quality of papers obtained from scientific search engines (particularly, Web of Science and Science Direct), the quality of articles from the indices of these search engines, and the customizability and flexibility of these search engines. A customizable search engine for trusted resources of medical informatics was developed and implemented through data mashup. Results show that the proposed search engine improves the usability of scientific search engines for medical informatics. Pipe search engine was found to be more efficient than other engines.

  15. Enabling the sharing of neuroimaging data through well-defined intermediate levels of visibility.

    PubMed

    Smith, Kenneth; Jajodia, Sushil; Swarup, Vipin; Hoyt, Jeffrey; Hamilton, Gail; Faatz, Donald; Cornett, Todd

    2004-08-01

    The sharing of neuroimagery data offers great benefits to science; however, data owners sharing their data face substantial custodial responsibilities, such as ensuring data sets are correctly interpreted in their new shared context, protecting the identity and privacy of human research participants, and safeguarding the understood order of use. Given choices of sharing widely or not at all, the result will often be no sharing, due to the inability of data owners to control their exposure to the risks associated with data sharing. In this context, data sharing is enabled by providing data owners with well-defined intermediate levels of data visibility, progressing incrementally toward public visibility. In this paper, we define a novel and general data sharing model, Structured Sharing Communities (SSC), meeting this requirement. Arbitrary visibility levels representing collaborative agreements, consortium memberships, research organizations, and other affiliations are structured into a policy space through explicit paths of permissible information flow. Operations enable users and applications to manage the visibility of data and enforce access permissions and restrictions. We show how a policy space can be implemented in realistic neuroinformatic architectures with acceptable assurance of correctness, and briefly describe an open source implementation effort.
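
    The "policy space" of intermediate visibility levels can be pictured as a small directed graph in which data may only move along explicit, permitted flow edges toward wider visibility. The sketch below is illustrative only; the level names and flow graph are assumptions, not the SSC implementation:

```python
# Small sketch of the "policy space" idea: visibility levels are nodes in a directed
# graph, and data may move from one level to another only along an explicit path of
# permissible flows (here, progressively wider visibility). Illustrative only.
PERMISSIBLE_FLOWS = {
    "lab":           ["collaboration"],
    "collaboration": ["consortium"],
    "consortium":    ["public"],
    "public":        [],
}

def can_widen(current: str, target: str) -> bool:
    """True if the target visibility is reachable through permitted flow steps."""
    frontier, seen = [current], set()
    while frontier:
        level = frontier.pop()
        if level == target:
            return True
        if level not in seen:
            seen.add(level)
            frontier.extend(PERMISSIBLE_FLOWS.get(level, []))
    return False

print(can_widen("lab", "consortium"))   # True: lab -> collaboration -> consortium
print(can_widen("public", "lab"))       # False: flows never narrow back
```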

  16. CILogon: An Integrated Identity and Access Management Platform for Science

    NASA Astrophysics Data System (ADS)

    Basney, J.

    2016-12-01

    When scientists work together, they use web sites and other software to share their ideas and data. To ensure the integrity of their work, these systems require the scientists to log in and verify that they are part of the team working on a particular science problem. Too often, the identity and access verification process is a stumbling block for the scientists. Scientific research projects are forced to invest time and effort into developing and supporting Identity and Access Management (IAM) services, distracting them from the core goals of their research collaboration. CILogon provides an IAM platform that enables scientists to work together to meet their IAM needs more effectively so they can allocate more time and effort to their core mission of scientific research. The CILogon platform enables federated identity management and collaborative organization management. Federated identity management enables researchers to use their home organization identities to access cyberinfrastructure, rather than requiring yet another username and password to log on. Collaborative organization management enables research projects to define user groups for authorization to collaboration platforms (e.g., wikis, mailing lists, and domain applications). CILogon's IAM platform serves the unique needs of research collaborations, namely the need to dynamically form collaboration groups across organizations and countries, sharing access to data, instruments, compute clusters, and other resources to enable scientific discovery. CILogon provides a software-as-a-service platform to ease integration with cyberinfrastructure, while making all software components publicly available under open source licenses to enable re-use. Figure 1 illustrates the components and interfaces of this platform. CILogon has been operational since 2010 and has been used by over 7,000 researchers from more than 170 identity providers to access cyberinfrastructure including Globus, LIGO, Open Science Grid, SeedMe, and XSEDE. The "CILogon 2.0" platform, launched in 2016, adds support for virtual organization (VO) membership management, identity linking, international collaborations, and standard integration protocols, through integration with the Internet2 COmanage collaboration software.
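
    Because CILogon 2.0 supports standard integration protocols, a science gateway can typically bootstrap its integration from an OpenID Connect discovery document. The sketch below assumes the standard /.well-known/openid-configuration path is served at cilogon.org; confirm the exact endpoints and client registration steps against the current CILogon documentation before relying on them:

```python
# Hedged sketch: CILogon 2.0 exposes standard protocols such as OpenID Connect, so a
# science gateway can discover its endpoints from the usual OIDC discovery document.
# The discovery path below is an assumption; verify it against CILogon's docs.
import requests

DISCOVERY_URL = "https://cilogon.org/.well-known/openid-configuration"

resp = requests.get(DISCOVERY_URL, timeout=10)
resp.raise_for_status()
config = resp.json()

# A gateway would use these endpoints in a normal authorization-code flow, with a
# client_id/secret obtained by registering the gateway with CILogon.
print("authorize:", config.get("authorization_endpoint"))
print("token:    ", config.get("token_endpoint"))
print("userinfo: ", config.get("userinfo_endpoint"))
```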

  17. Important Non-Technical Skills in Video-Assisted Thoracoscopic Surgery Lobectomy: Team Perspectives.

    PubMed

    Gjeraa, Kirsten; Mundt, Anna S; Spanager, Lene; Hansen, Henrik J; Konge, Lars; Petersen, René H; Østergaard, Doris

    2017-07-01

    Safety in the operating room is dependent on the team's non-technical skills. The importance of non-technical skills appears to be different for minimally invasive surgery as compared with open surgery. The aim of this study was to identify which non-technical skills are perceived by team members to be most important for patient safety, in the setting of video-assisted thoracoscopic surgery (VATS) lobectomy. This was an explorative, semistructured interview-based study with 21 participants from all four thoracic surgery centers in Denmark that perform VATS lobectomy. Data analysis was deductive, and directed content analysis was used to code the text into the Oxford Non-Technical Skills system for evaluating operating teams' non-technical skills. The most important non-technical skills described by the VATS teams were planning and preparation, situation awareness, problem solving, leadership, risk assessment, and teamwork. These non-technical skills enabled the team to achieve shared mental models, which in turn facilitated their efforts to anticipate next steps. This was viewed as important by the participants as they saw VATS lobectomy as a high-risk procedure with complementary and overlapping scopes of practice between surgical and anesthesia subteams. This study identified six non-technical skills that serve as the foundation for shared mental models of the patient, the current situation, and team resources. These findings contribute three important additions to the shared mental model construct: planning and preparation, risk assessment, and leadership. Shared mental models are crucial for patient safety because they enable VATS teams to anticipate problems through adaptive patterns of both implicit and explicit coordination. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  18. Multidimensional proteomics for cell biology.

    PubMed

    Larance, Mark; Lamond, Angus I

    2015-05-01

    The proteome is a dynamic system in which each protein has interconnected properties - dimensions - that together contribute to the phenotype of a cell. Measuring these properties has proved challenging owing to their diversity and dynamic nature. Advances in mass spectrometry-based proteomics now enable the measurement of multiple properties for thousands of proteins, including their abundance, isoform expression, turnover rate, subcellular localization, post-translational modifications and interactions. Complementing these experimental developments are new data analysis, integration and visualization tools as well as data-sharing resources. Together, these advances in the multidimensional analysis of the proteome are transforming our understanding of various cellular and physiological processes.

  19. HydroShare: A Platform for Collaborative Data and Model Sharing in Hydrology

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Couch, A.; Hooper, R. P.; Dash, P. K.; Stealey, M.; Yi, H.; Bandaragoda, C.; Castronova, A. M.

    2017-12-01

    HydroShare is an online, collaboration system for sharing of hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around "resources" which are defined by standardized content types for data formats and models commonly used in hydrology. With HydroShare you can: Share your data and models with colleagues; Manage who has access to the content that you share; Share, access, visualize and manipulate a broad set of hydrologic data types and models; Use the web services application programming interface (API) to program automated and client access; Publish data and models and obtain a citable digital object identifier (DOI); Aggregate your resources into collections; Discover and access data and models published by others; Use web apps to visualize, analyze and run models on data in HydroShare. This presentation will describe the functionality and architecture of HydroShare highlighting its use as a virtual environment supporting education and research. HydroShare has components that support: (1) resource storage, (2) resource exploration, and (3) web apps for actions on resources. The HydroShare data discovery, sharing and publishing functions as well as HydroShare web apps provide the capability to analyze data and execute models completely in the cloud (servers remote from the user) overcoming desktop platform limitations. The HydroShare GIS app provides a basic capability to visualize spatial data. The HydroShare JupyterHub Notebook app provides flexible and documentable execution of Python code snippets for analysis and modeling in a way that results can be shared among HydroShare users and groups to support research collaboration and education. We will discuss how these developments can be used to support different types of educational efforts in Hydrology where being completely web based is of value in an educational setting as students can all have access to the same functionality regardless of their computer.
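
    Programmatic access through HydroShare's web services API can be sketched with plain HTTP calls. The endpoint path and response field names below are assumptions based on my reading of the HydroShare REST API (the hs_restclient Python package wraps the same service); verify them against the current API documentation:

```python
# Hedged sketch of programmatic access to HydroShare through its web services API.
# The /hsapi/resource/ path and the JSON field names are assumptions; check the
# current HydroShare REST API documentation (or use hs_restclient) before relying
# on them.
import requests

BASE = "https://www.hydroshare.org/hsapi"

resp = requests.get(f"{BASE}/resource/", params={"page": 1}, timeout=30)
resp.raise_for_status()
page = resp.json()

# Print a few public resources (field names are my assumption about the payload).
for res in page.get("results", [])[:5]:
    print(res.get("resource_id"), "-", res.get("resource_title"))
```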

  20. Exploring Resource Sharing between Secondary School Teachers of Agriculture and Science Departments Nationally.

    ERIC Educational Resources Information Center

    Dormody, Thomas J.

    1992-01-01

    A survey of 372 secondary agriculture teachers received 274 responses showing a majority of agriculture and science departments share resources, although at low levels. Many more predicted future sharing. Equipment and supplies were most often shared, instructional services least often. (SK)

  1. Representing Hydrologic Models as HydroShare Resources to Facilitate Model Sharing and Collaboration

    NASA Astrophysics Data System (ADS)

    Castronova, A. M.; Goodall, J. L.; Mbewe, P.

    2013-12-01

    The CUAHSI HydroShare project is a collaborative effort that aims to provide software for sharing data and models within the hydrologic science community. One of the early focuses of this work has been establishing metadata standards for describing models and model-related data as HydroShare resources. By leveraging this metadata definition, a prototype extension has been developed to create model resources that can be shared within the community using the HydroShare system. The extension uses a general model metadata definition to create resource objects, and was designed so that model-specific parsing routines can extract and populate metadata fields from model input and output files. The long-term goal is to establish a library of supported models where, for each model, the system has the ability to extract key metadata fields automatically, thereby establishing standardized model metadata that will serve as the foundation for model sharing and collaboration within HydroShare. The Soil and Water Assessment Tool (SWAT) is used to demonstrate this concept through a case study application.

  2. Attention and Visuospatial Working Memory Share the Same Processing Resources

    PubMed Central

    Feng, Jing; Pratt, Jay; Spence, Ian

    2012-01-01

    Attention and visuospatial working memory (VWM) share very similar characteristics; both have the same upper bound of about four items in capacity and they recruit overlapping brain regions. We examined whether both attention and VWM share the same processing resources using a novel dual-task costs approach based on a load-varying dual-task technique. With sufficiently large loads on attention and VWM, considerable interference between the two processes was observed. A further load increase on either process produced reciprocal increases in interference on both processes, indicating that attention and VWM share common resources. More critically, comparison among four experiments on the reciprocal interference effects, as measured by the dual-task costs, demonstrates no significant contribution from additional processing other than the shared processes. These results support the notion that attention and VWM share the same processing resources. PMID:22529826

  3. The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science

    PubMed Central

    Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo

    2008-01-01

    The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570

  4. Towards a collaborative, global infrastructure for biodiversity assessment

    PubMed Central

    Guralnick, Robert P; Hill, Andrew W; Lane, Meredith

    2007-01-01

    Biodiversity data are rapidly becoming available over the Internet in common formats that promote sharing and exchange. Currently, these data are somewhat problematic, primarily with regard to geographic and taxonomic accuracy, for use in ecological research, natural resources management and conservation decision-making. However, web-based georeferencing tools that utilize best practices and gazetteer databases can be employed to improve geographic data. Taxonomic data quality can be improved through web-enabled valid taxon names databases and services, as well as more efficient mechanisms to return systematic research results and taxonomic misidentification rates back to the biodiversity community. Both of these are under construction. A separate but related challenge will be developing web-based visualization and analysis tools for tracking biodiversity change. Our aim was to discuss how such tools, combined with data of enhanced quality, will help transform today's portals to raw biodiversity data into nexuses of collaborative creation and sharing of biodiversity knowledge. PMID:17594421

  5. Description and testing of the Geo Data Portal: Data integration framework and Web processing services for environmental science collaboration

    USGS Publications Warehouse

    Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Viger, Roland J.

    2011-01-01

    Interest in sharing interdisciplinary environmental modeling results and related data is increasing among scientists. The U.S. Geological Survey Geo Data Portal project enables data sharing by assembling open-standard Web services into an integrated data retrieval and analysis Web application design methodology that streamlines time-consuming and resource-intensive data management tasks. Data-serving Web services allow Web-based processing services to access Internet-available data sources. The Web processing services developed for the project create commonly needed derivatives of data in numerous formats. Coordinate reference system manipulation and spatial statistics calculation components implemented for the Web processing services were confirmed using ArcGIS 9.3.1, a geographic information science software package. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data.

  6. Data and Models as Social Objects in the HydroShare System for Collaboration in the Hydrology Community and Beyond

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Castronova, A. M.; Miles, B.; Li, Z.; Morsy, M. M.; Crawley, S.; Ramirez, M.; Sadler, J.; Xue, Z.; Bandaragoda, C.

    2016-12-01

    How do you share and publish hydrologic data and models for a large collaborative project? HydroShare is a new, web-based system for sharing hydrologic data and models with specific functionality aimed at making collaboration easier. HydroShare has been developed with U.S. National Science Foundation support under the auspices of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) to support the collaboration and community cyberinfrastructure needs of the hydrology research community. Within HydroShare, we have developed new functionality for creating datasets, describing them with metadata, and sharing them with collaborators. We cast hydrologic datasets and models as "social objects" that can be shared, collaborated around, annotated, published and discovered. In addition to data and model sharing, HydroShare supports web application programs (apps) that can act on data stored in HydroShare, just as software programs on your PC act on your data locally. This can free you from some of the limitations of local computing capacity and challenges in installing and maintaining software on your own PC. HydroShare's web-based cyberinfrastructure can take work off your desk or laptop computer and onto infrastructure or "cloud" based data and processing servers. This presentation will describe HydroShare's collaboration functionality that enables both public and private sharing with individual users and collaborative user groups, and makes it easier for collaborators to iterate on shared datasets and models, creating multiple versions along the way, and publishing them with a permanent landing page, metadata description, and citable Digital Object Identifier (DOI) when the work is complete. This presentation will also describe the web app architecture that supports interoperability with third party servers functioning as application engines for analysis and processing of big hydrologic datasets. While developed to support the cyberinfrastructure needs of the hydrology community, the informatics infrastructure for programmatic interoperability of web resources has a generality beyond the solution of hydrology problems that will be discussed.

  7. A Bibliographic Bank for Resource Sharing in Library Systems: A Feasibility Study. Final Report.

    ERIC Educational Resources Information Center

    Schwartz, Eugene S.; Saxe, Henry I.

    This study of resource sharing among public libraries was made possible by six library systems in northern Illinois. With the organization of the library systems and development of interlibrary loan services and other cooperative activities, the problem of extending resource sharing among member libraries and between library systems arose. Several…

  8. Shared Resources

    Treesearch

    David B. Butts

    1987-01-01

    Wildfires do not respect property boundaries. Whole geographic regions are typically impacted by major wildfire outbreaks. Various fire related resources can be shared to solve such crises; whether they are shared, and how they are shared depends to a great extent upon the rapport among the agencies involved. Major progress has been achieved over the past decade...

  9. [Feeding changes for three Sphoeroides species (Tetraodontiformes: Tetraodontidae) after Isidore hurricane impact in Carbonera Inlet, Southeastern Gulf of Mexico].

    PubMed

    Palacios-Sánchez, Sonia Eugenia; Vega-Cendejas, María Eugenia

    2010-12-01

    The coexistence of ecologically similar species may occur because of the partitioning of resources, such as prey and habitat type and temporal segregation, which minimizes interspecific competition. The changes brought about by Hurricane Isidore in the distribution of food resources among three coexisting fish species of the family Tetraodontidae (Sphoeroides nephelus, S. spengleri and S. testudineus) were analyzed at the Carbonera Inlet. Sphoeroides spp. based their diet on benthic organisms; principally, they consumed mussels (Brachidontes sp.), barnacles (Balanus sp.) and gastropods (Crepidula sp.). Before the hurricane impact, the three species shared the available food resources in different proportions (bivalves, gastropods, barnacles and decapods), according to different strategies that enabled them to coexist and reduce interspecific competition. After the impact, the abundance of available prey decreased and the interspecific competition for food increased, leading S. testudineus and S. nephelus to change their trophic spectra (xiphosurans, amphipods, isopods and detritus) and displacing S. spengleri from the inlet. The distribution of food resources was conditioned by the abundance and diversity of prey, as well as the adaptive response of each species.

  10. Utilizing AI in Temporal, Spatial, and Resource Scheduling

    NASA Technical Reports Server (NTRS)

    Stottler, Richard; Kalton, Annaka; Bell, Aaron

    2006-01-01

    Aurora is a software system enabling the rapid, easy solution of complex scheduling problems involving spatial and temporal constraints among operations and scarce resources (such as equipment, workspace, and human experts). Although developed for use in the International Space Station Processing Facility, Aurora is flexible enough that it can be easily customized for application to other scheduling domains and adapted as the requirements change or become more precisely known over time. Aurora's scheduling module utilizes artificial-intelligence (AI) techniques to make scheduling decisions on the basis of domain knowledge, including knowledge of constraints and their relative importance, interdependencies among operations, and possibly frequent changes in governing schedule requirements. Unlike many other scheduling software systems, Aurora focuses on resource requirements and temporal scheduling in combination. For example, Aurora can accommodate a domain requirement to schedule two subsequent operations to locations adjacent to a shared resource. The graphical interface allows the user to quickly visualize the schedule and perform changes reflecting additional knowledge or alterations in the situation. For example, the user might drag the activity corresponding to the start of operations to reflect a late delivery.
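
    To make the combination of temporal constraints and scarce, exclusive resources concrete, the following is a tiny greedy placement sketch (illustrative only; Aurora's AI-based scheduler handles far richer constraints, including spatial requirements):

```python
# Tiny greedy sketch of combined temporal and resource scheduling: each operation has
# a duration, a set of exclusive resources, and optional predecessors; it is placed at
# the earliest time that respects both its predecessors and resource availability.
# Illustrative only; not Aurora's algorithm.
def schedule(ops):
    """ops: {name: (duration, [resources], [predecessors])} -> {name: (start, end)}"""
    placed = {}
    busy_until = {}  # resource -> time it becomes free
    for name, (dur, resources, preds) in ops.items():  # assumes ops listed in a valid order
        start = max(
            [placed[p][1] for p in preds]               # after all predecessors finish
            + [busy_until.get(r, 0.0) for r in resources]
            + [0.0]
        )
        placed[name] = (start, start + dur)
        for r in resources:
            busy_until[r] = start + dur
    return placed

ops = {
    "unpack":  (2.0, ["bay-1", "crane"], []),
    "inspect": (1.0, ["bay-1"],          ["unpack"]),
    "test":    (3.0, ["crane"],          ["unpack"]),
}
print(schedule(ops))  # inspect and test both start once unpack releases its resources
```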

  11. Multimode entanglement in reconfigurable graph states using optical frequency combs

    PubMed Central

    Cai, Y.; Roslund, J.; Ferrini, G.; Arzani, F.; Xu, X.; Fabre, C.; Treps, N.

    2017-01-01

    Multimode entanglement is an essential resource for quantum information processing and quantum metrology. However, multimode entangled states are generally constructed by targeting a specific graph configuration. This leads to a fixed experimental setup that therefore exhibits reduced versatility and scalability. Here we demonstrate an optical on-demand, reconfigurable multimode entangled state, using an intrinsically multimode quantum resource and a homodyne detection apparatus. Without altering either the initial squeezing source or experimental architecture, we realize the construction of thirteen cluster states of various sizes and connectivities as well as the implementation of a secret sharing protocol. In particular, this system enables the interrogation of quantum correlations and fluctuations for any multimode Gaussian state. This opens an avenue for implementing on-demand quantum information processing by only adapting the measurement process and not the experimental layout. PMID:28585530
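
    For reference, the defining correlations of a continuous-variable cluster (graph) state with adjacency matrix V are usually stated through nullifier operators; the block below is the standard textbook condition, not a detail specific to this experiment:

```latex
% Standard CV graph-state nullifiers (textbook formalism, not experiment-specific):
% for a graph with adjacency matrix V, the ideal cluster state satisfies, in the
% limit of infinite squeezing,
\[
  \hat{\delta}_i \;=\; \hat{p}_i \;-\; \sum_{j} V_{ij}\,\hat{x}_j ,
  \qquad
  \mathrm{Var}\!\left(\hat{\delta}_i\right) \;\to\; 0 \quad \forall\, i ,
\]
% so measuring the joint quadratures that realize each \hat{\delta}_i certifies the
% multimode entanglement of the chosen graph configuration.
```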

  12. Novel statistical tools for management of public databases facilitate community-wide replicability and control of false discovery.

    PubMed

    Rosset, Saharon; Aharoni, Ehud; Neuvirth, Hani

    2014-07-01

    Issues of publication bias, lack of replicability, and false discovery have long plagued the genetics community. Proper utilization of public and shared data resources presents an opportunity to ameliorate these problems. We present an approach to public database management that we term Quality Preserving Database (QPD). It enables perpetual use of the database for testing statistical hypotheses while controlling false discovery and avoiding publication bias on the one hand, and maintaining testing power on the other hand. We demonstrate it on a use case of a replication server for GWAS findings, underlining its practical utility. We argue that a shift to using QPD in managing current and future biological databases will significantly enhance the community's ability to make efficient and statistically sound use of the available data resources. © 2014 WILEY PERIODICALS, INC.
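
    To make "controlling false discovery" concrete for a batch of hypothesis tests run against a shared database, the following is the standard Benjamini-Hochberg step-up procedure (a textbook illustration only, not the QPD mechanism the paper proposes):

```python
# Generic Benjamini-Hochberg step-up procedure, included only to make "controlling
# false discovery" concrete for a batch of hypothesis tests against a shared database.
# This is the standard textbook procedure, not the QPD mechanism the paper proposes.
def benjamini_hochberg(p_values, q=0.05):
    """Return the indices of hypotheses rejected at FDR level q."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= q * rank / m:
            k_max = rank
    return sorted(order[:k_max])

pvals = [0.001, 0.009, 0.04, 0.20, 0.76]
print(benjamini_hochberg(pvals, q=0.05))   # rejects only the smallest p-values
```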

  13. Shared communications. Volume I, a summary and literature review

    DOT National Transportation Integrated Search

    2004-09-01

    This paper provides a review of examples from the literature of shared communication resources and of agencies and/or organizations that share communication resources. The primary emphasis is on rural, intelligent transportation system communications...

  14. All inequality is not equal: children correct inequalities using resource value.

    PubMed

    Shaw, Alex; Olson, Kristina R

    2013-01-01

    Fairness concerns guide children's judgments about how to share resources with others. However, it is unclear from past research if children take extant inequalities or the value of resources involved in an inequality into account when sharing with others; these questions are the focus of the current studies. In all experiments, children saw an inequality between two recipients-one had two more resources than another. What varied between conditions was the value of the resources that the child could subsequently distribute. When the resources were equal in value to those involved in the original inequality, children corrected the previous inequality by giving two resources to the child with fewer resources (Experiment 1). However, as the value of the resources increased relative to those initially shared by the experimenter, children were more likely to distribute the two high value resources equally between the two recipients, presumably to minimize the overall inequality in value (Experiments 1 and 2). We found that children specifically use value, not just size, when trying to equalize outcomes (Experiment 3) and further found that children focus on the relative rather than absolute value of the resources they share-when the experimenter had unequally distributed the same high value resource that the child would later share, children corrected the previous inequality by giving two high value resources to the person who had received fewer high value resources. These results illustrate that children attempt to correct past inequalities and try to maintain equality not just in the count of resources but also by using the value of resources.

  15. Neurocarta: aggregating and sharing disease-gene relations for the neurosciences.

    PubMed

    Portales-Casamar, Elodie; Ch'ng, Carolyn; Lui, Frances; St-Georges, Nicolas; Zoubarev, Anton; Lai, Artemis Y; Lee, Mark; Kwok, Cathy; Kwok, Willie; Tseng, Luchia; Pavlidis, Paul

    2013-02-26

    Understanding the genetic basis of diseases is key to the development of better diagnoses and treatments. Unfortunately, only a small fraction of the existing data linking genes to phenotypes is available through online public resources and, when available, it is scattered across multiple access tools. Neurocarta is a knowledgebase that consolidates information on genes and phenotypes across multiple resources and allows tracking and exploring of the associations. The system enables automatic and manual curation of evidence supporting each association, as well as user-enabled entry of their own annotations. Phenotypes are recorded using controlled vocabularies such as the Disease Ontology to facilitate computational inference and linking to external data sources. The gene-to-phenotype associations are filtered by stringent criteria to focus on the annotations most likely to be relevant. Neurocarta is constantly growing and currently holds more than 30,000 lines of evidence linking over 7,000 genes to 2,000 different phenotypes. Neurocarta is a one-stop shop for researchers looking for candidate genes for any disorder of interest. In Neurocarta, they can review the evidence linking genes to phenotypes and filter out the evidence they're not interested in. In addition, researchers can enter their own annotations from their experiments and analyze them in the context of existing public annotations. Neurocarta's in-depth annotation of neurodevelopmental disorders makes it a unique resource for neuroscientists working on brain development.

  16. Learning about water resource sharing through game play

    NASA Astrophysics Data System (ADS)

    Ewen, Tracy; Seibert, Jan

    2016-10-01

    Games are an optimal way to teach about water resource sharing, as they allow real-world scenarios to be enacted. Both students and professionals learning about water resource management can benefit from playing games, by coming to understand both the complexity of sharing resources between different groups and the outcomes of decisions. Here we address how games can be used to teach about water resource sharing, through both playing and developing water games. An evaluation of the web-based game Irrigania in the classroom setting, supported by feedback from several educators who have used Irrigania at university and high school levels to teach about the sustainable use of water resources and decision making, finds Irrigania to be an effective and easy tool to incorporate into a curriculum. The development of two water games in a course for master's students in geography is also presented as a way to teach and communicate about water resource sharing. Through game development, students learned soft skills, including critical thinking, problem solving, teamwork, and time management, and overall the process was found to be an effective way to learn about water resource decision outcomes. This paper concludes with a discussion of learning outcomes from both playing and developing water games.

  17. Datasets2Tools, repository and search engine for bioinformatics datasets, tools and canned analyses

    PubMed Central

    Torre, Denis; Krawczuk, Patrycja; Jagodnik, Kathleen M.; Lachmann, Alexander; Wang, Zichen; Wang, Lily; Kuleshov, Maxim V.; Ma’ayan, Avi

    2018-01-01

    Biomedical data repositories such as the Gene Expression Omnibus (GEO) enable the search and discovery of relevant biomedical digital data objects. Similarly, resources such as OMICtools index bioinformatics tools that can extract knowledge from these digital data objects. However, systematic access to pre-generated ‘canned’ analyses applied by bioinformatics tools to biomedical digital data objects is currently not available. Datasets2Tools is a repository indexing 31,473 canned bioinformatics analyses applied to 6,431 datasets. The Datasets2Tools repository also indexes 4,901 published bioinformatics software tools and all of the analyzed datasets. Datasets2Tools enables users to rapidly find datasets, tools, and canned analyses through an intuitive web interface, a Google Chrome extension, and an API. Furthermore, Datasets2Tools provides a platform for contributing canned analyses, datasets, and tools, as well as evaluating these digital objects according to their compliance with the findable, accessible, interoperable, and reusable (FAIR) principles. By incorporating community engagement, Datasets2Tools promotes sharing of digital resources to stimulate the extraction of knowledge from biomedical research data. Datasets2Tools is freely available from: http://amp.pharm.mssm.edu/datasets2tools. PMID:29485625
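
    The abstract mentions an API alongside the web interface and Chrome extension. As a hedged illustration only, the sketch below shows how one might query such a service for canned analyses tied to a dataset accession; the endpoint path and query parameters are assumptions, not the documented Datasets2Tools API, so consult http://amp.pharm.mssm.edu/datasets2tools for the actual interface.

        # Hypothetical client sketch; the endpoint and parameters are assumptions.
        import json
        import urllib.request

        BASE = "http://amp.pharm.mssm.edu/datasets2tools/api"  # assumed path

        def search_canned_analyses(dataset_accession):
            """Fetch canned analyses associated with a dataset (illustrative only)."""
            url = f"{BASE}/search?object_type=canned_analysis&dataset={dataset_accession}"
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp)

        if __name__ == "__main__":
            try:
                print(search_canned_analyses("GSE48968"))
            except Exception as exc:
                print("Request failed (endpoint is an assumption):", exc)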

  18. Datasets2Tools, repository and search engine for bioinformatics datasets, tools and canned analyses.

    PubMed

    Torre, Denis; Krawczuk, Patrycja; Jagodnik, Kathleen M; Lachmann, Alexander; Wang, Zichen; Wang, Lily; Kuleshov, Maxim V; Ma'ayan, Avi

    2018-02-27

    Biomedical data repositories such as the Gene Expression Omnibus (GEO) enable the search and discovery of relevant biomedical digital data objects. Similarly, resources such as OMICtools index bioinformatics tools that can extract knowledge from these digital data objects. However, systematic access to pre-generated 'canned' analyses applied by bioinformatics tools to biomedical digital data objects is currently not available. Datasets2Tools is a repository indexing 31,473 canned bioinformatics analyses applied to 6,431 datasets. The Datasets2Tools repository also indexes 4,901 published bioinformatics software tools and all of the analyzed datasets. Datasets2Tools enables users to rapidly find datasets, tools, and canned analyses through an intuitive web interface, a Google Chrome extension, and an API. Furthermore, Datasets2Tools provides a platform for contributing canned analyses, datasets, and tools, as well as evaluating these digital objects according to their compliance with the findable, accessible, interoperable, and reusable (FAIR) principles. By incorporating community engagement, Datasets2Tools promotes sharing of digital resources to stimulate the extraction of knowledge from biomedical research data. Datasets2Tools is freely available from: http://amp.pharm.mssm.edu/datasets2tools.

  19. Proceedings of the First International Linked Science Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pouchard, Line Catherine; Kauppinnen, Tomi; Kessler, Carsten

    2011-01-01

    Scientific efforts are traditionally published only as articles, with an estimate of millions of publications worldwide per year; the growth rate of PubMed alone is now 1 paper per minute. The validation of scientific results requires reproducible methods, which can only be achieved if the same data, processes, and algorithms as those used in the original experiments are available. However, although publications, methods and datasets are closely related, they are not always openly accessible and interlinked. Even where data are discoverable, accessible and assessable, significant challenges remain in the reuse of the data, in particular in facilitating the necessary correlation, integration and synthesis of data across levels of theory, techniques and disciplines. At LISC 2011 (the 1st International Workshop on Linked Science) we will discuss and present results on new ways of publishing, sharing, linking, and analyzing such scientific resources, motivated by driving scientific requirements, as well as on reasoning over the data to discover interesting new links and scientific insights. Making entities identifiable and referenceable using URIs, augmented by semantic, scientifically relevant annotations, greatly facilitates access and retrieval of data that used to be hardly accessible. This Linked Science approach, i.e., publishing, sharing and interlinking scientific resources and data, is of particular importance for scientific research, where sharing is crucial for facilitating reproducibility and collaboration within and across disciplines. This integrated process, however, has not been established yet. Bibliographic contents are still regarded as the main scientific product, and associated data, models and software are either not published at all, or published in separate places, often with no reference to the respective paper. In the workshop we will discuss whether and how new emerging technologies (Linked Data, and semantic technologies more generally) can realize the vision of Linked Science. We see that this depends on their enabling capability throughout the research process, leading up to extended publications and data sharing environments. Our workshop aims to address challenges related to enabling the easy creation of data bundles (data, processes, tools, provenance and annotation) supporting both publication and reuse of the data. Secondly, we look for tools and methods for the easy correlation, integration and synthesis of shared data. This problem is found in many disciplines (including astronomy, biology, geosciences, cultural heritage, and the earth, climate, environmental and ecological sciences), as they need to span techniques, levels of theory, scales, and disciplines. With the advent of Linked Science, it is timely and crucial to address these identified research challenges through both practical and formal approaches.
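
    As a small illustration of the Linked Science idea sketched above, the snippet below links a publication, a dataset, and the software that produced it using URIs and simple triples. The URIs are hypothetical placeholders; real deployments would draw predicates from vocabularies such as DCTERMS and PROV.

        # Illustrative only: hypothetical URIs linking a paper, its dataset, and its code.
        paper   = "https://example.org/publication/lisc2011-42"
        dataset = "https://example.org/dataset/climate-runs-v3"
        code    = "https://example.org/software/analysis-pipeline"

        triples = [
            (paper,   "http://purl.org/dc/terms/title", "Linked analysis of climate runs"),
            (paper,   "http://purl.org/dc/terms/references", dataset),
            (dataset, "http://www.w3.org/ns/prov#wasGeneratedBy", code),
        ]

        # Serialize as simple N-Triples-style lines for inspection or later loading.
        for s, p, o in triples:
            obj = f'"{o}"' if not o.startswith("http") else f"<{o}>"
            print(f"<{s}> <{p}> {obj} .")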

  20. The pivotal role of nurse managers, leaders and educators in enabling excellence in nursing care.

    PubMed

    McSherry, Robert; Pearce, Paddy; Grimwood, Karen; McSherry, Wilfred

    2012-01-01

    The aims of this paper are to present the findings from a discursive analysis of key issues associated with providing excellence in nursing care; and to provide an exemplar framework to support excellence in nursing care and describe the potential benefits when excellence in nursing care occurs. The challenge facing the nursing profession is in ensuring that the core principles of dignity, respect, compassion and person (people) centered care become central to all aspects of nursing practice. To regain public and professional confidence in nursing, nurse leaders, managers and educators play a pivotal role in improving the image of nursing. Excellence in nursing care will only happen by ensuring that nurse managers, leaders and educators are able to respond to the complexity of reform and change by leading, managing, enabling, empowering, encouraging and resourcing staff to be innovative and entrepreneurial in practice. Creating healthcare environments that enable excellence in nursing care will not occur without the development of genuine shared working partnerships and collaborations between nurse managers, leaders and educators and their associated organizations. The importance of adopting an authentic sustainable leadership approach to facilitating and supporting frontline staff to innovate and change is imperative in restoring and evidencing that nurses do care and are excellent at what they do. Focusing attention on what resources are required to create a healthcare environment that enables compassion, safety and excellence in nursing care, and on what this means in practice, would be a reasonable start on the journey to excellence in nursing. © 2012 Blackwell Publishing Ltd.

  1. Reorientation of health services: enablers and barriers faced by organisations when increasing health promotion capacity.

    PubMed

    McFarlane, K; Judd, J; Devine, S; Watt, K

    2016-08-01

    Issue addressed: Primary healthcare settings are important providers of health promotion approaches. However, organisational challenges can affect their capacity to deliver these approaches. This review identified the common enablers and barriers health organisations faced, and it aimed to explore the experiences health organisations, in particular Aboriginal organisations, had when increasing their health promotion capacity. Methods: A systematic search of peer-reviewed literature was conducted. Articles published between 1990 and 2014 that focused on a health care-settings approach and discussed factors that facilitated or hindered an organisation's ability to increase health promotion capacity were included. Results: Twenty-five articles met the inclusion criteria. Qualitative (n=18) and quantitative (n=7) study designs were included. Only one article described the experiences of an Aboriginal health organisation. Enablers included: management support, skilled staff, provision of external support to the organisation, committed staffing and financial resources, leadership and the availability of external partners to work with. Barriers included: lack of management support, lack of dedicated health promotion staff, staff lacking skills or confidence, competing priorities and a lack of time and resources allocated to health promotion activities. Conclusions: While the literature highlighted the importance of health promotion work, barriers can limit the delivery of health promotion approaches within primary healthcare organisations. A gap in the literature exists about how Aboriginal health organisations face these challenges. So what? Primary healthcare organisations wanting to increase their health promotion capacity can pre-empt the common barriers and strengthen identified enablers through the shared learnings outlined in this review.

  2. COINSTAC: A Privacy Enabled Model and Prototype for Leveraging and Processing Decentralized Brain Imaging Data.

    PubMed

    Plis, Sergey M; Sarwate, Anand D; Wood, Dylan; Dieringer, Christopher; Landis, Drew; Reed, Cory; Panta, Sandeep R; Turner, Jessica A; Shoemaker, Jody M; Carter, Kim W; Thompson, Paul; Hutchison, Kent; Calhoun, Vince D

    2016-01-01

    The field of neuroimaging has embraced the need for sharing and collaboration. Data sharing mandates from public funding agencies and major journal publishers have spurred the development of data repositories and neuroinformatics consortia. However, efficient and effective data sharing still faces several hurdles. For example, open data sharing is on the rise but is not suitable for sensitive data that are not easily shared, such as genetics. Current approaches can be cumbersome (such as negotiating multiple data sharing agreements). There are also significant data transfer, organization and computational challenges. Centralized repositories only partially address the issues. We propose a dynamic, decentralized platform for large scale analyses called the Collaborative Informatics and Neuroimaging Suite Toolkit for Anonymous Computation (COINSTAC). The COINSTAC solution can include data missing from central repositories, allows pooling of both open and "closed" repositories by developing privacy-preserving versions of widely-used algorithms, and incorporates the tools within an easy-to-use platform enabling distributed computation. We present an initial prototype system which we demonstrate on two multi-site data sets, without aggregating the data. In addition, by iterating across sites, the COINSTAC model enables meta-analytic solutions to converge to "pooled-data" solutions (i.e., as if the entire data were in hand). More advanced approaches such as feature generation, matrix factorization models, and preprocessing can be incorporated into such a model. In sum, COINSTAC enables access to the many currently unavailable data sets, a user friendly privacy enabled interface for decentralized analysis, and a powerful solution that complements existing data sharing solutions.
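
    The following toy sketch illustrates the decentralized principle described above: each site shares only summary statistics, yet the coordinator recovers the same estimate it would obtain from pooled data. It is a conceptual illustration only, not COINSTAC's actual algorithms or message formats.

        # Decentralized mean as a stand-in for "pooled-data" equivalence; raw data stay local.
        import numpy as np

        rng = np.random.default_rng(0)
        sites = [rng.normal(loc=mu, size=n) for mu, n in [(1.0, 50), (1.5, 80), (0.8, 30)]]

        # Each site computes local summaries; raw data never leave the site.
        local_summaries = [(len(x), float(x.sum())) for x in sites]

        # The coordinator combines the summaries into the pooled estimate.
        total_n = sum(n for n, _ in local_summaries)
        pooled_mean = sum(s for _, s in local_summaries) / total_n

        # Same value one would get with all of the data in hand.
        assert np.isclose(pooled_mean, np.concatenate(sites).mean())
        print("decentralized mean:", pooled_mean)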

  3. COINSTAC: A Privacy Enabled Model and Prototype for Leveraging and Processing Decentralized Brain Imaging Data

    PubMed Central

    Plis, Sergey M.; Sarwate, Anand D.; Wood, Dylan; Dieringer, Christopher; Landis, Drew; Reed, Cory; Panta, Sandeep R.; Turner, Jessica A.; Shoemaker, Jody M.; Carter, Kim W.; Thompson, Paul; Hutchison, Kent; Calhoun, Vince D.

    2016-01-01

    The field of neuroimaging has embraced the need for sharing and collaboration. Data sharing mandates from public funding agencies and major journal publishers have spurred the development of data repositories and neuroinformatics consortia. However, efficient and effective data sharing still faces several hurdles. For example, open data sharing is on the rise but is not suitable for sensitive data that are not easily shared, such as genetics. Current approaches can be cumbersome (such as negotiating multiple data sharing agreements). There are also significant data transfer, organization and computational challenges. Centralized repositories only partially address the issues. We propose a dynamic, decentralized platform for large scale analyses called the Collaborative Informatics and Neuroimaging Suite Toolkit for Anonymous Computation (COINSTAC). The COINSTAC solution can include data missing from central repositories, allows pooling of both open and “closed” repositories by developing privacy-preserving versions of widely-used algorithms, and incorporates the tools within an easy-to-use platform enabling distributed computation. We present an initial prototype system which we demonstrate on two multi-site data sets, without aggregating the data. In addition, by iterating across sites, the COINSTAC model enables meta-analytic solutions to converge to “pooled-data” solutions (i.e., as if the entire data were in hand). More advanced approaches such as feature generation, matrix factorization models, and preprocessing can be incorporated into such a model. In sum, COINSTAC enables access to the many currently unavailable data sets, a user friendly privacy enabled interface for decentralized analysis, and a powerful solution that complements existing data sharing solutions. PMID:27594820

  4. Resource allocation in shared spectrum access communications for operators with diverse service requirements

    NASA Astrophysics Data System (ADS)

    Kibria, Mirza Golam; Villardi, Gabriel Porto; Ishizu, Kentaro; Kojima, Fumihide; Yano, Hiroyuki

    2016-12-01

    In this paper, we study inter-operator spectrum sharing and intra-operator resource allocation in shared spectrum access communication systems and propose efficient dynamic solutions to address both inter-operator and intra-operator resource allocation optimization problems. For inter-operator spectrum sharing, we present two competent approaches, namely subcarrier gain-based sharing and fragmentation-based sharing, which carry out fair and flexible allocation of the available shareable spectrum among the operators subject to certain well-defined sharing rules, traffic demands, and channel propagation characteristics. The subcarrier gain-based spectrum sharing scheme has been found to be more efficient in terms of achieved throughput. However, the fragmentation-based sharing is more attractive in terms of computational complexity. For intra-operator resource allocation, we consider a resource allocation problem with users' dissimilar service requirements, where the operator simultaneously supports users with delay-constrained and non-delay-constrained service requirements. This optimization problem is a mixed-integer non-linear programming problem and non-convex, which is computationally very expensive, and the complexity grows exponentially with the number of integer variables. We propose a less complex and efficient suboptimal solution based on exact linearization, linear approximation, and convexification techniques for the non-linear and/or non-convex objective functions and constraints. Extensive simulation-based performance analysis has been carried out and validates the efficiency of the proposed solution.
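
    As a toy illustration of the subcarrier gain-based idea, the sketch below assigns each shareable subcarrier to the operator with the best channel gain, subject to a per-operator demand cap. The gains, demands, and greedy rule are simplifying assumptions for clarity, not the paper's exact allocation algorithm or fairness constraints.

        # Toy subcarrier gain-based sharing: strongest channels are placed first,
        # each subcarrier goes to its best operator that still has unmet demand.
        import numpy as np

        rng = np.random.default_rng(1)
        n_subcarriers, n_operators = 12, 3
        gains = rng.rayleigh(size=(n_subcarriers, n_operators))   # channel gains
        demand = [5, 4, 3]                                        # max subcarriers per operator

        allocation = {op: [] for op in range(n_operators)}
        for sc in sorted(range(n_subcarriers), key=lambda s: -gains[s].max()):
            for op in map(int, np.argsort(-gains[sc])):           # prefer the best operator
                if len(allocation[op]) < demand[op]:
                    allocation[op].append(sc)
                    break

        print(allocation)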

  5. On standardization of basic datasets of electronic medical records in traditional Chinese medicine.

    PubMed

    Zhang, Hong; Ni, Wandong; Li, Jing; Jiang, Youlin; Liu, Kunjing; Ma, Zhaohui

    2017-12-24

    Standardization of electronic medical records, so as to enable resource sharing and information exchange among medical institutions, has become inevitable in view of the ever-increasing volume of medical information. The current research is an effort towards the standardization of the basic dataset of electronic medical records in traditional Chinese medicine. In this work, an outpatient clinical information model and an inpatient clinical information model are created to adequately depict the diagnosis processes and treatment procedures of traditional Chinese medicine. To be backward compatible with the existing dataset standard created for western medicine, the new standard shall be a superset of the existing standard. Thus, the two models are checked against the existing standard in conjunction with 170,000 medical record cases. If a case cannot be covered by the existing standard due to the particularity of Chinese medicine, then either an existing data element is expanded with Chinese medicine content or a new data element is created. Some dataset subsets are also created to group and record Chinese medicine-specific diagnoses and treatments such as acupuncture. The outcome of this research is a proposal for standardized traditional Chinese medicine medical record datasets. The proposal has been verified successfully in three medical institutions with hundreds of thousands of medical records. A new dataset standard for traditional Chinese medicine is proposed in this paper. The proposed standard, covering traditional Chinese medicine as well as western medicine, is expected to be approved by the authority soon. Widespread adoption of this proposal will enable traditional Chinese medicine hospitals and institutions to easily exchange information and share resources. Copyright © 2017. Published by Elsevier B.V.
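
    The checking step described above can be pictured with the small sketch below, which flags data elements used in record cases that the existing standard does not cover; such gaps are where an element would be expanded or a new one created. The element and field names are hypothetical examples, not the proposed standard itself.

        # Illustrative only: find data elements in cases that the existing standard lacks.
        existing_elements = {"chief_complaint", "diagnosis_icd", "medication"}

        tcm_cases = [
            {"chief_complaint": "insomnia", "diagnosis_icd": "G47.0",
             "syndrome_differentiation": "heart-spleen deficiency"},      # TCM-specific
            {"chief_complaint": "back pain", "medication": "ibuprofen",
             "acupuncture_points": ["BL23", "GV3"]},                      # TCM-specific
        ]

        def uncovered_elements(cases, standard):
            """Return data elements used in the cases but absent from the standard."""
            gaps = set()
            for case in cases:
                gaps |= set(case) - standard
            return gaps

        print(uncovered_elements(tcm_cases, existing_elements))
        # -> {'syndrome_differentiation', 'acupuncture_points'}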

  6. Development of a consent resource for genomic data sharing in the clinical setting.

    PubMed

    Riggs, Erin Rooney; Azzariti, Danielle R; Niehaus, Annie; Goehringer, Scott R; Ramos, Erin M; Rodriguez, Laura Lyman; Knoppers, Bartha; Rehm, Heidi L; Martin, Christa Lese

    2018-06-13

    Data sharing between clinicians, laboratories, and patients is essential for improvements in genomic medicine, but obtaining consent for individual-level data sharing is often hindered by a lack of time and resources. To address this issue, the Clinical Genome Resource (ClinGen) developed tools to facilitate consent, including a one-page consent form and an online supplemental video with information on key topics, such as the risks and benefits of data sharing. To determine whether the consent form and video accurately conveyed key data sharing concepts, we surveyed 5,162 members of the general public. We measured comprehension at baseline, after reading the form, and after watching the video. Additionally, we assessed participants' attitudes toward genomic data sharing. Participants' performance on comprehension questions significantly improved over baseline after reading the form and continued to improve after watching the video. Results suggest that reading the form alone provided participants with important knowledge regarding broad data sharing, and that watching the video allowed for broader comprehension. These materials are now available at http://www.clinicalgenome.org/share. These resources will provide patients a straightforward way to share their genetic and health information, and improve the scientific community's access to data generated through routine healthcare.

  7. How Can Social Media Lead to Co-Production (Co-Delivery) of New Services for the Elderly Population? A Qualitative Study.

    PubMed

    Daneshvar, Hadi; Anderson, Stuart; Williams, Robin; Mozaffar, Hajar

    2018-02-12

    The future of health care services in the European Union faces the triple challenges of aging, fiscal restriction, and inclusion. Co-production offers ways to manage informal care resources to help them cater for the growing needs of elderly people. Social media (SM) is seen as a critical enabler for co-production. The objective of this study was to investigate how SM (private Facebook groups, forums, Twitter, and blogging) acts as an enabler of co-production in health and care by facilitating its four underlying principles: equality, diversity, accessibility, and reciprocity. We used normalization process theory as our theoretical framework to design this study. We conducted a qualitative study and collected data through 20 semistructured interviews and observation of the activities of 10 online groups and individuals. We then used thematic analysis and drew on principles of co-production (equality, diversity, accessibility, and reciprocity) as a deductive coding framework to analyze our findings. Our findings point to distinct patterns of feature use by different people involved in care of elderly people. This diversity makes possible the principles of co-production by offering equality among users, enabling diversity of use, making experiences accessible, and encouraging reciprocity in the sharing of knowledge and mutual support. We also identified that explication of common resources may lead to new forms of competition and conflicts. These conflicts require better management to enhance the coordination of the common pool of resources. SM uses afford new forms of organizing and collective engagement between patients, carers, and professionals, which leads to change in health and care communication and coordination. ©Hadi Daneshvar, Stuart Anderson, Robin Williams, Hajar Mozaffar. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 12.02.2018.

  8. How Can Social Media Lead to Co-Production (Co-Delivery) of New Services for the Elderly Population? A Qualitative Study

    PubMed Central

    Anderson, Stuart; Williams, Robin; Mozaffar, Hajar

    2018-01-01

    Background: The future of health care services in the European Union faces the triple challenges of aging, fiscal restriction, and inclusion. Co-production offers ways to manage informal care resources to help them cater for the growing needs of elderly people. Social media (SM) is seen as a critical enabler for co-production. Objective: The objective of this study was to investigate how SM (private Facebook groups, forums, Twitter, and blogging) acts as an enabler of co-production in health and care by facilitating its four underlying principles: equality, diversity, accessibility, and reciprocity. Methods: We used normalization process theory as our theoretical framework to design this study. We conducted a qualitative study and collected data through 20 semistructured interviews and observation of the activities of 10 online groups and individuals. We then used thematic analysis and drew on principles of co-production (equality, diversity, accessibility, and reciprocity) as a deductive coding framework to analyze our findings. Results: Our findings point to distinct patterns of feature use by different people involved in care of elderly people. This diversity makes possible the principles of co-production by offering equality among users, enabling diversity of use, making experiences accessible, and encouraging reciprocity in the sharing of knowledge and mutual support. We also identified that explication of common resources may lead to new forms of competition and conflicts. These conflicts require better management to enhance the coordination of the common pool of resources. Conclusions: SM uses afford new forms of organizing and collective engagement between patients, carers, and professionals, which leads to change in health and care communication and coordination. PMID:29434014

  9. Establishing strategic alliance among hospitals through SAIS: a case study in Taiwan.

    PubMed

    Hung, Won-Fu; Hwang, Hsin-Ginn; Liao, Chechen

    2005-01-01

    Due to a reformed healthcare insurance system and a government public affairs budget that decreases year by year, the Central Taiwan Office (CTO) of the Department of Health (DOH) in Taiwan initiated a strategic alliance project for the hospitals subordinate to the DOH in November 2001. This project was a five-year plan intended to expand and develop three more strategic alliances covering the northern, southern and eastern regions of Taiwan, respectively. Through a cooperative system, such an alliance allows resource sharing, technical collaboration, marketing affiliations and so on. In order to decrease operations management costs and improve the quality of service at hospitals, the strategic alliance practice is supported by IS. We call this alignment the IS-enabled strategic alliance. All the IS-enabled functions are supported by the Strategic Alliance Information System (SAIS). In this article, the SAIS developed by the CTO of the DOH is introduced.

  10. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.

    NASA Astrophysics Data System (ADS)

    Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca

    2015-12-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which proved successful and still meets its goals. However, Grid technology has not spread much beyond these communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their existing computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One feature missing from the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fair-share based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack based cloud service. The system, exploiting the dynamic partitioning mechanism already used to enable multicore computing, allowed us to avoid a static split of the computing resources in the Tier-1 farm while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from the partition according to suitable policies for the request and release of computing resources. Nodes requested into the partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as Worker Nodes in the batch system farm to cloud compute node members made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation, and its integration with our current batch system, LSF.
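
    A simplified sketch of the partitioning policy described above is given below: hosts switch role between the batch farm and the cloud partition according to pending demand. The thresholds, host names, and decision rule are assumptions for illustration, not CNAF's actual implementation.

        # Toy demand-driven role switching between a batch farm and a cloud partition.
        from collections import deque

        batch_hosts = deque(f"wn{i:03d}" for i in range(8))   # acting as batch worker nodes
        cloud_hosts = deque()                                 # acting as cloud compute nodes

        def rebalance(pending_batch_jobs, pending_cloud_vms, min_batch=4):
            """Move hosts toward whichever partition has unmet demand."""
            if pending_cloud_vms > len(cloud_hosts) and len(batch_hosts) > min_batch:
                host = batch_hosts.popleft()       # drain from batch, hand to cloud
                cloud_hosts.append(host)
                return f"{host}: batch -> cloud"
            if pending_batch_jobs > 0 and cloud_hosts:
                host = cloud_hosts.popleft()       # release back to the batch system
                batch_hosts.append(host)
                return f"{host}: cloud -> batch"
            return "no change"

        print(rebalance(pending_batch_jobs=0, pending_cloud_vms=3))
        print(rebalance(pending_batch_jobs=20, pending_cloud_vms=0))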

  11. Wireless shared resources: sharing of right-of-way for wireless technology: guidance on legal and institutional issues

    DOT National Transportation Integrated Search

    1997-06-06

    Shared resource projects offer an opportunity for public transportation agencies to leverage property assets in exchange for support for transportation programs. Intelligent transportation systems (ITS) require wireline infrastructure in roadway ROW ...

  12. Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent?

    PubMed

    Wahn, Basil; König, Peter

    2017-01-01

    Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves object-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). In the present paper, we review findings in multisensory research related to this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform object-based attention tasks, whereas for the visual and tactile sensory modalities, partially shared resources are recruited. If object-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform an object-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing does consistently involve shared attentional resources for the sensory modalities. Generally, findings suggest that the attentional system flexibly allocates attentional resources depending on task demands. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain's costly resource expenditures and simultaneously maximizes capability to process currently relevant information.

  13. A Study of Veterans Administration/Department of Defense Health Care Resources Sharing at Keller Army Community Hospital West Point, New York 10996

    DTIC Science & Technology

    1984-04-01

    civilian facility. In FY 79, of the $20 million that the Veterans Administration (VA) spent on shared services, only $17,000 was for services shared... (2) present incentives to encourage shared services are inadequate; and (3) such sharing of resources can be effected without a detrimental impact on... "Regionalization in Perspective", which provided an excellent review of hospital regionalization and the potential benefits associated with shared services.

  14. The perspectives of Australian midwifery academics on barriers and enablers for simulation in midwifery education in Australia: a focus group study.

    PubMed

    Fox-Young, Stephanie; Brady, Susannah; Brealey, Wendy; Cooper, Simon; McKenna, Lisa; Hall, Helen; Bogossian, Fiona

    2012-08-01

    To describe Australian midwifery academics' perceptions of the current barriers and enablers for simulation in midwifery education in Australia, and the potential and resources required for simulation to be increased. A series of 11 focus groups/interviews were held in all states and territories of Australia with 46 participating academics nominated by their heads of discipline from universities across the country. Three themes were identified relating to barriers to the extension of the use of simulated learning environments (SLEs) ('there are things that you can't simulate'; 'not having the appropriate resources'; and professional accreditation requirements) and three themes were identified that facilitate SLE use ('for the bits that you're not likely to see very often in clinical'; '[for students] to figure something out before [they] get to go out there and do it on the real person'; and good resources and support). Although barriers exist to the adoption and spread of simulated learning in midwifery, there is a long history of simulation and a great willingness to enhance its use among midwifery academics in Australia. While some aspects of midwifery practice may be impossible to simulate, more collaboration and sharing in the development and use of simulation scenarios, equipment, space and other physical and personnel resources would make the uptake of simulation in midwifery education more widespread. Students would therefore be exposed to the best available preparation for clinical practice, contributing to the safety and quality of midwifery care. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Evaluation of Cities in the Context of Energy Efficient Urban Planning Approach

    NASA Astrophysics Data System (ADS)

    Handan Yücel Yıldırım, H.; Burcu Gültekin, Arzuhan; Tanrıvermiş, Harun

    2017-10-01

    Due to the increase in energy demand that accompanies urbanization, industrialization and rapid population growth, the preservation of natural resources has become impossible. Because the energy generated from non-renewable natural resources in danger of depletion, such as coal, natural gas and petroleum, is limited, and because environmental problems caused by energy use are increasing, ways of providing safe and continuous access to energy are being sought worldwide. Owing to limited energy resources and dependence on foreign energy sources, particularly in European Union countries, efforts to increase the share of renewable energy sources in energy consumption have grown in all industries, including urban planning. Accordingly, it is necessary to develop policies and approaches that enable the utilization of domestic resources suited to each country's conditions and to monitor developments in energy. Such policies and approaches, which must also be implemented in urban planning, are of great importance for using present-day energy resources without degrading the habitable environments of future generations, for promoting the use of renewable energy sources, and for using energy effectively. For that purpose, this paper puts forward a conceptual framework covering the principles, strategies, and methods of an energy efficient urban planning approach, and discusses examples of energy efficient urban areas within the scope of the suggested framework.

  16. Cross-Jurisdictional Resource Sharing in Local Health Departments: Implications for Services, Quality, and Cost.

    PubMed

    Humphries, Debbie L; Hyde, Justeen; Hahn, Ethan; Atherly, Adam; O'Keefe, Elaine; Wilkinson, Geoffrey; Eckhouse, Seth; Huleatt, Steve; Wong, Samuel; Kertanis, Jennifer

    2018-01-01

    Forty-one percent of local health departments in the U.S. serve jurisdictions with populations of 25,000 or less. Researchers, policymakers, and advocates have long questioned how to strengthen public health systems in smaller municipalities. Cross-jurisdictional sharing may increase quality of service, access to resources, and efficiency of resource use. To characterize perceived strengths and challenges of independent and comprehensive sharing approaches, and to assess cost, quality, and breadth of services provided by independent and sharing health departments in Connecticut (CT) and Massachusetts (MA), we interviewed local health directors or their designees from 15 comprehensive resource-sharing jurisdictions and 54 single-municipality jurisdictions in CT and MA using a semi-structured interview. Quantitative data were drawn from closed-ended questions in the semi-structured interviews; municipal demographic data were drawn from the American Community Survey and other public sources. Qualitative data were drawn from open-ended questions in the semi-structured interviews. The findings from this multistate study highlight advantages and disadvantages of two common public health service delivery models: independent and shared. Shared service jurisdictions provided more community health programs and services, and invested significantly more ($120 per thousand (1K) population vs. $69.5/1K population) on healthy food access activities. Sharing departments had more indicators of higher quality food safety inspections (FSIs), and there was a non-linear relationship between cost per FSI and number of FSIs. The minimum cost per FSI was reached at a volume above the total number of FSIs conducted by all but four of the jurisdictions sampled. Independent jurisdictions perceived their governing bodies to have greater understanding of the roles and responsibilities of local public health, while shared service jurisdictions had fewer staff per 1,000 population. There are trade-offs between sharing and remaining independent. Independent health departments serving small jurisdictions have limited resources but strong local knowledge. Multi-municipality departments have more resources but require more time and investment in governance and decision-making. When making decisions about the right service delivery model for a given municipality, careful consideration should be given to local culture and values. Some economies of scale may be achieved through resource sharing for municipalities with populations under 25,000.

  17. SWATShare: A Platform for Collaborative Hydrology Research and Education with Cyber-enabled Sharing, Running and Visualization of SWAT Models

    NASA Astrophysics Data System (ADS)

    Rajib, M. A.; Merwade, V.; Song, C.; Zhao, L.; Kim, I. L.; Zhe, S.

    2014-12-01

    Setting up any hydrologic model requires a large amount of effort, including compiling all the data, creating input files, and performing calibration and validation. Given the effort involved, it is possible that models for a watershed are created multiple times by multiple groups or organizations to accomplish different research, educational or policy goals. To reduce this duplication of effort and enable collaboration among different groups or organizations around an already existing hydrology model, a platform is needed where anyone can search for existing models, perform simple scenario analysis and visualize model results. The creator and users of a model on such a platform can then collaborate to accomplish new research or educational objectives. From this perspective, a prototype cyberinfrastructure (CI), called SWATShare, has been developed for sharing, running and visualizing Soil and Water Assessment Tool (SWAT) models in an interactive GIS-enabled web environment. Users can use SWATShare to publish or upload their own models, search and download existing SWAT models developed by others, and run simulations, including calibration, using high performance resources provided by XSEDE and the Cloud. Besides running and sharing, SWATShare hosts a novel spatio-temporal visualization system for SWAT model outputs. At the temporal scale, the system creates time-series plots for all the hydrology and water quality variables available along the reach as well as at the watershed level. At the spatial scale, the system can dynamically generate sub-basin level thematic maps for any variable at any user-defined date or date range, thereby allowing users to run animations or download the data for subsequent analyses. In addition to research, SWATShare can also be used within a classroom setting as an educational tool for modeling and comparing hydrologic processes under different geographic and climatic settings. SWATShare is publicly available at https://www.water-hub.org/swatshare.

  18. Envisioning a social-health information exchange as a platform to support a patient-centered medical neighborhood: a feasibility study.

    PubMed

    Nguyen, Oanh Kieu; Chan, Connie V; Makam, Anil; Stieglitz, Heather; Amarasingham, Ruben

    2015-01-01

    Social determinants directly contribute to poorer health, and coordination between healthcare and community-based resources is pivotal to addressing these needs. However, our healthcare system remains poorly equipped to address social determinants of health. The potential of health information technology to bridge this gap across the delivery of healthcare and social services remains unrealized. We conducted in-depth, in-person interviews with 50 healthcare and social service providers to determine the feasibility of a social-health information exchange (S-HIE) in an urban safety-net setting in Dallas County, Texas. After completion of interviews, we conducted a town hall meeting to identify desired functionalities for a S-HIE. We conducted thematic analysis of interview responses using the constant comparative method to explore perceptions about current communication and coordination across sectors, and barriers and enablers to S-HIE implementation. We sought participant confirmation of findings and conducted a forced-rank vote during the town hall to prioritize potential S-HIE functionalities. We found that healthcare and social service providers perceived a need for improved information sharing, communication, and care coordination across sectors and were enthusiastic about the potential of a S-HIE, but shared many technical, legal, and ethical concerns around cross-sector information sharing. Desired technical S-HIE functionalities encompassed fairly simple transactional operations such as the ability to view basic demographic information, visit and referral data, and medical history from both healthcare and social service settings. A S-HIE is an innovative and feasible approach to enabling better linkages between healthcare and social service providers. However, to develop S-HIEs in communities across the country, policy interventions are needed to standardize regulatory requirements, to foster increased IT capability and uptake among social service agencies, and to align healthcare and social service priorities to enable dissemination and broader adoption of this and similar IT initiatives.

  19. MCSDSS: A Multi-Criteria Decision Support System for Merging Geoscience Information with Natural User Interfaces, Preference Ranking, and Interactive Data Utilities

    NASA Astrophysics Data System (ADS)

    Pierce, S. A.; Gentle, J.

    2015-12-01

    The multi-criteria decision support system (MCSDSS) is a newly completed application for touch-enabled group decision support that uses D3 data visualization tools, a geojson conversion utility that we developed, and Paralelex to create an interactive tool. The MCSDSS is a prototype system intended to demonstrate the potential capabilities of a single page application (SPA) running atop a web and cloud based architecture utilizing open source technologies. The application is implemented on current web standards while supporting human interface design that targets both traditional mouse/keyboard interactions and modern touch/gesture enabled interactions. The technology stack for MCSDSS was selected with the goal of creating a robust and dynamic modular codebase that can be adjusted to fit many use cases and scale to support usage loads that range from simple data display to complex scientific simulation-based modelling and analytics. The application integrates current frameworks for highly performant agile development with unit testing, statistical analysis, data visualization, mapping technologies, geographic data manipulation, and cloud infrastructure while retaining support for traditional HTML5/CSS3 web standards. The software lifecycle for MCSDSS has followed best practices for developing, sharing, and documenting the codebase and application. Code is documented and shared via an online repository with the option for programmers to see, contribute, or fork the codebase. Example data files and tutorial documentation have been shared with clear descriptions and data object identifiers, and the metadata about the application has been incorporated into an OntoSoft entry to ensure that MCSDSS is searchable and clearly described. MCSDSS is a flexible platform that allows for data fusion and inclusion of large datasets in an interactive front-end application capable of connecting with other science-based applications and advanced computing resources. In addition, MCSDSS offers functionality that enables communication with non-technical users for policy, education, or engagement with groups around scientific topics with societal relevance.
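
    The abstract refers to a geojson conversion utility. As a minimal, generic sketch of that kind of step (not the MCSDSS utility's actual interface), the code below turns tabular records with coordinates into a GeoJSON FeatureCollection that web mapping and D3 front ends can consume; the field names are illustrative.

        # Convert tabular records with lon/lat columns into a GeoJSON FeatureCollection.
        import json

        def records_to_geojson(records, lon_key="lon", lat_key="lat"):
            features = []
            for rec in records:
                props = {k: v for k, v in rec.items() if k not in (lon_key, lat_key)}
                features.append({
                    "type": "Feature",
                    "geometry": {"type": "Point",
                                 "coordinates": [rec[lon_key], rec[lat_key]]},
                    "properties": props,
                })
            return {"type": "FeatureCollection", "features": features}

        rows = [{"site": "well-12", "lon": -97.74, "lat": 30.27, "score": 0.82}]
        print(json.dumps(records_to_geojson(rows), indent=2))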

  20. Collaboration for rare disease drug discovery research.

    PubMed

    Litterman, Nadia K; Rhee, Michele; Swinney, David C; Ekins, Sean

    2014-01-01

    Rare disease research has reached a tipping point, with the confluence of scientific and technologic developments that if appropriately harnessed, could lead to key breakthroughs and treatments for this set of devastating disorders. Industry-wide trends have revealed that the traditional drug discovery research and development (R&D) model is no longer viable, and drug companies are evolving their approach. Rather than only pursue blockbuster therapeutics for heterogeneous, common diseases, drug companies have increasingly begun to shift their focus to rare diseases. In academia, advances in genetics analyses and disease mechanisms have allowed scientific understanding to mature, but the lack of funding and translational capability severely limits the rare disease research that leads to clinical trials. Simultaneously, there is a movement towards increased research collaboration, more data sharing, and heightened engagement and active involvement by patients, advocates, and foundations. The growth in networks and social networking tools presents an opportunity to help reach other patients but also find researchers and build collaborations. The growth of collaborative software that can enable researchers to share their data could also enable rare disease patients and foundations to manage their portfolio of funded projects for developing new therapeutics and suggest drug repurposing opportunities. Still there are many thousands of diseases without treatments and with only fragmented research efforts. We will describe some recent progress in several rare diseases used as examples and propose how collaborations could be facilitated. We propose that the development of a center of excellence that integrates and shares informatics resources for rare diseases sponsored by all of the stakeholders would help foster these initiatives.

  1. Collaboration for rare disease drug discovery research

    PubMed Central

    Litterman, Nadia K.; Rhee, Michele; Swinney, David C.; Ekins, Sean

    2014-01-01

    Rare disease research has reached a tipping point, with the confluence of scientific and technologic developments that if appropriately harnessed, could lead to key breakthroughs and treatments for this set of devastating disorders. Industry-wide trends have revealed that the traditional drug discovery research and development (R&D) model is no longer viable, and drug companies are evolving their approach. Rather than only pursue blockbuster therapeutics for heterogeneous, common diseases, drug companies have increasingly begun to shift their focus to rare diseases. In academia, advances in genetics analyses and disease mechanisms have allowed scientific understanding to mature, but the lack of funding and translational capability severely limits the rare disease research that leads to clinical trials. Simultaneously, there is a movement towards increased research collaboration, more data sharing, and heightened engagement and active involvement by patients, advocates, and foundations. The growth in networks and social networking tools presents an opportunity to help reach other patients but also find researchers and build collaborations. The growth of collaborative software that can enable researchers to share their data could also enable rare disease patients and foundations to manage their portfolio of funded projects for developing new therapeutics and suggest drug repurposing opportunities. Still there are many thousands of diseases without treatments and with only fragmented research efforts. We will describe some recent progress in several rare diseases used as examples and propose how collaborations could be facilitated. We propose that the development of a center of excellence that integrates and shares informatics resources for rare diseases sponsored by all of the stakeholders would help foster these initiatives. PMID:25685324

  2. Proximity-based access control for context-sensitive information provision in SOA-based systems

    NASA Astrophysics Data System (ADS)

    Rajappan, Gowri; Wang, Xiaofei; Grant, Robert; Paulini, Matthew

    2014-06-01

    Service Oriented Architecture (SOA) has enabled open-architecture integration of applications within an enterprise. For net-centric Command and Control (C2), this facilitates information sharing between applications and users, a critical requirement for mission success. The Information Technology (IT) access control schemes, which arbitrate who gets access to what information, do not yet have the contextual knowledge to allow this information sharing to happen dynamically. The access control might prevent legitimate users from accessing information relevant to the current mission context, since this context may be very different from the context for which the access privileges were configured. We evaluate a pair of data relevance measures, proximity and risk, and use these as the basis of dynamic access control. Proximity is a measure of the strength of the connection between the user and the resource. However, proximity is not sufficient, since some data might have a negative impact if leaked that far outweighs their importance to the subject's mission. For this, we use a risk measure to quantify the downside of data compromise. Given these contextual measures of proximity and risk, we investigate extending Attribute-Based Access Control (ABAC), which is used by the Department of Defense, and Role-Based Access Control (RBAC), which is widely used in the civilian market, so that these standards-based access control models are given the contextual knowledge to enable dynamic information sharing. Furthermore, we consider the use of such a contextual access control scheme in a SOA-based environment, in particular for net-centric C2.
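
    To make the proximity/risk idea concrete, the sketch below augments a conventional role check with a contextual override that grants access only when mission proximity clearly outweighs the risk of compromise. The scoring rule and threshold are assumptions chosen for clarity, not the authors' actual model.

        # Illustrative dynamic access decision combining a role check with proximity and risk.
        from dataclasses import dataclass

        @dataclass
        class Request:
            subject_roles: set      # RBAC-style roles held by the requester
            required_role: str      # role the resource normally demands
            proximity: float        # 0..1, strength of connection to the resource
            risk: float             # 0..1, downside if the data were compromised

        def access_decision(req: Request, threshold: float = 0.3) -> bool:
            # Static check first: a conventional RBAC/ABAC grant always suffices.
            if req.required_role in req.subject_roles:
                return True
            # Contextual override: strong mission proximity can open access,
            # but only when it clearly outweighs the risk of compromise.
            return (req.proximity - req.risk) >= threshold

        print(access_decision(Request({"analyst"}, "commander", proximity=0.9, risk=0.2)))  # True
        print(access_decision(Request({"analyst"}, "commander", proximity=0.5, risk=0.6)))  # False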

  3. Resource Sharing: New Technologies as a Must for Universal Availability of Information. International Essen Symposium (16th, Essen, Germany, October 18-21, 1993). Festschrift in Honor of Hans-Peter Geh.

    ERIC Educational Resources Information Center

    Helal, Ahmed H., Ed.; Weiss, Joachim W.

    This proceedings includes the following papers presented at the 16th International Essen Symposium: "Electronic Resource Sharing: It May Seem Obvious, But It's Not as Simple as it Looks" (Herbert S. White); "Resource Sharing through OCLC: A Comprehensive Approach" (Janet Mitchell); "The Business Information Network:…

  4. Language influences music harmony perception: effects of shared syntactic integration resources beyond attention

    PubMed Central

    Willems, Roel M.; Hagoort, Peter

    2016-01-01

    Many studies have revealed shared music–language processing resources by finding an influence of music harmony manipulations on concurrent language processing. However, the nature of the shared resources has remained ambiguous. They have been argued to be syntax specific and thus due to shared syntactic integration resources. An alternative view regards them as related to general attention and, thus, not specific to syntax. The present experiments evaluated these accounts by investigating the influence of language on music. Participants were asked to provide closure judgements on harmonic sequences in order to assess the appropriateness of sequence endings. At the same time participants read syntactic garden-path sentences. Closure judgements revealed a change in harmonic processing as the result of reading a syntactically challenging word. We found no influence of an arithmetic control manipulation (experiment 1) or semantic garden-path sentences (experiment 2). Our results provide behavioural evidence for a specific influence of linguistic syntax processing on musical harmony judgements. A closer look reveals that the shared resources appear to be needed to hold a harmonic key online in some form of syntactic working memory or unification workspace related to the integration of chords and words. Overall, our results support the syntax specificity of shared music–language processing resources. PMID:26998339

  5. Tripartite Governance: Enabling Successful Implementations with Vulnerable Populations.

    PubMed

    Kennedy, Margaret Ann

    2016-01-01

    Vulnerable populations are often at a distinct disadvantage when it comes to the implementation of health information systems in an equitable, appropriate, and timely manner. The disadvantages experienced by vulnerable populations are innumerable and include lack of representation, lack of appropriate levels of funding, and lack of resources and capacity. Increasingly, models of representation for complex implementations involve a tripartite project governance model. This tripartite partnership distributes accountability across all partners, and ensures that vulnerable populations have an equitable contribution to the direction of implementation according to their needs. This article shares lessons learned and best practices from complex tripartite partnerships supporting implementations with vulnerable populations in Canada.

  6. The iPlant Collaborative: Cyberinfrastructure for Enabling Data to Discovery for the Life Sciences.

    PubMed

    Merchant, Nirav; Lyons, Eric; Goff, Stephen; Vaughn, Matthew; Ware, Doreen; Micklos, David; Antin, Parker

    2016-01-01

    The iPlant Collaborative provides life science research communities access to comprehensive, scalable, and cohesive computational infrastructure for data management; identity management; collaboration tools; and cloud, high-performance, high-throughput computing. iPlant provides training, learning material, and best practice resources to help all researchers make the best use of their data, expand their computational skill set, and effectively manage their data and computation when working as distributed teams. iPlant's platform permits researchers to easily deposit and share their data and deploy new computational tools and analysis workflows, allowing the broader community to easily use and reuse those data and computational analyses.

  7. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. As discussed in the paper, the new categories of services being introduced will slowly replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper tries to integrate a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  8. HomeADL for adaptive ADL monitoring within smart homes.

    PubMed

    Hong, Xin; Nugent, Chris D; Finlay, Dewar D; Mulvenna, Maurice

    2008-01-01

    In this paper we present homeADL: a representation standard for an inference hierarchy of activities of daily living (ADL) which may be monitored in a sensor-equipped smart home. The approach allows free exchange of ADL monitoring structures between different communities that share the same concern of providing high-quality healthcare to the elderly. Its ability to match different ADL protocols enables mapping an ADL protocol to a suitable smart home, which allows effective management of smart homes within a community: not only satisfying an individual's healthcare requirements but also making efficient use of the monitoring resources at hand.
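
    As a rough illustration of the kind of inference hierarchy the abstract describes, the sketch below represents activities of daily living as a small tree whose leaves are low-level sensor events. All names and the structure are hypothetical and are not taken from the homeADL standard itself.

```python
# Minimal, hypothetical sketch of an ADL inference hierarchy: low-level sensor
# events roll up into higher-level activities of daily living. Names are
# illustrative only, not part of the homeADL standard.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ADLNode:
    """A node in an ADL inference hierarchy: an activity or sub-activity."""
    name: str
    children: List["ADLNode"] = field(default_factory=list)

    def leaves(self) -> List[str]:
        """Return the low-level observable events under this activity."""
        if not self.children:
            return [self.name]
        return [leaf for child in self.children for leaf in child.leaves()]


# A toy "prepare drink" protocol; a different smart home could map the same
# top-level ADL onto a different set of available sensors.
prepare_drink = ADLNode("prepare_drink", [
    ADLNode("make_tea", [ADLNode("kettle_on"), ADLNode("cup_taken")]),
    ADLNode("make_coffee", [ADLNode("coffee_machine_on"), ADLNode("cup_taken")]),
])

print(prepare_drink.leaves())
```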

  9. 7 CFR 624.7 - Cost-sharing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 6 2013-01-01 2013-01-01 false Cost-sharing. 624.7 Section 624.7 Agriculture Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE WATER RESOURCES EMERGENCY WATERSHED PROTECTION § 624.7 Cost-sharing. (a) Except as provided in...

  10. 7 CFR 624.7 - Cost-sharing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 6 2014-01-01 2014-01-01 false Cost-sharing. 624.7 Section 624.7 Agriculture Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE WATER RESOURCES EMERGENCY WATERSHED PROTECTION § 624.7 Cost-sharing. (a) Except as provided in...

  11. 7 CFR 624.7 - Cost-sharing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 6 2011-01-01 2011-01-01 false Cost-sharing. 624.7 Section 624.7 Agriculture Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE WATER RESOURCES EMERGENCY WATERSHED PROTECTION § 624.7 Cost-sharing. (a) Except as provided in...

  12. 7 CFR 624.7 - Cost-sharing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 6 2012-01-01 2012-01-01 false Cost-sharing. 624.7 Section 624.7 Agriculture Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES CONSERVATION SERVICE, DEPARTMENT OF AGRICULTURE WATER RESOURCES EMERGENCY WATERSHED PROTECTION § 624.7 Cost-sharing. (a) Except as provided in...

  13. State Support for Open Educational Resources: Key Findings from Achieve's OER Institute

    ERIC Educational Resources Information Center

    Achieve, Inc., 2013

    2013-01-01

    Open Educational Resources (OER) offer unique new opportunities for educators to share quality learning resources, especially in an increasingly digital world. Forty-six states and the District of Columbia have adopted the Common Core State Standards (CCSS), providing them with the unprecedented advantage of being able to share resources that are…

  14. Wyoming Academic Libraries Resource Project: Developing a Statewide Ariel Document Delivery Network. Final Report.

    ERIC Educational Resources Information Center

    Lange, Karen

    The Wyoming Academic Libraries Resource Project was initiated to improve cooperation and resource sharing by developing an interconnected information access and delivery system among Wyoming's academic libraries and the State Library. The goal was to formalize communication, cooperation, and resource sharing by developing an Ariel document…

  15. The demands and resources arising from shared office spaces.

    PubMed

    Morrison, Rachel L; Macky, Keith A

    2017-04-01

    The prevalence of flexible and shared office spaces is increasing significantly, yet the socioemotional outcomes associated with these environments are under-researched. Utilising the job demands-resources (JD-R) model, we investigate both the demands and the resources that can accrue to workers as a result of shared work environments and hot-desking. Data were collected from work-experienced respondents (n = 1000) assessing the extent to which they shared their office space with others, along with demands comprising distractions, uncooperative behaviours, distrust, and negative relationships, and resources from co-worker friendships and supervisor support. We found that, as work environments became more shared (with hot-desking being at the extreme end of the continuum), not only were there increases in demands, but co-worker friendships were not improved and perceptions of supervisory support decreased. Findings are discussed in relation to employee well-being, and recommendations are made regarding how best to ameliorate negative consequences of shared work environments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. An Infrastructure for Indexing and Organizing Best Practices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Liming; Staples, Mark; Gorton, Ian

    Industry best practices are widely held but not necessarily empirically verified software engineering beliefs. Best practices can be documented in distributed web-based public repositories as pattern catalogues or practice libraries. There is a need to systematically index and organize these practices to enable their better practical use and scientific evaluation. In this paper, we propose a semi-automatic approach to index and organize best practices. A central repository acts as an information overlay on top of other pre-existing resources to facilitate organization, navigation, annotation and meta-analysis while maintaining synchronization with those resources. An initial population of the central repository is automated using Yahoo! contextual search services. The collected data is organized using semantic web technologies so that the data can be more easily shared and used for innovative analyses. A prototype has demonstrated the capability of the approach.
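
    As a minimal sketch of the "organize with semantic web technologies" idea, the snippet below stores a single practice record as RDF triples using the rdflib library; the namespace, property names, and URLs are illustrative assumptions, not the repository's actual schema.

```python
# Minimal sketch: representing an indexed best practice as RDF triples so it
# can be shared and queried. Namespace and property names are illustrative,
# not the repository's actual schema. Requires: pip install rdflib
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/practices#")   # hypothetical namespace

g = Graph()
practice = EX["continuous-integration"]
g.add((practice, RDF.type, EX.BestPractice))
g.add((practice, RDFS.label, Literal("Continuous Integration")))
g.add((practice, EX.sourceCatalogue, URIRef("http://example.org/catalogue/42")))
g.add((practice, EX.annotation, Literal("Widely held; empirical evidence mixed.")))

# Serialize for exchange with other tools or for meta-analysis.
print(g.serialize(format="turtle"))
```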

  17. Global comparative healthcare effectiveness research: evaluating sustainable programmes in low & middle resource settings.

    PubMed

    Balkrishnan, Rajesh; Chang, Jongwha; Patel, Isha; Yang, Fang; Merajver, Sofia D

    2013-03-01

    The need to focus healthcare expenditures on innovative and sustainable health systems that efficiently use existing effective therapies is the major driver stimulating Comparative Effectiveness Research (CER) across the globe. Lack of adequate access to, and the high cost of, essential medicines and technologies in many countries increase morbidity, mortality, and the cost of care, forcing people and families into poverty through disability and out-of-pocket expenses. This review illustrates the potential of value-added global health care comparative effectiveness research in shaping health systems and health care delivery paradigms in the "global south". Enabling the development of effective CER systems globally paves the way for tangible local and regional definitions of equity in health care, because CER fosters the sharing of critical assets, resources, skills, and capabilities and the development of collaborative multi-sectorial frameworks to improve health outcomes and metrics globally.

  18. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    PubMed Central

    Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.

    2016-01-01

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676
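
    StochSS wraps full-featured simulation engines; as a purely illustrative stand-in for the kind of discrete stochastic model such engines run, here is a minimal Gillespie direct-method simulation of a single degradation reaction. This is not StochSS code, and the rates are arbitrary.

```python
# Minimal sketch of a Gillespie (direct-method) stochastic simulation for a
# single degradation reaction X -> 0 with rate k. Purely illustrative; it is
# not StochSS code, just the kind of discrete stochastic model such tools run.
import random


def gillespie_degradation(x0: int, k: float, t_end: float):
    """Simulate X -> 0 with propensity k * X; return (times, counts)."""
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end and x > 0:
        propensity = k * x
        t += random.expovariate(propensity)      # time to next reaction event
        x -= 1                                   # the only reaction: one X decays
        times.append(t)
        counts.append(x)
    return times, counts


times, counts = gillespie_degradation(x0=100, k=0.1, t_end=50.0)
print(f"{len(times) - 1} reaction events; final count {counts[-1]}")
```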

  19. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    DOE PAGES

    Drawert, Brian; Hellander, Andreas; Bales, Ben; ...

    2016-12-08

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We also demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.

  20. Sharing Service Resource Information for Application Integration in a Virtual Enterprise - Modeling the Communication Protocol for Exchanging Service Resource Information

    NASA Astrophysics Data System (ADS)

    Yamada, Hiroshi; Kawaguchi, Akira

    Grid computing and web service technologies enable us to use networked resources in a coordinated manner. An integrated service is made of individual services running on coordinated resources. In order to achieve such coordinated services autonomously, the initiator of a coordinated service needs to know detailed service resource information. This information ranges from static attributes, like the IP address of the application server, to highly dynamic ones, like the CPU load. The most famous wide-area service discovery mechanism based on names is DNS. Its hierarchical tree organization and caching methods take advantage of the static nature of the information it manages. However, in order to integrate business applications in a virtual enterprise, we need a discovery mechanism that searches for the optimal resources based on a given set of criteria (search keys). In this paper, we propose a communication protocol for exchanging service resource information among wide-area systems. We introduce the concept of the service domain, which consists of service providers managed under the same management policy; this concept is similar to that of autonomous systems (ASs). In each service domain, a service resource information provider manages the service resource information of the service providers that exist in that domain, and exchanges this information with service resource information providers belonging to different service domains. We also verified the protocol's behavior and effectiveness using a simulation model developed for the proposed protocol.
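
    To make the discovery idea concrete, here is a minimal sketch (illustrative only, not the authors' protocol) of service resource records that combine static attributes with dynamic ones, and a simple selection over a set of search keys.

```python
# Minimal sketch (illustrative only, not the authors' protocol): service
# resource records combining static attributes (address) with dynamic ones
# (CPU load), and a simple selection over a set of search criteria.
from dataclasses import dataclass


@dataclass
class ServiceResource:
    name: str
    domain: str        # the service domain that manages this record
    address: str       # static attribute
    cpu_load: float    # dynamic attribute, periodically refreshed


def select_resource(resources, required_domain=None, max_load=1.0):
    """Pick the least-loaded resource that satisfies the search keys."""
    candidates = [r for r in resources
                  if (required_domain is None or r.domain == required_domain)
                  and r.cpu_load <= max_load]
    return min(candidates, key=lambda r: r.cpu_load) if candidates else None


pool = [
    ServiceResource("app-1", "domain-a", "10.0.0.1", cpu_load=0.7),
    ServiceResource("app-2", "domain-b", "10.0.0.2", cpu_load=0.2),
]
print(select_resource(pool, max_load=0.5))     # -> app-2
```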

  1. Perspective: follow the money: the implications of medical schools' funds flow models.

    PubMed

    Miller, Jeffrey C; Andersson, George E; Cohen, Marcia; Cohen, Stephen M; Gibson, Scott; Hindery, Michael A; Hooven, Martha; Krakower, Jack; Browdy, David H

    2012-12-01

    Medical schools conduct research, provide clinical care, and educate future physicians and scientists. Each school has its own unique mix of revenue sources and expense sharing among the medical school, faculty practice plan(s), parent university, and affiliated hospital(s). Despite these differences, revenues from clinical care subsidize the money-losing research and education missions at every medical school. In this perspective, the authors discuss the flow of funds among a medical school, its faculty practice plan(s), parent university, and affiliated hospital(s). They summarize where medical school revenues come from, how revenues and expenses flow within a medical school and between a medical school and its partners, and why understanding this process is crucial to leading and managing such an enterprise. They conclude with recommendations for medical schools to consider in developing funds flow models that meet their individual needs and circumstances: (1) understand economic drivers, (2) reward desired behaviors, (3) enable every unit to generate a positive margin, (4) communicate budget priorities, financial performance, and the use of institutional resources, and (5) establish principles for sharing resources and allocating expenses among entities within the institution. Medical schools should develop funds flow models that are transparent, aligned with their strategic priorities, and reward the behaviors necessary to produce effective collaboration within and across mission areas.

  2. Sustainability in health care by allocating resources effectively (SHARE) 3: examining how resource allocation decisions are made, implemented and evaluated in a local healthcare setting.

    PubMed

    Harris, Claire; Allen, Kelly; Waller, Cara; Brooke, Vanessa

    2017-05-09

    This is the third in a series of papers reporting a program of Sustainability in Health care by Allocating Resources Effectively (SHARE) in a local healthcare setting. Leaders in a large Australian health service planned to establish an organisation-wide, systematic, integrated, evidence-based approach to disinvestment. In order to introduce new systems and processes for disinvestment into existing decision-making infrastructure, we aimed to understand where, how and by whom resource allocation decisions were made, implemented and evaluated. We also sought the knowledge and experience of staff regarding previous disinvestment activities. Structured interviews, workshops and document analysis were used to collect information from multiple sources in an environmental scan of decision-making systems and processes. Findings were synthesised using a theoretical framework. Sixty-eight respondents participated in interviews and workshops. Eight components in the process of resource allocation were identified: Governance, Administration, Stakeholder engagement, Resources, Decision-making, Implementation, Evaluation and, where appropriate, Reinvestment of savings. Elements of structure and practice for each component are described and a new framework was developed to capture the relationships between them. A range of decision-makers, decision-making settings, type and scope of decisions, criteria used, and strengths, weaknesses, barriers and enablers are outlined. The term 'disinvestment' was not used in health service decision-making. Previous projects that involved removal, reduction or restriction of current practices were driven by quality and safety issues, evidence-based practice or a need to find resource savings and not by initiatives where the primary aim was to disinvest. Measuring resource savings is difficult, in some situations impossible. Savings are often only theoretical as resources released may be utilised immediately by patients waiting for beds, clinic appointments or surgery. Decision-making systems and processes for resource allocation are more complex than assumed in previous studies. There is a wide range of decision-makers, settings, scope and type of decisions, and criteria used for allocating resources within a single institution. To our knowledge, this is the first paper to report this level of detail and to introduce eight components of the resource allocation process identified within a local health service.

  3. The Heliophysics Data Environment: Open Source, Open Systems and Open Data.

    NASA Astrophysics Data System (ADS)

    King, Todd; Roberts, Aaron; Walker, Raymond; Thieman, James

    2012-07-01

    The Heliophysics Data Environment (HPDE) is a place for scientific discovery. Today the Heliophysics Data Environment is a framework of technologies, standards and services which enables the international community to collaborate more effectively in space physics research. Crafting a framework for a data environment begins with defining a model of the tasks to be performed, then defining the functional aspects and the work flow. The foundation of any data environment is an information model which defines the structure and content of the metadata necessary to perform the tasks. In the Heliophysics Data Environment, the information model is the Space Physics Archive Search and Extract (SPASE) model, and available resources are described using this model. A described resource can reside anywhere on the internet, which makes it possible for a national archive, mission, data center or individual researcher to be a provider. The generated metadata is shared, reviewed and harvested to enable services. Virtual Observatories use the metadata to provide community-based portals. Through unique identifiers and registry services, tools can quickly discover and access data available anywhere on the internet. This enables a researcher to quickly view and analyze data in a variety of settings and enhances the Heliophysics Data Environment. To illustrate the current Heliophysics Data Environment, we present the design, architecture and operation of the Heliophysics framework. We then walk through a real example of using available tools to investigate the effects of the solar wind on Earth's magnetosphere.
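
    As a toy illustration of the registry idea, the sketch below keys simplified metadata records on unique resource identifiers so that a tool can resolve an identifier to an access location; the fields and values are stand-ins, not the full SPASE schema, and the identifiers and URLs are hypothetical.

```python
# Minimal sketch of the registry idea: resources described by metadata records
# keyed on a unique resource identifier, so tools can discover and access data
# wherever it lives. Field names are simplified stand-ins, not the full SPASE
# schema, and the identifier/URL values are hypothetical.
REGISTRY = {
    "spase://Example/NumericalData/MissionX/Magnetometer": {
        "ResourceName": "Mission X magnetometer data",
        "AccessURL": "https://data.example.org/missionx/mag/",
        "MeasurementType": "MagneticField",
    },
}


def resolve(resource_id: str) -> str:
    """Return the access URL for a registered resource identifier."""
    record = REGISTRY.get(resource_id)
    if record is None:
        raise KeyError(f"Unknown resource: {resource_id}")
    return record["AccessURL"]


print(resolve("spase://Example/NumericalData/MissionX/Magnetometer"))
```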

  4. Resource Sharing: A Necessity for the '80s.

    ERIC Educational Resources Information Center

    Lavo, Barbara, Comp.

    Papers presented at a 1981 seminar on library resource sharing covered topics related to Australasian databases, Australian and New Zealand document delivery systems, and shared acquisition and cataloging for special libraries. The papers included: (1) "AUSINET: Australasia's Information Network?" by Ian McCallum; (2) "Australia/New…

  5. 30 CFR 220.022 - Calculation of net profit share payment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Calculation of net profit share payment. 220.022 Section 220.022 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR MINERALS REVENUE MANAGEMENT ACCOUNTING PROCEDURES FOR DETERMINING NET PROFIT SHARE PAYMENT FOR OUTER CONTINENTAL...

  6. An open-source software platform for data management, visualisation, model building and model sharing in water, energy and other resource modelling domains.

    NASA Astrophysics Data System (ADS)

    Knox, S.; Meier, P.; Mohammed, K.; Korteling, B.; Matrosov, E. S.; Hurford, A.; Huskova, I.; Harou, J. J.; Rosenberg, D. E.; Thilmant, A.; Medellin-Azuara, J.; Wicks, J.

    2015-12-01

    Capacity expansion on resource networks is essential to adapting to economic and population growth and pressures such as climate change. Engineered infrastructure systems such as water, energy, or transport networks require sophisticated and bespoke models to refine management and investment strategies. Successful modeling of such complex systems relies on good data management and advanced methods to visualize and share data. Engineered infrastructure systems are often represented as networks of nodes and links, with operating rules describing their interactions. Infrastructure system management and planning can be abstracted to simulating or optimizing new operations and extensions of the network. By separating the data storage of abstract networks from manipulation and modeling, we have created a system that facilitates infrastructure modeling across various domains. We introduce Hydra Platform, free open-source software designed for analysts and modelers to store, manage and share network topology and data. Hydra Platform is a Python library with a web service layer for remote applications, called Apps, to connect. Apps serve various functions, including network or results visualization, data export (e.g. into a proprietary format) or model execution. This client-server architecture allows users to manipulate and share centrally stored data. XML templates allow a standardised description of the data structure required for storing network data such that it is compatible with specific models. Hydra Platform represents networks in an abstract way and is therefore not bound to a single modeling domain. It is the Apps that create domain-specific functionality. Using Apps, researchers from different domains can incorporate different models within the same network, enabling cross-disciplinary modeling while minimizing errors and streamlining data sharing. Separating the Python library from the web layer allows developers to natively expand the software or build web-based apps in other languages for remote functionality. Partner CH2M is developing a commercial user interface for Hydra Platform; however, custom interfaces and visualization tools can be built. Hydra Platform is available on GitHub, while Apps will be shared on a central repository.
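
    A minimal sketch of the abstract node-link representation described above, assuming illustrative attribute names; it is not Hydra Platform's API, just the separation of generic topology and attribute data from any particular domain.

```python
# Minimal sketch of an abstract node-link network: topology and attribute data
# are stored independently of any particular resource-modelling domain.
# Illustration only, not Hydra Platform's API.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    attributes: dict = field(default_factory=dict)   # e.g. storage capacity


@dataclass
class Link:
    source: str
    target: str
    attributes: dict = field(default_factory=dict)   # e.g. maximum flow


@dataclass
class Network:
    name: str
    nodes: list = field(default_factory=list)
    links: list = field(default_factory=list)


# The same abstract structure could describe a water, energy, or transport
# system; a domain-specific "App" would attach its own semantics to the data.
net = Network(
    name="toy-water-system",
    nodes=[Node("reservoir", {"capacity_Mm3": 120}), Node("city", {"demand_Mm3": 30})],
    links=[Link("reservoir", "city", {"max_flow_Mm3_per_month": 10})],
)
print(len(net.nodes), "nodes,", len(net.links), "link")
```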

  7. Enabling Interoperable and Selective Data Sharing among Social Networking Sites

    NASA Astrophysics Data System (ADS)

    Shin, Dongwan; Lopes, Rodrigo

    With the widespread use of social networking (SN) sites, and even the introduction of a social component in non-social-oriented services, there is growing concern over user privacy in general, and over how to handle and share user profiles across SN sites in particular. Although there have been several proprietary or open-source approaches to unifying the creation of third-party applications, the availability and retrieval of user profile information are still limited to the site where the third-party application is run, mostly devoid of support for data interoperability. In this paper we propose an approach to enabling interoperable and selective data sharing among SN sites. To support selective data sharing, we discuss an authenticated dictionary (ADT)-based credential which enables a user to share, with applications running on an SN site, only a subset of her information certified by external SN sites. For interoperable data sharing, we propose an extension to the OpenSocial API so that it can provide an open-source framework allowing the ADT-based credential to be used seamlessly among different SN sites.
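
    The selective-disclosure idea can be sketched with a tiny Merkle-style hash tree: the site certifies only the root of the user's attributes, and the user later reveals one attribute plus the sibling hashes needed to recompute that root. This is an illustration under simplifying assumptions (four fixed attributes, no signature), not the paper's ADT construction.

```python
# Minimal selective-disclosure sketch: the SN site certifies only the root hash
# of the user's profile attributes; the user reveals a chosen attribute plus the
# sibling hashes needed to recompute that root. Illustration only (four fixed
# leaves, no signature), not the paper's authenticated-dictionary construction.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def leaf(key: str, value: str) -> bytes:
    return h(f"{key}={value}".encode())


# Profile certified by the SN site (order is fixed and known to both parties).
attrs = [("name", "Alice"), ("email", "alice@example.org"),
         ("birth_year", "1990"), ("city", "Lisbon")]
leaves = [leaf(k, v) for k, v in attrs]

# Two-level Merkle tree over the four leaves; the root is what gets certified.
l01, l23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(l01 + l23)

# The user discloses only "city" (index 3) with its authentication path.
disclosed = attrs[3]
proof = [leaves[2], l01]          # sibling leaf, then sibling internal node


def verify(disclosed, proof, root) -> bool:
    """Recompute the root from the disclosed attribute and its proof."""
    node = leaf(*disclosed)
    node = h(proof[0] + node)     # index 3 is the right child at both levels
    node = h(proof[1] + node)
    return node == root


print(verify(disclosed, proof, root))   # True
```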

  8. Processing structure in language and music: a case for shared reliance on cognitive control.

    PubMed

    Slevc, L Robert; Okada, Brooke M

    2015-06-01

    The relationship between structural processing in music and language has received increasing interest in the past several years, spurred by the influential Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, Nature Neuroscience, 6, 674-681, 2003). According to this resource-sharing framework, music and language rely on separable syntactic representations but recruit shared cognitive resources to integrate these representations into evolving structures. The SSIRH is supported by findings of interactions between structural manipulations in music and language. However, other recent evidence suggests that such interactions also can arise with nonstructural manipulations, and some recent neuroimaging studies report largely nonoverlapping neural regions involved in processing musical and linguistic structure. These conflicting results raise the question of exactly what shared (and distinct) resources underlie musical and linguistic structural processing. This paper suggests that one shared resource is prefrontal cortical mechanisms of cognitive control, which are recruited to detect and resolve conflict that occurs when expectations are violated and interpretations must be revised. By this account, musical processing involves not just the incremental processing and integration of musical elements as they occur, but also the incremental generation of musical predictions and expectations, which must sometimes be overridden and revised in light of evolving musical input.

  9. Informatics methods to enable sharing of quantitative imaging research data.

    PubMed

    Levy, Mia A; Freymann, John B; Kirby, Justin S; Fedorov, Andriy; Fennessy, Fiona M; Eschrich, Steven A; Berglund, Anders E; Fenstermacher, David A; Tan, Yongqiang; Guo, Xiaotao; Casavant, Thomas L; Brown, Bartley J; Braun, Terry A; Dekker, Andre; Roelofs, Erik; Mountz, James M; Boada, Fernando; Laymon, Charles; Oborski, Matt; Rubin, Daniel L

    2012-11-01

    The National Cancer Institute Quantitative Imaging Network (QIN) is a collaborative research network whose goal is to share data, algorithms and research tools to accelerate quantitative imaging research. A challenge is the variability in tools and analysis platforms used in quantitative imaging. Our goal was to understand the extent of this variation and to develop an approach to enable data sharing and to promote reuse of quantitative imaging data in the community. We performed a survey of the current tools in use by the QIN member sites for representation and storage of their QIN research data, including images, image meta-data and clinical data. We identified existing systems and standards for data sharing and their gaps for the QIN use case. We then proposed a system architecture to enable data sharing and collaborative experimentation within the QIN. There are a variety of tools currently used by each QIN institution. We developed a general information system architecture to support the QIN goals. We also describe the remaining architecture gaps we are developing to enable members to share research images and image meta-data across the network. As a research network, the QIN will stimulate quantitative imaging research by pooling data, algorithms and research tools. However, there are gaps in current functional requirements that will need to be met by future informatics development. Special attention must be given to the technical requirements needed to translate these methods into the clinical research workflow to enable validation and qualification of these novel imaging biomarkers. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. A suite of R packages for web-enabled modeling and analysis of surface waters

    NASA Astrophysics Data System (ADS)

    Read, J. S.; Winslow, L. A.; Nüst, D.; De Cicco, L.; Walker, J. I.

    2014-12-01

    Researchers often create redundant methods for downloading, manipulating, and analyzing data from online resources. Moreover, the reproducibility of science can be hampered by complicated and voluminous data, lack of time for documentation and long-term maintenance of software, and fear of exposing programming skills. The combination of these factors can encourage unshared one-off programmatic solutions instead of openly provided reusable methods. Federal and academic researchers in the water resources and informatics domains have collaborated to address these issues. The result of this collaboration is a suite of modular R packages that can be used independently or as elements in reproducible analytical workflows. These documented and freely available R packages were designed to fill basic needs for the effective use of water data: the retrieval of time-series and spatial data from web resources (dataRetrieval, geoknife), performing quality assurance and quality control checks of these data with robust statistical methods (sensorQC), the creation of useful data derivatives (including physically- and biologically-relevant indices; GDopp, LakeMetabolizer), and the execution and evaluation of models (glmtools, rLakeAnalyzer). Here, we share details and recommendations for the collaborative coding process, and highlight the benefits of an open-source tool development pattern with a popular programming language in the water resources discipline (such as R). We provide examples of reproducible science driven by large volumes of web-available data using these tools, explore benefits of accessing packages as standardized web processing services (WPS) and present a working platform that allows domain experts to publish scientific algorithms in a service-oriented architecture (WPS4R). We assert that in the era of open data, tools that leverage these data should also be freely shared, transparent, and developed in an open innovation environment.

  11. The Social Development Summit and the developing countries.

    PubMed

    Barnabas, A P; Kulkarni, P D; Nanavatty, M C; Singh, R R

    1996-01-01

    This article discusses some concerns of the 1996 UN Summit on Social Development. Conference organizers identified the three key conference issues as poverty alleviation, social integration of the marginalized and disadvantaged, and expansion of productive employment. The goal of a "society for all" means dealing with the increasing differences between rich and poor countries, the survival of weaker economies in a competitive market system, wide variations in consumption patterns between countries, attainment of political stability while respecting ethnic identity, the rise in social problems among countries with a high human development index, and increasing joblessness. The Human Development Report for 1994 emphasizes human security. Social development is not the equivalent of human resource development nor a side issue of economic growth. The integration of ethnic groups poses social and political problems. There remains a question about what political system and culture would be best for social integration. Developed countries define poverty as the inability of people and government to provide resources and necessary services for people's productive activity. Poverty in developing countries is blamed on colonialism. Globally, developed countries control 71% of world trade. Sharing resources to meet basic needs throughout the world is not an operational ideal. The highest 20% of income earners receive 83% of the world income. The culture of poverty is the strategy used by the poor to survive. Welfare is not an end in itself but does enable the poor to improve their conditions. Development that focuses on productive employment is uncertain. Developed and developing countries do not share similar perceptions of human rights. There is a question as to who should set the priorities for social development. Sustainable social development is related to preservation of natural resources, control of population growth, and promotion of social security.

  12. SilvaCarbon: Volunteered Geographical Information and Effective Monitoring

    NASA Astrophysics Data System (ADS)

    Sun, M.

    2011-12-01

    Many countries have put significant effort into monitoring forest and terrestrial carbon in recent years. With the rapid increase in methodologies and resources, international collaboration is now critical for enhancing the capacity to manage and share ongoing research efficiently worldwide. Moreover, much broader citizen participation, with or without expert training, has become involved. Fortunately, the emergence of Web 2.0, social networking, and geopositioning technology makes such wide-ranging collaboration and participation in geospatial science research possible. The concept of Volunteered Geographical Information (VGI), coined by Michael F. Goodchild, enables the contribution of georeferenced and disseminated scientific resources and the exchange of information over the web. With this in mind, SilvaCarbon, applying the above technologies, is a project conducted by U.S. federal agencies as a U.S. contribution to the Forest Carbon Tracking task of the intergovernmental Group on Earth Observations. All research activities must rely on geographic data, and because of the observational objectives of the Forest Carbon Tracking task, data sharing is a main objective that the project needs to address. Data can be captured directly, contributed by secondary sources, or obtained from historical archives for past periods. Each VGI participant becomes a sensor with the ability to collect and share data. A given phenomenon can always be described more fully by data from multiple sources than by data captured individually, and data sharing also helps avoid duplication. Another purpose of SilvaCarbon is to describe the activity states of involved countries, communities and individual participants and to facilitate communication. With the assistance of other social networks such as Facebook and Twitter, VGI participants are given access to broadcast the status of their research or activities. They can also plan travels and trades and administer events and actions according to the returned information. More importantly, the quality of VGI needs to be taken into account, and participants can obtain the most current information and data resources shared in a timely manner. Furthermore, with the capacity of map mash-up technology, SilvaCarbon allows participants to map their data and research results, and to track activity locations and movements. Participants can share information through georeferenced representations on a map. A web portal prototype has been developed as an implementation of the VGI concept for SilvaCarbon according to the project objectives, and a group of users has been invited to test its functions.

  13. Resource Sharing in an Electronic Age: Past, Present, and Future.

    ERIC Educational Resources Information Center

    Jones, Adrian

    Librarians' work has become more challenging and complex over the past 15 years. Fifteen years ago, the telephone was a librarian's most used and most effective instrument, and librarians mostly relied on the resources within their own walls. In that era, resource sharing placed substantial burdens on larger libraries, and the resources of smaller…

  14. Studying interregional wildland fire engine assignments for large fire suppression

    Treesearch

    Erin J. Belval; Yu Wei; David E. Calkin; Crystal S. Stonesifer; Matthew P. Thompson; John R. Tipton

    2017-01-01

    One crucial component of large fire response in the United States (US) is the sharing of wildland firefighting resources between regions: resources from regions experiencing low fire activity supplement resources in regions experiencing high fire activity. An important step towards improving the efficiency of resource sharing and related policies is to develop a better...

  15. Sharing Ideas: Tough Times Encourage Colleges to Collaborate

    ERIC Educational Resources Information Center

    Fain, Paul; Blumenstyk, Goldie; Sander, Libby

    2009-01-01

    Tough times are encouraging colleges to share resources in a variety of areas, including campus security, research, and degree programs. Despite its veneer of cooperation, higher education is a competitive industry, where resource sharing is eyed warily. But the recession is chipping away at that reluctance, and institutions are pursuing…

  16. BioSharing: curated and crowd-sourced metadata standards, databases and data policies in the life sciences.

    PubMed

    McQuilton, Peter; Gonzalez-Beltran, Alejandra; Rocca-Serra, Philippe; Thurston, Milo; Lister, Allyson; Maguire, Eamonn; Sansone, Susanna-Assunta

    2016-01-01

    BioSharing (http://www.biosharing.org) is a manually curated, searchable portal of three linked registries. These resources cover standards (terminologies, formats and models, and reporting guidelines), databases, and data policies in the life sciences, broadly encompassing the biological, environmental and biomedical sciences. Launched in 2011 and built by the same core team as the successful MIBBI portal, BioSharing harnesses community curation to collate and cross-reference resources across the life sciences from around the world. BioSharing makes these resources findable and accessible (the core of the FAIR principle). Every record is designed to be interlinked, providing a detailed description not only on the resource itself, but also on its relations with other life science infrastructures. Serving a variety of stakeholders, BioSharing cultivates a growing community, to which it offers diverse benefits. It is a resource for funding bodies and journal publishers to navigate the metadata landscape of the biological sciences; an educational resource for librarians and information advisors; a publicising platform for standard and database developers/curators; and a research tool for bench and computer scientists to plan their work. BioSharing is working with an increasing number of journals and other registries, for example linking standards and databases to training material and tools. Driven by an international Advisory Board, the BioSharing user-base has grown by over 40% (by unique IP address) in the last year, thanks to successful engagement with researchers, publishers, librarians, developers and other stakeholders via several routes, including a joint RDA/Force11 working group and a collaboration with the International Society for Biocuration. In this article, we describe BioSharing, with a particular focus on community-led curation. Database URL: https://www.biosharing.org. © The Author(s) 2016. Published by Oxford University Press.

  17. International Conference of Directors of National Libraries on Resource Sharing in Asia and Oceanic [Proceedings] (Canberra, Australia, May 14-18, 1979). Development of Resource Sharing Networks. Networks Study No. 11.

    ERIC Educational Resources Information Center

    National Library of Australia, Canberra.

    The proceedings of this 1979 conference on library cooperation begin with proposals for the promotion of resource sharing among the national libraries of Asia and Oceania, the text of a policy statement on the role of national and international systems as approved at a 1976 meeting of directors of national libraries held in Lausanne, and a summary…

  18. A national-scale authentication infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, R.; Engert, D.; Foster, I.

    2000-12-01

    Today, individuals and institutions in science and industry are increasingly forming virtual organizations to pool resources and tackle a common goal. Participants in virtual organizations commonly need to share resources such as data archives, computer cycles, and networks - resources usually available only with restrictions based on the requested resource's nature and the user's identity. Thus, any sharing mechanism must have the ability to authenticate the user's identity and determine if the user is authorized to request the resource. Virtual organizations tend to be fluid, however, so authentication mechanisms must be flexible and lightweight, allowing administrators to quickly establish and change resource-sharing arrangements. However, because virtual organizations complement rather than replace existing institutions, sharing mechanisms cannot change local policies and must allow individual institutions to maintain control over their own resources. Our group has created and deployed an authentication and authorization infrastructure that meets these requirements: the Grid Security Infrastructure. GSI offers secure single sign-ons and preserves site control over access policies and local security. It provides its own versions of common applications, such as FTP and remote login, and a programming interface for creating secure applications.

  19. A Gossip-Based Optimistic Replication for Efficient Delay-Sensitive Streaming Using an Interactive Middleware Support System

    NASA Astrophysics Data System (ADS)

    Mavromoustakis, Constandinos X.; Karatza, Helen D.

    2010-06-01

    When sharing resources, efficiency is substantially degraded as a result of the scarce availability of the requested resources in a multi-client setting. This scarcity is often aggravated by many factors, such as temporal constraints on availability or node flooding by the requested replicated file chunks. Thus, replicated file chunks should be disseminated efficiently in order to enable on-demand resource availability for mobile users. This work considers a cross-layered middleware support system for efficient delay-sensitive streaming that exploits each device's connectivity and social interactions. Collaborative streaming is achieved through an epidemic file-chunk replication policy that uses a transition-based approach modelled on a chained infectious-disease model with susceptible, infected, recovered and death states. The gossip-based stateful model determines whether a mobile node should host a file chunk or not and, when a chunk is no longer needed, when to purge it. The proposed model is thoroughly evaluated through experimental simulation, measuring the effective throughput Eff as a function of the packet loss parameter and contrasting it with the effectiveness of the gossip-based replication policy.
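
    As a rough illustration of the epidemic dissemination idea, the sketch below simulates chunk replication with susceptible/infected/recovered states, where infected nodes gossip the chunk to a few random peers each round; the parameters and transition rules are illustrative, and the paper's additional "death" state is omitted.

```python
# Minimal sketch of epidemic chunk replication: nodes move through
# susceptible -> infected (hosting the chunk) -> recovered (purged) states, and
# infected nodes gossip the chunk to random peers each round. Parameters and
# transition rules are illustrative, not the paper's model.
import random

random.seed(1)

N_NODES, ROUNDS = 50, 20
FANOUT = 2              # peers gossiped to per round
P_RECOVER = 0.1         # chance an infected node purges the chunk each round

state = ["S"] * N_NODES
state[0] = "I"          # the node that initially holds the file chunk

for _ in range(ROUNDS):
    nxt = state[:]
    for i, s in enumerate(state):
        if s != "I":
            continue
        # Gossip the chunk to a few random peers that do not yet hold it.
        for j in random.sample(range(N_NODES), FANOUT):
            if state[j] == "S":
                nxt[j] = "I"
        # Occasionally purge the chunk once it is no longer needed.
        if random.random() < P_RECOVER:
            nxt[i] = "R"
    state = nxt

print({s: state.count(s) for s in "SIR"})
```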

  20. The Service Environment for Enhanced Knowledge and Research (SEEKR) Framework

    NASA Astrophysics Data System (ADS)

    King, T. A.; Walker, R. J.; Weigel, R. S.; Narock, T. W.; McGuire, R. E.; Candey, R. M.

    2011-12-01

    The Service Environment for Enhanced Knowledge and Research (SEEKR) Framework is a configurable service-oriented framework to enable the discovery, access and analysis of data shared in a community. The SEEKR framework integrates many existing independent services through the use of web technologies and standard metadata. Services are hosted on systems by using an application server and are callable by using REpresentational State Transfer (REST) protocols. Messages and metadata are transferred with eXtensible Markup Language (XML) encoding which conforms to a published XML schema. Space Physics Archive Search and Extract (SPASE) metadata is central to utilizing the services. Resources (data, documents, software, etc.) are described with SPASE and the associated Resource Identifier is used to access and exchange resources. The configurable options for the service can be set by using a web interface. Services are packaged as web application resource (WAR) files for direct deployment on application servers such as Tomcat or Jetty. We discuss the composition of the SEEKR framework, how new services can be integrated and the steps necessary to deploy the framework. The SEEKR Framework emerged from NASA's Virtual Magnetospheric Observatory (VMO) and other systems, and we present an overview of these systems from a SEEKR Framework perspective.

  1. CmapTools: A Software Environment for Knowledge Modeling and Sharing

    NASA Technical Reports Server (NTRS)

    Canas, Alberto J.

    2004-01-01

    In an ongoing collaborative effort between a group of NASA Ames scientists and researchers at the Institute for Human and Machine Cognition (IHMC) of the University of West Florida, a new version of CmapTools has been developed that enables scientists to construct knowledge models of their domain of expertise, share them with other scientists, make them available to anybody on the Internet with access to a Web browser, and peer-review other scientists' models. These software tools have been successfully used at NASA to build a large-scale multimedia knowledge model on Mars and a knowledge model on Habitability Assessment. The new version of the software places emphasis on greater usability for experts constructing their own knowledge models, and support for the creation of large knowledge models with a large number of supporting resources in the form of images, videos, web pages, and other media. Additionally, the software currently allows scientists to cooperate with each other in the construction, sharing and critique of knowledge models. Scientists collaborating remotely, for example researchers at the Astrobiology Institute, can concurrently manipulate the knowledge models they are viewing without having to do so at a special videoconferencing facility.

  2. Reusable Social Networking Capabilities for an Earth Science Collaboratory

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Da Silva, D.; Leptoukh, G. G.; Ramachandran, R.

    2011-12-01

    A vast untapped resource of data, tools, information and knowledge lies within the Earth science community. This is because it is difficult to share the full spectrum of these entities, particularly their full context. As a result, most knowledge exchange is through person-to-person contact at meetings, email and journal articles, each of which can support only a limited level of detail. We propose the creation of an Earth Science Collaboratory (ESC): a framework that would enable sharing of data, tools, workflows, results and the contextual knowledge about these information entities. The Drupal platform is well positioned to provide the key social networking capabilities to the ESC. As a proof of concept of a rich collaboration mechanism, we have developed a Drupal-based mechanism for graphically annotating and commenting on results images from analysis workflows in the online Giovanni analysis system for remote sensing data. The annotations can be tagged and shared with others in the community. These capabilities are further supplemented by a Research Notebook capability reused from another online analysis system named Talkoot. The goal is a reusable set of modules that can integrate with a variety of other applications either within Drupal web frameworks or at a machine level.

  3. Sharing resources: opportunities for smaller primary care practices to increase their capacity for patient care. Findings from the 2009 Commonwealth Fund International Health Policy Survey of Primary Care Physicians.

    PubMed

    Fryer, Ashley-Kay; Doty, Michelle M; Audet, Anne-Marie J

    2011-03-01

    Most Americans get their health care in small physician practices. Yet, small practice settings are often unable to provide the same range of services or participate in quality improvement initiatives as large practices because they lack the staff, information technology, and office systems. One promising strategy is to share clinical support services and information systems with other practices. New findings from the 2009 Commonwealth Fund International Health Policy Survey of Primary Care Physicians suggest smaller practices that share resources are more likely than those without shared resources to have advanced electronic medical records and health information technology, routinely track and manage patient information, have after-hours care arrangements, and engage in quality monitoring and benchmarking. This issue brief highlights strategies that can increase resources among small- and medium-sized practices and efforts supported by states, the private sector, and the Affordable Care Act that encourage the expansion of shared-resource models.

  4. Enablers and inhibitors of the implementation of the Casalud Model, a Mexican innovative healthcare model for non-communicable disease prevention and control.

    PubMed

    Tapia-Conyer, Roberto; Saucedo-Martinez, Rodrigo; Mujica-Rosales, Ricardo; Gallardo-Rincon, Hector; Campos-Rivera, Paola Abril; Lee, Evan; Waugh, Craig; Guajardo, Lucia; Torres-Beltran, Braulio; Quijano-Gonzalez, Ursula; Soni-Gallardo, Lidia

    2016-07-22

    The Mexican healthcare system is under increasing strain due to the rising prevalence of non-communicable diseases (especially type 2 diabetes), mounting costs, and a reactive curative approach focused on treating existing diseases and their complications rather than preventing them. Casalud is a comprehensive primary healthcare model that enables proactive prevention and disease management throughout the continuum of care, using innovative technologies and a patient-centred approach. Data were collected over a 2-year period in eight primary health clinics (PHCs) in two states in central Mexico to identify and assess enablers and inhibitors of the implementation process of Casalud. We used mixed quantitative and qualitative data collection tools: surveys, in-depth interviews, and participant and non-participant observations. Transcripts and field notes were analyzed and coded using Framework Analysis, focusing on defining and describing enablers and inhibitors of the implementation process. We identified seven recurring topics in the analyzed textual data. Four topics were categorized as enablers: political support for the Casalud model, alignment with current healthcare trends, ongoing technical improvements (to ease adoption and support), and capacity building. Three topics were categorized as inhibitors: administrative practices, health clinic human resources, and the lack of a shared vision of the model. Enablers are located at PHCs and across all levels of government, and include political support for, and the technological validity of, the model. The main inhibitor is the persistence of obsolete administrative practices at both state and PHC levels, which puts the administrative feasibility of the model's implementation in jeopardy. Constructing a shared vision around the model could facilitate the implementation of Casalud as well as circumvent administrative inhibitors. In order to overcome PHC-level barriers, it is crucial to have an efficient and straightforward adaptation and updating process for technological tools. One of the key lessons learned from the implementation of the Casalud model is that a degree of uncertainty must be tolerated when quickly scaling up a healthcare intervention. Similar patient-centred technology-based models must remain open to change and be able to quickly adapt to changing circumstances.

  5. Sustainability in Health care by Allocating Resources Effectively (SHARE) 7: supporting staff in evidence-based decision-making, implementation and evaluation in a local healthcare setting.

    PubMed

    Harris, Claire; Allen, Kelly; Waller, Cara; Dyer, Tim; Brooke, Vanessa; Garrubba, Marie; Melder, Angela; Voutier, Catherine; Gust, Anthony; Farjou, Dina

    2017-06-21

    This is the seventh in a series of papers reporting Sustainability in Health care by Allocating Resources Effectively (SHARE) in a local healthcare setting. The SHARE Program was a systematic, integrated, evidence-based program for resource allocation within a large Australian health service. It aimed to facilitate proactive use of evidence from research and local data; evidence-based decision-making for resource allocation including disinvestment; and development, implementation and evaluation of disinvestment projects. From the literature and responses of local stakeholders it was clear that provision of expertise and education, training and support of health service staff would be required to achieve these aims. Four support services were proposed. This paper is a detailed case report of the development, implementation and evaluation of a Data Service, Capacity Building Service and Project Support Service. An Evidence Service is reported separately. Literature reviews, surveys, interviews, consultation and workshops were used to capture and process the relevant information. Existing theoretical frameworks were adapted for evaluation and explication of processes and outcomes. Surveys and interviews identified current practice in use of evidence in decision-making, implementation and evaluation; staff needs for evidence-based practice; nature, type and availability of local health service data; and preferred formats for education and training. The Capacity Building and Project Support Services were successful in achieving short-term objectives, but long-term outcomes were not evaluated due to reduced funding. The Data Service was not implemented at all. Factors influencing the processes and outcomes are discussed. Health service staff need access to education, training, expertise and support to enable evidence-based decision-making and to implement and evaluate the changes arising from those decisions. Three support services were proposed based on research evidence and local findings. Local factors, some unanticipated and some unavoidable, were the main barriers to successful implementation. All three proposed support services hold promise as facilitators of evidence-based practice in the local healthcare setting. The findings from this study will inform further exploration.

  6. Regional Resource Initiative. A Blueprint for Sharing Resources and Expertise in Adult Education and Literacy across State Lines.

    ERIC Educational Resources Information Center

    Tennessee Univ., Knoxville. Center for Literacy Studies.

    The Arizona Adult Literacy and Technology Resource Center and the University of Tennessee's Center for Literacy Studies undertook a collaborative project to explore the feasibility and effectiveness of regional sharing of resources and expertise in the field of adult education and literacy education. The project's goals were as follows: involve a…

  7. Collective Designing and Sharing of Open Educational Resources: A Study of the French CARTOUN Platform

    ERIC Educational Resources Information Center

    Quere, Nolwenn

    2017-01-01

    Designing and sharing Open Educational Resources (OERs) requires teachers to develop new competences, in particular with digital resources. In this paper, the case of a language resource production group is introduced. Due to the centrality of the OERs in their collective activity, I show that the documents they produce are essential to the…

  8. Content-based histopathology image retrieval using CometCloud.

    PubMed

    Qi, Xin; Wang, Daihou; Rodero, Ivan; Diaz-Montes, Javier; Gensure, Rebekah H; Xing, Fuyong; Zhong, Hua; Goodell, Lauri; Parashar, Manish; Foran, David J; Yang, Lin

    2014-08-26

    The development of digital imaging technology is creating extraordinary levels of accuracy that provide support for improved reliability in different aspects of image analysis, such as content-based image retrieval, image segmentation, and classification. This has dramatically increased the volume and rate at which data are generated. Together these facts make querying and sharing non-trivial and render centralized solutions unfeasible. Moreover, these data are often distributed and must be shared across multiple institutions, requiring decentralized solutions. In this context, a new generation of data- and information-driven applications must be developed to take advantage of the national advanced cyber-infrastructure (ACI), which enables investigators to interact seamlessly and securely with information and data distributed across geographically disparate resources. This paper presents the development and evaluation of a novel content-based image retrieval (CBIR) framework. The methods were tested extensively using both peripheral blood smears and renal glomeruli specimens. The datasets and performance were evaluated by two pathologists to determine the concordance. The CBIR algorithms that were developed can reliably retrieve the candidate image patches exhibiting intensity and morphological characteristics that are most similar to a given query image. The methods described in this paper are able to reliably discriminate among subtle staining differences and spatial pattern distributions. By integrating a newly developed dual-similarity relevance feedback module into the CBIR framework, the CBIR results were improved substantially. By aggregating the computational power of high-performance computing (HPC) and cloud resources, we demonstrated that the method can be successfully executed in minutes on the Cloud compared to weeks using standard computers. In this paper, we present a set of newly developed CBIR algorithms and validate them using two different pathology applications, which are regularly evaluated in the practice of pathology. Comparative experimental results demonstrate excellent performance throughout the course of a set of systematic studies. Additionally, we present and evaluate a framework to enable the execution of these algorithms across distributed resources. We show how parallel searching of content-wise similar images in the dataset significantly reduces the overall computational time to ensure the practical utility of the proposed CBIR algorithms.
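
    A minimal sketch of content-based retrieval by feature similarity: each image patch is reduced to a feature vector and the closest database entries (by cosine similarity) are returned for a query. The random features and dimensions here are placeholders; the paper's descriptors, relevance feedback, and distributed execution are far richer.

```python
# Minimal sketch of content-based retrieval by feature similarity: return the
# database patches whose feature vectors are closest (cosine similarity) to the
# query. Random placeholder features stand in for real image descriptors.
import numpy as np

rng = np.random.default_rng(0)
db_features = rng.normal(size=(1000, 128))    # 1000 indexed patches, 128-D features
query = rng.normal(size=128)                  # feature vector of the query patch


def top_k(query, db, k=5):
    """Return indices of the k most similar database vectors (cosine)."""
    db_norm = db / np.linalg.norm(db, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    scores = db_norm @ q_norm
    return np.argsort(scores)[::-1][:k]


print(top_k(query, db_features))
```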

  9. 30 CFR 280.73 - Will MMS share data and information with coastal States?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Data Requirements Protections § 280.73 Will MMS share data and information with coastal States? (a) We... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Will MMS share data and information with coastal States? 280.73 Section 280.73 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE...

  10. 30 CFR 580.73 - Will BOEM share data and information with coastal States?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... CONTINENTAL SHELF Data Requirements Protections § 580.73 Will BOEM share data and information with coastal... 30 Mineral Resources 2 2012-07-01 2012-07-01 false Will BOEM share data and information with coastal States? 580.73 Section 580.73 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF...

  11. 30 CFR 580.73 - Will BOEM share data and information with coastal States?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... CONTINENTAL SHELF Data Requirements Protections § 580.73 Will BOEM share data and information with coastal... 30 Mineral Resources 2 2014-07-01 2014-07-01 false Will BOEM share data and information with coastal States? 580.73 Section 580.73 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF...

  12. International Conference of Directors of National Libraries on Resource Sharing in Asia and Oceania, Canberra, 1979: Papers from Australasia and Oceania.

    ERIC Educational Resources Information Center

    Ronnie, Mary; And Others

    1980-01-01

    Describes four library resource sharing projects in (1) New Zealand, (2) Papua New Guinea, (3) Australia, and (4) Fiji. Numerous shared services are discussed, including national bibliographies, publications exchanges, staff exchanges, clearing centers for duplicates, library planning, and national collections. (LLS)

  13. 30 CFR 580.73 - Will BOEM share data and information with coastal States?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... CONTINENTAL SHELF Data Requirements Protections § 580.73 Will BOEM share data and information with coastal... 30 Mineral Resources 2 2013-07-01 2013-07-01 false Will BOEM share data and information with coastal States? 580.73 Section 580.73 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF...

  14. The 3D Elevation Program: summary for Idaho

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    Elevation data are essential to a broad range of applications, including forest resources management, wildlife and habitat management, national security, recreation, and many others. For the State of Idaho, elevation data are critical for agriculture and precision farming, natural resources conservation, infrastructure and construction management, geologic resource assessment and hazard mitigation, flood risk management, forest resources management, and other business uses. Today, high-quality light detection and ranging (lidar) data are the sources for creating elevation models and other elevation datasets. Federal, State, and local agencies work in partnership to (1) replace data, on a national basis, that are (on average) 30 years old and of lower quality and (2) provide coverage where publicly accessible data do not exist. A joint goal of State and Federal partners is to acquire consistent, statewide coverage to support existing and emerging applications enabled by lidar data. The new 3D Elevation Program (3DEP) initiative, managed by the U.S. Geological Survey (USGS), responds to the growing need for high-quality topographic data and a wide range of other three-dimensional representations of the Nation’s natural and constructed features. The Idaho LiDAR Consortium provides statewide collaboration and data sharing mechanisms that can be used as a resource by State and Federal partners implementing the 3DEP initiative.

  15. Semantic Document Model to Enhance Data and Knowledge Interoperability

    NASA Astrophysics Data System (ADS)

    Nešić, Saša

    To enable document data and knowledge to be efficiently shared and reused across application, enterprise, and community boundaries, desktop documents should be completely open and queryable resources, whose data and knowledge are represented in a form understandable to both humans and machines. At the same time, these are the requirements that desktop documents need to satisfy in order to contribute to the visions of the Semantic Web. With the aim of achieving this goal, we have developed the Semantic Document Model (SDM), which turns desktop documents into Semantic Documents as uniquely identified and semantically annotated composite resources, that can be instantiated into human-readable (HR) and machine-processable (MP) forms. In this paper, we present the SDM along with an RDF and ontology-based solution for the MP document instance. Moreover, on top of the proposed model, we have built the Semantic Document Management System (SDMS), which provides a set of services that exploit the model. As an application example that takes advantage of SDMS services, we have extended MS Office with a set of tools that enables users to transform MS Office documents (e.g., MS Word and MS PowerPoint) into Semantic Documents, and to search local and distant semantic document repositories for document content units (CUs) over Semantic Web protocols.
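
    As a minimal sketch of what a machine-processable document instance might look like in RDF (the namespace, class, and property names below are invented for illustration and are not the actual SDM ontology), consider:

```python
# Illustrative sketch only: describe a document and one of its content
# units as RDF resources, in the spirit of the machine-processable (MP)
# instance discussed above. The 'sdm' namespace and its properties are
# hypothetical placeholders, not the actual SDM ontology.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import DCTERMS

SDM = Namespace("http://example.org/sdm#")       # placeholder namespace

g = Graph()
g.bind("sdm", SDM)
g.bind("dcterms", DCTERMS)

doc = URIRef("urn:uuid:0f8c9d2e-0000-0000-0000-000000000001")  # the document
cu = URIRef("urn:uuid:0f8c9d2e-0000-0000-0000-000000000002")   # one content unit

g.add((doc, RDF.type, SDM.SemanticDocument))
g.add((cu, RDF.type, SDM.ContentUnit))
g.add((doc, SDM.hasContentUnit, cu))
g.add((cu, DCTERMS.title, Literal("Section 2: Methods")))
g.add((cu, SDM.annotatedWith, URIRef("http://example.org/topics/ResourceSharing")))

print(g.serialize(format="turtle"))
```

    Serializing the graph to Turtle makes the content unit a uniquely identified, queryable Semantic Web resource that other tools can link to and search over.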

  16. Experimenting with an Evolving Ground/Space-based Software Architecture to Enable Sensor Webs

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel; Frye, Stuart

    2005-01-01

    A series of ongoing experiments is being conducted at the NASA Goddard Space Flight Center to explore integrated ground and space-based software architectures enabling sensor webs. A sensor web, as defined by Steve Talabac at NASA Goddard Space Flight Center (GSFC), is a coherent set of distributed nodes, interconnected by a communications fabric, that collectively behave as a single, dynamically adaptive observing system. The nodes can comprise satellites, ground instruments, computing nodes, etc. Sensor web capability requires autonomous management of constellation resources. This becomes progressively more important as more and more satellites share resources, such as communication channels and ground stations, while automatically coordinating their activities. There have been five ongoing activities, which include an effort to standardize a set of middleware. This paper will describe one set of activities using the Earth Observing 1 satellite, which used a variety of ground and flight software along with other satellites and ground sensors to prototype a sensor web. This activity allowed us to explore the difficulties that occur in the assembly of sensor webs given today's technology. We will present an overview of the software system architecture, some key experiments, and lessons learned to facilitate better sensor webs in the future.

  17. The asthma mobile health study, smartphone data collected using ResearchKit.

    PubMed

    Chan, Yu-Feng Yvonne; Bot, Brian M; Zweig, Micol; Tignor, Nicole; Ma, Weiping; Suver, Christine; Cedeno, Rafhael; Scott, Erick R; Gregory Hershman, Steven; Schadt, Eric E; Wang, Pei

    2018-05-22

    Widespread adoption of smart mobile platforms coupled with a growing ecosystem of sensors including passive location tracking and the ability to leverage external data sources create an opportunity to generate an unprecedented depth of data on individuals. Mobile health technologies could be utilized for chronic disease management as well as research to advance our understanding of common diseases, such as asthma. We conducted a prospective observational asthma study to assess the feasibility of this type of approach, clinical characteristics of cohorts recruited via a mobile platform, the validity of data collected, user retention patterns, and user data sharing preferences. We describe data and descriptive statistics from the Asthma Mobile Health Study, whereby participants engaged with an iPhone application built using Apple's ResearchKit framework. Data from 6346 U.S. participants, who agreed to share their data broadly, have been made available for further research. These resources have the potential to enable the research community to work collaboratively towards improving our understanding of asthma as well as mobile health research best practices.

  18. The core legion object model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, M.; Grimshaw, A.

    1996-12-31

    The Legion project at the University of Virginia is an architecture for designing and building system services that provide the illusion of a single virtual machine to users, a virtual machine that provides secure shared object and shared name spaces, application-adjustable fault-tolerance, improved response time, and greater throughput. Legion targets wide area assemblies of workstations, supercomputers, and parallel supercomputers; it tackles problems not solved by existing workstation-based parallel processing tools. The system will enable fault-tolerance, wide area parallel processing, inter-operability, heterogeneity, a single global name space, protection, security, efficient scheduling, and comprehensive resource management. This paper describes the core Legion object model, which specifies the composition and functionality of Legion's core objects: those objects that cooperate to create, locate, manage, and remove objects in the Legion system. The object model facilitates a flexible, extensible implementation, provides a single global name space, grants site autonomy to participating organizations, and scales to millions of sites and trillions of objects.
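
    As a toy illustration of the kind of cooperation the core objects are described as providing (globally unique names plus create/locate/remove operations), and not Legion code itself, a minimal registry might look like this:

```python
# Toy illustration (not Legion code): a registry offering globally unique
# names and the create/locate/remove operations that the cooperating core
# objects are described as providing.
import uuid

class ObjectRegistry:
    def __init__(self):
        self._objects = {}                 # global name -> object

    def create(self, obj):
        """Register an object and return its globally unique name."""
        name = f"legion:{uuid.uuid4()}"
        self._objects[name] = obj
        return name

    def locate(self, name):
        """Resolve a global name to the object, or None if unknown."""
        return self._objects.get(name)

    def remove(self, name):
        """Drop the binding for a global name, if present."""
        self._objects.pop(name, None)

registry = ObjectRegistry()
name = registry.create({"kind": "host", "site": "example-site"})
print(name, registry.locate(name))
```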

  19. Cybersecurity and privacy issues for socially integrated mobile healthcare applications operating in a multi-cloud environment.

    PubMed

    Al-Muhtadi, Jalal; Shahzad, Basit; Saleem, Kashif; Jameel, Wasif; Orgun, Mehmet A

    2017-05-01

    Social media has enabled information-sharing across massively large networks of people without spending much financial resources and time that are otherwise required in the print and electronic media. Mobile-based social media applications have overwhelmingly changed the information-sharing perspective. However, with the advent of such applications at an unprecedented scale, the privacy of the information is compromised to a larger extent if breach mitigation is not adequate. Since healthcare applications are also being developed for mobile devices so that they also benefit from the power of social media, cybersecurity privacy concerns for such sensitive applications have become critical. This article discusses the architecture of a typical mobile healthcare application, in which customized privacy levels are defined for the individuals participating in the system. It then elaborates on how the communication across a social network in a multi-cloud environment can be made more secure and private, especially for healthcare applications.

  20. Using smart mobile devices in social-network-based health education practice: a learning behavior analysis.

    PubMed

    Wu, Ting-Ting

    2014-06-01

    Virtual communities provide numerous resources, immediate feedback, and information sharing, enabling people to rapidly acquire information and knowledge and supporting diverse applications that facilitate interpersonal interactions, communication, and sharing. Moreover, incorporating highly mobile and convenient devices into practice-based courses can be advantageous in learning situations. Therefore, in this study, a tablet PC and Google+ were introduced to a health education practice course to assess satisfaction with the learning module and learning conditions and to analyze the sequence and frequency of learning behaviors during the social-network-based learning process. According to the analytical results, social networks can improve interaction among peers and between educators and students, particularly when these networks are used to search for data, post articles, engage in discussions, and communicate. In addition, most nursing students and nursing educators expressed a positive attitude and satisfaction toward these innovative teaching methods, and looked forward to continuing the use of this learning approach. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. A new technique for testing distribution of knowledge and to estimate sampling sufficiency in ethnobiology studies

    PubMed Central

    2012-01-01

    Background We propose a new quantitative measure that enables the researcher to make decisions and test hypotheses about the distribution of knowledge in a community and estimate the richness and sharing of information among informants. In our study, this measure has two levels of analysis: intracultural and intrafamily. Methods Using data collected in northeastern Brazil, we evaluated how these new estimators of richness and sharing behave for different categories of use. Results We observed trends in the distribution of the characteristics of informants. We were also able to evaluate how outliers interfere with these analyses and how other analyses may be conducted using these indices, such as determining the distance between the knowledge of a community and that of experts, as well as exhibiting the importance of these individuals' communal information of biological resources. One of the primary applications of these indices is to supply the researcher with an objective tool to evaluate the scope and behavior of the collected data. PMID:22420565

  2. A new technique for testing distribution of knowledge and to estimate sampling sufficiency in ethnobiology studies.

    PubMed

    Araújo, Thiago Antonio Sousa; Almeida, Alyson Luiz Santos; Melo, Joabe Gomes; Medeiros, Maria Franco Trindade; Ramos, Marcelo Alves; Silva, Rafael Ricardo Vasconcelos; Almeida, Cecília Fátima Castelo Branco Rangel; Albuquerque, Ulysses Paulino

    2012-03-15

    We propose a new quantitative measure that enables the researcher to make decisions and test hypotheses about the distribution of knowledge in a community and estimate the richness and sharing of information among informants. In our study, this measure has two levels of analysis: intracultural and intrafamily. Using data collected in northeastern Brazil, we evaluated how these new estimators of richness and sharing behave for different categories of use. We observed trends in the distribution of the characteristics of informants. We were also able to evaluate how outliers interfere with these analyses and how other analyses may be conducted using these indices, such as determining the distance between the knowledge of a community and that of experts, as well as exhibiting the importance of these individuals' communal information of biological resources. One of the primary applications of these indices is to supply the researcher with an objective tool to evaluate the scope and behavior of the collected data.

  3. Information Services in New Zealand and the Pacific.

    ERIC Educational Resources Information Center

    Ronnie, Mary A.

    This paper examines information services and resource sharing within New Zealand with a view to future participation in a Pacific resource sharing network. Activities of the National Library, the New Zealand Library Resources Committee, and the Information Services Committee are reviewed over a 40-year period, illustrating library cooperative…

  4. Sharing Resources in the Small School.

    ERIC Educational Resources Information Center

    Uxer, John E.

    Improved strategies for sharing resources are absolutely essential to the survival of small schools. Although not all, or even a major portion, of school programs should be provided by a cooperative delivery system, a discerning superintendent and board will mobilize every resource available to them in conducting their educational programs.…

  5. Optimization and Control of Cyber-Physical Vehicle Systems

    PubMed Central

    Bradley, Justin M.; Atkins, Ella M.

    2015-01-01

    A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined. PMID:26378541

  6. Optimization and Control of Cyber-Physical Vehicle Systems.

    PubMed

    Bradley, Justin M; Atkins, Ella M

    2015-09-11

    A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined.

  7. All inequality is not equal: children correct inequalities using resource value

    PubMed Central

    Shaw, Alex; Olson, Kristina R.

    2013-01-01

    Fairness concerns guide children's judgments about how to share resources with others. However, it is unclear from past research if children take extant inequalities or the value of resources involved in an inequality into account when sharing with others; these questions are the focus of the current studies. In all experiments, children saw an inequality between two recipients—one had two more resources than another. What varied between conditions was the value of the resources that the child could subsequently distribute. When the resources were equal in value to those involved in the original inequality, children corrected the previous inequality by giving two resources to the child with fewer resources (Experiment 1). However, as the value of the resources increased relative to those initially shared by the experimenter, children were more likely to distribute the two high value resources equally between the two recipients, presumably to minimize the overall inequality in value (Experiments 1 and 2). We found that children specifically use value, not just size, when trying to equalize outcomes (Experiment 3) and further found that children focus on the relative rather than absolute value of the resources they share—when the experimenter had unequally distributed the same high value resource that the child would later share, children corrected the previous inequality by giving two high value resources to the person who had received fewer high value resources. These results illustrate that children attempt to correct past inequalities and try to maintain equality not just in the count of resources but also by using the value of resources. PMID:23882227

  8. Development of shared decision-making resources to help inform difficult healthcare decisions: An example focused on dysvascular partial foot and transtibial amputations.

    PubMed

    Quigley, Matthew; Dillon, Michael P; Fatone, Stefania

    2018-02-01

    Shared decision making is a consultative process designed to encourage patient participation in decision making by providing accurate information about the treatment options and supporting deliberation with the clinicians about treatment options. The process can be supported by resources such as decision aids and discussion guides designed to inform and facilitate often difficult conversations. As this process increases in use, there is opportunity to raise awareness of shared decision making and the international standards used to guide the development of quality resources for use in areas of prosthetic/orthotic care. To describe the process used to develop shared decision-making resources, using an illustrative example focused on decisions about the level of dysvascular partial foot amputation or transtibial amputation. Development process: The International Patient Decision Aid Standards were used to guide the development of the decision aid and discussion guide focused on decisions about the level of dysvascular partial foot amputation or transtibial amputation. Examples from these shared decision-making resources help illuminate the stages of development including scoping and design, research synthesis, iterative development of a prototype, and preliminary testing with patients and clinicians not involved in the development process. Lessons learnt through the process, such as using the International Patient Decision Aid Standards checklist and development guidelines, may help inform others wanting to develop similar shared decision-making resources given the applicability of shared decision making to many areas of prosthetic-/orthotic-related practice. Clinical relevance Shared decision making is a process designed to guide conversations that help patients make an informed decision about their healthcare. Raising awareness of shared decision making and the international standards for development of high-quality decision aids and discussion guides is important as the approach is introduced in prosthetic-/orthotic-related practice.

  9. Numerical cognition explains age-related changes in third-party fairness.

    PubMed

    Chernyak, Nadia; Sandham, Beth; Harris, Paul L; Cordes, Sara

    2016-10-01

    Young children share fairly and expect others to do the same. Yet little is known about the underlying cognitive mechanisms that support fairness. We investigated whether children's numerical competencies are linked with their sharing behavior. Preschoolers (aged 2.5-5.5) participated in third-party resource allocation tasks in which they split a set of resources between 2 puppets. Children's numerical competence was assessed using the Give-N task (Sarnecka & Carey, 2008; Wynn, 1990). Numerical competence-specifically knowledge of the cardinal principle-explained age-related changes in fair sharing. Although many subset-knowers (those without knowledge of the cardinal principle) were still able to share fairly, they invoked turn-taking strategies and did not remember the number of resources they shared. These results suggest that numerical cognition serves as an important mechanism for fair sharing behavior, and that children employ different sharing strategies (division or turn-taking) depending on their numerical competence. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Ontology-based, Tissue MicroArray oriented, image centered tissue bank

    PubMed Central

    Viti, Federica; Merelli, Ivan; Caprera, Andrea; Lazzari, Barbara; Stella, Alessandra; Milanesi, Luciano

    2008-01-01

    Background Tissue MicroArray technique is becoming increasingly important in pathology for the validation of experimental data from transcriptomic analysis. This approach produces many images which need to be properly managed, if possible with an infrastructure able to support tissue sharing between institutes. Moreover, the available frameworks oriented to Tissue MicroArray provide good storage for clinical patient, sample treatment and block construction information, but their utility is limited by the lack of data integration with biomolecular information. Results In this work we propose a Tissue MicroArray web oriented system to support researchers in managing bio-samples and, through the use of ontologies, enables tissue sharing aimed at the design of Tissue MicroArray experiments and results evaluation. Indeed, our system provides ontological description both for pre-analysis tissue images and for post-process analysis image results, which is crucial for information exchange. Moreover, working on well-defined terms it is then possible to query web resources for literature articles to integrate both pathology and bioinformatics data. Conclusions Using this system, users associate an ontology-based description to each image uploaded into the database and also integrate results with the ontological description of biosequences identified in every tissue. Moreover, it is possible to integrate the ontological description provided by the user with a full compliant gene ontology definition, enabling statistical studies about correlation between the analyzed pathology and the most commonly related biological processes. PMID:18460177

  11. Does a House Divided Stand? Kinship and the Continuity of Shared Living Arrangements

    PubMed Central

    Glick, Jennifer E.; Van Hook, Jennifer

    2011-01-01

    Shared living arrangements can provide housing, economies of scale, and other instrumental support and may become an important resource in times of economic constraint. But the extent to which such living arrangements experience continuity or rapid change in composition is unclear. Previous research on extended-family households tended to focus on factors that trigger the onset of coresidence, including life course events or changes in health status and related economic needs. Relying on longitudinal data from 9,932 households in the Survey of Income and Program Participation (SIPP), the analyses demonstrate that the distribution of economic resources in the household also influences the continuity of shared living arrangements. The results suggest that multigenerational households of parents and adult children experience greater continuity in composition when one individual or couple has a disproportionate share of the economic resources in the household. Other coresidential households, those shared by other kin or nonkin, experience greater continuity when resources are more evenly distributed. PMID:22259218

  12. Decentralized Real-Time Scheduling

    DTIC Science & Technology

    1990-08-01

    ... must provide several alternative resource management policies, including FIFO and deadline queueing for shared resources that are not available. ... When demand exceeds the supply of shared resources (even within a single switch), some calls cannot be completed. In that case, a call's priority ... associated chiefly with the need to manage resources in a timely and decentralized fashion. The Alpha programming model permits the convenient expression of ...

  13. EarthCollab, building geoscience-centric implementations of the VIVO semantic software suite

    NASA Astrophysics Data System (ADS)

    Rowan, L. R.; Gross, M. B.; Mayernik, M. S.; Daniels, M. D.; Krafft, D. B.; Kahn, H. J.; Allison, J.; Snyder, C. B.; Johns, E. M.; Stott, D.

    2017-12-01

    EarthCollab, an EarthCube Building Block project, is extending an existing open-source semantic web application, VIVO, to enable the exchange of information about scientific researchers and resources across institutions. EarthCollab is a collaboration between UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy, The Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory, and Cornell University. VIVO has been implemented by more than 100 universities and research institutions to highlight research and institutional achievements. This presentation will discuss benefits and drawbacks of working with and extending open source software. Some extensions include plotting georeferenced objects on a map, a mobile-friendly theme, integration of faceting via Elasticsearch, extending the VIVO ontology to capture geoscience-centric objects and relationships, and the ability to cross-link between VIVO instances. Most implementations of VIVO gather information about a single organization. The EarthCollab project created VIVO extensions to enable cross-linking of VIVO instances to reduce the amount of duplicate information about the same people and scientific resources and to enable dynamic linking of related information across VIVO installations. As the list of customizations grows, so does the effort required to maintain compatibility between the EarthCollab forks and the main VIVO code. For example, dozens of libraries and dependencies were updated prior to the VIVO v1.10 release, which introduced conflicts in the EarthCollab cross-linking code. The cross-linking code has, however, been developed to enable sharing of data across different versions of VIVO by using a JSON output schema standardized across versions. We will outline lessons learned in working with VIVO and its open source dependencies, which include Jena, Solr, Freemarker, and jQuery, and discuss future work by EarthCollab, which includes refining the cross-linking VIVO capabilities by continued integration of persistent and unique identifiers to enable automated lookup and matching across institutional VIVOs.
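
    The abstract mentions a JSON output schema, standardized across VIVO versions, that supports cross-linking; the field names and URIs below are invented purely to illustrate how a persistent identifier (here an example ORCID) could drive lookup and matching between two VIVO instances.

```python
# Hypothetical illustration of a version-independent JSON record used to
# cross-link person entries between two VIVO instances. The schema fields,
# URIs, and identifier values are invented; the real EarthCollab schema
# may differ.
import json

person_record = {
    "uri": "https://vivo.example-facility.org/individual/per00042",
    "type": "foaf:Person",
    "label": "Example Researcher",
    "orcid": "0000-0002-1825-0097",      # persistent identifier used for matching
    "sameAs": [
        "https://vivo.example-university.edu/individual/per98765"
    ],
}

def find_cross_links(record):
    """Return the external VIVO URIs this record claims identity with."""
    return record.get("sameAs", [])

print(json.dumps(person_record, indent=2))
print(find_cross_links(person_record))
```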

  14. SHIWA Services for Workflow Creation and Sharing in Hydrometeorology

    NASA Astrophysics Data System (ADS)

    Terstyanszky, Gabor; Kiss, Tamas; Kacsuk, Peter; Sipos, Gergely

    2014-05-01

    Researchers want to run scientific experiments on Distributed Computing Infrastructures (DCI) to access large pools of resources and services. Running these experiments requires specific expertise that they may not have. Workflows can hide resources and services as a virtualisation layer providing a user interface that researchers can use. There are many scientific workflow systems but they are not interoperable. Learning a workflow system and creating workflows may require significant effort. Given this effort, it is not reasonable to expect that researchers will learn new workflow systems simply to run workflows developed in other workflow systems. Overcoming this requires workflow interoperability solutions that allow workflow sharing. The FP7 'Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs' (SHIWA) project developed the Coarse-Grained Interoperability concept (CGI). It enables recycling and sharing workflows of different workflow systems and executing them on different DCIs. SHIWA developed the SHIWA Simulation Platform (SSP) to implement the CGI concept, integrating three major components: the SHIWA Science Gateway, the workflow engines supported by the CGI concept, and the DCI resources where workflows are executed. The science gateway contains a portal, a submission service, a workflow repository and a proxy server to support the whole workflow life-cycle. The SHIWA Portal allows workflow creation, configuration, execution and monitoring through a Graphical User Interface using the WS-PGRADE workflow system as the host workflow system. The SHIWA Repository stores the formal description of workflows and workflow engines plus executables and data needed to execute them. It offers a wide range of browse and search operations. To support non-native workflow execution, the SHIWA Submission Service imports the workflow and workflow engine from the SHIWA Repository. This service either invokes pre-deployed workflow engines, locally or remotely, or submits workflow engines together with the workflow to local or remote resources for execution. The SHIWA Proxy Server manages certificates needed to execute the workflows on different DCIs. Currently SSP supports sharing of ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflows. Further workflow systems can be added to the simulation platform as required by research communities. The FP7 'Building a European Research Community through Interoperable Workflows and Data' (ER-flow) project disseminates the achievements of the SHIWA project to build workflow user communities across Europe. ER-flow provides application support to research communities within (Astrophysics, Computational Chemistry, Heliophysics and Life Sciences) and beyond (Hydrometeorology and Seismology) to develop, share and run workflows through the simulation platform. The simulation platform supports four usage scenarios: creating and publishing workflows in the repository, searching and selecting workflows in the repository, executing non-native workflows, and creating and running meta-workflows. The presentation will outline the CGI concept, the SHIWA Simulation Platform, the ER-flow usage scenarios, and how the Hydrometeorology research community runs simulations on SSP.
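
    The following sketch is not the SHIWA API; it only illustrates, under assumed field names, what a repository entry might carry and the two execution paths the Submission Service is described as supporting (invoking a pre-deployed engine versus shipping the engine with the workflow to the resource).

```python
# Not the SHIWA API: an assumed repository entry plus the two execution
# paths described above for the Submission Service (invoke a pre-deployed
# engine, or ship the engine together with the workflow to the resource).
workflow_entry = {
    "name": "precipitation-downscaling",          # hypothetical workflow
    "engine": "Taverna",                          # one of the supported engines
    "description": "formal description of the workflow",
    "executables": ["downscale.t2flow"],
    "inputs": {"forecast": "lfn:/grid/hydro/forecast.nc"},
}

def submit(entry, dci_resource, engine_predeployed=True):
    """Mimic the Submission Service's choice between execution paths."""
    if engine_predeployed:
        return f"invoke pre-deployed {entry['engine']} on {dci_resource}"
    return f"stage {entry['engine']} plus {entry['executables']} to {dci_resource}"

print(submit(workflow_entry, "dci-resource.example.org"))
print(submit(workflow_entry, "dci-resource.example.org", engine_predeployed=False))
```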

  15. A federated semantic metadata registry framework for enabling interoperability across clinical research and care domains.

    PubMed

    Sinaci, A Anil; Laleci Erturkmen, Gokce B

    2013-10-01

    In order to enable secondary use of Electronic Health Records (EHRs) by bridging the interoperability gap between clinical care and research domains, in this paper, a unified methodology and the supporting framework is introduced which brings together the power of metadata registries (MDR) and semantic web technologies. We introduce a federated semantic metadata registry framework by extending the ISO/IEC 11179 standard, and enable integration of data element registries through Linked Open Data (LOD) principles where each Common Data Element (CDE) can be uniquely referenced, queried and processed to enable the syntactic and semantic interoperability. Each CDE and their components are maintained as LOD resources enabling semantic links with other CDEs, terminology systems and with implementation dependent content models; hence facilitating semantic search, much effective reuse and semantic interoperability across different application domains. There are several important efforts addressing the semantic interoperability in healthcare domain such as IHE DEX profile proposal, CDISC SHARE and CDISC2RDF. Our architecture complements these by providing a framework to interlink existing data element registries and repositories for multiplying their potential for semantic interoperability to a greater extent. Open source implementation of the federated semantic MDR framework presented in this paper is the core of the semantic interoperability layer of the SALUS project which enables the execution of the post marketing safety analysis studies on top of existing EHR systems. Copyright © 2013 Elsevier Inc. All rights reserved.
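
    As a hedged sketch of the central idea (each Common Data Element exposed as a Linked Open Data resource with semantic links to terminology systems and to equivalent CDEs in other registries), the rdflib fragment below uses invented namespaces and URIs rather than the SALUS or ISO/IEC 11179 artifacts themselves:

```python
# Hedged sketch: one Common Data Element exposed as a Linked Open Data
# resource with links to a terminology concept and to an equivalent CDE in
# a federated registry. Namespaces and URIs are illustrative placeholders,
# not the SALUS / ISO/IEC 11179 implementation.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import SKOS

MDR = Namespace("http://example.org/mdr/")

g = Graph()
g.bind("mdr", MDR)
g.bind("skos", SKOS)

cde = MDR["cde/systolic-blood-pressure"]
g.add((cde, RDF.type, MDR.DataElement))
g.add((cde, SKOS.prefLabel, Literal("Systolic blood pressure", lang="en")))
# link to a concept in a terminology system (placeholder URI)
g.add((cde, SKOS.exactMatch, MDR["terminology/concept-8480-6"]))
# link to an equivalent CDE maintained in a different, federated registry
g.add((cde, SKOS.closeMatch, MDR["other-registry/cde/sbp"]))

print(g.serialize(format="turtle"))
```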

  16. Allocation of Resources to Collaborators and Free-Riders in 3-Year-Olds

    ERIC Educational Resources Information Center

    Melis, Alicia P.; Altrichter, Kristin; Tomasello, Michael

    2013-01-01

    Recent studies have shown that in situations where resources have been acquired collaboratively, children at around 3 years of age share mostly equally. We investigated 3-year-olds' sharing behavior with a collaborating partner and a free-riding partner who explicitly expressed her preference not to collaborate. Children shared more equally with…

  17. Interstitial Cystitis Association

    MedlinePlus


  18. Understanding the influence of power and empathic perspective-taking on collaborative natural resource management.

    PubMed

    Wald, Dara M; Segal, Elizabeth A; Johnston, Erik W; Vinze, Ajay

    2017-09-01

    Public engagement in collaborative natural resource management necessitates shared understanding and collaboration. Empathic perspective-taking is a critical facilitator of shared understanding and positive social interactions, such as collaboration. Yet there is currently little understanding about how to reliably generate empathic perspective-taking and collaboration, particularly in situations involving the unequal distribution of environmental resources or power. Here we examine how experiencing the loss or gain of social power influenced empathic perspective-taking and behavior within a computer-mediated scenario. Participants (n = 180) were randomly assigned to one of four conditions: high resources, low resources, lose resources, or gain resources. Contrary to our expectations, participants in the perspective-taking condition, specifically those who lost resources, also lost perspective taking and exhibited egoistic behavior. This finding suggests that resource control within the collaborative process is a key contextual variable that influences perspective-taking and collaborative behavior. Moreover, the observed relationship between perspective-taking and egoistic behavior within a collaborative resource sharing exercise suggests that when resource control or access is unequal, interventions to promote perspective-taking deserve careful consideration. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. BIGCHEM: Challenges and Opportunities for Big Data Analysis in Chemistry.

    PubMed

    Tetko, Igor V; Engkvist, Ola; Koch, Uwe; Reymond, Jean-Louis; Chen, Hongming

    2016-12-01

    The increasing volume of biomedical data in chemistry and life sciences requires the development of new methods and approaches for their handling. Here, we briefly discuss some challenges and opportunities of this fast growing area of research with a focus on those to be addressed within the BIGCHEM project. The article starts with a brief description of some available resources for "Big Data" in chemistry and a discussion of the importance of data quality. We then discuss challenges with visualization of millions of compounds by combining chemical and biological data, the expectations from mining the "Big Data" using advanced machine-learning methods, and their applications in polypharmacology prediction and target de-convolution in phenotypic screening. We show that the efficient exploration of billions of molecules requires the development of smart strategies. We also address the issue of secure information sharing without disclosing chemical structures, which is critical to enable bi-party or multi-party data sharing. Data sharing is important in the context of the recent trend of "open innovation" in pharmaceutical industry, which has led to not only more information sharing among academics and pharma industries but also the so-called "precompetitive" collaboration between pharma companies. At the end we highlight the importance of education in "Big Data" for further progress of this area. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  20. BIGCHEM: Challenges and Opportunities for Big Data Analysis in Chemistry

    PubMed Central

    Engkvist, Ola; Koch, Uwe; Reymond, Jean‐Louis; Chen, Hongming

    2016-01-01

    Abstract The increasing volume of biomedical data in chemistry and life sciences requires the development of new methods and approaches for their handling. Here, we briefly discuss some challenges and opportunities of this fast growing area of research with a focus on those to be addressed within the BIGCHEM project. The article starts with a brief description of some available resources for “Big Data” in chemistry and a discussion of the importance of data quality. We then discuss challenges with visualization of millions of compounds by combining chemical and biological data, the expectations from mining the “Big Data” using advanced machine‐learning methods, and their applications in polypharmacology prediction and target de‐convolution in phenotypic screening. We show that the efficient exploration of billions of molecules requires the development of smart strategies. We also address the issue of secure information sharing without disclosing chemical structures, which is critical to enable bi‐party or multi‐party data sharing. Data sharing is important in the context of the recent trend of “open innovation” in pharmaceutical industry, which has led to not only more information sharing among academics and pharma industries but also the so‐called “precompetitive” collaboration between pharma companies. At the end we highlight the importance of education in “Big Data” for further progress of this area. PMID:27464907

  1. Interlibrary Loan, the Key to Resource Sharing: A Manual of Procedures and Protocols.

    ERIC Educational Resources Information Center

    Alaska State Dept. of Education, Juneau. Div. of State Libraries.

    Intended for use by librarians in Alaska, this manual provides general guidelines for the maximum utilization of library resources through interlibrary loan service. The first of four major sections describes the Alaska Library Network (ALN), which provides protocols and procedures to libraries for resource sharing; points out that new protocols…

  2. Methods and Frequency of Sharing of Learning Resources by Medical Students

    ERIC Educational Resources Information Center

    Judd, Terry; Elliott, Kristine

    2017-01-01

    University students have ready access to quality learning resources through learning management systems (LMS), online library collections and generic search tools. However, anecdotal evidence suggests they sometimes turn to peer-based sharing rather than sourcing resources directly. We know little about this practice--how common it is, what sort…

  3. Use of the Advanced Communications Technology Satellite to Promote International Distance Education Programs for Georgetown University

    NASA Technical Reports Server (NTRS)

    Bradley, Harold; Kauffman, Amy

    1996-01-01

    Georgetown's distance education program is designed to demonstrate to faculty and administrators the feasibility and desirability of using two-way video transmission for international education. These programs will extend the reach of Georgetown's educational offerings; enrich the curriculum and content of Georgetown's offerings by interaction with institutions in other nations; enhance the world view of the School of Business Administration; enable Georgetown to share its resources with other institutions outside of the United States; and promote Commerce within the Americas. The primary reason for this pilot program is to evaluate the effectiveness and economic viability of offering academic courses and Small Business Development training.

  4. New working paradigms in research laboratories.

    PubMed

    Keighley, Wilma; Sewing, Andreas

    2009-07-01

    Work in research laboratories, especially within centralised functions in larger organisations, is changing fast. With easier access to external providers and Contract Research Organisations, and a focus on budgets and benchmarking, scientific expertise has to be complemented with operational excellence. New concepts, globally shared projects and restricted resources highlight the constraints of traditional operating models working from Monday to Friday and nine to five. Whilst many of our scientists welcome this new challenge, organisations have to enable and foster a more business-like mindset. Organisational structures, remuneration, as well as systems in finance need to be adapted to build operations that are best-in-class rather than merely minimising negative impacts of current organisational structures.

  5. The iPlant Collaborative: Cyberinfrastructure for Enabling Data to Discovery for the Life Sciences

    PubMed Central

    Merchant, Nirav; Lyons, Eric; Goff, Stephen; Vaughn, Matthew; Ware, Doreen; Micklos, David; Antin, Parker

    2016-01-01

    The iPlant Collaborative provides life science research communities access to comprehensive, scalable, and cohesive computational infrastructure for data management; identity management; collaboration tools; and cloud, high-performance, high-throughput computing. iPlant provides training, learning material, and best practice resources to help all researchers make the best use of their data, expand their computational skill set, and effectively manage their data and computation when working as distributed teams. iPlant’s platform permits researchers to easily deposit and share their data and deploy new computational tools and analysis workflows, allowing the broader community to easily use and reuse those data and computational analyses. PMID:26752627

  6. Stitch in Time: Enabling Change Using Computers: Innovative Uses of Information Technology Helping Providers to Care

    DTIC Science & Technology

    2011-01-01

    Recoverable points from the presentation slides: shares clinical workload and helps nurses and ancillary staff identify what preventive clinical measures a patient needs; provides quick access to patient-centered resources; providers and nurses can review data. (2011 MHS Conference)

  7. The Collaborative Heliophysics Events Knowledgebase

    NASA Astrophysics Data System (ADS)

    Hurlburt, N. E.; Schuler, D.; Cheung, C.

    2010-12-01

    The Collaborative Heliophysics Events Knowledgebase (CHEK) leverages and integrates the existing resources developed by HEK for SDO (Hurlburt et al. 2010) to provide a collaborative framework for heliophysics researchers. This framework will enable an environment where researchers can not only identify and locate relevant data but also deploy a social network for sharing and expanding knowledge about heliophysical events. CHEK will expand the HEK and key HEK clients into the heliosphere and geospace, and create a heliophysics social network. We describe the design and goals of the CHEK project and discuss its relation to Citizen Science in the heliosphere. Hurlburt, N., et al. 2010, “A Heliophysics Event Knowledgebase for Solar Dynamics Observatory,” Sol. Phys., in press.

  8. Data sharing by scientists: Practices and perceptions

    USGS Publications Warehouse

    Tenopir, C.; Allard, S.; Douglass, K.; Aydinoglu, A.U.; Wu, L.; Read, E.; Manoff, M.; Frame, M.

    2011-01-01

    Background: Scientific research in the 21st century is more data intensive and collaborative than in the past. It is important to study the data practices of researchers - data accessibility, discovery, re-use, preservation and, particularly, data sharing. Data sharing is a valuable part of the scientific method allowing for verification of results and extending research from prior results. Methodology/Principal Findings: A total of 1329 scientists participated in this survey exploring current data sharing practices and perceptions of the barriers and enablers of data sharing. Scientists do not make their data electronically available to others for various reasons, including insufficient time and lack of funding. Most respondents are satisfied with their current processes for the initial and short-term parts of the data or research lifecycle (collecting their research data; searching for, describing or cataloging, analyzing, and short-term storage of their data) but are not satisfied with long-term data preservation. Many organizations do not provide support to their researchers for data management both in the short- and long-term. If certain conditions are met (such as formal citation and sharing reprints) respondents agree they are willing to share their data. There are also significant differences and approaches in data management practices based on primary funding agency, subject discipline, age, work focus, and world region. Conclusions/Significance: Barriers to effective data sharing and preservation are deeply rooted in the practices and culture of the research process as well as the researchers themselves. New mandates for data management plans from NSF and other federal agencies and world-wide attention to the need to share and preserve data could lead to changes. Large scale programs, such as the NSF-sponsored DataNET (including projects like DataONE) will both bring attention and resources to the issue and make it easier for scientists to apply sound data management principles. © 2011 Tenopir et al.

  9. Data Sharing by Scientists: Practices and Perceptions

    PubMed Central

    Tenopir, Carol; Allard, Suzie; Douglass, Kimberly; Aydinoglu, Arsev Umur; Wu, Lei; Read, Eleanor; Manoff, Maribeth; Frame, Mike

    2011-01-01

    Background Scientific research in the 21st century is more data intensive and collaborative than in the past. It is important to study the data practices of researchers – data accessibility, discovery, re-use, preservation and, particularly, data sharing. Data sharing is a valuable part of the scientific method allowing for verification of results and extending research from prior results. Methodology/Principal Findings A total of 1329 scientists participated in this survey exploring current data sharing practices and perceptions of the barriers and enablers of data sharing. Scientists do not make their data electronically available to others for various reasons, including insufficient time and lack of funding. Most respondents are satisfied with their current processes for the initial and short-term parts of the data or research lifecycle (collecting their research data; searching for, describing or cataloging, analyzing, and short-term storage of their data) but are not satisfied with long-term data preservation. Many organizations do not provide support to their researchers for data management both in the short- and long-term. If certain conditions are met (such as formal citation and sharing reprints) respondents agree they are willing to share their data. There are also significant differences and approaches in data management practices based on primary funding agency, subject discipline, age, work focus, and world region. Conclusions/Significance Barriers to effective data sharing and preservation are deeply rooted in the practices and culture of the research process as well as the researchers themselves. New mandates for data management plans from NSF and other federal agencies and world-wide attention to the need to share and preserve data could lead to changes. Large scale programs, such as the NSF-sponsored DataNET (including projects like DataONE) will both bring attention and resources to the issue and make it easier for scientists to apply sound data management principles. PMID:21738610

  10. IC Treatment: Antihistamines

    MedlinePlus


  11. Pregnancy and IC

    MedlinePlus


  12. Lean production tools and decision latitude enable conditions for innovative learning in organizations: a multilevel analysis.

    PubMed

    Fagerlind Ståhl, Anna-Carin; Gustavsson, Maria; Karlsson, Nadine; Johansson, Gun; Ekberg, Kerstin

    2015-03-01

    The effect of lean production on conditions for learning is debated. This study aimed to investigate how tools inspired by lean production (standardization, resource reduction, visual monitoring, housekeeping, value flow analysis) were associated with an innovative learning climate and with collective dispersion of ideas in organizations, and whether decision latitude contributed to these associations. A questionnaire was sent out to employees in public, private, production and service organizations (n = 4442). Multilevel linear regression analyses were used. Use of lean tools and decision latitude were positively associated with an innovative learning climate and collective dispersion of ideas. A low degree of decision latitude was a modifier in the association to collective dispersion of ideas. Lean tools can enable shared understanding and collective spreading of ideas, needed for the development of work processes, especially when decision latitude is low. Value flow analysis played a pivotal role in the associations. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
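
    The abstract reports multilevel linear regression with decision latitude acting as a modifier; the following synthetic example is not the authors' analysis, only an illustration of fitting that kind of model (employees nested in organizations, lean-tool use interacted with decision latitude) with statsmodels:

```python
# Not the authors' analysis: a synthetic illustration of the kind of
# multilevel (mixed-effects) linear model described above, with employees
# nested in organizations and an interaction testing whether decision
# latitude modifies the association between lean-tool use and climate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_orgs, n_per_org = 40, 25
org = np.repeat(np.arange(n_orgs), n_per_org)
lean_tools = rng.normal(size=org.size)        # degree of lean-tool use
latitude = rng.normal(size=org.size)          # decision latitude
org_effect = rng.normal(scale=0.5, size=n_orgs)[org]  # organization-level variation
climate = (0.4 * lean_tools + 0.3 * latitude
           - 0.2 * lean_tools * latitude
           + org_effect + rng.normal(size=org.size))

df = pd.DataFrame({"climate": climate, "lean_tools": lean_tools,
                   "latitude": latitude, "org": org})

# random intercept per organization; fixed effects include the interaction
model = smf.mixedlm("climate ~ lean_tools * latitude", df, groups=df["org"])
print(model.fit().summary())
```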

  13. Shared-resource computing for small research labs.

    PubMed

    Ackerman, M J

    1982-04-01

    A real time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off the shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11-M multi-user real-time operating system. The cost effectiveness of the shared resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  14. Evaluation of the Display of Cognitive State Feedback to Drive Adaptive Task Sharing

    PubMed Central

    Dorneich, Michael C.; Passinger, Břetislav; Hamblin, Christopher; Keinrath, Claudia; Vašek, Jiři; Whitlow, Stephen D.; Beekhuyzen, Martijn

    2017-01-01

    This paper presents an adaptive system intended to address workload imbalances between pilots in future flight decks. Team performance can be maximized when task demands are balanced within crew capabilities and resources. Good communication skills enable teams to adapt to changes in workload, and include the balancing of workload between team members. This work addresses human factors priorities in the aviation domain with the goal to develop concepts that balance operator workload, support future operator roles and responsibilities, and support new task requirements, while allowing operators to focus on the most safety critical tasks. A traditional closed-loop adaptive system includes the decision logic to turn automated adaptations on and off. This work takes a novel approach of replacing the decision logic, normally performed by the automation, with human decisions. The Crew Workload Manager (CWLM) was developed to objectively display the workload between pilots and recommend task sharing; it is then the pilots who “close the loop” by deciding how to best mitigate unbalanced workload. The workload was manipulated by the Shared Aviation Task Battery (SAT-B), which was developed to provide opportunities for pilots to mitigate imbalances in workload between crew members. Participants were put in situations of high and low workload (i.e., workload was manipulated as opposed to being measured), the workload was then displayed to pilots, and pilots were allowed to decide how to mitigate the situation. An evaluation was performed that utilized the SAT-B to manipulate workload and create workload imbalances. Overall, the CWLM reduced the time spent in unbalanced workload and improved the crew coordination in task sharing while not negatively impacting concurrent task performance. Balancing workload has the potential to improve crew resource management and task performance over time, and reduce errors and fatigue. Paired with a real-time workload measurement system, the CWLM could help teams manage their own task load distribution. PMID:28400716

  15. Evaluation of the Display of Cognitive State Feedback to Drive Adaptive Task Sharing.

    PubMed

    Dorneich, Michael C; Passinger, Břetislav; Hamblin, Christopher; Keinrath, Claudia; Vašek, Jiři; Whitlow, Stephen D; Beekhuyzen, Martijn

    2017-01-01

This paper presents an adaptive system intended to address workload imbalances between pilots in future flight decks. Team performance can be maximized when task demands are balanced within crew capabilities and resources. Good communication skills enable teams to adapt to changes in workload, and include the balancing of workload between team members. This work addresses human factors priorities in the aviation domain with the goal of developing concepts that balance operator workload, support future operator roles and responsibilities, and support new task requirements, while allowing operators to focus on the most safety-critical tasks. A traditional closed-loop adaptive system includes the decision logic to turn automated adaptations on and off. This work takes a novel approach of replacing the decision logic, normally performed by the automation, with human decisions. The Crew Workload Manager (CWLM) was developed to objectively display the workload between pilots and recommend task sharing; it is then the pilots who "close the loop" by deciding how to best mitigate unbalanced workload. The workload was manipulated by the Shared Aviation Task Battery (SAT-B), which was developed to provide opportunities for pilots to mitigate imbalances in workload between crew members. Participants were put in situations of high and low workload (i.e., workload was manipulated as opposed to being measured), the workload was then displayed to pilots, and pilots were allowed to decide how to mitigate the situation. An evaluation was performed that utilized the SAT-B to manipulate workload and create workload imbalances. Overall, the CWLM reduced the time spent in unbalanced workload and improved the crew coordination in task sharing while not negatively impacting concurrent task performance. Balancing workload has the potential to improve crew resource management and task performance over time, and reduce errors and fatigue. Paired with a real-time workload measurement system, the CWLM could help teams manage their own task load distribution.

  16. Scaling to diversity: The DERECHOS distributed infrastructure for analyzing and sharing data

    NASA Astrophysics Data System (ADS)

    Rilee, M. L.; Kuo, K. S.; Clune, T.; Oloso, A.; Brown, P. G.

    2016-12-01

Integrating Earth Science data from diverse sources such as satellite imagery and simulation output can be expensive and time-consuming, limiting scientific inquiry and the quality of our analyses. Reducing these costs will improve innovation and quality in science. The current Earth Science data infrastructure focuses on downloading data based on requests formed from the search and analysis of associated metadata. While the data products provided by archives may use the best available data-sharing technologies, scientist end-users generally do not have such resources (including staff) available to them. Furthermore, only once an end-user has received the data from multiple diverse sources and has integrated them can the actual analysis and synthesis begin. The cost of getting from idea to the point where synthesis can start dramatically slows progress. In this presentation we discuss a distributed computational and data storage framework that eliminates much of the aforementioned cost. The SciDB distributed array database is central as it is optimized for scientific computing involving very large arrays, performing better than less specialized frameworks like Spark. Adding spatiotemporal functions to SciDB creates a powerful platform for analyzing and integrating massive, distributed datasets. SciDB allows Big Earth Data analysis to be performed "in place" without the need for expensive downloads and end-user resources. Spatiotemporal indexing technologies such as the hierarchical triangular mesh enable the compute and storage affinity needed to efficiently perform co-located and conditional analyses while minimizing data transfers. These technologies automate the integration of diverse data sources using the framework, a critical step beyond current metadata search and analysis. Instead of downloading data into their idiosyncratic local environments, end-users can generate and share data products integrated from multiple diverse sources using a common shared environment, turning distributed active archive centers (DAACs) from warehouses into distributed active analysis centers.
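
    As an illustrative aside (not code from the DERECHOS project): the co-location idea behind such spatiotemporal indexes can be sketched in Python by simplifying it to regular latitude/longitude/time bins instead of an actual hierarchical triangular mesh, and joining two toy datasets on the shared bin key.

    # Illustrative sketch only: a simplified stand-in for the spatiotemporal
    # indexing described above.  Real systems would use a hierarchical
    # triangular mesh (HTM) inside a distributed array database such as SciDB;
    # here regular lat/lon/time bins and plain Python dictionaries are used.
    import numpy as np

    def index_key(lat, lon, t, dlat=0.5, dlon=0.5, dt=3600.0):
        """Map (lat, lon, time) samples to integer bin keys."""
        i = np.floor((lat + 90.0) / dlat).astype(np.int64)
        j = np.floor((lon + 180.0) / dlon).astype(np.int64)
        k = np.floor(t / dt).astype(np.int64)
        # Pack the three bin indices into a single 64-bit key.
        return (k << 32) | (i << 16) | j

    def colocated_pairs(a, b):
        """Yield (value_a, value_b) pairs that share a spatiotemporal bin."""
        lut = {}
        for key, val in zip(index_key(a["lat"], a["lon"], a["t"]), a["val"]):
            lut.setdefault(int(key), []).append(val)
        for key, val in zip(index_key(b["lat"], b["lon"], b["t"]), b["val"]):
            for other in lut.get(int(key), []):
                yield other, val

    # Two toy "datasets" (e.g., satellite retrievals and model output).
    rng = np.random.default_rng(0)
    ds_a = {"lat": rng.uniform(-90, 90, 1000), "lon": rng.uniform(-180, 180, 1000),
            "t": rng.uniform(0, 86400, 1000), "val": rng.normal(280, 5, 1000)}
    ds_b = {"lat": rng.uniform(-90, 90, 1000), "lon": rng.uniform(-180, 180, 1000),
            "t": rng.uniform(0, 86400, 1000), "val": rng.normal(281, 5, 1000)}
    print(sum(1 for _ in colocated_pairs(ds_a, ds_b)), "co-located pairs found")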

  17. Open Science in the Cloud: Towards a Universal Platform for Scientific and Statistical Computing

    NASA Astrophysics Data System (ADS)

    Chine, Karim

    The UK, through the e-Science program, the US through the NSF-funded cyber infrastructure and the European Union through the ICT Calls aimed to provide "the technological solution to the problem of efficiently connecting data, computers, and people with the goal of enabling derivation of novel scientific theories and knowledge".1 The Grid (Foster, 2002; Foster; Kesselman, Nick, & Tuecke, 2002), foreseen as a major accelerator of discovery, didn't meet the expectations it had excited at its beginnings and was not adopted by the broad population of research professionals. The Grid is a good tool for particle physicists and it has allowed them to tackle the tremendous computational challenges inherent to their field. However, as a technology and paradigm for delivering computing on demand, it doesn't work and it can't be fixed. On one hand, "the abstractions that Grids expose - to the end-user, to the deployers and to application developers - are inappropriate and they need to be higher level" (Jha, Merzky, & Fox), and on the other hand, academic Grids are inherently economically unsustainable. They can't compete with a service outsourced to the Industry whose quality and price would be driven by market forces. The virtualization technologies and their corollary, the Infrastructure-as-a-Service (IaaS) style cloud, hold the promise to enable what the Grid failed to deliver: a sustainable environment for computational sciences that would lower the barriers for accessing federated computational resources, software tools and data; enable collaboration and resources sharing and provide the building blocks of a ubiquitous platform for traceable and reproducible computational research.

  18. Understanding the investigators: a qualitative study investigating the barriers and enablers to the implementation of local investigator-initiated clinical trials in Ethiopia

    PubMed Central

    Franzen, Samuel R P; Chandler, Clare; Enquselassie, Fikre; Siribaddana, Sisira; Atashili, Julius; Angus, Brian; Lang, Trudie

    2013-01-01

Objectives Clinical trials provide ‘gold standard’ evidence for policy, but insufficient locally relevant trials are conducted in low-income and middle-income countries. Local investigator-initiated trials could generate highly relevant data for national governments, but information is lacking on how to facilitate them. We aimed to identify barriers and enablers to investigator-initiated trials in Ethiopia to inform and direct capacity strengthening initiatives. Design Exploratory, qualitative study comprising in-depth interviews (n=7) and focus group discussions (n=3). Setting Fieldwork took place in Ethiopia during March 2011. Participants Local health researchers with previous experience of clinical trials or stakeholders with an interest in trials were recruited through snowball sampling (n=20). Outcome measures Detailed discussion notes were analysed using thematic coding analysis and key themes were identified. Results All participants perceived investigator-initiated trials as important for generating local evidence. System and organisational barriers included: limited funding allocation, weak regulatory and administrative systems, few learning opportunities, limited human and material capacity and poor incentives for conducting research. Operational hurdles were symptomatic of these barriers. Lack of awareness, confidence and motivation to undertake trials were important individual barriers. Training, knowledge sharing and experience exchange were key enablers to trial conduct and collaboration was unanimously regarded as important for improving capacity. Conclusions Barriers to trial conduct were found at individual, operational, organisational and system levels. These findings indicate that to increase locally led trial conduct in Ethiopia, system-wide changes are needed to create a more receptive and enabling research environment. Crucially, the creation of research networks between potential trial groups could provide much-needed practical collaborative support through sharing of financial and project management burdens, knowledge and resources. These findings could have important implications for capacity-strengthening initiatives but further research is needed before the results can be generalised more widely. PMID:24285629

  19. Vroom: designing an augmented environment for remote collaboration in digital cinema production

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; Cornish, Tracy

    2013-03-01

As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile, Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large-format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high-resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production. This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration, specifically for digital cinema production.

  20. Perspectives on Sharing Models and Related Resources in Computational Biomechanics Research.

    PubMed

Erdemir, Ahmet; Hunter, Peter J; Holzapfel, Gerhard A; Loew, Leslie M; Middleton, John; Jacobs, Christopher R; Nithiarasu, Perumal; Löhner, Rainald; Wei, Guowei; Winkelstein, Beth A; Barocas, Victor H; Guilak, Farshid; Ku, Joy P; Hicks, Jennifer L; Delp, Scott L; Sacks, Michael; Weiss, Jeffrey A; Ateshian, Gerard A; Maas, Steve A; McCulloch, Andrew D; Peng, Grace C Y

    2018-02-01

The role of computational modeling for biomechanics research and related clinical care will be increasingly prominent. The biomechanics community has been developing computational models routinely for exploration of the mechanics and mechanobiology of diverse biological structures. As a result, a large array of models, data, and discipline-specific simulation software has emerged to support endeavors in computational biomechanics. Sharing computational models and related data and simulation software was first a utilitarian interest and is now a necessity. Exchange of models, in support of knowledge exchange provided by scholarly publishing, has important implications. Specifically, model sharing can facilitate assessment of reproducibility in computational biomechanics and can provide an opportunity for repurposing and reuse, and a venue for medical training. The community's desire to investigate biological and biomechanical phenomena crossing multiple systems, scales, and physical domains also motivates sharing of modeling resources, as blending of models developed by domain experts will be a required step for comprehensive simulation studies as well as the enhancement of their rigor and reproducibility. The goal of this paper is to understand current perspectives in the biomechanics community for the sharing of computational models and related resources. Opinions on opportunities, challenges, and pathways to model sharing, particularly as part of the scholarly publishing workflow, were sought. A group of journal editors and a handful of investigators active in computational biomechanics were approached to collect short opinion pieces as a part of a larger effort of the IEEE EMBS Computational Biology and the Physiome Technical Committee to address model reproducibility through publications. A synthesis of these opinion pieces indicates that the community recognizes the necessity and usefulness of model sharing. There is a strong will to facilitate model sharing, and there are corresponding initiatives by the scientific journals. Outside the publishing enterprise, infrastructure to facilitate model sharing in biomechanics exists, and simulation software developers are interested in accommodating the community's needs for sharing of modeling resources. Encouragement for the use of standardized markups, concerns related to quality assurance, acknowledgement of increased burden, and importance of stewardship of resources are noted. In the short term, it is advisable that the community builds upon recent strategies and experiments with new pathways for continued demonstration of model sharing, its promotion, and its utility. Nonetheless, the need for a long-term strategy to unify approaches in sharing computational models and related resources is acknowledged. Development of a sustainable platform supported by a culture of open model sharing will likely evolve through continued and inclusive discussions bringing all stakeholders to the table, e.g., by possibly establishing a consortium.

  1. The "SAFARI" Method of Collection Study and Cooperative Acquisition for a Multi-Library Cooperative. A Manual of Procedures.

    ERIC Educational Resources Information Center

    Sinclair, Dorothy

    This document examines the importance and difficulties in resource sharing and acquisition by libraries and introduces the procedures of the Site Appraisal for Area Resources Inventory (SAFARI) system as a method of comparative evaluation of subject collections among a group of libraries. Resource, or collection, sharing offers specific…

  2. Team Collaboration Software

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Schrock, Mitchell; Baldwin, John R.; Borden, Charles S.

    2010-01-01

The Ground Resource Allocation and Planning Environment (GRAPE 1.0) is a Web-based, collaborative team environment based on the Microsoft SharePoint platform, which provides Deep Space Network (DSN) resource planners with tools and services for sharing information and performing analysis.

  3. IVHS Institutional Issues And Case Studies: Westchester Commuter Central Case Study

    DOT National Transportation Integrated Search

    1997-01-01

    Shared resource projects are public-private arrangements that involve sharing public property such as rights-of-way and private resources such as telecommunications capacity and expertise. Typically, private telecommunications providers are granted a...

  4. The Conceptual Framework of Factors Affecting Shared Mental Model

    ERIC Educational Resources Information Center

    Lee, Miyoung; Johnson, Tristan; Lee, Youngmin; O'Connor, Debra; Khalil, Mohammed

    2004-01-01

Many researchers have paid attention to the potentiality and possibility of the shared mental model because it enables teammates to perform their job better by sharing team knowledge, skills, attitudes, dynamics and environments. Even though theoretical and experimental evidence indicates a close relationship between the shared mental model and…

  5. Climate Change Impacts on Hydrology and Water Management of the San Juan Basin

    NASA Astrophysics Data System (ADS)

    Rich, P. M.; Weintraub, L. H.; Chen, L.; Herr, J.

    2005-12-01

    Recent climatic events, including regional drought and increased storm severity, have accentuated concerns that climatic extremes may be increasing in frequency and intensity due to global climate change. As part of the ZeroNet Water-Energy Initiative, the San Juan Decision Support System includes a basin-scale modeling tool to evaluate effects of climate change on water budgets under different climate and management scenarios. The existing Watershed Analysis Risk Management Framework (WARMF) was enhanced with iterative modeling capabilities to enable construction of climate scenarios based on historical and projected data. We applied WARMF to 42,000 km2 (16,000 mi2) of the San Juan Basin (CO, NM) to assess impacts of extended drought and increased temperature on surface water balance. Simulations showed that drought and increased temperature impact water availability for all sectors (agriculture, energy, municipal, industry), and lead to increased frequency of critical shortages. Implementation of potential management alternatives such as "shortage sharing" or degraded water usage during critical years helps improve available water supply. In the face of growing concern over climate change, limited water resources, and competing demands, integrative modeling tools can enable better understanding of complex interconnected systems, and enable better decisions.

  6. Economic models for management of resources in peer-to-peer and grid computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, various economic models exist for setting the price of goods based on supply and demand and their value to the user. These include commodity markets, posted prices, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
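
    As a toy illustration of two of the pricing models named above (a posted-price sale and a first-price sealed-bid auction), the following Python sketch allocates a single resource slot; the consumer names, bids and prices are invented, and this is not the Nimrod/G broker's logic.

    # Toy sketch of two of the economic models discussed above: a posted-price
    # sale and a first-price sealed-bid auction for a single resource slot.
    # All names, prices, and bids are invented for illustration only.

    def posted_price_sale(posted_price, bids):
        """Sell to the first consumer willing to pay the posted price."""
        for consumer, bid in bids.items():
            if bid >= posted_price:
                return consumer, posted_price
        return None, 0.0

    def first_price_auction(bids):
        """Award the slot to the highest bidder, who pays their own bid."""
        winner = max(bids, key=bids.get)
        return winner, bids[winner]

    bids = {"lab-A": 0.8, "lab-B": 1.2, "lab-C": 1.0}   # $/CPU-hour offered
    print(posted_price_sale(1.1, bids))   # ('lab-B', 1.1)
    print(first_price_auction(bids))      # ('lab-B', 1.2)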

  7. Collaborative Information Retrieval Method among Personal Repositories

    NASA Astrophysics Data System (ADS)

    Kamei, Koji; Yukawa, Takashi; Yoshida, Sen; Kuwabara, Kazuhiro

In this paper, we describe a collaborative information retrieval method among personal repositories and an implementation of the method on a personal agent framework. We propose a framework for personal agents that aims to enable the sharing and exchange of information resources that are distributed unevenly among individuals. The kernel of a personal agent framework is an RDF (Resource Description Framework)-based information repository for storing, retrieving and manipulating privately collected information, such as documents the user read and/or wrote, email he/she exchanged, web pages he/she browsed, etc. The repository also collects annotations to information resources that describe relationships among information resources and records of interaction between the user and information resources. Since the information resources in a personal repository and their structure are personalized, information retrieval from other users' repositories is an important application of the personal agent. A vector space model with a personalized concept-base is employed as an information retrieval mechanism in a personal repository. Since a personalized concept-base is constructed from information resources in a personal repository, it reflects its user's knowledge and interests. On the other hand, it leads to another problem while querying other users' personal repositories; that is, simply transferring query requests does not provide desirable results. To solve this problem, we propose a query equalization scheme based on a relevance feedback method for collaborative information retrieval between personalized concept-bases. In this paper, we describe an implementation of the collaborative information retrieval method and its user interface on the personal agent framework.
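
    The paper's personalized concept-base is not reproduced here, but the two generic ingredients it builds on, vector-space retrieval by cosine similarity and a relevance-feedback update of the query, can be sketched in Python as follows (documents and terms are invented):

    # Generic sketch of vector-space retrieval plus a Rocchio-style
    # relevance-feedback update that nudges a query toward documents the user
    # marked relevant.  This is an illustration of the general technique, not
    # the paper's concept-base implementation.
    import math
    from collections import Counter

    def vectorize(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def rocchio(query, relevant, alpha=1.0, beta=0.75):
        """Move the query vector toward the centroid of relevant documents."""
        updated = Counter({t: alpha * w for t, w in query.items()})
        for doc in relevant:
            for t, w in doc.items():
                updated[t] += beta * w / len(relevant)
        return updated

    docs = [vectorize(d) for d in [
        "grid resource sharing and scheduling",
        "personal agent framework for information retrieval",
        "email archive of the user",
    ]]
    q = vectorize("information sharing")
    ranked = sorted(range(len(docs)), key=lambda i: cosine(q, docs[i]), reverse=True)
    q2 = rocchio(q, [docs[ranked[0]]])   # feed back the top-ranked document
    print(ranked, cosine(q2, docs[ranked[0]]))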

  8. Collaborative Sharing of Multidimensional Space-time Data Using HydroShare

    NASA Astrophysics Data System (ADS)

    Gan, T.; Tarboton, D. G.; Horsburgh, J. S.; Dash, P. K.; Idaszak, R.; Yi, H.; Blanton, B.

    2015-12-01

    HydroShare is a collaborative environment being developed for sharing hydrological data and models. It includes capability to upload data in many formats as resources that can be shared. The HydroShare data model for resources uses a specific format for the representation of each type of data and specifies metadata common to all resource types as well as metadata unique to specific resource types. The Network Common Data Form (NetCDF) was chosen as the format for multidimensional space-time data in HydroShare. NetCDF is widely used in hydrological and other geoscience modeling because it contains self-describing metadata and supports the creation of array-oriented datasets that may include three spatial dimensions, a time dimension and other user defined dimensions. For example, NetCDF may be used to represent precipitation or surface air temperature fields that have two dimensions in space and one dimension in time. This presentation will illustrate how NetCDF files are used in HydroShare. When a NetCDF file is loaded into HydroShare, header information is extracted using the "ncdump" utility. Python functions developed for the Django web framework on which HydroShare is based, extract science metadata present in the NetCDF file, saving the user from having to enter it. Where the file follows Climate Forecast (CF) convention and Attribute Convention for Dataset Discovery (ACDD) standards, metadata is thus automatically populated. Users also have the ability to add metadata to the resource that may not have been present in the original NetCDF file. HydroShare's metadata editing functionality then writes this science metadata back into the NetCDF file to maintain consistency between the science metadata in HydroShare and the metadata in the NetCDF file. This further helps researchers easily add metadata information following the CF and ACDD conventions. Additional data inspection and subsetting functions were developed, taking advantage of Python and command line libraries for working with NetCDF files. We describe the design and implementation of these features and illustrate how NetCDF files from a modeling application may be curated in HydroShare and thus enhance reproducibility of the associated research. We also discuss future development planned for multidimensional space-time data in HydroShare.
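
    HydroShare's own extraction code is not shown in the abstract; as a minimal sketch of the same step, the netCDF4 Python library can pull the global (CF/ACDD) attributes and the dimension/variable structure from a file, with "example.nc" used as a placeholder path.

    # Minimal sketch (not HydroShare's actual code) of extracting science
    # metadata from a NetCDF file with the netCDF4 Python library.  Global
    # attributes following the CF / ACDD conventions (e.g. title, summary,
    # keywords) can be harvested this way; "example.nc" is a placeholder path.
    from netCDF4 import Dataset

    def extract_netcdf_metadata(path):
        meta = {"global_attributes": {}, "dimensions": {}, "variables": {}}
        with Dataset(path, mode="r") as ds:
            for attr in ds.ncattrs():                     # e.g. title, summary, Conventions
                meta["global_attributes"][attr] = getattr(ds, attr)
            for name, dim in ds.dimensions.items():
                meta["dimensions"][name] = len(dim)
            for name, var in ds.variables.items():
                meta["variables"][name] = {
                    "dimensions": var.dimensions,
                    "units": getattr(var, "units", None), # per-variable CF metadata
                }
        return meta

    if __name__ == "__main__":
        print(extract_netcdf_metadata("example.nc"))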

  9. SciServer: An Online Collaborative Environment for Big Data in Research and Education

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Souter, Barbara; Lemson, Gerard; Taghizadeh-Popp, Manuchehr

    2017-01-01

For the past year, SciServer Compute (http://compute.sciserver.org) has offered access to big data resources running within server-side Docker containers. Compute has allowed thousands of researchers to bring advanced analysis to big datasets like the Sloan Digital Sky Survey and others, while keeping the analysis close to the data for better performance and easier read/write access. SciServer Compute is just one part of the SciServer system being developed at Johns Hopkins University, which provides an easy-to-use collaborative research environment for astronomy and many other sciences. SciServer enables these collaborative research strategies using Jupyter notebooks, in which users can write their own Python and R scripts and execute them on the same server as the data. We have written special-purpose libraries for querying, reading, and writing data. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. SciServer Compute’s virtual research environment has grown with the addition of task management and access control functions, allowing collaborators to share both data and analysis scripts securely across the world. These features also open up new possibilities for education, allowing instructors to share datasets with students and students to write analysis scripts to share with their instructors. We are leveraging these features into a new system called “SciServer Courseware,” which will allow instructors to share assignments with their students, allowing students to engage with big data in new ways. SciServer has also expanded to include more datasets beyond the Sloan Digital Sky Survey. A part of that growth has been the addition of the SkyQuery component, which allows for simple, fast cross-matching between very large astronomical datasets. Demos, documentation, and more information about all these resources can be found at www.sciserver.org.
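
    SkyQuery's server-side cross-match is not reproduced here; a small-scale illustration of the same idea, using astropy's catalog matching utilities and made-up coordinates, might look like this:

    # Illustrative only: SciServer/SkyQuery's server-side cross-match is not
    # reproduced here.  This sketch shows the same idea on a small scale with
    # astropy's catalog matching utilities; the coordinates are made up.
    import astropy.units as u
    from astropy.coordinates import SkyCoord

    # Two toy catalogs (e.g. objects from two different surveys).
    cat1 = SkyCoord(ra=[10.01, 45.30, 120.70] * u.deg, dec=[-5.02, 22.10, 33.40] * u.deg)
    cat2 = SkyCoord(ra=[10.02, 45.31, 200.00] * u.deg, dec=[-5.01, 22.11, -10.00] * u.deg)

    # For each object in cat1, find its nearest neighbour in cat2.
    idx, sep2d, _ = cat1.match_to_catalog_sky(cat2)

    # Keep only matches closer than 1 arcminute.
    good = sep2d < 1 * u.arcmin
    for i, (j, sep, ok) in enumerate(zip(idx, sep2d.to(u.arcsec), good)):
        if ok:
            print(f"cat1[{i}] matches cat2[{j}] at {sep.value:.1f} arcsec")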

  10. A cognitive gateway-based spectrum sharing method in downlink round robin scheduling of LTE system

    NASA Astrophysics Data System (ADS)

    Deng, Hongyu; Wu, Cheng; Wang, Yiming

    2017-07-01

A key technique in LTE is the efficient allocation of radio spectrum resources. The traditional Round Robin (RR) scheduling scheme may leave many residual resources unallocated. When the number of users in the current transmission time interval (TTI) does not evenly divide the number of resource block groups (RBGs), and this situation persists for a long time, spectrum utilization is greatly decreased. In this paper, a novel spectrum allocation scheme based on a cognitive gateway (CG) is proposed, in which LTE spectrum utilization and the CG’s throughput are greatly increased by allocating idle resource blocks in the shared TTI of the LTE system to the CG. Our simulation results show that this spectrum resource sharing method can improve LTE spectral utilization and increase the CG’s throughput as well as network use time.
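
    A simplified sketch of the allocation idea (not the paper's scheduler): assign resource block groups round-robin to the users in a TTI and hand any residual groups to the cognitive gateway.

    # Simplified sketch (not the paper's scheduler) of the idea described above:
    # round-robin assignment of resource block groups (RBGs) to LTE users in a
    # TTI, with any RBGs left over handed to the cognitive gateway (CG).
    def schedule_tti(n_rbgs, users, rr_pointer):
        """Return {rbg_index: owner} for one TTI and the advanced RR pointer."""
        allocation = {}
        if users:
            # Give each user an equal whole share, starting from the RR pointer.
            per_user = n_rbgs // len(users)
            rbg = 0
            for k in range(len(users)):
                user = users[(rr_pointer + k) % len(users)]
                for _ in range(per_user):
                    allocation[rbg] = user
                    rbg += 1
            rr_pointer = (rr_pointer + 1) % len(users)
        # Residual RBGs (n_rbgs not divisible by the user count) go to the CG.
        for rbg in range(len(allocation), n_rbgs):
            allocation[rbg] = "CG"
        return allocation, rr_pointer

    alloc, ptr = schedule_tti(n_rbgs=17, users=["UE1", "UE2", "UE3"], rr_pointer=0)
    print(alloc)   # 5 RBGs each for UE1-UE3, the remaining 2 RBGs for the CG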

  11. Interdisciplinary research and education in the Vienna Doctoral Programme on Water Resource Systems: a framework for evaluation

    NASA Astrophysics Data System (ADS)

    Bloeschl, G.; Carr, G.; Loucks, D. P.

    2017-12-01

Greater understanding of how interdisciplinary research and education evolves is critical for identifying and implementing appropriate programme management strategies. We propose a program evaluation framework that is based on social learning processes (individual learning, interdisciplinary research practices, and interaction between researchers with different backgrounds); social capital outcomes (ability to interact, interpersonal connectivity, and shared understanding); and knowledge and human capital outcomes (new knowledge that integrates multiple research fields). The framework is tested on an established case-study doctoral program: the Vienna Doctoral Programme on Water Resource Systems. Data are collected via mixed qualitative/quantitative methods that include semi-structured interviews, publication co-author analysis, analysis of research proposals, categorisation of the interdisciplinarity of publications and graduate analysis. Through the evaluation and analysis, several interesting findings about how interdisciplinary research evolves and can be supported are identified. Firstly, different aspects of individual learning seem to contribute to a researcher's ability to interact with researchers from other research fields and work collaboratively. These include learning new material from different research fields, learning how to learn new material and learning how to integrate different material. Secondly, shared interdisciplinary research practices can be identified that may be common to other programs and support interaction and shared understanding between different researchers. They include clarification and questioning, harnessing differences and setting defensible research boundaries. Thirdly, intensive interaction between researchers from different backgrounds supports connectivity between the researchers, further enabling cross-disciplinary collaborative work. The case study data suggest that social learning processes and social capital outcomes precede new interdisciplinary research findings and are therefore a critical aspect to consider in interdisciplinary program management.

  12. Freeing data through The Polar Information Commons

    NASA Astrophysics Data System (ADS)

    de Bruin, Taco; Chen, Robert; Parsons, Mark; Carlson, David

    2010-05-01

    The polar regions are changing rapidly with dramatic global effect. Wise management of resources, improved decision support, and effective international cooperation on resource and geopolitical issues require deeper understanding and better prediction of these changes. Unfortunately, polar data and information remain scattered, scarce, and sporadic. Inspired by the Antarctic Treaty of 1959 that established the Antarctic as a global commons to be used only for peaceful purposes and scientific research, we assert that data and information about the polar regions are themselves "public goods" that should be shared ethically and with minimal constraint. We therefore envision the Polar Information Commons (PIC) as an open, virtual repository for vital scientific data and information that would provide a shared, community-based cyber-infrastructure fostering innovation, improving scientific efficiency, and encouraging participation in polar research, education, planning, and management. The PIC will build on the legacy of the International Polar Year (IPY), providing a long-term framework for access to and preservation of both existing and future data and information about the polar regions. Rapid change demands rapid data access. The PIC system will enable scientists to quickly expose their data to the world and share them through open protocols on the Internet. A PIC digital label will alert users and data centers to new polar data and ensure that usage rights are clear. The PIC will utilize the Science Commons Protocol for Implementing Open Access Data, which promotes open data access through the public domain coupled with community norms of practice to ensure use of data in a fair and equitable manner. A set of PIC norms is currently being developed in consultation with key polar data organizations and other stakeholders. We welcome inputs from the broad science community as we further develop and refine the PIC approach and move ahead with implementation.

  13. Freeing data through The Polar Information Commons

    NASA Astrophysics Data System (ADS)

    de Bruin, T.; Chen, R. S.; Parsons, M. A.; Carlson, D. J.

    2009-12-01

    The polar regions are changing rapidly with dramatic global effect. Wise management of resources, improved decision support, and effective international cooperation on resource and geopolitical issues require deeper understanding and better prediction of these changes. Unfortunately, polar data and information remain scattered, scarce, and sporadic. Inspired by the Antarctic Treaty of 1959 that established the Antarctic as a global commons to be used only for peaceful purposes and scientific research, we assert that data and information about the polar regions are themselves “public goods” that should be shared ethically and with minimal constraint. We therefore envision the Polar Information Commons (PIC) as an open, virtual repository for vital scientific data and information that would provide a shared, community-based cyber-infrastructure fostering innovation, improving scientific efficiency, and encouraging participation in polar research, education, planning, and management. The PIC will build on the legacy of the International Polar Year (IPY), providing a long-term framework for access to and preservation of both existing and future data and information about the polar regions. Rapid change demands rapid data access. The PIC system will enable scientists to quickly expose their data to the world and share them through open protocols on the Internet. A PIC digital label will alert users and data centers to new polar data and ensure that usage rights are clear. The PIC will utilize the Science Commons Protocol for Implementing Open Access Data, which promotes open data access through the public domain coupled with community norms of practice to ensure use of data in a fair and equitable manner. A set of PIC norms is currently being developed in consultation with key polar data organizations and other stakeholders. We welcome inputs from the broad science community as we further develop and refine the PIC approach and move ahead with implementation.

  14. Freeing data through The Polar Information Commons

    NASA Astrophysics Data System (ADS)

    de Bruin, T.; Chen, R. S.; Parsons, M. A.; Carlson, D. J.; Cass, K.; Finney, K.; Wilbanks, J.; Jochum, K.

    2010-12-01

    The polar regions are changing rapidly with dramatic global effect. Wise management of resources, improved decision support, and effective international cooperation on resource and geopolitical issues require deeper understanding and better prediction of these changes. Unfortunately, polar data and information remain scattered, scarce, and sporadic. Inspired by the Antarctic Treaty of 1959 that established the Antarctic as a global commons to be used only for peaceful purposes and scientific research, we assert that data and information about the polar regions are themselves “public goods” that should be shared ethically and with minimal constraint. ICSU’s Committee on Data (CODATA) therefore started the Polar Information Commons (PIC) as an open, virtual repository for vital scientific data and information. The PIC provides a shared, community-based cyber-infrastructure fostering innovation, improving scientific efficiency, and encouraging participation in polar research, education, planning, and management. The PIC builds on the legacy of the International Polar Year (IPY), providing a long-term framework for access to and preservation of both existing and future data and information about the polar regions. Rapid change demands rapid data access. The PIC system enables scientists to quickly expose their data to the world and share them through open protocols on the Internet. A PIC digital label will alert users and data centers to new polar data and ensure that usage rights are clear. The PIC utilizes the Science Commons Protocol for Implementing Open Access Data, which promotes open data access through the public domain coupled with community norms of practice to ensure use of data in a fair and equitable manner. A set of PIC norms has been developed in consultation with key polar data organizations and other stakeholders. We welcome inputs from the broad science community as we further develop and refine the PIC approach and move ahead with implementation.

  15. A scoping literature review of collaboration between primary care and public health.

    PubMed

Martin-Misener, Ruth; Valaitis, Ruta; Wong, Sabrina T; Macdonald, Marjorie; Meagher-Stewart, Donna; Kaczorowski, Janusz; O'Mara, Linda; Savage, Rachel; Austin, Patricia

    2012-10-01

    The purpose of this scoping literature review was to determine what is known about: 1) structures and processes required to build successful collaborations between primary care (PC) and public health (PH); 2) outcomes of such collaborations; and 3) markers of their success. Collaboration between PC and PH is believed to enable more effective individual and population services than what might be achieved by either alone. The study followed established methods for a scoping literature review and was guided by a framework that identifies systemic, organizational and interactional determinants for collaboration. The review was restricted to articles published between 1988 and 2008. Published quantitative and qualitative primary studies, evaluation research, systematic and other types of reviews, as well as descriptive accounts without an explicit research design, were included if they addressed either the structures or processes to build collaboration or the outcomes or markers of such collaboration, and were published in English. The combined search strategy yielded 6125 articles of which 114 were included. Systemic-level factors influencing collaboration included: government involvement, policy and fit with local needs; funding and resource factors, power and control issues; and education and training. Lack of a common agenda; knowledge and resource limitations; leadership, management and accountability issues; geographic proximity of partners; and shared protocols, tools and information sharing were influential at the organizational level. Interpersonal factors included having a shared purpose; philosophy and beliefs; clear roles and positive relationships; and effective communication and decision-making strategies. Reported benefits of collaboration included: improved chronic disease management; communicable disease control; and maternal child health. More research is needed to explore the conditions and contexts in which collaboration between PC and PH makes most sense and potential gains outweigh the associated risks and costs.

  16. DAsHER CD: Developing a Data-Oriented Human-Centric Enterprise Architecture for EarthCube

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Yu, M.; Sun, M.; Qin, H.; Robinson, E.

    2015-12-01

One of the biggest challenges facing Earth scientists is discovering, accessing, and sharing resources in the desired fashion. EarthCube aims to enable geoscientists to address this challenge by fostering community-governed efforts that develop a common cyberinfrastructure for collecting, accessing, analyzing, sharing and visualizing all forms of data and related resources, through the use of advanced technological and computational capabilities. Here we design an Enterprise Architecture (EA) for EarthCube to facilitate knowledge management, communication and human collaboration in pursuit of unprecedented data sharing across the geosciences. The design results will provide EarthCube with a reference framework for developing a geoscience cyberinfrastructure built collaboratively by different stakeholders, and for identifying topics of high interest to the community. The development of this EarthCube EA framework leverages popular frameworks such as Zachman, Gartner, DoDAF, and FEAF. The science driver of this design is the needs of the EarthCube community, including the analyzed user requirements from EarthCube End User Workshop reports and EarthCube working group roadmaps, and feedback and comments from scientists obtained through workshops. The final product of this Enterprise Architecture is a four-volume reference document: 1) Volume one is this document and comprises an executive summary of the EarthCube architecture, serving as an overview in the initial phases of architecture development; 2) Volume two is the major body of the design product, outlining all the architectural design components or viewpoints; 3) Volume three provides a taxonomy of the EarthCube enterprise augmented with semantic relations; 4) Volume four describes an example of utilizing this architecture for a geoscience project.

  17. HydroShare for iUTAH: Collaborative Publication, Interoperability, and Reuse of Hydrologic Data and Models for a Large, Interdisciplinary Water Research Project

    NASA Astrophysics Data System (ADS)

    Horsburgh, J. S.; Jones, A. S.

    2016-12-01

    Data and models used within the hydrologic science community are diverse. New research data and model repositories have succeeded in making data and models more accessible, but have been, in most cases, limited to particular types or classes of data or models and also lack the type of collaborative, and iterative functionality needed to enable shared data collection and modeling workflows. File sharing systems currently used within many scientific communities for private sharing of preliminary and intermediate data and modeling products do not support collaborative data capture, description, visualization, and annotation. More recently, hydrologic datasets and models have been cast as "social objects" that can be published, collaborated around, annotated, discovered, and accessed. Yet it can be difficult using existing software tools to achieve the kind of collaborative workflows and data/model reuse that many envision. HydroShare is a new, web-based system for sharing hydrologic data and models with specific functionality aimed at making collaboration easier and achieving new levels of interactive functionality and interoperability. Within HydroShare, we have developed new functionality for creating datasets, describing them with metadata, and sharing them with collaborators. HydroShare is enabled by a generic data model and content packaging scheme that supports describing and sharing diverse hydrologic datasets and models. Interoperability among the diverse types of data and models used by hydrologic scientists is achieved through the use of consistent storage, management, sharing, publication, and annotation within HydroShare. In this presentation, we highlight and demonstrate how the flexibility of HydroShare's data model and packaging scheme, HydroShare's access control and sharing functionality, and versioning and publication capabilities have enabled the sharing and publication of research datasets for a large, interdisciplinary water research project called iUTAH (innovative Urban Transitions and Aridregion Hydro-sustainability). We discuss the experiences of iUTAH researchers now using HydroShare to collaboratively create, curate, and publish datasets and models in a way that encourages collaboration, promotes reuse, and meets funding agency requirements.

  18. New Catalog of Resources Enables Paleogeosciences Research

    NASA Astrophysics Data System (ADS)

    Lingo, R. C.; Horlick, K. A.; Anderson, D. M.

    2014-12-01

    The 21st century promises a new era for scientists of all disciplines, the age where cyber infrastructure enables research and education and fuels discovery. EarthCube is a working community of over 2,500 scientists and students of many Earth Science disciplines who are looking to build bridges between disciplines. The EarthCube initiative will create a digital infrastructure that connects databases, software, and repositories. A catalog of resources (databases, software, repositories) has been produced by the Research Coordination Network for Paleogeosciences to improve the discoverability of resources. The Catalog is currently made available within the larger-scope CINERGI geosciences portal (http://hydro10.sdsc.edu/geoportal/catalog/main/home.page). Other distribution points and web services are planned, using linked data, content services for the web, and XML descriptions that can be harvested using metadata protocols. The databases provide searchable interfaces to find data sets that would otherwise remain dark data, hidden in drawers and on personal computers. The software will be described in catalog entries so just one click will lead users to methods and analytical tools that many geoscientists were unaware of. The repositories listed in the Paleogeosciences Catalog contain physical samples found all across the globe, from natural history museums to the basements of university buildings. EarthCube has over 250 databases, 300 software systems, and 200 repositories which will grow in the coming year. When completed, geoscientists across the world will be connected into a productive workflow for managing, sharing, and exploring geoscience data and information that expedites collaboration and innovation within the paleogeosciences, potentially bringing about new interdisciplinary discoveries.

  19. Negotiation Support Systems for Facilitating International Water Conflicts

    NASA Astrophysics Data System (ADS)

    Mirchi, A.; Madani, K.; Rouhani, O. M.

    2011-12-01

Two decades after the collapse of the Soviet Union, the Caspian Sea, the largest inland body of water on Earth, continues to be the subject of one of the world's most insurmountable disputes, involving Iran, Russia, and the new sovereign states of Azerbaijan, Kazakhstan, and Turkmenistan. The conflict is over the legal status of this multinational water body, which supplies almost all of the world's black caviar, and holds about 10% and 4% of the world's oil and gas reserves, respectively. Typically, proposed division methods for sharing the Caspian Sea and its valuable resources focus either on the areal shares or on the oil and gas shares of the parties. As such, total gains of littoral states under different division methods have remained unclear. In this study, we have developed the Caspian Sea Negotiation Support System (NSS) to delineate optimal boundaries for sharing the sea. The Caspian Sea NSS facilitates simultaneous consideration of the countries' areal and resource shares from the sea under different sharing methods. The developed model is run under different division scenarios to provide insights into the sensitivity of the countries' gains and locations of nautical boundaries to the proposed division rules and the economic values of the Caspian Sea resources. The results are highly sensitive to the proposed division rules, and there is an indirect relationship between the allocated area and resource shares. The main policy implication of the study is that explicit quantification of the countries' resource and areal gains under any suggested legal regime for governing the Caspian Sea is a precursor to the success of the negotiations.
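
    The negotiation support system itself is not given in the abstract; a toy sketch of the kind of book-keeping it implies, combining a country's areal share and its oil/gas share into one gain figure under a division scenario, is shown below with invented placeholder numbers.

    # Toy sketch of the quantification step described above: combine a country's
    # areal share with its oil/gas share into one monetary gain figure under a
    # given division scenario.  All shares and unit values are invented
    # placeholders, not the study's data.
    def total_gain(area_share, resource_share, sea_area_value, resource_value):
        """Gain = value of allocated area + value of allocated oil/gas reserves."""
        return area_share * sea_area_value + resource_share * resource_value

    scenario = {                     # hypothetical (area_share, resource_share) pairs
        "Country A": (0.20, 0.15),
        "Country B": (0.18, 0.30),
        "Country C": (0.22, 0.10),
    }
    SEA_AREA_VALUE = 100.0           # placeholder value of the whole sea area
    RESOURCE_VALUE = 500.0           # placeholder value of total oil/gas reserves

    for country, (a, r) in scenario.items():
        print(country, round(total_gain(a, r, SEA_AREA_VALUE, RESOURCE_VALUE), 1))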

  20. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    PubMed

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them.
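
    The paper predates current cloud SDKs and does not prescribe client code; as a hedged sketch of the provisioning step it describes, a Cloud BioLinux-style machine image could be launched on Amazon EC2 with boto3, where the AMI ID, key pair and instance type are placeholders.

    # Sketch of launching a Cloud BioLinux-style virtual machine on Amazon EC2
    # with boto3.  The AMI ID, key pair, and instance type are placeholders;
    # the paper itself does not prescribe this client code.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder Cloud BioLinux image ID
        InstanceType="m5.xlarge",          # sized for bioinformatics workloads
        KeyName="my-keypair",              # placeholder SSH key pair name
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched", instance_id)

    # Wait until the instance is running, then fetch its public DNS name for remote access.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    described = ec2.describe_instances(InstanceIds=[instance_id])
    print(described["Reservations"][0]["Instances"][0].get("PublicDnsName"))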

  1. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    PubMed Central

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them. PMID:22429538

  2. Knowledge and experience sharing practices among health professionals in hospitals under the Addis Ababa health bureau, Ethiopia.

    PubMed

    Asemahagn, Mulusew Andualem

    2014-09-24

Health professionals need updated health information from credible sources to improve their knowledge and provide evidence-based health care services. Various types of medical errors have occurred in resource-limited countries because of poor knowledge and experience sharing practices among health professionals. The aim of this study was to assess knowledge-sharing practices and determinants among health professionals in Addis Ababa, Ethiopia. An institution-based cross-sectional study was conducted among 320 randomly selected health professionals from August 12-25, 2012. A pretested, self-administered questionnaire was used to collect data about different variables. Data entry and analysis were done using Epi-Info version 3.5.4 and SPSS version 20, respectively. Descriptive statistics and multivariate regression analyses were applied to describe study objectives and identify the determinants of knowledge sharing practices, respectively. Odds ratios with 95% CIs were used to describe the strength of association between the study and outcome variables. Most of the respondents acknowledged the need for knowledge and experience sharing practices in their routine activities. Nearly half, 152 (49.0%) of the study participants had knowledge and experience sharing practices. A majority, 219 (70.0%) of the respondents showed a willingness to share their knowledge and experiences. Trust in others' knowledge, motivation, supportive leadership, job satisfaction, awareness, willingness and resource allocation are the determinants of knowledge and experience sharing practices. Supportive leadership, resources, and trust in others' knowledge can enhance knowledge and experience sharing (OR = 3.12, 95% CI = [1.89-5.78]; OR = 2.3, 95% CI = [1.61-4.21]; and OR = 2.78, 95% CI = [1.66-4.64], respectively) compared with their counterparts. Even though most of the respondents knew the importance of knowledge and experience sharing practices, only a limited number of respondents practiced it. Individual, organizational and resource-related issues are the major determinants of low knowledge sharing practices. Improving management, proper resource allocation, motivating staff, and accessing health information sources are important interventions to improve the problem in the study area.

  3. The designing and implementation of PE teaching information resource database based on broadband network

    NASA Astrophysics Data System (ADS)

    Wang, Jian

    2017-01-01

In order to change the traditional PE teaching mode and realize the interconnection, interworking and sharing of PE teaching resources, a distance PE teaching platform based on a broadband network is designed and a PE teaching information resource database is set up. The database design uses Windows NT 4/2000 Server as the operating system platform and Microsoft SQL Server 7.0 as the RDBMS, with NAS technology for data storage and streaming technology for video services. Analysis of the system design and implementation shows that the dynamic PE teaching information resource sharing platform, built on Web Services, supports loosely coupled collaboration and dynamic, active integration, and offers good integration, openness and encapsulation. Together, the Web Service-based distance PE teaching platform and the PE teaching information resource database design scheme can effectively realize the interconnection, interworking and sharing of PE teaching resources and adapt to the demands of informatization in PE teaching.

  4. Sharing and reusing multimedia multilingual educational resources in medicine.

    PubMed

    Zdrahal, Zdenek; Knoth, Petr; Mulholland, Paul; Collins, Trevor

    2013-01-01

    The paper describes the Eurogene portal for sharing and reusing multilingual multimedia educational resources in human genetics. The content is annotated using concepts from two ontologies and a topic hierarchy. The ontology annotations are used to guide search and to calculate semantically similar content. Educational resources can be aggregated into learning packages. The system has been in routine use since 2009.
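
    As a rough illustration of how ontology annotations can be used to rank semantically similar content (the portal's actual similarity measure is not specified in the abstract), the sketch below scores resources by the overlap of their annotated concept sets; the concept labels and resource names are invented.

    ```python
    def jaccard(a: set[str], b: set[str]) -> float:
        """Overlap of two concept-annotation sets (0 = disjoint, 1 = identical)."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    # Hypothetical resources annotated with ontology concept IDs
    annotations = {
        "lecture_cystic_fibrosis": {"GeneticDisorder", "CFTR", "AutosomalRecessive"},
        "slides_mendelian_basics": {"AutosomalRecessive", "Inheritance", "Pedigree"},
        "video_gene_therapy":      {"CFTR", "GeneTherapy", "Vector"},
    }

    query = annotations["lecture_cystic_fibrosis"]
    ranked = sorted(
        ((name, jaccard(query, concepts)) for name, concepts in annotations.items()
         if name != "lecture_cystic_fibrosis"),
        key=lambda pair: pair[1], reverse=True,
    )
    for name, score in ranked:
        print(f"{name}: {score:.2f}")
    ```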

  5. Low Latency DESDynI Data Products for Disaster Response, Resource Management and Other Applications

    NASA Technical Reports Server (NTRS)

    Doubleday, Joshua R.; Chien, Steve A.; Lou, Yunling

    2011-01-01

    We are developing onboard processor technology targeted at the L-band SAR instrument of the planned DESDynI mission to enable the formation of SAR images onboard, opening possibilities for near-real-time data products that augment the full data streams. Several image processing and interpretation techniques are being explored as possible direct-broadcast products for agencies that need low-latency data and are responsible for disaster mitigation and assessment, resource management, agricultural development, shipping, etc. Data collected with UAVSAR (L-band) serve as a surrogate for the future DESDynI instrument. We have explored surface water extent as a tool for flood response, and disturbance images based on changes in polarimetric backscatter between repeat passes, potentially useful for detecting structural collapse (earthquakes), mud-, land- and debris-slides, etc. We have also explored building vegetation and snow/ice classifiers via support vector machines using quad-pol backscatter, cross-pol phase, and a number of derivatives (radar vegetation index, dielectric estimates, etc.). We share our qualitative and quantitative results thus far.
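
    One of the derivatives mentioned above, the radar vegetation index, is commonly computed from quad-pol backscatter as RVI = 8*sigma_HV / (sigma_HH + sigma_VV + 2*sigma_HV). The sketch below (with made-up backscatter values, and not the authors' actual processing chain) shows that computation.

    ```python
    def radar_vegetation_index(sigma_hh: float, sigma_vv: float, sigma_hv: float) -> float:
        """Commonly used quad-pol RVI (backscatter in linear power units, not dB)."""
        return 8.0 * sigma_hv / (sigma_hh + sigma_vv + 2.0 * sigma_hv)

    def db_to_linear(db: float) -> float:
        return 10.0 ** (db / 10.0)

    # Hypothetical pixel values in dB (illustrative only)
    hh, vv, hv = db_to_linear(-7.0), db_to_linear(-8.5), db_to_linear(-15.0)
    print(f"RVI = {radar_vegetation_index(hh, vv, hv):.2f}")  # near 0 for bare surfaces, near 1 for dense vegetation
    ```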

  6. A modeling paradigm for interdisciplinary water resources modeling: Simple Script Wrappers (SSW)

    NASA Astrophysics Data System (ADS)

    Steward, David R.; Bulatewicz, Tom; Aistrup, Joseph A.; Andresen, Daniel; Bernard, Eric A.; Kulcsar, Laszlo; Peterson, Jeffrey M.; Staggenborg, Scott A.; Welch, Stephen M.

    2014-05-01

    Holistic understanding of a water resources system requires tools capable of model integration. This team has developed an adaptation of the OpenMI (Open Modelling Interface) that allows easy interaction with the data passed between models. Capabilities have been developed to allow programs written in common languages such as MATLAB, Python and Scilab to share their data with other programs and accept other programs' data. We call this interface the Simple Script Wrapper (SSW). An implementation of SSW is shown that integrates groundwater, economic, and agricultural models in the High Plains region of Kansas. Output from these models illustrates the interdisciplinary discovery facilitated through the use of SSW-implemented models. Reference: Bulatewicz, T., A. Allen, J.M. Peterson, S. Staggenborg, S.M. Welch, and D.R. Steward, The Simple Script Wrapper for OpenMI: Enabling interdisciplinary modeling studies, Environmental Modelling & Software, 39, 283-294, 2013. http://dx.doi.org/10.1016/j.envsoft.2012.07.006 http://code.google.com/p/simple-script-wrapper/
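
    The coupling pattern that SSW wraps can be illustrated, in very simplified form, as components exposing initialize/update/finalize hooks that a driver loop calls while exchanging values between models at each time step. The class names, method names, and toy equations below are invented for exposition and are not SSW's or OpenMI's actual API.

    ```python
    class Component:
        """Minimal stand-in for an OpenMI-style linkable model component (not SSW's real API)."""
        def initialize(self): ...
        def update(self, t): ...
        def finalize(self): ...

    class GroundwaterModel(Component):
        def initialize(self):
            self.head = 100.0      # water-table elevation (arbitrary units)
            self.pumping = 0.0
        def update(self, t):
            self.head -= 0.01 * self.pumping            # toy drawdown response to pumping

    class EconomicModel(Component):
        def initialize(self):
            self.head = 100.0
            self.pumping = 0.0
        def update(self, t):
            self.pumping = max(0.0, 2.0 * (self.head - 50.0))   # toy pumping-demand rule

    gw, econ = GroundwaterModel(), EconomicModel()
    gw.initialize(); econ.initialize()
    for t in range(5):                    # driver loop exchanging values each time step
        econ.head = gw.head               # groundwater -> economics
        econ.update(t)
        gw.pumping = econ.pumping         # economics -> groundwater
        gw.update(t)
    gw.finalize(); econ.finalize()
    print(f"final head: {gw.head:.1f}")
    ```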

  7. Examining the potential exploitation of UNOS policies.

    PubMed

    Zink, Sheldon; Wertlieb, Stacey; Catalano, John; Marwin, Victor

    2005-01-01

    The United Network for Organ Sharing (UNOS) waiting list was designed as a just and equitable system through which the limited number of organs is allocated to the millions of Americans in need of a transplant. People have trusted the system because of the belief that everyone on the list has an equal opportunity to receive an organ and also that allocation is blind to matters of financial standing, celebrity or political power. Recent events have revealed that certain practices and policies have the potential to be exploited. The policies addressed in this paper enable those on the list with the proper resources to gain an advantage over other less fortunate members, creating a system that benefits not the individual most in medical need, but the one with the best resources. These policies are not only unethical but threaten the balance and success of the entire UNOS system. This paper proposes one possible solution, which seeks to balance the concepts of justice and utility.

  8. Dynamics Of Human Motion The Case Study of an Examination Hall

    NASA Astrophysics Data System (ADS)

    Ogunjo, Samuel; Ajayi, Oluwaseyi; Fuwape, Ibiyinka; Dansu, Emmanuel

    Human behaviour is difficult to characterize and generalize due to its complex nature. Advances in mathematical modelling have enabled human systems such as love interactions, alcohol abuse and the admission problem to be described using models. This study investigates one such problem: the dynamics of human motion in an examination hall with limited computer systems, such that students write their examination in batches. The examination is characterized by the time (t) allocated to each student and the difficulty level (dl) associated with the examination. A stochastic model based on the difficulty level of the examination was developed for the prediction of students' motion around the examination hall. A good agreement was obtained between theoretical predictions and numerical simulation. The results obtained will help in better planning of examination sessions to maximize available resources. Furthermore, the results obtained in this research can be extended to other areas, such as banking halls and customer service points, where available resources are shared among many users.
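
    The paper's actual model is not reproduced in the abstract; as a rough illustration of a difficulty-dependent stochastic batch model, the sketch below simulates students whose completion times grow with the difficulty level and reports how many batches a session needs (all parameters are invented).

    ```python
    import random

    def simulate_exam(n_students=120, n_computers=40, t_alloc=60.0, dl=0.6, seed=1):
        """Toy batch model: completion time grows with difficulty level dl in [0, 1]."""
        random.seed(seed)
        remaining, batches, total_time = n_students, 0, 0.0
        while remaining > 0:
            batch = min(n_computers, remaining)
            # each student finishes early or uses the full allocation, depending on dl
            finish_times = [min(t_alloc, random.expovariate(1.0 / (t_alloc * (0.4 + 0.6 * dl))))
                            for _ in range(batch)]
            total_time += max(finish_times)   # next batch starts when the slowest student leaves
            remaining -= batch
            batches += 1
        return batches, total_time

    for dl in (0.2, 0.5, 0.9):
        batches, total = simulate_exam(dl=dl)
        print(f"dl={dl}: {batches} batches, total hall time = {total:.0f} min")
    ```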

  9. Application of service oriented architecture for sensors and actuators in district heating substations.

    PubMed

    Gustafsson, Jonas; Kyusakov, Rumen; Mäkitaavola, Henrik; Delsing, Jerker

    2014-08-21

    Hardwired sensor installations using proprietary protocols found in today's district heating substations limit the potential usability of the sensors in and around the substations. If sensor resources can be shared and re-used in a variety of applications, the cost of sensors and installation can be reduced, and their functionality and operability can be increased. In this paper, we present a new concept of district heating substation control and monitoring, where a service oriented architecture (SOA) is deployed in a wireless sensor network (WSN), which is integrated with the substation. IP-networking is used exclusively from sensor to server; hence, no middleware is needed for Internet integration. Further, by enabling thousands of sensors with SOA capabilities, a System of Systems approach can be applied. The results of this paper show that it is possible to utilize SOA solutions with heavily resource-constrained embedded devices in contexts where the real-time constraints are limited, such as in a district heating substation.
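
    The "IP from sensor to server, no middleware" idea can be sketched with a sensor node that answers requests itself over plain HTTP. This is only an illustration using the Python standard library, not the paper's implementation (which targets constrained embedded devices and would more likely use a compact, CoAP-style service stack); the endpoint path and readings are invented.

    ```python
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SENSOR_STATE = {"supply_temp_c": 78.4, "return_temp_c": 42.1, "flow_l_per_s": 1.9}

    class SensorService(BaseHTTPRequestHandler):
        """Each sensor node answers requests itself -- no middleware between sensor and server."""
        def do_GET(self):
            if self.path == "/readings":
                body = json.dumps(SENSOR_STATE).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Any authorized application on the IP network can re-use this sensor resource.
        HTTPServer(("0.0.0.0", 8080), SensorService).serve_forever()
    ```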

  10. Assuring the Quality of Agricultural Learning Repositories: Issues for the Learning Object Metadata Creation Process of the CGIAR

    NASA Astrophysics Data System (ADS)

    Zschocke, Thomas; Beniest, Jan

    The Consultative Group on International Agricultural Research (CGIAR) has established a digital repository to share its teaching and learning resources along with descriptive educational information based on the IEEE Learning Object Metadata (LOM) standard. As a core component of any digital repository, quality metadata are critical not only for enabling users to find the resources they require more easily, but also for the operation and interoperability of the repository itself. Studies show that repositories have difficulty obtaining good-quality metadata from their contributors, especially when the process involves many different stakeholders, as is the case with the CGIAR as an international organization. To address this issue the CGIAR began investigating the Open ECBCheck as well as the ISO/IEC 19796-1 standard to establish quality protocols for its training. The paper highlights the implications and challenges posed by strengthening the metadata creation workflow for disseminating learning objects of the CGIAR.

  11. [Online information service: the library support for evidence-based practice].

    PubMed

    Markulin, Helena; Petrak, Jelka

    2014-01-01

    It frequently happens that physicians do not have adequate skills or enough time to search for and evaluate the evidence needed in their everyday practice. Medical librarians can serve as mediators, enabling physicians to utilize the potential offered by contemporary evidence-based medicine. The Central Medical Library (CML) at the University of Zagreb School of Medicine designed a web-based information service aimed at promoting evidence-based practice in the Croatian medical community. Users can ask for help in finding information on their clinical problems. The responsible librarian analyses the problem, searches information resources and evaluates the evidence. The answer is returned to the user by e-mail. In the 2008-2012 period, 166 questions from 12 clinical fields were received, and most of them (36.1%) came from internal medicine doctors. The share of treatment-related questions was 70.5%. In a setting of underdeveloped ICT infrastructure and inadequate availability of EBM resources, such an information service can help in the transfer of scientific evidence into everyday clinical practice.

  12. Application of Service Oriented Architecture for Sensors and Actuators in District Heating Substations

    PubMed Central

    Gustafsson, Jonas; Kyusakov, Rumen; Mäkitaavola, Henrik; Delsing, Jerker

    2014-01-01

    Hardwired sensor installations using proprietary protocols found in today's district heating substations limit the potential usability of the sensors in and around the substations. If sensor resources can be shared and re-used in a variety of applications, the cost of sensors and installation can be reduced, and their functionality and operability can be increased. In this paper, we present a new concept of district heating substation control and monitoring, where a service oriented architecture (SOA) is deployed in a wireless sensor network (WSN), which is integrated with the substation. IP-networking is used exclusively from sensor to server; hence, no middleware is needed for Internet integration. Further, by enabling thousands of sensors with SOA capabilities, a System of Systems approach can be applied. The results of this paper show that it is possible to utilize SOA solutions with heavily resource-constrained embedded devices in contexts where the real-time constraints are limited, such as in a district heating substation. PMID:25196165

  13. An Innovative Infrastructure with a Universal Geo-Spatiotemporal Data Representation Supporting Cost-Effective Integration of Diverse Earth Science Data

    NASA Technical Reports Server (NTRS)

    Rilee, Michael Lee; Kuo, Kwo-Sen

    2017-01-01

    The SpatioTemporal Adaptive Resolution Encoding (STARE) is a unifying scheme encoding geospatial and temporal information for organizing data on scalable computing/storage resources, minimizing expensive data transfers. STARE provides a compact representation that turns set-logic functions into integer operations, e.g. conditional sub-setting, taking into account representative spatiotemporal resolutions of the data in the datasets. STARE geo-spatiotemporally aligns data placements of diverse data on massive parallel resources to maximize performance. Automating important scientific functions (e.g. regridding) and computational functions (e.g. data placement) allows scientists to focus on domain-specific questions instead of expending their efforts and expertise on data processing. With STARE-enabled automation, SciDB (Scientific Database) plus STARE provides a database interface, reducing costly data preparation, increasing the volume and variety of interoperable data, and easing result sharing. Using SciDB plus STARE as part of an integrated analysis infrastructure dramatically eases combining diametrically different datasets.
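
    STARE's exact encoding is not reproduced here, but the general idea of turning spatial set logic into integer operations can be illustrated with a simple quadtree-style index: a cell's integer ID encodes its position and level, and containment reduces to a prefix (bit-shift) comparison. The sketch below is illustrative only and is not the STARE scheme.

    ```python
    def cell_id(lon: float, lat: float, level: int) -> int:
        """Quadtree-style ID: interleave one bit of longitude and latitude per level."""
        x = (lon + 180.0) / 360.0
        y = (lat + 90.0) / 180.0
        cid = 0
        for _ in range(level):
            x, y = x * 2.0, y * 2.0
            bx, by = int(x), int(y)
            cid = (cid << 2) | (bx << 1) | by
            x, y = x - bx, y - by
        return cid

    def contains(coarse_id: int, coarse_level: int, fine_id: int, fine_level: int) -> bool:
        """Set-logic containment becomes a pure integer (prefix) comparison."""
        return (fine_id >> (2 * (fine_level - coarse_level))) == coarse_id

    region = cell_id(-105.0, 40.0, level=4)      # a coarse cell over Colorado
    obs = cell_id(-105.27, 40.01, level=12)      # a finer observation cell
    print(contains(region, 4, obs, 12))          # True: the observation falls inside the region
    ```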

  14. Designing for Global Data Sharing, Designing for Educational Transformation

    ERIC Educational Resources Information Center

    Adams, Robin S.; Radcliffe, David; Fosmire, Michael

    2016-01-01

    This paper provides an example of a global data sharing project with an educational transformation agenda. This agenda shaped both the design of the shared dataset and the experience of sharing the common dataset to support multiple perspective inquiry and enable integrative and critically reflexive research-to-practice dialogue. The shared…

  15. The Federated Satellite Systems paradigm: Concept and business case evaluation

    NASA Astrophysics Data System (ADS)

    Golkar, Alessandro; Lluch i Cruz, Ignasi

    2015-06-01

    This paper defines the paradigm of Federated Satellite Systems (FSS) as a novel distributed space systems architecture. FSS are networks of spacecraft trading previously inefficiently allocated and unused resources such as downlink bandwidth, storage, processing power, and instrument time. FSS holds the promise to enhance cost-effectiveness, performance and reliability of existing and future space missions, by networking different missions and effectively creating a pool of resources to exchange between participants in the federation. This paper introduces and describes the FSS paradigm, and develops an approach integrating mission analysis and economic assessments to evaluate the feasibility of the business case of FSS. The approach is demonstrated on a case study on opportunities enabled by FSS to enhance space exploration programs, with particular reference to the International Space Station. The application of the proposed methodology shows that the FSS concept is potentially able to create large commercial markets of in-space resources, by providing the technical platform to offer the opportunity for spacecraft to share or make use of unused resources within their orbital neighborhood. It is shown how the concept is beneficial to satellite operators, space agencies, and other stakeholders of the space industry to more flexibly interoperate space systems as a portfolio of assets, allowing unprecedented collaboration among heterogeneous types of missions.

  16. From reactive to proactive: developing a valid clinical ethics needs assessment survey to support ethics program strategic planning (part 1 of 2).

    PubMed

    Frolic, Andrea; Jennings, Barb; Seidlitz, Wendy; Andreychuk, Sandy; Djuric-Paulin, Angela; Flaherty, Barb; Peace, Donna

    2013-03-01

    As ethics committees and programs become integrated into the "usual business" of healthcare organizations, they are likely to face the predicament of responding to greater demands for service and higher expectations, without an influx of additional resources. This situation demands that ethics committees and programs allocate their scarce resources (including their time, skills and funds) strategically, rather than lurching from one ad hoc request to another; finding ways to maximize the effectiveness, efficiency, impact and quality of ethics services is essential in today's competitive environment. How can Hospital Ethics Committees (HECs) begin the process of strategic priority-setting to ensure they are delivering services where and how they are most needed? This paper describes the creation of the Clinical Ethics Needs Assessment Survey (CENAS) as a tool to understand interprofessional staff perceptions of the organization's ethical climate, challenging ethical issues and educational priorities. The CENAS was designed to support informed resource allocation and advocacy by HECs. By sharing our process of developing and validating this ethics needs assessment survey we hope to enable strategic priority-setting in other resource-strapped ethics programs, and to empower HECs to shift their focus to more proactive, quality-focused initiatives.

  17. Implementation of GenePattern within the Stanford Microarray Database.

    PubMed

    Hubble, Jeremy; Demeter, Janos; Jin, Heng; Mao, Maria; Nitzberg, Michael; Reddy, T B K; Wymore, Farrell; Zachariah, Zachariah K; Sherlock, Gavin; Ball, Catherine A

    2009-01-01

    Hundreds of researchers across the world use the Stanford Microarray Database (SMD; http://smd.stanford.edu/) to store, annotate, view, analyze and share microarray data. In addition to providing registered users at Stanford access to their own data, SMD also provides access to public data, and tools with which to analyze those data, to any public user anywhere in the world. Previously, the addition of new microarray data analysis tools to SMD was limited by available engineering resources, and the existing suite of tools did not provide a simple way to design, execute and share analysis pipelines, or to document such pipelines for the purposes of publication. To address this, we have incorporated the GenePattern software package directly into SMD, providing access to many new analysis tools, as well as a plug-in architecture that allows users to directly integrate and share additional tools through SMD. In this article, we describe our implementation of the GenePattern microarray analysis software package in the SMD code base. This extension is included with the SMD source code, which is fully and freely available to others under an Open Source license, enabling other groups to create a local installation of SMD with enriched data analysis capability.

  18. Exploiting GPUs in Virtual Machine for BioCloud

    PubMed Central

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have begun to be reimplemented to exploit the many cores of GPUs for better computational performance. By providing virtualized GPUs to VMs in a cloud computing environment, many biological applications can move into the cloud to enhance their computational performance and utilize the virtually unlimited computing resources of the cloud while reducing the expense of computation. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on mechanisms for sharing GPUs among VMs, it cannot achieve sufficient performance for biological applications, for which computational throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By making each VM able to access the underlying GPUs directly, applications can achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs on the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment. PMID:23710465
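
    Xen's toolstack exposes PCI pass-through and hot plug/unplug through the `xl` command. The sketch below is an illustrative orchestration only, not the authors' actual scheduler; the domain names and the PCI address are hypothetical, and the device is assumed to already be bound to the Xen PCI back-end driver.

    ```python
    import subprocess

    GPU_BDF = "0000:03:00.0"   # hypothetical PCI address of the GPU (assumed bound to xen-pciback)

    def move_gpu(from_domain: str, to_domain: str, bdf: str = GPU_BDF) -> None:
        """Time-share one GPU between VMs by hot-unplugging and hot-plugging it via xl."""
        subprocess.run(["xl", "pci-detach", from_domain, bdf], check=True)
        subprocess.run(["xl", "pci-attach", to_domain, bdf], check=True)

    if __name__ == "__main__":
        # Hand the GPU from the VM that finished its batch job to the next one in the queue.
        move_gpu("bio-vm-1", "bio-vm-2")
    ```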

  19. Exploiting GPUs in virtual machine for BioCloud.

    PubMed

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have begun to be reimplemented to exploit the many cores of GPUs for better computational performance. By providing virtualized GPUs to VMs in a cloud computing environment, many biological applications can move into the cloud to enhance their computational performance and utilize the virtually unlimited computing resources of the cloud while reducing the expense of computation. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on mechanisms for sharing GPUs among VMs, it cannot achieve sufficient performance for biological applications, for which computational throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By making each VM able to access the underlying GPUs directly, applications can achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs on the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment.

  20. The community resource management area mechanism: a strategy to manage African forest resources for REDD+.

    PubMed

    Asare, Rebecca A; Kyei, Andrew; Mason, John J

    2013-01-01

    Climate change poses a significant threat to Africa, and deforestation rates have increased in recent years. Mitigation initiatives such as REDD+ are widely considered as potentially efficient ways to generate emission reductions (or removals), conserve or sustainably manage forests, and bring benefits to communities, but effective implementation models are lacking. This paper presents the case of Ghana's Community Resource Management Area (CREMA) mechanism, an innovative natural resource governance and landscape-level planning tool that authorizes communities to manage their natural resources for economic and livelihood benefits. This paper argues that while the CREMA was originally developed to facilitate community-based wildlife management and habitat protection, it offers a promising community-based structure and process for managing African forest resources for REDD+. At a theoretical level, it conforms to the ecological, socio-cultural and economic factors that drive resource-users' decision process and practices. And from a practical mitigation standpoint, the CREMA has the potential to help solve many of the key challenges for REDD+ in Africa, including definition of boundaries, smallholder aggregation, free prior and informed consent, ensuring permanence, preventing leakage, clarifying land tenure and carbon rights, as well as enabling equitable benefit-sharing arrangements. Ultimately, CREMA's potential as a forest management and climate change mitigation strategy that generates livelihood benefits for smallholder farmers and forest users will depend upon the willingness of African governments to support the mechanism and give it full legislative backing, and the motivation of communities to adopt the CREMA and integrate democratic decision-making and planning with their traditional values and natural resource management systems.

  1. The community resource management area mechanism: a strategy to manage African forest resources for REDD+

    PubMed Central

    Asare, Rebecca A.; Kyei, Andrew; Mason, John J.

    2013-01-01

    Climate change poses a significant threat to Africa, and deforestation rates have increased in recent years. Mitigation initiatives such as REDD+ are widely considered as potentially efficient ways to generate emission reductions (or removals), conserve or sustainably manage forests, and bring benefits to communities, but effective implementation models are lacking. This paper presents the case of Ghana's Community Resource Management Area (CREMA) mechanism, an innovative natural resource governance and landscape-level planning tool that authorizes communities to manage their natural resources for economic and livelihood benefits. This paper argues that while the CREMA was originally developed to facilitate community-based wildlife management and habitat protection, it offers a promising community-based structure and process for managing African forest resources for REDD+. At a theoretical level, it conforms to the ecological, socio-cultural and economic factors that drive resource-users’ decision process and practices. And from a practical mitigation standpoint, the CREMA has the potential to help solve many of the key challenges for REDD+ in Africa, including definition of boundaries, smallholder aggregation, free prior and informed consent, ensuring permanence, preventing leakage, clarifying land tenure and carbon rights, as well as enabling equitable benefit-sharing arrangements. Ultimately, CREMA's potential as a forest management and climate change mitigation strategy that generates livelihood benefits for smallholder farmers and forest users will depend upon the willingness of African governments to support the mechanism and give it full legislative backing, and the motivation of communities to adopt the CREMA and integrate democratic decision-making and planning with their traditional values and natural resource management systems. PMID:23878338

  2. Governance of global health research consortia: Sharing sovereignty and resources within Future Health Systems.

    PubMed

    Pratt, Bridget; Hyder, Adnan A

    2017-02-01

    Global health research partnerships are increasingly taking the form of consortia that conduct programs of research in low and middle-income countries (LMICs). An ethical framework has been developed that describes how the governance of consortia comprised of institutions from high-income countries and LMICs should be structured to promote health equity. It encompasses initial guidance for sharing sovereignty in consortia decision-making and sharing consortia resources. This paper describes a first effort to examine whether and how consortia can uphold that guidance. Case study research was undertaken with the Future Health Systems consortium, which performs research to improve health service delivery for the poor in Bangladesh, China, India, and Uganda. Data were thematically analysed and revealed that the proposed ethical requirements for sharing sovereignty and sharing resources are largely upheld by Future Health Systems. Facilitating factors included having a decentralised governance model, LMIC partners with good research capacity, and firm budgets. Higher labour costs in the US and UK and the funder's policy of allocating funds to consortia on a reimbursement basis prevented full alignment with the guidance on sharing resources. The lessons described in this paper can assist other consortia to more systematically link their governance policy and practice to the promotion of health equity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Shared resources : sharing right-of-way for telecommunications : guidance on legal and institutional issues

    DOT National Transportation Integrated Search

    1996-03-01

    Fiber-optic communications technology offers benefits for government agencies that want to set up communications networks for Intelligent Transportation Systems (ITS). One way to do this efficiently is to offer the public resource of highway right-of...

  4. Terminology for Neuroscience Data Discovery: Multi-tree Syntax and Investigator-Derived Semantics

    PubMed Central

    Goldberg, David H.; Grafstein, Bernice; Robert, Adrian; Gardner, Esther P.

    2009-01-01

    The Neuroscience Information Framework (NIF), developed for the NIH Blueprint for Neuroscience Research and available at http://nif.nih.gov and http://neurogateway.org, is built upon a set of coordinated terminology components enabling data and web-resource description and selection. Core NIF terminologies use a straightforward syntax designed for ease of use and for navigation by familiar web interfaces, and are readily exportable to aid development of relational-model databases for neuroscience data sharing. Datasets, data analysis tools, web resources, and other entities are characterized by multiple descriptors, each addressing core concepts, including data type, acquisition technique, neuroanatomy, and cell class. Terms for each concept are organized in a tree structure, providing is-a and has-a relations. Broad general terms near each root span the category or concept and spawn more detailed entries for specificity. Related but distinct concepts (e.g., brain area and depth) are specified by separate trees, providing easier navigation than a graph representation would. Semantics enabling NIF data discovery were selected at one or more workshops by investigators expert in particular systems (vision, olfaction, behavioral neuroscience, neurodevelopment), brain areas (cerebellum, thalamus, hippocampus), preparations (molluscs, fly), diseases (neurodegenerative disease), or techniques (microscopy, computation and modeling, neurogenetics). Workshop-derived integrated term lists are available Open Source at http://brainml.org; a complete list of participants is at http://brainml.org/workshops. PMID:18958630
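
    The multi-tree organization described above (broad terms near each root spawning more specific entries, with separate trees for related but distinct concepts) can be sketched roughly as follows; the term names and tree layout are invented examples, not the actual NIF vocabularies.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Term:
        """A node in one concept tree; children are is-a specializations of their parent."""
        name: str
        children: list["Term"] = field(default_factory=list)

        def add(self, *names: str) -> "Term":
            self.children.extend(Term(n) for n in names)
            return self

        def path_to(self, target: str, prefix=()):
            """Return the root-to-term path, used to broaden or narrow a search."""
            here = prefix + (self.name,)
            if self.name == target:
                return here
            for child in self.children:
                found = child.path_to(target, here)
                if found:
                    return found
            return None

    # Separate trees for related but distinct concepts (e.g., brain area vs. data type)
    brain_area = Term("brain").add("cerebellum", "thalamus", "hippocampus")
    data_type = Term("dataset").add("time series", "image stack")

    print(brain_area.path_to("hippocampus"))   # ('brain', 'hippocampus')
    print(data_type.path_to("image stack"))    # ('dataset', 'image stack')
    ```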

  5. An analysis of factors affecting participation behavior of limited resource farmers in agricultural cost-share programs in Alabama

    Treesearch

    Okwudili Onianwa; Gerald Wheelock; Buddhi Gyawali; Jianbang Gan; Mark Dubois; John Schelhas

    2004-01-01

    This study examines factors that affect the participation behavior of limited resource farmers in agricultural cost-share programs in Alabama. The data were generated from a survey administered to a sample of limited resource farm operators. A binary logit model was employed to analyze the data. Results indicate that college education, age, gross sales, ratio of owned...

  6. Optimizing DER Participation in Inertial and Primary-Frequency Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhao, Changhong; Guggilam, Swaroop

    This paper develops an approach to enable the optimal participation of distributed energy resources (DERs) in inertial and primary-frequency response alongside conventional synchronous generators. Leveraging a reduced-order model description of frequency dynamics, DERs' synthetic inertias and droop coefficients are designed to meet time-domain performance objectives of frequency overshoot and steady-state regulation. Furthermore, an optimization-based method centered around classical economic dispatch is developed to ensure that DERs share the power injections for inertial- and primary-frequency response in proportion to their power ratings. Simulations for a modified New England test-case system composed of ten synchronous generators and six instances of the IEEE 37-node test feeder with frequency-responsive DERs validate the design strategy.
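
    Sharing the frequency-response power injection in proportion to power ratings, as described above, amounts to allocating each DER a fraction rating_i / sum(ratings) of the total required injection. A minimal sketch (with invented DER names and values, not the paper's optimization) is below.

    ```python
    def proportional_share(total_injection_kw: float, ratings_kw: dict[str, float]) -> dict[str, float]:
        """Allocate the required frequency-response injection in proportion to DER power ratings."""
        total_rating = sum(ratings_kw.values())
        return {name: total_injection_kw * r / total_rating for name, r in ratings_kw.items()}

    # Hypothetical feeder with three frequency-responsive DERs
    ratings = {"pv_inverter": 50.0, "battery": 100.0, "fuel_cell": 25.0}
    shares = proportional_share(total_injection_kw=70.0, ratings_kw=ratings)
    for name, kw in shares.items():
        print(f"{name}: {kw:.1f} kW")   # the battery contributes twice the PV inverter's share
    ```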

  7. The FLIGHT Drosophila RNAi database

    PubMed Central

    Bursteinas, Borisas; Jain, Ekta; Gao, Qiong; Baum, Buzz; Zvelebil, Marketa

    2010-01-01

    FLIGHT (http://flight.icr.ac.uk/) is an online resource compiling data from high-throughput Drosophila in vivo and in vitro RNAi screens. FLIGHT includes details of RNAi reagents and their predicted off-target effects, alongside RNAi screen hits, scores and phenotypes, including images from high-content screens. The latest release of FLIGHT is designed to enable users to upload, analyze, integrate and share their own RNAi screens. Users can perform multiple normalizations, view quality control plots, detect and assign screen hits and compare hits from multiple screens using a variety of methods, including hierarchical clustering. FLIGHT integrates RNAi screen data with microarray gene expression data as well as genomic annotations and genetic/physical interaction datasets to provide a single interface for RNAi screen analysis and data mining in Drosophila. PMID:20855970
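
    Comparing hits from multiple screens with hierarchical clustering, as mentioned above, can be illustrated with a short SciPy sketch; the gene names and Z-scores below are made up, and this is not FLIGHT's internal code.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Rows = genes, columns = normalized scores from three hypothetical RNAi screens
    genes = ["Rho1", "dia", "zip", "CycE", "stg"]
    scores = np.array([
        [-3.1, -2.8, -2.5],   # cytoskeleton-like phenotype profile
        [-2.9, -2.6, -2.2],
        [-3.3, -2.4, -2.7],
        [ 0.2,  2.9,  3.1],   # cell-cycle-like profile
        [ 0.4,  3.2,  2.8],
    ])

    # Cluster genes with similar phenotype profiles across screens
    tree = linkage(scores, method="average", metric="euclidean")
    labels = fcluster(tree, t=2, criterion="maxclust")
    for gene, label in zip(genes, labels):
        print(f"{gene}: cluster {label}")
    ```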

  8. Goddard's New Approach to Information Technology: The Information Systems Center an Overview

    NASA Technical Reports Server (NTRS)

    Kea, Howard E.

    1994-01-01

    The Information Systems Center (ISC) at Goddard was created as part of the Goddard reorganization and was located within the Applied Engineering and Technology (AET) Directorate. The ISC was created to: (1) focus expertise and leadership in information system development; (2) promote organizational collaboration, partnerships, and resource sharing; (3) stimulate the design and development of seamless end-to-end flight and ground systems; (4) enable the flexibility to effectively support many simultaneous projects through improved access to a critical mass of discipline expertise; (5) enhance career growth and opportunities, including multi-disciplinary opportunities; and (6) improve communications among information system professionals. This paper presents a general overview of the Information Systems Center as well as the role of the Software Engineering Laboratory within the center.

  9. FermiGrid—experience and future plans

    NASA Astrophysics Data System (ADS)

    Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.

    2008-07-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.

  10. FermiGrid - experience and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chadwick, K.; Berman, E.; Canal, P.

    2007-09-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure--the successes and the problems.

  11. Mothers' Expectations for Shared Reading Following Delivery: Implications For Reading Activities at 6 Months

    PubMed Central

    Berkule, Samantha B.; Dreyer, Benard P.; Klass, Perri E.; Huberman, Harris S.; Yin, Hsiang S.; Mendelsohn, Alan L.

    2008-01-01

    Objective To determine whether mothers with plans related to shared reading and baby books in the home at the time of delivery of their newborns would be more likely to engage in shared reading behaviors at age 6 months. Method This was a cohort study with enrollment post-partum and follow-up at 6 months in an urban public hospital. Predictors: mothers' attitudes and resources related to shared reading during the postpartum period. Outcomes: mothers' shared reading activities and resources at 6 months (StimQ-READ). Results 173 mother-infant dyads were assessed. In multiple regression analyses adjusting for sociodemographics and maternal depression and literacy, StimQ-READ at 6 months was increased in association with all 3 postpartum predictors: plans for reading as a strategy for school success (adjusted mean 1.7 point increase in 6 month score; 95% CI: 0.3 – 3.0), plans to read in infancy (3.1 point increase; 95% CI: 1.6-4.6), and having baby books in the home (2.3 point increase; 95% CI: 0.9 – 3.6). In multiple logistic regression analysis, mothers with two or more attitudes and resources had an AOR of 6.2 (95% CI: 2.0-18.9) for having initiated reading at 6 months. Conclusions Maternal attitudes and resources in early infancy related to shared reading are important predictors of reading behaviors by 6 months. Cumulative postnatal attitudes and resources are the strongest predictors of later behaviors. Additional research is needed regarding whether guidance about shared reading in early infancy or pregnancy would enhance programs such as Reach Out and Read. PMID:18501863

  12. SHARING EDUCATIONAL SERVICES.

    ERIC Educational Resources Information Center

    Catskill Area Project in Small School Design, Oneonta, NY.

    Shared services, a cooperative school resource program, is defined in detail. Included is a discussion of their need, advantages, growth, design, and operation. Specific procedures for obtaining state aid in shared services, effects of shared services on the school, and hints concerning shared services are described. Characteristics of the small…

  13. The USA National Phenology Network; taking the pulse of our planet

    USGS Publications Warehouse

    Weltzin, Jake F.

    2011-01-01

    People have tracked phenology for centuries and for the most practical reasons: it helped them know when to hunt and fish, when to plant and harvest crops, and when to navigate waterways. Now phenology is being used as a tool to assess climate change and its effects on both natural and modified ecosystems. How is the timing of events in plant and animal life cycles, like flowering or migration, responding to climate change? And how are those responses, in turn, affecting people and ecosystems? The USA National Phenology Network (the Network) is working to answer these questions for science and society by promoting a broad understanding of plant and animal phenology and their relationship to environmental change. The Network is a consortium of organizations and individuals that collect, share, and use phenology data, models, and related information to enable scientists, resource managers, and the public to adapt in response to changing climates and environments. In addition, the Network encourages people of all ages and backgrounds to observe and record phenology as a way to discover and explore the nature and pace of our dynamic world. The National Coordinating Office (NCO) of the Network is a resource center that facilitates and encourages widespread collection, integration, and sharing of phenology data and related information (for example, meteorological and hydrological data). The NCO develops and promotes standardized methods for field data collection and maintains several online user interfaces for data upload and download, as well as data exploration, visualization, and analysis. The NCO also facilitates basic and applied research related to phenology, the development of decision-support tools for resource managers and planners, and the design of educational and outreach materials

  14. Open Core Data: Connecting scientific drilling data to scientists and community data resources

    NASA Astrophysics Data System (ADS)

    Fils, D.; Noren, A. J.; Lehnert, K.; Diver, P.

    2016-12-01

    Open Core Data (OCD) is an innovative, efficient, and scalable infrastructure for data generated by scientific drilling and coring to improve discoverability, accessibility, citability, and preservation of data from the oceans and continents. OCD is building on existing community data resources that manage, store, publish, and preserve scientific drilling data, filling a critical void that currently prevents linkages between these and other data systems and tools to realize the full potential of data generated through drilling and coring. We are developing this functionality through Linked Open Data (LOD) and semantic patterns that enable data access through the use of community ontologies such as GeoLink (geolink.org, an EarthCube Building Block), a collection of protocols, formats and vocabularies from a set of participating geoscience repositories. Common shared concepts of classes such as cruise, dataset, person and others allow easier resolution of common references through shared resource IDs. These graphs are then made available via SPARQL as well as incorporated into web pages following schema.org approaches. Additionally the W3C PROV vocabulary is under evaluation for use for documentation of provenance. Further, the application of persistent identifiers for samples (IGSNs); datasets, expeditions, and projects (DOIs); and people (ORCIDs), combined with LOD approaches, provides methods to resolve and incorporate metadata and datasets. Application Program Interfaces (APIs) complement these semantic approaches to the OCD data holdings. APIs are exposed following the Swagger guidelines (swagger.io) and will be evolved into the OpenAPI (openapis.org) approach. Currently APIs are in development for the NSF funded Flyover Country mobile geoscience app (fc.umn.edu), the Neotoma Paleoecology Database (neotomadb.org), Magnetics Information Consortium (MagIC; earthref.org/MagIC), and other community tools and data systems, as well as for internal OCD use.
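
    Linked Open Data published this way can be queried over SPARQL. The sketch below uses the SPARQLWrapper library against a hypothetical endpoint URL and an invented graph pattern (the actual OCD endpoint and vocabulary terms are not given in the abstract); it only illustrates the access pattern.

    ```python
    from SPARQLWrapper import SPARQLWrapper, JSON

    # Placeholder endpoint and vocabulary -- not Open Core Data's actual service
    sparql = SPARQLWrapper("https://example.org/opencoredata/sparql")
    sparql.setQuery("""
        PREFIX ex: <http://example.org/vocab#>
        SELECT ?dataset ?cruise WHERE {
            ?dataset a ex:Dataset ;
                     ex:fromCruise ?cruise .
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)

    # Shared concepts such as "cruise" and "dataset" let results resolve across repositories
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["dataset"]["value"], "<-", row["cruise"]["value"])
    ```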

  15. Supporting Shared Resource Usage for a Diverse User Community: the OSG Experience and Lessons Learned

    NASA Astrophysics Data System (ADS)

    Garzoglio, Gabriele; Levshina, Tanya; Rynge, Mats; Sehgal, Chander; Slyz, Marko

    2012-12-01

    The Open Science Grid (OSG) supports a diverse community of new and existing users in adopting and making effective use of the Distributed High Throughput Computing (DHTC) model. The LHC user community has deep local support within the experiments. For other smaller communities and individual users the OSG provides consulting and technical services through the User Support area. We describe these sometimes successful and sometimes not so successful experiences and analyze lessons learned that are helping us improve our services. The services offered include forums to enable shared learning and mutual support, tutorials and documentation for new technology, and troubleshooting of problematic or systemic failure modes. For new communities and users, we bootstrap their use of the distributed high throughput computing technologies and resources available on the OSG by following a phased approach. We first adapt the application and run a small production campaign on a subset of “friendly” sites. Only then do we move the user to run full production campaigns across the many remote sites on the OSG, adding to the community resources up to hundreds of thousands of CPU hours per day. This scaling up generates new challenges - like no determinism in the time to job completion, and diverse errors due to the heterogeneity of the configurations and environments - so some attention is needed to get good results. We cover recent experiences with image simulation for the Large Synoptic Survey Telescope (LSST), small-file large volume data movement for the Dark Energy Survey (DES), civil engineering simulation with the Network for Earthquake Engineering Simulation (NEES), and accelerator modeling with the Electron Ion Collider group at BNL. We will categorize and analyze the use cases and describe how our processes are evolving based on lessons learned.

  16. Providing the Tools for Information Sharing: Net-Centric Enterprise Services

    DTIC Science & Technology

    2007-07-01

    The Department of Defense (DoD) is establishing a net-centric environment that increasingly leverages shared services and Service-Oriented...transformational program that delivers a set of shared services as part of the DoD’s common infrastructure to enable networked joint force capabilities, improved interoperability, and increased information sharing across mission area services.

  17. MinT: Middleware for Cooperative Interaction of Things

    PubMed Central

    Jeon, Soobin; Jung, Inbum

    2017-01-01

    This paper proposes an Internet of Things (IoT) middleware called Middleware for Cooperative Interaction of Things (MinT). MinT supports a fully distributed IoT environment in which IoT devices directly connect to peripheral devices, easily construct a local or global network, and share their data in an energy-efficient manner. MinT provides a sensor abstract layer, a system layer and an interaction layer. These enable integrated sensing-device operations, efficient resource management, and active interconnection between peripheral IoT devices. In addition, MinT provides a high-level API that allows IoT device developers to develop IoT devices easily. We aim to enhance the energy efficiency and performance of IoT devices through the performance improvements offered by MinT's resource management and request processing. The experimental results show that the average request rate increased by 25% compared to Californium, a middleware for efficient interaction in IoT environments with powerful performance; the average response time decreased by 90% when resource management was used; and power consumption decreased by up to 68%. Finally, the proposed platform can reduce the latency and power consumption of IoT devices. PMID:28632182
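
    The three layers described above can be sketched, very loosely, as in the following Python illustration; the class and method names, and the caching heuristic, are invented for exposition and do not reflect MinT's actual API.

    ```python
    class SensorAbstractLayer:
        """Uniform read interface over heterogeneous physical sensors."""
        def __init__(self, drivers):          # drivers: name -> zero-arg read function
            self.drivers = drivers
        def read(self, name):
            return self.drivers[name]()

    class SystemLayer:
        """Caches readings so repeated requests don't wake the sensor (saves energy)."""
        def __init__(self, sal, ttl_reads=3):
            self.sal, self.ttl, self.cache = sal, ttl_reads, {}
        def get(self, name):
            hits, value = self.cache.get(name, (self.ttl, None))
            if value is None or hits >= self.ttl:
                value, hits = self.sal.read(name), 0
            self.cache[name] = (hits + 1, value)
            return value

    class InteractionLayer:
        """Answers requests from peer devices; peers connect directly, no central broker."""
        def __init__(self, system):
            self.system = system
        def handle_request(self, resource):
            return {"resource": resource, "value": self.system.get(resource)}

    node = InteractionLayer(SystemLayer(SensorAbstractLayer({"temperature": lambda: 21.5})))
    print(node.handle_request("temperature"))
    ```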

  18. WE-B-BRD-01: Innovation in Radiation Therapy Planning II: Cloud Computing in RT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, K; Kagadis, G; Xing, L

    As defined by the National Institute of Standards and Technology, cloud computing is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Despite the omnipresent role of computers in radiotherapy, cloud computing has yet to achieve widespread adoption in clinical or research applications, though the transition to such “on-demand” access is underway. As this transition proceeds, new opportunities for aggregate studies and efficient use of computational resources are set against new challenges in patient privacy protection, data integrity, and management of clinical informatics systems. In this Session, current and future applications of cloud computing and distributed computational resources will be discussed in the context of medical imaging, radiotherapy research, and clinical radiation oncology applications. Learning Objectives: (1) Understand basic concepts of cloud computing. (2) Understand how cloud computing could be used for medical imaging applications. (3) Understand how cloud computing could be employed for radiotherapy research. (4) Understand how clinical radiotherapy software applications would function in the cloud.

  19. MinT: Middleware for Cooperative Interaction of Things.

    PubMed

    Jeon, Soobin; Jung, Inbum

    2017-06-20

    This paper proposes an Internet of Things (IoT) middleware called Middleware for Cooperative Interaction of Things (MinT). MinT supports a fully distributed IoT environment in which IoT devices directly connect to peripheral devices, easily construct a local or global network, and share their data in an energy-efficient manner. MinT provides a sensor abstract layer, a system layer and an interaction layer. These enable integrated sensing-device operations, efficient resource management, and active interconnection between peripheral IoT devices. In addition, MinT provides a high-level API that allows IoT device developers to develop IoT devices easily. We aim to enhance the energy efficiency and performance of IoT devices through the performance improvements offered by MinT's resource management and request processing. The experimental results show that the average request rate increased by 25% compared to Californium, a middleware for efficient interaction in IoT environments with powerful performance; the average response time decreased by 90% when resource management was used; and power consumption decreased by up to 68%. Finally, the proposed platform can reduce the latency and power consumption of IoT devices.

  20. Towards health in all policies for childhood obesity prevention.

    PubMed

    Hendriks, Anna-Marie; Kremers, Stef P J; Gubbels, Jessica S; Raat, Hein; de Vries, Nanne K; Jansen, Maria W J

    2013-01-01

    The childhood obesity epidemic can be best tackled by means of an integrated approach, which is enabled by integrated public health policies, or Health in All Policies. Integrated policies are developed through intersectoral collaboration between local government policy makers from health and nonhealth sectors. Such intersectoral collaboration has been proved to be difficult. In this study, we investigated which resources influence intersectoral collaboration. The behavior change wheel framework was used to categorize motivation-, capability-, and opportunity-related resources for intersectoral collaboration. In-depth interviews were held with eight officials representing 10 non-health policy sectors within a local government. Results showed that health and non-health policy sectors did not share policy goals, which decreased motivation for intersectoral collaboration. Awareness of the linkage between health and nonhealth policy sectors was limited, and management was not involved in creating such awareness, which reduced the capability for intersectoral collaboration. Insufficient organizational resources and structures reduced opportunities for intersectoral collaboration. To stimulate intersectoral collaboration to prevent childhood obesity, we recommend that public health professionals should reframe health goals in the terminology of nonhealth policy sectors, that municipal department managers should increase awareness of public health in non-health policy sectors, and that flatter organizational structures should be established.

  1. Shared Resources: Working Both Ways with Business and Industry.

    ERIC Educational Resources Information Center

    Rand, Glenn

    1989-01-01

    Describes the cooperative relationship established between Eastman Kodak Company and Lansing Community College whereby the college assigned an employee from the media department to work half time for Kodak. Examines the benefits to the college and outcomes of the shared resource effort. (DMM)

  2. Vision, Educational Level, and Empowering Work Relationships.

    ERIC Educational Resources Information Center

    Johnson, G. M.

    1995-01-01

    Thirty-one machinists (blind, sighted, and visually impaired) answered questions about trust, resource sharing, and empowerment in work relationships. Employees with low vision were the least trusting and trusted, received the fewest shared resources, and reported proportionately more disempowering relationships. More educated employees saw more…

  3. 75 FR 9246 - Cooperative Share Loan Insurance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-01

    ... DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT [Docket No. FR-5376-N-14] Cooperative Share Loan... comments on the subject proposal. New guidance for cooperative housing loan insurance will be published to update existing policies, and better enable mortgagees to submit cooperative share loans for FHA...

  4. Information Technology and Community Restoration Studies/Task 1: Information Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upton, Jaki F.; Lesperance, Ann M.; Stein, Steven L.

    2009-11-19

    Executive Summary: The Interagency Biological Restoration Demonstration—a program jointly funded by the Department of Defense's Defense Threat Reduction Agency and the Department of Homeland Security's (DHS's) Science and Technology Directorate—is developing policies, methods, plans, and applied technologies to restore large urban areas, critical infrastructures, and Department of Defense installations following the intentional release of a biological agent (anthrax) by terrorists. There is a perception that there should be a common system that can share information both vertically and horizontally amongst participating organizations as well as support analyses. A key question is: "How far away from this are we?" As part of this program, Pacific Northwest National Laboratory conducted research to identify the current information technology tools that would be used by organizations in the greater Seattle urban area in such a scenario, to define criteria for use in evaluating information technology tools, and to identify current gaps. Researchers interviewed 28 individuals representing 25 agencies in civilian and military organizations to identify the tools they currently use to capture data needed to support operations and decision making. The organizations can be grouped into five broad categories: defense (Department of Defense), environmental/ecological (Environmental Protection Agency/Ecology), public health and medical services, emergency management, and critical infrastructure. The types of information that would be communicated in a biological terrorism incident include critical infrastructure and resource status, safety and protection information, laboratory test results, and general emergency information. The most commonly used tools are WebEOC (web-enabled crisis information management systems with real-time information sharing), mass notification software, resource tracking software, and NW WARN (web-based information to protect critical infrastructure systems). It appears that the current information management tools are used primarily for information gathering and sharing—not decision making. Respondents identified the following criteria for a future software system: it is easy to learn, updates information in real time, works with all agencies, is secure, uses a visualization or geographic information system feature, enables varying permission levels, flows information from one stage to another, works with other databases, feeds decision support tools, is compliant with appropriate standards, and is reasonably priced. Current tools have security issues, lack visual/mapping functions and critical infrastructure status, and do not integrate with other tools. It is clear that there is a need for an integrated, common operating system. The system would need to be accessible by all the organizations that would have a role in managing an anthrax incident to enable regional decision making. The most useful tool would feature a GIS visualization that would allow for a common operating picture that is updated in real time. To capitalize on information gained from the interviews, the following activities are recommended: • Rate emergency management decision tools against the criteria specified by the interviewees. • Identify and analyze other current activities focused on information sharing in the greater Seattle urban area. • Identify and analyze information sharing systems/tools used in other regions.

  5. Sharik 1.0: User Needs and System Requirements for a Web-Based Tool to Support Collaborative Sensemaking

    DTIC Science & Technology

    2016-05-01

    Sharik 1.0: User Needs and System Requirements for a Web-Based Tool to Support Collaborative Sensemaking Shadi Ghajar-Khosravi...share the new intelligence items with their peers. In this report, the authors describe Sharik (SHAring Resources, Information, and Knowledge), a web ...SHAring Resources, Information and Knowledge, that is, the sharing of resources, information and knowledge), a Web tool that facilitates the

  6. Network Information Management Subsystem

    NASA Technical Reports Server (NTRS)

    Chatburn, C. C.

    1985-01-01

    The Deep Space Network is implementing a distributed data base management system in which the data are shared among several applications and the host machines are not totally dedicated to a particular application. Since the data and resources are to be shared, the equipment must be operated carefully so that the resources are shared equitably. The current status of the project is discussed and policies, roles, and guidelines are recommended for the organizations involved in the project.

  7. Improvement of Resilience to Disasters in Local Community Using Information Sharing Platform

    NASA Astrophysics Data System (ADS)

    Hayama, Toru; Suzuki, Yuji; Park, Wonho; Hayashi, Akira

    This paper presents a proposal for a Disaster Information Sharing Platform, which enables local governments and residents to share disaster information and to cope with disasters under a proper balance of Self-help, Mutual-help and Public-help. Informagic, which has been developed as a concrete example of such an information sharing platform, enables information to be collected from a variety of sources, such as national government, local government, research institutes, private content providers and so forth, and transmitted to residents through multiple media, such as the internet, mobile-phone networks and wireless systems. An experiment was conducted in cooperation with the City of Fujisawa to investigate the effectiveness of such a platform for disaster mitigation. Further, the platform was used to provide information to evacuees at shelters during the Iwate-Miyagi Inland Earthquake. Through these experiments, the effectiveness of, and outstanding issues with, the platform and information sharing were investigated.

  8. One for You, One for Me: Humans' Unique Turn-Taking Skills.

    PubMed

    Melis, Alicia P; Grocke, Patricia; Kalbitz, Josefine; Tomasello, Michael

    2016-07-01

    Long-term collaborative relationships require that any jointly produced resources be shared in mutually satisfactory ways. Prototypically, this sharing involves partners dividing up simultaneously available resources, but sometimes the collaboration makes a resource available to only one individual, and any sharing of resources must take place across repeated instances over time. Here, we show that beginning at 5 years of age, human children stabilize cooperation in such cases by taking turns across instances of obtaining a resource. In contrast, chimpanzees do not take turns in this way, and so their collaboration tends to disintegrate over time. Alternating turns in obtaining a collaboratively produced resource does not necessarily require a prosocial concern for the other, but rather requires only a strategic judgment that partners need incentives to continue collaborating. These results suggest that human beings are adapted for thinking strategically in ways that sustain long-term cooperative relationships and that are absent in their nearest primate relatives. © The Author(s) 2016.

  9. XDS-I outsourcing proxy: ensuring confidentiality while preserving interoperability.

    PubMed

    Ribeiro, Luís S; Viana-Ferreira, Carlos; Oliveira, José Luís; Costa, Carlos

    2014-07-01

    The interoperability of services and the sharing of health data have been a continuous goal for health professionals, patients, institutions, and policy makers. However, several issues have been hindering this goal, such as incompatible implementations of standards (e.g., HL7, DICOM), multiple ontologies, and security constraints. Cross-enterprise document sharing (XDS) workflows were proposed by Integrating the Healthcare Enterprise (IHE) to address current limitations in exchanging clinical data among organizations. To ensure data protection, XDS actors must be placed in trustworthy domains, which are normally inside such institutions. However, due to rapidly growing IT requirements, the outsourcing of resources in the Cloud is becoming very appealing. This paper presents a software proxy that enables the outsourcing of XDS architectural parts while preserving the interoperability, confidentiality, and searchability of clinical information. A key component in our architecture is a new searchable encryption (SE) scheme, Posterior Playfair Searchable Encryption (PPSE), which, besides keeping the same confidentiality levels of the stored data, hides the search patterns from the adversary, bringing improvements over other practical state-of-the-art SE schemes.
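
    The record above names a specific searchable encryption scheme (PPSE) without giving its construction, so the following is only a minimal keyword-token sketch of the general searchable-index idea, not the scheme from the paper: the outsourced store holds HMAC-derived tokens and opaque document identifiers, so it can answer keyword queries without learning the plaintext keywords. Note that deterministic tokens like these leak search patterns, which is exactly the weakness PPSE is said to address.

        # Illustrative only; NOT the PPSE scheme described in the record above.
        import hmac, hashlib, os

        def keyword_token(key: bytes, keyword: str) -> str:
            """Derive a deterministic search token for a keyword (client side)."""
            return hmac.new(key, keyword.lower().encode(), hashlib.sha256).hexdigest()

        class OutsourcedIndex:
            """Hypothetical outsourced index: maps opaque tokens to opaque document ids."""
            def __init__(self):
                self._index = {}                      # token -> set of opaque ids

            def add(self, token: str, opaque_doc_id: str):
                self._index.setdefault(token, set()).add(opaque_doc_id)

            def search(self, token: str):
                return self._index.get(token, set())

        key = os.urandom(32)                          # held only by the client
        index = OutsourcedIndex()
        index.add(keyword_token(key, "chest-ct"), "doc-3f9a")        # ids assumed opaque
        index.add(keyword_token(key, "mammography"), "doc-77b1")

        # Query: the client derives the token locally; the server matches blindly.
        print(index.search(keyword_token(key, "chest-ct")))          # {'doc-3f9a'}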

  10. DIY-style GIS service in mobile navigation system integrated with web and wireless GIS

    NASA Astrophysics Data System (ADS)

    Yan, Yongbin; Wu, Jianping; Fan, Caiyou; Wang, Minqi; Dai, Sheng

    2007-06-01

    A mobile navigation system based on a handheld device can not only provide basic GIS services, but also make these services available without location limits and with more immediate interaction between users and devices. However, most navigation systems still have common defects in user experience, such as limited map formats, few map resources, and the inability to share location. To overcome these defects, we propose a DIY-style GIS service that gives users a freer software environment and allows them to customize their GIS services. These services include defining the geographical coordinate system of maps, which greatly enlarges the range of usable map sources; editing vector features, related attribute information and hotlink images; customizing the coverage area of maps downloaded via General Packet Radio Service (GPRS); and sharing users' location information via SMS (Short Message Service), which establishes communication between users who need GIS services. The paper introduces the integration of web and wireless GIS services in a mobile navigation system and presents an implementation sample of a DIY-style GIS service in such a system.
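
    As an illustration of two of the capabilities listed above, the sketch below uses pyproj as a stand-in (the paper does not specify its implementation, and the payload format is invented for this example): converting a position between map coordinate systems, and packing that position into a short text payload suitable for SMS-based location sharing.

        # Hedged sketch; the EPSG codes are real, the SMS payload format is hypothetical.
        from pyproj import Transformer

        # Reproject a WGS84 position (lon/lat) into Web Mercator for map display.
        to_web_mercator = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
        lon, lat = 121.47, 31.23
        x, y = to_web_mercator.transform(lon, lat)
        print(round(x), round(y))

        # Hypothetical compact SMS payload: latitude, longitude and the CRS they refer to.
        sms_payload = f"LOC;{lat:.5f};{lon:.5f};EPSG:4326"
        print(sms_payload)                            # LOC;31.23000;121.47000;EPSG:4326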

  11. Sustainability in Health care by Allocating Resources Effectively (SHARE) 5: developing a model for evidence-driven resource allocation in a local healthcare setting.

    PubMed

    Harris, Claire; Allen, Kelly; Waller, Cara; Green, Sally; King, Richard; Ramsey, Wayne; Kelly, Cate; Thiagarajan, Malar

    2017-05-10

    This is the fifth in a series of papers reporting Sustainability in Health care by Allocating Resources Effectively (SHARE) in a local healthcare setting. This paper synthesises the findings from Phase One of the SHARE Program and presents a model to be implemented and evaluated in Phase Two. Monash Health, a large healthcare network in Melbourne Australia, sought to establish an organisation-wide systematic evidence-based program for disinvestment. In the absence of guidance from the literature, the Centre for Clinical Effectiveness, an in-house 'Evidence Based Practice Support Unit', was asked to explore concepts and practices related to disinvestment, consider the implications for a local health service and identify potential settings and methods for decision-making. Mixed methods were used to capture the relevant information. These included literature reviews; online questionnaire, interviews and structured workshops with a range of stakeholders; and consultation with experts in disinvestment, health economics and health program evaluation. Using the principles of evidence-based change, the project team worked with health service staff, consumers and external experts to synthesise the findings from published literature and local research and develop proposals, frameworks and plans. Multiple influencing factors were extracted from these findings. The implications were both positive and negative and addressed aspects of the internal and external environments, human factors, empirical decision-making, and practical applications. These factors were considered in establishment of the new program; decisions reached through consultation with stakeholders were used to define four program components, their aims and objectives, relationships between components, principles that underpin the program, implementation and evaluation plans, and preconditions for success and sustainability. The components were Systems and processes, Disinvestment projects, Support services, and Program evaluation and research. A model for a systematic approach to evidence-based resource allocation in a local health service was developed. A robust evidence-based investigation of the research literature and local knowledge with a range of stakeholders resulted in rich information with strong consistent messages. At the completion of Phase One, synthesis of the findings enabled development of frameworks and plans and all preconditions for exploration of the four main aims in Phase Two were met.

  12. Implementing efficient and sustainable collaboration between National Immunization Technical Advisory Groups: Report on the 3rd International Technical Meeting, Paris, France, 8-9 December 2014.

    PubMed

    Perronne, Christian; Adjagba, Alex; Duclos, Philippe; Floret, Daniel; Houweling, Hans; Le Goaster, Corinne; Lévy-Brühl, Daniel; Meyer, François; Senouci, Kamel; Wichmann, Ole

    2016-03-08

    Many experts on vaccination are convinced that efforts should be made to encourage increased collaboration between National Immunization Technical Advisory Groups on immunization (NITAGs) worldwide. International meetings were held in Berlin, Germany, in 2010 and 2011, to discuss improvement of the methodologies for the development of evidence-based vaccination recommendations, recognizing the need for collaboration and/or sharing of resources in this effort. A third meeting was held in Paris, France, in December 2014, to consider the design of specific practical activities and an organizational structure to enable effective and sustained collaboration. The following conclusions were reached: (i) The proposed collaboration needs a core functional structure and the establishment or strengthening of an international network of NITAGs. (ii) Priority subjects for collaborative work are background information for recommendations, systematic reviews, mathematical models, health economic evaluations and establishment of common frameworks and methodologies for reviewing and grading the evidence. (iii) The programme of collaborative work should begin with participation of a limited number of NITAGs which already have a high level of expertise. The amount of joint work could be increased progressively through practical activities and pragmatic examples. Due to similar priorities and already existing structures, this should be organized at regional or subregional level. For example, in the European Union a project is funded by the European Centre for Disease Prevention and Control (ECDC) with the aim to set up a network for improving data, methodology and resource sharing and thereby supporting NITAGs. Such regional networking activities should be carried out in collaboration with the World Health Organization (WHO). (iv) A global steering committee should be set up to promote international exchange between regional networks and to increase the involvement of less experienced NITAGs. NITAGs already collaborate at the global level via the NITAG Resource Centre, a web-based platform developed by the Health Policy and Institutional Development Unit (WHO Collaborating Centre) of the Agence de Médecine Préventive (AMP-HPID). It would be appropriate to continue facilitating the coordination of this global network through the AMP-HPID NITAG Resource Centre. (v) While sharing work products and experiences, each NITAG would retain responsibility for its own decision-making and country-specific recommendations. Copyright © 2016. Published by Elsevier Ltd.. All rights reserved.

  13. Using PIDs to Support the Full Research Data Publishing Lifecycle

    NASA Astrophysics Data System (ADS)

    Waard, A. D.

    2016-12-01

    Persistent identifiers can help support scientific research, track scientific impact and let researchers achieve recognition for their work. We discuss a number of ways in which Elsevier utilizes PIDs to support the scholarly lifecycle: To improve the process of storing and sharing data, Mendeley Data (http://data.mendeley.com) makes use of persistent identifiers to support the dynamic nature of data and software, by tracking and recording the provenance and versioning of datasets. This system now allows the comparison of different versions of a dataset, to see precisely what was changed during a versioning update. To present research data in context for the reader, we include PIDs in research articles as hyperlinks: https://www.elsevier.com/books-and-journals/content-innovation/data-base-linking. In some cases, PIDs are used to fetch data files from the repositories, which allows the embedding of visualizations, e.g. with PANGAEA and PubChem: https://www.elsevier.com/books-and-journals/content-innovation/protein-viewer; https://www.elsevier.com/books-and-journals/content-innovation/pubchem. To normalize referenced data elements, the Resource Identification Initiative - which we developed together with members of the Force11 RRID group - introduces a unified standard for resource identifiers (RRIDs) that can easily be interpreted by both humans and text mining tools (https://www.force11.org/group/resource-identification-initiative/update-resource-identification-initiative), as can be seen in our Antibody Data app: https://www.elsevier.com/books-and-journals/content-innovation/antibody-data. To enable better citation practices and support a robust metrics system for sharing research data, we have helped develop, and are early adopters of, the Force11 Data Citation Principles and Implementation groups (https://www.force11.org/group/dcip). Lastly, through our work with the Research Data Alliance Publishing Data Services group, we helped create a set of guidelines (http://www.scholix.org/guidelines) and a demonstrator service (http://dliservice.research-infrastructures.eu/#/) for a linked data network connecting datasets, articles, and individuals, which all rely on robust PIDs.
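
    As a generic illustration of how a PID can be turned into machine-readable metadata (this is standard DOI content negotiation at doi.org, not Elsevier's internal tooling, and the DOI value below is a placeholder), the sketch dereferences a dataset DOI and reads back CSL JSON.

        # Generic DOI resolution via content negotiation; the DOI value is a placeholder.
        import requests

        def resolve_doi(doi: str) -> dict:
            """Fetch citation metadata for a DOI as CSL JSON from doi.org."""
            resp = requests.get(
                f"https://doi.org/{doi}",
                headers={"Accept": "application/vnd.citationstyles.csl+json"},
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()

        meta = resolve_doi("10.17632/example.1")      # placeholder dataset DOI
        print(meta.get("title"), meta.get("URL"))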

  14. The RICORDO approach to semantic interoperability for biomedical data and models: strategy, standards and solutions

    PubMed Central

    2011-01-01

    Background The practice and research of medicine generates considerable quantities of data and model resources (DMRs). Although in principle biomedical resources are re-usable, in practice few can currently be shared. In particular, the clinical communities in physiology and pharmacology research, as well as medical education, (i.e. PPME communities) are facing considerable operational and technical obstacles in sharing data and models. Findings We outline the efforts of the PPME communities to achieve automated semantic interoperability for clinical resource documentation in collaboration with the RICORDO project. Current community practices in resource documentation and knowledge management are overviewed. Furthermore, requirements and improvements sought by the PPME communities to current documentation practices are discussed. The RICORDO plan and effort in creating a representational framework and associated open software toolkit for the automated management of PPME metadata resources is also described. Conclusions RICORDO is providing the PPME community with tools to effect, share and reason over clinical resource annotations. This work is contributing to the semantic interoperability of DMRs through ontology-based annotation by (i) supporting more effective navigation and re-use of clinical DMRs, as well as (ii) sustaining interoperability operations based on the criterion of biological similarity. Operations facilitated by RICORDO will range from automated dataset matching to model merging and managing complex simulation workflows. In effect, RICORDO is contributing to community standards for resource sharing and interoperability. PMID:21878109

  15. Relative time sharing: new findings and an extension of the resource allocation model of temporal processing.

    PubMed

    Buhusi, Catalin V; Meck, Warren H

    2009-07-12

    Individuals time as if using a stopwatch that can be stopped or reset on command. Here, we review behavioural and neurobiological data supporting the time-sharing hypothesis that perceived time depends on the attentional and memory resources allocated to the timing process. Neuroimaging studies in humans suggest that timekeeping tasks engage brain circuits typically involved in attention and working memory. Behavioural, pharmacological, lesion and electrophysiological studies in lower animals support this time-sharing hypothesis. When subjects attend to a second task, or when intruder events are presented, estimated durations are shorter, presumably due to resources being taken away from timing. Here, we extend the time-sharing hypothesis by proposing that resource reallocation is proportional to the perceived contrast, both in temporal and non-temporal features, between intruders and the timed events. New findings support this extension by showing that the effect of an intruder event is dependent on the relative duration of the intruder to the intertrial interval. The conclusion is that the brain circuits engaged by timekeeping comprise not only those primarily involved in time accumulation, but also those involved in the maintenance of attentional and memory resources for timing, and in the monitoring and reallocation of those resources among tasks.

  16. 7 CFR 631.12 - Cost-share payments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... resource management systems or a practice or an identifiable unit according to specifications will be made... 7 Agriculture 6 2010-01-01 2010-01-01 false Cost-share payments. 631.12 Section 631.12 Agriculture Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES CONSERVATION SERVICE, DEPARTMENT OF...

  17. Data sharing in the ag community - what are current challenges, benefits, and opportunities

    USDA-ARS?s Scientific Manuscript database

    The model for building agronomic science today and into the future to meet global food demands with limited resources will be through public-private data acquisition, sharing, and collaborative analysis. The public perspective focuses on preserving natural resources. The private perspective focuses ...

  18. LearnAlaska Portal

    Science.gov Websites


  19. 14 CFR 1274.904 - Resource sharing requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... will share in providing the resources necessary to perform the agreement. NASA funding and non-cash contributions (personnel, equipment, facilities, etc.) and the dollar value of the Recipient's cash and/or non-cash contribution will be on a __ percent (NASA)—__ percent (Recipient) basis. Criteria and procedures...

  20. Resource Sharing in Times of Retrenchment.

    ERIC Educational Resources Information Center

    Sloan, Bernard G.

    1992-01-01

    Discusses the impact of decreases in revenues on the resource-sharing activities of ILLINET Online and the Illinois Library Computer Systems Organization (ILCSO). Strategies for successfully coping with fiscal crises are suggested, including reducing levels of service and initiating user fees for interlibrary loans and faxing photocopied journal…

  1. AccrualNet: Addressing Low Accrual Via a Knowledge-Based, Community of Practice Platform

    PubMed Central

    Massett, Holly A.; Parreco, Linda K.; Padberg, Rose Mary; Richmond, Ellen S.; Rienzo, Marie E.; Leonard, Colleen E. Ryan; Quesenbery, Whitney; Killiam, H. William; Johnson, Lenora E.; Dilts, David M.

    2011-01-01

    Purpose: Present the design and initial evaluation of a unique, Web-enabled platform for the development of a community of practice around issues of oncology clinical trial accrual. Methods: The National Cancer Institute (NCI) conducted research with oncology professionals to identify unmet clinical trial accrual needs in the field. In response, a comprehensive platform for accrual resources, AccrualNet, was created by using an agile development process, storyboarding, and user testing. Literature and resource searches identified relevant content to populate the site. Descriptive statistics were tracked for resource and site usage. Use cases were defined to support implementation. Results: AccrualNet has five levels: (1) clinical trial macrostages (prestudy, active study, and poststudy); (2) substages (developing a protocol, selecting a trial, preparing to open, enrolling patients, managing the trial, retaining participants, and lessons learned); (3) strategies for each substage; (4) multiple activities for each strategy; and (5) multiple resources for each activity. Since its launch, AccrualNet has had more than 45,000 page views, with the Tools & Resources, Conversations, and Training sections being the most viewed. Total resources have increased 69%, to 496 items. Analysis of articles in the site reveals that 22% are from two journals and 46% of the journals supplied a single article. To date, there are 29 conversations with 43 posts. Four use cases are discussed. Conclusion: AccrualNet represents a unique, centralized comprehensive-solution platform to systematically capture accrual knowledge for all stages of a clinical trial. It is designed to foster a community of practice by encouraging users to share additional strategies, resources, and ideas. PMID:22379429

  2. AccrualNet: Addressing Low Accrual Via a Knowledge-Based, Community of Practice Platform.

    PubMed

    Massett, Holly A; Parreco, Linda K; Padberg, Rose Mary; Richmond, Ellen S; Rienzo, Marie E; Leonard, Colleen E Ryan; Quesenbery, Whitney; Killiam, H William; Johnson, Lenora E; Dilts, David M

    2011-11-01

    Present the design and initial evaluation of a unique, Web-enabled platform for the development of a community of practice around issues of oncology clinical trial accrual. The National Cancer Institute (NCI) conducted research with oncology professionals to identify unmet clinical trial accrual needs in the field. In response, a comprehensive platform for accrual resources, AccrualNet, was created by using an agile development process, storyboarding, and user testing. Literature and resource searches identified relevant content to populate the site. Descriptive statistics were tracked for resource and site usage. Use cases were defined to support implementation. AccrualNet has five levels: (1) clinical trial macrostages (prestudy, active study, and poststudy); (2) substages (developing a protocol, selecting a trial, preparing to open, enrolling patients, managing the trial, retaining participants, and lessons learned); (3) strategies for each substage; (4) multiple activities for each strategy; and (5) multiple resources for each activity. Since its launch, AccrualNet has had more than 45,000 page views, with the Tools & Resources, Conversations, and Training sections being the most viewed. Total resources have increased 69%, to 496 items. Analysis of articles in the site reveals that 22% are from two journals and 46% of the journals supplied a single article. To date, there are 29 conversations with 43 posts. Four use cases are discussed. AccrualNet represents a unique, centralized comprehensive-solution platform to systematically capture accrual knowledge for all stages of a clinical trial. It is designed to foster a community of practice by encouraging users to share additional strategies, resources, and ideas.

  3. Participation process and outcome interactions: Exploring participation in water resource management

    NASA Astrophysics Data System (ADS)

    Carr, G.; Loucks, D. P.; Blöschl, G.

    2012-04-01

    Evaluating participation programmes, projects and activities aids understanding of effective mechanisms and enables the identification of improvements to current strategies. Characteristics of participation processes, such as whether the process is cost-effective, adequately facilitated, accessible, includes a representative section of society or interest groups and allocates power equivalently between participants, are commonly described and evaluated in the literature. A key question concerns whether effective processes lead to desirable outcomes. Two types of outcomes can be identified from participation programmes - tangible and non-tangible. Tangible outcomes include resource management changes or resource quality changes. Non-tangible outcomes include developing and strengthening communication and action networks, building trust between individuals and/or organisations, developing innovative solutions, or developing shared knowledge and understandings of issues. To better understand how participation impacts upon resource management it is necessary to identify i) how non-tangible outcomes lead to resource management outcomes and ii) which characteristics of the participation process are connected to achieving non-tangible outcomes. This has been attempted with a literature based meta-analysis. Literature has been analysed to identify outcomes from participation programmes, and the process characteristics present that are associated with promoting or inhibiting their achievement. Preliminary analysis shows that process characteristics such as representation, facilitation and accessibility are important for achieving non-tangible outcomes. The relationship between non-tangible outcomes and resource management outcomes is less clear in the literature. This may be due to the different timescales over which the different types of outcomes emerge (resource management outcomes emerge over longer time periods) and the different contexts or settings in which participation takes place.

  4. Moving Virtual Research Environments from high maintenance Stovepipes to Multi-purpose Sustainable Service-oriented Science Platforms

    NASA Astrophysics Data System (ADS)

    Klump, Jens; Fraser, Ryan; Wyborn, Lesley; Friedrich, Carsten; Squire, Geoffrey; Barker, Michelle; Moloney, Glenn

    2017-04-01

    The researcher of today is likely to be part of a team distributed over multiple sites that will access data from an external repository and then process the data on a public or private cloud or even on a large centralised supercomputer. They are increasingly likely to use a mixture of their own code, third party software and libraries, or even access global community codes. These components will be connected into Virtual Research Environments (VREs) that enable members of the research team who are not co-located to actively work together at various scales to share data, models, tools, software, workflows, best practices, infrastructures, etc. Many VREs are built in isolation: designed to meet a specific research program with components tightly coupled and not capable of being repurposed for other use cases - they are becoming 'stovepipes'. The limited number of users of some VREs also means that the cost of maintenance per researcher can be unacceptably high. The alternative is to develop service-oriented Science Platforms that enable multiple communities to develop specialised solutions for specific research programs. The platforms can offer access to data, software tools and processing infrastructures (cloud, supercomputers) through globally distributed, interconnected modules. In Australia, the Virtual Geophysics Laboratory (VGL) was initially built to give a specific set of researchers in government agencies access to specific data sets and a limited number of tools; it is now rapidly evolving into a multi-purpose Earth science platform with access to an increased variety of data, a broader range of tools, users from more sectors and a diversity of computational infrastructures. The expansion has been relatively easy, because of the architecture whereby data, tools and compute resources are loosely coupled via interfaces that are built on international standards and accessed as services wherever possible. In recent years, investments in discoverability and accessibility of data via online services in Australia mean that data resources can be easily added to the virtual environments as and when required. Another key to increasing the reusability and uptake of a VRE is the capability to capture workflows so that they can be reused and repurposed both within and beyond the community that defined the original use case. Unfortunately, Software-as-a-Service in the research sector is not yet mature. In response, we developed a Scientific Software solutions Center (SSSC) that enables researchers to discover, deploy and then share computational codes, code snippets or processes both in a human and machine-readable manner. Growth has come not only from within the Earth science community but also from the Australian Virtual Laboratory community, which is building VREs for a diversity of communities such as astronomy, genomics, environment, humanities, climate etc. Components such as access control, provenance, visualisation, accounting etc. are common to all scientific domains and sharing of these across multiple domains reduces costs, but more importantly increases the ability to undertake interdisciplinary science. These efforts are transitioning VREs to more sustainable Service-oriented Science Platforms that can be delivered in an agile, adaptable manner for broader community interests.

  5. Time to consider sharing data extracted from trials included in systematic reviews.

    PubMed

    Wolfenden, Luke; Grimshaw, Jeremy; Williams, Christopher M; Yoong, Sze Lin

    2016-11-03

    While the debate regarding shared clinical trial data has shifted from whether such data should be shared to how this is best achieved, the sharing of data collected as part of systematic reviews has received little attention. In this commentary, we discuss the potential benefits of coordinated efforts to share data collected as part of systematic reviews. There are a number of potential benefits of systematic review data sharing. Shared information and data obtained as part of the systematic review process may reduce unnecessary duplication, reduce the demand on trialists to service repeated requests from reviewers for data, and improve the quality and efficiency of future reviews. Sharing also facilitates research to improve clinical trial and systematic review methods and supports additional analyses to address secondary research questions. While concerns regarding appropriate use of data, costs, or the academic return for original review authors may impede more open access to information extracted as part of systematic reviews, many of these issues are being addressed, and infrastructure to enable greater access to such information is being developed. Embracing systems to enable more open access to systematic review data has considerable potential to maximise the benefits of research investment in undertaking systematic reviews.

  6. Fair Shares and Sharing Fairly: A Survey of Public Views on Open Science, Informed Consent and Participatory Research in Biobanking.

    PubMed

    Joly, Yann; Dalpé, Gratien; So, Derek; Birko, Stanislav

    2015-01-01

    Biobanks are important resources which enable large-scale genomic research with human samples and data, raising significant ethical concerns about how participants' information is managed and shared. Three previous studies of the Canadian public's opinion about these topics have been conducted. Building on those results, an online survey representing the first study of public perceptions about biobanking spanning all Canadian provinces was conducted. Specifically, this study examined qualitative views about biobank objectives, governance structure, control and ownership of samples and data, benefit sharing, consent practices and data sharing norms, as well as additional questions and ethical concerns expressed by the public. Over half the respondents preferred to give a one-time general consent for the future sharing of their samples among researchers. Most expressed willingness for their data to be shared with the international scientific community rather than used by one or more Canadian institutions. Whereas more respondents indicated a preference for one-time general consent than any other model of consent, they constituted less than half of the total responses, revealing a lack of consensus among survey respondents regarding this question. Respondents identified biobank objectives, governance structure and accountability as the most important information to provide participants. Respondents' concerns about biobanking generally centred around the control and ownership of biological samples and data, especially with respect to potential misuse by insurers, the government and other third parties. Although almost half the respondents suggested that these should be managed by the researchers' institutions, results indicate that the public is interested in being well-informed about these projects and suggest the importance of increased involvement from participants. In conclusion, the study discusses the viability of several proposed models for informed consent, including e-governance, independent trustees and the use of exclusion clauses, in the context of these new findings about the views of the Canadian public.

  7. Fair Shares and Sharing Fairly: A Survey of Public Views on Open Science, Informed Consent and Participatory Research in Biobanking

    PubMed Central

    Joly, Yann; Dalpé, Gratien; So, Derek; Birko, Stanislav

    2015-01-01

    Context Biobanks are important resources which enable large-scale genomic research with human samples and data, raising significant ethical concerns about how participants’ information is managed and shared. Three previous studies of the Canadian public’s opinion about these topics have been conducted. Building on those results, an online survey representing the first study of public perceptions about biobanking spanning all Canadian provinces was conducted. Specifically, this study examined qualitative views about biobank objectives, governance structure, control and ownership of samples and data, benefit sharing, consent practices and data sharing norms, as well as additional questions and ethical concerns expressed by the public. Results Over half the respondents preferred to give a one-time general consent for the future sharing of their samples among researchers. Most expressed willingness for their data to be shared with the international scientific community rather than used by one or more Canadian institutions. Whereas more respondents indicated a preference for one-time general consent than any other model of consent, they constituted less than half of the total responses, revealing a lack of consensus among survey respondents regarding this question. Respondents identified biobank objectives, governance structure and accountability as the most important information to provide participants. Respondents’ concerns about biobanking generally centred around the control and ownership of biological samples and data, especially with respect to potential misuse by insurers, the government and other third parties. Although almost half the respondents suggested that these should be managed by the researchers’ institutions, results indicate that the public is interested in being well-informed about these projects and suggest the importance of increased involvement from participants. In conclusion, the study discusses the viability of several proposed models for informed consent, including e-governance, independent trustees and the use of exclusion clauses, in the context of these new findings about the views of the Canadian public. PMID:26154134

  8. Agile Data Curation Case Studies Leading to the Identification and Development of Data Curation Design Patterns

    NASA Astrophysics Data System (ADS)

    Benedict, K. K.; Lenhardt, W. C.; Young, J. W.; Gordon, L. C.; Hughes, S.; Santhana Vannan, S. K.

    2017-12-01

    The planning for and development of efficient workflows for the creation, reuse, sharing, documentation, publication and preservation of research data is a general challenge that research teams of all sizes face. In response to (1) requirements from funding agencies for full-lifecycle data management plans that will result in well documented, preserved, and shared research data products, (2) increasing requirements from publishers for shared data in conjunction with submitted papers, (3) interdisciplinary research teams' needs for efficient data sharing within projects, and (4) increasing reuse of research data for replication and for new, unanticipated research, policy development, and public use, alternative strategies to traditional data life cycle approaches must be developed and shared that enable research teams to meet these requirements while meeting the core science objectives of their projects within the available resources. In support of achieving these goals, the concept of Agile Data Curation has been developed, with parallel activities in support of (1) identifying a set of shared values and principles that underlie the objectives of agile data curation, (2) soliciting case studies from the Earth science and other research communities that illustrate aspects of what the contributors consider agile data curation methods and practices, and (3) identifying or developing design patterns that are high-level abstractions from successful data curation practice that are related to common data curation problems for which common solution strategies may be employed. This paper provides a collection of case studies that have been contributed by the Earth science community, and an initial analysis of those case studies to map them to emerging shared data curation problems and their potential solutions. Following the initial analysis of these problems and potential solutions, existing design patterns from software engineering and related disciplines are identified as a starting point for the development of a catalog of data curation design patterns that may be reused in the design and execution of new data curation processes.

  9. Optimizing the resource usage in Cloud based environments: the Synergy approach

    NASA Astrophysics Data System (ADS)

    Zangrando, L.; Llorens, V.; Sgaravatto, M.; Verlato, M.

    2017-10-01

    Managing resource allocation in a cloud-based data centre serving multiple virtual organizations is a challenging issue. In fact, while batch systems are able to allocate resources to different user groups according to specific shares imposed by the data centre administrator, without a static partitioning of such resources, this is not so straightforward in the most common cloud frameworks, e.g. OpenStack. In the current OpenStack implementation, it is only possible to grant fixed quotas to the different user groups, and these quotas cannot be exceeded by one group even if there are unused resources allocated to other groups. Moreover, in the existing OpenStack implementation, when no resources are available, new requests are simply rejected: it is then up to the client to later re-issue the request. The recently started EU-funded INDIGO-DataCloud project is addressing this issue through “Synergy”, a new advanced scheduling service targeted at OpenStack. Synergy adopts a fair-share model for resource provisioning which guarantees that resources are distributed among users following the fair-share policies defined by the administrator, also taking into account the past usage of such resources. We present the architecture of Synergy, the status of its implementation, some preliminary results and the foreseen evolution of the service.
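
    The abstract describes a fair-share model driven by administrator-defined shares and past usage but does not give the formula, so the sketch below is only an assumed illustration of the idea: a group's scheduling priority falls as its historical consumption exceeds its assigned share, so queued requests from under-served groups are served first.

        # Assumed fair-share formula for illustration; not Synergy's actual algorithm.
        def fair_share_priority(assigned_share: float,
                                group_usage: float,
                                total_usage: float,
                                decay: float = 0.5) -> float:
            """Higher value = schedule sooner; `decay` weights historical usage."""
            if total_usage == 0:
                return assigned_share
            normalized_usage = group_usage / total_usage
            # Groups below their assigned share end up with priority above it.
            return assigned_share * (assigned_share / max(normalized_usage, 1e-9)) ** decay

        queued = {"groupA": (0.60, 900.0), "groupB": (0.40, 100.0)}   # (share, CPU-hours used)
        total = sum(usage for _, usage in queued.values())
        ranked = sorted(queued,
                        key=lambda g: fair_share_priority(queued[g][0], queued[g][1], total),
                        reverse=True)
        print(ranked)                                  # ['groupB', 'groupA']: groupB is under-served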

  10. Nebhydro: Sharing Geospatial Data to Supportwater Management in Nebraska

    NASA Astrophysics Data System (ADS)

    Kamble, B.; Irmak, A.; Hubbard, K.; Deogun, J.; Dvorak, B.

    2012-12-01

    Recent advances in web-enabled geographical technologies have the potential to make a dramatic impact on the development of highly interactive spatial applications on the web for visualization of large-scale geospatial data by water resources and irrigation scientists. Spatial and point-scale water resources data visualization is an emerging and challenging application domain. Query-based visual explorations of geospatial hydrological data can play an important role in stimulating scientific hypotheses and seeking causal relationships among hydro variables. The Nebraska Hydrological Information System (NebHydro) utilizes ESRI's ArcGIS server technology to increase technological awareness among farmers, irrigation managers and policy makers. Web-based geospatial applications are an effective way to expose scientific hydrological datasets to the research community and the public. NebHydro uses Adobe Flex technology to offer an online visualization and data analysis system for presentation of social and economic data. Internet mapping services are an integrated product of GIS and Internet technologies and a favored solution for achieving GIS interoperability. The development of Internet-based GIS services in the state of Nebraska showcases the benefits of sharing geospatial hydrological data among agencies, resource managers and policy makers. Geospatial hydrological information (evapotranspiration from remote sensing, vegetation indices (NDVI), USGS stream gauge data, climatic data, etc.) is generally generated through model simulation (METRIC, SWAP, Linux- and Python-based scripting, etc.). Information is compiled into and stored within object-oriented relational spatial databases using a geodatabase information model that supports the key data types needed by applications, including features, relationships, networks, imagery, terrains, maps and layers. The system provides online access, querying, visualization, and analysis of the hydrological data from several sources in one place. The study indicates that internet GIS, developed using advanced technologies, provides valuable education potential to users in hydrology and irrigation engineering and suggests that such a system can support advanced hydrological data access and analysis tools to improve the utility of data in operations. Keywords: Hydrological Information System, NebHydro, Water Management, data sharing, data visualization, ArcGIS server.
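
    Since NebHydro is built on ESRI's ArcGIS Server, a client would typically reach its layers through the standard ArcGIS REST query interface; the sketch below shows that pattern with a placeholder service URL, layer id and field name, since the actual endpoints are not listed in the abstract.

        # Standard ArcGIS Server REST query pattern; URL, layer id and field name are placeholders.
        import requests

        LAYER_URL = "https://example.org/arcgis/rest/services/NebHydro/MapServer/0"

        def query_layer(where: str = "1=1", out_fields: str = "*") -> list:
            """Return features from a published map-service layer."""
            resp = requests.get(
                f"{LAYER_URL}/query",
                params={"where": where, "outFields": out_fields, "f": "json"},
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json().get("features", [])

        # Example: stations whose reference evapotranspiration exceeds a threshold.
        for feature in query_layer(where="ETR_MM > 5"):
            print(feature["attributes"])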

  11. Emerging Partnerships: Safer Communities, Transformed Offenders, Shared Educational Resources.

    ERIC Educational Resources Information Center

    Brockett, E. Anne; Gibbons, Virginia M.

    Applying the philosophy that strategic partnerships are the most effective way to share knowledge, skills, and resources, emerging community corrections adult education programs and existing community adult education service providers have begun to forge critical linkages. In Texas, the law now requires assessment of the educational level of all…

  12. Sharing Resources in Open Educational Communities

    ERIC Educational Resources Information Center

    Tosato, Paolo; Arranz, Beatriz Carramolino; Avi, Bartolomé Rubia

    2014-01-01

    The spread of Internet and the latest Web developments have promoted the relationships between teachers, learners and institutions, as well as the creation and sharing of new Open Educational Resources (OERs). Despite this fact, many projects and research efforts paid more attention to content distribution focusing on their format and description,…

  13. Multimodal Information Sharing Team (MIST) - Port of Baltimore Industry and Public Sector Cooperation for Information Sharing

    DTIC Science & Technology

    2012-11-01

    engage key stakeholders. For example, it was identified that the liquid bulk, private terminal operators, rail and trucking groups are not...implementation of their counterterrorism and public safety missions. 30 Maryland Natural Resources Police/Department of Natural Resources ( MNRP ) http

  14. MiMiR – an integrated platform for microarray data sharing, mining and analysis

    PubMed Central

    Tomlinson, Chris; Thimma, Manjula; Alexandrakis, Stelios; Castillo, Tito; Dennis, Jayne L; Brooks, Anthony; Bradley, Thomas; Turnbull, Carly; Blaveri, Ekaterini; Barton, Geraint; Chiba, Norie; Maratou, Klio; Soutter, Pat; Aitman, Tim; Game, Laurence

    2008-01-01

    Background Despite considerable efforts within the microarray community for standardising data format, content and description, microarray technologies present major challenges in managing, sharing, analysing and re-using the large amount of data generated locally or internationally. Additionally, it is recognised that inconsistent and low quality experimental annotation in public data repositories significantly compromises the re-use of microarray data for meta-analysis. MiMiR, the Microarray data Mining Resource was designed to tackle some of these limitations and challenges. Here we present new software components and enhancements to the original infrastructure that increase accessibility, utility and opportunities for large scale mining of experimental and clinical data. Results A user friendly Online Annotation Tool allows researchers to submit detailed experimental information via the web at the time of data generation rather than at the time of publication. This ensures the easy access and high accuracy of meta-data collected. Experiments are programmatically built in the MiMiR database from the submitted information and details are systematically curated and further annotated by a team of trained annotators using a new Curation and Annotation Tool. Clinical information can be annotated and coded with a clinical Data Mapping Tool within an appropriate ethical framework. Users can visualise experimental annotation, assess data quality, download and share data via a web-based experiment browser called MiMiR Online. All requests to access data in MiMiR are routed through a sophisticated middleware security layer thereby allowing secure data access and sharing amongst MiMiR registered users prior to publication. Data in MiMiR can be mined and analysed using the integrated EMAAS open source analysis web portal or via export of data and meta-data into Rosetta Resolver data analysis package. Conclusion The new MiMiR suite of software enables systematic and effective capture of extensive experimental and clinical information with the highest MIAME score, and secure data sharing prior to publication. MiMiR currently contains more than 150 experiments corresponding to over 3000 hybridisations and supports the Microarray Centre's large microarray user community and two international consortia. The MiMiR flexible and scalable hardware and software architecture enables secure warehousing of thousands of datasets, including clinical studies, from microarray and potentially other -omics technologies. PMID:18801157

  15. MiMiR--an integrated platform for microarray data sharing, mining and analysis.

    PubMed

    Tomlinson, Chris; Thimma, Manjula; Alexandrakis, Stelios; Castillo, Tito; Dennis, Jayne L; Brooks, Anthony; Bradley, Thomas; Turnbull, Carly; Blaveri, Ekaterini; Barton, Geraint; Chiba, Norie; Maratou, Klio; Soutter, Pat; Aitman, Tim; Game, Laurence

    2008-09-18

    Despite considerable efforts within the microarray community for standardising data format, content and description, microarray technologies present major challenges in managing, sharing, analysing and re-using the large amount of data generated locally or internationally. Additionally, it is recognised that inconsistent and low quality experimental annotation in public data repositories significantly compromises the re-use of microarray data for meta-analysis. MiMiR, the Microarray data Mining Resource was designed to tackle some of these limitations and challenges. Here we present new software components and enhancements to the original infrastructure that increase accessibility, utility and opportunities for large scale mining of experimental and clinical data. A user friendly Online Annotation Tool allows researchers to submit detailed experimental information via the web at the time of data generation rather than at the time of publication. This ensures the easy access and high accuracy of meta-data collected. Experiments are programmatically built in the MiMiR database from the submitted information and details are systematically curated and further annotated by a team of trained annotators using a new Curation and Annotation Tool. Clinical information can be annotated and coded with a clinical Data Mapping Tool within an appropriate ethical framework. Users can visualise experimental annotation, assess data quality, download and share data via a web-based experiment browser called MiMiR Online. All requests to access data in MiMiR are routed through a sophisticated middleware security layer thereby allowing secure data access and sharing amongst MiMiR registered users prior to publication. Data in MiMiR can be mined and analysed using the integrated EMAAS open source analysis web portal or via export of data and meta-data into Rosetta Resolver data analysis package. The new MiMiR suite of software enables systematic and effective capture of extensive experimental and clinical information with the highest MIAME score, and secure data sharing prior to publication. MiMiR currently contains more than 150 experiments corresponding to over 3000 hybridisations and supports the Microarray Centre's large microarray user community and two international consortia. The MiMiR flexible and scalable hardware and software architecture enables secure warehousing of thousands of datasets, including clinical studies, from microarray and potentially other -omics technologies.

  16. The semantic web in translational medicine: current applications and future directions

    PubMed Central

    Machado, Catia M.; Rebholz-Schuhmann, Dietrich; Freitas, Ana T.; Couto, Francisco M.

    2015-01-01

    Semantic web technologies offer an approach to data integration and sharing, even for resources developed independently or broadly distributed across the web. This approach is particularly suitable for scientific domains that profit from large amounts of data that reside in the public domain and that have to be exploited in combination. Translational medicine is such a domain, which in addition has to integrate private data from the clinical domain with proprietary data from the pharmaceutical domain. In this survey, we present the results of our analysis of translational medicine solutions that follow a semantic web approach. We assessed these solutions in terms of their target medical use case; the resources covered to achieve their objectives; and their use of existing semantic web resources for the purposes of data sharing, data interoperability and knowledge discovery. The semantic web technologies seem to fulfill their role in facilitating the integration and exploration of data from disparate sources, but it is also clear that simply using them is not enough. It is fundamental to reuse resources, to define mappings between resources, to share data and knowledge. All these aspects allow the instantiation of translational medicine at the semantic web-scale, thus resulting in a network of solutions that can share resources for a faster transfer of new scientific results into the clinical practice. The envisioned network of translational medicine solutions is on its way, but it still requires resolving the challenges of sharing protected data and of integrating semantic-driven technologies into the clinical practice. PMID:24197933

  17. The semantic web in translational medicine: current applications and future directions.

    PubMed

    Machado, Catia M; Rebholz-Schuhmann, Dietrich; Freitas, Ana T; Couto, Francisco M

    2015-01-01

    Semantic web technologies offer an approach to data integration and sharing, even for resources developed independently or broadly distributed across the web. This approach is particularly suitable for scientific domains that profit from large amounts of data that reside in the public domain and that have to be exploited in combination. Translational medicine is such a domain, which in addition has to integrate private data from the clinical domain with proprietary data from the pharmaceutical domain. In this survey, we present the results of our analysis of translational medicine solutions that follow a semantic web approach. We assessed these solutions in terms of their target medical use case; the resources covered to achieve their objectives; and their use of existing semantic web resources for the purposes of data sharing, data interoperability and knowledge discovery. The semantic web technologies seem to fulfill their role in facilitating the integration and exploration of data from disparate sources, but it is also clear that simply using them is not enough. It is fundamental to reuse resources, to define mappings between resources, to share data and knowledge. All these aspects allow the instantiation of translational medicine at the semantic web-scale, thus resulting in a network of solutions that can share resources for a faster transfer of new scientific results into the clinical practice. The envisioned network of translational medicine solutions is on its way, but it still requires resolving the challenges of sharing protected data and of integrating semantic-driven technologies into the clinical practice. © The Author 2013. Published by Oxford University Press.
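
    A minimal sketch of the mapping-and-reuse pattern the two records above emphasise, using rdflib with invented example vocabularies and identifiers: an owl:sameAs mapping links a clinical description of a condition to a pharmacological one, and a single SPARQL query then walks across both resources in the merged graph.

        # Invented example vocabularies; only the pattern (mapping + cross-resource query) matters.
        from rdflib import Graph, Namespace
        from rdflib.namespace import OWL, RDF

        CLIN = Namespace("http://example.org/clinical/")
        PHARM = Namespace("http://example.org/pharma/")

        g = Graph()
        g.add((CLIN.Patient42, RDF.type, CLIN.Patient))
        g.add((CLIN.Patient42, CLIN.diagnosedWith, CLIN.Hypertension))
        g.add((PHARM.HTN, PHARM.treatedBy, PHARM.Lisinopril))
        g.add((CLIN.Hypertension, OWL.sameAs, PHARM.HTN))   # the cross-domain mapping

        results = g.query("""
            PREFIX clin: <http://example.org/clinical/>
            PREFIX pharm: <http://example.org/pharma/>
            PREFIX owl: <http://www.w3.org/2002/07/owl#>
            SELECT ?drug WHERE {
                ?patient clin:diagnosedWith ?dx .
                ?dx owl:sameAs ?dx2 .
                ?dx2 pharm:treatedBy ?drug .
            }""")
        for row in results:
            print(row.drug)                              # http://example.org/pharma/Lisinopril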

  18. Sustainability in Health care by Allocating Resources Effectively (SHARE) 11: reporting outcomes of an evidence-driven approach to disinvestment in a local healthcare setting.

    PubMed

    Harris, Claire; Allen, Kelly; Ramsey, Wayne; King, Richard; Green, Sally

    2018-05-30

    This is the final paper in a thematic series reporting a program of Sustainability in Health care by Allocating Resources Effectively (SHARE) in a local healthcare setting. The SHARE Program was established to explore a systematic, integrated, evidence-based organisation-wide approach to disinvestment in a large Australian health service network. This paper summarises the findings, discusses the contribution of the SHARE Program to the body of knowledge and understanding of disinvestment in the local healthcare setting, and considers implications for policy, practice and research. The SHARE program was conducted in three phases. Phase One was undertaken to understand concepts and practices related to disinvestment and the implications for a local health service and, based on this information, to identify potential settings and methods for decision-making about disinvestment. The aim of Phase Two was to implement and evaluate the proposed methods to determine which were sustainable, effective and appropriate in a local health service. A review of the current literature incorporating the SHARE findings was conducted in Phase Three to contribute to the understanding of systematic approaches to disinvestment in the local healthcare context. SHARE differed from many other published examples of disinvestment in several ways: by seeking to identify and implement disinvestment opportunities within organisational infrastructure rather than as standalone projects; considering disinvestment in the context of all resource allocation decisions rather than in isolation; including allocation of non-monetary resources as well as financial decisions; and focusing on effective use of limited resources to optimise healthcare outcomes. The SHARE findings provide a rich source of new information about local health service decision-making, in a level of detail not previously reported, to inform others in similar situations. Multiple innovations related to disinvestment were found to be acceptable and feasible in the local setting. Factors influencing decision-making, implementation processes and final outcomes were identified; and methods for further exploration, or avoidance, in attempting disinvestment in this context are proposed based on these findings. The settings, frameworks, models, methods and tools arising from the SHARE findings have potential to enhance health care and patient outcomes.

  19. Creating a Learning Community for Solutions to Climate Change

    NASA Astrophysics Data System (ADS)

    Bloom, A. J.; Benedict, B. A.; Blockstein, D. E.; Hassenzahl, D. M.; Hunter, A.; Jorgensen, A. D.; Pfirman, S. L.

    2011-12-01

    The rapidly evolving and interdisciplinary nature of climate change presents a challenge to colleges and universities as they seek to educate undergraduate students. To address this challenge, the National Council for Science and the Environment (NCSE) with NSF funding is creating a nationwide cyber-enabled learning community called CAMEL (Climate, Adaptation, and Mitigation e-Learning). CAMEL engages experts in science, policy and decision-making, education, and assessment in the production of a virtual toolbox of curricular resources designed for teaching climate change causes, consequences, and solutions. CAMEL is: • Developing cyberinfrastructure that supports and promotes the creation of materials and community; • Generating materials for the Encyclopedia of Earth, a site averaging 50,000 views per day; • Ensuring that materials developed and shared are founded on the best available scientific information and follow the most appropriate educational practices; • Assisting faculty at institutions of higher education across the United States as they create, improve, test, and share resources for teaching students not only how to diagnose climate change problems, but also to identify and effect solutions; • Evaluating the determinants of successful community building using cybermedia. The community and resultant content range from general education to upper division courses for students in a variety of majors. At the center of the community are the 160 colleges and universities represented in NCSE's Council of Environmental Deans and Directors. Members of this group represent recognized expertise in virtually all areas of this project. A team with substantial experience with evaluating innovative initiatives in STEM education is administering the evaluation component.

  20. VISA: An Automatic Aware and Visual Aids Mechanism for Improving the Correct Use of Geospatial Data

    NASA Astrophysics Data System (ADS)

    Hong, J. H.; Su, Y. T.

    2016-06-01

    With the fast growth of internet-based sharing mechanisms and OpenGIS technology, users nowadays enjoy the luxury of quickly locating and accessing a variety of geospatial data for the tasks at hand. While this sharing innovation tremendously expands the possibilities of application and reduces development cost, users nevertheless have to deal with all kinds of "differences" implicitly hidden behind the acquired georesources. We argue that the next generation of GIS-based environments, whether internet-based or not, must have built-in knowledge to automatically and correctly assess the fitness of data for use and present the analyzed results to users in an intuitive and meaningful way. The VISA approach proposed in this paper refers to four different types of visual aids that can be respectively used for presenting analyzed results, namely, virtual layer, informative window, symbol transformation and augmented TOC. The VISA-enabled interface works in an automatic-aware fashion: the standardized metadata serve as the known facts about the selected geospatial resources, algorithms analyze the differences in temporality and quality among those resources, and the transformation of the analyzed results into visual aids is executed automatically. This presents a new way of bridging the communication gap between systems and users. GIS has long been seen as a powerful integration tool, but its achievements will be highly restricted if it fails to provide a friendly and correct working platform.
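
    The sketch below is a toy rendition (not the authors' implementation) of the automatic-aware idea: standardized metadata for a selected layer are checked against the task's requirements, and each finding is mapped to one of the four kinds of visual aids listed above. The field names and thresholds are invented.

    ```python
    # Toy fitness-for-use check driven by (hypothetical) standardized metadata.
    from datetime import date

    def assess_fitness(layer_meta, task):
        """Return (finding, suggested visual aid) pairs for one geospatial layer."""
        findings = []
        if layer_meta["reference_date"] < task["oldest_acceptable_date"]:
            findings.append(("outdated temporality", "informative window"))
        if layer_meta["positional_accuracy_m"] > task["max_error_m"]:
            findings.append(("insufficient quality", "symbol transformation"))
        if not findings:
            findings.append(("fit for use", "augmented TOC"))
        return findings

    layer = {"reference_date": date(2009, 5, 1), "positional_accuracy_m": 12.0}
    task = {"oldest_acceptable_date": date(2014, 1, 1), "max_error_m": 5.0}
    for issue, aid in assess_fitness(layer, task):
        print(f"{issue}: present with {aid}")
    ```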

  1. Cloud computing: a new business paradigm for biomedical information sharing.

    PubMed

    Rosenthal, Arnon; Mork, Peter; Li, Maya Hao; Stanford, Jean; Koester, David; Reynolds, Patti

    2010-04-01

    We examine how the biomedical informatics (BMI) community, especially consortia that share data and applications, can take advantage of a new resource called "cloud computing". Clouds generally offer resources on demand. In most clouds, charges are pay per use, based on large farms of inexpensive, dedicated servers, sometimes supporting parallel computing. Substantial economies of scale potentially yield costs much lower than dedicated laboratory systems or even institutional data centers. Overall, even with conservative assumptions, for applications that are not I/O intensive and do not demand a fully mature environment, the numbers suggested that clouds can sometimes provide major improvements, and should be seriously considered for BMI. Methodologically, it was very advantageous to formulate analyses in terms of component technologies; focusing on these specifics enabled us to bypass the cacophony of alternative definitions (e.g., exactly what does a cloud include) and to analyze alternatives that employ some of the component technologies (e.g., an institution's data center). Relative analyses were another great simplifier. Rather than listing the absolute strengths and weaknesses of cloud-based systems (e.g., for security or data preservation), we focus on the changes from a particular starting point, e.g., individual lab systems. We often find a rough parity (in principle), but one needs to examine individual acquisitions--is a loosely managed lab moving to a well managed cloud, or a tightly managed hospital data center moving to a poorly safeguarded cloud? 2009 Elsevier Inc. All rights reserved.
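
    In the spirit of the article's relative analyses, the sketch below compares a dedicated lab server (paid for whether busy or idle) with pay-per-use cloud instances at different utilization levels. All prices, lifetimes and utilization figures are invented for illustration and are not the article's numbers.

    ```python
    # Back-of-the-envelope annual cost comparison: dedicated server vs. pay-per-use cloud.
    HOURS_PER_YEAR = 24 * 365

    def dedicated_cost(purchase, lifetime_years, admin_per_year):
        """Annualized cost of owning and administering a lab server."""
        return purchase / lifetime_years + admin_per_year

    def cloud_cost(price_per_hour, utilization):
        """Pay-per-use cost: only the hours actually used are billed."""
        return price_per_hour * HOURS_PER_YEAR * utilization

    lab = dedicated_cost(purchase=6000, lifetime_years=3, admin_per_year=1500)
    for util in (0.05, 0.25, 0.75):
        cloud = cloud_cost(price_per_hour=0.40, utilization=util)
        cheaper = "cloud" if cloud < lab else "dedicated"
        print(f"utilization {util:.0%}: cloud ${cloud:,.0f}/yr vs lab ${lab:,.0f}/yr -> {cheaper}")
    ```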

  2. Development of a remote proton radiation therapy solution over internet2.

    PubMed

    Belard, Arnaud; Tinnel, Brent; Wilson, Steve; Ferro, Ralph; O'Connell, John

    2009-12-01

    Through our existing partnership, our research program has leveraged the benefits of proton radiation therapy through the development of a robust telemedicine solution for remote proton therapy planning. Our proof-of-concept system provides a cost-effective and functional videoconferencing desktop platform for both ad-hoc and scheduled communication, as well as a robust interface for data collaboration (application-sharing of a commercial radiation treatment planning package). Over a 2-year period, our evaluation of this model has highlighted the inherent benefits of this affordable remote treatment planning solution, i.e., (1) giving physicians the ability to remotely participate in refining and generating proton therapy plans via a secure and robust Internet2 VPN tunnel to the University of Pennsylvania's commercial proton treatment planning package; (2) allowing cancer-care providers who send patients to a proton treatment facility to participate in treatment planning decisions by enabling referring or accepting providers to initiate ad-hoc, point-to-point communication with their counterparts to clarify and resolve issues arising before or during patient treatment; and thus (3) giving stewards of an otherwise highly centralized resource the ability to encourage wider participation with and referrals to sparsely located proton treatment centers by adapting telemedicine techniques that allow sharing of proton therapy planning services. We believe that our elegant and very affordable approach to remote proton treatment planning opens the door to greater worldwide referrals to the scarce resource of proton treatment units and to wide-ranging scientific collaboration, both nationally and internationally.

  3. "It's Not Their Job to Share Content": A Case Study of the Role of Senior Students in Adapting Teaching Materials as Open Educational Resources at the University of Cape Town

    ERIC Educational Resources Information Center

    Hodgkinson-Williams, Cheryl; Paskevicius, Michael

    2013-01-01

    Inspired by the Massachusetts Institute of Technology's landmark decision to make its teaching and learning materials freely available to the public as OpenCourseWare (OCW), many other higher education institutions have followed suit sharing resources now more generally referred to as Open Educational Resources (OER). The University of Cape Town…

  4. Measuring the wealth of nations.

    PubMed

    Hamilton, Kirk; Dixon, John A

    2003-01-01

    The sustainability of development is closely linked to changes in total per capita wealth. This paper presents estimates of the wealth of nations for nearly 100 countries, broken down into produced assets, natural resources and human resources. While the latter is the dominant form of wealth in virtually all countries, in low income natural resource exporters the share of natural resources in total wealth is equal to the share of produced assets. For low income countries in general, cropland forms the vast majority of natural wealth. The analysis suggests the process of development can be viewed as one of portfolio management: sustainable development entails saving the rents from exhaustible resources, managing renewable resources sustainably, and investing savings in both produced assets and human resources.
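
    The portfolio view described above can be made concrete with a little arithmetic. The sketch below computes the shares of produced, natural and human capital in total per-capita wealth for two stylized economies; the figures are invented and are not the paper's estimates.

    ```python
    # Toy wealth decomposition: shares of produced, natural and human capital.
    countries = {
        # produced, natural, human (per capita, same monetary unit)
        "low-income resource exporter": (3_000, 3_000, 10_000),
        "high-income economy":          (60_000, 8_000, 250_000),
    }

    for name, (produced, natural, human) in countries.items():
        total = produced + natural + human
        shares = {"produced": produced / total,
                  "natural": natural / total,
                  "human": human / total}
        pretty = ", ".join(f"{k} {v:.0%}" for k, v in shares.items())
        print(f"{name}: total {total:,} -> {pretty}")
    ```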

  5. Exploring clouds, weather, climate, and modeling using bilingual content and activities from the Windows to the Universe program and the Center for Multiscale Modeling of Atmospheric Processes

    NASA Astrophysics Data System (ADS)

    Foster, S. Q.; Johnson, R. M.; Randall, D.; Denning, S.; Russell, R.; Gardiner, L.; Hatheway, B.; Genyuk, J.; Bergman, J.

    2008-12-01

    The poor representation of cloud processes in climate models has been one of the most important limitations on the reliability of climate-change simulations. Now in its third year, the National Science Foundation-funded Center for Multi-scale Modeling of Atmospheric Processes (CMMAP) at Colorado State University is addressing this problem through a revolutionary new approach to representing cloud processes on their native scales, including the cloud-scale interaction processes that are active in cloud systems. CMMAP has set ambitious education and human-resource goals to share basic information about the atmosphere, clouds, weather, climate, and modeling with diverse K-12 and public audiences through its affiliation with the Windows to the Universe (W2U) program at the University Corporation for Atmospheric Research (UCAR). W2U web pages are written at three levels in English and Spanish. This information targets learners at all levels, educators, and families who seek to understand and share resources and information about the nature of weather and the climate system, as well as career role models from related research fields. This resource can also be helpful to educators who are building bridges in the classroom between the sciences, the arts, and literacy. Visitors to the W2U's CMMAP web portal can access a beautiful new clouds image gallery; information about each cloud type and the atmospheric processes that produce them; a Clouds in Art interactive; collections of weather-themed poetry, art, and myths; links to games and puzzles for children; and extensive classroom-ready resources and activities for K-12 teachers. Biographies of CMMAP scientists and graduate students are featured. Basic science concepts important to understanding the atmosphere, such as condensation, atmospheric pressure, and lapse rate, have been developed, as well as 'microworlds' that enable students to interact with experimental tools while building fundamental knowledge. These resources can be accessed online at no cost by the entire atmospheric science K-12 and informal science education community.

  6. Exploring HRD in two Welsh NHS Trusts: analysing the discursive resources used by senior managers.

    PubMed

    Sambrook, Sally

    2007-01-01

    The purpose of this paper is to examine human resource development (HRD) in the UK National Health Service (NHS), and particularly in two Welsh NHS Trusts, to help illuminate the various ways in which learning, training and development are talked about. The NHS is a complex organisation, not least with its recent devolution and separation into the four distinct countries of the UK. Within this, there are multiple and often conflicting approaches to human resource development associated with the various forms of employee, professional (nursing, medical etc.), managerial and organisational development. How people are developed is crucial to developing a modern health service, and yet, with the diverse range of health workers, HRD is a complex process, and one which receives little attention. Managers have a key role and their perceptions of HRD can be analysed through the discursive resources they employ. From an interpretivist stance, the paper employs semi-structured interviews with seven Directorate-General Managers, and adopts discourse analysis to explore how HRD is talked about in two Welsh NHS Trusts. The paper finds some of the different discourses used by different managers, including those with a nursing background and those without. It examines how they talk about HRD, and also explores their own (management) development and the impact this has had on their sense of identity. The paper highlights some of the tensions associated with HRD in the NHS, including ambiguities between professional and managerial development, the functional and physical fragmentation of HRD, conflict between a focus on performance/service delivery and the need to learn, discursive dissonance between the use of the terms training and learning, a delicate balance between "going on courses" and informal, work-related learning, inequities regarding "protected time" and discourses shifting between competition and cooperation. These tensions are exposed to help develop a shared understanding of the complexities of HRD within the NHS. The paper concludes with a summary of the different discursive resources employed by senior managers to articulate and accomplish HRD. These are "surfaced" to enable managers and HRD practitioners, amongst others, to construct common repertoires and shared meaning.

  7. International Observe the Moon Night: Providing Opportunities for the Public to Engage in Lunar Observation

    NASA Astrophysics Data System (ADS)

    Hsu, B. C.; Bleacher, L.; Day, B. H.; Daou, D.; Jones, A. P.; Mitchell, B.; Shaner, A. J.; Shipp, S. S.

    2010-12-01

    International Observe the Moon Night (InOMN) is designed to engage lunar science and education communities, our partner networks, amateur astronomers, space enthusiasts, and the general public in annual lunar observation campaigns that share the excitement of lunar science and exploration. InOMN enables the public to maintain its curiosity about the Moon and gain a better understanding of the Moon's formation, its evolution, and its place in the sky. For 2010, members of the public were encouraged to host their own InOMN events. InOMN hosts such as astronomy clubs, museums, schools, or other groups could find helpful resources and share information about InOMN events they organized on the InOMN website (http://observethemoonnight.org). Images, feedback, and lessons learned from the 2010 InOMN event will be shared in order to encourage increased planning and hosting of InOMN events in 2011. From various interpretations of the lunar “face” and early pictograms of the Moon’s phases to the use of the lunar cycle for festivals or harvests, the Moon has had an undeniable influence on human civilization. We have chosen the 2011 InOMN theme to provide an opportunity for individuals to share their personal or cultural connections to the Moon. For 2011, the InOMN website will include a ‘lunar bulletin board’ where InOMN participants can post pictures and share stories of what the Moon means to them. The 2011 InOMN contest will encourage people to submit their works of art, poems, short stories, or music about the Moon, all centered around the theme “What does the Moon mean to you?” As with the winners of previous contests, winning entries will be incorporated into the following year’s InOMN advertisements and events.

  8. AgShare Open Knowledge: Improving Rural Communities through University Student Action Research

    ERIC Educational Resources Information Center

    Geith, Christine; Vignare, Karen

    2013-01-01

    The aim of AgShare is to create a scalable and sustainable collaboration of existing organizations for African publishing, localizing, and sharing of science-based teaching and learning materials that fill critical resource gaps in African MSc agriculture curriculum. Shared innovative practices are emerging through the AgShare projects, not only…

  9. Shared Storage Usage Policy | High-Performance Computing | NREL

    Science.gov Websites

    Shared Storage Usage Policy. To use NREL's high-performance computing (HPC) systems, you must abide by the Shared Storage Usage Policy. /projects: NREL HPC allocations include storage space in the /projects filesystem. However, /projects is a shared resource and project

  10. Policy enabled information sharing system

    DOEpatents

    Jorgensen, Craig R.; Nelson, Brian D.; Ratheal, Steve W.

    2014-09-02

    A technique for dynamically sharing information includes executing a sharing policy indicating when to share a data object responsive to the occurrence of an event. The data object is created by formatting a data file to be shared with a receiving entity. The data object includes a file data portion and a sharing metadata portion. The data object is encrypted and then automatically transmitted to the receiving entity upon occurrence of the event. The sharing metadata portion includes metadata characterizing the data file and referenced in connection with the sharing policy to determine when to automatically transmit the data object to the receiving entity.
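
    A minimal sketch of the general mechanism summarized above, not the patented implementation: a data object pairs file data with sharing metadata, a sharing policy consults that metadata when an event occurs, and the object is encrypted before transmission. The field names, the policy rule and the toy XOR "cipher" are illustrative stand-ins only.

    ```python
    # Event-driven, policy-controlled sharing of a (file data + metadata) object.
    from dataclasses import dataclass

    @dataclass
    class DataObject:
        file_data: bytes
        sharing_metadata: dict  # e.g. classification, events that trigger release

    def sharing_policy(event: str, metadata: dict) -> bool:
        """Share objects tagged for this event and marked releasable."""
        return event in metadata.get("release_on", []) and \
               metadata.get("classification") == "releasable"

    def encrypt(data: bytes, key: bytes) -> bytes:
        # Placeholder only; a real system would use an authenticated cipher.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def on_event(event: str, obj: DataObject, key: bytes, transmit) -> bool:
        """Automatically transmit the encrypted object if the policy allows it."""
        if sharing_policy(event, obj.sharing_metadata):
            transmit(encrypt(obj.file_data, key))
            return True
        return False

    obj = DataObject(b"sensor readings ...",
                     {"classification": "releasable", "release_on": ["flood_alert"]})
    sent = on_event("flood_alert", obj, key=b"secret",
                    transmit=lambda blob: print(len(blob), "bytes sent"))
    print("shared" if sent else "withheld")
    ```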

  11. Five-Year-Old Preschoolers' Sharing is Influenced by Anticipated Reciprocation.

    PubMed

    Xiong, Mingrui; Shi, Jiannong; Wu, Zhen; Zhang, Zhen

    2016-01-01

    Whether children share in anticipation of future benefits returned by a partner is an interesting question. In this study, 5-year-old children and an adult partner played a sharing game, in which children donated first and the partner donated afterward. In Experiment 1, the partner's resources were more attractive than the child's. In the reciprocal condition, the child was told that s/he would be a recipient when the partner played as a donor. In the non-reciprocal condition, however, the child was told that an anonymous child would be the recipient when the partner donated. Results showed that children shared more with the partner when they knew that they would be a recipient later. In Experiment 2, the child was always the recipient when the partner donated, but the partner's resources were more desirable than the child's in the high-value condition, and less desirable in the low-value condition. We found that children were more generous when the partner's resources were valued higher. These findings demonstrate that 5-year-old preschoolers' sharing choices take into account the anticipated reciprocity of the recipient, suggesting either self-interested tactical sharing or direct reciprocity in advance of receiving. Specifically, they adjust their sharing behavior depending on whether a partner has the potential to reciprocate, and whether it is worth sharing relative to the value of the payback.

  12. Implementing partnership-driven clinical federated electronic health record data sharing networks.

    PubMed

    Stephens, Kari A; Anderson, Nicholas; Lin, Ching-Ping; Estiri, Hossein

    2016-09-01

    Building federated data sharing architectures requires supporting a range of data owners, effective and validated semantic alignment between data resources, and a consistent focus on end-users. Establishing these resources requires development methodologies that support internal validation of data extraction and translation processes, sustaining meaningful partnerships, and delivering clear and measurable system utility. We describe findings from two federated data sharing case examples that detail critical factors, shared outcomes, and production environment results. Two federated data sharing pilot architectures developed to support network-based research associated with the University of Washington's Institute of Translational Health Sciences provided the basis for the findings. A spiral model for implementation and evaluation was used to structure iterations of development and support knowledge sharing between the two network development teams, which cross-collaborated to support and manage common stages. We found that using a spiral model of software development and multiple cycles of iteration was effective in achieving early network design goals. Both networks required time- and resource-intensive efforts to establish a trusted environment in which to create the data sharing architectures. Both networks were challenged by the need for adaptive use cases to define and test utility. An iterative cyclical model of development provided a process for developing trust with data partners and refining the design, and supported measurable success in the development of new federated data sharing architectures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Exploring Visual Evidence of Human Impact on the Environment with Planetary-Scale Zoomable Timelapse Video

    NASA Astrophysics Data System (ADS)

    Sargent, R.; Egge, M.; Dille, P. S.; O'Donnell, G. D.; Herwig, C.

    2016-12-01

    Visual evidence ignites curiosity and inspires advocacy. Zoomable imagery and video on a planetary scale provide compelling evidence of human impact on the environment. Earth Timelapse places the observable impact of 30+ years of human activity into the hands of policy makers, scientists, and advocates, with fluidity and speed that support inquiry and exploration. Zoomability enables compelling narratives and ready apprehension of environmental changes, connecting human-scale evidence to regional and ecosystem-wide trends and changes. Leveraging the power of Google Earth Engine, join us to explore 30+ years of Landsat 30m RGB imagery showing glacial retreat, agricultural deforestation, irrigation expansion, and the disappearance of lakes. These narratives are enriched with datasets showing planetary forest gain/loss, annual cycles of agricultural fires, global changes in the health of coral reefs, trends in resource extraction, and the growth of renewable energy development. We demonstrate the intuitive and inquiry-enabling power of these planetary visualizations, and provide instruction on how scientists and advocates can create and share or contribute visualizations of their own research or topics of interest.
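
    As a rough illustration of the kind of building block behind such visualizations (not the Earth Timelapse pipeline itself), the sketch below uses the Google Earth Engine Python API to assemble yearly median Landsat composites around a point of interest, the raw frames of a simple timelapse. It requires an authenticated Earth Engine environment, and the collection ID, site and thresholds are just one reasonable choice.

    ```python
    # Yearly low-cloud Landsat composites over an area of interest (illustrative).
    import ee

    ee.Initialize()

    aoi = ee.Geometry.Point([-106.3, 40.0]).buffer(20_000)  # hypothetical site
    collection = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
                  .filterBounds(aoi)
                  .filter(ee.Filter.lt("CLOUD_COVER", 20)))

    def yearly_composite(year):
        """Median composite of the given year's low-cloud scenes, clipped to the AOI."""
        return (collection
                .filterDate(f"{year}-01-01", f"{year}-12-31")
                .median()
                .clip(aoi))

    frames = [yearly_composite(y) for y in range(2014, 2021)]
    print(f"built {len(frames)} yearly frames for the timelapse")
    ```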

  14. Standardization in synthetic biology: an engineering discipline coming of age.

    PubMed

    Decoene, Thomas; De Paepe, Brecht; Maertens, Jo; Coussement, Pieter; Peters, Gert; De Maeseneire, Sofie L; De Mey, Marjan

    2018-08-01

    Leaps in DNA read-and-write technologies, together with extensive automation and miniaturization, are radically transforming the field of biological experimentation by providing the tools that enable the cost-effective high throughput required to address the enormous complexity of biological systems. However, standardization of the synthetic biology workflow has not kept pace with dwindling technical and resource constraints, leading, for example, to the collection of multi-level and multi-omics large data sets that end up disconnected or remain under- or even unexploited. In this contribution, we critically evaluate the various efforts to introduce standards for defining, designing, assembling, characterizing, and sharing synthetic biology parts, and the (limited) success thereof. The causes of this success or the lack thereof, as well as possible solutions, are discussed. As in other engineering disciplines, extensive standardization will undoubtedly speed up and reduce the cost of bioprocess development. In this respect, further implementation of synthetic biology standards will be crucial for the field to redeem its promise, i.e. to enable predictable forward engineering.

  15. Do It Yourself (DIY) Earth Science Collaboratories Using Best Practices and Breakthrough Technologies

    NASA Astrophysics Data System (ADS)

    Stephan, E.

    2017-12-01

    The objective of publishing earth science study data, results, and literature on the Web should be to provide a means of integrating discoverable science resources through an open, collaborative Web. At the core of any open science collaborative infrastructure is the ability to discover, manage, and ultimately use the relevant data accessible to the collaboration. Equally important are the relationships between people, applications, services, and publications, which capture the critical contextual knowledge that enables their effective use. While contributions of irreproducible or costly data can be a great asset, these investments are wasted if users cannot use the data intelligently or make sense of it. What is needed is the ability to describe ad-hoc, discoverable usage methodologies, provide feedback to data producers, and identify and cite data in a systematic way by leveraging existing Web-enabled, off-the-shelf technology. Fortunately, many breakthrough advances in data publication best practices, together with government, open source, and commercial investments, support consumers who can provide feedback, share experiences, and contribute back to the earth science ecosystem.

  16. BIM and IoT: A Synopsis from GIS Perspective

    NASA Astrophysics Data System (ADS)

    Isikdag, U.

    2015-10-01

    The Internet of Things (IoT) focuses on enabling communication between all devices and things, whether they exist in real life or are virtual. Building Information Models (BIMs) and Building Information Modelling have been a buzzword of the construction industry for the last 15 years. BIMs emerged as a result of a push by software companies to tackle the problem of inefficient information exchange between different software packages and to enable true interoperability. In the BIM approach, the most up-to-date and accurate models of a building are stored in shared central databases during the design and construction of a project and at post-construction stages. GIS-based city monitoring and city management applications require the fusion of information acquired from multiple resources: BIMs, city models and sensors. This paper focuses on providing a method for facilitating the GIS-based fusion of information residing in digital building "Models" with information acquired from city objects, i.e. "Things". Once this information fusion is accomplished, many fields, ranging from emergency response, urban surveillance and urban monitoring to smart buildings, will potentially benefit.
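
    A small sketch of the information fusion argued for above: building elements exported from a "Model" (keyed by their global IDs) are joined with readings from "Things" tagged with the element they are attached to, so a GIS or city-monitoring application can query both together. All identifiers and values below are invented.

    ```python
    # Join BIM elements and IoT sensor readings on a shared element GUID.
    bim_elements = {
        "3cUkl32yn9qRSPvBJVyWYp": {"type": "IfcSpace", "name": "Room 101", "floor": 1},
        "1xS7BCk291UvhgP2dvNsgp": {"type": "IfcDoor", "name": "Main entrance", "floor": 0},
    }

    sensor_readings = [
        {"element_guid": "3cUkl32yn9qRSPvBJVyWYp", "quantity": "temperature_C", "value": 28.4},
        {"element_guid": "1xS7BCk291UvhgP2dvNsgp", "quantity": "door_open", "value": True},
    ]

    def fuse(elements, readings):
        """Attach live readings to the building elements they belong to."""
        fused = {guid: dict(attrs, readings=[]) for guid, attrs in elements.items()}
        for r in readings:
            fused[r["element_guid"]]["readings"].append(r)
        return fused

    for guid, record in fuse(bim_elements, sensor_readings).items():
        values = ", ".join(f"{r['quantity']}={r['value']}" for r in record["readings"])
        print(f"{record['name']} ({record['type']}): {values}")
    ```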

  17. Enabling Security, Stability, Transition, and Reconstruction Operations through Knowledge Management

    DTIC Science & Technology

    2009-03-18

    strategy. Overall, the cultural barriers to knowledge sharing center on knowledge creation and capture. The primary barrier to knowledge sharing is lack ... Lacking a shared identity decreases the likelihood of knowledge sharing, which is essential to effective collaboration. Related to collaboration ... to adapt, develop, and change based on experience-derived knowledge. A second cultural barrier to knowledge acquisition is the lack of receptiveness

  18. Privacy Protection Standards for the Information Sharing Environment

    DTIC Science & Technology

    2009-09-01

    enable ISE participants to share information and data (see ISE Implementation Plan, p. 51, ISE Enterprise Architecture Framework, pp. 67, 73–74 and...of frontiers. This article shall not prevent States from requiring the licensing of broadcasting, television or cinema enterprises. 2. The exercise...5 U.S.C. § 552a, as amended. Program Manager-Information Sharing Environment. (2008). Information Sharing Enterprise Architecture Framework

  19. The Firegoose: two-way integration of diverse data from different bioinformatics web resources with desktop applications

    PubMed Central

    Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S

    2007-01-01

    Background Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools. PMID:18021453
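
    To make the microformat idea concrete, the sketch below embeds a small gene list in ordinary HTML and recovers it with a parser, which is the essence of moving structured data between a web page and desktop tools. The class names and gene identifiers are invented for illustration and are not the actual Gaggle/Firegoose markup.

    ```python
    # Recover structured data embedded in HTML via (hypothetical) microformat classes.
    from bs4 import BeautifulSoup

    html = """
    <div class="gaggle-data" data-species="Halobacterium salinarum NRC-1">
      <span class="gaggle-name">anaerobic response genes</span>
      <ul>
        <li class="gaggle-gene">VNG0101G</li>
        <li class="gaggle-gene">VNG0102H</li>
      </ul>
    </div>
    """

    soup = BeautifulSoup(html, "html.parser")
    for block in soup.find_all("div", class_="gaggle-data"):
        name = block.find("span", class_="gaggle-name").get_text(strip=True)
        genes = [li.get_text(strip=True) for li in block.find_all("li", class_="gaggle-gene")]
        print(name, block.get("data-species"), genes)
    ```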

  20. The Firegoose: two-way integration of diverse data from different bioinformatics web resources with desktop applications.

    PubMed

    Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S

    2007-11-19

    Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools.

  1. Establishing a shared vision in your organization. Winning strategies to empower your team members.

    PubMed

    Rinke, W J

    1989-01-01

    Today's health-care climate demands that you manage your human resources more effectively. Meeting the dual challenges of providing more with less requires that you tap the vast hidden resources that reside in every one of your team members. Harnessing these untapped energies requires that all of your employees clearly understand the purpose, direction, and the desired future state of your laboratory. Once this image is widely shared, your team members will know their roles in the organization and the contributions they can make to attaining the organization's vision. This shared vision empowers people and enhances their self-esteem as they recognize they are accomplishing a worthy goal. You can create and install a shared vision in your laboratory by adhering to a five-step process. The result will be a unity of purpose that will release the untapped human resources in your organization so that you can do more with less.

  2. Towards Networked Knowledge: The Learning Registry, an Infrastructure for Sharing Online Learning Resources

    ERIC Educational Resources Information Center

    Lee, Ashley; Hobson, Joe; Bienkowski, Marie; Midgley, Steve; Currier, Sarah; Campbell, Lorna M.; Novoselova, Tatiana

    2012-01-01

    In this article, the authors describe an open-source, open-data digital infrastructure for sharing information about open educational resources (OERs) across disparate systems and platforms. The Learning Registry, which began as a project funded by the U.S. Departments of Education and Defense, currently has an active international community…

  3. The Integrated Bibliographic Information System: Resource Sharing Tailored for Local Needs.

    ERIC Educational Resources Information Center

    Cotter, Gladys A.; Hartt, Richard W.

    The Defense Technical Information Center (DTIC), which is charged with providing information services to the scientific and technical community of the Department of Defense (DoD), actively seeks ways to promote resource sharing as a means for speeding access to information while reducing the costs of information processing throughout the defense…

  4. Sharing Scarce Resources: Group-Outcome Orientation, External Disaster, and Stealing in a Simulated Commons.

    ERIC Educational Resources Information Center

    Edney, Julian J.; Bell, Paul A.

    1984-01-01

    Conducted two studies in which subjects (N=216) faced the dilemma of how to harvest resources from a shared pool when faced with external catastrophies and given opportunities to steal. Results showed that tying the individual's outcome to the rest of the group is good for the group. (LLL)

  5. Measuring and Sustaining the Impact of Less Commonly Taught Language Collections in a Research Library

    ERIC Educational Resources Information Center

    Lenkart, Joe; Teper, Thomas H.; Thacker, Mara; Witt, Steven W.

    2015-01-01

    To evaluate the current state of resource sharing and cooperative collection development, this paper examines the relationship between less commonly taught language collections (LCTL) and ILL services. The study examined multiple years of the University of Illinois at Urbana-Champaign's resource-sharing data. This paper provides a historical…

  6. 30 CFR 585.541 - What is a qualified project for revenue sharing purposes?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 2 2012-07-01 2012-07-01 false What is a qualified project for revenue sharing purposes? 585.541 Section 585.541 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY AND ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF...

  7. 30 CFR 285.541 - What is a qualified project for revenue sharing purposes?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 2 2011-07-01 2011-07-01 false What is a qualified project for revenue sharing purposes? 285.541 Section 285.541 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, REGULATION, AND ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY ALTERNATE USES OF EXISTING FACILITIES ON THE...

  8. 30 CFR 585.541 - What is a qualified project for revenue sharing purposes?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 2 2014-07-01 2014-07-01 false What is a qualified project for revenue sharing purposes? 585.541 Section 585.541 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY AND ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF...

  9. 30 CFR 585.541 - What is a qualified project for revenue sharing purposes?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 2 2013-07-01 2013-07-01 false What is a qualified project for revenue sharing purposes? 585.541 Section 585.541 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY AND ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF...

  10. Working Memory Span Development: A Time-Based Resource-Sharing Model Account

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie

    2009-01-01

    The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…

  11. A Macro- and Micro-Examination of Family Power and Love: An Exchange Model

    ERIC Educational Resources Information Center

    Safilios-Rothschild, Constantina

    1976-01-01

    Greek families were analyzed in terms of resources available to husband and wife. An important resource was the amount of spouse's love. The more husbands loved wives, and the less wives loved husbands, the more power was shared. Power sharing was not common when both spouses were college educated. (NG)

  12. A Community Assessment Tool for Education Resources

    NASA Astrophysics Data System (ADS)

    Hou, C. Y.; Soyka, H.; Hutchison, V.; Budden, A. E.

    2016-12-01

    In order to facilitate and enhance better understanding of how to conserve life on earth and the environment that sustains it, Data Observation Network for Earth (DataONE) develops, implements, and shares educational activities and materials as part of its commitment to the education of its community, including scientific researchers, educators, and the public. Creating and maintaining educational materials that remain responsive to community needs is reliant on careful evaluations in order to enhance current and future resources. DataONE's extensive collaboration with individuals and organizations has informed the development of its educational resources and through these interactions, the need for a comprehensive, customizable education evaluation instrument became apparent. In this presentation, the authors will briefly describe the design requirements and research behind a prototype instrument that is intended to be used by the community for evaluation of its educational activities and resources. We will then demonstrate the functionality of a web based platform that enables users to identify the type of educational activity across multiple axes. This results in a set of structured evaluation questions that can be included in a survey instrument. Users can also access supporting documentation describing the types of question included in the output or simply download a full editable instrument. Our aim is that by providing the community with access to a structured evaluation instrument, Earth/Geoscience educators will be able to gather feedback easily and efficiently in order to help maintain the quality, currency/relevancy, and value of their resources, and ultimately, support a more data literate community.
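
    A toy sketch of the instrument-building idea described above: the educator classifies an activity along a few axes, and the tool assembles the matching evaluation questions into a survey. The axes and question bank are invented and are not DataONE's actual instrument.

    ```python
    # Assemble a survey from a question bank keyed by (axis, value) classifications.
    QUESTION_BANK = {
        ("audience", "undergraduate"): ["How well did the activity match your coursework?"],
        ("audience", "public"):        ["Was the material understandable without prior training?"],
        ("format", "hands-on"):        ["Were the instructions sufficient to complete the exercise?"],
        ("format", "lecture"):         ["Did the presentation hold your attention?"],
        ("topic", "data management"):  ["Do you feel more confident writing a data management plan?"],
    }

    def build_survey(classification):
        """Collect the questions matching every (axis, value) pair selected."""
        questions = []
        for axis, value in classification.items():
            questions.extend(QUESTION_BANK.get((axis, value), []))
        return questions

    survey = build_survey({"audience": "undergraduate",
                           "format": "hands-on",
                           "topic": "data management"})
    print("\n".join(f"{i + 1}. {q}" for i, q in enumerate(survey)))
    ```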

  13. Promoting scientific collaboration and research through integrated social networking capabilities within the OpenTopography Portal

    NASA Astrophysics Data System (ADS)

    Nandigam, V.; Crosby, C. J.; Baru, C.

    2009-04-01

    LiDAR (Light Detection And Ranging) topography data offer earth scientists the opportunity to study the earth's surface at very high resolutions. As a result, the popularity of these data is growing dramatically. However, the management, distribution, and analysis of community LiDAR data sets are a challenge due to their massive size (multi-billion-point, multi-terabyte). We have also found that many earth science users of these data sets lack the computing resources and expertise required to process these data. We have developed the OpenTopography Portal to democratize access to these large and computationally challenging data sets. The OpenTopography Portal uses cyberinfrastructure technology developed by the GEON project to provide access to LiDAR data in a variety of formats. LiDAR data products available range from simple Google Earth visualizations of LiDAR-derived hillshades to 1 km² tiles of standard digital elevation model (DEM) products, as well as LiDAR point cloud data and user-generated custom DEMs. We have found that the wide spectrum of LiDAR users has variable scientific applications, computing resources and technical experience, and thus requires a data system with multiple distribution mechanisms and platforms to serve a broader range of user communities. Because the volume of LiDAR topography data available is rapidly expanding, and data analysis techniques are evolving, there is a need for the user community to be able to communicate and interact to share knowledge and experiences. To address this need, the OpenTopography Portal enables social networking capabilities through a variety of collaboration tools, web 2.0 technologies and customized usage pattern tracking. Fundamentally, these tools offer users the ability to communicate, to access and share documents, to participate in discussions, and to keep up to date on upcoming events and emerging technologies. The OpenTopography Portal achieves these social networking capabilities by integrating various software technologies and platforms. These include the Expression Engine Content Management System (CMS), which comes with pre-packaged collaboration tools like blogs and wikis; the Gridsphere portal framework, which contains the primary GEON LiDAR System portlet with user job monitoring capabilities; and a Java web-based discussion forum (Jforums) application, all seamlessly integrated under one portal. The OpenTopography Portal also provides an integrated authentication mechanism between the various CMS collaboration tools and the core Gridsphere-based portlets. The integration of these various technologies allows for enhanced user interaction capabilities within the portal. By integrating popular collaboration tools like discussion forums and blogs we can promote conversation and openness among users. The ability to ask questions and share expertise in forum discussions allows users to easily find information and interact with users facing similar challenges. The OpenTopography Blog enables our domain experts to post ideas, news items, commentary, and other resources in order to foster discussion and information sharing. The content management capabilities of the portal allow for easy updates to information in the form of publications, documents, and news articles. Access to the most current information fosters better decision-making.
As has become the standard for web 2.0 technologies, the OpenTopography Portal is fully RSS enabled to allow users of the portal to keep track of news items, forum discussions, blog updates, and system outages. We are currently exploring how the information captured by user and job monitoring components of the Gridsphere based GEON LiDAR System can be harnessed to provide a recommender system that will help users to identify appropriate processing parameters and to locate related documents and data. By seamlessly integrating the various platforms and technologies under one single portal, we can take advantage of popular online collaboration tools that are either stand alone or software platform restricted. The availability of these collaboration tools along with the data will foster more community interaction and increase the strength and vibrancy of the LiDAR topography user community.

  14. Enriching the Web of Data with Educational Information Using We-Share

    ERIC Educational Resources Information Center

    Ruiz-Calleja, Adolfo; Asensio-Pérez, Juan I.; Vega-Gorgojo, Guillermo; Gómez-Sánchez, Eduardo; Bote-Lorenzo, Miguel L.; Alario-Hoyos, Carlos

    2017-01-01

    This paper presents We-Share, a social annotation application that enables educators to publish and retrieve information about educational ICT tools. As a distinctive characteristic, We-Share provides educators data about educational tools already available on the Web of Data while allowing them to enrich such data with their experience using…

  15. Multipoint Multimedia Conferencing System with Group Awareness Support and Remote Management

    ERIC Educational Resources Information Center

    Osawa, Noritaka; Asai, Kikuo

    2008-01-01

    A multipoint, multimedia conferencing system called FocusShare is described that uses IPv6/IPv4 multicasting for real-time collaboration, enabling video, audio, and group awareness information to be shared. Multiple telepointers provide group awareness information and make it easy to share attention and intention. In addition to pointing with the…

  16. The History of Makassan Trepang Fishing and Trade

    PubMed Central

    Schwerdtner Máñez, Kathleen; Ferse, Sebastian C. A.

    2010-01-01

    The Malayan term trepang describes a variety of edible holothurians commonly known as sea cucumbers. Although found in temperate and tropical marine waters all over the world, the centre of species diversity and abundance are the shallow coastal waters of Island Southeast Asia. For at least 300 years, trepang has been a highly priced commodity in the Chinese market. Originally, its fishing and trade was a specialized business, centred on the town of Makassar in South Sulawesi (Indonesia). The rise of trepang fishing in the 17th century added valuable export merchandize to the rich shallow seas surrounding the islands of Southeast Asia. This enabled local communities to become part of large trading networks and greatly supported their economic development. In this article, we follow Makassan trepang fishing and trading from its beginning until the industrialization of the fishery and worldwide depletion of sea cucumbers in the 20th century. Thereby, we identify a number of characteristics which trepang fishing shares with the exploitation of other marine resources, including (1) a strong influence of international markets, (2) the role of patron-client relationships which heavily influence the resource selection, and (3) the roving-bandit-syndrome, where fishermen exploit local stocks of valuable resources until they are depleted, and then move to another area. We suggest that understanding the similarities and differences between historical and recent exploitation of marine resources is an important step towards effective management solutions. PMID:20613871

  17. The history of Makassan trepang fishing and trade.

    PubMed

    Schwerdtner Máñez, Kathleen; Ferse, Sebastian C A

    2010-06-29

    The Malayan term trepang describes a variety of edible holothurians commonly known as sea cucumbers. Although found in temperate and tropical marine waters all over the world, the centre of species diversity and abundance are the shallow coastal waters of Island Southeast Asia. For at least 300 years, trepang has been a highly priced commodity in the Chinese market. Originally, its fishing and trade was a specialized business, centred on the town of Makassar in South Sulawesi (Indonesia). The rise of trepang fishing in the 17th century added valuable export merchandize to the rich shallow seas surrounding the islands of Southeast Asia. This enabled local communities to become part of large trading networks and greatly supported their economic development. In this article, we follow Makassan trepang fishing and trading from its beginning until the industrialization of the fishery and worldwide depletion of sea cucumbers in the 20th century. Thereby, we identify a number of characteristics which trepang fishing shares with the exploitation of other marine resources, including (1) a strong influence of international markets, (2) the role of patron-client relationships which heavily influence the resource selection, and (3) the roving-bandit-syndrome, where fishermen exploit local stocks of valuable resources until they are depleted, and then move to another area. We suggest that understanding the similarities and differences between historical and recent exploitation of marine resources is an important step towards effective management solutions.

  18. The interaction of cannibalism and omnivory: consequences for community dynamics.

    PubMed

    Rudolf, Volker H W

    2007-11-01

    Although cannibalism is ubiquitous in food webs and frequent in systems where a predator and its prey also share a common resource (intraguild predation, IGP), its impacts on species interactions and the dynamics and structure of communities are still poorly understood. In addition, the few existing studies on cannibalism have generally focused on cannibalism in the top-predator, ignoring that it is frequent at intermediate trophic levels. A set of structured models shows that cannibalism can completely alter the dynamics and structure of three-species IGP systems depending on the trophic position where cannibalism occurs. Contrary to the expectations of simple models, the IG predator can exploit the resources more efficiently when it is cannibalistic, enabling the predator to persist at lower resource densities than the IG prey. Cannibalism in the IG predator can also alter the effect of enrichment, preventing predator-mediated extinction of the IG prey at high productivities predicted by simple models. Cannibalism in the IG prey can reverse the effect of top-down cascades, leading to an increase in the resource with decreasing IG predator density. These predictions are consistent with current data. Overall, cannibalism promotes the coexistence of the IG predator and IG prey. These results indicate that including cannibalism in current models can overcome the discrepancy between theory and empirical data. Thus, we need to measure and account for cannibalistic interactions to reliably predict the structure and dynamics of communities.
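
    As a purely illustrative companion to the abstract (the paper itself analyses stage-structured models), the toy system below adds a cannibalism term to the intraguild predator in an unstructured IGP model, showing how the ingredients of such models fit together. Parameter values are arbitrary.

    ```python
    # Toy intraguild-predation ODEs with cannibalism in the IG predator (requires scipy).
    from scipy.integrate import solve_ivp

    r, K = 1.0, 10.0                   # resource growth rate and carrying capacity
    a_nr, a_pr, a_pn = 0.4, 0.25, 0.3  # attack rates: prey-resource, predator-resource, predator-prey
    e = 0.5                            # conversion efficiency on heterospecific food
    m_n, m_p = 0.2, 0.2                # background mortalities
    c, e_c = 0.15, 0.3                 # cannibalistic attack rate and its (lower) conversion gain

    def igp(t, y):
        R, N, P = y
        dR = r * R * (1 - R / K) - a_nr * R * N - a_pr * R * P
        dN = e * a_nr * R * N - a_pn * N * P - m_n * N
        # (e_c - 1) * c * P**2 combines the gain from eating conspecifics with the loss of being eaten.
        dP = e * (a_pr * R + a_pn * N) * P + (e_c - 1) * c * P**2 - m_p * P
        return [dR, dN, dP]

    sol = solve_ivp(igp, (0, 200), [5.0, 1.0, 1.0])
    R, N, P = sol.y[:, -1]
    print(f"final densities: resource={R:.2f}, IG prey={N:.2f}, IG predator={P:.2f}")
    ```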

  19. Examining the implementation of NICE guidance: cross-sectional survey of the use of NICE interventional procedures guidance by NHS Trusts.

    PubMed

    Lowson, Karin; Jenks, Michelle; Filby, Alexandra; Carr, Louise; Campbell, Bruce; Powell, John

    2015-06-30

    In the UK, NHS hospitals receive large amounts of evidence-based recommendations for care delivery from the National Institute for Health and Care Excellence (NICE) and other organisations. Little is known about how NHS organisations implement such guidance and best practice for doing so. This study was therefore designed to examine the dissemination, decision-making, and monitoring processes for NICE interventional procedures (IP) guidance and to investigate the barriers and enablers to the implementation of such guidance. A cross-sectional survey questionnaire was developed and distributed to individuals responsible for managing the processes around NICE guidance in all 181 acute NHS hospitals in England, Scotland, Wales and Northern Ireland. A review of acute NHS hospital policies for implementing NICE guidance was also undertaken using information available in the public domain and from organisations' websites. The response rate to the survey was 75 % with 135 completed surveys received. Additionally, policies from 25 % of acute NHS hospitals were identified and analysed. NHS acute hospitals typically had detailed processes in place to implement NICE guidance, although organisations recognised barriers to implementation including organisational process barriers, clinical engagement and poor targeting with a large number of guidance issued. Examples of enablers to, and good practice for, implementation of guidance were found, most notably the value of shared learning experiences between NHS hospitals. Implications for NICE were also identified. These included making improvements to the layout of guidance, signposting on the website and making better use of their shared learning platform. Most organisations have robust processes in place to deal with implementing guidance. However, resource limitations and the scope of guidance received by organisations create barriers relating to organisational processes, clinician engagement and financing of new procedures. Guidance implementation can be facilitated through encouragement of shared learning by organisations such as NICE and open knowledge transfer between organisations.

  20. Modeling for Integrated Science Management and Resilient Systems Development

    NASA Technical Reports Server (NTRS)

    Shelhamer, M.; Mindock, J.; Lumpkins, S.

    2014-01-01

    Many physiological, environmental, and operational risks exist for crewmembers during spaceflight. An understanding of these risks from an integrated perspective is required to provide effective and efficient mitigations during future exploration missions that typically have stringent limitations on resources available, such as mass, power, and crew time. The Human Research Program (HRP) is in the early stages of developing collaborative modeling approaches for the purposes of managing its science portfolio in an integrated manner to support cross-disciplinary risk mitigation strategies and to enable resilient human and engineered systems in the spaceflight environment. In this talk, we will share ideas being explored from fields such as network science, complexity theory, and system-of-systems modeling. Initial work on tools to support these explorations will be discussed briefly, along with ideas for future efforts.
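
    As a small, invented illustration of the network-science framing mentioned above (not HRP's actual risk portfolio), the sketch below represents risks and the resources that mitigate them as a directed graph and asks which shared resources have the most direct dependents.

    ```python
    # Toy risk-to-resource dependency network (requires networkx).
    import networkx as nx

    g = nx.DiGraph()
    g.add_edges_from([
        ("bone loss",                  "exercise device"),
        ("muscle atrophy",             "exercise device"),
        ("cardiovascular decondition", "exercise device"),
        ("exercise device",            "crew time"),
        ("exercise device",            "power budget"),
        ("behavioral health",          "crew time"),
        ("radiation exposure",         "shielding mass"),
    ])

    # Leaf nodes stand for shared resources; count each one's direct dependents.
    resources = [n for n in g if g.out_degree(n) == 0]
    for resource in sorted(resources, key=lambda n: -g.in_degree(n)):
        print(f"{resource}: relied on by {g.in_degree(resource)} direct dependent(s)")
    ```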
