Sample records for distributed collaboration systems

  1. Distributed Architecture for the Object-Oriented Method for Interoperability

    DTIC Science & Technology

    2003-03-01

    Collaborative Environment (Figure V-2: Distributed OOMI and the Collaboration Centric Paradigm) … of systems are formed into a system federation to resolve differences in modeling. An OOMI Integrated Development Environment (OOMI IDE) lends … space for the creation of possible distributed systems is partitioned into User Centric systems, Processing/Storage Centric systems, Implementation…

  2. Virtual Collaborative Environments for System of Systems Engineering and Applications for ISAT

    NASA Technical Reports Server (NTRS)

    Dryer, David A.

    2002-01-01

    This paper describes a system of systems, or metasystems, approach and models developed to help prepare engineering organizations for distributed engineering environments. These changes in engineering enterprises include competition in increasingly global environments, new partnering opportunities created by advances in information and communication technologies, and virtual collaboration issues associated with dispersed teams. To help address challenges and needs in this environment, a framework is proposed that can be customized and adapted for NASA to assist in improved engineering activities conducted in distributed, enhanced engineering environments. The approach is designed to prepare engineers for such distributed collaborative environments by learning and applying e-engineering methods and tools to a real-world engineering development scenario. The approach consists of two phases: an e-engineering basics phase and an e-engineering application phase. The e-engineering basics phase addresses skills required for e-engineering. The e-engineering application phase applies these skills in a distributed collaborative environment to system development projects.

  3. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    PubMed

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2018-04-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence of the collaborative neurodynamic system to a Pareto optimal solution. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for a discretized approximation of the Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.
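
    The objective-weighting idea in this abstract can be illustrated with a toy sketch. This is not the paper's neurodynamic model: here, plain gradient-descent "agents", each minimizing a differently weighted sum of two made-up convex objectives, land on distinct Pareto optimal points.

```python
import numpy as np

# Two illustrative convex objectives over x in R^2 (not from the paper).
def f1(x): return np.sum((x - 1.0) ** 2)
def f2(x): return np.sum((x + 1.0) ** 2)

def grad_weighted(x, w):
    # Gradient of the scalarized objective w*f1 + (1-w)*f2.
    return w * 2 * (x - 1.0) + (1 - w) * 2 * (x + 1.0)

def solve(w, steps=500, lr=0.1):
    # One "agent": gradient descent on its own weighted objective.
    x = np.zeros(2)
    for _ in range(steps):
        x -= lr * grad_weighted(x, w)
    return x

# Each agent uses a different weight; together they sample the Pareto front.
for w in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(w, np.round(solve(w), 3))
```

    Sweeping the weight from 0 to 1 traces the Pareto front between the two objectives' individual minimizers.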

  4. Collaborative volume visualization with applications to underwater acoustic signal processing

    NASA Astrophysics Data System (ADS)

    Jarvis, Susan; Shane, Richard T.

    2000-08-01

    Distributed collaborative visualization systems represent a technology whose time has come. Researchers at the Fraunhofer Center for Research in Computer Graphics have been working in the areas of collaborative environments and high-end visualization systems for several years. The medical application TeleInVivo is an example of a system that marries visualization and collaboration. With TeleInVivo, users can exchange and collaboratively interact with volumetric data sets in geographically distributed locations. Since examination of many physical phenomena produces data that are naturally volumetric, the visualization frameworks used by TeleInVivo have been extended for non-medical applications. The system can now be made compatible with almost any dataset that can be expressed in terms of magnitudes within a 3D grid. Coupled with advances in telecommunications, telecollaborative visualization is now possible virtually anywhere. Expert data quality assurance and analysis can occur remotely and interactively without having to send all the experts into the field. Building upon this point-to-point concept of collaborative visualization, one can envision a larger pooling of resources to form a large overview of a region of interest from contributions of numerous distributed members.

  5. Challenges of Using CSCL in Open Distributed Learning.

    ERIC Educational Resources Information Center

    Nilsen, Anders Grov; Instefjord, Elen J.

    As a compulsory part of the study in Pedagogical Information Science at the University of Bergen and Stord/Haugesund College (Norway) during the spring term of 1999, students participated in a distributed group activity that provided experience on distributed collaboration and use of online groupware systems. The group collaboration process was…

  6. Awareware: Narrowcasting Attributes for Selective Attention, Privacy, and Multipresence

    NASA Astrophysics Data System (ADS)

    Cohen, Michael; Newton Fernando, Owen Noel

    The domain of CSCW (computer-supported collaborative work) and DSC (distributed synchronous collaboration) spans real-time interactive multiuser systems, shared information spaces, and applications for teleexistence and artificial reality, including collaborative virtual environments (CVEs) (Benford et al., 2001). As presence awareness systems emerge, it is important to develop appropriate interfaces and architectures for managing multimodal multiuser systems. Especially in consideration of the persistent connectivity enabled by affordable networked communication, shared distributed environments require generalized control of media streams: techniques to control source → sink transmissions in synchronous groupware, including teleconferences and chatspaces, online role-playing games, and virtual concerts.

  7. The Collaborative Lecture Annotation System (CLAS): A New TOOL for Distributed Learning

    ERIC Educational Resources Information Center

    Risko, E. F.; Foulsham, T.; Dawson, S.; Kingstone, A.

    2013-01-01

    In the context of a lecture, the capacity to readily recognize and synthesize key concepts is crucial for comprehension and overall educational performance. In this paper, we introduce a tool, the Collaborative Lecture Annotation System (CLAS), which has been developed to make the extraction of important information a more collaborative and…

  8. Properties of four real world collaboration--competition networks

    NASA Astrophysics Data System (ADS)

    Fu, Chun-Hua; Xu, Xiu-Lian; He, Da-Ren

    2009-03-01

    Our research group has empirically investigated 9 real world collaboration networks and 25 real world cooperation-competition networks. Among the 34 real world systems, all 9 real world collaboration networks and 6 real world cooperation-competition networks show the unimodal act-size distribution and the shifted power law distribution of degree and act-degree. We have proposed a collaboration network evolution model for an explanation of the rules [1]. The other 14 real world cooperation-competition networks show that the act-size distributions are not unimodal; instead, they take qualitatively the same shifted power law forms as the degree and act-degree distributions. The properties of four systems (the mainland movie film network, Beijing restaurant network, 2004 Olympic network, and Tao-Bao notebook computer sale network) are reported in detail as examples. Via a numerical simulation, we show that the new rule can still be explained by the above-mentioned model. [1] H. Chang, B. B. Su, et al., Physica A, 2007, 383: 687-702.
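
    The shifted power law (SPL) form these papers refer to is commonly written P(k) ∝ (k + α)^(−γ). A small sketch, with illustrative α and γ values, showing that such a distribution is linear in log-log coordinates of (k + α) versus P(k):

```python
import numpy as np

def shifted_power_law(k, alpha, gamma):
    """Unnormalized SPL: P(k) proportional to (k + alpha) ** (-gamma)."""
    return (k + alpha) ** (-gamma)

k = np.arange(1, 101)            # degrees 1..100
alpha, gamma = 2.0, 2.5          # illustrative parameters, not fitted values
p = shifted_power_law(k, alpha, gamma)
p /= p.sum()                     # normalize to a probability distribution

# Linear fit in log-log coordinates recovers the exponent -gamma.
slope = np.polyfit(np.log(k + alpha), np.log(p), 1)[0]
print(round(slope, 2))  # -2.5
```

    Fitting real act-size or act-degree data this way also requires estimating the shift α, typically by scanning α values for the best log-log linearity.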

  9. A Distributed Architecture for Tsunami Early Warning and Collaborative Decision-support in Crises

    NASA Astrophysics Data System (ADS)

    Moßgraber, J.; Middleton, S.; Hammitzsch, M.; Poslad, S.

    2012-04-01

    The presentation will describe work on the system architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". The challenges for a Tsunami Early Warning System (TEWS) are manifold, and the success of a system depends crucially on the system's architecture. A modern warning system following a system-of-systems approach has to integrate various components and sub-systems such as different information sources, services and simulation systems. Furthermore, it has to take into account the distributed and collaborative nature of warning systems. In order to create an architecture that supports the whole spectrum of a modern, distributed and collaborative warning system, one must deal with multiple challenges. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. At the bottom layer, it has to reliably integrate a large set of conventional sensors, such as seismic sensors and sensor networks, buoys and tide gauges, and also innovative and unconventional sensors, such as streams of messages from social media services. At the top layer, it has to support collaboration on high-level decision processes and facilitate information sharing between organizations. In between, the system has to process all data and integrate information on a semantic level in a timely manner. This complex communication follows an event-driven mechanism allowing events to be published, detected and consumed by various applications within the architecture. Therefore, at the upper layer the event-driven architecture (EDA) aspects are combined with principles of service-oriented architectures (SOA) using standards for communication and data exchange. The most prominent challenges on this layer include providing a framework for information integration on a syntactic and semantic level, leveraging distributed processing resources for a scalable data processing platform, and automating data processing and decision support workflows.
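
    The event-driven mechanism described in this abstract (events published, detected, and consumed by applications) can be sketched as a minimal in-process publish/subscribe bus; the topic name and payload below are invented for the example and are not part of the TRIDEC architecture.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process publish/subscribe bus (illustrative sketch)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a consumer for a topic.
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Every handler registered for the topic consumes the event.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
alerts = []
# A hypothetical decision-support component reacting to sensor events.
bus.subscribe("sensor.tide_gauge", lambda e: alerts.append(e["level"]))
bus.publish("sensor.tide_gauge", {"station": "TG-01", "level": 3.2})
print(alerts)  # [3.2]
```

    In a real EDA/SOA combination the bus would be a distributed broker and the events standardized messages, but the publish/detect/consume decoupling is the same.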

  10. Learning from Multiple Collaborating Intelligent Tutors: An Agent-based Approach.

    ERIC Educational Resources Information Center

    Solomos, Konstantinos; Avouris, Nikolaos

    1999-01-01

    Describes an open distributed multi-agent tutoring system (MATS) and discusses issues related to learning in such open environments. Topics include modeling a one student-many teachers approach in a computer-based learning context; distributed artificial intelligence; implementation issues; collaboration; and user interaction. (Author/LRW)

  11. Diversity of multilayer networks and its impact on collaborating epidemics

    NASA Astrophysics Data System (ADS)

    Min, Yong; Hu, Jiaren; Wang, Weihong; Ge, Ying; Chang, Jie; Jin, Xiaogang

    2014-12-01

    Interacting epidemics on diverse multilayer networks are increasingly important in modeling and analyzing the diffusion processes of real complex systems. A viral agent spreading on one layer of a multilayer network can interact with its counterparts by promoting (cooperative interaction), suppressing (competitive interaction), or inducing (collaborating interaction) its diffusion on other layers. Collaborating interaction displays different patterns: (i) random collaboration, where intralayer or interlayer induction has the same probability; (ii) concentrating collaboration, where consecutive intralayer induction is guaranteed with a probability of 1; and (iii) cascading collaboration, where consecutive intralayer induction is banned with a probability of 0. In this paper, we develop a top-bottom framework that uses only two distributions, the overlaid degree distribution and edge-type distribution, to model collaborating epidemics on multilayer networks. We then characterize the response of the three collaborating patterns to structural diversity (evenness and difference of network layers). For viral agents with small transmissibility, we find that random collaboration is more effective in networks with higher diversity (high evenness and difference), while the concentrating pattern is more suitable in uneven networks. Interestingly, the cascading pattern requires a network with moderate difference and high evenness, and the moderately uneven coupling of multiple network layers can effectively increase robustness to resist cascading failure. With large transmissibility, however, we find that all collaborating patterns are more effective in high-diversity networks. Our work provides a systematic analysis of collaborating epidemics on multilayer networks. The results enhance our understanding of biotic and informative diffusion through multiple vectors.

  12. Representing situation awareness in collaborative systems: a case study in the energy distribution domain.

    PubMed

    Salmon, P M; Stanton, N A; Walker, G H; Jenkins, D; Baber, C; McMaster, R

    2008-03-01

    The concept of distributed situation awareness (DSA) is currently receiving increasing attention from the human factors community. This article investigates DSA in a collaborative real-world industrial setting by discussing the results derived from a recent naturalistic study undertaken within the UK energy distribution domain. The results describe the DSA-related information used by the networks of agents involved in the scenarios analysed, the sharing of this information between the agents and the salience of different information elements used. Thus, the structure, quality and content of each network's DSA are discussed, along with the implications for DSA theory. The findings reinforce the notion that, when viewing situation awareness (SA) in collaborative systems, it is useful to focus on the coordinated behaviour of the system itself, rather than on the individual as the unit of analysis, and suggest that the findings from such assessments can potentially be used to inform system, procedure and training design. SA is a critical commodity for teams working in industrial systems, and systems, procedures and training programmes should be designed to facilitate efficient system SA acquisition and maintenance. This article presents approaches for describing and understanding SA during real-world collaborative tasks, the outputs from which can potentially be used to inform system, training programme and procedure design.

  13. Implementation of a Web-Based Collaborative Process Planning System

    NASA Astrophysics Data System (ADS)

    Wang, Huifen; Liu, Tingting; Qiao, Li; Huang, Shuangxi

    Under the networked manufacturing environment, all phases of product manufacturing, involving design, process planning, machining and assembling, may be accomplished collaboratively by different enterprises; even different manufacturing stages of the same part may be finished collaboratively by different enterprises. Based on the self-developed networked manufacturing platform eCWS (e-Cooperative Work System), a multi-agent-based system framework for collaborative process planning is proposed. In accordance with the requirements of collaborative process planning, shared resources provided by cooperating enterprises in the course of collaboration are classified into seven classes. Then a reconfigurable and extendable resource object model is built. Decision-making strategy is also studied in this paper. Finally a collaborative process planning system, e-CAPP, is developed and applied. It provides strong support for distributed designers to collaboratively plan and optimize product processes through the network.

  14. Enhancing Collaborative Peer-to-Peer Systems Using Resource Aggregation and Caching: A Multi-Attribute Resource and Query Aware Approach

    ERIC Educational Resources Information Center

    Bandara, H. M. N. Dilum

    2012-01-01

    Resource-rich computing devices, decreasing communication costs, and Web 2.0 technologies are fundamentally changing the way distributed applications communicate and collaborate. With these changes, we envision Peer-to-Peer (P2P) systems that will allow for the integration and collaboration of peers with diverse capabilities to a virtual community…

  15. The Diesel Combustion Collaboratory: Combustion Researchers Collaborating over the Internet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. M. Pancerella; L. A. Rahn; C. Yang

    2000-02-01

    The Diesel Combustion Collaboratory (DCC) is a pilot project to develop and deploy collaborative technologies to combustion researchers distributed throughout the DOE national laboratories, academia, and industry. The result is a problem-solving environment for combustion research. Researchers collaborate over the Internet using DCC tools, which include: a distributed execution management system for running combustion models on widely distributed computers, including supercomputers; web-accessible data archiving capabilities for sharing graphical experimental or modeling data; electronic notebooks and shared workspaces for facilitating collaboration; visualization of combustion data; and video-conferencing and data-conferencing among researchers at remote sites. Security is a key aspect of the collaborative tools. In many cases, the authors have integrated these tools to allow data, including large combustion data sets, to flow seamlessly, for example, from modeling tools to data archives. In this paper the authors describe the work of a larger collaborative effort to design, implement and deploy the DCC.

  16. Evaluating the Effects of Scripted Distributed Pair Programming on Student Performance and Participation

    ERIC Educational Resources Information Center

    Tsompanoudi, Despina; Satratzemi, Maya; Xinogalos, Stelios

    2016-01-01

    The results presented in this paper contribute to research on two different areas of teaching methods: distributed pair programming (DPP) and computer-supported collaborative learning (CSCL). An evaluation study of a DPP system that supports collaboration scripts was conducted over one semester of a computer science course. Seventy-four students…

  17. Multi-Agent Framework for Virtual Learning Spaces.

    ERIC Educational Resources Information Center

    Sheremetov, Leonid; Nunez, Gustavo

    1999-01-01

    Discussion of computer-supported collaborative learning, distributed artificial intelligence, and intelligent tutoring systems focuses on the concept of agents, and describes a virtual learning environment that has a multi-agent system. Describes a model of interactions in collaborative learning and discusses agents for Web-based virtual…

  18. Learning from Listservs: Collaboration, Knowledge Exchange, and the Formation of Distributed Leadership for Farmers' Markets and the Food Movement

    ERIC Educational Resources Information Center

    Quintana, Maclovia; Morales, Alfonso

    2015-01-01

    Computer-mediated communications, in particular listservs, can be powerful tools for creating social change--namely, shifting our food system to a more healthy, just, and localised model. They do this by creating the conditions--collaborations, interaction, self-reflection, and personal empowerment--that cultivate distributed leadership. In this…

  19. Collaborative environments for capability-based planning

    NASA Astrophysics Data System (ADS)

    McQuay, William K.

    2005-05-01

    Distributed collaboration is an emerging technology for the 21st century that will significantly change how business is conducted in the defense and commercial sectors. Collaboration involves two or more geographically dispersed entities working together to create a "product" by sharing and exchanging data, information, and knowledge. A product is defined broadly to include, for example, writing a report, creating software, designing hardware, or implementing robust systems engineering and capability planning processes in an organization. Collaborative environments provide the framework and integrate models, simulations, domain-specific tools, and virtual test beds to facilitate collaboration between the multiple disciplines needed in the enterprise. The Air Force Research Laboratory (AFRL) is conducting a leading-edge program in developing distributed collaborative technologies targeted to the Air Force's implementation of systems engineering for simulation-aided acquisition and capability-based planning. The research is focusing on the open systems agent-based framework, product and process modeling, structural architecture, and the integration technologies - the glue to integrate the software components. In the past four years, two live assessment events have been conducted to demonstrate the technology in support of research for the Air Force Agile Acquisition initiatives. The AFRL Collaborative Environment concept will foster a major cultural change in how the acquisition, training, and operational communities conduct business.

  20. National Storage Laboratory: a collaborative research project

    NASA Astrophysics Data System (ADS)

    Coyne, Robert A.; Hulen, Harry; Watson, Richard W.

    1993-01-01

    The grand challenges of science and industry that are driving computing and communications have created corresponding challenges in information storage and retrieval. An industry-led collaborative project has been organized to investigate technology for storage systems that will be the future repositories of national information assets. Industry participants are IBM Federal Systems Company, Ampex Recording Systems Corporation, General Atomics DISCOS Division, IBM ADSTAR, Maximum Strategy Corporation, Network Systems Corporation, and Zitel Corporation. Industry members of the collaborative project are funding their own participation. Lawrence Livermore National Laboratory, through its National Energy Research Supercomputer Center (NERSC), will participate in the project as the operational site and provider of applications. The expected result is the creation of a National Storage Laboratory to serve as a prototype and demonstration facility. It is expected that this prototype will represent a significant advance in the technology for distributed storage systems capable of handling gigabyte-class files at gigabit-per-second data rates. Specifically, the collaboration expects to make significant advances in hardware, software, and systems technology in four areas of need: (1) network-attached high performance storage; (2) multiple, dynamic, distributed storage hierarchies; (3) layered access to storage system services; and (4) storage system management.

  1. A Collaboration Network Model Of Cytokine-Protein Network

    NASA Astrophysics Data System (ADS)

    Zou, Sheng-Rong; Zhou, Ta; Peng, Yu-Jing; Guo, Zhong-Wei; Gu, Chang-Gui; He, Da-Ren

    2008-03-01

    Complex networks provide us with a new view for the investigation of immune systems. We collect data through the STRING database and present a network description with a cooperation network model. The cytokine-protein network model we consider is constituted by two kinds of nodes: immune cytokine types, which can be regarded as collaboration acts, and protein types, which can be regarded as collaboration actors. From the act degree distribution, which can be well described by typical SPL (shifted power law) functions [1], we find that HRAS, TNFRSF13C, S100A8, S100A1, MAPK8, S100A7, LIF, CCL4 and CXCL13 are highly collaborated with other proteins. This reveals that these mediators are important in the cytokine-protein network for regulating immune activity. A dyad in the collaboration networks can be defined as two proteins that appear in one cytokine collaboration relationship. The dyad act degree distribution can also be well described by typical SPL functions. [1] Assortativity and act degree distribution of some collaboration networks, Hui Chang, Bei-Bei Su, Yue-Ping Zhou, Da-Ren He, Physica A, 383 (2007) 687-702.
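
    The "act degree" used here (the number of collaboration acts, i.e. cytokines, that an actor/protein participates in) is straightforward to compute from bipartite records. The edges below are made up for illustration and are not the STRING data.

```python
from collections import Counter

# Hypothetical bipartite records: (cytokine_act, protein_actor).
edges = [
    ("IL-6", "HRAS"), ("IL-6", "MAPK8"), ("TNF", "HRAS"),
    ("TNF", "S100A8"), ("IFN-g", "HRAS"), ("IFN-g", "LIF"),
]

# Act degree of a protein = number of cytokine acts it joins.
act_degree = Counter(protein for _, protein in edges)

# Act degree distribution: how many proteins have each act degree.
distribution = Counter(act_degree.values())
print(dict(act_degree))    # {'HRAS': 3, 'MAPK8': 1, 'S100A8': 1, 'LIF': 1}
print(dict(distribution))  # {3: 1, 1: 3}
```

    On real data the resulting distribution is what the authors fit with the shifted power law form.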

  2. Distributed leadership to mobilise capacity for accreditation research.

    PubMed

    Greenfield, David; Braithwaite, Jeffrey; Pawsey, Marjorie; Johnson, Brian; Robinson, Maureen

    2009-01-01

    Inquiries into healthcare organisations have highlighted organisational or system failure, attributed to poor responses to early warning signs. One response, and challenge, is for professionals and academics to build capacity for quality and safety research to provide evidence for improved systems. However, such collaborations and capacity building do not occur easily as there are many stakeholders. Leadership is necessary to unite differences into a common goal. The lessons learned and principles arising from the experience of providing distributed leadership to mobilise capacity for quality and safety research when researching health care accreditation in Australia are presented. A case study, structured by temporal bracketing, presents a narrative account of multi-stakeholder perspectives. Data are collected using in-depth informal interviews with key informants and ethno-document analysis. Distributed leadership enabled a collaborative research partnership to be realised. The leadership harnessed the relative strengths of partners and accounted for, and balanced, the interests of the stakeholder participants involved. Across three phases, leadership and the research partnership were enacted: identifying partnerships, bottom-up engagement and enacting the research collaboration. Two principles to maximise opportunities to mobilise capacity for quality and safety research have been identified. First, successful collaborations, particularly multi-faceted inter-related partnerships, require distributed leadership. Second, leadership-stakeholder enactment can promote reciprocity, so that the collaboration becomes mutually reinforcing and beneficial to partners. The paper addresses the need to understand the practice and challenges of distributed leadership and how to replicate positive practices to implement patient safety research.

  3. Distributed Deliberative Recommender Systems

    NASA Astrophysics Data System (ADS)

    Recio-García, Juan A.; Díaz-Agudo, Belén; González-Sanz, Sergio; Sanchez, Lara Quijano

    Case-Based Reasoning (CBR) is one of the most successful applied AI technologies of recent years. Although many CBR systems reason locally on a previous experience base to solve new problems, in this paper we focus on distributed retrieval processes working on a network of collaborating CBR systems. In such systems, each node in a network of CBR agents collaborates with other nodes, exchanging arguments and counterarguments about its local results, to improve the performance of the system's global response. We describe D2ISCO: a framework to design and implement deliberative and collaborative CBR systems, integrated as a part of jCOLIBRI 2, an established framework in the CBR community. We apply D2ISCO to one particular simplified type of CBR system: recommender systems. We perform a first case study for a collaborative music recommender system and present the results of an experiment on the accuracy of the system's results, using a fuzzy version of the argumentation system AMAL and a network topology based on a social network. Besides individual recommendation, we also discuss how D2ISCO can be used to improve recommendations to groups, and we present a second case study based on the movie recommendation domain, with heterogeneous groups according to the group personality composition and a group topology based on a social network.

  4. A Semantics-Based Information Distribution Framework for Large Web-Based Course Forum System

    ERIC Educational Resources Information Center

    Chim, Hung; Deng, Xiaotie

    2008-01-01

    We propose a novel data distribution framework for developing a large Web-based course forum system. In the distributed architectural design, each forum server is fully equipped with the ability to support some course forums independently. The forum servers collaborating with each other constitute the whole forum system. Therefore, the workload of…

  5. The Pioneering Role of the Vaccine Safety Datalink Project (VSD) to Advance Collaborative Research and Distributed Data Networks

    PubMed Central

    Fahey, Kevin R.

    2015-01-01

    Introduction: Large-scale distributed data networks consisting of diverse stakeholders including providers, patients, and payers are changing health research in terms of methods, speed and efficiency. The Vaccine Safety Datalink (VSD) set the stage for expanded involvement of health plans in collaborative research. Expanding Surveillance Capacity and Progress Toward a Learning Health System: From an initial collaboration of four integrated health systems with fewer than 10 million covered lives to 16 diverse health plans with nearly 100 million lives now in the FDA Sentinel, the expanded engagement of health plan researchers has been essential to increase the value and impact of these efforts. The collaborative structure of the VSD established a pathway toward research efforts that successfully engage all stakeholders in a cohesive rather than competitive manner. The scientific expertise and methodology developed through the VSD, such as rapid cycle analysis (RCA) to conduct near real-time safety surveillance, allowed for the development of the expanded surveillance systems that now exist. Building on Success and Lessons Learned: These networks have learned from and built on the knowledge base and infrastructure created by the VSD investigators. This shared technical knowledge and experience expedited the development of systems like the FDA's Mini-Sentinel and the Patient Centered Outcomes Research Institute (PCORI)'s PCORnet. Conclusion: This narrative reviews the evolution of the VSD, its contribution to other collaborative research networks, longer-term sustainability of this type of distributed research, and how knowledge gained from the earlier efforts can contribute to a continually learning health system. PMID:26793736

  6. SMART-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hodge, Bri-Mathias

    This presentation provides a SMART-DS project overview and status update for the 2017 ARPA-E GRID DATA program meeting, including distribution systems, models, and scenarios, as well as opportunities for GRID DATA collaborations.

  7. Chance-Constrained System of Systems Based Operation of Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kargarian, Amin; Fu, Yong; Wu, Hongyu

    In this paper, a chance-constrained system of systems (SoS) based decision-making approach is presented for stochastic scheduling of power systems encompassing active distribution grids. Based on the concept of SoS, the independent system operator (ISO) and distribution companies (DISCOs) are modeled as self-governing systems. These systems collaborate with each other to run the entire power system in a secure and economic manner. Each self-governing system accounts for its local reserve requirements and line flow constraints with respect to the uncertainties of load and renewable energy resources. A set of chance constraints are formulated to model the interactions between the ISO and DISCOs. The proposed model is solved by using the analytical target cascading (ATC) method, a distributed optimization algorithm in which only a limited amount of information is exchanged between the collaborative ISO and DISCOs. In this paper, a 6-bus and a modified IEEE 118-bus power system are studied to show the effectiveness of the proposed algorithm.
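
    The ATC-style coordination described here can be illustrated with a toy quadratic exchange, assuming a single shared boundary variable (a tie-line flow): an "ISO" and a "DISCO" each minimize a local cost plus a penalty pulling their copies of the variable together, exchanging only that value. The costs and the fixed penalty weight are invented; real ATC also updates penalty multipliers and handles full network constraints.

```python
# Toy ATC-style exchange on one shared variable t (tie-line flow).
# ISO local cost: (t - 4)^2 ; DISCO local cost: (t - 2)^2 (invented numbers).
# Each side minimizes its cost plus rho * (t - other_copy)^2, exchanging
# only the boundary value -- no internal network data crosses the interface.

def local_update(target, other_copy, rho):
    # Closed-form minimizer of (t - target)^2 + rho * (t - other_copy)^2.
    return (target + rho * other_copy) / (1 + rho)

t_iso, t_disco, rho = 0.0, 0.0, 1.0
for _ in range(100):
    t_iso = local_update(4.0, t_disco, rho)
    t_disco = local_update(2.0, t_iso, rho)

# With a fixed penalty the two copies settle near (not exactly at) a common
# value; ATC closes the remaining gap by updating the penalty weights.
print(round(t_iso, 3), round(t_disco, 3))  # 3.333 2.667
```

    The point of the design, as in the abstract, is that each self-governing system solves its own subproblem and only the limited boundary information is exchanged.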

  8. Developing Distributed Collaboration Systems at NASA: A Report from the Field

    NASA Technical Reports Server (NTRS)

    Becerra-Fernandez, Irma; Stewart, Helen; Knight, Chris; Norvig, Peter (Technical Monitor)

    2001-01-01

    Web-based collaborative systems have assumed a pivotal role in the information systems development arena. While business-to-customer (B-to-C) and business-to-business (B-to-B) electronic commerce systems, search engines, and chat sites are the focus of attention, web-based systems span the gamut of information systems that were traditionally confined to internal organizational client-server networks. For example, the Domino Application Server allows Lotus Notes (trademarked) users to build collaborative intranet applications, and mySAP.com (trademarked) enables web portals and e-commerce applications for SAP users. This paper presents experiences from the development of one such system: Postdoc, a government off-the-shelf web-based collaborative environment. Issues related to the design of web-based collaborative information systems, including lessons learned from the development and deployment of the system as well as measured performance, are presented. Finally, the limitations of the implementation approach and future plans are discussed.

  9. Where Might We Be Headed? Signposts from Other States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiter, Emerson

    2017-04-07

    Presentation on the state of distributed energy resources interconnection in Wisconsin from the Wisconsin Distributed Resources Collaborative (WIDRC) Interconnection Forum for Distributed Generation. It addresses concerns over application submission and processing, lack of visibility into the distribution system, and uncertainty in upgrade costs.

  10. Distributed and collaborative synthetic environments

    NASA Technical Reports Server (NTRS)

    Bajaj, Chandrajit L.; Bernardini, Fausto

    1995-01-01

    Fast graphics workstations and increased computing power, together with improved interface technologies, have created new and diverse possibilities for developing and interacting with synthetic environments. A synthetic environment system is generally characterized by input/output devices that constitute the interface between the human senses and the synthetic environment generated by the computer; and a computation system running a real-time simulation of the environment. A basic need of a synthetic environment system is that of giving the user a plausible reproduction of the visual aspect of the objects with which he is interacting. The goal of our Shastra research project is to provide a substrate of geometric data structures and algorithms which allow the distributed construction and modification of the environment, efficient querying of objects attributes, collaborative interaction with the environment, fast computation of collision detection and visibility information for efficient dynamic simulation and real-time scene display. In particular, we address the following issues: (1) A geometric framework for modeling and visualizing synthetic environments and interacting with them. We highlight the functions required for the geometric engine of a synthetic environment system. (2) A distribution and collaboration substrate that supports construction, modification, and interaction with synthetic environments on networked desktop machines.

  11. A new security model for collaborative environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Deborah; Lorch, Markus; Thompson, Mary

    Prevalent authentication and authorization models for distributed systems provide for the protection of computer systems and resources from unauthorized use. The rules and policies that drive the access decisions in such systems are typically configured up front and require trust establishment before the systems can be used. This approach does not work well for computer software that moderates human-to-human interaction. This work proposes a new model for trust establishment and management in computer systems supporting collaborative work. The model supports the dynamic addition of new users to a collaboration with very little initial trust placed in their identity and supports the incremental building of trust relationships through endorsements from established collaborators. It also recognizes the strength of a user's authentication when making trust decisions. By mimicking the way humans build trust naturally, the model can support a wide variety of usage scenarios. Its particular strength lies in the support for ad-hoc and dynamic collaborations and the ubiquitous access to a Computer Supported Collaboration Workspace (CSCW) system from locations with varying levels of trust and security.

  12. Distributed Pair Programming Using Collaboration Scripts: An Educational System and Initial Results

    ERIC Educational Resources Information Center

    Tsompanoudi, Despina; Satratzemi, Maya; Xinogalos, Stelios

    2015-01-01

    Since pair programming appeared in the literature as an effective method of teaching computer programming, many systems were developed to cover the application of pair programming over distance. Today's systems serve personal, professional and educational purposes allowing distributed teams to work together on the same programming project. The…

  13. Angle-of-Arrival Assisted GNSS Collaborative Positioning.

    PubMed

    Huang, Bin; Yao, Zheng; Cui, Xiaowei; Lu, Mingquan

    2016-06-20

    For outdoor and global navigation satellite system (GNSS)-challenged scenarios, collaborative positioning algorithms are proposed to fuse information from GNSS satellites and terrestrial wireless systems. This paper derives the Cramer-Rao lower bound (CRLB) and algorithms for angle-of-arrival (AOA)-assisted GNSS collaborative positioning. Based on the CRLB model and the collaborative positioning algorithms, theoretical analyses are performed to quantify the effects of various factors on positioning accuracy, including the number of users, their spatial distribution, and the accuracy of the AOA measurements. In addition, the influence of the relative locations of collaborating users is discussed in order to choose appropriate neighboring users, which helps reduce computational complexity. Simulations and a field experiment were carried out with several GNSS receivers in different scenarios, and the results are consistent with the theoretical analysis.
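
    After linearization, a CRLB analysis of this kind reduces to inverting a Fisher information matrix assembled from weighted measurement Jacobians. A minimal 2-D sketch follows; the geometry, noise figures, and the AOA-to-cross-range conversion are invented for illustration, not taken from the paper.

```python
import math

def crlb_position_2d(measurements):
    """Lower bound on RMS 2-D position error from linearized measurements.

    measurements: list of (hx, hy, sigma) where (hx, hy) is the Jacobian
    row (e.g. a unit line-of-sight vector for a GNSS pseudorange, or the
    tangential direction for an AOA measurement) and sigma is the
    measurement standard deviation. Returns sqrt(trace(FIM^-1)).
    """
    f11 = f12 = f22 = 0.0
    for hx, hy, sigma in measurements:
        w = 1.0 / (sigma * sigma)       # Fisher weight of this row
        f11 += w * hx * hx
        f12 += w * hx * hy
        f22 += w * hy * hy
    det = f11 * f22 - f12 * f12
    if det <= 0:
        raise ValueError("geometry is not observable")
    # trace of the inverse of a 2x2 matrix: (f11 + f22) / det
    return math.sqrt((f11 + f22) / det)

# Two GNSS lines of sight with similar azimuths give weak geometry.
gnss = [(1.0, 0.0, 1.0), (0.966, 0.259, 1.0)]
# One AOA measurement from a neighbor ~100 m away with 1-degree accuracy:
# tangential direction, effective sigma = range * angle_sigma.
aoa_sigma = 100.0 * math.radians(1.0)
combined = gnss + [(0.0, 1.0, aoa_sigma)]
bound_gnss = crlb_position_2d(gnss)
bound_combined = crlb_position_2d(combined)
# bound_combined is well below bound_gnss: the AOA row tightens the CRLB
```

    This mirrors the abstract's qualitative conclusion: the gain from collaboration depends on the relative geometry of the neighbors and the AOA measurement accuracy, both of which enter the bound through the weighted Jacobian rows.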

  14. Quid pro quo: a mechanism for fair collaboration in networked systems.

    PubMed

    Santos, Agustín; Fernández Anta, Antonio; López Fernández, Luis

    2013-01-01

    Collaboration may be understood as the execution of coordinated tasks (in the most general sense) by groups of users, who cooperate for achieving a common goal. Collaboration is a fundamental assumption and requirement for the correct operation of many communication systems. The main challenge when creating collaborative systems in a decentralized manner is dealing with the fact that users may behave in selfish ways, trying to obtain the benefits of the tasks but without participating in their execution. In this context, Game Theory has been instrumental to model collaborative systems and the task allocation problem, and to design mechanisms for optimal allocation of tasks. In this paper, we revise the classical assumptions of these models and propose a new approach to this problem. First, we establish a system model based on heterogeneous nodes (users, players), and propose a basic distributed mechanism so that, when a new task appears, it is assigned to the most suitable node. The classical technique for compensating a node that executes a task is the use of payments (which in most networks are hard or impossible to implement). Instead, we propose a distributed mechanism for the optimal allocation of tasks without payments. We prove this mechanism to be robust even in the presence of independent selfish or rationally limited players. Additionally, our model is based on very weak assumptions, which makes the proposed mechanisms well suited to implementation in networked systems (e.g., the Internet).
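
    A payment-free allocator in the spirit described above can be sketched by replacing monetary transfers with a per-node work ledger; the suitability metric and the tie-breaking rule below are illustrative assumptions, not the paper's actual mechanism.

```python
# Sketch of a payment-free "quid pro quo" allocator: instead of monetary
# transfers, each node accumulates a record of executed work, and ties
# between equally suitable nodes are broken in favor of the node that has
# contributed least so far.

class FairAllocator:
    def __init__(self, capacities):
        # capacities[node] = cost for that node to run one task unit
        self.capacities = dict(capacities)
        self.work_done = {n: 0 for n in capacities}

    def assign(self, task_size):
        # Most suitable node = lowest cost; accumulated work is the
        # tie-breaker, so equally capable nodes alternate fairly.
        node = min(self.capacities,
                   key=lambda n: (self.capacities[n], self.work_done[n]))
        self.work_done[node] += task_size
        return node

alloc = FairAllocator({"a": 1.0, "b": 1.0, "c": 3.0})
order = [alloc.assign(1) for _ in range(4)]
# the equally cheap nodes "a" and "b" alternate; "c" is never chosen
```

    The ledger plays the role that payments play in classical mechanisms: a node that free-rides accumulates no contribution record and is deprioritized, without requiring any monetary infrastructure.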

  15. ICCE/ICCAI 2000 Full & Short Papers (Collaborative Learning).

    ERIC Educational Resources Information Center

    2000

    This document contains the full and short papers on collaborative learning from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction) covering the following topics: comparison of applying Internet to cooperative and traditional learning; a distributed backbone system for…

  16. Efficacy of Floor Control Protocols in Distributed Multimedia Collaboration

    DTIC Science & Technology

    1999-01-01

    advanced considerably, support for such controlled group interaction, particularly for applications geared towards synchronous and wide-area groupwork ... transaction-oriented collaboration, and synchronous groupwork is limited mostly to text and chatting. The JETS system [32] is a recent example of a Java-based

  17. Collaboration systems for classroom instruction

    NASA Astrophysics Data System (ADS)

    Chen, C. Y. Roger; Meliksetian, Dikran S.; Chang, Martin C.

    1996-01-01

    In this paper we discuss how classroom instruction can benefit from state-of-the-art technologies in networks, worldwide web access through Internet, multimedia, databases, and computing. Functional requirements for establishing such a high-tech classroom are identified, followed by descriptions of our current experimental implementations. The focus of the paper is on the capabilities of distributed collaboration, which supports both synchronous multimedia information sharing as well as a shared work environment for distributed teamwork and group decision making. Our ultimate goal is to achieve the concept of 'living world in a classroom' such that live and dynamic up-to-date information and material from all over the world can be integrated into classroom instruction on a real-time basis. We describe how we incorporate application developments in a geography study tool, worldwide web information retrievals, databases, and programming environments into the collaborative system.

  18. Distributed Planning in a Mixed-Initiative Environment: Collaborative Technologies for Network Centric Operations

    DTIC Science & Technology

    2008-10-01

    Agents in the DEEP architecture extend and use the Java Agent DEvelopment framework (JADE). DEEP requires a distributed multi-agent system and a ... framework to help simplify the implementation of this system. JADE was chosen because it is fully implemented in Java and supports these requirements

  19. Automated Planning and Scheduling for Planetary Rover Distributed Operations

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Rabideau, Gregg; Tso, Kam S.; Chien, Steve

    1999-01-01

    Automated planning and scheduling, including automated path planning, has been integrated with an Internet-based distributed operations system for planetary rover operations. The resulting prototype system enables faster generation of valid rover command sequences by a distributed planetary rover operations team. The Web Interface for Telescience (WITS) provides Internet-based distributed collaboration, the Automated Scheduling and Planning Environment (ASPEN) provides automated planning and scheduling, and an automated path planner provides path planning. The system was demonstrated on the Rocky 7 research rover at JPL.

  20. An Overview of the CERC ARTEMIS Project

    PubMed Central

    Jagannathan, V.; Reddy, Y. V.; Srinivas, K.; Karinthi, R.; Shank, R.; Reddy, S.; Almasi, G.; Davis, T.; Raman, R.; Qiu, S.; Friedman, S.; Merkin, B.; Kilkenny, M.

    1995-01-01

    The basic premise of this effort is that health care can be made more effective and affordable by applying modern computer technology to improve collaboration among diverse and distributed health care providers. Information sharing, communication, and coordination are basic elements of any collaborative endeavor. In the health care domain, collaboration is characterized by cooperative activities by health care providers to deliver total and real-time care for their patients. Communication between providers and managed access to distributed patient records should enable health care providers to make informed decisions about their patients in a timely manner. With an effective medical information infrastructure in place, a patient will be able to visit any health care provider with access to the network, and the provider will be able to use relevant information from even the last episode of care in the patient record. Such a patient-centered perspective is in keeping with the real mission of health care providers. Today, an easy-to-use, integrated health care network is not in place in any community, even though current technology makes such a network possible. Large health care systems have deployed partial and disparate systems that address different elements of collaboration. But these islands of automation have not been integrated to facilitate cooperation among health care providers in large communities or nationally. CERC and its team members at Valley Health Systems, Inc., St. Marys Hospital and Cabell Huntington Hospital form a consortium committed to improving collaboration among the diverse and distributed providers in the health care arena. 
As the first contract recipient of the multi-agency High Performance Computing and Communications (HPCC) Initiative, this team of computer system developers, practicing rural physicians, community care groups, health care researchers, and tertiary care providers is using research prototypes and commercial off-the-shelf technologies to develop an open collaboration environment for the health care domain. This environment is called ARTEMIS — Advanced Research TEstbed for Medical InformaticS. PMID:8563249

  1. The Application of Collaborative Business Intelligence Technology in the Hospital SPD Logistics Management Model.

    PubMed

    Liu, Tongzhu; Shen, Aizong; Hu, Xiaojian; Tong, Guixian; Gu, Wei

    2017-06-01

    We aimed to apply a collaborative business intelligence (BI) system to the hospital supply, processing and distribution (SPD) logistics management model. We searched the Engineering Village database, China National Knowledge Infrastructure (CNKI) and Google for articles (published from 2011 to 2016), books, Web pages, etc., to understand SPD- and BI-related theories and recent research status. For the application of collaborative BI technology in the hospital SPD logistics management model, we leveraged data mining techniques to discover knowledge from complex data and collaborative techniques to improve the theories of business process. For the application of the BI system, we: (i) proposed a layered structure of a collaborative BI system for intelligent management in hospital logistics; (ii) built a data warehouse for the collaborative BI system; (iii) improved data mining techniques such as support vector machines (SVM) and the swarm intelligence firefly algorithm to solve key problems in the hospital logistics collaborative BI system; (iv) researched collaborative techniques oriented to data and business process optimization to improve the business processes of hospital logistics management. Proper combination of the SPD model and BI system will improve the management of logistics in hospitals. The successful implementation of the study requires: (i) innovating and improving the traditional SPD model and making appropriate implementation plans and schedules for the application of the BI system according to the actual situations of hospitals; (ii) the collaborative participation of internal hospital departments, including the information, logistics, nursing, medical and financial departments; (iii) timely response of external suppliers.

  2. Protection of Location Privacy Based on Distributed Collaborative Recommendations

    PubMed Central

    Wang, Peng; Yang, Jing; Zhang, Jian-Pei

    2016-01-01

    In the existing centralized location services architecture, the server is easily attacked and becomes a communication bottleneck, which can lead to the disclosure of users’ locations. To address this, we present a new distributed collaborative recommendation strategy built on a distributed system. In this strategy, each node maintains a profile of its own location information. When a request for location services arises, the user can obtain the corresponding location services according to recommendations drawn from neighboring users’ location information profiles. If no suitable recommended location service results are obtained, the user can send a service request to the server using the centroid of a k-anonymous set constructed from the neighbors’ positions. In this strategy, we designed a new model of distributed collaborative recommendation location service based on the users’ location information profiles and used generalization and encryption to ensure the safety of the users’ location privacy. Finally, we used a real location data set for theoretical and experimental analysis. The results show that the proposed strategy is capable of reducing the frequency of access to the location server, providing better location services, and better protecting users’ location privacy. PMID:27649308
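
    The k-anonymous fallback step described above can be sketched in a few lines; the function name, coordinate layout, and neighbor selection are assumptions for illustration.

```python
# When no neighbor profile can answer the query, the client hides its
# true position inside a k-anonymous set and queries the server with the
# set's centroid instead of its real location.

def k_anonymous_query_point(own_pos, neighbor_positions, k):
    """Return the centroid of the user's position plus k-1 neighbors."""
    if len(neighbor_positions) < k - 1:
        raise ValueError("not enough neighbors for k-anonymity")
    cloak = [own_pos] + list(neighbor_positions[:k - 1])
    xs = sum(p[0] for p in cloak) / k
    ys = sum(p[1] for p in cloak) / k
    return (xs, ys)

center = k_anonymous_query_point((0.0, 0.0),
                                 [(4.0, 0.0), (0.0, 4.0), (4.0, 4.0)], k=4)
# -> (2.0, 2.0): the server sees only the cloaked centroid
```

    The server can still answer a nearby-services query around the centroid, but cannot distinguish the requester from the other k-1 members of the cloaking set.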

  3. Protection of Location Privacy Based on Distributed Collaborative Recommendations.

    PubMed

    Wang, Peng; Yang, Jing; Zhang, Jian-Pei

    2016-01-01

    In the existing centralized location services architecture, the server is easily attacked and becomes a communication bottleneck, which can lead to the disclosure of users' locations. To address this, we present a new distributed collaborative recommendation strategy built on a distributed system. In this strategy, each node maintains a profile of its own location information. When a request for location services arises, the user can obtain the corresponding location services according to recommendations drawn from neighboring users' location information profiles. If no suitable recommended location service results are obtained, the user can send a service request to the server using the centroid of a k-anonymous set constructed from the neighbors' positions. In this strategy, we designed a new model of distributed collaborative recommendation location service based on the users' location information profiles and used generalization and encryption to ensure the safety of the users' location privacy. Finally, we used a real location data set for theoretical and experimental analysis. The results show that the proposed strategy is capable of reducing the frequency of access to the location server, providing better location services, and better protecting users' location privacy.

  4. HEPLIB `91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

    The purpose of this meeting is to discuss current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  5. HEPLIB 91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

    The purpose of this meeting is to discuss current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  6. Making the Health Insurance Flexibility and Accountability (HIFA) waiver work through collaborative governance.

    PubMed

    Zabawa, Barbara J

    2003-01-01

    This paper argues that collaborative governance should be an essential component of any HIFA waiver proposal, because the health care system is moving away from a federal and hierarchical program design and implementation toward a more local, collaborative approach. As several current collaborative projects demonstrate, collaboration may overcome barriers to health expansion program success, such as stakeholder buy-in, notice, and state access to private health coverage information. Furthermore, collaboration within the context of the HIFA waiver process may maximize the strengths of current collaborations, such as providing: (a) access to greater and more stable funding sources; (b) access to a facilitator that can collect and distribute data; and (c) an avenue for accountability. Multiple challenges in ensuring collaborative governance are reviewed. Ms. Zabawa argues that these challenges are not insurmountable if states adopt a truly collaborative approach to designing and implementing programs under the HIFA waiver; there may be hope in expanding and improving health coverage, since collaboration is the most appropriate mechanism to address the complexity of health system reform.

  7. Web-based GIS for collaborative planning and public participation: an application to the strategic planning of wind farm sites.

    PubMed

    Simão, Ana; Densham, Paul J; Haklay, Mordechai Muki

    2009-05-01

    Spatial planning typically involves multiple stakeholders. To any specific planning problem, stakeholders often bring different levels of knowledge about the components of the problem and make assumptions, reflecting their individual experiences, that yield conflicting views about desirable planning outcomes. Consequently, stakeholders need to learn about the likely outcomes that result from their stated preferences; this learning can be supported through enhanced access to information, increased public participation in spatial decision-making and support for distributed collaboration amongst planners, stakeholders and the public. This paper presents a conceptual system framework for web-based GIS that supports public participation in collaborative planning. The framework combines an information area, a Multi-Criteria Spatial Decision Support System (MC-SDSS) and an argumentation map to support distributed and asynchronous collaboration in spatial planning. After analysing the novel aspects of this framework, the paper describes its implementation, as a proof of concept, in a system for Web-based Participatory Wind Energy Planning (WePWEP). Details are provided on the specific implementation of each of WePWEP's four tiers, including technical and structural aspects. Throughout the paper, particular emphasis is placed on the need to support user learning throughout the planning process.

  8. A Distributed Multi-Agent System for Collaborative Information Management and Learning

    NASA Technical Reports Server (NTRS)

    Chen, James R.; Wolfe, Shawn R.; Wragg, Stephen D.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this paper, we present DIAMS, a system of distributed, collaborative agents to help users access, manage, share and exchange information. A DIAMS personal agent helps its owner find information most relevant to current needs. It provides tools and utilities for users to manage their information repositories with dynamic organization and virtual views. Flexible hierarchical display is integrated with indexed query search to support effective information access. Automatic indexing methods are employed to support user queries and communication between agents. Contents of a repository are kept in object-oriented storage to facilitate information sharing. Collaboration between users is aided by easy sharing utilities as well as automated information exchange. Matchmaker agents are designed to establish connections between users with similar interests and expertise. DIAMS agents provide needed services for users to share and learn information from one another on the World Wide Web.

  9. Collaborative Information Agents on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Chen, James R.; Mathe, Nathalie; Wolfe, Shawn; Koga, Dennis J. (Technical Monitor)

    1998-01-01

    In this paper, we present DIAMS, a system of distributed, collaborative information agents which help users access, collect, organize, and exchange information on the World Wide Web. Personal agents provide their owners dynamic displays of well organized information collections, as well as friendly information management utilities. Personal agents exchange information with one another. They also work with other types of information agents such as matchmakers and knowledge experts to facilitate collaboration and communication.

  10. A framework using cluster-based hybrid network architecture for collaborative virtual surgery.

    PubMed

    Qin, Jing; Choi, Kup-Sze; Poon, Wai-Sang; Heng, Pheng-Ann

    2009-12-01

    Research on collaborative virtual environments (CVEs) opens the opportunity for simulating the cooperative work in surgical operations. It is, however, a challenging task to implement a high performance collaborative surgical simulation system because of the difficulty in maintaining state consistency with minimum network latencies, especially when sophisticated deformable models and haptics are involved. In this paper, an integrated framework using cluster-based hybrid network architecture is proposed to support collaborative virtual surgery. Multicast transmission is employed to transmit updated information among participants in order to reduce network latencies, while system consistency is maintained by an administrative server. Reliable multicast is implemented using distributed message acknowledgment based on cluster cooperation and sliding window technique. The robustness of the framework is guaranteed by the failure detection chain which enables smooth transition when participants join and leave the collaboration, including normal and involuntary leaving. Communication overhead is further reduced by implementing a number of management approaches such as computational policies and collaborative mechanisms. The feasibility of the proposed framework is demonstrated by successfully extending an existing standalone orthopedic surgery trainer into a collaborative simulation system. A series of experiments have been conducted to evaluate the system performance. The results demonstrate that the proposed framework is capable of supporting collaborative surgical simulation.
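
    The acknowledgment bookkeeping behind reliable multicast with a sliding window can be sketched as follows; the cluster granularity and window size are illustrative assumptions, and a real system would add timeouts and retransmission on top of this.

```python
# Sliding-window sender state for reliable multicast: a message leaves
# the window only after every cluster's representative has acknowledged
# it, which caps the number of in-flight updates.

class MulticastWindow:
    def __init__(self, clusters, window_size=4):
        self.clusters = set(clusters)
        self.window_size = window_size
        self.next_seq = 0
        self.pending = {}          # seq -> set of clusters still missing

    def can_send(self):
        return len(self.pending) < self.window_size

    def send(self, payload):
        if not self.can_send():
            raise RuntimeError("window full; wait for acknowledgments")
        seq = self.next_seq
        self.next_seq += 1
        self.pending[seq] = set(self.clusters)
        return seq

    def ack(self, seq, cluster):
        missing = self.pending.get(seq)
        if missing is not None:
            missing.discard(cluster)
            if not missing:
                del self.pending[seq]   # fully acknowledged: window slides
```

    Aggregating acknowledgments per cluster rather than per participant is what keeps the overhead low: the sender tracks one entry per cluster, and intra-cluster recovery is handled locally.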

  11. Collaboration for Transformation: Community-Campus Engagement for Just and Sustainable Food Systems

    ERIC Educational Resources Information Center

    Levkoe, Charles Z.; Andrée, Peter; Bhatt, Vikram; Brynne, Abra; Davison, Karen M.; Kneen, Cathleen; Nelson, Erin

    2016-01-01

    This article focuses on the collaborations between academics and community-based organizations seeking to fundamentally reorganize the way food is produced, distributed, and consumed as well as valued. The central research question investigates whether and how the growth of community--campus engagement (CCE) can strengthen food movements. Drawing…

  12. Software Management for the NOνAExperiment

    NASA Astrophysics Data System (ADS)

    Davies, G. S.; Davies, J. P.; C Group; Rebel, B.; Sachdev, K.; Zirnstein, J.

    2015-12-01

    The NOvA software (NOνASoft) is written in C++ and built on the Fermilab Computing Division's art framework, which uses ROOT analysis software. NOνASoft makes use of more than 50 external software packages, is developed by more than 50 developers, and is used by more than 100 physicists from over 30 universities and laboratories on three continents. The software builds are handled by Fermilab's custom version of Software Release Tools (SRT), a UNIX-based software management system for large, collaborative projects that is used by several experiments at Fermilab. The system provides software version control with SVN configured in a client-server mode and is based on the code originally developed by the BaBar collaboration. In this paper, we present efforts towards distributing the NOvA software via the CernVM File System distributed file system. We also describe our recent work to use a CMake build system and Jenkins, the open-source continuous integration system, for NOνASoft.

  13. The Application of Collaborative Business Intelligence Technology in the Hospital SPD Logistics Management Model

    PubMed Central

    LIU, Tongzhu; SHEN, Aizong; HU, Xiaojian; TONG, Guixian; GU, Wei

    2017-01-01

    Background: We aimed to apply a collaborative business intelligence (BI) system to the hospital supply, processing and distribution (SPD) logistics management model. Methods: We searched the Engineering Village database, China National Knowledge Infrastructure (CNKI) and Google for articles (published from 2011 to 2016), books, Web pages, etc., to understand SPD- and BI-related theories and recent research status. For the application of collaborative BI technology in the hospital SPD logistics management model, we leveraged data mining techniques to discover knowledge from complex data and collaborative techniques to improve the theories of business process. Results: For the application of the BI system, we: (i) proposed a layered structure of a collaborative BI system for intelligent management in hospital logistics; (ii) built a data warehouse for the collaborative BI system; (iii) improved data mining techniques such as support vector machines (SVM) and the swarm intelligence firefly algorithm to solve key problems in the hospital logistics collaborative BI system; (iv) researched collaborative techniques oriented to data and business process optimization to improve the business processes of hospital logistics management. Conclusion: Proper combination of the SPD model and BI system will improve the management of logistics in hospitals. The successful implementation of the study requires: (i) innovating and improving the traditional SPD model and making appropriate implementation plans and schedules for the application of the BI system according to the actual situations of hospitals; (ii) the collaborative participation of internal hospital departments, including the information, logistics, nursing, medical and financial departments; (iii) timely response of external suppliers. PMID:28828316

  14. Training Students in Distributed Collaboration: Experiences from Two Pilot Projects.

    ERIC Educational Resources Information Center

    Munkvold, Bjorn Erik; Line, Lars

    Distributed collaboration supported by different forms of information and communication technologies (ICT) is becoming increasingly widespread. Effective realization of technology supported, distributed collaboration requires learning and careful attention to both technological and organizational aspects of the collaboration. Despite increasing…

  15. Collaborative Scheduling Using JMS in a Mixed Java and .NET Environment

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Wax, Allan; Lam, Ray; Baldwin, John; Borden, Chet

    2006-01-01

    A collaborative framework/environment was prototyped to prove the feasibility of scheduling space flight missions on NASA's Deep Space Network (DSN) in a distributed fashion. In this environment, effective collaboration relies on efficient communications among all flight mission and DSN scheduling users. Therefore, messaging becomes critical to timely event notification and data synchronization. In the prototype, a rapid messaging system using the Java Message Service (JMS) in a mixed Java and .NET environment is established. This scheme allows both Java and .NET applications to communicate with each other for data synchronization and schedule negotiation. The JMS approach we used is based on a centralized messaging scheme. With proper use of a high-speed messaging system, all users in this collaborative framework can communicate with each other to generate a schedule collaboratively that meets DSN and project tracking needs.
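    Since JMS is Java-only, a language-neutral sketch can still convey the centralized topic-based messaging scheme the abstract describes. The broker class, topic name, and message fields below are invented for illustration; a real deployment would use a JMS provider rather than an in-memory dictionary.

```python
from collections import defaultdict

class MessageBroker:
    """Minimal centralized topic-based publish/subscribe broker (illustrative only)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to be invoked for every message on the topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        for callback in self.subscribers[topic]:
            callback(message)

broker = MessageBroker()
received = []
broker.subscribe("dsn.schedule", received.append)
broker.publish("dsn.schedule", {"pass": "DSS-14", "status": "requested"})
```

    In the prototype's terms, each scheduling client, whether Java or .NET, would subscribe to the topics for the assets it negotiates over and publish its schedule changes to the same broker.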

  16. The AMPATH Nutritional Information System: Designing a Food Distribution Electronic Record System in Rural Kenya

    PubMed Central

    Lim, Jason LitJeh; Yih, Yuehwern; Gichunge, Catherine; Tierney, William M.; Le, Tung H.; Zhang, Jun; Lawley, Mark A.; Petersen, Tomeka J.; Mamlin, Joseph J.

    2009-01-01

    Objective The AMPATH program is a leading initiative in rural Kenya providing healthcare services to combat HIV. Malnutrition and food insecurity are common among AMPATH patients, and the Nutritional Information System (NIS) was designed, through cross-functional collaboration between the engineering and medical communities, as a comprehensive electronic system to record and assist in effective food distribution in a region with poor infrastructure. Design The NIS was designed modularly to support the urgent need for a system for the growing food distribution program. The system manages the ordering, storage, packing, shipping, and distribution of fresh produce from AMPATH farms and dry food supplements from the World Food Programme (WFP) and U.S. Agency for International Development (USAID), based on nutritionists' prescriptions for food supplements. Additionally, the system records details of the food distributed to support future studies. Measurements Patients fed weekly, patient visits per month. Results With the inception of the NIS, the AMPATH food distribution program was able to support 30,000 persons fed weekly, up from 2,000 persons. Patient visits per month also saw a marked increase. Conclusion The NIS's modular design and frequent, effective interactions between developers and users have positively affected the design, implementation, support, and modification of the NIS. It demonstrates the success of collaboration between the engineering and medical communities and, more importantly, the feasibility of technology readily available in a modern country contributing to healthcare delivery in developing countries like Kenya and other parts of sub-Saharan Africa. PMID:19717795

  17. Laboratory for Computer Science Progress Report 21, July 1983-June 1984.

    DTIC Science & Technology

    1984-06-01

    Excerpts: ...Systems; 4. Distributed Consensus; 5. Election of a Leader in a Distributed Ring of Processors; 6. Distributed Network Algorithms; 7. Diagnosis... multiprocessor systems. This facility, funded by the newly formed Strategic Computing Program of the Defense Advanced Research Projects Agency, will enable... Academic Staff: P. Szolovits, Group Leader; R. Patil. Collaborating Investigators: M. Criscitiello, M.D., Tufts-New England Medical Center Hospital; R

  18. Laboratory information management system for membrane protein structure initiative--from gene to crystal.

    PubMed

    Troshin, Petr V; Morris, Chris; Prince, Stephen M; Papiz, Miroslav Z

    2008-12-01

    The Membrane Protein Structure Initiative (MPSI) exploits the competencies of its member laboratories by working collaboratively and distributing work among the different sites. This is possible because protein structure determination requires a series of steps, starting with target selection and proceeding through cloning, expression, purification, crystallization and finally structure determination. Distributed sites create a unique set of challenges for integrating and passing on information on the progress of targets. This role is played by the Protein Information Management System (PIMS), a laboratory information management system (LIMS) serving as a hub for the MPSI and allowing collaborative structural proteomics to be carried out in a distributed fashion. It holds key information on the progress of the cloning, expression, purification and crystallization of proteins. PIMS is employed to track the status of protein targets and to manage constructs, primers, experiments, protocols, sample locations and their detailed histories, thus playing a key role in MPSI data exchange. It also serves as the centre of a federation of interoperable information resources, such as local laboratory information systems and international archival resources like the PDB or NCBI. During the challenging task of PIMS integration within the MPSI, we discovered a number of prerequisites for successful PIMS integration. In this article we share our experiences and provide insights into the process of LIMS adaptation. This information should be of interest to partners who are thinking about using a LIMS as a data centre for their collaborative efforts.
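    A LIMS of the kind described tracks each target through a fixed sequence of steps. The sketch below uses a hypothetical class and target identifier, not the actual PIMS data model, to show the basic status-tracking idea.

```python
# the pipeline stages named in the abstract, in order
PIPELINE = ["cloning", "expression", "purification", "crystallization",
            "structure determination"]

class TargetTracker:
    """Toy status tracker for protein targets (hypothetical, not the PIMS API)."""
    def __init__(self):
        self.step = {}  # target id -> index into PIPELINE

    def advance(self, target):
        """Move a target to its next pipeline stage and return that stage's name."""
        i = self.step.get(target, -1) + 1
        if i >= len(PIPELINE):
            raise ValueError(f"{target} has already completed the pipeline")
        self.step[target] = i
        return PIPELINE[i]

tracker = TargetTracker()
tracker.advance("MPSI-001")            # first call: "cloning"
stage = tracker.advance("MPSI-001")    # second call: "expression"
```

    A real LIMS would persist these transitions with timestamps, protocols, and sample locations, so that any site in the federation can query a target's history.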

  19. Family Interaction and Consensus with IT Support.

    PubMed

    Karlsudd, Peter

    2012-01-01

    Experience shows that there are great deficiencies in information exchange and collaboration between families and professionals in the health and care sector. In an attempt to improve the quality of the efforts planned and implemented in collaboration with relatives, a family-related IT-based collaboration system called CIDC was constructed. With the intention of facilitating communication, information, documentation, and collaboration, the system was tested together with parents of children with cognitive impairment. The system contains a number of functions gathered in a so-called e-collaboration room. The person administering and distributing the system authorizes the patient/care recipient or relative to build up an e-collaboration room. The result was largely positive, but the part that was supposed to document everyday activities left much to be desired. For this reason a follow-up study was completed in which an iPad was used as a contact book, which, with the help of the Dropbox software, provided increased insight into the child and improved the contact with parents without losing confidentiality or causing extra workload for the staff. Through automatic downloads from the iPad, parents and/or contact persons could easily follow the documentation of the children's everyday activities.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This fact sheet describes the collaboration between NREL, SolarCity, and the Hawaiian Electric Companies at the Energy Systems Integration Facility (ESIF) to address the safety, reliability, and stability challenges of interconnecting high penetrations of distributed photovoltaics with the electric power system.

  1. Telearch - Integrated visual simulation environment for collaborative virtual archaeology.

    NASA Astrophysics Data System (ADS)

    Kurillo, Gregorij; Forte, Maurizio

    Archaeologists collect vast amounts of digital data around the world; however, they lack tools for integration and collaborative interaction to support the reconstruction and interpretation process. The TeleArch software aims to integrate different data sources and provide real-time interaction tools for the remote collaboration of geographically distributed scholars inside a shared virtual environment. The framework also includes audio, 2D, and 3D video streaming technology to facilitate the remote presence of users. In this paper, we present several experimental case studies to demonstrate the integration of, and interaction with, 3D models and geographical information system (GIS) data in this collaborative environment.

  2. Research and Implementation of Key Technologies in Multi-Agent System to Support Distributed Workflow

    NASA Astrophysics Data System (ADS)

    Pan, Tianheng

    2018-01-01

    In recent years, the combination of workflow management systems and multi-agent technology has become a hot research field. The lack of flexibility in workflow management systems can be remedied by introducing multi-agent collaborative management. The workflow management system described here adopts a distributed structure, which solves the fragility of the traditional centralized workflow structure. In this paper, the agents of the distributed workflow management system are divided according to their functions, the execution process of each type of agent is analyzed, and key technologies such as process execution and resource management are examined.

  3. Waste-to-Energy Technology Brief

    EPA Science Inventory

    ETV's Greenhouse Gas Technology (GHG) Center, operated by Southern Research Institute under a cooperative agreement with US EPA, verified two biogas processing systems and four distributed generation (DG) energy systems in collaboration with the Colorado Governors Office or the N...

  4. Automatic Tools for Enhancing the Collaborative Experience in Large Projects

    NASA Astrophysics Data System (ADS)

    Bourilkov, D.; Rodriquez, J. L.

    2014-06-01

    With the explosion of big data in many fields, the efficient management of knowledge about all aspects of the data analysis gains in importance. A key feature of collaboration in large scale projects is keeping a log of what is being done and how - for private use, reuse, and for sharing selected parts with collaborators and peers, often distributed geographically on an increasingly global scale. Even better if the log is automatically created on the fly while the scientist or software developer is working in a habitual way, without the need for extra efforts. This saves time and enables a team to do more with the same resources. The CODESH - COllaborative DEvelopment SHell - and CAVES - Collaborative Analysis Versioning Environment System projects address this problem in a novel way. They build on the concepts of virtual states and transitions to enhance the collaborative experience by providing automatic persistent virtual logbooks. CAVES is designed for sessions of distributed data analysis using the popular ROOT framework, while CODESH generalizes the approach for any type of work on the command line in typical UNIX shells like bash or tcsh. Repositories of sessions can be configured dynamically to record and make available the knowledge accumulated in the course of a scientific or software endeavor. Access can be controlled to define logbooks of private sessions or sessions shared within or between collaborating groups. A typical use case is building working scalable systems for analysis of Petascale volumes of data as encountered in the LHC experiments. Our approach is general enough to find applications in many fields.
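    The automatic persistent logbook idea can be sketched in a few lines. The class below is a hypothetical simplification, not CODESH itself: it runs each shell command and records the command, timestamp, and output as a session entry, without requiring any extra effort from the user beyond issuing commands through it.

```python
import datetime
import subprocess

class SessionLogbook:
    """Minimal automatic virtual logbook for command-line sessions (sketch)."""
    def __init__(self):
        self.entries = []

    def run(self, command):
        """Execute a shell command and log it together with its output."""
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        self.entries.append({
            "time": datetime.datetime.now().isoformat(),
            "command": command,
            "stdout": result.stdout,
        })
        return result.stdout

book = SessionLogbook()
book.run("echo hello")
```

    A repository of such entries, shared under access control, would play the role the abstract assigns to private and group session logbooks.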

  5. The Persistence of Hierarchy: How One School District's Top Administrators Worked to Guide a Culture Change towards Collaborative Leadership

    ERIC Educational Resources Information Center

    Bravo, Robert Ronald

    2011-01-01

    There is no shortage of scholars that believe that traditional, top-down leadership is antithetical to facilitating the improvements that must be made to the American educational system if all children are to achieve academically. These scholars call for distributive, facilitative, and/or collaborative leadership, yet little is known about how…

  6. CRC Clinical Trials Management System (CTMS): An Integrated Information Management Solution for Collaborative Clinical Research

    PubMed Central

    Payne, Philip R.O.; Greaves, Andrew W.; Kipps, Thomas J.

    2003-01-01

    The Chronic Lymphocytic Leukemia (CLL) Research Consortium (CRC) consists of 9 geographically distributed sites conducting a program of research including both basic science and clinical components. To enable the CRC’s clinical research efforts, a system providing for real-time collaboration was required. CTMS provides such functionality, and demonstrates that the use of novel data modeling, web-application platforms, and management strategies provides for the deployment of an extensible, cost effective solution in such an environment. PMID:14728471

  7. Coordinated microgrid investment and planning process considering the system operator

    DOE PAGES

    Armendáriz, M.; Heleno, M.; Cardoso, G.; ...

    2017-05-12

    Nowadays, a significant number of distribution systems are facing problems in accommodating more photovoltaic (PV) capacity, mainly due to overvoltages during daylight periods. This has an impact on private investments in distributed energy resources (DER), since it occurs exactly when PV prices are becoming attractive, and the opportunity for an energy transition based on solar technologies is being wasted. In particular, this limitation of the networks is a barrier for larger consumers, such as commercial and public buildings, aiming to invest in PV capacity and start operating as microgrids connected to the MV network. To address this challenge, this paper presents a coordinated approach to the microgrid investment and planning problem, in which the system operator and the microgrid owner collaborate to improve the voltage control capabilities of the distribution network, increasing the PV potential. The results prove that this collaboration has the benefit of increasing the value of the microgrid investments while improving the quality of service of the system, and that it should be considered in the future regulatory framework.

  9. Architecture for distributed design and fabrication

    NASA Astrophysics Data System (ADS)

    McIlrath, Michael B.; Boning, Duane S.; Troxel, Donald E.

    1997-01-01

    We describe a flexible, distributed system architecture capable of supporting the collaborative design and fabrication of semiconductor devices and integrated circuits. Such capabilities are of particular importance in the development of new technologies, where both equipment and expertise are limited. Distributed fabrication enables direct, remote, physical experimentation in the development of leading-edge technology, where the necessary manufacturing resources are new, expensive, and scarce. Computational resources, software, processing equipment, and people may all be widely distributed; their effective integration is essential in order to achieve the realization of new technologies for specific product requirements. Our architecture leverages current vendor and consortia developments to define software interfaces and infrastructure based on existing and emerging networking, CIM, and CAD standards. Process engineers and product designers access processing and simulation results through a common interface and collaborate across the distributed manufacturing environment.

  10. An Attempt To Design Synchronous Collaborative Learning Environments for Peer Dyads on the World Wide Web.

    ERIC Educational Resources Information Center

    Lee, Fong-Lok; Liang, Steven; Chan, Tak-Wai

    1999-01-01

    Describes the design, implementation, and preliminary evaluation of three synchronous distributed learning prototype systems: Co-Working System, Working Along System, and Hybrid System. Each supports a particular style of interaction, referred to a socio-activity learning model, between members of student dyads (pairs). All systems were…

  11. Research on Collaborative Technology in Distributed Virtual Reality System

    NASA Astrophysics Data System (ADS)

    Lei, ZhenJiang; Huang, JiJie; Li, Zhao; Wang, Lei; Cui, JiSheng; Tang, Zhi

    2018-01-01

    Distributed virtual reality technology applied to joint training simulation requires CSCW (Computer Supported Cooperative Work) terminal multicast technology for display and HLA (High Level Architecture) technology to ensure the temporal and spatial consistency of the simulation, in order to achieve collaborative display and collaborative computing. In this paper, CSCW terminal multicast technology is used to modify and extend the implementation framework of HLA. During simulation initialization, the HLA declaration and object management service interfaces are used to establish and manage the CSCW network topology, and the HLA data filtering mechanism is used to establish a corresponding Mesh tree for each federation member. While the simulation is running, a new thread for the RTI and CSCW real-time multicast interaction technology is added to the RTI, so that the RTI can also use the window message mechanism to notify the application to update the display. Many applications to immersive substation simulation training under the operation of a large power grid show that the collaborative technology used in this distributed virtual reality simulation achieves a satisfactory training effect.

  12. Heterogeneous collaborative sensor network for electrical management of an automated house with PV energy.

    PubMed

    Castillo-Cagigal, Manuel; Matallanas, Eduardo; Gutiérrez, Alvaro; Monasterio-Huelin, Félix; Caamaño-Martín, Estefaná; Masa-Bote, Daniel; Jiménez-Leube, Javier

    2011-01-01

    In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the "Smart Grid" which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called "MagicBox" equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. Therefore, there is a large number of energy variables to be monitored that allow us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, performed on a real house, demonstrate the feasibility of the proposed collaborative system to reduce the consumption of electrical power and to increase energy efficiency.

  13. Environmental Technology Verification Report: Climate Energy freewatt™ Micro-Combined Heat and Power System

    EPA Science Inventory

    The EPA GHG Center collaborated with the New York State Energy Research and Development Authority (NYSERDA) to evaluate the performance of the Climate Energy freewatt Micro-Combined Heat and Power System. The system is a reciprocating internal combustion (IC) engine distributed e...

  14. TeleMed: Wide-area, secure, collaborative object computing with Java and CORBA for healthcare

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.W.; George, J.E.; Gavrilov, E.M.

    1998-12-31

    Distributed computing is becoming commonplace in a variety of industries, with healthcare being a particularly important one for society. The authors describe the development and deployment of TeleMed in a few healthcare domains. TeleMed is a 100% Java distributed application built on CORBA and OMG standards, enabling collaboration on the treatment of chronically ill patients in a secure manner over the Internet. These standards enable other systems to work interoperably with TeleMed and provide transparent access to high performance distributed computing in the healthcare domain. The goal of wide-scale integration of electronic medical records is a grand-challenge scale problem of global proportions with far-reaching social benefits.

  15. A Domain-Specific Language for Aviation Domain Interoperability

    ERIC Educational Resources Information Center

    Comitz, Paul

    2013-01-01

    Modern information systems require a flexible, scalable, and upgradeable infrastructure that allows communication and collaboration between heterogeneous information processing and computing environments. Aviation systems from different organizations often use differing representations and distribution policies for the same data and messages,…

  16. Temporal patterns of mental model convergence: implications for distributed teams interacting in electronic collaboration spaces.

    PubMed

    McComb, Sara; Kennedy, Deanna; Perryman, Rebecca; Warner, Norman; Letsky, Michael

    2010-04-01

    Our objective is to capture temporal patterns in mental model convergence processes and differences in these patterns between distributed teams using an electronic collaboration space and face-to-face teams with no interface. Distributed teams, as sociotechnical systems, collaborate via technology to work on their task. The way in which they process information to inform their mental models may be examined via team communication and may unfold differently than it does in face-to-face teams. We conducted our analysis on 32 three-member teams working on a planning task. Half of the teams worked as distributed teams in an electronic collaboration space, and the other half worked face-to-face without an interface. Using event history analysis, we found temporal interdependencies among the initial convergence points of the multiple mental models we examined. Furthermore, the timing of mental model convergence and the onset of task work discussions were related to team performance. Differences existed in the temporal patterns of convergence and task work discussions across conditions. Distributed teams interacting via an electronic interface and face-to-face teams with no interface converged on multiple mental models, but their communication patterns differed. In particular, distributed teams with an electronic interface required less overall communication, converged on all mental models later in their life cycles, and exhibited more linear cognitive processes than did face-to-face teams interacting verbally. Managers need unique strategies for facilitating communication and mental model convergence depending on teams' degrees of collocation and access to an interface, which in turn will enhance team performance.

  17. A bipartite graph of Neuroendocrine System

    NASA Astrophysics Data System (ADS)

    Guo, Zhong-Wei; Zou, Sheng-Rong; Peng, Yu-Jing; Zhou, Ta; Gu, Chang-Gui; He, Da-Ren

    2008-03-01

    We present an empirical investigation of the neuroendocrine system and suggest describing it as a bipartite graph. In this network the cells can be regarded as collaboration acts and the mediators as collaboration actors, so the act degree stands for the number of cells that secrete a single mediator. Among the mediators, bFGF (the basic fibroblast growth factor) has the largest act degree; it is the most important mitogenic cytokine, followed by TGF-beta, IL-6, IL1-beta, VEGF, IGF-1, and so on. These mediators are critical in the neuroendocrine system for maintaining bodily health, emotional stability, and endocrine harmony. The act degree distribution shows a shifted power law (SPL) form [1]. The average act degree of the neuroendocrine network is h = 3.01, which means that each mediator is secreted by three cells on average. The similarity, which stands for the average probability that neuroendocrine cells secrete the same mediators, is observed to be s = 0.14. Our results may be used in research on the medical treatment of neuroendocrine diseases. [1] Hui Chang, Bei-Bei Su, Yue-Ping Zhou, Da-Ren He, "Assortativity and act degree distribution of some collaboration networks," Physica A 383 (2007) 687-702.
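    The act-degree quantities defined in the abstract can be computed directly from a bipartite cell-mediator mapping. The data below are invented for illustration and are not the paper's measurements.

```python
from collections import defaultdict

# hypothetical cell -> secreted-mediator mapping (illustrative only)
secretion = {
    "cellA": {"bFGF", "IL-6"},
    "cellB": {"bFGF", "TGF-beta"},
    "cellC": {"bFGF"},
}

# act degree of a mediator: the number of cells that secrete it
act_degree = defaultdict(int)
for mediators in secretion.values():
    for m in mediators:
        act_degree[m] += 1

# average act degree over all mediators (h in the abstract's notation)
avg_act_degree = sum(act_degree.values()) / len(act_degree)
```

    In this toy network bFGF has the largest act degree, mirroring the structure the abstract reports for the real neuroendocrine data.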

  18. Distributed Leadership and Digital Collaborative Learning: A Synergistic Relationship?

    ERIC Educational Resources Information Center

    Harris, Alma; Jones, Michelle; Baba, Suria

    2013-01-01

    This paper explores the synergy between distributed leadership and digital collaborative learning. It argues that distributed leadership offers an important theoretical lens for understanding and explaining how digital collaboration is best supported and led. Drawing upon evidence from two online educational platforms, the paper explores the…

  19. The Unified Medical Language System: an informatics research collaboration.

    PubMed

    Humphreys, B L; Lindberg, D A; Schoolman, H M; Barnett, G O

    1998-01-01

    In 1986, the National Library of Medicine (NLM) assembled a large multidisciplinary, multisite team to work on the Unified Medical Language System (UMLS), a collaborative research project aimed at reducing fundamental barriers to the application of computers to medicine. Beyond its tangible products, the UMLS Knowledge Sources, and its influence on the field of informatics, the UMLS project is an interesting case study in collaborative research and development. It illustrates the strengths and challenges of substantive collaboration among widely distributed research groups. Over the past decade, advances in computing and communications have minimized the technical difficulties associated with UMLS collaboration and also facilitated the development, dissemination, and use of the UMLS Knowledge Sources. The spread of the World Wide Web has increased the visibility of the information access problems caused by multiple vocabularies and many information sources which are the focus of UMLS work. The time is propitious for building on UMLS accomplishments and making more progress on the informatics research issues first highlighted by the UMLS project more than 10 years ago.

  20. Collaborative modeling: the missing piece of distributed simulation

    NASA Astrophysics Data System (ADS)

    Sarjoughian, Hessam S.; Zeigler, Bernard P.

    1999-06-01

    The Department of Defense's overarching goal of performing distributed simulation by overcoming geographic and time constraints has brought the problem of distributed modeling to the forefront. The High Level Architecture standard is primarily intended for simulation interoperability. However, as indicated, the existence of a distributed modeling infrastructure plays a fundamental and central role in supporting the development of distributed simulations. In this paper, we describe some fundamental distributed modeling concepts and their implications for constructing successful distributed simulations. In addition, we discuss the Collaborative DEVS Modeling environment, which has been devised to enable geographically dispersed modelers to collaborate and synthesize modular and hierarchical models. We provide an actual example of the use of the Collaborative DEVS Modeler in application to a project involving corporate partners developing an HLA-compliant distributed simulation exercise.
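    A DEVS atomic model is specified by external and internal transition functions, an output function, and a time advance. The toy model below follows that structure for a single-job processor; it is illustrative only and not the Collaborative DEVS Modeler API.

```python
class Processor:
    """Toy atomic DEVS model: holds one job for a fixed service time (sketch)."""
    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.job = None                # state: the job in service, if any
        self.sigma = float("inf")      # time advance: time to next internal event

    def ext_transition(self, job):
        """delta_ext: a job arrives; start serving it if idle."""
        if self.job is None:
            self.job = job
            self.sigma = self.service_time

    def output(self):
        """lambda: the finished job, emitted just before the internal transition."""
        return self.job

    def int_transition(self):
        """delta_int: service completes; return to the idle state."""
        self.job = None
        self.sigma = float("inf")

p = Processor()
p.ext_transition("job-1")
done = p.output()
p.int_transition()
```

    In a coupled model, a simulator advances time by the smallest sigma among components and routes each model's output to the external transitions of the models it feeds, which is the coordination the HLA runtime provides in distributed settings.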

  1. The Distributed Space Exploration Simulation (DSES)

    NASA Technical Reports Server (NTRS)

    Crues, Edwin Z.; Chung, Victoria I.; Blum, Mike G.; Bowman, James D.

    2007-01-01

    This paper describes the Distributed Space Exploration Simulation (DSES) Project, a research and development collaboration between NASA centers that focuses on the investigation and development of technologies, processes, and integrated simulations related to the collaborative distributed simulation of complex space systems in support of NASA's Exploration Initiative. The paper describes the three major components of DSES: network infrastructure, software infrastructure, and simulation development. In the network work area, DSES is developing a Distributed Simulation Network that will provide agency-wide support for distributed simulation between all NASA centers. In the software work area, DSES is developing a collection of software models, tools, and procedures that eases the burden of developing distributed simulations and provides a consistent interoperability infrastructure for agency-wide participation in integrated simulation. Finally, for simulation development, DSES is developing an integrated end-to-end simulation capability to support NASA's development of new exploration spacecraft and missions. This paper presents the current status and plans for each of these work areas, with specific examples of simulations that support NASA's exploration initiatives.

  2. 3rd Annual Earth System Grid Federation and Ultrascale Visualization Climate Data Analysis Tools Face-to-Face Meeting Report December 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Dean N.

    The climate and weather data science community gathered December 3–5, 2013, at Lawrence Livermore National Laboratory in Livermore, California, for the third annual Earth System Grid Federation (ESGF) and Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) Face-to-Face (F2F) Meeting, which was hosted by the Department of Energy, the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, the European Infrastructure for the European Network of Earth System Modelling, and the Australian Department of Education. Both ESGF and UV-CDAT are global collaborations designed to develop a new generation of open-source software infrastructure that provides distributed access and analysis to observed and simulated data from the climate and weather communities. The tools and infrastructure developed under these international multi-agency collaborations are critical to understanding extreme weather conditions and long-term climate change, while the F2F meetings help to build a stronger climate and weather data science community and a stronger federated software infrastructure. The 2013 F2F meeting determined requirements for existing and impending national and international community projects; enhancements needed for data distribution, analysis, and visualization infrastructure; and standards and resources needed for better collaborations.

  3. Defending against Attribute-Correlation Attacks in Privacy-Aware Information Brokering

    NASA Astrophysics Data System (ADS)

    Li, Fengjun; Luo, Bo; Liu, Peng; Squicciarini, Anna C.; Lee, Dongwon; Chu, Chao-Hsien

    Increasing needs for information sharing arise from extensive collaborations among organizations. Organizations want to provide data access to their collaborators while preserving full control over the data and comprehensive privacy for their users. A number of information systems have been developed to provide efficient and secure information sharing. However, most of the solutions proposed so far are built atop conventional data warehousing or distributed database technologies.

  4. A workout for virtual bodybuilders (design issues for embodiment in multi-actor virtual environments)

    NASA Technical Reports Server (NTRS)

    Benford, Steve; Bowers, John; Fahlen, Lennart E.; Greenhalgh, Chris; Snowdon, Dave

    1994-01-01

    This paper explores the issue of user embodiment within collaborative virtual environments. By user embodiment we mean the provision of users with appropriate body images so as to represent them to others and also to themselves. By collaborative virtual environments we mean multi-user virtual reality systems which support cooperative work (although we argue that the results of our exploration may also be applied to other kinds of collaborative systems). The main part of the paper identifies a list of embodiment design issues including: presence, location, identity, activity, availability, history of activity, viewpoint, action point, gesture, facial expression, voluntary versus involuntary expression, degree of presence, reflecting capabilities, manipulating the user's view of others, representation across multiple media, autonomous and distributed body parts, truthfulness and efficiency. Following this, we show how these issues are reflected in our own DIVE and MASSIVE prototype collaborative virtual environments.

  5. Heterogeneous Collaborative Sensor Network for Electrical Management of an Automated House with PV Energy

    PubMed Central

    Castillo-Cagigal, Manuel; Matallanas, Eduardo; Gutiérrez, Álvaro; Monasterio-Huelin, Félix; Caamaño-Martín, Estefaná; Masa-Bote, Daniel; Jiménez-Leube, Javier

    2011-01-01

    In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the “Smart Grid” which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called “MagicBox” equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. Therefore, there is a large number of energy variables to be monitored that allow us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, performed on a real house, demonstrate the feasibility of the proposed collaborative system to reduce the consumption of electrical power and to increase energy efficiency. PMID:22247680

  6. Clinical data integration of distributed data sources using Health Level Seven (HL7) v3-RIM mapping

    PubMed Central

    2011-01-01

    Background Health information exchange and integration has become one of the top priorities for healthcare systems across institutions and hospitals. Most organizations implement health information exchange and integration to support meaningful information retrieval among their disparate healthcare systems. The challenges that prevent efficient health information integration across heterogeneous data sources are the lack of a common standard for mapping across distributed data sources and the numerous and diverse healthcare domains. Health Level Seven (HL7) is a standards development organization; its Reference Information Model (RIM), developed by HL7's technical committees, is a standardized abstract representation of HL7 data across all the domains of health care. In this article, we present a design and a prototype implementation of HL7 v3-RIM mapping for information integration of distributed clinical data sources. The implementation enables the user to retrieve and search information that has been integrated using HL7 v3-RIM technology from disparate health care systems. Method and results We designed and developed a prototype implementation of an HL7 v3-RIM mapping function to integrate distributed clinical data sources, using R-MIM classes from HL7 v3-RIM as a global view along with a collaborative, centralized, web-based mapping tool to accommodate the evolution of both global and local schemas. Our prototype was implemented and integrated with a Clinical Data Management System (CDMS) as a plug-in module. We tested the prototype system with use case scenarios for distributed clinical data sources across several legacy CDMSs. 
The results have been effective in improving information delivery, completing tasks that would otherwise have been difficult to accomplish, and reducing the time required to finish tasks used in collaborative information retrieval and sharing with other systems. Conclusions We created a prototype implementation of HL7 v3-RIM mapping for information integration between distributed clinical data sources to promote collaborative healthcare and translational research. The prototype has effectively and efficiently ensured the accuracy of the information and knowledge extraction for systems that have been integrated. PMID:22104558
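
    The global-view mapping idea in this record can be sketched as follows. This is a minimal, hypothetical illustration: the `RIMObservation` class, the field names, and the sample records are invented stand-ins for the example, not the paper's actual schema or the full HL7 RIM (real RIM classes carry far more attributes and vocabulary bindings).

```python
from dataclasses import dataclass

# Hypothetical, highly simplified stand-in for an HL7 v3 RIM Observation class.
@dataclass
class RIMObservation:
    code: str            # what was observed (e.g., a lab test code)
    value: str           # the observed value
    effective_time: str  # when the observation applies

# Per-source mapping functions: each local legacy schema is translated into the
# shared global view, so queries can be written once against RIMObservation.
def map_source_a(row: dict) -> RIMObservation:
    return RIMObservation(code=row["test_code"],
                          value=row["result"],
                          effective_time=row["date"])

def map_source_b(row: dict) -> RIMObservation:
    return RIMObservation(code=row["loinc"],
                          value=str(row["val"]),
                          effective_time=row["timestamp"])

# Integrated retrieval: pull from both legacy systems through their mappers.
records_a = [{"test_code": "718-7", "result": "13.5", "date": "2011-01-05"}]
records_b = [{"loinc": "718-7", "val": 14.1, "timestamp": "2011-01-06"}]

integrated = [map_source_a(r) for r in records_a] + \
             [map_source_b(r) for r in records_b]

for obs in integrated:
    print(obs.code, obs.value, obs.effective_time)
```

    The design point is that schema evolution stays local: if a source changes its column names, only its mapper is rewritten, while consumers of the global view are unaffected.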

  7. Advanced Image Processing for NASA Applications

    NASA Technical Reports Server (NTRS)

    LeMoign, Jacqueline

    2007-01-01

    The future of space exploration will involve cooperating fleets of spacecraft or sensor webs geared towards coordinated and optimal observation of Earth Science phenomena. The main advantage of such systems is to utilize multiple viewing angles as well as multiple spatial and spectral resolutions of sensors carried on multiple spacecraft but acting collaboratively as a single system. Within this framework, our research focuses on all areas related to sensing in collaborative environments, which means systems utilizing intracommunicating spatially distributed sensor pods or crafts being deployed to monitor or explore different environments. This talk will describe the general concept of sensing in collaborative environments, will give a brief overview of several technologies developed at NASA Goddard Space Flight Center in this area, and then will concentrate on specific image processing research related to that domain, specifically image registration and image fusion.

  8. Development of a Dynamically Configurable,Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: (1) aerospace system and component representation using a hierarchical object-oriented component model that enables the use of multimodels and enforces component interoperability; (2) a collaborative software environment that streamlines the process of developing, sharing, and integrating aerospace design and analysis models; and (3) development of a distributed infrastructure that enables Web-based exchange of models to simplify the collaborative design process and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  9. Distributed Collaborative Homework Activities in a Problem-Based Usability Engineering Course

    ERIC Educational Resources Information Center

    Carroll, John M.; Jiang, Hao; Borge, Marcela

    2015-01-01

    Teams of students in an upper-division undergraduate Usability Engineering course used a collaborative environment to carry out a series of three distributed collaborative homework assignments. Assignments were case-based analyses structured using a jigsaw design; students were provided a collaborative software environment and introduced to a…

  10. ARTEMIS: a collaborative framework for health care.

    PubMed

    Reddy, R; Jagannathan, V; Srinivas, K; Karinthi, R; Reddy, S M; Gollapudy, C; Friedman, S

    1993-01-01

    Patient-centered healthcare delivery is an inherently collaborative process. It involves a wide range of individuals and organizations with diverse perspectives: primary care physicians, hospital administrators, labs, clinics, and insurers. The key to cost reduction and quality improvement in health care is effective management of this collaborative process. The use of multi-media collaboration technology can facilitate timely delivery of patient care and reduce cost at the same time. During the last five years, the Concurrent Engineering Research Center (CERC), under the sponsorship of DARPA (Defense Advanced Research Projects Agency, recently renamed ARPA), developed a number of generic key subsystems of a comprehensive collaboration environment. These subsystems are intended to overcome the barriers that inhibit the collaborative process. Three subsystems developed under this program are: MONET (Meeting On the Net), to provide consultation over a computer network; ISS (Information Sharing Server), to provide access to multi-media information; and PCB (Project Coordination Board), to better coordinate focused activities. These systems have been integrated into an open environment to enable collaborative processes. This environment is being used to create a wide-area (geographically distributed) research testbed under DARPA sponsorship, ARTEMIS (Advanced Research Testbed for Medical Informatics), to explore collaborative health care processes. We believe this technology will play a key role in the current national thrust to reengineer the present health-care delivery system.

  11. Learning Systems in Post-Statutory Education

    ERIC Educational Resources Information Center

    Catherall, Paul

    2008-01-01

    This article examines the broad scope of systemised learning (e-learning) in post-statutory education. Issues for discussion include the origins and forms of learning systems, including technical and educational concepts and approaches, such as distributed and collaborative learning. The VLE (Virtual Learning Environment) is defined as the…

  12. The Impact of Virtual Collaboration and Collaboration Technologies on Knowledge Transfer and Team Performance in Distributed Organizations

    ERIC Educational Resources Information Center

    Ngoma, Ngoma Sylvestre

    2013-01-01

    Virtual teams are increasingly viewed as a powerful determinant of competitive advantage in geographically distributed organizations. This study was designed to provide insights into the interdependencies between virtual collaboration, collaboration technologies, knowledge transfer, and virtual team performance in an effort to understand whether…

  13. A Multi-touch Tool for Co-creation

    NASA Astrophysics Data System (ADS)

    Ludden, Geke D. S.; Broens, Tom

    Multi-touch technology provides an attractive way for knowledge workers to collaborate. Co-creation is an important collaboration process in which collecting resources, creating results and distributing these results is essential. We propose a wall-based multi-touch system (called CoCreate) in which these steps are made easy due to the notion of connected private spaces and a shared co-create space. We present our ongoing work, expert evaluation of interaction scenarios and future plans.

  14. Collaborative Workspaces within Distributed Virtual Environments.

    DTIC Science & Technology

    1996-12-01

    such as a text document, a 3D model, or a captured image using a collaborative workspace called the InPerson Whiteboard. The Whiteboard contains a...commands for editing objects drawn on the screen. Finally, when the call is completed, the Whiteboard can be saved to a file for future use. IRIS Annotator... use, and a shared whiteboard that includes a number of multimedia annotation tools. Both systems are also mindful of bandwidth limitations and can

  15. Distributed situation awareness in complex collaborative systems: A field study of bridge operations on platform supply vessels.

    PubMed

    Sandhåland, Hilde; Oltedal, Helle A; Hystad, Sigurd W; Eid, Jarle

    2015-06-01

    This study provides empirical data about shipboard practices in bridge operations on board a selection of platform supply vessels (PSVs). Using the theoretical concept of distributed situation awareness, the study examines how situation awareness (SA)-related information is distributed and coordinated on the bridge. The study thus favours a systems approach to SA, viewing it not as a phenomenon that happens solely in each individual's mind but as something that happens between individuals and the tools they use in a collaborative system. Thus, this study adds to our understanding of SA as a distributed phenomenon. Data were collected in four field studies, each lasting between 8 and 14 days, on PSVs operating on the Norwegian continental shelf and UK continental shelf. The study revealed pronounced variations in shipboard practices regarding how the bridge team attended to operational planning, communication procedures, and distracting/interrupting factors during operations. These findings shed new light on how SA might decrease in bridge teams during platform supply operations, and they emphasize the need to assess and establish shipboard practices that support the bridge teams' SA needs in day-to-day operations. The study provides insights into how shipboard practices relevant to planning, communication, and the occurrence of distracting/interrupting factors are realized in bridge operations, and notes possible areas for improvement to enhance distributed SA in bridge operations.

  16. Driving the need to feed: Insight into the collaborative interaction between ghrelin and endocannabinoid systems in modulating brain reward systems.

    PubMed

    Edwards, Alexander; Abizaid, Alfonso

    2016-07-01

    Independent stimulation of either the ghrelin or endocannabinoid system promotes food intake and increases adiposity. Given the similar distribution of their receptors in feeding associated brain regions and organs involved in metabolism, it is not surprising that evidence of their interaction and its importance in modulating energy balance has emerged. This review documents the relationship between ghrelin and endocannabinoid systems within the periphery and hypothalamus (HYP) before presenting evidence suggesting that these two systems likewise work collaboratively within the ventral tegmental area (VTA) to modulate non-homeostatic feeding. Mechanisms, consistent with current evidence and local infrastructure within the VTA, will be proposed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Towards multi-platform software architecture for Collaborative Teleoperation

    NASA Astrophysics Data System (ADS)

    Domingues, Christophe; Otmane, Samir; Davesne, Frederic; Mallem, Malik

    2009-03-01

    Augmented Reality (AR) can give a Human Operator (HO) real help in achieving complex tasks, such as remote control of robots and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, more safely, and more easily with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems could not be used together, and simultaneous control of a distant robot was impossible. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation across the two platforms. An important feature of this interface is the use of different Virtual Reality platforms and different mobile platforms to control one or many robots.

  18. Towards multi-platform software architecture for Collaborative Teleoperation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domingues, Christophe; Otmane, Samir; Davesne, Frederic

    2009-03-05

    Augmented Reality (AR) can give a Human Operator (HO) real help in achieving complex tasks, such as remote control of robots and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, more safely, and more easily with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems could not be used together, and simultaneous control of a distant robot was impossible. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation across the two platforms. An important feature of this interface is the use of different Virtual Reality platforms and different mobile platforms to control one or many robots.

  19. Collaborative Control of Media Playbacks in SCDNs

    ERIC Educational Resources Information Center

    Fortino, Giancarlo; Russo, Wilma; Palau, Carlos E.

    2006-01-01

    In this paper we present a CDN-based system, namely the COMODIN system, which is a media on-demand platform for synchronous cooperative work which supports an explicitly-formed cooperative group of distributed users with the following integrated functionalities: request of an archived multimedia session, sharing of its playback, and collaboration…

  20. Integration of Geographical Information Systems and Geophysical Applications with Distributed Computing Technologies.

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.

    2005-12-01

    We examine the application of Web Service architectures and Grid-based distributed computing technologies to geophysics and geoinformatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general-purpose framework for building archival data services, real-time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real-time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.
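
    The map-based visualization services mentioned in this record are typically exposed as OGC Web Map Service (WMS) endpoints. As a minimal sketch of how a client composes a standard WMS GetMap request, assuming a hypothetical endpoint URL and layer name:

```python
from urllib.parse import urlencode

# Compose an OGC WMS 1.1.1 GetMap request URL. The base URL and the
# "seismic_events" layer name below are hypothetical stand-ins.
def wms_getmap_url(base, layers, bbox, width=800, height=600):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "SRS": "EPSG:4326",                              # lon/lat coordinates
        "BBOX": ",".join(str(c) for c in bbox),          # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

# Request a map of (hypothetical) seismic event data over California.
url = wms_getmap_url("http://example.org/wms", ["seismic_events"],
                     (-125.0, 32.0, -114.0, 42.0))
print(url)
```

    Because the request is a plain URL, the same map layer can be pulled into a browser, a portal, or the shared-display collaboration tools the record describes.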

  1. C-ME: A 3D Community-Based, Real-Time Collaboration Tool for Scientific Research and Training

    PubMed Central

    Kolatkar, Anand; Kennedy, Kevin; Halabuk, Dan; Kunken, Josh; Marrinucci, Dena; Bethel, Kelly; Guzman, Rodney; Huckaby, Tim; Kuhn, Peter

    2008-01-01

    The need for effective collaboration tools is growing as multidisciplinary proteome-wide projects and distributed research teams become more common. The resulting data is often quite disparate, stored in separate locations, and not contextually related. Collaborative Molecular Modeling Environment (C-ME) is an interactive community-based collaboration system that allows researchers to organize information, visualize data on a two-dimensional (2-D) or three-dimensional (3-D) basis, and share and manage that information with collaborators in real time. C-ME stores the information in industry-standard databases that are immediately accessible by appropriate permission within the computer network directory service or anonymously across the internet through the C-ME application or through a web browser. The system addresses two important aspects of collaboration: context and information management. C-ME allows a researcher to use a 3-D atomic structure model or a 2-D image as a contextual basis on which to attach and share annotations to specific atoms or molecules or to specific regions of a 2-D image. These annotations provide additional information about the atomic structure or image data that can then be evaluated, amended or added to by other project members. PMID:18286178

  2. Research in Modeling and Simulation for Airspace Systems Innovation

    NASA Technical Reports Server (NTRS)

    Ballin, Mark G.; Kimmel, William M.; Welch, Sharon S.

    2007-01-01

    This viewgraph presentation provides an overview of some of the applied research and simulation methodologies at the NASA Langley Research Center that support aerospace systems innovation. Risk assessment methodologies, complex systems design and analysis methodologies, and aerospace operations simulations are described. Potential areas for future research and collaboration using interactive and distributed simulations are also proposed.

  3. Method and Tool for Design Process Navigation and Automatic Generation of Simulation Models for Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji

    Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their efforts on creating the simulation models to verify the design. This paper proposes a method and its tool to navigate the designers through the engineering process and generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions with distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.

  4. An informatics model for guiding assembly of telemicrobiology workstations for malaria collaborative diagnostics using commodity products and open-source software.

    PubMed

    Suhanic, West; Crandall, Ian; Pennefather, Peter

    2009-07-17

    Deficits in clinical microbiology infrastructure exacerbate global infectious disease burdens. This paper examines how commodity computation, communication, and measurement products combined with open-source analysis and communication applications can be incorporated into laboratory medicine microbiology protocols. Those commodity components are all now sourceable globally. An informatics model is presented for guiding the use of low-cost commodity components and free software in the assembly of clinically useful and usable telemicrobiology workstations. The model incorporates two general principles: 1) collaborative diagnostics, where free and open communication and networking applications are used to link distributed collaborators for reciprocal assistance in organizing and interpreting digital diagnostic data; and 2) commodity engineering, which leverages globally available consumer electronics and open-source informatics applications, to build generic open systems that measure needed information in ways substantially equivalent to more complex proprietary systems. Routine microscopic examination of Giemsa and fluorescently stained blood smears for diagnosing malaria is used as an example to validate the model. The model is used as a constraint-based guide for the design, assembly, and testing of a functioning, open, and commoditized telemicroscopy system that supports distributed acquisition, exploration, analysis, interpretation, and reporting of digital microscopy images of stained malarial blood smears while also supporting remote diagnostic tracking, quality assessment and diagnostic process development. The open telemicroscopy workstation design and use-process described here can address clinical microbiology infrastructure deficits in an economically sound and sustainable manner. It can boost capacity to deal with comprehensive measurement of disease and care outcomes in individuals and groups in a distributed and collaborative fashion. 
The workstation enables local control over the creation and use of diagnostic data, while allowing for remote collaborative support of diagnostic data interpretation and tracking. It can enable global pooling of malaria disease information and the development of open, participatory, and adaptable laboratory medicine practices. The informatics model highlights how the larger issue of access to generic commoditized measurement, information processing, and communication technology in both high- and low-income countries can enable diagnostic services that are much less expensive, but substantially equivalent to, those currently in use in high-income countries.

  5. Semantic Service Design for Collaborative Business Processes in Internetworked Enterprises

    NASA Astrophysics Data System (ADS)

    Bianchini, Devis; Cappiello, Cinzia; de Antonellis, Valeria; Pernici, Barbara

    Modern collaborating enterprises can be seen as borderless organizations whose processes are dynamically transformed and integrated with those of their partners (Internetworked Enterprises, IE), thus enabling the design of collaborative business processes. The adoption of Semantic Web and service-oriented technologies for implementing collaboration in such distributed and heterogeneous environments promises significant benefits. IEs can model their own processes independently by using the Software as a Service (SaaS) paradigm. Each enterprise maintains a catalog of available services, and these can be shared across IEs and reused to build up complex collaborative processes. Moreover, each enterprise can adopt its own terminology and concepts to describe business processes and component services. This creates a requirement to manage semantic heterogeneity in process descriptions that are distributed across different enterprise systems. To enable effective service-based collaboration, IEs have to standardize their process descriptions and model them through component services using the same approach and principles. For enabling collaborative business processes across IEs, services should be designed following a homogeneous approach, possibly maintaining a uniform level of granularity. In the paper we propose an ontology-based semantic modeling approach designed to enrich and reconcile the semantics of process descriptions, to facilitate process knowledge management, and to enable semantic service design (by discovery, reuse, and integration of process elements/constructs). The approach brings together Semantic Web technologies, process modeling techniques, ontology building, and semantic matching to provide a comprehensive semantic modeling framework.

  6. Collaborative enterprise and virtual prototyping (CEVP): a product-centric approach to distributed simulation

    NASA Astrophysics Data System (ADS)

    Saunders, Vance M.

    1999-06-01

    The downsizing of the Department of Defense (DoD) and the associated reduction in budgets have re-emphasized the need for commonality, reuse, and standards with respect to the way DoD does business. DoD has implemented significant changes in how it buys weapon systems. The new emphasis is on concurrent engineering with Integrated Product and Process Development and collaboration with Integrated Product Teams. The new DoD vision includes Simulation Based Acquisition (SBA), a process supported by robust, collaborative use of simulation technology that is integrated across acquisition phases and programs. This paper discusses the Air Force Research Laboratory's efforts to use Modeling and Simulation (M&S) resources within a Collaborative Enterprise Environment to support SBA and other Collaborative Enterprise and Virtual Prototyping (CEVP) applications. The paper discusses four technology areas: (1) a Processing Ontology that defines a hierarchically nested set of collaboration contexts needed to organize and support multi-disciplinary collaboration using M&S, (2) a partial taxonomy of intelligent agents needed to manage different M&S resource contributions to advancing the state of product development, (3) an agent-based process for interfacing disparate M&S resources into a CEVP framework, and (4) a Model-View-Control based approach to defining "a new way of doing business" for users of CEVP frameworks/systems.

  7. Making sense of sparse rating data in collaborative filtering via topographic organization of user preference patterns.

    PubMed

    Polcicová, Gabriela; Tino, Peter

    2004-01-01

    We introduce topographic versions of two latent class models (LCMs) for collaborative filtering. Latent classes are topologically organized on a square grid. Topographic organization of latent classes makes orientation in the rating/preference patterns captured by the latent classes easier and more systematic. The variation in film rating patterns is modelled by multinomial and binomial distributions with varying independence assumptions. In the first stage of topographic LCM construction, self-organizing maps with a neural field organized according to the LCM topology are employed. We apply our system to a large collection of user ratings for films. The system can provide useful visualization plots unveiling user preference patterns buried in the data, without losing its potential to be a good recommender model. It appears that the multinomial distribution is most adequate when the model is regularized by tight grid topologies. Since we deal with probabilistic models of the data, we can readily use tools from probability and information theory to interpret and visualize the information extracted by our system.

  8. Stakeholder Convening and Working Groups | Solar Research | NREL

    Science.gov Websites

    Distributed Generation Interconnection Collaborative: Established in 2013 by NREL, the Distributed Generation Interconnection Collaborative (DGIC) provides a forum for the exchange of best practices for distributed…

  9. Applying a New Model for Sharing Population Health Data to National Syndromic Influenza Surveillance: DiSTRIBuTE Project Proof of Concept, 2006 to 2009.

    PubMed

    Olson, Donald R; Paladini, Marc; Lober, William B; Buckeridge, David L

    2011-08-02

    The Distributed Surveillance Taskforce for Real-time Influenza Burden Tracking and Evaluation (DiSTRIBuTE) project began as a pilot effort initiated by the International Society for Disease Surveillance (ISDS) in autumn 2006 to create a collaborative electronic emergency department (ED) syndromic influenza-like illness (ILI) surveillance network based on existing state and local systems and expertise. DiSTRIBuTE brought together health departments interested in: 1) sharing aggregate-level data; 2) maintaining jurisdictional control; 3) minimizing barriers to participation; and 4) leveraging the flexibility of local systems to create a dynamic and collaborative surveillance network. This approach was in contrast to the prevailing surveillance paradigm, in which record-level information was collected, stored, and analyzed centrally. The DiSTRIBuTE project was created with a distributed design, in which individual-level data remained local and only summarized, stratified counts were reported centrally, thus minimizing privacy risks. The project was responsive to federal mandates to improve integration of federal, state, and local biosurveillance capabilities. During the proof-of-concept phase, 2006 to 2009, ten jurisdictions from across North America sent ISDS, on a daily to weekly basis year-round, data aggregated by day and stratified by local ILI syndrome, age group, and region. During this period, data from participating U.S. state or local health departments captured over 13% of all ED visits nationwide. The initiative focused on state and local health department trust, expertise, and control. Morbidity trends observed in DiSTRIBuTE were highly correlated with other influenza surveillance measures. With the emergence of novel A/H1N1 influenza in the spring of 2009, the project was used to support information sharing and ad hoc querying at the state and local level. 
In the fall of 2009, through a broadly collaborative effort, the project was expanded to enhance electronic ED surveillance nationwide.
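
    The distributed design described above can be sketched in a few lines: each jurisdiction keeps its record-level ED visits locally and reports only counts stratified by date, syndrome, and age group. The records and field names below are hypothetical, for illustration only.

```python
from collections import Counter
from datetime import date

# Hypothetical record-level ED visits held locally by one jurisdiction.
# Under the DiSTRIBuTE design, these never leave the local system; only
# the stratified counts computed below are reported centrally.
visits = [
    {"date": date(2009, 5, 1), "syndrome": "ILI", "age_group": "5-17"},
    {"date": date(2009, 5, 1), "syndrome": "ILI", "age_group": "5-17"},
    {"date": date(2009, 5, 1), "syndrome": "other", "age_group": "18-44"},
    {"date": date(2009, 5, 2), "syndrome": "ILI", "age_group": "18-44"},
]

def aggregate(records):
    """Collapse individual records into counts keyed by (date, syndrome, age group)."""
    return Counter((r["date"], r["syndrome"], r["age_group"]) for r in records)

counts = aggregate(visits)  # e.g. counts[(date(2009, 5, 1), "ILI", "5-17")] == 2
```

    Because only aggregate tuples cross jurisdictional boundaries, individual-level privacy risk is limited to whatever the strata themselves reveal.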

  10. A Scalable, Collaborative, Interactive Light-field Display System

    DTIC Science & Technology

    2014-06-01

    Keywords: light-field, holographic displays, 3D display, holographic video, integral photography, plenoptic, computed photography. Distribution A: Approved for public release.

  11. THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT TOOL

    EPA Science Inventory

    A toolkit for distributed hydrologic modeling at multiple scales using a geographic information system is presented. This open-source, freely available software was developed through a collaborative endeavor involving two universities and two government agencies. Called the Auto...

  12. Distributed Cognition and Process Management Enabling Individualized Translational Research: The NIH Undiagnosed Diseases Program Experience

    PubMed Central

    Links, Amanda E.; Draper, David; Lee, Elizabeth; Guzman, Jessica; Valivullah, Zaheer; Maduro, Valerie; Lebedev, Vlad; Didenko, Maxim; Tomlin, Garrick; Brudno, Michael; Girdea, Marta; Dumitriu, Sergiu; Haendel, Melissa A.; Mungall, Christopher J.; Smedley, Damian; Hochheiser, Harry; Arnold, Andrew M.; Coessens, Bert; Verhoeven, Steven; Bone, William; Adams, David; Boerkoel, Cornelius F.; Gahl, William A.; Sincan, Murat

    2016-01-01

    The National Institutes of Health Undiagnosed Diseases Program (NIH UDP) applies translational research systematically to diagnose patients with undiagnosed diseases. The challenge is to implement an information system enabling scalable translational research. The authors hypothesized that similar complex problems are resolvable through process management and the distributed cognition of communities. The team, therefore, built the NIH UDP integrated collaboration system (UDPICS) to form virtual collaborative multidisciplinary research networks or communities. UDPICS supports these communities through integrated process management, ontology-based phenotyping, biospecimen management, cloud-based genomic analysis, and an electronic laboratory notebook. UDPICS provided a mechanism for efficient, transparent, and scalable translational research and thereby addressed many of the complex and diverse research and logistical problems of the NIH UDP. Full definition of the strengths and deficiencies of UDPICS will require formal qualitative and quantitative usability and process improvement measurement. PMID:27785453

  13. ARTEMIS: a collaborative framework for health care.

    PubMed Central

    Reddy, R.; Jagannathan, V.; Srinivas, K.; Karinthi, R.; Reddy, S. M.; Gollapudy, C.; Friedman, S.

    1993-01-01

    Patient centered healthcare delivery is an inherently collaborative process. This involves a wide range of individuals and organizations with diverse perspectives: primary care physicians, hospital administrators, labs, clinics, and insurance. The key to cost reduction and quality improvement in health care is effective management of this collaborative process. The use of multi-media collaboration technology can facilitate timely delivery of patient care and reduce cost at the same time. During the last five years, the Concurrent Engineering Research Center (CERC), under the sponsorship of DARPA (Defense Advanced Research Projects Agency, recently renamed ARPA) developed a number of generic key subsystems of a comprehensive collaboration environment. These subsystems are intended to overcome the barriers that inhibit the collaborative process. Three subsystems developed under this program include: MONET (Meeting On the Net)--to provide consultation over a computer network, ISS (Information Sharing Server)--to provide access to multi-media information, and PCB (Project Coordination Board)--to better coordinate focussed activities. These systems have been integrated into an open environment to enable collaborative processes. This environment is being used to create a wide-area (geographically distributed) research testbed under DARPA sponsorship, ARTEMIS (Advance Research Testbed for Medical Informatics) to explore the collaborative health care processes. We believe this technology will play a key role in the current national thrust to reengineer the present health-care delivery system. PMID:8130536

  14. FAIRDOMHub: a repository and collaboration environment for sharing systems biology research.

    PubMed

    Wolstencroft, Katherine; Krebs, Olga; Snoep, Jacky L; Stanford, Natalie J; Bacall, Finn; Golebiewski, Martin; Kuzyakiv, Rostyk; Nguyen, Quyen; Owen, Stuart; Soiland-Reyes, Stian; Straszewski, Jakub; van Niekerk, David D; Williams, Alan R; Malmström, Lars; Rinn, Bernd; Müller, Wolfgang; Goble, Carole

    2017-01-04

    The FAIRDOMHub is a repository for publishing FAIR (Findable, Accessible, Interoperable and Reusable) Data, Operating procedures and Models (https://fairdomhub.org/) for the Systems Biology community. It is a web-accessible repository for storing and sharing systems biology research assets. It enables researchers to organize, share and publish data, models and protocols, interlink them in the context of the systems biology investigations that produced them, and to interrogate them via API interfaces. By using the FAIRDOMHub, researchers can achieve more effective exchange with geographically distributed collaborators during projects, ensure results are sustained and preserved and generate reproducible publications that adhere to the FAIR guiding principles of data stewardship. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. A Geo-Distributed System Architecture for Different Domains

    NASA Astrophysics Data System (ADS)

    Moßgraber, Jürgen; Middleton, Stuart; Tao, Ran

    2013-04-01

    The presentation will describe work on the system-of-systems (SoS) architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". In this project we deal with two use-cases: Natural Crisis Management (e.g. Tsunami Early Warning) and Industrial Subsurface Development (e.g. drilling for oil). These use-cases seem to be quite different at first sight but share a lot of similarities, like managing and looking up available sensors, extracting data from them and annotating it semantically, intelligently managing the data (big data problem), running mathematical analysis algorithms on the data and finally providing decision support on this basis. The main challenge was to create a generic architecture which fits both use-cases. The requirements on the architecture are manifold and the whole spectrum of a modern, geo-distributed and collaborative system comes into play. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. The most important architectural challenges we needed to address are: 1. Build a scalable communication layer for a system-of-systems 2. Build a resilient communication layer for a system-of-systems 3. Efficiently publish large volumes of semantically rich sensor data 4. Scalable and high performance storage of large distributed datasets 5. Handling federated multi-domain heterogeneous data 6. Discovery of resources in a geo-distributed SoS 7. Coordination of work between geo-distributed systems The design decisions made for each of them will be presented.
These developed concepts are also applicable to the requirements of the Future Internet (FI) and Internet of Things (IoT) which will provide services like smart grids, smart metering, logistics and environmental monitoring.

  16. Collaborative Defense of Transmission and Distribution Protection and Control Devices Against Cyber Attacks (CODEF) DE-OE0000674. ABB Inc. Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuqui, Reynaldo

    This report summarizes the activities conducted under the DOE-OE funded project DE-OE0000674, where ABB Inc. (ABB), in collaboration with the University of Illinois at Urbana-Champaign (UIUC), Bonneville Power Administration (BPA), and Ameren-Illinois (Ameren-IL), pursued the development of a system of collaborative defense of electrical substations' intelligent electronic devices against cyber-attacks (CODEF). An electrical substation with CODEF features will be more capable of mitigating cyber-attacks, especially those that seek to control switching devices. It leverages the security extensions of IEC 61850 to empower existing devices to collaborate in identifying and blocking malicious intents to trip circuit breakers or mis-coordinate device settings, even though the commands and the measurements comply with correct syntax. The CODEF functions utilize the physics of electromagnetic systems, electric power engineering principles, and computer science to bring more in-depth cyber defense closer to the protected substation devices.

  17. The Unified Medical Language System

    PubMed Central

    Humphreys, Betsy L.; Lindberg, Donald A. B.; Schoolman, Harold M.; Barnett, G. Octo

    1998-01-01

    In 1986, the National Library of Medicine (NLM) assembled a large multidisciplinary, multisite team to work on the Unified Medical Language System (UMLS), a collaborative research project aimed at reducing fundamental barriers to the application of computers to medicine. Beyond its tangible products, the UMLS Knowledge Sources, and its influence on the field of informatics, the UMLS project is an interesting case study in collaborative research and development. It illustrates the strengths and challenges of substantive collaboration among widely distributed research groups. Over the past decade, advances in computing and communications have minimized the technical difficulties associated with UMLS collaboration and also facilitated the development, dissemination, and use of the UMLS Knowledge Sources. The spread of the World Wide Web has increased the visibility of the information access problems caused by multiple vocabularies and many information sources which are the focus of UMLS work. The time is propitious for building on UMLS accomplishments and making more progress on the informatics research issues first highlighted by the UMLS project more than 10 years ago. PMID:9452981

  18. Indiva: a middleware for managing distributed media environment

    NASA Astrophysics Data System (ADS)

    Ooi, Wei-Tsang; Pletcher, Peter; Rowe, Lawrence A.

    2003-12-01

    This paper presents a unified set of abstractions and operations for hardware devices, software processes, and media data in a distributed audio and video environment. These abstractions, which are provided through a middleware layer called Indiva, use a file system metaphor to access resources and high-level commands to simplify the development of Internet webcast and distributed collaboration control applications. The design and implementation of Indiva are described and examples are presented to illustrate the usefulness of the abstractions.
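
    The file-system metaphor can be illustrated with a small sketch (names and paths are hypothetical, not Indiva's actual API): devices are registered under paths, and high-level commands resolve paths instead of talking to device-specific interfaces.

```python
# A minimal sketch of a file-system metaphor for media resources.
# All names below are hypothetical, for illustration only.

class ResourceTree:
    def __init__(self):
        self._resources = {}

    def register(self, path, resource):
        self._resources[path] = resource

    def lookup(self, path):
        return self._resources[path]

    def ls(self, prefix):
        """List resources under a path prefix, as a file system would."""
        return sorted(p for p in self._resources if p.startswith(prefix))

tree = ResourceTree()
tree.register("/rooms/studio-a/camera1", {"type": "camera", "state": "idle"})
tree.register("/rooms/studio-a/mic1", {"type": "microphone", "state": "idle"})

def start(tree, path):
    # A high-level command: resolve the path, then change device state,
    # without the caller knowing anything about the device's native API.
    tree.lookup(path)["state"] = "running"

start(tree, "/rooms/studio-a/camera1")
```

    A webcast controller built on such an abstraction manipulates paths uniformly, whether the resource behind a path is a hardware device, a software process, or stored media.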

  19. The architecture of a distributed medical dictionary.

    PubMed

    Fowler, J; Buffone, G; Moreau, D

    1995-01-01

    Exploiting high-speed computer networks to provide a national medical information infrastructure is a goal for medical informatics. The Distributed Medical Dictionary under development at Baylor College of Medicine is a model for an architecture that supports collaborative development of a distributed online medical terminology knowledge-base. A prototype is described that illustrates the concept. Issues that must be addressed by such a system include high availability, acceptable response time, support for local idiom, and control of vocabulary.
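
    One way to reconcile high availability with acceptable response time, sketched below under assumptions of our own (this is not the Baylor architecture itself), is to consult a local cache first and fall back across replicated term servers:

```python
# Hypothetical client for a distributed terminology service: cache first,
# then try replicas in order, and keep serving cached entries when all
# servers are unreachable.

class DictionaryClient:
    def __init__(self, servers):
        self.servers = servers          # remote term servers, tried in order
        self.cache = {}                 # local copy for speed and availability

    def define(self, term):
        if term in self.cache:
            return self.cache[term]     # fast path: acceptable response time
        for server in self.servers:
            try:
                definition = server(term)
            except ConnectionError:
                continue                # server down: try the next replica
            self.cache[term] = definition
            return definition
        raise LookupError(term)

def flaky(term):
    raise ConnectionError("server unreachable")

def authoritative(term):
    return {"term": term, "definition": "inflammation of the liver"}

client = DictionaryClient([flaky, authoritative])
entry = client.define("hepatitis")
```

    Support for local idiom could be layered on the same structure by letting each site overlay its own entries on the shared vocabulary.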

  20. Foundations of data-intensive science: Technology and practice for high throughput, widely distributed, data management and analysis systems

    NASA Astrophysics Data System (ADS)

    Johnston, William; Ernst, M.; Dart, E.; Tierney, B.

    2014-04-01

    Today's large-scale science projects involve world-wide collaborations that depend on moving massive amounts of data from an instrument to potentially thousands of computing and storage systems at hundreds of collaborating institutions to accomplish their science. This is true for ATLAS and CMS at the LHC, and it is true for the climate sciences, Belle-II at the KEK collider, genome sciences, the SKA radio telescope, and ITER, the international fusion energy experiment. DOE's Office of Science has been collecting science discipline and instrument requirements for network based data management and analysis for more than a decade. As a result, certain key issues are seen across essentially all science disciplines that rely on the network for significant data transfer, even if the data quantities are modest compared to projects like the LHC experiments. These issues are what this talk will address; to wit: 1. Optical signal transport advances enabling 100 Gb/s circuits that span the globe on optical fiber, with each fiber carrying 100 such channels; 2. Network router and switch requirements to support high-speed international data transfer; 3. Data transport (TCP is still the norm) requirements to support high-speed international data transfer (e.g. error-free transmission); 4. Network monitoring and testing techniques and infrastructure to maintain the required error-free operation of the many R&E networks involved in international collaborations; 5. Operating system evolution to support very high-speed network I/O; 6. New network architectures and services in the LAN (campus) and WAN networks to support data-intensive science; 7. Data movement and management techniques and software that can maximize the throughput on the network connections between distributed data handling systems; and 8. New approaches to widely distributed workflow systems that can support the data movement and analysis required by the science.
All of these areas must be addressed to enable large-scale, widely distributed data analysis systems, and the experience of the LHC can be applied to other scientific disciplines. In particular, specific analogies to the SKA will be cited in the talk.
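
    Item 3 in the list above is why error-free transmission matters so much at these scales: the steady-state throughput of a standard TCP flow is bounded by the well-known Mathis approximation, rate ≈ MSS / (RTT · √p), so on long round-trip paths even tiny loss rates collapse throughput. A quick calculation:

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_prob):
    """Steady-state TCP throughput bound: rate ≈ MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_prob))

# A trans-continental path: 1460-byte segments, 100 ms round-trip time.
clean = mathis_throughput_bps(1460, 0.100, 1e-7)   # ~0.37 Gb/s
lossy = mathis_throughput_bps(1460, 0.100, 1e-4)   # ~11.7 Mb/s
```

    A thousand-fold increase in loss probability costs roughly a thirty-fold drop in achievable rate, which is why the R&E networks mentioned above invest heavily in monitoring for essentially error-free operation.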

  1. Space Physics Data Facility Web Services

    NASA Technical Reports Server (NTRS)

    Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.

    2005-01-01

    The Space Physics Data Facility (SPDF) Web services provides a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) A server program for implementation of the Web services; and 2) A software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.

  2. Cybersecurity Technology R&D | Energy Systems Integration Facility | NREL

    Science.gov Websites

    NREL's research and development (R&D) in cybersecurity is focused on distributed energy resources and their control equipment. The team is focusing on integrity for command and control messages in transit to and from systems and control architectures. Moving Target Defense: in collaboration with Kansas State University…

  3. Multi-Agent Information Classification Using Dynamic Acquaintance Lists.

    ERIC Educational Resources Information Center

    Mukhopadhyay, Snehasis; Peng, Shengquan; Raje, Rajeev; Palakal, Mathew; Mostafa, Javed

    2003-01-01

    Discussion of automated information services focuses on information classification and collaborative agents, i.e. intelligent computer programs. Highlights include multi-agent systems; distributed artificial intelligence; thesauri; document representation and classification; agent modeling; acquaintances, or remote agents discovered through…

  4. Multisites Coordination in Shared Multicast Trees

    DTIC Science & Technology

    1999-01-01

    conferencing, distributed interactive simulations, and collaborative systems. We describe a novel protocol to coordinate multipoint groupwork in the IP multicast framework. The protocol supports Internet-wide coordination for large and highly-interactive groupwork, relying on transmission of…

  5. OXIDANT/DISINFECTANT CHEMISTRY AND IMPACTS ON LEAD CORROSION

    EPA Science Inventory

    In response to continued elevated lead levels throughout the District of Columbia's distribution system, a collaboration was begun with the District of Columbia's Water & Sewer Authority (WASA) and Water Resources Division of U. S. Environmental Protection Agency's (USEPA) Office...

  6. Pipe and Solids Analysis: What Can I Learn?

    EPA Science Inventory

    This presentation gives a brief overview of techniques that regulators, utilities and consultants might want to request from laboratories to anticipate or solve water treatment and distribution system water quality problems. Actual examples will be given from EPA collaborations,...

  7. Practices and Strategies of Distributed Knowledge Collaboration

    ERIC Educational Resources Information Center

    Kudaravalli, Srinivas

    2010-01-01

    Information Technology is enabling large-scale, distributed collaboration across many different kinds of boundaries. Researchers have used the label new organizational forms to describe such collaborations and suggested that they are better able to meet the demands of flexibility, speed and adaptability that characterize the knowledge economy.…

  8. Fostering Distributed Science Learning through Collaborative Technologies

    ERIC Educational Resources Information Center

    Vazquez-Abad, Jesus; Brousseau, Nancy; Guillermina, Waldegg C.; Vezina, Mylene; Martinez, Alicia D.; de Verjovsky, Janet Paul

    2004-01-01

    TACTICS (French and Spanish acronym standing for Collaborative Work and Learning in Science with Information and Communications Technologies) is an ongoing project aimed at investigating a distributed community of learning and practice in which information and communications technologies (ICT) take the role of collaborative tools to support social…

  9. The open research system: a web-based metadata and data repository for collaborative research

    Treesearch

    Charles M. Schweik; Alexander Stepanov; J. Morgan Grove

    2005-01-01

    Beginning in 1999, a web-based metadata and data repository we call the "open research system" (ORS) was designed and built to assist geographically distributed scientific research teams. The purpose of this innovation was to promote the open sharing of data within and across organizational lines and across geographic distances. As the use of the system...

  10. Distributed situation awareness in complex collaborative systems: A field study of bridge operations on platform supply vessels

    PubMed Central

    Sandhåland, Hilde; Oltedal, Helle A; Hystad, Sigurd W; Eid, Jarle

    2015-01-01

    This study provides empirical data about shipboard practices in bridge operations on board a selection of platform supply vessels (PSVs). Using the theoretical concept of distributed situation awareness, the study examines how situation awareness (SA)-related information is distributed and coordinated at the bridge. This study thus favours a systems approach to studying SA, viewing it not as a phenomenon that solely happens in each individual's mind but rather as something that happens between individuals and the tools that they use in a collaborative system. Thus, this study adds to our understanding of SA as a distributed phenomenon. Data were collected in four field studies that lasted between 8 and 14 days on PSVs that operate on the Norwegian continental shelf and UK continental shelf. The study revealed pronounced variations in shipboard practices regarding how the bridge team attended to operational planning, communication procedures, and distracting/interrupting factors during operations. These findings shed new light on how SA might decrease in bridge teams during platform supply operations. The findings from this study emphasize the need to assess and establish shipboard practices that support the bridge teams' SA needs in day-to-day operations. Practitioner points: Provides insights into how shipboard practices that are relevant to planning, communication and the occurrence of distracting/interrupting factors are realized in bridge operations. Notes possible areas for improvement to enhance distributed SA in bridge operations. PMID:26028823

  11. Use of the Collaborative Optimization Architecture for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Moore, A. A.; Kroo, I. M.

    1996-01-01

    Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum weight and minimum cost concepts. The operational advantages of the collaborative optimization…

  12. Fall 2014 SEI Research Review Edge-Enabled Tactical Systems (EETS)

    DTIC Science & Technology

    2014-10-29

    • Effective communication and reasoning despite connectivity issues • More generally, how to make programming distributed algorithms with extensible…distributed collaboration in VREP simulations for 5-12 quadcopters and ground robots • Open-source middleware and algorithms released to community…Integration into CMU Drone-RK quadcopter and Platypus autonomous boat platforms • Presentations at DARPA (CODE), AFRL C4I Workshop, and AFRL Eglin

  13. Scalable collaborative risk management technology for complex critical systems

    NASA Technical Reports Server (NTRS)

    Campbell, Scott; Torgerson, Leigh; Burleigh, Scott; Feather, Martin S.; Kiper, James D.

    2004-01-01

    We describe here our project and plans to develop methods, software tools, and infrastructure tools to address challenges relating to geographically distributed software development. Specifically, this work is creating an infrastructure that supports applications working over distributed geographical and organizational domains and is using this infrastructure to develop a tool that supports project development using risk management and analysis techniques where the participants are not collocated.

  14. A Virtual Bioinformatics Knowledge Environment for Early Cancer Detection

    NASA Technical Reports Server (NTRS)

    Crichton, Daniel; Srivastava, Sudhir; Johnsey, Donald

    2003-01-01

    Discovery of disease biomarkers for cancer is a leading focus of early detection. The National Cancer Institute created a network of collaborating institutions focused on the discovery and validation of cancer biomarkers called the Early Detection Research Network (EDRN). Informatics plays a key role in enabling a virtual knowledge environment that provides scientists real time access to distributed data sets located at research institutions across the nation. The distributed and heterogeneous nature of the collaboration makes data sharing across institutions very difficult. EDRN has developed a comprehensive informatics effort focused on developing a national infrastructure enabling seamless access, sharing and discovery of science data resources across all EDRN sites. This paper will discuss the EDRN knowledge system architecture, its objectives and its accomplishments.

  15. Evoking Knowledge and Information Awareness for Enhancing Computer-Supported Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Engelmann, Tanja; Tergan, Sigmar-Olaf; Hesse, Friedrich W.

    2010-01-01

    Computer-supported collaboration by spatially distributed group members still involves interaction problems within the group. This article presents an empirical study investigating the question of whether computer-supported collaborative problem solving by spatially distributed group members can be fostered by evoking knowledge and information…

  16. The proposed monitoring system for the Fermilab D0 colliding beams detector

    NASA Astrophysics Data System (ADS)

    Goodwin, Robert; Florian, Robert; Johnson, Marvin; Jones, Alan; Shea, Mike

    1986-06-01

    The Fermilab D0 Detector is a collaborative effort that includes seventeen universities and national laboratories. The monitoring and control system for this detector will be separate from the online detector data system. A distributed, stand-alone, microprocessor-based system is being designed to allow monitoring and control functions to be available to the collaborators at their home institutions during the design, fabrication, and testing phases of the project. Individual stations are VMEbus-based 68000 systems that are networked together during installation using an ARCnet (by Datapoint Corporation) Local Area Network. One station, perhaps a MicroVAX, would have a hard disk to store a backup copy of the distributed database located in non-volatile RAM in the local stations. This station would also serve as a gateway to the online system, so that data from the control system will be available for logging with the detector data. Apple Macintosh personal computers are being developed for use as the local control consoles. Each would be interfaced to ARCnet to provide access to all control system data. Through the use of bit-mapped graphics with multiple windows and pull-down menus, a cost effective, flexible display system can be provided, taking advantage of familiar modern software tools to support the operator interface.

  17. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the worldwide LHC computing grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in the development of a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate the supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  18. Accelerating Cancer Systems Biology Research through Semantic Web Technology

    PubMed Central

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S.

    2012-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute’s caBIG®, so users can not only interact with the DMR through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers’ intellectual property. PMID:23188758

  19. Accelerating cancer systems biology research through Semantic Web technology.

    PubMed

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S

    2013-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute's caBIG, so users can interact with the DMR not only through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers' intellectual property. Copyright © 2012 Wiley Periodicals, Inc.
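
    The virtue of triple-structured, self-descriptive data that both abstracts emphasize can be shown with a toy example (the predicates and identifiers below are made up, not the DMR's actual vocabulary): any party can query shared statements without knowing the originating institution's schema.

```python
# A toy triple store in the Semantic Web spirit. Each statement is a
# (subject, predicate, object) tuple; hypothetical identifiers throughout.

triples = {
    ("model:42", "rdf:type", "dmr:CancerModel"),
    ("model:42", "dmr:annotatedBy", "person:alice"),
    ("model:42", "dmr:simulates", "disease:glioma"),
    ("person:alice", "dmr:affiliation", "org:institute-a"),
}

def query(triples, s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Every model simulating glioma, regardless of which institution added it:
glioma_models = {s for s, _, _ in query(triples, p="dmr:simulates", o="disease:glioma")}
```

    Because each statement carries its own vocabulary terms, data contributed by distributed teams remains both human and machine readable without a central schema agreement.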

  20. Software To Secure Distributed Propulsion Simulations

    NASA Technical Reports Server (NTRS)

    Blaser, Tammy M.

    2003-01-01

    Distributed-object computing systems are presented with many security threats, including network eavesdropping, message tampering, and communications middleware masquerading. NASA Glenn Research Center and its industry partners have taken an active role in mitigating the security threats associated with developing and operating their proprietary aerospace propulsion simulations. In particular, they are developing a collaborative Common Object Request Broker Architecture (CORBA) Security (CORBASec) test bed to secure their distributed aerospace propulsion simulations. Glenn has been working with its aerospace propulsion industry partners to deploy the Numerical Propulsion System Simulation (NPSS) object-based technology. NPSS is a program focused on reducing the cost and time in developing aerospace propulsion engines.

  1. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    PubMed

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this respect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for Real-Time Systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter, and transport priority, as well as experiments on real robots, validate the effectiveness of this work.
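    The loosely coupled publish-subscribe model with a time-related constraint can be illustrated with a minimal sketch (plain Python, not micROS-drt or DDS code; the `Topic` class and its deadline check are hypothetical simplifications of a DDS deadline QoS policy):

```python
import time

class Topic:
    """Toy topic with a DDS-style deadline constraint (illustrative sketch)."""
    def __init__(self, name, deadline_s):
        self.name = name
        self.deadline_s = deadline_s   # max allowed interval between samples
        self.subscribers = []
        self.last_publish = None

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, sample):
        now = time.monotonic()
        # flag a missed deadline if the gap since the last sample is too long
        missed = (self.last_publish is not None
                  and now - self.last_publish > self.deadline_s)
        self.last_publish = now
        for cb in self.subscribers:    # loosely coupled: publisher ignores who listens
            cb(sample, missed)

# usage: a pose topic with a 100 ms deadline
received = []
pose = Topic("robot/pose", deadline_s=0.1)
pose.subscribe(lambda sample, missed: received.append((sample, missed)))
pose.publish({"x": 1.0, "y": 2.0})
```

    Real DDS middleware reports deadline misses to both publisher and subscriber through listener callbacks; this sketch only flags them at publish time.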

  2. NASA's MERBoard: An Interactive Collaborative Workspace Platform. Chapter 4

    NASA Technical Reports Server (NTRS)

    Trimble, Jay; Wales, Roxana; Gossweiler, Rich

    2003-01-01

    This chapter describes the ongoing process by which a multidisciplinary group at NASA's Ames Research Center is designing and implementing a large interactive work surface called the MERBoard Collaborative Workspace. A MERBoard system involves several distributed, large, touch-enabled, plasma display systems with custom MERBoard software. A centralized server and database back the system. We are continually tuning MERBoard to support over two hundred scientists and engineers during the surface operations of the Mars Exploration Rover Missions. These scientists and engineers come from various disciplines and are working in both small and large groups over a span of space and time. We describe the multidisciplinary, human-centered process by which this MERBoard system is being designed, the usage patterns and social interactions that we have observed, and issues we are currently facing.

  3. Collaboration in a Multidisciplinary, Distributed Research Organization: A Case Study

    ERIC Educational Resources Information Center

    Duysburgh, Pieter; Naessens, Kris; Konings, Wim; Jacobs, An

    2012-01-01

    Collaboration has become a main characteristic of academic research today. New forms of research organizations, collaboratories, have come to the fore, with distributed research centres as their most complex example. In this study, we aim to provide some insight into the collaboration strategies of researchers in their daily researching activities…

  4. A DICOM Based Collaborative Platform for Real-Time Medical Teleconsultation on Medical Images.

    PubMed

    Maglogiannis, Ilias; Andrikos, Christos; Rassias, Georgios; Tsanakas, Panayiotis

    2017-01-01

    The paper deals with the design of a Web-based platform for real-time medical teleconsultation on medical images. The proposed platform combines the principles of heterogeneous Workflow Management Systems (WfMSs), the peer-to-peer networking architecture, and the SPA (Single-Page Application) concept to facilitate medical collaboration among geographically distributed healthcare professionals. The presented work leverages state-of-the-art features of the web to support peer-to-peer communication using the WebRTC (Web Real Time Communication) protocol and client-side data processing for creating an integrated collaboration environment. The paper discusses the technical details of implementation and presents the operation of the platform in practice along with some initial results.

  5. Electronic collaboration: Some effects of telecommunication media and machine intelligence on team performance

    NASA Technical Reports Server (NTRS)

    Wellens, A. Rodney

    1991-01-01

    Both NASA and DoD have had a long standing interest in teamwork, distributed decision making, and automation. While research on these topics has been pursued independently, it is becoming increasingly clear that the integration of social, cognitive, and human factors engineering principles will be necessary to meet the challenges of highly sophisticated scientific and military programs of the future. Images of human/intelligent-machine electronic collaboration were drawn from NASA and Air Force reports as well as from other sources. Here, areas of common concern are highlighted. A description of the author's research program testing a 'psychological distancing' model of electronic media effects and human/expert system collaboration is given.

  6. Network-based collaborative research environment LDRD final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davies, B.R.; McDonald, M.J.

    1997-09-01

    The Virtual Collaborative Environment (VCE) and Distributed Collaborative Workbench (DCW) are new technologies that make it possible for diverse users to synthesize and share mechatronic, sensor, and information resources. Using these technologies, university researchers, manufacturers, design firms, and others can directly access and reconfigure systems located throughout the world. The architecture for implementing VCE and DCW has been developed based on the proposed National Information Infrastructure or Information Highway and a tool kit of Sandia-developed software. Further enhancements to the VCE and DCW technologies will facilitate access to other mechatronic resources. This report describes characteristics of VCE and DCW and also includes background information about the evolution of these technologies.

  7. The D3 Middleware Architecture

    NASA Technical Reports Server (NTRS)

    Walton, Joan; Filman, Robert E.; Korsmeyer, David J.; Lee, Diana D.; Mak, Ron; Patel, Tarang

    2002-01-01

    DARWIN is a NASA-developed, Internet-based system for enabling aerospace researchers to securely and remotely access and collaborate on the analysis of aerospace vehicle design data, primarily the results of wind-tunnel testing and numeric (e.g., computational fluid-dynamics) model executions. DARWIN captures, stores, and indexes data; manages derived knowledge (such as visualizations across multiple datasets); and provides an environment for designers to collaborate in the analysis of test results. DARWIN is an interesting application because it supports high volumes of data, integrates multiple modalities of data display (e.g., images and data visualizations), and provides non-trivial access control mechanisms. DARWIN enables collaboration by allowing the sharing not only of visualizations of data but also of commentary about and views of data. Here we provide an overview of the architecture of D3, the third generation of DARWIN. Earlier versions of DARWIN were characterized by browser-based interfaces and a hodge-podge of server technologies: CGI scripts, applets, PERL, and so forth. But browsers proved difficult to control, and a proliferation of computational mechanisms proved inefficient and difficult to maintain. D3 substitutes a pure-Java approach for that medley: a Java client communicates (through RMI over HTTPS) with a Java-based application server. Code on the server accesses information from JDBC databases, distributed LDAP security services, and a collaborative information system. D3 is a three-tier architecture, but unlike 'E-commerce' applications, the data usage pattern suggests different strategies than traditional Enterprise Java Beans - we need to move volumes of related data together, considerable processing happens on the client, and the 'business logic' on the server side is primarily data integration and collaboration.
With D3, we are extending DARWIN to handle other data domains and to be a distributed system, where a single login allows a user transparent access to test results from multiple servers and authority domains.

  8. Programming with process groups: Group and multicast semantics

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry

    1991-01-01

    Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.

  9. A Virtual Mission Operations Center: Collaborative Environment

    NASA Technical Reports Server (NTRS)

    Medina, Barbara; Bussman, Marie; Obenschain, Arthur F. (Technical Monitor)

    2002-01-01

    The intent of the Virtual Mission Operations Center - Collaborative Environment (VMOC-CE) is to provide a central access point for all the resources used in a collaborative mission operations environment, assisting mission operators in communicating on-site and off-site during the investigation and resolution of anomalies. It is a framework that, as a minimum, incorporates online chat, real-time file sharing, and remote application sharing components in one central location. The use of a collaborative environment in mission operations opens up the possibility of a central framework through which other project members can access and interact with mission operations staff remotely. The goal of the Virtual Mission Operations Center (VMOC) Project is to identify, develop, and infuse technology to enable mission control by on-call personnel in geographically dispersed locations. In order to achieve this goal, the following capabilities are needed: autonomous mission control systems; automated systems to contact on-call personnel; synthesis and presentation of mission control status and history information; desktop tools for data and situation analysis; a secure mechanism for remote collaborative commanding; and a collaborative environment for remote cooperative work. The VMOC-CE is a collaborative environment that facilitates remote cooperative work. It is an application instance of the Virtual System Design Environment (VSDE), developed by NASA Goddard Space Flight Center's (GSFC) Systems Engineering Services & Advanced Concepts (SESAC) Branch. The VSDE is a web-based portal that includes a knowledge repository and collaborative environment to serve science and engineering teams in product development. It is a "one stop shop" for product design, providing users real-time access to product development data, engineering and management tools, and relevant design specifications and resources through the Internet. 
The initial focus of the VSDE has been to serve teams working in the early portion of the system/product lifecycle - concept development, proposal preparation, and formulation. The VMOC-CE expands the application of the VSDE into the operations portion of the system lifecycle. It will enable meaningful and real-time collaboration regardless of the geographical distribution of project team members. Team members will be able to interact in satellite operations, specifically for resolving anomalies, through access to a desktop computer and the Internet. Mission Operations Management will be able to participate and monitor up to the minute status of anomalies or other mission operations issues. In this paper we present the VMOC-CE project, system capabilities, and technologies.

  10. Design of material management system of mining group based on Hadoop

    NASA Astrophysics Data System (ADS)

    Xia, Zhiyuan; Tan, Zhuoying; Qi, Kuan; Li, Wen

    2018-01-01

    Against the background of a persistent slowdown in the mining market, improving management in mining groups has become the key link in improving a mine's economic benefit. Based on the practical material-management needs of a mining group, three core components of Hadoop are applied: the distributed file system HDFS, the distributed computing framework Map/Reduce, and the distributed database HBase. A material management system for the mining group is constructed with these three Hadoop components and SSH framework technology. The system was found to strengthen collaboration between the mining group and its affiliated companies, solving problems of traditional mining material-management systems such as inefficient management, server pressure, and hardware performance deficiencies; as a result, the group's materials management is optimized, management costs are reduced, and enterprise profit is increased.
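    The Map/Reduce model behind such a system can be illustrated with a minimal sketch in plain Python (the record fields and company names are hypothetical; a real Hadoop job would run the mapper and reducer over files stored in HDFS):

```python
from itertools import groupby
from operator import itemgetter

# Map phase: emit (company, quantity) pairs from raw material-issue records
def map_record(record):
    yield record["company"], record["quantity"]

# Reduce phase: sum quantities per company after a shuffle/sort by key
def reduce_pairs(pairs):
    pairs = sorted(pairs, key=itemgetter(0))
    return {k: sum(q for _, q in g) for k, g in groupby(pairs, key=itemgetter(0))}

records = [
    {"company": "MineA", "quantity": 120},
    {"company": "MineB", "quantity": 80},
    {"company": "MineA", "quantity": 40},
]
pairs = [kv for r in records for kv in map_record(r)]
totals = reduce_pairs(pairs)   # {'MineA': 160, 'MineB': 80}
```

    Hadoop distributes the same map and reduce steps across cluster nodes, which is what relieves the single-server pressure the abstract mentions.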

  11. Collaborative Scheduling Using JMS in a Mixed Java and .NET Environment

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Wax, Allan; Lam, Ray; Baldwin, John; Borden, Chet

    2006-01-01

    A viewgraph presentation to demonstrate collaborative scheduling using Java Message Service (JMS) in a mixed Java and .Net environment is given. The topics include: 1) NASA Deep Space Network scheduling; 2) Collaborative scheduling concept; 3) Distributed computing environment; 4) Platform concerns in a distributed environment; 5) Messaging and data synchronization; and 6) The prototype.
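    The messaging pattern behind such cross-platform data synchronization can be sketched as a toy in-process topic broker (illustrative Python, not the JMS API; topic and field names are hypothetical). Serializing messages to a text format such as JSON mirrors how Java and .NET subscribers can share one topic:

```python
import json
from collections import defaultdict

class Broker:
    """Toy stand-in for a JMS topic broker (names are illustrative)."""
    def __init__(self):
        self.topics = defaultdict(list)

    def subscribe(self, topic, handler):
        self.topics[topic].append(handler)

    def publish(self, topic, message):
        payload = json.dumps(message)        # language-neutral wire format
        for handler in self.topics[topic]:   # fan out to every subscriber
            handler(json.loads(payload))

broker = Broker()
seen = []
broker.subscribe("dsn/schedule", seen.append)
broker.publish("dsn/schedule", {"antenna": "DSS-14", "event": "track added"})
```

    A real JMS deployment adds durable subscriptions, acknowledgments, and a standalone broker process, but the publish/subscribe decoupling shown here is the core idea.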

  12. The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geospatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ananthakrishnan, Rachana; Bell, Gavin; Cinquini, Luca

    2013-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI, and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  13. The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geo-Spatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinquini, Luca; Crichton, Daniel; Miller, Neill

    2012-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI, and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  14. The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geospatial Data

    NASA Technical Reports Server (NTRS)

    Cinquini, Luca; Crichton, Daniel; Mattmann, Chris; Harney, John; Shipman, Galen; Wang, Feiyi; Ananthakrishnan, Rachana; Miller, Neill; Denvil, Sebastian; Morgan, Mark; et al.

    2012-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  15. Collaborative Distributed Scheduling Approaches for Wireless Sensor Network

    PubMed Central

    Niu, Jianjun; Deng, Zhidong

    2009-01-01

    Energy constraints restrict the lifetime of wireless sensor networks (WSNs) with battery-powered nodes, which poses great challenges for their large-scale application. In this paper, we propose a family of collaborative distributed scheduling approaches (CDSAs) based on the Markov process to reduce the energy consumption of a WSN. The family of CDSAs comprises two approaches: a one-step collaborative distributed approach and a two-step collaborative distributed approach. These approaches enable nodes to collaboratively learn the behavior of their environment and to integrate sleep scheduling with transmission scheduling to reduce energy consumption. We analyze the adaptability and practicality features of the CDSAs. The simulation results show that the two proposed approaches can effectively reduce nodes' energy consumption. Some other characteristics of the CDSAs, like buffer occupation and packet delay, are also analyzed in this paper. We evaluate the CDSAs extensively on a 15-node WSN testbed. The test results show that the CDSAs conserve energy effectively and are feasible for real WSNs. PMID:22408491
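    A two-state Markov sleep/wake schedule of the kind such approaches build on can be sketched as follows (an illustrative Python simulation, not the paper's CDSA; the parameter values are hypothetical):

```python
import random

def simulate_sleep_schedule(p_wake, p_sleep, steps, seed=0):
    """Two-state Markov chain sleep scheduler (illustrative sketch).
    p_wake: P(sleep -> awake); p_sleep: P(awake -> sleep)."""
    rng = random.Random(seed)
    awake = True
    awake_steps = 0
    for _ in range(steps):
        if awake:
            awake_steps += 1
            if rng.random() < p_sleep:
                awake = False
        elif rng.random() < p_wake:
            awake = True
    return awake_steps / steps   # duty cycle: fraction of time the radio is on

# long-run duty cycle approaches p_wake / (p_wake + p_sleep), here 0.25
duty = simulate_sleep_schedule(p_wake=0.1, p_sleep=0.3, steps=10000)
```

    Lowering the duty cycle saves energy at the cost of latency, which is why the CDSAs couple sleep scheduling with transmission scheduling rather than tuning it in isolation.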

  16. Aerospace Systems Design in NASA's Collaborative Engineering Environment

    NASA Technical Reports Server (NTRS)

    Monell, Donald W.; Piland, William M.

    1999-01-01

    Past designs of complex aerospace systems involved an environment consisting of collocated design teams with project managers, technical discipline experts, and other experts (e.g., manufacturing and systems operations). These experts were generally qualified only on the basis of past design experience and typically had access to a limited set of integrated analysis tools. These environments provided less than desirable design fidelity, often led to the inability to assess critical programmatic and technical issues (e.g., cost, risk, technical impacts), and generally derived a design that was not necessarily optimized across the entire system. The continually changing, modern aerospace industry demands systems design processes that involve the best talent available (no matter where it resides) and access to the best design and analysis tools. A solution to these demands involves a design environment referred to as collaborative engineering. The collaborative engineering environment evolving within the National Aeronautics and Space Administration (NASA) is a capability that enables the Agency's engineering infrastructure to interact and use the best state-of-the-art tools and data across organizational boundaries. Using collaborative engineering, the collocated team is replaced with an interactive team structure where the team members are geographically distributed and the best engineering talent can be applied to the design effort regardless of physical location. In addition, a more efficient, higher quality design product is delivered by bringing together the best engineering talent with more up-to-date design and analysis tools. These tools are focused on interactive, multidisciplinary design and analysis with emphasis on the complete life cycle of the system, and they include nontraditional, integrated tools for life cycle cost estimation and risk assessment. 
NASA has made substantial progress during the last two years in developing a collaborative engineering environment. NASA is planning to use this collaborative engineering infrastructure to provide better aerospace systems life cycle design and analysis, which includes analytical assessment of the technical and programmatic aspects of a system from "cradle to grave." This paper describes the recent NASA developments in the area of collaborative engineering, the benefits (realized and anticipated) of using the developed capability, and the long-term plans for implementing this capability across the Agency.

  17. Aerospace Systems Design in NASA's Collaborative Engineering Environment

    NASA Technical Reports Server (NTRS)

    Monell, Donald W.; Piland, William M.

    2000-01-01

    Past designs of complex aerospace systems involved an environment consisting of collocated design teams with project managers, technical discipline experts, and other experts (e.g., manufacturing and systems operation). These experts were generally qualified only on the basis of past design experience and typically had access to a limited set of integrated analysis tools. These environments provided less than desirable design fidelity, often led to the inability to assess critical programmatic and technical issues (e.g., cost, risk, technical impacts), and generally derived a design that was not necessarily optimized across the entire system. The continually changing, modern aerospace industry demands systems design processes that involve the best talent available (no matter where it resides) and access to the best design and analysis tools. A solution to these demands involves a design environment referred to as collaborative engineering. The collaborative engineering environment evolving within the National Aeronautics and Space Administration (NASA) is a capability that enables the Agency's engineering infrastructure to interact and use the best state-of-the-art tools and data across organizational boundaries. Using collaborative engineering, the collocated team is replaced with an interactive team structure where the team members are geographically distributed and the best engineering talent can be applied to the design effort regardless of physical location. In addition, a more efficient, higher quality design product is delivered by bringing together the best engineering talent with more up-to-date design and analysis tools. These tools are focused on interactive, multidisciplinary design and analysis with emphasis on the complete life cycle of the system, and they include nontraditional, integrated tools for life cycle cost estimation and risk assessment. 
NASA has made substantial progress during the last two years in developing a collaborative engineering environment. NASA is planning to use this collaborative engineering infrastructure to provide better aerospace systems life cycle design and analysis, which includes analytical assessment of the technical and programmatic aspects of a system from "cradle to grave." This paper describes the recent NASA developments in the area of collaborative engineering, the benefits (realized and anticipated) of using the developed capability, and the long-term plans for implementing this capability across the Agency.

  18. Aerospace Systems Design in NASA's Collaborative Engineering Environment

    NASA Astrophysics Data System (ADS)

    Monell, Donald W.; Piland, William M.

    2000-07-01

    Past designs of complex aerospace systems involved an environment consisting of collocated design teams with project managers, technical discipline experts, and other experts (e.g., manufacturing and systems operations). These experts were generally qualified only on the basis of past design experience and typically had access to a limited set of integrated analysis tools. These environments provided less than desirable design fidelity, often led to the inability to assess critical programmatic and technical issues (e.g., cost, risk, technical impacts), and generally derived a design that was not necessarily optimized across the entire system. The continually changing, modern aerospace industry demands systems design processes that involve the best talent available (no matter where it resides) and access to the best design and analysis tools. A solution to these demands involves a design environment referred to as collaborative engineering. The collaborative engineering environment evolving within the National Aeronautics and Space Administration (NASA) is a capability that enables the Agency's engineering infrastructure to interact and use the best state-of-the-art tools and data across organizational boundaries. Using collaborative engineering, the collocated team is replaced with an interactive team structure where the team members are geographically distributed and the best engineering talent can be applied to the design effort regardless of physical location. In addition, a more efficient, higher quality design product is delivered by bringing together the best engineering talent with more up-to-date design and analysis tools. These tools are focused on interactive, multidisciplinary design and analysis with emphasis on the complete life cycle of the system, and they include nontraditional, integrated tools for life cycle cost estimation and risk assessment. 
NASA has made substantial progress during the last two years in developing a collaborative engineering environment. NASA is planning to use this collaborative engineering infrastructure to provide better aerospace systems life cycle design and analysis, which includes analytical assessment of the technical and programmatic aspects of a system from "cradle to grave." This paper describes the recent NASA developments in the area of collaborative engineering, the benefits (realized and anticipated) of using the developed capability, and the long-term plans for implementing this capability across the Agency.

  19. Distributed collaborative team effectiveness: measurement and process improvement

    NASA Technical Reports Server (NTRS)

    Wheeler, R.; Hihn, J.; Wilkinson, B.

    2002-01-01

    This paper describes a measurement methodology developed for assessing the readiness, and identifying opportunities for improving the effectiveness, of distributed collaborative design teams preparing to conduct a concurrent design session.

  20. Dynamic Collaboration Infrastructure for Hydrologic Science

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.

    2016-12-01

    Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data-intensive modeling and analysis. It supports the sharing of and collaboration around "resources," which are social objects defined to include both data and models in a structured, standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud-based computation for the execution of hydrologic models and analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare are increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast array of data and computing infrastructure without having to learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may require processing a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges, a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure to enable the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." 
In this presentation we discuss the results of this proof-of-concept prototype which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure in addressing big problems in hydrology.

  1. Catalyzing Collaborative Learning: How Automated Task Distribution May Prompt Students to Collaborate

    ERIC Educational Resources Information Center

    Armstrong, Chandler

    2010-01-01

    Collaborative learning must prompt collaborative behavior among students. Once initiated, collaboration must then facilitate awareness between students of each other's activities and knowledge. Collaborative scripts provide an explicit framework and guidance for roles and activities within student interactions, and are one method of fulfilling the…

  2. Resolving Complex Research Data Management Issues in Biomedical Laboratories: Qualitative Study of an Industry-Academia Collaboration

    PubMed Central

    Myneni, Sahiti; Patel, Vimla L.; Bova, G. Steven; Wang, Jian; Ackerman, Christopher F.; Berlinicke, Cynthia A.; Chen, Steve H.; Lindvall, Mikael; Zack, Donald J.

    2016-01-01

    This paper describes a distributed collaborative effort between industry and academia to systematize data management in an academic biomedical laboratory. The heterogeneous and voluminous nature of research data created in biomedical laboratories makes information management difficult and research unproductive. One such collaborative effort was evaluated over a period of four years using data collection methods including ethnographic observations, semi-structured interviews, web-based surveys, progress reports, conference call summaries, and face-to-face group discussions. Data were analyzed using qualitative methods to 1) characterize specific problems faced by biomedical researchers with traditional information management practices, 2) identify intervention areas for introducing a new research information management system called Labmatrix, and finally to 3) evaluate and delineate important general collaboration (intervention) characteristics that can optimize the outcomes of an implementation process in biomedical laboratories. Results emphasize the importance of end-user perseverance, human-centric interoperability evaluation, and demonstration of return on the investment of effort and time by laboratory members and industry personnel for the success of the implementation process. In addition, there is an intrinsic learning component associated with the implementation process of an information management system. Technology transfer in a complex environment such as the biomedical laboratory can be eased by information systems that support human and cognitive interoperability. Such informatics features can also contribute to successful collaboration and, hopefully, to scientific productivity. PMID:26652980

  3. Instructional Design Issues in a Distributed Collaborative Engineering Design (CED) Instructional Environment

    ERIC Educational Resources Information Center

    Koszalka, Tiffany A.; Wu, Yiyan

    2010-01-01

    Changes in engineering practices have spawned changes in engineering education and prompted the use of distributed learning environments. A distributed collaborative engineering design (CED) course was designed to engage engineering students in learning about and solving engineering design problems. The CED incorporated an advanced interactive…

  4. Leadership in Partially Distributed Teams

    ERIC Educational Resources Information Center

    Plotnick, Linda

    2009-01-01

    Inter-organizational collaboration is becoming more common. When organizations collaborate they often do so in partially distributed teams (PDTs). A PDT is a hybrid team that has at least one collocated subteam and at least two subteams that are geographically distributed and communicate primarily through electronic media. While PDTs share many…

  5. Systems engineering implementation in the preliminary design phase of the Giant Magellan Telescope

    NASA Astrophysics Data System (ADS)

    Maiten, J.; Johns, M.; Trancho, G.; Sawyer, D.; Mady, P.

    2012-09-01

    Like many telescope projects today, the 24.5-meter Giant Magellan Telescope (GMT) is a truly complex system. The primary and secondary mirrors of the GMT are segmented and actuated to support two operating modes: natural seeing and adaptive optics. GMT is a general-purpose telescope supporting multiple science instruments operated in those modes. GMT is a large, diverse collaboration whose development involves geographically distributed teams, so implementing good systems engineering processes for managing the development of systems like GMT is imperative. Managing the requirements flow-down from the science requirements to the component-level requirements is an inherently difficult task in itself. The interfaces must also be negotiated so that the interactions between subsystems and assemblies are well defined and controlled. This paper provides an overview of the systems engineering processes and tools implemented for the GMT project during the preliminary design phase, including requirements management, documentation and configuration control, interface development, and technical risk management. Because of the complexity of the GMT system and the distributed team, web-accessible tools for collaboration are vital. To accomplish this, GMTO has selected three tools: Cognition Cockpit, Xerox Docushare, and SolidWorks Enterprise Product Data Management (EPDM). Key among these is the use of Cockpit for managing and documenting the product tree, architecture, error budget, requirements, interfaces, and risks. Additionally, drawing management is accomplished using an EPDM vault. Docushare, a documentation and configuration management tool, is used to manage the workflow of documents and drawings for the GMT project. These tools electronically facilitate collaboration in real time, enabling the GMT team to track, trace, and report on key project metrics and design parameters.

  6. A Web-Based Multi-Database System Supporting Distributed Collaborative Management and Sharing of Microarray Experiment Information

    PubMed Central

    Burgarella, Sarah; Cattaneo, Dario; Masseroli, Marco

    2006-01-01

    We developed MicroGen, a multi-database, Web-based system for managing all the information characterizing spotted microarray experiments. It supports information gathering and storage according to the Minimum Information About Microarray Experiments (MIAME) standard. It also allows easy sharing of information and data among all the multidisciplinary actors involved in spotted microarray experiments. PMID:17238488

  7. User requirements for geo-collaborative work with spatio-temporal data in a web-based virtual globe environment.

    PubMed

    Yovcheva, Zornitza; van Elzakker, Corné P J M; Köbben, Barend

    2013-11-01

    Web-based tools developed in the last few years offer unique opportunities to effectively support scientists in their efforts to collaborate. Communication among environmental researchers often involves work not only with geographical (spatial) data, but also with temporal data and information. The literature still provides limited documentation of user requirements for effective geo-collaborative work with spatio-temporal data. To start filling this gap, our study adopted a User-Centered Design approach and first explored the requirements of environmental researchers working on distributed research projects for collaborative dissemination of, exchange of, and work with spatio-temporal data. Our results show that system design will be influenced mainly by the nature and type of data users work with. From the end-users' perspective, optimal conversion of huge files of spatio-temporal data for further dissemination, accuracy of conversion, organization of content, and security play a key role in effective geo-collaboration.

  8. A parallel-processing approach to computing for the geographic sciences; applications and systems enhancements

    USGS Publications Warehouse

    Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Liu, Shu-Guang; Nichols, Erin; Haga, Jim; Maddox, Brian; Bilderback, Chris; Feller, Mark; Homer, George

    2001-01-01

    The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting information science research into parallel computing systems and applications.

  9. a Kml-Based Approach for Distributed Collaborative Interpretation of Remote Sensing Images in the Geo-Browser

    NASA Astrophysics Data System (ADS)

    Huang, L.; Zhu, X.; Guo, W.; Xiang, L.; Chen, X.; Mei, Y.

    2012-07-01

    Existing implementations of collaborative image interpretation have many limitations for very large satellite images, such as inefficient browsing and slow transmission. This article presents a KML-based approach to support distributed, real-time, synchronous collaborative interpretation of remote sensing images in the geo-browser. As an OGC standard, KML (Keyhole Markup Language) has the advantage of organizing various types of geospatial data (including imagery, annotations, and geometry) in the geo-browser. Existing KML elements can describe simple interpretation results indicated by vector symbols. To broaden its application, this article extends KML with elements describing complex image processing operations, including band combination, grey-level transformation, and geometric correction. The extended KML is employed to describe and share interpretation operations and results among interpreters. Further, this article develops collaboration-related services: a collaboration launch service, a perception service, and a communication service. The launch service creates a collaborative interpretation task and provides a unified interface for all participants. The perception service lets interpreters share collaboration awareness. The communication service provides interpreters with written (text) communication. Finally, the GeoGlobe geo-browser (an extensible and flexible geospatial platform developed at LIESMARS) is selected for experiments in collaborative image interpretation. The geo-browser, which manages and visualizes massive geospatial information, provides distributed users with quick browsing and transmission. Meanwhile, GIS data (for example, DEM, DTM, and thematic maps) can be integrated in the geo-browser to help improve interpretation accuracy. Results show that the proposed method can support distributed collaborative interpretation of remote sensing images.
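    The abstract's idea of extending KML with elements for image-processing operations can be sketched as follows. This is an illustration only: the `interp` namespace, the `BandCombination` element, and its `bands` attribute are our own hypothetical inventions, not the authors' actual schema.

```python
import xml.etree.ElementTree as ET

# Namespaces: the standard KML namespace plus a hypothetical
# extension namespace for interpretation operations.
KML_NS = "http://www.opengis.net/kml/2.2"
EXT_NS = "http://example.org/interp"  # hypothetical, not the authors' URI
ET.register_namespace("", KML_NS)
ET.register_namespace("interp", EXT_NS)

def make_interpretation_kml(band_order):
    """Build a KML Placemark whose ExtendedData carries a custom
    band-combination operation, so the operation (not just the
    result) can be shared among interpreters in a geo-browser."""
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = "Scene interpretation"
    ext = ET.SubElement(pm, f"{{{KML_NS}}}ExtendedData")
    op = ET.SubElement(ext, f"{{{EXT_NS}}}BandCombination")
    op.set("bands", ",".join(str(b) for b in band_order))
    return ET.tostring(kml, encoding="unicode")

# A false-color composite request (bands 4,3,2) encoded for sharing.
print(make_interpretation_kml([4, 3, 2]))
```

    Any geo-browser unaware of the extension namespace would simply ignore the extra element, which is the usual way KML extensions degrade gracefully.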

  10. Access control and privacy in large distributed systems

    NASA Technical Reports Server (NTRS)

    Leiner, B. M.; Bishop, M.

    1986-01-01

    Large-scale distributed systems consist of workstations, mainframe computers, supercomputers, and other types of servers, all connected by a computer network. These systems are used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than the final answer.

  11. Build It: Will They Come?

    NASA Astrophysics Data System (ADS)

    Corrie, Brian; Zimmerman, Todd

    Scientific research is fundamentally collaborative in nature, and many of today's complex scientific problems require domain expertise in a wide range of disciplines. In order to create research groups that can effectively explore such problems, research collaborations are often formed that involve colleagues at many institutions, sometimes spanning a country and often spanning the world. An increasingly common manifestation of such a collaboration is the collaboratory (Bos et al., 2007), a “…center without walls in which the nation's researchers can perform research without regard to geographical location — interacting with colleagues, accessing instrumentation, sharing data and computational resources, and accessing information from digital libraries.” In order to bring groups together on such a scale, a wide range of components need to be available to researchers, including distributed computer systems, remote instrumentation, data storage, collaboration tools, and the financial and human resources to operate and run such a system (National Research Council, 1993). Media Spaces, as both a technology and a social facilitator, have the potential to meet many of these needs. In this chapter, we focus on the use of scientific media spaces (SMS) as a tool for supporting collaboration in scientific research. In particular, we discuss the design, deployment, and use of a set of SMS environments deployed by WestGrid and one of its collaborating organizations, the Centre for Interdisciplinary Research in the Mathematical and Computational Sciences (IRMACS) over a 5-year period.

  12. Using a commodity high-definition television for collaborative structural biology

    PubMed Central

    Yennamalli, Ragothaman; Arangarasan, Raj; Bryden, Aaron; Gleicher, Michael; Phillips, George N.

    2014-01-01

    Visualization of protein structures using stereoscopic systems is frequently needed by structural biologists working to understand a protein’s structure–function relationships. Often several scientists are working as a team and need simultaneous interaction with each other and the graphics representations. Most existing molecular visualization tools support single-user tasks, which are not suitable for a collaborative group. Expensive caves, domes or geowalls have been developed, but the availability and low cost of high-definition televisions (HDTVs) and game controllers in the commodity entertainment market provide an economically attractive option to achieve a collaborative environment. This paper describes a low-cost environment, using standard consumer game controllers and commercially available stereoscopic HDTV monitors with appropriate signal converters for structural biology collaborations employing existing binary distributions of commonly used software packages like Coot, PyMOL, Chimera, VMD, O, Olex2 and others. PMID:24904249

  13. Organizing Diverse, Distributed Project Information

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    2003-01-01

    SemanticOrganizer is a software application designed to organize and integrate information generated within a distributed organization or as part of a project involving multiple, geographically dispersed collaborators. SemanticOrganizer combines database storage, document sharing, hypermedia navigation, and semantic interlinking into a system that can be customized to satisfy the specific information-management needs of different user communities. The program provides a centralized repository of information that is both secure and accessible to project collaborators via the World Wide Web. SemanticOrganizer's repository can be used to collect diverse information (including forms, documents, notes, data, spreadsheets, images, and sounds) from computers at collaborators' work sites. The program organizes the information using a unique network-structured conceptual framework, wherein each node represents a data record containing not only the original information but also metadata (in effect, standardized data that characterize the information). Links among nodes express semantic relationships among the data records. The program features a Web interface through which users enter, interlink, and search for information in the repository. With this repository, collaborators have immediate access to the most recent project information as well as to archived information. A key advantage of SemanticOrganizer is its ability to interlink information in a natural fashion using customized terminology and concepts familiar to a user community.
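    The network-structured repository described above (records as nodes carrying metadata, typed links expressing semantic relationships) can be illustrated with a minimal sketch. All class and method names here are hypothetical, not SemanticOrganizer's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A repository record: original content plus standardized metadata."""
    node_id: str
    content: str
    metadata: dict = field(default_factory=dict)

class SemanticRepository:
    """Nodes plus typed links; links carry the semantic relationship."""
    def __init__(self):
        self.nodes = {}
        self.links = []  # (source_id, relation, target_id)

    def add(self, node):
        self.nodes[node.node_id] = node

    def link(self, src, relation, dst):
        # A typed link, e.g. "describes", "derived-from", "authored-by".
        self.links.append((src, relation, dst))

    def neighbors(self, node_id, relation=None):
        """Follow outgoing links, optionally filtered by relation type."""
        return [dst for s, r, dst in self.links
                if s == node_id and (relation is None or r == relation)]

repo = SemanticRepository()
repo.add(Node("img-01", "microscope image", {"format": "png"}))
repo.add(Node("note-07", "observation notes", {"author": "A. Collaborator"}))
repo.link("note-07", "describes", "img-01")
print(repo.neighbors("note-07", "describes"))  # → ['img-01']
```

    Navigation in such a structure amounts to following typed links, which is what gives the repository its hypermedia character.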

  14. Serving ocean model data on the cloud

    USGS Publications Warehouse

    Meisinger, Michael; Farcas, Claudiu; Farcas, Emilia; Alexander, Charles; Arrott, Matthew; de La Beaujardiere, Jeff; Hubbard, Paul; Mendelssohn, Roy; Signell, Richard P.

    2010-01-01

    The NOAA-led Integrated Ocean Observing System (IOOS) and the NSF-funded Ocean Observatories Initiative Cyberinfrastructure Project (OOI-CI) are collaborating on a prototype data delivery system for numerical model output and other gridded data using cloud computing. The strategy is to take an existing distributed system for delivering gridded data and redeploy on the cloud, making modifications to the system that allow it to harness the scalability of the cloud as well as adding functionality that the scalability affords.

  15. Managing Distributed Innovation Processes in Virtual Organizations by Applying the Collaborative Network Relationship Analysis

    NASA Astrophysics Data System (ADS)

    Eschenbächer, Jens; Seifert, Marcus; Thoben, Klaus-Dieter

    Distributed innovation processes are considered a new option for handling both the complexity and the speed with which new products and services need to be prepared. Most research on innovation processes has focused on multinational companies from an intra-organisational perspective; the phenomenon of innovation processes in networks, seen from an inter-organisational perspective, has been largely neglected. Collaborative networks present a perfect playground for such distributed innovation processes, and the authors highlight Virtual Organisations in particular because of their dynamic behaviour. Research supporting distributed innovation processes in Virtual Organisations is rather new, so little knowledge about the management of such processes is available. The collaborative network relationship analysis presented here addresses this gap. It is shown that qualitative planning of collaboration intensities can support real business cases by providing knowledge and planning data.

  16. Dynamics of human categorization in a collaborative tagging system: How social processes of semantic stabilization shape individual sensemaking.

    PubMed

    Ley, Tobias; Seitlinger, Paul

    2015-10-01

    We study how categories form and develop over time in a sensemaking task performed by groups of students using a collaborative tagging system. In line with distributed cognition theories, we look at both the tags students use and the strength of their representation in memory. We hypothesize that categories become more differentiated over time as students learn, and that semantic stabilization at the group level (i.e., convergence in the use of tags) mediates this relationship. Results of a field experiment that tested the impact of topic study duration on the specificity of tags confirm these hypotheses, although it was not study duration that produced the effect, but rather the effectiveness of the collaborative taxonomy the groups built. In the groups with higher levels of semantic stabilization, we found use of more specific tags and better representation in memory. We discuss these findings with regard to the important role of the information value of tags, which would drive both convergence at the group level and a shift to more specific levels of categorization. We also discuss the implications for cognitive science research, highlighting the importance of collaboratively built artefacts in the process of knowledge acquisition, and implications for educational applications of collaborative tagging environments.
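    "Semantic stabilization", the convergence in the use of tags across a group, can be quantified in several ways. One generic proxy (not necessarily the measure the authors used) is the mean pairwise Jaccard similarity of users' tag vocabularies:

```python
from itertools import combinations

def tag_convergence(tag_sets):
    """Mean pairwise Jaccard similarity of users' tag vocabularies:
    1.0 means everyone uses the same tags, 0.0 means no overlap."""
    pairs = list(combinations(tag_sets, 2))
    if not pairs:
        return 0.0
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Two hypothetical groups tagging the same topic.
diverged = [{"cells", "bio"}, {"enzymes"}, {"lab"}]
stabilized = [{"enzyme", "catalysis"}, {"enzyme", "catalysis"}, {"enzyme", "kinetics"}]
print(tag_convergence(diverged))    # no shared tags: 0.0
print(tag_convergence(stabilized))  # substantial overlap: higher
```

    A rising value of such a measure over the course of a study would indicate the group converging on a shared taxonomy, which is the group-level variable the experiment relates to tag specificity.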

  17. Dynamics of human categorization in a collaborative tagging system: How social processes of semantic stabilization shape individual sensemaking

    PubMed Central

    Ley, Tobias; Seitlinger, Paul

    2015-01-01

    We study how categories form and develop over time in a sensemaking task performed by groups of students using a collaborative tagging system. In line with distributed cognition theories, we look at both the tags students use and the strength of their representation in memory. We hypothesize that categories become more differentiated over time as students learn, and that semantic stabilization at the group level (i.e., convergence in the use of tags) mediates this relationship. Results of a field experiment that tested the impact of topic study duration on the specificity of tags confirm these hypotheses, although it was not study duration that produced the effect, but rather the effectiveness of the collaborative taxonomy the groups built. In the groups with higher levels of semantic stabilization, we found use of more specific tags and better representation in memory. We discuss these findings with regard to the important role of the information value of tags, which would drive both convergence at the group level and a shift to more specific levels of categorization. We also discuss the implications for cognitive science research, highlighting the importance of collaboratively built artefacts in the process of knowledge acquisition, and implications for educational applications of collaborative tagging environments. PMID:26566299

  18. A System to Provide Real-Time Collaborative Situational Awareness by Web Enabling a Distributed Sensor Network

    NASA Technical Reports Server (NTRS)

    Panangadan, Anand; Monacos, Steve; Burleigh, Scott; Joswig, Joseph; James, Mark; Chow, Edward

    2012-01-01

    In this paper, we describe the architecture of the PATS and SAP systems and how the two systems interoperate, forming a unified capability for deploying intelligence in hostile environments with the objective of providing actionable situational awareness of individuals. The SAP system works in concert with the UICDS information sharing middleware to provide data fusion from multiple sources. UICDS can then publish the sensor data using the OGC's Web Mapping Service, Web Feature Service, and Sensor Observation Service standards. The system described in the paper is able to integrate a spatially distributed sensor system, operating without the benefit of Web infrastructure, with a remote monitoring and control system that is equipped to take advantage of Sensor Web Enablement (SWE).

  19. Market-Based Coordination and Auditing Mechanisms for Self-Interested Multi-Robot Systems

    ERIC Educational Resources Information Center

    Ham, MyungJoo

    2009-01-01

    We propose market-based coordinated task allocation mechanisms, which allocate complex tasks requiring the synchronized, collaborative services of multiple robot agents, and an auditing mechanism, which ensures proper behavior of robot agents by verifying inter-agent activities, for self-interested, fully-distributed, and…

  20. Design Considerations in Developing a Web-Based Mentor Network.

    ERIC Educational Resources Information Center

    Sumner, Todd

    This paper describes a Web-based mentor network designed to pair students in rural independent schools with undergraduates at selected liberal arts colleges. It is one of nine central program elements that constitute the Proteus(TM) system, a multimedia technologies architecture that supports distributed collaborations and work undertaken in the…

  1. Phases and Patterns of Group Development in Virtual Learning Teams

    ERIC Educational Resources Information Center

    Yoon, Seung Won; Johnson, Scott D.

    2008-01-01

    With the advancement of Internet communication technologies, distributed work groups have great potential for remote collaboration and use of collective knowledge. Adopting the Complex Adaptive System (CAS) perspective (McGrath, Arrow, & Berdhal, "Personal Soc Psychol Rev" 4 (2000) 95), which views virtual learning teams as an adaptive and…

  2. Assessing the Application of Three-Dimensional Collaborative Technologies within an E-Learning Environment

    ERIC Educational Resources Information Center

    McArdle, Gavin; Bertolotto, Michela

    2012-01-01

    Today, the Internet plays a major role in distributing learning material within third level education. Multiple online facilities provide access to educational resources. While early systems relied on webpages, which acted as repositories for learning material, nowadays sophisticated online applications manage and deliver learning resources.…

  3. A Three-Level Analysis of Collaborative Learning in Dual-Interaction Spaces

    ERIC Educational Resources Information Center

    Lonchamp, Jacques

    2009-01-01

    CSCL systems which follow the dual-interaction spaces paradigm support the synchronous construction and discussion of shared artifacts by distributed or colocated small groups of learners. The most recent generic dual-interaction space environments, either model based or component based, can be deeply customized by teachers for supporting…

  4. Architecture for an advanced biomedical collaboration domain for the European paediatric cancer research community (ABCD-4-E).

    PubMed

    Nitzlnader, Michael; Falgenhauer, Markus; Gossy, Christian; Schreier, Günter

    2015-01-01

    Today, progress in biomedical research often depends on large, interdisciplinary research projects and tailored information and communication technology (ICT) support. In the context of the European Network for Cancer Research in Children and Adolescents (ENCCA) project, the exchange of data between data source (Source Domain) and data consumer (Consumer Domain) systems in a distributed computing environment needs to be facilitated. This work presents the requirements and the corresponding solution architecture of the Advanced Biomedical Collaboration Domain for Europe (ABCD-4-E). The proposed concept utilises public as well as private cloud systems, the Integrating the Healthcare Enterprise (IHE) framework, and web-based applications to provide the core capabilities in accordance with privacy and security needs. The utility of crucial parts of the concept was evaluated by a prototype implementation. A discussion of the design indicates that the requirements of ENCCA are fully met. A whole-system demonstration is currently being prepared to verify that ABCD-4-E has the potential to evolve into a domain-bridging collaboration platform in the future.

  5. Two Paths from the Same Place: Task Driven and Human Centered Evolution of a Group Information Surface

    NASA Technical Reports Server (NTRS)

    Russell, Daniel M.; Trimble, Jay; Wales, Roxana; Clancy, Daniel (Technical Monitor)

    2003-01-01

    This is the tale of two different implementations of a collaborative information tool that started from the same design source. The Blueboard, developed at IBM Research, is a tool for groups to use in exchanging information in a lightweight, informal, collaborative way. It began as a large display surface for walk-by use in a corporate setting and has evolved in response to task demands and user needs. At NASA, the MERBoard is being designed to support surface operations for the upcoming Mars Exploration Rover missions. The MERBoard was inspired by the Blueboard design, extending it to support the collaboration requirements for viewing, annotating, linking, and distributing information for the science and engineering teams that will operate two rovers on the surface of Mars. The ways in which each group transformed the system reflect not only technical requirements but also the needs of users in each setting and the embedding of the system within the larger socio-technical environment. Lessons about how task requirements, information flow requirements, and work practice drive the evolution of a system are illustrated.

  6. Distributed Application of the Unified Noah LSM with Hydrologic Flow Routing on an Appalachian Headwater Basin

    NASA Astrophysics Data System (ADS)

    Garcia, M.; Kumar, S.; Gochis, D.; Yates, D.; McHenry, J.; Burnet, T.; Coats, C.; Condrey, J.

    2006-05-01

    Collaboration between scientists at UMBC-GEST and NASA-GSFC, the NCAR Research Applications Laboratory (RAL), and Baron Advanced Meteorological Services (BAMS) has produced a modeling framework for applying traditional land surface models (LSMs) in a distributed hydrologic system that can be used for diagnosis and prediction of routed stream discharge hydrographs. This collaboration is oriented toward near-term system implementation across Romania for flood and flash-flood analysis and forecasting as part of the World Bank-funded Destructive Waters Abatement (DESWAT) program. Meteorological forcing from surface observations, model analyses, and numerical forecasts is employed in the NASA-GSFC Land Information System (LIS) to drive the Unified Noah LSM with Noah-Distributed components, using stream network delineation and routing schemes original to this work. The Unified Noah LSM is the outgrowth of a joint modeling effort between several research partners, including NCAR, the NOAA National Center for Environmental Prediction (NCEP), and the Air Force Weather Agency (AFWA). At NCAR, hydrologically oriented extensions to the Noah LSM have been developed for LSM applications in a distributed domain, in order to address the lateral redistribution of soil moisture by surface and subsurface flow processes. These advancements have been integrated into LIS and coupled with an original framework for hydraulic channel network definition and specification, linkages with the Noah-Distributed overland and subsurface flow framework, and distributed cell-to-cell (or link-node) hydraulic routing. This poster presents an overview of the system components and their organization, as well as results of the first U.S. case study performed with this system under various configurations.
The case study simulated precipitation events over a headwater basin in the southern Appalachian Mountains in October 2005, following the landfall of Tropical Storm Tammy in South Carolina. These events followed a long dry period in the region, allowing demonstration of watershed response to strong precipitation forcing under nearly ideal and easily specified initial conditions. The results presented here compare simulated and observed streamflow conditions at various locations in the test watershed using a selection of routing methods.

  7. Resolving complex research data management issues in biomedical laboratories: Qualitative study of an industry-academia collaboration.

    PubMed

    Myneni, Sahiti; Patel, Vimla L; Bova, G Steven; Wang, Jian; Ackerman, Christopher F; Berlinicke, Cynthia A; Chen, Steve H; Lindvall, Mikael; Zack, Donald J

    2016-04-01

    This paper describes a distributed collaborative effort between industry and academia to systematize data management in an academic biomedical laboratory. The heterogeneous and voluminous nature of research data created in biomedical laboratories makes information management difficult and research unproductive. One such collaborative effort was evaluated over a period of four years using data collection methods including ethnographic observations, semi-structured interviews, web-based surveys, progress reports, conference call summaries, and face-to-face group discussions. Data were analyzed using qualitative methods to (1) characterize specific problems faced by biomedical researchers with traditional information management practices, (2) identify intervention areas for introducing a new research information management system called Labmatrix, and finally to (3) evaluate and delineate important general collaboration (intervention) characteristics that can optimize the outcomes of an implementation process in biomedical laboratories. Results emphasize the importance of end-user perseverance, human-centric interoperability evaluation, and demonstration of return on the investment of effort and time by laboratory members and industry personnel for the success of the implementation process. In addition, there is an intrinsic learning component associated with the implementation process of an information management system. Technology transfer in a complex environment such as the biomedical laboratory can be eased by information systems that support human and cognitive interoperability. Such informatics features can also contribute to successful collaboration and, hopefully, to scientific productivity.

  8. Technology for national asset storage systems

    NASA Technical Reports Server (NTRS)

    Coyne, Robert A.; Hulen, Harry; Watson, Richard

    1993-01-01

    An industry-led collaborative project, called the National Storage Laboratory, was organized to investigate technology for storage systems that will be the future repositories for our national information assets. Industry participants are IBM Federal Systems Company, Ampex Recording Systems Corporation, General Atomics DISCOS Division, IBM ADSTAR, Maximum Strategy Corporation, Network Systems Corporation, and Zitel Corporation. Industry members of the collaborative project are funding their own participation. Lawrence Livermore National Laboratory through its National Energy Research Supercomputer Center (NERSC) will participate in the project as the operational site and the provider of applications. The expected result is an evaluation of a high performance storage architecture assembled from commercially available hardware and software, with some software enhancements to meet the project's goals. It is anticipated that the integrated testbed system will represent a significant advance in the technology for distributed storage systems capable of handling gigabyte class files at gigabit-per-second data rates. The National Storage Laboratory was officially launched on 27 May 1992.

  9. Neural network based visualization of collaborations in a citizen science project

    NASA Astrophysics Data System (ADS)

    Morais, Alessandra M. M.; Santos, Rafael D. C.; Raddick, M. Jordan

    2014-05-01

    Citizen science projects are those in which volunteers collaborate in scientific projects, usually by volunteering idle computer time for distributed data processing efforts or by actively labeling or classifying information - shapes of galaxies, whale sounds, and historical records are all examples of citizen science projects in which users access a data collection system to label or classify images and sounds. In order to be successful, a citizen science project must captivate users and keep them interested in the project and in the science behind it, thereby increasing the time they spend collaborating with the project. Understanding the behavior of citizen scientists and their interaction with the data collection systems may help increase user involvement, categorize users according to different parameters, facilitate their collaboration with the systems, inform better user interface design, and allow better planning and deployment of similar projects and systems. User behavior can be actively monitored or derived from interaction with the data collection systems, and records of the interactions can be analyzed using visualization techniques to identify patterns and outliers. In this paper we present results on the visualization of more than 80 million interactions of almost 150 thousand users with the Galaxy Zoo I citizen science project. Visualization of the attributes extracted from their behaviors was done with a clustering neural network (the Self-Organizing Map) and a selection of icon- and pixel-based techniques. These techniques allow the visual identification of groups of similar behavior in several different ways.
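
    As a sketch of the clustering step described above, a minimal Self-Organizing Map can be trained on per-user behavior features in pure Python. The 3x3 grid, learning-rate schedule, and the two synthetic "behavior" features below are illustrative assumptions, not the configuration used in the paper.

```python
import math, random

def bmu(w, x):
    """Index of the best-matching unit (closest weight vector) for input x."""
    return min(range(len(w)), key=lambda i: sum((w[i][d] - x[d]) ** 2 for d in range(len(x))))

def train_som(data, rows=3, cols=3, iters=1000, lr0=0.5, sigma0=1.5, seed=42):
    """Train a small Self-Organizing Map on a list of feature vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    # one weight vector per grid node, randomly initialized
    w = [[rng.random() for _ in range(dim)] for _ in range(rows * cols)]
    for t in range(iters):
        frac = t / iters
        lr = lr0 * (1.0 - frac)               # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.3   # shrinking neighborhood radius
        x = data[rng.randrange(len(data))]
        br, bc = divmod(bmu(w, x), cols)
        for i in range(rows * cols):
            r, c = divmod(i, cols)
            # Gaussian neighborhood pulls nearby nodes toward the sample
            h = math.exp(-((r - br) ** 2 + (c - bc) ** 2) / (2 * sigma * sigma))
            for d in range(dim):
                w[i][d] += lr * h * (x[d] - w[i][d])
    return w

# two synthetic behavior clusters, e.g. (normalized session count, mean inter-event time)
rng = random.Random(0)
users = ([[0.2 + rng.gauss(0, 0.03), 0.2 + rng.gauss(0, 0.03)] for _ in range(50)] +
         [[0.8 + rng.gauss(0, 0.03), 0.8 + rng.gauss(0, 0.03)] for _ in range(50)])
som = train_som(users)
```

    After training, users with similar behavior map to the same or neighboring grid nodes, which is what makes the map usable as a visualization layout.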

  10. Distributed collaborative environments for virtual capability-based planning

    NASA Astrophysics Data System (ADS)

    McQuay, William K.

    2003-09-01

    Distributed collaboration is an emerging technology that will significantly change how decisions are made in the 21st century. Collaboration involves two or more geographically dispersed individuals working together to share and exchange data, information, knowledge, and actions. The marriage of information, collaboration, and simulation technologies provides the decision maker with a collaborative virtual environment for planning and decision support. This paper reviews research focused on applying an open-standards, agent-based framework with integrated modeling and simulation to a new Air Force initiative in capability-based planning, and on the ability to implement it in a distributed virtual environment. The Virtual Capability Planning effort will provide decision-quality knowledge for Air Force resource allocation and investment planning, including examination of proposed capabilities and the cost of alternative approaches, the impact of technologies, identification of primary risk drivers, and creation of executable acquisition strategies. The transformed Air Force business processes are enabled by iterative use of constructive and virtual modeling, simulation, and analysis together with information technology. These tools are applied collaboratively via a technical framework by all the affected stakeholders - warfighter, laboratory, product center, logistics center, test center, and prime contractor.

  11. Comparison of childbirth care models in public hospitals, Brazil.

    PubMed

    Vogt, Sibylle Emilie; Silva, Kátia Silveira da; Dias, Marcos Augusto Bastos

    2014-04-01

    To compare collaborative and traditional childbirth care models. Cross-sectional study of 655 primiparous women in four public health system hospitals in Belo Horizonte, MG, Southeastern Brazil, in 2011 (333 women in the collaborative model and 322 in the traditional model, including those with induced or premature labor). Data were collected using interviews and medical records. The Chi-square test was used to compare the outcomes, and multivariate logistic regression to determine the association between the model and the interventions used. Paid work and schooling showed significant differences in distribution between the models. Oxytocin (50.2% collaborative model and 65.5% traditional model; p < 0.001), amniotomy (54.3% collaborative model and 65.9% traditional model; p = 0.012) and episiotomy (16.1% collaborative model and 85.2% traditional model; p < 0.001) were used less in the collaborative model, with increased application of non-pharmacological pain relief (85.0% collaborative model and 78.9% traditional model; p = 0.042). The association between the collaborative model and the reduced use of oxytocin, artificial rupture of membranes, and episiotomy remained after adjustment for confounding. The care model was not associated with complications in newborns or mothers, nor with the use of spinal or epidural analgesia. The results suggest that the collaborative model may reduce the interventions performed in labor care, with similar perinatal outcomes.
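
    To give a sense of the scale of the chi-square statistics behind these comparisons, the episiotomy contingency table can be reconstructed approximately from the reported percentages (16.1% of 333 vs. 85.2% of 322). The rounded counts below are therefore an assumption derived from the abstract, not the study's raw data.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# episiotomy: ~16.1% of 333 (collaborative) vs ~85.2% of 322 (traditional)
collab_yes, collab_no = 54, 279
trad_yes, trad_no = 274, 48
stat = chi2_2x2(collab_yes, collab_no, trad_yes, trad_no)
```

    A statistic of this magnitude is far beyond the 3.84 critical value at p = 0.05 (1 degree of freedom), consistent with the reported p < 0.001.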

  12. Fermilab DART run control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1996-02-01

    DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. The authors discuss the unique and interesting concepts of the run control and some of the experiences in developing it. They also give a brief update and status of the whole DART system.

  13. Fermilab DART run control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1995-05-01

    DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. We discuss the unique and interesting concepts of the run control and some of our experiences in developing it. We also give a brief update and status of the whole DART system.

  14. Risk Information Management Resource (RIMR): modeling an approach to defending against military medical information assurance brain drain

    NASA Astrophysics Data System (ADS)

    Wright, Willie E.

    2003-05-01

    As Military Medical Information Assurance organizations face modern pressures to downsize and outsource, they battle with losing knowledgeable people who leave and take with them what they know. This knowledge is increasingly being recognized as an important resource, and organizations are now taking steps to manage it. In addition, as the pressures for globalization (Castells, 1998) increase, collaboration and cooperation are becoming more distributed and international. Knowledge sharing in a distributed international environment is becoming an essential part of Knowledge Management, and this is a major shortfall in the current approach to capturing and sharing knowledge in Military Medical Information Assurance. This paper addresses the challenge by exploring the Risk Information Management Resource (RIMR) as a tool for sharing knowledge using the concept of Communities of Practice. RIMR is based on a framework of sharing and using knowledge built from three major components - people, process, and technology. The people aspect enables remote collaboration, supports communities of practice, and rewards and recognizes knowledge sharing while encouraging storytelling. The process aspect enhances knowledge capture and manages information. The technology aspect enhances system integration and data mining, utilizes intelligent agents, and exploits expert systems. These components, coupled with the supporting activities of education and training, technology infrastructure, and information security, enable effective information assurance collaboration.

  15. Light-front spin-dependent spectral function and nucleon momentum distributions for a three-body system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Dotto, Alessio; Pace, Emanuele; Salme, Giovanni

    Poincaré covariant definitions for the spin-dependent spectral function and for the momentum distributions within the light-front Hamiltonian dynamics are proposed for a three-fermion bound system, starting from the light-front wave function of the system. The adopted approach is based on the Bakamjian–Thomas construction of the Poincaré generators, which allows one to easily import the familiar and wide knowledge of the nuclear interaction into a light-front framework. The proposed formalism can find useful applications in refined nuclear calculations, such as those needed for evaluating the European Muon Collaboration effect or semi-inclusive deep inelastic cross sections with polarized nuclear targets, since, remarkably, the light-front unpolarized momentum distribution by definition fulfills both the normalization and momentum sum rules. A straightforward generalization of the definition of the light-front spectral function to an A-nucleon system is also shown.

  16. Light-front spin-dependent spectral function and nucleon momentum distributions for a three-body system

    DOE PAGES

    Del Dotto, Alessio; Pace, Emanuele; Salme, Giovanni; ...

    2017-01-10

    Poincaré covariant definitions for the spin-dependent spectral function and for the momentum distributions within the light-front Hamiltonian dynamics are proposed for a three-fermion bound system, starting from the light-front wave function of the system. The adopted approach is based on the Bakamjian–Thomas construction of the Poincaré generators, which allows one to easily import the familiar and wide knowledge of the nuclear interaction into a light-front framework. The proposed formalism can find useful applications in refined nuclear calculations, such as those needed for evaluating the European Muon Collaboration effect or semi-inclusive deep inelastic cross sections with polarized nuclear targets, since, remarkably, the light-front unpolarized momentum distribution by definition fulfills both the normalization and momentum sum rules. A straightforward generalization of the definition of the light-front spectral function to an A-nucleon system is also shown.

  17. Distributed visualization of gridded geophysical data: the Carbon Data Explorer, version 0.2.3

    NASA Astrophysics Data System (ADS)

    Endsley, K. A.; Billmire, M. G.

    2016-01-01

    Due to the proliferation of geophysical models, particularly climate models, the increasing resolution of their spatiotemporal estimates of Earth system processes, and the desire to easily share results with collaborators, there is a genuine need for tools to manage, aggregate, visualize, and share data sets. We present a new, web-based software tool - the Carbon Data Explorer - that provides these capabilities for gridded geophysical data sets. While originally developed for visualizing carbon flux, this tool can accommodate any time-varying, spatially explicit scientific data set, particularly NASA Earth system science level III products. In addition, the tool's open-source licensing and web presence facilitate distributed scientific visualization, comparison with other data sets and uncertainty estimates, and data publishing and distribution.

  18. Decentralized asset management for collaborative sensing

    NASA Astrophysics Data System (ADS)

    Malhotra, Raj P.; Pribilski, Michael J.; Toole, Patrick A.; Agate, Craig

    2017-05-01

    There has been increased impetus to leverage Small Unmanned Aerial Systems (SUAS) for collaborative sensing applications in which many platforms work together to provide critical situation awareness in dynamic environments. Such applications require critical sensor observations to be made at the right place and time to facilitate the detection, tracking, and classification of ground-based objects. This further requires rapid response to real-world events and the balancing of multiple, competing mission objectives. In this context, human operators become overwhelmed with the management of many platforms, and current automated planning paradigms tend to be centralized and do not scale well to many collaborating platforms. We introduce a decentralized approach based upon information theory and distributed fusion that enables scaling up to large numbers of collaborating SUAS platforms. This is exercised against a military application involving the autonomous detection, tracking, and classification of critical mobile targets. We further show, based upon Monte Carlo simulation results, that our decentralized approach outperforms the more static management strategies employed by human operators and achieves results similar to a centralized approach while remaining scalable and robust to degradation of communication. Finally, we describe the limitations of our approach and future directions for our research.
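
    A minimal sketch of information-theoretic sensor tasking in this spirit: each platform greedily claims the search cell whose binary detection look yields the lowest expected posterior entropy over the target's location. The detection model, prior, and sequential claiming scheme below are illustrative assumptions, not the authors' algorithm.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def expected_entropy(prior, c, pd=0.9, pfa=0.05):
    """Expected posterior entropy after a binary detection look at cell c."""
    p1 = sum(pd * p if i == c else pfa * p for i, p in enumerate(prior))
    post1 = [(pd * p if i == c else pfa * p) / p1 for i, p in enumerate(prior)]
    p0 = 1.0 - p1
    post0 = [((1 - pd) * p if i == c else (1 - pfa) * p) / p0 for i, p in enumerate(prior)]
    return p1 * entropy(post1) + p0 * entropy(post0)

def greedy_assign(prior, n_sensors):
    """Each platform in turn claims the unclaimed cell with the largest expected information gain."""
    claimed = []
    for _ in range(n_sensors):
        best = min((c for c in range(len(prior)) if c not in claimed),
                   key=lambda c: expected_entropy(prior, c))
        claimed.append(best)
    return claimed

prior = [0.40, 0.30, 0.15, 0.10, 0.05]   # belief over five candidate target cells
picks = greedy_assign(prior, 2)
```

    With this prior and detection model, the two platforms claim the two most probable cells; a true decentralized scheme would reach a similar allocation via message passing rather than a shared sequential loop.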

  19. Hidden asymmetry and long range rapidity correlations

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Bzdak, A.; Zalewski, K.

    2012-04-01

    An interpretation of long-range rapidity correlations in terms of the fluctuating rapidity density distribution of the system created in high-energy collisions is proposed. When applied to recent data of the STAR Collaboration, it shows a substantial asymmetric component in the shape of this system in central Au-Au collisions, implying that boost invariance is violated on an event-by-event basis even at central rapidity. This effect may seriously influence the hydrodynamic expansion of the system.

  20. NASA Exhibits

    NASA Technical Reports Server (NTRS)

    Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick

    2001-01-01

    A series of NASA presentations for the Supercomputing 2001 conference are summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) DeBakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.

  1. Collaborated measurement of three-dimensional position and orientation errors of assembled miniature devices with two vision systems

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Wei; Luo, Yi; Yang, Weimin; Chen, Liang

    2013-01-01

    In the assembly of miniature devices, the position and orientation of the parts to be assembled must be guaranteed during or after assembly. In some cases, the relative position or orientation errors among the parts cannot be measured from only one direction using a visual method, because of visual occlusion or because the features of the parts are distributed three-dimensionally. An automatic assembly system for precise miniature devices is introduced. In this modular assembly system, two machine vision systems were employed for measurement of the three-dimensionally distributed assembly errors. High-resolution CCD cameras and precision stages with high position repeatability were integrated to realize high-precision measurement over a large work space. The two cameras worked in collaboration during the measurement procedure to eliminate the influence of movement errors of the rotational and translational stages. A set of templates was designed for calibration of the vision systems and evaluation of the system's measurement accuracy.

  2. Asymmetric Fireballs in Symmetric Collisions

    DOE PAGES

    Bialas, A.; Bzdak, A.; Zalewski, K.

    2013-01-01

    This contribution reports on the results of two recently published papers demonstrating that data of the STAR Collaboration show a substantial asymmetric component in the rapidity distribution of the system created in central Au-Au collisions, implying that boost invariance is violated on an event-by-event basis even at mid c.m. rapidity.

  3. The Computer Science Technical Report (CS-TR) Project: A Pioneering Digital Library Project Viewed from a Library Perspective.

    ERIC Educational Resources Information Center

    Anderson, Greg; And Others

    1996-01-01

    Describes the Computer Science Technical Report Project, one of the earliest investigations into the system engineering of digital libraries which pioneered multiinstitutional collaborative research into technical, social, and legal issues related to the development and implementation of a large, heterogeneous, distributed digital library. (LRW)

  4. Running R Statistical Computing Environment Software on the Peregrine

    Science.gov Websites

    R is a collaborative project for the development of new statistical methodologies and enjoys a large user base. The CRAN task view for High Performance Computing covers programming paradigms to better leverage modern HPC systems; please consult the distribution details.

  5. An Exploration of Distributed Parallel Sorting in GSS

    ERIC Educational Resources Information Center

    Diller, Christopher B. R.

    2013-01-01

    When the members of a group work collaboratively using a group support system (GSS), they often "brainstorm" a list of ideas in response to a question or challenge that faces the group. The satisfaction levels of group members are usually high following this activity. However, satisfaction levels with the process almost always drop…

  6. QSIA--A Web-Based Environment for Learning, Assessing and Knowledge Sharing in Communities

    ERIC Educational Resources Information Center

    Rafaeli, Sheizaf; Barak, Miri; Dan-Gur, Yuval; Toch, Eran

    2004-01-01

    This paper describes a Web-based and distributed system named QSIA that serves as an environment for learning, assessing and knowledge sharing. QSIA--Questions Sharing and Interactive Assignments--offers a unified infrastructure for developing, collecting, managing and sharing of knowledge items. QSIA enhances collaboration in authoring via online…

  7. CALINVASIVES: a revolutionary tool to monitor invasive threats

    Treesearch

    M. Garbelotto; S. Drill; C. Powell; J. Malpas

    2017-01-01

    CALinvasives is a web-based relational database and content management system (CMS) cataloging the statewide distribution of invasive pathogens and pests and the plant hosts they impact. The database has been developed as a collaboration between the Forest Pathology and Mycology Laboratory at UC Berkeley and Calflora. CALinvasives will combine information on the...

  8. Evaluation of a low-end architecture for collaborative software development, remote observing, and data analysis from multiple sites

    NASA Astrophysics Data System (ADS)

    Messerotti, Mauro; Otruba, Wolfgang; Hanslmeier, Arnold

    2000-06-01

    The Kanzelhoehe Solar Observatory is an observing facility located in Carinthia (Austria) and operated by the Institute of Geophysics, Astrophysics and Meteorology of the Karl-Franzens University Graz. A set of instruments for solar surveillance at different wavelength bands is continuously operated in automatic mode and is presently being upgraded to supply near-real-time solar activity indexes for space weather applications. In this frame, we tested a low-end software/hardware architecture running on the PC platform in a non-homogeneous, remotely distributed environment that allows efficient or moderately efficient application sharing at the Intranet and Extranet (i.e., Wide Area Network) levels, respectively. Due to the geographical distribution of the participating teams (Trieste, Italy; Kanzelhoehe and Graz, Austria), we have been using these features for collaborative remote software development and testing, data analysis and calibration, and observing run emulation from multiple sites as well. In this work, we describe the architecture and its performance, based on a series of application sharing tests we carried out to ascertain its effectiveness in real collaborative remote work, observations, and data exchange. The system proved to be reliable at the Intranet level for most distributed tasks, limited to less demanding ones at the Extranet level, but quite effective in remote instrument control when real-time response is not needed.

  9. Development of the AuScope Australian Earth Observing System

    NASA Astrophysics Data System (ADS)

    Rawling, T.

    2017-12-01

    Advances in monitoring technology and significant investment in new national research initiatives will provide substantial new opportunities for the delivery of novel geoscience data streams from across the Australian continent over the next decade. The AuScope Australian Earth Observing System (AEOS) is linking field and laboratory infrastructure across Australia to form a national sensor array focusing on the Solid Earth. AuScope is working with these programs to deploy observational infrastructure, including MT, passive seismic, and GNSS networks, across the entire Australian continent. Where possible, the observational grid will be co-located with strategic basement drilling in areas of shallow cover and tied to national reflection seismic and sampling transects. This integrated suite of distributed earth observation and imaging sensors will provide unprecedented imaging fidelity of our crust, across all length and time scales, to fundamental and applied researchers in the earth, environmental and geospatial sciences. The AEOS will be the Earth Science community's Square Kilometer Array (SKA) - a distributed telescope that looks into the Earth rather than away from it - a 10 million SKA. The AEOS is strongly aligned with other community strategic initiatives, including the UNCOVER research program, as well as other National Collaborative Research Infrastructure programs such as the Terrestrial Environmental Research Network (TERN) and the Integrated Marine Observing System (IMOS), providing an interdisciplinary collaboration platform across the earth and environmental sciences. There is also very close alignment between AuScope and similar international programs such as EPOS, the USArray and EarthCube - potential collaborative linkages that we are currently pursuing more formally.
    The AuScope AEOS Infrastructure System is ultimately designed to enable the progressive construction, refinement and ongoing enrichment of a live, "FAIR" four-dimensional Earth Model for the Australian Continent and its immediate environs.

  10. Empirical analysis on the human dynamics of blogging behavior on GitHub

    NASA Astrophysics Data System (ADS)

    Yan, Deng-Cheng; Wei, Zong-Wen; Han, Xiao-Pu; Wang, Bing-Hong

    2017-01-01

    GitHub is a social collaborative coding platform on which software developers not only collaborate on code but also share knowledge through blogs using GitHub Pages. In this article, we analyze the blogging behavior of software developers on GitHub Pages. The results show that both the commit number and the inter-event time between two consecutive blogging actions follow heavy-tailed distributions. We further observe a significant variety of activity among individual developers, and a strongly positive correlation between activity and the power-law exponent of the inter-event time distribution. We also find a difference between user behavior on GitHub Pages and on other online systems, driven by the diversity of users and the length of contents. In addition, our results show an obvious difference between the majority of developers and elite developers in their burstiness properties.
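
    The tail exponent of such heavy-tailed inter-event times is commonly estimated with the continuous maximum-likelihood (Hill-type) estimator, alpha = 1 + n / sum(ln(x_i / x_min)). A sketch on synthetic data with a known exponent (the GitHub data themselves are not reproduced here):

```python
import math, random

def powerlaw_mle(xs, xmin=1.0):
    """MLE exponent for a continuous power law p(x) ~ x^(-alpha), x >= xmin."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# synthetic inter-event times with known exponent alpha = 2.5,
# drawn by inverse-transform sampling: x = (1 - u)^(-1 / (alpha - 1))
rng = random.Random(1)
alpha = 2.5
xs = [(1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(20000)]
est = powerlaw_mle(xs)
```

    With 20,000 samples the standard error of this estimator is about (alpha - 1) / sqrt(n), roughly 0.01 here, so the recovered exponent lands very close to 2.5.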

  11. Empirical study on dyad act-degree distribution in some collaboration networks

    NASA Astrophysics Data System (ADS)

    Chang, Hui; Zhang, Pei-Pei; He, Yue; He, Da-Ren

    2006-03-01

    We (and our cooperators) suggest studying the evolution of extended collaboration networks with a dyad-act organizing model. Analytic and numeric studies of the model lead to the conclusion that most collaboration networks should show a dyad act-degree distribution (how many acts a dyad belongs to) between a power law and an exponential function, which can be described by a shifted power law. We have performed an empirical study of the dyad act-degree distribution in several collaboration networks: the train networks in China, the bus network of Beijing, and a traditional Chinese medical prescription formulation network. The results show good agreement with this conclusion. We also discuss what the dyad act-degree implies in these networks and possible applications of the study. The details will be published elsewhere.
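
    The interpolation claim can be checked numerically: for a shifted power law p(k) proportional to (k + alpha)^(-gamma), the local log-log slope is -gamma * k / (k + alpha), which is approximately linear in k for k much smaller than alpha (exponential-like head) and approaches the constant -gamma for k much larger than alpha (power-law tail). The parameter values below are illustrative, not fitted to the networks in the paper.

```python
import math

def local_slope(f, k, eps=0.01):
    """Numerical slope of log f versus log k at k."""
    return ((math.log(f(k * (1 + eps))) - math.log(f(k * (1 - eps)))) /
            (math.log(k * (1 + eps)) - math.log(k * (1 - eps))))

gamma, alpha = 2.0, 10.0
p = lambda k: (k + alpha) ** (-gamma)   # shifted power law (unnormalized)

head = local_slope(p, 0.1)     # ~ -gamma*k/(k+alpha): near zero, exponential-like regime
tail = local_slope(p, 1000.0)  # ~ -gamma: pure power-law regime
```
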

  12. A Knowledge Portal and Collaboration Environment for the Earth Sciences

    NASA Astrophysics Data System (ADS)

    D'Agnese, F. A.

    2008-12-01

    Earth Knowledge is developing a web-based 'Knowledge Portal and Collaboration Environment' that will serve as the information-technology-based foundation of a modular Internet-based Earth-Systems Monitoring, Analysis, and Management Tool. This 'Knowledge Portal' is essentially a 'mash-up' of web-based and client-based tools and services that support on-line collaboration, community discussion, and broad public dissemination of earth and environmental science information in a wide-area distributed network. In contrast to specialized knowledge-management or geographic-information systems developed for long-term and incremental scientific analysis, this system will exploit familiar software tools using industry standard protocols, formats, and APIs to discover, process, fuse, and visualize existing environmental datasets using Google Earth and Google Maps. An early form of these tools and services is being used by Earth Knowledge to facilitate the investigations and conversations of scientists, resource managers, and citizen-stakeholders addressing water resource sustainability issues in the Great Basin region of the desert southwestern United States. These ongoing projects will serve as use cases for the further development of this information-technology infrastructure. This 'Knowledge Portal' will accelerate the deployment of Earth-system data and information into an operational knowledge management system that may be used by decision-makers concerned with stewardship of water resources in the American Desert Southwest.

  13. Eagle Racing: Addressing Corporate Collaboration Challenges through an Online Simulation Game

    ERIC Educational Resources Information Center

    Angehrn, Albert A.; Maxwell, Katrina

    2009-01-01

    Effective collaboration is necessary for corporation-wide learning, knowledge exchange, and innovation. However, it is difficult to create a corporate culture that encourages collaboration; the complexity of such collaboration is increased substantially by the diverse and distributed nature of knowledge sources and decision makers in the global…

  14. The double power law in human collaboration behavior: The case of Wikipedia

    NASA Astrophysics Data System (ADS)

    Kwon, Okyu; Son, Woo-Sik; Jung, Woo-Sung

    2016-11-01

    We study human behavior in terms of the inter-event time distribution of revision behavior on Wikipedia, an online collaborative encyclopedia. We observe a double power law distribution for the inter-editing behavior at the population level and a single power law distribution at the individual level. Although interactions between users are indirect or moderate on Wikipedia, we determine that the synchronized editing behavior among users plays a key role in determining the slope of the tail of the double power law distribution.
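
    One simple way to characterize such a double power law is to fit straight lines to the two regimes of the log-log curve and read off the exponents. The crossover point and exponents below are illustrative stand-ins, not the values measured on Wikipedia.

```python
import math

def lsq_slope(points):
    """Least-squares slope through (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

tc, a1, a2 = 100.0, 1.0, 2.5   # crossover time and the two exponents (illustrative)

def density(t):
    # continuous double power law: slope -a1 before tc, -a2 after, matched at tc
    return t ** (-a1) if t < tc else tc ** (a2 - a1) * t ** (-a2)

head = lsq_slope([(math.log(t), math.log(density(t))) for t in range(2, 50)])
tail = lsq_slope([(math.log(t), math.log(density(t))) for t in range(200, 2000, 10)])
```

    On empirical data the same fit would be applied to a logarithmically binned histogram of inter-event times on either side of the estimated crossover.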

  15. Collaborative Information Technologies

    NASA Astrophysics Data System (ADS)

    Meyer, William; Casper, Thomas

    1999-11-01

    Significant effort has been expended to provide infrastructure and to facilitate remote collaborations within the fusion community and beyond. Through the Office of Fusion Energy Science Information Technology Initiative, communication technologies utilized by the fusion community are being improved. The initial thrust of the initiative has been collaborative seminars and meetings. Under the initiative, 23 sites, both laboratory and university, were provided with the hardware required to remotely view, or project, documents being presented. The hardware is capable of delivering documents to a web browser, or to compatible hardware, over ESNET in an access-controlled manner. Documents can also originate from virtually any of the collaborating sites. In addition, RealNetworks servers are being tested to provide audio and/or video in a non-interactive environment, with MBONE providing two-way interaction where needed. Additional effort is directed at remote distributed computing, file systems, security, and standard data storage and retrieval methods. This work was supported by DoE contract No. W-7405-ENG-48.

  16. New similarity of triangular fuzzy number and its application.

    PubMed

    Zhang, Xixiang; Ma, Weimin; Chen, Liping

    2014-01-01

    The similarity of triangular fuzzy numbers is an important metric for their application. Several approaches exist to measure the similarity of triangular fuzzy numbers; however, some of them tend to produce overly large values. To make the similarity well distributed, a new method, SIAM (Shape's Indifferent Area and Midpoint), for measuring triangular fuzzy numbers is put forward, which takes the shape's indifferent area and the midpoint of two triangular fuzzy numbers into consideration. Comparison with other similarity measurements shows the effectiveness of the proposed method. It is then applied to collaborative filtering recommendation to measure users' similarity. A collaborative filtering case is used to illustrate users' similarity based on the cloud model and on triangular fuzzy numbers; the result indicates that users' similarity based on triangular fuzzy numbers achieves better discrimination. Finally, a simulated collaborative filtering recommendation system is developed that uses the cloud model and triangular fuzzy numbers to express users' comprehensive evaluation of items; the results show that the accuracy of collaborative filtering recommendation based on triangular fuzzy numbers is higher.
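
    The SIAM formula itself is defined in the paper; as a point of reference, a common baseline similarity for triangular fuzzy numbers A = (a1, a2, a3) on [0, 1] is the vertex-distance measure sketched below (this is not the SIAM measure).

```python
def tfn_similarity(a, b):
    """Vertex-distance similarity of two triangular fuzzy numbers on [0, 1]."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / 3.0

# identical numbers are maximally similar; similarity decays with vertex distance
good = tfn_similarity((0.2, 0.4, 0.6), (0.2, 0.4, 0.6))
near = tfn_similarity((0.2, 0.4, 0.6), (0.3, 0.5, 0.7))
far  = tfn_similarity((0.2, 0.4, 0.6), (0.6, 0.8, 1.0))
```

    In a collaborative filtering setting, each user's rating profile would be summarized as a triangular fuzzy number and such a measure would rank neighbor users.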

  17. Agent Collaborative Target Localization and Classification in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng

    2007-01-01

    Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of a heterogeneous agent architecture for WSNs in this paper. The proposed agent architecture views the WSN as a multi-agent system and employs mobile agents to reduce in-network communication. Based on this architecture, an energy-based acoustic localization algorithm is proposed, in which an estimate of the target location is obtained by steepest-descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by a distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real-world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show that the proposed agent architecture remarkably facilitates WSN design and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.
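
    The energy-based localization step can be sketched as follows: model each acoustic reading as source strength decaying with squared distance, then fit the source position by steepest descent on the squared-error loss. This is a minimal illustration under assumed names and a fixed termination tolerance (the paper's algorithm adjusts its termination condition dynamically); the sensor layout and learning rate are invented for the example.

```python
def energy(src, sensor, strength=1.0):
    """Acoustic energy reading: source strength decaying as 1 / distance^2."""
    d2 = (src[0] - sensor[0]) ** 2 + (src[1] - sensor[1]) ** 2
    return strength / d2

def locate(sensors, readings, guess=(0.5, 0.5), lr=0.002, tol=1e-14, max_iter=50000):
    """Steepest-descent fit of a 2-D source position to energy readings,
    using a finite-difference gradient and a simple stopping rule on the
    loss improvement (a simplification of the paper's adaptive rule)."""
    def loss(x):
        return sum((energy(x, p) - y) ** 2 for p, y in zip(sensors, readings))
    x = list(guess)
    prev, h = loss(x), 1e-6
    for _ in range(max_iter):
        grad = [(loss([x[0] + h, x[1]]) - prev) / h,
                (loss([x[0], x[1] + h]) - prev) / h]
        x = [x[k] - lr * grad[k] for k in range(2)]
        cur = loss(x)
        if abs(prev - cur) < tol:
            break
        prev = cur
    return tuple(x)

# Four sensors on a unit square observing a source placed at (0.6, 0.4).
sensors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
readings = [energy((0.6, 0.4), p) for p in sensors]
est = locate(sensors, readings)
```

    With noise-free readings the estimate converges close to the true position; real deployments would add measurement noise and the adaptive stopping rule described in the abstract.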

  18. An Orion/Ares I Launch and Ascent Simulation: One Segment of the Distributed Space Exploration Simulation (DSES)

    NASA Technical Reports Server (NTRS)

    Chung, Victoria I.; Crues, Edwin Z.; Blum, Mike G.; Alofs, Cathy; Busto, Juan

    2007-01-01

    This paper describes the architecture and implementation of a distributed launch and ascent simulation of NASA's Orion spacecraft and Ares I launch vehicle. This simulation is one segment of the Distributed Space Exploration Simulation (DSES) Project. The DSES project is a research and development collaboration between NASA centers which investigates technologies and processes for distributed simulation of complex space systems in support of NASA's Exploration Initiative. DSES is developing an integrated end-to-end simulation capability to support NASA development and deployment of new exploration spacecraft and missions. This paper describes the first in a collection of simulation capabilities that DSES will support.

  19. Developing an Open Source, Reusable Platform for Distributed Collaborative Information Management in the Early Detection Research Network

    NASA Technical Reports Server (NTRS)

    Hart, Andrew F.; Verma, Rishi; Mattmann, Chris A.; Crichton, Daniel J.; Kelly, Sean; Kincaid, Heather; Hughes, Steven; Ramirez, Paul; Goodale, Cameron; Anton, Kristen

    2012-01-01

    For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth College, has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community-driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.

  20. Collaborative medical informatics research using the Internet and the World Wide Web.

    PubMed Central

    Shortliffe, E. H.; Barnett, G. O.; Cimino, J. J.; Greenes, R. A.; Huff, S. M.; Patel, V. L.

    1996-01-01

    The InterMed Collaboratory is an interdisciplinary project involving six participating medical institutions. There are two broad mandates for the effort. The first is to further the development, sharing, and demonstration of numerous software and system components, data sets, procedures and tools that will facilitate the collaborations and support the application goals of these projects. The second is to provide a distributed suite of clinical applications, guidelines, and knowledge-bases for clinical, educational, and administrative purposes. To define the interactions among the components, datasets, procedures, and tools that we are producing and sharing, we have identified a model composed of seven tiers, each of which supports the levels above it. In this paper we briefly describe those tiers and the nature of the collaborative process with which we have experimented. PMID:8947641

  1. Alignment of process compliance and monitoring requirements in dynamic business collaborations

    NASA Astrophysics Data System (ADS)

    Comuzzi, Marco

    2017-07-01

    Dynamic business collaborations are intrinsically characterised by change: processes can be distributed or outsourced, and partners may be substituted by new ones with enhanced or different capabilities. In this context, compliance requirements management becomes particularly challenging. Partners in a collaboration may join and leave dynamically, and tasks over which compliance requirements are specified may consequently be distributed or delegated to new partners. This article considers the issue of aligning compliance requirements in a dynamic business collaboration with the monitoring requirements induced on the collaborating partners when change occurs. We first provide a conceptual model of business collaborations and their compliance requirements, introducing the concept of monitoring capabilities induced by compliance requirements. Then, we present a set of mechanisms to ensure consistency between monitoring and compliance requirements in the presence of change, e.g. when tasks are delegated or backsourced in-house. We also discuss a set of metrics to evaluate the status of a collaboration with respect to compliance monitorability. Finally, we discuss a prototype implementation of our framework.

  2. Autonomous Mission Operations for Sensor Webs

    NASA Astrophysics Data System (ADS)

    Underbrink, A.; Witt, K.; Stanley, J.; Mandl, D.

    2008-12-01

    We present interim results of a 2005 ROSES AIST project entitled, "Using Intelligent Agents to Form a Sensor Web for Autonomous Mission Operations", or SWAMO. The goal of the SWAMO project is to shift the control of spacecraft missions from a ground-based, centrally controlled architecture to a collaborative, distributed set of intelligent agents. The network of intelligent agents intends to reduce management requirements by utilizing model-based system prediction and autonomic model/agent collaboration. SWAMO agents are distributed throughout the Sensor Web environment, which may include multiple spacecraft, aircraft, ground systems, and ocean systems, as well as manned operations centers. The agents monitor and manage sensor platforms, Earth sensing systems, and Earth sensing models and processes. The SWAMO agents form a Sensor Web of agents via peer-to-peer coordination. Some of the intelligent agents are mobile and able to traverse between on-orbit and ground-based systems. Other agents in the network are responsible for encapsulating system models to perform prediction of future behavior of the modeled subsystems and components to which they are assigned. The software agents use semantic web technologies to enable improved information sharing among the operational entities of the Sensor Web. The semantics include ontological conceptualizations of the Sensor Web environment, plus conceptualizations of the SWAMO agents themselves. By conceptualizations of the agents, we mean knowledge of their state, operational capabilities, current operational capacities, Web Service search and discovery results, agent collaboration rules, etc. Ontological conceptualizations of the agents are needed to enable autonomous and autonomic operation of the Sensor Web. The SWAMO ontology enables automated decision making and responses to the dynamic Sensor Web environment and to end user science requests.
The current ontology is compatible with Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) Sensor Model Language (SensorML) concepts and structures. The agents are currently deployed on the U.S. Naval Academy MidSTAR-1 satellite and are actively managing the power subsystem on-orbit without the need for human intervention.

  3. Two-Party secret key distribution via a modified quantum secret sharing protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grice, Warren P.; Evans, Philip G.; Lawrie, Benjamin

    We present and demonstrate a method of distributing secret information based on N-party single-qubit Quantum Secret Sharing (QSS) in a modified plug-and-play two-party Quantum Key Distribution (QKD) system with N-2 intermediate nodes, and compare it to both standard QSS and QKD. Our setup is based on the Clavis2 QKD system built by ID Quantique but is generalizable to any implementation. We show that any two out of N parties can build a secret key based on partial information from each other and with collaboration from the remaining N-2 parties. This method significantly reduces the number of resources (single-photon detectors, lasers, and dark fiber connections) needed to implement QKD on the grid.
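
    The "any two of N" key-building step rests on a parity constraint that can be illustrated classically. The sketch below assumes an idealized round in which the N parties' raw bits XOR to zero; the quantum layer that establishes and protects this correlation is not modeled, and the function names and the choice of zero total parity are illustrative.

```python
import secrets

def qss_round(n):
    """One idealized round: n parties hold random bits whose XOR is 0.
    (In the real protocol this correlation is established by passing a
    single qubit through all parties; that part is not simulated here.)"""
    bits = [secrets.randbits(1) for _ in range(n - 1)]
    bits.append(sum(bits) % 2)  # final bit forces the total parity to 0
    return bits

def two_party_key_bit(bits, alice, bob):
    """After the other N-2 parties announce their bits (the collaboration
    step from the abstract), Alice can compute Bob's bit, and vice versa,
    giving the pair a shared secret bit."""
    announced = sum(b for i, b in enumerate(bits) if i not in (alice, bob)) % 2
    # total parity 0  =>  bits[alice] XOR bits[bob] == announced
    return (bits[alice] + announced) % 2  # equals bits[bob]
```

    An eavesdropper who hears only the announcements learns the XOR of Alice's and Bob's bits, not the bits themselves.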

  4. Two-Party secret key distribution via a modified quantum secret sharing protocol

    DOE PAGES

    Grice, Warren P.; Evans, Philip G.; Lawrie, Benjamin; ...

    2015-01-01

    We present and demonstrate a method of distributing secret information based on N-party single-qubit Quantum Secret Sharing (QSS) in a modified plug-and-play two-party Quantum Key Distribution (QKD) system with N-2 intermediate nodes, and compare it to both standard QSS and QKD. Our setup is based on the Clavis2 QKD system built by ID Quantique but is generalizable to any implementation. We show that any two out of N parties can build a secret key based on partial information from each other and with collaboration from the remaining N-2 parties. This method significantly reduces the number of resources (single-photon detectors, lasers, and dark fiber connections) needed to implement QKD on the grid.

  5. A Human Factors Approach to Bridging Systems and Introducing New Technologies

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.

    2011-01-01

    The application of human factors in aviation has grown to cover a wide range of disciplines and methods capable of assessing human-systems integration at many levels. For example, at the individual level, pilot workload may be studied, while at the team level, coordinated workload distribution may be the focal point. At the organizational level, the way in which individuals and teams are supported by training and standards, policies, and procedures may introduce additional, relevant topics. A consideration of human factors at each level contributes to our understanding of successes and failures in pilot performance, but this system, focused on the flight deck alone, is only one part of the airspace system. In the FAA's NextGen plan to overhaul the National Airspace System (NAS), new capabilities will enhance flightdeck systems (pilots), flight operations centers (dispatchers), and air traffic control systems (controllers and air traffic managers). At a minimum, the current roles and responsibilities of these three systems are likely to change. Since increased automation will be central to many of the enhancements, the role of automation is also likely to change. Using NextGen examples, a human factors approach for bridging complex airspace systems will be the main focus of this presentation. It is still crucial to consider the human factors within each system, but the successful implementation of new technologies in the NAS requires an understanding of the collaborations that occur when these systems intersect. This human factors approach to studying collaborative systems begins with detailed task descriptions within each system to establish a baseline of the current operations. The collaborative content and context are delineated through the review of regulatory and advisory materials, letters of agreement, policies, procedures, and documented practices. Field observations and interviews also help to fill out the picture.
Key collaborative functions across systems are identified and placed on a phase-of-flight timeline including information requirements, decision authority and use of automation, as well as level of frequency and criticality.

  6. A Game of Hide and Seek: Expectations of Clumpy Resources Influence Hiding and Searching Patterns

    PubMed Central

    Wilke, Andreas; Minich, Steven; Panis, Megane; Langen, Tom A.; Skufca, Joseph D.; Todd, Peter M.

    2015-01-01

    Resources are often distributed in clumps or patches in space, unless an agent is trying to protect them from discovery and theft using a dispersed distribution. We uncover human expectations of such spatial resource patterns in collaborative and competitive settings via a sequential multi-person game in which participants hid resources for the next participant to seek. When collaborating, resources were mostly hidden in clumpy distributions, but when competing, resources were hidden in more dispersed (random or hyperdispersed) patterns to increase the searching difficulty for the other player. More dispersed resource distributions came at the cost of higher overall hiding (as well as searching) times, decreased payoffs, and an increased difficulty when the hider had to recall earlier hiding locations at the end of the experiment. Participants’ search strategies were also affected by their underlying expectations, using a win-stay lose-shift strategy appropriate for clumpy resources when searching for collaboratively-hidden items, but moving equally far after finding or not finding an item in competitive settings, as appropriate for dispersed resources. Thus participants showed expectations for clumpy versus dispersed spatial resources that matched the distributions commonly found in collaborative versus competitive foraging settings. PMID:26154661
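
    The win-stay lose-shift contrast in the search data can be sketched with a toy forager. Everything here (the ring of 100 cells, the patch layout, the step count) is invented for illustration, not taken from the experiment; the point is only that staying local after a find pays off when resources are clumped, while a miss warrants a long move.

```python
import random

def forage(clumpy, steps=200, seed=1):
    """Toy win-stay lose-shift forager on a ring of 100 cells.

    After a find it searches an adjacent cell (stay); after a miss it
    jumps to a random cell (shift). Returns the number of items found
    out of 10 hidden ones. Hypothetical model, not the paper's task.
    """
    rng = random.Random(seed)
    if clumpy:
        sites = {p + o for p in (20, 60) for o in range(5)}  # two patches of 5
    else:
        sites = set(rng.sample(range(100), 10))              # dispersed
    pos, found = rng.randrange(100), 0
    for _ in range(steps):
        if pos in sites:
            sites.discard(pos)
            found += 1
            pos = (pos + rng.choice((-1, 1))) % 100  # win: stay local
        else:
            pos = rng.randrange(100)                 # lose: shift far
    return found
```

    Running the same strategy against clumpy and dispersed layouts shows why searchers benefit from matching their movement rule to their expectation about the hider's intent.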

  7. Integrating software architectures for distributed simulations and simulation analysis communities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Fellig, Daniel; Linebarger, John Michael

    2005-10-01

    The one-year Software Architecture LDRD (No. 79819) was a cross-site effort between Sandia California and Sandia New Mexico. The purpose of this research was to further develop and demonstrate integrating software architecture frameworks for distributed simulation and distributed collaboration in the homeland security domain. The integrated frameworks were initially developed through the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC), sited at SNL/CA, and the National Infrastructure Simulation & Analysis Center (NISAC), sited at SNL/NM. The primary deliverable was a demonstration of both a federation of distributed simulations and a federation of distributed collaborative simulation analysis communities in the context of the same integrated scenario, which was the release of smallpox in San Diego, California. To our knowledge this was the first time such a combination of federations under a single scenario has ever been demonstrated. A secondary deliverable was the creation of the standalone GroupMeld™ collaboration client, which uses the GroupMeld™ synchronous collaboration framework. In addition, a small pilot experiment that used both integrating frameworks allowed a greater range of crisis management options to be performed and evaluated than would have been possible without the use of the frameworks.

  8. Dan Goldin Presentation: Pathway to the Future

    NASA Technical Reports Server (NTRS)

    1999-01-01

    In the "Path to the Future" presentation held at NASA's Langley Center on March 31, 1999, NASA's Administrator Daniel S. Goldin outlined the future direction and strategies of NASA in relation to the general space exploration enterprise. NASA's Vision, Future System Characteristics, Evolutions of Engineering, and Revolutionary Changes are the four main topics of the presentation. In part one, the Administrator talks in detail about NASA's vision in relation to the NASA Strategic Activities that are Space Science, Earth Science, Human Exploration, and Aeronautics & Space Transportation. Topics discussed in this section include: space science for the 21st century, flying in the Mars atmosphere (Mars plane), exploring new worlds, interplanetary internets, Earth observation and measurements, distributed information-system-in-the-sky, science enabling understanding and application, space station, microgravity, science and exploration strategies, human Mars mission, advanced space transportation program, general aviation revitalization, and reusable launch vehicles. In part two, he briefly talks about the future system characteristics. He discusses major system characteristics like resiliency, self-sufficiency, high distribution, ultra-efficiency, and autonomy, and the necessity to overcome any distance, time, and extreme environment barriers. Part three of Mr. Goldin's talk deals with engineering evolution, mainly evolution in the Computer Aided Design (CAD)/Computer Aided Engineering (CAE) systems. These systems include computer aided drafting, computerized solid models, virtual product development (VPD) systems, networked VPD systems, and knowledge-enriched networked VPD systems. In part four, the last part, the Administrator talks about the need for revolutionary changes in communication and networking areas of a system.
According to the administrator, the four major areas that need cultural changes in the creativity process are human-centered computing, an infrastructure for distributed collaboration, rapid synthesis and simulation tools, and life-cycle integration and validation. Mr. Goldin concludes his presentation with the following maxim "Collaborate, Integrate, Innovate or Stagnate and Evaporate." He also answers some questions after the presentation.

  9. Building a Better Grid, in Partnership with the OMNETRIC Group and Siemens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waight, Jim; Grover, Shailendra; Wiedetz, Clark

    In collaboration with Siemens and the National Renewable Energy Laboratory (NREL), OMNETRIC Group developed a distributed control hierarchy—based on an open field message bus (OpenFMB) framework—that allows control decisions to be made at the edge of the grid. The technology was validated and demonstrated at NREL’s Energy Systems Integration Facility.

  10. Collaborative Model for Remote Experimentation Laboratories Used by Non-Hierarchical Distributed Groups of Engineering Students

    ERIC Educational Resources Information Center

    Herrera, Oriel A.; Fuller, David A.

    2011-01-01

    Remote experimentation laboratories (REL) are systems based on real equipment that allow students to carry out a laboratory practice through the Internet on the computer. In engineering, there have been numerous initiatives to implement REL over recent years, given the fundamental role of laboratory activities. However, in the past efforts have…

  11. Designing an Engaged Swarm: Toward a "Techne" for Multi-Class, Interdisciplinary Collaborations with Nonprofit Partners

    ERIC Educational Resources Information Center

    McCarthy, Seán

    2016-01-01

    This essay proposes a model of university-community partnership called "an engaged swarm" that mobilizes networks of students from across classes and disciplines to work with off-campus partners such as nonprofits. Based on theories that translate the distributed, adaptive, and flexible activity of actors in biological systems to…

  12. Understanding the Impacts and Meaning of Maintaining Detectable Disinfection Residuals in Drinking Water Distribution Systems: Controlling Waterborne Pathogens, Disinfection Byproducts, Organic Chloramines, and Nitrification

    EPA Science Inventory

    EPA Region 6, in collaboration with the Office of Research and Development and Office of Water (OW) in Cincinnati, Ohio, and the Louisiana Department of Health and Hospitals (LDHH), proposes a drinking water research project to understand how maintaining various drinking water...

  13. Managing Communications with Experts in Geographically Distributed Collaborative Networks

    DTIC Science & Technology

    2009-03-01

    agent architectures, and management of sensor-unmanned vehicle decision maker self organizing environments. Although CENETIX has its beginnings...understanding how everything in a complex system is interconnected. Additionally, environmental factors that impact the management of communications with...unrestricted warfare environment. In “Unconventional Insights for Managing Stakeholder Trust”, Pirson, et al. (2008) emphasizes the challenges of managing

  14. The ESA Hubble 15th Anniversary Campaign: A Trans-European collaboration project

    NASA Astrophysics Data System (ADS)

    Zoulias, Manolis; Christensen, Lars Lindberg; Kornmesser, Martin

    2006-08-01

    On April 24th 2005, the NASA/ESA Hubble Space Telescope had been in orbit for 15 years. The anniversary was celebrated by ESA with the production of an 83-minute scientific movie and a 120-page book, both titled "Hubble, 15 years of discovery". In order to cross language and distribution barriers, a network of 16 translators and 22 partners from more than 10 countries was established. The DVD was distributed in approximately 700,000 copies throughout Europe. The project was amongst the largest of its kind with respect to collaboration, distribution, and audience impact. It clearly demonstrated how international collaboration can produce effective cross-cultural educational and outreach products for astronomy.

  15. Information integration from heterogeneous data sources: a Semantic Web approach.

    PubMed

    Kunapareddy, Narendra; Mirhaji, Parsa; Richards, David; Casscells, S Ward

    2006-01-01

    Although the decentralized and autonomous implementation of health information systems has made it possible to extend the reach of surveillance systems to a variety of contextually disparate domains, public health use of data from these systems was not primarily anticipated in their design. The Semantic Web has been proposed to address both representational and semantic heterogeneity in distributed and collaborative environments. We introduce a semantic approach for the integration of health data using the Resource Description Framework (RDF) and the Simple Knowledge Organization System (SKOS) developed by the Semantic Web community.
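
    The integration idea can be illustrated with a minimal triple-store sketch in plain Python: two sites report the same clinical concept under different local codes, and a SKOS-style exact-match table lets queries over the merged graph treat them as one concept. The codes, predicate names, and mapping table are all hypothetical, and a real system would use an RDF library and the `skos:exactMatch` property rather than a dictionary.

```python
# Hypothetical SKOS-style mapping: local codes from two surveillance
# systems are declared exact matches of one shared concept.
EXACT_MATCH = {
    "siteA:ILI": "shared:flu-like-illness",
    "siteB:influenza_sx": "shared:flu-like-illness",
}

def normalize(triples):
    """Rewrite object terms of RDF-like (subject, predicate, object)
    triples into the shared vocabulary where a mapping exists."""
    return {(s, p, EXACT_MATCH.get(o, o)) for s, p, o in triples}

# Each site reports cases under its own local diagnosis code.
site_a = {("case1", "hasDiagnosis", "siteA:ILI")}
site_b = {("case2", "hasDiagnosis", "siteB:influenza_sx")}

# After normalization, one query over the merged store finds both cases.
merged = normalize(site_a) | normalize(site_b)
```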

  16. Mnemonic transmission, social contagion, and emergence of collective memory: Influence of emotional valence, group structure, and information distribution.

    PubMed

    Choi, Hae-Yoon; Kensinger, Elizabeth A; Rajaram, Suparna

    2017-09-01

    Social transmission of memory and its consequence on collective memory have generated enduring interdisciplinary interest because of their widespread significance in interpersonal, sociocultural, and political arenas. We tested the influence of 3 key factors (emotional salience of information, group structure, and information distribution) on mnemonic transmission, social contagion, and collective memory. Participants individually studied emotionally salient (negative or positive) and nonemotional (neutral) picture-word pairs that were completely shared, partially shared, or unshared within participant triads, and then completed 3 consecutive recalls in 1 of 3 conditions: individual-individual-individual (control), collaborative-collaborative (identical group; insular structure)-individual, and collaborative-collaborative (reconfigured group; diverse structure)-individual. Collaboration enhanced negative memories, especially in the insular group structure and especially for shared information, and promoted collective forgetting of positive memories. Diverse group structure reduced this negativity effect. Unequally distributed information led to social contagion that created false memories; diverse structure propagated a greater variety of false memories, whereas insular structure promoted confidence in false recognition and false collective memory. A simultaneous assessment of network structure, information distribution, and emotional valence breaks new ground in specifying how network structure shapes the spread of negative memories and false memories, and the emergence of collective memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Recent GRC Aerospace Technologies Applicable to Terrestrial Energy Systems

    NASA Technical Reports Server (NTRS)

    Kankam, David; Lyons, Valerie J.; Hoberecht, Mark A.; Tacina, Robert R.; Hepp, Aloysius F.

    2000-01-01

    This paper is an overview of a wide range of recent aerospace technologies under development at the NASA Glenn Research Center, in collaboration with other NASA centers, government agencies, industry, and academia. The focus areas are space solar power, advanced power management and distribution systems, Stirling cycle conversion systems, fuel cells, advanced thin film photovoltaics and batteries, and combustion technologies. The aerospace-related objectives of the technologies are generation of space power, development of cost-effective, reliable, high-performance power systems, cryogenic applications, energy storage, and reduction in gas-turbine emissions, with attendant clean jet engines. The terrestrial energy applications of the technologies include augmentation of bulk power in ground power distribution systems and generation of residential, commercial, and remote power, as well as promotion of a pollution-free environment via reduction in combustion emissions.

  18. NASA's EOSDIS Cumulus: Ingesting, Archiving, Managing, and Distributing from Commercial Cloud

    NASA Astrophysics Data System (ADS)

    Baynes, K.; Ramachandran, R.; Pilone, D.; Quinn, P.; Schuler, I.; Gilman, J.; Jazayeri, A.

    2017-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) has been working towards a vision of a cloud-based, highly flexible ingest, archive, management, and distribution system for its ever-growing and evolving data holdings. This system, Cumulus, is emerging from its prototyping stages and is poised to make a huge impact on how NASA manages and disseminates its Earth science data. This talk will outline the motivation for this work, present the achievements and hurdles of the past 18 months, and chart a course for the future expansion of Cumulus. We will explore not just the technical but also the socio-technical challenges that we face in evolving a system of this magnitude into the cloud, and how we are rising to meet those challenges through open collaboration and intentional stakeholder engagement.

  19. Collaborative Professional Development for Distributed Teacher Leadership towards School Change

    ERIC Educational Resources Information Center

    Sales, Auxiliadora; Moliner, Lidón; Francisco Amat, Andrea

    2017-01-01

    Professional development that aims to build school change capacity requires spaces for collaborative action and reflection. These spaces should promote learning and foster skills for distributed leadership in managing school change. The present study analyses the case of the Seminar for Critical Citizenship (SCC) established by teachers of infant,…

  20. The Pathway Program: How a Collaborative, Distributed Learning Program Showed Us the Future of Social Work Education

    ERIC Educational Resources Information Center

    Morris, Teresa; Mathias, Christine; Swartz, Ronnie; Jones, Celeste A; Klungtvet-Morano, Meka

    2013-01-01

    This paper describes a three-campus collaborative, distributed learning program that delivers social work education to remote rural and desert communities in California via distance learning modalities. This "Pathway Program" provides accredited social work education for a career ladder beginning with advising and developing an academic…

  1. A Critical Exploration of Collaborative and Distributed Leadership in Higher Education: Developing an Alternative Ontology through Leadership-as-Practice

    ERIC Educational Resources Information Center

    Youngs, Howard

    2017-01-01

    Since the turn of the millennium, interest in collaborative and distributed conceptualisations of leadership has gathered momentum, particularly in education. During the same period, higher education institutions have been embedded in practices shaped by New Public Management. The resultant reconfiguration of structural arrangements within…

  2. Collaborative Strategic Board Games as a Site for Distributed Computational Thinking

    ERIC Educational Resources Information Center

    Berland, Matthew; Lee, Victor R.

    2011-01-01

    This paper examines the idea that contemporary strategic board games represent an informal, interactional context in which complex computational thinking takes place. When games are collaborative--that is, a game requires that players work in joint pursuit of a shared goal--the computational thinking is easily observed as distributed across…

  3. Distributed collaborative environments for predictive battlespace awareness

    NASA Astrophysics Data System (ADS)

    McQuay, William K.

    2003-09-01

    The past decade has produced significant changes in the conduct of military operations: asymmetric warfare, the reliance on dynamic coalitions, stringent rules of engagement, increased concern about collateral damage, and the need for sustained air operations. Mission commanders need to assimilate a tremendous amount of information, make quick-response decisions, and quantify the effects of those decisions in the face of uncertainty. Situational assessment is crucial in understanding the battlespace. Decision support tools in a distributed collaborative environment offer the capability of decomposing complex multitask processes and distributing them over a dynamic set of execution assets that include modeling, simulation, and analysis tools. Decision support technologies can semi-automate activities, such as analysis and planning, that have a reasonably well-defined process, and can provide machine-level interfaces to refine the myriad of information that the commander must fuse. Collaborative environments provide the framework and integrate models, simulations, and domain-specific decision support tools for the sharing and exchanging of data, information, knowledge, and actions. This paper describes ongoing AFRL research efforts in applying distributed collaborative environments to predictive battlespace awareness.

  4. Surviving the Glut: The Management of Event Streams in Cyberphysical Systems

    NASA Astrophysics Data System (ADS)

    Buchmann, Alejandro

    Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects imply collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de

  5. A Methodology and a Web Platform for the Collaborative Development of Context-Aware Systems

    PubMed Central

    Martín, David; López-de-Ipiña, Diego; Alzua-Sorzabal, Aurkene; Lamsfus, Carlos; Torres-Manzanera, Emilio

    2013-01-01

    Information and services personalization is essential for an optimal user experience. Systems have to be able to acquire data about the user's context, process them in order to identify the user's situation and, finally, adapt the functionality of the system to that situation, but the development of context-aware systems is complex. Data coming from distributed and heterogeneous sources have to be acquired, processed and managed. Several programming frameworks have been proposed in order to simplify the development of context-aware systems. These frameworks offer high-level application programming interfaces aimed at programmers, which complicates the involvement of domain experts in the development life-cycle. The participation of users who do not have programming skills but are experts in the application domain can speed up and improve the development process of these kinds of systems. Apart from that, there is a lack of methodologies to guide the development process. This article presents, as its main contributions, the implementation and evaluation of a web platform and a methodology for the collaborative development of context-aware systems by programmers and domain experts. PMID:23666131
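The acquire-identify-adapt loop described in this abstract can be sketched as a small declarative rule engine; all rule names, context keys, and actions below are hypothetical, illustrating how non-programmer domain experts might contribute adaptation rules without touching framework APIs:

```python
# Minimal sketch of rule-based context adaptation (all names are illustrative,
# not taken from the platform in the article). Domain experts supply
# declarative rules; the engine matches the observed context and returns the
# adaptations to apply.

def match(rule, context):
    """A rule matches when every condition it names holds in the context."""
    return all(context.get(key) == value for key, value in rule["when"].items())

def adapt(rules, context):
    """Return the actions of all rules that match the observed context."""
    return [rule["then"] for rule in rules if match(rule, context)]

# Hypothetical rules a domain expert (e.g., in tourism) might author.
rules = [
    {"when": {"location": "museum", "moving": False}, "then": "show_exhibit_info"},
    {"when": {"location": "outdoors"}, "then": "show_weather"},
]

print(adapt(rules, {"location": "museum", "moving": False}))
```

A real platform would add sensor acquisition and conflict resolution, but the rule/match/adapt split is the part a web authoring tool can expose to domain experts.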

  6. Fault Injection and Monitoring Capability for a Fault-Tolerant Distributed Computation System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo; Yates, Amy M.; Malekpour, Mahyar R.

    2010-01-01

    The Configurable Fault-Injection and Monitoring System (CFIMS) is intended for the experimental characterization of effects caused by a variety of adverse conditions on a distributed computation system running flight control applications. A product of research collaboration between NASA Langley Research Center and Old Dominion University, the CFIMS is the main research tool for generating actual fault response data with which to develop and validate analytical performance models and design methodologies for the mitigation of fault effects in distributed flight control systems. Rather than a fixed design solution, the CFIMS is a flexible system that enables the systematic exploration of the problem space and can be adapted to meet the evolving needs of the research. The CFIMS has the capabilities of system-under-test (SUT) functional stimulus generation, fault injection and state monitoring, all of which are supported by a configuration capability for setting up the system as desired for a particular experiment. This report summarizes the work accomplished so far in the development of the CFIMS concept and documents the first design realization.

  7. LP DAAC MEaSUREs Project Artifact Tracking Via the NASA Earthdata Collaboration Environment

    NASA Astrophysics Data System (ADS)

    Bennett, S. D.

    2015-12-01

    The Land Processes Distributed Active Archive Center (LP DAAC) is a NASA Earth Observing System (EOS) Data and Information System (EOSDIS) DAAC that supports selected EOS Community non-standard data products, such as the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Emissivity Database (GED), and also supports NASA Earth Science programs such as Making Earth System Data Records for Use in Research Environments (MEaSUREs) by contributing to the provision of long-term, consistent, and mature data products. As described in the LP DAAC Project Lifecycle Plan (Daucsavage, J.; Bennett, S., 2014), key elements within the Project Inception Phase fuse knowledge between NASA stakeholders, data producers, and NASA data providers. To support and deliver excellence in NASA data stewardship, and to accommodate long-tail data preservation for Community and MEaSUREs products, the LP DAAC is utilizing NASA's own Earthdata Collaboration Environment to bridge stakeholder communication divides. By leveraging a NASA-supported platform, this poster describes how Atlassian Confluence software combined with NASA URS/Earthdata support can maintain each project's members, status, documentation, and artifact checklist. Furthermore, this solution provides a gateway for project communities to become familiar with NASA clients, as well as for educating the project's NASA DAAC scientists for NASA client distribution.

  8. Computer-Based Assessment of Collaborative Problem Solving: Exploring the Feasibility of Human-to-Agent Approach

    ERIC Educational Resources Information Center

    Rosen, Yigal

    2015-01-01

    How can activities in which collaborative skills of an individual are measured be standardized? In order to understand how students perform on collaborative problem solving (CPS) computer-based assessment, it is necessary to examine empirically the multi-faceted performance that may be distributed across collaboration methods. The aim of this…

  9. Potential and Impact Factors of the Knowledge and Information Awareness Approach for Fostering Net-Based Collaborative Problem-Solving: An Overview

    ERIC Educational Resources Information Center

    Engelmann, Tanja

    2014-01-01

    For effective communication and collaboration in learning situations, it is important to know what the collaboration partners know. However, the acquisition of this knowledge is difficult, especially in collaborating groups with spatially distributed members. One solution is the "Knowledge and Information Awareness" approach developed by…

  10. Enabling Tools and Methods for International, Inter-disciplinary and Educational Collaboration

    NASA Astrophysics Data System (ADS)

    Robinson, E. M.; Hoijarvi, K.; Falke, S.; Fialkowski, E.; Kieffer, M.; Husar, R. B.

    2008-05-01

    In the past, collaboration has taken place in tightly-knit workgroups where the members had direct connections to each other. Such collaboration was confined to small workgroups and person-to-person communication. Recent developments through the Internet foster virtual workgroups and organizations where dynamic, 'just-in-time' collaboration can take place over a much larger scale. The emergence of virtual workgroups has strongly influenced the interaction of international, interdisciplinary, and educational activities. In this paper we present an array of enabling tools and methods that incorporate the new technologies, including web services, software mashups, tag-based structuring and searching, and wikis for collaborative writing and content organization. Large monolithic, 'do-it-all' software tools are giving way to web service modules, combined through service chaining. Application software can now be created using Service Oriented Architecture (SOA). In the air quality community, data providers and users are distributed in space and time, creating barriers to data access. By exposing the data on the internet, these space and time barriers are lessened. The federated data system, DataFed, developed at Washington University, accesses data from autonomous, distributed providers. Through data "wrappers", DataFed provides uniform and standards-based access services to heterogeneous, distributed data. Service orientation not only lowers the entry resistance for service providers, but it also allows the creation of user-defined applications and/or mashups. For example, Google Earth's open API allowed many groups to mash their content with Google Earth. Ad hoc tagging gives a rich description of internet resources, but it has the disadvantage of providing a fuzzy schema. The semantic uniformity of internet resources can be improved by controlled tagging, which applies consistent namespaces and tag combinations to diverse objects. One example of this is the photo-sharing web application Flickr. Just like data, photos exposed through the internet can be reused in ways unknown and unanticipated by the provider. For air quality applications, Flickr allowed a rich collection of images of forest fire smoke, wind-blown dust and haze events to be tagged with controlled tags and used for evaluating subtle features of the events. Wikis, originally used just for collaboratively writing and discussing documents, now also serve as social workflow managers. For air quality data, wikis provide the means to collaboratively create rich metadata. Wikis become a virtual meeting place to discuss ideas before a workshop or conference, display tagged internet resources, and collaboratively work on documents. Wikis are also useful in the classroom. For instance, in class projects the wiki displays harvested resources, maintains collaborative documents and discussions, and serves as the organizational memory for the project.
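The controlled-tagging idea above can be sketched as a namespace check: free-form tags are accepted only when they follow a known `namespace:value` pattern. The namespaces and vocabulary here are invented for illustration, not taken from DataFed or Flickr:

```python
# Sketch of controlled tagging (hypothetical vocabulary). Tags that use a
# known namespace and controlled value pass; ad hoc tags are rejected, which
# keeps the tag schema consistent across diverse objects (photos, datasets).

CONTROLLED_NAMESPACES = {
    "event": {"smoke", "dust", "haze"},   # illustrative event types
    "region": {"us", "eu"},               # illustrative regions
}

def validate_tag(tag):
    """Accept 'namespace:value' tags whose value is in the controlled vocabulary."""
    namespace, _, value = tag.partition(":")
    return value in CONTROLLED_NAMESPACES.get(namespace, set())

print([t for t in ["event:smoke", "event:fire", "region:us"] if validate_tag(t)])
```

The design choice is the trade-off the abstract names: a controlled namespace sacrifices the richness of ad hoc tags for semantic uniformity that makes cross-collection search reliable.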

  11. Balloon-Borne Full-Column Greenhouse Gas Profiling Field Campaign Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, Marc L

    The vertical distributions of CO2, CH4, and other gases provide important constraints for the determination of terrestrial and ocean sources and sinks of carbon and other biogeochemical processes in the Earth system. The DOE Biological and Environmental Research Program (DOE-BER) and the NOAA Earth System Research Laboratory (NOAA-ESRL) collaborate to quantify the vertically resolved distribution of atmospheric carbon-cycle gases (CO2 and CH4) within approximately 99% of the atmospheric column at the DOE ARM Southern Great Plains Facility in Oklahoma. In 2015, flights were delayed while research at NOAA focused on evaluating sources of systematic errors in the gas collection and analysis system and modifying the sampling system to provide duplicate air samples in a single flight package. In 2017, we look forward to proposing additional sampling and analysis at ARM-SGP (and other sites) that characterize the vertical distribution of CO2 and CH4 over time and space.
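As a hedged illustration of what a vertically resolved profile supports, a column-average mole fraction can be approximated by pressure-weighting the profile samples. The weighting scheme and numbers below are illustrative assumptions, not the NOAA analysis method or campaign data:

```python
# Illustrative sketch: approximate a column-average CO2 mole fraction from a
# balloon profile by weighting each layer with its pressure thickness (a
# simple proxy for the mass of air it represents). Values are made up.

def column_average(pressures_hpa, mole_fractions_ppm):
    """Pressure-weighted mean over layers between successive sample levels."""
    weights = [pressures_hpa[i] - pressures_hpa[i + 1]
               for i in range(len(pressures_hpa) - 1)]
    layer_means = [(mole_fractions_ppm[i] + mole_fractions_ppm[i + 1]) / 2
                   for i in range(len(mole_fractions_ppm) - 1)]
    return sum(w * m for w, m in zip(weights, layer_means)) / sum(weights)

# Profile from surface (1000 hPa) to near the top of the column (10 hPa).
print(round(column_average([1000, 500, 10], [400.0, 402.0, 398.0]), 2))
```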

  12. A Randomized Effectiveness Trial of a Systems-Level Approach to Stepped Care for War-Related PTSD

    DTIC Science & Technology

    2016-05-01

    period rather than storing the hard copies at their respective posts was approved. Also, an amendment changing the study Initiating PI from COL...care is the de facto mental health system; in Collaborative Medicine Case Studies : Evidence in Prac- tice. Edited by Kessler R, Stafford D. New York...Public Release; Distribution Unlimited 13. SUPPLEMENTARY NOTES 14. ABSTRACT During the 6.5 year study period, investigators developed the STEPS UP

  13. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  14. Making Sense of Rocket Science - Building NASA's Knowledge Management Program

    NASA Technical Reports Server (NTRS)

    Holm, Jeanne

    2002-01-01

    The National Aeronautics and Space Administration (NASA) has launched a range of KM activities-from deploying intelligent "know-bots" across millions of electronic sources to ensuring tacit knowledge is transferred across generations. The strategy and implementation focuses on managing NASA's wealth of explicit knowledge, enabling remote collaboration for international teams, and enhancing capture of the key knowledge of the workforce. An in-depth view of the work being done at the Jet Propulsion Laboratory (JPL) shows the integration of academic studies and practical applications to architect, develop, and deploy KM systems in the areas of document management, electronic archives, information lifecycles, authoring environments, enterprise information portals, search engines, experts directories, collaborative tools, and in-process decision capture. These systems, together, comprise JPL's architecture to capture, organize, store, and distribute key learnings for the U.S. exploration of space.

  15. NPS-NRL-Rice-UIUC Collaboration on Navy Atmosphere-Ocean Coupled Models on Many-Core Computer Architectures Annual Report

    DTIC Science & Technology

    2015-09-30

    DISTRIBUTION STATEMENT A: Distribution approved for public release; distribution is unlimited. NPS-NRL-Rice-UIUC Collaboration on Navy Atmosphere...portability. There is still a gap in the OCCA support for Fortran programmers who do not have accelerator experience. Activities at Rice/Virginia Tech are...for automated data movement and for kernel optimization using source code analysis and run-time detective work. In this quarter the Rice/Virginia

  16. Advanced Group Support Systems and Facilities

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1999-01-01

    The document contains the proceedings of the Workshop on Advanced Group Support Systems and Facilities held at NASA Langley Research Center, Hampton, Virginia, July 19-20, 1999. The workshop was jointly sponsored by the University of Virginia Center for Advanced Computational Technology and NASA. Workshop attendees came from NASA, other government agencies, industry, and universities. The objectives of the workshop were to assess the status of advanced group support systems and to identify the potential of these systems for use in future collaborative distributed design and synthesis environments. The presentations covered the current status and effectiveness of different group support systems.

  17. Inflection Points in Magnetic Resonance Imaging Technology-35 Years of Collaborative Research and Development.

    PubMed

    Wood, Michael L; Griswold, Mark A; Henkelman, Mark; Hennig, Jürgen

    2015-09-01

    The technology for clinical magnetic resonance imaging (MRI) has advanced with remarkable speed, in a manner reflecting the influence of 3 forces: collaboration between disciplines, collaboration between academia and industry, and the enabling of software applications by hardware. These forces are evident in the key developments from the past and emerging trends for the future highlighted in this review article. These developments are associated with MRI system attributes such as wider, shorter, and stronger magnets; specialty magnets and hybrid devices; k-space and the notion that magnetic field gradients perform a Fourier transform on the spatial distribution of magnetization; phased-array coils and parallel imaging; the user interface; the wide range of contrast possible; and applications that exploit motion-induced phase shifts. An attempt is made to show connections between these developments and how the 3 forces mentioned previously will continue to shape the technology used so productively in clinical MRI.

  18. Dementia, distributed interactional competence and social membership.

    PubMed

    Gjernes, Trude; Måseide, Per

    2015-12-01

    The article analyzes how a person with dementia playing a guitar collaborates with other people in a joint activity. The analysis shows that a person with dementia may gain social membership in a group of persons with and without dementia through social interaction, collaboration, scaffolding and use of material anchors. It shows that interactional skills, as well as skills as a guitar player, are not only products of a mind-body system, but also a product of collaboration between different actors with different participant statuses in a particular situation. The guitar player's mind emerges in the social context of the joint activity and scaffolding. Scaffolding comes from interactive moves from the other participants without dementia and from the guitar. The guitar represents a material anchor. It is a tool for participation, experiences of pleasure, and coping, but it is also a challenge that requires management of face threatening events. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Cloud-based image sharing network for collaborative imaging diagnosis and consultation

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Gu, Yiping; Wang, Mingqing; Sun, Jianyong; Li, Ming; Zhang, Weiqiang; Zhang, Jianguo

    2018-03-01

    We present a new approach to designing a cloud-based image sharing network for collaborative imaging diagnosis and consultation over the Internet, which enables radiologists, specialists and physicians located at different sites to collaboratively and interactively perform imaging diagnosis or consultation for difficult or emergency cases. The designed network combines a regional RIS, grid-based image distribution management, an integrated video conferencing system and multi-platform interactive image display devices, together with secured messaging and data communication. There are three kinds of components in the network: edge servers, a grid-based imaging document registry and repository, and multi-platform display devices. This network has been deployed on a public cloud platform of Alibaba through the Internet since March 2017 and used for small lung nodule and early-stage lung cancer diagnosis services between the radiology departments of Huadong Hospital in Shanghai and the First Hospital of Jiaxing in Zhejiang Province.

  20. Market-Based Decision Guidance Framework for Power and Alternative Energy Collaboration

    NASA Astrophysics Data System (ADS)

    Altaleb, Hesham

    With the deregulation of power energy markets, innovations have transformed a once-static network into a more flexible grid. Microgrids have also been deployed to serve various purposes (e.g., reliability, sustainability, etc.). With the rapid deployment of smart grid technologies, it has become possible to measure and record both the quantity and the time of consumption of electrical power. In addition, capabilities for controlling distributed supply and demand have resulted in complex systems where inefficiencies are possible and where improvements can be made. Electric power, like other volatile resources, cannot be stored efficiently; therefore, managing such a resource requires considerable attention. Such complex systems present a need for decisions that can streamline consumption, delay infrastructure investments, and reduce costs. When renewable power resources and the need for limiting harmful emissions are added to the equation, the search space for decisions becomes increasingly complex. As a result, the need for a comprehensive decision guidance system for electrical power consumption and production becomes evident. In this dissertation, I formulate and implement a comprehensive framework that addresses different aspects of electrical power generation and consumption using optimization models and collaboration concepts. Our solution presents a two-pronged approach: managing interaction in real time for the short-term immediate consumption of already allocated resources, and managing operational planning for long-run consumption. More specifically, in real time, we present and implement a model of how to organize a secondary market for peak-demand allocation and describe the properties of the market that guarantee efficient execution and a method for the fair distribution of collaboration gains.
We also propose and implement a primary market for peak demand bounds determination problem with the assumption that participants of this market have the ability to collaborate in real-time. Moreover, proposed in this dissertation is an extensible framework to facilitate C&I entities forming a consortium to collaborate on their electric power supply and demand. The collaborative framework includes the structure of market setting, bids, and market resolution that produces a schedule of how power components are controlled as well as the resulting payment. The market resolution must satisfy a number of desirable properties (i.e., feasibility, Nash equilibrium, Pareto optimality, and equal collaboration profitability) which are formally defined in the dissertation. Furthermore, to support the extensible framework components' library, power components such as utility contract, back-up power generator, renewable resource, and power consuming service are formally modeled. Finally, the validity of this framework is evaluated by a case study using simulated load scenarios to examine the ability of the framework to efficiently operate at the specified time intervals with minimal overhead cost.
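The "equal collaboration profitability" property named above can be illustrated with a minimal sketch under an assumed semantics: each member pays its standalone cost minus an equal share of the coalition's total savings. The payment rule and numbers are illustrative, not taken from the dissertation:

```python
# Sketch of fair distribution of collaboration gains (assumed "equal
# profitability" semantics): the coalition's total savings relative to
# everyone acting alone are split equally among members.

def equal_gain_payments(standalone_costs, coalition_cost):
    """Each member's payment: its standalone cost minus an equal savings share."""
    savings = sum(standalone_costs) - coalition_cost
    share = savings / len(standalone_costs)
    return [cost - share for cost in standalone_costs]

# Three members would pay 100 + 80 + 60 = 240 alone; together they pay 210,
# so each keeps an equal 10 of the 30 in savings.
payments = equal_gain_payments([100.0, 80.0, 60.0], 210.0)
print(payments, sum(payments))
```

Note the budget-balance check: the payments sum exactly to the coalition cost, so the market resolution collects no more and no less than what the coalition owes.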

  1. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during their execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding endpoint management applications, such as debugging and reactive control tools, to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning.
The filtering mechanism represents an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
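The subscription-based filtering idea can be sketched as follows; the class and method names are hypothetical, and a single in-process filter stands in for the paper's distributed arrangement, where filters run close to the event sources so only subscribed events cross the network:

```python
# Minimal sketch of subscription-based event filtering (hypothetical names).
# Only events matching an active subscription's predicate are forwarded,
# which is how filtering reduces monitoring traffic and intrusiveness.

class EventFilter:
    def __init__(self):
        self.subscriptions = []  # list of (predicate, sink) pairs

    def subscribe(self, predicate, sink):
        """Register a predicate; matching events are appended to sink."""
        self.subscriptions.append((predicate, sink))

    def publish(self, event):
        """Forward the event only to sinks whose predicate accepts it."""
        for predicate, sink in self.subscriptions:
            if predicate(event):
                sink.append(event)

# A monitoring endpoint subscribes only to error events; info/debug traffic
# is dropped at the filter instead of being shipped to the endpoint.
errors = []
f = EventFilter()
f.subscribe(lambda e: e["severity"] == "error", errors)
for e in [{"severity": "info"}, {"severity": "error"}, {"severity": "debug"}]:
    f.publish(e)
print(errors)
```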

  2. Distributed Cognition and Embodiment in Text Planning: A Situated Study of Collaborative Writing in the Workplace

    ERIC Educational Resources Information Center

    Clayson, Ashley

    2018-01-01

    Through a study of collaborative writing at a student advocacy nonprofit, this article explores how writers distribute their text planning across tools, artifacts, and gestures, with a particular focus on how embodied representations of texts are present in text planning. Findings indicate that these and other representations generated by the…

  3. Distribution of Feedback among Teacher and Students in Online Collaborative Learning in Small Groups

    ERIC Educational Resources Information Center

    Coll, Cesar; Rochera, Maria Jose; de Gispert, Ines; Diaz-Barriga, Frida

    2013-01-01

    This study explores the characteristics and distribution of the feedback provided by the participants (a teacher and her students) in an activity organized inside a collaborative online learning environment. We analyse 853 submissions made by two groups of graduate students and their teacher (N1 = 629 & N2 = 224) involved in the collaborative…

  4. Managing security risks for inter-organisational information systems: a multiagent collaborative model

    NASA Astrophysics Data System (ADS)

    Feng, Nan; Wu, Harris; Li, Minqiang; Wu, Desheng; Chen, Fuzan; Tian, Jin

    2016-09-01

    Information sharing across organisations is critical to effectively managing the security risks of inter-organisational information systems. Nevertheless, few previous studies on information systems security have focused on inter-organisational information sharing, and none have studied the sharing of inferred beliefs versus factual observations. In this article, a multiagent collaborative model (MACM) is proposed as a practical solution to assess the risk level of each allied organisation's information system and support proactive security treatment by sharing beliefs on event probabilities as well as factual observations. In MACM, for each allied organisation's information system, we design four types of agents: inspection agent, analysis agent, control agent, and communication agent. By sharing soft findings (beliefs) in addition to hard findings (factual observations) among the organisations, each organisation's analysis agent is capable of dynamically predicting its security risk level using a Bayesian network. A real-world implementation illustrates how our model can be used to manage security risks in distributed information systems and that sharing soft findings leads to lower expected loss from security risks.
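A hedged sketch of combining hard and soft findings: a factual observation updates the risk estimate by Bayes' rule, and a peer organisation's shared belief is blended in with a linear opinion pool. This two-step simplification stands in for MACM's full Bayesian network and is an assumption, not the paper's model:

```python
# Illustrative simplification (not the MACM model): combine a hard finding
# (a factual observation, via Bayes' rule) with a soft finding (a peer's
# shared belief, via a linear opinion pool). All probabilities are made up.

def bayes_update(prior, p_obs_given_attack, p_obs_given_safe):
    """Posterior P(attack | observation) after a hard (factual) finding."""
    numerator = prior * p_obs_given_attack
    return numerator / (numerator + (1 - prior) * p_obs_given_safe)

def pool_beliefs(own_belief, peer_belief, peer_weight=0.5):
    """Blend a peer's shared belief (soft finding) with our own estimate."""
    return (1 - peer_weight) * own_belief + peer_weight * peer_belief

# Hard finding (e.g., a logged intrusion alert) raises the risk estimate...
posterior = bayes_update(prior=0.1, p_obs_given_attack=0.9, p_obs_given_safe=0.2)
# ...then an allied organisation's shared belief nudges it further.
print(round(pool_beliefs(posterior, peer_belief=0.6), 3))
```

The point the abstract makes survives the simplification: incorporating peers' beliefs, not just their observations, moves the risk estimate earlier than waiting for corroborating hard evidence.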

  5. Concurrent Mission and Systems Design at NASA Glenn Research Center: The Origins of the COMPASS Team

    NASA Technical Reports Server (NTRS)

    McGuire, Melissa L.; Oleson, Steven R.; Sarver-Verhey, Timothy R.

    2012-01-01

    Established at the NASA Glenn Research Center (GRC) in 2006 to meet the need for rapid mission analysis and multi-disciplinary systems design for in-space and human missions, the Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team is a multidisciplinary, concurrent engineering group whose primary purpose is to perform integrated systems analysis, but it is also capable of designing any system that involves one or more of the disciplines present in the team. The authors were involved in the development of the COMPASS team and its design process, and are continuously making refinements and enhancements. The team was unofficially started in the early 2000s as part of the distributed team known as Team JIMO (Jupiter Icy Moons Orbiter) in support of the multi-center collaborative JIMO spacecraft design during Project Prometheus. This paper documents the origins of a concurrent mission and systems design team at GRC and how it evolved into the COMPASS team, including defining the process, gathering the team and tools, building the facility, and performing studies.

  6. An Integrated and Collaborative Approach for NASA Earth Science Data

    NASA Technical Reports Server (NTRS)

    Murphy, K.; Lowe, D.; Behnke, J.; Ramapriyan, H.; Sofinowski, E.

    2012-01-01

    Earth science research requires coordination and collaboration across multiple disparate science domains. Data systems that support this research are often as disparate as the disciplines that they support. These distinctions can create barriers limiting access to measurements, which could otherwise enable cross-discipline Earth science. NASA's Earth Observing System Data and Information System (EOSDIS) is continuing to bridge gaps between discipline-centric data systems through a coherent and transparent system of systems that offers up-to-date and engaging science-related content, creates an active and immersive science user experience, and encourages the use of EOSDIS earth data and services. The new Earthdata Coherent Web (ECW) project encourages cohesiveness by combining existing websites, data and services into a unified website with a common look and feel, common tools and common processes. It includes cross-linking and cross-referencing across the Earthdata site and NASA's Distributed Active Archive Centers (DAAC), and leverages existing EOSDIS cyber-infrastructure and Web Service technologies to foster re-use and to reduce barriers to discovering Earth science data (http://earthdata.nasa.gov).

  7. Building an International Geosciences Network (i-GEON) for cyberinfrastructure-based Research and Education

    NASA Astrophysics Data System (ADS)

    Seber, D.; Baru, C.

    2007-05-01

    The Geosciences Network (GEON) project is a collaboration among multiple institutions to develop a cyberinfrastructure (CI) platform in support of integrative geoscience research activities. Taking advantage of state-of-the-art information technology resources, GEON researchers are building a cyberinfrastructure designed to enable data sharing, resource discovery, semantic data integration, high-end computation and 4D visualization in an easy-to-use web-based environment. The cyberinfrastructure in GEON is required to support an inherently distributed system, since the scientists, who are users as well as providers of resources, are themselves distributed. International collaborations are a natural extension of GEON, since geoscience research requires strong international collaboration. The goals of the i-GEON activities are to collaborate with international partners and jointly build a cyberinfrastructure for the geosciences to enable collaborative work environments. International partners can participate in GEON efforts, establish GEON nodes at their universities, institutes, or agencies and also contribute data and tools to the network. Via jointly run cyberinfrastructure workshops, the GEON team also introduces students, scientists, and research professionals to the concepts of IT-based geoscience research and education. Currently, joint activities are underway with the Chinese Academy of Sciences in China, the GEO Grid project at AIST in Japan, and the University of Hyderabad in India (where the activity is funded by the Indo-US Science and Technology Forum). Several other potential international partnerships are under consideration. iGEON is open to all international partners who are interested in working towards the goal of data sharing, management and integration via IT-based platforms. Information about GEON and its international activities can be found at http://www.geongrid.org/

  8. Collaborative Systems Testing

    ERIC Educational Resources Information Center

    Pocatilu, Paul; Ciurea, Cristian

    2009-01-01

Collaborative systems are widely used today in various fields of activity. Their complexity is high, and their development involves numerous resources and costs. Testing plays a very important role in the success of collaborative systems. In this paper we present a taxonomy of collaborative systems. The collaborative systems are classified in many…

  9. Generative Leadership: A Case Study of Distributed Leadership and Leadership Sustainability at Two New York City High Schools

    ERIC Educational Resources Information Center

    Lynch, Olivia

    2009-01-01

Generative Leadership is a case study of how two New York City high schools sustain and develop leadership. The study explores their system of school governance, its rationale and beliefs, their leadership structures, and how their collaborative leadership practice ensures that no leader stands alone and that replacement leadership is available at…

  10. An Exploratory Case Study of Information-Sharing and Collaboration within Air Force Supply Chain Management

    DTIC Science & Technology

    2006-03-01

International Journal of Production Economics, Vol. 93-94, pp. 53-99, 2005. -----. “Approximate...Optimization of a Two-level Distribution Inventory System,” International Journal of Production Economics, Vol. 81-81, pp. 545-553, 2003...“Scaling Down Multi-Echelon Inventory Problems,” International Journal of Production Economics, Vol. 71, pp. 255-261, 2001. Axsater, Sven

  11. English as a Second Language Teachers and the Use of New Media: Collaboration and Connection

    ERIC Educational Resources Information Center

    Paraiso, Johnna

    2012-01-01

    The role of the ESL teacher is that of both educator and advocate. Frequently ESL teachers work with a population that presents complex challenges to the school culture at large. The distribution of the ELL population within a school system may require the ESL teacher to fulfill responsibilities at multiple schools, thus maintaining an itinerant…

  12. Constructing Scientific Applications from Heterogeneous Resources

    NASA Technical Reports Server (NTRS)

Schlichting, Richard D.

    1995-01-01

    A new model for high-performance scientific applications in which such applications are implemented as heterogeneous distributed programs or, equivalently, meta-computations, is investigated. The specific focus of this grant was a collaborative effort with researchers at NASA and the University of Toledo to test and improve Schooner, a software interconnection system, and to explore the benefits of increased user interaction with existing scientific applications.

  13. KODAMA and VPC based Framework for Ubiquitous Systems and its Experiment

    NASA Astrophysics Data System (ADS)

    Takahashi, Kenichi; Amamiya, Satoshi; Iwao, Tadashige; Zhong, Guoqiang; Kainuma, Tatsuya; Amamiya, Makoto

    Recently, agent technologies have attracted a lot of interest as an emerging programming paradigm. With such agent technologies, services are provided through collaboration among agents. At the same time, the spread of mobile technologies and communication infrastructures has made it possible to access the network anytime and from anywhere. Using agents and mobile technologies to realize ubiquitous computing systems, we propose a new framework based on KODAMA and VPC. KODAMA provides distributed management mechanisms by using the concept of community and communication infrastructure to deliver messages among agents without agents being aware of the physical network. VPC provides a method of defining peer-to-peer services based on agent communication with policy packages. By merging the characteristics of both KODAMA and VPC functions, we propose a new framework for ubiquitous computing environments. It provides distributed management functions according to the concept of agent communities, agent communications which are abstracted from the physical environment, and agent collaboration with policy packages. Using our new framework, we conducted a large-scale experiment in shopping malls in Nagoya, which sent advertisement e-mails to users' cellular phones according to user location and attributes. The empirical results showed that our new framework worked effectively for sales in shopping malls.

  14. An Examination of the Characteristics Impacting Collaborative Tool Efficacy: The Uncanny Valley of Collaborative Tools

    ERIC Educational Resources Information Center

    Dishaw, Mark T.; Eierman, Michael A.; Iversen, Jacob H.; Philip, George

    2013-01-01

    As collaboration among teams that are distributed in time and space is becoming increasingly important, there is a need to understand the efficacy of tools available to support that collaboration. This study employs a combination of the Technology Acceptance Model (TAM) and the Task-Technology Fit (TTF) model to compare four different technologies…

  15. Virtual Sensor Web Architecture

    NASA Astrophysics Data System (ADS)

    Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.

    2006-12-01

NASA envisions the development of smart sensor webs: intelligent, integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models; iii) event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iv) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.

  16. A modular approach to addressing model design, scale, and parameter estimation issues in distributed hydrological modelling

    USGS Publications Warehouse

    Leavesley, G.H.; Markstrom, S.L.; Restrepo, Pedro J.; Viger, R.J.

    2002-01-01

A modular approach to model design and construction provides a flexible framework in which to focus the multidisciplinary research and operational efforts needed to facilitate the development, selection, and application of the most robust distributed modelling methods. A variety of modular approaches have been developed, but with little consideration for compatibility among systems and concepts. Several systems are proprietary, limiting any user interaction. The US Geological Survey modular modelling system (MMS) is a modular modelling framework that uses an open source software approach to enable all members of the scientific community to address collaboratively the many complex issues associated with the design, development, and application of distributed hydrological and environmental models. Implementation of a common modular concept is not a trivial task. However, it brings the resources of a larger community to bear on the problems of distributed modelling, provides a framework in which to compare alternative modelling approaches objectively, and provides a means of sharing the latest modelling advances. The concepts and components of the MMS are described and an example application of the MMS, in a decision-support system context, is presented to demonstrate current system capabilities. Copyright © 2002 John Wiley and Sons, Ltd.

  17. HydroShare: An online, collaborative environment for the sharing of hydrologic data and models (Invited)

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Arrigo, J.; Hooper, R. P.; Valentine, D. W.; Maidment, D. R.

    2013-12-01

    HydroShare is an online, collaborative system being developed for sharing hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. HydroShare will use the integrated Rule-Oriented Data System (iRODS) to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.

  18. Cardea: Providing Support for Dynamic Resource Access in a Distributed Computing Environment

    NASA Technical Reports Server (NTRS)

    Lepro, Rebekah

    2003-01-01

The environment framing the modern authorization process spans domains of administration, relies on many different authentication sources, and manages complex attributes as part of the authorization process. Cardea facilitates dynamic access control within this environment as a central function of an interoperable authorization framework. The system departs from the traditional authorization model by separating the authentication and authorization processes, distributing the responsibility for authorization data, and allowing collaborating domains to retain control over their implementation mechanisms. Critical features of the system architecture and its handling of the authorization process differentiate the system from existing authorization components by addressing common needs not adequately addressed by existing systems. Continuing system research seeks to enhance the implementation of the current authorization model employed in Cardea, increase the robustness of current features, further the framework for establishing trust, and promote interoperability with existing security mechanisms.
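The separation the abstract describes, authentication yielding an identity assertion that a separate, domain-controlled authorization step then evaluates, can be sketched in a few lines. Everything below (function names, the user store, the policy shape) is a hypothetical illustration of the general pattern, not Cardea's actual interfaces.

```python
# Hypothetical sketch: authentication produces an identity assertion;
# a separate authorization step evaluates the subject's attributes
# against a domain-local policy. Names are illustrative only.

def authenticate(credentials, user_db):
    """Return an identity assertion on success, None on failure."""
    user = user_db.get(credentials["name"])
    if user is not None and user["secret"] == credentials["secret"]:
        return {"subject": credentials["name"],
                "attributes": set(user["attributes"])}
    return None

def authorize(assertion, policy):
    """Each collaborating domain applies its own policy to the assertion."""
    if assertion is None:
        return False
    return policy["required_attributes"] <= assertion["attributes"]

users = {"alice": {"secret": "s3cret", "attributes": ["staff", "grid-user"]}}
policy = {"required_attributes": {"grid-user"}}

granted = authorize(authenticate({"name": "alice", "secret": "s3cret"}, users), policy)
denied = authorize(authenticate({"name": "alice", "secret": "wrong"}, users), policy)
```

Because the two steps only share the assertion, each domain can swap its own authentication source or policy engine without touching the other side, which is the interoperability point the abstract emphasizes.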

  19. Full Scale Drinking Water System Decontamination at the Water Security Test Bed.

    PubMed

    Szabo, Jeffrey; Hall, John; Reese, Steve; Goodrich, Jim; Panguluri, Sri; Meiners, Greg; Ernst, Hiba

    2018-03-20

The EPA's Water Security Test Bed (WSTB) facility is a full-scale representation of a drinking water distribution system. In collaboration with the Idaho National Laboratory (INL), EPA designed the WSTB facility to support full-scale evaluations of water infrastructure decontamination, real-time sensors, mobile water treatment systems, and decontamination of premise plumbing and appliances. The EPA research focused on decontamination of 1) Bacillus globigii (BG) spores, a non-pathogenic surrogate for Bacillus anthracis, and 2) Bakken crude oil. Flushing and chlorination effectively removed most BG spores from the bulk water, but BG spores still remained on the pipe wall coupons. Soluble components of Bakken crude oil were removed by flushing, although oil components persisted in the dishwasher and refrigerator water dispenser. Using this full-scale distribution system allows EPA to 1) test contaminants without any human health or ecological risk and 2) inform water systems on effective methodologies for responding to possible contamination incidents.

  20. NASA's EOSDIS Cumulus: Ingesting, Archiving, Managing, and Distributing Earth Science Data from the Commercial Cloud

    NASA Technical Reports Server (NTRS)

    Baynes, Katie; Ramachandran, Rahul; Pilone, Dan; Quinn, Patrick; Gilman, Jason; Schuler, Ian; Jazayeri, Alireza

    2017-01-01

NASA's Earth Observing System Data and Information System (EOSDIS) has been working towards a vision of a cloud-based, highly flexible ingest, archive, management, and distribution system for its ever-growing and evolving data holdings. This system, Cumulus, is emerging from its prototyping stages and is poised to make a huge impact on how NASA manages and disseminates its Earth science data. This talk will outline the motivation for this work, present the achievements and hurdles of the past 18 months, and chart a course for the future expansion of Cumulus. We will explore not just the technical but also the socio-technical challenges that we face in evolving a system of this magnitude into the cloud, and how we are rising to meet those challenges through open collaboration and intentional stakeholder engagement.

  1. Collaboration and decision making tools for mobile groups

    NASA Astrophysics Data System (ADS)

    Abrahamyan, Suren; Balyan, Serob; Ter-Minasyan, Harutyun; Degtyarev, Alexander

    2017-12-01

Nowadays the use of distributed collaboration tools is widespread in many areas of human activity, but a lack of mobility and dependence on specific equipment create difficulties and decelerate the development and integration of such technologies. Mobile technologies allow individuals to interact with each other without the need for traditional office spaces and regardless of location. Hence, realizing special infrastructures on mobile platforms, with the help of ad-hoc wireless local networks, could eliminate hardware attachment and also be useful in terms of scientific approach. Implementations of tools based on mobile infrastructures range from basic internet messengers to complex software for online collaboration in large-scale workgroups. Despite the growth of mobile infrastructures, applied distributed solutions for group decision-making and e-collaboration are not common. In this article we propose a software complex for real-time collaboration and decision-making based on mobile devices, describe its architecture, and evaluate its performance.

  2. Virtual Teaming: Faculty Collaboration in Online Spaces

    ERIC Educational Resources Information Center

    Almjeld, Jen; Rybas, Natalia; Rybas, Sergey

    2013-01-01

    This collaborative article chronicles the experiences of three faculty at three universities utilizing wiki technology to transform themselves and their students into a virtual team. Rooted in workplace approaches to distributed teaming, the project expands notions of classroom collaboration to include planning, administration, and assessment of a…

  3. Services supporting collaborative alignment of engineering networks

    NASA Astrophysics Data System (ADS)

    Jansson, Kim; Uoti, Mikko; Karvonen, Iris

    2015-08-01

    Large-scale facilities such as power plants, process factories, ships and communication infrastructures are often engineered and delivered through geographically distributed operations. The competencies required are usually distributed across several contributing organisations. In these complicated projects, it is of key importance that all partners work coherently towards a common goal. VTT and a number of industrial organisations in the marine sector have participated in a national collaborative research programme addressing these needs. The main output of this programme was development of the Innovation and Engineering Maturity Model for Marine-Industry Networks. The recently completed European Union Framework Programme 7 project COIN developed innovative solutions and software services for enterprise collaboration and enterprise interoperability. One area of focus in that work was services for collaborative project management. This article first addresses a number of central underlying research themes and previous research results that have influenced the development work mentioned above. This article presents two approaches for the development of services that support distributed engineering work. Experience from use of the services is analysed, and potential for development is identified. This article concludes with a proposal for consolidation of the two above-mentioned methodologies. This article outlines the characteristics and requirements of future services supporting collaborative alignment of engineering networks.

  4. Research on mixed network architecture collaborative application model

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

When facing the complex requirements of city development, ever-growing spatial data, rapidly developing geographical business, and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (whether Client/Server or Browser/Server model) does not support this well. Collaborative application is one good resolution. Collaborative applications have four main problems to resolve: consistency and co-edit conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward, based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, initiative, and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation, and they bring new methods for cooperation and for access to spatial data. The multi-level cache holds a part of the full data; it reduces the network load and improves the access and handling of spatial data, especially when editing. With agent technology, we make full use of agents' intelligence for managing the cache and for cooperative editing, which brings a new method for distributed cooperation and improves efficiency.
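The multi-level cache idea, a small fast cache backed by a larger one, with misses falling through to the remote spatial-data server, can be sketched as follows. All names are hypothetical; this illustrates the general technique, not the AMCM implementation.

```python
from collections import OrderedDict

class MultiLevelCache:
    """Two-level cache: a small fast L1 (LRU) backed by a larger L2,
    falling back to a loader (the remote spatial-data server) on a miss."""

    def __init__(self, loader, l1_size=2, l2_size=8):
        self.loader = loader
        self.l1 = OrderedDict()
        self.l2 = OrderedDict()
        self.l1_size, self.l2_size = l1_size, l2_size

    def get(self, key):
        if key in self.l1:                  # L1 hit: refresh recency
            self.l1.move_to_end(key)
            return self.l1[key]
        if key in self.l2:                  # L2 hit: promote to L1
            value = self.l2.pop(key)
        else:                               # full miss: fetch from server
            value = self.loader(key)
        self.l1[key] = value
        if len(self.l1) > self.l1_size:     # evict oldest L1 entry into L2
            old_key, old_val = self.l1.popitem(last=False)
            self.l2[old_key] = old_val
            if len(self.l2) > self.l2_size:
                self.l2.popitem(last=False)
        return value

fetches = []                                # record every remote fetch
cache = MultiLevelCache(lambda k: fetches.append(k) or f"tile-{k}")
cache.get("a"); cache.get("b"); cache.get("a")   # second "a" is an L1 hit
```

The network-load reduction the abstract claims shows up directly: repeated edits of the same spatial tiles hit L1 or L2 and never touch the server.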

  5. A common body of care: the ethics and politics of teamwork in the operating theater are inseparable.

    PubMed

    Bleakley, Alan

    2006-06-01

    In the operating theater, the micro-politics of practice, such as interpersonal communications, are central to patient safety and are intimately tied with values as well as knowledge and skills. Team communication is a shared and distributed work activity. In an era of "professionalism," that must now encompass "interprofessionalism," a virtue ethics framework is often invoked to inform practice choices, with reference to phronesis or practical wisdom. However, such a framework is typically cast in individualistic terms as a character trait, rather than in terms of a distributed quality that may be constituted through intentionally collaborative practice, or is an emerging property of a complex, adaptive system. A virtue ethics approach is a necessary but not sufficient condition for a collaborative bioethics within the operating theater. There is also an ecological imperative-the patient's entry into the household (oikos) of the operating theater invokes the need for "hospitality" as a form of ethical practice.

  6. Collaborative Localization Algorithms for Wireless Sensor Networks with Reduced Localization Error

    PubMed Central

    Sahoo, Prasan Kumar; Hwang, I-Shyan

    2011-01-01

Localization is an important research issue in Wireless Sensor Networks (WSNs). Though the Global Positioning System (GPS) can be used to locate the position of the sensors, it is unfortunately limited to outdoor applications and is costly and power consuming. In order to find the location of sensor nodes without the help of GPS, collaboration among nodes is essential so that localization can be accomplished efficiently. In this paper, novel localization algorithms are proposed to find the possible location information of the normal nodes in a collaborative manner for an outdoor environment, with the help of a few beacon and anchor nodes. In our localization scheme, at most three beacon nodes need to collaborate to find the accurate location information of any normal node. Besides, analytical methods are designed to calculate and reduce the localization error using a probability distribution function. Performance evaluation of our algorithm shows that there is a tradeoff between the number of deployed beacon nodes and the localization error, and that the average localization time of the network increases with the number of normal nodes deployed over a region. PMID:22163738
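The claim that at most three beacons suffice reflects standard 2-D trilateration: subtracting the circle equations (x-xi)² + (y-yi)² = di² pairwise cancels the quadratic terms and leaves a 2×2 linear system in the unknown position. A minimal sketch, with a hypothetical function name and idealized noise-free distances (the paper's actual algorithms also model and reduce ranging error):

```python
def trilaterate(b1, b2, b3, d1, d2, d3):
    """Locate a node from three beacon positions and measured distances
    by solving the linear system obtained from pairwise subtraction of
    the circle equations (x - xi)^2 + (y - yi)^2 = di^2."""
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("beacons are collinear; position is ambiguous")
    x = (c1 * a22 - c2 * a12) / det
    y = (a11 * c2 - a21 * c1) / det
    return x, y

# Node at (3, 4); beacons at the origin, (10, 0), and (0, 10).
est = trilaterate((0.0, 0.0), (10.0, 0.0), (0.0, 10.0),
                  5.0, 65 ** 0.5, 45 ** 0.5)
# est recovers (3.0, 4.0)
```

The collinearity check also shows why beacon placement matters: three beacons on a line give a singular system, which is one reason such schemes trade off beacon count against localization error.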

  7. An overview of Korean patients with mucopolysaccharidosis and collaboration through the Asia Pacific MPS Network.

    PubMed

    Cho, Sung Yoon; Sohn, Young Bae; Jin, Dong-Kyu

    2014-08-01

    Mucopolysaccharidosis (MPS) is a constellation of disorders characterized by the accumulation of mucopolysaccharides in tissues and organs. This accumulation results in the deterioration and degeneration of multiple organs. This paper describes the general distribution of types of MPS in patients, their clinical characteristics and genotypes, the development of animal studies and preclinical studies, enzyme replacement therapy in South Korea, and the development of idursulfase beta and clinical trials on idursulfase beta in South Korea. In addition, this paper discusses academic collaboration among specialists in MPS care in the Asia-Pacific region, which includes Japan, Taiwan, Malaysia, and South Korea, through an organization called the Asia-Pacific MPS Network (APMN). The Asia-Pacific MPS Registry, an electronic remote data entry system, has been developed by key doctors in the APMN. Rare diseases require international cooperation and collaboration to elucidate their mechanisms and carry out clinical trials; therefore, an organization such as the APMN is required. Furthermore, international collaboration among Asian countries and countries around the world will be of utmost importance in the future.

  8. An overview of Korean patients with mucopolysaccharidosis and collaboration through the Asia Pacific MPS Network

    PubMed Central

    Cho, Sung Yoon; Sohn, Young Bae; Jin, Dong-Kyu

    2014-01-01

    Summary Mucopolysaccharidosis (MPS) is a constellation of disorders characterized by the accumulation of mucopolysaccharides in tissues and organs. This accumulation results in the deterioration and degeneration of multiple organs. This paper describes the general distribution of types of MPS in patients, their clinical characteristics and genotypes, the development of animal studies and preclinical studies, enzyme replacement therapy in South Korea, and the development of idursulfase beta and clinical trials on idursulfase beta in South Korea. In addition, this paper discusses academic collaboration among specialists in MPS care in the Asia-Pacific region, which includes Japan, Taiwan, Malaysia, and South Korea, through an organization called the Asia-Pacific MPS Network (APMN). The Asia-Pacific MPS Registry, an electronic remote data entry system, has been developed by key doctors in the APMN. Rare diseases require international cooperation and collaboration to elucidate their mechanisms and carry out clinical trials; therefore, an organization such as the APMN is required. Furthermore, international collaboration among Asian countries and countries around the world will be of utmost importance in the future. PMID:25364648

  9. Supporting Trust in Globally Distributed Software Teams: The Impact of Visualized Collaborative Traces on Perceived Trustworthiness

    ERIC Educational Resources Information Center

    Trainer, Erik Harrison

    2012-01-01

    Trust plays an important role in collaborations because it creates an environment in which people can openly exchange ideas and information with one another and engineer innovative solutions together with less perceived risk. The rise in globally distributed software development has created an environment in which workers are likely to have less…

  10. Post-Web 2.0 Pedagogy: From Student-Generated Content to International Co-Production Enabled by Mobile Social Media

    ERIC Educational Resources Information Center

    Cochrane, Thomas; Antonczak, Laurent; Wagner, Daniel

    2013-01-01

    The advent of web 2.0 has enabled new forms of collaboration centred upon user-generated content, however, mobile social media is enabling a new wave of social collaboration. Mobile devices have disrupted and reinvented traditional media markets and distribution: iTunes, Google Play and Amazon now dominate music industry distribution channels,…

  11. Traditional anthropology and geographical information systems in the collaborative study of Cassava in Africa

    NASA Technical Reports Server (NTRS)

    Romanoff, Steven

    1991-01-01

Cross-cultural, village-level, and farmer surveys have been used with a geographical information system to describe the distribution and relative importance of cassava (manioc, yuca, Manihot esculenta) in its cultural, economic, and ecological contexts. The study presents examples of data management for mapping, sample selection, cross-tabulation of characteristics, and combination of data types for indices and hypothesis testing. The methods used are reviewed, and some of the main conclusions of the study are presented.

  12. Design of an image-distribution service from a clinical PACS

    NASA Astrophysics Data System (ADS)

    Gehring, Dale G.; Persons, Kenneth R.; Rothman, Melvyn L.; Felmlee, Joel P.; Gerhart, D. J.; Hangiandreou, Nicholas J.; Reardon, Frank J.; Shirk, M.; Forbes, Glenn S.; Williamson, Byrn, Jr.

    1994-05-01

    A PACS system has been developed through a multi-phase collaboration between the Mayo Clinic and IBM/Rochester. The current system has been fully integrated into the clinical practice of the Radiology Department for the primary purpose of digital image archival, retrieval, and networked workstation review. Work currently in progress includes the design and implementation of a gateway device for providing digital image data to third-party workstations, laser printers, and other devices, for users both within and outside of the Radiology Department.

  13. A scenario for a web-based radiation treatment planning structure: A new tool for quality assurance procedure?

    PubMed

    Kouloulias, V E; Ntasis, E; Poortmans, Ph; Maniatis, T A; Nikita, K S

    2003-01-01

The desire to develop web-based platforms for remote collaboration among physicians and technologists is becoming a great challenge. In this paper we describe a web-based radiotherapy treatment planning (WBRTP) system to facilitate decentralized radiotherapy services by allowing remote treatment planning and quality assurance (QA) of treatment delivery. Significant prerequisites are digital storage of relevant data as well as an efficient and reliable telecommunication system between collaborating units. The WBRTP system includes video conferencing, display of medical images (CT scans, dose distributions, etc.), replication of selected data from a common database, remote treatment planning, evaluation of treatment technique, and follow-up of the treated patients. Moreover, the system features real-time remote operations in terms of tele-consulting, such as target volume delineation performed by a team of experts at different and distant units. An appraisal of its possibilities for quality assurance in radiotherapy is also discussed. In conclusion, a WBRTP system would not only be a medium for communication between experts in oncology but mainly a tool for improving QA in radiotherapy.

  14. Coordinating the Commons: Diversity & Dynamics in Open Collaborations

    ERIC Educational Resources Information Center

    Morgan, Jonathan T.

    2013-01-01

    The success of Wikipedia demonstrates that open collaboration can be an effective model for organizing geographically-distributed volunteers to perform complex, sustained work at a massive scale. However, Wikipedia's history also demonstrates some of the challenges that large, long-term open collaborations face: the core community of Wikipedia…

  15. An Evaluation of Internet-Based CAD Collaboration Tools

    ERIC Educational Resources Information Center

    Smith, Shana Shiang-Fong

    2004-01-01

    Due to the now widespread use of the Internet, most companies now require computer aided design (CAD) tools that support distributed collaborative design on the Internet. Such CAD tools should enable designers to share product models, as well as related data, from geographically distant locations. However, integrated collaborative design…

  16. Data Democratization - Promoting Real-Time Data Sharing and Use throughout the Americas

    NASA Astrophysics Data System (ADS)

    Yoksas, T. C.

    2006-05-01

The Unidata Program Center (Unidata) of the University Corporation for Atmospheric Research (UCAR) is actively involved in international collaborations whose goals are real-time sharing of hydro-meteorological data by institutions of higher education throughout the Americas; the distribution of analysis and visualization tools for those data; and the establishment of server sites that provide easy-to-use, programmatic remote access to a wide variety of datasets. Data sharing capabilities are being provided by Unidata's Internet Data Distribution (IDD) system, a community-based effort that has been the primary source of real-time meteorological data for approximately 150 US universities for over a decade. A collaboration among Unidata, Brazil's Centro de Previsão de Tempo e Estudos Climáticos (CPTEC), the Universidade Federal do Rio de Janeiro (UFRJ), and the Universidade de São Paulo (USP) has resulted in the creation of a Brazilian peer of the North American IDD, the IDD-Brasil. Collaboration among Unidata, the Universidad de Costa Rica (UCR), and the University of Puerto Rico at Mayaguez (UPRM) seeks to extend IDD data sharing throughout Central America and the Caribbean in an IDD-Caribe. Collaboration between Unidata and the Caribbean Institute for Meteorology and Hydrology (CIMH), a World Meteorological Organization (WMO) Regional Meteorological Training Center (RMTC) based in Barbados, has been launched to investigate the possibility of expanding IDD data sharing throughout Caribbean RMTC member countries. Most recently, efforts aimed at creating a data sharing network for researchers on the Antarctic continent have resulted in the establishment of the Antarctic-IDD.
Data analysis and visualization capabilities are provided by Unidata through a suite of freely available applications: the National Centers for Environmental Prediction (NCEP) GEneral Meteorology PAcKage (GEMPAK); the Unidata Integrated Data Viewer (IDV); and the University of Wisconsin Space Science and Engineering Center (SSEC) Man-computer Interactive Data Access System (McIDAS). Remote data access capabilities are provided by Unidata's Thematic Real-time Environmental Distributed Data Services (THREDDS) servers (which incorporate Open-source Project for a Network Data Access Protocol (OPeNDAP) data services) and the Abstract Data Distribution Environment (ADDE) of McIDAS. It is envisioned that the data sharing capabilities available in the IDD, IDD-Brasil, and IDD-Caribe, the remote data access capabilities available in THREDDS and ADDE, and the analysis capabilities available in GEMPAK, the IDV, and McIDAS will help foster new collaborations among prominent university educators and researchers, national meteorological agencies, and WMO Regional Meteorological Training Centers throughout North, Central, and South America.

  17. High rate information systems - Architectural trends in support of the interdisciplinary investigator

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Preheim, Larry E.

    1990-01-01

    Data systems requirements in the Earth Observing System (EOS) and Space Station Freedom (SSF) eras indicate increasing data volume, increased discipline interplay, higher complexity, and broader data integration and interpretation. A response to the needs of the interdisciplinary investigator is proposed, considering the increasing complexity and rising costs of scientific investigation. The EOS Data Information System, conceived as a widely distributed system with reliable communication links between central processing and the science user community, is described. Details are provided on information architecture, system models, intelligent data management of large complex databases, and standards for archiving ancillary data, using a research library, a laboratory, and collaboration services.

  18. Directed networks' different link formation mechanisms causing degree distribution distinction

    NASA Astrophysics Data System (ADS)

    Behfar, Stefan Kambiz; Turkina, Ekaterina; Cohendet, Patrick; Burger-Helmchen, Thierry

    2016-11-01

    Within undirected networks, scientists have shown much interest in presenting power-law features. For instance, Barabási and Albert (1999) claimed that a common property of many large networks is that vertex connectivity follows a scale-free power-law distribution, and in another study Barabási et al. (2002) showed power-law evolution in the social network of scientific collaboration. At the same time, Jiang et al. (2011) discussed deviation from power-law distribution; others indicated that size effect (Bagrow et al., 2008), information filtering mechanism (Mossa et al., 2002), and birth and death process (Shi et al., 2005) could account for this deviation. Within directed networks, many authors have considered that outlinks follow a similar mechanism of creation as inlinks (Faloutsos et al., 1999; Krapivsky et al., 2001; Tanimoto, 2009), with link creation rate being a linear function of node degree, resulting in a power-law shape for both indegree and outdegree distributions. Some other authors have assumed that directed networks, such as scientific collaboration or citation networks, behave as undirected, resulting in a power-law degree distribution accordingly (Barabási et al., 2002). At the same time, we claim that (1) outlinks feature different degree distributions from inlinks, with different link formation mechanisms causing the distribution distinction, and (2) the in/outdegree distribution distinction holds for different levels of system decomposition; therefore this distribution distinction is a property of directed networks. First, we emphasize in/outlink formation mechanisms as causal factors for the distinction between indegree and outdegree distributions (a distinction already noticed in Barker et al. (2010) and Baxter et al. (2006)) within a sample network of OSS projects as well as the Java software corpus as a network. 
Second, we analyze whether this distribution distinction holds for different levels of system decomposition: open-source-software (OSS) project-project dependency within a cluster, package-package dependency within a project, and class-class dependency within a package. We conclude that indegree and outdegree dependencies do not lead to similar types of degree distributions: indegree dependencies follow an overall power-law distribution (or a power law with flat top or exponential cut-off in some cases), while outdegree dependencies do not follow a heavy-tailed distribution.
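The indegree/outdegree contrast described in this record can be illustrated with a minimal sketch: split a directed edge list into indegree and outdegree sequences and estimate a power-law exponent for each with the standard maximum-likelihood formula (a continuous approximation in the style of Clauset, Shalizi and Newman). The toy graph and all names below are hypothetical, not the paper's OSS data.

```python
from collections import Counter
import math

def degree_sequences(edges):
    """Split a directed edge list into indegree and outdegree sequences."""
    indeg, outdeg = Counter(), Counter()
    for src, dst in edges:
        outdeg[src] += 1
        indeg[dst] += 1
    return list(indeg.values()), list(outdeg.values())

def powerlaw_alpha(degrees, xmin=1):
    """Maximum-likelihood power-law exponent estimate (continuous
    approximation in the style of Clauset, Shalizi and Newman)."""
    xs = [d for d in degrees if d >= xmin]
    return 1 + len(xs) / sum(math.log(x / (xmin - 0.5)) for x in xs)

# Toy directed graph: node "a" both attracts and emits many links.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"),
         ("c", "d"), ("d", "a"), ("e", "a"), ("f", "a")]
indeg, outdeg = degree_sequences(edges)
print(powerlaw_alpha(indeg), powerlaw_alpha(outdeg))
```

Comparing the two fitted exponents (and, more importantly, goodness-of-fit tests not shown here) is the kind of analysis that distinguishes indegree from outdegree behavior.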

  19. Distributed XQuery-Based Integration and Visualization of Multimodality Brain Mapping Data

    PubMed Central

    Detwiler, Landon T.; Suciu, Dan; Franklin, Joshua D.; Moore, Eider B.; Poliakov, Andrew V.; Lee, Eunjung S.; Corina, David P.; Ojemann, George A.; Brinkley, James F.

    2008-01-01

    This paper addresses the need for relatively small groups of collaborating investigators to integrate distributed and heterogeneous data about the brain. Although various national efforts facilitate large-scale data sharing, these approaches are generally too “heavyweight” for individual or small groups of investigators, with the result that most data sharing among collaborators continues to be ad hoc. Our approach to this problem is to create a “lightweight” distributed query architecture, in which data sources are accessible via web services that accept arbitrary query languages but return XML results. A Distributed XQuery Processor (DXQP) accepts distributed XQueries in which subqueries are shipped to the remote data sources to be executed, with the resulting XML integrated by DXQP. A web-based application called DXBrain accesses DXQP, allowing a user to create, save and execute distributed XQueries, and to view the results in various formats including a 3-D brain visualization. Example results are presented using distributed brain mapping data sources obtained in studies of language organization in the brain, but any other XML source could be included. The advantage of this approach is that it is very easy to add and query a new source, the tradeoff being that the user needs to understand XQuery and the schemata of the underlying sources. For small numbers of known sources this burden is not onerous for a knowledgeable user, leading to the conclusion that the system helps to fill the gap between ad hoc local methods and large scale but complex national data sharing efforts. PMID:19198662
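The fan-out/merge idea behind DXQP, shipping a subquery to each remote source and integrating the returned XML, can be sketched as follows. The sources here are in-process stubs, and the source names, subqueries, and merge format are illustrative, not DXQP's actual protocol.

```python
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

def stub_source(name):
    """Stand-in for a remote web service that answers a subquery with XML."""
    def run(subquery):
        el = ET.Element("result", source=name, query=subquery)
        return ET.tostring(el, encoding="unicode")
    return run

def distributed_query(sources, subqueries):
    """Ship each (source, subquery) pair out concurrently, then wrap the
    returned XML fragments in a single integrated result document."""
    with ThreadPoolExecutor() as pool:
        fragments = list(pool.map(lambda sq: sources[sq[0]](sq[1]), subqueries))
    root = ET.Element("integrated")
    for frag in fragments:
        root.append(ET.fromstring(frag))
    return ET.tostring(root, encoding="unicode")

sources = {"atlas": stub_source("atlas"), "cortex": stub_source("cortex")}
print(distributed_query(sources, [("atlas", "//site"), ("cortex", "//patient")]))
```

In the real system each stub would be an HTTP call to a web service accepting an arbitrary query language, as the record describes.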

  20. Security and Policy for Group Collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ian Foster; Carl Kesselman

    2006-07-31

    “Security and Policy for Group Collaboration” was a Collaboratory Middleware research project aimed at providing the fundamental security and policy infrastructure required to support the creation and operation of distributed, computationally enabled collaborations. The project developed infrastructure that exploits innovative new techniques to address challenging issues of scale, dynamics, distribution, and role. To greatly reduce the cost of adding new members to a collaboration, we developed and evaluated new techniques for creating and managing credentials based on public key certificates, including support for online certificate generation, online certificate repositories, and support for multiple certificate authorities. To facilitate the integration of new resources into a collaboration, we significantly improved the integration of local security environments. To make it easy to create and change the role and associated privileges of both resources and participants of a collaboration, we developed community-wide authorization services that provide distributed, scalable means for specifying policy. These services make possible the delegation of capability from the community to a specific user, class of users, or resource. Finally, we instantiated our research results into a framework that makes them usable by a wide range of collaborative tools. The resulting mechanisms and software have been widely adopted within DOE projects and in many other scientific projects. The widespread adoption of our Globus Toolkit technology has provided, and continues to provide, a natural dissemination and technology transfer vehicle for our results.

  1. Electric Vehicles Charging Scheduling Strategy Considering the Uncertainty of Photovoltaic Output

    NASA Astrophysics Data System (ADS)

    Wei, Xiangxiang; Su, Su; Yue, Yunli; Wang, Wei; He, Luobin; Li, Hao; Ota, Yutaka

    2017-05-01

    The rapid development of electric vehicles (EVs) and distributed generation brings new challenges to the secure and economic operation of the power system, so collaborative research on EVs and distributed generation has important significance for the distribution network. Against this background, an EV charging scheduling strategy considering the uncertainty of photovoltaic (PV) output is proposed. The characteristics of EV charging are analysed first, and a PV output prediction method based on a PV database is then proposed. On this basis, an EV charging scheduling strategy is proposed with the goal of satisfying EV users' charging willingness and decreasing power loss in the distribution network. The case study proves that the proposed PV output prediction method can predict PV output accurately and that the EV charging scheduling strategy can reduce power loss and stabilize load fluctuation in the distribution network.
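The scheduling goal this record describes, meeting charging demand while flattening load and reducing loss, can be illustrated with a simple greedy "valley-filling" heuristic: repeatedly place each vehicle's energy into the lowest-load hour, subject to a per-hour charging rate cap. This is a common baseline, not the authors' optimization model, and the loads, demands, and rate cap are hypothetical.

```python
def schedule_charging(base_load, ev_demands, max_rate):
    """Greedy valley-filling: assign each EV's energy demand to the hours
    with the lowest running total load, capped at max_rate per hour."""
    load = list(base_load)
    plans = []
    for demand in ev_demands:
        plan = [0.0] * len(load)
        remaining = demand
        while remaining > 1e-9:
            # pick the hour with the lowest total load that still has headroom
            candidates = [h for h in range(len(load)) if plan[h] < max_rate]
            h = min(candidates, key=lambda i: load[i])
            step = min(max_rate - plan[h], remaining)
            plan[h] += step
            load[h] += step
            remaining -= step
        plans.append(plan)
    return plans, load

base = [50, 40, 30, 30, 45, 60]   # hypothetical feeder load per hour (kW)
plans, total = schedule_charging(base, ev_demands=[20, 10], max_rate=10)
print(total)
```

The charging lands in the overnight "valley" hours, which is the load-flattening effect the record's strategy targets; the paper additionally folds in PV output uncertainty, which this sketch omits.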

  2. Traceability System for Agricultural Products Based on RFID and Mobile Technology

    NASA Astrophysics Data System (ADS)

    Sugahara, Koji

    In agriculture, it is required to establish and integrate food traceability systems and risk management systems in order to improve food safety across the entire food chain. An integrated traceability system for agricultural products was developed, based on innovative RFID and mobile computing technology. In order to identify individual products efficiently during the distribution process, small RFID tags with unique IDs and handheld RFID readers were applied. During the distribution process, the RFID tags are checked using the readers, and transit records of the products are stored to the database via wireless LAN. Regarding agricultural production, recent issues of pesticide misuse affect consumer confidence in food safety. The Navigation System for Appropriate Pesticide Use (Nouyaku-navi) was developed, which is accessible in the fields via Internet cell phones. Based on it, agricultural risk management systems have been developed. These systems collaborate with traceability systems and can be applied to process control and risk management in agriculture.

  3. Using RDF and Git to Realize a Collaborative Metadata Repository.

    PubMed

    Stöhr, Mark R; Majeed, Raphael W; Günther, Andreas

    2018-01-01

    The German Center for Lung Research (DZL) is a research network with the aim of researching respiratory diseases. The participating study sites' register data differs in terms of software and coding system as well as data field coverage. To perform meaningful consortium-wide queries through one single interface, a uniform conceptual structure is required covering the DZL common data elements. No single existing terminology includes all our concepts. Potential candidates such as LOINC and SNOMED only cover specific subject areas or are not granular enough for our needs. To achieve a broadly accepted and complete ontology, we developed a platform for collaborative metadata management. The DZL data management group formulated detailed requirements regarding the metadata repository and the user interfaces for metadata editing. Our solution builds upon existing standard technologies allowing us to meet those requirements. Its key parts are RDF and the distributed version control system Git. We developed a software system to publish updated metadata automatically and immediately after performing validation tests for completeness and consistency.
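The automated completeness validation this record mentions can be sketched over a toy, in-memory set of RDF-like triples: check that every data element carries the predicates the repository requires before publishing. The required predicates and element names below are illustrative, not the DZL schema.

```python
# Predicates every metadata element must carry (hypothetical requirement set).
REQUIRED = {"rdfs:label", "dzl:dataType", "dzl:unit"}

def validate_completeness(triples):
    """Return {subject: missing_predicates} for every incomplete element."""
    by_subject = {}
    for s, p, o in triples:
        by_subject.setdefault(s, set()).add(p)
    return {s: REQUIRED - preds
            for s, preds in by_subject.items()
            if REQUIRED - preds}

triples = [
    ("ex:fev1", "rdfs:label", "Forced expiratory volume"),
    ("ex:fev1", "dzl:dataType", "xsd:decimal"),
    ("ex:fev1", "dzl:unit", "litre"),
    ("ex:smoker", "rdfs:label", "Smoking status"),  # missing type and unit
]
print(validate_completeness(triples))
```

In the system described, a check of this kind would run automatically on each Git commit to the metadata repository, rejecting incomplete or inconsistent updates before publication.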

  4. Bringing Federated Identity to Grid Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teheran, Jeny

    The Fermi National Accelerator Laboratory (FNAL) is facing the challenge of providing scientific data access and grid submission to scientific collaborations that span the globe but are hosted at FNAL. Users in these collaborations are currently required to register as an FNAL user and obtain FNAL credentials to access grid resources to perform their scientific computations. These requirements burden researchers with managing additional authentication credentials, and put additional load on FNAL for managing user identities. Our design integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and MyProxy with the FNAL grid submission system to provide secure access for users from diverse experiments and collaborations without requiring each user to have authentication credentials from FNAL. The design automates the handling of certificates so users do not need to manage them manually. Although the initial implementation is for FNAL's grid submission system, the design and the core of the implementation are general and could be applied to other distributed computing systems.

  5. Wisconsin builds a distributed resources collaborative: Looking for local solutions that work

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, C.

    I'd like to tell you how I got involved in the DR Collaborative and why I'm here. John Nesbitt asked me to come, to be the public advocate, the bumblebee on the EPRI body politic. What follows is my own thought, not that of John or my fellow collaborators, who may or may not agree with me. How did I come to know John Nesbitt? In August 1991, I found that some Wisconsin utilities intended to run a 138 kV transmission line across my property, along the driveway where my kids ride their bikes, along the high ground where we walk to escape the mosquitoes in the summer, where we ski cross-country and admire the snowy view in the winter. As a result, I became intensely interested in the electric power business. One thing led to another. I got on the Board of Wisconsin Demand-Side Demonstrations (WDSD), representing a group called the Citizens' Utility Board (CUB). I met Mr. Nesbitt. We shared an interest in distributed resources (DR). Along with some others, we conspired to initiate the Targeted Area Planning (TAP) collaborative. TAP is what we call DR in Wisconsin. I began to talk in acronyms. The simple truth is, I detest transmission lines. And, since transmission lines are invariably hooked up to central generation, I have no love for big power plants either. That whole system approach looks excessive and outdated to me, a vestige of the nineteenth century, Jules Verne without the romance. My opinion is, who needs it? I am aware that my opinion is not shared by everyone. I grant you that transmission lines might be a mite more acceptable if the thousands of landowners like me who presently subsidize their existence were receiving compensation, say an annual commodity transfer fee, that reflected some small portion of the value of transmission lines in the present system. That is certainly not the case, and if it were, the present system, when and if deregulated, would price itself out of existence all the more quickly.

  6. New tools: potential medical applications of data from new and old environmental satellites.

    PubMed

    Huh, O K; Malone, J B

    2001-04-27

    The last 40 years, beginning with the first TIROS (television infrared observational satellite) launched on 1 April 1960, have seen an explosion of earth environmental satellite systems and their capabilities. These systems can provide measurements in globe-encircling arrays or over small select areas, with increasing resolution and new capabilities. Concurrently, there are expanding numbers of existing and emerging infectious diseases, many distributed according to areal patterns of physical conditions at the earth's surface. For these reasons, the medical and remote sensing communities can beneficially collaborate with the objective of making needed progress in public health activities by exploiting the advances of the national and international space programs. Major improvements in the applicability of remotely sensed data are becoming possible with increases in the four kinds of resolution scheduled over the next few years: spatial, temporal, radiometric, and spectral. Much collaborative research will be necessary before data from these systems are fully exploited by the medical community.

  7. Ontologies and Information Systems: A Literature Survey

    DTIC Science & Technology

    2011-06-01

    Science and Technology Organisation DSTO–TN–1002 ABSTRACT An ontology captures in a computer-processable language the important concepts in a...knowledge sharability, reusability and scalability, and that support collaborative and distributed construction of ontologies, the DOGMA and DILIGENT...and assemble the received information). In the second stage, the designers determine how ontologies should be used in the process of adding

  8. Research and Development of Collaborative Environments for Command and Control

    DTIC Science & Technology

    2011-05-01

    at any state of building. The viewer tool presents the designed model with 360-degree perspective views even after regeneration of the design, which...and it shows the following prompt. GUM > APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED...11 First initialize the microSD card by typing GUM > mmcinit Then erase the old Linux kernel and the root file system on the flash memory

  9. Hardening digital systems with distributed functionality: robust networks

    NASA Astrophysics Data System (ADS)

    Vaskova, Anna; Portela-Garcia, Marta; Garcia-Valderas, Mario; López-Ongil, Celia; Portilla, Jorge; Valverde, Juan; de la Torre, Eduardo; Riesgo, Teresa

    2013-05-01

    Collaborative hardening and hardware redundancy are nowadays the most interesting solutions in terms of the fault tolerance achieved and the low extra cost imposed on the project budget. Thanks to the powerful and cheap digital devices that are available on the market, extra processing capabilities can be used for redundant tasks, not only in early data processing (sensed data) but also in routing and interfacing.

  10. Physics Goals for the Planned Next Linear Collider Engineering Test Facility

    NASA Astrophysics Data System (ADS)

    Raubenheimer, T. O.

    2001-10-01

    The Next Linear Collider (NLC) Collaboration is planning to construct an Engineering Test Facility (ETF) at Fermilab. As presently envisioned, the ETF would comprise a fundamental unit of the NLC main linac to include X-band klystrons and modulators, a delay-line power-distribution system (DLDS), and NLC accelerating structures that serve as loads. The principal purpose of the ETF is to validate stable operation of the power-distribution system, first without beam, then with a beam having the NLC pulse structure. This paper concerns the possibility of configuring and using the ETF to accelerate beam with an NLC pulse structure, as well as of doing experiments to measure beam-induced wakefields in the rf structures and their influence back on the beam.

  11. Distributed visualization framework architecture

    NASA Astrophysics Data System (ADS)

    Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger

    2010-01-01

    An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy-to-use, and extensible framework for research in scientific visualization. The system provides both single-user and collaborative distributed environments. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These lightweight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back end is supported by concrete implementations wherever needed (for instance, for rendering). A middle tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization. These include a layer compositor editor, a programmable shader editor, a material editor, and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager. The asset manager keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically. 
Hence if a new component is added that supports the IMaterial interface, any instances of this can be used in the various GUI components that work with this interface. One of the main features is an interactive shader designer. This allows rapid prototyping of new visualization renderings that are shader-based and greatly accelerates the development and debug cycle.
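The interface/proxy registration pattern this record describes, where components declare the interfaces they support, an asset manager tracks registered proxies, and GUI components query by interface, can be sketched as follows. All class and attribute names are illustrative, not the framework's actual API.

```python
class IMaterial:
    """Marker interface: anything renderable as a material implements this."""
    pass

class AssetManager:
    """Tracks registered proxies and answers interface-based queries."""
    def __init__(self):
        self._proxies = []

    def register(self, proxy):
        self._proxies.append(proxy)

    def query(self, interface):
        """Return every registered proxy implementing the given interface."""
        return [p for p in self._proxies if isinstance(p, interface)]

class PhongMaterial(IMaterial):
    name = "phong"

class CheckerMaterial(IMaterial):
    name = "checker"

manager = AssetManager()
manager.register(PhongMaterial())
manager.register(CheckerMaterial())

# A GUI component (e.g. a material editor) populates itself by querying the
# manager for IMaterial rather than knowing concrete classes.
print([m.name for m in manager.query(IMaterial)])
```

This mirrors the record's claim: adding a new component that supports IMaterial makes it appear automatically in every GUI component that works against that interface.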

  12. Distributed Operations Planning

    NASA Technical Reports Server (NTRS)

    Fox, Jason; Norris, Jeffrey; Powell, Mark; Rabe, Kenneth; Shams, Khawaja

    2007-01-01

    Maestro software provides a secure and distributed mission planning system for long-term missions in general, and the Mars Exploration Rover (MER) mission specifically. Maestro, the successor to the Science Activity Planner, has a heavy emphasis on portability and distributed operations, and requires no data replication or expensive hardware, instead relying on a set of services functioning on JPL institutional servers. Maestro works on most current computers with network connections, including laptops. When browsing downlink data from a spacecraft, Maestro functions much like a Web browser. After authenticating the user, it connects to a database server to query an index of data products. It then contacts a Web server to download and display the actual data products. The software also includes collaboration support based upon a highly reliable messaging system. Modifications made to targets in one instance are quickly and securely transmitted to other instances of Maestro. The back end that has been developed for Maestro could benefit many future missions by reducing the cost of centralized operations system architecture.

  13. JPL Facilities and Software for Collaborative Design: 1994 - Present

    NASA Technical Reports Server (NTRS)

    DeFlorio, Paul A.

    2004-01-01

    The viewgraph presentation provides an overview of the history of the JPL Project Design Center (PDC) and, since 2000, the Center for Space Mission Architecture and Design (CSMAD). The discussion includes PDC objectives and scope; mission design metrics; distributed design; a software architecture timeline; facility design principles; optimized design for group work; CSMAD plan view, facility design, and infrastructure; and distributed collaboration tools.

  14. Predictability and Coupled Dynamics of MJO During DYNAMO

    DTIC Science & Technology

    2013-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Predictability and Coupled Dynamics of MJO During DYNAMO ... DYNAMO time period. APPROACH We are working as a team to study MJO dynamics and predictability using several models as team members of the ONR DRI...associated with the DYNAMO experiment. This is a fundamentally collaborative proposal that involves close collaboration with Dr. Hyodae Seo of the

  15. Problem-Solving Phase Transitions During Team Collaboration.

    PubMed

    Wiltshire, Travis J; Butner, Jonathan E; Fiore, Stephen M

    2018-01-01

    Multiple theories of problem-solving hypothesize that there are distinct qualitative phases exhibited during effective problem-solving. However, limited research has attempted to identify when transitions between phases occur. We integrate theory on collaborative problem-solving (CPS) with dynamical systems theory suggesting that when a system is undergoing a phase transition it should exhibit a peak in entropy and that entropy levels should also relate to team performance. Communications from 40 teams that collaborated on a complex problem were coded for occurrence of problem-solving processes. We applied a sliding window entropy technique to each team's communications and specified criteria for (a) identifying data points that qualify as peaks and (b) determining which peaks were robust. We used multilevel modeling, and provide a qualitative example, to evaluate whether phases exhibit distinct distributions of communication processes. We also tested whether there was a relationship between entropy values at transition points and CPS performance. We found that a proportion of entropy peaks was robust and that the relative occurrence of communication codes varied significantly across phases. Peaks in entropy thus corresponded to qualitative shifts in teams' CPS communications, providing empirical evidence that teams exhibit phase transitions during CPS. Also, lower average levels of entropy at the phase transition points predicted better CPS performance. We specify future directions to improve understanding of phase transitions during CPS, and collaborative cognition, more broadly. Copyright © 2017 Cognitive Science Society, Inc.
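The sliding-window entropy technique this record describes can be sketched directly: compute the Shannon entropy of coded communication events within each window, and look for peaks as candidate phase transitions. The code sequence below is hypothetical, and the paper's additional criteria for robust peak identification are omitted.

```python
import math
from collections import Counter

def window_entropy(codes, window):
    """Shannon entropy (bits) of communication codes in each sliding window."""
    out = []
    for i in range(len(codes) - window + 1):
        counts = Counter(codes[i:i + window])
        out.append(-sum(c / window * math.log2(c / window)
                        for c in counts.values()))
    return out

# Hypothetical coded utterances: a knowledge-building stretch (K), a mixed
# transitional stretch, then an option-evaluation stretch (O), with E as a
# third code appearing only in the transition.
codes = list("KKKKKKOKEOKEKOOOOOOO")
ent = window_entropy(codes, window=5)
print([round(e, 3) for e in ent])
```

Entropy is zero inside each homogeneous phase and rises to a maximum in the mixed transitional region, which is the signature the authors use (with formal peak criteria) to locate phase transitions.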

  16. A Study of ATLAS Grid Performance for Distributed Analysis

    NASA Astrophysics Data System (ADS)

    Panitkin, Sergey; Fine, Valery; Wenaus, Torre

    2012-12-01

    In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of groundbreaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We will present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis in 2011. This includes studies of general properties as well as timing properties of user jobs (wait time, run time, etc.). These studies are based on mining of data archived by the PanDA workload management system.

  17. New directions in the CernVM file system

    NASA Astrophysics Data System (ADS)

    Blomer, Jakob; Buncic, Predrag; Ganis, Gerardo; Hardi, Nikola; Meusel, Rene; Popescu, Radu

    2017-10-01

    The CernVM File System today is commonly used to host and distribute application software stacks. In addition to this core task, recent developments expand the scope of the file system into two new areas. Firstly, CernVM-FS emerges as a good match for container engines to distribute the container image contents. Compared to native container image distribution (e.g. through the “Docker registry”), CernVM-FS massively reduces the network traffic for image distribution. This has been shown, for instance, by a prototype integration of CernVM-FS into Mesos developed by Mesosphere, Inc. We present a path for a smooth integration of CernVM-FS and Docker. Secondly, CernVM-FS recently raised new interest as an option for the distribution of experiment conditions data. Here, the focus is on improved versioning capabilities of CernVM-FS that allow linking the conditions data of a run period to the state of a CernVM-FS repository. Lastly, CernVM-FS has been extended to provide a name space for physics data for the LIGO and CMS collaborations. Searching through a data namespace is often done by a central, experiment-specific database service. A name space on CernVM-FS can particularly benefit from an existing, scalable infrastructure and from the POSIX file system interface.

  18. The Design of Modular Web-Based Collaboration

    NASA Astrophysics Data System (ADS)

    Intapong, Ploypailin; Settapat, Sittapong; Kaewkamnerdpong, Boonserm; Achalakul, Tiranee

    Online collaborative systems are popular communication channels, as they allow people from various disciplines to interact and collaborate with ease. The systems provide communication tools and services that can be integrated on the web; consequently, they are more convenient to use and easier to install. Nevertheless, most currently available systems are designed according to specific requirements and cannot be straightforwardly integrated into various applications. This paper provides the design of a new collaborative platform, which is component-based and re-configurable. The platform is called the Modular Web-based Collaboration (MWC). MWC shares the same concept as computer-supported collaborative work (CSCW) and computer-supported collaborative learning (CSCL), but it provides configurable tools for online collaboration. Each tool module can be integrated into users' web applications freely and easily. This makes the collaborative system flexible, adaptable, and suitable for online collaboration.

  19. Assessment of Energy Storage Alternatives in the Puget Sound Energy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balducci, Patrick J.; Jin, Chunlian; Wu, Di

    2013-12-01

    As part of an ongoing study co-funded by the Bonneville Power Administration, under its Technology Innovation Grant Program, and the U.S. Department of Energy, the Pacific Northwest National Laboratory (PNNL) has developed an approach and modeling tool for assessing the net benefits of using energy storage located close to the customer in the distribution grid to manage demand. PNNL, in collaboration with PSE and Primus Power, has evaluated the net benefits of placing a zinc bromide battery system at two locations in the PSE system (Baker River / Rockport and Bainbridge Island). Energy storage can provide a number of benefits to the utility through the increased flexibility it provides to the grid system. Applications evaluated in the assessment include capacity value, balancing services, arbitrage, distribution deferral, and outage mitigation. This report outlines the methodology developed for this study and Phase I results.

  20. Analysis of scientific collaboration in Chinese psychiatry research.

    PubMed

    Wu, Ying; Jin, Xing

    2016-05-26

    In recent decades, China has changed profoundly, becoming the country with the world's second-largest economy. The proportion of the Chinese population suffering from mental disorder has grown in parallel with the rapid economic development, as social stresses have increased. The aim of this study is to shed light on the status of collaborations in the Chinese psychiatry field, of which there is currently limited research. We sampled 16,224 publications (2003-2012) from 10 core psychiatry journals from Chinese National Knowledge Infrastructure (CNKI) and WanFang Database. We used various social network analysis (SNA) methods such as centrality analysis, and Core-Periphery analysis to study collaboration. We also used hierarchical clustering analysis in this study. From 2003-2012, there were increasing collaborations at the level of authors, institutions and regions in the Chinese psychiatry field. Geographically, these collaborations were distributed unevenly. The 100 most prolific authors and institutions and 32 regions were used to construct the collaboration map, from which we detected the core author, institution and region. Collaborative behavior was affected by economic development. We should encourage collaborative behavior in the Chinese psychiatry field, as this facilitates knowledge distribution, resource sharing and information acquisition. Collaboration has also helped the field narrow its current research focus, providing further evidence to inform policymakers to fund research in order to tackle the increase in mental disorder facing modern China.

  1. From Data-Sharing to Model-Sharing: SCEC and the Development of Earthquake System Science (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2009-12-01

    Earthquake system science seeks to construct system-level models of earthquake phenomena and use them to predict emergent seismic behavior—an ambitious enterprise that requires a high degree of interdisciplinary, multi-institutional collaboration. This presentation will explore model-sharing structures that have been successful in promoting earthquake system science within the Southern California Earthquake Center (SCEC). These include disciplinary working groups to aggregate data into community models; numerical-simulation working groups to investigate system-specific phenomena (process modeling) and further improve the data models (inverse modeling); and interdisciplinary working groups to synthesize predictive system-level models. SCEC has developed a cyberinfrastructure, called the Community Modeling Environment, that can distribute the community models; manage large suites of numerical simulations; vertically integrate the hardware, software, and wetware needed for system-level modeling; and promote the interactions among working groups needed for model validation and refinement. Various socio-scientific structures contribute to successful model-sharing. Two of the most important are “communities of trust” and collaborations between government and academic scientists on mission-oriented objectives. The latter include improvements of earthquake forecasts and seismic hazard models and the use of earthquake scenarios in promoting public awareness and disaster management.

  2. 2014 Earth System Grid Federation and Ultrascale Visualization Climate Data Analysis Tools Conference Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Dean N.

    2015-01-27

    The climate and weather data science community met December 9–11, 2014, in Livermore, California, for the fourth annual Earth System Grid Federation (ESGF) and Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) Face-to-Face (F2F) Conference, hosted by the Department of Energy, the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, the European Infrastructure for the European Network of Earth System Modelling, and the Australian Department of Education. Both ESGF and UV-CDAT remain global collaborations committed to developing a new generation of open-source software infrastructure that provides distributed access to, and analysis of, simulated and observed data from the climate and weather communities. The tools and infrastructure created under these international multi-agency collaborations are critical to understanding extreme weather conditions and long-term climate change. In addition, the F2F conference fosters a stronger climate and weather data science community and a stronger federated software infrastructure. The 2014 F2F conference detailed the progress of ESGF, UV-CDAT, and other community efforts over the year and set new priorities and requirements for existing and impending national and international community projects, such as the Coupled Model Intercomparison Project Phase Six. Specifically discussed at the conference were project capabilities and enhancement needs for data distribution, analysis, visualization, hardware and network infrastructure, standards, and resources.

  3. Using Wikis to Investigate Communication, Collaboration and Engagement in Capstone Engineering Design Projects

    ERIC Educational Resources Information Center

    Berthoud, L.; Gliddon, J.

    2018-01-01

    In today's global Aerospace industry, virtual workspaces are commonly used for collaboration between geographically distributed multidisciplinary teams. This study investigated the use of wikis to look at communication, collaboration and engagement in 'Capstone' team design projects at the end of an engineering degree. Wikis were set up for teams…

  4. Structure and Evolution of Scientific Collaboration Networks in a Modern Research Collaboratory

    ERIC Educational Resources Information Center

    Pepe, Alberto

    2010-01-01

    This dissertation is a study of scientific collaboration at the Center for Embedded Networked Sensing (CENS), a modern, multi-disciplinary, distributed laboratory involved in sensor network research. By use of survey research and network analysis, this dissertation examines the collaborative ecology of CENS in terms of three networks of…

  5. Assessing Students in Human-to-Agent Settings to Inform Collaborative Problem-Solving Learning

    ERIC Educational Resources Information Center

    Rosen, Yigal

    2017-01-01

    In order to understand potential applications of collaborative problem-solving (CPS) assessment tasks, it is necessary to examine empirically the multifaceted student performance that may be distributed across collaboration methods and purposes of the assessment. Ideally, each student should be matched with various types of group members and must…

  6. Attitude and awareness of medical and dental students towards collaboration between medical and dental practice in Hong Kong.

    PubMed

    Zhang, Shinan; Lo, Edward C M; Chu, Chun-Hung

    2015-05-02

    Medical-dental collaboration is essential for improving resource efficiency and standards of care; however, few studies have examined it. This study aimed to investigate the attitudes and awareness of medical and dental students regarding collaboration between medical and dental practice in Hong Kong. All medical and dental students in Hong Kong were invited to complete a questionnaire survey at their universities, hospitals and residential halls. It contained 8 questions designed to elicit their attitudes towards collaboration between medical and dental practice. Students were also asked about their awareness of the collaboration between dentistry and medicine. The questionnaires were distributed directly to medical and dental students, and completed questionnaires were collected on site by research assistants. A total of 1,857 questionnaires were distributed and 809 (44%) were returned. The mean attitude score (SD) towards medical-dental collaboration was 6.37 (1.44). Most students (77%) were aware of the collaboration between medical and dental practice in Hong Kong. They considered Ear, Nose & Throat, General Surgery and Family Medicine to be the 3 medical disciplines that most commonly entail collaboration between medical and dental practice. In this study, the medical and dental students in general demonstrated a good attitude towards, and awareness of, the collaboration between medical and dental practice in Hong Kong. This establishes an essential foundation for fostering medical-dental collaboration, which is vital to improving resource efficiency and standards of care.

  7. Measurement of the atmospheric muon flux with the NEMO Phase-1 detector

    NASA Astrophysics Data System (ADS)

    Aiello, S.; Ameli, F.; Amore, I.; Anghinolfi, M.; Anzalone, A.; Barbarino, G.; Battaglieri, M.; Bazzotti, M.; Bersani, A.; Beverini, N.; Biagi, S.; Bonori, M.; Bouhadef, B.; Brunoldi, M.; Cacopardo, G.; Capone, A.; Caponetto, L.; Carminati, G.; Chiarusi, T.; Circella, M.; Cocimano, R.; Coniglione, R.; Cordelli, M.; Costa, M.; D'Amico, A.; De Bonis, G.; De Marzo, C.; De Rosa, G.; De Ruvo, G.; De Vita, R.; Distefano, C.; Falchini, E.; Flaminio, V.; Fratini, K.; Gabrielli, A.; Galatà, S.; Gandolfi, E.; Giacomelli, G.; Giorgi, F.; Giovanetti, G.; Grimaldi, A.; Habel, R.; Imbesi, M.; Kulikovsky, V.; Lattuada, D.; Leonora, E.; Lonardo, A.; Lo Presti, D.; Lucarelli, F.; Marinelli, A.; Margiotta, A.; Martini, A.; Masullo, R.; Migneco, E.; Minutoli, S.; Morganti, M.; Musico, P.; Musumeci, M.; Nicolau, C. A.; Orlando, A.; Osipenko, M.; Papaleo, R.; Pappalardo, V.; Piattelli, P.; Piombo, D.; Raia, G.; Randazzo, N.; Reito, S.; Ricco, G.; Riccobene, G.; Ripani, M.; Rovelli, A.; Ruppi, M.; Russo, G. V.; Russo, S.; Sapienza, P.; Sciliberto, D.; Sedita, M.; Shirokov, E.; Simeone, F.; Sipala, V.; Spurio, M.; Taiuti, M.; Trasatti, L.; Urso, S.; Vecchi, M.; Vicini, P.; Wischnewski, R.

    2010-05-01

    The NEMO Collaboration installed and operated an underwater detector including prototypes of the critical elements of a possible underwater km³ neutrino telescope: a four-floor tower (called Mini-Tower) and a Junction Box. The detector was developed to test some of the main systems of the km³ detector, including the data transmission, power distribution, timing calibration and acoustic positioning systems, as well as to verify the capability of a single three-dimensional detection structure to reconstruct muon tracks. We present results of the analysis of the data collected with the NEMO Mini-Tower. The positions of the photomultiplier tubes (PMTs) are determined through the acoustic positioning system. Signals detected by the PMTs are used to reconstruct the tracks of atmospheric muons. The angular distribution of atmospheric muons was measured and the results compared to Monte Carlo simulations.

  8. Signatures of Currency Vertices

    NASA Astrophysics Data System (ADS)

    Holme, Petter

    2009-03-01

    Many real-world networks have broad degree distributions. For some systems, this means that the functional significance of the vertices is also broadly distributed, in other cases the vertices are equally significant, but in different ways. One example of the latter case is metabolic networks, where the high-degree vertices — the currency metabolites — supply the molecular groups to the low-degree metabolites, and the latter are responsible for the higher-order biological function, of vital importance to the organism. In this paper, we propose a generalization of currency metabolites to currency vertices. We investigate the network structural characteristics of such systems, both in model networks and in some empirical systems. In addition to metabolic networks, we find that a network of music collaborations and a network of e-mail exchange could be described by a division of the vertices into currency vertices and others.
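    A minimal sketch of the idea on a toy metabolic-style network (the node names and degree cutoff are invented for illustration; the paper's actual criterion is structural, not a bare threshold): currency-vertex candidates are the hubs connected to many low-degree vertices.

```python
from collections import Counter

# Hypothetical toy network: the hub "ATP" plays the currency-metabolite role,
# supplying molecular groups to several low-degree metabolites.
edges = [("ATP", m) for m in ("glucose", "pyruvate", "citrate", "lactate")]
edges += [("glucose", "pyruvate")]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# A simple degree cutoff separates candidate currency vertices from the rest.
threshold = 3
currency = {v for v, d in degree.items() if d >= threshold}
print(sorted(currency))  # ['ATP']
```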

  9. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the various parameters and configuration data needed by ATLAS software components. The ATLAS Grid Information System (AGIS) is designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. As an intermediate middleware layer between clients and external information sources (such as the central BDII, GOCDB and MyOSG), AGIS defines the relations between the experiment-specific resources in use and the physical distributed computing capabilities. Having been in production throughout LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS, and it continuously evolves to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving to fit newer requirements from the ADC community. In this note, we describe the evolution and recent development of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as flexible utilization of opportunistic Cloud and HPC resources, ObjectStore service integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified storage-protocol declaration required for PanDA Pilot site movers. Improvements of the information model and general updates are also shown; in particular, we explain how collaborations outside ATLAS could benefit from the system as a computing-resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  10. DIRAC in Large Particle Physics Experiments

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Tsaregorodtsev, A.; Arrabito, L.; Sailer, A.; Hara, T.; Zhang, X.; Consortium, DIRAC

    2017-10-01

    The DIRAC project is developing interware to build and operate distributed computing systems. It provides a development framework and a rich set of services for both Workload and Data Management tasks of large scientific communities. A number of High Energy Physics and Astrophysics collaborations have adopted DIRAC as the base for their computing models. DIRAC was initially developed for the LHCb experiment at the LHC, CERN. Later, the Belle II, BES III and CTA experiments, as well as the linear collider detector collaborations, started using DIRAC for their computing systems. Some of the experiments built their DIRAC-based systems from scratch; others migrated from previous solutions, ad hoc or based on different middleware. Adaptation of DIRAC for a particular experiment is enabled through the creation of extensions that meet its specific requirements. Each experiment has a heterogeneous set of computing and storage resources at its disposal that are aggregated through DIRAC into a coherent pool. Users from different experiments can interact with the system in different ways depending on their specific tasks, expertise level and previous experience, using command line tools, Python APIs or Web Portals. In this contribution we summarize the experience of using DIRAC in particle physics collaborations. The problems of migration to DIRAC from previous systems and their solutions are presented, along with an overview of specific DIRAC extensions. We hope that this review will be useful for experiments considering an update, or for those designing their computing models.

  11. Collaborative Chronic Care Networks (C3Ns) to transform chronic illness care.

    PubMed

    Margolis, Peter A; Peterson, Laura E; Seid, Michael

    2013-06-01

    Despite significant gains by pediatric collaborative improvement networks, the overall US system of chronic illness care does not work well. A new paradigm is needed: a Collaborative Chronic Care Network (C3N). A C3N is a network-based production system that harnesses the collective intelligence of patients, clinicians, and researchers and distributes the production of knowledge, information, and know-how over large groups of people, dramatically accelerating the discovery process. A C3N is a platform of "operating systems" on which interconnected processes and interventions are designed, tested, and implemented. The social operating system is facilitated by community building, engaging all stakeholders and their expertise, and providing multiple ways to participate. Standard progress measures and a robust information technology infrastructure enable the technical operating system to reduce unwanted variation and adopt advances more rapidly. A structured approach to innovation design provides a scientific operating system or "laboratory" for what works and how to make it work. Data support testing and research on multiple levels: comparative effectiveness research for populations, evaluating care delivery processes at the care center level, and N-of-1 trials and other methods to select the best treatment of individual patient circumstances. Methods to reduce transactional costs to participate include a Federated IRB Model in which centers rely on a protocol approved at 1 central institutional review board and a "commons framework" for organizational copyright and intellectual property concerns. A fully realized C3N represents a discontinuous leap to a self-developing learning health system capable of producing a qualitatively different approach to improving health.

  12. The TRIDEC System-of-Systems; Choreography of large-scale concurrent tasks in Natural Crisis Management

    NASA Astrophysics Data System (ADS)

    Häner, R.; Wächter, J.

    2012-04-01

    The project Collaborative, Complex, and Critical Decision-Support in Evolving Crises (TRIDEC), co-funded by the European Commission in its Seventh Framework Programme, aims at establishing a network of dedicated, autonomous legacy systems for large-scale concurrent management of natural crises utilising heterogeneous information resources. TRIDEC's architecture reflects the System-of-Systems (SoS) approach, which is based on task-oriented systems cooperatively interacting as a collective in a common environment. The design of the TRIDEC SoS follows the principles of service-oriented and event-driven architectures (SOA and EDA), with a strong focus on loose coupling of the systems. The SoS approach in combination with SOA and EDA is distinctive in being able to provide novel and coherent behaviours and features resulting from a process of dynamic self-organisation. Self-organisation is a process that needs no central or external coordinator controlling it through orchestration; it is the result of enacted concurrent tasks in a collaborative environment of geographically distributed systems. Although the individual systems act completely autonomously, their interactions expose emergent structures of an evolving nature. Particularly important is the fact that SoS are inherently able to evolve in all facets of intelligent information management. This includes adaptive properties, e.g. seamless integration of new resource types or the adoption of new fields in natural crisis management. In the case of TRIDEC, with various heterogeneous participants involved, concurrent information processing is of fundamental importance because of the achievable improvements in cooperative decision making. Collaboration within TRIDEC will be implemented with choreographies and conversations.
Choreographies specify the expected behaviour between two or more participants; conversations describe the message exchange between all participants, emphasising their logical relation. The TRIDEC choreography will be based on the definition of Behavioural Interfaces and Service Level Agreements, which describe the interactions of all participants involved in the collaborative process by binding the tasks of dedicated systems to high-level business processes. All methods of a Behavioural Interface can be assigned dynamically to the activities of a business process. This makes it possible to utilise a system during the run time of a business process and thus, for example, to enable task balancing or the delegation of responsibilities. Since the individual parts of an SoS are normally managed independently and operate autonomously because of their geographical distribution, it is of vital importance to ensure the reliability (robustness and correctness) of their interactions, which will be achieved by applying the Design by Contract (DbC) approach to the TRIDEC architecture. A key challenge for TRIDEC is establishing a reliable adaptive system that exposes emergent behaviour, for example intelligent monitoring strategies or dynamic system adaptations even in the case of partial system failures. It is essential for TRIDEC that, for example, redundant parts of the system can take over tasks from defective components in a process of re-organising its network.
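    The Design by Contract idea can be illustrated with a minimal sketch (illustrative only, not TRIDEC code; the function, magnitudes and alert levels are invented): preconditions and postconditions guard every interaction, so a violated contract fails loudly instead of propagating bad data between systems.

```python
# Minimal Design-by-Contract wrapper: check a precondition on the arguments
# and a postcondition on the result of every guarded interaction.
def contract(pre, post):
    def wrap(fn):
        def inner(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda mag: mag >= 0, post=lambda level: level in {"watch", "warning"})
def classify_alert(mag):
    # Hypothetical crisis-management task: map an event magnitude to an alert level.
    return "warning" if mag >= 6.0 else "watch"

print(classify_alert(7.1))  # warning
```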

  13. Collaboration technology and space science

    NASA Technical Reports Server (NTRS)

    Leiner, Barry M.; Brown, R. L.; Haines, R. F.

    1990-01-01

    A summary of available collaboration technologies and their applications to space science is presented, along with investigations into remote coaching paradigms and the role of a specific collaboration tool for distributed task coordination in supporting such teleoperations. The applicability and effectiveness of different communication media and tools in supporting remote coaching are investigated. One investigation concerns a distributed checklist, a computer-based tool that allows a group of people (e.g., onboard crew, a ground-based investigator, and mission control) to synchronize their actions while providing full flexibility for the flight crew to set the pace and remain on their operational schedule. This autonomy is shown to contribute to morale and productivity.

  14. SMUD Community Renewable Energy Deployment Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sison-Lebrilla, Elaine; Tiangco, Valentino; Lemes, Marco

    2015-06-08

    This report summarizes the completion of four renewable energy installations supported by California Energy Commission (CEC) grant PIR-11-005, US Department of Energy (DOE) Assistance Agreement DE-EE0003070, and the Sacramento Municipal Utility District (SMUD) Community Renewable Energy Deployment (CRED) program. The funding from the DOE, combined with funding from the CEC, supported the construction of a solar power system, biogas generation from waste systems, and anaerobic digestion systems at dairy facilities, all for electricity generation and delivery to SMUD's distribution system. In addition to reducing GHG emissions, the deployment of CRED projects shows that solar projects and anaerobic digesters can be successfully implemented under favorable economic conditions and business models and through collaborative partnerships. This work helps other communities learn how to assess, overcome barriers to, utilize, and benefit from renewable resources for electricity generation in their region.

  15. Distributed interactive virtual environments for collaborative experiential learning and training independent of distance over Internet2.

    PubMed

    Alverson, Dale C; Saiki, Stanley M; Jacobs, Joshua; Saland, Linda; Keep, Marcus F; Norenberg, Jeffrey; Baker, Rex; Nakatsu, Curtis; Kalishman, Summers; Lindberg, Marlene; Wax, Diane; Mowafi, Moad; Summers, Kenneth L; Holten, James R; Greenfield, John A; Aalseth, Edward; Nickles, David; Sherstyuk, Andrei; Haines, Karen; Caudell, Thomas P

    2004-01-01

    Medical knowledge and skills essential for tomorrow's healthcare professionals continue to change faster than ever before, creating new demands in medical education. Project TOUCH (Telehealth Outreach for Unified Community Health) has been developing methods to enhance learning by coupling innovations in medical education with advanced technology in high performance computing and next-generation Internet2 embedded in virtual reality environments (VRE), artificial intelligence and experiential active learning. Simulations have been used in education and training to allow learners to make mistakes safely in lieu of real-life situations, learn from those mistakes and ultimately improve performance by subsequent avoidance of those mistakes. Distributed virtual interactive environments are used over distance to enable learning and participation in dynamic, problem-based, clinical, artificial intelligence rules-based, virtual simulations. The virtual reality patient is programmed to dynamically change over time and respond to the manipulations by the learner. Participants are fully immersed within the VRE platform using a head-mounted display and tracker system. Navigation, locomotion and handling of objects are accomplished using a joy-wand. Distribution is managed via the Internet2 Access Grid using point-to-point or multicasting connectivity through which the participants can interact. Medical students in Hawaii and New Mexico (NM) participated collaboratively in problem solving and managing a simulated patient with a closed head injury in the VRE, dividing tasks, handing off objects, and functioning as a team. Students stated that opportunities to make mistakes and repeat actions in the VRE were extremely helpful in learning specific principles. The VRE created higher performance expectations and some anxiety among VRE users. VRE orientation was adequate, but students needed time to adapt and practice in order to improve efficiency.
This was also demonstrated successfully between Western Australia and UNM. We successfully demonstrated the ability to fully immerse participants in a distributed virtual environment, independent of distance, for collaborative team interaction in medical simulation designed for education and training. The ability to make mistakes in a safe environment is well received by students and has a positive impact on their understanding, as well as their memory of the principles involved in correcting those mistakes. Bringing people together as virtual teams for interactive experiential learning and collaborative training, independent of distance, provides a platform for distributed "just-in-time" training, performance assessment and credentialing. Further validation is necessary to determine the potential value of the distributed VRE in knowledge transfer and improved future performance, and should entail training participants to competence in using these tools.

  16. Development of Early Warning System for Landslide Using Electromagnetic, Hydrological, Geotechnical, and Geological Approaches

    NASA Astrophysics Data System (ADS)

    Huang, Q.; Hattori, K.; Chae, B.

    2011-12-01

    The Joint Research Collaboration Program (JRCP) for Chinese-Korean-Japanese (CKJ) Research Collaboration is a new cooperative scheme for joint funding from the Chinese Department of International Cooperation of the Ministry of Science and Technology (DOIC), the Korea Foundation for International Cooperation of Science and Technology (KICOS) and the Japan Science and Technology Agency (JST). In this paper, we introduce the funded CKJ project entitled "Development of early warning system for landslide using electromagnetic, hydrological, geotechnical, and geological approaches". The final goal of the project is to develop a simple methodology for landslide monitoring and forecasting (an early warning system) using the self-potential method in the framework of joint research among China, Korea, and Japan. The project is developing a new scientific and technical methodology for the prevention of natural soil disasters. The outline of the project is as follows: (1) basic understanding of the relationship between resistivity distribution and moisture in soil, and visualization of their dynamical changes in space and time using tomography techniques; (2) laboratory experiments on rainfall-induced landslides and sandbox experiments to put this basic understanding to practical use; (3) in-situ experiments for evaluation. Annual workshops, symposia and seminars will be organized to strengthen scientific collaborations and exchanges. In consideration of the above issues, the integration of geological, hydrological and geotechnical characteristics with electromagnetic ones is adopted as the key approach in this project. This study is partially supported by the Joint Research Collaboration Program, DOIC, MOST, China (2010DFA21570) and the National Natural Science Foundation of China (40974038, 41025014).

  17. Distributed collaborative decision support environments for predictive awareness

    NASA Astrophysics Data System (ADS)

    McQuay, William K.; Stilman, Boris; Yakhnis, Vlad

    2005-05-01

    The past decade has produced significant changes in the conduct of military operations: asymmetric warfare, the reliance on dynamic coalitions, stringent rules of engagement, increased concern about collateral damage, and the need for sustained air operations. Mission commanders need to assimilate a tremendous amount of information, rapidly assess the enemy's course of action (eCOA) or possible actions, and promulgate their own course of action (COA): a need for predictive awareness. Decision support tools in a distributed collaborative environment offer the capability of decomposing complex multitask processes and distributing them over a dynamic set of execution assets that include modeling, simulation, and analysis tools. Revolutionary new approaches to strategy generation and assessment, such as Linguistic Geometry (LG), permit the rapid development of a COA versus an eCOA. LG tools automatically generate winning strategies and tactics, and permit operators to take advantage of them, for mission planning and execution in near real time. LG is predictive, employing deep "look-ahead" from the current state, and provides a realistic, reactive model of adversary reasoning and behavior. Collaborative environments provide the framework and integrate models, simulations, and domain-specific decision support tools for sharing and exchanging data, information, knowledge, and actions. This paper describes ongoing research efforts in applying distributed collaborative environments to decision support for predictive mission awareness.

  18. Distributed and Collaborative Software Analysis

    NASA Astrophysics Data System (ADS)

    Ghezzi, Giacomo; Gall, Harald C.

    Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of software analysis, such as source code analysis, co-change analysis or bug prediction. However, easy and straightforward synergies between these analyses and tools rarely exist because of their stand-alone nature, their platform dependence, their different input and output formats and the variety of data to analyze. As a consequence, distributed and collaborative software analysis scenarios, and in particular interoperability, are severely limited. We describe a distributed and collaborative software analysis platform that allows for seamless interoperability of software analysis tools across platform, geographical and organizational boundaries. We realize software analysis tools as services that can be accessed and composed over the Internet. These distributed analysis services shall be widely accessible through our incrementally augmented Software Analysis Broker, where organizations and tool providers can register and share their tools. To allow (semi-)automatic use and composition of these tools, they are classified and mapped into a software analysis taxonomy and adhere to specific meta-models and ontologies for their category of analysis.
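    The tools-as-services idea can be sketched as follows (a minimal, single-process sketch with invented tool names; the real platform exposes registered tools over the Internet rather than in one in-memory registry):

```python
# Toy broker: analysis tools register themselves under a name and can then
# be discovered and composed over the same source input.
broker = {}

def register(name):
    def deco(fn):
        broker[name] = fn
        return fn
    return deco

@register("loc")
def count_lines(source: str) -> int:
    """A trivial 'source code analysis' tool: count lines of code."""
    return len(source.splitlines())

@register("todo")
def count_todos(source: str) -> int:
    """Another registered tool: count TODO markers."""
    return source.count("TODO")

code = "x = 1\n# TODO: refactor\n"
report = {name: tool(code) for name, tool in broker.items()}
print(report)  # {'loc': 2, 'todo': 1}
```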

  19. The Use of Software Agents for Autonomous Control of a DC Space Power System

    NASA Technical Reports Server (NTRS)

    May, Ryan D.; Loparo, Kenneth A.

    2014-01-01

    In order to enable manned deep-space missions, the spacecraft must be controlled autonomously using on-board algorithms. A control architecture is proposed to enable this autonomous operation for a spacecraft electric power system and is then implemented using a highly distributed network of software agents. These agents collaborate and compete with each other in order to implement each of the control functions. A subset of this control architecture is tested against a steady-state power system simulation and found to be able to solve a constrained optimization problem with competing objectives using only local information.
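    A minimal sketch of control using only local information (not the paper's algorithm; the agent topology and power values are invented): each agent repeatedly averages its estimate with those of its direct neighbors, and the network converges to agreement with no central coordinator.

```python
# Three hypothetical agents on a line: 0 -- 1 -- 2. Each agent sees only
# its neighbors' values, never the whole network.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
value = {0: 90.0, 1: 120.0, 2: 60.0}  # each agent's local power estimate (W)

for _ in range(200):
    value = {
        a: (value[a] + sum(value[b] for b in nbrs)) / (1 + len(nbrs))
        for a, nbrs in neighbors.items()
    }

# All agents end up in (numerical) agreement using only local exchanges.
print(round(value[0], 1), round(value[1], 1), round(value[2], 1))  # 94.3 94.3 94.3
```

    Note the consensus value is a weighted average (the middle agent counts more, since it has two neighbors), not the simple mean of the initial estimates.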

  20. Final Report. Montpelier District Energy Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Jessie; Motyka, Kurt; Aja, Joe

    2015-03-30

    The City of Montpelier, in collaboration with the State of Vermont, developed a central heat plant fueled with locally harvested wood chips and a thermal energy distribution system. The project provides renewable energy to heat a complex of state buildings and a mix of commercial, private and municipal buildings in downtown Montpelier. The State of Vermont operates the central heat plant and the system that heats the connected state buildings. The City of Montpelier accepts energy from the central heat plant and operates a thermal utility to heat the downtown Montpelier buildings that elected to take heat from the system.

  1. Collaborative Data Mining

    NASA Astrophysics Data System (ADS)

    Moyle, Steve

    Collaborative Data Mining is a setting where the Data Mining effort is distributed to multiple collaborating agents, human or software. The objective of the collaborative Data Mining effort is to produce solutions to the tackled Data Mining problem that are considered better, by some metric, than those that would have been achieved by individual, non-collaborating agents. The solutions require evaluation, comparison, and approaches for combination. Collaboration requires communication, and implies some form of community. The human form of collaboration is a social task. Organizing communities in an effective manner is non-trivial and often requires well-defined roles and processes. Data Mining, too, benefits from a standard process. This chapter explores the standard Data Mining process CRISP-DM utilized in a collaborative setting.
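    CRISP-DM defines six standard phases, run iteratively. A minimal sketch (the per-phase callback is hypothetical) shows one cycle, whose per-phase outputs are the artifacts that collaborating agents would evaluate, compare and combine:

```python
# CRISP-DM's six phases; deployment feeds back into business understanding,
# so in practice the process is iterated.
PHASES = [
    "business understanding", "data understanding", "data preparation",
    "modeling", "evaluation", "deployment",
]

def run_cycle(execute):
    """Run one CRISP-DM iteration; `execute` is a hypothetical per-phase hook."""
    return [execute(phase) for phase in PHASES]

artifacts = run_cycle(lambda phase: f"{phase}: report")
print(len(artifacts))  # 6
```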

  2. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high-performance computing and data-handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application-oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) A comprehensive and consistent set of location-independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location-independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information-based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. 
Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3) Coupling large-scale computing and data systems to scientific and engineering instruments (e.g., real-time interaction with experiments through real-time data analysis and interpretation presented to the experimentalist in ways that allow direct interaction with the experiment, instead of just with instrument control); (4) Highly interactive, augmented reality and virtual reality remote collaborations (e.g., Ames / Boeing Remote Help Desk providing field maintenance use of coupled video and NDI to a remote, on-line airframe structures expert who uses this data to index into detailed design databases, and returns 3D internal aircraft geometry to the field); (5) Single computational problems too large for any single system (e.g., the rotorcraft reference calculation). Grids also have the potential to provide pools of resources that could be called on in extraordinary / rapid-response situations (such as disaster response) because they can provide common interfaces and access mechanisms, standardized management, and uniform user authentication and authorization for large collections of distributed resources (whether or not they normally function in concert). IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focused primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. The analysis of the needs of these two types of users provides a broad set of requirements that gives rise to a general set of required capabilities. The IPG project is intended to address all of these requirements. In some cases the required computing technology exists, and in some cases it must be researched and developed. The project is using available technology to provide a prototype set of capabilities in a persistent distributed computing testbed. Beyond this, there are required capabilities that are not immediately available, and whose development spans the range from near-term engineering development (one to two years) to much longer-term R&D (three to six years). Additional information is contained in the original.

  3. CoLeMo: A Collaborative Learning Environment for UML Modelling

    ERIC Educational Resources Information Center

    Chen, Weiqin; Pedersen, Roger Heggernes; Pettersen, Oystein

    2006-01-01

    This paper presents the design, implementation, and evaluation of a distributed collaborative UML modelling environment, CoLeMo. CoLeMo is designed for students studying UML modelling. It can also be used as a platform for collaborative design of software. We conducted formative evaluations and a summative evaluation to improve the environment and…

  4. Technology Trends in Mobile Computer Supported Collaborative Learning in Elementary Education from 2009 to 2014

    ERIC Educational Resources Information Center

    Carapina, Mia; Boticki, Ivica

    2015-01-01

    This paper analyses mobile computer supported collaborative learning in elementary education worldwide focusing on technology trends for the period from 2009 to 2014. The results present representation of device types used to support collaborative activities, their distribution per users (1:1 or 1:m) and if students are learning through or around…

  5. Scalable Technology for a New Generation of Collaborative Applications

    DTIC Science & Technology

    2007-04-01

    of the International Symposium on Distributed Computing (DISC), Cracow, Poland, September 2005. Classic Paxos vs. Fast Paxos: Caveat Emptor, Flavio...grou or able and fast multicast primitive to layer under high-level latency across dimensions as varied as group size [10, 17], abstractions such as...servers, networked via fast, dedicated interconnects. The system to subscribe to a fraction of the equities on the software stack running on a single

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NREL and the Hawaiian Electric Companies are collaborating with the solar and inverter industries to implement advanced inverters, allowing greater solar photovoltaic (PV) penetrations that will support the State of Hawaii's goal to achieve 100% renewable energy by 2045. Advanced inverters will help maintain stable grid operations by riding through grid disturbances when the PV output is needed, operating autonomously to smooth voltage fluctuations, and coordinating the start-up and reconnection of PV systems and other distributed energy resources.

  7. Context-Based Intent Understanding for Autonomous Systems in Naval and Collaborative Robot Applications

    DTIC Science & Technology

    2013-10-29

    based on contextual information, 3) develop vision-based techniques for learning of contextual information, and detection and identification of...that takes into account many possible contexts. The probability distributions of these contexts will be learned from existing databases on common sense

  8. Defense Science Board Task Force Report: The Role of Autonomy in DoD Systems

    DTIC Science & Technology

    2012-07-01

    ASD(R&E) and the Military Services should schedule periodic, on-site collaborations that bring together academia, government and not-for-profit labs...expressing UxV activities, increased problem solving, planning and scheduling capabilities to enable dynamic tasking of distributed UxVs and tools for...industrial, governmental and military. Manufacturing has long exploited planning for logistics and matching product demand to production schedules

  9. Above the cloud computing orbital services distributed data model

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2014-05-01

    Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above the cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and accessing of this data. This model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with unique constraints of the orbital environment and tags data with business model (contractual) obligation data.
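
    The business-model tagging described above can be sketched as a simple record type. All field names here are illustrative assumptions rather than the paper's schema, with a checksum standing in for the integrity guarantee:

```python
# Illustrative sketch: a stored data item tagged with ownership and contractual
# metadata, plus a checksum for integrity. Field names are assumptions.
import hashlib
from dataclasses import dataclass

@dataclass
class DataItem:
    item_id: str                # unique id for the highly distributed environment
    owner: str                  # consumer craft that owns the data
    payload: bytes
    may_resell: bool = False    # contractual right of the storing craft
    retention_orbits: int = 10  # how long the storing craft must retain it
    checksum: str = ""

    def __post_init__(self):
        self.checksum = hashlib.sha256(self.payload).hexdigest()

    def verify(self) -> bool:
        """Detect corruption or tampering of the stored payload."""
        return hashlib.sha256(self.payload).hexdigest() == self.checksum

item = DataItem("obs-0042", "consumer_craft_A", b"raw imaging frame")
```

    A storing craft would consult `may_resell` and `retention_orbits` before reselling or discarding the item, and `verify()` before serving it.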

  10. Promoting A-Priori Interoperability of HLA-Based Simulations in the Space Domain: The SISO Space Reference FOM Initiative

    NASA Technical Reports Server (NTRS)

    Moller, Bjorn; Garro, Alfredo; Falcone, Alberto; Crues, Edwin Z.; Dexter, Daniel E.

    2016-01-01

    Distributed and real-time simulation plays a key role in the Space domain, where it is exploited for mission and systems analysis and engineering as well as for crew training and operational support. One of the most popular standards is the IEEE 1516-2010 Standard for Modeling and Simulation (M&S) High Level Architecture (HLA). HLA supports the implementation of distributed simulations (called Federations) in which a set of simulation entities (called Federates) interact using a Run-Time Infrastructure (RTI). In a given Federation, a Federate can publish and/or subscribe to objects and interactions on the RTI only in accordance with their structures as defined in a FOM (Federation Object Model). Currently, the Space domain is characterized by a set of incompatible FOMs that, although they meet the specific needs of different organizations and projects, increase the long-term cost of interoperability. In this context, the availability of a reference FOM for the Space domain will enable the development of interoperable HLA-based simulators for related joint projects and collaborations among worldwide organizations involved in the Space domain (e.g. NASA, ESA, Roscosmos, and JAXA). The paper presents a first set of results achieved by a SISO standardization effort that aims at providing a Space Reference FOM for international collaboration on Space systems simulations.
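
    The FOM-mediated publish/subscribe idea can be sketched in miniature. This is not the IEEE 1516 API; class and method names are invented to show only the central constraint, that federates may exchange only object classes declared in the shared FOM:

```python
# Minimal, hypothetical sketch of FOM-mediated publish/subscribe.
class RTI:
    def __init__(self, fom_classes):
        self.fom = set(fom_classes)   # object classes declared in the FOM
        self.subscribers = {}

    def subscribe(self, federate, obj_class):
        if obj_class not in self.fom:
            raise ValueError(f"{obj_class} not declared in the FOM")
        self.subscribers.setdefault(obj_class, []).append(federate)

    def publish(self, obj_class, attributes):
        if obj_class not in self.fom:
            raise ValueError(f"{obj_class} not declared in the FOM")
        for fed in self.subscribers.get(obj_class, []):
            fed.reflect(obj_class, attributes)

class Federate:
    def __init__(self, name):
        self.name, self.received = name, []

    def reflect(self, obj_class, attributes):
        self.received.append((obj_class, attributes))

rti = RTI({"Spacecraft", "PhysicalEntity"})
viz = Federate("visualization")
rti.subscribe(viz, "Spacecraft")
rti.publish("Spacecraft", {"position": (1.0, 2.0, 3.0)})
```

    A shared reference FOM corresponds to every federation constructing its `RTI` from the same agreed `fom_classes` set, which is exactly what makes independently built simulators interoperable.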

  11. An Overview of the Distributed Space Exploration Simulation (DSES) Project

    NASA Technical Reports Server (NTRS)

    Crues, Edwin Z.; Chung, Victoria I.; Blum, Michael G.; Bowman, James D.

    2007-01-01

    This paper describes the Distributed Space Exploration Simulation (DSES) Project, a research and development collaboration between NASA centers which investigates technologies, and processes related to integrated, distributed simulation of complex space systems in support of NASA's Exploration Initiative. In particular, it describes the three major components of DSES: network infrastructure, software infrastructure and simulation development. With regard to network infrastructure, DSES is developing a Distributed Simulation Network for use by all NASA centers. With regard to software, DSES is developing software models, tools and procedures that streamline distributed simulation development and provide an interoperable infrastructure for agency-wide integrated simulation. Finally, with regard to simulation development, DSES is developing an integrated end-to-end simulation capability to support NASA development of new exploration spacecraft and missions. This paper presents the current status and plans for these three areas, including examples of specific simulations.

  12. Atmospheric Composition Data and Information Services Center (ACDISC)

    NASA Technical Reports Server (NTRS)

    Kempler, S.

    2005-01-01

    NASA's GSFC Earth Sciences (GES) Data and Information Services Center (DISC) manages the archive, distribution and data access for atmospheric composition data from Aura's OMI, MLS, and, hopefully one day, HIRDLS instruments, as well as heritage datasets from TOMS, UARS, MODIS, and AIRS. These data are currently archived in the GES Distributed Active Archive Center (DAAC). The GES DISC has begun the development of a community-driven data management system whose sole purpose is to manage and provide value-added services for NASA's Atmospheric Composition (AC) data. This system, called the Atmospheric Composition Data and Information Services Center (ACDISC), will provide access to all AC datasets from the above-mentioned instruments, as well as AC datasets residing at remote archive sites (e.g., the LaRC DAAC). The goals of the ACDISC are to: 1) Provide a data center for Atmospheric Scientists, guided by Atmospheric Scientists; 2) Be absolutely responsive to the data and data service needs of the Atmospheric Composition (AC) community; 3) Provide services (i.e., expertise) that will facilitate effortless access to and usage of AC data; 4) Collaborate with AC scientists to facilitate the use of data from multiple sensors for long-term atmospheric research. The ACDISC is an AC-specific, user-driven, multi-sensor, on-line, easy-access archive and distribution system employing data analysis and visualization, data mining, and other user-requested techniques that facilitate science data usage. The purpose of this presentation is to describe the evolution path that the GES DISC is taking in order to better serve AC data, and also to receive continued community feedback and further foster collaboration with AC data users and providers.

  13. Subscribe to DGIC Updates | Distributed Generation Interconnection

    Science.gov Websites

    Distributed Generation Interconnection Collaborative. Subscribe Please provide and submit the following information to subscribe. The mailing list addresses are never sold, rented, distributed, or disclosed in any

  14. Supporting Active Patient and Health Care Collaboration: A Prototype for Future Health Care Information Systems.

    PubMed

    Åhlfeldt, Rose-Mharie; Persson, Anne; Rexhepi, Hanife; Wåhlander, Kalle

    2016-12-01

    This article presents and illustrates the main features of a proposed process-oriented approach for patient information distribution in future health care information systems, by using a prototype of a process support system. The development of the prototype was based on the Visuera method, which includes five defined steps. The results indicate that a visualized prototype is a suitable tool for illustrating both the opportunities and constraints of future ideas and solutions in e-Health. The main challenges for developing and implementing a fully functional process support system concern both technical and organizational/management aspects. © The Author(s) 2015.

  15. Development and Application of the Collaborative Optimization Architecture in a Multidisciplinary Design Environment

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Kroo, I. M.

    1995-01-01

    Collaborative optimization is a design architecture applicable in any multidisciplinary analysis environment but specifically intended for large-scale distributed analysis applications. In this approach, a complex problem is hierarchically decomposed along disciplinary boundaries into a number of subproblems which are brought into multidisciplinary agreement by a system-level coordination process. When applied to problems in a multidisciplinary design environment, this scheme has several advantages over traditional solution strategies. These advantageous features include a reduction in the amount of information transferred between disciplines, the removal of large iteration loops, the ability to use different subspace optimizers among the various analysis groups, an analysis framework which is easily parallelized and can operate on heterogeneous equipment, and a structural framework that is well-suited for conventional disciplinary organizations. In this article, the collaborative architecture is developed and its mathematical foundation is presented. An example application is also presented which highlights the potential of this method for use in large-scale design applications.
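
    The decomposition-plus-coordination idea can be shown with a toy numeric sketch. This is not the paper's mathematical formulation; the objectives, penalty weight, and update rule are invented. Two disciplines share one design variable, each subspace optimizer trades its local objective against deviating from the system-level target, and the system level updates the target to reduce interdisciplinary discrepancy:

```python
# Toy illustration of subspace optimization under a system-level target.
def subspace_opt(local_obj, target, weight=10.0, lo=0.0, hi=10.0, steps=1000):
    """Brute-force minimizer of local_obj(x) + weight * (x - target)**2."""
    candidates = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return min(candidates, key=lambda x: local_obj(x) + weight * (x - target) ** 2)

def system_level(disciplines, target=5.0, iters=60):
    for _ in range(iters):
        optima = [subspace_opt(d, target) for d in disciplines]
        target = sum(optima) / len(optima)  # coordination step
    return target

# structures "prefers" x near 2, aerodynamics near 6; they agree near x = 4
agreed = system_level([lambda x: (x - 2) ** 2, lambda x: (x - 6) ** 2])
```

    Each `subspace_opt` call is independent, which mirrors why the architecture parallelizes naturally and lets each discipline keep its own optimizer.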

  16. Research Collaboration across Higher Education Systems: Maturity, Language Use, and Regional Differences

    ERIC Educational Resources Information Center

    Shin, Jung Cheol; Lee, Soo Jeung; Kim, Yangson

    2013-01-01

    This study analyzed whether research collaboration patterns differ across higher education systems based on maturity of the systems, their language, and their geographical region. This study found that collaboration patterns differ across higher education systems: academics in developed systems are more collaborative than their colleagues in…

  17. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    NASA Astrophysics Data System (ADS)

    Bottigli, U.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M. E.; Fauci, F.; Golosio, B.; Lauria, A.; Lopez Torres, E.; Magro, R.; Masala, G. L.; Oliva, P.; Palmiero, R.; Raso, G.; Retico, A.; Stumbo, S.; Tangaro, S.

    2003-09-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several departments of physics, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive, and perform statistical analysis. The images (18×24 cm², digitised by a CCD linear scanner with an 85 μm pitch and 4096 grey levels) are completely described: pathological ones have a characterization consistent with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as on local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense or glandular texture, can be provided by the system. GPCALMA software also allows classification of pathological features, in particular analysis of massive lesions (both opacities and spiculated lesions) and of microcalcification clusters. The detection of pathological features is performed using neural network software that selects areas showing a given "suspicion level" of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as a "second reader" will also be presented.
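
    The ROC evaluation mentioned above can be sketched generically. This is not GPCALMA code; the scores and labels below are invented, with the CAD "suspicion level" playing the role of the classifier score:

```python
# Illustrative ROC computation from suspicion-level scores and ground truth.
def roc_points(scores, labels):
    """Sweep the suspicion-level threshold and collect (FPR, TPR) pairs."""
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# four regions: three true lesions (label 1) and one healthy region (label 0)
pts = roc_points([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 1])
```

    Plotting `pts` gives the ROC curve; a "second reader" comparison would overlay the radiologist's operating point on the same axes.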

  18. The NASA Exploration Design Team; Blueprint for a New Design Paradigm

    NASA Technical Reports Server (NTRS)

    Oberto, Robert E.; Nilsen, Erik; Cohen, Ron; Wheeler, Rebecca; DeFlorio, Paul

    2005-01-01

    NASA has chosen JPL to deliver a NASA-wide rapid-response, real-time collaborative design team to perform rapid execution of program, system, mission, and technology trade studies. This team will draw on the expertise of all NASA centers and external partners as necessary. The NASA Exploration Design Team (NEDT) will be led by NASA Headquarters, with field centers and partners added according to the needs of each study. Through real-time distributed collaboration we will effectively bring all NASA field centers directly inside Headquarters. JPL's Team X pioneered the technique of real-time collaborative design eight years ago. Since its inception, Team X has performed over 600 mission studies and has reduced per-study cost by a factor of 5 and per-study duration by a factor of 10 compared to conventional design processes. The Team X concept has spread to other NASA centers, industry, academia, and international partners. In this paper, we discuss the extension of the JPL Team X process to the NASA-wide collaborative design team. We discuss the architecture for such a process and elaborate on its implementation challenges. We further discuss our current ideas on how to address these challenges.

  19. Development of multifunctional materials exhibiting distributed sensing and actuation inspired by fish

    NASA Astrophysics Data System (ADS)

    Philen, Michael

    2011-04-01

    This manuscript is an overview of the research that is currently being performed as part of a 2009 NSF Office of Emerging Frontiers in Research and Innovation (EFRI) grant on BioSensing and BioActuation (BSBA). The objectives of this multi-university collaborative research are to achieve a greater understanding of the hierarchical organization and structure of the sensory, muscular, and control systems of fish, and to develop advanced biologically inspired material systems having distributed sensing, actuation, and intelligent control. New experimental apparatus have been developed for performing experiments involving live fish and robotic devices, and new bio-inspired hair-cell sensors and artificial muscles are being developed using carbonaceous nanomaterials, bio-derived molecules, and composite technology. Results demonstrating flow sensing and actuation are presented.

  20. Physics Goals for the Planned Next Linear Collider Engineering Test Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raubenheimer, Tor O

    2001-10-02

    The Next Linear Collider (NLC) Collaboration is planning to construct an Engineering Test Facility (ETF) at Fermilab. As presently envisioned, the ETF would comprise a fundamental unit of the NLC main linac to include X-band klystrons and modulators, a delay-line power-distribution system (DLDS), and NLC accelerating structures that serve as loads. The principal purpose of the ETF is to validate stable operation of the power-distribution system, first without beam, then with a beam having the NLC pulse structure. This paper concerns the possibility of configuring and using the ETF to accelerate beam with an NLC pulse structure, as well as of doing experiments to measure beam-induced wakefields in the rf structures and their influence back on the beam.

  1. Physics goals for the planned next linear collider engineering test facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courtlandt L Bohn et al.

    2001-06-26

    The Next Linear Collider (NLC) Collaboration is planning to construct an Engineering Test Facility (ETF) at Fermilab. As presently envisioned, the ETF would comprise a fundamental unit of the NLC main linac to include X-band klystrons and modulators, a delay-line power-distribution system (DLDS), and NLC accelerating structures that serve as loads. The principal purpose of the ETF is to validate stable operation of the power-distribution system, first without beam, then with a beam having the NLC pulse structure. This paper concerns the possibility of configuring and using the ETF to accelerate beam with an NLC pulse structure, as well as of doing experiments to measure beam-induced wakefields in the rf structures and their influence back on the beam.

  2. Physics goals for the planned next linear collider engineering test facility.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohn, C.; Michelotti, L.; Ostiguy, J.-F.

    2001-07-17

    The Next Linear Collider (NLC) Collaboration is planning to construct an Engineering Test Facility (ETF) at Fermilab. As presently envisioned, the ETF would comprise a fundamental unit of the NLC main linac to include X-band klystrons and modulators, a delay-line power-distribution system (DLDS), and NLC accelerating structures that serve as loads. The principal purpose of the ETF is to validate stable operation of the power-distribution system, first without beam, then with a beam having the NLC pulse structure. This paper concerns the possibility of configuring and using the ETF to accelerate beam with an NLC pulse structure, as well as of doing experiments to measure beam-induced wakefields in the rf structures and their influence back on the beam.

  3. Analysis of Distribution of Vector-Borne Diseases Using Geographic Information Systems.

    PubMed

    Nihei, Naoko

    2017-01-01

    The distribution of vector-borne diseases is changing on a global scale owing to issues involving natural environments, socioeconomic conditions, and border disputes, among others. Geographic information systems (GIS) provide an important method of establishing a prompt and precise understanding of local data on disease outbreaks, from which disease eradication programs can be established. Having first defined GIS as a combination of GPS, RS and GIS, we showed the processes through which these technologies were being introduced into our research. GIS-derived geographical information attributes were interpreted in terms of point, area, line, spatial epidemiology, risk and development for generating the vector dynamic models associated with the spread of the disease. Interdisciplinary scientific and administrative collaboration in the use of GIS to control infectious diseases is highly warranted.

  4. Mars Polar Lander Mission Distributed Operations

    NASA Technical Reports Server (NTRS)

    Norris, J.; Backes, P.; Slostad, J.; Bonitz, R.; Tharp, G.; Tso, K.

    2000-01-01

    The Mars Polar Lander (MPL) mission is the first planetary mission to use Internet-based distributed ground operations where scientists and engineers collaborate in daily mission operations from multiple geographically distributed locations via the Internet.

  5. Not just for celebrities: collaborating with a PR representative to market library education services.

    PubMed

    Bloedel, Kimberly; Skhal, Kathryn

    2006-01-01

    Hardin Library for the Health Sciences offers an education service called Hardin House Calls. In collaboration with the University of Iowa libraries' public relations coordinator, the education team developed a marketing campaign for Hardin House Calls. Marketing strategies included designing a new logo, meeting with external relations representatives and faculty, distributing a user survey, and producing and distributing posters and advertisements. These marketing strategies greatly increased the visibility and use of Hardin House Calls. The campaign also led to a series of faculty development sessions, education collaborations with smaller health sciences departments, and collection development opportunities. Promoting an instructional service through a public relations framework was found to be a highly successful strategy.

  6. Open Science Grid (OSG) Ticket Synchronization: Keeping Your Home Field Advantage In A Distributed Environment

    NASA Astrophysics Data System (ADS)

    Gross, Kyle; Hayashi, Soichi; Teige, Scott; Quick, Robert

    2012-12-01

    Large distributed computing collaborations, such as the Worldwide LHC Computing Grid (WLCG), face many issues when it comes to providing a working grid environment for their users. One of these is exchanging tickets between the various ticketing systems in use by grid collaborations. Ticket systems such as Footprints, RT, Remedy, and ServiceNow all have different schemas that must be addressed in order to provide a reliable exchange of information between support entities and users in different grid environments. To combat this problem, OSG Operations has created a ticket synchronization interface called GOC-TX that relies on web services instead of the error-prone email-parsing methods of the past. Synchronizing tickets between different ticketing systems allows any user or support entity to work on a ticket in their home environment, thus providing a familiar and comfortable place to provide updates without having to learn another ticketing system. The interface is generic enough that it can be customized for nearly any ticketing system with a web-service interface with only minor changes. This allows us to be flexible and rapidly bring new ticket synchronizations online. Synchronization can be triggered by different methods including mail, a web-services interface, and active messaging. GOC-TX currently interfaces with Global Grid User Support (GGUS) for WLCG, Remedy at Brookhaven National Lab (BNL), and Request Tracker (RT) at the Virtual Data Toolkit (VDT). Work is progressing on the Fermi National Accelerator Laboratory (FNAL) ServiceNow synchronization. This paper will explain the problems faced by OSG and how they led OSG to create and implement this ticket synchronization system, along with the technical details that allow synchronization to be performed at a production level.
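
    The schema-mapping problem described here is classically solved with per-system adapters into a common ticket schema. The sketch below is a hypothetical illustration of that pattern, not GOC-TX code; the field names are invented, not the real GGUS or RT schemas:

```python
# Hypothetical adapter sketch: each ticketing system maps its fields into a
# shared common schema, so any pair of systems can exchange updates.
class Adapter:
    field_map = {}  # system-specific field name -> common field name

    def to_common(self, raw):
        return {self.field_map[k]: v for k, v in raw.items() if k in self.field_map}

    def from_common(self, ticket):
        inverse = {v: k for k, v in self.field_map.items()}
        return {inverse[k]: v for k, v in ticket.items() if k in inverse}

class GGUSAdapter(Adapter):
    field_map = {"request_id": "ticket_id", "subject": "summary", "state": "status"}

class RTAdapter(Adapter):
    field_map = {"id": "ticket_id", "Subject": "summary", "Status": "status"}

def synchronize(raw, source, target):
    """Translate a ticket update from one system's schema into another's."""
    return target.from_common(source.to_common(raw))

rt_ticket = synchronize({"request_id": 42, "subject": "CE down", "state": "open"},
                        GGUSAdapter(), RTAdapter())
```

    Supporting a new ticketing system then means writing one small adapter rather than a pairwise translator for every existing system, which matches the "only minor changes" claim in the abstract.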

  7. The Earth System Grid Federation (ESGF) Project

    NASA Astrophysics Data System (ADS)

    Carenton-Madiec, Nicolas; Denvil, Sébastien; Greenslade, Mark

    2015-04-01

    The Earth System Grid Federation (ESGF) Peer-to-Peer (P2P) enterprise system is a collaboration that develops, deploys and maintains software infrastructure for the management, dissemination, and analysis of model output and observational data. ESGF's primary goal is to facilitate advancements in Earth System Science. It is an interagency and international effort led by the US Department of Energy (DOE), and co-funded by the National Aeronautics and Space Administration (NASA), National Oceanic and Atmospheric Administration (NOAA), National Science Foundation (NSF), Infrastructure for the European Network of Earth System Modelling (IS-ENES) and international laboratories such as the Max Planck Institute for Meteorology (MPI-M), the German Climate Computing Centre (DKRZ), the Australian National University (ANU) National Computational Infrastructure (NCI), Institut Pierre-Simon Laplace (IPSL), and the British Atmospheric Data Centre (BADC). Its main mission is to support current CMIP5 activities and prepare for future assessments. The ESGF architecture is based on a system of autonomous and distributed nodes, which interoperate through common acceptance of federation protocols and trust agreements. Data is stored at multiple nodes around the world and served through local data and metadata services. Nodes exchange information about their data holdings and services, and trust each other for registering users and establishing access-control decisions. The net result is that a user can use a web browser, connect to any node, and seamlessly find and access data throughout the federation. This collaborative working organization and distributed architecture highlighted the need to define integration and testing processes that ensure the quality of software releases and interoperability. This presentation will introduce the ESGF project and demonstrate the range of tools and processes that have been set up to support release-management activities.

  8. Studying Research Collaboration Patterns via Co-authorship Analysis in the Field of TeL: The Case of "Educational Technology & Society" Journal

    ERIC Educational Resources Information Center

    Zervas, Panagiotis; Tsitmidelli, Asimenia; Sampson, Demetrios G.; Chen, Nian-Shing; Kinshuk

    2014-01-01

    Research collaboration is studied in different research areas, so as to provide useful insights on how researchers combine existing distributed scientific knowledge and transform it into new knowledge. Commonly used metrics for measuring research collaborative activity include, among others, the co-authored publications (concerned with who works…

  9. The Cognitive Processes Used in Team Collaboration During Asynchronous, Distributed Decision Making

    DTIC Science & Technology

    2004-06-01

Transfer Conventions (IPtcp) IP: Solution Alternatives (IPsa) KB: Collaborative Knowledge (KBck) KB: Shared Understanding (KBsu) KB: Domain...Gill.” KBsu: Knowledge Building (shared understanding) = using facts to justify a solution. “I think Eddie did it because he was hard of hearing...KB: Collaborative Knowledge (KBck) KB: Shared Understanding (KBsu) KB: Domain Expertise (IPde) ** = significant

  10. System Level Uncertainty Assessment for Collaborative RLV Design

    NASA Technical Reports Server (NTRS)

    Charania, A. C.; Bradford, John E.; Olds, John R.; Graham, Matthew

    2002-01-01

A collaborative design process utilizing Probabilistic Data Assessment (PDA) is showcased. Given the limited financial resources of both government and industry, strategic decision makers need more than just traditional point designs; they need to be aware of the likelihood that these future designs will meet their objectives. This uncertainty, an ever-present character in the design process, can be embraced through a probabilistic design environment. A conceptual design process is presented that encapsulates the major engineering disciplines for a Third Generation Reusable Launch Vehicle (RLV). Toolsets consist of aerospace industry standard tools in disciplines such as trajectory, propulsion, mass properties, cost, operations, safety, and economics. Variations of the design process are presented that use different fidelities of tools. The disciplinary engineering models are used in a collaborative engineering framework utilizing Phoenix Integration's ModelCenter and AnalysisServer environment. These tools allow the designer to join disparate models and simulations together in a unified environment wherein each discipline can interact with any other discipline. The design process also uses probabilistic methods to generate the system level output metrics of interest for a RLV conceptual design. The specific system being examined is the Advanced Concept Rocket Engine 92 (ACRE-92) RLV. Previous experience and knowledge (in terms of input uncertainty distributions from experts and modeling and simulation codes) can be coupled with Monte Carlo processes to best predict the chances of program success.
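The final step the abstract describes, propagating expert-elicited input uncertainty through a system model via Monte Carlo sampling to estimate the chance of program success, reduces to a loop like the following toy sketch. The distributions and the cost surrogate are invented for illustration and are not the ACRE-92 analysis:

```python
# Toy Monte Carlo sketch: sample uncertain design inputs from
# expert-supplied distributions, push them through a (here, trivial)
# performance model, and estimate the probability that the design
# meets its objective. All numbers are illustrative.
import random

random.seed(0)

def vehicle_cost(engine_cost, dry_mass):
    # Stand-in for the full multidisciplinary model.
    return engine_cost + 0.05 * dry_mass

N = 100_000
target = 120.0  # program "succeeds" if cost stays under this threshold
successes = 0
for _ in range(N):
    engine_cost = random.gauss(100.0, 10.0)           # normal uncertainty
    dry_mass = random.triangular(200.0, 400.0, 300.0)  # min, max, mode
    if vehicle_cost(engine_cost, dry_mass) < target:
        successes += 1

p_success = successes / N
print(f"Estimated probability of meeting cost target: {p_success:.2f}")
```

The output of a real PDA run is a distribution (or cumulative probability) over each system-level metric rather than a single point value, which is exactly what lets a decision maker read off risk directly.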

  11. Collaborative engineering and design management for the Hobby-Eberly Telescope tracker upgrade

    NASA Astrophysics Data System (ADS)

    Mollison, Nicholas T.; Hayes, Richard J.; Good, John M.; Booth, John A.; Savage, Richard D.; Jackson, John R.; Rafal, Marc D.; Beno, Joseph H.

    2010-07-01

    The engineering and design of systems as complex as the Hobby-Eberly Telescope's* new tracker require that multiple tasks be executed in parallel and overlapping efforts. When the design of individual subsystems is distributed among multiple organizations, teams, and individuals, challenges can arise with respect to managing design productivity and coordinating successful collaborative exchanges. This paper focuses on design management issues and current practices for the tracker design portion of the Hobby-Eberly Telescope Wide Field Upgrade project. The scope of the tracker upgrade requires engineering contributions and input from numerous fields including optics, instrumentation, electromechanics, software controls engineering, and site-operations. Successful system-level integration of tracker subsystems and interfaces is critical to the telescope's ultimate performance in astronomical observation. Software and process controls for design information and workflow management have been implemented to assist the collaborative transfer of tracker design data. The tracker system architecture and selection of subsystem interfaces has also proven to be a determining factor in design task formulation and team communication needs. Interface controls and requirements change controls will be discussed, and critical team interactions are recounted (a group-participation Failure Modes and Effects Analysis [FMEA] is one of special interest). This paper will be of interest to engineers, designers, and managers engaging in multi-disciplinary and parallel engineering projects that require coordination among multiple individuals, teams, and organizations.

  12. Terabytes to Megabytes: Data Reduction Onsite for Remote Limited Bandwidth Systems

    NASA Astrophysics Data System (ADS)

    Hirsch, M.

    2016-12-01

    Inexpensive, battery-powerable embedded computer systems such as the Intel Edison and Raspberry Pi have inspired makers of all ages to create and deploy sensor systems. Geoscientists are also leveraging such inexpensive embedded computers for solar-powered or other low-resource utilization systems for ionospheric observation. We have developed OpenCV-based machine vision algorithms to reduce terabytes per night of high-speed aurora video data down to megabytes of data to aid in automated sifting and retention of high-value data from the mountains of less interesting data. Given prohibitively expensive data connections in many parts of the world, such techniques may be generalizable to more than just the auroral video and passive FM radar implemented so far. After the automated algorithm decides which data to keep, automated upload and distribution techniques are relevant to avoid excessive delay and consumption of researcher time. Open-source collaborative software development enables data audiences from experts through citizen enthusiasts to access the data and make exciting plots. Open software and data aids in cross-disciplinary collaboration opportunities, STEM outreach and increasing public awareness of the contributions each geoscience data collection system makes.
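The keep/discard triage at the heart of the onsite reduction can be illustrated with a toy frame-differencing filter: score each frame by how much it changed from the previous one and retain only frames above a threshold. The abstract's OpenCV pipeline is far more sophisticated; this sketch only shows the shape of the idea, and the "frames" are invented:

```python
# Toy sketch of onsite data reduction: retain only video frames whose
# change from the previous frame exceeds a threshold, discarding the
# mountains of quiet-sky data. Frames here are flat lists of pixel
# intensities; a real pipeline would operate on image arrays.

def frame_activity(prev, cur):
    # Mean absolute pixel difference between consecutive frames.
    return sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)

def reduce_video(frames, threshold=10.0):
    kept = [0]  # always keep the first frame as a reference
    for i in range(1, len(frames)):
        if frame_activity(frames[i - 1], frames[i]) > threshold:
            kept.append(i)
    return kept

# Flat 4-pixel "frames": quiet sky, quiet sky, bright auroral arc, quiet.
frames = [
    [10, 10, 10, 10],
    [11, 10, 10, 9],        # near-identical: discard
    [200, 180, 150, 120],   # large change: keep
    [12, 11, 10, 10],       # large change back: keep
]
print(reduce_video(frames))  # → [0, 2, 3]
```

Only the retained indices (and their frames) would then be queued for upload over the limited-bandwidth link.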

  13. Collaboration, interdisciplinarity, and the epistemology of contemporary science.

    PubMed

    Andersen, Hanne

    2016-04-01

    Over the last decades, science has grown increasingly collaborative and interdisciplinary and has come to depart in important ways from the classical analyses of the development of science that were developed by historically inclined philosophers of science half a century ago. In this paper, I shall provide a new account of the structure and development of contemporary science based on analyses of, first, cognitive resources and their relations to domains, and second of the distribution of cognitive resources among collaborators and the epistemic dependence that this distribution implies. On this background I shall describe different ideal types of research activities and analyze how they differ. Finally, analyzing values that drive science towards different kinds of research activities, I shall sketch the main mechanisms underlying the perceived tension between disciplines and interdisciplinarity and argue for a redefinition of accountability and quality control for interdisciplinary and collaborative science. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2014-01-01

Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
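The quantization effect the abstract singles out, where a smart transducer digitizes the analog sensor reading so the feedback the controller sees differs slightly from the true value, can be sketched with a uniform ADC model. The range and bit width below are illustrative, not the C-MAPSS40k values:

```python
# Minimal sketch of measurement quantization in a smart transducer:
# clamp the analog value to the sensor range and round it to the
# nearest of 2**bits digital codes. The difference between the true
# and quantized values is the feedback error noted in the abstract.

def quantize(value, lo, hi, bits):
    levels = 2 ** bits - 1
    clamped = min(max(value, lo), hi)
    code = round((clamped - lo) / (hi - lo) * levels)
    return lo + code * (hi - lo) / levels

true_speed = 8123.7        # e.g. a shaft speed in rpm (illustrative)
measured = quantize(true_speed, 0.0, 10000.0, bits=12)
print(measured, abs(measured - true_speed))
```

For a 12-bit converter over this range the worst-case error is half a quantization step, about 1.2 rpm here; small, but enough to make the distributed controller's tracking differ measurably from the ideal baseline.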

  15. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2015-01-01

Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.

  16. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia Mae; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2014-01-01

Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (40,000 pound force thrust) (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.

  17. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  18. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  19. Smart-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat K; Palmintier, Bryan S; Hodge, Brian S

The National Renewable Energy Laboratory (NREL) in collaboration with Massachusetts Institute of Technology (MIT), Universidad Pontificia Comillas (Comillas-IIT, Spain) and GE Grid Solutions, is working on an ARPA-E GRID DATA project, titled Smart-DS, to create: 1) High-quality, realistic, synthetic distribution network models, and 2) Advanced tools for automated scenario generation based on high-resolution weather data and generation growth projections. Through these advancements, the Smart-DS project is envisioned to accelerate the development, testing, and adoption of advanced algorithms, approaches, and technologies for sustainable and resilient electric power systems, especially in the realm of U.S. distribution systems. This talk will present the goals and overall approach of the Smart-DS project, including the process of creating the synthetic distribution datasets using reference network model (RNM) and the comprehensive validation process to ensure network realism, feasibility, and applicability to advanced use cases. The talk will provide demonstrations of early versions of synthetic models, along with the lessons learnt from expert engagements to enhance future iterations. Finally, the scenario generation framework, its development plans, and co-ordination with GRID DATA repository teams to house these datasets for public access will also be discussed.

  20. Distributed agile software development for the SKA

    NASA Astrophysics Data System (ADS)

    Wicenec, Andreas; Parsons, Rebecca; Kitaeff, Slava; Vinsen, Kevin; Wu, Chen; Nelson, Paul; Reed, David

    2012-09-01

The SKA software will most probably be developed by many groups distributed across the globe and coming from different backgrounds, like industries and research institutions. The SKA software subsystems will have to cover a very wide range of different areas, but still they have to react and work together like a single system to achieve the scientific goals and satisfy the challenging data flow requirements. Designing and developing such a system in a distributed fashion requires proper tools and the setup of an environment to allow for efficient detection and tracking of interface and integration issues, in particular in a timely way. Agile development can provide much faster feedback mechanisms and also much tighter collaboration between the customer (scientist) and the developer. Continuous integration and continuous deployment on the other hand can provide much faster feedback of integration issues from the system level to the subsystem developers. This paper describes the results obtained from trialing a potential SKA development environment based on existing science software development processes like ALMA, the expected distribution of the groups potentially involved in the SKA development and experience gained in the development of large scale commercial software projects.

  1. ESIP Federation: A Case Study on Enabling Collaboration Infrastructure to Support Earth Science Informatics Communities

    NASA Astrophysics Data System (ADS)

    Robinson, E.; Meyer, C. B.; Benedict, K. K.

    2013-12-01

A critical part of effective Earth science data and information system interoperability involves collaboration across geographically and temporally distributed communities. The Federation of Earth Science Information Partners (ESIP) is a broad-based, distributed community of science, data and information technology practitioners from across science domains, economic sectors and the data lifecycle. ESIP's open, participatory structure provides a melting pot for coordinating around common areas of interest, experimenting on innovative ideas and capturing and finding best practices and lessons learned from across the network. Since much of ESIP's work is distributed, the Foundation for Earth Science was established as a non-profit home for its supportive collaboration infrastructure. The infrastructure leverages the Internet and recent advances in collaboration web services. ESIP provides neutral space for self-governed groups to emerge around common Earth science data and information issues, ebbing and flowing as the need for them arises. As a group emerges, the Foundation quickly equips the virtual workgroup with a set of 'commodity services'. These services include: web meeting technology (WebEx), a wiki and an email listserv. WebEx allows the group to work synchronously, dynamically viewing and discussing shared information in real time. The wiki is the group's primary workspace and over time creates organizational memory. The listserv provides an inclusive way to email the group and archive all messages for future reference. These three services lower the startup barrier for collaboration and enable automatic content preservation to allow for future work. While many of ESIP's consensus-building activities are discussion-based, the Foundation supports an ESIP testbed environment for exploring and evaluating prototype standards, services, protocols, and best practices. 
After community review of testbed proposals, the Foundation provides small seed funding and a toolbox of collaborative development resources including Amazon Web Services to quickly spin-up the testbed instance and a GitHub account for maintaining testbed project code enabling reuse. Recently, the Foundation supported development of the ESIP Commons (http://commons.esipfed.org), a Drupal-based knowledge repository for non-traditional publications to preserve community products and outcomes like white papers, posters and proceedings. The ESIP Commons adds additional structured metadata, provides attribution to contributors and allows those unfamiliar with ESIP a straightforward way to find information. The success of ESIP Federation activities is difficult to measure. The ESIP Commons is a step toward quantifying sponsor return on investment and is one dataset used in network map analysis of the ESIP community network, another success metric. Over the last 15 years, ESIP has continually grown and attracted experts in the Earth science data and informatics field becoming a primary locus of research and development on the application and evolution of Earth science data standards and conventions. As funding agencies push toward a more collaborative approach, the lessons learned from ESIP and the collaboration services themselves are a crucial component of supporting science research.

  2. Representing complexity well: a story about teamwork, with implications for how we teach collaboration.

    PubMed

    Lingard, Lorelei; McDougall, Allan; Levstik, Mark; Chandok, Natasha; Spafford, Marlee M; Schryer, Catherine

    2012-09-01

    In order to be relevant and impactful, our research into health care teamwork needs to better reflect the complexity inherent to this area. This study explored the complexity of collaborative practice on a distributed transplant team. We employed the theoretical lenses of activity theory to better understand the nature of collaborative complexity and its implications for current approaches to interprofessional collaboration (IPC) and interprofessional education (IPE). Over 4 months, two trained observers conducted 162 hours of observation, 30 field interviews and 17 formal interviews with 39 members of a solid organ transplant team in a Canadian teaching hospital. Participants included consultant medical and surgical staff and postgraduate trainees, the team nurse practitioner, social worker, dietician, pharmacist, physical therapist, bedside nurses, organ donor coordinators and organ recipient coordinators. Data collection and inductive analysis for emergent themes proceeded iteratively. Daily collaborative practice involves improvisation in the face of recurring challenges on a distributed team. This paper focuses on the theme of 'interservice' challenges, which represent instances in which the 'core' transplant team (those providing daily care for transplant patients) work to engage the expertise and resources of other services in the hospital, such as those of radiology and pathology departments. We examine a single story of the core team's collaboration with cardiology, anaesthesiology and radiology services to decide whether a patient is appropriate for transplantation and use this story to consider the team's strategies in the face of conflicting expectations and preferences among these services. This story of collaboration in a distributed team calls into question two premises underpinning current models of IPC and IPE: the notion that stable professional roles exist, and the ideal of a unifying objective of 'caring for the patient'. 
We suggest important elaborations to these premises as they are used to conceptualise and teach IPC in order to better represent the intricacy of everyday collaborative work in health care. © Blackwell Publishing Ltd 2012.

  3. Internet-based distributed collaborative environment for engineering education and design

    NASA Astrophysics Data System (ADS)

    Sun, Qiuli

    2001-07-01

This research investigates the use of the Internet for engineering education, design, and analysis through the presentation of a Virtual City environment. The main focus of this research was to provide an infrastructure for engineering education, test the concept of distributed collaborative design and analysis, develop and implement the Virtual City environment, and assess the environment's effectiveness in the real world. A three-tier architecture was adopted in the development of the prototype, which contains an online database server, a Web server as well as multi-user servers, and client browsers. The environment is composed of five components: a 3D virtual world, multiple Internet-based multimedia modules, an online database, a collaborative geometric modeling module, and a collaborative analysis module. The environment was designed using multiple Internet-based technologies, such as Shockwave, Java, Java 3D, VRML, Perl, ASP, SQL, and a database. These various technologies together formed the basis of the environment and were programmed to communicate smoothly with each other. Three assessments were conducted over a period of three semesters. The Virtual City is open to the public at www.vcity.ou.edu. The online database was designed to manage the changeable data related to the environment. The virtual world was used to implement 3D visualization and tie the multimedia modules together. Students are allowed to build segments of the 3D virtual world upon completion of appropriate undergraduate courses in civil engineering. The end result is a complete virtual world that contains designs from all of their coursework and is viewable on the Internet. The environment is a content-rich educational system, which can be used to teach multiple engineering topics with the help of 3D visualization, animations, and simulations. The concept of collaborative design and analysis using the Internet was investigated and implemented. 
Geographically dispersed users can build the same geometric model simultaneously over the Internet and communicate with each other through a chat room. They can also conduct finite element analysis collaboratively on the same object over the Internet. They can mesh the same object, apply and edit the same boundary conditions and forces, obtain the same analysis results, and then discuss the results through the Internet.

  4. Vroom: designing an augmented environment for remote collaboration in digital cinema production

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; Cornish, Tracy

    2013-03-01

As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverses this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production. 
This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.

  5. Data Transparency | Distributed Generation Interconnection Collaborative |

    Science.gov Websites

    quality and availability are increasingly vital for reducing the costs of distributed generation completion in certain areas, increasing accountability for utility application processing. As distributed PV NREL, HECO, TSRG Improving Data Transparency for the Distributed PV Interconnection Process: Emergent

  6. Collaborative Supervised Learning for Sensor Networks

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; Rebbapragada, Umaa; Lane, Terran

    2011-01-01

    Collaboration methods for distributed machine-learning algorithms involve the specification of communication protocols for the learners, which can query other learners and/or broadcast their findings preemptively. Each learner incorporates information from its neighbors into its own training set, and they are thereby able to bootstrap each other to higher performance. Each learner resides at a different node in the sensor network and makes observations (collects data) independently of the other learners. After being seeded with an initial labeled training set, each learner proceeds to learn in an iterative fashion. New data is collected and classified. The learner can then either broadcast its most confident classifications for use by other learners, or can query neighbors for their classifications of its least confident items. As such, collaborative learning combines elements of both passive (broadcast) and active (query) learning. It also uses ideas from ensemble learning to combine the multiple responses to a given query into a single useful label. This approach has been evaluated against current non-collaborative alternatives, including training a single classifier and deploying it at all nodes with no further learning possible, and permitting learners to learn from their own most confident judgments, absent interaction with their neighbors. On several data sets, it has been consistently found that active collaboration is the best strategy for a distributed learner network. The main advantages include the ability for learning to take place autonomously by collaboration rather than by requiring intervention from an oracle (usually human), and also the ability to learn in a distributed environment, permitting decisions to be made in situ and to yield faster response time.
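The broadcast/query protocol the abstract outlines can be sketched schematically: each learner classifies new items, keeps its confident labels (candidates for broadcast), and queries its neighbors about low-confidence items, combining answers by majority vote. The toy "classifier" below is just a threshold, and all names are illustrative:

```python
# Schematic sketch of collaborative learning in a sensor network:
# confident classifications stand on their own (and could be broadcast);
# uncertain ones trigger an active query to neighbors, resolved by a
# simple ensemble-style majority vote.
from collections import Counter

class Learner:
    def __init__(self, threshold):
        self.threshold = threshold   # stands in for a trained model
        self.neighbors = []

    def classify(self, x):
        # Returns (label, confidence in [0, 1]).
        label = int(x > self.threshold)
        confidence = min(abs(x - self.threshold) / 5.0, 1.0)
        return label, confidence

    def resolve(self, x, confident=0.8):
        label, conf = self.classify(x)
        if conf >= confident:
            return label              # confident: usable as-is
        # Active step: query neighbors, then take a majority vote.
        votes = Counter([label] + [n.classify(x)[0] for n in self.neighbors])
        return votes.most_common(1)[0][0]

a, b, c = Learner(5.0), Learner(4.0), Learner(6.0)
a.neighbors = [b, c]
print(a.resolve(5.2))  # near a's boundary, so neighbors are consulted
```

In the full scheme each resolved label would also be added to the learner's training set, which is what lets the nodes bootstrap each other to higher performance without an external oracle.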

  7. Distributed Interoperable Metadata Registry; How Do Physicists Use an E-Print Archive? Implications for Institutional E-Print Services; A Framework for Building Open Digital Libraries; Implementing Digital Sanborn Maps for Ohio: OhioLINK and OPLIN Collaborative Project.

    ERIC Educational Resources Information Center

    Blanchi, Christophe; Petrone, Jason; Pinfield, Stephen; Suleman, Hussein; Fox, Edward A.; Bauer, Charly; Roddy, Carol Lynn

    2001-01-01

    Includes four articles that discuss a distributed architecture for managing metadata that promotes interoperability between digital libraries; the use of electronic print (e-print) by physicists; the development of digital libraries; and a collaborative project between two library consortia in Ohio to provide digital versions of Sanborn Fire…

  8. Distributing and storing data efficiently by means of special datasets in the ATLAS collaboration

    NASA Astrophysics Data System (ADS)

    Köneke, Karsten; ATLAS Collaboration

    2011-12-01

    With the start of the LHC physics program, the ATLAS experiment started to record vast amounts of data. This data has to be distributed and stored on the world-wide computing grid in a smart way in order to enable an effective and efficient analysis by physicists. This article describes how the ATLAS collaboration chose to create specialized reduced datasets in order to efficiently use computing resources and facilitate physics analyses.

  9. Intelligent Agents for the Digital Battlefield

    DTIC Science & Technology

    1998-11-01

    specific outcome of our long-term research will be the development of a collaborative agent technology system, CATS, that will provide the underlying...software infrastructure needed to build large, heterogeneous, distributed agent applications. CATS will provide a software environment through which multiple...intelligent agents may interact with other agents, both human and computational. In addition, CATS will contain a number of intelligent agent components that will be useful for a wide variety of applications.

  10. Collaborative Autonomous Unmanned Aerial - Ground Vehicle Systems for Field Operations

    DTIC Science & Technology

    2007-08-31

    very limited payload capabilities of small UVs, sacrificing minimal computational power and run time, adhering at the same time to the low cost...configuration has been chosen because of its high computational capabilities, low power consumption, multiple I/O ports, size, low heat emission and cost. This...due to their high power to weight ratio, small packaging, and wide operating temperatures. Power distribution is controlled by the 120 Watt ATX power

  11. Open source tools for large-scale neuroscience.

    PubMed

    Freeman, Jeremy

    2015-06-01

    New technologies for monitoring and manipulating the nervous system promise exciting biology but pose challenges for analysis and computation. Solutions can be found in the form of modern approaches to distributed computing, machine learning, and interactive visualization. But embracing these new technologies will require a cultural shift: away from independent efforts and proprietary methods and toward an open source and collaborative neuroscience. Copyright © 2015 The Author. Published by Elsevier Ltd. All rights reserved.

  12. Characterizing Distributed Concurrent Engineering Teams: A Descriptive Framework for Aerospace Concurrent Engineering Design Teams

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Debarati; Hihn, Jairus; Warfield, Keith

    2011-01-01

    As aerospace missions grow larger and more technically complex in the face of ever tighter budgets, it will become increasingly important to use concurrent engineering methods in the development of early conceptual designs because of their ability to facilitate rapid assessments and trades in a cost-efficient manner. To successfully accomplish these complex missions with limited funding, it is also essential to effectively leverage the strengths of individuals and teams across government, industry, academia, and international agencies by increased cooperation between organizations. As a result, the existing concurrent engineering teams will need to increasingly engage in distributed collaborative concurrent design. This paper is an extension of a recent white paper written by the Concurrent Engineering Working Group, which details the unique challenges of distributed collaborative concurrent engineering. This paper includes a short history of aerospace concurrent engineering, and defines the terms 'concurrent', 'collaborative' and 'distributed' in the context of aerospace concurrent engineering. In addition, a model for the levels of complexity of concurrent engineering teams is presented to provide a way to conceptualize information and data flow within these types of teams.

  13. Beyond Music Sharing: An Evaluation of Peer-to-Peer Data Dissemination Techniques in Large Scientific Collaborations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ripeanu, Matei; Al-Kiswany, Samer; Iamnitchi, Adriana

    2009-03-01

    The avalanche of data from scientific instruments and the ensuing interest from geographically distributed users to analyze and interpret it accentuates the need for efficient data dissemination. A suitable data distribution scheme will find the delicate balance between conflicting requirements of minimizing transfer times, minimizing the impact on the network, and uniformly distributing load among participants. We identify several data distribution techniques, some successfully employed by today's peer-to-peer networks: staging, data partitioning, orthogonal bandwidth exploitation, and combinations of the above. We use simulations to explore the performance of these techniques in contexts similar to those used by today's data-centric scientific collaborations and derive several recommendations for efficient data dissemination. Our experimental results show that the peer-to-peer solutions that offer load balancing and good fault tolerance properties and have embedded participation incentives lead to unjustified costs in today's scientific data collaborations deployed on over-provisioned network cores. However, as user communities grow and these deployments scale, peer-to-peer data delivery mechanisms will likely outperform other techniques.
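
    As a toy illustration of the data-partitioning idea (not the paper's simulator), the round-based sketch below has one source seed a file's chunks while every peer pulls one missing chunk per round from any holder; the chunk names and the one-chunk-per-round policy are simplifying assumptions:

```python
def disseminate(num_peers, chunks, seed=0):
    """Round-based toy of partitioned (BitTorrent-style) delivery:
    each round, every peer fetches one chunk it is missing from any
    peer that held it at the start of the round. Returns the number
    of rounds until all peers hold the full chunk set."""
    have = [set() for _ in range(num_peers)]
    have[seed] = set(chunks)                 # one source holds everything
    rounds = 0
    while any(h != set(chunks) for h in have):
        rounds += 1
        snapshot = [set(h) for h in have]    # state at start of the round
        for p in range(num_peers):
            for c in sorted(set(chunks) - have[p]):
                if any(c in s for s in snapshot):
                    have[p].add(c)           # fetch one chunk, then stop
                    break
    return rounds
```

    Even this crude model shows the completion time scaling with the number of chunks rather than the number of peers, which is the appeal of partitioned delivery over pure staging.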

  14. Communication Optimizations for a Wireless Distributed Prognostic Framework

    NASA Technical Reports Server (NTRS)

    Saha, Sankalita; Saha, Bhaskar; Goebel, Kai

    2009-01-01

    Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics - particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step - resampling. We propose a new resampling scheme called parameterized resampling that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes is also presented. A battery health management system is used as a target application. A new resampling scheme for distributed implementation of particle filters has been discussed in this paper. Analysis and comparison of this new scheme with existing resampling schemes in the context of minimizing communication overhead have also been discussed. Our proposed new resampling scheme performs significantly better than other schemes, reducing both the length and the total number of communication messages exchanged while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system as well as full implementation of the new schemes on the Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
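
    For orientation, the centralized baseline that any distributed scheme must modify is the standard systematic resampling step of a particle filter. The sketch below shows only that baseline; the paper's parameterized resampling scheme is not reproduced here:

```python
import random

def systematic_resample(weights, rng=random.random):
    """Standard systematic resampling: map n (unnormalized) weights to n
    surviving particle indices using a single shared uniform offset."""
    n = len(weights)
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)                      # cumulative weight profile
    u0 = rng() / n                           # one random offset for all draws
    indices, j = [], 0
    for i in range(n):
        u = u0 + i / n                       # evenly spaced sample points
        while cdf[j] < u:
            j += 1
        indices.append(j)                    # particle j survives
    return indices
```

    The single shared offset is precisely what makes resampling awkward to distribute: every node's survivor set depends on the global cumulative weight profile, so naive implementations must exchange all weights over the network.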

  15. Solar Resource & Meteorological Assessment Project (SOLRMAP): Observed Atmospheric and Solar Information System (OASIS); Tucson, Arizona (Data)

    DOE Data Explorer

    Wilcox, S.; Andreas, A.

    2010-11-03

    The U.S. Department of Energy's National Renewable Energy Laboratory collaborates with the solar industry to establish high quality solar and meteorological measurements. This Solar Resource and Meteorological Assessment Project (SOLRMAP) provides high quality measurements to support deployment of power projects in the United States. The no-funds-exchanged collaboration brings NREL solar resource assessment expertise together with industry needs for measurements. The end result is high quality data sets to support the financing, design, and monitoring of large scale solar power projects for industry in addition to research-quality data for NREL model development. NREL provides consultation for instrumentation and station deployment, along with instrument calibrations, data acquisition, quality assessment, data distribution, and summary reports. Industry participants provide equipment, infrastructure, and station maintenance.

  16. Internet-enabled solutions for health care business problems.

    PubMed

    Kennedy, R; Geisler, M

    1997-01-01

    Many health care delivery organizations have built, installed, or made use of Nets. As single entities merge with others, and independent institutions become part of much larger delivery networks, the need for collaboration is critical. With the formation of such partnerships, existing platforms will become increasingly available from which it will be possible to build disparate technologies that must somehow be part of a single working "system." Nets can enable this leveraging, allowing access from multiple technological platforms. The collaboration, distribution, application integration, and messaging possibilities with the Nets are unprecedented. We believe that meeting a health care delivery organization's needs without these benefits will soon be unthinkable. While Nets are not the answer to the challenges facing health care delivery today, they certainly are a large contributor to the solution.

  17. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    Permanently increasing complexity of products and their manufacturing processes combined with a shorter "time-to-market" leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can either comprise software systems, hardware systems, or communication networks. An appropriate IT-infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation, the CAD system CATIA is used, which is coupled with the FEM-simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.

  18. Mother ship and physical agents collaboration

    NASA Astrophysics Data System (ADS)

    Young, Stuart H.; Budulas, Peter P.; Emmerman, Philip J.

    1999-07-01

    This paper discusses ongoing research at the U.S. Army Research Laboratory that investigates the feasibility of developing a collaboration architecture between small physical agents and a mother ship. This includes the distribution of planning, perception, mobility, processing and communications requirements between the mother ship and the agents. Small physical agents of the future will be virtually everywhere on the battlefield of the 21st century. A mother ship that is coupled to a team of small collaborating physical agents (conducting tasks such as Reconnaissance, Surveillance, and Target Acquisition (RSTA); logistics; sentry; and communications relay) will be used to build a completely effective and mission capable intelligent system. The mother ship must have long-range mobility to deploy the small, highly maneuverable agents that will operate in urban environments and more localized areas, and act as a logistics base for the smaller agents. The mother ship also establishes a robust communications network between the agents and is the primary information disseminating and receiving point to the external world. Because of its global knowledge and processing power, the mother ship does the high-level control and planning for the collaborative physical agents. This high-level control and interaction between the mother ship and its agents (including inter-agent collaboration) will be based on a software agent architecture. The mother ship incorporates multi-resolution battlefield visualization and analysis technology, which aids in mission planning and sensor fusion.

  19. Distributed data collection for a database of radiological image interpretations

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  20. Managing uncertainty in collaborative robotics engineering projects: The influence of task structure and peer interaction

    NASA Astrophysics Data System (ADS)

    Jordan, Michelle

    Uncertainty is ubiquitous in life, and learning is an activity particularly likely to be fraught with uncertainty. Previous research suggests that students and teachers struggle in their attempts to manage the psychological experience of uncertainty and that students often fail to experience uncertainty when uncertainty may be warranted. Yet, few educational researchers have explicitly and systematically observed what students do, their behaviors and strategies, as they attempt to manage the uncertainty they experience during academic tasks. In this study I investigated how students in one fifth grade class managed uncertainty they experienced while engaged in collaborative robotics engineering projects, focusing particularly on how uncertainty management was influenced by task structure and students' interactions with their peer collaborators. The study was initiated at the beginning of instruction related to robotics engineering and proceeded through the completion of several long-term collaborative robotics projects, one of which was a design project. I relied primarily on naturalistic observation of group sessions, semi-structured interviews, and collection of artifacts. My data analysis was inductive and interpretive, using qualitative discourse analysis techniques and methods of grounded theory. Three theoretical frameworks influenced the conception and design of this study: community of practice, distributed cognition, and complex adaptive systems theory. Uncertainty was a pervasive experience for the students collaborating in this instructional context. Students experienced uncertainty related to the project activity and uncertainty related to the social system as they collaborated to fulfill the requirements of their robotics engineering projects. They managed their uncertainty through a diverse set of tactics for reducing, ignoring, maintaining, and increasing uncertainty.
Students experienced uncertainty from more different sources and used more and different types of uncertainty management strategies in the less structured task setting than in the more structured task setting. Peer interaction was influential because students relied on supportive social response to enact most of their uncertainty management strategies. When students could not garner socially supportive response from their peers, their options for managing uncertainty were greatly reduced.

  1. SOMWeb: a semantic web-based system for supporting collaboration of distributed medical communities of practice.

    PubMed

    Falkman, Göran; Gustafsson, Marie; Jontell, Mats; Torgersson, Olof

    2008-08-26

    Information technology (IT) support for remote collaboration of geographically distributed communities of practice (CoP) in health care must deal with a number of sociotechnical aspects of communication within the community. In the mid-1990s, participants of the Swedish Oral Medicine Network (SOMNet) began discussing patient cases in telephone conferences. The cases were distributed prior to the conferences using PowerPoint and email. For the technical support of online CoP, Semantic Web technologies can potentially fulfill needs of knowledge reuse, data exchange, and reasoning based on ontologies. However, more research is needed on the use of Semantic Web technologies in practice. The objectives of this research were to (1) study the communication of distributed health care professionals in oral medicine; (2) apply Semantic Web technologies to describe community data and oral medicine knowledge; (3) develop an online CoP, Swedish Oral Medicine Web (SOMWeb), centered on user-contributed case descriptions and meetings; and (4) evaluate SOMWeb and study how work practices change with IT support. Based on Java, and using the Web Ontology Language and Resource Description Framework for handling community data and oral medicine knowledge, SOMWeb was developed using a user-centered and iterative approach. For studying the work practices and evaluating the system, a mixed-method approach of interviews, observations, and a questionnaire was used. By May 2008, there were 90 registered users of SOMWeb, 93 cases had been added, and 18 meetings had utilized the system. The introduction of SOMWeb has improved the structure of meetings and their discussions, and a tenfold increase in the number of participants has been observed. Users submit cases to seek advice on diagnosis or treatment, to show an unusual case, or to create discussion. 
    Identified barriers to submitting cases are lack of time, concern about whether the case is interesting enough, and showing gaps in one's own knowledge. Three levels of member participation are discernible: a core group that contributes most cases and most meeting feedback; an active group that participates often but only sometimes contributes cases and feedback; and a large peripheral group that seldom or never contributes cases or feedback. SOMWeb is beneficial for individual clinicians as well as for the SOMNet community. The system provides an opportunity for its members to share both high quality clinical practice knowledge and external evidence related to complex oral medicine cases. The foundation in Semantic Web technologies enables formalization and structuring of case data that can be used for further reasoning and research. Main success factors are the long history of collaboration between different disciplines, the user-centered development approach, the existence of a "champion" within the field, and nontechnical community aspects already being in place.
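
    The formalization the abstract credits to Semantic Web technologies boils down to describing each case as subject-predicate-object triples that can later be queried and reasoned over. The sketch below uses a plain-Python triple store; the vocabulary (som:lesionSite and so on) is invented for illustration and is not SOMWeb's actual ontology:

```python
# Hypothetical case descriptions as RDF-style (subject, predicate, object)
# triples; the som: and case: names are made up for this sketch.
triples = {
    ("case:17", "rdf:type", "som:OralMedicineCase"),
    ("case:17", "som:lesionSite", "som:BuccalMucosa"),
    ("case:17", "som:tentativeDiagnosis", "som:LichenPlanus"),
    ("case:18", "rdf:type", "som:OralMedicineCase"),
    ("case:18", "som:lesionSite", "som:Tongue"),
}

def query(s=None, p=None, o=None):
    """Pattern-match over the triple store; None acts as a wildcard,
    like a single-pattern SPARQL query."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]
```

    Because every field is an explicit triple rather than free text on a PowerPoint slide, questions like "all cases with a given lesion site" become mechanical queries, which is the structuring benefit the study reports.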

  2. SOMWeb: A Semantic Web-Based System for Supporting Collaboration of Distributed Medical Communities of Practice

    PubMed Central

    Gustafsson, Marie; Jontell, Mats; Torgersson, Olof

    2008-01-01

    Background Information technology (IT) support for remote collaboration of geographically distributed communities of practice (CoP) in health care must deal with a number of sociotechnical aspects of communication within the community. In the mid-1990s, participants of the Swedish Oral Medicine Network (SOMNet) began discussing patient cases in telephone conferences. The cases were distributed prior to the conferences using PowerPoint and email. For the technical support of online CoP, Semantic Web technologies can potentially fulfill needs of knowledge reuse, data exchange, and reasoning based on ontologies. However, more research is needed on the use of Semantic Web technologies in practice. Objectives The objectives of this research were to (1) study the communication of distributed health care professionals in oral medicine; (2) apply Semantic Web technologies to describe community data and oral medicine knowledge; (3) develop an online CoP, Swedish Oral Medicine Web (SOMWeb), centered on user-contributed case descriptions and meetings; and (4) evaluate SOMWeb and study how work practices change with IT support. Methods Based on Java, and using the Web Ontology Language and Resource Description Framework for handling community data and oral medicine knowledge, SOMWeb was developed using a user-centered and iterative approach. For studying the work practices and evaluating the system, a mixed-method approach of interviews, observations, and a questionnaire was used. Results By May 2008, there were 90 registered users of SOMWeb, 93 cases had been added, and 18 meetings had utilized the system. The introduction of SOMWeb has improved the structure of meetings and their discussions, and a tenfold increase in the number of participants has been observed. Users submit cases to seek advice on diagnosis or treatment, to show an unusual case, or to create discussion. 
    Identified barriers to submitting cases are lack of time, concern about whether the case is interesting enough, and showing gaps in one’s own knowledge. Three levels of member participation are discernible: a core group that contributes most cases and most meeting feedback; an active group that participates often but only sometimes contributes cases and feedback; and a large peripheral group that seldom or never contributes cases or feedback. Conclusions SOMWeb is beneficial for individual clinicians as well as for the SOMNet community. The system provides an opportunity for its members to share both high quality clinical practice knowledge and external evidence related to complex oral medicine cases. The foundation in Semantic Web technologies enables formalization and structuring of case data that can be used for further reasoning and research. Main success factors are the long history of collaboration between different disciplines, the user-centered development approach, the existence of a “champion” within the field, and nontechnical community aspects already being in place. PMID:18725355

  3. Providing Health Sciences Services in a Joint-Use Distributed Learning Library System: An Organizational Case Study.

    PubMed

    Enslow, Electra; Fricke, Suzanne; Vela, Kathryn

    2017-01-01

    The purpose of this organizational case study is to describe the complexities librarians face when serving a multi-campus institution that supports both a joint-use library and expanding health sciences academic partnerships. In a system without a centralized health science library administration, liaison librarians are identifying dispersed programs and user groups and collaborating to define their unique service and outreach needs within a larger land-grant university. Using a team-based approach, health sciences librarians are communicating to integrate research and teaching support, systems differences across dispersed campuses, and future needs of a new community-based medical program.

  4. Web Based Prognostics and 24/7 Monitoring

    NASA Technical Reports Server (NTRS)

    Strautkalns, Miryam; Robinson, Peter

    2013-01-01

    We created a general framework for analysts to store and view data in a way that removes the boundaries created by operating systems, programming languages, and proximity. With the advent of HTML5 and CSS3 with JavaScript, the distribution of information is limited only to those who lack a browser. We created a framework based on the methodology: one server, one web based application. Additional benefits are increased opportunities for collaboration. Today the idea of a group in a single room is antiquated. Groups communicate and collaborate with others from other universities, organizations, and even other continents across time zones. There are many varieties of data gathering and condition-monitoring software available, as well as companies who specialize in customizing software to individual applications. A single group will depend on multiple languages, environments, and computers to oversee recording and collaboration in a single lab. The heterogeneous nature of the system creates challenges for seamless exchange of data and ideas between members. To address these limitations we designed a framework to allow users seamless accessibility to their data. Our framework was deployed using the data feed on the NASA Ames' planetary rover testbed. Our paper demonstrates the process and implementation we followed on the rover.
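
    The "one server, one web based application" idea reduces to serving every client the same data feed over HTTP, so any machine with a browser can view it regardless of operating system or language. The sketch below uses only Python's standard library; the channel name and sample values are invented for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical in-memory telemetry feed; field names are illustrative only.
FEED = {"channel": "rover/battery_v", "samples": [24.1, 24.0, 23.8]}

class FeedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(FEED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)               # every client gets the same feed

    def log_message(self, *args):            # silence per-request logging
        pass

# One server; urllib stands in for a browser-based client here.
server = ThreadingHTTPServer(("127.0.0.1", 0), FeedHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
server.shutdown()
```

    In a real deployment the JSON endpoint would be rendered by an HTML5/JavaScript front end, but the client-side contract is the same: fetch the feed, draw it.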

  5. Surgical model-view-controller simulation software framework for local and collaborative applications

    PubMed Central

    Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu

    2010-01-01

    Purpose Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. Methods A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. Results The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. Conclusion A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users. PMID:20714933
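
    The decoupling described here (haptics serviced at roughly 1,000 Hz, graphics far slower, both reading one shared model) can be illustrated with simulated time. The rates and the model contents below are illustrative, and real frameworks run the loops in separate threads rather than one stepped loop:

```python
def run_decoupled(duration_s=1.0, haptic_hz=1000, view_hz=60):
    """Step one shared model at the haptic rate while a slower graphics
    'viewer' samples it at its own period. Tick counts are deterministic
    because time is simulated, not wall-clock."""
    model = {"tool_depth": 0.0}
    haptic_ticks = view_ticks = 0
    dt = 1.0 / haptic_hz                   # step at the finest rate
    next_view = 0.0
    t = 0.0
    while t < duration_s:
        model["tool_depth"] += 0.001       # controller updates the model
        haptic_ticks += 1                  # haptic viewer reads every step
        if t >= next_view:                 # graphics viewer reads less often
            view_ticks += 1
            next_view += 1.0 / view_hz
        t += dt
    return haptic_ticks, view_ticks
```

    The point of the MVC decoupling is exactly this asymmetry: the model is advanced and read at 1,000 Hz for force feedback while the renderer, or a remote collaborator across a slow network, consumes snapshots at whatever rate it can sustain.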

  6. Surgical model-view-controller simulation software framework for local and collaborative applications.

    PubMed

    Maciel, Anderson; Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu

    2011-07-01

    Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users.

  7. The Collaborative Payer Provider Model Enhances Primary Care, Producing Triple Aim Plus One Outcomes: A Cohort Study.

    PubMed

    Doerr, Thomas; Olsen, Lisa; Zimmerman, Deborah

    2017-08-27

    Rising health care costs are threatening the fiscal solvency of patients, employers, payers, and governments. The Collaborative Payer Provider Model (CPPM) addresses this challenge by reinventing the role of the payer into a full-service collaborative ally of the physician. From 2010 through 2014, a Medicare Advantage plan prospectively deployed the CPPM, averaging 30,561 members with costs that were 73.6% of fee-for-service (FFS) Medicare (p < 0.001). The health plan was not part of an integrated delivery system. After allocating $80 per member per month (PMPM) for primary care costs, the health plan had medical cost ratios averaging 75.1% before surplus distribution. Member benefits were the best in the market. The health plan was rated 4.5 Stars by the Centers for Medicare and Medicaid Services for years 1-4, and 5 Stars in study year 5 for quality, patient experience, access to care, and care process metrics. Primary care and specialist satisfaction were significantly better than national benchmarks. Savings resulted from shifts in spending from inpatient to outpatient settings, and from specialists to primary care physicians when appropriate. The CPPM is a scalable model that enables a win-win-win system for patients, providers, and payers.

  8. SeeStar: an open-source, low-cost imaging system for subsea observations

    NASA Astrophysics Data System (ADS)

    Cazenave, F.; Kecy, C. D.; Haddock, S.

    2016-02-01

    Scientists and engineers at the Monterey Bay Aquarium Research Institute (MBARI) have collaborated to develop SeeStar, a modular, lightweight, self-contained, low-cost subsea imaging system for short- to long-term monitoring of marine ecosystems. SeeStar is composed of separate camera, battery, and LED lighting modules. Two versions of the system exist: one rated to 300 meters depth, the other to 1500 meters. Users can download plans and instructions from an online repository and build the system from low-cost off-the-shelf components. The system uses an easily programmable Arduino-based controller and the widely distributed GoPro camera. It can capture still images and video in a variety of scenarios and can be operated either autonomously or tethered on a range of platforms, including ROVs, AUVs, landers, piers, and moorings. Several SeeStar systems have been built and used for scientific studies and engineering tests. The long-term goal of this project is a widely distributed marine imaging network spanning thousands of locations, to develop baselines of biological information.

  9. Coordinating Representations

    DTIC Science & Technology

    2006-04-07

    4 COGNITIVE THEORY OF INTERSUBJECTIVITY...adaptive component that is created and that the use of that component improves their performance. 2 Project Summary Objectives Develop cognitive theory of...distributed collaboration among a heterogeneous team of actors. Theory explains how collaborators share a common understanding of their cooperative

  10. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) Technology Infrastructure for a Distributed Data Network

    PubMed Central

    Schilling, Lisa M.; Kwan, Bethany M.; Drolshagen, Charles T.; Hosokawa, Patrick W.; Brandt, Elias; Pace, Wilson D.; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R.O.; Stephens, William E.; George, Joseph M.; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K.; Kahn, Michael G.

    2013-01-01

    Introduction: Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. Methods: The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. Discussion: SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions. PMID:25848567

  11. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) Technology Infrastructure for a Distributed Data Network.

    PubMed

    Schilling, Lisa M; Kwan, Bethany M; Drolshagen, Charles T; Hosokawa, Patrick W; Brandt, Elias; Pace, Wilson D; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R O; Stephens, William E; George, Joseph M; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K; Kahn, Michael G

    2013-01-01

    Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions.

  12. Distributed 3D Information Visualization - Towards Integration of the Dynamic 3D Graphics and Web Services

    NASA Astrophysics Data System (ADS)

    Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris

    This paper focuses on the visualization and manipulation of graphical content in distributed network environments. The graphical middleware and 3D desktop prototypes developed were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radioactive or other pollutant spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind; existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated, and we demonstrate the flow of data originating from heterogeneous data sources, interoperability across different operating systems, and 3D visual representations that enhance end-user interaction.

  13. A Process for Comparing Dynamics of Distributed Space Systems Simulations

    NASA Technical Reports Server (NTRS)

    Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.

    2009-01-01

    The paper describes a process that was developed for comparing the primary orbital dynamics behavior between distributed space systems simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of orbital dynamics modeling between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim) (formerly known as the Distributed Space Exploration Simulation (DSES)) is a NASA research and development project focusing on the technologies and processes related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.

  14. Threshold quantum cryptography

    NASA Astrophysics Data System (ADS)

    Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki

    2005-01-01

    We present the concept of threshold collaborative unitary transformation, or threshold quantum cryptography, a quantum counterpart of threshold cryptography. In threshold quantum cryptography, classical shared secrets are distributed to several parties, and a subset of them, whose number exceeds a threshold, collaborates to compute a quantum cryptographic function while each party keeps its share secret. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed (threshold) protocol for conjugate coding.
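    The classical threshold primitive this scheme builds on can be illustrated with Shamir secret sharing, where any t of n shares reconstruct a secret and fewer reveal nothing. A minimal sketch, with an illustrative field size and API; this is the classical analogue only, not the quantum protocol described above:

```python
# Minimal (t, n) Shamir secret sharing over GF(P); parameters are illustrative.
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is modulo P

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):  # degree t-1 polynomial with f(0) = secret
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse, since P is prime
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=2, n=3)
```

    Any two of the three shares suffice, which mirrors the "subset above a threshold collaborates" property of the quantum construction.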

  15. Cassini Information Management System in Distributed Operations Collaboration and Cassini Science Planning

    NASA Technical Reports Server (NTRS)

    Equils, Douglas J.

    2008-01-01

    Launched on October 15, 1997, the Cassini-Huygens spacecraft began its ambitious journey to the Saturnian system with a complex suite of 12 scientific instruments, and another 6 instruments aboard the European Space Agency's Huygens probe. Over the next 6 1/2 years, Cassini would continue its relatively simple cruise-phase operations, flying past Venus, Earth, and Jupiter. However, following Saturn Orbit Insertion (SOI), Cassini would become involved in a complex series of tasks that required detailed resource management, distributed operations collaboration, and a database for capturing science objectives. Collectively, these needs were met through a web-based software tool designed to help with the Cassini uplink process and ultimately used to generate more robust sequences for spacecraft operations. In 2001, in conjunction with the Southwest Research Institute (SwRI) and later Venustar Software and Engineering Inc., the Cassini Information Management System (CIMS) was released, enabling the Cassini spacecraft and science planning teams to perform complex information management and team collaboration between scientists and engineers in 17 countries. Originally tailored to help manage the science planning uplink process, CIMS has been actively evolving since its inception to meet the changing and growing needs of the Cassini uplink team and to effectively reduce mission risk through a series of resource management validation algorithms. These algorithms have been implemented in the web-based software tool to identify potential sequence conflicts early in the science planning process. CIMS mitigates these conflicts by identifying timing incongruities, pointing inconsistencies, flight rule violations, and data volume issues, and by assisting in Deep Space Network (DSN) coverage analysis.
    In preparation for extended mission operations, CIMS has also evolved to assist in the planning and coordination of the dual-playback redundancy of high-value data from targets such as Titan and Enceladus. This paper outlines the critical role that CIMS has played for Cassini in the distributed operations paradigm throughout the mission, examines the evolution CIMS has undergone in the face of new science discoveries and fluctuating operational needs, and concludes with a theoretical adaptation of CIMS for other projects and the cost savings and risk reduction that future missions could realize.

  16. Internet-enabled collaborative agent-based supply chains

    NASA Astrophysics Data System (ADS)

    Shen, Weiming; Kremer, Rob; Norrie, Douglas H.

    2000-12-01

    This paper presents some results of our recent research related to the development of a new Collaborative Agent System Architecture (CASA) and an Infrastructure for Collaborative Agent Systems (ICAS). Initially proposed as a general architecture for Internet-based collaborative agent systems (particularly complex industrial collaborative agent systems), the architecture is well suited to managing the Internet-enabled complex supply chain of a large manufacturing enterprise. The general collaborative agent system architecture, with its basic communication and cooperation services, domain-independent components, prototypes and mechanisms, is described. Benefits of implementing Internet-enabled supply chains with the proposed infrastructure are discussed. A case study on Internet-enabled supply chain management is presented.

  17. The development and performance of smud grid-connected photovoltaic projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, D.E.; Collier, D.E.

    1995-11-01

    The utility grid-connected market has been identified as a key market to be developed to accelerate the commercialization of photovoltaics. The Sacramento Municipal Utility District (SMUD) has completed the first two years of a continuing commercialization effort based on the sustained, orderly development of the grid-connected, utility PV market. This program is aimed at developing the experience needed to successfully integrate PV as distributed generation into the utility system and to stimulate the collaborative processes needed to accelerate the cost reductions necessary for PV to be cost-effective in these applications by the year 2000. In the first two years, SMUD has installed over 240 residential and commercial building, grid-connected, rooftop "PV Pioneer" systems totaling over 1 MW of capacity and four substation-sited, grid-support PV systems totaling 600 kW, bringing the SMUD distributed PV power systems to over 3.7 MW. The 1995 SMUD PV Program will add approximately another 800 kW of PV systems to the District's distributed PV power system. SMUD also established a partnership with its customers through the PV Pioneer "green pricing" program to advance PV commercialization.

  18. Online System for Faster Multipoint Linkage Analysis via Parallel Execution on Thousands of Personal Computers

    PubMed Central

    Silberstein, M.; Tzemach, A.; Dovgolevsky, N.; Fishelson, M.; Schuster, A.; Geiger, D.

    2006-01-01

    Computation of LOD scores is a valuable tool for mapping disease-susceptibility genes in the study of Mendelian and complex diseases. However, computation of exact multipoint likelihoods of large inbred pedigrees with extensive missing data is often beyond the capabilities of a single computer. We present a distributed system called “SUPERLINK-ONLINE,” for the computation of multipoint LOD scores of large inbred pedigrees. It achieves high performance via the efficient parallelization of the algorithms in SUPERLINK, a state-of-the-art serial program for these tasks, and through the use of the idle cycles of thousands of personal computers. The main algorithmic challenge has been to efficiently split a large task for distributed execution in a highly dynamic, nondedicated running environment. Notably, the system is available online, which allows computationally intensive analyses to be performed with no need for either the installation of software or the maintenance of a complicated distributed environment. As the system was being developed, it was extensively tested by collaborating medical centers worldwide on a variety of real data sets, some of which are presented in this article. PMID:16685644
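    The parallelization principle described above, splitting a large likelihood computation into independent pieces and combining the partial results, can be sketched as follows. The scoring function, data, and chunking parameters are illustrative assumptions, not SUPERLINK's actual algorithm.

```python
# Split-and-merge sketch: a likelihood over independent markers factorizes,
# so each chunk can be scored on a different worker and the partial
# log-likelihoods combined by addition.
import math
from concurrent.futures import ThreadPoolExecutor

def chunk_loglik(chunk):
    """Stand-in per-chunk score: sum of log-probabilities."""
    return sum(math.log(p) for p in chunk)

def distributed_loglik(probs, n_workers=4, chunk_size=3):
    chunks = [probs[i:i + chunk_size] for i in range(0, len(probs), chunk_size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(chunk_loglik, chunks))  # score chunks concurrently
    return sum(partials)  # independent pieces combine by addition in log space

probs = [0.9, 0.8, 0.5, 0.7, 0.95, 0.6, 0.85]
```

    The result is independent of how the work is chunked, which is what makes re-splitting a task for a dynamic, nondedicated pool of machines safe.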

  19. Secret Key Generation via a Modified Quantum Secret Sharing Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith IV, Amos M; Evans, Philip G; Lawrie, Benjamin J

    We present and experimentally demonstrate a novel protocol for distributing secret information between two and only two parties in an N-party single-qubit Quantum Secret Sharing (QSS) system. We demonstrate this new algorithm with N = 3 active parties over 6 km of telecom fiber. Our experimental device is based on the Clavis2 Quantum Key Distribution (QKD) system built by ID Quantique but is generalizable to any implementation. We show that any two of the N parties can build secret keys based on partial information from each other and with collaboration from the remaining N - 2 parties. This algorithm allows for the creation of two-party secret keys where standard QSS does not, and significantly reduces the number of resources needed to implement QKD on a highly connected network such as the electrical grid.

  20. Distributed Earth observation data integration and on-demand services based on a collaborative framework of geospatial data service gateway

    NASA Astrophysics Data System (ADS)

    Xie, Jibo; Li, Guoqing

    2015-04-01

    Earth observation (EO) data obtained by air-borne or space-borne sensors are heterogeneous and geographically distributed in storage. The data sources belong to different organizations or agencies whose data management and storage methods differ widely, and each source provides its own publishing platform or portal. As more remote sensing sensors are used for EO missions, different space agencies have accumulated distributed archives of massive EO data. This distribution of archives and the heterogeneity of the systems make it difficult to use geospatial data efficiently in many EO applications, such as hazard mitigation. To solve the interoperability problems of different EO data systems, this paper introduces an advanced architecture for distributed geospatial data infrastructure that addresses the complexity of integrating and processing distributed, heterogeneous EO data on demand. The concept and architecture of the geospatial data service gateway (GDSG) is proposed to connect heterogeneous EO data sources so that EO data can be retrieved and accessed through unified interfaces. The GDSG consists of a set of tools and services that encapsulate heterogeneous geospatial data sources into homogeneous service modules, including EO metadata harvesters and translators, adaptors for different types of data systems, unified data query and access interfaces, EO data cache management, and a gateway GUI. The GDSG framework implements interoperability and synchronization between distributed EO data sources with heterogeneous architectures. An on-demand distributed EO data platform was developed to validate the GDSG architecture and implementation techniques, using several distributed EO data archives for testing; flood and earthquake response serve as the two use-case scenarios for distributed EO data integration and interoperability.
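    The gateway's encapsulation of heterogeneous sources behind unified interfaces can be sketched with a simple adaptor pattern. The class names, record fields, and query signature below are invented for illustration and are not the GDSG API:

```python
# Per-source adaptors translate one unified query into each archive's
# native interface and normalize the results to a common record shape.
class SourceAdaptor:
    """Base class: wrap a heterogeneous EO archive behind a common interface."""
    def query(self, bbox, start, end):
        raise NotImplementedError

class AgencyAAdaptor(SourceAdaptor):
    def __init__(self, records):
        self.records = records          # stand-in for a native catalogue
    def query(self, bbox, start, end):
        # native records use 'date'; normalize to the unified 'acquired' field
        return [{"id": r["id"], "acquired": r["date"], "source": "A"}
                for r in self.records if start <= r["date"] <= end]

class AgencyBAdaptor(SourceAdaptor):
    def __init__(self, records):
        self.records = records
    def query(self, bbox, start, end):
        # this source names its fields differently; the adaptor hides that
        return [{"id": r["granule"], "acquired": r["t"], "source": "B"}
                for r in self.records if start <= r["t"] <= end]

def gateway_query(adaptors, bbox, start, end):
    """Fan one unified query out to every registered source and merge results."""
    hits = []
    for a in adaptors:
        hits.extend(a.query(bbox, start, end))
    return sorted(hits, key=lambda h: h["acquired"])

adaptors = [AgencyAAdaptor([{"id": "a1", "date": "2014-03-01"}]),
            AgencyBAdaptor([{"granule": "b7", "t": "2014-02-10"}])]
results = gateway_query(adaptors, bbox=None, start="2014-01-01", end="2014-12-31")
```

    Adding a new archive means writing one adaptor; callers of the gateway never see the source-specific schemas.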

  1. Regional Educational Laboratory Electronic Network Phase 2 System

    NASA Technical Reports Server (NTRS)

    Cradler, John

    1995-01-01

    The Far West Laboratory, in collaboration with the other regional educational laboratories, is establishing a regionally coordinated telecommunication network to electronically interconnect each of the ten regional laboratories with educators and education stakeholders from the school to the state level. For the national distributed information database, each lab is working with mid-level networks to establish a common interface for networking throughout the country and to include topics of importance to education reform, such as assessment and technology planning.

  2. Clouds and Water Vapor in the Climate System and Radiative Transfer in Clear Air and Cirrus Clouds in the Tropics

    NASA Technical Reports Server (NTRS)

    Anderson, James G.; DeSouza-Machado, Sergio; Strow, L. Larrabee

    2002-01-01

    Research supported under this grant was aimed at attacking unanswered scientific questions that lie at the intersection of radiation, dynamics, chemistry, and climate. Considerable emphasis was placed on scientific collaboration and the innovative development of instruments required to address these issues. Specific questions include water vapor distribution in the tropical troposphere, atmospheric radiation, thin cirrus clouds, stratosphere-troposphere exchange, and correlative science with satellite observations.

  3. Modeling and Control in Distributed Parameter Physical Systems.

    DTIC Science & Technology

    1998-05-15

    Albanese); May 5-7 (Albanese). The collaborations intensified during 1997 with the following visits: 1. June 15-August 30, 1997: C. Musante , a graduate...November 13, 1997: H.T. Banks and C. Musante visited with Jeff Fisher and colleagues in the toxicology group at Wright Patterson to discuss our progress in...May 22, 1997: NCSU team (H.T. Banks, R.C. Smith, C. Musante ) visited with Mike Stands and colleagues at Wright Patterson. 3. October 7, 1997: H.T

  4. AliEn—ALICE environment on the GRID

    NASA Astrophysics Data System (ADS)

    Saiz, P.; Aphecetche, L.; Bunčić, P.; Piskač, R.; Revsbech, J.-E.; Šego, V.; Alice Collaboration

    2003-04-01

    AliEn (ALICE Environment, http://alien.cern.ch) is a Grid framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed datasets, and a number of collaborating Web services that implement authentication, job execution, file transport, performance monitoring and event logging. In the paper we present the architecture and components of the system.

  5. The USAID-NREL Partnership: Delivering Clean, Reliable, and Affordable Power in the Developing World

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, Andrea C; Leisch, Jennifer E

    The U.S. Agency for International Development (USAID) and the National Renewable Energy Laboratory (NREL) are partnering to support clean, reliable, and affordable power in the developing world. The USAID-NREL Partnership helps countries with policy, planning, and deployment support for advanced energy technologies. Through this collaboration, USAID is accessing advanced energy expertise and analysis pioneered by the U.S. National Laboratory system. The Partnership addresses critical aspects of advanced energy systems including renewable energy deployment, grid modernization, distributed energy resources and storage, power sector resilience, and the data and analytical tools needed to support them.

  6. [Ten years of child and adolescent psychiatry in Austria: a new medical speciality within the structures of public health services].

    PubMed

    Hartl, Charlotte; Karwautz, Andreas

    2017-09-01

    We discuss the comprehensive work for the development of child and adolescent psychiatry in Austria, summarize the current status of care in various settings, and focus on further developments. Intramural care offers about 50% of the places needed and is heterogeneously distributed over the country; extramural care already covers around one quarter of the need. We project a fully developed extramural care system by about 2033. Further development of the Austrian care system in child and adolescent psychiatry requires collaborative efforts by all responsible players.

  7. CCSDS Mission Operations Action Service Core Capabilities

    NASA Technical Reports Server (NTRS)

    Reynolds, Walter F.; Lucord, Steven A.; Stevens, John E.

    2009-01-01

    This slide presentation reviews the operations concepts of the command (action) services. Since the consequences of sending the wrong command are unacceptable, the command system provides a collaborative and distributed work environment for flight controllers and operators. The system prescribes a review and approval process in which each command is viewed by other individuals before being sent to the vehicle. The action service needs additional capabilities to support the operations concepts of manned space flight. These are: (1) action service methods; (2) action attributes; (3) action parameter/argument attributes; (4) support for dynamically maintained action data; and (5) publish/subscribe capabilities.

  8. Development and implementation of the guiding stars nutrition guidance program.

    PubMed

    Fischer, Leslie M; Sutherland, Lisa A; Kaley, Lori A; Fox, Tracy A; Hasler, Clare M; Nobel, Jeremy; Kantor, Mark A; Blumberg, Jeffrey

    2011-01-01

    PURPOSE. To describe the collaborative process between a grocery retailer and a panel of nutrition experts used to develop a nutrition guidance system (Guiding Stars) that evaluates the nutrient profile of all edible products in the supermarket, and to report the results of the food and beverage ratings. DESIGN. A collaboration between a private retailer and members of the scientific community that led to the development of a scoring algorithm used to evaluate the nutritional quality of foods and beverages. SETTING/SUBJECTS. Northeast supermarkets (n = 160). MEASURES. Food and beverage nutrition ratings and distribution of stars across different grocery categories. ANALYSIS. Descriptive statistics for rating distributions were computed. T-tests were conducted to assess differences in mean nutrient values between foods with zero versus three stars or a dichotomized variable representing all foods with one to three stars. RESULTS. All edible grocery items (n = 27,466) were evaluated, with 23.6% earning at least one star. Items receiving at least one star had lower mean levels of sodium, saturated fat, and sugars and higher amounts of fiber than products not earning stars. CONCLUSION. The Guiding Stars system rates edible products without regard to brand or manufacturer, and provides consumers with a simple tool to quickly identify more nutritious choices while shopping. The low percentage of products qualifying for stars reflects poorly on the food choices available to Americans.
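    The general shape of such a nutrient-profiling algorithm, crediting nutrients to encourage and debiting nutrients to limit, can be sketched as follows. The weights, thresholds, and field names are invented for illustration; the actual Guiding Stars algorithm is proprietary and differs.

```python
# Hypothetical star-rating sketch: score per-100g nutrient values and map
# the net score onto 0-3 stars. All weights and cutoffs are invented.
def star_rating(per_100g):
    score = 0
    score += 2 * per_100g.get("fiber_g", 0)        # credit: fiber
    score -= 1 * per_100g.get("sugars_g", 0)       # debit: sugars
    score -= 5 * per_100g.get("sat_fat_g", 0)      # debit: saturated fat
    score -= 0.01 * per_100g.get("sodium_mg", 0)   # debit: sodium
    if score >= 8:
        return 3
    if score >= 4:
        return 2
    if score >= 1:
        return 1
    return 0

oatmeal = {"fiber_g": 10, "sugars_g": 1, "sat_fat_g": 1, "sodium_mg": 5}
candy = {"fiber_g": 0, "sugars_g": 60, "sat_fat_g": 8, "sodium_mg": 50}
```

    Because the function sees only nutrient values, it rates products without regard to brand or manufacturer, the property the abstract highlights.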

  9. XBoard: A Framework for Integrating and Enhancing Collaborative Work Practices

    NASA Technical Reports Server (NTRS)

    Shab, Ted

    2006-01-01

    Teams typically collaborate in different modes, including face-to-face meetings, meetings that are synchronous (i.e., require parties to participate at the same time) but distributed geographically, and asynchronous work on common tasks at different times. The XBoard platform was designed to create an integrated environment for building applications that enhance collaborative work practices. Specifically, it takes large, touch-screen-enabled displays as the starting point for enhancing face-to-face meetings by providing common facilities such as whiteboarding/electronic flipcharts, laptop projection, web access, screen capture and content distribution. These capabilities are made inherently distributed by allowing sessions to be easily connected between two or more systems at different locations. Finally, an information repository is integrated into the functionality to support work practices in which work is done at different times, such as reports that span different shifts. XBoard is designed to be extensible, allowing customization of the general functionality and the addition of new functionality to the core facilities by means of a plugin architecture. This, in essence, makes it a collaborative framework for extending or integrating work practices for different mission scenarios. XBoard relies heavily on standards such as Web Services and SVG, and is built predominantly with Java and well-known open-source products such as Apache and Postgres. Increasingly, organizations are geographically dispersed and rely on "virtual teams" assembled from a pool of partner organizations, which often have different infrastructures of applications and workflows.
    XBoard has been designed to be a good partner in these situations, providing the flexibility to integrate with typical legacy applications while providing a standards-based infrastructure that is readily accepted by most organizations. XBoard has been used on the Mars Exploration Rovers mission at JPL, and is currently being used or considered for use in pilot projects at Johnson Space Center (JSC) Mission Control, the University of Arizona Lunar and Planetary Laboratory (Phoenix Mars Lander), and MBARI (Monterey Bay Aquarium Research Institute).

  10. Distributed collaborative probabilistic design for turbine blade-tip radial running clearance using support vector machine of regression

    NASA Astrophysics Data System (ADS)

    Fei, Cheng-Wei; Bai, Guang-Chen

    2014-12-01

    To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies such as the blade-tip radial running clearance (BTRRC) of a gas turbine, a distributed collaborative probabilistic design method based on support vector machine regression (called DCSRM) is proposed by integrating the distributed collaborative response surface method with a support vector machine regression model. The mathematical model of DCSRM is established and the probabilistic design idea behind it is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is carried out to verify the proposed DCSRM. The analysis shows that the optimal static blade-tip clearance of the HPT is obtained for designing the BTRRC and for improving the performance and reliability of the aeroengine. The comparison of methods shows that DCSRM offers high computational accuracy and efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.

  11. A collaborative framework for Distributed Privacy-Preserving Support Vector Machine learning.

    PubMed

    Que, Jialan; Jiang, Xiaoqian; Ohno-Machado, Lucila

    2012-01-01

    A Support Vector Machine (SVM) is a popular tool for decision support. The traditional way to build an SVM model is to estimate parameters based on a centralized repository of data. However, in the field of biomedicine, patient data are sometimes stored in local repositories or institutions where they were collected, and may not be easily shared due to privacy concerns. This creates a substantial barrier for researchers to effectively learn from the distributed data using machine learning tools like SVMs. To overcome this difficulty and promote efficient information exchange without sharing sensitive raw data, we developed a Distributed Privacy Preserving Support Vector Machine (DPP-SVM). The DPP-SVM enables privacy-preserving collaborative learning, in which a trusted server integrates "privacy-insensitive" intermediary results. The globally learned model is guaranteed to be exactly the same as learned from combined data. We also provide a free web-service (http://privacy.ucsd.edu:8080/ppsvm/) for multiple participants to collaborate and complete the SVM-learning task in an efficient and privacy-preserving manner.
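    The core idea, sites exchanging only aggregate intermediary results while the learned model matches one trained on pooled data, can be sketched with full-batch hinge-loss subgradient descent for a linear SVM. This is an illustrative reconstruction with assumed names and parameters, not the DPP-SVM protocol itself; the key property is that summing per-site gradients reproduces the pooled-data gradient.

```python
# Federated linear SVM sketch: each site computes only an aggregate gradient
# on its local data; a coordinating server sums those intermediaries. No raw
# rows leave a site, yet the model equals one trained on the pooled data.

def local_gradient(w, b, X, y, C=1.0):
    """Hinge-loss subgradient on one site's data (only this aggregate is shared)."""
    gw = [0.0] * len(w)
    gb = 0.0
    for xi, yi in zip(X, y):
        margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
        if margin < 1:                        # misclassified or inside margin
            for j, xj in enumerate(xi):
                gw[j] -= C * yi * xj
            gb -= C * yi
    return gw, gb

def train(sites, dim, epochs=200, lr=0.01):
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        total_gw, total_gb = list(w), 0.0     # regularizer gradient is w itself
        for X, y in sites:                    # "server" sums the site gradients
            gw, gb = local_gradient(w, b, X, y)
            total_gw = [a + g for a, g in zip(total_gw, gw)]
            total_gb += gb
        w = [wj - lr * g for wj, g in zip(w, total_gw)]
        b -= lr * total_gb
    return w, b

site1 = ([[0.0, 0.0], [1.0, 1.0]], [-1, 1])
site2 = ([[0.2, 0.1], [0.9, 1.2]], [-1, 1])
w_fed, b_fed = train([site1, site2], dim=2)                                 # two sites
w_all, b_all = train([(site1[0] + site2[0], site1[1] + site2[1])], dim=2)   # pooled
```

    With full-batch updates the federated and pooled runs perform the same arithmetic, so the two models coincide, the "guaranteed to be exactly the same" property the abstract claims for DPP-SVM.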

  12. Supporting Effective Collaboration: Using a Rearview Mirror to Look Forward

    ERIC Educational Resources Information Center

    McManus, Margaret M.; Aiken, Robert M.

    2016-01-01

    Our original research, to design and develop an Intelligent Collaborative Learning System (ICLS), yielded the creation of a Group Leader Tutor software system which utilizes a Collaborative Skills Network to monitor students working collaboratively in a networked environment. The Collaborative Skills Network was a conceptualization of…

  13. Technology Solutions | Distributed Generation Interconnection Collaborative

    Science.gov Websites

    Technologies, both hardware and software, can support the wider adoption of distributed generation on the grid. As the penetration of distributed-generation photovoltaics (DGPV) has risen rapidly in recent years, utilities face challenges posed by high penetrations of distributed PV. Other promising technologies include new utility software.

  14. 42 CFR § 512.510 - Downstream distribution arrangements under the EPM.

    Code of Federal Regulations, 2010 CFR

    2017-10-01

    ... HEALTH AND HUMAN SERVICES (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS EPISODE PAYMENT MODEL... distribution payment it receives from the EPM collaborator only in accordance with a downstream distribution... make or receive a downstream distribution payment must not be conditioned directly or indirectly on the...

  15. Promoting scientific collaboration and research through integrated social networking capabilities within the OpenTopography Portal

    NASA Astrophysics Data System (ADS)

    Nandigam, V.; Crosby, C. J.; Baru, C.

    2009-04-01

    LiDAR (Light Detection And Ranging) topography data offer earth scientists the opportunity to study the earth's surface at very high resolutions. As a result, the popularity of these data is growing dramatically. However, the management, distribution, and analysis of community LiDAR data sets pose a challenge due to their massive size (multi-billion point, multi-terabyte). We have also found that many earth science users of these data sets lack the computing resources and expertise required to process them. We have developed the OpenTopography Portal to democratize access to these large and computationally challenging data sets. The OpenTopography Portal uses cyberinfrastructure technology developed by the GEON project to provide access to LiDAR data in a variety of formats. Available LiDAR data products range from simple Google Earth visualizations of LiDAR-derived hillshades to 1 km2 tiles of standard digital elevation model (DEM) products, as well as LiDAR point cloud data and user-generated custom DEMs. We have found that the wide spectrum of LiDAR users have variable scientific applications, computing resources, and technical experience, and thus require a data system with multiple distribution mechanisms and platforms to serve a broader range of user communities. Because the volume of LiDAR topography data available is rapidly expanding, and data analysis techniques are evolving, there is a need for the user community to be able to communicate and interact to share knowledge and experiences. To address this need, the OpenTopography Portal enables social networking capabilities through a variety of collaboration tools, web 2.0 technologies, and customized usage pattern tracking. Fundamentally, these tools offer users the ability to communicate, to access and share documents, to participate in discussions, and to keep up to date on upcoming events and emerging technologies.
    The OpenTopography Portal achieves these social networking capabilities by integrating various software technologies and platforms. These include the ExpressionEngine Content Management System (CMS), which comes with pre-packaged collaboration tools like blogs and wikis; the GridSphere portal framework, which contains the primary GEON LiDAR System portlet with user job monitoring capabilities; and a Java web-based discussion forum (Jforums) application, all seamlessly integrated under one portal. The OpenTopography Portal also provides an integrated authentication mechanism between the various CMS collaboration tools and the core GridSphere-based portlets. The integration of these various technologies allows for enhanced user interaction capabilities within the portal. By integrating popular collaboration tools like discussion forums and blogs, we can promote conversation and openness among users. The ability to ask questions and share expertise in forum discussions allows users to easily find information and interact with users facing similar challenges. The OpenTopography Blog enables our domain experts to post ideas, news items, commentary, and other resources in order to foster discussion and information sharing. The content management capabilities of the portal allow for easy updates to information in the form of publications, documents, and news articles. Access to the most current information fosters better decision-making. As has become the standard for web 2.0 technologies, the OpenTopography Portal is fully RSS-enabled to allow users of the portal to keep track of news items, forum discussions, blog updates, and system outages. We are currently exploring how the information captured by the user and job monitoring components of the GridSphere-based GEON LiDAR System can be harnessed to provide a recommender system that will help users identify appropriate processing parameters and locate related documents and data.
By seamlessly integrating the various platforms and technologies under one single portal, we can take advantage of popular online collaboration tools that are either stand alone or software platform restricted. The availability of these collaboration tools along with the data will foster more community interaction and increase the strength and vibrancy of the LiDAR topography user community.

  16. Gender differences in scientific collaborations: Women are more egalitarian than men

    PubMed Central

    Araújo, Eduardo B.; Araújo, Nuno A. M.; Moreira, André A.; Herrmann, Hans J.; Andrade, José S.

    2017-01-01

    By analyzing a unique dataset of more than 270,000 scientists, we discovered substantial gender differences in scientific collaborations. While men are more likely to collaborate with other men, women are more egalitarian. This is consistently observed over all fields and regardless of the number of collaborators a scientist has. The only exception is observed in the field of engineering, where this gender bias disappears with increasing number of collaborators. We also found that the distribution of the number of collaborators follows a truncated power law with a cut-off that is gender dependent and related to the gender differences in the number of published papers. Considering interdisciplinary research, our analysis shows that men and women behave similarly across fields, except in the case of natural sciences, where women with many collaborators are more likely to have collaborators from other fields. PMID:28489872
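
    A truncated power law of the kind reported here, P(k) ∝ k^(−γ) e^(−k/κ), can be explored numerically. The sketch below uses illustrative parameter values (not the fitted ones from the paper) to show how a larger cut-off κ leaves more probability mass at high collaborator counts:

```python
import math

def truncated_power_law(gamma, kappa, kmax=500):
    """P(k) ∝ k^-gamma * exp(-k/kappa), normalized over k = 1..kmax."""
    weights = [k**-gamma * math.exp(-k / kappa) for k in range(1, kmax + 1)]
    z = sum(weights)
    return [w / z for w in weights]

def tail_mass(p, k0=50):
    """Probability of having at least k0 collaborators."""
    return sum(p[k0 - 1:])

# Same exponent, two hypothetical cut-offs; a gender-dependent cut-off of
# this sort is what the abstract reports (values here are invented).
p_small_cutoff = truncated_power_law(2.0, 20)
p_large_cutoff = truncated_power_law(2.0, 100)
```

The cut-off κ governs how quickly the exponential term suppresses very large collaborator counts, which is why a gender-dependent κ translates directly into different frequencies of highly collaborative scientists.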

  17. Gender differences in scientific collaborations: Women are more egalitarian than men.

    PubMed

    Araújo, Eduardo B; Araújo, Nuno A M; Moreira, André A; Herrmann, Hans J; Andrade, José S

    2017-01-01

    By analyzing a unique dataset of more than 270,000 scientists, we discovered substantial gender differences in scientific collaborations. While men are more likely to collaborate with other men, women are more egalitarian. This is consistently observed over all fields and regardless of the number of collaborators a scientist has. The only exception is observed in the field of engineering, where this gender bias disappears with increasing number of collaborators. We also found that the distribution of the number of collaborators follows a truncated power law with a cut-off that is gender dependent and related to the gender differences in the number of published papers. Considering interdisciplinary research, our analysis shows that men and women behave similarly across fields, except in the case of natural sciences, where women with many collaborators are more likely to have collaborators from other fields.

  18. Robotics Collaborative Technology Alliance (RCTA) 2011 Baseline Assessment Experimental Strategy

    DTIC Science & Technology

    2011-09-01

    Approved for public release; distribution is unlimited.

  19. On effectiveness of network sensor-based defense framework

    NASA Astrophysics Data System (ADS)

    Zhang, Difan; Zhang, Hanlin; Ge, Linqiang; Yu, Wei; Lu, Chao; Chen, Genshe; Pham, Khanh

    2012-06-01

    Cyber attacks are increasing in frequency, impact, and complexity, exposing extensive network vulnerabilities with the potential for serious damage. Defending against cyber attacks calls for distributed collaborative monitoring, detection, and mitigation. To this end, we develop a network sensor-based defense framework, with the aim of handling network security awareness, mitigation, and prediction. We implement the prototypical system and show its effectiveness in detecting known attacks, such as port scanning and distributed denial-of-service (DDoS). Based on this framework, we also implement statistical-based and sequential testing-based detection techniques and compare their respective detection performance. Future defensive algorithms can be provisioned in our proposed framework for combating cyber attacks.
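
    Sequential testing-based detection of this kind is commonly realized as a sequential probability ratio test (SPRT) over per-connection outcomes, in the spirit of threshold-random-walk scan detectors. The sketch below uses illustrative failure probabilities and error targets, not parameters from the paper:

```python
import math

# Hypothesized P(connection failure) for benign hosts vs. scanners,
# and target false-positive / false-negative rates (all assumed).
THETA_BENIGN, THETA_SCAN = 0.2, 0.8
ALPHA, BETA = 0.01, 0.01

UPPER = math.log((1 - BETA) / ALPHA)   # cross above -> declare "scanner"
LOWER = math.log(BETA / (1 - ALPHA))   # cross below -> declare "benign"

def sprt(outcomes):
    """Walk the log-likelihood ratio over outcomes (1 = failed probe)."""
    llr = 0.0
    for failed in outcomes:
        p_scan = THETA_SCAN if failed else 1 - THETA_SCAN
        p_benign = THETA_BENIGN if failed else 1 - THETA_BENIGN
        llr += math.log(p_scan / p_benign)
        if llr >= UPPER:
            return "scanner"
        if llr <= LOWER:
            return "benign"
    return "undecided"
```

A run of failed probes drives the ratio up toward the scanner threshold after only a handful of observations, which is what makes sequential tests attractive for early detection compared to fixed-window statistics.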

  20. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential for revolutionizing the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing at less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists, and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  1. Ultrascale collaborative visualization using a display-rich global cyberinfrastructure.

    PubMed

    Jeong, Byungil; Leigh, Jason; Johnson, Andrew; Renambot, Luc; Brown, Maxine; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung

    2010-01-01

    The scalable adaptive graphics environment (SAGE) is high-performance graphics middleware for ultrascale collaborative visualization using a display-rich global cyberinfrastructure. Dozens of sites worldwide use this cyberinfrastructure middleware, which connects high-performance-computing resources over high-speed networks to distributed ultraresolution displays.

  2. Collaborative PLM - The Next Generation AKA Cars on Mars

    NASA Technical Reports Server (NTRS)

    Soderstrom, Tom; Stefanini, Mike

    2007-01-01

    In this slide presentation the importance of collaboration in developing the next systems for space exploration is stressed. The mechanisms of this collaboration are reviewed, and particular emphasis is given to our planned exploration of Mars and how it will require a great deal of collaboration. A system architecture for this collaboration is shown and the diagram for the collaborative environment is conceptualized.

  3. Collaborative play in young children as a complex dynamic system: revealing gender related differences.

    PubMed

    Steenbeek, Henderien; van der Aalsvoort, Diny; van Geert, Paul

    2014-07-01

    This study focused on the role of gender-related differences in collaborative play, by examining properties of play as a complex system, and by using micro-genetic analysis techniques. A complex dynamic systems model of dyadic play was used to make predictions with regard to duration and number of contact-episodes during play of same-sex dyads, both on the micro- (i.e., per individual session), meso- (i.e., in smoothed data), and macro time scale (i.e., the change over six consecutive play sessions). The empirical data came from a study that examined the collaborative play skills of children who experienced six twenty-minute play sessions within a three-week period of time. Monte Carlo permutation analyses were used to compare model predictions and empirical data. The findings point to strongly asymmetric distributions in the duration and number of contact episodes in all dyads over the six sessions, as a direct consequence of the underlying dynamics of the play system. The model prediction that girls-dyads would show longer contact episodes than boys-dyads was confirmed, but the prediction regarding the difference in number of peaks was not confirmed. In addition, the majority of the model predictions regarding changes over the course of six sessions were consistent with the data. That is, the average duration and the maximum duration of contact-episodes increase both in boys-dyads and girls-dyads, but differences occur in the strength of the increase. Contrary to expectation, the number of contact-episodes decreases both in boys-dyads and in girls-dyads.
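
    A Monte Carlo permutation analysis of the kind used here can be sketched in a few lines. The durations below are hypothetical stand-ins for contact-episode data, and the statistic (difference of group means) is chosen for illustration:

```python
import random

random.seed(1)

def perm_test(group_a, group_b, n_perm=5000):
    """Two-sided Monte Carlo permutation test on the difference of means."""
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)           # relabel under the null hypothesis
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm              # Monte Carlo p-value

# Hypothetical contact-episode durations (seconds) per dyad type.
girls = [95, 110, 102, 120, 98, 115]
boys = [60, 72, 55, 80, 65, 70]
p_value = perm_test(girls, boys)
```

Because the null distribution is built by reshuffling the observed values themselves, the test makes no normality assumption, which suits the strongly asymmetric distributions reported in the study.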

  4. Systems and Methods for Collaboratively Controlling at Least One Aircraft

    NASA Technical Reports Server (NTRS)

    Estkowski, Regina I. (Inventor)

    2016-01-01

    An unmanned vehicle management system includes an unmanned aircraft system (UAS) control station controlling one or more unmanned vehicles (UV), a collaborative routing system, and a communication network connecting the UAS and the collaborative routing system. The collaborative routing system is configured to receive flight parameters from an operator of the UAS control station and, based on the received flight parameters, automatically present the UAS control station with flight plan options to enable the operator to operate the UV in a defined airspace.

  5. Network support for turn-taking in multimedia collaboration

    NASA Astrophysics Data System (ADS)

    Dommel, Hans-Peter; Garcia-Luna-Aceves, Jose J.

    1997-01-01

    The effectiveness of collaborative multimedia systems depends on the regulation of access to their shared resources, such as continuous media or instruments used concurrently by multiple parties. Existing applications use only simple protocols to mediate such resource contention. Their cooperative rules follow a strict agenda and are largely application-specific. The inherent problem of floor control lacks a systematic methodology. This paper presents a general model of floor control for correct, scalable, fine-grained and fair resource sharing that integrates user interaction with network conditions, and adaptation to various media types. The notion of turn-taking known from psycholinguistic studies on discourse structure is adapted for this framework. Viewed as a computational analogy to speech communication, online collaboration revolves around dynamically allocated access permissions called floors. The control semantics of floors derives from concurrency control methodology. An explicit specification and verification of a novel distributed Floor Control Protocol are presented. Hosts assume sharing roles that allow for efficient dissemination of control information, agreeing on a floor holder which is granted mutually exclusive access to a resource. Performance analytic aspects of floor control protocols are also briefly discussed.
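
    The mutually exclusive floor-holder semantics at the heart of floor control can be illustrated with a deliberately centralized toy controller. The paper's protocol is distributed and far richer; this sketch only shows the grant/queue/release cycle:

```python
from collections import deque

class FloorController:
    """Minimal floor-control sketch: one holder, FIFO waiting queue."""

    def __init__(self):
        self.holder = None
        self.queue = deque()

    def request(self, site):
        if self.holder is None:
            self.holder = site              # floor free: grant immediately
        elif site != self.holder and site not in self.queue:
            self.queue.append(site)         # otherwise wait in FIFO order

    def release(self, site):
        if site == self.holder:             # only the holder may release
            self.holder = self.queue.popleft() if self.queue else None

fc = FloorController()
fc.request("A")   # A is granted the floor
fc.request("B")   # B queues
fc.request("C")   # C queues behind B
fc.release("A")   # floor passes to B; C keeps waiting
```

A real floor control protocol would replace the central object with message exchange among hosts and add policies beyond FIFO (priorities, preemption, media-specific rules), but the exclusive-holder invariant is the same.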

  6. An Application Server for Scientific Collaboration

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Luetkemeyer, Kelly G.

    1998-11-01

    Tech-X Corporation has developed SciChat, an application server for scientific collaboration. Connections are made to the server through a Java client, which can be either an application or an applet served in a web page. Once connected, the client may choose to start or join a session. A session includes not only other clients, but also an application. Any client can send a command to the application. This command is executed on the server and echoed to all clients. The results of the command, whether numerical or graphical, are then distributed to all of the clients; thus, multiple clients can interact collaboratively with a single application. The client is developed in Java, the server in C++, and the middleware is the Common Object Request Broker Architecture (CORBA). In this system, the graphical user interface processing is on the client machine, so one does not have the disadvantages of insufficient bandwidth that occur when running X over the Internet. Because the server, client, and middleware are object-oriented, new types of servers and clients specialized to particular scientific applications are more easily developed.

  7. The World Data System - Your partner in data collaboration

    NASA Astrophysics Data System (ADS)

    Gärtner-Roer, Isabelle; Harrison, Sandy; Sorvari, Sanna

    2017-04-01

    The World Data System (ICSU-WDS) is an interdisciplinary body of the International Council for Science (ICSU) with a mission to promote international collaboration for the long-term preservation and provision of quality-assessed research data and data services. WDS is a membership organization federating scientific data centers, data services, and data networks across all disciplines in the natural and social sciences as well as the humanities. The main goals of WDS are to promote the documentation of and access to data, as well as to strengthen data dissemination and its proper citation. Through its certification scheme, WDS promotes the development of trusted data repositories and the continual improvement of such facilities through maturity self-assessment and information exchange. Thus, WDS is responsible for creating a globally interoperable distributed data system that incorporates emerging technologies and multidisciplinary scientific data activities. As of October 2016, WDS has 76 Regular and Network Members and 24 Partner and Associate Members. The community is actively engaged through a number of activities, such as working groups, webinars, and the bi-annual Members Forum. A current effort is to promote activities in the African and Asia-Oceania regions in order to expand the WDS community by recruiting new members there. To introduce the role of WDS, we will present the WDS certification scheme, introduce some selected partner services, and detail their collaboration with WDS, including commitments, advantages, and challenges. If you want to know more about WDS or want to join, have a look at www.icsu-wds.org!

  8. The importance of national and international collaboration in adult congenital heart disease: A network analysis of research output.

    PubMed

    Orwat, Melanie Iris; Kempny, Aleksander; Bauer, Ulrike; Gatzoulis, Michael A; Baumgartner, Helmut; Diller, Gerhard-Paul

    2015-09-15

    The determinants of adult congenital heart disease (ACHD) research output are only partially understood. The heterogeneity of ACHD naturally calls for collaborative work; however, limited information exists on the impact of collaboration on academic performance. We aimed to examine the global topology of ACHD research, the distribution of research collaboration, and its association with cumulative research output. Based on publications presenting original research between 2005 and 2011, a network analysis was performed quantifying centrality measures and key players in the field of ACHD. In addition, network maps were produced to illustrate the global distribution and interconnected nature of ACHD research. The proportion of collaborative research was 35.6% overall, with a wide variation between countries (7.1 to 62.8%). The degree of research collaboration, as well as measures of network centrality (betweenness and degree centrality), were statistically associated with cumulative research output independently of national wealth and available workforce. The global ACHD research network was found to be scale-free with a small number of central hubs and a relatively large number of peripheral nodes. In addition, we could identify potentially influential hubs based on cluster analysis and measures of centrality/key player analysis. Using network analysis methods, the current study illustrates the complex and global structures of ACHD research. It suggests that collaboration between research institutions is associated with higher academic output. As a consequence, national and international collaboration in ACHD research should be encouraged and the creation of an adequate supporting infrastructure should be further promoted. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
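
    Centrality measures like those used in this analysis can be computed directly. The sketch below implements Brandes' algorithm for betweenness centrality on a small hypothetical collaboration graph (the country labels and edges are invented for illustration):

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm for unweighted, undirected betweenness centrality."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack, preds = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in graph}; dist[s] = 0
        q = deque([s])
        while q:                                      # BFS from source s
            v = q.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:            # v precedes w
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                                  # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    for v in bc:
        bc[v] /= 2.0          # undirected graph: each pair counted twice
    return bc

# Hypothetical collaboration network: one hub ("UK") with two branches.
g = {"UK": ["DE", "US", "NL"], "DE": ["UK"],
     "US": ["UK", "CA"], "NL": ["UK"], "CA": ["US"]}
bc = betweenness(g)
```

On this toy graph the hub lies on every shortest path between the branches, so its betweenness dwarfs that of peripheral nodes, the same hub-and-periphery signature the scale-free ACHD network exhibits.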

  9. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on fuzzy neural network regression (FR), called DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and its probabilistic design approach is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated by considering fluid-structure interaction with the proposed method. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode of the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. The comparison of methods shows that DCFRM improves the computing efficiency of probabilistic analysis for multi-failure structures while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and methods of mechanical reliability design.

  10. Collaborative observations of HDE 332077

    NASA Technical Reports Server (NTRS)

    Ake, Thomas B., III

    1995-01-01

    IUE low dispersion observations were made of the T(sub c)-deficient peculiar red giant (PRG) star, HDE 332077, to test the hypothesis that T(sub c)-poor PRG's are formed as a result of mass transfer from a binary companion rather than from internal thermal pulsing while on the asymptotic red giant branch. Previous ground-based observations of this star indicated that it is a binary, but the secondary star was too massive for an expected white dwarf. A deep, short wavelength prime (SWP) exposure was needed to search for evidence of an A-type main-sequence companion. We obtained a 120 minute LWP exposure (LWP 23479), followed by a collaborative 1230 minute SWP exposure (SWP 45113). These observations were combined with our earlier IUE and optical data on this PRG star to model the spectral energy distribution of the system.

  11. A social-level macro-governance mode for collaborative manufacturing processes

    NASA Astrophysics Data System (ADS)

    Gao, Ji; Lv, Hexin; Jin, Zhiyong; Xu, Ping

    2017-08-01

    This paper proposes a social-level macro-governance mode as an alternative to the popular centralized control of CoM (Collaborative Manufacturing) processes, and bases this mode on support from three standalone yet complementary technologies: social-level CoM process norms, a CoM process supervision system, and rational agents acting as brokers of enterprises. It is the close coupling of these technologies that effectively removes the uncontrollability obstacle confronted by cross-management-domain CoM processes. As a result, this mode enables CoM applications to be implemented by uniting the centralized control that CoM partners exercise over their respective CoM activities, and therefore provides a new distributed CoM process control mode to push forward the convenient development and large-scale deployment of SME-oriented CoM applications.

  12. Promoting Knowledge to Action through the Study of Environmental Arctic Change (SEARCH) Program

    NASA Astrophysics Data System (ADS)

    Myers, B.; Wiggins, H. V.

    2016-12-01

    The Study of Environmental Arctic Change (SEARCH) is a multi-institutional collaborative U.S. program that advances scientific knowledge to inform societal responses to Arctic change. Currently, SEARCH focuses on how diminishing Arctic sea ice, thawing permafrost, and shrinking land ice impact both Arctic and global systems. Emphasizing "knowledge to action", SEARCH promotes collaborative research, synthesizes research findings, and broadly communicates the resulting knowledge to Arctic researchers, stakeholders, policy-makers, and the public. This poster presentation will highlight recent program products and findings; best practices and challenges for managing a distributed, interdisciplinary program; and plans for cross-disciplinary working groups focused on Arctic coastal erosion, synthesis of methane budgets, and development of Arctic scenarios. A specific focus will include how members of the broader research community can participate in SEARCH activities. http://www.arcus.org/search

  13. A network architecture supporting consistent rich behavior in collaborative interactive applications.

    PubMed

    Marsh, James; Glencross, Mashhuda; Pettifer, Steve; Hubbold, Roger

    2006-01-01

    Network architectures for collaborative virtual reality have traditionally been dominated by client-server and peer-to-peer approaches, with peer-to-peer strategies typically being favored where minimizing latency is a priority, and client-server where consistency is key. With increasingly sophisticated behavior models and the demand for better support for haptics, we argue that neither approach provides sufficient support for these scenarios and, thus, a hybrid architecture is required. We discuss the relative performance of different distribution strategies in the face of real network conditions and illustrate the problems they face. Finally, we present an architecture that successfully meets many of these challenges and demonstrate its use in a distributed virtual prototyping application which supports simultaneous collaboration for assembly, maintenance, and training applications utilizing haptics.

  14. Governance and assessment in a widely distributed medical education program in Australia.

    PubMed

    Solarsh, Geoff; Lindley, Jennifer; Whyte, Gordon; Fahey, Michael; Walker, Amanda

    2012-06-01

    The learning objectives, curriculum content, and assessment standards for distributed medical education programs must be aligned across the health care systems and community contexts in which their students train. In this article, the authors describe their experiences at Monash University implementing a distributed medical education program at metropolitan, regional, and rural Australian sites and an offshore Malaysian site, using four different implementation models. Standardizing learning objectives, curriculum content, and assessment standards across all sites while allowing for site-specific implementation models created challenges for educational alignment. At the same time, this diversity created opportunities to customize the curriculum to fit a variety of settings and for innovations that have enriched the educational system as a whole. Developing these distributed medical education programs required a detailed review of Monash's learning objectives and curriculum content and their relevance to the four different sites. It also required a review of assessment methods to ensure an identical and equitable system of assessment for students at all sites. It additionally demanded changes to the systems of governance and the management of the educational program away from a centrally constructed and mandated curriculum to more collaborative approaches to curriculum design and implementation involving discipline leaders at multiple sites. Distributed medical education programs, like that at Monash, in which cohorts of students undertake the same curriculum in different contexts, provide potentially powerful research platforms to compare different pedagogical approaches to medical education and the impact of context on learning outcomes.

  15. Factors of collaborative working: a framework for a collaboration model.

    PubMed

    Patel, Harshada; Pettitt, Michael; Wilson, John R

    2012-01-01

    The ability of organisations to support collaborative working environments is of increasing importance as they move towards more distributed ways of working. Despite the attention collaboration has received from a number of disparate fields, there is a lack of a unified understanding of the component factors of collaboration. As part of our work on a European Integrated Project, CoSpaces, collaboration and collaborative working and the factors which define it were examined through the literature and new empirical work with a number of partner user companies in the aerospace, automotive and construction sectors. This was to support development of a descriptive human factors model of collaboration - the CoSpaces Collaborative Working Model (CCWM). We identified seven main categories of factors involved in collaboration: Context, Support, Tasks, Interaction Processes, Teams, Individuals, and Overarching Factors, and summarised these in a framework which forms a basis for the model. We discuss supporting evidence for the factors which emerged from our fieldwork with user partners, and use of the model in activities such as collaboration readiness profiling. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  16. Closeout Report for CTEQ Summer School 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Tao

    The CTEQ Collaboration is an informal group of 37 experimental and theoretical high energy physicists from 20 universities and 5 national labs, engaged in a program to advance research in and understanding of QCD. This program includes the well-known collaborative project on global QCD analysis of parton distributions, the organization of a variety of workshops, periodic collaboration meetings, and the subject of this proposal: the CTEQ Summer Schools on QCD Analysis and Phenomenology.

  17. Collaborative Research and Development (CR&D). Task Order 0049: Tribological Modeling

    DTIC Science & Technology

    2008-05-01

    scratch test for TiN on stainless steel with better substrate mechanical properties. This present study was focused on the study of stress distribution...

  18. XNsim: Internet-Enabled Collaborative Distributed Simulation via an Extensible Network

    NASA Technical Reports Server (NTRS)

    Novotny, John; Karpov, Igor; Zhang, Chendi; Bedrossian, Nazareth S.

    2007-01-01

    In this paper, the XNsim approach to achieving Internet-enabled, dynamically scalable collaborative distributed simulation capabilities is presented. With this approach, a complete simulation can be assembled from shared component subsystems written in different formats, that run on different computing platforms, with different sampling rates, in different geographic locations, and over single/multiple networks. The subsystems interact securely with each other via the Internet. Furthermore, the simulation topology can be dynamically modified. The distributed simulation uses a combination of hub-and-spoke and peer-to-peer network topology. A proof-of-concept demonstrator is also presented. The XNsim demonstrator can be accessed at http://www.jsc.draper.com/xn, which hosts various examples of Internet-enabled simulations.

  19. NASA Team Collaboration Pilot: Enabling NASA's Virtual Teams

    NASA Technical Reports Server (NTRS)

    Prahst, Steve

    2003-01-01

    Most NASA projects and work activities are accomplished by teams of people. These teams are often geographically distributed - across NASA centers and NASA external partners, both domestic and international. NASA "virtual" teams are stressed by the challenge of getting team work done - across geographic boundaries and time zones. To get distributed work done, teams rely on established methods - travel, telephones, Video Teleconferencing (NASA VITS), and email. Time is our most critical resource - and team members are hindered by the overhead of travel and the difficulties of coordinating work across their virtual teams. Modern, Internet-based team collaboration tools offer the potential to dramatically improve the ability of virtual teams to get distributed work done.

  20. [Pharmaceutical product quality control and good manufacturing practices].

    PubMed

    Hiyama, Yukio

    2010-01-01

    This report describes the roles of Good Manufacturing Practices (GMP) in pharmaceutical product quality control. There are three keys to pharmaceutical product quality control: specifications, thorough product characterization during development, and adherence to GMP, as the ICH Q6A guideline on specifications explains in its background section. Impacts on product quality control of the revised Pharmaceutical Affairs Law (rPAL), which became effective in 2005, are discussed. Progress of the ICH discussions on Pharmaceutical Development (Q8), Quality Risk Management (Q9) and Pharmaceutical Quality System (Q10) is reviewed. In order to reconstruct GMP guidelines and the GMP inspection system in the regulatory agencies under the new paradigm set by rPAL and the ICH, a series of Health Science studies was conducted. For GMP guidelines, a product GMP guideline, a technology transfer guideline, a laboratory control guideline and a change control system guideline were written. For the GMP inspection system, an inspection checklist, inspection memo and inspection scenario were proposed, also by the Health Science study groups. Because pharmaceutical products and their raw materials are manufactured and distributed internationally, collaboration with other national authorities is highly desirable. To enhance international collaboration, consistent establishment of a GMP inspection quality system throughout Japan will be essential.

  1. Specializing network analysis to detect anomalous insider actions

    PubMed Central

    Chen, You; Nyemba, Steve; Zhang, Wen; Malin, Bradley

    2012-01-01

    Collaborative information systems (CIS) enable users to coordinate efficiently over shared tasks in complex distributed environments. For flexibility, they provide users with broad access privileges, which, as a side-effect, leave such systems vulnerable to various attacks. Some of the more damaging malicious activities stem from internal misuse, where users are authorized to access system resources. A promising class of insider threat detection models for CIS focuses on mining access patterns from audit logs; however, current models are limited in that they assume organizations have significant resources to generate labeled cases for training classifiers, or assume the user has committed a large number of actions that deviate from “normal” behavior. In lieu of these assumptions, we introduce an approach that detects when specific actions of an insider deviate from expectation in the context of collaborative behavior. Specifically, in this paper, we introduce a specialized network anomaly detection model, or SNAD, to detect such events. This approach assesses the extent to which a user influences the similarity of the group of users that access a particular record in the CIS. From a theoretical perspective, we show that the proposed model is appropriate for detecting insider actions in dynamic collaborative systems. From an empirical perspective, we perform an extensive evaluation of SNAD with the access logs of two distinct environments: the patient record access logs of a large electronic health record system (6,015 users, 130,457 patients and 1,327,500 accesses) and the editing logs of Wikipedia (2,394,385 revisors, 55,200 articles and 6,482,780 revisions). We compare our model with several competing methods and demonstrate that SNAD is significantly more effective: on average it achieves 20–30% greater area under an ROC curve. PMID:23399988
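
    The core idea, that an anomalous access reduces the cohesion of the group of users touching the same record, can be sketched with a simple set-based similarity measure. This is an illustrative reconstruction, not the published SNAD algorithm: the Jaccard similarity over access histories and the exact scoring function below are assumptions.

```python
from itertools import combinations

def jaccard(a, b):
    """Similarity of two users' access histories (sets of record ids)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_similarity(histories):
    """Mean pairwise similarity of a group of access histories."""
    pairs = list(combinations(histories, 2))
    if not pairs:
        return 0.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

def snad_score(user, record_users, histories):
    """How much `user` degrades the cohesion of the group of users who
    accessed the same record: a positive score means the group is more
    similar without this user, flagging a candidate anomaly."""
    group = [histories[u] for u in record_users]
    others = [histories[u] for u in record_users if u != user]
    return group_similarity(others) - group_similarity(group)
```

    In the spirit of the abstract, two clinicians who share patients barely change group cohesion, while an outsider with an unrelated access history raises it sharply when removed.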

  2. Improving Access to NASA Earth Science Data through Collaborative Metadata Curation

    NASA Astrophysics Data System (ADS)

    Sisco, A. W.; Bugbee, K.; Shum, D.; Baynes, K.; Dixon, V.; Ramachandran, R.

    2017-12-01

    The NASA-developed Common Metadata Repository (CMR) is a high-performance metadata system that currently catalogs over 375 million Earth science metadata records. It serves as the authoritative metadata management system of NASA's Earth Observing System Data and Information System (EOSDIS), enabling NASA Earth science data to be discovered and accessed by a worldwide user community. The size of the EOSDIS data archive is steadily increasing, and the ability to manage and query this archive depends on the input of high quality metadata to the CMR. Metadata that does not provide adequate descriptive information diminishes the CMR's ability to effectively find and serve data to users. To address this issue, an innovative and collaborative review process is underway to systematically improve the completeness, consistency, and accuracy of metadata for approximately 7,000 data sets archived by NASA's twelve EOSDIS data centers, or Distributed Active Archive Centers (DAACs). The process involves automated and manual metadata assessment of both collection and granule records by a team of Earth science data specialists at NASA Marshall Space Flight Center. The team communicates results to DAAC personnel, who then make revisions and reingest improved metadata into the CMR. Implementation of this process relies on a network of interdisciplinary collaborators leveraging a variety of communication platforms and long-range planning strategies. Curating metadata at this scale and resolving metadata issues through community consensus improves the CMR's ability to serve current and future users and also introduces best practices for stewarding the next generation of Earth Observing System data. This presentation will detail the metadata curation process, its outcomes thus far, and also share the status of ongoing curation activities.
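
    The automated portion of such an assessment can be illustrated with a minimal completeness check over collection records. The field names below are hypothetical stand-ins; the actual CMR records follow NASA's Unified Metadata Model, which defines many more required and conditional attributes.

```python
# Hypothetical required fields; the real CMR/UMM schema defines many more.
REQUIRED_FIELDS = ["ShortName", "Abstract", "TemporalExtents",
                   "SpatialExtent", "DataCenters"]

def assess(record):
    """Return the required fields that are missing or empty in a record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

def curation_report(records):
    """Map record id -> missing fields, keeping only records needing review."""
    report = {}
    for rec in records:
        missing = assess(rec)
        if missing:
            report[rec.get("ShortName", "<unknown>")] = missing
    return report
```

    A report like this would be the automated first pass; the manual review and the DAAC revision cycle described above then resolve the flagged records.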

  3. The Micropolitics of Distributed Leadership: Four Case Studies of School Federations

    ERIC Educational Resources Information Center

    Piot, Liesbeth; Kelchtermans, Geert

    2016-01-01

    This study analyses the collaboration between principals within four Flemish school federations (voluntary collaborative networks between either primary or secondary schools). Interview data from principals were analysed using a micropolitical perspective. A central idea in micropolitical theory is that organization members' actions (and…

  4. An Effective Collaborative Mobile Weighted Clustering Schemes for Energy Balancing in Wireless Sensor Networks.

    PubMed

    Tang, Chengpei; Shokla, Sanesy Kumcr; Modhawar, George; Wang, Qiang

    2016-02-19

    Collaborative strategies for mobile sensor nodes ensure the efficiency and the robustness of data processing, while limiting the required communication bandwidth. In order to solve the problem of pipeline inspection and oil leakage monitoring, a collaborative weighted mobile sensing scheme is proposed. By adopting a weighted mobile sensing scheme, the adaptive collaborative clustering protocol can realize an even distribution of energy load among the mobile sensor nodes in each round, and make the best use of battery energy. A detailed theoretical analysis and experimental results revealed that the proposed protocol is an energy efficient collaborative strategy such that the sensor nodes can communicate with a fusion center and produce high power gain.
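
    A weighted cluster-head election of this general kind can be sketched as follows. The weight function combining residual energy and distance to the fusion center, and the per-round energy costs, are illustrative assumptions rather than the protocol from the paper; the point is that rotating the head toward high-energy, well-placed nodes evens out the energy load across rounds.

```python
import math

def weight(node, center, alpha=0.7):
    """Hypothetical combined weight: favour high residual energy and a
    short distance to the fusion center; alpha balances the two terms."""
    dist = math.dist(node["pos"], center)
    return alpha * node["energy"] - (1 - alpha) * dist

def elect_cluster_head(cluster, center):
    """Pick the highest-weight node as this round's cluster head."""
    return max(cluster, key=lambda n: weight(n, center))

def run_round(cluster, center, tx_cost=2.0, rx_cost=0.5):
    """One communication round: members send to the head, the head
    aggregates and relays to the fusion center; energy is debited so
    the head pays more than ordinary members."""
    head = elect_cluster_head(cluster, center)
    for n in cluster:
        n["energy"] -= tx_cost if n is head else rx_cost
    return head
```

    Because the head pays the higher cost, its weight decays faster than the members', so leadership migrates to fresher nodes over successive rounds.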

  5. Development of a site analysis tool for distributed wind projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Shawn

    The Cadmus Group, Inc., in collaboration with the National Renewable Energy Laboratory (NREL) and Encraft, was awarded a grant from the Department of Energy (DOE) to develop a site analysis tool for distributed wind technologies. As the principal investigator for this project, Mr. Shawn Shaw was responsible for overall project management, direction, and technical approach. The product resulting from this project is the Distributed Wind Site Analysis Tool (DSAT), a software tool for analyzing proposed sites for distributed wind technology (DWT) systems. This user-friendly tool supports the long-term growth and stability of the DWT market by providing reliable, realistic estimates of site and system energy output and feasibility. DSAT, which is accessible online and requires no purchase or download of software, is available in two account types. Standard: this free account allows the user to analyze a limited number of sites and to produce a system performance report for each. Professional: for a small annual fee, users can analyze an unlimited number of sites, produce system performance reports, and generate other customizable reports containing key information such as visual influence and wind resources. The tool’s interactive maps allow users to create site models that incorporate the obstructions and terrain types present. Users can generate site reports immediately after entering the requisite site information. Ideally, this tool also educates users regarding good site selection and effective evaluation practices.

  6. Design and Field Experimentation of a Cooperative ITS Architecture Based on Distributed RSUs.

    PubMed

    Moreno, Asier; Osaba, Eneko; Onieva, Enrique; Perallos, Asier; Iovino, Giovanni; Fernández, Pablo

    2016-07-22

    This paper describes a new cooperative Intelligent Transportation System architecture that aims to enable collaborative sensing services. The main goal of this architecture is to improve transportation efficiency and performance. The system, which has been validated through participation in the ICSI (Intelligent Cooperative Sensing for Improved traffic efficiency) European project, encompasses the entire process of capture and management of available road data. For this purpose, it applies a combination of cooperative services and methods for data sensing, acquisition, processing and communication amongst road users, vehicles, infrastructures and related stakeholders. Additionally, the advantages of using the proposed system are presented, the most important being its distributed architecture, which moves the system intelligence from the control centre to the peripheral devices. The global architecture of the system is presented, as well as the software design and the interaction between its main components. Finally, functional and operational results observed through the experimentation are described. This experimentation was carried out in two real scenarios, in Lisbon (Portugal) and Pisa (Italy).

  7. Design and Field Experimentation of a Cooperative ITS Architecture Based on Distributed RSUs †

    PubMed Central

    Moreno, Asier; Osaba, Eneko; Onieva, Enrique; Perallos, Asier; Iovino, Giovanni; Fernández, Pablo

    2016-01-01

    This paper describes a new cooperative Intelligent Transportation System architecture that aims to enable collaborative sensing services. The main goal of this architecture is to improve transportation efficiency and performance. The system, which has been validated through participation in the ICSI (Intelligent Cooperative Sensing for Improved traffic efficiency) European project, encompasses the entire process of capture and management of available road data. For this purpose, it applies a combination of cooperative services and methods for data sensing, acquisition, processing and communication amongst road users, vehicles, infrastructures and related stakeholders. Additionally, the advantages of using the proposed system are presented, the most important being its distributed architecture, which moves the system intelligence from the control centre to the peripheral devices. The global architecture of the system is presented, as well as the software design and the interaction between its main components. Finally, functional and operational results observed through the experimentation are described. This experimentation was carried out in two real scenarios, in Lisbon (Portugal) and Pisa (Italy). PMID:27455277

  8. iDEAS: A web-based system for dry eye assessment.

    PubMed

    Remeseiro, Beatriz; Barreira, Noelia; García-Resúa, Carlos; Lira, Madalena; Giráldez, María J; Yebra-Pimentel, Eva; Penedo, Manuel G

    2016-07-01

    Dry eye disease is a public health problem whose multifactorial etiology challenges clinicians and researchers, making collaboration between different experts and centers necessary. The evaluation of the interference patterns observed in the tear film lipid layer is a common clinical test used for dry eye diagnosis. However, it is a time-consuming task with a high degree of intra- as well as inter-observer variability, which makes the use of a computer-based analysis system highly desirable. This work introduces iDEAS (Dry Eye Assessment System), a web-based application to support dry eye diagnosis. iDEAS provides a framework for eye care experts to work collaboratively using image-based services in a distributed environment. It is composed of three main components: the web client for user interaction, the web application server for request processing, and the service module for image analysis. Specifically, this manuscript presents two automatic services: tear film classification, which classifies an image into one interference pattern, and tear film map, which illustrates the distribution of the patterns over the entire tear film. iDEAS has been evaluated by specialists from different institutions to test its performance. Both services have been evaluated in terms of a set of performance metrics using the annotations of different experts; their processing time has also been measured for efficiency purposes. iDEAS is a web-based application which provides a fast, reliable environment for dry eye assessment. The system allows practitioners to share images, clinical information and automatic assessments between remote computers. Additionally, it saves time for experts, diminishes inter-expert variability, and can be used in both clinical and research settings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. A First Look at the Upcoming SISO Space Reference FOM

    NASA Technical Reports Server (NTRS)

    Mueller, Bjorn; Crues, Edwin Z.; Dexter, Dan; Garro, Alfredo; Skuratovskiy, Anton; Vankov, Alexander

    2016-01-01

    Spaceflight is difficult, dangerous and expensive; human spaceflight even more so. In order to mitigate some of the danger and expense, professionals in the space domain have relied, and continue to rely, on computer simulation. Simulation is used at every level including concept, design, analysis, construction, testing, training and ultimately flight. As space systems have grown more complex, new simulation technologies have been developed, adopted and applied. Distributed simulation is one of those technologies. Distributed simulation provides a base technology for segmenting these complex space systems into smaller, and usually simpler, component systems or subsystems. This segmentation also supports the separation of responsibilities between participating organizations. This segmentation is particularly useful for complex space systems like the International Space Station (ISS), which is composed of many elements from many nations along with visiting vehicles from many nations. This is likely to be the case for future human space exploration activities. Over the years, a number of distributed simulations have been built within the space domain. While many use the High Level Architecture (HLA) to provide the infrastructure for interoperability, HLA without a Federation Object Model (FOM) is insufficient by itself to ensure interoperability. As a result, the Simulation Interoperability Standards Organization (SISO) is developing a Space Reference FOM. The Space Reference FOM Product Development Group is composed of members from several countries. They contribute experiences from projects within NASA, ESA and other organizations and represent government, academia and industry.
The initial version of the Space Reference FOM is focusing on time and space and will provide the following: (i) a flexible positioning system using reference frames for arbitrary bodies in space, (ii) naming conventions for well-known reference frames, (iii) definitions of common time scales, (iv) federation agreements for common types of time management with a focus on time-stepped simulation, and (v) support for physical entities, such as space vehicles and astronauts. The Space Reference FOM is expected to make collaboration politically, contractually and technically easier. It is also expected to make collaboration easier to manage and extend.
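
    Item (i), positioning via reference frames for arbitrary bodies, can be illustrated with a toy frame tree in which each frame is defined by an offset within its parent. The frame names and offsets below are hypothetical and rotations and time scales are omitted; a real Space Reference FOM federation would carry full transformations.

```python
# Hypothetical frame tree: frame name -> (parent frame, offset in parent, km).
FRAMES = {
    "SolarSystemBarycenter": (None, (0.0, 0.0, 0.0)),
    "EarthCentered":         ("SolarSystemBarycenter", (1.496e8, 0.0, 0.0)),
    "MoonCentered":          ("EarthCentered", (3.844e5, 0.0, 0.0)),
}

def to_root(frame, pos):
    """Express `pos` (given in `frame`) in the root frame by walking the
    parent chain and accumulating offsets (rotation omitted)."""
    x, y, z = pos
    while frame is not None:
        parent, (ox, oy, oz) = FRAMES[frame]
        x, y, z = x + ox, y + oy, z + oz
        frame = parent
    return (x, y, z)
```

    With such a tree, two federates can publish states in whatever frame is natural to them (vehicle-centered, Moon-centered) while any consumer resolves them into a common root frame.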

  10. Comprehensive multiplatform collaboration

    NASA Astrophysics Data System (ADS)

    Singh, Kundan; Wu, Xiaotao; Lennox, Jonathan; Schulzrinne, Henning G.

    2003-12-01

    We describe the architecture and implementation of our comprehensive multi-platform collaboration framework known as Columbia InterNet Extensible Multimedia Architecture (CINEMA). It provides a distributed architecture for collaboration using synchronous communications like multimedia conferencing, instant messaging, shared web-browsing, and asynchronous communications like discussion forums, shared files, voice and video mails. It allows seamless integration with various communication means like telephones, IP phones, web and electronic mail. In addition, it provides value-added services such as call handling based on location information and presence status. The paper discusses the media services needed for collaborative environment, the components provided by CINEMA and the interaction among those components.

  11. FIRESTORM: a collaborative network suite application for rapid sensor data processing and precise decisive responses

    NASA Astrophysics Data System (ADS)

    Kaniyantethu, Shaji

    2011-06-01

    This paper discusses the many features and composed technologies in Firestorm™, a distributed collaborative fires and effects software suite. Modern response management systems capitalize on the capabilities of a plethora of sensors and their output for situational awareness. Firestorm utilizes a unique networked lethality approach by integrating unmanned air and ground vehicles to provide target handoff and sharing of data between humans and sensors. The system employs Bayesian networks for track management of sensor data, and distributed auction algorithms for allocating targets and delivering the right effect without information overload to the Warfighter. The Firestorm Networked Effects Component provides joint weapon-target pairing, attack guidance, target selection standards, and other fires and effects components. Moreover, the open and modular architecture allows for easy integration with new data sources. The versatility and adaptability of the application enable it to devise and dispense a suitable response to a wide variety of scenarios. Recently, this application was used for detecting and countering a vehicle intruder with the help of a radio frequency spotter sensor, command-driven cameras, a remote weapon system, a portable vehicle arresting barrier, and an unmanned aerial vehicle - which confirmed the presence of the intruder, as well as provided lethal/non-lethal response and battle damage assessment. The completed demonstrations have proved Firestorm™'s validity and feasibility to predict, detect, neutralize, and protect key assets and/or areas against a variety of possible threats. The sensors and responding assets can be deployed in numerous configurations to cover various terrain and environmental conditions, and can be integrated with a number of platforms.
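
    The distributed auction idea for allocating targets can be sketched with a simplified synchronous auction in the style of Bertsekas' assignment algorithm. This is a generic illustration of the technique, not Firestorm's implementation: the value matrix, bid increment, and synchronous bidding loop are assumptions.

```python
def auction_assign(values, eps=0.01):
    """Simplified synchronous auction assignment. `values[i][j]` is
    agent i's value for target j (requires at least as many targets as
    agents). Each unassigned agent bids for its best target at current
    prices; prices rise by the bid margin until everyone holds a target."""
    n, m = len(values), len(values[0])
    prices = [0.0] * m
    owner = [None] * m                  # target index -> agent index
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        net = [values[i][j] - prices[j] for j in range(m)]
        best = max(range(m), key=net.__getitem__)
        second = max((net[j] for j in range(m) if j != best),
                     default=net[best])
        prices[best] += net[best] - second + eps   # outbid by the margin
        if owner[best] is not None:
            unassigned.append(owner[best])          # evicted agent re-bids
        owner[best] = i
    return {owner[j]: j for j in range(m) if owner[j] is not None}
```

    Because each bid raises a price by at least `eps`, the loop terminates, and in a fielded system the bidding could run on separate nodes with only prices exchanged, which is what makes the approach attractive for distributed effects allocation.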

  12. UTM Data Working Group Demonstration 1: Final Report

    NASA Technical Reports Server (NTRS)

    Rios, Joseph L.; Mulfinger, Daniel G.; Smith, Irene S.; Venkatesan, Priya; Smith, David R.; Baskaran, Vijayakumar; Wang, Leo

    2017-01-01

    This document summarizes activities defining and executing the first demonstration of the NASA-FAA Research Transition Team (RTT) Data Exchange and Information Architecture (DEIA) working group (DWG). The demonstration focused on testing the interactions between two key components in the future UAS Traffic Management (UTM) System through a collaborative and distributed simulation of key scenarios. The summary incorporates written feedback from each of the participants in the demonstration. In addition to reporting the activities, this report also provides some insight into future steps of this working group.

  13. Fiji: an open-source platform for biological-image analysis.

    PubMed

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.

  14. Situation Awareness of Onboard System Autonomy

    NASA Technical Reports Server (NTRS)

    Schreckenghost, Debra; Thronesbery, Carroll; Hudson, Mary Beth

    2005-01-01

    We have developed intelligent agent software for onboard system autonomy. Our approach is to provide control agents that automate crew and vehicle systems, and operations assistants that aid humans in working with these autonomous systems. We use the 3 Tier control architecture to develop the control agent software that automates system reconfiguration and routine fault management. We use the Distributed Collaboration and Interaction (DCI) System to develop the operations assistants that provide human services, including situation summarization, event notification, activity management, and support for manual commanding of autonomous systems. In this paper we describe how the operations assistants aid situation awareness of the autonomous control agents. We also describe our evaluation of the DCI System to support control engineers during a ground test at Johnson Space Center (JSC) of the Post Processing System (PPS) for regenerative water recovery.

  15. What systemic factors contribute to collaboration between primary care and public health sectors? An interpretive descriptive study.

    PubMed

    Wong, Sabrina T; MacDonald, Marjorie; Martin-Misener, Ruth; Meagher-Stewart, Donna; O'Mara, Linda; Valaitis, Ruta K

    2017-12-01

    Purposefully building stronger collaborations between primary care (PC) and public health (PH) is one approach to strengthening primary health care. The purpose of this paper is to report: 1) what systemic factors influence collaborations between PC and PH; and 2) how systemic factors interact and could influence collaboration. This interpretive descriptive study used purposive and snowball sampling to recruit and conduct interviews with PC and PH key informants in British Columbia (n = 20), Ontario (n = 19), and Nova Scotia (n = 21), Canada. Other participants (n = 14) were knowledgeable about collaborations and were located in various Canadian provinces or working at a national level. Data were organized into codes and thematic analysis was completed using NVivo. The frequency of "sources" (individual transcripts), "references" (quotes), and matrix queries were used to identify potential relationships between factors. We conducted a total of 70 in-depth interviews with 74 participants working in either PC (n = 33) or PH (n = 32), both PC and PH (n = 7), or neither sector (n = 2). Participant roles included direct service providers (n = 17), senior program managers (n = 14), executive officers (n = 11), and middle managers (n = 10). Seven systemic factors for collaboration were identified: 1) health service structures that promote collaboration; 2) funding models and financial incentives supporting collaboration; 3) governmental and regulatory policies and mandates for collaboration; 4) power relations; 5) harmonized information and communication infrastructure; 6) targeted professional education; and 7) formal systems leaders as collaborative champions. Most themes were discussed with equal frequency between PC and PH. 
An assessment of the system level context (e.g., provincial and regional organization and funding of PC and PH, history of government success in implementing health care reform) along with these seven system level factors could assist other jurisdictions in moving towards increased PC and PH collaboration. There was some variation in the importance of the themes across provinces. British Columbia participants more frequently discussed system structures that could promote collaboration, power relations, harmonized information and communication structures, formal systems leaders as collaboration champions, and targeted professional education. Ontario participants most frequently discussed governmental and regulatory policies and mandates for collaboration.

  16. Hybrid E-Learning Tool TransLearning: Video Storytelling to Foster Vicarious Learning within Multi-Stakeholder Collaboration Networks

    ERIC Educational Resources Information Center

    van der Meij, Marjoleine G.; Kupper, Frank; Beers, Pieter J.; Broerse, Jacqueline E. W.

    2016-01-01

    E-learning and storytelling approaches can support informal vicarious learning within geographically widely distributed multi-stakeholder collaboration networks. This case study evaluates hybrid e-learning and video-storytelling approach "TransLearning" by investigation into how its storytelling e-tool supported informal vicarious…

  17. Collaborative Learning: Theoretical Foundations and Applicable Strategies to University

    ERIC Educational Resources Information Center

    Roselli, Nestor D.

    2016-01-01

    Collaborative learning is a construct that identifies a current strong field, both in face-to-face and virtual education. Firstly, three converging theoretical sources are analyzed: socio-cognitive conflict theory, intersubjectivity theory and distributed cognition theory. Secondly, a model of strategies that can be implemented by teachers to…

  18. Social Networks, Communication Styles, and Learning Performance in a CSCL Community

    ERIC Educational Resources Information Center

    Cho, Hichang; Gay, Geri; Davidson, Barry; Ingraffea, Anthony

    2007-01-01

    The aim of this study is to empirically investigate the relationships between communication styles, social networks, and learning performance in a computer-supported collaborative learning (CSCL) community. Using social network analysis (SNA) and longitudinal survey data, we analyzed how 31 distributed learners developed collaborative learning…

  19. RadioSource.NET: Case-Study of a Collaborative Land-Grant Internet Audio Project.

    ERIC Educational Resources Information Center

    Sohar, Kathleen; Wood, Ashley M.; Ramirez, Roberto

    2002-01-01

    Provides a case study of RadioSource.NET, an Internet broadcasting venture developed collaboratively by land-grant university communication departments to share resources, increase online distribution, and promote access to agricultural and natural and life science research. Describes planning, marketing, and implementation processes. (Contains 18…

  20. EVA: Collaborative Distributed Learning Environment Based in Agents.

    ERIC Educational Resources Information Center

    Sheremetov, Leonid; Tellez, Rolando Quintero

    In this paper, a Web-based learning environment developed within the project called Virtual Learning Spaces (EVA, in Spanish) is presented. The environment is composed of knowledge, collaboration, consulting, experimentation, and personal spaces as a collection of agents and conventional software components working over the knowledge domains. All…

  1. SemanticOrganizer: A Customizable Semantic Repository for Distributed NASA Project Teams

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Berrios, Daniel C.; Carvalho, Robert E.; Hall, David R.; Rich, Stephen J.; Sturken, Ian B.; Swanson, Keith J.; Wolfe, Shawn R.

    2004-01-01

    SemanticOrganizer is a collaborative knowledge management system designed to support distributed NASA projects, including diverse teams of scientists, engineers, and accident investigators. The system provides a customizable, semantically structured information repository that stores work products relevant to multiple projects of differing types. SemanticOrganizer is one of the earliest and largest semantic web applications deployed at NASA to date, and has been used in diverse contexts ranging from the investigation of the Space Shuttle Columbia accident to the search for life on other planets. Although the underlying repository employs a single unified ontology, access control and ontology customization mechanisms make the repository contents appear different for each project team. This paper describes SemanticOrganizer, its customization facilities, and a sampling of its applications. The paper also summarizes some key lessons learned from building and fielding a successful semantic web application across a wide-ranging set of domains with diverse users.

  2. Distributed Monte Carlo production for DZero

    NASA Astrophysics Data System (ADS)

    Snow, Joel; DØ Collaboration

    2010-04-01

    The DZero collaboration uses a variety of resources on four continents to pursue a strategy of flexibility and automation in the generation of simulation data. This strategy provides a resilient and opportunistic system which ensures an adequate and timely supply of simulation data to support DZero's physics analyses. A mixture of facilities, dedicated and opportunistic, specialized and generic, large and small, grid job enabled and not, are used to provide a production system that has adapted to newly developing technologies. This strategy has increased the event production rate by a factor of seven and the data production rate by a factor of ten in the last three years despite diminishing manpower. Common to all production facilities is the SAM (Sequential Access to Metadata) data-grid. Job submission to the grid uses SAMGrid middleware which may forward jobs to the OSG, the WLCG, or native SAMGrid sites. The distributed computing and data handling system used by DZero will be described and the results of MC production since the deployment of grid technologies will be presented.

  3. Generalized parton distributions and transversity from full lattice QCD

    NASA Astrophysics Data System (ADS)

    Göckeler, M.; Hägler, Ph.; Horsley, R.; Pleiter, D.; Rakow, P. E. L.; Schäfer, A.; Schierholz, G.; Zanotti, J. M.; Qcdsf Collaboration

    2005-06-01

We present here the latest results from the QCDSF collaboration for moments of generalized parton distributions and transversity in two-flavour QCD, including a preliminary analysis of the pion mass dependence.

  4. Neutral-current x-distributions

    DOE R&D Accomplishments Database

    Friedman, J. I.; Kendall, H. W.; Bogert, D.; Burnstein, R.; Fisk, R.; Fuess, S.; Bofill, J.; Busza, W.; Eldridge, T.; Abolins, M.; Brock, R.; et al.

    1984-06-01

The role of the semileptonic neutral-current interaction as a probe of nucleon structure is examined. Previous measurements of neutral-current x-distributions are reviewed, and new results from the Fermilab-MIT-MSU collaboration are presented.

  5. Supporting interoperability of collaborative networks through engineering of a service-based Mediation Information System (MISE 2.0)

    NASA Astrophysics Data System (ADS)

    Benaben, Frederick; Mu, Wenxin; Boissel-Dallier, Nicolas; Barthe-Delanoe, Anne-Marie; Zribi, Sarah; Pingaud, Herve

    2015-08-01

The Mediation Information System Engineering project is currently finishing its second iteration (MISE 2.0). The main objective of this scientific project is to provide any emerging collaborative situation with methods and tools to deploy a Mediation Information System (MIS). MISE 2.0 aims at defining and designing a service-based platform dedicated to initiating and supporting the interoperability of collaborative situations among potential partners. This MISE 2.0 platform implements a model-driven engineering approach to the design of a service-oriented MIS dedicated to supporting the collaborative situation. The approach is structured in three layers, each providing its own key innovations: (i) the gathering of individual and collaborative knowledge to provide appropriate collaborative business behaviour (key point: knowledge management, including semantics, exploitation and capitalisation); (ii) the deployment of a mediation information system able to computerise the previously deduced collaborative processes (key point: the automatic generation of collaborative workflows, including connection with existing devices or services); and (iii) the management of the agility of the resulting collaborative network of organisations (key point: supervision of collaborative situations and relevant exploitation of the gathered data). MISE covers business issues (through BPM), technical issues (through an SOA) and agility issues of collaborative situations (through EDA).
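The automatic generation of collaborative workflows in layer (ii) can be illustrated with a minimal sketch. The registry, activity names, and endpoints below are hypothetical placeholders, not MISE 2.0 artifacts; the sketch only shows the idea of matching deduced process activities to concrete partner services:

```python
# Minimal sketch of mapping abstract process activities to concrete partner
# services, in the spirit of model-driven mediation. All names are invented.

# Hypothetical registry: capability -> (partner organisation, service endpoint)
SERVICE_REGISTRY = {
    "assess_damage": ("partner_a", "http://a.example/assess"),
    "evacuate_zone": ("partner_b", "http://b.example/evacuate"),
    "treat_injured": ("partner_c", "http://c.example/treat"),
}

def generate_workflow(process):
    """Map each abstract activity to a concrete service invocation,
    preserving the order of the deduced collaborative process."""
    workflow = []
    for activity in process:
        if activity not in SERVICE_REGISTRY:
            raise LookupError(f"no service matches activity {activity!r}")
        partner, endpoint = SERVICE_REGISTRY[activity]
        workflow.append({"activity": activity, "partner": partner,
                         "endpoint": endpoint})
    return workflow

collaborative_process = ["assess_damage", "evacuate_zone", "treat_injured"]
for step in generate_workflow(collaborative_process):
    print(step["activity"], "->", step["partner"])
```

In MISE 2.0 the matching is semantic rather than an exact dictionary lookup, so a real mediator would also have to handle partial matches and service composition.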

  6. Service collaboration and hospital cost performance: direct and moderating effects.

    PubMed

    Proenca, E Jose; Rosko, Michael D; Dismuke, Clara E

    2005-12-01

Growing reliance on service provision through systems and networks creates the need to better understand the nature of the relationship between service collaboration and hospital performance and the conditions that affect this relationship. We examine 1) the effects of service provision through health systems and health networks on hospital cost performance and 2) the moderating effects of market conditions and service differentiation on the collaboration-cost relationship. We used moderated regression analysis to test the direct and moderating effects. Data on 1368 private hospitals came from the 1998 AHA Annual Survey, Medicare Cost Reports, and Solucient. Service collaboration was measured as the proportion of hospital services provided at the system level and at the network level. Market conditions were measured by the levels of managed care penetration and competition in the hospital's market. The proportion of hospital services provided at the system level had a negative relationship with hospital cost; the relationship was curvilinear for network use. Degree of managed care penetration moderated the relationship between network-based collaboration and hospital cost. The benefits of service collaboration through systems and networks, as measured by reduced cost, depend on the degree of collaboration rather than mere membership. In loosely structured collaborations such as networks, costs decrease initially but increase later as the extent of collaboration increases. The effect of network-based collaboration is also tempered by managed care penetration. These effects are not seen in more tightly integrated forms such as systems.
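The moderated regression design described above can be sketched with synthetic data; the coefficients, noise level, and functional form below are illustrative assumptions, not the study's estimates. A quadratic term captures the curvilinear network effect, and an interaction term captures the moderation by managed care penetration:

```python
import numpy as np

# Illustrative moderated regression: hospital cost as a function of
# network-based collaboration, its square (curvilinear effect), and an
# interaction with managed care penetration. All data are synthetic and
# the "true" coefficients below are assumptions for the demo.
rng = np.random.default_rng(0)
n = 1368
collab = rng.uniform(0, 1, n)    # share of services provided via the network
managed = rng.uniform(0, 1, n)   # managed care penetration in the market
noise = rng.normal(0, 0.05, n)

# Assumed generating model: cost falls at first, rises with heavy
# collaboration, and the slope is moderated by managed care penetration.
cost = 1.0 - 0.8 * collab + 0.9 * collab**2 + 0.3 * managed * collab + noise

# Design matrix: intercept, main effects, quadratic and interaction terms.
X = np.column_stack([np.ones(n), collab, collab**2, managed, managed * collab])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print("estimated coefficients:", np.round(beta, 2))
```

With this generating model, a negative linear term alongside a positive quadratic term reproduces the "costs decrease initially, then increase" pattern, and the interaction coefficient measures the moderating effect.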

  7. Study on Collaborative Object Manipulation in Virtual Environment

    NASA Astrophysics Data System (ADS)

    Mayangsari, Maria Niken; Yong-Moo, Kwon

This paper presents a comparative study of networked collaboration performance under different degrees of immersion. In particular, the relationship between user collaboration performance and the degree of immersion provided by the system is addressed and compared on the basis of several experiments. The user tests of our system cover three cases: 1) comparison between non-haptic and haptic collaborative interaction over a LAN, 2) comparison between non-haptic and haptic collaborative interaction over the Internet, and 3) analysis of collaborative interaction between non-immersive and immersive display environments.

  8. A collaborative smartphone sensing platform for detecting and tracking hostile drones

    NASA Astrophysics Data System (ADS)

    Boddhu, Sanjay K.; McCartney, Matt; Ceccopieri, Oliver; Williams, Robert L.

    2013-05-01

In recent years, not only the United States Armed Services but also other law-enforcement agencies have shown increasing interest in employing drones for various surveillance and reconnaissance purposes. Further, recent advancements in autonomous drone control and navigation technology have tremendously increased the geographic extent of drone-based missions beyond conventional line-of-sight coverage. Without any sophisticated requirement for data links to control them remotely (human-in-the-loop), drones are proving to be a reliable and effective means of securing personnel and soldiers operating in hostile environments. However, this autonomous breed of drones can potentially prove to be a significant threat when acquired by antisocial groups who wish to target property and life in urban settlements. To further escalate the issue, standard detection techniques such as RADARs and RF data-link signature scanners prove futile: the drones are small enough to evade detection by RADAR-based systems in an urban environment and, being autonomous, can operate without a traceable active data link (RF). Hence, towards investigating practical solutions to this issue, the research team at AFRL's Tec^Edge Labs, under the SATE and YATE programs, has developed a highly scalable, geographically distributable and easily deployable smartphone-based collaborative platform that can aid in detecting and tracking unidentified hostile drones.
In its current state, this collaborative platform, built on the paradigm of "Human-as-Sensors", consists primarily of an intelligent smartphone application that leverages appropriate sensors on the device to capture a drone's attributes (flight direction, orientation, shape, color, etc.), with real-time collaboration capabilities through a highly composable sensor cloud, and an intelligent processing module (based on a probabilistic model) that can estimate and predict the possible flight path of a hostile drone from multiple geographically distributed observation data points. The platform has been field tested and proven effective in providing a real-time alerting mechanism that allows personnel in the field to avert or subdue the potential damage caused by detected hostile drones.
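The abstract does not detail the probabilistic model, so as a rough illustration of fusing geographically distributed sightings into a flight-path estimate, the sketch below fits a constant-velocity track to timestamped position reports by least squares and extrapolates it. All observation values and the noise level are synthetic assumptions:

```python
import numpy as np

# Minimal sketch of flight-path estimation from distributed observations:
# fit x(t) = p0 + v*t to noisy timestamped position reports, then predict
# a future position. Observations and noise level are synthetic.
rng = np.random.default_rng(1)
t_obs = np.array([0.0, 2.0, 3.5, 5.0, 7.0])                       # seconds
true_pos = np.array([10.0, 20.0]) + np.outer(t_obs, [3.0, -1.5])  # metres
reports = true_pos + rng.normal(0, 0.5, true_pos.shape)           # observer noise

# Least-squares fit of initial position and velocity for each coordinate.
A = np.column_stack([np.ones_like(t_obs), t_obs])
coef, *_ = np.linalg.lstsq(A, reports, rcond=None)
p0, v = coef[0], coef[1]

t_pred = 10.0
predicted = p0 + v * t_pred
print("estimated velocity (m/s):", np.round(v, 2))
print("predicted position at t=10 s:", np.round(predicted, 1))
```

A real tracker would also weight reports by observer reliability and propagate uncertainty (e.g. with a Kalman filter), which is where the probabilistic element of the platform comes in.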

  9. A Collaborative Analysis Tool for Thermal Protection Systems for Single Stage to Orbit Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Alexander, Reginald A.; Stanley, Thomas Troy

    1999-01-01

Presented is a design tool and process that connects the several disciplines needed in the complex, integrated design of high-performance reusable single-stage-to-orbit (SSTO) vehicles. Every system is linked to every other system, and in the case of SSTO vehicles with air-breathing propulsion, currently being studied by the National Aeronautics and Space Administration (NASA), the thermal protection system (TPS) is linked directly to almost every major system. The propulsion system pushes the vehicle to velocities on the order of 15 times the speed of sound in the atmosphere before pulling up to go to orbit, which results in high temperatures on the external surfaces of the vehicle. The thermal protection system must mitigate heat transfer to the structure, to maintain the structural integrity of the vehicle, while remaining lightweight. Herein lies the interdependency: as the vehicle's speed increases, the TPS requirements increase, and as TPS mass increases, the effect on the propulsion system and all other systems is compounded. To adequately determine insulation masses for such a vehicle, the aeroheating loads and the TPS thicknesses must be calculated for the entire vehicle. To accomplish this, an ascent or reentry trajectory is obtained using the computer code Program to Optimize Simulated Trajectories (POST). The trajectory is then used to calculate the convective heat rates at several locations on the vehicle using the Miniature Version of the JA70 Aerodynamic Heating Computer Program (MINIVER). Once the heat rates are defined for each body point on the vehicle, the insulation thicknesses required to keep the vehicle within structural limits are calculated using Systems Improved Numerical Differencing Analyzer (SINDA) models.
If the TPS mass is too heavy for the performance of the vehicle, the process may be repeated, altering the trajectory or some other input to reduce the TPS mass. The problem described is an example of the need for collaborative design and analysis, and analysis tools are being developed to facilitate such efforts. RECIPE is a cross-platform application capable of hosting a number of engineers and designers across the Internet in distributed, collaborative engineering environments. Such integrated system design environments allow collaborative team design analysis as well as individual or reduced-team studies. The analysis tools mentioned earlier commonly run on different platforms and are usually run by different people. To facilitate the large number of runs that may be needed, RECIPE connects the computer codes that calculate the trajectory data, heat rate data, and TPS masses so that the output from each tool is easily transferred to the model input files that need it. This methodology is being applied to launch vehicle thermal design problems to shorten the design cycle and enable the project team to evaluate design options. Results will be presented indicating the effectiveness of this as a collaborative design tool.
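The coupling loop described above (trajectory, then aeroheating, then TPS sizing, then back to vehicle mass) can be sketched as a fixed-point iteration. The three stub functions merely stand in for POST, MINIVER, and SINDA; their closed forms and every constant are illustrative assumptions, not the real codes:

```python
# Sketch of the coupled TPS sizing loop. Each stub stands in for one tool
# in the chain (POST -> MINIVER -> SINDA); all relations and constants are
# invented to make the feedback loop concrete.

def trajectory_peak_velocity(gross_mass_kg):
    # Heavier vehicle -> assumed slightly lower achievable peak velocity.
    return 4500.0 * (80000.0 / gross_mass_kg) ** 0.1   # m/s

def peak_heat_rate(velocity_ms):
    # Convective heating grows steeply with velocity (illustrative law).
    return 2.0e-9 * velocity_ms ** 3                   # W/cm^2

def tps_mass(heat_rate):
    # Required insulation mass assumed proportional to peak heat rate.
    return 10.0 * heat_rate                            # kg

dry_mass = 60000.0          # kg, vehicle without TPS (assumed)
tps = 3000.0                # kg, initial TPS mass guess
for iteration in range(50):
    v = trajectory_peak_velocity(dry_mass + tps)       # "POST" step
    q = peak_heat_rate(v)                              # "MINIVER" step
    new_tps = tps_mass(q)                              # "SINDA" step
    if abs(new_tps - tps) < 1.0:                       # converged within 1 kg
        break
    tps = new_tps
print(f"converged TPS mass ~{tps:.0f} kg after {iteration} iterations")
```

The weak mass-to-velocity coupling assumed here makes the iteration contract quickly; with stronger coupling, the trajectory or other inputs would need to be altered, as the abstract notes.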

  10. An Architecture for Performance Optimization in a Collaborative Knowledge-Based Approach for Wireless Sensor Networks

    PubMed Central

    Gadeo-Martos, Manuel Angel; Fernandez-Prieto, Jose Angel; Canada-Bago, Joaquin; Velasco, Juan Ramon

    2011-01-01

Over the past few years, Intelligent Spaces (ISs) have received the attention of many Wireless Sensor Network researchers. Recently, several studies have been devoted to identifying their common capacities and to setting up ISs over these networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks for the purpose of implementing ISs. This work presents a distributed architecture proposal for collaborative Fuzzy Rule-Based Systems embedded in Wireless Sensor Networks, which has been designed to optimize the implementation of ISs. This architecture includes the following: (a) an optimized design for the inference engine; (b) a visual interface; (c) a module to reduce the redundancy and complexity of the knowledge bases; (d) a module to evaluate the accuracy of the new knowledge base; (e) a module to adapt the format of the rules to the structure used by the inference engine; and (f) a communications protocol. As a real-world application of this architecture and the proposed methodologies, we show an application to the problem of modeling two plagues of the olive tree: prays (olive moth, Prays oleae Bern.) and repilo (caused by the fungus Spilocaea oleagina). The results show that the architecture presented in this paper significantly decreases the consumption of resources (memory, CPU and battery) without a substantial decrease in the accuracy of the inferred values. PMID:22163687
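As a rough illustration of the kind of inference such an embedded Fuzzy Rule-Based System performs, the sketch below runs a two-rule Mamdani inference with triangular membership functions and centroid defuzzification. The variables, membership ranges, and rules are invented for the demo and are not the paper's knowledge base:

```python
import numpy as np

# Toy fuzzy rule-based inference of the sort a sensor node might run.
# Membership functions, rules, and ranges are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def infer_risk(temp_c, humidity_pct):
    """Two-rule Mamdani inference with centroid defuzzification."""
    risk = np.linspace(0.0, 1.0, 101)        # output universe of discourse
    # Rule 1: IF temperature is warm AND humidity is high THEN risk is high
    w1 = min(tri(temp_c, 15, 22, 30), tri(humidity_pct, 60, 85, 100))
    # Rule 2: IF temperature is cool OR humidity is low THEN risk is low
    w2 = max(tri(temp_c, 0, 8, 16), tri(humidity_pct, 0, 20, 55))
    # Clip each rule's output set by its firing strength, then aggregate.
    aggregated = np.maximum(np.minimum(w1, tri(risk, 0.5, 1.0, 1.5)),
                            np.minimum(w2, tri(risk, -0.5, 0.0, 0.5)))
    if aggregated.sum() == 0.0:
        return 0.5                           # no rule fires: neutral output
    return float((risk * aggregated).sum() / aggregated.sum())

print(f"inferred plague risk (warm, humid): {infer_risk(21.0, 90.0):.2f}")
print(f"inferred plague risk (cool, dry):   {infer_risk(6.0, 30.0):.2f}")
```

The paper's optimizations target exactly this computation: a leaner rule base and inference engine reduce the memory, CPU, and battery cost of evaluating such rules on each node.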

  11. An architecture for performance optimization in a collaborative knowledge-based approach for wireless sensor networks.

    PubMed

    Gadeo-Martos, Manuel Angel; Fernandez-Prieto, Jose Angel; Canada-Bago, Joaquin; Velasco, Juan Ramon

    2011-01-01

Over the past few years, Intelligent Spaces (ISs) have received the attention of many Wireless Sensor Network researchers. Recently, several studies have been devoted to identifying their common capacities and to setting up ISs over these networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks for the purpose of implementing ISs. This work presents a distributed architecture proposal for collaborative Fuzzy Rule-Based Systems embedded in Wireless Sensor Networks, which has been designed to optimize the implementation of ISs. This architecture includes the following: (a) an optimized design for the inference engine; (b) a visual interface; (c) a module to reduce the redundancy and complexity of the knowledge bases; (d) a module to evaluate the accuracy of the new knowledge base; (e) a module to adapt the format of the rules to the structure used by the inference engine; and (f) a communications protocol. As a real-world application of this architecture and the proposed methodologies, we show an application to the problem of modeling two plagues of the olive tree: prays (olive moth, Prays oleae Bern.) and repilo (caused by the fungus Spilocaea oleagina). The results show that the architecture presented in this paper significantly decreases the consumption of resources (memory, CPU and battery) without a substantial decrease in the accuracy of the inferred values.

  12. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than the bi-level approaches, but without their computational difficulties. Keywords: autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis.
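The bi-level structure of Collaborative Optimization referred to above can be sketched in its standard form from the MDO literature (generic symbols, not necessarily the paper's notation). The system level chooses targets z for the shared variables; each discipline then minimizes its discrepancy from those targets subject to its own constraints:

```latex
% System-level problem: choose targets z for the shared design/coupling variables
\begin{align*}
\min_{z} \quad & F(z) \\
\text{s.t.} \quad & J_i(z) = 0, \qquad i = 1, \dots, N,
\end{align*}
% where each J_i is the optimal value of discipline i's subproblem:
\begin{align*}
J_i(z) \;=\; \min_{x_i} \quad & \lVert x_i - z \rVert^2 \\
\text{s.t.} \quad & g_i(x_i) \le 0 .
\end{align*}
```

The equality constraints $J_i(z) = 0$ are the system-level constraints the abstract identifies as a potential source of analytical and computational difficulty for optimization algorithms.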

  13. 2015 ESGF Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, D. N.

    2015-06-22

The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration whose purpose is to develop the software infrastructure needed to facilitate and empower the study of climate change on a global scale. ESGF’s architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces. The cornerstones of its interoperability are the peer-to-peer messaging, which is continuously exchanged among all nodes in the federation; a shared architecture for search and discovery; and a security infrastructure based on industry standards. ESGF integrates popular application engines available from the open-source community with custom components (for data publishing, searching, user interface, security, and messaging) that were developed collaboratively by the team. The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP)—output used by the Intergovernmental Panel on Climate Change assessment reports. ESGF is a successful example of integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community.
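The federated search model described above can be illustrated with a toy fan-out query. ESGF's actual search service is a distributed REST infrastructure; here peer-node catalogs are stubbed as in-memory lists, and replicas of the same dataset discovered on different nodes are collapsed by identifier (all node names and records are invented):

```python
# Toy federated search across peer nodes: query every node's catalog,
# merge hits, and collapse replicas by dataset id. Node names and
# records are illustrative, not real ESGF holdings.

NODE_CATALOGS = {
    "node-a.example": [
        {"id": "cmip5.tas.r1i1p1", "variable": "tas", "project": "CMIP5"},
        {"id": "cmip5.pr.r1i1p1",  "variable": "pr",  "project": "CMIP5"},
    ],
    "node-b.example": [
        {"id": "cmip5.tas.r1i1p1", "variable": "tas", "project": "CMIP5"},  # replica
        {"id": "cmip5.tas.r2i1p1", "variable": "tas", "project": "CMIP5"},
    ],
}

def federated_search(**query):
    """Query every peer node, merge hits, and collapse replicas by id."""
    merged = {}
    for node, catalog in NODE_CATALOGS.items():
        for record in catalog:
            if all(record.get(k) == v for k, v in query.items()):
                hit = merged.setdefault(record["id"], dict(record, replicas=[]))
                hit["replicas"].append(node)
    return list(merged.values())

for h in federated_search(variable="tas"):
    print(h["id"], "replicas:", len(h["replicas"]))
```

In the real federation each node answers queries over its own index, so the merge step is what gives users a single logical view of petabytes of distributed data.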

  14. Coordinated Collaboration between Heterogeneous Distributed Energy Resources

    DOE PAGES

    Abdollahy, Shahin; Lavrova, Olga; Mammoli, Andrea

    2014-01-01

A power distribution feeder where a heterogeneous set of distributed energy resources is deployed is examined by simulation. The energy resources include PV, battery storage, a natural gas GenSet, fuel cells, and active thermal storage for commercial buildings. The resource scenario considered is one that may exist in a not-too-distant future. Two cases of interaction between different resources are examined. One interaction involves a GenSet used to partially offset the duty cycle of a smoothing battery connected to a large PV system. The other example involves the coordination of twenty thermal storage devices, each associated with a commercial building. Storage devices are intended to provide maximum benefit to the building, but it is shown that this can have a deleterious effect on the overall system unless the action of the individual storage devices is coordinated. A network-based approach is also introduced to assign an effectiveness metric to all available resources that take part in coordinated operation. The main finding is that it is possible to achieve synergy between DERs on a system; however, this requires a unified strategy to coordinate the action of all devices in a decentralized way.

  15. [Interagency collaboration in Spanish scientific production in nursing: social network analysis].

    PubMed

    Almero-Canet, Amparo; López-Ferrer, Mayte; Sales-Orts, Rafael

    2013-01-01

The objectives of this paper are to analyze Spanish scientific production in nursing, define its temporal evolution and its geographical and institutional distribution, and observe interinstitutional collaboration. We analyze a comprehensive sample of Spanish scientific production in the nursing area extracted from the multidisciplinary database SciVerse Scopus. Nursing scientific production grows over time. The collaboration rate is 3.7 authors per paper, and 61% of authors publish only one paper. Barcelona and Madrid are the provinces with the highest numbers of authors. Most authors belong to the hospital environment, followed closely by authors belonging to the university. The institutions that collaborate most, sharing authorship of articles, are the University of Barcelona, the Autonomous University of Barcelona and the Clinic Hospital of Barcelona. Nursing scientific production has been increasing since the discipline's incorporation into the university. The collaboration rate found is higher than that reported in other studies, and there is a slight decrease in occasional authors. The paper discusses the outlook for scientific collaboration in nursing in Spain at the level of institutions, through a network graph of article co-authorship, observing the institutions' distribution, importance and interactions or lack thereof. There is a strong need to use international databases for research, care and teaching, in addition to national specialized information resources. Professionals are encouraged to normalize the signatures on their papers, both surnames and institutional affiliations. The study confirms limited cooperation with foreign institutions, although there is an increasing trend of collaboration between Spanish authors in this discipline. Three clearly defined interinstitutional collaboration patterns are observed. Copyright © 2012 Elsevier España, S.L. All rights reserved.

  16. Grid-based implementation of XDS-I as part of image-enabled EHR for regional healthcare in Shanghai.

    PubMed

    Zhang, Jianguo; Zhang, Kai; Yang, Yuanyuan; Sun, Jianyong; Ling, Tonghui; Wang, Guangrong; Ling, Yun; Peng, Derong

    2011-03-01

Due to the rapid growth of Shanghai to 20 million residents, the balance between healthcare supply and demand has become an important issue. The local government hopes to ameliorate this problem by developing an image-enabled electronic healthcare record (EHR) sharing mechanism between certain hospitals. This system is designed to enable healthcare collaboration and reduce healthcare costs by allowing review of prior examination data obtained at other hospitals. Here, we present a design method and implementation solution for image-enabled EHRs (i-EHRs) and describe their implementation in four hospitals and one regional healthcare information center, as well as preliminary operating results. We designed the i-EHR with a service-oriented architecture (SOA) and combined it with grid-based image management and distribution capabilities compliant with the IHE XDS-I integration profile. The i-EHR comprises seven major components and common services. To achieve quick response when retrieving images in low-bandwidth network environments, we use a JPEG2000 interactive protocol and progressive display technique to transmit images from a Grid Agent acting as the Imaging Source Actor to the PACS workstation acting as the Imaging Consumer Actor. The first phase of pilot testing of our image-enabled EHR was implemented in the Zhabei district of Shanghai for imaging document sharing and collaborative diagnosis. The pilot testing began in October 2009; since then, more than 50 examinations daily have been transferred between the City North Hospital and the three community hospitals for collaborative diagnosis. The feedback from users at all hospitals is very positive, with respondents stating the system is easy to use and reporting no interference with their normal radiology diagnostic operation.
The i-EHR system can provide event-driven automatic image delivery for collaborative imaging diagnosis across multiple hospitals based on workflow requirements. This project demonstrated that a grid-based implementation of IHE XDS-I for an image-enabled EHR can scale effectively to serve a regional healthcare solution with collaborative imaging services.

  17. Trans-oceanic Remote Power Hardware-in-the-Loop: Multi-site Hardware, Integrated Controller, and Electric Network Co-simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake R.; Palmintier, Bryan S.; Rowe, Daniel

Electric system operators are increasingly concerned with the potential system-wide impacts of the large-scale integration of distributed energy resources (DERs) including voltage control, protection coordination, and equipment wear. This prompts a need for new simulation techniques that can simultaneously capture all the components of these large integrated smart grid systems. This paper describes a novel platform that combines three emerging research areas: power systems co-simulation, power hardware in the loop (PHIL) simulation, and lab-lab links. The platform is distributed, real-time capable, allows for easy internet-based connection from geographically-dispersed participants, and is software platform agnostic. We demonstrate its utility by studying real-time PHIL co-simulation of coordinated solar PV firming control of two inverters connected in multiple electric distribution network models, prototypical of U.S. and Australian systems. Here, the novel trans-pacific closed-loop system simulation was conducted in real-time using a power network simulator and physical PV/battery inverter at power at the National Renewable Energy Laboratory in Golden, CO, USA and a physical PV inverter at power at the Commonwealth Scientific and Industrial Research Organisation's Energy Centre in Newcastle, NSW, Australia. This capability enables smart grid researchers throughout the world to leverage their unique simulation capabilities for multi-site collaborations that can effectively simulate and validate emerging smart grid technology solutions.

  18. Trans-oceanic Remote Power Hardware-in-the-Loop: Multi-site Hardware, Integrated Controller, and Electric Network Co-simulation

    DOE PAGES

    Lundstrom, Blake R.; Palmintier, Bryan S.; Rowe, Daniel; ...

    2017-07-24

Electric system operators are increasingly concerned with the potential system-wide impacts of the large-scale integration of distributed energy resources (DERs) including voltage control, protection coordination, and equipment wear. This prompts a need for new simulation techniques that can simultaneously capture all the components of these large integrated smart grid systems. This paper describes a novel platform that combines three emerging research areas: power systems co-simulation, power hardware in the loop (PHIL) simulation, and lab-lab links. The platform is distributed, real-time capable, allows for easy internet-based connection from geographically-dispersed participants, and is software platform agnostic. We demonstrate its utility by studying real-time PHIL co-simulation of coordinated solar PV firming control of two inverters connected in multiple electric distribution network models, prototypical of U.S. and Australian systems. Here, the novel trans-pacific closed-loop system simulation was conducted in real-time using a power network simulator and physical PV/battery inverter at power at the National Renewable Energy Laboratory in Golden, CO, USA and a physical PV inverter at power at the Commonwealth Scientific and Industrial Research Organisation's Energy Centre in Newcastle, NSW, Australia. This capability enables smart grid researchers throughout the world to leverage their unique simulation capabilities for multi-site collaborations that can effectively simulate and validate emerging smart grid technology solutions.

  19. Cells distribution in the modeling of fibrosis. Comment on "Towards a unified approach in the modeling of fibrosis: A review with research perspectives" by Martine Ben Amar and Carlo Bianca

    NASA Astrophysics Data System (ADS)

    Abdel-Aty, Mahmoud

    2016-07-01

The modeling of a complex system requires the analysis of all its microscopic constituents and in particular of their interactions [1]. Interest in this research field has increased in light of recent developments in the information sciences. However, interaction among scholars working in various fields of the applied sciences can be considered the true motor for the definition of a general framework for the analysis of complex systems. In particular, biological systems constitute the platform where many scientists have decided to collaborate in order to gain a global description of the system. Among others, the cancer-immune system competition (see [2] and the review papers [3,4]) has attracted much attention.

  20. Application description and policy model in collaborative environment for sharing of information on epidemiological and clinical research data sets.

    PubMed

    de Carvalho, Elias César Araujo; Batilana, Adelia Portero; Simkins, Julie; Martins, Henrique; Shah, Jatin; Rajgor, Dimple; Shah, Anand; Rockart, Scott; Pietrobon, Ricardo

    2010-02-19

Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets, focusing on epidemiological and clinical research in a collaborative environment, and (2) create a policy model placing this collaborative environment into the current scientific social context. The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow sharing of information about epidemiological and clinical study data sets in a collaborative environment; the platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and the funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications.
Based on our empirical observations and the resulting model, the social network environment surrounding the application can assist epidemiologists and clinical researchers in contributing and searching for metadata in a collaborative environment, thus potentially facilitating collaboration among research communities distributed around the globe.
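The System Dynamics comparison of policy strategies can be sketched with a toy stock-and-flow model; the rates and policy multipliers below are invented for illustration and are not the paper's calibrated values:

```python
# Toy System Dynamics sketch: shared data sets are a stock that grows
# through a reinforcing contribution loop and drives a publication flow.
# All rates and policy parameters are illustrative assumptions.

def simulate(sharing_rate, years=10, dt=0.25):
    """Euler-integrate a two-stock model: data sets and publications."""
    datasets, publications = 130.0, 0.0      # initial stocks (130 from the paper)
    for _ in range(int(years / dt)):
        contributions = sharing_rate * datasets      # reinforcing loop
        datasets += contributions * dt
        publications += 0.4 * datasets * dt          # assumed pubs per dataset-year
    return publications

scenarios = {
    "current practice":        simulate(sharing_rate=0.02),
    "metadata sharing policy": simulate(sharing_rate=0.08),
    "metadata + data sharing": simulate(sharing_rate=0.15),
}
for name, pubs in scenarios.items():
    print(f"{name}: ~{pubs:.0f} publications over 10 years")
```

With any positive parameters of this shape, the ordering matches the reported qualitative result: every sharing policy outperforms current practice, and the combined policy performs best.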
