2008-10-01
Agents in the DEEP architecture extend and use the Java Agent DEvelopment (JADE) framework. DEEP requires a distributed multi-agent system and a...framework to help simplify the implementation of this system. JADE was chosen because it is fully implemented in Java and supports these requirements.
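For context, a minimal JADE agent looks like the sketch below. The class name and printed strings are illustrative, not taken from DEEP; only the Agent/behaviour API is JADE's own.

```java
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

// Minimal JADE agent: a DEEP-style agent would extend Agent (directly or
// indirectly) and register behaviours that react to incoming ACL messages.
public class DeepStyleAgent extends Agent {
    @Override
    protected void setup() {
        System.out.println("Agent " + getLocalName() + " started.");
        // A cyclic behaviour runs repeatedly for the agent's lifetime.
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();  // non-blocking receive
                if (msg != null) {
                    // Domain-specific message handling would go here.
                    System.out.println("Received: " + msg.getContent());
                } else {
                    block();  // sleep until a message arrives
                }
            }
        });
    }
}
```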
On effectiveness of network sensor-based defense framework
NASA Astrophysics Data System (ADS)
Zhang, Difan; Zhang, Hanlin; Ge, Linqiang; Yu, Wei; Lu, Chao; Chen, Genshe; Pham, Khanh
2012-06-01
Cyber attacks are increasing in frequency, impact, and complexity, exposing extensive network vulnerabilities with the potential for serious damage. Defending against cyber attacks calls for distributed, collaborative monitoring, detection, and mitigation. To this end, we develop a network sensor-based defense framework with the aim of handling network security awareness, mitigation, and prediction. We implement a prototypical system and show its effectiveness in detecting known attacks, such as port scanning and distributed denial-of-service (DDoS). Based on this framework, we also implement statistical and sequential testing-based detection techniques and compare their respective detection performance. Future defensive algorithms can be provisioned in our proposed framework for combating cyber attacks.
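To make the sequential-testing idea concrete (the paper's exact algorithms are not reproduced in the abstract), a detector in this family accumulates a log-likelihood ratio over per-connection outcomes, in the spirit of a threshold random walk; every constant below is an illustrative assumption.

```java
// Sketch of sequential hypothesis testing for port-scan detection.
// The probabilities and thresholds are illustrative assumptions,
// not values from the paper.
public class SequentialScanDetector {
    // Likelihood of a connection attempt succeeding under each hypothesis.
    private static final double P_SUCCESS_BENIGN = 0.8;
    private static final double P_SUCCESS_SCANNER = 0.2;
    // Decision thresholds on the cumulative log-likelihood ratio.
    private static final double UPPER = Math.log(99.0);  // declare scanner
    private static final double LOWER = Math.log(0.01);  // declare benign

    private double llr = 0.0;  // cumulative log-likelihood ratio

    /** Update with one connection outcome; returns a verdict or null if undecided. */
    public String observe(boolean connectionSucceeded) {
        double pScan = connectionSucceeded ? P_SUCCESS_SCANNER : 1 - P_SUCCESS_SCANNER;
        double pBenign = connectionSucceeded ? P_SUCCESS_BENIGN : 1 - P_SUCCESS_BENIGN;
        llr += Math.log(pScan / pBenign);
        if (llr >= UPPER) return "scanner";
        if (llr <= LOWER) return "benign";
        return null;  // keep observing
    }
}
```

The appeal over fixed-window statistical tests is that a decision is reached after as few observations as the evidence allows.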
Software design and implementation concepts for an interoperable medical communication framework.
Besting, Andreas; Bürger, Sebastian; Kasparick, Martin; Strathen, Benjamin; Portheine, Frank
2018-02-23
The new IEEE 11073 service-oriented device connectivity (SDC) standard proposals for networked point-of-care and surgical devices constitute the basis for improved interoperability due to their independence of vendors. To accelerate the distribution of the standard, a reference implementation is indispensable. However, the implementation of such a framework has to overcome several non-trivial challenges. First, the high level of complexity of the underlying standard must be reflected in the software design. An efficient implementation has to consider the limited resources of the underlying hardware. Moreover, the framework's purpose of realizing a distributed system demands a high degree of reliability of the framework itself and its internal mechanisms. Additionally, a framework must provide an easy-to-use and fail-safe application programming interface (API). In this work, we address these challenges by discussing suitable software engineering principles and practical coding guidelines. A descriptive model is developed that identifies key strategies. General feasibility is shown by outlining environments in which our implementation has been utilized.
Design and Implementation of Distributed Crawler System Based on Scrapy
NASA Astrophysics Data System (ADS)
Fan, Yuhao
2018-01-01
At present, some large-scale search engines at home and abroad only provide users with non-customizable search services, and a single-machine web crawler cannot handle such demanding crawl tasks. In this paper, through study of the original Scrapy framework, the framework is improved by combining Scrapy with Redis: a distributed web-crawler system based on the Scrapy framework is designed and implemented, and the Bloom filter algorithm is applied to the dupefilter module to reduce memory consumption. The movie information crawled from Douban is stored in MongoDB, so that the data can be processed and analyzed. The results show that the distributed crawler system based on the Scrapy framework is more efficient and stable than a single-machine web crawler system.
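The memory saving from the Bloom filter comes from replacing an exact visited-URL set with a fixed-size bit array that admits rare false positives but no false negatives. The paper's dupefilter is written in Python inside Scrapy; the sketch below is a language-neutral illustration in Java with assumed sizing.

```java
import java.util.BitSet;

// Illustrative Bloom filter for URL de-duplication: k hash probes into a
// bit array; a "seen" answer may rarely be wrong, a "new" answer never is.
public class UrlBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public UrlBloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    private int probe(String url, int i) {
        // Derive k probe positions from two base hashes (Kirsch-Mitzenmacher).
        int h1 = url.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9E3779B9;
        return Math.floorMod(h1 + i * h2, size);
    }

    /** Returns true if the URL was possibly seen before; marks it as seen. */
    public boolean checkAndAdd(String url) {
        boolean seen = true;
        for (int i = 0; i < hashes; i++) {
            int idx = probe(url, i);
            if (!bits.get(idx)) { seen = false; bits.set(idx); }
        }
        return seen;
    }
}
```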
USDA-ARS?s Scientific Manuscript database
AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic and water quality (H/WQ) simulation components under the Java Connection Framework (JCF) and the Object Modeling System (OMS) environmental modeling framework. AgES-W is implicitly scala...
Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando
2017-01-01
Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16), (2) training facility design using Distributed Simulation, (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally. This corresponded with the qualitative component, which augmented the quantitative findings; trainees' user feedback was used to perform iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design, and consistent evaluation techniques from conceptual, development, and implementation stages, fully immersive simulation techniques for cardiovascular specialities are achievable and have the potential to be implemented more broadly.
Reusable and Extensible High Level Data Distributions
NASA Technical Reports Server (NTRS)
Diaconescu, Roxana E.; Chamberlain, Bradford; James, Mark L.; Zima, Hans P.
2005-01-01
This paper presents a reusable design of a data distribution framework for data-parallel high-performance applications. We are implementing the design in the context of the Chapel high-productivity programming language. Distributions in Chapel are a means to express locality in systems composed of large numbers of processor and memory components connected by a network. Since distributions have a great effect on the performance of applications, it is important that the distribution strategy can be chosen by a user. At the same time, high-productivity concerns require that the user is shielded from error-prone, tedious details such as communication and synchronization. We propose an approach to distributions that enables the user to refine a language-provided distribution type and adjust it to optimize the performance of the application. Additionally, we conceal from the user low-level communication and synchronization details to increase productivity. To emphasize the generality of our distribution machinery, we present its abstract design in the form of a design pattern, which is independent of a concrete implementation. To illustrate the applicability of our distribution framework design, we outline the implementation of data distributions in terms of the Chapel language.
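Read as a design pattern, the proposal separates a small user-refinable mapping interface from the framework-owned communication machinery. A hedged, implementation-neutral sketch in Java (identifiers are ours, not Chapel's):

```java
// Sketch of the data-distribution pattern described above: users refine the
// index-to-locale mapping, while the framework keeps communication and
// synchronization hidden behind this interface. Names are illustrative.
interface Distribution {
    int localeOf(int globalIndex);      // which locale owns this index
    int localIndexOf(int globalIndex);  // offset within the owning locale
}

// A language-provided default that a user could subclass and tune.
class BlockDistribution implements Distribution {
    private final int blockSize;
    BlockDistribution(int numElements, int numLocales) {
        this.blockSize = (numElements + numLocales - 1) / numLocales;
    }
    public int localeOf(int g)     { return g / blockSize; }
    public int localIndexOf(int g) { return g % blockSize; }
}
```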
Arcade: A Web-Java Based Framework for Distributed Computing
NASA Technical Reports Server (NTRS)
Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.
FRIEND Engine Framework: a real time neurofeedback client-server system for neuroimaging studies
Basilio, Rodrigo; Garrido, Griselda J.; Sato, João R.; Hoefle, Sebastian; Melo, Bruno R. P.; Pamplona, Fabricio A.; Zahn, Roland; Moll, Jorge
2015-01-01
In this methods article, we present a new implementation of a recently reported FSL-integrated neurofeedback tool, the standalone version of “Functional Real-time Interactive Endogenous Neuromodulation and Decoding” (FRIEND). We will refer to this new implementation as the FRIEND Engine Framework. The framework comprises a client-server cross-platform solution for real time fMRI and fMRI/EEG neurofeedback studies, enabling flexible customization or integration of graphical interfaces, devices, and data processing. This implementation allows a fast setup of novel plug-ins and frontends, which can be shared with the user community at large. The FRIEND Engine Framework is freely distributed for non-commercial, research purposes. PMID:25688193
A Conceptual Framework for Measuring R&D Product Impact.
ERIC Educational Resources Information Center
Hull, William L.; And Others
A framework to aid in estimating the impact from educational research and development (R&D) products was developed at the National Center for Research in Vocational Education at the Ohio State University. The dimensions of the framework (product development, distribution, implementation, utilization and effects) are explained in detail. The…
Assessment of the implementation fidelity of the Arctic Char Distribution Project in Nunavik, Quebec
Gautier, Lara; Pirkle, Catherine M; Furgal, Christopher; Lucas, Michel
2016-01-01
Background In September 2011, the Nunavik Regional Board of Health and Social Services began supporting the Arctic Char Distribution Project (AC/DP) for pregnant women. This initiative promoted consumption of the fish Arctic char—a traditional Inuit food—by pregnant women living in villages of Nunavik, an area in northern Quebec (Canada) inhabited predominantly by people of Inuit ethnicity. This intervention was intended to reduce exposure to contaminants and improve food security in Inuit communities. Methods We assessed the project's implementation based on data collected from background documentation, field notes and qualitative interviews with project recipients and implementers. Themes emerging from the data are critically discussed in the light of the framework for implementation fidelity developed by Carroll et al in 2007. Results Pregnant women fully embraced the initiative because of its cultural appropriateness. However, project implementation was incomplete: first because it did not cover all intended geographic areas, and second because of a recurring inconsistency in the supply and distribution of the fish. In addition, the initiative has been inconsistently funded and relies on multiple funding sources. Discussion This work highlights the extent to which project complexity can impede successful implementation, particularly in terms of communication and coordination. We provide recommendations for improving project implementation and suggest amendments to the implementation fidelity framework. PMID:28588959
Benchmark Intelligent Agent Systems for Distributed Battle Tracking
2008-06-20
services in the military and other domains, each entity in the benchmark system exposes a standard set of Web services. Jess (Java Expert Shell...System) is a rule engine for the Java platform and is an interpreter for the Jess rule language. It is used here to implement policies that maintain...battle tracking system (DBTS), maintaining distributed situation awareness. The Java Agent DEvelopment (JADE) framework is a software framework
Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu
2010-01-01
Purpose Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. Methods A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. Results The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. Conclusion A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users. PMID:20714933
Maciel, Anderson; Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu
2011-07-01
Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users.
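The decoupling described in both records above amounts to running each concern in its own loop at its own rate against a shared state snapshot, so the 1,000 Hz haptic loop is never stalled by slower rendering or a delayed network peer. A schematic sketch, with rates and names assumed:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Schematic of a decoupled simulation: each module runs at its own rate and
// reads the latest published state snapshot, so a slow renderer never blocks
// the ~1 kHz haptic loop. Rates and class names are illustrative.
public class DecoupledSimulation {
    private volatile double[] sharedState = new double[]{0.0};

    public void start() {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
        // Haptic loop: ~1000 Hz, reads the latest state snapshot.
        pool.scheduleAtFixedRate(
            () -> renderForce(sharedState), 0, 1, TimeUnit.MILLISECONDS);
        // Visual loop: ~60 Hz, also reads the snapshot independently.
        pool.scheduleAtFixedRate(
            () -> drawFrame(sharedState), 0, 16, TimeUnit.MILLISECONDS);
    }

    private void renderForce(double[] state) { /* haptic update */ }
    private void drawFrame(double[] state)   { /* visual update */ }
}
```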
DOT National Transportation Integrated Search
2007-01-01
An Internet-based, spatiotemporal Geotechnical Database Management System (GDBMS) Framework was implemented at the Virginia Department of Transportation (VDOT) in 2002 to manage geotechnical data using a distributed Geographical Information System (G...
Programming model for distributed intelligent systems
NASA Technical Reports Server (NTRS)
Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.
1988-01-01
A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.
Implementing the Gaia Astrometric Global Iterative Solution (AGIS) in Java
NASA Astrophysics Data System (ADS)
O'Mullane, William; Lammers, Uwe; Lindegren, Lennart; Hernandez, Jose; Hobbs, David
2011-10-01
This paper provides a description of the Java software framework which has been constructed to run the Astrometric Global Iterative Solution for the Gaia mission. This is the mathematical framework to provide the rigid reference frame for Gaia observations from the Gaia data itself. This process makes Gaia a self-calibrated, input-catalogue-independent mission. The framework is highly distributed, typically running on a cluster of machines with a database back end. All code is written in the Java language. We describe the overall architecture and some of the details of the implementation.
Framework for Development of Object-Oriented Software
NASA Technical Reports Server (NTRS)
Perez-Poveda, Gus; Ciavarella, Tony; Nieten, Dan
2004-01-01
The Real-Time Control (RTC) Application Framework is a high-level software framework written in C++ that supports the rapid design and implementation of object-oriented application programs. This framework provides built-in functionality that solves common software development problems within distributed client-server, multi-threaded, and embedded programming environments. When using the RTC Framework to develop software for a specific domain, designers and implementers can focus entirely on the details of the domain-specific software rather than on creating custom solutions, utilities, and frameworks for the complexities of the programming environment. The RTC Framework was originally developed as part of a Space Shuttle Launch Processing System (LPS) replacement project called Checkout and Launch Control System (CLCS). As a result of the framework's development, CLCS software development time was reduced by 66 percent. The framework is generic enough for developing applications outside of the launch-processing system domain. Other applicable high-level domains include command and control systems and simulation/training systems.
Parallel Molecular Distributed Detection With Brownian Motion.
Rogers, Uri; Koh, Min-Sung
2016-12-01
This paper explores the in vivo distributed detection of an undesired biological agent's (BA's) biomarkers by a group of biologically sized nanomachines in an aqueous medium under drift. The term distributed indicates that the system information relative to the BA's presence is dispersed across the collection of nanomachines, where each nanomachine possesses limited communication, computation, and movement capabilities. Using Brownian motion with drift, a probabilistic detection and optimal data fusion framework, coined molecular distributed detection, is introduced that combines theory from both molecular communication and distributed detection. Using the optimal data fusion framework as a guide, simulation indicates that a sub-optimal fusion method exists, allowing for a significant reduction in implementation complexity while retaining BA detection accuracy.
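For reference, the classical optimal fusion rule for distributed binary detection with independent local decisions is the Chair-Varshney rule; the paper's optimal framework presumably takes a related form, but the expression below is the textbook version, not the authors' own:

```latex
% Classical Chair-Varshney fusion rule (shown for reference; the paper's
% exact rule may differ). With local decisions u_i in {0,1}, decide the
% biological agent is present when
\[
  \log\frac{P(H_1)}{P(H_0)}
  + \sum_{i:\,u_i = 1} \log\frac{P_{d,i}}{P_{f,i}}
  + \sum_{i:\,u_i = 0} \log\frac{1 - P_{d,i}}{1 - P_{f,i}} \;>\; 0,
\]
% where P_{d,i} and P_{f,i} are the detection and false-alarm
% probabilities of the i-th nanomachine.
```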
Array distribution in data-parallel programs
NASA Technical Reports Server (NTRS)
Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert; Sheffler, Thomas J.
1994-01-01
We consider distribution at compile time of the array data in a distributed-memory implementation of a data-parallel program written in a language like Fortran 90. We allow dynamic redistribution of data and define a heuristic algorithmic framework that chooses distribution parameters to minimize an estimate of program completion time. We represent the program as an alignment-distribution graph. We propose a divide-and-conquer algorithm for distribution that initially assigns a common distribution to each node of the graph and successively refines this assignment, taking computation, realignment, and redistribution costs into account. We explain how to estimate the effect of distribution on computation cost and how to choose a candidate set of distributions. We present the results of an implementation of our algorithms on several test problems.
Mobile Autonomous Sensing Unit (MASU): A Framework That Supports Distributed Pervasive Data Sensing
Medina, Esunly; Lopez, David; Meseguer, Roc; Ochoa, Sergio F.; Royo, Dolors; Santos, Rodrigo
2016-01-01
Pervasive data sensing is a major issue that cuts across various research areas and application domains. It allows identifying people's behaviour and patterns without overwhelming the monitored persons. Although there are many pervasive data sensing applications, they are typically focused on addressing specific problems in a single application domain, making them difficult to generalize or reuse. On the other hand, the platforms for supporting pervasive data sensing impose restrictions on the devices and operational environments that make them unsuitable for monitoring loosely-coupled or fully distributed work. To help address this challenge, this paper presents a framework that supports distributed pervasive data sensing in a generic way. Developers can use this framework to facilitate the implementation of their applications, thus reducing the complexity and effort of such an activity. The framework was evaluated using simulations and also through an empirical test, and the obtained results indicate that it is useful to support such a sensing activity in loosely-coupled or fully distributed work scenarios. PMID:27409617
OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing
NASA Astrophysics Data System (ADS)
Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping
2017-02-01
The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.
1986-09-01
Modalities for the Implementation of Business-Level Strategies
Skivington, James; Daft, Richard
Based on structuration theory, organization framework and process are proposed as two... (TR-ONR-DG-21; approved for public release, distribution unlimited)
US Army Research Laboratory Visualization Framework Architecture Document
2018-01-11
Visualization of network science experimentation results is generally achieved using stovepipe...report documents the ARL Visualization Framework system design and specific details of its implementation. Subject terms: visualization. (Approved for public release; distribution is unlimited.)
VISTILES: Coordinating and Combining Co-located Mobile Devices for Visual Data Exploration.
Langner, Ricardo; Horak, Tom; Dachselt, Raimund
2017-08-29
We present VISTILES, a conceptual framework that uses a set of mobile devices to distribute and coordinate visualization views for the exploration of multivariate data. In contrast to desktop-based interfaces for information visualization, mobile devices offer the potential to provide a dynamic and user-defined interface supporting co-located collaborative data exploration with different individual workflows. As part of our framework, we contribute concepts that enable users to interact with coordinated & multiple views (CMV) that are distributed across several mobile devices. The major components of the framework are: (i) dynamic and flexible layouts for CMV focusing on the distribution of views and (ii) an interaction concept for smart adaptations and combinations of visualizations utilizing explicit side-by-side arrangements of devices. As a result, users can benefit from the possibility to combine devices and organize them in meaningful spatial layouts. Furthermore, we present a web-based prototype implementation as a specific instance of our concepts. This implementation provides a practical application case enabling users to explore a multivariate data collection. We also illustrate the design process including feedback from a preliminary user study, which informed the design of both the concepts and the final prototype.
NASA Astrophysics Data System (ADS)
Laban, Shaban; El-Desouky, Aly
2014-05-01
To achieve rapid, simple and reliable parallel processing of different types of tasks and big-data processing on any compute cluster, a lightweight messaging-based distributed application processing and workflow execution framework model is proposed. The framework is based on Apache ActiveMQ and the Simple (or Streaming) Text Oriented Message Protocol (STOMP). ActiveMQ, a popular and powerful open-source persistence messaging and integration patterns server with scheduler capabilities, acts as a message broker in the framework. STOMP provides an interoperable wire format that allows framework programs to talk and interact with each other and with ActiveMQ easily. In order to use the message broker efficiently, a unified message and topic naming pattern is utilized. Only three Python programs and a simple library, used to unify and simplify the implementation of ActiveMQ and the STOMP protocol, are needed to use the framework. A watchdog program is used to monitor, remove, add, start and stop any machine and/or its different tasks when necessary. For every machine, a single dedicated zoo keeper program is used to start the different functions or tasks (stompShell programs) needed for executing the user-required workflow. The stompShell instances execute workflow jobs based on received messages. A well-defined, simple and flexible message structure, based on JavaScript Object Notation (JSON), is used to build complex workflow systems; the JSON format is also used in configuration and in communication between machines and programs. The framework is platform independent. Although the framework is built in Python, the actual workflow programs or jobs can be implemented in any programming language. The generic framework can be used in small national data centres for processing seismological and radionuclide data received from the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). It is also possible to extend the use of the framework to monitoring the IDC pipeline. The detailed design, implementation, conclusions and future work of the proposed framework will be presented.
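Although the framework's own programs speak STOMP from Python, the same ActiveMQ broker also accepts JMS clients, which is one way other languages can inject tasks. A minimal Java/JMS sketch follows; the queue name and JSON fields are assumptions, not the framework's documented naming pattern.

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

// Publishes one JSON task message to an ActiveMQ broker via JMS. The
// framework in the abstract uses Python/STOMP; this sketch targets the same
// broker from Java. Queue name and JSON fields are illustrative.
public class TaskPublisher {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
            session.createProducer(session.createQueue("machine1.tasks"));
        TextMessage msg = session.createTextMessage(
            "{\"task\": \"process\", \"input\": \"waveform.dat\", \"priority\": 1}");
        producer.send(msg);
        connection.close();
    }
}
```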
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework
2012-01-01
Background For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909
Using Ada to implement the operations management system in a community of experts
NASA Technical Reports Server (NTRS)
Frank, M. S.
1986-01-01
An architecture is described for the Space Station Operations Management System (OMS), consisting of a distributed expert system framework implemented in Ada. The motivation for such a scheme is based on the desire to integrate the very diverse elements of the OMS while taking maximum advantage of knowledge based systems technology. Part of the foundation of an Ada based distributed expert system was accomplished in the form of a proof of concept prototype for the KNOMES project (Knowledge-based Maintenance Expert System). This prototype successfully used concurrently active experts to accomplish monitoring and diagnosis for the Remote Manipulator System. The basic concept of this software architecture is named ACTORS for Ada Cognitive Task ORganization Scheme. It is when one considers the overall problem of integrating all of the OMS elements into a cooperative system that the AI solution stands out. By utilizing a distributed knowledge based system as the framework for OMS, it is possible to integrate those components which need to share information in an intelligent manner.
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.
Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John
2012-12-05
For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
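A schematic of how such a search maps onto Hadoop: spectra are keyed so that each reducer scores only the candidate peptides sharing a key. The precursor-mass binning and record format below are assumptions for illustration, not Hydra's documented design.

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Schematic mapper for a MapReduce spectrum search: each input line is one
// spectrum, keyed by a precursor-mass bin so the reducer can score it
// against only the candidate peptides in that bin. The binning scheme and
// record layout are illustrative assumptions.
public class SpectrumBinMapper
        extends Mapper<LongWritable, Text, Text, Text> {
    private static final double BIN_WIDTH_DA = 1.0;

    @Override
    protected void map(LongWritable offset, Text spectrumRecord, Context ctx)
            throws IOException, InterruptedException {
        // Assume each record starts with "precursorMass\t..." .
        double mass = Double.parseDouble(
            spectrumRecord.toString().split("\t", 2)[0]);
        long bin = (long) (mass / BIN_WIDTH_DA);
        ctx.write(new Text("bin-" + bin), spectrumRecord);
    }
}
```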
Ory, Marcia G; Altpeter, Mary; Belza, Basia; Helduser, Janet; Zhang, Chen; Smith, Matthew Lee
2014-01-01
Dissemination and implementation (D&I) frameworks are increasingly being promoted in public health research. However, less is known about their uptake in the field, especially for diverse sets of programs. Limited questionnaires exist to assess the ways that frameworks can be utilized in program planning and evaluation. We present a case study from the United States that describes the implementation of the RE-AIM framework by state aging services providers and public health partners and a questionnaire that can be used to assess the utility of such frameworks in practice. An online questionnaire was developed to capture community perspectives about the utility of the RE-AIM framework. Distributed to project leads in 27 funded states in an evidence-based disease prevention initiative for older adults, 40 key stakeholders responded representing a 100% state-participation rate among the 27 funded states. Findings suggest that there is perceived utility in using the RE-AIM framework when evaluating grand-scale initiatives for older adults. The RE-AIM framework was seen as useful for planning, implementation, and evaluation with relevance for evaluators, providers, community leaders, and policy makers. Yet, the uptake was not universal, and some respondents reported difficulties in use, especially adopting the framework as a whole. This questionnaire can serve as the basis to assess ways the RE-AIM framework can be utilized by practitioners in state-wide D&I efforts. Maximal benefit can be derived from examining the assessment of RE-AIM-related knowledge and confidence as part of a continual quality assurance process. We recommend such an assessment be performed before the implementation of new funding initiatives and throughout their course to assess RE-AIM uptake and to identify areas for technical assistance.
Borah, Bijan J.; Basu, Anirban
2014-01-01
The quantile regression (QR) framework provides a pragmatic approach in understanding the differential impacts of covariates along the distribution of an outcome. However, the QR framework that has pervaded the applied economics literature is based on the conditional quantile regression method. It is used to assess the impact of a covariate on a quantile of the outcome conditional on specific values of other covariates. In most cases, conditional quantile regression may generate results that are often not generalizable or interpretable in a policy or population context. In contrast, the unconditional quantile regression method provides more interpretable results as it marginalizes the effect over the distributions of other covariates in the model. In this paper, the differences between these two regression frameworks are highlighted, both conceptually and econometrically. Additionally, using real-world claims data from a large US health insurer, alternative QR frameworks are implemented to assess the differential impacts of covariates along the distribution of medication adherence among elderly patients with Alzheimer’s disease. PMID:23616446
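The contrast can be written down compactly. The unconditional method referred to here is commonly implemented via the recentered influence function (RIF) of Firpo, Fortin, and Lemieux; the sketch below uses standard notation, and the paper's exact estimator may differ in details.

```latex
% Conditional quantile regression models the tau-th conditional quantile:
\[
  Q_{\tau}(Y \mid X) = X'\beta(\tau).
\]
% Unconditional quantile regression (RIF form) instead regresses the
% recentered influence function of the unconditional quantile q_tau on X:
\[
  \mathrm{RIF}(Y; q_{\tau}) = q_{\tau}
  + \frac{\tau - \mathbf{1}\{Y \le q_{\tau}\}}{f_Y(q_{\tau})},
  \qquad
  \mathbb{E}\!\left[\mathrm{RIF}(Y; q_{\tau}) \mid X\right] = X'\gamma(\tau),
\]
% so gamma(tau) is interpretable as a marginal effect on the unconditional
% quantile of Y, marginalized over the other covariates.
```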
A numerical framework for bubble transport in a subcooled fluid flow
NASA Astrophysics Data System (ADS)
Jareteg, Klas; Sasic, Srdjan; Vinai, Paolo; Demazière, Christophe
2017-09-01
In this paper we present a framework for the simulation of dispersed bubbly two-phase flows, with the specific aim of describing vapor-liquid systems with condensation. We formulate and implement a framework that consists of a population balance equation (PBE) for the bubble size distribution and an Eulerian-Eulerian two-fluid solver. The PBE is discretized using the Direct Quadrature Method of Moments (DQMOM) in which we include the condensation of the bubbles as an internal phase space convection. We investigate the robustness of the DQMOM formulation and the numerical issues arising from the rapid shrinkage of the vapor bubbles. In contrast to a PBE method based on the multiple-size-group (MUSIG) method, the DQMOM formulation allows us to compute a distribution with dynamic bubble sizes. Such a property is advantageous to capture the wide range of bubble sizes associated with the condensation process. Furthermore, we compare the computational performance of the DQMOM-based framework with the MUSIG method. The results demonstrate that DQMOM is able to retrieve the bubble size distribution with a good numerical precision in only a small fraction of the computational time required by MUSIG. For the two-fluid solver, we examine the implementation of the mass, momentum and enthalpy conservation equations in relation to the coupling to the PBE. In particular, we propose a formulation of the pressure and liquid continuity equations, that was shown to correctly preserve mass when computing the vapor fraction with DQMOM. In addition, the conservation of enthalpy was also proven. Therefore a consistent overall framework that couples the PBE and two-fluid solvers is achieved.
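In schematic form (notation ours, simplified from the class of models the paper describes), the PBE with condensation as internal-coordinate convection and the DQMOM ansatz read:

```latex
% Schematic PBE for the bubble number density n(v, x, t), with condensation
% entering as convection along the internal coordinate v (bubble size):
\[
  \frac{\partial n}{\partial t}
  + \nabla \cdot \left( \mathbf{u}_b\, n \right)
  + \frac{\partial}{\partial v}\!\left( G(v)\, n \right) = S(v),
\]
% where G(v) is the condensation-driven shrinkage rate and S(v) collects
% coalescence/breakup sources. DQMOM approximates the distribution by a
% small number of weighted nodes with dynamic abscissas:
\[
  n(v, \mathbf{x}, t) \approx \sum_{\alpha=1}^{N}
  w_\alpha(\mathbf{x}, t)\, \delta\!\left( v - v_\alpha(\mathbf{x}, t) \right),
\]
% transporting the weights w_alpha and weighted abscissas w_alpha v_alpha;
% because the abscissas move, bubble sizes can shrink continuously, unlike
% the fixed size classes of MUSIG.
```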
NASA Astrophysics Data System (ADS)
Nativi, S.; Santoro, M.
2009-12-01
Currently, one of the major challenges for the scientific community is the study of climate change effects on life on Earth. To achieve this, it is crucial to understand how climate change will impact biodiversity; in this context, several application scenarios require modeling the impact of climate change on the distribution of individual species. In the context of GEOSS AIP-2 (Global Earth Observation System of Systems, Architecture Implementation Pilot, Phase 2), the Climate Change & Biodiversity thematic Working Group developed three significant user scenarios. Two of them make use of a GEOSS-based framework to study the impact of climate change factors on regional species distribution. The presentation introduces and discusses this framework, which provides an interoperability infrastructure to loosely couple standard services and components to discover and access climate and biodiversity data, and run forecast and processing models. The framework is comprised of the following main components and services: a) GEO Portal: through this component the end user is able to search, find and access the needed services for the scenario execution; b) Graphical User Interface (GUI): this component provides user interaction functionalities; it controls the workflow manager to perform the required operations for the scenario implementation; c) Use Scenario controller: this component acts as a workflow controller implementing the scenario business process, i.e. a typical climate change & biodiversity projection scenario; d) Service Broker implementing Mediation Services: this component realizes a distributed catalogue which federates several discovery and access components (exposing them through a unique CSW standard interface); federated components publish climate, environmental and biodiversity datasets; e) Ecological Niche Model Server: this component is able to run one or more Ecological Niche Models (ENM) on selected biodiversity and climate datasets; f) Data Access Transaction server: this component publishes the model outputs. The framework was successfully tested in two use scenarios of the GEOSS AIP-2 Climate Change and Biodiversity WG aiming to predict species distribution changes due to climate change factors, with the scientific patronage of the University of Colorado and the University of Alaska. The first scenario dealt with the regional distribution of the pika species in the Great Basin area (North America), while the second one concerned the modeling of Arctic food chain species in the North Pole area; the relationships between different environmental parameters and polar bear distribution were analyzed. Results are published on the GEOSS AIP-2 web site: http://www.ogcnetwork.net/AIP2develop .
USDA-ARS?s Scientific Manuscript database
AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality (H/WQ) simulation components under the Object Modeling System (OMS3) environmental modeling framework. AgES-W has recently been enhanced with the addition of nitrogen (N) a...
Harvesting implementation for the GI-cat distributed catalog
NASA Astrophysics Data System (ADS)
Boldrini, Enrico; Papeschi, Fabrizio; Bigagli, Lorenzo; Mazzetti, Paolo
2010-05-01
The GI-cat framework implements a distributed catalog service supporting different international standards and interoperability arrangements in use by the geoscientific community. The distribution functionality, in conjunction with the mediation functionality, allows seamless querying of remote heterogeneous data sources, including OGC Web Services (e.g. OGC CSW, WCS, WFS and WMS), community standards such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services and OpenSearch engines. In the GI-cat modular architecture, a distributor component carries out the distribution functionality by delegating queries to the mediator components (one for each different data source). Each of these mediator components is able to query a specific data source and convert back the results by mapping the foreign data model to the GI-cat internal one, based on ISO 19139. In order to cope with deployment scenarios in which local data is expected, a harvesting approach has been experimented with. The new strategy comes in addition to the consolidated distributed approach, allowing the user to switch between a remote and a local search at will for each federated resource; this extends GI-cat configuration possibilities. The harvesting strategy is realized in GI-cat by the use, at its core, of a local cache component, implemented as a native XML database based on eXist. The different heterogeneous sources are queried for the bulk of available data; this data is then injected into the cache component after being converted to the GI-cat data model. The query and conversion steps are performed by the mediator components that are already part of the GI-cat framework. Afterward, each new query can be exercised against the local data stored in the cache component. Considering the advantages and shortcomings of the harvesting and query distribution approaches, it emerges that user-driven tuning is required to get the best of both; this is often related to the specific user scenarios to be implemented. GI-cat proved to be a flexible framework to address user needs. The GI-cat configurator tool was updated to make such tuning possible: each data source can be configured to enable either the harvesting or the query distribution approach; in the former case an appropriate harvesting interval can be set.
Drainoni, Mari-Lynn; Koppelman, Elisa A; Feldman, James A; Walley, Alexander Y; Mitchell, Patricia M; Ellison, Jacqueline; Bernstein, Edward
2016-10-18
The increase in opioid overdose deaths has become a national public health crisis. Naloxone is an important tool in opioid overdose prevention. Distribution of nasal naloxone has been found to be a feasible and effective intervention in community settings and may have potentially high applicability in the emergency department, which is often the initial point of care for persons at high risk of overdose. One safety net hospital introduced an innovative policy to offer take-home nasal naloxone via a standing order to ensure distribution to patients at risk for overdose. The aims of this study were to examine acceptance and uptake of the policy and assess facilitators and barriers to implementation. After obtaining pre-post data on naloxone distribution, we conducted a qualitative study. The PARiHS framework steered development of the qualitative guide. We used theoretical sampling in order to include the range of types of emergency department staff (50 total). The constant comparative method was initially used to code the transcripts and identify themes; the themes that emerged from the coding were then mapped back to the evidence, context and facilitation constructs of the PARiHS framework. Acceptance of the policy was good but uptake was low. Primary themes related to facilitators included: real-world driven intervention with philosophical, clinician and leadership support; basic education and training efforts; availability of resources; and ability to leave the ED with the naloxone kit in hand. Barriers fell into five general categories: protocol and policy; workflow and logistical; patient-related; staff roles and responsibilities; and education and training. The actual implementation of a new innovation in healthcare delivery is largely driven by factors beyond acceptance. Despite support and resources, implementation was challenging, with low uptake. While the potential of this innovation is unknown, understanding the experience is important to improve uptake in this setting and offer possible solutions for other facilities to address the opioid overdose crisis. Use of the PARiHS framework allowed us to recognize and understand key evidence, contextual and facilitation barriers to the successful implementation of the policy and to identify areas for improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ling Zou; Hongbin Zhang; Jess Gehin
A coupled TH/Neutronics/CRUD framework, which is able to simulate the impact of CRUD deposits on the CIPS phenomenon, is described in this paper. This framework includes the coupling among three essential physics: thermal-hydraulics, CRUD and neutronics. The overall framework was implemented by using the CFD software STAR-CCM+, developing CRUD codes, and using the neutronics code DeCART. The coupling was implemented by exchanging data between codes using intermediate exchange files. A typical 3 by 3 PWR fuel pin problem was solved under this framework over a 12-month period. Time-dependent solutions were provided, including the CRUD deposit inventory and its distribution on fuels, the boron hideout amount inside CRUD deposits, as well as the power shape changing over time. The results clearly showed power shape suppression in regions where CRUD deposits exist, which is a strong indication of the CIPS phenomenon.
FleCSPH - a parallel and distributed SPH implementation based on the FleCSI framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junghans, Christoph; Loiseau, Julien
2017-06-20
FleCSPH is a multi-physics compact application that exercises FleCSI parallel data structures for tree-based particle methods. In particular, FleCSPH implements a smoothed-particle hydrodynamics (SPH) solver for the solution of Lagrangian problems in astrophysics and cosmology. FleCSPH includes support for gravitational forces using the fast multipole method (FMM).
Cardea: Providing Support for Dynamic Resource Access in a Distributed Computing Environment
NASA Technical Reports Server (NTRS)
Lepro, Rebekah
2003-01-01
The environment framing the modern authorization process spans domains of administration, relies on many different authentication sources, and manages complex attributes as part of the authorization process. Cardea facilitates dynamic access control within this environment as a central function of an interoperable authorization framework. The system departs from the traditional authorization model by separating the authentication and authorization processes, distributing the responsibility for authorization data and allowing collaborating domains to retain control over their implementation mechanisms. Critical features of the system architecture and its handling of the authorization process differentiate the system from existing authorization components by addressing common needs not adequately addressed by existing systems. Continuing system research seeks to enhance the implementation of the current authorization model employed in Cardea, increase the robustness of current features, further the framework for establishing trust and promote interoperability with existing security mechanisms.
NASA Astrophysics Data System (ADS)
Kodama, Yu; Hamagami, Tomoki
A distributed processing system for restoration of electric power distribution networks using a two-layered contract net protocol (CNP) is proposed. The goal of this study is to develop a restoration system which adjusts to the future power network with distributed generators. The novelty of this study is that the two-layered CNP is applied in a distributed computing environment in practical use. The two-layered CNP has two classes of agents, named field agents and operating agents, in the network. In order to avoid conflicts between tasks, an operating agent controls the privilege for managers to send the task announcement messages in CNP. This technique realizes coordination between agents which work asynchronously in parallel with each other. Moreover, this study implements the distributed processing system using a de-facto standard multi-agent framework, JADE (Java Agent DEvelopment framework). This study conducts simulation experiments of power distribution network restoration and compares the proposed system with the previous system. The results confirm the effectiveness of the proposed system.
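A minimal JADE sketch of the privilege mechanism (agent names, the performatives chosen for the handshake, and message contents are assumptions): a field-agent manager asks the operating agent for the announcement privilege before broadcasting a CFP, so two managers cannot announce conflicting restoration tasks at once.

```java
import jade.core.AID;
import jade.core.Agent;
import jade.lang.acl.ACLMessage;

// Sketch of the two-layered CNP privilege check (names and contents are
// illustrative): a field-agent manager must obtain the announcement
// privilege from the operating agent before sending a CFP.
public class FieldManagerAgent extends Agent {
    @Override
    protected void setup() {
        // Ask the operating agent for permission to announce a task.
        ACLMessage request = new ACLMessage(ACLMessage.REQUEST);
        request.addReceiver(new AID("operating-agent", AID.ISLOCALNAME));
        request.setContent("request-announcement-privilege");
        send(request);

        // Block until the operating agent replies.
        ACLMessage reply = blockingReceive();
        if (reply != null && reply.getPerformative() == ACLMessage.AGREE) {
            // Privilege granted: announce the restoration task to a field agent.
            ACLMessage cfp = new ACLMessage(ACLMessage.CFP);
            cfp.addReceiver(new AID("field-agent-1", AID.ISLOCALNAME));
            cfp.setContent("restore-section-7");  // illustrative task
            send(cfp);
        }
    }
}
```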
The CEOS WGISS Atmospheric Composition Portal
NASA Technical Reports Server (NTRS)
Lynnes, Chris
2010-01-01
Goal: Demonstrate the feasibility of connecting distributed atmospheric composition data and analysis tools into a common and shared web framework. Initial effort focused on: a) collaboratively creating a web application within WDC-RSAT for comparison of satellite-derived atmospheric composition datasets accessed from distributed data sources; b) implementation of data access and interoperability standards; c) soliciting feedback from potential users, especially from ACC participants.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., and DAS baselines, adjustments for steaming time, etc.; modifications to capacity measures, such as... affect the implementation of AMs based upon the distribution in effect at the time of the overage that...
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation time matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
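The analytical model itself is not given in the abstract; a generic form consistent with its description (our assumed structure and notation, not the paper's published equations) splits runtime into a parallelizable compute term and a transfer-overhead term:

```latex
% Generic acceleration model (an assumption, not the paper's equations):
% with N GPUs, single-GPU compute time T_c, and data-transfer time T_x(N)
% over the 1 Gbps interconnect,
\[
  T(N) \approx \frac{T_c}{N} + T_x(N),
  \qquad
  S(N) = \frac{T(1)}{T(N)} \;\xrightarrow[\;T_x \,\ll\, T_c/N\;]{}\; N,
\]
% i.e., speedup approaches the ideal factor N only while computation
% dominates memory and network operations, matching the observed behavior.
```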
Bridging the gap between Hydrologic and Atmospheric communities through a standard based framework
NASA Astrophysics Data System (ADS)
Boldrini, E.; Salas, F.; Maidment, D. R.; Mazzetti, P.; Santoro, M.; Nativi, S.; Domenico, B.
2012-04-01
Data interoperability in the study of Earth sciences is essential to performing interdisciplinary, multi-scale, multi-dimensional analyses (e.g. hydrologic impacts of global warming, regional urbanization, global population growth, etc.). This research aims to bridge the existing gap between the hydrologic and atmospheric communities at both the semantic and technological levels. Within the context of hydrology, scientists are usually concerned with data organized as time series: a time series can be seen as a variable measured at a particular point in space over a period of time (e.g. the stream flow values periodically measured by a buoy sensor in a river); atmospheric scientists instead usually organize their data as coverages: a coverage can be seen as a multidimensional data array (e.g. satellite images acquired through time). These differences make the setup of a common framework to perform data discovery and access non-trivial. A set of web service specifications and implementations is already in place in both scientific communities to allow data discovery and access in the different domains. The CUAHSI Hydrologic Information System (HIS) service stack comprises different service types and implementations: - a metacatalog (implemented as a CSW) used to discover metadata services by distributing the query to a set of catalogs - time series catalogs (implemented as CSW) used to discover datasets published by the feature services - feature services (implemented as WFS) containing features with data access links - sensor observation services (implemented as SOS) enabling access to the stream of acquisitions. Within the Unidata framework there lies a similar service stack for atmospheric data: - the broker service (implemented as a CSW) distributes a user query to a set of heterogeneous services (i.e. catalog services, but also inventory and access services) - the catalog service (implemented as a CSW) is able to harvest the available metadata offered by THREDDS services and execute complex queries against the available metadata - the inventory service (implemented as THREDDS) is able to hierarchically organize and publish a local collection of multi-dimensional arrays (e.g. NetCDF, GRIB files), as well as publish auxiliary standard services to realize the actual data access and visualization (e.g. WCS, OPeNDAP, WMS). The approach followed in this research is to build on top of the existing standards and implementations by setting up a standards-aware interoperable framework, able to deal with the existing heterogeneity in an organic way. As a methodology, interoperability tests against real services were performed; existing problems were thus highlighted and, where possible, solved. The use of flexible tools, able to deal with heterogeneity in a smart way, has proven successful; in particular, experiments were carried out with both the GI-cat broker and the ESRI GeoPortal frameworks. The GI-cat discovery broker proved successful at implementing the CSW interface, as well as federating heterogeneous resources, such as THREDDS and WCS services published by Unidata, and HydroServer, WFS and SOS services published by CUAHSI. Experiments with the ESRI GeoPortal were also successful: the GeoPortal was used to deploy a web interface able to distribute searches amongst catalog implementations from both the hydrologic and atmospheric communities, including HydroServers and GI-cat, combining results from both domains in a seamless way.
Borah, Bijan J; Basu, Anirban
2013-09-01
The quantile regression (QR) framework provides a pragmatic approach in understanding the differential impacts of covariates along the distribution of an outcome. However, the QR framework that has pervaded the applied economics literature is based on the conditional quantile regression method. It is used to assess the impact of a covariate on a quantile of the outcome conditional on specific values of other covariates. In most cases, conditional quantile regression may generate results that are often not generalizable or interpretable in a policy or population context. In contrast, the unconditional quantile regression method provides more interpretable results as it marginalizes the effect over the distributions of other covariates in the model. In this paper, the differences between these two regression frameworks are highlighted, both conceptually and econometrically. Additionally, using real-world claims data from a large US health insurer, alternative QR frameworks are implemented to assess the differential impacts of covariates along the distribution of medication adherence among elderly patients with Alzheimer's disease. Copyright © 2013 John Wiley & Sons, Ltd.
Ory, Marcia G.; Altpeter, Mary; Belza, Basia; Helduser, Janet; Zhang, Chen; Smith, Matthew Lee
2015-01-01
Dissemination and implementation (D&I) frameworks are increasingly being promoted in public health research. However, less is known about their uptake in the field, especially for diverse sets of programs. Few questionnaires exist to assess the ways that frameworks can be utilized in program planning and evaluation. We present a case study from the United States that describes the implementation of the RE-AIM framework by state aging services providers and public health partners, and a questionnaire that can be used to assess the utility of such frameworks in practice. An online questionnaire was developed to capture community perspectives on the utility of the RE-AIM framework. It was distributed to project leads in the 27 states funded under an evidence-based disease prevention initiative for older adults; 40 key stakeholders responded, representing a 100% state-participation rate. Findings suggest that there is perceived utility in using the RE-AIM framework when evaluating grand-scale initiatives for older adults. The RE-AIM framework was seen as useful for planning, implementation, and evaluation, with relevance for evaluators, providers, community leaders, and policy makers. Yet the uptake was not universal, and some respondents reported difficulties in use, especially in adopting the framework as a whole. This questionnaire can serve as the basis for assessing ways the RE-AIM framework can be utilized by practitioners in state-wide D&I efforts. Maximal benefit can be derived from examining the assessment of RE-AIM-related knowledge and confidence as part of a continual quality-assurance process. We recommend such an assessment be performed before the implementation of new funding initiatives and throughout their course to assess RE-AIM uptake and to identify areas for technical assistance. PMID:25964897
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan
MapReduce is increasingly becoming a popular framework and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments, such as TeraGrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging inherent distributed file systems' functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data-intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
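The premise — let the cluster's shared POSIX file system do the data distribution instead of HDFS — can be illustrated with a toy MapReduce. The sketch below is a minimal word count over files on a shared mount using a local process pool; it shows only the split/map/reduce flow, not MARIANE's actual design, and the dataset path is a made-up example.

```python
import glob
from collections import Counter
from multiprocessing import Pool

def map_phase(path):
    """Map task: every worker reads its split directly off the shared FS."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            counts.update(line.split())
    return counts

def reduce_phase(partials):
    """Reduce task: merge the partial counts from all map tasks."""
    total = Counter()
    for c in partials:
        total.update(c)
    return total

if __name__ == "__main__":
    splits = glob.glob("/shared/dataset/*.txt")   # hypothetical NFS/GPFS path
    with Pool() as pool:
        partials = pool.map(map_phase, splits)
    print(reduce_phase(partials).most_common(10))
```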
NASA Astrophysics Data System (ADS)
Patton, J.; Yeck, W.; Benz, H.
2017-12-01
The U.S. Geological Survey National Earthquake Information Center (USGS NEIC) is implementing and integrating new signal detection methods such as subspace correlation, continuous beamforming, multi-band picking, and automatic phase identification into near-real-time monitoring operations. Leveraging the additional information from these techniques helps the NEIC utilize a large and varied network on local to global scales. The NEIC is developing an ordered, rapid, robust, and decentralized framework for distributing seismic detection data, as well as a set of formalized formatting standards. These frameworks and standards enable the NEIC to implement a seismic event detection framework that supports basic tasks, including automatic arrival-time picking, social-media-based event detection, and automatic association of different seismic detection data into seismic earthquake events. In addition, this framework enables retrospective detection processing, such as automated S-wave arrival-time picking for a detected event, discrimination and classification of detected events by type, back-azimuth and slowness calculations, and ensuring aftershock and induced-sequence detection completeness. These processes and infrastructure improve the NEIC's capabilities, accuracy, and speed of response. In addition, this same infrastructure provides an improved and convenient structure to support access to automatic detection data for both research and algorithmic development.
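As one concrete example of the pickers feeding such a framework, below is a minimal NumPy sketch of the classic STA/LTA (short-term average over long-term average) trigger on a synthetic trace. The NEIC's subspace and multi-band detectors are far more sophisticated; this is purely illustrative, with arbitrary window lengths and threshold.

```python
import numpy as np

def sta_lta(trace, sta_len, lta_len):
    """Return the STA/LTA energy ratio of a seismic trace (in samples)."""
    energy = trace.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[sta_len:] - csum[:-sta_len]) / sta_len
    lta = (csum[lta_len:] - csum[:-lta_len]) / lta_len
    # Align so STA and LTA windows end on the same sample.
    n = min(len(sta), len(lta))
    ratio = np.zeros(n)
    np.divide(sta[-n:], lta[-n:], out=ratio, where=lta[-n:] > 0)
    return ratio

# Synthetic trace: noise with an "arrival" injected at sample 5000.
rng = np.random.default_rng(1)
trace = rng.normal(size=10000)
trace[5000:5400] += 5.0 * rng.normal(size=400)
ratio = sta_lta(trace, sta_len=50, lta_len=1000)
# The trigger index is in ratio coordinates, offset by roughly the
# LTA window length relative to the raw trace.
print("trigger index:", int(np.argmax(ratio > 4.0)))
```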
A framework using cluster-based hybrid network architecture for collaborative virtual surgery.
Qin, Jing; Choi, Kup-Sze; Poon, Wai-Sang; Heng, Pheng-Ann
2009-12-01
Research on collaborative virtual environments (CVEs) opens the opportunity for simulating the cooperative work in surgical operations. It is, however, a challenging task to implement a high-performance collaborative surgical simulation system, because of the difficulty of maintaining state consistency with minimum network latencies, especially when sophisticated deformable models and haptics are involved. In this paper, an integrated framework using a cluster-based hybrid network architecture is proposed to support collaborative virtual surgery. Multicast transmission is employed to transmit updated information among participants in order to reduce network latencies, while system consistency is maintained by an administrative server. Reliable multicast is implemented using distributed message acknowledgment based on cluster cooperation and a sliding-window technique. The robustness of the framework is guaranteed by a failure detection chain which enables smooth transitions when participants join and leave the collaboration, including both normal and involuntary leaving. Communication overhead is further reduced by implementing a number of management approaches such as computational policies and collaborative mechanisms. The feasibility of the proposed framework is demonstrated by successfully extending an existing standalone orthopedic surgery trainer into a collaborative simulation system. A series of experiments has been conducted to evaluate the system performance. The results demonstrate that the proposed framework is capable of supporting collaborative surgical simulation.
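A bare-bones version of the multicast transport such a framework builds on can be written with standard UDP sockets: sequence-numbered state updates give receivers what they need to detect losses, which is the gap the paper's cluster-cooperative acknowledgments and sliding window fill. The group address and port below are arbitrary examples, and no reliability logic is included.

```python
import socket
import struct

GROUP, PORT = "224.1.1.1", 5007   # arbitrary example multicast group

def make_sender():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    return sock

def make_receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group on all interfaces.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def send_update(sock, seq, payload: bytes):
    # Prefix each state update with a sequence number so receivers can
    # detect gaps and request retransmission (the sliding-window part).
    sock.sendto(struct.pack("!I", seq) + payload, (GROUP, PORT))

def recv_update(sock):
    data, addr = sock.recvfrom(65535)
    (seq,) = struct.unpack("!I", data[:4])
    return seq, data[4:], addr
```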
Equilibrium Field Theoretic and Dynamic Mean Field Simulations of Inhomogeneous Polymeric Materials
NASA Astrophysics Data System (ADS)
Chao, Huikuan
Inhomogeneous polymeric materials are a large family of promising materials including, but not limited to, block copolymers (BCPs), polymer nanocomposites (PNCs), and microscopically confined polymer films. The promising applications of these materials originate from their unique microstructures, which offer enhanced mechanical, thermal, optical, and electrical properties. Due to the complex interactions and the large parameter space, the behaviors of the microstructures formed by grafted nanoparticles and nanorods in PNCs are difficult to understand. Separately, because of relatively weak interactions, the microstructures are typically achieved through rapid processing that is kinetically controlled and beyond equilibrium. However, an efficient simulation framework to study the nonequilibrium dynamics of these materials is currently not available. To attack the first difficulty, I extended an efficient simulation framework, polymer nanocomposite field theory (PNC-FT), to incorporate grafted nanoparticles and nanorods. This extended framework is validated against existing experimental studies and applied to study how nanoparticle design affects the nanoparticle distribution in binary homopolymer blends. The grafted nanoparticle model is also used as a platform to adopt an advanced optimization method to inversely design nanoparticles that are able to self-assemble into targeted two-dimensional lattices. The nanorod model under the PNC-FT framework is used to investigate the design of nanorod and block copolymer thin films to control the nanorod distribution. To attack the second difficulty, I established an efficient framework (SCMF-LD) based on a recently proposed dynamic mean field theory and used SCMF-LD to study how to kinetically control the nanoparticle distribution at the end of solvent annealing of block copolymer thin films. The framework is then extended to incorporate hydrodynamics (SCMF-DPD), and the extended framework is applied to study morphology development in phase-inversion processing of polymer thin films, where hydrodynamic effects play an important role. By exploring both equilibrium and nonequilibrium properties in a spectrum of inhomogeneous polymeric material systems, I successfully extended PNC-FT and established the SCMF-LD and SCMF-DPD frameworks, which are expected to be efficient and powerful tools in studies of inhomogeneous polymeric material design and processing.
NASA Astrophysics Data System (ADS)
Liu, Yaoze; Engel, Bernard A.; Flanagan, Dennis C.; Gitau, Margaret W.; McMillan, Sara K.; Chaubey, Indrajeet; Singh, Shweta
2018-05-01
Best management practices (BMPs) are popular approaches used to improve hydrology and water quality. Uncertainties in BMP effectiveness over time may result in overestimating long-term efficiency in watershed planning strategies. To represent varying long-term BMP effectiveness in hydrologic/water quality models, a high level and forward-looking modeling framework was developed. The components in the framework consist of establishment period efficiency, starting efficiency, efficiency for each storm event, efficiency between maintenance, and efficiency over the life cycle. Combined, they represent long-term efficiency for a specific type of practice and specific environmental concern (runoff/pollutant). An approach for possible implementation of the framework was discussed. The long-term impacts of grass buffer strips (agricultural BMP) and bioretention systems (urban BMP) in reducing total phosphorus were simulated to demonstrate the framework. Data gaps were captured in estimating the long-term performance of the BMPs. A Bayesian method was used to match the simulated distribution of long-term BMP efficiencies with the observed distribution with the assumption that the observed data represented long-term BMP efficiencies. The simulated distribution matched the observed distribution well with only small total predictive uncertainties. With additional data, the same method can be used to further improve the simulation results. The modeling framework and results of this study, which can be adopted in hydrologic/water quality models to better represent long-term BMP effectiveness, can help improve decision support systems for creating long-term stormwater management strategies for watershed management projects.
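To make the framework's components concrete, the toy simulation below composes a starting efficiency, a per-storm-event decay, and periodic maintenance that partially restores performance. All parameter values are invented for illustration; they are not calibrated values from the study.

```python
import numpy as np

def lifecycle_efficiency(n_events, start_eff=0.60, decay_per_event=0.004,
                         maint_interval=50, maint_recovery=0.8):
    """Efficiency (fraction of pollutant removed) at each storm event.

    Efficiency declines by a fixed amount per event; each maintenance
    restores a fraction of the efficiency lost since the last one.
    All parameter values are illustrative assumptions.
    """
    eff = np.empty(n_events)
    current = start_eff
    last_maint_eff = start_eff
    for i in range(n_events):
        eff[i] = max(current, 0.0)
        current -= decay_per_event
        if (i + 1) % maint_interval == 0:
            current += maint_recovery * (last_maint_eff - current)
            last_maint_eff = current
    return eff

eff = lifecycle_efficiency(500)
print(f"mean life-cycle efficiency: {eff.mean():.2f}")
```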
Tuning collective communication for Partitioned Global Address Space programming models
Nishtala, Rajesh; Zheng, Yili; Hargrove, Paul H.; ...
2011-06-12
Partitioned Global Address Space (PGAS) languages offer programmers the convenience of a shared memory programming style combined with the locality control necessary to run on large-scale distributed memory systems. Even within a PGAS language, programmers often need to perform global communication operations such as broadcasts or reductions, which are best performed as collective operations in which a group of threads work together to perform the operation. In this study we consider the problem of implementing collective communication within PGAS languages and explore some of the design trade-offs in both the interface and the implementation. In particular, PGAS collectives have semantic issues that are different from those in send–receive style message passing programs, and different implementation approaches that take advantage of the one-sided communication style in these languages. We present an implementation framework for PGAS collectives as part of the GASNet communication layer, which supports shared memory, distributed memory, and hybrids. The framework supports a broad set of algorithms for each collective, over which the implementation may be automatically tuned. In conclusion, we demonstrate the benefit of optimized GASNet collectives using application benchmarks written in UPC, and demonstrate that the GASNet collectives can deliver scalable performance on a variety of state-of-the-art parallel machines, including a Cray XT4, an IBM BlueGene/P, and a Sun Constellation system with InfiniBand interconnect.
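One family of algorithm choices such an auto-tuned collectives framework weighs is the shape of the broadcast tree. The sketch below computes the communication rounds of a binomial-tree broadcast — in round k, every rank already holding the data forwards it to rank + 2^k — which completes in ceil(log2(P)) rounds. It is a schedule illustration in Python, not GASNet or UPC code.

```python
def binomial_bcast_schedule(nranks):
    """Per-round (sender, receiver) pairs of a binomial-tree broadcast
    rooted at rank 0; completes in ceil(log2(nranks)) rounds."""
    rounds, have, k = [], {0}, 0
    while len(have) < nranks:
        # Every rank that already has the data sends to rank + 2**k.
        sends = [(r, r + 2 ** k) for r in sorted(have) if r + 2 ** k < nranks]
        have.update(dst for _, dst in sends)
        rounds.append(sends)
        k += 1
    return rounds

for i, sends in enumerate(binomial_bcast_schedule(8)):
    print(f"round {i}: {sends}")
# round 0: [(0, 1)]
# round 1: [(0, 2), (1, 3)]
# round 2: [(0, 4), (1, 5), (2, 6), (3, 7)]
```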
Simão, Ana; Densham, Paul J; Haklay, Mordechai Muki
2009-05-01
Spatial planning typically involves multiple stakeholders. To any specific planning problem, stakeholders often bring different levels of knowledge about the components of the problem and make assumptions, reflecting their individual experiences, that yield conflicting views about desirable planning outcomes. Consequently, stakeholders need to learn about the likely outcomes that result from their stated preferences; this learning can be supported through enhanced access to information, increased public participation in spatial decision-making and support for distributed collaboration amongst planners, stakeholders and the public. This paper presents a conceptual system framework for web-based GIS that supports public participation in collaborative planning. The framework combines an information area, a Multi-Criteria Spatial Decision Support System (MC-SDSS) and an argumentation map to support distributed and asynchronous collaboration in spatial planning. After analysing the novel aspects of this framework, the paper describes its implementation, as a proof of concept, in a system for Web-based Participatory Wind Energy Planning (WePWEP). Details are provided on the specific implementation of each of WePWEP's four tiers, including technical and structural aspects. Throughout the paper, particular emphasis is placed on the need to support user learning throughout the planning process.
Efficient and Flexible Climate Analysis with Python in a Cloud-Based Distributed Computing Framework
NASA Astrophysics Data System (ADS)
Gannon, C.
2017-12-01
As climate models become progressively more advanced, and spatial resolution is further improved through various downscaling projects, climate projections at a local level are increasingly insightful and valuable. However, the raw size of climate datasets presents numerous hurdles for analysts wishing to develop customized climate risk metrics or perform site-specific statistical analysis. Four Twenty Seven, a climate risk consultancy, has implemented a Python-based distributed framework to analyze large climate datasets in the cloud. With the freedom afforded by efficiently processing these datasets, we are able to customize and continually develop new climate risk metrics using the most up-to-date data. Here we outline our process for using Python packages such as XArray and Dask to evaluate NetCDF files in a distributed framework and StarCluster to operate in a cluster-computing environment, describe how we use cloud computing services to access publicly hosted datasets, and explain why this setup is particularly valuable for generating climate change indicators and performing localized statistical analysis.
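A minimal version of that workflow is sketched below: lazily open a collection of daily maximum-temperature NetCDF files with xarray/Dask chunks, then compute a simple indicator — the annual count of days above 35 °C — without loading the full dataset into memory. The file pattern, variable name, and units are placeholder assumptions.

```python
import xarray as xr

# Lazily open a multi-file daily temperature dataset; each chunk becomes
# a Dask task, so the computation can run on a distributed cluster.
ds = xr.open_mfdataset(
    "/data/downscaled/tasmax_*.nc",       # hypothetical file pattern
    combine="by_coords",
    chunks={"time": 365},
)

# Climate indicator: number of days per year with tasmax > 35 degC,
# assuming the variable is named "tasmax" and stored in degrees Celsius.
hot_days = (ds["tasmax"] > 35.0).groupby("time.year").sum("time")

# Nothing has been computed yet; this triggers the distributed execution.
result = hot_days.compute()
print(result)
```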
NASA Technical Reports Server (NTRS)
Harper, Richard
1989-01-01
In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.
French, Simon D; Green, Sally E; O'Connor, Denise A; McKenzie, Joanne E; Francis, Jill J; Michie, Susan; Buchbinder, Rachelle; Schattner, Peter; Spike, Neil; Grimshaw, Jeremy M
2012-04-24
There is little systematic operational guidance about how best to develop complex interventions to reduce the gap between practice and evidence. This article is one in a Series of articles documenting the development and use of the Theoretical Domains Framework (TDF) to advance the science of implementation research. The intervention was developed considering three main components: theory, evidence, and practical issues. We used a four-step approach, consisting of guiding questions, to direct the choice of the most appropriate components of an implementation intervention: Who needs to do what, differently? Using a theoretical framework, which barriers and enablers need to be addressed? Which intervention components (behaviour change techniques and mode(s) of delivery) could overcome the modifiable barriers and enhance the enablers? And how can behaviour change be measured and understood? A complex implementation intervention was designed that aimed to improve acute low back pain management in primary care. We used the TDF to identify the barriers and enablers to the uptake of evidence into practice and to guide the choice of intervention components. These components were then combined into a cohesive intervention. The intervention was delivered via two facilitated interactive small group workshops. We also produced a DVD to distribute to all participants in the intervention group. We chose outcome measures in order to assess the mediating mechanisms of behaviour change. We have illustrated a four-step systematic method for developing an intervention designed to change clinical practice based on a theoretical framework. The method of development provides a systematic framework that could be used by others developing complex implementation interventions. While this framework should be iteratively adjusted and refined to suit other contexts and settings, we believe that the four-step process should be maintained as the primary framework to guide researchers through a comprehensive intervention development process.
NASA Astrophysics Data System (ADS)
Jin, D.; Hoagland, P.; Dalton, T. M.; Thunberg, E. M.
2012-09-01
We present an integrated economic-ecological framework designed to help assess the implementation of ecosystem-based fisheries management (EBFM) in New England. We develop the framework by linking a computable general equilibrium (CGE) model of a coastal economy to an end-to-end (E2E) model of a marine food web for Georges Bank. We focus on the New England region using coastal county economic data for a restricted set of industry sectors and marine ecological data for three top level trophic feeding guilds: planktivores, benthivores, and piscivores. We undertake numerical simulations to model the welfare effects of changes in alternative combinations of yields from feeding guilds and alternative manifestations of biological productivity. We estimate the economic and distributional effects of these alternative simulations across a range of consumer income levels. This framework could be used to extend existing methodologies for assessing the impacts on human communities of groundfish stock rebuilding strategies, such as those expected through the implementation of the sector management program in the US northeast fishery. We discuss other possible applications of and modifications and limitations to the framework.
Distributed Peer-to-Peer Target Tracking in Wireless Sensor Networks
Wang, Xue; Wang, Sheng; Bi, Dao-Wei; Ma, Jun-Jie
2007-01-01
Target tracking is usually a challenging application for wireless sensor networks (WSNs) because it is always computation-intensive and requires real-time processing. This paper proposes a practical target tracking system based on the autoregressive moving average (ARMA) model in a distributed peer-to-peer (P2P) signal processing framework. In the proposed framework, wireless sensor nodes act as peers that perform target detection, feature extraction, classification, and tracking, whereas target localization requires collaboration between wireless sensor nodes to improve accuracy and robustness. To carry out target tracking under the constraints imposed by the limited capabilities of the wireless sensor nodes, some practically feasible algorithms, such as the ARMA model and the 2-D integer lifting wavelet transform, are adopted in single wireless sensor nodes due to their outstanding performance and light computational burden. Furthermore, a progressive multi-view localization algorithm is proposed in the distributed P2P signal processing framework, considering the trade-off between accuracy and energy consumption. Finally, a real-world target tracking experiment is illustrated. Results from experimental implementations demonstrate that the proposed target tracking system based on a distributed P2P signal processing framework can make efficient use of scarce energy and communication resources and achieve target tracking successfully.
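The role the ARMA model plays on each node — predict the target's next measurement from its own recent history — can be sketched with statsmodels, where an ARMA(p, q) is fit as an ARIMA(p, 0, q). The trajectory below is synthetic, and the model order is an arbitrary choice.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic 1-D target trajectory as observed by one sensor node.
t = np.arange(200)
trajectory = 0.05 * t + np.sin(0.1 * t) + 0.1 * rng.normal(size=t.size)

# Fit an ARMA(2, 1) model (ARIMA with d = 0) on the history so far and
# predict the next position; a node would refit as new samples arrive.
model = ARIMA(trajectory, order=(2, 0, 1)).fit()
next_pos = model.forecast(steps=1)
print("predicted next position:", float(next_pos[0]))
```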
Distributed data mining on grids: services, tools, and applications.
Cannataro, Mario; Congiusta, Antonio; Pugliese, Andrea; Talia, Domenico; Trunfio, Paolo
2004-12-01
Data mining algorithms are widely used today for the analysis of large corporate and scientific datasets stored in databases and data archives. Industry, science, and commerce fields often need to analyze very large datasets maintained over geographically distributed sites by using the computational power of distributed and parallel systems. The grid can play a significant role in providing an effective computational support for distributed knowledge discovery applications. For the development of data mining applications on grids we designed a system called Knowledge Grid. This paper describes the Knowledge Grid framework and presents the toolset provided by the Knowledge Grid for implementing distributed knowledge discovery. The paper discusses how to design and implement data mining applications by using the Knowledge Grid tools starting from searching grid resources, composing software and data components, and executing the resulting data mining process on a grid. Some performance results are also discussed.
2017-06-01
for GIFT Cloud, the web-based application version of the Generalized Intelligent Framework for Tutoring (GIFT). GIFT is a modular, open-source... external applications. GIFT is available to users with a GIFT Account at no cost. GIFT Cloud is an implementation of GIFT. This web-based application... GIFT Cloud is accessed via a web browser.
NASA Technical Reports Server (NTRS)
Filho, Aluzio Haendehen; Caminada, Numo; Haeusler, Edward Hermann; vonStaa, Arndt
2004-01-01
To support the development of flexible and reusable MAS, we have built a framework designated MAS-CF. MAS-CF is a component framework that implements a layered architecture based on contextual composition. Interaction rules, controlled by architecture mechanisms, ensure very low coupling, making possible the sharing of distributed services in a transparent, dynamic, and independent way. These properties facilitate large-scale reuse, since organizational abstractions can be reused and propagated to all instances created from the framework. The objective is to reduce the complexity and development time of multi-agent systems through the reuse of generic organizational abstractions.
A Test Generation Framework for Distributed Fault-Tolerant Algorithms
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn; Bushnell, David; Miner, Paul; Pasareanu, Corina S.
2009-01-01
Heavyweight formal methods such as theorem proving have been successfully applied to the analysis of safety critical fault-tolerant systems. Typically, the models and proofs performed during such analysis do not inform the testing process of actual implementations. We propose a framework for generating test vectors from specifications written in the Prototype Verification System (PVS). The methodology uses a translator to produce a Java prototype from a PVS specification. Symbolic (Java) PathFinder is then employed to generate a collection of test cases. A small example is employed to illustrate how the framework can be used in practice.
On the Design of Smart Homes: A Framework for Activity Recognition in Home Environment.
Cicirelli, Franco; Fortino, Giancarlo; Giordano, Andrea; Guerrieri, Antonio; Spezzano, Giandomenico; Vinci, Andrea
2016-09-01
A smart home is a home environment enriched with sensing, actuation, communication, and computation capabilities which permits adapting it to inhabitants' preferences and requirements. Establishing a proper strategy of actuation on the home environment can require complex computational tasks on the sensed data. This is the case for activity recognition, which consists in retrieving high-level knowledge about what occurs in the home environment and about the behaviour of the inhabitants. The inherent complexity of this application domain calls for tools able to properly support the design and implementation phases. This paper proposes a framework for the design and implementation of smart home applications focused on activity recognition in home environments. The framework mainly relies on the Cloud-assisted Agent-based Smart home Environment (CASE) architecture, offering basic abstraction entities which allow smart home applications to be easily designed and implemented. CASE is a three-layered architecture which exploits the distributed multi-agent paradigm and cloud technology for offering analytics services. Details about how to implement activity recognition on the CASE architecture are supplied, focusing on the low-level technological issues as well as the algorithms and methodologies useful for activity recognition. The effectiveness of the framework is shown through a case study consisting of daily activity recognition of a person in a home environment.
NASA Astrophysics Data System (ADS)
Xie, Jibo; Li, Guoqing
2015-04-01
Earth observation (EO) data obtained by airborne or spaceborne sensors are heterogeneous and stored in geographically distributed archives. These data sources belong to different organizations or agencies whose data management and storage methods differ widely. Different data sources provide different data publishing platforms or portals, and with more remote sensing sensors used for EO missions, space agencies have archived massive amounts of EO data across distributed sites. This distribution of EO data archives and the heterogeneity of the systems make it difficult to use geospatial data efficiently in many EO applications, such as hazard mitigation. To solve the interoperability problems of different EO data systems, this paper introduces an advanced architecture for a distributed geospatial data infrastructure that addresses the complexity of integrating and processing distributed, heterogeneous EO data on demand. The concept and architecture of a geospatial data service gateway (GDSG) is proposed to connect heterogeneous EO data sources so that EO data can be retrieved and accessed through unified interfaces. The GDSG consists of a set of tools and services that encapsulate heterogeneous geospatial data sources into homogeneous service modules. The GDSG modules include EO metadata harvesters and translators, adaptors for different types of data systems, unified data query and access interfaces, EO data cache management, and a gateway GUI. The GDSG framework is used to implement interoperability and synchronization between distributed EO data sources with heterogeneous architectures. An on-demand distributed EO data platform was developed to validate the GDSG architecture and implementation techniques, with several distributed EO data archives used for testing. Flood and earthquake response serve as two scenarios for the use cases of distributed EO data integration and interoperability.
Dunham, Jason B.; Gallo, Kirsten
2008-01-01
In a species conservation context, translocations can be an important tool, but they frequently fail to successfully establish new populations. We consider the case of reintroductions for bull trout (Salvelinus confluentus), a federally-listed threatened species with a widespread but declining distribution in western North America. Our specific objectives in this work were to: 1) develop a general framework for assessing the feasibility of reintroduction for bull trout, 2) provide a detailed example of implementing this framework to assess the feasibility of reintroducing bull trout in the Clackamas River, Oregon, and 3) discuss the implications of this effort in the more general context of fish reintroductions as a conservation tool. Review of several case histories and our assessment of the Clackamas River suggest that an attempt to reintroduce bull trout could be successful, assuming adequate resources are committed to the subsequent stages of implementation, monitoring, and evaluation.
IDEA: Planning at the Core of Autonomous Reactive Agents
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Clancy, Daniel (Technical Monitor)
2002-01-01
Several successful autonomous systems are separated into technologically diverse functional layers operating at different levels of abstraction. This diversity makes them difficult to implement and validate. In this paper, we present IDEA (Intelligent Distributed Execution Architecture), a unified planning and execution framework. In IDEA, a layered system can be implemented as separate agents, one per layer, each representing its interactions with the world in a model. At all levels, the model representation primitives and their semantics are the same. Moreover, each agent relies on a single model, plan database, and plan runner, and on a variety of planners, both reactive and deliberative. The framework allows the specification of agents that operate within a guaranteed reaction time and supports flexible specification of reactive vs. deliberative agent behavior. Within the IDEA framework we are working to fully duplicate the functionality of the DS1 Remote Agent and extend it to domains of higher complexity than autonomous spacecraft control.
Distributed cooperative control of AC microgrids
NASA Astrophysics Data System (ADS)
Bidram, Ali
In this dissertation, the comprehensive secondary control of electric power microgrids is of concern. Microgrid technical challenges are mainly addressed through the hierarchical control structure, comprising primary, secondary, and tertiary control levels. The primary control level is implemented locally at each distributed generator (DG), while the secondary and tertiary control levels are conventionally implemented through a centralized control structure. The centralized structure requires a central controller, which increases reliability concerns by posing a single point of failure. In this dissertation, a distributed control structure using the distributed cooperative control of multi-agent systems is exploited to increase the reliability of the secondary control. The secondary control objectives are the microgrid voltage and frequency and the DGs' active and reactive powers. Fully distributed control protocols are implemented through distributed communication networks. In the distributed control structure, each DG only requires its own information and the information of its neighbors on the communication network. The distributed structure obviates the requirement for a central controller and a complex communication network, which, in turn, improves system reliability. Since the DG dynamics are nonlinear and non-identical, input-output feedback linearization is used to transform the nonlinear dynamics of the DGs into linear dynamics. The proposed control frameworks cover the control of microgrids containing inverter-based DGs. Typical microgrid test systems are used to verify the effectiveness of the proposed control protocols.
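The cooperative mechanism can be condensed to its core: each DG updates its frequency correction using only its own state, its communication neighbors' states, and (at one leader DG) a pinning term toward the nominal reference. The sketch below simulates such a linear pinned-consensus update directly on frequencies; the graph, gains, and initial values are arbitrary examples, and a real controller would act through the DGs' primary-control dynamics rather than on frequencies directly.

```python
import numpy as np

# Communication graph of 4 DGs (adjacency matrix); DG 0 is pinned
# to the nominal frequency reference. Values are arbitrary examples.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pin = np.array([1.0, 0.0, 0.0, 0.0])        # only DG 0 sees the reference
f_ref = 60.0
f = np.array([59.92, 59.95, 59.90, 59.97])  # post-primary-droop frequencies

gain, dt = 2.0, 0.05
for _ in range(1000):
    # Each DG i uses only neighbor information: sum_j a_ij (f_j - f_i),
    # plus the pinning term g_i (f_ref - f_i) at the leader.
    consensus = A @ f - A.sum(axis=1) * f
    f = f + dt * gain * (consensus + pin * (f_ref - f))

print(f)   # all DG frequencies converge to the 60 Hz reference
```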
A penalized framework for distributed lag non-linear models.
Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G
2017-09-01
Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
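The reference implementation of penalized DLNMs lives in the R dlnm/mgcv ecosystem; a stripped-down analogue of the linear special case (a penalized DLM) conveys the idea: regress the outcome on a matrix of lagged exposures with a ridge penalty on the second differences of the lag coefficients, enforcing a smooth lag curve. This is a conceptual illustration with simulated data, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, max_lag = 500, 20

# Simulated exposure series and an outcome driven by a smooth lag curve.
x = rng.normal(size=n + max_lag)
true_curve = np.exp(-np.arange(max_lag + 1) / 5.0)
L = np.column_stack([x[max_lag - l : n + max_lag - l]
                     for l in range(max_lag + 1)])
y = L @ true_curve + rng.normal(scale=0.5, size=n)

# Penalized DLM: a ridge penalty on second differences of the lag
# coefficients encourages a smooth distributed-lag curve.
D = np.diff(np.eye(max_lag + 1), n=2, axis=0)   # second-difference matrix
lam = 10.0                                       # smoothing parameter
beta = np.linalg.solve(L.T @ L + lam * D.T @ D, L.T @ y)
print("estimated lag curve:", np.round(beta, 2))
```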
Sánchez-Álvarez, David; Rodríguez-Pérez, Francisco-Javier
2018-01-01
In this paper, we present a work based on the distribution of computational load among the homogeneous nodes and the Hub/Sink of Wireless Sensor Networks (WSNs). The main contribution of the paper is an early decision support framework helping WSN designers take decisions about computational load distribution for those WSNs where power consumption is a key issue (when we refer to a “framework” in this work, we consider it a support tool for making decisions, where executive judgment can be included along with the WSN designer's set of mathematical tools; this work shows the need to include load distribution as an integral component of the WSN system when making early decisions regarding energy consumption). The framework takes advantage of the idea that balancing the computational load between sensor nodes and the Hub/Sink can lead to improved energy consumption for the whole WSN, or at least for its battery-powered nodes. The approach is not trivial, and it takes into account related issues such as the required data distribution and the nodes' and Hub/Sink's availability due to their connectivity features and duty cycle. For a practical demonstration, the proposed framework is applied to an agriculture case study, a sector very relevant in our region. In this kind of rural context, distances, the low costs imposed by vegetable selling prices, and the lack of continuous power supplies may make sensing solutions viable or inviable for the farmers. The proposed framework systematizes and facilitates the complex calculations required of WSN designers, taking into account the most relevant variables regarding power consumption and avoiding full, partial, or prototype implementations and measurements of the different candidate computational load distributions for a specific WSN. PMID:29570645
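The core trade-off the framework systematizes reduces to a back-of-the-envelope comparison: does a node spend less energy aggregating samples locally and transmitting a small result, or transmitting the raw samples to the Hub/Sink? Every constant below is a made-up example value; a real design would substitute radio and MCU figures from the datasheets.

```python
# Illustrative energy model (all constants are made-up example values).
E_TX_PER_BYTE = 0.6e-6    # J/byte to transmit over the radio
E_CPU_PER_OP = 1.0e-9     # J per arithmetic operation on the node MCU

def energy_send_raw(n_samples, bytes_per_sample=2):
    return n_samples * bytes_per_sample * E_TX_PER_BYTE

def energy_local_aggregate(n_samples, ops_per_sample=50, result_bytes=8):
    compute = n_samples * ops_per_sample * E_CPU_PER_OP
    return compute + result_bytes * E_TX_PER_BYTE

n = 1000   # samples per reporting period
raw, agg = energy_send_raw(n), energy_local_aggregate(n)
print(f"send raw: {raw*1e6:.1f} uJ,  aggregate locally: {agg*1e6:.1f} uJ")
# With these numbers local aggregation wins; the framework's job is to
# make this comparison per node, per duty cycle, across the whole WSN.
```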
Product Distribution Theory and Semi-Coordinate Transformations
NASA Technical Reports Server (NTRS)
Airiau, Stephane; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for doing distributed adaptive control of a multiagent system (MAS). We introduce the technique of "coordinate transformations" in PD theory gradient descent. These transformations selectively couple a few agents with each other into "meta-agents". Intuitively, this can be viewed as a generalization of forming binding contracts between those agents. Doing this sacrifices a bit of the distributed nature of the MAS, in that there must now be communication among multiple agents in determining what joint move is finally implemented. However, as we demonstrate in computer experiments, these transformations improve the performance of the MAS.
NASA Astrophysics Data System (ADS)
Mikula, Brendon D.; Heckler, Andrew F.
2017-06-01
We propose a framework for improving accuracy, fluency, and retention of basic skills essential for solving problems relevant to STEM introductory courses, and implement the framework for the case of basic vector math skills over several semesters in an introductory physics course. Using an iterative development process, the framework begins with a careful identification of target skills and the study of specific student difficulties with these skills. It then employs computer-based instruction, immediate feedback, mastery grading, and well-researched principles from cognitive psychology such as interleaved training sequences and distributed practice. We implemented this with more than 1500 students over 2 semesters. Students completed the mastery practice for an average of about 13 min/week, for a total of about 2-3 h for the whole semester. Results reveal large (>1 SD) pretest to post-test gains in accuracy in vector skills, even compared to a control group, and these gains were retained at least 2 months after practice. We also find evidence of improved fluency and student satisfaction, and that awarding regular course credit results in higher participation and higher learning gains than awarding extra credit. In all, we find that simple computer-based mastery practice is an effective and efficient way to improve a set of basic and essential skills for introductory physics.
Telecommunications Staff Development for California's English-Language Arts Framework.
ERIC Educational Resources Information Center
Grubb, Mel; Gonzales, Phillip C.
1990-01-01
The Los Angeles County Office of Education developed the Educational Communications Network (ETN) to help implement English curriculum reform mandated by the California State Board of Education in 1987. ETN has become an electronic staff development distribution system using satellite-transmitted live and interactive inservice programming. (MLH)
Leavesley, G.H.; Markstrom, S.L.; Restrepo, Pedro J.; Viger, R.J.
2002-01-01
A modular approach to model design and construction provides a flexible framework in which to focus the multidisciplinary research and operational efforts needed to facilitate the development, selection, and application of the most robust distributed modelling methods. A variety of modular approaches have been developed, but with little consideration for compatibility among systems and concepts. Several systems are proprietary, limiting any user interaction. The US Geological Survey modular modelling system (MMS) is a modular modelling framework that uses an open source software approach to enable all members of the scientific community to address collaboratively the many complex issues associated with the design, development, and application of distributed hydrological and environmental models. Implementation of a common modular concept is not a trivial task. However, it brings the resources of a larger community to bear on the problems of distributed modelling, provides a framework in which to compare alternative modelling approaches objectively, and provides a means of sharing the latest modelling advances. The concepts and components of the MMS are described and an example application of the MMS, in a decision-support system context, is presented to demonstrate current system capabilities. Copyright © 2002 John Wiley and Sons, Ltd.
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document proposes a strawman framework design for the climate community based on the integration of Cactus, from the relativistic physics community, and the UCLA/UCB Distributed Data Broker (DDB) from the climate community. This design is the result of an extensive survey of climate models and frameworks in the climate community, as well as frameworks from many other scientific communities. The design addresses fundamental development and runtime needs using Cactus, a framework with interfaces for FORTRAN and C-based languages, and high-performance model communication needs using the DDB. This document also specifically explores object-oriented design issues in the context of climate modeling, as well as climate modeling issues in terms of object-oriented design.
Moullin, Joanna C; Sabater-Hernández, Daniel; Fernandez-Llimos, Fernando; Benrimoj, Shalom I
2015-03-14
Implementation science and knowledge translation have developed across multiple disciplines with the common aim of bringing innovations to practice. Numerous implementation frameworks, models, and theories have been developed to target a diverse array of innovations. As such, it is plausible that not all frameworks include the full range of concepts now thought to be involved in implementation. Users face the decision of selecting a single or combining multiple implementation frameworks. To aid this decision, the aim of this review was to assess the comprehensiveness of existing frameworks. A systematic search was undertaken in PubMed to identify implementation frameworks of innovations in healthcare published from 2004 to May 2013. Additionally, titles and abstracts from Implementation Science journal and references from identified papers were reviewed. The orientation, type, and presence of stages and domains, along with the degree of inclusion and depth of analysis of factors, strategies, and evaluations of implementation of included frameworks were analysed. Frameworks were assessed individually and grouped according to their targeted innovation. Frameworks for particular innovations had similar settings, end-users, and 'type' (descriptive, prescriptive, explanatory, or predictive). On the whole, frameworks were descriptive and explanatory more often than prescriptive and predictive. A small number of the reviewed frameworks covered an implementation concept(s) in detail, however, overall, there was limited degree and depth of analysis of implementation concepts. The core implementation concepts across the frameworks were collated to form a Generic Implementation Framework, which includes the process of implementation (often portrayed as a series of stages and/or steps), the innovation to be implemented, the context in which the implementation is to occur (divided into a range of domains), and influencing factors, strategies, and evaluations. The selection of implementation framework(s) should be based not solely on the healthcare innovation to be implemented, but include other aspects of the framework's orientation, e.g., the setting and end-user, as well as the degree of inclusion and depth of analysis of the implementation concepts. The resulting generic structure provides researchers, policy-makers, health administrators, and practitioners a base that can be used as guidance for their implementation efforts.
First Renormalized Parton Distribution Functions from Lattice QCD
NASA Astrophysics Data System (ADS)
Lin, Huey-Wen; LP3 Collaboration
2017-09-01
We present the first lattice-QCD results on the nonperturbatively renormalized parton distribution functions (PDFs). Using X.D. Ji's large-momentum effective theory (LaMET) framework, lattice-QCD hadron structure calculations are able to overcome the longstanding problem of determining the Bjorken-x dependence of PDFs. This has led to numerous additional theoretical works and exciting progress. In this talk, we will address a recent development that implements a step missing from prior lattice-QCD calculations: renormalization, its effects on the nucleon matrix elements, and the resultant changes to the calculated distributions.
Distributed information system architecture for Primary Health Care.
Grammatikou, M; Stamatelopoulos, F; Maglaris, B
2000-01-01
We present a distributed architectural framework for Primary Health Care (PHC) Centres. Distribution is handled through the introduction of the Roaming Electronic Health Care Record (R-EHCR) and the use of local caching and incremental update of a global index. The proposed architecture is designed to accommodate a specific PHC workflow model. Finally, we discuss a pilot implementation in progress, which is based on CORBA and web-based user interfaces. However, the conceptual architecture is generic and open to other middleware approaches like the DHE or HL7.
An Open Source Extensible Smart Energy Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rankin, Linda
Aggregated distributed energy resources are the subject of much interest in the energy industry and are expected to play an important role in meeting our future energy needs by changing how we use, distribute, and generate electricity. This energy future includes an increased amount of energy from renewable resources, load management techniques to improve resiliency and reliability, and distributed energy storage and generation capabilities that can be managed to meet the needs of the grid as well as individual customers. These energy assets are commonly referred to as Distributed Energy Resources (DER). DERs rely on a means to communicate information between an energy provider and multitudes of devices. Today, DER control systems are typically vendor-specific, using custom hardware and software solutions. As a result, customers are locked into communication transport protocols, applications, tools, and data formats. Today's systems are often difficult to extend to meet new application requirements, resulting in stranded assets when business requirements or energy management models evolve. By partnering with industry advisors and researchers, a DER research platform called the Smart Energy Framework (SEF) was developed. The hypothesis of this research was that an open source Internet of Things (IoT) framework could play a role in creating a commodity-based ecosystem for DER assets that would reduce costs and provide interoperable products. SEF is based on the AllJoyn™ IoT open source framework. The demonstration system incorporated DER assets, specifically batteries and smart water heaters. To verify the behavior of the distributed system, models of water heaters and batteries were also developed. An IoT interface for communicating between the assets and a control server was defined. This interface supports a series of "events" and telemetry reporting, similar to those defined by current smart grid communication standards. The results of this effort demonstrated the feasibility and application potential of using IoT frameworks for the creation of commodity-based DER systems. All of the identified commodity-based system requirements were met by the AllJoyn framework. By having commodity solutions, small vendors can enter the market and the cost of implementation for all parties is reduced. Utilities and aggregators can choose from multiple interoperable products, reducing the risk of stranded assets. Based on this research, it is recommended that interfaces based on existing smart grid communication protocol standards be created for these emerging IoT frameworks. These interfaces should be standardized as part of the IoT framework, allowing for interoperability testing and certification. Similarly, IoT frameworks are introducing application-level security. This type of security is needed for protecting applications and platforms and will be important moving forward. It is recommended that, along with DER-based data model interfaces, platform and application security requirements also be prescribed when IoT devices support DER applications.
Generalized Intelligent Framework for Tutoring (GIFT) Cloud/Virtual Open Campus Quick-Start Guide
2016-03-01
This document serves as the quick-start guide for GIFT Cloud, the web-based... to users with a GIFT Account at no cost. GIFT Cloud is a new implementation of GIFT. This web-based application allows learners, authors, and... GIFT Cloud is accessed via a web browser. Officially, GIFT Cloud has been tested to work on
ERIC Educational Resources Information Center
Blanchi, Christophe; Petrone, Jason; Pinfield, Stephen; Suleman, Hussein; Fox, Edward A.; Bauer, Charly; Roddy, Carol Lynn
2001-01-01
Includes four articles that discuss a distributed architecture for managing metadata that promotes interoperability between digital libraries; the use of electronic print (e-print) by physicists; the development of digital libraries; and a collaborative project between two library consortia in Ohio to provide digital versions of Sanborn Fire…
Simulating galactic dust grain evolution on a moving mesh
NASA Astrophysics Data System (ADS)
McKinnon, Ryan; Vogelsberger, Mark; Torrey, Paul; Marinacci, Federico; Kannan, Rahul
2018-05-01
Interstellar dust is an important component of the galactic ecosystem, playing a key role in multiple galaxy formation processes. We present a novel numerical framework for the dynamics and size evolution of dust grains implemented in the moving-mesh hydrodynamics code AREPO suited for cosmological galaxy formation simulations. We employ a particle-based method for dust subject to dynamical forces including drag and gravity. The drag force is implemented using a second-order semi-implicit integrator and validated using several dust-hydrodynamical test problems. Each dust particle has a grain size distribution, describing the local abundance of grains of different sizes. The grain size distribution is discretised with a second-order piecewise linear method and evolves in time according to various dust physical processes, including accretion, sputtering, shattering, and coagulation. We present a novel scheme for stochastically forming dust during stellar evolution and new methods for sub-cycling of dust physics time-steps. Using this model, we simulate an isolated disc galaxy to study the impact of dust physical processes that shape the interstellar grain size distribution. We demonstrate, for example, how dust shattering shifts the grain size distribution to smaller sizes resulting in a significant rise of radiation extinction from optical to near-ultraviolet wavelengths. Our framework for simulating dust and gas mixtures can readily be extended to account for other dynamical processes relevant in galaxy formation, like magnetohydrodynamics, radiation pressure, and thermo-chemical processes.
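The drag coupling at the heart of the dust dynamics can be illustrated with a one-particle sketch. A first-order semi-implicit (backward-Euler) update treats the stiff drag term implicitly, so the step stays stable even when the stopping time is much shorter than the time-step; the paper's integrator is a second-order refinement of this idea. All values are arbitrary.

```python
import numpy as np

def drag_step_semi_implicit(v_dust, v_gas, t_stop, dt):
    """Backward-Euler drag update: dv/dt = -(v_dust - v_gas) / t_stop.

    Solving (v' - v)/dt = -(v' - v_gas)/t_stop for v' gives the update
    below, which relaxes v_dust toward v_gas and is stable for any dt,
    even in the stiff limit t_stop << dt.
    """
    a = dt / t_stop
    return (v_dust + a * v_gas) / (1.0 + a)

v_dust = np.array([1.0, 0.0])   # arbitrary initial dust velocity
v_gas = np.array([0.0, 0.5])    # local gas velocity
for _ in range(10):
    v_dust = drag_step_semi_implicit(v_dust, v_gas, t_stop=0.01, dt=0.1)
print(v_dust)   # tightly coupled: the dust velocity approaches the gas's
```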
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattarai, Bishnu; Kouzelis, Konstantinos; Mendaza, Iker
The gradual penetration of active loads in low voltage distribution grids is expected to challenge their network capacity in the near future. Distribution system operators should for this reason resort either to costly grid reinforcements or to demand side management mechanisms. Since demand side management implementation is usually cheaper, it is also the favorable solution. To this end, this article presents a framework for handling grid limit violations, both voltage and current, to ensure a secure and qualitative operation of the distribution grid. This framework consists of two steps, namely a proactive centralized and subsequently a reactive decentralized control scheme. The former is employed to balance the one-hour-ahead load, while the latter aims at regulating consumption in real time. In both cases, the importance of fair use of electricity demand flexibility is emphasized. Thus, it is demonstrated that this methodology aids in keeping the grid status within preset limits while utilizing flexibility from all flexibility participants.
Consistent second-order boundary implementations for convection-diffusion lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chew, Jia Wei
2018-02-01
In this study, an alternative second-order boundary scheme is proposed under the framework of the convection-diffusion lattice Boltzmann (LB) method for both straight and curved geometries. With the proposed scheme, boundary implementations are developed for the Dirichlet, Neumann, and linear Robin conditions in a consistent way. The Chapman-Enskog analysis and the Hermite polynomial expansion technique are first applied to derive the explicit expression for the general distribution function with second-order accuracy. Then, the macroscopic variables involved in the expression for the distribution function are determined by the prescribed macroscopic constraints and the known distribution functions after streaming [see the paragraph after Eq. (29) for the discussions of the "streaming step" in the LB method]. After that, the unknown distribution functions are obtained from the derived macroscopic information at the boundary nodes. For straight boundaries, boundary nodes are placed directly at the physical boundary surface, and the present scheme is applied directly. When extending the present scheme to curved geometries, a local curvilinear coordinate system and a first-order Taylor expansion are introduced to relate the macroscopic variables at the boundary nodes to the physical constraints at the curved boundary surface. In essence, the unknown distribution functions at the boundary node are derived from the known distribution functions at the same node in accordance with the macroscopic boundary conditions at the surface. Therefore, the advantages of the present boundary implementations are (i) locality, i.e., no information from neighboring fluid nodes is required; and (ii) consistency, i.e., the physical boundary constraints are directly applied when determining the macroscopic variables at the boundary nodes, and thus the three kinds of conditions are realized in a consistent way. It should be noted that the present focus is on two-dimensional cases, and the theoretical derivations as well as the numerical validations are performed in the framework of the two-dimensional five-velocity (D2Q5) lattice model.
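To ground the terminology (streaming, collision, unknown boundary populations), here is a deliberately minimal 1-D D1Q3 lattice Boltzmann diffusion solver with Dirichlet walls, in which the single unknown post-streaming population at each wall node is recovered from the prescribed concentration — the same "derive the macroscopic variables, then reconstruct the unknowns" logic the paper develops with second-order accuracy for D2Q5 and curved geometries. This sketch is only first-order accurate at the boundary and is not the paper's scheme.

```python
import numpy as np

# D1Q3 lattice Boltzmann solver for 1-D diffusion with Dirichlet walls.
nx, nsteps = 32, 20000
w = np.array([2/3, 1/6, 1/6])         # weights for velocities {0, +1, -1}
tau = 0.8                              # diffusivity D = (tau - 0.5)/3
C_left, C_right = 1.0, 0.0             # prescribed wall concentrations

f = np.outer(w, np.full(nx, C_right))  # start from C = 0 everywhere
for _ in range(nsteps):
    C = f.sum(axis=0)
    feq = np.outer(w, C)               # pure-diffusion equilibrium
    f += -(f - feq) / tau              # BGK collision
    f[1] = np.roll(f[1], 1)            # stream the +1 population right
    f[2] = np.roll(f[2], -1)           # stream the -1 population left
    # Boundary nodes: the incoming population is unknown after streaming;
    # recover it from the prescribed concentration and the known ones.
    f[1, 0] = C_left - f[0, 0] - f[2, 0]        # left wall, unknown f_+
    f[2, -1] = C_right - f[0, -1] - f[1, -1]    # right wall, unknown f_-

print(f.sum(axis=0)[::4])   # approaches the linear steady-state profile
```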
DIRAC File Replica and Metadata Catalog
NASA Astrophysics Data System (ADS)
Tsaregorodtsev, A.; Poss, S.
2012-12-01
File replica and metadata catalogs are essential parts of any distributed data management system, and largely determine its functionality and performance. A new File Catalog (DFC) was developed in the framework of the DIRAC Project that combines both replica and metadata catalog functionality. The DFC design is based on practical experience with the data management system of the LHCb Collaboration. It is optimized for the most common patterns of catalog usage in order to achieve maximum performance from the user perspective. The DFC supports bulk operations for replica queries and allows quick analysis of the storage usage globally and for each Storage Element separately. It supports flexible ACL rules with plug-ins for various policies that can be adopted by a particular community. The DFC also makes it possible to store various types of metadata associated with files and directories and to perform efficient queries for the data based on complex metadata combinations. Definition of file ancestor-descendant relation chains is also possible. The DFC is implemented in the general DIRAC distributed computing framework following the standard grid security architecture. In this paper we describe the design of the DFC and its implementation details. The performance measurements are compared with other grid file catalog implementations. The experience of using the DFC in the CLIC detector project is discussed.
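As an illustration of metadata-driven discovery, the sketch below issues a bulk metadata query through DIRAC's Python client. The method name findFilesByMetadata, the query syntax, the catalog path and the metadata fields are stated from memory and should be treated as assumptions against the current DIRAC release, not as the authoritative DFC interface.

```python
# Hypothetical sketch of a bulk metadata query against the DIRAC File
# Catalog. Script.parseCommandLine() is the standard DIRAC client
# bootstrap; the query below is an invented example.
from DIRAC.Core.Base.Script import Script
Script.parseCommandLine()
from DIRAC.Resources.Catalog.FileCatalog import FileCatalog

fc = FileCatalog()
# complex metadata combination: all files from a given detector run range
query = {"Detector": "CLIC_SiD", "RunNumber": {">=": 1000, "<": 2000}}
result = fc.findFilesByMetadata(query, path="/ilc/prod")  # assumed signature
if result["OK"]:
    for lfn in result["Value"]:
        print(lfn)
```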
Potjans, Wiebke; Morrison, Abigail; Diesmann, Markus
2010-01-01
A major puzzle in the field of computational neuroscience is how to relate system-level learning in higher organisms to synaptic plasticity. Recently, plasticity rules depending not only on pre- and post-synaptic activity but also on a third, non-local neuromodulatory signal have emerged as key candidates to bridge the gap between the macroscopic and the microscopic level of learning. Crucial insights into this topic are expected to be gained from simulations of neural systems, as these allow the simultaneous study of the multiple spatial and temporal scales that are involved in the problem. In particular, synaptic plasticity can be studied during the whole learning process, i.e., on a time scale of minutes to hours and across multiple brain areas. Implementing neuromodulated plasticity in large-scale network simulations where the neuromodulatory signal is dynamically generated by the network itself is challenging, because the network structure is commonly defined purely by the connectivity graph without explicit reference to the embedding of the nodes in physical space. Furthermore, the simulation of networks with realistic connectivity entails the use of distributed computing. A neuromodulated synapse must therefore be informed in an efficient way about the neuromodulatory signal, which is typically generated by a population of neurons located on different machines than either the pre- or post-synaptic neuron. Here, we develop a general framework to solve the problem of implementing neuromodulated plasticity in a time-driven distributed simulation, without reference to a particular implementation language, neuromodulator, or neuromodulated plasticity mechanism. We implement our framework in the simulator NEST and demonstrate excellent scaling up to 1024 processors for simulations of a recurrent network incorporating neuromodulated spike-timing dependent plasticity. PMID:21151370
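To make the mechanism concrete, the following minimal sketch wires a dopamine-releasing population to NEST's volume transmitter so that a neuromodulated STDP synapse model receives the global signal without per-synapse connections. It is written against the NEST 2.x API (model names such as volume_transmitter and stdp_dopamine_synapse and the vt parameter exist there; NEST 3 syntax differs), and the population sizes are toy values.

```python
import nest

nest.ResetKernel()
nest.SetKernelStatus({"total_num_virtual_procs": 4})   # threaded/distributed run

# population whose spikes encode the global neuromodulatory signal
dopa_neurons = nest.Create("iaf_psc_alpha", 100)

# the volume transmitter collects those spikes and delivers them to every
# neuromodulated synapse on this process
vt = nest.Create("volume_transmitter")
nest.Connect(dopa_neurons, vt)

# register the transmitter with the dopamine-modulated STDP synapse model
nest.CopyModel("stdp_dopamine_synapse", "dopa_stdp", {"vt": vt[0]})

pre = nest.Create("iaf_psc_alpha", 500)
post = nest.Create("iaf_psc_alpha", 500)
nest.Connect(pre, post, {"rule": "fixed_indegree", "indegree": 50},
             {"model": "dopa_stdp", "weight": 1.0})

nest.Simulate(1000.0)
```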
GEOSS AIP-2 Climate Change and Biodiversity Use Scenarios: Interoperability Infrastructures
NASA Astrophysics Data System (ADS)
Nativi, Stefano; Santoro, Mattia
2010-05-01
In recent years, the scientific community has devoted great effort to studying the effects of climate change on life on Earth. In this general framework, a key role is played by the impact of climate change on biodiversity. To assess this, several use scenarios require the modeling of the impact of climatological change on the regional distribution of biodiversity species. Designing and developing interoperability infrastructures which enable scientists to search, discover, access and use multi-disciplinary resources (i.e. datasets, services, models, etc.) is currently one of the main research fields for Earth and Space Science Informatics. This presentation introduces and discusses an interoperability infrastructure which implements the discovery, access, and chaining of loosely-coupled resources in the climatology and biodiversity domains. This makes it possible to set up and run forecast and processing models. The presented framework was developed and tested in the context of the GEOSS AIP-2 (Global Earth Observation System of Systems, Architecture Implementation Pilot - Phase 2) Climate Change & Biodiversity thematic Working Group. This interoperability infrastructure comprises the following main components and services: a) GEO Portal: through this component the end user is able to search, find and access the services needed for the scenario execution; b) Graphical User Interface (GUI): this component provides user interaction functionalities and controls the workflow manager to perform the required operations for the scenario implementation; c) Use Scenario controller: this component acts as a workflow controller implementing the scenario business process, i.e. a typical climate change & biodiversity projection scenario; d) Service Broker implementing Mediation Services: this component realizes a distributed catalogue which federates several discovery and access components (exposing them through a unique CSW standard interface); the federated components publish climate, environmental and biodiversity datasets; e) Ecological Niche Model Server: this component is able to run one or more Ecological Niche Models (ENM) on selected biodiversity and climate datasets; f) Data Access Transaction server: this component publishes the model outputs. This framework was assessed in two use scenarios of the GEOSS AIP-2 Climate Change and Biodiversity WG. Both scenarios concern the prediction of species distributions driven by climatological change forecasts. The first scenario dealt with the regional distribution of the Pika species in the Great Basin area (North America), while the second concerned the modeling of Arctic food chain species in the North Pole area; the relationships between different environmental parameters and the distribution of polar bears were analyzed. The scientific patronage was provided by the University of Colorado and the University of Alaska, respectively. Results are published on the GEOSS AIP-2 web site: http://www.ogcnetwork.net/AIP2develop.
NASA Astrophysics Data System (ADS)
Varghese, Julian
This research work has contributed in various ways to help develop a better understanding of textile composites and materials with complex microstructures in general. An instrumental part of this work was the development of an object-oriented framework that made it convenient to perform multiscale/multiphysics analyses of advanced materials with complex microstructures such as textile composites. In addition to the studies conducted in this work, this framework lays the groundwork for continued research of these materials. This framework enabled a detailed multiscale stress analysis of a woven DCB specimen that revealed the effect of the complex microstructure on the stress and strain energy release rate distribution along the crack front. In addition to implementing an oxidation model, the framework was also used to implement strategies that expedited the simulation of oxidation in textile composites so that it would take only a few hours. The simulation showed that the tow architecture played a significant role in the oxidation behavior in textile composites. Finally, a coupled diffusion/oxidation and damage progression analysis was implemented that was used to study the mechanical behavior of textile composites under mechanical loading as well as oxidation. A parametric study was performed to determine the effect of material properties and the number of plies in the laminate on its mechanical behavior. The analyses indicated a significant effect of the tow architecture and other parameters on the damage progression in the laminates.
A Novel Trust Service Provider for Internet Based Commerce Applications.
ERIC Educational Resources Information Center
Siyal, M. Y.; Barkat, B.
2002-01-01
Presents a framework for enhancing trust in Internet commerce. Shows how trust can be provided through a network of Trust Service Providers (TSp). Identifies a set of services that should be offered by a TSp. Presents a distributed object-oriented implementation of trust services using CORBA, JAVA and XML. (Author/AEF)
ERIC Educational Resources Information Center
Corbi, Alberto; Burgos, Daniel
2017-01-01
This paper presents how virtual containers enhance the implementation of STEAM (science, technology, engineering, arts, and math) subjects as Open Educational Resources (OER). The publication initially summarizes the limitations of delivering open rich learning contents and corresponding assignments to students in college level STEAM areas. The…
Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons
Probst, Dimitri; Petrovici, Mihai A.; Bytschok, Ilja; Bill, Johannes; Pecevski, Dejan; Schemmel, Johannes; Meier, Karlheinz
2015-01-01
The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems. PMID:25729361
A distributed cloud-based cyberinfrastructure framework for integrated bridge monitoring
NASA Astrophysics Data System (ADS)
Jeong, Seongwoon; Hou, Rui; Lynch, Jerome P.; Sohn, Hoon; Law, Kincho H.
2017-04-01
This paper describes a cloud-based cyberinfrastructure framework for the management of the diverse data involved in bridge monitoring. Bridge monitoring involves various hardware systems, software tools and laborious activities, including, for example, a structural health monitoring (SHM) sensor network, engineering analysis programs and visual inspection. Very often, these monitoring systems, tools and activities are not coordinated, and the collected information is not shared. A well-designed integrated data management framework can support the effective use of the data and, thereby, enhance bridge management and maintenance operations. The cloud-based cyberinfrastructure framework presented herein is designed to manage not only sensor measurement data acquired from the SHM system, but also other relevant information, such as bridge engineering models and traffic videos, in an integrated manner. For scalability and flexibility, cloud computing services and distributed database systems are employed. The information stored can be accessed through standard web interfaces. For demonstration, the cyberinfrastructure system is implemented for the monitoring of the bridges located along the I-275 corridor in the state of Michigan.
A program for the Bayesian Neural Network in the ROOT framework
NASA Astrophysics Data System (ADS)
Zhong, Jiahang; Huang, Run-Sheng; Lee, Shih-Chang
2011-12-01
We present a Bayesian Neural Network algorithm implemented in the TMVA package (Hoecker et al., 2007 [1]), within the ROOT framework (Brun and Rademakers, 1997 [2]). Compared to the conventional use of a Neural Network as a discriminator, this new implementation offers advantages as a non-parametric regression tool, particularly for fitting probabilities. It provides functionalities including cost function selection, complexity control and uncertainty estimation. An example of such an application in High Energy Physics is shown. The algorithm is available with ROOT releases later than 5.29.
Program summary
Program title: TMVA-BNN
Catalogue identifier: AEJX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: BSD license
No. of lines in distributed program, including test data, etc.: 5094
No. of bytes in distributed program, including test data, etc.: 1,320,987
Distribution format: tar.gz
Programming language: C++
Computer: Any computer system or cluster with a C++ compiler and UNIX-like operating system
Operating system: Most UNIX/Linux systems. The application programs were thoroughly tested under Fedora and Scientific Linux CERN.
Classification: 11.9
External routines: ROOT package version 5.29 or higher (http://root.cern.ch)
Nature of problem: Non-parametric fitting of multivariate distributions
Solution method: An implementation of a Neural Network following the Bayesian statistical interpretation. Uses the Laplace approximation for the Bayesian marginalizations. Provides the functionalities of automatic complexity control and uncertainty estimation.
Running time: Time consumption for the training depends substantially on the size of the input sample, the NN topology, the number of training iterations, etc. For the example in this manuscript, about 7 min was used on a PC/Linux with 2.0 GHz processors.
A Framework to Describe, Analyze and Generate Interactive Motor Behaviors
Jarrassé, Nathanaël; Charalambous, Themistoklis; Burdet, Etienne
2012-01-01
While motor interaction between a robot and a human, or between humans, has important implications for society as well as promising applications, little research has been devoted to its investigation. In particular, it is important to understand the different ways two agents can interact and generate suitable interactive behaviors. Towards this end, this paper introduces a framework for the description and implementation of interactive behaviors of two agents performing a joint motor task. A taxonomy of interactive behaviors is introduced, which can classify tasks and cost functions that represent the way each agent interacts. The role of an agent interacting during a motor task can be directly explained from the cost function this agent is minimizing and the task constraints. The novel framework is used to interpret and classify previous works on human-robot motor interaction. Its implementation power is demonstrated by simulating representative interactions of two humans. It also enables us to interpret and explain the role distribution and switching between roles when performing joint motor tasks. PMID:23226231
ESTEST: A Framework for the Verification and Validation of Electronic Structure Codes
NASA Astrophysics Data System (ADS)
Yuan, Gary; Gygi, Francois
2011-03-01
ESTEST is a verification and validation (V&V) framework for electronic structure codes that supports Qbox, Quantum Espresso, ABINIT and the Exciting Code, with support planned for many more. We discuss various approaches to the electronic structure V&V problem implemented in ESTEST, related to parsing, formats, data management, search, comparison and analyses. Additionally, an early experiment in the distribution of V&V ESTEST servers among the electronic structure community will be presented. Supported by NSF-OCI 0749217 and DOE FC02-06ER25777.
Serhal, Eva; Arena, Amanda; Sockalingam, Sanjeev; Mohri, Linda; Crawford, Allison
2018-03-01
The Project Extension for Community Healthcare Outcomes (ECHO) model expands primary care provider (PCP) capacity to manage complex diseases by sharing knowledge, disseminating best practices, and building a community of practice. The model has expanded rapidly, with over 140 ECHO projects currently established globally. We have used validated implementation frameworks, such as Damschroder's (2009) Consolidated Framework for Implementation Research (CFIR) and Proctor's (2011) taxonomy of implementation outcomes, combined with implementation experience to (1) create a set of questions to assess organizational readiness and suitability of the ECHO model and (2) provide those who have determined ECHO is the correct model with a checklist to support successful implementation. A set of considerations was created, which adapted and consolidated CFIR constructs to create ECHO-specific organizational readiness questions, as well as a process guide for implementation. Each consideration was mapped onto Proctor's (2011) implementation outcomes, and questions relating to the constructs were developed and reviewed for clarity. The Preimplementation list included 20 questions; most questions fall within Proctor's (2011) implementation outcome domains of "Appropriateness" and "Acceptability." The Process Checklist is a 26-item checklist to help launch an ECHO project; items map onto the constructs of Planning, Engaging, Executing, Reflecting, and Evaluating. Given that fidelity to the ECHO model is associated with robust outcomes, effective implementation is critical. These tools will enable programs to work through key considerations to implement a successful Project ECHO. Next steps will include validation with a diverse sample of ECHO projects.
Data analysis environment (DASH2000) for the Subaru telescope
NASA Astrophysics Data System (ADS)
Mizumoto, Yoshihiko; Yagi, Masafumi; Chikada, Yoshihiro; Ogasawara, Ryusuke; Kosugi, George; Takata, Tadafumi; Yoshida, Michitoshi; Ishihara, Yasuhide; Yanaka, Hiroshi; Yamamoto, Tadahiro; Morita, Yasuhiro; Nakamoto, Hiroyuki
2000-06-01
A new framework for the data analysis system (DASH) has been developed for the SUBARU Telescope. It is designed using object-oriented methodology and adopts a restaurant model. DASH shares the CPU and I/O load among distributed heterogeneous computers. The distributed object environment of the system is implemented with Java and CORBA. DASH has been evaluated through several prototypes. DASH2000 is the latest version, which will be released as the beta version of the data analysis system for the SUBARU Telescope.
Integration of a CAD System Into an MDO Framework
NASA Technical Reports Server (NTRS)
Townsend, J. C.; Samareh, J. A.; Weston, R. P.; Zorumski, W. E.
1998-01-01
NASA Langley has developed a heterogeneous distributed computing environment, called the Framework for Inter-disciplinary Design Optimization, or FIDO. Its purpose has been to demonstrate the framework's technical feasibility and usefulness for optimizing the preliminary design of complex systems and to provide a working environment for testing optimization schemes. Its initial implementation has been for a simplified model of the preliminary design of a high-speed civil transport. Upgrades being considered for the FIDO system include a more complete geometry description, required by high-fidelity aerodynamics and structures codes and based on a commercial Computer Aided Design (CAD) system. This report presents the philosophy behind some of the decisions that have shaped the FIDO system and gives a brief case study of the problems and successes encountered in integrating a CAD system into the FIDO framework.
The SysMan monitoring service and its management environment
NASA Astrophysics Data System (ADS)
Debski, Andrzej; Janas, Ekkehard
1996-06-01
Management of modern information systems is becoming more and more complex. There is a growing need for powerful, flexible and affordable management tools to assist system managers in maintaining such systems. It is at the same time evident that effective management should integrate network management, system management and application management in a uniform way. Object oriented OSI management architecture with its four basic modelling concepts (information, organization, communication and functional models) together with widely accepted distribution platforms such as ANSA/CORBA, constitutes a reliable and modern framework for the implementation of a management toolset. This paper focuses on the presentation of concepts and implementation results of an object oriented management toolset developed and implemented within the framework of the ESPRIT project 7026 SysMan. An overview is given of the implemented SysMan management services including the System Management Service, Monitoring Service, Network Management Service, Knowledge Service, Domain and Policy Service, and the User Interface. Special attention is paid to the Monitoring Service which incorporates the architectural key entity responsible for event management. Its architecture and building components, especially filters, are emphasized and presented in detail.
Communication Optimizations for a Wireless Distributed Prognostic Framework
NASA Technical Reports Server (NTRS)
Saha, Sankalita; Saha, Bhaskar; Goebel, Kai
2009-01-01
Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics - particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step - resampling. We propose a new resampling scheme, called parameterized resampling, that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes is also presented. A battery health management system is used as a target application. Our proposed resampling scheme performs significantly better than existing schemes by reducing both the length and the total number of communication messages exchanged, while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system, as well as a full implementation of the new schemes on the Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
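For context, the sketch below implements conventional single-node systematic resampling, the kind of baseline step whose communication cost a distributed scheme such as the paper's parameterized resampling targets; it is not the paper's scheme, and the example weights are arbitrary.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Return particle indices drawn with systematic resampling."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one jitter, n strata
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0                                # guard against rounding error
    return np.searchsorted(cumsum, positions)

rng = np.random.default_rng(0)
w = rng.random(1000)
w /= w.sum()                                        # normalized particle weights
idx = systematic_resample(w, rng)
print(len(np.unique(idx)), "distinct particles survive")
```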
Experimental quantum key distribution with source flaws
NASA Astrophysics Data System (ADS)
Xu, Feihu; Wei, Kejin; Sajeed, Shihan; Kaiser, Sarah; Sun, Shihai; Tang, Zhiyuan; Qian, Li; Makarov, Vadim; Lo, Hoi-Kwong
2015-09-01
Decoy-state quantum key distribution (QKD) is a standard technique in current quantum cryptographic implementations. Unfortunately, existing experiments have two important drawbacks: the state preparation is assumed to be perfect without errors, and the employed security proofs do not fully consider the finite-key effects for general attacks. These two drawbacks mean that existing experiments are not guaranteed to be secure in practice. Here, we perform an experiment that shows secure QKD with imperfect state preparations over long distances and achieves rigorous finite-key security bounds for decoy-state QKD against coherent attacks in the universally composable framework. We quantify the source flaws experimentally and demonstrate a QKD implementation that is tolerant to channel loss despite the source flaws. Our implementation considers more real-world problems than most previous experiments, and our theory can be applied to general discrete-variable QKD systems. These features constitute a step towards secure QKD with imperfect devices.
Deterministic Design Optimization of Structures in OpenMDAO Framework
NASA Technical Reports Server (NTRS)
Coroneos, Rula M.; Pai, Shantaram S.
2012-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a NASA GRC developed code. The reliability and efficiency of the OpenMDAO framework were compared and reported.
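A minimal sketch of the component/driver workflow in OpenMDAO, shown here with a toy paraboloid objective rather than a structural solver, and written against the modern openmdao.api interface (which postdates the paper):

```python
import openmdao.api as om

class Paraboloid(om.ExplicitComponent):
    """Toy objective standing in for a structural analysis component."""
    def setup(self):
        self.add_input("x", val=0.0)
        self.add_input("y", val=0.0)
        self.add_output("f", val=0.0)
        self.declare_partials("f", ["x", "y"], method="fd")  # finite-difference derivatives

    def compute(self, inputs, outputs):
        x, y = inputs["x"], inputs["y"]
        outputs["f"] = (x - 3.0) ** 2 + x * y + (y + 4.0) ** 2 - 3.0

prob = om.Problem()
prob.model.add_subsystem("parab", Paraboloid(), promotes=["*"])
prob.driver = om.ScipyOptimizeDriver()        # gradient-based driver
prob.driver.options["optimizer"] = "SLSQP"
prob.model.add_design_var("x", lower=-50, upper=50)
prob.model.add_design_var("y", lower=-50, upper=50)
prob.model.add_objective("f")
prob.setup()
prob.run_driver()
print(prob.get_val("x"), prob.get_val("y"), prob.get_val("f"))
```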
Implementation and performance test of cloud platform based on Hadoop
NASA Astrophysics Data System (ADS)
Xu, Jingxian; Guo, Jianhong; Ren, Chunlan
2018-01-01
Hadoop, an open source project of the Apache foundation, is a distributed computing framework that handles large amounts of data and has been widely used in the Internet industry. It is therefore worthwhile to study how a Hadoop platform is built and how its performance can be tested. This paper presents a method for implementing a Hadoop platform together with a method for testing the platform's performance. Experimental results show that the proposed performance-testing method is effective and can characterize the performance of a Hadoop platform.
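As a hedged illustration of running user jobs on such a platform, below is the classic Hadoop Streaming word count written as a single Python script acting as mapper or reducer; the streaming invocation in the header comment and the input/output paths are illustrative assumptions, not taken from the paper.

```python
#!/usr/bin/env python3
# Word count for Hadoop Streaming; select the role via argv, e.g.:
#   hadoop jar hadoop-streaming.jar \
#       -input /data/in -output /data/out \
#       -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
#       -file wordcount.py
import sys

def mapper():
    # emit (word, 1) pairs, tab-separated, one per line
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # input arrives sorted by key; sum counts per word
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:2] == ["map"] else reducer()
```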
Design & implementation of distributed spatial computing node based on WPS
NASA Astrophysics Data System (ADS)
Liu, Liping; Li, Guoqing; Xie, Jibo
2014-03-01
Current research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in grid environments, while the importance of spatial computing resources is overlooked. In order to implement the sharing of, and cooperation among, spatial computing resources in a grid environment, this paper systematically investigates the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification of the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and verified in this environment.
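To illustrate how a client would invoke such a node, the sketch below talks to a WPS endpoint with OWSLib. The service URL and the process identifier are made-up placeholders, and the call pattern reflects the OWSLib API as commonly documented rather than the prototype described in the paper.

```python
from owslib.wps import WebProcessingService

# hypothetical Spatial Computing Node exposing a WPS interface
wps = WebProcessingService("http://example.org/wps")
wps.getcapabilities()
for process in wps.processes:
    print(process.identifier, "-", process.title)

# execute a hypothetical buffering process on a published dataset
execution = wps.execute("gs:Buffer",
                        inputs=[("geometry", "http://example.org/data.gml"),
                                ("distance", "100")])
print(execution.status)
```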
A future-proof architecture for telemedicine using loose-coupled modules and HL7 FHIR.
Gøeg, Kirstine Rosenbeck; Rasmussen, Rune Kongsgaard; Jensen, Lasse; Wollesen, Christian Møller; Larsen, Søren; Pape-Haugaard, Louise Bilenberg
2018-07-01
Most telemedicine solutions are proprietary and disease-specific, which causes a heterogeneous and silo-oriented system landscape with limited interoperability. Solving the interoperability problem would require a strong focus on data integration and standardization in telemedicine infrastructures. Our objective was to suggest a future-proof architecture that consists of small loose-coupled modules, to allow flexible integration with new and existing services, and that uses international standards, to allow high re-usability of modules and interoperability in the health IT landscape. We identified the core features of our future-proof architecture as the following: (1) To provide extended functionality, the system should be designed as a core with modules; database handling and implementation of security protocols are modules, to improve flexibility compared to other frameworks. (2) To ensure loosely coupled modules, the system should implement an inversion of control mechanism. (3) For ease of implementation, the system should use HL7 FHIR (Fast Healthcare Interoperability Resources) as the primary standard because it is based on web technologies. We evaluated the feasibility of our architecture by developing an open source implementation of the system called ORDS. ORDS is written in TypeScript and makes use of the Express framework and HL7 FHIR DSTU2. The code is distributed on GitHub. All modules have been tested unit-wise, but end-to-end testing awaits our first clinical example implementations. Our study showed that highly adaptable and yet interoperable core frameworks for telemedicine can be designed and implemented. Future work includes implementation of a clinical use case and evaluation.
Heuner, Maike; Weber, Arnd; Schröder, Uwe; Kleinschmit, Birgit; Schröder, Boris
2016-09-15
The European Water Framework Directive requires a good ecological potential for heavily modified water bodies. This standard had not been reached for most large estuaries by 2015. Management plans for estuaries fall short in linking the implementation of restoration measures to the underlying spatial analyses. The distribution of emergent macrophytes - as an indicator of habitat quality - is used here to assess the ecological potential. Emergent macrophytes are capable of settling on gentle tidal flats where hydrodynamic stress is comparatively low. Analyzing their habitats based on spatial data, we set up species distribution models with 'elevation relative to mean high water', 'mean bank slope', and 'length of bottom friction' from shallow water up to the vegetation belt as key predictors representing hydrodynamic stress. Effects of restoration scenarios on habitats were assessed by applying these models. Our findings endorse species distribution models as crucial spatial planning tools for implementing restoration measures in modified estuaries.
Spielman, Stephanie J; Wilke, Claus O
2016-11-01
The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development.
ENKI - An Open Source environmental modelling platform
NASA Astrophysics Data System (ADS)
Kolberg, S.; Bruland, O.
2012-04-01
The ENKI software framework for implementing spatio-temporal models is now released under the LGPL license. Originally developed for the evaluation and comparison of distributed hydrological model compositions, ENKI can be used for simulating any time-evolving process over a spatial domain. The core approach is to connect a set of user-specified subroutines into a complete simulation model, and to provide all administrative services needed to calibrate and run that model. This includes functionality for geographical region setup, all file I/O, calibration and uncertainty estimation, etc. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines and various model compositions in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational water resource management. ENKI uses a plug-in structure to invoke separately compiled subroutines built as dynamic-link libraries (DLLs). The source code of an ENKI routine is highly compact, with a narrow framework-routine interface allowing the main program to recognise the number, types, and names of the routine's variables. The framework then exposes these variables to the user within the proper context, ensuring that distributed maps coincide spatially, that time series exist for input variables, that states are initialised, that GIS data sets exist for static map data, that parameters have manually or automatically calibrated values, etc. By using function calls and in-memory data structures to invoke routines and facilitate information flow, ENKI provides good performance. For a typical distributed hydrological model setup over a spatial domain of 25000 grid cells, 3-4 simulated time steps per second can be expected. Future adaptation to parallel processing may further increase this speed. Recent modifications to ENKI include a full separation of API and user interface, making it possible to run ENKI from GIS programs and other software environments. ENKI currently compiles under Windows and Visual Studio only, but there are plans to remove the platform and compiler dependencies.
Data distribution service-based interoperability framework for smart grid testbed infrastructure
Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.
2016-03-02
This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over the message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves the communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamic participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure are developed in order to facilitate interoperability and remote access to the testbed. This interface allows control, monitoring, and performing of experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
A Concept for Run-Time Support of the Chapel Language
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A document presents a concept for run-time implementation of other concepts embodied in the Chapel programming language. (Now undergoing development, Chapel is intended to become a standard language for parallel computing that would surpass older such languages both in computational performance and in the efficiency with which pre-existing code can be reused and new code written.) The aforementioned other concepts are those of distributions, domains, allocations, and access, as defined in a separate document called "A Semantic Framework for Domains and Distributions in Chapel" and linked to a language specification defined in another separate document called "Chapel Specification 0.3." The concept presented in this report is the recognition that a data domain invented for Chapel offers a novel approach to distributing and processing data in a massively parallel environment. The concept is offered as a starting point for development of working descriptions of functions and data structures that would be necessary to implement interfaces to a compiler for transforming the aforementioned other concepts from their representations in Chapel source code to their run-time implementations.
SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data
NASA Astrophysics Data System (ADS)
Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.
2015-12-01
Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework, that extends Apache™ Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache™ Hadoop for parallel computing on a cluster, by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, and not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show the usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate the performance of the various matrix libraries in distributed pipelines, such as Nd4j™ and Breeze™. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, and parallel ingest and partitioning (sharding) of A-Train satellite observations from model grids. These solutions are encompassed in SciSpark, an open-source software framework for distributed computing on scientific data.
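The sRDD idea can be approximated in plain PySpark, as the conceptual sketch below does by partitioning a time-stack of grids across the cluster and mapping a per-grid operation; SciSpark itself is implemented in Scala, so this is an analogy to its design, not its API, and the grid sizes and threshold are invented.

```python
import numpy as np
from pyspark import SparkContext

sc = SparkContext(appName="srdd-sketch")

# pretend each (t, grid) pair was read from one NetCDF/HDF partition
stack = [(t, np.random.rand(90, 180)) for t in range(365)]
rdd = sc.parallelize(stack, numSlices=32)   # shard the time axis across nodes

# a simple geospatial operation: mask each grid against a threshold,
# a building block of searches like the Mesoscale Convective Complex method
masked = rdd.mapValues(lambda g: np.where(g > 0.9, g, 0.0))
total = masked.map(lambda kv: float(kv[1].sum())).reduce(lambda a, b: a + b)
print("sum over masked grids:", total)
sc.stop()
```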
Distributed Grooming in Multi-Domain IP/MPLS-DWDM Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Qing
2009-12-01
This paper studies distributed multi-domain, multilayer provisioning (grooming) in IP/MPLS-DWDM networks. Although many multi-domain studies have emerged over the years, these have primarily considered 'homogeneous' network layers. Meanwhile, most grooming studies have assumed idealized settings with 'global' link state across all layers. Hence there is a critical need to develop practical distributed grooming schemes for real-world networks consisting of multiple domains and technology layers. Along these lines, a detailed hierarchical framework is proposed to implement inter-layer routing, distributed grooming, and setup signaling. The performance of this solution is analyzed in detail using simulation studies, and future work directions are also highlighted.
NASA Astrophysics Data System (ADS)
Smith, T.; Marshall, L.
2007-12-01
In many mountainous regions, the single most important parameter for forecasting the controls on regional water resources is snowpack (Williams et al., 1999). In an effort to bridge the gap between theoretical understanding and functional modeling of snow-driven watersheds, a flexible hydrologic modeling framework is being developed. The aim is to create a suite of models that move from parsimonious structures, concentrated on aggregated watershed response, to those focused on representing finer-scale processes and distributed response. This framework will operate as a tool to investigate the link between hydrologic model predictive performance, uncertainty, model complexity, and observable hydrologic processes. Bayesian methods, and particularly Markov chain Monte Carlo (MCMC) techniques, are extremely useful for uncertainty assessment and parameter estimation of hydrologic models. However, these methods have some difficulties in implementation. In a traditional Bayesian setting, it can be difficult to reconcile multiple data types, particularly those offering different spatial and temporal coverage, depending on the model type. These difficulties are exacerbated by the sensitivity of MCMC algorithms to model initialization and by complex parameter interdependencies. As a way of circumventing some of the computational complications, adaptive MCMC algorithms have been developed to take advantage of the information gained from each successive iteration. Two adaptive algorithms are compared in this study: the Adaptive Metropolis (AM) algorithm, developed by Haario et al. (2001), and the Delayed Rejection Adaptive Metropolis (DRAM) algorithm, developed by Haario et al. (2006). While neither algorithm is truly Markovian, it has been proven that each satisfies the desired ergodicity and stationarity properties of Markov chains. Both algorithms were implemented as the uncertainty and parameter estimation framework for a conceptual rainfall-runoff model based on the Probability Distributed Model (PDM), developed by Moore (1985). We implement the modeling framework in the Stringer Creek watershed in the Tenderfoot Creek Experimental Forest (TCEF), Montana. The snowmelt-driven watershed offers the additional challenge of modeling snow accumulation and melt, and current efforts are aimed at developing a temperature- and radiation-index snowmelt model. Auxiliary data available from within TCEF's watersheds are used to support the understanding of information value as it relates to predictive performance. Because the model is based on lumped parameters, auxiliary data are hard to incorporate directly. However, these additional data offer benefits through the ability to inform the prior distributions of the lumped model parameters. By incorporating data offering different information into the uncertainty assessment process, a cross-validation technique is employed to better ensure that modeled results reflect real process complexity.
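For reference, below is a minimal Adaptive Metropolis sketch in the spirit of Haario et al. (2001), with the proposal covariance re-estimated from the chain's own history; the toy Gaussian target and tuning constants stand in for the PDM-based rainfall-runoff model used in the study.

```python
import numpy as np

def log_post(theta):
    # toy correlated 2-D Gaussian standing in for a real posterior
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

rng = np.random.default_rng(1)
d, n, t0 = 2, 20000, 500            # dims, iterations, adaptation start
sd, eps = 2.4 ** 2 / d, 1e-6        # Haario scaling factor and jitter
chain = np.zeros((n, d))
lp = log_post(chain[0])
cov = np.eye(d) * 0.1               # initial (non-adapted) proposal

for i in range(1, n):
    if i > t0:                      # adapt from the accumulated samples
        cov = sd * (np.cov(chain[:i].T) + eps * np.eye(d))
    prop = rng.multivariate_normal(chain[i - 1], cov)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        chain[i], lp = prop, lp_prop
    else:
        chain[i] = chain[i - 1]

print("posterior mean ~", chain[t0:].mean(axis=0))
```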
Asquith, William H.
2014-01-01
The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
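The first step of any MLM fit is the computation of sample L-moments; the standalone sketch below does this via the usual probability-weighted-moment estimators. It is written in Python for illustration only (the study itself works in R), and the exponential test data are an arbitrary stand-in for an asymmetric, heavy-tailed sample.

```python
import numpy as np

def sample_lmoments(x):
    """Return (l1, l2, l3, l4) from unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4

rng = np.random.default_rng(7)
data = rng.exponential(scale=2.0, size=5000)
l1, l2, l3, l4 = sample_lmoments(data)
print("L-skew =", l3 / l2)   # exponential: 1/3 in expectation
print("L-kurt =", l4 / l2)   # exponential: 1/6 in expectation
```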
Computational statistics using the Bayesian Inference Engine
NASA Astrophysics Data System (ADS)
Weinberg, Martin D.
2013-09-01
This paper introduces the Bayesian Inference Engine (BIE), a general parallel, optimized software package for parameter inference and model selection. This package is motivated by the analysis needs of modern astronomical surveys and the need to organize and reuse expensive derived data. The BIE is the first platform for computational statistics designed explicitly to enable Bayesian update and model comparison for astronomical problems. Bayesian update is based on the representation of high-dimensional posterior distributions using metric-ball-tree based kernel density estimation. Among its algorithmic offerings, the BIE emphasizes hybrid tempered Markov chain Monte Carlo schemes that robustly sample multimodal posterior distributions in high-dimensional parameter spaces. Moreover, the BIE implements a full persistence or serialization system that stores the full byte-level image of the running inference and previously characterized posterior distributions for later use. Two new algorithms to compute the marginal likelihood from the posterior distribution, developed for and implemented in the BIE, enable model comparison for complex models and data sets. Finally, the BIE was designed to be a collaborative platform for applying Bayesian methodology to astronomy. It includes an extensible object-oriented and easily extended framework that implements every aspect of the Bayesian inference. By providing a variety of statistical algorithms for all phases of the inference problem, a scientist may explore a variety of approaches with a single model and data implementation. Additional technical details and download details are available from http://www.astro.umass.edu/bie. The BIE is distributed under the GNU General Public License.
Framework for managing mycotoxin risks in the food industry.
Baker, Robert C; Ford, Randall M; Helander, Mary E; Marecki, Janusz; Natarajan, Ramesh; Ray, Bonnie
2014-12-01
We propose a methodological framework for managing mycotoxin risks in the food processing industry. Mycotoxin contamination is a well-known threat to public health that has economic significance for the food processing industry; it is imperative to address mycotoxin risks holistically, at all points in the procurement, processing, and distribution pipeline, by tracking the relevant data, adopting best practices, and providing suitable adaptive controls. The proposed framework includes (i) an information and data repository, (ii) a collaborative infrastructure with analysis and simulation tools, (iii) standardized testing and acceptance sampling procedures, and (iv) processes that link the risk assessments and testing results to the sourcing, production, and product release steps. The implementation of suitable acceptance sampling protocols for mycotoxin testing is considered in some detail.
McGorman, Laura; Marsh, David R.; Guenther, Tanya; Gilroy, Kate; Barat, Lawrence M.; Hammamy, Diaa; Wansi, Emmanuel; Peterson, Stefan; Hamer, Davidson H.; George, Asha
2012-01-01
Integrated community case management (iCCM) of childhood illness is an increasingly popular strategy to expand life-saving health services to underserved communities. However, community health approaches vary widely across countries and do not always distribute resources evenly across local health systems. We present a harmonized framework, developed through interagency consultation and review, which supports the design of CCM by using a systems approach. To verify that the framework produces results, we also suggest a list of complementary indicators, including nine global metrics, and a menu of 39 country-specific measures. When used by program managers and evaluators, we propose that the framework and indicators can facilitate the design, implementation, and evaluation of community case management. PMID:23136280
2014-01-01
Background: Mindfulness-based cognitive therapy (MBCT) is a cost-effective psychosocial prevention programme that helps people with recurrent depression stay well in the long term. It was singled out in the 2009 National Institute for Health and Clinical Excellence (NICE) Depression Guideline as a key priority for implementation. Despite good evidence and guideline recommendations, its roll-out and accessibility across the UK appears to be limited and inequitably distributed. The study aims to describe the current state of MBCT accessibility and implementation across the UK, develop an explanatory framework of what is hindering and facilitating its progress in different areas, and develop an Implementation Plan and related resources to promote better and more equitable availability and use of MBCT within the UK National Health Service.
Methods/Design: This project is a two-phase qualitative, exploratory and explanatory research study, using an interview survey and in-depth case studies theoretically underpinned by the Promoting Action on Implementation in Health Services (PARIHS) framework. Interviews will be conducted with stakeholders involved in commissioning, managing and implementing MBCT services in each of the four UK countries, and will include areas where MBCT services are being implemented successfully and where implementation is not working well. In-depth case studies will be undertaken on a range of MBCT services to develop a detailed understanding of the barriers and facilitators to implementation. Guided by the study's conceptual framework, data will be synthesized across Phase 1 and Phase 2 to develop a fit-for-purpose implementation plan.
Discussion: Promoting the uptake of evidence-based treatments into routine practice and understanding what influences these processes has the potential to support the adoption and spread of nationally recommended interventions like MBCT. This study could inform a larger scale implementation trial and feed into future implementation of MBCT with other long-term conditions and associated co-morbidities. It could also inform the implementation of interventions that are acceptable and effective, but are not widely accessible or implemented. PMID:24884603
ReSTART: A Novel Framework for Resource-Based Triage in Mass-Casualty Events.
Mills, Alex F; Argon, Nilay T; Ziya, Serhan; Hiestand, Brian; Winslow, James
2014-01-01
Current guidelines for mass-casualty triage do not explicitly use information about resource availability. Even though this limitation has been widely recognized, how it should be addressed remains largely unexplored. The authors present a novel framework developed using operations research methods to account for resource limitations when determining priorities for transportation of critically injured patients. To illustrate how this framework can be used, they also develop two specific example methods, named ReSTART and Simple-ReSTART, both of which extend the widely adopted triage protocol Simple Triage and Rapid Treatment (START) by using a simple calculation to determine priorities based on the relative scarcity of transportation resources. The framework is supported by three techniques from operations research: mathematical analysis, optimization, and discrete-event simulation. The authors' algorithms were developed using mathematical analysis and optimization and then extensively tested using 9,000 discrete-event simulations on three distributions of patient severity (representing low, random, and high acuity). For each incident, the expected number of survivors was calculated under START, ReSTART, and Simple-ReSTART. A web-based decision support tool was constructed to help providers make prioritization decisions in the aftermath of mass-casualty incidents based on ReSTART. In simulations, ReSTART resulted in significantly lower mortality than START regardless of which severity distribution was used (paired t test, p<.01). The mean decrease in critical mortality, the percentage of immediate and delayed patients who die, was 8.5% for the low-acuity distribution (range −2.2% to 21.1%), 9.3% for the random distribution (range −0.2% to 21.2%), and 9.1% for the high-acuity distribution (range −0.7% to 21.1%). Although the critical mortality improvement due to ReSTART was different for each of the three severity distributions, the variation was less than 1 percentage point, indicating that the ReSTART policy is relatively robust to different severity distributions. Taking resource limitations into account in mass-casualty triage has the potential to increase the expected number of survivors. Further validation is required before field implementation; however, the framework proposed here can serve as the foundation for future work in this area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doris, E.
2012-04-01
There is a growing body of qualitative and a limited body of quantitative literature supporting the common assertion that policy drives development of clean energy resources. Recent work in this area indicates that the impact of policy depends on policy type, length of time in place, and economic and social contexts of implementation. This work aims to inform policymakers about the impact of different policy types and to assist in the staging of those policies to maximize individual policy effectiveness and development of the market. To do so, this paper provides a framework for policy development to support the market for distributed photovoltaic systems. Next steps include mathematical validation of the framework and development of specific policy pathways given state economic and resource contexts.
Smart Grid Constraint Violation Management for Balancing and Regulating Purposes
Bhattarai, Bishnu; Kouzelis, Konstantinos; Mendaza, Iker; ...
2017-03-29
The gradual active load penetration in low voltage distribution grids is expected to challenge their network capacity in the near future. Distribution system operators should for this reason resort to either costly grid reinforcements or to demand side management mechanisms. Since demand side management implementation is usually cheaper, it is also the favorable solution. To this end, this article presents a framework for handling grid limit violations, both voltage and current, to ensure a secure and high-quality operation of the distribution grid. This framework consists of two steps, namely a proactive centralized and subsequently a reactive decentralized control scheme. The former is employed to balance the one hour ahead load while the latter aims at regulating the consumption in real time. In both cases, the importance of fair use of electricity demand flexibility is emphasized. Thus, it is demonstrated that this methodology aids in keeping the grid status within preset limits while utilizing flexibility from all flexibility participants.
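A minimal sketch of what the reactive, decentralized step might look like at a single node; the limits and gain below are hypothetical, not the authors' controller:

    def reactive_control(v_pu, flexible_kw, v_min=0.94, v_max=1.06, gain=50.0):
        """Return a demand adjustment (kW) that curtails or boosts flexible
        load in proportion to the local voltage-limit violation."""
        if v_pu < v_min:                                    # undervoltage: shed load
            return -min(flexible_kw, gain * (v_min - v_pu))
        if v_pu > v_max:                                    # overvoltage: absorb surplus
            return min(flexible_kw, gain * (v_pu - v_max))
        return 0.0                                          # within limits: do nothing

The proactive hour-ahead balancing step would run upstream of this, scheduling flexibility so that such local violations rarely occur in the first place.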
The Vehicular Information Space Framework
NASA Astrophysics Data System (ADS)
Prinz, Vivian; Schlichter, Johann; Schweiger, Benno
Vehicular networks are distributed, self-organizing and highly mobile ad hoc networks. They allow for providing drivers with up-to-the-minute information about their environment. Therefore, they are expected to be a decisive future enabler for enhancing driving comfort and safety. This article introduces the Vehicular Information Space framework (VIS). Vehicles running the VIS form a kind of distributed database. It enables them to provide information like existing hazards, parking spaces or traffic densities in a location-aware and fully distributed manner. In addition, vehicles can retrieve, modify and delete these information items. The underlying algorithm is based on features derived from existing structured Peer-to-Peer algorithms, extended to suit the specific characteristics of highly mobile ad hoc networks. We present, implement and simulate the VIS using a motorway and an urban traffic environment. Simulation studies of VIS message occurrence show that the VIS incurs reasonable traffic overhead. Also, overall VIS message traffic is independent of the number of information items provided.
Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
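INDDGO itself handles general graphs via tree decompositions; as a rough illustration of the dynamic-programming side only, the Python sketch below solves maximum weighted independent set in the special case where the graph is itself a tree (i.e., a width-1 decomposition):

    import networkx as nx

    def mwis_tree(T, root):
        """Post-order DP: for each vertex keep the best subtree value with the
        vertex taken (its children must be skipped) or skipped."""
        parent = {root: None}
        for u, v in nx.dfs_edges(T, root):
            parent[v] = u
        take, skip = {}, {}
        for v in nx.dfs_postorder_nodes(T, root):
            children = [c for c in T.neighbors(v) if parent.get(c) == v]
            take[v] = T.nodes[v]["weight"] + sum(skip[c] for c in children)
            skip[v] = sum(max(take[c], skip[c]) for c in children)
        return max(take[root], skip[root])

    T = nx.path_graph(4)
    nx.set_node_attributes(T, {0: 3, 1: 5, 2: 1, 3: 4}, "weight")
    print(mwis_tree(T, 0))   # 9: take vertices 1 and 3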
Exclusive photoproduction of vector mesons in proton-lead ultraperipheral collisions at the LHC
NASA Astrophysics Data System (ADS)
Xie, Ya-Ping; Chen, Xurong
2018-02-01
Rapidity distributions of vector mesons are computed in the dipole model for proton-lead ultraperipheral collisions (UPCs) at the CERN Large Hadron Collider (LHC). The dipole model framework is implemented in the calculation of cross sections for the photon-hadron interaction. The bCGC model and Boosted Gaussian wave functions are employed in the scattering amplitude. We obtain predictions for the rapidity distributions of J/ψ mesons in proton-lead ultraperipheral collisions. The predictions give a good description of the experimental data from ALICE. The rapidity distributions of ϕ, ω and ψ(2S) mesons in proton-lead ultraperipheral collisions are also presented in this paper.
A distributed, hierarchical and recurrent framework for reward-based choice
Hunt, Laurence T.; Hayden, Benjamin Y.
2017-01-01
Many accounts of reward-based choice argue for distinct component processes that are serial and functionally localized. In this article, we argue for an alternative viewpoint, in which choices emerge from repeated computations that are distributed across many brain regions. We emphasize how several features of neuroanatomy may support the implementation of choice, including mutual inhibition in recurrent neural networks and the hierarchical organisation of timescales for information processing across the cortex. This account also suggests that certain correlates of value may be emergent rather than represented explicitly in the brain. PMID:28209978
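A minimal simulation of the mutual-inhibition motif the authors highlight: two leaky accumulators racing to threshold, with choice emerging from the recurrent dynamics rather than an explicit comparator. This is a generic sketch of the motif, not the authors' model, and all parameters are illustrative:

    import numpy as np

    def choose(v_a, v_b, w_inh=0.4, noise=0.2, dt=1e-3, t_max=2.0, thresh=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x_a = x_b = 0.0
        t = 0.0
        while t < t_max:
            x_a += (v_a - x_a - w_inh * x_b) * dt + noise * np.sqrt(dt) * rng.standard_normal()
            x_b += (v_b - x_b - w_inh * x_a) * dt + noise * np.sqrt(dt) * rng.standard_normal()
            x_a, x_b = max(x_a, 0.0), max(x_b, 0.0)     # firing rates stay non-negative
            if max(x_a, x_b) >= thresh:
                return ("A" if x_a >= x_b else "B"), t  # choice and reaction time
            t += dt
        return None, t

    print(choose(1.2, 1.0))   # stronger input A usually, but not always, wins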
NASA Astrophysics Data System (ADS)
Schmitz, Oliver; van der Perk, Marcel; Karssenberg, Derek; Häring, Tim; Jene, Bernhard
2017-04-01
Modelling pesticide transport through the soil and estimating its leaching to groundwater are essential for an appropriate environmental risk assessment. Pesticide leaching models commonly used in regulatory processes often lack the capability of providing a comprehensive spatial view, as they are implemented as non-spatial point models or only use a few combinations of representative soils to simulate specific plots. Furthermore, their handling of spatial input and output data and interaction with available Geographical Information Systems tools is limited. Therefore, executing several simulation scenarios to assess potential leaching at national or continental scale and high resolution is rather inefficient and prohibits the straightforward identification of areas prone to leaching. We present a new pesticide leaching model component of the PyCatch framework developed in PCRaster Python, an environmental modelling framework tailored to the development of spatio-temporal models (http://www.pcraster.eu). To ensure a feasible computational runtime of large-scale models, we implemented an elementary field capacity approach to model soil water. Currently implemented processes are evapotranspiration, advection, dispersion, sorption, degradation and metabolite transformation. Relevant additional processes not yet implemented, such as surface runoff, snowmelt, erosion or other lateral flows, can be integrated using components already available in PyCatch. A preliminary version of the model executes a 20-year simulation of soil water processes for Germany (20 soil layers, 1 km² spatial resolution, and daily timestep) within half a day using a single CPU. A comparison of the soil moisture and outflow obtained from the PCRaster implementation and PELMO, a commonly used pesticide leaching model, resulted in an R2 of 0.98 for the FOCUS Hamburg scenario. We will further discuss the validation of the pesticide transport processes and show case studies applied to European countries.
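The elementary field-capacity approach can be sketched in a few lines of plain Python; this is a conceptual stand-in, not PyCatch/PCRaster code, and in the real model every variable is a raster map:

    def soil_column_step(storage, field_cap, infiltration, et_demand):
        """One daily step of a field-capacity cascade: water beyond field
        capacity percolates to the layer below; ET is drawn from the topsoil."""
        water = infiltration
        for i, cap in enumerate(field_cap):
            storage[i] += water
            water = max(storage[i] - cap, 0.0)    # excess drains downward
            storage[i] -= water
        storage[0] -= min(et_demand, storage[0])  # evapotranspiration from top layer
        return storage, water                     # 'water' is now leaching below the profile

    storage, leached = soil_column_step([20.0] * 20, [25.0] * 20, 8.0, 3.0)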
System approach to distributed sensor management
NASA Astrophysics Data System (ADS)
Mayott, Gregory; Miller, Gordon; Harrell, John; Hepp, Jared; Self, Mid
2010-04-01
Since 2003, the US Army's RDECOM CERDEC Night Vision Electronic Sensor Directorate (NVESD) has been developing a distributed Sensor Management System (SMS) that utilizes a framework which demonstrates application layer, net-centric sensor management. The core principles of the design support distributed and dynamic discovery of sensing devices and processes through a multi-layered implementation. This results in a sensor management layer that acts as a System with defined interfaces for which the characteristics, parameters, and behaviors can be described. Within the framework, the definition of a protocol is required to establish the rules for how distributed sensors should operate. The protocol defines the behaviors, capabilities, and message structures needed to operate within the functional design boundaries. The protocol definition addresses the requirements for a device (sensors or processes) to dynamically join or leave a sensor network, dynamically describe device control and data capabilities, and allow dynamic addressing of publish and subscribe functionality. The message structure is a multi-tiered definition that identifies standard, extended, and payload representations that are specifically designed to accommodate the need for standard representations of common functions, while supporting the need for feature-based functions that are typically vendor specific. The dynamic qualities of the protocol give a user GUI application the flexibility to map widget-level controls to each device based on reported capabilities in real time. The SMS approach is designed to accommodate scalability and flexibility within a defined architecture. The distributed sensor management framework and its application to a tactical sensor network will be described in this paper.
EAGLE: 'EAGLE Is an Algorithmic Graph Library for Exploration'
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-01-16
The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible schema-free data interchange on the Semantic Web. Today data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. Today there are no tools to conduct "graph mining" on RDF standard data sets. We address that need through implementation of popular iterative graph mining algorithms (triangle count, connected component analysis, degree distribution, diversity degree, PageRank, etc.). We implement these algorithms as SPARQL queries, wrapped within Python scripts, and call our software tool EAGLE. In RDF style, EAGLE stands for "EAGLE 'Is an' algorithmic graph library for exploration." EAGLE is like 'MATLAB' for 'Linked Data.'
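The pattern, a graph-mining primitive expressed as a SPARQL aggregate wrapped in Python, looks roughly like the degree-distribution sketch below. It is run here on a local rdflib graph purely for illustration; EAGLE targets full triple stores, and the file name is hypothetical:

    from rdflib import Graph

    g = Graph().parse("dataset.ttl")   # hypothetical RDF data set
    q = """
    SELECT ?degree (COUNT(?s) AS ?n) WHERE {
      { SELECT ?s (COUNT(?o) AS ?degree)
        WHERE { ?s ?p ?o } GROUP BY ?s }
    }
    GROUP BY ?degree ORDER BY ?degree
    """
    for row in g.query(q):             # out-degree distribution of the RDF graph
        print(row.degree, row.n)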
An Automated Design Framework for Multicellular Recombinase Logic.
Guiziou, Sarah; Ulliana, Federico; Moreau, Violaine; Leclere, Michel; Bonnet, Jerome
2018-05-18
Tools to systematically reprogram cellular behavior are crucial to address pressing challenges in manufacturing, environment, or healthcare. Recombinases can very efficiently encode Boolean and history-dependent logic in many species, yet current designs are performed on a case-by-case basis, limiting their scalability and requiring time-consuming optimization. Here we present an automated workflow for designing recombinase logic devices executing Boolean functions. Our theoretical framework uses a reduced library of computational devices distributed into different cellular subpopulations, which are then composed in various manners to implement all desired logic functions at the multicellular level. Our design platform called CALIN (Composable Asynchronous Logic using Integrase Networks) is broadly accessible via a web server, taking truth tables as inputs and providing corresponding DNA designs and sequences as outputs (available at http://synbio.cbs.cnrs.fr/calin ). We anticipate that this automated design workflow will streamline the implementation of Boolean functions in many organisms and for various applications.
Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?
Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend
2011-10-11
In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well suited to parallel implementation on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
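A minimal CPU sketch of one symmetric multi-proposal, waste-recycling variant; this is illustrative only, as the paper's CUDA implementations, zeolite force fields, and displacement schemes are far more involved:

    import numpy as np

    def wrmc_mean(energy, x0, beta=1.0, steps=20000, k_trials=8, width=0.5, seed=1):
        """Estimate <x> while recycling 'wasted' trials: every trial state
        contributes to the average, weighted by its selection probability."""
        rng = np.random.default_rng(seed)
        x, total = x0, 0.0
        for _ in range(steps):
            states = np.append(x + width * rng.uniform(-1, 1, k_trials), x)
            p = np.exp(-beta * energy(states))
            p /= p.sum()
            total += np.dot(p, states)        # waste-recycled estimator uses all states
            x = rng.choice(states, p=p)       # Barker-like multi-proposal selection
        return total / steps

    print(wrmc_mean(lambda x: 0.5 * x ** 2, 0.0))   # harmonic well: <x> is approx. 0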
NASA Astrophysics Data System (ADS)
Olasz, A.; Nguyen Thai, B.; Kristóf, D.
2016-06-01
Within recent years, several new approaches and solutions for Big Data processing have been developed. The geospatial world still lacks well-established distributed processing solutions tailored to the amount and heterogeneity of geodata, especially when fast data processing is a must. The goal of such systems is to improve processing time by distributing data transparently across processing (and/or storage) nodes. These types of methodology are based on the concept of divide and conquer. Nevertheless, in the context of geospatial processing, most of the distributed computing frameworks have important limitations regarding both data distribution and data partitioning methods. Moreover, flexibility and extendability for handling various data types (often in binary formats) are also strongly required. This paper presents a concept for tiling, stitching and processing of big geospatial data. The system is based on the IQLib concept (https://github.com/posseidon/IQLib/) developed in the frame of the IQmulus EU FP7 research and development project (http://www.iqmulus.eu). The data distribution framework has no limitations on programming language environment and can execute scripts (and workflows) written in different development frameworks (e.g. Python, R or C#). It is capable of processing raster, vector and point cloud data. The above-mentioned prototype is presented through a case study dealing with country-wide processing of raster imagery. Further investigations on algorithmic and implementation details are in focus for the near future.
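In its simplest raster form, the tiling/stitching idea reduces to something like this numpy sketch; it is conceptual only, since IQLib also has to handle georeferencing, tile overlaps and heterogeneous formats:

    import numpy as np

    def tiles(raster, size):
        """Split a 2-D array into blocks that can be distributed to workers."""
        for i in range(0, raster.shape[0], size):
            for j in range(0, raster.shape[1], size):
                yield (i, j), raster[i:i + size, j:j + size]

    def stitch(pieces, shape):
        """Reassemble processed blocks into a full raster."""
        out = np.empty(shape)
        for (i, j), block in pieces:
            out[i:i + block.shape[0], j:j + block.shape[1]] = block
        return out

    r = np.random.rand(1000, 1200)
    processed = (((i, j), np.sqrt(b)) for (i, j), b in tiles(r, 256))  # any per-tile job
    assert np.allclose(stitch(processed, r.shape), np.sqrt(r))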
NASA Astrophysics Data System (ADS)
Vittal, H.; Singh, Jitendra; Kumar, Pankaj; Karmakar, Subhankar
2015-06-01
In watershed management, flood frequency analysis (FFA) is performed to quantify the risk of flooding at different spatial locations and also to provide guidelines for determining the design periods of flood control structures. Traditional FFA was extensively performed under a univariate scenario for both at-site and regional estimation of return periods. However, due to the inherent mutual dependence of the flood variables or characteristics [i.e., peak flow (P), flood volume (V) and flood duration (D), which are random in nature], analysis has been further extended to the multivariate scenario, with some restrictive assumptions. To overcome the assumption of the same family of marginal density function for all flood variables, the concept of the copula has been introduced. Although the advancement from univariate to multivariate analyses drew formidable attention from the FFA research community, the basic limitation was that the analyses were performed with only parametric families of distributions. The aim of the current study is to emphasize the importance of nonparametric approaches in the field of multivariate FFA; however, the nonparametric distribution may not always be a good fit or capable of replacing well-implemented multivariate parametric and multivariate copula-based applications. Nevertheless, the potential of obtaining the best fit using nonparametric distributions might be improved because such distributions reproduce the sample's characteristics, resulting in more accurate estimations of the multivariate return period. Hence, the current study shows the importance of conjugating the multivariate nonparametric approach with multivariate parametric and copula-based approaches, thereby resulting in a comprehensive framework for complete at-site FFA. Although the proposed framework is designed for at-site FFA, this approach can also be applied to regional FFA because regional estimations ideally include at-site estimations. The framework is based on the following steps: (i) comprehensive trend analysis to assess nonstationarity in the observed data; (ii) selection of the best-fit univariate marginal distribution from a comprehensive set of parametric and nonparametric distributions for the flood variables; (iii) multivariate frequency analyses with parametric, copula-based and nonparametric approaches; and (iv) estimation of joint and various conditional return periods. The proposed framework for frequency analysis is demonstrated using 110 years of observed data from the Allegheny River at Salamanca, New York, USA. The results show that for both the univariate and multivariate cases, the nonparametric Gaussian kernel provides the best estimate. Further, we perform FFA for twenty major rivers over the continental USA, which shows that for seven rivers all the flood variables follow the nonparametric Gaussian kernel, whereas for the other rivers parametric distributions provide the best fit for one or two flood variables. In summary, the results show that the nonparametric method cannot substitute for the parametric and copula-based approaches, but should be considered during any at-site FFA to provide the broadest choice for best estimation of the flood return periods.
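For the univariate piece, the nonparametric estimate the study favors can be sketched with scipy's Gaussian kernel; the file name and series below are hypothetical:

    import numpy as np
    from scipy.stats import gaussian_kde

    peaks = np.loadtxt("annual_peak_flows.txt")     # hypothetical annual-maximum series
    kde = gaussian_kde(peaks)                       # nonparametric marginal for peak flow
    q = np.linspace(peaks.min(), peaks.max() * 1.5, 2000)
    cdf = np.array([kde.integrate_box_1d(-np.inf, v) for v in q])
    T = 1.0 / (1.0 - cdf)                           # return period T(q) = 1 / (1 - F(q))
    print(np.interp(100.0, T, q))                   # ~100-year design flow estimate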
Vickers, A; Bali, S; Baxter, A; Bruce, G; England, J; Heafield, R; Langford, R; Makin, R; Power, I; Trim, J
2009-10-01
There has been considerable investment in efforts to improve postoperative pain management, including the introduction of acute pain teams. There have also been a number of guidelines published on postoperative pain management and there is widespread agreement on how pain should be practically managed. Despite these advances, there is no apparent improvement in the number of patients experiencing moderately severe or extreme pain after surgery. This highlights significant scope for improvement in acute postoperative pain management. In January 2009, a multidisciplinary UK expert panel met to define and agree a practical framework to encourage implementation of the numerous guidelines and fundamentals of pain management at a local level. The panel recognised that to do this, there was a need to organise the information and guidelines into a simplified, accessible and easy-to-implement system based on their practical clinical experience. Given the volume of literature in this area, the Chair recommended that key international guidelines from professional bodies should be distributed and then reviewed during the meeting to form the basis of the framework. Consensus was reached by unanimous agreement of all ten participants. This report provides a framework for the key themes, including consensus recommendations based upon practical experience agreed during the meeting, with the aim of consolidating the key guidelines to provide a fundamental framework which is simple to teach and implement in all areas. Key priorities that emerged were: Responsibility, Anticipation, Discussion, Assessment and Response. This formed the basis of RADAR, a novel framework to help pain specialists educate the wider care team on understanding and prioritising the management of acute pain. Acute postoperative pain can be more effectively managed if it is prioritised and anticipated by a well-informed care team who are educated with regard to appropriate analgesic options and understand what the long-term benefits of pain relief are. The principles of RADAR provide structure to help with training and implementation of good practice, to achieve effective postoperative pain management.
Trebble, Timothy M; Paul, Maureen; Hockey, Peter M; Heyworth, Nicola; Humphrey, Rachael; Powell, Timothy; Clarke, Nicholas
2015-03-01
Improving the quality and activity of clinicians' practice improves patient care. Performance-related human resource management (HRM) is an established approach to improving individual practice but with limited use among clinicians. A framework for performance-related HRM was developed from successful practice in non-healthcare organisations centred on distributive leadership and locally provided, validated and interpreted performance measurement. This study evaluated the response of medical and non-clinical managers to its implementation into a large secondary healthcare organisation. A semistructured qualitative questionnaire was developed from themes identified during framework implementation and included attitudes to previous approaches to measuring doctors' performance, and the structure and response to implementation of the performance-related HRM framework. Responses were analysed through a process of data summarising and categorising. A total of 29, from an invited cohort of 31, medical and non-clinical managers from departmental to executive level were interviewed. Three themes were identified: (1) previous systems of managing clinical performance were considered to be ineffective due to insufficient empowerment of medical managers and poor quality of available performance data; (2) the implemented framework was considered to address these needs and was positively received by medical and non-clinical managers; (3) introduction of performance-related HRM required the involvement of the whole organisation to executive level and inclusion within organisational strategy, structure and training. This study suggests that a performance-related HRM framework may facilitate the management of clinical performance in secondary healthcare, but is dependent on the design and methods of application used. Such approaches contrast with those currently proposed for clinicians in secondary healthcare in the UK and suggest that alternative strategies should be considered. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Mahjani, Behrang; Toor, Salman; Nettelblad, Carl; Holmgren, Sverker
2017-01-01
In quantitative trait locus (QTL) mapping, the significance of putative QTL is often determined using permutation testing. The computational needs to calculate the significance level are immense; 10^4 up to 10^8 or even more permutations may be needed. We have previously introduced the PruneDIRECT algorithm for multiple QTL scan with epistatic interactions. This algorithm has specific strengths for permutation testing. Here, we present a flexible, parallel computing framework for identifying multiple interacting QTL using the PruneDIRECT algorithm, which uses the map-reduce model as implemented in Hadoop. The framework is implemented in R, a widely used software tool among geneticists. This enables users to rearrange algorithmic steps to adapt genetic models, search algorithms, and parallelization steps to their needs in a flexible way. Our work underlines the maturity of accessing distributed parallel computing for computationally demanding bioinformatics applications through building workflows within existing scientific environments. We investigate the PruneDIRECT algorithm, comparing its performance to exhaustive search and the DIRECT algorithm using our framework on a public cloud resource. We find that PruneDIRECT is vastly superior for permutation testing, performing 2×10^5 permutations for a 2D QTL problem in 15 hours using 100 cloud processes. We show that our framework scales out almost linearly for a 3D QTL search.
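The permutation-testing pattern itself is simple to parallelize. The sketch below is a shared-memory Python analogue of the map-reduce layout; their framework does this in R on Hadoop, and the scan statistic here is a deliberately crude stand-in for a real QTL scan:

    import numpy as np
    from multiprocessing import Pool

    def max_scan_stat(args):
        y, X, seed = args
        yp = np.random.default_rng(seed).permutation(y)      # permute phenotype
        r = (X - X.mean(0)).T @ (yp - yp.mean()) / (len(y) * X.std(0) * yp.std())
        return (r ** 2).max()                                # strongest locus association

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        y, X = rng.standard_normal(200), rng.standard_normal((200, 500))
        with Pool() as pool:                                 # 'map' over permutations
            null = pool.map(max_scan_stat, [(y, X, s) for s in range(10_000)])
        print(np.quantile(null, 0.95))                       # 'reduce': 5% threshold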
A Bayesian alternative for multi-objective ecohydrological model specification
NASA Astrophysics Data System (ADS)
Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori
2018-01-01
Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. The methodology implemented here provides insight into the usefulness of multiobjective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior distributions in such approaches.
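The core of such a calibration is a joint likelihood whose error-model parameters take over the role of the ad-hoc Pareto weights. A minimal sketch (not the paper's exact formulation) that plugs into any Metropolis-style sampler:

    import numpy as np

    def log_likelihood(q_sim, lai_sim, q_obs, lai_obs, sigma_q, sigma_lai):
        """Joint Gaussian log-likelihood over the two objectives; priors placed
        on sigma_q and sigma_lai shift the emphasis between streamflow and LAI."""
        ll_q = -0.5 * np.sum(((q_obs - q_sim) / sigma_q) ** 2) - q_obs.size * np.log(sigma_q)
        ll_l = -0.5 * np.sum(((lai_obs - lai_sim) / sigma_lai) ** 2) - lai_obs.size * np.log(sigma_lai)
        return ll_q + ll_l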
NASA Technical Reports Server (NTRS)
Aretskin-Hariton, Eliot D.; Zinnecker, Alicia Mae; Culley, Dennis E.
2014-01-01
Distributed Engine Control (DEC) is an enabling technology that has the potential to advance the state-of-the-art in gas turbine engine control. To analyze the capabilities that DEC offers, a Hardware-In-the-Loop (HIL) test bed is being developed at NASA Glenn Research Center. This test bed will support a systems-level analysis of control capabilities in closed-loop engine simulations. The structure of the HIL emulates a virtual test cell by implementing the operator functions, control system, and engine on three separate computers. This implementation increases the flexibility and extensibility of the HIL. Here, a method is discussed for implementing these interfaces by connecting the three platforms over a dedicated Local Area Network (LAN). This approach is verified using the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k), which is typically implemented on one computer. There are marginal differences between the results from simulation of the typical and the three-computer implementation. Additional analysis of the LAN network, including characterization of network load, packet drop, and latency, is presented. The three-computer setup supports the incorporation of complex control models and proprietary engine models into the HIL framework.
A general modeling framework for describing spatially structured population dynamics
Sample, Christine; Fryxell, John; Bieri, Joanna; Federico, Paula; Earl, Julia; Wiederholt, Ruscena; Mattsson, Brady; Flockhart, Tyler; Nicol, Sam; Diffendorfer, James E.; Thogmartin, Wayne E.; Erickson, Richard A.; Norris, D. Ryan
2017-01-01
Variation in movement across time and space fundamentally shapes the abundance and distribution of populations. Although a variety of approaches model structured population dynamics, they are limited to specific types of spatially structured populations and lack a unifying framework. Here, we propose a unified network-based framework sufficiently novel in its flexibility to capture a wide variety of spatiotemporal processes including metapopulations and a range of migratory patterns. It can accommodate different kinds of age structures, forms of population growth, dispersal, nomadism and migration, and alternative life-history strategies. Our objective was to link three general elements common to all spatially structured populations (space, time and movement) under a single mathematical framework. To do this, we adopt a network modeling approach. The spatial structure of a population is represented by a weighted and directed network. Each node and each edge has a set of attributes which vary through time. The dynamics of our network-based population is modeled with discrete time steps. Using both theoretical and real-world examples, we show how common elements recur across species with disparate movement strategies and how they can be combined under a unified mathematical framework. We illustrate how metapopulations, various migratory patterns, and nomadism can be represented with this modeling approach. We also apply our network-based framework to four organisms spanning a wide range of life histories, movement patterns, and carrying capacities. General computer code to implement our framework is provided, which can be applied to almost any spatially structured population. This framework contributes to our theoretical understanding of population dynamics and has practical management applications, including understanding the impact of perturbations on population size, distribution, and movement patterns. By working within a common framework, there is less chance that comparative analyses are colored by model details rather than general principles.
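General code is provided by the authors; independently of it, the skeleton of such a network model can be sketched in a few lines of Python with networkx. The node and edge attributes below are hypothetical:

    import networkx as nx

    G = nx.DiGraph()
    G.add_node("breeding", N=1000.0, r=1.6, K=5000.0)
    G.add_node("wintering", N=0.0, r=1.0, K=8000.0)
    G.add_edge("breeding", "wintering", p=0.9, survival=0.8)   # autumn migration
    G.add_edge("wintering", "breeding", p=0.9, survival=0.7)   # spring migration

    def step(G):
        """One discrete time step: movement along weighted, directed edges,
        then density-dependent (Beverton-Holt) growth at every node."""
        delta = {v: 0.0 for v in G}
        for u, v, d in G.edges(data=True):
            leaving = G.nodes[u]["N"] * d["p"]
            delta[u] -= leaving
            delta[v] += leaving * d["survival"]
        for v in G:
            n = G.nodes[v]["N"] + delta[v]
            G.nodes[v]["N"] = G.nodes[v]["r"] * n / (1 + n / G.nodes[v]["K"])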
Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy
NASA Astrophysics Data System (ADS)
Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.
2018-01-01
This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to TPS calculation by gamma analysis using the same criteria. Dose profiles from IDC calculation in a homogeneous water phantom agree within 2.3% of the global max dose or 1 mm distance to agreement with measurements for all except the smallest field size. Comparing the film measurement to calculated dose, 99.9% of all voxels pass gamma analysis; comparing dose calculated by the IDC framework to TPS-calculated dose for the clinical prostate plan yields a 99.0% passing rate. IDC-calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case, near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
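The 2%/2 mm gamma criterion used above reduces, in a simplified 1-D global form, to the following numpy sketch; clinical tools add interpolation and operate on 2-D/3-D dose grids:

    import numpy as np

    def gamma_pass_rate(x, d_ref, d_eval, dta_mm=2.0, dd_frac=0.02):
        """For each reference point, take the minimum combined dose-difference /
        distance-to-agreement metric over all evaluated points (global norm)."""
        dist = (x[None, :] - x[:, None]) / dta_mm
        dose = (d_eval[None, :] - d_ref[:, None]) / (dd_frac * d_ref.max())
        gamma = np.sqrt(dist ** 2 + dose ** 2).min(axis=1)
        return (gamma <= 1.0).mean()       # fraction of points passing 2%/2 mm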
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Arnold, Steven M.; Collier, Craig S.
2009-01-01
This preliminary report demonstrates the capabilities of the recently developed software implementation that links the Generalized Method of Cells to explicit finite element analysis by extending a previous development which tied the generalized method of cells to implicit finite elements. The multiscale framework, which uses explicit finite elements at the global-scale and the generalized method of cells at the microscale is detailed. This implementation is suitable for both dynamic mechanics problems and static problems exhibiting drastic and sudden changes in material properties, which often encounter convergence issues with commercial implicit solvers. Progressive failure analysis of stiffened and un-stiffened fiber-reinforced laminates subjected to normal blast pressure loads was performed and is used to demonstrate the capabilities of this framework. The focus of this report is to document the development of the software implementation; thus, no comparison between the results of the models and experimental data is drawn. However, the validity of the results are assessed qualitatively through the observation of failure paths, stress contours, and the distribution of system energies.
ERIC Educational Resources Information Center
Ebrahimi, Nabi A.; Eskandari, Zahra; Rahimi, Ali
2013-01-01
This study aims to explore the effects of implementing a CALL framework on the students' perceptions of their communication classroom environments. The What Is Happening In This Class? (WIHIC) questionnaire was distributed twice among 34 (F=14 and M=20) Iranian EFL students, the first time after a ten-session-long regular no-tech communication…
Strategy for Intelligence, Surveillance, and Reconnaissance
2013-02-14
Marine Corps School of Advanced Warfighting. He was the commander of the 13th Intelligence Squadron (Distributed Ground System – 2) and served in the...military campaigns and major operations. The root cause of these difficulties is adherence to a centralized, Cold War collection management doctrine...current collection management doctrine creates for implementing ISR strategy. It will then propose an alternative framework for ISR strategy using a
Hemingway, Steve; White, Jacqueline; Baxter, Hazel; Smith, George; Turner, James; McCann, Terence
2012-10-01
Medicine administration is a high risk activity that most nurses undertake frequently. In this paper, the views of registered mental health nurses and final year student nurses are evaluated about the usefulness of the Medicines with Respect Assessment of the Administration of Medicines Competency Framework. A questionnaire using 22 items with closed and open response questions was distributed to 827 practising mental health nurses and 44 final year mental health nursing students. This article presents a content analysis of written replies to the open response questions. Four overlapping themes were identified in response to the open questions posed in the survey: (1) reasons for undertaking the Medicines with Respect Framework; (2) positive aspects; (3) negative aspects; and (4) service user benefits.
Research on retailer data clustering algorithm based on Spark
NASA Astrophysics Data System (ADS)
Huang, Qiuman; Zhou, Feng
2017-03-01
Big data analysis is a hot topic in the IT field now. Spark is a high-reliability and high-performance distributed parallel computing framework for big data sets. The k-means algorithm is one of the classical partitioning methods in cluster analysis. In this paper, we study the k-means clustering algorithm on Spark. Firstly, the principle of the algorithm is analyzed, and then clustering analysis is carried out on supermarket customers through experiments to find out the different shopping patterns. At the same time, this paper presents the parallelization of the k-means algorithm on the Spark distributed computing framework, and gives the concrete design and implementation scheme. This paper uses two years of sales data from a supermarket to validate the proposed clustering algorithm and achieve the goal of segmenting customers, and then analyzes the clustering results to help the enterprise take different marketing strategies for different customer groups to improve sales performance.
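With Spark's built-in MLlib, the clustering step of such a pipeline is only a few lines of PySpark; the file name and feature columns below are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.clustering import KMeans

    spark = SparkSession.builder.appName("retailer-clustering").getOrCreate()
    df = spark.read.csv("supermarket_sales.csv", header=True, inferSchema=True)
    feats = VectorAssembler(inputCols=["recency", "frequency", "monetary"],
                            outputCol="features").transform(df)
    model = KMeans(k=5, seed=42).fit(feats)                        # distributed k-means fit
    model.transform(feats).groupBy("prediction").count().show()    # cluster sizes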
Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N.; Meng, Fande
2015-01-01
Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, quantifying the benefits of BMP implementation for receiving water quality at multiple spatial scales is an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., a Phosphorus (P) index), model simulation techniques (Hydrological Simulation Program–FORTRAN), and a BMP placement tool at various scales to identify the optimal location for implementing multiple BMPs and estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed to evaluate the effectiveness of BMPs, strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a "best approach" depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in the Chaohe River Watershed located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p < 0.001) decreased P loss from the watershed. Remedial strategies where BMPs were targeted to areas of high risk of P loss decreased P loads compared with strategies where BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement in particular watersheds is the most cost-effective scenario to decrease P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint source controls across a range of scales from field to farm, to watersheds, to regions. Further, model estimates showed targeting at multiple scales is necessary to optimize program efficiency. The integrated model approach described here, which selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds. PMID:26313561
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2018-04-05
The concordance correlation coefficient (CCC) is a widely used scaled index in the study of agreement. In this article, we propose estimating the CCC by a unified Bayesian framework that can (1) accommodate symmetric or asymmetric and light- or heavy-tailed data; (2) select model from several candidates; and (3) address other issues frequently encountered in practice such as confounding covariates and missing data. The performance of the proposal was studied and demonstrated using simulated as well as real-life biomarker data from a clinical study of an insomnia drug. The implementation of the proposal is accessible through a package in the Comprehensive R Archive Network.
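For reference, the sample (non-Bayesian) CCC that the proposed framework generalizes is a one-liner:

    import numpy as np

    def ccc(x, y):
        """Lin's concordance correlation coefficient, sample version."""
        sxy = np.mean((x - x.mean()) * (y - y.mean()))
        return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)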
Legrand, D
2008-11-01
The Alps-Mediterranean division of the French blood establishment (EFS Alpes-Mediterranée) has implemented a risk management program. Within this framework, the labile blood product distribution process was assessed to identify critical steps. Subsequently, safety measures were instituted including computer-assisted decision support, detailed written instructions and control checks at each step. Failure of these measures to prevent an incident underlines the vulnerability of the process to the human factor. Indeed root cause analysis showed that the incident was due to underestimation of the danger by one individual. Elimination of this type of risk will require continuous training, testing and updating of personnel. Identification and reporting of nonconformities will allow personnel at all levels (local, regional, and national) to share lessons and implement appropriate risk mitigation strategies.
NASA Astrophysics Data System (ADS)
Hamza, Karim; Piqueras, Jesús; Wickman, Per-Olof; Angelin, Marcus
2017-06-01
We present analyses of teacher professional growth during collaboration between science teachers and science education researchers, with special focus on how the differential assumption of responsibility between teachers and researchers affected the growth processes. The collaboration centered on a new conceptual framework introduced by the researchers, which aimed at empowering teachers to plan teaching in accordance with perceived purposes. Seven joint planning meetings between teachers and researchers were analyzed, both quantitatively concerning the extent to which the introduced framework became part of the discussions and qualitatively through the interconnected model of teacher professional growth. The collaboration went through three distinct phases characterized by how and the extent to which the teachers made use of the new framework. The change sequences identified in relation to each phase show that teacher recognition of salient outcomes from the framework was important for professional growth to occur. Moreover, our data suggest that this recognition may have been facilitated because the researchers, in initial phases of the collaboration, took increased responsibility for the implementation of the new framework. We conclude that although this differential assumption of responsibility may result in unequal distribution of power between teachers and researchers, it may at the same time mean more equal distribution of concrete work required as well as the inevitable risks associated with pedagogical innovation and introduction of research-based knowledge into science teachers' practice.
Implementing Value-Based Payment Reform: A Conceptual Framework and Case Examples.
Conrad, Douglas A; Vaughn, Matthew; Grembowski, David; Marcus-Smith, Miriam
2016-08-01
This article develops a conceptual framework for implementation of value-based payment (VBP) reform and then draws on that framework to systematically examine six distinct multi-stakeholder coalition VBP initiatives in three different regions of the United States. The VBP initiatives deploy the following payment models: reference pricing, "shadow" primary care capitation, bundled payment, pay for performance, shared savings within accountable care organizations, and global payment. The conceptual framework synthesizes prior models of VBP implementation. It describes how context, project objectives, payment and care delivery strategies, and the barriers and facilitators to translating strategy into implementation affect VBP implementation and value for patients. We next apply the framework to six case examples of implementation, and conclude by discussing the implications of the case examples and the conceptual framework for future practice and research. © The Author(s) 2015.
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.
NASA Astrophysics Data System (ADS)
Zhang, Zhong
In this work, motivated by the need to coordinate transmission maintenance scheduling among a multiplicity of self-interested entities in restructured power industry, a distributed decision support framework based on multiagent negotiation systems (MANS) is developed. An innovative risk-based transmission maintenance optimization procedure is introduced. Several models for linking condition monitoring information to the equipment's instantaneous failure probability are presented, which enable quantitative evaluation of the effectiveness of maintenance activities in terms of system cumulative risk reduction. Methodologies of statistical processing, equipment deterioration evaluation and time-dependent failure probability calculation are also described. A novel framework capable of facilitating distributed decision-making through multiagent negotiation is developed. A multiagent negotiation model is developed and illustrated that accounts for uncertainty and enables social rationality. Some issues of multiagent negotiation convergence and scalability are discussed. The relationships between agent-based negotiation and auction systems are also identified. A four-step MAS design methodology for constructing multiagent systems for power system applications is presented. A generic multiagent negotiation system, capable of inter-agent communication and distributed decision support through inter-agent negotiations, is implemented. A multiagent system framework for facilitating the automated integration of condition monitoring information and maintenance scheduling for power transformers is developed. Simulations of multiagent negotiation-based maintenance scheduling among several independent utilities are provided. It is shown to be a viable alternative solution paradigm to the traditional centralized optimization approach in today's deregulated environment. This multiagent system framework not only facilitates the decision-making among competing power system entities, but also provides a tool to use in studying competitive industry relative to monopolistic industry.
Mohammed, Emad A; Far, Behrouz H; Naugler, Christopher
2014-01-01
The emergence of massive datasets in a clinical setting presents both challenges and opportunities in data storage and analysis. This so-called "big data" challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital that big data solutions be multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g. grid computing and graphical processing unit (GPU) computing), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing by replicating the computing tasks and cloning the data chunks on different computing nodes across the computing cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. The paper concludes by summarizing the potential usage of the MapReduce programming framework and Hadoop platform to process huge volumes of clinical data in medical health informatics related fields.
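In a Hadoop Streaming setting, the Map and Reduce tasks referred to above are just two small scripts reading stdin and writing tab-separated key-value pairs; the field layout here is hypothetical:

    # mapper.py -- emit (key, 1) for each record of interest
    import sys
    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        print(f"{fields[2]}\t1")                # e.g. a diagnosis code in column 3

    # reducer.py -- Hadoop sorts by key, so counts can be summed per group
    import sys
    from itertools import groupby
    pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
    for key, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{key}\t{sum(int(v) for _, v in group)}")

The two files would be passed to the hadoop streaming jar as the -mapper and -reducer arguments; the framework handles data placement, shuffling and fault tolerance.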
Fading vision: knowledge translation in the implementation of a public health policy intervention
2013-01-01
Background In response to several high profile public health crises, public health renewal is underway in Canada. In the province of British Columbia, the Ministry of Health initiated a collaborative evidence-informed process involving a steering committee of representatives from the six health authorities. A Core Functions (CF) Framework was developed, identifying 21 core public health programs. For each core program, an evidence review was conducted and a model core program paper developed. These documents were distributed to health authorities to guide development of their own renewed public health services. The CF implementation was conceptualized as an embedded knowledge translation process. A CF coordinator in each health authority was to facilitate a gap analysis and development of a performance improvement plan for each core program, and post these publically on the health authority website. Methods Interviews (n = 19) and focus groups (n = 8) were conducted with a total of 56 managers and front line staff from five health authorities working in the Healthy Living and Sexually Transmitted Infection Prevention core programs. All interviews and focus groups were digitally recorded, transcribed and verified by the project coordinator. Five members of the research team used NVivo 9 to manage data and conducted a thematic analysis. Results Four main themes emerged concerning implementation of the CF Framework generally, and the two programs specifically. The themes were: ‘you’ve told me what, now tell me how’; ‘the double bind’; ‘but we already do that’; and the ‘selling game.’ Findings demonstrate the original vision of the CF process was lost in the implementation process and many participants were unaware of the CF framework or process. Conclusions Results are discussed with respect to a well-known framework on the adoption, assimilation, and implementation of innovations in health services organizations. Despite attempts of the Ministry of Health and the Steering Committee to develop and implement a collaborative, evidence-informed policy intervention, there were several barriers to the realization of the vision for core public health functions implementation, at least in the early stages. In neglecting the implementation process, it seems unlikely that the expected benefits of the public health renewal process will be realized. PMID:23734672
Singh, Karandeep; Ahn, Chang-Won; Paik, Euihyun; Bae, Jang Won; Lee, Chun-Hee
2018-01-01
Artificial life (ALife) examines systems related to natural life, its processes, and its evolution, using simulations with computer models, robotics, and biochemistry. In this article, we focus on the computer modeling, or "soft," aspects of ALife and present a framework that enables scientists and modelers to support such experiments. The framework is designed and built to be a parallel as well as distributed agent-based modeling environment, and does not require end users to have expertise in parallel or distributed computing. Furthermore, we use this framework to implement a hybrid model using microsimulation and agent-based modeling techniques to generate an artificial society. We leverage this artificial society to simulate and analyze population dynamics using Korean population census data. The agents in this model derive their decisional behaviors from real data (microsimulation feature) and interact among themselves (agent-based modeling feature) to proceed in the simulation. The behaviors, interactions, and social scenarios of the agents are varied to perform an analysis of population dynamics. We also estimate the future cost of pension policies based on the future population structure of the artificial society. The proposed framework and model demonstrate how ALife techniques can be used by researchers in relation to social issues and policies.
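As a minimal illustration of the microsimulation-plus-ABM idea (the agent schema and mortality rates below are invented for this sketch, not taken from the Korean census data used in the article, and agent-to-agent interaction is omitted for brevity), each agent draws its yearly survival from an age-dependent table and the population is advanced in discrete steps:

    import random

    # Hypothetical age-dependent annual mortality rates (microsimulation input).
    def mortality_rate(age):
        return 0.005 if age < 65 else 0.05

    class Agent:
        def __init__(self, age):
            self.age = age
            self.alive = True

        def step(self):
            # Decisional behaviour driven by data (microsimulation feature).
            if random.random() < mortality_rate(self.age):
                self.alive = False
            self.age += 1

    population = [Agent(random.randint(0, 90)) for _ in range(10_000)]
    for year in range(2020, 2031):
        for agent in population:
            if agent.alive:
                agent.step()
        print(year, sum(a.alive for a in population))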
A Simulation Framework for Battery Cell Impact Safety Modeling Using LS-DYNA
Marcicki, James; Zhu, Min; Bartlett, Alexander; ...
2017-02-04
The development process of electrified vehicles can benefit significantly from computer-aided engineering tools that predict the multiphysics response of batteries during abusive events. A coupled structural, electrical, electrochemical, and thermal model framework has been developed within the commercially available LS-DYNA software. The finite element model leverages a three-dimensional mesh structure that fully resolves the unit cell components. The mechanical solver predicts the distributed stress and strain response, with failure thresholds leading to the onset of an internal short circuit. In this implementation, an arbitrary compressive strain criterion is applied locally to each unit cell. A spatially distributed equivalent circuit model provides an empirical representation of the electrochemical response with minimal computational complexity. The thermal model provides state information to index the electrical model parameters, while simultaneously accepting irreversible and reversible sources of heat generation. The spatially distributed models of the electrical and thermal dynamics allow for the localization of current density and the corresponding temperature response. The ability to predict the distributed thermal response of the cell as its stored energy is completely discharged through the short circuit enables an engineering safety assessment. A parametric analysis of an exemplary model is used to demonstrate the simulation capabilities.
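A heavily simplified sketch of the equivalent-circuit-plus-thermal coupling idea (all parameter values and the lumped single-node thermal balance are assumptions for this toy, far cruder than the spatially distributed LS-DYNA implementation; a real model would also update the open-circuit voltage with state of charge):

    # Hypothetical cell parameters for a toy internal short circuit (all values assumed).
    ocv, r_int, r_short = 3.7, 0.02, 0.005      # V, ohm, ohm
    capacity_c = 2.0 * 3600.0                   # 2 Ah of stored charge, in coulombs
    mass, cp, h_loss = 0.05, 900.0, 0.02        # kg, J/(kg K), W/K
    temp, t_amb, dt = 25.0, 25.0, 0.1           # deg C, deg C, s

    t = 0.0
    while capacity_c > 0.0:
        current = ocv / (r_int + r_short)       # discharge current through the short
        q_gen = current**2 * (r_int + r_short)  # irreversible Joule heating, W
        # Lumped single-node thermal balance: heating minus convective loss.
        temp += dt * (q_gen - h_loss * (temp - t_amb)) / (mass * cp)
        capacity_c -= current * dt
        t += dt

    print(f"stored energy discharged after {t:.0f} s; final temperature {temp:.0f} C")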
MADANALYSIS 5, a user-friendly framework for collider phenomenology
NASA Astrophysics Data System (ADS)
Conte, Eric; Fuks, Benjamin; Serret, Guillaume
2013-01-01
We present MADANALYSIS 5, a new framework for phenomenological investigations at particle colliders. Based on a C++ kernel, this program allows us to efficiently perform, in a straightforward and user-friendly fashion, sophisticated physics analyses of event files such as those generated by a large class of Monte Carlo event generators. MADANALYSIS 5 comes with two modes of running. The first one, easier to handle, uses the strengths of a powerful PYTHON interface in order to implement physics analyses by means of a set of intuitive commands. The second one requires the user to implement the analyses in the C++ programming language, directly within the core of the analysis framework. This opens unlimited possibilities concerning the level of complexity which can be reached, limited only by the programming skills and the originality of the user.
Program summary
Program title: MadAnalysis 5
Catalogue identifier: AENO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Permission to use, copy, modify and distribute this program is granted under the terms of the GNU General Public License.
No. of lines in distributed program, including test data, etc.: 31087
No. of bytes in distributed program, including test data, etc.: 399105
Distribution format: tar.gz
Programming language: PYTHON, C++.
Computer: All platforms on which Python version 2.7, Root version 5.27 and the g++ compiler are available. Compatibility with newer versions of these programs is also ensured; however, the Python version must be below version 3.0.
Operating system: Unix, Linux and Mac OS operating systems on which the above-mentioned versions of Python and Root, as well as g++, are available.
Classification: 11.1.
External routines: ROOT (http://root.cern.ch/drupal/)
Nature of problem: Implementing sophisticated phenomenological analyses in high-energy physics in a flexible, efficient and straightforward fashion, starting from event files such as those produced by Monte Carlo event generators. The event files may or may not have been matched to parton showering, and may or may not have been processed by a (fast) detector simulation. Depending on the sophistication level of the event files (parton level, hadron level, reconstructed level), several input formats are possible.
Solution method: We implement an interface allowing the production of predefined as well as user-defined histograms for a large class of kinematical distributions after applying a set of event selection cuts specified by the user. This allows us to devise robust and novel search strategies for collider experiments, such as those currently running at the Large Hadron Collider at CERN, in a very efficient way.
Restrictions: Unsupported event file formats.
Unusual features: The code is fully based on object representations for events, particles, reconstructed objects and cuts, which facilitates the implementation of an analysis.
Running time: Depends on the purposes of the user and on the number of events to process; it varies from a few seconds to the order of a minute for several million events.
Massive Signal Analysis with Hadoop (Invited)
NASA Astrophysics Data System (ADS)
Addair, T.
2013-12-01
The Geophysical Monitoring Program (GMP) at Lawrence Livermore National Laboratory is in the process of transitioning from a primarily human-driven analysis pipeline to a more automated and exploratory system. Waveform correlation represents a significant part of this effort, and the results of this processing could lead to the development of more sophisticated event detection and analysis systems that require less human interaction and address fundamental shortcomings in existing systems. Furthermore, use of distributed IO systems fundamentally addresses a scalability concern for the GMP as our data holdings continue to grow rapidly. As the data volume increases, it becomes less reasonable to rely upon human analysts to sift through all the information. Not only is more automation essential to keeping up with the ingestion rate, but we also require faster and more sophisticated tools for visualizing and interacting with the data. These issues of scalability are not unique to GMP or the seismic domain. All across the lab, and throughout industry, we hear about the promise of 'big data' to address the need to quickly analyze vast amounts of data in fundamentally new ways. Our waveform correlation system finds and correlates nearby seismic events across the entire Earth. In our original implementation of the system, we processed some 50 TB of data on an in-house traditional HPC cluster (44 cores, 1 filesystem) over the span of 42 days. Having determined the primary performance bottleneck to be reading waveforms off a single BlueArc file server, we began investigating distributed IO solutions like Hadoop. As a test case, we took a 1 TB subset of our data and ported it to Livermore Computing's development Hadoop cluster. Through a pilot project sponsored by Livermore Computing (LC), the GMP successfully implemented the waveform correlation system in the Hadoop distributed MapReduce computing framework. Hadoop is an open source implementation of the MapReduce distributed programming framework. We used the Hadoop scripting framework known as Pig to put together the multi-job MapReduce pipeline, extracting as much parallelism as possible from the algorithms. We also made use of the Sqoop data ingestion tool to pull metadata tables from our Oracle database into HDFS (the Hadoop Distributed Filesystem). Running on our in-house HPC cluster, processing this test dataset took 58 hours to complete. In contrast, running our Hadoop implementation on LC's 10 node (160 core) cluster, we were able to cross-correlate the 1 TB of nearby seismic events in just under 3 hours, more than a 19-fold improvement over our existing implementation. This project is one of the first major data mining and analysis efforts, at the laboratory or elsewhere, to correlate the entire Earth's seismicity. Through the success of this project, we believe we've shown that a MapReduce solution can be appropriate for many large-scale Earth science data analysis and exploration problems. Given Hadoop's position as the dominant data analytics solution in industry, we believe Hadoop can be applied to many previously intractable Earth science problems.
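The kernel of such a pipeline is the pairwise waveform correlation itself; a minimal NumPy sketch of a peak normalized cross-correlation between two synthetic, illustrative traces might look like the following, with the MapReduce layer responsible only for routing nearby-event pairs to this function:

    import numpy as np

    def normalized_xcorr(a, b):
        """Peak normalized cross-correlation of two equal-length traces."""
        a = (a - a.mean()) / (a.std() * len(a))
        b = (b - b.mean()) / b.std()
        return np.correlate(a, b, mode="full").max()

    # Two synthetic traces: the second is a shifted, noisy copy of the first.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 1000)
    w1 = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.3 * t)
    w2 = np.roll(w1, 50) + 0.1 * rng.standard_normal(t.size)

    print(f"peak correlation: {normalized_xcorr(w1, w2):.2f}")  # close to 1.0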
Monitoring performance of a highly distributed and complex computing infrastructure in LHCb
NASA Astrophysics Data System (ADS)
Mathe, Z.; Haen, C.; Stagni, F.
2017-10-01
In order to ensure an optimal performance of the LHCb Distributed Computing, based on LHCbDIRAC, it is necessary to be able to inspect the behavior over time of many components: firstly the agents and services on which the infrastructure is built, but also all the computing tasks and data transfers that are managed by this infrastructure. This consists of recording and then analyzing time series of a large number of observables, for which the usage of SQL relational databases is far from optimal. Therefore, within DIRAC we have been studying novel possibilities based on NoSQL databases (ElasticSearch, OpenTSDB and InfluxDB); as a result of this study, we developed a new monitoring system based on ElasticSearch. It has been deployed on the LHCb Distributed Computing infrastructure, where it collects data from all the components (agents, services, jobs) and allows reports to be created through Kibana and a web user interface based on the DIRAC web framework. In this paper we describe this new implementation of the DIRAC monitoring system. We give details on the ElasticSearch implementation within the DIRAC general framework, as well as an overview of the advantages of the pipeline aggregation used for creating a dynamic bucketing of the time series. We present the advantages of using the ElasticSearch DSL high-level library for creating and running queries. Finally, we present the performance of the system.
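As an illustration of the kind of query the ElasticSearch DSL library makes concise (the index name, fields and site value below are hypothetical, not the actual LHCbDIRAC schema), a date-histogram bucketing of a job time series could be written as:

    from elasticsearch import Elasticsearch
    from elasticsearch_dsl import Search

    client = Elasticsearch(["http://localhost:9200"])

    # Bucket job records into hourly bins and average a metric per bin,
    # mirroring the dynamic bucketing of time series described above.
    s = Search(using=client, index="lhcb-jobs").filter("term", site="LCG.CERN.ch")
    s.aggs.bucket("per_hour", "date_histogram",
                  field="timestamp", fixed_interval="1h") \
          .metric("mean_running", "avg", field="running_jobs")

    response = s.execute()
    for bucket in response.aggregations.per_hour.buckets:
        print(bucket.key_as_string, bucket.mean_running.value)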
Pfadenhauer, Lisa M; Gerhardus, Ansgar; Mozygemba, Kati; Lysdahl, Kristin Bakke; Booth, Andrew; Hofmann, Bjørn; Wahlster, Philip; Polus, Stephanie; Burns, Jacob; Brereton, Louise; Rehfuess, Eva
2017-02-15
The effectiveness of complex interventions, as well as their success in reaching relevant populations, is critically influenced by their implementation in a given context. Current conceptual frameworks often fail to address context and implementation in an integrated way and, where addressed, they tend to focus on organisational context and are mostly concerned with specific health fields. Our objective was to develop a framework to facilitate the structured and comprehensive conceptualisation and assessment of context and implementation of complex interventions. The Context and Implementation of Complex Interventions (CICI) framework was developed in an iterative manner and underwent extensive application. An initial framework based on a scoping review was tested in rapid assessments, revealing inconsistencies with respect to the underlying concepts. Thus, pragmatic utility concept analysis was undertaken to advance the concepts of context and implementation. Based on these findings, the framework was revised and applied in several systematic reviews, one health technology assessment (HTA) and one applicability assessment of very different complex interventions. Lessons learnt from these applications and from peer review were incorporated, resulting in the CICI framework. The CICI framework comprises three dimensions: context, implementation and setting. These interact with one another and with the intervention dimension. Context comprises seven domains (i.e., geographical, epidemiological, socio-cultural, socio-economic, ethical, legal, political); implementation consists of five domains (i.e., implementation theory, process, strategies, agents and outcomes); setting refers to the specific physical location in which the intervention is put into practice. The intervention and the way it is implemented in a given setting and context can operate on the micro, meso and macro levels. Tools to operationalise the framework comprise a checklist, data extraction tools for qualitative and quantitative reviews and a consultation guide for applicability assessments. The CICI framework addresses and graphically presents context, implementation and setting in an integrated way. It aims at simplifying and structuring complexity in order to advance our understanding of whether and how interventions work. The framework can be applied in systematic reviews and HTA as well as primary research, and facilitates communication among teams of researchers and with various stakeholders.
Intelligent and robust optimization frameworks for smart grids
NASA Astrophysics Data System (ADS)
Dhansri, Naren Reddy
A smart grid implies a cyberspace, real-time, distributed power control system that optimally delivers electricity based on varying consumer characteristics. Although smart grids solve many contemporary problems, they give rise to new control and optimization problems as renewable energy sources such as wind and solar play a growing role. Given the highly dynamic nature of distributed power generation and varying consumer demand and cost requirements, the total power output of the grid should be controlled so that load demand is met while higher priority is given to renewable energy sources. Hence, power generated from renewable sources should be maximized while generation from non-renewable sources is minimized. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize power generation by tracking consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits while circumventing nonlinear model complexities and handling uncertainties for superior real-time operation. The proposed intelligent system framework optimizes smart grid power generation for maximum economic and ecological benefit under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrating various energy sources for real-time smart grid implementations. The robust optimization results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economic and ecological performance objectives. The proposed framework therefore offers a new worst-case deterministic optimization algorithm for smart grid automatic generation control.
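A minimal sketch of the renewables-first idea as a greedy merit-order dispatch (plant names, capacities and the greedy rule are assumptions for illustration; the dissertation's controllers are closed-loop and explicitly handle uncertainty, which this toy omits):

    # Available sources, cheapest/cleanest first: (name, capacity in MW).
    merit_order = [("wind", 40.0), ("solar", 25.0), ("gas", 100.0)]

    def dispatch(demand_mw):
        """Greedy merit-order dispatch: fill demand from renewables first."""
        schedule = {}
        remaining = demand_mw
        for name, capacity in merit_order:
            allocation = min(capacity, remaining)
            schedule[name] = allocation
            remaining -= allocation
        if remaining > 0:
            raise ValueError(f"unserved load: {remaining:.1f} MW")
        return schedule

    print(dispatch(90.0))  # {'wind': 40.0, 'solar': 25.0, 'gas': 25.0}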
Local Alignment Tool Based on Hadoop Framework and GPU Architecture
Hung, Che-Lun; Hua, Guan-Jie
2014-01-01
With the rapid growth of next-generation sequencing technologies, such as Slex, more and more data have been discovered and published. To analyze such huge volumes of data, computational performance is an important issue. Recently, many tools, such as SOAP, have been implemented on Hadoop and GPU parallel computing architectures. BLASTP is an important tool, implemented on GPU architectures, that biologists use to compare protein sequences. To deal with big biological data, it is hard to rely on a single GPU. Therefore, we implement a distributed BLASTP by combining Hadoop and multiple GPUs. The experimental results show that the proposed method improves the performance of BLASTP relative to a single GPU, while also achieving high availability and fault tolerance. PMID:24955362
NASA Astrophysics Data System (ADS)
Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.
2017-12-01
Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling with various scenarios (i.e., different crops, management schedules, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. A local desktop with 14 cores (28 threads) was used to test the framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file, the EPIC simulation is divided into jobs across a user-defined number of CPU threads. The input data formatters convert the raw database into EPIC input files, 28 EPIC jobs then run simultaneously, and only the output files of interest are parsed and passed to the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a job list for distributed parallel computing. After all simulations are completed, parallelized output analyzers process all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
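A minimal sketch of such a job farm using Python's standard multiprocessing module (the executable name, directory layout and failure handling are placeholders; the actual framework also performs per-job input formatting and output parsing):

    import subprocess
    from multiprocessing import Pool

    N_THREADS = 28
    grid_cells = range(406_839)  # one EPIC run per grid cell

    def run_epic(cell_id):
        """Run one EPIC simulation in its own working directory (placeholder command)."""
        result = subprocess.run(
            ["./epic_executable"], cwd=f"runs/cell_{cell_id}",
            capture_output=True, text=True,
        )
        return cell_id, result.returncode

    if __name__ == "__main__":
        with Pool(N_THREADS) as pool:
            for cell_id, rc in pool.imap_unordered(run_epic, grid_cells):
                if rc != 0:
                    print(f"cell {cell_id} failed")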
CAGE IIIA Distributed Simulation Design Methodology
2014-05-01
…Implementing Defence Experimentation (GUIDEx). The key challenges for this methodology are in understanding how to design it, define the…operation, and be available in the other nation's simulations. The challenge for the CAGE campaign of experiments is to continue to build upon this
High-performance reconfigurable hardware architecture for restricted Boltzmann machines.
Ly, Daniel Le; Chow, Paul
2010-11-01
Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
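For context on the computation being accelerated, here is a minimal NumPy sketch of one alternating (block) Gibbs update in an RBM; the weights are random stand-ins for a trained model, and the FPGA engines pipeline exactly these matrix-vector products and samplings:

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden = 256, 256

    # Random weights and biases stand in for a trained model.
    W = rng.standard_normal((n_visible, n_hidden)) * 0.01
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    v = rng.integers(0, 2, n_visible).astype(float)

    # One Gibbs step: sample hidden given visible, then visible given hidden.
    p_h = sigmoid(v @ W + b_h)
    h = (rng.random(n_hidden) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b_v)
    v_new = (rng.random(n_visible) < p_v).astype(float)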
Liang, Wanjie; Cao, Jing; Fan, Yan; Zhu, Kefeng; Dai, Qiwei
2015-01-01
In recent years, traceability systems have been developed as effective tools for improving the transparency of supply chains, thereby guaranteeing the quality and safety of food products. In this study, we proposed a cattle/beef supply chain traceability model and a traceability system based on radio frequency identification (RFID) technology and the EPCglobal network. First of all, the transformations of traceability units were defined and analyzed throughout the cattle/beef chain. Secondly, we described the internal and external traceability information acquisition, transformation, and transmission processes throughout the beef supply chain in detail, and explained a methodology for modeling traceability information using the electronic product code information service (EPCIS) framework. Then, the traceability system was implemented based on the Fosstrak and FreePastry software packages, with animal ear tag codes and electronic product codes (EPCs) employed to identify traceability units. Finally, a cattle/beef supply chain comprising a breeding business, a slaughter and processing business, a distribution business and a sales outlet was used as a case study to evaluate the traceability system. The results demonstrated that the major advantages of the system are the effective sharing of information among businesses and the gapless traceability of the cattle/beef supply chain. PMID:26431340
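To make the information model concrete, a minimal sketch of an EPCIS-style object event as a Python dataclass follows; the field choices are a simplified illustration of the EPCIS event vocabulary, not the full standard, and the identifier values are examples:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ObjectEvent:
        """Simplified EPCIS-style event tying EPCs to a step in the beef chain."""
        epc_list: list          # EPCs of the traceability units observed
        biz_step: str           # e.g. "slaughtering", "shipping"
        disposition: str        # e.g. "in_progress", "in_transit"
        read_point: str         # where the RFID read occurred
        event_time: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    event = ObjectEvent(
        epc_list=["urn:epc:id:sgtin:0614141.107346.2017"],
        biz_step="slaughtering",
        disposition="in_progress",
        read_point="urn:epc:id:sgln:0614141.00777.0",
    )
    print(event)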
Collaborative environments for capability-based planning
NASA Astrophysics Data System (ADS)
McQuay, William K.
2005-05-01
Distributed collaboration is an emerging technology for the 21st century that will significantly change how business is conducted in the defense and commercial sectors. Collaboration involves two or more geographically dispersed entities working together to create a "product" by sharing and exchanging data, information, and knowledge. A product is defined broadly to include, for example, writing a report, creating software, designing hardware, or implementing robust systems engineering and capability planning processes in an organization. Collaborative environments provide the framework and integrate models, simulations, domain-specific tools, and virtual test beds to facilitate collaboration among the multiple disciplines needed in the enterprise. The Air Force Research Laboratory (AFRL) is conducting a leading-edge program to develop distributed collaborative technologies targeted at the Air Force's implementation of systems engineering for simulation-aided acquisition and capability-based planning. The research focuses on the open-systems agent-based framework, product and process modeling, structural architecture, and the integration technologies, the glue that binds the software components together. In the past four years, two live assessment events have been conducted to demonstrate the technology in support of research for the Air Force Agile Acquisition initiatives. The AFRL Collaborative Environment concept will foster a major cultural change in how the acquisition, training, and operational communities conduct business.
Distributed collaborative environments for virtual capability-based planning
NASA Astrophysics Data System (ADS)
McQuay, William K.
2003-09-01
Distributed collaboration is an emerging technology that will significantly change how decisions are made in the 21st century. Collaboration involves two or more geographically dispersed individuals working together to share and exchange data, information, knowledge, and actions. The marriage of information, collaboration, and simulation technologies provides the decision maker with a collaborative virtual environment for planning and decision support. This paper reviews research focused on applying an open-standards, agent-based framework with integrated modeling and simulation to a new Air Force initiative in capability-based planning, and on the ability to implement it in a distributed virtual environment. The Virtual Capability Planning effort will provide decision-quality knowledge for Air Force resource allocation and investment planning, including examination of proposed capabilities and the cost of alternative approaches, the impact of technologies, identification of primary risk drivers, and creation of executable acquisition strategies. The transformed Air Force business processes are enabled by iterative use of constructive and virtual modeling, simulation, and analysis, together with information technology. These tools are applied collaboratively, via a technical framework, by all the affected stakeholders: warfighter, laboratory, product center, logistics center, test center, and primary contractor.
NASA Astrophysics Data System (ADS)
Sheng, Zheng
2013-02-01
The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper treats the RFC problem in a Bayesian framework. It uses the unbiased Markov Chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters, by using an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model within a Bayesian inversion framework. In contrast to global optimization algorithms, the Bayesian MCMC approach obtains not only approximate solutions but also the probability distributions of the solutions, that is, an uncertainty analysis of the solutions. The Bayesian MCMC algorithm is applied to both simulated and real radar sea-clutter data. Reference data are taken to be the simulated data, with refractivity profiles obtained using a helicopter. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles from the assumed simulation and the helicopter sounding data; and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.
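A minimal random-walk Metropolis sketch of the MCMC idea on a toy one-parameter problem (the Gaussian likelihood, trivial forward model and proposal width are illustrative stand-ins for the parabolic-equation forward model and refractivity parameters used in the paper):

    import numpy as np

    rng = np.random.default_rng(1)
    observed = 2.0      # toy "sea clutter" observation
    sigma = 0.5         # assumed observation noise

    def log_posterior(m):
        # Flat prior on [0, 5]; Gaussian likelihood around a trivial model f(m) = m.
        if not 0.0 <= m <= 5.0:
            return -np.inf
        return -0.5 * ((observed - m) / sigma) ** 2

    samples, m = [], 1.0
    for _ in range(20_000):
        proposal = m + 0.3 * rng.standard_normal()   # random-walk proposal
        # Accept with the Metropolis ratio; otherwise keep the current state.
        if np.log(rng.random()) < log_posterior(proposal) - log_posterior(m):
            m = proposal
        samples.append(m)

    posterior = np.array(samples[5000:])             # discard burn-in
    print(posterior.mean(), posterior.std())         # approx. 2.0 and 0.5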
Gisdon, Florian J; Culka, Martin; Ullmann, G Matthias
2016-10-01
Conjugate peak refinement (CPR) is a powerful and robust method for finding transition states on a molecular potential energy surface. Nevertheless, to the best of our knowledge, the method had so far been implemented only in CHARMM. In this paper, we present PyCPR, a new Python-based implementation of the CPR algorithm within the pDynamo framework. We provide a detailed description of the theory underlying our implementation and discuss the different parts of the implementation. The method is applied to two different problems. First, we illustrate the method by analyzing the gauche to anti-periplanar transition of butane using a semiempirical QM method. Second, we reanalyze the mechanism of a glycyl-radical enzyme, namely 4-hydroxyphenylacetate decarboxylase (HPD), using QM/MM calculations. Finally, we suggest a strategy for using our implementation of the CPR algorithm. The integration of PyCPR into the pDynamo framework allows the combination of CPR with the large variety of methods implemented in pDynamo. PyCPR can be used in combination with the quantum mechanical, molecular mechanical and hybrid methods implemented directly in pDynamo, but also in combination with external programs such as ORCA, using pDynamo as an interface. PyCPR is distributed as free, open-source software and can be downloaded from http://www.bisb.uni-bayreuth.de/index.php?page=downloads. Graphical abstract: PyCPR is a search tool for finding saddle points on the potential energy landscape of a molecular system.
Operational Research: Evaluating Multimodel Implementations for 24/7 Runtime Environments
NASA Astrophysics Data System (ADS)
Burkhart, J. F.; Helset, S.; Abdella, Y. S.; Lappegard, G.
2016-12-01
We present a new open-source framework for operational hydrologic rainfall-runoff modeling. The Statkraft Hydrologic Forecasting Toolbox (Shyft) differs from existing frameworks in that two of its primary goals are to provide: i) modern, professionally developed source code, and ii) a platform that is robust and ready for operational deployment. Developed jointly by Statkraft AS and the University of Oslo, the framework is currently in operation in both private and academic environments. The hydrology presently available in the distribution is simple and proven. Shyft provides a platform for distributed hydrologic modeling in a highly efficient manner. In its current operational deployment at Statkraft, Shyft is used to provide daily 10-day forecasts for critical reservoirs. In a research setting, we have developed a novel implementation of the SNICAR model to assess the impact of aerosol deposition on snow packs. Several well-known rainfall-runoff algorithms are available, allowing intercomparison of different approaches based on available data and the geographical environment. The well-known HBV model is a default option, and other routines with more localized methods for handling snow and evapotranspiration, or simplifications of catchment-scale processes, are included. For the latter, we have implemented the Kirchner response routine. Reflecting the framework's Norwegian origins, a variety of snow-melt routines, from simplified degree-day models to more advanced energy-balance models, may be selected. Ensemble forecasts, multi-model implementations, and statistical post-processing routines enable a robust toolbox for investigating optimal model configurations in an operational setting. The Shyft core is written in modern templated C++ and has Python wrappers for easy access to module sub-routines. The code is developed such that the modules that make up a "method stack" are easy to modify and customize, allowing one to create new methods and test them rapidly. Due to the simple architecture and ease of access to the module routines, we see Shyft as an optimal choice for evaluating new hydrologic routines in an environment requiring robust, professionally developed software, and we welcome further community participation.
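A sketch of the pluggable "method stack" idea in plain Python (the function names, toy physics and stack composition are hypothetical, not the actual Shyft API, which exposes templated C++ routines through generated wrappers):

    # Hypothetical method stack: interchangeable routines share a simple signature.
    def degree_day_snow(temp_c, swe_mm, ddf=3.0):
        """Degree-day snow melt: melt proportional to positive temperature."""
        melt = min(swe_mm, max(temp_c, 0.0) * ddf)
        return swe_mm - melt, melt

    def kirchner_response(storage_mm, inflow_mm, k=0.1):
        """Kirchner-style storage-discharge response (toy linear form)."""
        storage_mm += inflow_mm
        discharge = k * storage_mm
        return storage_mm - discharge, discharge

    def run_cell(forcing, snow_routine, response_routine):
        swe, storage, hydrograph = 50.0, 10.0, []
        for temp_c in forcing:
            swe, melt = snow_routine(temp_c, swe)
            storage, q = response_routine(storage, melt)
            hydrograph.append(q)
        return hydrograph

    # Swapping a routine swaps the physics without touching the driver.
    print(run_cell([-2.0, 1.5, 4.0, 3.0], degree_day_snow, kirchner_response))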
Integrating biodiversity distribution knowledge: toward a global map of life.
Jetz, Walter; McPherson, Jana M; Guralnick, Robert P
2012-03-01
Global knowledge about the spatial distribution of species is orders of magnitude coarser in resolution than other geographically-structured environmental datasets such as topography or land cover. Yet such knowledge is crucial in deciphering ecological and evolutionary processes and in managing global change. In this review, we propose a conceptual and cyber-infrastructure framework for refining species distributional knowledge that is novel in its ability to mobilize and integrate diverse types of data such that their collective strengths overcome individual weaknesses. The ultimate aim is a public, online, quality-vetted 'Map of Life' that for every species integrates and visualizes available distributional knowledge, while also facilitating user feedback and dynamic biodiversity analyses. First milestones toward such an infrastructure have now been implemented.
Distributional Cost-Effectiveness Analysis
Asaria, Miqdad; Griffin, Susan; Cookson, Richard
2015-01-01
Distributional cost-effectiveness analysis (DCEA) is a framework for incorporating health inequality concerns into the economic evaluation of health sector interventions. In this tutorial, we describe the technical details of how to conduct DCEA, using an illustrative example comparing alternative ways of implementing the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP). The 2 key stages in DCEA are 1) modeling social distributions of health associated with different interventions, and 2) evaluating social distributions of health with respect to the dual objectives of improving total population health and reducing unfair health inequality. As well as describing the technical methods used, we also identify the data requirements and the social value judgments that have to be made. Finally, we demonstrate the use of sensitivity analyses to explore the impacts of alternative modeling assumptions and social value judgments. PMID:25908564
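As a toy numerical illustration of stage 2, the inequality-sensitive evaluation, one common approach is an Atkinson-style equally distributed equivalent (EDE) level of health; the quintile values and the inequality-aversion parameter below are invented for the example, not drawn from the BCSP analysis:

    import numpy as np

    # Hypothetical quality-adjusted life expectancy by socioeconomic quintile.
    baseline = np.array([68.0, 70.0, 72.0, 74.0, 76.0])
    with_programme = np.array([69.5, 71.2, 72.8, 74.5, 76.2])

    def atkinson_ede(health, epsilon=2.0):
        """Equally distributed equivalent health under inequality aversion epsilon."""
        return np.mean(health ** (1 - epsilon)) ** (1 / (1 - epsilon))

    for label, dist in [("baseline", baseline), ("with programme", with_programme)]:
        print(f"{label}: mean={dist.mean():.2f}, EDE={atkinson_ede(dist):.2f}")

A higher EDE relative to the mean indicates a more equal distribution, which is how the dual objectives of total health and inequality reduction can be traded off in a single number.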
An Open Source modular platform for hydrological model implementation
NASA Astrophysics Data System (ADS)
Kolberg, Sjur; Bruland, Oddbjørn
2010-05-01
An implementation framework for setup and evaluation of spatio-temporal models is developed, forming a highly modularized distributed model system. The ENKI framework allows building space-time models for hydrological or other environmental purposes from a suite of separately compiled subroutine modules. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational hydropower forecasting or other water resource management. Written in C++, ENKI uses a plug-in structure to build a complete model from separately compiled subroutine implementations. These modules contain very little code apart from the core process simulation, and are compiled as dynamic-link libraries (dll). A narrow interface allows the main executable to recognise the number and type of the different variables in each routine. The framework then exposes these variables to the user within the proper context, ensuring that time series exist for input variables, initialisation for states, GIS data sets for static map data, manually or automatically calibrated values for parameters, etc. ENKI is designed to meet three different levels of involvement in model construction:
• Model application: running and evaluating a given model; regional calibration against arbitrary data using a rich suite of objective functions, including likelihood and Bayesian estimation; and uncertainty analysis directed towards input or parameter uncertainty. Users need not know the model's composition of subroutines, the internal variables in the model, or how method modules are created.
• Model analysis: linking together different process methods, including parallel setup of alternative methods for solving the same task, and investigating the effect of different spatial discretization schemes. Users need not write or compile computer code, or handle file IO for each module.
• Routine implementation and testing: implementing new process-simulating methods/equations, specialised objective functions or quality-control routines, and testing them in an existing framework. Users need not implement user or model interfaces for the new routine, handle IO, or administer model setup, runs, calibration and validation.
Originally developed for Norway's largest hydropower producer, Statkraft, ENKI is now being turned into an Open Source project. At the time of writing, the license and the project administration are not yet established, and the application remains to be ported to other compilers and computer platforms. However, we hope that ENKI will prove useful for both academic and operational users.
Collaborative Information Retrieval Method among Personal Repositories
NASA Astrophysics Data System (ADS)
Kamei, Koji; Yukawa, Takashi; Yoshida, Sen; Kuwabara, Kazuhiro
In this paper, we describe a collaborative information retrieval method among personal repositories and an implementation of the method on a personal agent framework. We propose a framework for personal agents that aims to enable the sharing and exchange of information resources that are distributed unevenly among individuals. The kernel of the personal agent framework is an RDF (resource description framework)-based information repository for storing, retrieving and manipulating privately collected information, such as documents the user read and/or wrote, email he/she exchanged, web pages he/she browsed, etc. The repository also collects annotations to information resources that describe relationships among information resources, and records of interaction between the user and information resources. Since the information resources in a personal repository and their structure are personalized, information retrieval from other users' repositories is an important application of the personal agent. A vector space model with a personalized concept-base is employed as the information retrieval mechanism in a personal repository. Since a personalized concept-base is constructed from the information resources in a personal repository, it reflects its user's knowledge and interests. On the other hand, this leads to a problem when querying other users' personal repositories: simply transferring query requests does not produce desirable results. To solve this problem, we propose a query equalization scheme based on a relevance feedback method for collaborative information retrieval between personalized concept-bases. In this paper, we describe an implementation of the collaborative information retrieval method and its user interface on the personal agent framework.
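A minimal sketch of the retrieval core, a cosine-similarity vector space model plus a Rocchio-style relevance-feedback update of the query vector (the tiny corpus and parameter values are invented; the paper's concept-base adds a personalized term space on top of this basic idea):

    import numpy as np
    from collections import Counter

    docs = ["agent framework for personal repository",
            "information retrieval with concept base",
            "personal agent exchanges information resources"]
    vocab = sorted({w for d in docs for w in d.split()})

    def vectorize(text):
        counts = Counter(text.split())
        return np.array([counts[w] for w in vocab], dtype=float)

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    D = np.array([vectorize(d) for d in docs])
    q = vectorize("personal information retrieval")
    print([round(cosine(q, d), 2) for d in D])

    # Rocchio-style feedback: pull the query toward a relevant document (doc 1 here).
    alpha, beta = 1.0, 0.75
    q_new = alpha * q + beta * D[1]
    print([round(cosine(q_new, d), 2) for d in D])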
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vrugt, Jasper A; Robinson, Bruce A; Ter Braak, Cajo J F
In recent years, a strong debate has emerged in the hydrologic literature regarding what constitutes an appropriate framework for uncertainty estimation. Particularly, there is strong disagreement whether an uncertainty framework should have its roots within a proper statistical (Bayesian) context, or whether such a framework should be based on a different philosophy and implement informal measures and weaker inference to summarize parameter and predictive distributions. In this paper, we compare a formal Bayesian approach using Markov Chain Monte Carlo (MCMC) with generalized likelihood uncertainty estimation (GLUE) for assessing uncertainty in conceptual watershed modeling. Our formal Bayesian approach is implemented using the recently developed differential evolution adaptive metropolis (DREAM) MCMC scheme with a likelihood function that explicitly considers model structural, input and parameter uncertainty. Our results demonstrate that DREAM and GLUE can generate very similar estimates of total streamflow uncertainty. This suggests that formal and informal Bayesian approaches have more common ground than the hydrologic literature and ongoing debate might suggest. The main advantage of formal approaches is, however, that they attempt to disentangle the effect of forcing, parameter and model structural error on total predictive uncertainty. This is key to improving hydrologic theory and to better understanding and predicting the flow of water through catchments.
Urquhart, Robin; Sargeant, Joan; Grunfeld, Eva
2013-01-01
Moving knowledge into practice and the implementation of innovations in health care remain significant challenges. Few researchers adequately address the influence of organizations on the implementation of innovations in health care. The aims of this article are to (1) present 2 conceptual frameworks for understanding the organizational factors important to the successful implementation of innovations in health care settings; (2) discuss each in relation to the literature; and (3) briefly demonstrate how each may be applied to 3 initiatives involving the implementation of a specific innovation-synoptic reporting tools-in cancer care. Synoptic reporting tools capture information from diagnostic tests, surgeries, and pathology examinations in a standardized, structured manner and contain only the information necessary for patient care. The frameworks selected were the Promoting Action on Research Implementation in Health Services framework and an organizational framework of innovation implementation; these frameworks arise from different disciplines (nursing and management, respectively). The constructs from each framework are examined in relation to the literature, with each construct applied to synoptic reporting tool implementation to demonstrate how each may be used to inform both practice and research in this area. By improving our understanding of existing frameworks, we enhance our ability to more effectively study and target implementation processes.
Market-Based Decision Guidance Framework for Power and Alternative Energy Collaboration
NASA Astrophysics Data System (ADS)
Altaleb, Hesham
With the deregulation of power energy markets, innovations have transformed a once-static network into a more flexible grid. Microgrids have also been deployed to serve various purposes (e.g., reliability, sustainability). With the rapid deployment of smart grid technologies, it has become possible to measure and record both the quantity and the time of electrical power consumption. In addition, capabilities for controlling distributed supply and demand have resulted in complex systems where inefficiencies are possible and where improvements can be made. Electric power, like other volatile resources, cannot be stored efficiently, so managing such a resource requires considerable attention. Such complex systems present a need for decisions that can streamline consumption, delay infrastructure investments, and reduce costs. When renewable power resources and the need to limit harmful emissions are added to the equation, the search space for decisions becomes increasingly complex. As a result, the need for a comprehensive decision guidance system for electrical power consumption and production becomes evident. In this dissertation, I formulate and implement a comprehensive framework that addresses different aspects of electrical power generation and consumption using optimization models and collaboration concepts. The solution takes a two-pronged approach: managing interaction in real time for the short-term, immediate consumption of already-allocated resources; and managing operational planning for long-run consumption. More specifically, in real time, we present and implement a model of how to organize a secondary market for peak-demand allocation, and we describe the properties of the market that guarantee efficient execution and a method for the fair distribution of collaboration gains. We also propose and implement a primary market for the peak-demand-bounds determination problem, under the assumption that participants in this market can collaborate in real time. Moreover, this dissertation proposes an extensible framework to facilitate C&I entities forming a consortium to collaborate on their electric power supply and demand. The collaborative framework includes the structure of the market setting, bids, and a market resolution that produces a schedule of how power components are controlled, as well as the resulting payments. The market resolution must satisfy a number of desirable properties (i.e., feasibility, Nash equilibrium, Pareto optimality, and equal collaboration profitability), which are formally defined in the dissertation. Furthermore, to support the extensible framework's component library, power components such as utility contracts, back-up power generators, renewable resources, and power-consuming services are formally modeled. Finally, the validity of the framework is evaluated by a case study using simulated load scenarios to examine the ability of the framework to operate efficiently at the specified time intervals with minimal overhead cost.
BioContainers: an open-source and community-driven framework for software standardization
da Veiga Leprevost, Felipe; Grüning, Björn A.; Alves Aflitos, Saulo; Röst, Hannes L.; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C.; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I.; Perez-Riverol, Yasset
2017-01-01
Motivation: BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt frameworks, which allow software to be installed and executed under an isolated and controlled environment. It also provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers, with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). Availability and Implementation: The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk PMID:28379341
A development framework for semantically interoperable health information systems.
Lopez, Diego M; Blobel, Bernd G M E
2009-02-01
Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and their intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system, in order to evaluate and harmonize state-of-the-art architecture development approaches and standards for health information systems, as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and the HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported by formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework is exemplified in a public health scenario.
Query Health: standards-based, cross-platform population health surveillance
Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N
2014-01-01
Objective: Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Materials and methods: Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. Results: We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. Discussion: This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Conclusions: Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. PMID:24699371
Giannini, Tereza C; Tambosi, Leandro R; Acosta, André L; Jaffé, Rodolfo; Saraiva, Antonio M; Imperatriz-Fonseca, Vera L; Metzger, Jean Paul
2015-01-01
Ecosystem services provided by mobile agents are increasingly threatened by the loss and modification of natural habitats and by climate change, risking the maintenance of biodiversity, ecosystem functions, and human welfare. Research oriented towards a better understanding of the joint effects of land use and climate change on the provision of specific ecosystem services is therefore essential to safeguard such services. Here we propose a methodological framework that integrates species distribution forecasts and graph theory to identify key conservation areas which, if protected or restored, could improve habitat connectivity and safeguard ecosystem services. We applied the proposed framework to the provision of pollination services by a tropical stingless bee (Melipona quadrifasciata), a key pollinator of native flora from the Brazilian Atlantic Forest and of important agricultural crops. Based on the current distribution of this bee and that of the plant species it uses for feeding and nesting, we projected the joint distribution of bees and plants in the future under a moderate climate change scenario (following the IPCC). We then used this information, the bee's flight range, and the current mapping of Atlantic Forest remnants to infer habitat suitability and quantify local and regional habitat connectivity for 2030, 2050 and 2080. Our results revealed north-to-south and coastal-to-inland shifts in the pollinator's distribution over the next 70 years. Current and future connectivity maps revealed the most important corridors which, if protected or restored, could facilitate the dispersal and establishment of bees during distribution shifts. Our results also suggest that coffee plantations in eastern São Paulo and southern Minas Gerais States could suffer a pollinator deficit in the future, whereas pollination services seem to be secure in southern Brazil. Landowners and governmental agencies could use this information to implement new land use schemes. Overall, our proposed methodological framework could help design novel conservation and agricultural practices that may be crucial to conserving ecosystem services by buffering the joint effects of habitat configuration and climate change.
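A small sketch of the graph-theoretic step using NetworkX (the patch coordinates, suitability scores and dispersal threshold are invented; the study builds comparable graphs from forest remnants and the bee's flight range):

    import itertools
    import math
    import networkx as nx

    # Hypothetical habitat patches: (x, y) in km and a suitability score.
    patches = {0: ((0, 0), 0.9), 1: ((3, 1), 0.7), 2: ((5, 4), 0.8),
               3: ((9, 5), 0.6), 4: ((11, 8), 0.9)}
    flight_range_km = 5.0

    G = nx.Graph()
    for pid, (xy, suit) in patches.items():
        G.add_node(pid, suitability=suit)
    # Connect patches within the pollinator's flight range.
    for a, b in itertools.combinations(patches, 2):
        dist = math.dist(patches[a][0], patches[b][0])
        if dist <= flight_range_km:
            G.add_edge(a, b, weight=dist)

    # Patches with high betweenness act as stepping-stone corridors.
    print(nx.betweenness_centrality(G, weight="weight"))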
Integrating Remote and Social Sensing Data for a Scenario on Secure Societies in Big Data Platform
NASA Astrophysics Data System (ADS)
Albani, Sergio; Lazzarini, Michele; Koubarakis, Manolis; Taniskidou, Efi Karra; Papadakis, George; Karkaletsis, Vangelis; Giannakopoulos, George
2016-08-01
In the framework of the Horizon 2020 project BigDataEurope (Integrating Big Data, Software & Communities for Addressing Europe's Societal Challenges), a pilot for the Secure Societies Societal Challenge was designed considering the requirements of relevant stakeholders. The pilot focuses on the integration, in a Big Data platform, of data coming from remote and social sensing. Information on land changes derived from the Copernicus Sentinel-1A sensor (Change Detection workflow) is integrated with information coming from selected Twitter and news agency accounts (Event Detection workflow) in order to provide the user with multiple sources of information. The Change Detection workflow implements a processing chain in a distributed, parallel manner, exploiting the Big Data capabilities in place; the Event Detection workflow implements parallel and distributed monitoring of social media and news agencies, as well as suitable mechanisms to detect and geo-annotate the related events.
Computation of free energy profiles with parallel adaptive dynamics
NASA Astrophysics Data System (ADS)
Lelièvre, Tony; Rousset, Mathias; Stoltz, Gabriel
2007-04-01
We propose a formulation of an adaptive computation of free energy differences, in the adaptive biasing force or nonequilibrium metadynamics spirit, using conditional distributions of samples of configurations which evolve in time. This allows us to present a truly unifying framework for these methods, and to prove convergence results for certain classes of algorithms. From a numerical viewpoint, a parallel implementation of these methods is very natural, the replicas interacting through the reconstructed free energy. We demonstrate how to improve this parallel implementation by resorting to some selection mechanism on the replicas. This is illustrated by computations on a model system of conformational changes.
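To make the parallel-replica idea above concrete, here is a minimal sketch (not from the paper) of adaptive biasing force sampling on a one-dimensional double-well potential, with several replicas feeding a single shared mean-force histogram. The potential, step sizes, bin counts, and iteration counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rep, n_bins, beta, dt = 8, 50, 1.0, 1e-3   # replicas, bins, 1/kT, time step
edges = np.linspace(-2.0, 2.0, n_bins + 1)

def grad_V(x):
    """Gradient of the double-well potential V(x) = (x^2 - 1)^2."""
    return 4.0 * x * (x**2 - 1.0)

x = rng.uniform(-1.0, 1.0, n_rep)      # one walker per replica
f_sum = np.zeros(n_bins)               # accumulated instantaneous force
counts = np.zeros(n_bins)

for step in range(100_000):
    b = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    f = -grad_V(x)                     # instantaneous force on each walker
    np.add.at(f_sum, b, f)             # all replicas feed one shared histogram
    np.add.at(counts, b, 1.0)
    bias = f_sum[b] / np.maximum(counts[b], 1.0)   # estimated mean force
    x += dt * (f - bias) + np.sqrt(2.0 * dt / beta) * rng.standard_normal(n_rep)

mean_force = f_sum / np.maximum(counts, 1.0)
free_energy = -np.cumsum(mean_force) * (edges[1] - edges[0])  # A up to a constant
```

Because the replicas interact only through the shared histogram, the selection mechanisms discussed in the paper would act on top of a loop like this one.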
NASA Astrophysics Data System (ADS)
Noh, S.; Tachikawa, Y.; Shiiba, M.; Kim, S.
2011-12-01
Applications of sequential data assimilation methods have been increasing in hydrology to reduce uncertainty in model predictions. In a distributed hydrologic model, there are many types of state variables, and the variables interact with each other on different time scales. However, a framework to deal with the delayed response that originates from the differing time scales of hydrologic processes has not been thoroughly addressed in hydrologic data assimilation. In this study, we propose a lagged filtering scheme that considers the lagged response of internal states in a distributed hydrologic model, using two filtering schemes: particle filtering (PF) and ensemble Kalman filtering (EnKF). The EnKF is one of the most widely used sub-optimal filters, offering efficient computation with a limited number of ensemble members, but it relies on a Gaussian approximation. PF is an alternative in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles, without any assumptions about the nature of the distributions involved. For PF, an advanced particle regularization scheme is also implemented to preserve the diversity of the particle system. For EnKF, the ensemble square root filter (EnSRF) is implemented. Each filtering method is parallelized and implemented on a high-performance computing system. A distributed hydrologic model, the water and energy transfer processes (WEP) model, is applied to the Katsura River catchment, Japan, to demonstrate the applicability of the proposed approaches. Forecast results from PF and EnKF are compared and analyzed in terms of prediction accuracy and probabilistic adequacy. Discussion focuses on the prospects and limitations of each data assimilation method.
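A toy illustration of the two update rules being compared, on a scalar state with a direct observation: the EnKF analysis assumes Gaussian statistics and uses a sample Kalman gain, while the PF reweights and resamples particles with no distributional assumption. The lagged-state treatment and the WEP model are not reproduced here; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 500, 0.5**2                       # ensemble/particle size, obs variance
ens = rng.normal(1.0, 1.0, n)            # prior (forecast) ensemble
y = 0.3                                  # observation

# EnKF analysis: Gaussian assumption, Kalman gain from sample statistics.
p = np.var(ens, ddof=1)                  # forecast error variance
k = p / (p + r)                          # Kalman gain (observation operator H = 1)
perturbed = y + rng.normal(0, np.sqrt(r), n)
enkf_post = ens + k * (perturbed - ens)

# PF update: importance weights plus multinomial resampling, no Gaussianity.
w = np.exp(-0.5 * (y - ens)**2 / r)
w /= w.sum()
pf_post = ens[rng.choice(n, size=n, p=w)]

print(enkf_post.mean(), pf_post.mean())
```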
NASA Astrophysics Data System (ADS)
Sapra, Karan; Gupta, Saurabh; Atchley, Scott; Anantharaj, Valentine; Miller, Ross; Vazhkudai, Sudharshan
2016-04-01
Efficient resource utilization is critical for improved end-to-end computing and workflow of scientific applications. Heterogeneous node architectures, such as the GPU-enabled Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), present us with further challenges. In many HPC applications on Titan, the accelerators are the primary compute engines while the CPUs orchestrate the offloading of work onto the accelerators and move the output back to main memory. In applications that do not exploit GPUs, on the other hand, CPU usage is dominant while the GPUs sit idle. We utilized the Heterogeneous Functional Partitioning (HFP) runtime framework, which can optimize usage of resources on a compute node to expedite an application's end-to-end workflow. This approach differs from existing techniques for in-situ analyses in that it provides a framework for on-the-fly, on-node analysis by dynamically exploiting under-utilized resources therein. We have implemented in the Community Earth System Model (CESM) a new concurrent diagnostic processing capability enabled by the HFP framework. Various single-variate statistics, such as means and distributions, are computed in-situ by launching HFP tasks on the GPU via the node-local HFP daemon. Since our current configuration of CESM does not use GPU resources heavily, we can move these tasks to the GPU using the HFP framework. Each rank running the atmospheric model in CESM pushes the variables of interest via HFP function calls to the HFP daemon. This node-local daemon is responsible for receiving the data from the main program and launching the designated analytics tasks on the GPU. We have implemented these analytics tasks in C and use OpenACC directives to enable GPU acceleration. This methodology is also advantageous when executing GPU-enabled configurations of CESM, where the CPUs would otherwise be idle during portions of the runtime. Our implementation results demonstrate that it is more efficient to use the HFP framework to offload these tasks to the GPU than to perform them in the main application. We observe increased resource utilization and overall productivity by using the HFP framework for the end-to-end workflow.
Hyder, Adnan A; Alonge, Olakunle; He, Siran; Wadhwaniya, Shirin; Rahman, Fazlur; El Arifeen, Shams
2014-12-01
Drowning is the commonest cause of injury-related deaths among under-five children worldwide, and 95% of deaths occur in low- and middle-income countries (LMICs) where there are implementation gaps in the drowning prevention interventions. This article reviews common interventions for drowning prevention, introduces a framework for effective implementation of such interventions, and describes the Saving of Lives from Drowning (SoLiD) Project in Bangladesh, which is based on this framework. A review of the systematic reviews on drowning interventions was conducted, and original research articles were pulled and summarized into broad prevention categories. The implementation framework builds upon two existing frameworks and categorizes the implementing process for drowning prevention interventions into four phases: planning, engaging, executing, and evaluating. Eleven key characteristics are mapped in these phases. The framework was applied to drowning prevention projects that have been undertaken in some LMICs to illustrate major challenges to implementation. The implementation process for the SoLiD Project in Bangladesh is used as an example to illustrate the practical utilization of the framework. Drowning interventions, such as pool fencing and covering of water hazards, are effective in high-income countries; however, most of these interventions have not been tested in LMICs. The critical components of the four phases of implementing drowning prevention interventions may include: (i) planning-global funding, political will, scale, sustainability, and capacity building; (ii) engaging-coordination, involvement of appropriate individuals; (iii) executing-focused action, multisectoral actions, quality of execution; and (iv) evaluating-rigorous monitoring and evaluation. Some of the challenges to implementing drowning prevention interventions in LMICs include insufficient funds, lack of technical capacity, and limited coordination among stakeholders and implementers. The SoLiD Project in Bangladesh incorporates some of these lessons and key features of the proposed framework. The framework presented in this paper was a useful tool for implementing drowning prevention interventions in Bangladesh and may be useful for adaptation in drowning and injury prevention programmes of other LMIC settings.
Photochemical Phenomenology Model for the New Millennium
NASA Technical Reports Server (NTRS)
Bishop, James; Evans, J. Scott
2001-01-01
The "Photochemical Phenomenology Model for the New Millennium" project tackles the issue of reengineering and extension of validated physics-based modeling capabilities ("legacy" computer codes) to application-oriented software for use in science and science-support activities. While the design and architecture layouts are in terms of general particle distributions involved in scattering, impact, and reactive interactions, initial Photochemical Phenomenology Modeling Tool (PPMT) implementations are aimed at construction and evaluation of photochemical transport models with rapid execution for use in remote sensing data analysis activities in distributed systems. Current focus is on the Composite Infrared Spectrometer (CIRS) data acquired during the CASSINI flyby of Jupiter. Overall, the project has stayed on the development track outlined in the Year 1 annual report and most Year 2 goals have been met. The issues that have required the most attention are: implementation of the core photochemistry algorithms; implementation of a functional Java Graphical User Interface; completion of a functional CORBA Component Model framework; and assessment of performance issues. Specific accomplishments and the difficulties encountered are summarized in this report. Work to be carried out in the next year center on: completion of testing of the initial operational implementation; its application to analysis of the CASSINI/CIRS Jovian flyby data; extension of the PPMT to incorporate additional phenomenology algorithms; and delivery of a mature operational implementation.
Creating the Future: Changing Culture Through Leadership Capacity Development
NASA Astrophysics Data System (ADS)
Lefoe, Geraldine
Leadership for change is key to universities finding new ways to meet the needs of their future students. This chapter describes an innovative framework for leadership capacity development which has been implemented in a number of Australian universities. The framework, underpinned by a distributive approach to leadership, prepares a new generation of leaders for formal positions of leadership in all aspects of teaching and learning. The faculty scholars implemented projects, a number of which used innovative technologies, to establish strategic change within their faculties. They shared their outcomes annually through national roundtables, which focussed on methods for improving assessment practice. Critical factors for success are discussed, including implementation of strategic faculty-based projects; formal leadership training and related activities; opportunities for dialog about leadership practice and experiences; and activities that expanded current professional networks. The model can be adapted to have a specific focus on leadership for e-Learning, and some examples of faculty-based strategic initiatives are described.
Distributed visualization framework architecture
NASA Astrophysics Data System (ADS)
Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger
2010-01-01
An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy-to-use and extensible framework for research in scientific visualization. The system provides both single-user and collaborative distributed environments. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These lightweight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance, for rendering). A middle tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization. These include a layer compositor editor, a programmable shader editor, a material editor, and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager. The asset manager keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically. Hence, if a new component is added that supports the IMaterial interface, any instances of it can be used in the various GUI components that work with this interface. One of the main features is an interactive shader designer, which allows rapid prototyping of new shader-based visualization renderings and greatly accelerates the development and debug cycle.
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro
2016-08-01
We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N^2) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10^7) to 300 ms (N = 10^9). These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
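FDPS itself is a C++ template library; the premise above is that a user supplies only a simple interaction kernel of O(N^2) cost and the framework handles decomposition, particle exchange, and tree-based acceleration. A purely illustrative Python version of such a direct-sum gravity kernel (with G = 1 and an assumed softening length) might look like this:

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Direct O(N^2) gravitational accelerations with softening eps (G = 1)."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                          # vectors to all other particles
        r2 = (d * d).sum(axis=1) + eps**2          # softened squared distances
        r2[i] = np.inf                             # exclude self-interaction
        acc[i] = (mass[:, None] * d / r2[:, None]**1.5).sum(axis=0)
    return acc
```

In the FDPS scheme, a kernel of this simplicity is what the user writes, while the framework supplies the domain decomposition and communication described in the abstract.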
Nims, Robert J; Durney, Krista M; Cigan, Alexander D; Dusséaux, Antoine; Hung, Clark T; Ateshian, Gerard A
2016-02-06
This study presents a damage mechanics framework that employs observable state variables to describe damage in isotropic or anisotropic fibrous tissues. In this mixture theory framework, damage is tracked by the mass fraction of bonds that have broken. Anisotropic damage is subsumed in the assumption that multiple bond species may coexist in a material, each having its own damage behaviour. This approach recovers the classical damage mechanics formulation for isotropic materials, but does not appeal to a tensorial damage measure for anisotropic materials. In contrast with the classical approach, the use of observable state variables for damage allows direct comparison of model predictions to experimental damage measures, such as biochemical assays or Raman spectroscopy. Investigations of damage in discrete fibre distributions demonstrate that the resilience to damage increases with the number of fibre bundles; idealizing fibrous tissues using continuous fibre distribution models precludes the modelling of damage. This damage framework was used to test and validate the hypothesis that growth of cartilage constructs can lead to damage of the synthesized collagen matrix due to excessive swelling caused by synthesized glycosaminoglycans. Therefore, alternative strategies must be implemented in tissue engineering studies to prevent collagen damage during the growth process.
Chua, Jonathan Raphacis; Chu, Chi Meng; Yim, Grace; Chong, Dominic; Teoh, Jennifer
2014-11-02
The Risk-Need-Responsivity (RNR) framework is regarded as the forefront of offender rehabilitation in guiding youth offender risk assessment and interventions. This article discusses the juvenile justice system in Singapore and the local research that has been conducted in relation to the RNR framework and the associated Youth Level of Service (YLS) measures. It describes a journey that saw the implementation of the RNR framework across the juvenile justice agencies and highlights the challenges that were faced during the implementation process on the ground. Finally, the article concludes by providing future directions for the implementation of the RNR framework in Singapore.
Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing
NASA Astrophysics Data System (ADS)
Klems, Markus; Nimis, Jens; Tai, Stefan
On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for some time. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability of Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measuring the costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and comparing these costs to those of conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real-world scenarios.
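As a hedged illustration of the kind of valuation such a framework structures, the sketch below compares a pay-per-use cloud cost against a fixed on-premise cost for several utilization levels. All rates and capacities are invented, not taken from the paper.

```python
# Toy cloud-vs-on-premise cost comparison; every number is an assumption.
hours_per_month = 730
onprem_monthly = 4_000.0            # amortized server + power + admin (assumed)
cloud_rate = 0.50                   # $/instance-hour (assumed)

def cloud_cost(avg_instances):
    """Pay-per-use cost for a given average number of concurrent instances."""
    return cloud_rate * avg_instances * hours_per_month

for load in (2, 6, 12):             # average concurrent instances (assumed)
    c = cloud_cost(load)
    verdict = "cloud cheaper" if c < onprem_monthly else "on-prem cheaper"
    print(f"{load:>3} instances: cloud ${c:,.0f}/mo vs on-prem ${onprem_monthly:,.0f}/mo -> {verdict}")
```

The break-even point moves with utilization, which is exactly the kind of sensitivity a decision maker would explore with the paper's framework.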
FunBlocks. A modular framework for AmI system development.
Baquero, Rafael; Rodríguez, José; Mendoza, Sonia; Decouchant, Dominique; Papis, Alfredo Piero Mateos
2012-01-01
The last decade has seen explosive growth in the technologies required to implement Ambient Intelligence (AmI) systems. Technologies such as facial and speech recognition, home networks, household cleaning robots, to name a few, have become commonplace. However, due to the multidisciplinary nature of AmI systems and the distinct requirements of different user groups, integrating these developments into full-scale systems is not an easy task. In this paper we propose FunBlocks, a minimalist modular framework for the development of AmI systems based on the function module abstraction used in the IEC 61499 standard for distributed control systems. FunBlocks provides a framework for the development of AmI systems through the integration of modules loosely joined by means of an event-driven middleware and a module and sensor/actuator catalog. The modular design of the FunBlocks framework allows the development of AmI systems which can be customized to a wide variety of usage scenarios.
Quality Implementation in Transition: A Framework for Specialists and Administrators.
ERIC Educational Resources Information Center
Wald, Judy L.; Repetto, Jeanne B.
1995-01-01
Quality Implementation in Transition is a framework designed to guide transition specialists and administrators in the implementation of total quality management. The framework uses the tenets set forth by W. Edwards Deming and is intended to help professionals facilitate change within transition programs. (Author/JOW)
Richardson, Joshua E; Abramson, Erika L; Pfoh, Elizabeth R; Kaushal, Rainu
2012-01-01
Effective electronic health record (EHR) implementations in community settings are critical to promoting safe and reliable EHR use, as well as to mitigating the provider dissatisfaction that often results. The implementation challenge is compounded given the scale and scope of EHR installations that are occurring and will continue to occur over the next five years. However, relatively few biomedical informatics researchers have published on evaluating EHR implementations, compared with the number who have published EHR evaluations. Fewer still have evaluated EHR implementations in community settings. We report on the methods we used to achieve a novel application of an implementation science framework in informatics to qualitatively evaluate community-based EHR implementations. We briefly provide an overview of the implementation science framework, describe our methods for adapting it to informatics and the effects the framework had on our qualitative methods of inquiry and analysis, and discuss its potential value for informatics research.
Generic framework for vessel detection and tracking based on distributed marine radar image data
NASA Astrophysics Data System (ADS)
Siegert, Gregor; Hoth, Julian; Banyś, Paweł; Heymann, Frank
2018-04-01
Situation awareness is understood as a key requirement for safe and secure shipping at sea. The primary sensor for maritime situation assessment is still the radar, with the AIS having been introduced as a supplemental service only. In this article, we present a framework to assess the current situation picture based on marine radar image processing. Essentially, the framework comprises a centralized IMM-JPDA multi-target tracker in combination with a fully automated scheme for track management, i.e., target acquisition and track depletion. This tracker is conditioned on measurements extracted from radar images. To gain a more robust and complete situation picture, we exploit the aspect-angle diversity of multiple marine radars by fusing their data prior to the tracking process. Due to the generic structure of the proposed framework, different techniques for radar image processing can be implemented and compared, namely the BLOB detector and SExtractor. The overall framework performance in terms of multi-target state estimation is compared for both methods based on a dedicated measurement campaign in the Baltic Sea with multiple static and mobile targets.
NASA Astrophysics Data System (ADS)
Bosse, Stefan
2013-05-01
Sensorial materials consisting of high-density, miniaturized, and embedded sensor networks require new robust and reliable data processing and communication approaches. Structural health monitoring is one major field of application for sensorial materials. Each sensor node provides some kind of sensor, electronics, data processing, and communication, with a strong focus on microchip-level implementation to meet the goals of miniaturization and low-power operation, a prerequisite for autonomous behaviour and operation. Reliability requires robustness of the entire system in the presence of node, link, data processing, and communication failures. Interaction between nodes is required to manage and distribute information. One common interaction model is the mobile agent. An agent approach provides stronger autonomy than a traditional object or remote-procedure-call based approach. Agents can decide for themselves which actions are performed, and they are capable of flexible behaviour, reacting to the environment and other agents, providing some degree of robustness. Traditionally, multi-agent systems are abstract programming models which are implemented in software and executed on program-controlled computer architectures. This approach does not scale well to the microchip level, requires fully equipped computers and communication structures, and yields hardware architectures that do not consider or reflect the requirements of agent processing and interaction. We propose and demonstrate a novel design paradigm for reliable distributed data processing systems and a synthesis methodology and framework for multi-agent systems implementable entirely at the microchip level with resource- and power-constrained digital logic, supporting Agent-on-Chip (AoC) architectures. The agent behaviour and mobility are fully integrated on the microchip using pipelined communicating processes implemented with finite-state machines and register-transfer logic. The agent behaviour, interaction (communication), and mobility features are modelled and specified at a machine-independent, abstract programming level using a state-based agent behaviour programming language (APL). With this APL, a high-level agent compiler is able to synthesize a hardware model (RTL, VHDL), a software model (C, ML), or a simulation model (XML) suitable for simulating a multi-agent system with the SeSAm simulator framework. Agent communication is provided by a simple tuple-space database implemented at node level, providing fault-tolerant access to global data. A novel synthesis development kit (SynDK) based on a graph-structured database approach is introduced to support the rapid development of compilers and synthesis tools, used, for example, for the design and implementation of the APL compiler.
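Agent coordination above relies on a node-level tuple-space database. A minimal Linda-style sketch of that abstraction (deposit, non-destructive read, and destructive take, with wildcard matching) is shown below; the APL compiler and RTL synthesis are of course not reproducible at this scale, and all names here are hypothetical.

```python
class TupleSpace:
    """Minimal Linda-style tuple space; None fields act as wildcards."""

    def __init__(self):
        self._store = []

    def out(self, tup):
        """Deposit a tuple for other agents to find."""
        self._store.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == v for p, v in zip(pattern, tup))

    def rd(self, pattern):
        """Non-destructive read of the first matching tuple (or None)."""
        return next((t for t in self._store if self._match(pattern, t)), None)

    def inp(self, pattern):
        """Destructive read (take): remove and return the matching tuple."""
        t = self.rd(pattern)
        if t is not None:
            self._store.remove(t)
        return t

ts = TupleSpace()
ts.out(("temp", "node42", 21.5))          # a sensor agent publishes a reading
print(ts.rd(("temp", None, None)))        # another agent looks it up
```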
Development of a web service for analysis in a distributed network.
Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila
2014-01-01
We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. We recently developed and published a web service for model construction and data analysis in a distributed environment. That paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. We followed a two-stage development approach, first implementing the backbone system and then incrementally improving the user experience through interactions with potential users during development. Our system went through various stages, such as proof of concept, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support cross-platform deployment. During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We identified solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes.
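The style of aggregation GLORE performs can be sketched as a distributed Newton-Raphson iteration in which each site returns only the gradient and Hessian of its local logistic log-likelihood at the current coefficients; no patient-level rows leave a site. This is a simplified sketch of that idea, not the production web service:

```python
import numpy as np

def local_stats(X, y, beta):
    """One site's contribution: gradient and Hessian of its log-likelihood."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    hess = -(X * (p * (1.0 - p))[:, None]).T @ X
    return grad, hess

def newton_round(sites, beta):
    """Coordinator: sum the per-site statistics and take one Newton step."""
    stats = [local_stats(X, y, beta) for X, y in sites]
    grad = sum(g for g, _ in stats)
    hess = sum(h for _, h in stats)
    return beta - np.linalg.solve(hess, grad)

rng = np.random.default_rng(2)
sites = [(rng.normal(size=(200, 3)), rng.integers(0, 2, 200).astype(float))
         for _ in range(3)]              # three sites' private data (simulated)
beta = np.zeros(3)
for _ in range(10):
    beta = newton_round(sites, beta)     # only aggregates cross site boundaries
```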
The NGEE Arctic Data Archive -- Portal for Archiving and Distributing Data and Documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boden, Thomas A; Palanisamy, Giri; Devarakonda, Ranjeet
2014-01-01
The Next-Generation Ecosystem Experiments (NGEE Arctic) project is committed to implementing a rigorous and high-quality data management program. The goal is to implement innovative and cost-effective guidelines and tools for collecting, archiving, and sharing data within the project, the larger scientific community, and the public. The NGEE Arctic web site is the framework for implementing these data management and data sharing tools. The open sharing of NGEE Arctic data among project researchers, the broader scientific community, and the public is critical to meeting the scientific goals and objectives of the NGEE Arctic project and critical to advancing the mission of the Department of Energy (DOE), Office of Science, Biological and Environmental Research (BER) Terrestrial Ecosystem Science (TES) program.
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations due to the minimal assumptions made or, with more specific assumptions, are computationally intensive. By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
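A sketch of the two-component idea: model the gene-level z-scores as a mixture pi0*N(mu0, s0^2) + pi1*N(mu1, s1^2) fitted by plain EM, and report each gene's posterior probability of belonging to the null component. Starting values and the simulated data below are arbitrary assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import norm

def fit_two_component(z, iters=200):
    """EM fit of w0*N(mu0, sd0^2) + w1*N(mu1, sd1^2) to z-scores."""
    w = np.array([0.9, 0.1])                 # starting weights (assumed)
    mu = np.array([0.0, 2.0])                # starting means (assumed)
    sd = np.array([1.0, 1.0])                # starting std devs (assumed)
    for _ in range(iters):
        dens = w * norm.pdf(z[:, None], mu, sd)          # E-step
        resp = dens / dens.sum(axis=1, keepdims=True)
        w = resp.mean(axis=0)                            # M-step
        mu = (resp * z[:, None]).sum(axis=0) / resp.sum(axis=0)
        sd = np.sqrt((resp * (z[:, None] - mu)**2).sum(axis=0) / resp.sum(axis=0))
    return w, mu, sd, resp[:, 0]             # resp[:, 0]: posterior null probability

rng = np.random.default_rng(3)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(2.5, 1, 100)])
w, mu, sd, p_null = fit_two_component(z)    # p_null: per-gene null probability
```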
A Distributed Dynamic Programming-Based Solution for Load Management in Smart Grids
NASA Astrophysics Data System (ADS)
Zhang, Wei; Xu, Yinliang; Li, Sisi; Zhou, MengChu; Liu, Wenxin; Xu, Ying
2018-03-01
Load management is being recognized as an important option for active user participation in the energy market. Traditional load management methods usually require a centralized, powerful control center and a two-way communication network between the system operators and energy end-users. Increasing user participation in smart grids may limit the applicability of such methods. In this paper, a distributed solution for load management in emerging smart grids is proposed. The load management problem is formulated as a constrained optimization problem aiming at maximizing the overall utility of users while meeting the requirement for load reduction requested by the system operator, and it is solved using a distributed dynamic programming algorithm. The algorithm is implemented via a distributed framework and thus delivers a highly desired distributed solution. It avoids the use of a centralized coordinator or control center and achieves satisfactory outcomes for load management. Simulation results with various test systems demonstrate its effectiveness.
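The paper's algorithm is a distributed dynamic program; as a simpler stand-in with the same problem shape (maximize total user utility subject to a requested total load reduction), the sketch below coordinates users through a shared price signal via dual decomposition. Quadratic user costs and all constants are assumptions.

```python
import numpy as np

a = np.array([1.0, 2.0, 4.0])       # per-user flexibility weights (assumed)
cap = np.array([3.0, 3.0, 3.0])     # per-user maximum reduction (assumed)
R = 5.0                             # total reduction requested by the operator
lam, step = 0.5, 0.05               # price signal and its update step

for _ in range(2000):
    # Each user minimizes its quadratic discomfort x^2/(2*a_i) minus the
    # payment lam*x, which gives the closed-form local response below.
    x = np.clip(a * lam, 0.0, cap)
    lam += step * (R - x.sum())     # operator nudges the price toward feasibility

print(x, x.sum(), lam)              # per-user reductions meeting the target R
```

No user reveals its utility function; only the scalar price and the aggregate reduction cross the network, which mirrors the decentralized flavor of the paper's scheme.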
Intelligent Control of Micro Grid: A Big Data-Based Control Center
NASA Astrophysics Data System (ADS)
Liu, Lu; Wang, Yanping; Liu, Li; Wang, Zhiseng
2018-01-01
In this paper, a structure for a micro grid system with a big data-based control center is introduced. Energy data from distributed generation, storage, and load are analyzed through the control center, and from the results new trends are predicted and applied as feedback to optimize the control. Therefore, each step in the micro grid can be adjusted and organized in a form of comprehensive management. A framework for real-time data collection, data processing, and data analysis is proposed by employing big data technology. Consequently, integrated distributed generation and an optimized energy storage and transmission process can be implemented in the micro grid system.
NASA Technical Reports Server (NTRS)
Filman, Robert E.
2003-01-01
This viewgraph presentation provides information on Object Infrastructure Framework (OIF), an Aspect-Oriented Programming (AOP) system. The presentation begins with an introduction to the difficulties and requirements of distributed computing, including functional and non-functional requirements (ilities). The architecture of Distributed Object Technology includes stubs, proxies for implementation objects, and skeletons, proxies for client applications. The key OIF ideas (injecting behavior, annotated communications, thread contexts, and pragma) are discussed. OIF is an AOP mechanism; AOP is centered on: 1) Separate expression of crosscutting concerns; 2) Mechanisms to weave the separate expressions into a unified system. AOP is software engineering technology for separately expressing systematic properties while nevertheless producing running systems that embody these properties.
Livermore Big Artificial Neural Network Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Essen, Brian Van; Jacobs, Sam; Kim, Hyojin
2016-07-01
LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training; specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.
Improving Programs and Outcomes: Implementation Frameworks and Organization Change
ERIC Educational Resources Information Center
Bertram, Rosalyn M.; Blase, Karen A.; Fixsen, Dean L.
2015-01-01
This article presents recent refinements to implementation constructs and frameworks. It updates and clarifies the frequently cited study conducted by the National Implementation Research Network that introduced these frameworks for application in diverse endeavors. As such, it may serve as a historical marker in the rapidly developing science and…
A conceptual framework for implementation fidelity
Carroll, Christopher; Patterson, Malcolm; Wood, Stephen; Booth, Andrew; Rick, Jo; Balain, Shashi
2007-01-01
Background Implementation fidelity refers to the degree to which an intervention or programme is delivered as intended. Only by understanding and measuring whether an intervention has been implemented with fidelity can researchers and practitioners gain a better understanding of how and why an intervention works, and the extent to which outcomes can be improved. Discussion The authors undertook a critical review of existing conceptualisations of implementation fidelity and developed a new conceptual framework for understanding and measuring the process. The resulting theoretical framework requires testing by empirical research. Summary Implementation fidelity is an important source of variation affecting the credibility and utility of research. The conceptual framework presented here offers a means for measuring this variable and understanding its place in the process of intervention implementation. PMID:18053122
An Incentive-based Online Optimization Framework for Distribution Grids
Zhou, Xinyang; Dall'Anese, Emiliano; Chen, Lijun; ...
2017-10-09
This article formulates a time-varying social-welfare maximization problem for distribution grids with distributed energy resources (DERs) and develops online distributed algorithms to identify (and track) its solutions. In the considered setting, the network operator and DER-owners pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. The proposed algorithm affords an online implementation to enable tracking of the solutions in the presence of time-varying operational conditions and changing optimization objectives. It involves a strategy where the network operator collects voltage measurements throughout the feeder to build incentive signals for the DER-owners in real time; DERs then adjust the generated/consumed powers in order to avoid the violation of the voltage constraints while maximizing given objectives. Stability of the proposed schemes is analytically established and numerically corroborated.
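A toy version of the feedback loop described above: the operator converts voltage measurements into dual (incentive) signals, and each DER-owner adjusts its injection against its own objective minus that signal. The linear voltage model and every constant below are invented for illustration; the paper's analysis covers the general time-varying case.

```python
import numpy as np

X = np.array([[0.02, 0.01],         # assumed linear sensitivity of bus voltages
              [0.01, 0.03]])        # to DER injections (p.u. per unit power)
v0 = np.array([0.99, 1.00])         # base voltages with no DER output (p.u.)
vmax, eta = 1.02, 200.0             # upper voltage limit and dual step size
c = np.array([1.0, 0.8])            # DER-owners' marginal benefits (assumed)
mu = np.zeros(2)                    # dual variables on the voltage limits

for t in range(300):
    # DER-owner step: maximize c_i*p_i - p_i^2/2 minus the incentive term,
    # which has the closed-form solution below (clipped to capacity).
    p = np.clip(c - X.T @ mu, 0.0, 2.0)
    v = v0 + X @ p                  # stands in for feeder voltage measurements
    mu = np.maximum(mu + eta * (v - vmax), 0.0)   # operator's dual update

print(p, v)                         # injections settle with voltages at/below vmax
```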
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Simonetto, Andrea
This paper considers distribution networks featuring inverter-interfaced distributed energy resources, and develops distributed feedback controllers that continuously drive the inverter output powers to solutions of AC optimal power flow (OPF) problems. Particularly, the controllers update the power setpoints based on voltage measurements as well as given (time-varying) OPF targets, and entail elementary operations implementable on low-cost microcontrollers that accompany power-electronics interfaces of gateways and inverters. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. Convergence and OPF-target tracking capabilities of the controllers are analytically established. Overall, the proposed method allows one to bypass traditional hierarchical setups where feedback control and optimization operate at distinct time scales, and to enable real-time optimization of distribution systems.
NASA Astrophysics Data System (ADS)
Wang, Jia Jie; Wriedt, Thomas; Han, Yi Ping; Mädler, Lutz; Jiao, Yong Chang
2018-05-01
Light scattering by a radially inhomogeneous droplet, modeled as a multilayered sphere, is investigated within the framework of the Generalized Lorenz-Mie Theory (GLMT), with particular attention to the analysis of the internal field distribution under shaped-beam illumination. To circumvent numerical difficulties in the computation of the internal field for an absorbing or non-absorbing droplet with a very large size parameter, a recursive algorithm is proposed by reformulating the equations for the expansion coefficients. Two approaches are proposed for predicting the internal field distribution, namely a rigorous method and an approximation method. The developed computer code is shown to be stable over a wide range of size parameters. Numerical computations are performed to simulate the internal field distributions of a radially inhomogeneous droplet illuminated by a focused Gaussian beam.
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model which enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.
The blackboard model - A framework for integrating multiple cooperating expert systems
NASA Technical Reports Server (NTRS)
Erickson, W. K.
1985-01-01
The use of an artificial intelligence (AI) architecture known as the blackboard model is examined as a framework for designing and building distributed systems requiring the integration of multiple cooperating expert systems (MCXS). Aerospace vehicles provide many examples of potential systems, ranging from commercial and military aircraft to spacecraft such as satellites, the Space Shuttle, and the Space Station. One such system, free-flying, spaceborne telerobots to be used in construction, servicing, inspection, and repair tasks around NASA's Space Station, is examined. The major difficulties found in designing and integrating the individual expert system components necessary to implement such a robot are outlined. The blackboard model, a general expert system architecture which seems to address many of the problems found in designing and building such a system, is discussed. A progress report on a prototype system under development called DBB (Distributed BlackBoard model) is given. The prototype will act as a testbed for investigating the feasibility, utility, and efficiency of MCXS-based designs developed under the blackboard model.
Jali - Unstructured Mesh Infrastructure for Multi-Physics Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garimella, Rao V; Berndt, Markus; Coon, Ethan
2017-04-13
Jali is a parallel unstructured mesh infrastructure library designed for use by multi-physics simulations. It supports 2D and 3D arbitrary polyhedral meshes distributed over hundreds to thousands of nodes. Jali can read and write Exodus II meshes, along with fields and sets on the mesh; support for other formats is partially implemented or planned. Jali is built on MSTK (https://github.com/MeshToolkit/MSTK), an open source general purpose unstructured mesh infrastructure library from Los Alamos National Laboratory. While it has been made to work with other mesh frameworks such as MOAB and STKmesh in the past, support for maintaining the interface to these frameworks has been suspended for now. Jali supports distributed as well as on-node parallelism. Support for on-node parallelism is through direct use of the mesh in multi-threaded constructs or through the use of "tiles", which are submeshes or sub-partitions of a partition destined for a compute node.
Georgiou, Andrew; Westbrook, Johanna I; Braithwaite, Jeffrey
2012-07-12
The purpose of this paper is to illustrate the Elementally Entangled Organisational Communication (EEOC) framework by drawing on a set of three case studies which assessed the impact of new Health Information Technology (HIT) on a pathology service. The EEOC framework was empirically developed as a tool to tackle organisational communication challenges in the implementation and evaluation of health information systems. The framework was synthesised from multiple research studies undertaken across a major metropolitan hospital pathology service during the period 2005 to 2008. These studies evaluated the impact of new HIT systems in pathology departments (Laboratory Information System) and an Emergency Department (Computerised Provider Order Entry) located in Sydney, Australia. Key dimensions of EEOC are illustrated by the following case studies: 1) the communication infrastructure between the Blood Bank and the ward for the coordination and distribution of blood products; 2) the organisational environment in the Clinical Chemistry and Haematology departments and their attempts to organise, plan and control the processing of laboratory specimens; and 3) the temporal make up of the organisation as revealed in changes to the way the Central Specimen Reception allocated, sequenced and synchronised work tasks. The case studies not only highlight the pre-existing communication architecture within the organisation but also the constitutive role communication plays in the way organisations go about addressing their requirements. HIT implementation involves a mutual transformation of the organisation and the technology. This is a vital consideration because of the dangers associated with poor organisational planning and implementation of HIT, and the potential for unintended adverse consequences, workarounds and risks to the quality and safety of patient care. The EEOC framework aims to account for the complex range of contextual factors and triggers that play a role in the success or otherwise of new HITs, and in the realisation of their innovation potential.
A Framework for Parallel Unstructured Grid Generation for Complex Aerodynamic Simulations
NASA Technical Reports Server (NTRS)
Zagaris, George; Pirzadeh, Shahyar Z.; Chrisochoides, Nikos
2009-01-01
A framework for parallel unstructured grid generation targeting both shared memory multi-processors and distributed memory architectures is presented. The two fundamental building blocks of the framework are: (1) the Advancing-Partition (AP) method, used for domain decomposition, and (2) the Advancing Front (AF) method, used for mesh generation. Starting from the surface mesh of the computational domain, the AP method is applied recursively to generate a set of sub-domains. Next, the sub-domains are meshed in parallel using the AF method. The recursive nature of the domain decomposition naturally maps to a divide-and-conquer algorithm which exhibits inherent parallelism. For the parallel implementation, the Master/Worker pattern is employed to dynamically balance the varying workloads of each task on the set of available CPUs. Performance results for this approach are presented and discussed in detail, along with future work and improvements.
A generalized threshold model for computing bed load grain size distribution
NASA Astrophysics Data System (ADS)
Recking, Alain
2016-12-01
For morphodynamic studies, it is important to compute not only the transported volumes of bed load, but also the size of the transported material. A few bed load equations compute fractional transport (i.e., both the volume and the grain size distribution), but many equations compute only the bulk transport (a volume) with no consideration of the transported grain sizes. To fill this gap, a method is proposed to compute the bed load grain size distribution separately from the bed load flux. The method is called the Generalized Threshold Model (GTM) because it extends the flow-competence method, for the threshold of motion of the largest transported grain size, to the full bed surface grain size distribution. This was achieved by replacing dimensional diameters with their size indices in the standard hiding function, which offers a useful framework for computation, carried out for each index in the range [1, 100]. New functions are also proposed to account for partial transport. The method is very simple to implement and is sufficiently flexible to be tested in many environments. In addition to being a good complement to standard bulk bed load equations, it could also serve as a framework to assist in analyzing the physics of bed load transport in future research.
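An illustrative sketch of the indexing idea (not the paper's calibrated functions): work with size indices 1-100 of the bed-surface grain size distribution, apply a placeholder threshold/partial-transport rule per index, and renormalize to obtain the bed-load grain size distribution. The mobility rule and all constants below are invented assumptions.

```python
import numpy as np

idx = np.arange(1, 101)                 # size indices of the surface GSD
surface_frac = np.full(100, 0.01)       # surface GSD by index (uniform toy case)
tau_star = 0.08                         # dimensionless shear stress (assumed)

# Placeholder mobility rule: indices below a flow-dependent threshold are
# fully mobile, with partial transport tapering linearly above it.
i_thresh = 100.0 * tau_star / 0.1       # invented flow-competence threshold
mobility = np.clip(1.0 - (idx - i_thresh) / 20.0, 0.0, 1.0)

bedload_frac = surface_frac * mobility
bedload_frac /= bedload_frac.sum()      # bed-load grain size distribution
```

The point of the sketch is only the separation the paper argues for: the grain size distribution comes from the per-index threshold rule, while the transported volume can be supplied by any standard bulk bed load equation.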
NASA Astrophysics Data System (ADS)
Glynn, P. D.; Jones, J. W.; Liu, S. B.; Shapiro, C. D.; Jenter, H. L.; Hogan, D. M.; Govoni, D. L.; Poore, B. S.
2014-12-01
We describe a conceptual framework for Citizen Science that can be applied to improve the understanding and management of natural resources and environments. For us, Citizen Science represents the engagement of members of the public, usually volunteers, in collaboration with paid professionals and technical experts to observe and understand natural resources and environments for the benefit of science and society. Our conceptual framework for Citizen Science includes crowdsourcing of observations (or sampling). It considers a wide range of activities, including volunteer and professional monitoring (e.g. weather and climate variables, water availability and quality, phenology, biota, image capture and remote sensing), as well as joint fact finding and analyses, and participatory mapping and modeling. Spatial distribution and temporal dynamics of the biophysical processes that control natural resources and environments are taken into account within this conceptual framework, as are the availability, scaling and diversity of tools and efforts that are needed to properly describe these biophysical processes. Opportunities are sought within the framework to properly describe, QA/QC, archive, and make readily accessible the large amounts of information and traceable knowledge required to better understand and manage natural resources and environments. The framework also considers human motivational needs, primarily through a modern version of Maslow's hierarchy of needs. We examine several USGS-based Citizen Science efforts within the context of our framework, including the project called "iCoast - Did the Coast Change?", to understand the utility of the framework, its costs and benefits, and to offer concrete examples of how to expand and sustain specific projects. We make some recommendations that could aid its implementation on a national or larger scale. For example, implementation might be facilitated (1) through greater engagement of paid professionals, and (2) through the involvement of integrating entities, including institutions of learning and agencies with broad science responsibilities.
Template-Based Geometric Simulation of Flexible Frameworks
Wells, Stephen A.; Sartbaeva, Asel
2012-01-01
Specialised modelling and simulation methods implementing simplified physical models are valuable generators of insight. Template-based geometric simulation is a specialised method for modelling flexible framework structures made up of rigid units. We review the background, development and implementation of the method, and its applications to the study of framework materials such as zeolites and perovskites. The “flexibility window” property of zeolite frameworks is a particularly significant discovery made using geometric simulation. Software implementing geometric simulation of framework materials, “GASP”, is freely available to researchers. PMID:28817055
Framework for Leading Next Generation Science Standards Implementation
ERIC Educational Resources Information Center
Stiles, Katherine; Mundry, Susan; DiRanna, Kathy
2017-01-01
In response to the need to develop leaders to guide the implementation of the Next Generation Science Standards (NGSS), the Carnegie Corporation of New York provided funding to WestEd to develop a framework that defines the leadership knowledge and actions needed to effectively implement the NGSS. The development of the framework entailed…
NASA Astrophysics Data System (ADS)
Nogueira, Juan Manuel; Romero, David; Espadas, Javier; Molina, Arturo
2013-02-01
With the emergence of new enterprise models, such as technology-based enterprises, and the large quantity of information generated through technological advances, the Zachman framework continues to represent a modelling tool of great utility and value for constructing an enterprise architecture (EA) that can integrate and align the IT infrastructure and business goals. Nevertheless, implementing an EA requires an important effort within an enterprise. Small technology-based enterprises and start-ups can take advantage of EAs and frameworks but, because these enterprises have limited resources to allocate to this task, an enterprise framework implementation is not feasible in most cases. This article proposes a new methodology, based on action-research, for implementing the business, system and technology models of the Zachman framework, in order to assist and facilitate its adoption. Following an explanation of the cycles of the proposed methodology, a case study is presented to illustrate the results of implementing the Zachman framework in a technology-based enterprise, PyME CREATIVA, using an action-research approach.
A Blueprint for Iowa's Young Children.
ERIC Educational Resources Information Center
Iowa Kids Count Initiative, Des Moines.
Two booklets, "A Blueprint for Iowa's Young: Implementation Directions for the Framework Paper," and "Investing in Families, Prevention and School Readiness: Working Draft of a Framework Paper" present a framework for creation of a blueprint for implementation and management of community investment initiatives. The framework is…
Mesh Algorithms for PDE with Sieve I: Mesh Distribution
Knepley, Matthew G.; Karpeev, Dmitry A.
2009-01-01
We have developed a new programming framework, called Sieve, to support parallel numerical partial differential equation(s) (PDE) algorithms operating over distributed meshes. We have also developed a reference implementation of Sieve in C++ as a library of generic algorithms operating on distributed containers conforming to the Sieve interface. Sieve makes instances of the incidence relation, or arrows, the conceptual first-class objects represented in the containers. Further, generic algorithms acting on this arrow container are systematically used to provide natural geometric operations on the topology and also, through duality, on the data. Finally, coverings and duality are used to encode not only individual meshes, but all types of hierarchies underlying PDE data structures, including multigrid and mesh partitions. In order to demonstrate the usefulness of the framework, we show how the mesh partition data can be represented and manipulated using the same fundamental mechanisms used to represent meshes. We present the complete description of an algorithm to encode a mesh partition and then distribute a mesh, which is independent of the mesh dimension, element shape, or embedding. Moreover, data associated with the mesh can be similarly distributed with exactly the same algorithm. The use of a high level of abstraction within the Sieve leads to several benefits in terms of code reuse, simplicity, and extensibility. We discuss these benefits and compare our approach to other existing mesh libraries.
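A toy arrow container conveys the flavour of the abstraction; this is illustrative Python, not the actual Sieve C++ interface, and the cone/support naming follows the paper's terminology under that assumption.

from collections import defaultdict

# Toy illustration of an arrow-centric incidence relation: mesh topology is
# stored purely as arrows (source covers target), and queries are generic
# algorithms over that container.

class ToySieve:
    def __init__(self):
        self._cone = defaultdict(set)     # point -> points covering it
        self._support = defaultdict(set)  # point -> points it covers

    def add_arrow(self, src, dst):
        """Record that `src` covers `dst` (src is in the cone of dst)."""
        self._cone[dst].add(src)
        self._support[src].add(dst)

    def cone(self, p):
        return self._cone[p]

    def support(self, p):
        return self._support[p]

# A triangle: cell 't' covered by edges, one edge covered by two vertices.
s = ToySieve()
for edge in ("e0", "e1", "e2"):
    s.add_arrow(edge, "t")
s.add_arrow("v0", "e0"); s.add_arrow("v1", "e0")
print(s.cone("t"))        # {'e0', 'e1', 'e2'}
print(s.support("e0"))    # {'t'}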
Alonge, Olakunle; He, Siran; Wadhwaniya, Shirin; Rahman, Fazlur; Rahman, Aminur; Arifeen, Shams El
2014-01-01
Drowning is the commonest cause of injury-related deaths among under-five children worldwide, and 95% of deaths occur in low- and middle-income countries (LMICs) where there are implementation gaps in drowning prevention interventions. This article reviews common interventions for drowning prevention, introduces a framework for effective implementation of such interventions, and describes the Saving of Lives from Drowning (SoLiD) Project in Bangladesh, which is based on this framework. A review of the systematic reviews on drowning interventions was conducted, and original research articles were pulled and summarized into broad prevention categories. The implementation framework builds upon two existing frameworks and categorizes the implementation process for drowning prevention interventions into four phases: planning, engaging, executing, and evaluating. Eleven key characteristics are mapped onto these phases. The framework was applied to drowning prevention projects that have been undertaken in some LMICs to illustrate major challenges to implementation. The implementation process for the SoLiD Project in Bangladesh is used as an example to illustrate the practical utilization of the framework. Drowning interventions, such as pool fencing and covering of water hazards, are effective in high-income countries; however, most of these interventions have not been tested in LMICs. The critical components of the four phases of implementing drowning prevention interventions may include: (i) planning—global funding, political will, scale, sustainability, and capacity building; (ii) engaging—coordination, involvement of appropriate individuals; (iii) executing—focused action, multisectoral actions, quality of execution; and (iv) evaluating—rigorous monitoring and evaluation. Some of the challenges to implementing drowning prevention interventions in LMICs include insufficient funds, lack of technical capacity, and limited coordination among stakeholders and implementers. The SoLiD Project in Bangladesh incorporates some of these lessons and key features of the proposed framework. The framework presented in this paper was a useful tool for implementing drowning prevention interventions in Bangladesh and may be useful for adaptation in drowning and injury prevention programmes of other LMIC settings. PMID:25895188
Walker, Daniel M; Hefner, Jennifer L; Sova, Lindsey N; Hilligoss, Brian; Song, Paula H; McAlearney, Ann Scheck
Accountable care organizations (ACOs) are emerging across the healthcare marketplace and now include Medicare, Medicaid, and private sector payers covering more than 24 million lives. However, little is known about the process of organizational change required to achieve cost savings and quality improvements from the ACO model. This study applies the complex innovation implementation framework to understand the challenges and facilitators associated with the ACO implementation process. We conducted four case studies of private sector ACOs, selected to achieve variation in terms of geography and organizational maturity. Across sites, we used semistructured interviews with 68 key informants to elicit information regarding ACO implementation. Our analysis found challenges and facilitators across all domains in the conceptual framework. Notably, our findings deviated from the framework in two ways. First, findings from the financial resource availability domain revealed both financial and nonfinancial (i.e., labor) resources that contributed to implementation effectiveness. Second, a new domain, patient engagement, emerged as an important factor in implementation effectiveness. We present these deviations in an adapted framework. As the ACO model proliferates, these findings can support implementation efforts, and they highlight the importance of focusing on patients throughout the process. Importantly, this study extends the complex innovation implementation framework to incorporate consumers into the implementation framework, making it more patient centered and aiding future efforts.
Saluja, Saurabh; Silverstein, Allison; Mukhopadhyay, Swagoto; Lin, Yihan; Raykar, Nakul; Keshavjee, Salmaan; Samad, Lubna; Meara, John G
2017-01-01
The Lancet Commission on Global Surgery defined six surgical indicators and a framework for a national surgical plan that aimed to incorporate surgical care as a part of global public health. Multiple countries have since begun national surgical planning; each faces unique challenges in doing so. Implementation science can be used to more systematically explain this heterogeneous process, guide implementation efforts and ultimately evaluate progress. We describe our intervention using the Consolidated Framework for Implementation Research. This framework requires identifying characteristics of the intervention, the individuals involved, the inner and outer setting of the intervention, and finally describing implementation processes. By hosting a consultative symposium with clinicians and policy makers from around the world, we are able to specify key aspects of each element of this framework. We define our intervention as the incorporation of surgical care into public health planning, identify local champions as the key individuals involved, and describe elements of the inner and outer settings. Ultimately we describe top-down and bottom-up models that are distinct implementation processes. With the Consolidated Framework for Implementation Research, we are able to identify specific strategic models that can be used by implementers in various settings. While the integration of surgical care into public health throughout the world may seem like an insurmountable challenge, this work adds to a growing effort that seeks to find a way forward. PMID:29225930
Moullin, Joanna C; Sabater-Hernández, Daniel; Benrimoj, Shalom I
2016-08-25
Multiple studies have explored the implementation process and its influences; however, it appears there is no study investigating these influences across the stages of implementation. Community pharmacy is attempting to implement professional services (pharmaceutical care and other health services). The use of implementation theory may assist the achievement of widespread provision, support and integration. The objective was to investigate professional service implementation in community pharmacy to contextualise and advance the concepts of a generic implementation framework previously published. Purposeful sampling was used to investigate implementation across a range of levels of implementation in community pharmacies in Australia. Twenty-five semi-structured interviews were conducted and analysed using a framework methodology. Data were charted using implementation stages as overarching themes, and each stage was thematically analysed to investigate the implementation process, the influences and their relationships. Secondary analyses were performed of the factors (barriers and facilitators), using an adapted version of the Consolidated Framework for Implementation Research (CFIR), and of the implementation strategies and interventions, using the Expert Recommendations for Implementing Change (ERIC) discrete implementation strategy compilation. Six stages emerged, labelled as development or discovery, exploration, preparation, testing, operation and sustainability. Within the stages, a range of implementation activities/steps and five overarching influences (the pharmacy's direction and impetus, internal communication, staffing, community fit and support) were identified. The stages and activities were not applied strictly in a linear fashion. There was a trend that the greater the number of activities considered, the greater the apparent integration into the pharmacy organisation. Implementation factors varied over the implementation stages, and additional factors were added to the CFIR list, with definitions modified/contextualised for pharmacy. Implementation strategies employed by pharmacies varied widely. Evaluations were lacking. The process of implementation and five overarching influences on professional services implementation in community pharmacy have been outlined. Framework analysis revealed that, outside of the five overarching influences, the factors influencing implementation varied across the implementation stages. It is proposed that at each stage, for each domain, the factors, strategies and evaluations should be considered. The Framework for the Implementation of Services in Pharmacy incorporates this contextualisation of implementation science for pharmacy.
Biologically based modeling of multimedia, multipathway, multiroute population exposures to arsenic
Georgopoulos, Panos G.; Wang, Sheng-Wei; Yang, Yu-Ching; Xue, Jianping; Zartarian, Valerie G.; Mccurdy, Thomas; Özkaynak, Halûk
2011-01-01
This article presents an integrated, biologically based, source-to-dose assessment framework for modeling multimedia/multipathway/multiroute exposures to arsenic. Case studies demonstrating this framework are presented for three US counties (Hunterdon County, NJ; Pima County, AZ; and Franklin County, OH), representing substantially different conditions of exposure. The approach taken utilizes the Modeling ENvironment for TOtal Risk studies (MENTOR) in an implementation that incorporates and extends the approach pioneered by the Stochastic Human Exposure and Dose Simulation (SHEDS), in conjunction with a number of available databases, including NATA, NHEXAS, CSFII, and CHAD, and extends modeling techniques that have been developed in recent years. Model results indicate that, in most cases, the food intake pathway is the dominant contributor to total exposure and dose to arsenic. Model predictions are evaluated qualitatively by comparing distributions of predicted total arsenic amounts in urine with those derived using biomarker measurements from the NHEXAS — Region V study: the population distributions of urinary total arsenic levels calculated through MENTOR and from the NHEXAS measurements are in general qualitative agreement. Observed differences are due to various factors, such as interindividual variation in arsenic metabolism in humans, that are not fully accounted for in the current model implementation but can be incorporated in the future, in the open framework of MENTOR. The present study demonstrates that integrated source-to-dose modeling for arsenic can not only provide estimates of the relative contributions of multipathway exposure routes to the total exposure estimates, but can also estimate internal target tissue doses for speciated organic and inorganic arsenic, which can eventually be used to improve evaluation of health risks associated with exposures to arsenic from multiple sources, routes, and pathways. PMID:18073786
AliEn—ALICE environment on the GRID
NASA Astrophysics Data System (ADS)
Saiz, P.; Aphecetche, L.; Bunčić, P.; Piskač, R.; Revsbech, J.-E.; Šego, V.; Alice Collaboration
2003-04-01
AliEn (ALICE Environment, http://alien.cern.ch) is a Grid framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed datasets, and a number of collaborating Web services which implement authentication, job execution, file transport, performance monitoring and event logging. In the paper we present the architecture and components of the system.
GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith
2014-08-25
Large scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach which cause vertex centric algorithms to under-perform, due to a poor compute-to-communication overhead ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.
System Software Framework for System of Systems Avionics
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Peterson, Benjamin L; Thompson, Hiram C.
2005-01-01
Project Constellation implements NASA's vision for space exploration to expand human presence in our solar system. The engineering focus of this project is developing a system of systems architecture. This architecture allows for the incremental development of the overall program. Systems can be built and connected in a "Lego style" manner to generate configurations supporting various mission objectives. The development of the avionics or control systems of such a massive project will result in concurrent engineering. Also, each system will have software and the need to communicate with other (possibly heterogeneous) systems. Fortunately, this design problem has already been solved during the creation and evolution of systems such as the Internet and the Department of Defense's successful effort to standardize distributed simulation (now IEEE 1516). The solution relies on the use of a standard layered software framework and a communication protocol. A standard framework and communication protocol is suggested for the development and maintenance of Project Constellation systems. The ARINC 653 standard is a great start for such a common software framework. This paper proposes a common system software framework that uses the Real Time Publish/Subscribe protocol for framework-to-framework communication to extend ARINC 653. It is highly recommended that such a framework be established before development. This is important for the success of concurrent engineering. The framework provides an infrastructure for general system services and is designed for flexibility to support a spiral development effort.
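A toy publish/subscribe bus illustrates the framework-to-framework messaging pattern suggested above; this sketch is not the Real Time Publish/Subscribe (RTPS) wire protocol or an ARINC 653 implementation, and the topic names are invented.

from collections import defaultdict

# Minimal publish/subscribe sketch to illustrate framework-to-framework
# messaging by topic. This is a toy, not RTPS or ARINC 653.

class Bus:
    def __init__(self):
        self._subs = defaultdict(list)      # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subs[topic]:   # deliver to every subscriber
            handler(payload)

bus = Bus()
bus.subscribe("telemetry/power", lambda msg: print("GN&C got:", msg))
bus.subscribe("telemetry/power", lambda msg: print("ECLSS got:", msg))
bus.publish("telemetry/power", {"bus_voltage": 28.1})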
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas
2016-05-01
The recently developed semi-Lagrangian discontinuous Galerkin approach is used to discretize hyperbolic partial differential equations (usually first order equations). Since these methods are conservative, local in space, and able to limit numerical diffusion, they are considered a promising alternative to more traditional semi-Lagrangian schemes (which are usually based on polynomial or spline interpolation). In this paper, we consider a parallel implementation of a semi-Lagrangian discontinuous Galerkin method for distributed memory systems (so-called clusters). Both strong and weak scaling studies are performed on the Vienna Scientific Cluster 2 (VSC-2). In the case of weak scaling we observe a parallel efficiency above 0.8 for both two and four dimensional problems and up to 8192 cores. Strong scaling results show good scalability to at least 512 cores (we consider problems that can be run on a single processor in reasonable time). In addition, we study the scaling of a two dimensional Vlasov-Poisson solver that is implemented using the framework provided. All of the simulations are conducted in the context of worst case communication overhead; i.e., in a setting where the CFL (Courant-Friedrichs-Lewy) number increases linearly with the problem size. The framework introduced in this paper facilitates a dimension independent implementation of scientific codes (based on C++ templates) using both an MPI and a hybrid approach to parallelization. We describe the essential ingredients of our implementation.
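The backward-characteristic idea at the heart of semi-Lagrangian schemes can be illustrated in one dimension. The sketch below uses linear interpolation where the paper projects onto a discontinuous Galerkin basis, so it is a simplification, and the grid and time step values are invented.

import numpy as np

# Simplified illustration of a semi-Lagrangian update for 1D advection
# u_t + a u_x = 0 on a periodic grid. Plain linear interpolation stands in
# for the paper's discontinuous Galerkin projection.

def semi_lagrangian_step(u, a, dt, dx):
    n = len(u)
    x = np.arange(n) * dx
    x_dep = (x - a * dt) % (n * dx)            # trace characteristics backwards
    i = (x_dep // dx).astype(int) % n          # cell containing departure point
    w = (x_dep - i * dx) / dx                  # linear interpolation weight
    return (1 - w) * u[i] + w * u[(i + 1) % n]

n, dx, a, dt = 128, 1.0 / 128, 1.0, 0.02       # note: CFL number > 1 is allowed
u = np.exp(-200 * (np.linspace(0, 1, n, endpoint=False) - 0.5) ** 2)
for _ in range(50):
    u = semi_lagrangian_step(u, a, dt, dx)
print(u.max())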
Developments and applications of DAQ framework DABC v2
NASA Astrophysics Data System (ADS)
Adamczewski-Musch, J.; Kurz, N.; Linev, S.
2015-12-01
The Data Acquisition Backbone Core (DABC) is a software framework for distributed data acquisition. In 2013, version 2 of DABC was released with several improvements. For monitoring and control, an HTTP web server and a proprietary command-channel socket have been provided. Web browser GUIs have been implemented for configuration and control of DABC and MBS DAQ nodes via this HTTP server. Several specific plug-ins, for example interfacing PEXOR/KINPEX optical readout PCIe boards, or HADES trbnet input and hld file output, have been further developed. In 2014, DABC v2 was used for production data taking during the HADES collaboration's pion beam time at GSI. It fully replaced the functionality of the previous event builder software and added new features for online monitoring.
Wallace, Lauren; Kapiriri, Lydia
2017-04-08
To date, research on priority-setting for new vaccines has not adequately explored the influence of the global, national and sub-national levels of decision-making, or contextual issues such as political pressure and stakeholder influence and power. Using Kapiriri and Martin's conceptual framework, this paper evaluates priority setting for new vaccines in Uganda at national and sub-national levels, and considers how global priorities can influence country priorities. This study focuses on two specific vaccines, the human papilloma virus (HPV) vaccine and the pneumococcal conjugate vaccine (PCV). This was a qualitative study that involved reviewing relevant Ugandan policy documents and media reports, as well as 54 key informant interviews at the global level and at national and sub-national levels in Uganda. Kapiriri and Martin's conceptual framework was used to evaluate the prioritization process. Priority setting for PCV and HPV was conducted by the Ministry of Health (MoH), which is considered to be a legitimate institution. While respondents described the priority setting process for PCV as transparent, participatory, and guided by explicit, relevant criteria and evidence, the prioritization of HPV was thought to have been less transparent and less participatory. Respondents reported that neither process was based on an explicit priority setting framework, nor did either involve adequate representation from the districts (program implementers) or publicity. The priority setting process for both PCV and HPV was negatively affected by the larger political and economic context, which contributed to weak institutional capacity as well as power imbalances between development assistance partners and the MoH. Priority setting in Uganda would be improved by strengthening institutional capacity and leadership and by ensuring transparent and participatory processes in which key stakeholders such as program implementers (the districts) and beneficiaries (the public) are involved. Kapiriri and Martin's framework has the potential to guide priority setting evaluation efforts; however, evaluation should be built into the priority setting process a priori, such that information on priority setting is gathered throughout the implementation cycle.
A Distributed Prognostic Health Management Architecture
NASA Technical Reports Server (NTRS)
Bhaskar, Saha; Saha, Sankalita; Goebel, Kai
2009-01-01
This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture, though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors, and a particle filtering (PF) framework that has the power and the flexibility to adapt to the different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as a particle filtering problem in order to explicitly represent and manage uncertainties; however, typically the complexity of the prognostic routine exceeds the computational power of one computational element (CE). Individual CEs run diagnostic routines until the system variable being monitored crosses beyond a nominal threshold, upon which they coordinate with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
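A minimal bootstrap particle filter conveys the PF building block referred to above. The random-walk health model and noise levels below are invented for illustration; this is not the paper's electrical power system model.

import numpy as np

# Minimal bootstrap particle filter sketch: track a slowly degrading
# health parameter from noisy measurements. Model and noise levels are
# invented for illustration.

rng = np.random.default_rng(1)
n_particles = 500
particles = rng.normal(1.0, 0.1, n_particles)    # initial health estimate

def pf_step(particles, z, process_std=0.01, meas_std=0.05):
    # predict: propagate each particle through the (random walk) model
    particles = particles + rng.normal(0, process_std, len(particles))
    # update: weight particles by the likelihood of the measurement z
    w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    w /= w.sum()
    # resample: draw a new population proportional to the weights
    idx = rng.choice(len(particles), len(particles), p=w)
    return particles[idx]

truth = 1.0
for step in range(20):
    truth -= 0.005                               # slow degradation
    z = truth + rng.normal(0, 0.05)              # noisy sensor reading
    particles = pf_step(particles, z)
print("estimated health:", particles.mean())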
Adopting a Resilience Practice Framework: A Case Study in What to Select and How to Implement
ERIC Educational Resources Information Center
Antcliff, Greg; Mildon, Robyn; Baldwin, Laura; Michaux, Annette; Nay, Cherie
2014-01-01
This paper describes the collaborative application of three theoretical models for supporting service planning (Hunter, 2006), programme planning (Chorpita et al, 2005a), and implementation (Meyers et al, 2012) to develop and implement a Resilience Practice Framework (RPF). Specifically, we (1) describe a theory of change framework (Hunter, 2006)…
Data provenance assurance in the cloud using blockchain
NASA Astrophysics Data System (ADS)
Shetty, Sachin; Red, Val; Kamhoua, Charles; Kwiat, Kevin; Njilla, Laurent
2017-05-01
Ever increasing adoption of cloud technology scales up activities like the creation, exchange, and alteration of cloud data objects, which creates challenges in tracking malicious activities and security violations. Addressing this issue requires implementation of a data provenance framework so that each data object in the federated cloud environment can be tracked and recorded but cannot be modified. Blockchain technology provides a promising decentralized platform for building tamper-proof systems. Its incorruptible distributed ledger complements the need to maintain cloud data provenance. In this paper, we present a cloud-based data provenance framework using blockchain, which traces data record operations and generates provenance data. We anchor provenance data records into blockchain transactions, which provide validation of the provenance data while preserving user privacy at the same time. Once the provenance data is uploaded to the global blockchain network, it is extremely challenging to tamper with it. In addition, the provenance data uses hashed user identifiers prior to uploading, so the blockchain nodes cannot link the operations to a particular user. The framework thus ensures that privacy is preserved. We implemented the architecture on ownCloud, uploaded records to the blockchain network, stored records in a provenance database, and developed a prototype in the form of a web service.
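The two mechanisms described, hashed user identifiers and anchoring record digests, can be sketched as follows. Field names and the salting scheme are assumptions for illustration, not the paper's schema.

import hashlib, json, time

# Sketch of the privacy/integrity ideas above: hash the user identifier
# before a record leaves the cloud, and anchor a digest of the record in a
# (here simulated) blockchain transaction. Field names are illustrative.

def make_record(user_id, operation, object_name, salt=b"tenant-salt"):
    hashed_user = hashlib.sha256(salt + user_id.encode()).hexdigest()
    return {
        "user": hashed_user,              # node operators cannot recover the ID
        "op": operation,                  # e.g. create / read / modify
        "object": object_name,
        "ts": time.time(),
    }

def anchor(record):
    """Digest that would be embedded in a blockchain transaction."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

rec = make_record("alice@example.org", "modify", "report.pdf")
print(anchor(rec))   # any later change to `rec` changes this digest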
scoringRules - A software package for probabilistic model evaluation
NASA Astrophysics Data System (ADS)
Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian
2016-04-01
Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they allow comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules, such as the continuous ranked probability score, for a variety of distributions F that come up in applied work. Two main classes are parametric distributions, like normal, t, or gamma distributions, and distributions that are not known analytically but are indirectly described through a sample of simulation draws. For example, Bayesian forecasts produced via Markov chain Monte Carlo take this form. In this way, the scoringRules package provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient, dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices.
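scoringRules itself is an R package; purely as a language-agnostic illustration, the widely used sample-based estimator of the continuous ranked probability score, CRPS(F, y) = E|X - y| - 0.5 E|X - X'|, can be written in a few lines of Python.

import numpy as np

# Sample (ensemble) estimator of the CRPS, computed from simulation draws.
# This is a plain NumPy re-implementation for illustration; the package
# described above is written in R.

def crps_sample(draws, y):
    draws = np.asarray(draws, dtype=float)
    term1 = np.abs(draws - y).mean()
    term2 = 0.5 * np.abs(draws[:, None] - draws[None, :]).mean()
    return term1 - term2

y_obs = 1.3
forecast_a = np.random.default_rng(0).normal(1.0, 1.0, 1000)   # wide, off-centre
forecast_b = np.random.default_rng(1).normal(1.3, 0.5, 1000)   # sharp, centred
print(crps_sample(forecast_a, y_obs), crps_sample(forecast_b, y_obs))
# the lower score identifies the sharper, better-centred forecast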
Højberg, Helene; Rasmussen, Charlotte Diana Nørregaard; Osborne, Richard H; Jørgensen, Marie Birk
2018-02-01
Our aim was to identify implementation components for sustainable working environment interventions in the nursing assistant sector in order to generate a framework to optimize the implementation of workplace improvement initiatives. The implementation framework was informed by: 1) an industry advisory group, 2) interviews with key stakeholders, 3) concept mapping workshops, and 4) an e-mail survey. Thirty-five stakeholders were interviewed and contributed in the concept mapping workshops. Eleven implementation components were derived across four domains: 1) a supportive organizational platform, 2) an engaged workplace with mutual goals, 3) the intervention is sustainably fitted to the workplace, and 4) the intervention is an attractive choice. The highest rated component was "Engaged and Active Management" (mean 4.1) and the lowest rated was "Delivered in an Attractive Form" (mean 2.8). The framework provides new insights into implementation in an evolving working environment and aims to assist with addressing gaps in the effectiveness of workplace interventions and in implementation success.
QuTiP: An open-source Python framework for the dynamics of open quantum systems
NASA Astrophysics Data System (ADS)
Johansson, J. R.; Nation, P. D.; Nori, Franco
2012-08-01
We present an object-oriented open-source framework for solving the dynamics of open quantum systems written in Python. Arbitrary Hamiltonians, including time-dependent systems, may be built up from operators and states defined by a quantum object class, and then passed on to a choice of master equation or Monte Carlo solvers. We give an overview of the basic structure for the framework before detailing the numerical simulation of open system dynamics. Several examples are given to illustrate the build up to a complete calculation. Finally, we measure the performance of our library against that of current implementations. The framework described here is particularly well suited to the fields of quantum optics, superconducting circuit devices, nanomechanics, and trapped ions, while also being ideal for use in classroom instruction.
Program summary:
Catalogue identifier: AEMB_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMB_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 16 482
No. of bytes in distributed program, including test data, etc.: 213 438
Distribution format: tar.gz
Programming language: Python
Computer: i386, x86-64
Operating system: Linux, Mac OSX, Windows
RAM: 2+ Gigabytes
Classification: 7
External routines: NumPy (http://numpy.scipy.org/), SciPy (http://www.scipy.org/), Matplotlib (http://matplotlib.sourceforge.net/)
Nature of problem: Dynamics of open quantum systems.
Solution method: Numerical solutions to Lindblad master equation or Monte Carlo wave function method.
Restrictions: Problems must meet the criteria for using the master equation in Lindblad form.
Running time: A few seconds up to several tens of minutes, depending on size of underlying Hilbert space.
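As a usage illustration (not taken from the paper), a short script in the spirit of the QuTiP documentation: a damped Rabi oscillation solved with the Lindblad master equation solver mesolve.

import numpy as np
from qutip import basis, destroy, mesolve, sigmax, sigmaz

# Damped Rabi oscillation of a qubit: drive around x, decay via a
# collapse operator, track <sigma_z>(t).

H = 2 * np.pi * 0.1 * sigmax()              # drive Hamiltonian
psi0 = basis(2, 0)                          # start in |0>
times = np.linspace(0.0, 10.0, 100)
c_ops = [np.sqrt(0.05) * destroy(2)]        # collapse operator: qubit decay
result = mesolve(H, psi0, times, c_ops, [sigmaz()])
print(result.expect[0][:5])                 # <sigma_z>(t), first few points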
Ward, Marcia M; Baloh, Jure; Zhu, Xi; Stewart, Greg L
A particularly useful model for examining implementation of quality improvement interventions in health care settings is the PARIHS (Promoting Action on Research Implementation in Health Services) framework developed by Kitson and colleagues. The PARIHS framework proposes three elements (evidence, context, and facilitation) that are related to successful implementation. An evidence-based program focused on quality enhancement in health care, termed TeamSTEPPS (Team Strategies and Tools to Enhance Performance and Patient Safety), has been widely promoted by the Agency for Healthcare Research and Quality, but research is needed to better understand its implementation. We apply the PARIHS framework in studying TeamSTEPPS implementation to identify elements that are most closely related to successful implementation. Quarterly interviews were conducted over a 9-month period in 13 small rural hospitals that implemented TeamSTEPPS. Interview quotes that were related to each of the PARIHS elements were identified using directed content analysis. Transcripts were also scored quantitatively, and bivariate regression analysis was employed to explore relationships between PARIHS elements and successful implementation related to planning activities. The current findings provide support for the PARIHS framework and identified two of the three PARIHS elements (context and facilitation) as important contributors to successful implementation. This study applies the PARIHS framework to TeamSTEPPS, a widely used quality initiative focused on improving health care quality and patient safety. By focusing on small rural hospitals that undertook this quality improvement activity of their own accord, our findings represent effectiveness research in an understudied segment of the health care delivery system. By identifying context and facilitation as the most important contributors to successful implementation, these analyses provide a focus for efficient and effective sustainment of TeamSTEPPS efforts.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
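The division of labour described above can be sketched schematically. The toy single-scattering "physics" below is a stand-in, not the MC321 package; only the Map/Reduce structure is the point.

import random
from functools import reduce

# Toy map/reduce decomposition of a photon-history simulation:
# Map = simulate photon histories in parallel batches,
# Reduce = merge partial absorption tallies.

def map_task(seed, n_photons=10_000, mu_a=0.1, mu_s=0.9):
    """Simulate a batch of photons; emit a partial absorption tally."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n_photons):
        while True:
            if rng.random() < mu_a / (mu_a + mu_s):   # absorption event
                absorbed += 1
                break
            if rng.random() < 0.5:                    # photon escapes medium
                break
    return {"absorbed": absorbed, "launched": n_photons}

def reduce_task(a, b):
    """Merge two partial tallies (associative, so order doesn't matter)."""
    return {k: a[k] + b[k] for k in a}

partials = [map_task(seed) for seed in range(8)]      # 8 "mappers"
total = reduce(reduce_task, partials)
print(total["absorbed"] / total["launched"])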
Practical implementation science: developing and piloting the quality implementation tool.
Meyers, Duncan C; Katz, Jason; Chien, Victoria; Wandersman, Abraham; Scaccia, Jonathan P; Wright, Annie
2012-12-01
According to the Interactive Systems Framework for Dissemination and Implementation, implementation is a major mechanism and concern in bridging research and practice. The growing number of implementation frameworks need to be synthesized and translated so that the science and practice of quality implementation can be furthered. In this article, we: (1) use the synthesis of frameworks developed by Meyers et al. (Am J Commun Psychol, 2012) and translate the results into a practical implementation science tool to use for improving quality of implementation (i.e., the Quality Implementation Tool; QIT), and (2) present some of the benefits and limitations of the tool by describing how the QIT was implemented in two different pilot projects. We discuss how the QIT can be used to guide collaborative planning, monitoring, and evaluation of how an innovation is implemented.
NASA Astrophysics Data System (ADS)
Tokareva, Victoria
2018-04-01
New-generation medicine demands better-quality analysis, which increases the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. Thus it becomes urgent not only to develop advanced modern hardware, but also to implement the special software infrastructure needed to use it in everyday clinical practice, so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task in today's medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and as well adapted to the needs of end users as possible.
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento.
This booklet describes the characteristics and role of curriculum frameworks and describes how they can be used in developing educational programs. It is designed as a guide for writers of frameworks, for educators who are responsible for implementing frameworks, or for evaluators of educational programs. It provides a concise description of the…
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which leads to a great improvement in system fault tolerance: a task within a multi-agent system can be completed even in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel. Hence, the computational complexity is greatly reduced by distributed algorithms in a multi-agent system. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the drawbacks of multicast approaches arising from bandwidth limitations. Distributed algorithms have been applied to a variety of real-world problems, and our research focuses on framework and local optimizer design in practical engineering applications. In the first application, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body, improving estimation performance in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs; the proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in corridor and exit areas. Finally, in the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. An optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for minimizing travel time on a highway network. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.
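A consensus-based subgradient iteration illustrates the decomposition idea in miniature. The ring topology, quadratic objectives and step size below are invented to keep the sketch self-contained; they are not the thesis's problem formulations.

import numpy as np

# Sketch of a distributed (consensus-based) subgradient step: each agent
# descends its private local objective and averages its estimate with its
# neighbours'. Topology, objectives and step size are illustrative.

n_iters, alpha = 200, 0.05
targets = np.array([1.0, 2.0, 3.0, 4.0])      # agent i minimizes (x - t_i)^2
x = np.zeros(4)                               # each agent's local estimate

# doubly stochastic mixing matrix for a ring of 4 agents
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

for _ in range(n_iters):
    grads = 2 * (x - targets)                 # local (sub)gradients
    x = W @ x - alpha * grads                 # mix with neighbours, then step

# agents cluster near the global minimizer mean(targets) = 2.5; a constant
# step size leaves a small residual disagreement between agents
print(x)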
Evaluating the Potential of Commercial GIS for Accelerator Configuration Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
T.L. Larrieu; Y.R. Roblin; K. White
2005-10-10
The Geographic Information System (GIS) is a tool used by industries needing to track information about spatially distributed assets. A water utility, for example, must know not only the precise location of each pipe and pump, but also the respective pressure rating and flow rate of each. In many ways, an accelerator such as CEBAF (Continuous Electron Beam Accelerator Facility) can be viewed as an "electron utility". Whereas the water utility uses pipes and pumps, the "electron utility" uses magnets and RF cavities. At Jefferson Lab we are exploring the possibility of implementing ESRI's ArcGIS as the framework for building an all-encompassing accelerator configuration database that integrates location, configuration, maintenance, and connectivity details of all hardware and software. The possibilities of doing so are intriguing. From the GIS, software such as the model server could always extract the most up-to-date layout information maintained by Survey & Alignment for lattice modeling. The Mechanical Engineering department could use ArcGIS tools to generate CAD drawings of machine segments from the same database. Ultimately, the greatest benefit of the GIS implementation could be to liberate operators and engineers from the limitations of the current system-by-system view of machine configuration and allow a more integrated regional approach. The commercial GIS package provides a rich set of tools for database connectivity, versioning, distributed editing, importing and exporting, and graphical analysis and querying, and therefore obviates the need for much custom development. However, formidable challenges to implementation exist, and these challenges are not only technical and manpower issues but also organizational ones. The GIS approach would crosscut organizational boundaries and require departments, which heretofore have had free rein to manage their own data, to cede some control and agree to a centralized framework.
Health information systems: a survey of frameworks for developing countries.
Marcelo, A B
2010-01-01
The objective of this paper is to perform a survey of excellent research on health information systems (HIS) analysis and design, and their underlying theoretical frameworks. It classifies these frameworks along major themes, and analyzes the different approaches to HIS development that are practical in resource-constrained environments. Literature review based on PubMed citations and conference proceedings, as well as Internet searches on information systems in general, and health information systems in particular. The field of health information systems development has been studied extensively. Despite this, failed implementations are still common. Theoretical frameworks for HIS development are available that can guide implementers. As awareness, acceptance, and demand for health information systems increase globally, the variety of approaches and strategies will also follow. For developing countries with scarce resources, a trial-and-error approach can be very costly. Lessons from the successes and failures of initial HIS implementations have been abstracted into theoretical frameworks. These frameworks organize complex HIS concepts into methodologies that standardize techniques in implementation. As globalization continues to impact healthcare in the developing world, demand for more responsive health systems will become urgent. More comprehensive frameworks and practical tools to guide HIS implementers will be imperative.
AQUATIC STRESSORS: FRAMEWORK AND IMPLEMENTATION PLAN FOR EFFECTS RESEARCH
This document describes the framework and research implementation plans for ecological effects research on aquatic stressors within the National Health and Environmental Effects Laboratory. The context for the research identified within the framework is the common management goal...
Development of a Web-Based Distributed Interactive Simulation (DIS) Environment Using JavaScript
2014-09-01
scripting that lets users change or interact with web content depending on user input, which is in contrast with server-side scripts such as PHP and Java ... For data transfer, DIS usually broadcasts or multicasts its PDUs over UDP sockets. JavaScript is the scripting language of the web ... an IDE for developing desktop, mobile and web applications with Java, C++, HTML5, JavaScript and more. The DIS implementation of ...
Multiple intensity distributions from a single optical element
NASA Astrophysics Data System (ADS)
Berens, Michael; Bruneton, Adrien; Bäuerle, Axel; Traub, Martin; Wester, Rolf; Stollenwerk, Jochen; Loosen, Peter
2013-09-01
We report on an extension of the previously published two-step freeform optics tailoring algorithm using a Monge-Kantorovich mass transportation framework. The algorithm's ability to design multiple freeform surfaces allows for the inclusion of multiple distinct light paths and hence the implementation of multiple lighting functions in a single optical element. We demonstrate the procedure in the context of automotive lighting, in which a fog lamp and a daytime running lamp are integrated in a single optical element illuminated by two distinct groups of LEDs.
Analysis of BaBar data for three meson tau decay modes using the Tauola generator
Shekhovtsova, Olga
2014-11-24
The hadronic current for the τ⁻ → π⁻π⁺π⁻ν_τ decay, calculated in the framework of the Resonance Chiral Theory with an additional modification to include the σ meson, is described. In addition, its implementation in the Monte Carlo generator Tauola and the fitting strategy used to obtain the model parameters from one-dimensional distributions are discussed. The results of the fit to the one-dimensional invariant mass spectrum of the BaBar data are presented.
Modelling the B2C Marketplace: Evaluation of a Reputation Metric for e-Commerce
NASA Astrophysics Data System (ADS)
Gutowska, Anna; Sloane, Andrew
This paper evaluates a recently developed, novel and comprehensive reputation metric designed for a distributed multi-agent reputation system for Business-to-Consumer (B2C) e-commerce applications. To do so, an agent-based simulation framework was implemented which models different types of behaviour in the marketplace. The trustworthiness of different types of providers is investigated to establish whether the simulation models the behaviour of B2C e-commerce systems as they are expected to behave in real life.
NASA Astrophysics Data System (ADS)
Ko, Heasin; Lim, Kyongchun; Oh, Junsang; Rhee, June-Koo Kevin
2016-10-01
Quantum channel loopholes due to imperfect implementations of practical devices expose quantum key distribution (QKD) systems to potential eavesdropping attacks. Even though QKD systems are implemented with optical devices that are highly selective in their spectral characteristics, an information-theoretic analysis of a pertinent attack strategy exploiting this fact, built within a reasonable framework, has never been clarified. This paper proposes a new type of trojan horse attack, called the hidden pulse attack, that can be applied to a plug-and-play QKD system, using general and optimal attack strategies that can extract quantum information from the phase-disturbed quantum states of the eavesdropper's hidden pulses. It exploits the spectral characteristics of a photodiode used in a plug-and-play QKD system in order to probe the modulation states of photon qubits. We analyze the security performance of the decoy-state BB84 QKD system under the optimal hidden pulse attack model, which shows enormous performance degradation in terms of both secret key rate and transmission distance.
Multi-Agent Architecture with Support to Quality of Service and Quality of Control
NASA Astrophysics Data System (ADS)
Poza-Luján, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, Jose-Enrique
Multi-Agent Systems (MAS) are one of the most suitable frameworks for the implementation of intelligent distributed control systems. Agents provide the flexibility needed to support the heterogeneity inherent in cyber-physical systems. Quality of Service (QoS) and Quality of Control (QoC) parameters are commonly utilized to evaluate the efficiency of the communications and the control loop. Agents can use these quality measures to take a wide range of decisions, such as selecting a suitable placement on a control node or changing the workload to save energy. This article describes the architecture of a multi-agent system that provides support for QoS and QoC parameters to optimize the system. The architecture uses a Publish-Subscribe model, based on the Data Distribution Service (DDS), to send the control messages. Due to the nature of the Publish-Subscribe model, the architecture is suitable for implementing event-based control (EBC) systems. The architecture has been called FSACtrl.
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
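The core IPF iteration can be sketched on a two-way table: alternately rescale rows and columns until both marginals match. The 2x2 setup and tolerance below are illustrative; the paper fits a much higher-dimensional multivariate probability under bivariate constraints.

import numpy as np

# Minimal iterative proportion(al) fitting sketch: adjust a 2-D joint
# probability table until its row and column sums match given marginals.

def ipf(joint, row_marg, col_marg, n_iter=100, tol=1e-10):
    p = joint.copy()
    for _ in range(n_iter):
        p *= (row_marg / p.sum(axis=1))[:, None]   # fit row sums
        p *= (col_marg / p.sum(axis=0))[None, :]   # fit column sums
        if (np.abs(p.sum(axis=1) - row_marg).max() < tol and
                np.abs(p.sum(axis=0) - col_marg).max() < tol):
            break
    return p

initial = np.full((2, 2), 0.25)            # uninformative starting table
p = ipf(initial, row_marg=np.array([0.7, 0.3]), col_marg=np.array([0.6, 0.4]))
print(p, p.sum())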
RGLite, an interface between ROOT and gLite—proof on the grid
NASA Astrophysics Data System (ADS)
Malzacher, P.; Manafov, A.; Schwarz, K.
2008-07-01
Using the gLitePROOF package it is possible to perform PROOF-based distributed data analysis on the gLite Grid. The LHC experiments managed to run globally distributed Monte Carlo productions on the Grid; now the development of tools for data analysis is in the foreground. To grant access, interfaces must be provided. The ROOT/PROOF framework is used as a starting point. Using abstract ROOT classes (TGrid, ...), interfaces can be implemented via which Grid access from ROOT can be accomplished. A concrete implementation exists for the ALICE Grid environment AliEn. Within the D-Grid project, an interface to gLite, the common Grid middleware of all LHC experiments, has been created. It is therefore possible to query Grid file catalogues from ROOT for the location of the data to be analysed. Grid jobs can be submitted into a gLite-based Grid. The status of the jobs can be queried, and their results can be obtained.
Precise and Efficient Static Array Bound Checking for Large Embedded C Programs
NASA Technical Reports Server (NTRS)
Venet, Arnaud
2004-01-01
In this paper we describe the design and implementation of a static array-bound checker for a family of embedded programs: the flight control software of recent Mars missions. These codes are large (up to 250 KLOC), pointer intensive, heavily multithreaded and written in an object-oriented style, which makes their analysis very challenging. We designed a tool called C Global Surveyor (CGS) that can analyze the largest code in a couple of hours with a precision of 80%. The scalability and precision of the analyzer are achieved by using an incremental framework in which a pointer analysis and a numerical analysis of array indices mutually refine each other. CGS has been designed so that it can distribute the analysis over several processors in a cluster of machines. To the best of our knowledge this is the first distributed implementation of static analysis algorithms. Throughout the paper we will discuss the scalability setbacks that we encountered during the construction of the tool and their impact on the initial design decisions.
NASA Astrophysics Data System (ADS)
Aiftimiei, D. C.; Antonacci, M.; Bagnasco, S.; Boccali, T.; Bucchi, R.; Caballer, M.; Costantini, A.; Donvito, G.; Gaido, L.; Italiano, A.; Michelotto, D.; Panella, M.; Salomoni, D.; Vallero, S.
2017-10-01
One of the challenges a scientific computing center has to face is to keep delivering well-consolidated computational frameworks (i.e. the batch computing farm) while conforming to modern computing paradigms. The aim is to ease system administration at all levels (from hardware to applications) and to provide a smooth end-user experience. Within the INDIGO-DataCloud project, we adopt two different approaches to implement a PaaS-level, on-demand Batch Farm Service based on HTCondor and Mesos. In the first approach, described in this paper, the various HTCondor daemons are packaged inside pre-configured Docker images and deployed as Long Running Services through Marathon, profiting from its health checks and failover capabilities. In the second approach, we are going to implement an ad-hoc HTCondor framework for Mesos. Container-to-container communication and isolation have been addressed by exploring a solution based on overlay networks (using the Calico project). Finally, we have studied the possibility of deploying an HTCondor cluster that spans different sites, exploiting the Condor Connection Broker component, which allows communication across a private network boundary or firewall, as in the case of multi-site deployments. In this paper, we describe and motivate our implementation choices and show the results of the first tests performed.
WebDISCO: a web service for distributed cox model learning without patient-level data sharing.
Lu, Chia-Lun; Wang, Shuang; Ji, Zhanglong; Wu, Yuan; Xiong, Li; Jiang, Xiaoqian; Ohno-Machado, Lucila
2015-11-01
The Cox proportional hazards model is a widely used method for analyzing survival data. To achieve sufficient statistical power, a survival analysis usually requires a large amount of data. Data sharing across institutions could be a potential workaround for providing this added power. The authors develop a web service for distributed Cox model learning (WebDISCO), which focuses on proof-of-concept and algorithm development for federated survival analysis. The sensitive patient-level data can be processed locally, and only the less-sensitive intermediate statistics are exchanged to build a global Cox model. Mathematical derivation shows that the proposed distributed algorithm is identical to the centralized Cox model. The authors evaluated the proposed framework at the University of California, San Diego (UCSD), Emory, and Duke. The experimental results show that the distributed and centralized models produce near-identical model coefficients, with differences in the range [Formula: see text] to [Formula: see text]. The results confirm the mathematical derivation and show that the distributed implementation can achieve the same results as the centralized one. The proposed method serves as a proof of concept, in which a publicly available dataset was used to evaluate the performance. The authors do not intend to suggest that this method can resolve policy and engineering issues related to the federated use of institutional data, but it should serve as evidence of the technical feasibility of the proposed approach. Conclusions: WebDISCO (Web-based Distributed Cox Regression Model; https://webdisco.ucsd-dbmi.org:8443/cox/) provides a proof-of-concept web service that implements a distributed algorithm to conduct survival analysis without sharing patient-level data.
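The pattern of exchanging only intermediate statistics can be sketched as a distributed Newton iteration. The toy below uses a logistic log-likelihood as a stand-in for the Cox partial likelihood (WebDISCO derives the analogous gradients and Hessians for Cox); the three-site setup and every name are illustrative assumptions.

```python
import numpy as np

def local_stats(X, y, beta):
    """One site's contribution: gradient and Hessian of its local
    negative log-likelihood (logistic stand-in for Cox)."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (p - y)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    return grad, hess

def distributed_newton(sites, dim, n_iter=25):
    beta = np.zeros(dim)
    for _ in range(n_iter):
        grads, hessians = zip(*(local_stats(X, y, beta) for X, y in sites))
        # Only these aggregate statistics cross institutional boundaries.
        beta -= np.linalg.solve(sum(hessians), sum(grads))
    return beta

rng = np.random.default_rng(0)
true_beta = np.array([1.0, -2.0])
sites = []
for _ in range(3):  # three institutions; patient rows never leave a site
    X = rng.normal(size=(200, 2))
    y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
    sites.append((X, y))
print(distributed_newton(sites, dim=2))
```

Only `grad` and `hess` leave a site; the patient-level arrays `X` and `y` stay local, which is the essence of the federated design.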
Anionic silicate organic frameworks constructed from hexacoordinate silicon centres
NASA Astrophysics Data System (ADS)
Roeser, Jérôme; Prill, Dragica; Bojdys, Michael J.; Fayon, Pierre; Trewin, Abbie; Fitch, Andrew N.; Schmidt, Martin U.; Thomas, Arne
2017-10-01
Crystalline frameworks composed of hexacoordinate silicon species have thus far only been observed in a few high pressure silicate phases. By implementing reversible Si-O chemistry for the crystallization of covalent organic frameworks, we demonstrate the simple one-pot synthesis of silicate organic frameworks based on octahedral dianionic SiO6 building units. Clear evidence of the hexacoordinate environment around the silicon atoms is given by 29Si nuclear magnetic resonance analysis. Characterization by high-resolution powder X-ray diffraction, density functional theory calculation and analysis of the pair-distribution function showed that those anionic frameworks—M2[Si(C16H10O4)1.5], where M = Li, Na, K and C16H10O4 is 9,10-dimethylanthracene-2,3,6,7-tetraolate—crystallize as two-dimensional hexagonal layers stabilized in a fully eclipsed stacking arrangement with pronounced disorder in the stacking direction. Permanent microporosity with high surface area (up to 1,276 m2 g-1) was evidenced by gas-sorption measurements. The negatively charged backbone balanced with extra-framework cations and the permanent microporosity are characteristics that are shared with zeolites.
Unsupervised active learning based on hierarchical graph-theoretic clustering.
Hu, Weiming; Hu, Wei; Xie, Nianhua; Maybank, Steve
2009-10-01
Most existing active learning approaches are supervised. Supervised active learning has the following problems: inefficiency in dealing with the semantic gap between the distribution of samples in the feature space and their labels, lack of ability in selecting new samples that belong to new categories that have not yet appeared in the training samples, and lack of adaptability to changes in the semantic interpretation of sample categories. To tackle these problems, we propose an unsupervised active learning framework based on hierarchical graph-theoretic clustering. In the framework, two promising graph-theoretic clustering algorithms, namely, dominant-set clustering and spectral clustering, are combined in a hierarchical fashion. Our framework has some advantages, such as ease of implementation, flexibility in architecture, and adaptability to changes in the labeling. Evaluations on data sets for network intrusion detection, image classification, and video classification have demonstrated that our active learning framework can effectively reduce the workload of manual classification while maintaining a high accuracy of automatic classification. It is shown that, overall, our framework outperforms the support-vector-machine-based supervised active learning, particularly in terms of dealing much more efficiently with new samples whose categories have not yet appeared in the training samples.
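As a hedged sketch of the hierarchical idea, the code below recursively bisects a dataset with spectral clustering (scikit-learn) and records a label path per sample. The paper's framework additionally combines dominant-set clustering with spectral clustering, which this simplified recursion does not reproduce; the blob data and all names are invented for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def hierarchical_spectral(X, depth=2):
    """Recursively bisect the data with spectral clustering, returning
    one label path (e.g. '01') per sample."""
    labels = np.empty(len(X), dtype=object)
    def split(idx, prefix, d):
        if d == 0 or len(idx) < 4:          # stop at depth or tiny clusters
            labels[idx] = prefix
            return
        sub = SpectralClustering(n_clusters=2, affinity='rbf',
                                 random_state=0).fit_predict(X[idx])
        for branch in (0, 1):
            split(idx[sub == branch], prefix + str(branch), d - 1)
    split(np.arange(len(X)), '', depth)
    return labels

# Four well-separated blobs should yield four distinct label paths.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(30, 2)) for m in (0, 3, 6, 9)])
print(set(hierarchical_spectral(X, depth=2)))
```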
Development of an Analysis and Design Optimization Framework for Marine Propellers
NASA Astrophysics Data System (ADS)
Tamhane, Ashish C.
In this thesis, a framework for the analysis and design optimization of ship propellers is developed. This framework can be utilized as an efficient synthesis tool to determine the main geometric characteristics of the propeller, and it also provides the designer with the capability to optimize the shape of the blade sections based on specific criteria. A hybrid lifting-line method with lifting-surface corrections to account for three-dimensional flow effects has been developed. The prediction of the correction factors is achieved using artificial neural networks and support vector regression. This approach results in increased approximation accuracy compared to existing methods and allows for extrapolation of the correction factor values. The effect of viscosity is implemented in the framework via the coupling of the lifting-line method with the open-source RANSE solver OpenFOAM for the calculation of lift, drag and pressure distribution on the blade sections, using a transition k-ω SST turbulence model. Case studies of benchmark high-speed propulsors are used to validate the proposed framework for propeller operation both in open-water conditions and in a ship's wake.
A Framework for Multi-Stakeholder Decision-Making and ...
This contribution describes the implementation of the conditional-value-at-risk (CVaR) metric to create a general multi-stakeholder decision-making framework. It is observed that stakeholder dissatisfactions (distances to their individual ideal solutions) can be interpreted as random variables. We thus shape the dissatisfaction distribution and find an optimal compromise solution by solving a CVaR minimization problem parameterized in the probability level. This enables us to generalize multi-stakeholder settings previously proposed in the literature that minimize average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework. We demonstrate the framework in a bio-waste processing facility location case study, where we seek compromise solutions (facility locations) that balance stakeholder priorities on transportation, safety, water quality, and capital costs. This conference presentation abstract explains a new decision-making framework that computes compromise solution alternatives (to reach consensus) by mitigating dissatisfactions among stakeholders, as needed for the SHC Decision Science and Support Tools project.
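The parameterized compromise problem can be sketched with the standard Rockafellar-Uryasev reformulation of CVaR. The one-dimensional facility-location toy below is an assumption for illustration (the stakeholder ideal points and all names are invented, not the paper's case study).

```python
import numpy as np
from scipy.optimize import minimize

# Stakeholder ideal facility locations on a line; dissatisfaction is the
# distance from the compromise location x to each stakeholder's ideal.
ideals = np.array([0.0, 1.0, 4.0, 4.5, 5.0])

def cvar_objective(z, beta):
    """Rockafellar-Uryasev form of CVaR_beta of the dissatisfaction
    distribution, with decision vector z = (x, alpha)."""
    x, alpha = z
    d = np.abs(x - ideals)
    return alpha + np.mean(np.maximum(d - alpha, 0.0)) / (1.0 - beta)

for beta in (0.0, 0.5, 0.9):
    res = minimize(cvar_objective, x0=[2.0, 1.0], args=(beta,),
                   method='Nelder-Mead')
    print(f"beta={beta:.1f}  compromise x={res.x[0]:.2f}")
```

Setting beta = 0 recovers the average-dissatisfaction compromise, while beta close to 1 approaches the worst-case solution, matching the interpolation described above.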
Giannini, Tereza C.; Tambosi, Leandro R.; Acosta, André L.; Jaffé, Rodolfo; Saraiva, Antonio M.; Imperatriz-Fonseca, Vera L.; Metzger, Jean Paul
2015-01-01
Ecosystem services provided by mobile agents are increasingly threatened by the loss and modification of natural habitats and by climate change, putting at risk the maintenance of biodiversity, ecosystem functions, and human welfare. Research oriented towards a better understanding of the joint effects of land use and climate change on the provision of specific ecosystem services is therefore essential to safeguard such services. Here we propose a methodological framework, which integrates species distribution forecasts and graph theory, to identify key conservation areas that, if protected or restored, could improve habitat connectivity and safeguard ecosystem services. We applied the proposed framework to the provision of pollination services by a tropical stingless bee (Melipona quadrifasciata), a key pollinator of native flora from the Brazilian Atlantic Forest and of important agricultural crops. Based on the current distribution of this bee and of the plant species it uses for feeding and nesting, we projected the joint distribution of bees and plants in the future under a moderate climate change scenario (following the IPCC). We then used this information, the bee's flight range, and the current mapping of Atlantic Forest remnants to infer habitat suitability and quantify local and regional habitat connectivity for 2030, 2050 and 2080. Our results revealed north-to-south and coastal-to-inland shifts in the pollinator's distribution over the next 70 years. Current and future connectivity maps revealed the most important corridors which, if protected or restored, could facilitate the dispersal and establishment of bees during distribution shifts. Our results also suggest that coffee plantations in eastern São Paulo and southern Minas Gerais States could suffer a pollinator deficit in the future, whereas pollination services seem to be secure in southern Brazil. Landowners and governmental agencies could use this information to implement new land use schemes. Overall, our proposed methodological framework could help design novel conservation and agricultural practices that may be crucial for conserving ecosystem services by buffering the joint effects of habitat configuration and climate change. PMID:26091014
Towards a centralized Grid Speedometer
NASA Astrophysics Data System (ADS)
Dzhunov, I.; Andreeva, J.; Fajardo, E.; Gutsche, O.; Luyckx, S.; Saiz, P.
2014-06-01
Given the distributed nature of the Worldwide LHC Computing Grid and the way CPU resources are pledged and shared around the globe, Virtual Organizations (VOs) face the challenge of monitoring the use of these resources. For CMS and the operation of centralized workflows, monitoring how many production jobs are running and pending in the Glidein WMS production pools is very important. The Dashboard Site Status Board (SSB) provides a very flexible framework to collect, aggregate and visualize data. The CMS production monitoring team uses the SSB to define the metrics that have to be monitored and the alarms that have to be raised. During the integration of CMS production monitoring into the SSB, several enhancements to the core functionality of the SSB were required; they were implemented in a generic way, so that other VOs using the SSB can exploit them. Alongside these enhancements, a number of changes were made to the core of the SSB framework. This paper presents the details of the implementation and the advantages for current and future usage of the new features in the SSB.
Implementing Access to Data Distributed on Many Processors
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A reference architecture is defined for an object-oriented implementation of domains, arrays, and distributions written in the programming language Chapel. This technology primarily addresses domains that contain arrays with regular index sets; the low-level implementation details are beyond the scope of this discussion. What is defined is a complete set of object-oriented operators that allows one to perform data distributions for domain arrays involving regular arithmetic index sets. What is unique is that these operators allow arbitrary regions of the arrays to be fragmented and distributed across multiple processors, with a single point of access giving the programmer the illusion that all the elements are collocated on a single processor. Today's massively parallel High Productivity Computing Systems (HPCS) are characterized by a modular structure, with a large number of processing and memory units connected by a high-speed network. Locality of access as well as load balancing are primary concerns in these systems, which are typically used for high-performance scientific computation. Data distributions address these issues by providing a range of methods for spreading large data sets across the components of a system. Over the past two decades, many languages, systems, tools, and libraries have been developed to support distributions. Since the performance of data-parallel applications is directly influenced by the distribution strategy, users often resort to low-level programming models that allow fine-tuning of the distribution aspects affecting performance but, at the same time, are tedious and error-prone. This technology presents a reusable design of a data-distribution framework for data-parallel high-performance applications. Distributions are a means to express locality in systems composed of large numbers of processor and memory components connected by a network. Since distributions have a great effect on the performance of applications, it is important that the distribution strategy be flexible, so its behavior can change depending on the needs of the application. At the same time, high-productivity concerns require that the user be shielded from error-prone, tedious details such as communication and synchronization.
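A minimal sketch of the core idea, in Python rather than Chapel and with all names invented for illustration: a block distribution maps each global index to an owning locale and local offset, so a distributed array can present a single logical point of access over fragmented storage.

```python
from math import ceil

class BlockDist:
    """Map a global index range onto P 'locales' in contiguous blocks."""
    def __init__(self, n, p):
        self.n, self.p = n, p
        self.block = ceil(n / p)
    def locate(self, i):
        """Return (locale, local offset) owning global index i."""
        return i // self.block, i % self.block

class DistArray:
    """A distributed array with a single point of access: callers use
    global indices and never see the fragmentation."""
    def __init__(self, n, p):
        self.dist = BlockDist(n, p)
        # One fragment per locale; plain lists stand in for memory
        # that would live on remote processors.
        self.frags = [[0] * self.dist.block for _ in range(p)]
    def __setitem__(self, i, v):
        loc, off = self.dist.locate(i)
        self.frags[loc][off] = v
    def __getitem__(self, i):
        loc, off = self.dist.locate(i)
        return self.frags[loc][off]

a = DistArray(10, 4)
a[7] = 42
print(a[7], a.dist.locate(7))   # element 7 lives on locale 2, offset 1
```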
Breimaier, Helga E; Heckemann, Birgit; Halfens, Ruud J G; Lohrmann, Christa
2015-01-01
Implementing clinical practice guidelines (CPGs) in healthcare settings is a complex intervention involving both independent and interdependent components. Although the Consolidated Framework for Implementation Research (CFIR) had never been evaluated in a practical context, it appeared to be a suitable theoretical framework to guide an implementation process. The aim of this study was to evaluate the comprehensiveness, applicability and usefulness of the CFIR in the implementation of a fall-prevention CPG in nursing practice to improve patient care in an Austrian university teaching hospital setting. The evaluation of the CFIR was based on (1) team-meeting minutes, (2) the main investigator's research diary, containing a record of a before-and-after, mixed-methods study embedded in a participatory action research (PAR) approach for guideline implementation, and (3) an analysis of qualitative and quantitative data collected from graduate and assistant nurses in two departments of an Austrian university teaching hospital. The CFIR was used to organise data at and across time points and to assess their influence on the implementation process, resulting in implementation and service outcomes. Overall, the CFIR was shown to be a comprehensive framework for the implementation of a guideline into hospital-based nursing practice. However, the CFIR did not account for some crucial factors during the planning phase of an implementation process, such as consideration of stakeholder aims and wishes/needs when implementing an innovation, pre-established measures related to the intended innovation, and pre-established strategies for implementing an innovation. For the CFIR constructs 'reflecting & evaluating' and 'engaging', more specific definitions are recommended. The framework and its supplements could easily be used by researchers, and their scope was appropriate for the complexity of a prospective CPG-implementation project. The CFIR facilitated qualitative data analysis and provided a structure that allowed project results to be organised and viewed in a broader context to explain the main findings. The CFIR was a valuable and helpful framework for (1) the assessment of the baseline, process and final state of the implementation process and its influential factors, (2) the content analysis of qualitative data collected throughout the implementation process, and (3) explaining the main findings.
Briggs, Andrew M; Jordan, Joanne E; Jennings, Matthew; Speerin, Robyn; Bragge, Peter; Chua, Jason; Woolf, Anthony D; Slater, Helen
2017-04-01
To develop a globally informed framework to evaluate readiness for implementation, and success after implementation, of musculoskeletal models of care (MOCs), three phases were undertaken: 1) a qualitative study with 27 Australian subject matter experts (SMEs) to develop a draft framework; 2) an eDelphi study with an international panel of 93 SMEs across 30 nations to evaluate face validity, and to refine and establish consensus on the framework components; and 3) translation of the framework into a user-focused resource and evaluation of its acceptability with the eDelphi panel. A comprehensive evaluation framework was developed for judging the readiness and success of musculoskeletal MOCs. The framework consists of 9 domains, with each domain containing a number of themes underpinned by detailed elements. In the first Delphi round, scores of "partly agree" or "completely agree" with the draft framework ranged from 96.7% to 100%. In the second round, "essential" scores ranged from 58.6% to 98.9%, resulting in 14 of 34 themes being classified as essential. SMEs strongly agreed or agreed that the final framework was useful (98.8%), usable (95.1%), credible (100%) and appealing (93.9%). Overall, 96.3% strongly supported or supported the final structure of the framework as presented, while 100%, 96.3%, and 100% strongly supported or supported the content within the readiness, initiating implementation, and success streams, respectively. An empirically derived framework to evaluate the readiness and success of musculoskeletal MOCs was strongly supported by an international panel of SMEs. The framework provides an important internationally applicable benchmark for the development, implementation, and evaluation of musculoskeletal MOCs.
Selection of remedial alternatives for mine sites: a multicriteria decision analysis approach.
Betrie, Getnet D; Sadiq, Rehan; Morin, Kevin A; Tesfamariam, Solomon
2013-04-15
The selection of remedial alternatives for mine sites is a complex task because it involves multiple criteria, often with conflicting objectives. However, the existing framework used to select remedial alternatives lacks multicriteria decision analysis (MCDA) aids and does not consider uncertainty in the selection of alternatives. The objective of this paper is to improve the existing framework by introducing deterministic and probabilistic MCDA methods. The Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) methods have been implemented in this study. The MCDA analysis involves preparing the inputs to the PROMETHEE methods (identifying the alternatives, defining the criteria, deriving the criteria weights using the analytic hierarchy process (AHP), defining the probability distributions of the criteria weights, and conducting Monte Carlo simulation (MCS)); running the PROMETHEE methods on these inputs; and conducting a sensitivity analysis. A case study is presented to demonstrate the improved framework at a mine site. The results show that the improved framework provides a reliable way of selecting remedial alternatives as well as quantifying the impact of different criteria on the selection.
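A minimal deterministic PROMETHEE II sketch follows, using the "usual" preference function; the alternatives, weights, and scores are invented for illustration. A probabilistic variant would simply resample the weight vector from the elicited distributions inside a Monte Carlo loop and aggregate the resulting rankings.

```python
import numpy as np

def promethee_ii(scores, weights):
    """PROMETHEE II with the 'usual' preference function: alternative a
    is preferred to b on a criterion iff it scores strictly higher."""
    n = scores.shape[0]
    pref = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                # Weighted share of criteria on which a beats b.
                pref[a, b] = np.sum(weights * (scores[a] > scores[b]))
    phi_plus = pref.sum(axis=1) / (n - 1)    # how strongly a outranks
    phi_minus = pref.sum(axis=0) / (n - 1)   # how strongly a is outranked
    return phi_plus - phi_minus              # net flow: rank high to low

# Rows: remedial alternatives; columns: criteria (higher = better).
scores = np.array([[0.8, 0.2, 0.5],
                   [0.4, 0.9, 0.6],
                   [0.6, 0.5, 0.9]])
weights = np.array([0.5, 0.3, 0.2])          # e.g. derived from AHP
print(promethee_ii(scores, weights))
```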
Code C# for chaos analysis of relativistic many-body systems
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Felea, D.; Stan, E.; Esanu, T.
2010-08-01
This work presents a new Microsoft Visual C# .NET code library, conceived as a general object-oriented solution for chaos analysis of three-dimensional, relativistic many-body systems. In this context, we implemented the Lyapunov exponent and the "fragmentation level" (defined using graph theory and the Shannon entropy). Inspired by existing studies on billiard nuclear models and clusters of galaxies, we tried to apply the virial theorem to a simplified many-body system composed of nucleons. A possible application of the "virial coefficient" to the stability analysis of chaotic systems is also discussed.
Catalogue identifier: AEGH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGH_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 30 053
No. of bytes in distributed program, including test data, etc.: 801 258
Distribution format: tar.gz
Programming language: Visual C# .NET 2005
Computer: PC
Operating system: .NET Framework 2.0 running on MS Windows
Has the code been vectorized or parallelized?: Each many-body system is simulated on a separate execution thread
RAM: 128 Megabytes
Classification: 6.2, 6.5
External routines: .NET Framework 2.0 Library
Nature of problem: Chaos analysis of three-dimensional, relativistic many-body systems.
Solution method: Second-order Runge-Kutta algorithm for simulating relativistic many-body systems. Object-oriented solution, easy to reuse, extend and customize, in any development environment that accepts .NET assemblies or COM components. Implementation of: Lyapunov exponent, "fragmentation level", "average system radius", "virial coefficient", and an energy-conservation precision test.
Additional comments: Easy copy/paste-based deployment method.
Running time: Quadratic complexity.
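As a hedged sketch of the central quantity the library computes, the following estimates the largest Lyapunov exponent by Benettin's two-trajectory renormalisation method. The Lorenz system stands in for the relativistic many-body dynamics, Python replaces the library's C#, and all names are illustrative.

```python
import numpy as np

def rk4(f, x, dt):
    """Classical fourth-order Runge-Kutta step."""
    k1 = f(x); k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def lorenz(x, s=10.0, r=28.0, b=8.0 / 3.0):
    return np.array([s * (x[1] - x[0]),
                     x[0] * (r - x[2]) - x[1],
                     x[0] * x[1] - b * x[2]])

def max_lyapunov(f, x0, dt=0.01, steps=20000, d0=1e-8):
    """Benettin's method: follow a reference and a perturbed trajectory,
    renormalising their separation after every step."""
    x, y, acc = x0, x0 + d0, 0.0
    for _ in range(steps):
        x, y = rk4(f, x, dt), rk4(f, y, dt)
        d = np.linalg.norm(y - x)
        acc += np.log(d / d0)
        y = x + (y - x) * (d0 / d)   # rescale the offset back to d0
    return acc / (steps * dt)

print(max_lyapunov(lorenz, np.array([1.0, 1.0, 1.0])))  # ~0.9 for Lorenz
```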
A quality of service negotiation procedure for distributed multimedia presentational applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafid, A.; Bochmann, G.V.; Kerherve, B.
Most current approaches to designing and implementing distributed multimedia (MM) presentational applications, e.g. news-on-demand, have concentrated on the performance of the continuous-media file servers in terms of seek-time overhead and real-time disk scheduling; in particular, the QoS negotiation mechanisms they provide are used in a rather static manner, that is, these mechanisms are restricted to evaluating the capacity of certain system components, e.g. a file server known a priori to support a specific quality of service (QoS). In contrast to those approaches, we propose a general QoS negotiation framework that supports the dynamic choice of a configuration of system components to support the QoS requirements of the user of a specific application: we consider different possible system configurations and select an optimal one to provide the appropriate QoS support. In this paper we document the design and implementation of a QoS negotiation procedure for distributed MM presentational applications, such as news-on-demand. The negotiation procedure described here is an instantiation of the general framework for QoS negotiation which was developed earlier. Our proposal differs in many respects from the negotiation functions provided by existing approaches: (1) the negotiation process uses an optimization approach to find a configuration of system components which supports the user requirements, (2) the negotiation process supports the negotiation of a MM document and not only a single monomedia object, (3) the QoS negotiation takes into account the cost to the user, and (4) the negotiation process may be used to support automatic adaptation in reaction to QoS degradations, without intervention by the user/application.
Integrating network ecology with applied conservation: a synthesis and guide to implementation.
Kaiser-Bunbury, Christopher N; Blüthgen, Nico
2015-07-10
Ecological networks are a useful tool for studying the complexity of biotic interactions at the community level. Advances in the understanding of network patterns encourage the application of a network approach in disciplines other than theoretical ecology, such as biodiversity conservation. So far, however, practical applications have been meagre. Here we present a framework for network analysis to be harnessed to advance conservation management, using plant-pollinator networks and islands as model systems. Conservation practitioners require indicators to monitor and assess management effectiveness and validate overall conservation goals. By distinguishing between two network attributes, the 'diversity' and 'distribution' of interactions, on three hierarchical levels (species, guild/group and network), we identify seven quantitative metrics to describe changes in network patterns that have implications for conservation. The diversity metrics are partner diversity, vulnerability/generality, interaction diversity and interaction evenness; the distribution metrics are the specialization indices d' and [Formula: see text], and modularity. Distribution metrics account for sampling bias and may therefore be suitable indicators for detecting human-induced changes to plant-pollinator communities, thus indirectly assessing the structural and functional robustness and integrity of ecosystems. We propose an implementation pathway that outlines the stages required to successfully embed a network approach in biodiversity conservation. Most importantly, only if conservation action and study design are aligned by practitioners and ecologists through joint experiments are the findings of a conservation network approach equally beneficial for advancing adaptive management and ecological network theory. We list potential obstacles to the framework, highlight the shortfall in empirical, mostly experimental, network data and discuss possible solutions.
European distributed seismological data archives infrastructure: EIDA
NASA Astrophysics Data System (ADS)
Clinton, John; Hanka, Winfried; Mazza, Salvatore; Pederson, Helle; Sleeman, Reinoud; Stammler, Klaus; Strollo, Angelo
2014-05-01
The European Integrated waveform Data Archive (EIDA) is a distributed data center system within ORFEUS that (a) securely archives seismic waveform data and related metadata gathered by European research infrastructures, and (b) provides transparent access to the archives for the geosciences research communities. EIDA was founded in 2013 by the ORFEUS Data Center, GFZ, RESIF, ETH, INGV and BGR to ensure the sustainability of a distributed archive system, the implementation of standards (e.g. FDSN StationXML, FDSN webservices), and the coordination of new developments. Under the mandate of the ORFEUS Board of Directors and Executive Committee, the founding group is responsible for steering and maintaining the technical developments and organization of the European distributed seismic waveform data archive and its integration within broader multidisciplinary frameworks like EPOS. EIDA currently offers uniform access to unrestricted data from 8 European archives (www.orfeus-eu.org/eida), linked by the Arclink protocol, hosting data from 75 permanent networks (1,800+ stations) and 33 temporary networks (1,200+ stations). Moreover, each archive may also provide unique, restricted datasets. A web interface, developed at GFZ, offers interactive access to different catalogues (EMSC, GFZ, USGS) and EIDA waveform data. Clients and toolboxes like arclink_fetch and ObsPy can connect directly to any EIDA node to collect data. Current developments are directed towards the implementation of quality parameters and strong-motion parameters.
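For example, a few lines of ObsPy suffice to pull waveforms from an FDSN-compliant node. The endpoint, network/station codes, and time window below are placeholders chosen for illustration, not a recommendation of a particular archive.

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

# Placeholder endpoint and codes; any EIDA node exposing FDSN
# webservices can be addressed the same way.
client = Client("GFZ")
t0 = UTCDateTime("2014-01-01T00:00:00")
stream = client.get_waveforms(network="GE", station="APE",
                              location="*", channel="BHZ",
                              starttime=t0, endtime=t0 + 600)
print(stream)       # summary of the retrieved traces
stream.plot()       # quick-look visualisation
```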
ERIC Educational Resources Information Center
Mikula, Brendon D.; Heckler, Andrew F.
2017-01-01
We propose a framework for improving accuracy, fluency, and retention of basic skills essential for solving problems relevant to STEM introductory courses, and implement the framework for the case of basic vector math skills over several semesters in an introductory physics course. Using an iterative development process, the framework begins with…
Gold, Rachel; Bunce, Arwen E; Cohen, Deborah J; Hollombe, Celine; Nelson, Christine A; Proctor, Enola K; Pope, Jill A; DeVoe, Jennifer E
2016-08-01
The objective of this study was to empirically demonstrate the use of a new framework for describing the strategies used to implement quality improvement interventions, and to provide an example that others may follow. Implementation strategies are the specific approaches, methods, structures, and resources used to introduce and encourage uptake of a given intervention's components. Such strategies have not been regularly reported in descriptions of interventions' effectiveness, or in assessments of how proven interventions are implemented in new settings. This lack of reporting may hinder efforts to successfully translate effective interventions into "real-world" practice. A recently published framework was designed to standardize reporting on implementation strategies in the implementation science literature. We applied this framework to describe the strategies used to implement a single intervention in its original commercial care setting, and when implemented in community health centers from September 2010 through May 2015. Per this framework, the target (clinic staff) and outcome (prescribing rates) remained the same across settings; the actor, action, temporality, and dose were adapted to fit local context. The framework proved helpful in articulating which of the implementation strategies were kept constant and which were tailored to fit diverse settings, and it simplified the reporting of their effects. Researchers should consider consistently reporting this information, which could be crucial to the success or failure of implementing proven interventions effectively across diverse care settings. clinicaltrials.gov identifier: NCT02299791.
Ilott, Irene; Gerrish, Kate; Booth, Andrew; Field, Becky
2013-10-01
There is an international imperative to implement research findings in clinical practice to improve health care. Understanding the dynamics of change requires knowledge from theoretical and empirical studies. This paper presents a novel approach to testing a new meta-theoretical framework: the Consolidated Framework for Implementation Research (CFIR). The utility of the Framework was evaluated using a post hoc, deductive analysis of 11 narrative accounts of innovation in health care services and practice from England, collected in 2010. A matrix comprising the five domains and 39 constructs of the Framework was developed to examine the coherence of the terminology, to compare results across contexts, and to identify new theoretical developments. The Framework captured the complexity of implementation across 11 diverse examples, offering theoretically informed, comprehensive coverage. It drew attention to relevant points in individual cases together with patterns across cases; for example, all were internally developed innovations that brought direct or indirect patient advantage. In 10 cases, the change was led by clinicians. Most initiatives had been maintained for several years, and there was evidence of spread in six examples. Areas for further development within the Framework include sustainability and patient/public engagement in implementation. Our analysis suggests that this conceptual framework has the potential to offer useful insights, whether as part of a situational analysis or by developing context-specific propositions for hypothesis testing. Such studies are vital now that innovation is being promoted as core business for health care.
NASA Astrophysics Data System (ADS)
Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.
Large scientific equipment is controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Such systems are called generically Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as client-server applications. In this framework, the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose multi-layer system.
Towards a distributed information architecture for avionics data
NASA Technical Reports Server (NTRS)
Mattmann, Chris; Freeborn, Dana; Crichton, Dan
2003-01-01
Avionics data at the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL) consists of distributed, unmanaged, and heterogeneous information that is hard for flight system design engineers to find and use on new NASA/JPL missions. The development of a systematic approach for capturing, accessing and sharing avionics data critical to the support of NASA/JPL missions and projects is required. We propose a general information architecture for managing the existing distributed avionics data sources and a method for querying and retrieving avionics data using the Object Oriented Data Technology (OODT) framework. OODT uses an XML messaging infrastructure that profiles data products and their locations using the ISO-11179 data model for describing data products. Queries against a common data dictionary (which implements the ISO model) are translated to domain-dependent source data models, and distributed data products are returned asynchronously through the OODT middleware. Further work will include the ability to 'plug and play' new manufacturer data sources, which are distributed at avionics component manufacturer locations throughout the United States.
Hadoop neural network for parallel and distributed feature selection.
Hodge, Victoria J; O'Keefe, Simon; Austin, Jim
2016-06-01
In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative-memory (binary) neural network which is highly amenable to parallel and distributed processing and fits with the Hadoop paradigm. There are many feature selectors described in the literature, each with various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing: each feature selector can be divided into subtasks, and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel), allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation, and the overall processing can be greatly reduced by processing the common aspects of the feature selectors only once and propagating these aspects across all five feature selectors as necessary. This allows the best feature selector, and the actual features to select, to be identified for large and high-dimensional data sets by exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop.
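The division of a feature selector into per-feature subtasks can be sketched with a local process pool standing in for Hadoop YARN. The scoring function (absolute correlation) and all names are assumptions for illustration; any of the five selectors could be slotted into the map step.

```python
from multiprocessing import Pool
import numpy as np

def feature_score(args):
    """Map task: score one feature independently of all others."""
    j, column, y = args
    return j, abs(np.corrcoef(column, y)[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 50))
    y = X[:, 3] - 2 * X[:, 17] + rng.normal(size=1000)
    tasks = [(j, X[:, j], y) for j in range(X.shape[1])]
    with Pool(4) as pool:                        # stands in for the cluster
        scores = pool.map(feature_score, tasks)  # one subtask per feature
    # Reduce step: keep the top-ranked features.
    top = sorted(scores, key=lambda s: s[1], reverse=True)[:5]
    print(top)   # features 3 and 17 should dominate
```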
A development framework for distributed artificial intelligence
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1989-01-01
The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and access control across heterogeneous computing environments. The metaAgents language defines models for agent organization, coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides the implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.
Surfer: An Extensible Pull-Based Framework for Resource Selection and Ranking
NASA Technical Reports Server (NTRS)
Zolano, Paul Z.
2004-01-01
Grid computing aims to connect large numbers of geographically and organizationally distributed resources to increase computational power and improve resource utilization and accessibility. In order to use grids effectively, users need to be connected to the best available resources at any given time. As grids are in constant flux, users cannot be expected to keep up with the configuration and status of the grid; thus they must be provided with automatic resource brokering for selecting and ranking resources that meet the constraints and preferences they specify. This paper presents a new OGSI-compliant resource selection and ranking framework called Surfer that has been implemented as part of NASA's Information Power Grid (IPG) project. Surfer is highly extensible and may be integrated into any grid environment by adding information providers knowledgeable about that environment.
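The select-then-rank pattern can be sketched in a few lines. The resource records, field names, constraints, and preference score below are invented for illustration and do not come from Surfer's actual schema.

```python
# Hypothetical resource records gathered from information providers.
resources = [
    {"name": "clusterA", "cpus": 128, "mem_gb": 256, "load": 0.80},
    {"name": "clusterB", "cpus": 64,  "mem_gb": 512, "load": 0.20},
    {"name": "clusterC", "cpus": 256, "mem_gb": 128, "load": 0.55},
]

def broker(resources, constraints, preference):
    """Keep resources meeting every hard constraint, then rank the
    survivors by a user-supplied preference score (best first)."""
    feasible = [r for r in resources
                if all(pred(r) for pred in constraints)]
    return sorted(feasible, key=preference, reverse=True)

ranked = broker(resources,
                constraints=[lambda r: r["cpus"] >= 64,
                             lambda r: r["mem_gb"] >= 200],
                preference=lambda r: (1 - r["load"]) * r["cpus"])
print([r["name"] for r in ranked])   # e.g. ['clusterB', 'clusterA']
```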
ERIC Educational Resources Information Center
Bartholomew, Mitch; De Jong, David
2017-01-01
Despite the successful implementation of the Response to Intervention (RtI) framework in many elementary schools, there is little evidence of successful implementation in high school settings. Several themes emerged from the interviews of nine secondary principals, including a lack of knowledge and training for successful implementation, the…
Single-qubit unitary gates by graph scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumer, Benjamin A.; Underwood, Michael S.; Feder, David L.
2011-12-15
We consider the effects of plane-wave states scattering off finite graphs as an approach to implementing single-qubit unitary operations within the continuous-time quantum walk framework of universal quantum computation. Four semi-infinite tails are attached at arbitrary points of a given graph, representing the input and output registers of a single qubit. For a range of momentum eigenstates, we enumerate all of the graphs with up to n=9 vertices for which the scattering implements a single-qubit gate. As n increases, the number of new unitary operations increases exponentially, and for n>6 the majority correspond to rotations about axes distributed roughly uniformly across the Bloch sphere. Rotations by both rational and irrational multiples of π are found.
Internet of things for an age-friendly healthcare.
Konstantinidis, Evdokimos I; Bamparopoulos, Giorgos; Billis, Antonis; Bamidis, Panagiotis D
2015-01-01
In healthcare applications, a large cohort of recent implementations utilises IoT-oriented infrastructures (XMPP) as well as smart mobile devices as communication gateways. IoT characteristics such as communication/connectivity, pervasive computing and ambient intelligence are all highly relevant to Active and Healthy Aging environments. This paper presents a new idea: devices directly connected to the IoT (a glucose meter is used as an example herein) that comply with the XMPP messaging protocol, combined with a recently released Controller Application Communication (CAC) framework for distributed, cross-platform communication. A web-based exergaming platform and a disease management tool provide the vehicles for demonstrating the feasibility and the successful implementation and integration of the aforementioned infrastructure.
Genotyping in the cloud with Crossbow.
Gurtowski, James; Schatz, Michael C; Langmead, Ben
2012-09-01
Crossbow is a scalable, portable, and automatic cloud computing tool for identifying SNPs from high-coverage, short-read resequencing data. It is built on Apache Hadoop, an implementation of the MapReduce software framework. Hadoop allows Crossbow to distribute read alignment and SNP calling subtasks over a cluster of commodity computers. Two robust tools, Bowtie and SOAPsnp, implement the fundamental alignment and variant calling operations respectively, and have demonstrated capabilities within Crossbow of analyzing approximately one billion short reads per hour on a commodity Hadoop cluster with 320 cores. Through protocol examples, this unit will demonstrate the use of Crossbow for identifying variations in three different operating modes: on a Hadoop cluster, on a single computer, and on the Amazon Elastic MapReduce cloud computing service.
Data management in an object-oriented distributed aircraft conceptual design environment
NASA Astrophysics Data System (ADS)
Lu, Zhijie
In the competitive global marketplace, aerospace companies are forced to deliver the right products to the right market, at the right cost, and at the right time. However, the rapid development of technologies and new business opportunities, such as mergers, acquisitions, and supply chain management, have dramatically increased the complexity of designing an aircraft. The pressure to reduce design cycle time and cost is therefore enormous. One way to resolve this dilemma is to develop and apply advanced engineering environments (AEEs), which are distributed collaborative virtual design environments linking researchers, technologists, designers, etc., by incorporating application tools and advanced computational, communications, and networking facilities. Aircraft conceptual design, as the first design stage, provides a major opportunity to compress design cycle time and is the cheapest place to make design changes. However, traditional aircraft conceptual design programs, which are monolithic, cannot provide satisfactory functionality to meet new design requirements because they lack domain flexibility and analysis scalability. We are therefore in need of the next-generation aircraft conceptual design environment (NextADE). To build the NextADE, the framework and the data management problem are the two major problems that need to be addressed at the forefront. Solving these two problems, particularly the data management problem, is the focus of this research. In this dissertation, in light of AEEs, a distributed object-oriented framework is first formulated and tested for the NextADE. To improve interoperability and simplify the integration of heterogeneous application tools, data management is one of the major problems that need to be tackled. To solve this problem, taking into account the characteristics of aircraft conceptual design data, a robust, extensible object-oriented data model is proposed in accordance with the distributed object-oriented framework. By overcoming the shortcomings of the traditional approach to modeling aircraft conceptual design data, this data model makes it possible to capture the specific detailed information of aircraft conceptual design without sacrificing generality, one of the most desired features of a data model for aircraft conceptual design. Based upon this data model, a prototype of the data management system, which is one of the fundamental building blocks of the NextADE, is implemented using state-of-the-art information technologies. Using a general-purpose integration software package to demonstrate the efficacy of the proposed framework and the data management system, the NextADE is initially implemented by integrating the prototype of the data management system with the other building blocks of the design environment, such as disciplinary analysis programs and mission analysis programs. As experiments, two case studies are conducted in the integrated design environment. One is based upon a simplified conceptual design of a notional conventional aircraft; the other is a simplified conceptual design of an unconventional aircraft. As a result of the experiments, the proposed framework and the data management approach are shown to be feasible solutions to the research problems.
NASA Astrophysics Data System (ADS)
Wallace, Jon Michael
2003-10-01
Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The next two steps are the most distinctive. They involve a step to efficiently characterize and quantify the system-driven local parameter space, and a subsequent step that uses this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information about the responses, these models combine statistical information about the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter, and even subsets of parameters representing entire contributing analyses, can now be rank-ordered with respect to their contribution to not just one response but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the use of existing probabilistic methods such as response-surface Monte Carlo and Fast Probability Integration. The framework developed in this study is implemented to assess the finite-element-based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach, as well as to a validation approach conducted through large-sample Monte Carlo simulations. The framework resulted in a considerable improvement in the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.
Experimental research control software system
NASA Astrophysics Data System (ADS)
Cohn, I. A.; Kovalenko, A. G.; Vystavkin, A. N.
2014-05-01
A software system intended for the automation of small-scale research has been developed. The software allows one to control equipment and to acquire and process data by means of simple scripts. The main purpose of the development is to make experiment automation easier, significantly reducing the effort of automating an experimental setup. In particular, minimal programming skills are required, and scripts are easy for supervisors to review. Interactions between scripts and equipment are managed automatically, allowing multiple scripts to run simultaneously. Unlike well-known commercial data acquisition software systems, control is performed through an imperative scripting language. This approach eases the implementation of complex control and data acquisition algorithms. A modular interface library handles interaction with external interfaces. While the most widely used interfaces are already implemented, a simple framework is provided for fast implementation of new software and hardware interfaces. While the software is under continuous development, with new features being implemented, it is already used in our laboratory for the automation of helium-3 cryostat control and data acquisition. The software is open source and distributed under the GNU General Public License.
NASA Astrophysics Data System (ADS)
Samaké, Abdoulaye; Rampal, Pierre; Bouillon, Sylvain; Ólason, Einar
2017-12-01
We present a parallel implementation framework for a new dynamic/thermodynamic sea-ice model, called neXtSIM, based on the Elasto-Brittle rheology and using an adaptive mesh. The spatial discretisation of the model uses the finite-element method. The temporal discretisation is semi-implicit, and the advection is achieved using either a pure Lagrangian scheme or an Arbitrary Lagrangian-Eulerian (ALE) scheme. The parallel implementation presented here focuses on the distributed-memory approach using the message-passing library MPI. The efficiency and scalability of the parallel algorithms are illustrated by numerical experiments performed using up to 500 processor cores of a cluster computing system. The performance obtained by the proposed parallel implementation of the neXtSIM code is shown to be sufficient for state-of-the-art sea-ice forecasting and geophysical process studies over geographical domains of several million square kilometres, such as the Arctic region.
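As a minimal sketch of the distributed-memory pattern such a solver relies on (not neXtSIM's actual finite-element decomposition), the mpi4py fragment below performs a one-dimensional halo exchange before a local update; the strip sizes and periodic neighbour layout are assumptions for illustration. Run it with, e.g., `mpiexec -n 4 python halo.py`.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a strip of the mesh plus one ghost cell on each side.
n_local = 1000
field = np.full(n_local + 2, float(rank))

left = (rank - 1) % size
right = (rank + 1) % size

# Exchange ghost cells with neighbours, as a distributed-memory solver
# must do every time step before applying a local update.
comm.Sendrecv(sendbuf=field[1:2], dest=left,
              recvbuf=field[-1:], source=right)
comm.Sendrecv(sendbuf=field[-2:-1], dest=right,
              recvbuf=field[0:1], source=left)

# A toy local smoothing update that consumes the received halo values.
field[1:-1] = 0.5 * field[1:-1] + 0.25 * (field[:-2] + field[2:])
if rank == 0:
    print("halo exchange complete on", size, "ranks")
```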
NASA Astrophysics Data System (ADS)
Zhu, Meng-Zheng; Ye, Liu
2015-04-01
An efficient scheme is proposed to implement a quantum cloning machine in separate cavities, based on a hybrid interaction between electron-spin systems placed in the cavities and an optical coherent pulse. The coefficient of the output state for the present cloning machine is simply the direct product of two trigonometric functions, which ensures that different types of quantum cloning machine can be realised readily in the same framework by appropriately adjusting the rotation angles. The present scheme can implement optimal one-to-two symmetric (asymmetric) universal quantum cloning, optimal symmetric (asymmetric) phase-covariant cloning, optimal symmetric (asymmetric) real-state cloning, optimal one-to-three symmetric economical real-state cloning, and optimal symmetric cloning of qubits given by an arbitrary axisymmetric distribution. In addition, photon loss of the qubus beams during transmission, and the decoherence effects caused by such photon loss, are investigated.
Konstantinidis, Georgios; Anastassopoulos, George C; Karakos, Alexandros S; Anagnostou, Emmanouil; Danielides, Vasileios
2012-04-01
The aim of this study is to present our perspectives on healthcare analysis and design and the lessons learned from our experience with the development of a distributed, object-oriented Clinical Information System (CIS). In order to overcome known issues regarding the development, implementation and, finally, acceptance of a CIS by physicians, we decided to develop a novel object-oriented methodology by integrating usability principles and techniques into a simplified version of a well-established software engineering process (SEP), the Unified Process (UP). A multilayer architecture has been defined and implemented with the use of a vendor application framework. Our first experiences from a pilot implementation of our CIS are positive. This approach allowed us to gain a socio-technical understanding of the domain and enabled us to identify all the important factors that define both the structure and the behavior of a Health Information System.
NASA Astrophysics Data System (ADS)
Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter
2015-12-01
AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows the running of AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the diversity of ATLAS event processing workloads on various computing resources: Grid, opportunistic resources and HPC.
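The shared-event-queue strategy can be sketched with Python's multiprocessing, which forks on Linux and therefore gives workers copy-on-write access to the parent's memory, analogous to AthenaMP's approach. The event tokens, worker count, and all names here are illustrative assumptions, not ATLAS code.

```python
import os
import multiprocessing as mp

def worker(queue, wid):
    """Event loop of one forked processor: pull event tokens from the
    shared queue until a sentinel arrives."""
    while True:
        token = queue.get()
        if token is None:
            break
        # ... reconstruct / simulate event `token` here ...
        print(f"worker {wid} (pid {os.getpid()}) processed event {token}")

if __name__ == "__main__":
    queue = mp.Queue()          # the shared event queue
    n_workers = 4
    # On Linux, fork() gives workers copy-on-write access to pages the
    # parent initialised (geometry, conditions data, ...).
    procs = [mp.Process(target=worker, args=(queue, w))
             for w in range(n_workers)]
    for p in procs:
        p.start()
    for event in range(16):    # the parent distributes event tokens
        queue.put(event)
    for _ in procs:
        queue.put(None)         # one stop sentinel per worker
    for p in procs:
        p.join()
```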
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsmith, Steven Y.; Spires, Shannon V.
There are currently two proposed standards for agent communication languages, namely KQML (Finin, Labrou, and Mayfield 1994) and the FIPA ACL. Neither standard has yet achieved primacy, and neither has been evaluated extensively in an open environment such as the Internet. It seems prudent, therefore, to design a general-purpose agent communications facility for new agent architectures that is flexible yet provides an architecture that accepts many different specializations. In this paper we exhibit the salient features of an agent communications architecture based on distributed metaobjects. This architecture captures design commitments at a metaobject level, leaving the base-level design and implementation up to the agent developer. The scope of the metamodel is broad enough to accommodate many different communication protocols, interaction protocols, and knowledge-sharing regimes through extensions to the metaobject framework. We conclude that, with a powerful distributed object substrate that supports metaobject communications, a general framework can be developed that will effectively enable different approaches to agent communications in the same agent system. We have implemented a KQML-based communications protocol and have several special-purpose interaction protocols under development.
Rehm, Jürgen; Crépault, Jean-François; Fischer, Benedikt
2016-08-20
This commentary on the editorial by Hajizadeh argues that the economic, social, and health consequences of legalizing cannabis in Canada will depend in large part on the exact stipulations (mainly from the federal government) and on the implementation, regulation, and practice of the legalization act (at provincial and municipal levels). A strict regulatory framework is necessary to minimize the health burden attributable to cannabis use. This includes, prominently, control of the production and sale of legal cannabis, including control of price and content, with a ban on marketing and advertisement. Regulation of medical marijuana should be part of such a framework as well. © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Moore, Justin B.; Carson, Russell L.; Webster, Collin A.; Singletary, Camelia R.; Castelli, Darla M.; Pate, Russell R.; Beets, Michael W.; Beighle, Aaron
2018-01-01
Comprehensive school physical activity programs (CSPAPs) have been endorsed as a promising strategy to increase youth physical activity (PA) in school settings. A CSPAP is a five-component approach, which includes opportunities before, during, and after school for PA. Extensive resources are available to public health practitioners and school officials regarding what should be implemented, but little guidance and few resources are available regarding how to effectively implement a CSPAP. Implementation science provides a number of conceptual frameworks that can guide implementation of a CSPAP, but few published studies have applied an implementation science framework to a CSPAP. Therefore, we developed Be a Champion! (BAC), which represents a synthesis of implementation science strategies, modified for application to CSPAP implementation in schools while allowing for local tailoring of the approach. This article describes BAC while providing examples from the implementation of a CSPAP in three rural elementary schools. PMID:29354631
Rosen, C S; Matthieu, M M; Wiltsey Stirman, S; Cook, J M; Landes, S; Bernardy, N C; Chard, K M; Crowley, J; Eftekhari, A; Finley, E P; Hamblen, J L; Harik, J M; Kehle-Forbes, S M; Meis, L A; Osei-Bonsu, P E; Rodriguez, A L; Ruggiero, K J; Ruzek, J I; Smith, B N; Trent, L; Watts, B V
2016-11-01
Since 2006, the Veterans Health Administration (VHA) has instituted policy changes and training programs to support system-wide implementation of two evidence-based psychotherapies (EBPs) for posttraumatic stress disorder (PTSD). To assess lessons learned from this unprecedented effort, we used PubMed and the PILOTS databases and networking with researchers to identify 32 reports on contextual influences on implementation or sustainment of EBPs for PTSD in VHA settings. Findings were initially organized using the exploration, planning, implementation, and sustainment framework (EPIS; Aarons et al. in Adm Policy Ment Health Health Serv Res 38:4-23, 2011). Results that could not be adequately captured within the EPIS framework, such as implementation outcomes and adopter beliefs about the innovation, were coded using constructs from the reach, effectiveness, adoption, implementation, maintenance (RE-AIM) framework (Glasgow et al. in Am J Public Health 89:1322-1327, 1999) and Consolidated Framework for Implementation Research (CFIR; Damschroder et al. in Implement Sci 4(1):50, 2009). We highlight key areas of progress in implementation, identify continuing challenges and research questions, and discuss implications for future efforts to promote EBPs in large health care systems.
A framework for sensitivity analysis of decision trees.
Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław
2018-01-01
In this paper, we consider sequential decision problems under uncertainty, represented as decision trees. Sensitivity analysis is a crucial element of decision making, and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of the probabilities. We develop a framework for performing sensitivity analysis of optimal strategies that accounts for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of the probabilities. We verify the properties of our approach in two cases: (a) probabilities in the tree are primitives of the model and can be modified independently; (b) probabilities in the tree reflect some underlying, structural probabilities and are interrelated. We provide a free software tool implementing the described methods.
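A toy version of the pessimistic-perturbation analysis (our illustration, not the authors' software; all probabilities and payoffs are assumed) is:

```python
# Compare strategies under nominal probabilities and under a pessimistic
# perturbation that shifts probability mass toward the worst outcome.
def expected_value(probs, payoffs):
    return sum(p * v for p, v in zip(probs, payoffs))

def pessimistic(probs, payoffs, eps=0.1):
    # Scale all probabilities down by eps, then give the freed mass
    # to the worst outcome; the result is still a valid distribution.
    worst = min(range(len(payoffs)), key=payoffs.__getitem__)
    shifted = [p * (1.0 - eps) for p in probs]
    shifted[worst] += sum(probs) - sum(shifted)
    return shifted

nominal = [0.6, 0.4]                        # assumed chance-node probabilities
strategies = {"A": [100.0, -50.0], "B": [40.0, 30.0]}
for name, payoffs in strategies.items():
    ev_nom = expected_value(nominal, payoffs)
    ev_pes = expected_value(pessimistic(nominal, payoffs), payoffs)
    print(name, round(ev_nom, 1), round(ev_pes, 1))
# Strategy A maximizes the nominal expected value, but B degrades far
# less under the pessimistic perturbation, i.e. B is more robust.
```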
A Framework for Dimensioning VDL-2 Air-Ground Networks
NASA Technical Reports Server (NTRS)
Ribeiro, Leila Z.; Monticone, Leone C.; Snow, Richard E.; Box, Frank; Apaza, Rafel; Bretmersky, Steven
2014-01-01
This paper describes a framework developed at MITRE for dimensioning a Very High Frequency (VHF) Digital Link Mode 2 (VDL-2) air-to-ground network. This framework was developed to support the FAA's Data Communications (Data Comm) program by providing estimates of the capacity required for the air-ground network services that will support Controller-Pilot Data Link Communications (CPDLC), as well as the spectrum needed to operate the system at required levels of performance. The Data Comm program is part of the FAA's NextGen initiative to implement advanced communication capabilities in the National Airspace System (NAS). The first component of the framework is the radio-frequency (RF) coverage design for the network ground stations. We then describe the approach used to assess the aircraft geographical distribution and the data traffic demand expected in the network. The next step is resource allocation, utilizing optimization algorithms developed in MITRE's Spectrum Prospector(TM) tool to propose frequency assignment solutions, and a NASA-developed VDL-2 tool to perform simulations and determine whether a proposed plan meets the desired performance requirements. The framework presented is capable of providing quantitative estimates of multiple variables related to the air-ground network, in order to satisfy established coverage, capacity and latency performance requirements. Outputs include: coverage provided at different altitudes; data capacity required in the network, aggregated or on a per-ground-station basis; spectrum (pool of frequencies) needed for the system to meet a target performance; an optimized frequency plan for a given scenario; expected performance given available spectrum; and estimates of throughput distributions for a given scenario. We conclude with a discussion aimed at providing insight into the tradeoffs and challenges identified with respect to radio resource management for VDL-2 air-ground networks.
A distributed control approach for power and energy management in a notional shipboard power system
NASA Astrophysics Data System (ADS)
Shen, Qunying
The main goal of this thesis is to present a power control module (PCON) based approach for power and energy management and to examine its control capability in a shipboard power system (SPS). The proposed control scheme is implemented in a notional medium voltage direct current (MVDC) integrated power system (IPS) for electric ships. To realize control functions such as ship mode selection, generator launch scheduling, blackout monitoring, and fault ride-through, a PCON-based distributed power and energy management system (PEMS) is developed. The control scheme is proposed as a two-layer hierarchical architecture, with the system level on top as the supervisory control and the zonal level on the bottom as the decentralized control, based on the zonal distribution characteristic of the notional MVDC IPS that was proposed as one of the approaches for the Next Generation Integrated Power System (NGIPS) by Norbert Doerry. Several types of modules with different functionalities are used to derive the control scheme in detail for the notional MVDC IPS. These modules include the power generation module (PGM), which controls the function of generators, and the power conversion module (PCM), which controls the functions of DC/DC or DC/AC converters. Among them, the power control module (PCON) plays a critical role in the PEMS: it is the core of the control process. PCONs in the PEMS interact with all the other modules, such as the power propulsion module (PPM), energy storage module (ESM), load shedding module (LSHED), and human machine interface (HMI), to realize the control algorithm in the PEMS. The proposed control scheme is implemented in real time using the real time digital simulator (RTDS) to verify its validity. To achieve this, a system-level energy storage module (SESM) and a zonal-level energy storage module (ZESM) are developed in RTDS to cooperate with the PCONs in realizing the control functionalities. In addition, a load shedding module that takes into account the reliability of power supply (in terms of quality of service) is developed; this module can supply uninterruptible power to mission-critical loads. Finally, a multi-agent system (MAS) based framework is proposed to implement the PCON-based PEMS through a hardware setup composed of MAMBA boards and an FPGA interface. Agents are implemented using the Java Agent DEvelopment Framework (JADE). Various scenarios were tested to validate the approach.
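The two-layer idea can be sketched abstractly (our toy illustration, not the thesis RTDS implementation; zone names, loads, and priorities are assumptions): the system level allocates power budgets, and each zonal controller serves its highest-priority loads within budget:

```python
# Supervisory layer splits available generation across zones in proportion
# to demand; zonal layer sheds lowest-priority loads to stay within budget.
def zonal_control(loads, budget_kw):
    """loads: list of (name, kw, priority); serve high priority first."""
    served, total = [], 0.0
    for name, kw, _prio in sorted(loads, key=lambda l: l[2], reverse=True):
        if total + kw <= budget_kw:
            served.append(name)
            total += kw
    return served

def system_control(generation_kw, zones):
    demand = {z: sum(kw for _, kw, _ in loads) for z, loads in zones.items()}
    total = sum(demand.values())
    return {z: generation_kw * demand[z] / total for z in zones}

zones = {"fwd": [("radar", 300, 9), ("galley", 80, 2)],
         "aft": [("propulsion", 900, 10), ("lighting", 50, 3)]}
budgets = system_control(1000.0, zones)
plan = {z: zonal_control(zones[z], budgets[z]) for z in zones}
print(plan)
```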
MPI-FAUN: An MPI-Based Framework for Alternating-Updating Nonnegative Matrix Factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kannan, Ramakrishnan; Ballard, Grey; Park, Haesun
2017-10-30
Non-negative matrix factorization (NMF) is the problem of determining two non-negative low-rank factors W and H, for a given input matrix A, such that A≈WH. NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient parallel algorithms to solve the problem for big data sets. The main contribution of this work is a new, high-performance parallel computational framework for a broad class of NMF algorithms that iteratively solves alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). The framework is flexible and able to leverage a variety of NMF and NLS algorithms, including Multiplicative Update, Hierarchical Alternating Least Squares, and Block Principal Pivoting. Our implementation allows us to benchmark and compare different algorithms on massive dense and sparse data matrices whose sizes span from a few hundred million to billions of entries. We demonstrate the scalability of our algorithm and compare it with baseline implementations, showing significant performance improvements. The code and the datasets used for conducting the experiments are available online.
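The alternating-update structure that MPI-FAUN parallelizes can be sketched on a single node with numpy (our illustration; the real framework distributes A, W and H across MPI ranks and can swap in other NLS solvers):

```python
# Alternating multiplicative-update NMF: fix W to update H, then fix H
# to update W, repeating until the factors stabilize.
import numpy as np

def nmf_multiplicative(A, k, iters=200, eps=1e-9):
    m, n = A.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

A = np.abs(np.random.default_rng(1).random((50, 40)))
W, H = nmf_multiplicative(A, k=5)
print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```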
Parallel task processing of very large datasets
NASA Astrophysics Data System (ADS)
Romig, Phillip Richardson, III
This research concerns the use of distributed computing technologies for the analysis and management of very large datasets. Improvements in sensor technology, an emphasis on global change research, and greater access to data warehouses all increase the number of non-traditional users of remotely sensed data. We present a framework for distributed solutions to the challenges of datasets which exceed the online storage capacity of individual workstations. This framework, called parallel task processing (PTP), incorporates both the task- and data-level parallelism exemplified by many image processing operations. An implementation based on the principles of PTP, called Tricky, is also presented. Additionally, we describe the challenges and practical issues in modeling the performance of parallel task processing with large datasets. We present a mechanism for estimating the running time of each unit of work within the system and an algorithm that uses these estimates to simulate the execution environment and produce estimated runtimes. Finally, we describe and discuss experimental results which validate the design. Specifically, the system (a) is able to perform computation on datasets which exceed the capacity of any one disk, (b) provides a reduction of overall computation time as a result of the task distribution, even with the additional cost of data transfer and management, and (c) in simulation mode accurately predicts the performance of the real execution environment.
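The runtime-estimation and simulation step can be illustrated with a toy makespan predictor (our sketch, not Tricky's code): given estimated per-task runtimes, assign each task to the worker predicted to free up first:

```python
# Greedy schedule simulation: a min-heap tracks each worker's predicted
# finish time; the predicted makespan is the latest finish time.
import heapq

def predict_makespan(estimated_runtimes, n_workers):
    workers = [0.0] * n_workers          # predicted finish time per worker
    heapq.heapify(workers)
    for t in estimated_runtimes:
        earliest = heapq.heappop(workers)
        heapq.heappush(workers, earliest + t)
    return max(workers)

print(predict_makespan([3.0, 1.5, 2.0, 4.0, 0.5], n_workers=2))
```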
Brown, Derek W; Shulman, Adam; Hudson, Alana; Smith, Wendy; Fisher, Brandon; Hollon, Jon; Pipman, Yakov; Van Dyk, Jacob; Einck, John
2014-11-01
We present a practical, generic, easy-to-use framework for the implementation of new radiation therapy technologies and treatment techniques in low-income countries. The framework is intended to standardize the implementation process, reduce the effort involved in generating an implementation strategy, and provide improved patient safety by reducing the likelihood that steps are missed during the implementation process. The 10 steps in the framework provide a practical approach to implementation. The steps are: 1) Site and resource assessment, 2) Evaluation of equipment and funding, 3) Establishing timelines, 4) Defining the treatment process, 5) Equipment commissioning, 6) Training and competency assessment, 7) Prospective risk analysis, 8) System testing, 9) External dosimetric audit and incident learning, and 10) Support and follow-up. For each step, practical advice for completing the step is provided, as well as links to helpful supplementary material. An associated checklist is provided that can be used to track progress through the steps in the framework. While the emphasis of this paper is on addressing the needs of low-income countries, the concepts also apply in high-income countries. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
OpenDanubia - An integrated, modular simulation system to support regional water resource management
NASA Astrophysics Data System (ADS)
Muerth, M.; Waldmann, D.; Heinzeller, C.; Hennicker, R.; Mauser, W.
2012-04-01
The now-completed, multi-disciplinary research project GLOWA-Danube has developed a regional-scale, integrated modeling system, which was successfully applied to the 77,000 km2 Upper Danube basin to investigate the impact of Global Change on both the natural and the anthropogenic water cycle. At the end of the last project phase, the integrated modeling system was transferred into the open source project OpenDanubia, which now provides both the core system and all major model components to the general public. First, this will enable decision makers from government, business and management to use OpenDanubia as a tool for proactive management of water resources in the context of global change. Second, the model framework for integrated simulations and all simulation models developed for OpenDanubia in the scope of GLOWA-Danube remain available for future developments and research questions. OpenDanubia allows for the investigation of water-related scenarios considering different ecological and economic aspects, supporting both scientists and policy makers in designing policies for sustainable environmental management. OpenDanubia is designed as a framework-based, distributed system. The model system couples spatially distributed physical and socio-economic processes during run-time, taking into account their mutual influence. To simulate the potential future impacts of Global Change on agriculture, industrial production, water supply, households and tourism businesses, so-called deep actor models are implemented in OpenDanubia. All important water-related fluxes and storages in the natural environment are implemented in OpenDanubia as spatially explicit, process-based modules. This includes the land surface water and energy balance, dynamic plant water uptake, groundwater recharge and flow, as well as river routing and reservoirs. Although the complete system is relatively demanding in terms of data and hardware requirements, the modular structure and the generic core system (Core Framework, Actor Framework) allow application in new regions and the selection of a reduced set of modules for simulation. As part of the Open Source Initiative in GLOWA-Danube (opendanubia.glowa-danube.de), comprehensive documentation for the system installation was created, and the program code of both the framework and all major components is licensed under the GNU General Public License. In addition, helpful programs and scripts necessary for the operation and processing of input and result data sets are provided.
Distributed Computing Framework for Synthetic Radar Application
NASA Technical Reports Server (NTRS)
Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael
2006-01-01
We are developing an extensible software framework in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech) and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.
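The component/flow-graph idea can be sketched briefly (illustrative; not the Pyre API):

```python
# Processing functions wrapped as interchangeable objects and wired into
# a data flow graph; pushing data through the head runs the whole chain.
class Component:
    def __init__(self, func):
        self.func = func
        self.sinks = []

    def connect(self, other):
        self.sinks.append(other)
        return other                      # return sink to allow chaining

    def push(self, data):
        result = self.func(data)
        for sink in self.sinks:
            sink.push(result)

# Assemble a tiny radar-like pipeline: ingest -> smooth -> detect.
ingest = Component(lambda d: d)
smooth = Component(lambda d: [x * 0.5 for x in d])
detect = Component(lambda d: print("peak:", max(d)))
ingest.connect(smooth).connect(detect)
ingest.push([1.0, 4.0, 2.0])
```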
A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.
2014-01-01
Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
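The quantization effect identified in the results can be shown with a one-line transducer model (a toy stand-in for the Simulink smart-transducer blocks; resolution and range are assumptions):

```python
# A smart sensor reading passes through an analog-to-digital quantizer;
# the rounding is the source of the small feedback tracking differences.
def quantize(value, bits=12, full_scale=100.0):
    step = full_scale / (2 ** bits)       # converter resolution
    return round(value / step) * step

print(quantize(42.123))                   # -> 42.1142578125 with 12 bits
```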
Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance
Wilson, T.L.; Odei, J.B.; Hooten, M.B.; Edwards, T.C.
2010-01-01
Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation is rarely included, thereby limiting statistical inference from the resulting distribution maps. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. We demonstrated the flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.
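To make the two-level indirect-sign hierarchy concrete, a toy forward simulation (our illustration, not the authors' model; all probabilities and rates are assumed) is:

```python
# Hierarchical indirect sign: site occupancy generates burrows, and
# burrows generate detectable faecal pellets, one level below.
import numpy as np

rng = np.random.default_rng(1)
n_sites = 500
psi = 0.3                                  # assumed site-occupancy probability
occupied = rng.random(n_sites) < psi
burrows = np.where(occupied, rng.poisson(5.0, n_sites), 0)
pellet_detect = 0.6                        # assumed per-burrow detection rate
pellets = rng.binomial(burrows, pellet_detect)
print("sites with detected sign:", int((pellets > 0).sum()))
```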
Vector wind profile gust model
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
1979-01-01
Work toward establishing a vector wind profile gust model for Space Transportation System flight operations and trade studies is reported. To date, all the required statistical and computational techniques have been established and partially implemented. An analysis of wind profile gusts at Cape Kennedy within the theoretical framework is presented. The variability of theoretical and observed gust magnitude with filter type, altitude, and season is described. Various examples are presented which illustrate agreement between theoretical and observed gust percentiles. The preliminary analysis of the gust data indicates strong variability with altitude, season, and wavelength regime. An extension of the analyses to include conditional distributions of gust magnitude given gust length, distributions of gust modulus, and phase differences between gust components has begun.
NASA Astrophysics Data System (ADS)
Boyko, Oleksiy; Zheleznyak, Mark
2015-04-01
The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) has been developed and implemented in Ukraine. A parallel version of the code has recently been developed for use on multiprocessor systems - multicore PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed to balance the amount of computation across processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated in case studies of flood prediction for mountain watersheds in the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.
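A minimal mpi4py sketch (assumed available; not the TOPKAPI-IMMS code) of scattering subcatchment work across ranks and reducing partial results looks like:

```python
# Run with: mpiexec -n 4 python topkapi_sketch.py
# Rank 0 decomposes the watershed; round-robin chunks stand in for the
# balanced binary-tree decomposition described in the abstract.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    chunks = [list(range(i, 100, size)) for i in range(size)]
else:
    chunks = None

mine = comm.scatter(chunks, root=0)            # each rank gets its share
partial = float(sum(mine))                     # stand-in for local runoff work
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("aggregated discharge proxy:", total)
```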
Web Information Systems for Monitoring and Control of Indoor Air Quality at Subway Stations
NASA Astrophysics Data System (ADS)
Choi, Gi Heung; Choi, Gi Sang; Jang, Joo Hyoung
In crowded subway stations, indoor air quality (IAQ) is a key factor in ensuring the safety, health and comfort of passengers. In this study, a framework for a web-based information system in a VDN environment for monitoring and control of IAQ in subway stations is suggested. Since the physical variables describing IAQ need to be closely monitored and controlled at multiple locations in subway stations, the concept of a distributed monitoring and control network using wireless media needs to be implemented. Connecting remote wireless sensor networks and device (LonWorks) networks to the IP network based on the concept of VDN can provide powerful, integrated, distributed monitoring and control performance, making a web-based information system possible.
Performance of a Heterogeneous Grid Partitioner for N-body Applications
NASA Technical Reports Server (NTRS)
Harvey, Daniel J.; Das, Sajal K.; Biswas, Rupak
2003-01-01
An important characteristic of distributed grids is that they allow geographically separated multicomputers to be tied together in a transparent virtual environment to solve large-scale computational problems. However, many of these applications require effective runtime load balancing for the resulting solutions to be viable. Recently, we developed a latency tolerant partitioner, called MinEX, specifically for use in distributed grid environments. This paper compares the performance of MinEX to that of METIS, a popular multilevel family of partitioners, using simulated heterogeneous grid configurations. A solver for the classical N-body problem is implemented to provide a framework for the comparisons. Experimental results show that MinEX provides superior quality partitions while being competitive to METIS in speed of execution.
NASA Astrophysics Data System (ADS)
Parvinnia, Elham; Khayami, Raouf; Ziarati, Koorush
Virtual collaborative networks are composed of small companies that take most advantage of market opportunities and are thereby able to compete with large companies. Several frameworks have been introduced for implementing this type of collaboration, although none of them has been standardized completely. In this paper we specify some aspects that need to be standardized for implementing virtual enterprises. Then, a framework is suggested for implementing virtual collaborative networks. Finally, based on that suggestion, as a case study, we design a virtual collaborative network in the automobile components production industry.
Sensor Data Distribution With Robustness and Reliability: Toward Distributed Components Model
NASA Technical Reports Server (NTRS)
Alena, Richard L.; Lee, Charles
2005-01-01
In planetary surface exploration missions, sensor data distribution is required in many aspects, for example in navigation, scheduling, planning, monitoring, diagnostics, and automation of field tasks. The challenge is to distribute such data in a robust and reliable way so as to minimize errors caused by miscalculations and misjudgments based on erroneous data input during the mission. The ad-hoc wireless network on a planetary surface is not constantly connected because of the rough terrain and the lack of permanent installations on the surface. There are disconnected moments during which computation nodes re-associate with different repeaters or access points until connections are reestablished. This requires sensor data distribution software to be robust and reliable, with the ability to tolerate disconnected moments. This paper presents a distributed components model as a framework to accomplish such tasks. The software is written in Java and utilizes the available Java Message Service schema and the Boss implementation. The results of field experimentation show that the model is very effective in completing the tasks.
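A disconnection-tolerant send loop of the kind the model requires can be sketched as follows (a Python stand-in for the JMS-based design; `send` is any transport call that may raise on a dropped link):

```python
# Buffer messages, retry with increasing backoff across re-association
# periods, and surface a hard failure only after repeated attempts.
import time

def robust_send(messages, send, retries=5, backoff=0.5):
    pending = list(messages)
    while pending:
        msg = pending.pop(0)
        for attempt in range(retries):
            try:
                send(msg)
                break
            except ConnectionError:
                time.sleep(backoff * (attempt + 1))   # wait for re-association
        else:
            raise RuntimeError("link did not recover; message undelivered")
```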
A framework for plasticity implementation on the SpiNNaker neural architecture.
Galluppi, Francesco; Lagorce, Xavier; Stromatias, Evangelos; Pfeiffer, Michael; Plana, Luis A; Furber, Steve B; Benosman, Ryad B
2014-01-01
Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing-dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large-scale simulations of plastic neural networks on special-purpose hardware platforms, because synaptic transmissions and updates are badly matched to the computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to processing synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard spike-timing-dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
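For concreteness, a pair-based STDP rule of the kind listed above can be written in a few lines of numpy (a generic textbook form, not SpiNNaker code; time constants and learning rates are assumptions):

```python
# Pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic one (dt > 0), depress otherwise, with exponential decay.
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """dt = t_post - t_pre in ms; returns the weight change."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

print(stdp_dw(np.array([5.0, -5.0])))   # potentiation, then depression
```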
Information integration from heterogeneous data sources: a Semantic Web approach.
Kunapareddy, Narendra; Mirhaji, Parsa; Richards, David; Casscells, S Ward
2006-01-01
Although the decentralized and autonomous implementation of health information systems has made it possible to extend the reach of surveillance systems to a variety of contextually disparate domains, public health use of the data from these systems was not their primary design goal. The Semantic Web has been proposed to address both representational and semantic heterogeneity in distributed and collaborative environments. We introduce a semantic approach to the integration of health data using the Resource Description Framework (RDF) and the Simple Knowledge Organization System (SKOS) developed by the Semantic Web community.
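A hedged example of the two building blocks named above, using the rdflib package (an implementation assumption; the paper does not name a library, and the URIs are invented):

```python
# Express a health observation as RDF triples, with a SKOS preferred
# label attached to the concept so heterogeneous vocabularies can align.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/health/")
g = Graph()
g.add((EX.fever, SKOS.prefLabel, Literal("Fever")))   # concept labeling
g.add((EX.case42, EX.hasSymptom, EX.fever))           # surveillance fact

for s, p, o in g:
    print(s, p, o)
```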
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-05-17
PeleC is an adaptive-mesh compressible hydrodynamics code for reacting flows. It solves the compressible Navier-Stokes equations with multispecies transport in a block-structured framework. The resulting algorithm is well suited for flows with localized resolution requirements and is robust to discontinuities. User-controllable refinement criteria have the potential to result in extremely small numerical dissipation and dispersion, making this code appropriate for both research and applied usage. The code is built on the AMReX library, which facilitates hierarchical parallelism and manages distributed-memory parallelism. PeleC algorithms are implemented to express shared-memory parallelism.
Relative entropy and optimization-driven coarse-graining methods in VOTCA
Mashayak, S. Y.; Jochum, Mara N.; Koschke, Konstantin; ...
2015-07-20
We discuss recent advances of the VOTCA package for systematic coarse-graining. Two methods have been implemented, namely the downhill simplex optimization and the relative entropy minimization. We illustrate the new methods by coarse-graining SPC/E bulk water and more complex water-methanol mixture systems. The CG potentials obtained from both methods are then evaluated by comparing the pair distributions from the coarse-grained simulations to the reference atomistic simulations. We have also added a parallel analysis framework to improve the computational efficiency of the coarse-graining process.
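The downhill simplex method named above is the Nelder-Mead algorithm; a toy parameter fit of a model pair distribution against a reference one (our stand-in objective and functional form, not VOTCA code) is:

```python
# Fit two parameters of a model RDF to a reference RDF by minimizing a
# least-squares distance with the derivative-free Nelder-Mead simplex.
import numpy as np
from scipy.optimize import minimize

r = np.linspace(0.0, 3.0, 50)
g_target = np.exp(-r)                        # assumed reference distribution

def model_rdf(params):
    a, b = params
    return a * np.exp(-b * r)

res = minimize(lambda p: np.sum((model_rdf(p) - g_target) ** 2),
               x0=[0.5, 0.5], method="Nelder-Mead")
print(res.x)                                 # converges toward a=1, b=1
```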
The national elevation data set
Gesch, Dean B.; Oimoen, Michael J.; Greenlee, Susan K.; Nelson, Charles A.; Steuck, Michael J.; Tyler, Dean J.
2002-01-01
The NED is a seamless raster dataset from the USGS that fulfills many of the concepts of framework geospatial data as envisioned for the NSDI, allowing users to focus on analysis rather than data preparation. It is regularly maintained and updated, and it provides basic elevation data for many GIS applications. The NED is one of several seamless datasets that the USGS is making available through the Web. The techniques and approaches developed for producing, maintaining, and distributing the NED are the type that will be used for implementing the USGS National Map (http://nationalmap.usgs.gov/).
Transforming organizational culture through nursing shared governance.
Newman, Karen Profitt
2011-03-01
Nursing shared governance (NSG) provides a framework for the professionalization of nursing, broadens the distribution of decision making across the profession, and allocates decisions based on accountability and role expectations. Shared governance defines staff-based decisions, accountability, roles, and staff ownership of those activities that directly affect nurses' lives and practice. Although NSG is a somewhat ambiguous concept with a vast range of applications, examining it from the perspective of structure, process, and outcomes can more clearly outline a successful strategy for implementation and growth. Copyright © 2011 Elsevier Inc. All rights reserved.
The Impact and Implementation of National Qualifications Frameworks: A Comparison of 16 Countries
ERIC Educational Resources Information Center
Allais, Stephanie M.
2011-01-01
This article provides some of the key findings of a comparative study commissioned by the International Labour Organization (ILO), which attempted to understand more about the impact and implementation of national qualifications frameworks (NQFs). Sixteen case studies were produced, on qualifications frameworks in Australia; Bangladesh; Botswana;…
A Contextual Factors Framework to Inform Implementation and Evaluation of Public Health Initiatives
ERIC Educational Resources Information Center
Vanderkruik, Rachel; McPherson, Marianne E.
2017-01-01
Evaluating initiatives implemented across multiple settings can elucidate how various contextual factors may influence both implementation and outcomes. Understanding context is especially critical when the same program has varying levels of success across settings. We present a framework for evaluating contextual factors affecting an initiative…
An Integrated Approach to Disability Policy Development, Implementation, and Evaluation
ERIC Educational Resources Information Center
Shogren, Karrie A.; Luckasson, Ruth; Schalock, Robert L.
2017-01-01
This article provides a framework for an integrated approach to disability policy development, implementation, and evaluation. The article discusses how a framework that combines systems thinking and valued outcomes can be used by coalition partners across ecological systems to implement disability policy, promote the effective use of resources,…
Designing Caregiver-Implemented Shared-Reading Interventions to Overcome Implementation Barriers
ERIC Educational Resources Information Center
Justice, Laura M.; Logan, Jessica R.; Damschroder, Laura
2015-01-01
Purpose: This study presents an application of the theoretical domains framework (TDF; Michie et al., 2005), an integrative framework drawing on behavior-change theories, to speech-language pathology. Methods: A multistep procedure was used to identify barriers affecting caregivers' implementation of shared-reading interventions with their…
McCallum, Meg; Carver, Janet; Dupere, David; Ganong, Sharon; Henderson, J David; McKim, Ann; McNeil-Campbell, Lisa; Richardson, Holly; Simpson, Judy; Tschupruk, Cheryl; Jewers, Heather
2018-05-15
In 2014, Nova Scotia released a provincial palliative care strategy, and implementation working groups were established. The Capacity Building and Practice Change Working Group, composed of health professionals, public advisors, academics, educators, and a volunteer supervisor, was asked to select palliative care education programs for health professionals and volunteers. The first step in achieving this mandate was to establish competencies for health professionals and volunteers caring for patients with life-limiting illness and their families, and for those specializing in palliative care. In 2015, a literature search for palliative care competencies and an environmental scan of related education programs were conducted. The Irish Palliative Care Competence Framework serves as the foundation of the Nova Scotia Palliative Care Competency Framework. Additional disciplines and competencies were added, and any competencies not specific to palliative care were removed. To highlight interprofessional practice, the framework illustrates shared and discipline-specific competencies. Stakeholders were asked to validate the framework and map the competencies to educational programs. Numerous rounds of review refined the framework, which includes competencies for 22 disciplines, 9 nursing specialties, and 4 physician specialties. Developing the framework, released in 2017, and selecting and implementing education programs were a significant undertaking. The framework will support the implementation of the Nova Scotia Integrated Palliative Care Strategy, enhance the interprofessional nature of palliative care, and guide the further implementation of education programs. Other jurisdictions have expressed considerable interest in the framework.
Automated Discovery of Simulation Between Programs
2014-10-18
These relations enable the refinement step of SimAbs. SimAbs and AE-VAL were implemented on top of the UFO framework [1, 15] and the Z3 SMT solver [8], respectively, and evaluated on the Software Verification Competition (SVCOMP'14) benchmarks.
Probabilistic arithmetic automata and their applications.
Marschall, Tobias; Herms, Inke; Kaltenbach, Hans-Michael; Rahmann, Sven
2012-01-01
We present a comprehensive review on probabilistic arithmetic automata (PAAs), a general model to describe chains of operations whose operands depend on chance, along with two algorithms to numerically compute the distribution of the results of such probabilistic calculations. PAAs provide a unifying framework to approach many problems arising in computational biology and elsewhere. We present five different applications, namely 1) pattern matching statistics on random texts, including the computation of the distribution of occurrence counts, waiting times, and clump sizes under hidden Markov background models; 2) exact analysis of window-based pattern matching algorithms; 3) sensitivity of filtration seeds used to detect candidate sequence alignments; 4) length and mass statistics of peptide fragments resulting from enzymatic cleavage reactions; and 5) read length statistics of 454 and IonTorrent sequencing reads. The diversity of these applications indicates the flexibility and unifying character of the presented framework. While the construction of a PAA depends on the particular application, we single out a frequently applicable construction method: We introduce deterministic arithmetic automata (DAAs) to model deterministic calculations on sequences, and demonstrate how to construct a PAA from a given DAA and a finite-memory random text model. This procedure is used for all five discussed applications and greatly simplifies the construction of PAAs. Implementations are available as part of the MoSDi package. Its application programming interface facilitates the rapid development of new applications based on the PAA framework.
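As a flavor of the PAA idea (our toy example, not the MoSDi implementation), the following propagates a joint (state, count) distribution through a two-state Markov text model to obtain the exact occurrence-count distribution of state 'B':

```python
# Dynamic programming over (state, accumulated value) pairs: each step
# applies the Markov transition and the arithmetic operation (here, +1
# whenever state 'B' is visited), keeping the full joint distribution.
from collections import defaultdict

trans = {'A': {'A': 0.7, 'B': 0.3},
         'B': {'A': 0.4, 'B': 0.6}}
dist = {('A', 0): 1.0}                       # start in state A with count 0

for _ in range(10):                          # text of length 10
    new = defaultdict(float)
    for (state, count), p in dist.items():
        for nxt, q in trans[state].items():
            new[(nxt, count + (nxt == 'B'))] += p * q
    dist = dict(new)

counts = defaultdict(float)
for (state, count), p in dist.items():
    counts[count] += p                       # marginal occurrence-count law
print(dict(sorted(counts.items())))
```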
Hughes, Richard John; Thrasher, James Thomas; Nordholt, Jane Elizabeth
2016-11-29
Innovations for quantum key management harness quantum communications to form a cryptography system within a public key infrastructure framework. In example implementations, the quantum key management innovations combine quantum key distribution and a quantum identification protocol with a Merkle signature scheme (using Winternitz one-time digital signatures or other one-time digital signatures, and Merkle hash trees) to constitute a cryptography system. More generally, the quantum key management innovations combine quantum key distribution and a quantum identification protocol with a hash-based signature scheme. This provides a secure way to identify, authenticate, verify, and exchange secret cryptographic keys. Features of the quantum key management innovations further include secure enrollment of users with a registration authority, as well as credential checking and revocation with a certificate authority, where the registration authority and/or certificate authority can be part of the same system as a trusted authority for quantum key distribution.
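For the Merkle hash tree ingredient, a minimal sketch with hashlib is shown below (illustrative only; the patented system combines such trees with one-time signatures and quantum key distribution, none of which is reproduced here):

```python
# Build a Merkle root over one-time-signature public keys: the single
# root hash authenticates every leaf key at once.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                   # odd count: duplicate last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"otp-key-%d" % i for i in range(8)])
print(root.hex())
```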
Lorthios-Guilledroit, Agathe; Richard, Lucie; Filiatrault, Johanne
2018-06-01
Peer education is growing in popularity as a useful health promotion strategy. However, optimal conditions for implementing peer-led health promotion programs (HPPs) remain unclear. This scoping review aimed to describe factors that can influence implementation of peer-led HPPs targeting adult populations. Five databases were searched using the keywords "health promotion/prevention", "implementation", "peers", and related terms. Studies were included if they reported at least one factor associated with the implementation of community-based peer-led HPPs. Fifty-five studies were selected for the analysis. The method known as "best fit framework synthesis" was used to analyze the factors identified in the selected papers. Many factors included in existing implementation conceptual frameworks were deemed applicable to peer-led HPPs. However, other factors related to individuals, programs, and implementation context also emerged from the analysis. Based on this synthesis, an adapted theoretical framework was elaborated, grounded in a complex adaptive system perspective and specifying potential mechanisms through which factors may influence implementation of community-based peer-led HPPs. Further research is needed to test the theoretical framework against empirical data. Findings from this scoping review increase our knowledge of the optimal conditions for implementing peer-led HPPs and thereby maximizing the benefits of such programs. Copyright © 2018 Elsevier Ltd. All rights reserved.
Sustainable Management of Seagrass Meadows: the GEOSS AIP-6 Pilot
NASA Astrophysics Data System (ADS)
Santoro, Mattia; Pastres, Roberto; Zucchetta, Matteo; Venier, Chiara; Roncella, Roberto; Bigagli, Lorenzo; Mangin, Antoine; Amine Taji, Mohamed; Gonzalo Malvarez, Gonzalo; Nativi, Stefano
2014-05-01
Seagrass meadows (marine angiosperm plants) occupy less than 0.2% of the global ocean surface, yet annually store about 10-18% of the so-called "Blue Carbon", i.e. the carbon stored in coastal vegetated areas. Recent literature estimates that the flux to the long-term carbon sink in seagrasses represents 10-20% of seagrasses' global average production. Such figures can be translated into economic benefits, taking into account that a ton of carbon dioxide in Europe is traded at around 15 € in the carbon market. This means that the organic carbon retained in seagrass sediments in the Mediterranean is worth 138 - 1128 billion €, which represents 6-23 € per square meter. This is 9-35 times more than one square meter of tropical forest soil (0.66 € per square meter), or 5-17 times when considering both the above- and belowground compartments in tropical forests. According to the most conservative estimates, about 10% of the Mediterranean meadows have been lost during the last century. To estimate seagrass meadow distribution, a Species Distribution Model (SDM) can be used. An SDM is a tool used to evaluate the potential distribution of a given species (e.g. Posidonia oceanica for seagrass) on the basis of the features (bio-chemical-physical parameters) of the studied environment. In the framework of the GEOSS (Global Earth Observation System of Systems) initiative, the FP7 project MEDINA developed a showcase as part of the GEOSS Architecture Interoperability Pilot - phase 6 (AIP-6). The showcase aims at providing a tool for the sustainable management of seagrass meadows along the Mediterranean coastline by integrating the SDM with available GEOSS resources. This way, the required input data can be searched, accessed and ingested into the model leveraging the brokering framework of the GEOSS Common Infrastructure (GCI). This framework comprises a set of middleware components (Brokers) that are in charge of implementing the interoperability arrangements needed to interconnect the heterogeneous and distributed capacities contributing to GEOSS. The presentation discusses this framework, explaining how the input data are discovered, accessed and processed to feed the model. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n. 282977.
The Digital Anatomist Distributed Framework and Its Applications to Knowledge-based Medical Imaging
Brinkley, James F.; Rosse, Cornelius
1997-01-01
The domain of medical imaging is anatomy. Therefore, anatomic knowledge should be a rational basis for organizing and analyzing images. The goals of the Digital Anatomist Program at the University of Washington include the development of an anatomically based software framework for organizing, analyzing, visualizing and utilizing biomedical information. The framework is based on representations for both spatial and symbolic anatomic knowledge, and is being implemented in a distributed architecture in which multiple client programs on the Internet are used to update and access an expanding set of anatomical information resources. The development of this framework is driven by several practical applications, including symbolic anatomic reasoning, knowledge-based image segmentation, anatomy information retrieval, and functional brain mapping. Since each of these areas involves many difficult image processing issues, our research strategy is an evolutionary one, in which applications are developed somewhat independently, and partial solutions are integrated in a piecemeal fashion, using the network as the substrate. This approach assumes that networks of interacting components can synergistically work together to solve problems larger than either could solve on its own. Each of the individual projects is described, along with evaluations that show that the individual components are solving the problems they were designed for, and are beginning to interact with each other in a synergistic manner. We argue that this synergy will increase, not only within our own group, but also among groups as the Internet matures, and that an anatomic knowledge base will be a useful means for fostering these interactions. PMID:9147337
NASA Astrophysics Data System (ADS)
Palalloi, Irfan Andi; Anwar, Azwar; Syarifuddin
2018-05-01
Most of the population of Majene regency works as fishermen, and by and large they work on the basis of experience handed down from their ancestors. This is reflected in fishery industry statistics: fisheries account for the largest share of any industry, 18.30%, in the distribution of the gross regional domestic product. In each specific case, utilization of technology becomes a necessity that plays a key role. Technology adoption by fishermen groups, such as the use of GPS equipment, has frequently been carried out by the government and non-profit organizations through training and mentoring. Nowadays, several modern mobile applications have been developed by government agencies to assist groups of fishermen in managing their fishing activity, such as ZPPI data from the LAPAN satellite, Nelpin (also known as Smart Fisheries), and Infrastructure Development for Space Oceanography (INDESO). However, all of them carry risks and problems on the user side, among them accuracy and reliability. In this research, we elaborate the technical and governance factors through the COBIT framework and analyze best-practice standards for implementing the technology. The result is a governance standard for controlling and implementing technology under the customer dimension of information technology governance, following the standard process to ensure benefit delivery when implementing mobile fishery applications in DKP Majene regency.
Modelling multimedia teleservices with OSI upper layers framework: Short paper
NASA Astrophysics Data System (ADS)
Widya, I.; Vanrijssen, E.; Michiels, E.
The paper presents the use of the concepts and modelling principles of the Open Systems Interconnection (OSI) upper layers structure in the modelling of multimedia teleservices. It puts emphasis on the revised Application Layer Structure (OSI/ALS). OSI/ALS is an object-based reference model which intends to coordinate the development of application-oriented services and protocols in a consistent and modular way. It enables the rapid deployment and integrated use of these services. The paper further emphasizes the nesting structure defined in OSI/ALS, which allows the design of scalable and user-tailorable/controllable teleservices. OSI/ALS-consistent teleservices are moreover implementable on communication platforms of different capabilities. An analysis of distributed multimedia architectures found in the literature confirms the ability of the OSI/ALS framework to model the interworking functionalities of teleservices.
Addressing practical challenges in utility optimization of mobile wireless sensor networks
NASA Astrophysics Data System (ADS)
Eswaran, Sharanya; Misra, Archan; La Porta, Thomas; Leung, Kin
2008-04-01
This paper examines the practical challenges in the application of the distributed network utility maximization (NUM) framework to the problem of resource allocation and sensor device adaptation in a mission-centric wireless sensor network (WSN) environment. By providing rich (multi-modal), real-time information about a variety of (often inaccessible or hostile) operating environments, sensors such as video, acoustic and short-aperture radar enhance the situational awareness of many battlefield missions. Prior work on the applicability of the NUM framework to mission-centric WSNs has focused on tackling the challenges introduced by i) the definition of an individual mission's utility as a collective function of multiple sensor flows and ii) the dissemination of an individual sensor's data via a multicast tree to multiple consuming missions. However, the practical application and performance of this framework is influenced by several parameters internal to the framework and also by implementation-specific decisions. This is made further complex due to mobile nodes. In this paper, we use discrete-event simulations to study the effects of these parameters on the performance of the protocol in terms of speed of convergence, packet loss, and signaling overhead thereby addressing the challenges posed by wireless interference and node mobility in ad-hoc battlefield scenarios. This study provides better understanding of the issues involved in the practical adaptation of the NUM framework. It also helps identify potential avenues of improvement within the framework and protocol.
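For context, the classic dual-decomposition iteration at the core of distributed NUM can be sketched as follows; the log utilities, three-source topology, and step size are illustrative assumptions rather than the paper's mission utilities or multicast trees:

```python
"""Minimal dual-decomposition sketch of network utility maximization (NUM)."""
import numpy as np

# routing matrix R[l, s] = 1 if source s crosses link l (assumed topology)
R = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)
c = np.array([1.0, 2.0])        # link capacities
w = np.array([1.0, 1.0, 1.0])   # utility weights: U_s(x) = w_s * log(x)

lam = np.ones(2)                # link prices (dual variables)
step = 0.05
for _ in range(2000):
    path_price = R.T @ lam                           # price seen by each source
    x = w / np.maximum(path_price, 1e-9)             # per-source optimal rate
    lam = np.maximum(0.0, lam + step * (R @ x - c))  # subgradient price update

print("rates:", np.round(x, 3), "link loads:", np.round(R @ x, 3))
```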
Protocol for fermionic positive-operator-valued measures
NASA Astrophysics Data System (ADS)
Arvidsson-Shukur, D. R. M.; Lepage, H. V.; Owen, E. T.; Ferrus, T.; Barnes, C. H. W.
2017-11-01
In this paper we present a protocol for the implementation of a positive-operator-valued measure (POVM) on massive fermionic qubits. We present methods for implementing nondispersive qubit transport, spin rotations, and spin polarizing beam-splitter operations. Our scheme attains linear opticslike control of the spatial extent of the qubits by considering ground-state electrons trapped in the minima of surface acoustic waves in semiconductor heterostructures. Furthermore, we numerically simulate a high-fidelity POVM that carries out Procrustean entanglement distillation in the framework of our scheme, using experimentally realistic potentials. Our protocol can be applied not only to pure ensembles with particle pairs of known identical entanglement, but also to realistic ensembles of particle pairs with a distribution of entanglement entropies. This paper provides an experimentally realizable design for future quantum technologies.
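As a reminder of the algebra involved, the defining POVM property (positive effects summing to the identity) can be checked numerically; the qubit "trine" POVM below is a generic stand-in, not the paper's fermionic construction:

```python
"""Numerical check of the defining POVM property using the qubit trine POVM."""
import numpy as np

def trine_effects():
    """Three effects E_k = (2/3)|psi_k><psi_k| with states 120 degrees apart."""
    effects = []
    for k in range(3):
        theta = 2.0 * np.pi * k / 3.0
        psi = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
        effects.append((2.0 / 3.0) * np.outer(psi, psi))
    return effects

E = trine_effects()
print("completeness:", np.allclose(sum(E), np.eye(2)))  # sums to identity
print("positivity:", all(np.all(np.linalg.eigvalsh(Ek) >= -1e-12) for Ek in E))
```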
Single-user MIMO versus multi-user MIMO in distributed antenna systems with limited feedback
NASA Astrophysics Data System (ADS)
Schwarz, Stefan; Heath, Robert W.; Rupp, Markus
2013-12-01
This article investigates the performance of cellular networks employing distributed antennas in addition to the central antennas of the base station. Distributed antennas are likely to be implemented using remote radio units, which is enabled by a low latency and high bandwidth dedicated link to the base station. This facilitates coherent transmission from potentially all available antennas at the same time. Such distributed antenna system (DAS) is an effective way to deal with path loss and large-scale fading in cellular systems. DAS can apply precoding across multiple transmission points to implement single-user MIMO (SU-MIMO) and multi-user MIMO (MU-MIMO) transmission. The throughput performance of various SU-MIMO and MU-MIMO transmission strategies is investigated in this article, employing a Long-Term evolution (LTE) standard compliant simulation framework. The previously theoretically established cell-capacity improvement of MU-MIMO in comparison to SU-MIMO in DASs is confirmed under the practical constraints imposed by the LTE standard, even under the assumption of imperfect channel state information (CSI) at the base station. Because practical systems will use quantized feedback, the performance of different CSI feedback algorithms for DASs is investigated. It is shown that significant gains in the CSI quantization accuracy and in the throughput of especially MU-MIMO systems can be achieved with relatively simple quantization codebook constructions that exploit the available temporal correlation and channel gain differences.
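The limited-feedback mechanism itself is simple to sketch: the receiver quantizes its channel direction to the best-aligned codeword and feeds back only the index. The random codebook and i.i.d. channel below are assumptions for illustration, not the correlation-exploiting constructions studied in the article:

```python
"""Sketch of codebook-based CSI quantization for limited-feedback MIMO."""
import numpy as np

rng = np.random.default_rng(0)
Nt, B = 4, 4  # transmit antennas, feedback bits

# random unit-norm codebook with 2^B entries (assumed; real systems use
# structured codebooks such as the LTE ones)
codebook = rng.standard_normal((2**B, Nt)) + 1j * rng.standard_normal((2**B, Nt))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

h = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)  # channel draw
h_dir = h / np.linalg.norm(h)                               # channel direction

align = np.abs(codebook.conj() @ h_dir)  # |c_i^H h| for every codeword
idx = int(np.argmax(align))              # only this index is fed back
print(f"feedback index {idx}, alignment {align[idx]:.3f}")
```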
NASA Astrophysics Data System (ADS)
Fakhari, Abbas; Mitchell, Travis; Leonardi, Christopher; Bolster, Diogo
2017-11-01
Based on phase-field theory, we introduce a robust lattice-Boltzmann equation for modeling immiscible multiphase flows at large density and viscosity contrasts. Our approach is built by modifying the method proposed by Zu and He [Phys. Rev. E 87, 043301 (2013), 10.1103/PhysRevE.87.043301] in such a way as to improve efficiency and numerical stability. In particular, we employ a different interface-tracking equation based on the so-called conservative phase-field model, a simplified equilibrium distribution that decouples pressure and velocity calculations, and a local scheme based on the hydrodynamic distribution functions for calculation of the stress tensor. In addition to two distribution functions for interface tracking and recovery of hydrodynamic properties, the only nonlocal variable in the proposed model is the phase field. Moreover, within our framework there is no need to use biased or mixed difference stencils for numerical stability and accuracy at high density ratios. This not only simplifies the implementation and efficiency of the model, but also leads to a model that is better suited to parallel implementation on distributed-memory machines. Several benchmark cases are considered to assess the efficacy of the proposed model, including the layered Poiseuille flow in a rectangular channel, Rayleigh-Taylor instability, and the rise of a Taylor bubble in a duct. The numerical results are in good agreement with available numerical and experimental data.
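For reference, a common form of the conservative phase-field interface-tracking equation (after Chiu and Lin) that this class of models builds on is, with φ the phase field, u the velocity, M the mobility, and ξ the interface thickness:

```latex
\frac{\partial \phi}{\partial t} + \nabla \cdot (\phi\,\mathbf{u})
  = \nabla \cdot \left[ M \left( \nabla \phi
      - \frac{4\phi(1-\phi)}{\xi}\, \hat{\mathbf{n}} \right) \right],
\qquad
\hat{\mathbf{n}} = \frac{\nabla \phi}{\lvert \nabla \phi \rvert}
```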
Advanced compilation techniques in the PARADIGM compiler for distributed-memory multicomputers
NASA Technical Reports Server (NTRS)
Su, Ernesto; Lain, Antonio; Ramaswamy, Shankar; Palermo, Daniel J.; Hodges, Eugene W., IV; Banerjee, Prithviraj
1995-01-01
The PARADIGM compiler project provides an automated means to parallelize programs, written in a serial programming model, for efficient execution on distributed-memory multicomputers. A previous implementation of the compiler based on the PTD representation allowed symbolic array sizes, affine loop bounds and array subscripts, and a variable number of processors, provided that arrays were single- or multi-dimensionally block distributed. The techniques presented here extend the compiler to also accept multidimensional cyclic and block-cyclic distributions within a uniform symbolic framework. These extensions demand more sophisticated symbolic manipulation capabilities. A novel aspect of our approach is to meet this demand by interfacing PARADIGM with a powerful off-the-shelf symbolic package, Mathematica. This paper describes some of the Mathematica routines that perform various transformations, shows how they are invoked and used by the compiler to overcome the new challenges, and presents experimental results for code involving cyclic and block-cyclic arrays as evidence of the feasibility of the approach.
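A tiny sketch of the index arithmetic behind such distributions, for concreteness; the mapping rule is the standard block-cyclic one, while the code itself is ours:

```python
"""Owner-computes sketch for a 1-D block-cyclic distribution: element i of an
array in a cyclic(b) distribution over P processors lives on (i // b) mod P."""

def owner(i: int, b: int, P: int) -> int:
    """Owning processor of global index i under block-cyclic(b) over P procs."""
    return (i // b) % P

def local_indices(p: int, n: int, b: int, P: int):
    """Global indices of array[0:n] stored on processor p."""
    return [i for i in range(n) if owner(i, b, P) == p]

# block size 2 over 3 processors: proc 0 owns blocks 0, 3, 6, ...
print(local_indices(0, 14, 2, 3))  # -> [0, 1, 6, 7, 12, 13]
```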
Integrated Nationwide Electronic Health Records system: Semi-distributed architecture approach.
Fragidis, Leonidas L; Chatzoglou, Prodromos D; Aggelidis, Vassilios P
2016-11-14
The integration of heterogeneous electronic health record systems by building an interoperable nationwide electronic health record system provides undisputable benefits in health care, such as superior health information quality, medical error prevention and cost savings. This paper proposes a semi-distributed system architecture approach for an integrated national electronic health record system, incorporating the advantages of the two dominant approaches, the centralized architecture and the distributed architecture. The high-level design of the main elements of the proposed architecture is provided, along with diagrams of execution and operation and the data synchronization architecture for the proposed solution. The proposed approach effectively handles issues related to redundancy, consistency, security, privacy, availability, load balancing, maintainability, complexity and interoperability of citizens' health data. The proposed semi-distributed architecture offers a robust interoperability framework without requiring healthcare providers to change their local EHR systems. It is a pragmatic approach that takes into account the characteristics of the Greek national healthcare system along with the national public administration data communication network infrastructure, for achieving EHR integration with acceptable implementation cost.
Jacobs, Sara R; Weiner, Bryan J; Reeve, Bryce B; Hofmann, David A; Christian, Michael; Weinberger, Morris
2015-01-22
The failure rates for implementing complex innovations in healthcare organizations are high. Estimates range from 30% to 90% depending on the scope of the organizational change involved, the definition of failure, and the criteria to judge it. The innovation implementation framework offers a promising approach to examine the organizational factors that determine effective implementation. To date, the utility of this framework in a healthcare setting has been limited to qualitative studies and/or group level analyses. Therefore, the goal of this study was to quantitatively examine this framework among individual participants in the National Cancer Institute's Community Clinical Oncology Program using structural equation modeling. We examined the innovation implementation framework using structural equation modeling (SEM) among 481 physician participants in the National Cancer Institute's Community Clinical Oncology Program (CCOP). The data sources included the CCOP Annual Progress Reports, surveys of CCOP physician participants and administrators, and the American Medical Association Physician Masterfile. Overall the final model fit well. Our results demonstrated that not only did perceptions of implementation climate have a statistically significant direct effect on implementation effectiveness, but physicians' perceptions of implementation climate also mediated the relationship between organizational implementation policies and practices (IPP) and enrollment (p <0.05). In addition, physician factors such as CCOP PI status, age, radiological oncologists, and non-oncologist specialists significantly influenced enrollment as well as CCOP organizational size and structure, which had indirect effects on implementation effectiveness through IPP and implementation climate. Overall, our results quantitatively confirmed the main relationship postulated in the innovation implementation framework between IPP, implementation climate, and implementation effectiveness among individual physicians. This finding is important, as although the model has been discussed within healthcare organizations before, the studies have been predominately qualitative in nature and/or at the organizational level. In addition, our findings have practical applications. Managers looking to increase implementation effectiveness of an innovation should focus on creating an environment that physicians perceive as encouraging implementation. In addition, managers should consider instituting specific organizational IPP aimed at increasing positive perceptions of implementation climate. For example, IPP should include specific expectations, support, and rewards for innovation use.
Open-source framework for power system transmission and distribution dynamics co-simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Fan, Rui; Daily, Jeff
The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open-source co-simulation framework, the “Framework for Network Co-Simulation” (FNCS), together with the decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
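As an illustration of the decoupled-simulator idea (not FNCS's actual API, which the abstract does not show), a toy lock-step exchange between a transmission model and a distribution model might look like this; both models and all numbers are placeholders:

```python
"""Toy time-synchronization loop for transmission-distribution co-simulation:
each step, the broker passes aggregate load up and substation voltage down."""

def transmission_step(total_load_mw: float) -> float:
    """Return substation voltage (p.u.) given aggregate load (crude model)."""
    return 1.05 - 0.0005 * total_load_mw

def distribution_step(voltage_pu: float) -> float:
    """Return feeder load (MW) given substation voltage (constant impedance)."""
    return 80.0 * voltage_pu ** 2

voltage, load = 1.0, 80.0
for t in range(5):                         # lock-step synchronization
    voltage = transmission_step(load)      # load passed up to transmission
    load = distribution_step(voltage)      # voltage passed down to distribution
    print(f"t={t}: V={voltage:.4f} p.u., load={load:.2f} MW")
```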
Creating an outcomes framework.
Doerge, J B
2000-01-01
Four constructs used to build a framework for outcomes management for a large midwestern tertiary hospital are described in this article. A system framework outlining a model of clinical integration and population management based on Steven Shortell's work is discussed. This framework includes key definitions of high-risk patients, target groups, populations and community. Roles for each level of population management and how they were implemented in the health care system are described. A point-of-service framework centered on seven dimensions of care is the next construct, applied to each nursing unit. The third construct outlines the framework for role development. Three roles for nursing were created to implement strategies for target groups that are strategic disease categories; two of those roles are described in depth. The philosophy of nursing practice is centered on caring and existential advocacy. The final construct is the modification of the Dartmouth model as a common framework for outcomes. System applications of the scorecard and lessons learned in the 2-year process of implementation are shared.
von Groote, Per Maximilian; Giustini, Alessandro; Bickenbach, Jerome Edmond
2014-01-01
A long-standing scientific discourse on the use of health research evidence to inform policy has come to produce multiple implementation theories, frameworks, models, and strategies. It is from this extensive body of research that the authors extract and present essential components of an implementation process in the health domain, gaining valuable guidance on how to successfully meet the challenges of implementation. Furthermore, this article describes how implementation content can be analyzed and reorganized, with a special focus on implementation at different policy, systems and services, and individual levels using existing frameworks and tools. In doing so, the authors aim to contribute to the establishment and testing of an implementation framework for reports such as the World Health Organization World Report on Disability, the World Health Organization International Perspectives on Spinal Cord Injury, and other health policy reports or technical health guidelines.
Learning in the model space for cognitive fault diagnosis.
Chen, Huanhuan; Tino, Peter; Rodan, Ali; Yao, Xin
2014-01-01
The emergence of large sensor networks has facilitated the collection of large amounts of real-time data to monitor and control complex engineering systems. However, in many cases the collected data may be incomplete or inconsistent, while the underlying environment may be time-varying or unformulated. In this paper, we develop an innovative cognitive fault diagnosis framework that tackles the above challenges. This framework investigates fault diagnosis in the model space instead of the signal space. Learning in the model space is implemented by fitting a series of models using a series of signal segments selected with a sliding window. By investigating the learning techniques in the fitted model space, faulty models can be discriminated from healthy models using a one-class learning algorithm. The framework enables us to construct a fault library when unknown faults occur, which can be regarded as cognitive fault isolation. This paper also theoretically investigates how to measure the pairwise distance between two models in the model space and incorporates the model distance into the learning algorithm in the model space. The results on three benchmark applications and one simulated model for the Barcelona water distribution network confirm the effectiveness of the proposed framework.
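A minimal sketch of the "learn in the model space" idea, with least-squares AR(2) fits standing in for the paper's fitted models and a simple distance threshold standing in for the one-class learner; everything here is illustrative:

```python
"""Model-space fault detection sketch: fit a small model per sliding window,
then flag windows whose parameters sit far from the healthy parameter cloud."""
import numpy as np

def ar2_params(x: np.ndarray) -> np.ndarray:
    """Least-squares AR(2) coefficients of one signal segment."""
    X = np.column_stack([x[1:-1], x[:-2]])  # regressors x_{t-1}, x_{t-2}
    y = x[2:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

rng = np.random.default_rng(1)
# "healthy" segments: white noise -> AR coefficients clustered near zero
healthy = [ar2_params(rng.standard_normal(200)) for _ in range(50)]
center = np.mean(healthy, axis=0)
radius = 3 * max(np.linalg.norm(m - center) for m in healthy)

# "faulty" segment: an oscillatory signal yields very different AR parameters
faulty = ar2_params(np.sin(0.3 * np.arange(200)) + 0.1 * rng.standard_normal(200))
print("faulty segment flagged:", np.linalg.norm(faulty - center) > radius)
```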
Watershed management in South Asia: A synoptic review
NASA Astrophysics Data System (ADS)
Ratna Reddy, V.; Saharawat, Yashpal Singh; George, Biju
2017-08-01
Watershed management (WSM) is the most widely adopted technology in developed as well as developing countries due to its suitability across climatic conditions. Watershed technology is suitable for protecting and enhancing soil fertility, which is deteriorating at an alarming rate with agricultural intensification in high as well as low rainfall regions. Of late, WSM is considered an effective poverty alleviation intervention in the rain-fed regions of countries like India. This paper aims at providing a basic watershed policy and implementation framework based on a critical review of experiences of WSM initiatives across South Asia. The purpose is to provide cross-learning within South Asia and other developing countries (especially in Africa) that are embarking on WSM in recent years. Countries in the region accord differential policy priority and are at different levels of institutional arrangements for implementing WSM programmes. The implementation of watershed interventions is neither scientific nor comprehensive in all the countries, limiting their effectiveness (impacts). Implementation of the programmes for enhancing the livelihoods of the communities needs to strengthen both technical and institutional aspects. While countries like India and Nepal are yet to strengthen the technical aspects in terms of integrating hydrogeology and biophysical aspects into watershed design, others need to look at these aspects as they move towards strengthening the watershed institutions. Another important challenge in all the countries is the distribution of benefits. Due to the existing property rights in land and water resources, coupled with the agrarian structure and the uneven distribution and geometry of aquifers, access to sub-surface water resources is unevenly distributed across households. Though most of the countries are moving towards incorporating livelihoods components in order to ensure benefits to all sections of the community, not much is done in terms of addressing the equity aspects of WSM.
ERIC Educational Resources Information Center
Klebansky, Anna; Fraser, Sharon P.
2013-01-01
This paper details a conceptual framework that situates curriculum design for information literacy and lifelong learning, through a cohesive developmental information literacy based model for learning, at the core of teacher education courses at UTAS. The implementation of the framework facilitates curriculum design that systematically,…
One Urban School's Implementation of a Systemic Response-to-Intervention (RTI) Framework
ERIC Educational Resources Information Center
Higgins Averill, Orla C.
2014-01-01
School districts have been attempting to implement the response-to-intervention (RTI) framework in an effort both to comply with federal legislation (i.e., IDEA 2004) and to improve teaching for all students. Extant research on this framework has focused on exploring assessment practices across tiers and the efficacy of specific interventions,…
Distributed tactical reasoning framework for intelligent vehicles
NASA Astrophysics Data System (ADS)
Sukthankar, Rahul; Pomerleau, Dean A.; Thorpe, Chuck E.
1998-01-01
In independent vehicle concepts for the Automated Highway System (AHS), the ability to make competent tactical-level decisions in real-time is crucial. Traditional approaches to tactical reasoning typically involve the implementation of large monolithic systems, such as decision trees or finite state machines. However, as the complexity of the environment grows, the unforeseen interactions between components can make modifications to such systems very challenging. For example, changing an overtaking behavior may require several non-local changes to car-following, lane changing and gap acceptance rules. This paper presents a distributed solution to the problem. PolySAPIENT consists of a collection of autonomous modules, each specializing in a particular aspect of the driving task - classified by traffic entities rather than tactical behavior. Thus, the influence of the vehicle ahead on the available actions is managed by one reasoning object, while the implications of an approaching exit are managed by another. The independent recommendations from these reasoning objects are expressed in the form of votes and vetoes over a 'tactical action space', and are resolved by a voting arbiter. This local independence enables PolySAPIENT reasoning objects to be developed independently, using a heterogeneous implementation. PolySAPIENT vehicles are implemented in the SHIVA tactical highway simulator, whose vehicles are based on the Carnegie Mellon Navlab robots.
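A toy illustration of vote-and-veto arbitration over a tactical action space; the action names, weights, and module logic are invented placeholders, not PolySAPIENT code:

```python
"""Vote/veto arbitration sketch: each reasoning object scores candidate
actions and may veto some; the arbiter picks the best non-vetoed action."""

ACTIONS = ["keep_lane", "change_left", "change_right", "brake"]

def car_ahead_votes(gap_m: float):
    votes = {a: 0.0 for a in ACTIONS}
    vetoes = set()
    if gap_m < 10:
        vetoes.add("keep_lane")      # too close: staying put is forbidden
        votes["brake"] = 1.0
        votes["change_left"] = 0.6
    else:
        votes["keep_lane"] = 1.0
    return votes, vetoes

def exit_ahead_votes(exit_in_m: float):
    votes = {a: 0.0 for a in ACTIONS}
    if exit_in_m < 500:
        votes["change_right"] = 0.8  # drift toward the exit lane
    return votes, set()

def arbitrate(modules):
    total = {a: 0.0 for a in ACTIONS}
    vetoed = set()
    for votes, vetoes in modules:
        vetoed |= vetoes
        for a, v in votes.items():
            total[a] += v
    allowed = {a: s for a, s in total.items() if a not in vetoed}
    return max(allowed, key=allowed.get)

print(arbitrate([car_ahead_votes(8.0), exit_ahead_votes(400.0)]))  # -> brake
```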
NASA Astrophysics Data System (ADS)
Garcia, M.; Kumar, S.; Gochis, D.; Yates, D.; McHenry, J.; Burnet, T.; Coats, C.; Condrey, J.
2006-05-01
Collaboration between scientists at UMBC-GEST and NASA-GSFC, the NCAR Research Applications Laboratory (RAL), and Baron Advanced Meteorological Services (BAMS) has produced a modeling framework for the application of traditional land surface models (LSMs) in a distributed hydrologic system which can be used for diagnosis and prediction of routed stream discharge hydrographs. This collaboration is oriented toward near-term system implementation across Romania for flood and flash-flood analyses and forecasting as part of the World Bank-funded Destructive Waters Abatement (DESWAT) program. Meteorological forcing from surface observations, model analyses and numerical forecasts is employed in the NASA-GSFC Land Information System (LIS) to drive the Unified Noah LSM with Noah-Distributed components, stream network delineation and routing schemes original to this work. The Unified Noah LSM is the outgrowth of a joint modeling effort between several research partners including NCAR, the NOAA National Center for Environmental Prediction (NCEP), and the Air Force Weather Agency (AFWA). At NCAR, hydrologically-oriented extensions to the Noah LSM have been developed for LSM applications in a distributed domain in order to address the lateral redistribution of soil moisture by surface and subsurface flow processes. These advancements have been integrated into the NASA-GSFC Land Information System (LIS) and coupled with an original framework for hydraulic channel network definition and specification, linkages with the Noah-Distributed overland and subsurface flow framework, and distributed cell-to-cell (or link-node) hydraulic routing. This poster presents an overview of the system components and their organization, as well as results of the first U.S. case study performed with this system under various configurations. The case study simulated precipitation events over a headwater basin in the southern Appalachian Mountains in October 2005 following the landfall of Tropical Storm Tammy in South Carolina. These events followed a long dry period in the region, allowing a demonstration of watershed response to strong precipitation forcing under nearly ideal and easily-specified initial conditions. The results presented here will compare simulated versus observed streamflow conditions at various locations in the test watershed using a selection of routing methods.
MAFsnp: A Multi-Sample Accurate and Flexible SNP Caller Using Next-Generation Sequencing Data
Hu, Jiyuan; Li, Tengfei; Xiu, Zidi; Zhang, Hong
2015-01-01
Most existing statistical methods developed for calling single nucleotide polymorphisms (SNPs) using next-generation sequencing (NGS) data are based on Bayesian frameworks, and there does not exist any SNP caller that produces p-values for calling SNPs in a frequentist framework. To fill this gap, we develop a new method MAFsnp, a Multiple-sample based Accurate and Flexible algorithm for calling SNPs with NGS data. MAFsnp is based on an estimated likelihood ratio test (eLRT) statistic. In practical situations, the involved parameter is very close to the boundary of the parameter space, so standard large-sample theory is not suitable for evaluating the finite-sample distribution of the eLRT statistic. Observing that the distribution of the test statistic is a mixture of a point mass at zero and a continuous part, we propose to model the test statistic with a novel two-parameter mixture distribution. Once the parameters in the mixture distribution are estimated, p-values can be easily calculated for detecting SNPs, and the multiple-testing corrected p-values can be used to control the false discovery rate (FDR) at any pre-specified level. With simulated data, MAFsnp is shown to have much better control of FDR than the existing SNP callers. Through the application to two real datasets, MAFsnp is also shown to outperform the existing SNP callers in terms of calling accuracy. An R package “MAFsnp” implementing the new SNP caller is freely available at http://homepage.fudan.edu.cn/zhangh/softwares/. PMID:26309201
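A hedged sketch of how a p-value falls out of a zero-inflated null distribution of the kind described; the gamma tail and all parameter values are stand-in assumptions, not the paper's fitted two-parameter distribution:

```python
"""P-value under a mixture null: point mass at zero plus a continuous tail."""
from scipy import stats

pi0 = 0.6                # assumed mass of the null statistic at zero
shape, scale = 0.5, 2.0  # assumed parameters of the fitted continuous part

def pvalue(t: float) -> float:
    """P(T >= t) under the mixture; t is the observed eLRT statistic."""
    if t <= 0.0:
        return 1.0
    return (1.0 - pi0) * stats.gamma.sf(t, a=shape, scale=scale)

print(pvalue(6.5))       # a small p-value marks a candidate SNP
```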
A Geospatial Information Grid Framework for Geological Survey
Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong
2015-01-01
The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and an evaluation service is introduced in this paper. PMID:26710255
A Framework for Optimizing Phytosanitary Thresholds in Seed Systems.
Choudhury, Robin Alan; Garrett, Karen A; Klosterman, Steven J; Subbarao, Krishna V; McRoberts, Neil
2017-10-01
Seedborne pathogens and pests limit production in many agricultural systems. Quarantine programs help prevent the introduction of exotic pathogens into a country, but few regulations directly apply to reducing the reintroduction and spread of endemic pathogens. Use of phytosanitary thresholds helps limit the movement of pathogen inoculum through seed, but the costs associated with rejected seed lots can be prohibitive for voluntary implementation of phytosanitary thresholds. In this paper, we outline a framework to optimize thresholds for seedborne pathogens, balancing the cost of rejected seed lots and benefit of reduced inoculum levels. The method requires relatively small amounts of data, and the accuracy and robustness of the analysis improves over time as data accumulate from seed testing. We demonstrate the method first and illustrate it with a case study of seedborne oospores of Peronospora effusa, the causal agent of spinach downy mildew. A seed lot threshold of 0.23 oospores per seed could reduce the overall number of oospores entering the production system by 90% while removing 8% of seed lots destined for distribution. Alternative mitigation strategies may result in lower economic losses to seed producers, but have uncertain efficacy. We discuss future challenges and prospects for implementing this approach.
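To make the trade-off concrete, a small sketch that sweeps a rejection threshold over simulated lot contamination levels; the lognormal distribution and its parameters are assumptions for illustration, not the study's data:

```python
"""Threshold trade-off sketch: share of seed lots rejected versus share of
total oospores kept out of the production system, as the threshold varies."""
import numpy as np

rng = np.random.default_rng(7)
lots = rng.lognormal(mean=-3.0, sigma=2.0, size=10_000)  # oospores per seed

for thr in (0.05, 0.23, 1.0):
    rejected = lots > thr
    lots_rejected = rejected.mean()
    oospores_removed = lots[rejected].sum() / lots.sum()
    print(f"threshold {thr:>5}: reject {lots_rejected:5.1%} of lots, "
          f"remove {oospores_removed:5.1%} of oospores")
```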
EPICS-based control and data acquisition for the APS slope profiler (Conference Presentation)
NASA Astrophysics Data System (ADS)
Sullivan, Joseph; Assoufid, Lahsen; Qian, Jun; Jemian, Peter R.; Mooney, Tim; Rivers, Mark L.; Goetze, Kurt; Sluiter, Ronald L.; Lang, Keenan
2016-09-01
The motion control, data acquisition and analysis system for the APS Slope Measuring Profiler was implemented using the Experimental Physics and Industrial Control System (EPICS). EPICS was designed as a framework with software tools and applications that provide a software infrastructure for building distributed control systems to operate devices such as particle accelerators, large experiments and major telescopes. EPICS was chosen to implement the APS Slope Measuring Profiler because it is also applicable to single-purpose systems. The control and data handling capability available in the EPICS framework provides the basic functionality needed for high-precision X-ray mirror measurement. Those built-in capabilities include hardware integration of high-performance motion control systems (3-axis gantry and tip-tilt stages), mirror measurement devices (autocollimator, laser spot camera) and temperature sensors. Scanning the mirror and taking measurements was accomplished with an EPICS feature (the sscan record), which synchronizes motor positioning with measurement triggers and data storage. Various mirror scanning modes were automatically configured using EPICS built-in scripting. EPICS tools also provide low-level image processing (areaDetector). Operation screens were created using EPICS-aware GUI screen development tools.
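As a rough illustration, driving such an sscan-based measurement from Python might look like the following; the PV prefix and signal names are hypothetical, and the field names follow common synApps sscan conventions that should be checked against the local database:

```python
"""Hedged sketch: configure one positioner and one detector on an EPICS
sscan record via pyepics, then start the scan. All PV names are placeholders."""
import epics

SCAN = "xxx:scan1"                         # hypothetical sscan record

epics.caput(SCAN + ".P1PV", "xxx:m1.VAL")  # positioner: a motor PV (assumed)
epics.caput(SCAN + ".P1SP", -1.0)          # start position
epics.caput(SCAN + ".P1EP", 1.0)           # end position
epics.caput(SCAN + ".NPTS", 41)            # number of scan points
epics.caput(SCAN + ".D01PV", "xxx:autocollimator:angle")  # detector readback
epics.caput(SCAN + ".EXSC", 1, wait=True)  # execute the scan, block until done
```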
A general framework for a collaborative water quality knowledge and information network.
Dalcanale, Fernanda; Fontane, Darrell; Csapo, Jorge
2011-03-01
Increasing knowledge about the environment has brought about a better understanding of the complexity of the issues, and more publicly available information has resulted in a steady shift from centralized decision making to increasing levels of participatory processes. The management of that information, in turn, is becoming more complex. One of the ways to deal with the complexity is the development of tools that would allow all players, including managers, researchers, educators, stakeholders and civil society, to contribute to the information system at any level they are inclined to do so. In this project, a search for the available technology for collaboration, methods of community filtering, and community-based review was performed, and the possible implementation of these tools to create a general framework for a collaborative "Water Quality Knowledge and Information Network" was evaluated. The main goals of the network are to advance water quality education and knowledge; encourage distribution of and access to data; provide networking opportunities; allow public perceptions and concerns to be collected; promote exchange of ideas; and give general, open, and free access to information. A reference implementation was made available online and received positive feedback from the community, which also suggested some possible improvements.
Wyse, Sara A; Long, Tammy M; Ebert-May, Diane
2014-01-01
Graduate teaching assistants (TAs) are increasingly responsible for instruction in undergraduate science, technology, engineering, and mathematics (STEM) courses. Various professional development (PD) programs have been developed and implemented to prepare TAs for this role, but data about effectiveness are lacking and are derived almost exclusively from self-reported surveys. In this study, we describe the design of a reformed PD (RPD) model and apply Kirkpatrick's Evaluation Framework to evaluate multiple outcomes of TA PD before, during, and after implementing RPD. This framework allows evaluation that includes both direct measures and self-reported data. In RPD, TAs created and aligned learning objectives and assessments and incorporated more learner-centered instructional practices in their teaching. However, these data are inconsistent with TAs' self-reported perceptions about RPD and suggest that single measures are insufficient to evaluate TA PD programs. © 2014 Wyse et al. CBE—Life Sciences Education © 2014 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Gawor, J.; Lane, P.
In this paper we report on the features of the Java Commodity Grid Kit (Java CoG Kit). The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus Toolkit protocols, allowing the Java CoG Kit to also communicate with the services distributed as part of the C Globus Toolkit reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus Toolkit software. In this paper we also report on the efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Grid jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
Birken, Sarah A; Powell, Byron J; Presseau, Justin; Kirk, M Alexis; Lorencatto, Fabiana; Gould, Natalie J; Shea, Christopher M; Weiner, Bryan J; Francis, Jill J; Yu, Yan; Haines, Emily; Damschroder, Laura J
2017-01-05
Over 60 implementation frameworks exist. Using multiple frameworks may help researchers to address multiple study purposes, levels, and degrees of theoretical heritage and operationalizability; however, using multiple frameworks may result in unnecessary complexity and redundancy if doing so does not address study needs. The Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF) are both well-operationalized, multi-level implementation determinant frameworks derived from theory. As such, the rationale for using the frameworks in combination (i.e., CFIR + TDF) is unclear. The objective of this systematic review was to elucidate the rationale for using CFIR + TDF by (1) describing studies that have used CFIR + TDF, (2) examining how they used CFIR + TDF, and (3) examining their stated rationale for using CFIR + TDF. We undertook a systematic review to identify studies that mentioned both the CFIR and the TDF, were written in English, were peer-reviewed, and reported either a protocol or results of an empirical study in MEDLINE/PubMed, PsycInfo, Web of Science, or Google Scholar. We then abstracted data into a matrix and analyzed it qualitatively, identifying salient themes. We identified five protocols and seven completed studies that used CFIR + TDF. CFIR + TDF was applied to studies in several countries, to a range of healthcare interventions, and at multiple intervention phases; used many designs, methods, and units of analysis; and assessed a variety of outcomes. Three studies indicated that using CFIR + TDF addressed multiple study purposes. Six studies indicated that using CFIR + TDF addressed multiple conceptual levels. Four studies did not explicitly state their rationale for using CFIR + TDF. Differences in the purposes that authors of the CFIR (e.g., comprehensive set of implementation determinants) and the TDF (e.g., intervention development) propose help to justify the use of CFIR + TDF. Given that the CFIR and the TDF are both multi-level frameworks, the rationale that using CFIR + TDF is needed to address multiple conceptual levels may reflect potentially misleading conventional wisdom. On the other hand, using CFIR + TDF may more fully define the multi-level nature of implementation. To avoid concerns about unnecessary complexity and redundancy, scholars who use CFIR + TDF and combinations of other frameworks should specify how the frameworks contribute to their study. PROSPERO CRD42015027615.
Providing a parallel and distributed capability for JMASS using SPEEDES
NASA Astrophysics Data System (ADS)
Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob
2002-07-01
The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be expanded for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high speed communications. By providing support for these services, JMASS can better address modeling domains requiring parallel computationally intense calculations such clutter, vulnerability and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.
Joint Bayesian Component Separation and CMB Power Spectrum Estimation
NASA Technical Reports Server (NTRS)
Eriksen, H. K.; Jewell, J. B.; Dickinson, C.; Banday, A. J.; Gorski, K. M.; Lawrence, C. R.
2008-01-01
We describe and implement an exact, flexible, and computationally efficient algorithm for joint component separation and CMB power spectrum estimation, building on a Gibbs sampling framework. Two essential new features are (1) conditional sampling of foreground spectral parameters and (2) joint sampling of all amplitude-type degrees of freedom (e.g., CMB, foreground pixel amplitudes, and global template amplitudes) given spectral parameters. Given a parametric model of the foreground signals, we estimate efficiently and accurately the exact joint foreground-CMB posterior distribution and, therefore, all marginal distributions such as the CMB power spectrum or foreground spectral index posteriors. The main limitation of the current implementation is the requirement of identical beam responses at all frequencies, which restricts the analysis to the lowest resolution of a given experiment. We outline a future generalization to multiresolution observations. To verify the method, we analyze simple models and compare the results to analytical predictions. We then analyze a realistic simulation with properties similar to the 3 yr WMAP data, downgraded to a common resolution of 3 deg FWHM. The results from the actual 3 yr WMAP temperature analysis are presented in a companion Letter.
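Schematically, the sampling scheme described above alternates conditional draws (our notation, not the paper's: d for the data, a for all amplitude-type degrees of freedom, β for foreground spectral parameters, C_ℓ for the CMB power spectrum):

```latex
% One Gibbs iteration: amplitudes given spectral parameters, spectral
% parameters given amplitudes, then the power spectrum given the CMB map.
a^{(i+1)} \sim P\!\left(a \mid \beta^{(i)}, C_\ell^{(i)}, d\right), \qquad
\beta^{(i+1)} \sim P\!\left(\beta \mid a^{(i+1)}, d\right), \qquad
C_\ell^{(i+1)} \sim P\!\left(C_\ell \mid a^{(i+1)}\right)
```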
Automating ATLAS Computing Operations using the Site Status Board
NASA Astrophysics Data System (ADS)
Andreeva, J.; Borrego Iglesias, C.; Campana, S.; Di Girolamo, A.; Dzhunov, I.; Espinal Curull, X.; Gayazov, S.; Magradze, E.; Nowotka, M.; Rinaldi, L.; Saiz, P.; Schovancova, J.; Stewart, G. A.; Wright, M.
2012-12-01
The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses the SSB for the distributed computing shifts, for estimating data processing and data transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities, in case of potential problems. The ATLAS SSB provides a real-time aggregated monitoring view and keeps the history of the monitoring metrics. Based on this history, usability of a site from the perspective of ATLAS is calculated. The paper will describe how the SSB is integrated in the ATLAS operations and computing infrastructure and will cover implementation details of the ATLAS SSB sensors and alarm system, based on the information in the SSB. It will demonstrate the positive impact of the use of the SSB on the overall performance of ATLAS computing activities and will overview future plans.
The role of advanced nursing in lung cancer: A framework based development.
Serena, A; Castellani, P; Fucina, N; Griesser, A-C; Jeanmonod, J; Peters, S; Eicher, M
2015-12-01
Advanced Practice Lung Cancer Nurses (APLCN) are well-established in several countries but their role has yet to be established in Switzerland. Developing an innovative nursing role requires a structured approach to guide successful implementation and to meet the overarching goal of improved nursing-sensitive patient outcomes. The "Participatory, Evidence-based, Patient-focused process for guiding the development, implementation, and evaluation of advanced practice nursing" (PEPPA framework) is one such approach, developed in the context of the Canadian health system. The purpose of this article is to describe the development of an APLCN model at a Swiss Academic Medical Center as part of a specialized Thoracic Cancer Center and to evaluate the applicability of the PEPPA framework in this process. In order to develop and implement the APLCN role, we applied the first seven phases of the PEPPA framework. This article demonstrates the applicability of the PEPPA framework for APLCN development. This framework allowed us to i) identify key components of an APLCN model responsive to lung cancer patients' health needs, ii) identify role facilitators and barriers, iii) implement the APLCN role and iv) design a feasibility study of this new role. The PEPPA framework provides a structured process for implementing novel Advanced Practice Nursing roles in a local context, particularly where such roles are in their infancy. Two key points in the process include assessing patients' health needs and involving key stakeholders. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Liang, Likai; Bi, Yushen
Considering the distributed network management system's demands for distribution, extensibility and reusability, a framework model of a three-tier distributed network management system based on COM/COM+ and DNA is proposed, adopting software component technology and the N-tier application software framework design idea. We also give the concrete design plan for each layer of this model. Finally, we discuss the internal running process of each layer in the distributed network management system's framework model.
van der Beek, Allard J; Dennerlein, Jack T; Huysmans, Maaike A; Mathiassen, Svend Erik; Burdorf, Alex; van Mechelen, Willem; van Dieën, Jaap H; Frings-Dresen, Monique Hw; Holtermann, Andreas; Janwantanakul, Prawit; van der Molen, Henk F; Rempel, David; Straker, Leon; Walker-Bone, Karen; Coenen, Pieter
2017-11-01
Objectives Work-related musculoskeletal disorders (MSD) are highly prevalent and put a large burden on (working) society. Primary prevention of work-related MSD focuses often on physical risk factors (such as manual lifting and awkward postures) but has not been too successful in reducing the MSD burden. This may partly be caused by insufficient knowledge of etiological mechanisms and/or a lack of adequately feasible interventions (theory failure and program failure, respectively), possibly due to limited integration of research disciplines. A research framework could link research disciplines thereby strengthening the development and implementation of preventive interventions. Our objective was to define and describe such a framework for multi-disciplinary research on work-related MSD prevention. Methods We described a framework for MSD prevention research, partly based on frameworks from other research fields (ie, sports injury prevention and public health). Results The framework is composed of a repeated sequence of six steps comprising the assessment of (i) incidence and severity of MSD, (ii) risk factors for MSD, and (iii) underlying mechanisms; and the (iv) development, (v) evaluation, and (vi) implementation of preventive intervention(s). Conclusions In the present framework for optimal work-related MSD prevention, research disciplines are linked. This framework can thereby help to improve theories and strengthen the development and implementation of prevention strategies for work-related MSD.
GeoNetwork powered GI-cat: a geoportal hybrid solution
NASA Astrophysics Data System (ADS)
Baldini, Alessio; Boldrini, Enrico; Santoro, Mattia; Mazzetti, Paolo
2010-05-01
In setting up a Spatial Data Infrastructure (SDI), the creation of a system for metadata management and discovery plays a fundamental role. An effective solution is the use of a geoportal (e.g. the FAO/ESA geoportal), which has the important benefit of being accessible from a web browser. With this work we present a solution based on integrating two of the available frameworks: GeoNetwork and GI-cat. GeoNetwork is open-source software designed to improve the accessibility of a wide variety of data together with the associated ancillary information (metadata), at different scales and from multidisciplinary sources; data are organized and documented in a standard and consistent way. GeoNetwork implements both the Portal and Catalog components of a Spatial Data Infrastructure (SDI) as defined in the OGC Reference Architecture. It provides tools for managing and publishing metadata on spatial data and related services. GeoNetwork allows harvesting of various types of web data sources, e.g. OGC Web Services (CSW, WCS, WMS). GI-cat is a distributed catalog based on a service-oriented framework of modular components and can be customized and tailored to support different deployment scenarios. It can federate a multiplicity of catalog services, as well as inventory and access services, in order to discover and access heterogeneous ESS resources. The federated resources are exposed by GI-cat through several standard catalog interfaces (e.g. OGC CSW AP ISO, OpenSearch, etc.) and by the GI-cat extended interface. Specific components implement mediation services for interfacing heterogeneous service providers, each of which exposes a specific standard specification; such components are called Accessors. These mediating components resolve the providers' data model multiplicity by mapping the provider models onto the GI-cat internal data model, which implements the ISO 19115 Core profile. Accessors also implement the query protocol mapping: they translate the query requests expressed according to the interface protocols exposed by GI-cat into the multiple query dialects spoken by the resource service providers. Currently, a number of well-accepted catalog and inventory services are supported, including several OGC Web Services, THREDDS Data Server, SeaDataNet Common Data Index, GBIF and OpenSearch engines. A GeoNetwork-powered GI-cat has been developed in order to exploit the best of the two frameworks. The new system uses a modified version of the GeoNetwork web interface, adding the capability of querying the specified GI-cat catalog and not only the GeoNetwork internal database. The resulting system consists of a geoportal in which GI-cat plays the role of the search engine. This new system allows the query to be distributed over the different types of data sources linked to a GI-cat. The metadata results of the query are then visualized by the GeoNetwork web interface. This configuration was tested in the framework of GIIDA, a project of the Italian National Research Council (CNR) focused on data accessibility and interoperability. A second advantage of this solution is achieved by setting up a GeoNetwork catalog amongst the accessors of the GI-cat instance. Such a configuration will in turn allow GI-cat to run the query against the internal GeoNetwork database. This makes both the harvesting and metadata editing functionality provided by GeoNetwork and the distributed search functionality of GI-cat available in a consistent way through the same web interface.
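For concreteness, querying a brokered catalog through one of its standard CSW interfaces could look like this OWSLib sketch; the endpoint URL and the search text are placeholders:

```python
"""Hedged sketch of a CSW search against a distributed-catalog endpoint."""
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# hypothetical GI-cat CSW endpoint; substitute a real service URL
csw = CatalogueServiceWeb("https://example.org/gi-cat/services/cswiso")

# free-text constraint over all queryable fields
query = PropertyIsLike("csw:AnyText", "%sea surface temperature%")
csw.getrecords2(constraints=[query], maxrecords=10)

for rec_id, rec in csw.records.items():
    print(rec_id, "-", rec.title)
```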
Implementation guide for monitoring work zone safety and mobility impacts
DOT National Transportation Integrated Search
2009-01-01
This implementation guide describes the conceptual framework, data requirements, and computational procedures for determining the safety and mobility impacts of work zones in Texas. Researchers designed the framework and procedures to assist district...
Global Framework for Climate Services (GFCS): status of implementation
NASA Astrophysics Data System (ADS)
Lucio, Filipe
2014-05-01
The GFCS is a global partnership of governments and UN and international agencies that produce and use climate information and services. WMO, which is leading the initiative in collaboration with UN ISDR, WHO, WFP, FAO, UNESCO, UNDP and other UN and international partners, is pooling expertise and resources to co-design and co-produce knowledge, information and services that support effective decision making in response to climate variability and change in four priority areas (agriculture and food security, water, health, and disaster risk reduction). To address the entire value chain for the effective production and application of climate services, the main components, or pillars, of the GFCS are being implemented:
• User Interface Platform — provides ways for climate service users and providers to interact, to identify needs and capacities, and to improve the effectiveness of the Framework and its climate services;
• Climate Services Information System — produces and distributes climate data, products and information according to the needs of users and to agreed standards;
• Observations and Monitoring — generates the necessary data for climate services according to agreed standards;
• Research, Modelling and Prediction — harnesses science capabilities and results and develops appropriate tools to meet the needs of climate services;
• Capacity Building — supports the systematic development of the institutions, infrastructure and human resources needed for effective climate services.
Activities are being implemented in various countries in Africa, the Caribbean and the South Pacific islands. This paper provides details on the status of implementation of the GFCS worldwide.
Freehafer, Douglas A.; Pierson, Oliver
2004-01-01
In the fall of 2002, the Onondaga Lake Partnership (OLP) formed a Geographic Information System (GIS) Planning Committee to begin the process of developing a comprehensive watershed geographic information system for Onondaga Lake. The goal of the Onondaga Lake Partnership geographic information system is to integrate the various types of spatial data used for scientific investigations, resource management, and planning and design of improvement projects in the Onondaga Lake Watershed. A needs-assessment survey was conducted and a spatial data framework developed to support the Onondaga Lake Partnership use of geographic information system technology. The design focused on the collection, management, and distribution of spatial data, maps, and internet mapping applications. A geographic information system library of over 100 spatial datasets and metadata links was assembled on the basis of the results of the needs assessment survey. Implementation options were presented, and the Geographic Information System Planning Committee offered recommendations for the management and distribution of spatial data belonging to Onondaga Lake Partnership members. The Onondaga Lake Partnership now has a strong foundation for building a comprehensive geographic information system for the Onondaga Lake watershed. The successful implementation of a geographic information system depends on the Onondaga Lake Partnership’s determination of: (1) the design and plan for a geographic information system, including the applications and spatial data that will be provided and to whom, (2) the level of geographic information system technology to be utilized and funded, and (3) the institutional issues of operation and maintenance of the system.
Fragment assignment in the cloud with eXpress-D
2013-01-01
Background Probabilistic assignment of ambiguously mapped fragments produced by high-throughput sequencing experiments has been demonstrated to greatly improve accuracy in the analysis of RNA-Seq and ChIP-Seq, and is an essential step in many other sequence census experiments. A maximum likelihood method using the expectation-maximization (EM) algorithm for optimization is commonly used to solve this problem. However, batch EM-based approaches do not scale well with the size of sequencing datasets, which have been increasing dramatically over the past few years. Thus, current approaches to fragment assignment rely on heuristics or approximations for tractability. Results We present an implementation of a distributed EM solution to the fragment assignment problem using Spark, a data analytics framework that can scale by leveraging compute clusters within datacenters ("the cloud"). We demonstrate that our implementation easily scales to billions of sequenced fragments, while providing the exact maximum likelihood assignment of ambiguous fragments. The accuracy of the method is shown to be an improvement over the most widely used tools available, and the method can be run in a constant amount of time when cluster resources are scaled linearly with the amount of input data. Conclusions The cloud offers one solution for the difficulties faced in the analysis of massive high-throughput sequencing data, which continue to grow rapidly. Researchers in bioinformatics must follow developments in distributed systems, such as new frameworks like Spark, for ways to port existing methods to the cloud and help them scale to the datasets of the future. Our software, eXpress-D, is freely available at: http://github.com/adarob/express-d. PMID:24314033
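The distributed EM idea described above can be sketched in a few lines of PySpark. The fragment data and update rule below are a toy illustration of the E- and M-steps, not the eXpress-D implementation:

    # Minimal sketch of EM for ambiguous fragment assignment on Spark.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("em-sketch").getOrCreate()
    sc = spark.sparkContext

    # Each fragment maps ambiguously to a set of candidate transcripts (toy data).
    fragments = sc.parallelize([
        ("f1", ["t1", "t2"]),
        ("f2", ["t2"]),
        ("f3", ["t1", "t3"]),
        ("f4", ["t3", "t2"]),
    ])

    transcripts = ["t1", "t2", "t3"]
    rho = {t: 1.0 / len(transcripts) for t in transcripts}  # initial abundances

    for _ in range(20):
        b_rho = sc.broadcast(rho)
        # E-step: split each fragment's unit weight across its candidates
        # in proportion to the current abundance estimates.
        counts = (fragments
                  .flatMap(lambda fr: [(t, b_rho.value[t] /
                                        sum(b_rho.value[u] for u in fr[1]))
                                       for t in fr[1]])
                  .reduceByKey(lambda a, b: a + b)
                  .collectAsMap())
        # M-step: renormalize expected counts into new abundances.
        total = sum(counts.values())
        rho = {t: counts.get(t, 0.0) / total for t in transcripts}

    print(rho)
    spark.stop()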
Wiedermann, Wolfgang; Li, Xintong
2018-04-16
In nonexperimental data, at least three possible explanations exist for the association of two variables x and y: (1) x is the cause of y, (2) y is the cause of x, or (3) an unmeasured confounder is present. Statistical tests that identify which of the three explanatory models fits best would be a useful adjunct to the use of theory alone. The present article introduces one such statistical method, direction dependence analysis (DDA), which assesses the relative plausibility of the three explanatory models on the basis of higher-moment information about the variables (i.e., skewness and kurtosis). DDA involves the evaluation of three properties of the data: (1) the observed distributions of the variables, (2) the residual distributions of the competing models, and (3) the independence properties of the predictors and residuals of the competing models. When the observed variables are nonnormally distributed, we show that DDA components can be used to uniquely identify each explanatory model. Statistical inference methods for model selection are presented, and macros to implement DDA in SPSS are provided. An empirical example is given to illustrate the approach. Conceptual and empirical considerations are discussed for best-practice applications in psychological data, and sample size recommendations based on previous simulation studies are provided.
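A minimal Python illustration of the intuition behind DDA follows: with a skewed predictor, the residuals of the mis-specified model inherit non-normality, while those of the correctly specified model do not. The simulated data and model comparison are illustrative only; the authors' macros target SPSS:

    # Sketch of the residual-distribution component of DDA (simulated data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.exponential(size=5000)          # skewed "true cause"
    y = 0.8 * x + rng.normal(size=5000)     # y is an effect of x

    def residuals(pred, out):
        slope, intercept, *_ = stats.linregress(pred, out)
        return out - (intercept + slope * pred)

    r_xy = residuals(x, y)   # residuals of the correctly specified model
    r_yx = residuals(y, x)   # residuals of the mis-specified model

    # Under x->y with non-normal x, the mis-specified model's residuals
    # inherit skewness; the correct model's residuals stay close to normal.
    print("skew(e | x->y) =", stats.skew(r_xy))
    print("skew(e | y->x) =", stats.skew(r_yx))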
Implementation of the SPH Procedure Within the MOOSE Finite Element Framework
NASA Astrophysics Data System (ADS)
Laurier, Alexandre
The goal of this thesis was to implement the SPH homogenization procedure within the MOOSE finite element framework at INL. Before this project, INL relied on DRAGON for its SPH homogenization, which was not flexible enough for its needs. As such, the SPH procedure was implemented for the neutron diffusion equation with the traditional, Selengut and true Selengut normalizations. Another aspect of this research was to derive the SPH-corrected neutron transport equations and implement them in the same framework. Following in the footsteps of other articles, this feature was implemented and tested successfully with both the PN and SN transport calculation schemes. Although the results obtained for the power distribution in PWR assemblies show no advantages over the use of the SPH diffusion equation, we believe the inclusion of this transport correction will allow for better results in cases where either PN or SN is required. An additional aspect of this research was the implementation of a novel way of solving the non-linear SPH problem. Traditionally, this was done through a Picard, fixed-point iterative process, whereas the new implementation relies on MOOSE's Preconditioned Jacobian-Free Newton-Krylov (PJFNK) method to allow for a direct solution of the non-linear problem. This novel implementation decreased calculation time by a factor of up to 50 and generated SPH factors that correspond to those obtained through a fixed-point iterative process with a very tight convergence criterion (epsilon < 10^-8). The PJFNK SPH procedure also reaches convergence in problems containing important reflector regions and void boundary conditions, something that the traditional SPH method has never been able to achieve. When the PJFNK method cannot reach convergence on the SPH problem, a hybrid method is used whereby the traditional SPH iteration forces the initial condition to be within the radius of convergence of the Newton method. This new method was tested with great success on a simplified model of INL's TREAT reactor, a problem that includes very important graphite reflector regions as well as vacuum boundary conditions. To demonstrate the power of PJFNK SPH on a more common case, the correction was applied to a simplified PWR reactor core from the BEAVRS benchmark, comprising 15 assemblies and the water reflector, with very good results. This opens up the possibility of applying the SPH correction to full reactor cores in order to reduce homogenization errors for use in transient or multi-physics calculations.
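The contrast between Picard iteration and a Jacobian-free Newton-Krylov solve can be illustrated on a toy non-linear fixed-point problem in Python with SciPy; the residual below merely stands in for the SPH system and is not the MOOSE implementation:

    # Picard fixed-point iteration vs. JFNK on the same toy non-linear system.
    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(x):
        # Toy system: x_i = cos(mean(x)) + 0.1 * x_i**2
        return x - (np.cos(x.mean()) + 0.1 * x**2)

    # Picard iteration: x <- G(x), stop when the update is tiny.
    x = np.zeros(5)
    for _ in range(200):
        x_new = np.cos(x.mean()) + 0.1 * x**2
        if np.max(np.abs(x_new - x)) < 1e-10:
            break
        x = x_new

    # JFNK: solve residual(x) = 0 directly; no explicit Jacobian is assembled.
    x_jfnk = newton_krylov(residual, np.zeros(5), f_tol=1e-10)
    print(np.max(np.abs(x - x_jfnk)))  # both converge to the same fixed point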
Reyes, E Michael; Sharma, Anjali; Thomas, Kate K; Kuehn, Chuck; Morales, José Rafael
2014-09-17
Little information exists on the technical assistance needs of local indigenous organizations charged with managing HIV care and treatment programs funded by the US President's Emergency Plan for AIDS Relief (PEPFAR). This paper describes the methods used to adapt the Primary Care Assessment Tool (PCAT) framework, which has successfully strengthened HIV primary care services in the US, into one that could strengthen the capacity of local partners to deliver priority health programs in resource-constrained settings by identifying their specific technical assistance needs. Qualitative methods and inductive reasoning approaches were used to conceptualize and adapt the new Clinical Assessment for Systems Strengthening (ClASS) framework. Stakeholder interviews, comparisons of existing assessment tools, and a pilot test helped determine the overall ClASS framework for use in low-resource settings. The framework was further refined one year post-ClASS implementation. Stakeholder interviews, assessment of existing tools, a pilot process and the one-year post- implementation assessment informed the adaptation of the ClASS framework for assessing and strengthening technical and managerial capacities of health programs at three levels: international partner, local indigenous partner, and local partner treatment facility. The PCAT focus on organizational strengths and systems strengthening was retained and implemented in the ClASS framework and approach. A modular format was chosen to allow the use of administrative, fiscal and clinical modules in any combination and to insert new modules as needed by programs. The pilot led to refined pre-visit planning, informed review team composition, increased visit duration, and restructured modules. A web-based toolkit was developed to capture three years of experiential learning; this kit can also be used for independent implementation of the ClASS framework. A systematic adaptation process has produced a qualitative framework that can inform implementation strategies in support of country led HIV care and treatment programs. The framework, as a well-received iterative process focused on technical assistance, may have broader utility in other global programs.
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak; Simon, Horst D.
1996-01-01
The computational requirements for an adaptive solution of unsteady problems change as the simulation progresses. This causes workload imbalance among processors on a parallel machine which, in turn, requires significant data movement at runtime. We present a new dynamic load-balancing framework, called JOVE, that balances the workload across all processors with a global view. Whenever the computational mesh is adapted, JOVE is activated to eliminate the load imbalance. JOVE has been implemented on an IBM SP2 distributed-memory machine in MPI for portability. Experimental results for two model meshes demonstrate that mesh adaption with load balancing gives more than a sixfold improvement over one without load balancing. We also show that JOVE gives a 24-fold speedup on 64 processors compared to sequential execution.
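The global-view rebalancing idea can be caricatured with a greedy longest-processing-time assignment in Python; the workloads and processor count below are invented, not JOVE's algorithm:

    # Toy sketch of global load rebalancing after mesh adaption: assign the
    # heaviest cell blocks first, always to the least-loaded processor.
    import heapq

    workloads = [9, 7, 5, 5, 4, 3, 2, 1]           # per-block costs after adaption
    n_proc = 3
    heap = [(0.0, p, []) for p in range(n_proc)]   # (load, processor id, blocks)
    heapq.heapify(heap)
    for w in sorted(workloads, reverse=True):
        load, p, blocks = heapq.heappop(heap)
        heapq.heappush(heap, (load + w, p, blocks + [w]))
    for load, p, blocks in sorted(heap, key=lambda t: t[1]):
        print(f"proc {p}: load={load}, blocks={blocks}")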
Numerical studies of identification in nonlinear distributed parameter systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Lo, C. K.; Reich, Simeon; Rosen, I. G.
1989-01-01
An abstract approximation framework and convergence theory for the identification of first and second order nonlinear distributed parameter systems developed previously by the authors and reported on in detail elsewhere are summarized and discussed. The theory is based upon results for systems whose dynamics can be described by monotone operators in Hilbert space and an abstract approximation theorem for the resulting nonlinear evolution system. The application of the theory together with numerical evidence demonstrating the feasibility of the general approach are discussed in the context of the identification of a first order quasi-linear parabolic model for one dimensional heat conduction/mass transport and the identification of a nonlinear dissipation mechanism (i.e., damping) in a second order one dimensional wave equation. Computational and implementational considerations, in particular, with regard to supercomputing, are addressed.
Layer 1 VPN services in distributed next-generation SONET/SDH networks with inverse multiplexing
NASA Astrophysics Data System (ADS)
Ghani, N.; Muthalaly, M. V.; Benhaddou, D.; Alanqar, W.
2006-05-01
Advances in next-generation SONET/SDH along with GMPLS control architectures have enabled many new service provisioning capabilities. In particular, a key services paradigm is the emergent Layer 1 virtual private network (L1 VPN) framework, which allows multiple clients to utilize a common physical infrastructure and provision their own 'virtualized' circuit-switched networks. This precludes expensive infrastructure builds and increases resource utilization for carriers. Along these lines, a novel L1 VPN services resource management scheme for next-generation SONET/SDH networks is proposed that fully leverages advanced virtual concatenation and inverse multiplexing features. Additionally, both centralized and distributed GMPLS-based implementations are also tabled to support the proposed L1 VPN services model. Detailed performance analysis results are presented along with avenues for future research.
ERIC Educational Resources Information Center
Anwar-McHenry, Julia; Donovan, Robert John; Nicholas, Amberlee; Kerrigan, Simone; Francas, Stephanie; Phan, Tina
2016-01-01
Purpose: Mentally Healthy WA developed and implemented the Mentally Healthy Schools Framework in 2010 in response to demand from schools wanting to promote the community-based Act-Belong-Commit mental health promotion message within a school setting. Schools are an important setting for mental health promotion, therefore, the Framework encourages…
2014-09-18
Conceptual Modeling of a Quantum Key Distribution Simulation Framework Using the Discrete Event System Specification. Dissertation presented by Jeffrey D. Morris to the faculty of the Department of Systems...
Mathematical Frameworks for Diagnostics, Prognostics and Condition Based Maintenance Problems
2008-08-15
Report on Mathematical Frameworks for Diagnostics, Prognostics and Condition Based Maintenance Problems (W911NF-05-1-0426). Approved for public release; distribution unlimited. Mathematical frameworks for diagnostics, prognostics, and condition-based maintenance problems in a parallel and distributed computing environment were researched. In support of the Condition Based Maintenance (CBM) philosophy, a theoretical framework...
Deist, Timo M; Jochems, A; van Soest, Johan; Nalbantov, Georgi; Oberije, Cary; Walsh, Seán; Eble, Michael; Bulens, Paul; Coucke, Philippe; Dries, Wim; Dekker, Andre; Lambin, Philippe
2017-06-01
Machine learning applications for personalized medicine are highly dependent on access to sufficient data. For personalized radiation oncology, datasets representing the variation in the entire cancer patient population need to be acquired and used to learn prediction models. Ethical and legal boundaries to ensure data privacy hamper collaboration between research institutes. We hypothesize that data sharing is possible without identifiable patient data leaving the radiation clinics and that building machine learning applications on distributed datasets is feasible. We developed and implemented an IT infrastructure in five radiation clinics across three countries (Belgium, Germany, and The Netherlands). We present here a proof-of-principle for future 'big data' infrastructures and distributed learning studies. Lung cancer patient data was collected in all five locations and stored in local databases. Exemplary support vector machine (SVM) models were learned using the Alternating Direction Method of Multipliers (ADMM) from the distributed databases to predict post-radiotherapy dyspnea grade [Formula: see text]. The discriminative performance was assessed by the area under the curve (AUC) in a five-fold cross-validation (learning on four sites and validating on the fifth). The performance of the distributed learning algorithm was compared to centralized learning where datasets of all institutes are jointly analyzed. The euroCAT infrastructure has been successfully implemented in five radiation clinics across three countries. SVM models can be learned on data distributed over all five clinics. Furthermore, the infrastructure provides a general framework to execute learning algorithms on distributed data. The ongoing expansion of the euroCAT network will facilitate machine learning in radiation oncology. The resulting access to larger datasets with sufficient variation will pave the way for generalizable prediction models and personalized medicine.
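The consensus style of distributed learning described above can be sketched as follows; for brevity the local solver is ridge regression, a stand-in for the paper's ADMM-trained SVM, and the simulated "sites" are illustrative only. Raw data never leave a site; only local weight vectors are exchanged:

    # Consensus ADMM across sites whose raw data stay local (toy data).
    import numpy as np

    rng = np.random.default_rng(1)
    w_true = np.array([1.0, -2.0, 0.5])
    sites = []
    for _ in range(5):                       # five clinics, data stay local
        X = rng.normal(size=(200, 3))
        y = X @ w_true + 0.1 * rng.normal(size=200)
        sites.append((X, y))

    rho, lam = 1.0, 0.1
    z = np.zeros(3)                          # global consensus weights
    u = [np.zeros(3) for _ in sites]         # per-site dual variables

    for _ in range(50):
        # Local update: each site solves (X'X + rho I) w = X'y + rho (z - u_i).
        w = [np.linalg.solve(X.T @ X + rho * np.eye(3),
                             X.T @ y + rho * (z - u_i))
             for (X, y), u_i in zip(sites, u)]
        # Global update: average the site models plus duals, then shrink.
        n = len(sites)
        z = (np.mean(w, axis=0) + np.mean(u, axis=0)) * rho * n / (rho * n + 2 * lam)
        u = [u_i + w_i - z for u_i, w_i in zip(u, w)]

    print(z)   # close to w_true without pooling any raw data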
A 21st Century Collaborative Policy Development and Implementation Approach: A Discourse Analysis
ERIC Educational Resources Information Center
Nyoni, J.
2012-01-01
The article used the Unisa Framework for the implementation of a team approach to curriculum and learning development to explore and analyse the views and experiences of academic lecturers and curriculum and learning development experts on the conceptualisation and development of the framework and its subsequent implementation. I used a…
ERIC Educational Resources Information Center
Urquhart, Robin; Sargeant, Joan; Grunfeld, Eva
2013-01-01
Moving knowledge into practice and the implementation of innovations in health care remain significant challenges. Few researchers adequately address the influence of organizations on the implementation of innovations in health care. The aims of this article are to (1) present 2 conceptual frameworks for understanding the organizational factors…
NASA Astrophysics Data System (ADS)
KIM, J.; Smith, M. B.; Koren, V.; Salas, F.; Cui, Z.; Johnson, D.
2017-12-01
The National Oceanic and Atmospheric Administration (NOAA)-National Weather Service (NWS) developed the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) framework as an initial step towards spatially distributed modeling at River Forecast Centers (RFCs). Recently, the NOAA/NWS worked with the National Center for Atmospheric Research (NCAR) to implement the National Water Model (NWM) for nationally consistent water resources prediction. The NWM is based on the WRF-Hydro framework and is run at a 1 km spatial resolution and 1-hour time step over the contiguous United States (CONUS) and contributing areas in Canada and Mexico. In this study, we compare streamflow simulations from HL-RDHM and WRF-Hydro to observations from 279 USGS stations. For streamflow simulations, HL-RDHM is run on 4 km grids with a temporal resolution of 1 hour for a 5-year period (Water Years 2008-2012), using a priori parameters provided by NOAA-NWS. The WRF-Hydro streamflow simulations for the same time period are extracted from NCAR's 23-year retrospective run of the NWM (version 1.0) over CONUS on 1 km grids. We chose 279 USGS stations, in the domains of six different RFCs, that are relatively unaffected by dams or reservoirs. We use daily average values of simulations and observations for ease of comparison. The main purpose of this research is to evaluate how HL-RDHM and WRF-Hydro perform at USGS gauge stations. We compare daily time series of observations and both simulations, and calculate error values using a variety of error functions. Using these plots and error values, we evaluate the performance of the HL-RDHM and WRF-Hydro models. Our results show a mix of model performance across geographic regions.
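Typical error functions for such a comparison might include RMSE, Nash-Sutcliffe efficiency, and percent bias; the short Python sketch below uses invented daily discharge values, not the study's data:

    # Common streamflow error functions (illustrative values only).
    import numpy as np

    def rmse(sim, obs):
        return np.sqrt(np.mean((sim - obs) ** 2))

    def nse(sim, obs):
        # Nash-Sutcliffe efficiency: 1 is perfect, < 0 is worse than the mean.
        return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(sim, obs):
        return 100 * (sim - obs).sum() / obs.sum()

    obs = np.array([10.0, 12.0, 30.0, 22.0, 15.0])   # daily mean discharge
    sim = np.array([11.0, 10.0, 28.0, 25.0, 14.0])
    print(rmse(sim, obs), nse(sim, obs), pbias(sim, obs))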
Hanson, Rochelle F; Self-Brown, Shannon; Rostad, Whitney L; Jackson, Matthew C
2016-03-01
It is widely recognized that children in the child welfare system are particularly vulnerable to the adverse physical and mental health effects associated with exposure to abuse and neglect, making it imperative to have broad-based availability of evidence-based practices (EBPs) that can prevent child maltreatment and reduce the negative mental health outcomes for youth who are victims. A variety of EBPs exist for reducing child maltreatment risk and addressing the associated negative mental health outcomes, but the reach of these practices is limited. An emerging literature documents factors that can enhance or inhibit the success of EBP implementation in community service agencies, including how the selection of a theory-driven conceptual framework, or model, might facilitate implementation planning by providing guidance for best practices during implementation phases. However, limited research is available to guide decision makers in the selection of implementation frameworks that can boost implementation success for EBPs that focus on preventing child welfare recidivism and serving the mental health needs of maltreated youth. The aims of this conceptual paper are to (1) provide an overview of existing implementation frameworks, beginning with a discussion of definitional issues and the selection criteria for frameworks included in the review; and (2) offer recommendations for practice and policy as applicable for professionals and systems serving victims of child maltreatment and their families.
Policy implementation in practice: the case of national service frameworks in general practice.
Checkland, Kath; Harrison, Stephen
2004-10-01
National Service Frameworks are an integral part of the government's drive to 'modernise' the NHS, intended to standardise both clinical care and the design of the services used to deliver that clinical care. This article uses evidence from qualitative case studies in three general practices to illustrate the difficulties associated with the implementation of such top-down guidelines and models of service. In these studies it was found that, while there had been little explicit activity directed at implementation overall, the National Service Framework for coronary heart disease had in general fared better than that for older people. Gunn's notion of 'perfect implementation' is used to make sense of the findings.
An Application of Artificial Intelligence to the Implementation of Electronic Commerce
NASA Astrophysics Data System (ADS)
Srivastava, Anoop Kumar
In this paper, we present an application of Artificial Intelligence (AI) to the implementation of Electronic Commerce. We provide a framework based on multiple autonomous agents. Our agent-based architecture leads to the flexible design of a spectrum of multiagent systems (MAS) by distributing computation and by providing a unified interface to data and programs. The autonomous agents provide autonomy, simplicity of communication and computation, and a well-developed semantics. The steps of design and implementation are discussed in depth; the structure of the electronic marketplace, an ontology, the agent model, and the interaction patterns between agents are given. We have developed mechanisms for coordination between agents using a language called the Virtual Enterprise Modeling Language (VEML), an integration of Java and the Knowledge Query and Manipulation Language (KQML). VEML gives application programmers the ability to develop different kinds of MAS based on their requirements and applications. We have implemented a multi-agent system based on autonomous agents, called the VE System, and demonstrate the efficacy of our system by discussing experimental results and its salient features.
Zephyr: Open-source Parallel Seismic Waveform Inversion in an Integrated Python-based Framework
NASA Astrophysics Data System (ADS)
Smithyman, B. R.; Pratt, R. G.; Hadden, S. M.
2015-12-01
Seismic Full-Waveform Inversion (FWI) is an advanced method to reconstruct wave properties of materials in the Earth from a series of seismic measurements. These methods have been developed by researchers since the late 1980s, and now see significant interest from the seismic exploration industry. As researchers move towards implementing advanced numerical modelling (e.g., 3D, multi-component, anisotropic and visco-elastic physics), it is desirable to use a modular approach, minimizing the effort of developing a new set of tools for each new numerical problem. SimPEG (http://simpeg.xyz) is an open source project aimed at constructing a general framework to enable geophysical inversion in various domains. In this abstract we describe Zephyr (https://github.com/bsmithyman/zephyr), a coupled research project focused on parallel FWI in the seismic context. The software is built on top of Python, NumPy and IPython, which enables very flexible testing and implementation of new features. Zephyr is an open source project, and is released freely to enable reproducible research. We currently implement a parallel, distributed seismic forward modelling approach that solves the 2.5D (two-and-one-half dimensional) viscoacoustic Helmholtz equation at a range of modelling frequencies, generating forward solutions for a given source behaviour and gradient solutions for a given set of observed data. Solutions are computed in a distributed manner on a set of heterogeneous workers. The researcher's frontend computer may be separated from the worker cluster by a network link, enabling full support for computation on remote clusters from individual workstations or laptops. The present codebase introduces a numerical discretization equivalent to that used by FULLWV, a well-known seismic FWI research codebase, which makes it straightforward to compare results from Zephyr directly with FULLWV. The flexibility introduced by the use of a Python programming environment makes extension of the codebase with new methods much more straightforward, enabling comparison and integration of new efforts with existing results.
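Because each modelling frequency yields an independent Helmholtz system, per-frequency solves can be farmed out to workers. The toy 1-D viscoacoustic solve below, in plain NumPy/SciPy with invented model parameters, illustrates that decomposition; it is not the Zephyr or FULLWV code:

    # One independent Helmholtz solve per frequency (embarrassingly parallel).
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, dx = 200, 5.0
    c = np.full(n, 1500.0); c[n // 2:] = 2500.0     # two-layer velocity model
    src = np.zeros(n, dtype=complex); src[10] = 1.0

    def helmholtz_solve(freq, q=50.0):
        omega = 2 * np.pi * freq
        k2 = (omega / c) ** 2 * (1 + 1j / q)        # attenuation via complex slowness
        lap = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / dx**2
        A = lap + sp.diags(k2)
        return spla.spsolve(A.tocsc(), src)

    # map() stands in for dispatching frequencies to a worker cluster.
    fields = list(map(helmholtz_solve, [3.0, 5.0, 7.0]))
    print(abs(fields[0]).max())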
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Xuehang; Chen, Xingyuan; Ye, Ming
2015-07-01
This study develops a new framework of facies-based data assimilation for characterizing the spatial distribution of hydrofacies and estimating their associated hydraulic properties. This framework couples ensemble data assimilation with a transition probability-based geostatistical model via a parameterization based on a level set function. The nature of ensemble data assimilation makes the framework efficient and flexible enough to be integrated with various types of observation data. The transition probability-based geostatistical model keeps the updated hydrofacies distributions under geological constraints. The framework is illustrated by using a two-dimensional synthetic study that estimates the hydrofacies spatial distribution and the permeability of each hydrofacies from transient head data. Our results show that the proposed framework can characterize hydrofacies distribution and associated permeability with adequate accuracy even with limited direct measurements of hydrofacies. Our study provides a promising starting point for hydrofacies delineation in complex real problems.
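The ensemble update at the core of such a framework can be sketched as a stochastic ensemble Kalman analysis step acting on level-set values; all dimensions, operators and data below are toy assumptions, not the study's configuration:

    # Stochastic EnKF analysis step over an ensemble of level-set fields.
    import numpy as np

    rng = np.random.default_rng(2)
    n_ens, n_par, n_obs = 50, 100, 10
    ens = rng.normal(size=(n_par, n_ens))            # ensemble of level-set values

    H = np.zeros((n_obs, n_par))
    H[np.arange(n_obs), np.arange(0, n_par, 10)] = 1.0   # observe every 10th cell
    obs = rng.normal(size=n_obs)                     # stand-in head observations
    R = 0.1 * np.eye(n_obs)                          # observation error covariance

    A = ens - ens.mean(axis=1, keepdims=True)        # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                        # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    perturbed = obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    ens = ens + K @ (perturbed - H @ ens)            # analysis update

    # Facies follow from the sign of the updated level-set field:
    facies = (ens > 0).astype(int)
    print(facies.mean())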
A trajectory generation framework for modeling spacecraft entry in MDAO
NASA Astrophysics Data System (ADS)
D'Souza, Sarah N.; Sarigul-Klijn, Nesrin
2016-04-01
In this paper a novel trajectory generation framework was developed that optimizes trajectory event conditions for use in a Generalized Entry Guidance algorithm. The framework was developed to be adaptable via the use of high fidelity equations of motion and drag based analytical bank profiles. Within this framework, a novel technique was implemented that resolved the sensitivity of the bank profile to atmospheric non-linearities. The framework's adaptability was established by running two different entry bank conditions. Each case yielded a reference trajectory and set of transition event conditions that are flight feasible and implementable in a Generalized Entry Guidance algorithm.
ClimateSpark: An in-memory distributed computing framework for big climate data analytics
NASA Astrophysics Data System (ADS)
Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei
2018-06-01
The unprecedented growth of climate data creates new opportunities for climate studies, yet big climate data pose a grand challenge to climatologists seeking to manage and analyze them efficiently. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. A chunked data structure improves parallel I/O efficiency, while a spatiotemporal index built over the chunks avoids unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to provide a web portal that facilitates interaction among climatologists, climate data, analytic operations and computing resources (e.g., via SQL queries and Scala/Python notebooks). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multi-dimensional, array-based datasets in various geoscience domains.
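The interaction style described, SQL-flavoured spatiotemporal queries executed on the cluster, might look as follows in PySpark; the schema and values are invented for illustration and are not ClimateSpark's actual data model:

    # Toy Spark SQL query over an array-flattened climate table.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("climate-sketch").getOrCreate()
    rows = [("tas", 2000, 35.0, -80.0, 288.1),
            ("tas", 2000, 36.0, -80.0, 287.9),
            ("tas", 2001, 35.0, -80.0, 288.6)]
    df = spark.createDataFrame(rows, ["variable", "year", "lat", "lon", "value"])
    df.createOrReplaceTempView("climate")

    # Spatiotemporal aggregation pushed down to the cluster:
    spark.sql("""SELECT year, AVG(value) AS mean_tas
                 FROM climate WHERE lat BETWEEN 30 AND 40
                 GROUP BY year ORDER BY year""").show()
    spark.stop()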
Legacies of Lead in Charm City’s Soil: Lessons from the Baltimore Ecosystem Study
Schwarz, Kirsten; Pouyat, Richard V.; Yesilonis, Ian
2016-01-01
Understanding the spatial distribution of soil lead has been a focus of the Baltimore Ecosystem Study since its inception in 1997. Through multiple research projects that span spatial scales and use different methodologies, three overarching patterns have been identified: (1) soil lead concentrations often exceed state and federal regulatory limits; (2) the variability of soil lead concentrations is high; and (3) despite multiple sources and the highly heterogeneous and patchy nature of soil lead, discernable patterns do exist. Specifically, housing age, the distance to built structures, and the distance to a major roadway are strong predictors of soil lead concentrations. Understanding what drives the spatial distribution of soil lead can inform the transition of underutilized urban space into gardens and other desirable land uses while protecting human health. A framework for management is proposed that considers three factors: (1) the level of contamination; (2) the desired land use; and (3) the community’s preference in implementing the desired land use. The goal of the framework is to promote dialogue and resultant policy changes that support consistent and clear regulatory guidelines for soil lead, without which urban communities will continue to be subject to the potential for lead exposure. PMID:26861371
Pereira, José N; Silva, Porfírio; Lima, Pedro U; Martinoli, Alcherio
2014-01-01
The work described is part of a long-term program of introducing institutional robotics, a novel framework for the coordination of robot teams that stems from institutional economics concepts. Under the framework, institutions are cumulative sets of persistent artificial modifications made to the environment or to the internal mechanisms of a subset of agents, thought to be functional for the collective order. In this article we introduce a formal model of institutional controllers based on Petri nets. We define executable Petri nets, an extension of Petri nets that takes into account robot actions and sensing, to design, program, and execute institutional controllers. We use a generalized stochastic Petri net view of the robot team controlled by the institutional controllers to model and analyze the stochastic performance of the resulting distributed robotic system. The ability of our formalism to replicate results obtained using other approaches is assessed through realistic simulations of up to 40 e-puck robots. In particular, we model a robot swarm and its institutional controller with the goal of maintaining wireless connectivity, and successfully compare our model predictions and simulation results with previously reported results obtained by using finite state automaton models and controllers.
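A minimal executable-Petri-net interpreter can convey the flavour of such controllers: places hold tokens, an enabled transition fires, and firing invokes a robot action hook. The net structure and actions below are invented, not the authors' connectivity-maintenance controller:

    # Tiny executable Petri net: marking, enabled transitions, action callbacks.
    marking = {"idle": 1, "connected": 0}
    transitions = [
        {"name": "link_up", "pre": ["idle"], "post": ["connected"],
         "action": lambda: print("enable wireless relay")},
        {"name": "link_down", "pre": ["connected"], "post": ["idle"],
         "action": lambda: print("search for neighbors")},
    ]

    def step():
        for t in transitions:
            if all(marking[p] > 0 for p in t["pre"]):   # transition is enabled
                for p in t["pre"]:
                    marking[p] -= 1                      # consume input tokens
                for p in t["post"]:
                    marking[p] += 1                      # produce output tokens
                t["action"]()                            # robot sensing/acting hook
                return t["name"]
        return None

    for _ in range(4):
        print(step(), marking)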
Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Nikbay, Melike; Heeg, Jennifer
2017-01-01
This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel of the NATO Science and Technology Organization, Task Group AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and to represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
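A tiny non-intrusive Polynomial Chaos example shows the mechanics of the reduced order modeling mentioned above: fit Hermite polynomials of a standard-normal input to model samples and read statistics off the coefficients. The "model" here is a toy function, not an aeroelastic solver:

    # Non-intrusive PCE with probabilists' Hermite polynomials (toy model).
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander
    from math import factorial

    rng = np.random.default_rng(4)
    xi = rng.normal(size=500)               # uncertain input, standardized
    y = np.exp(0.3 * xi) + 0.1 * xi**2      # expensive model stands in here

    V = hermevander(xi, 4)                  # He_0..He_4 basis evaluated at xi
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)

    # He_k are orthogonal under N(0,1) with E[He_k^2] = k!, so:
    mean = coef[0]
    var = sum(factorial(k) * coef[k] ** 2 for k in range(1, 5))
    print(mean, var, y.mean(), y.var())     # PCE statistics vs. sample statistics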
Giorgi, Ana Paula; Rovzar, Corey; Davis, Kelsey S.; Fuller, Trevon; Buermann, Wolfgang; Saatchi, Sassan; Smith, Thomas B.; Silveira, Luis Fabio; Gillespie, Thomas W.
2017-01-01
Historic rates of habitat change and growing exploitation of natural resources threaten avian biodiversity in the Brazilian Atlantic Forest, a global biodiversity hotspot. We implemented a two-stage framework for conservation planning in the Atlantic Forest. First, we used ecological niche modeling to predict the distributions of 23 endemic bird species using 19 climatic metrics and 12 spectral and radar remote sensing metrics. Second, we utilized the principle of complementarity to prioritize new sites to augment the Atlantic Forest's existing reserves. The best predictors of bird distributions were precipitation metrics (the seasonality of rainfall) and radar remote sensing metrics (QSCAT). The existing protected areas do not include 10% of the habitat of each of the 23 endemic species. We propose a more economical set of protected areas by reducing the extent to which new sites duplicate the biodiversity content of existing protected areas. There is a high concordance between the proposed conservation areas that we designed using computerized algorithms and the Important Bird Areas prioritized by BirdLife International. Insofar as deforestation in the Atlantic Forest is similar to land conversion in other biodiversity hotspots, our methodology is applicable to conservation efforts elsewhere in the world. PMID:28210009
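The principle of complementarity can be illustrated with a greedy set-cover sketch in Python: repeatedly add the site that protects the most species not yet covered. The site-by-species table is a toy stand-in for the modeled distributions:

    # Greedy complementarity-based site selection (toy data).
    species_by_site = {
        "A": {"sp1", "sp2", "sp3"},
        "B": {"sp3", "sp4"},
        "C": {"sp1", "sp5", "sp6"},
        "D": {"sp2", "sp4", "sp6"},
    }
    target = set().union(*species_by_site.values())
    covered, chosen = set(), []
    while covered < target:   # proper-subset test: loop until all species covered
        # Pick the site adding the most species not yet protected.
        site = max(species_by_site, key=lambda s: len(species_by_site[s] - covered))
        chosen.append(site)
        covered |= species_by_site.pop(site)
    print(chosen)   # ['A', 'C', 'B'] covers all six species here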
Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic enough to support various types of data and visualization applications and, at the same time, work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations, usage scenarios and scalability results.
NASA Astrophysics Data System (ADS)
Chaney, N.; Wood, E. F.
2014-12-01
The increasing accessibility of high-resolution land data (< 100 m) and high performance computing allows improved parameterizations of subgrid hydrologic processes in macroscale land surface models. Continental-scale fully distributed modeling at these spatial scales is possible; however, its practicality for operational use is still unknown due to uncertainties in input data, model parameters, and storage requirements. To address these concerns, we propose a modeling framework that provides the spatial detail of a fully distributed model yet maintains the benefits of a semi-distributed model. In this presentation we will introduce DTOPLATS-MP, a coupling between the NOAH-MP land surface model and the Dynamic TOPMODEL hydrologic model. This new model captures a catchment's spatial heterogeneity by clustering high-resolution land datasets (soil, topography, and land cover) into hundreds of hydrologically similar units (HSUs); a prior DEM analysis defines the connections between the HSUs. At each time step, the 1D land surface model updates each HSU; the HSUs then interact laterally via the subsurface and surface. When compared to the fully distributed form of the model, this framework allows a significant decrease in computation and storage while providing most of the same information and enabling parameter transferability. As a proof of concept, we will show how this new modeling framework can be run over CONUS at a 30-meter spatial resolution. For each catchment in the WBD HUC-12 dataset, the model is run between 2002 and 2012 using available high-resolution continental-scale land and meteorological datasets over CONUS (dSSURGO, NLCD, NED, and NCEP Stage IV), with 1000 model parameter sets obtained from a Latin hypercube sample. This exercise will illustrate the feasibility of running the model operationally at continental scales while accounting for model parameter uncertainty.
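The clustering step that produces HSUs can be sketched with off-the-shelf k-means over per-cell attributes; the random attribute raster below stands in for real inputs such as dSSURGO, NED and NLCD:

    # Cluster grid cells with similar soil/topography/land-cover attributes
    # into hydrologically similar units (HSUs); attributes here are random.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    n_cells = 10_000
    attrs = np.column_stack([
        rng.normal(size=n_cells),      # e.g. slope
        rng.normal(size=n_cells),      # e.g. topographic wetness index
        rng.integers(0, 5, n_cells),   # e.g. land-cover class code
    ]).astype(float)

    hsu_id = KMeans(n_clusters=100, n_init=10, random_state=0).fit_predict(attrs)
    # The land surface model is then run once per HSU instead of once per cell.
    print(np.bincount(hsu_id)[:10])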
Lawson, Beverley; Sampalli, Tara; Wood, Stephanie; Warner, Grace; Moorhouse, Paige; Gibson, Rick; Mallery, Laurie; Burge, Fred; Bedford, Lisa G
2017-03-07
Understanding and addressing the needs of frail persons is an emerging health priority for Nova Scotia and internationally. Primary healthcare (PHC) providers regularly encounter frail persons in their daily clinical work. However, routine identification and measurement of frailty is not standard practice and, in general, there is a lack of awareness about how to identify and respond to frailty. A web-based tool called the Frailty Portal was developed to aid in identifying, screening, and providing care for frail patients in PHC settings. In this study, we will assess the implementation feasibility and impact of the Frailty Portal to: (1) support increased awareness of frailty among providers and patients, (2) identify the degree of frailty within individual patients, and (3) develop and deliver actions to respond to frailty in community PHC practice. This study will use a convergent mixed methods design in which quantitative and qualitative data are collected concurrently over a 9-month period, analyzed separately, and then merged to summarize, interpret and produce a more comprehensive understanding of the initiative's feasibility and scalability. Methods will be informed by the 'Implementing the Frailty Portal in Community Primary Care Practice' logic model, and questions will be guided by domains and constructs from an implementation science framework, the Consolidated Framework for Implementation Research (CFIR). The Frailty Portal aims to improve access to, and coordination of, primary care services for persons experiencing frailty. It also aims to increase primary care providers' ability to care for patients in the context of their frailty. Our goal is to help optimize care in the community by helping community providers gain the knowledge they may lack about frailty, both in general and in their practice, support improved identification of frailty with the use of screening tools, offer evidence-based severity-specific care goals, and connect providers with locally available community supports.
Rycroft-Malone, Jo; Seers, Kate; Chandler, Jackie; Hawkes, Claire A; Crichton, Nicola; Allen, Claire; Bullock, Ian; Strunin, Leo
2013-03-09
The case has been made for more and better theory-informed process evaluations within trials in an effort to facilitate insightful understandings of how interventions work. In this paper, we provide an explanation of implementation processes from one of the first national implementation research randomized controlled trials with embedded process evaluation conducted within acute care, and a proposed extension to the Promoting Action on Research Implementation in Health Services (PARIHS) framework. The PARIHS framework was prospectively applied to guide decisions about intervention design, data collection, and analysis processes in a trial focussed on reducing peri-operative fasting times. In order to capture a holistic picture of implementation processes, the same data were collected across 19 participating hospitals irrespective of allocation to intervention. This paper reports on findings from data collected from a purposive sample of 151 staff and patients pre- and post-intervention. Data were analysed using content analysis within, and then across, data sets. A robust and uncontested evidence base was a necessary, but not sufficient, condition for practice change, in that individual staff and patient responses such as caution influenced decision making. The implementation context was challenging: individuals and teams were bounded by professional issues, communication challenges, power, and a lack of clarity over the authority and responsibility for practice change. Progress was made in sites where processes were aligned with existing initiatives. Additionally, facilitators reported engaging in many intervention implementation activities, some of which resulted in practice changes but not in significant improvements to outcomes. This study provided an opportunity for reflection on the comprehensiveness of the PARIHS framework. Consistent with the underlying tenet of PARIHS, a multi-faceted and dynamic story of implementation was evident. However, the prominent role that individuals played as part of the interaction between evidence and context is not currently explicit within the framework. We propose that successful implementation of evidence into practice is a planned facilitated process involving an interplay between individuals, evidence, and context to promote evidence-informed practice. This proposal will enhance the potential of the PARIHS framework for explanation, and ensure theoretical development both informs and responds to the evidence base for implementation.
A conceptual approach to approximate tree root architecture in infinite slope models
NASA Astrophysics Data System (ADS)
Schmaltz, Elmar; Glade, Thomas
2016-04-01
Vegetation-related properties - particularly tree root distribution and the coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear difficult to reproduce reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main aspects of the tree root architecture: 1) the type of rooting; 2) the maximum growing distance from the tree stem (radius r); 3) the maximum growing depth (height h); and 4) the potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether root systems and the connected hydrological and mechanical attributes can be implemented sufficiently in a 3-dimensional slope stability model, where a spatio-dynamic vegetation module must cope with the demands of performance, computation time and significance. In this presentation, however, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that depends on age and species. We classified three main types of tree root systems and reproduced the species- and age-related root distribution with respective mathematical solids in a synthetic 3-dimensional hillslope environment. Two solids in a Euclidean space suffice to represent the three root systems: i) cylinders with radius r and height h, where the dimensions of the cylinder define the shape of a taproot system or a shallow-root system, respectively; ii) elliptic paraboloids, which represent a cordate root system with radius r, height h and a constant, species-independent curvature. This procedure simplifies the classification of tree species into the three defined geometric solids. In this study we introduce a conceptual approach to estimate the 2- and 3-dimensional distribution of different tree root systems, and implement it in a raster environment, as used in infinite slope models. To this end we used the PCRaster extension within a Python framework. The results show that root distribution and root growth are spatially reproducible in a simple raster framework. The outputs exhibit significant effects for a synthetically generated slope at local scale for equal time steps. These preliminary results are an initial step towards a vegetation module that can be coupled with hydro-mechanical slope stability models. This approach is expected to yield a valuable contribution to the implementation of vegetation-related properties, in particular the effects of root reinforcement, into physically-based approaches using infinite slope models.
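The two geometric solids can be evaluated directly on a raster: root depth at grid distance d from the stem is h inside a cylinder of radius r, and h(1 - (d/r)^2) for an elliptic paraboloid. The following NumPy sketch uses invented stem coordinates and dimensions, not the study's PCRaster code:

    # Root-depth rasters for the cylinder and elliptic-paraboloid solids.
    import numpy as np

    r, h = 5.0, 2.0                            # species/age-dependent radius, depth
    yy, xx = np.mgrid[0:50, 0:50] * 0.5        # 0.5 m raster cells
    stem = (12.0, 12.0)                        # stem position (toy value)
    d = np.hypot(xx - stem[0], yy - stem[1])   # distance of each cell to the stem

    cylinder = np.where(d <= r, h, 0.0)                          # taproot/shallow systems
    paraboloid = np.where(d <= r, h * (1 - (d / r) ** 2), 0.0)   # cordate system
    print(cylinder.max(), paraboloid[24, 24])  # full depth at the stem cell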
Weiss, Curtis H.; Krishnan, Jerry A.; Au, David H.; Bender, Bruce G.; Carson, Shannon S.; Cattamanchi, Adithya; Cloutier, Michelle M.; Cooke, Colin R.; Erickson, Karen; George, Maureen; Gerald, Joe K.; Gerald, Lynn B.; Goss, Christopher H.; Gould, Michael K.; Hyzy, Robert; Kahn, Jeremy M.; Mittman, Brian S.; Mosesón, Erika M.; Mularski, Richard A.; Parthasarathy, Sairam; Patel, Sanjay R.; Rand, Cynthia S.; Redeker, Nancy S.; Reiss, Theodore F.; Riekert, Kristin A.; Rubenfeld, Gordon D.; Tate, Judith A.; Wilson, Kevin C.; Thomson, Carey C.
2016-01-01
Background: Many advances in health care fail to reach patients. Implementation science is the study of novel approaches to mitigate this evidence-to-practice gap. Methods: The American Thoracic Society (ATS) created a multidisciplinary ad hoc committee to develop a research statement on implementation science in pulmonary, critical care, and sleep medicine. The committee used an iterative consensus process to define implementation science and review the use of conceptual frameworks to guide implementation science for the pulmonary, critical care, and sleep community and to explore how professional medical societies such as the ATS can promote implementation science. Results: The committee defined implementation science as the study of the mechanisms by which effective health care interventions are either adopted or not adopted in clinical and community settings. The committee also distinguished implementation science from the act of implementation. Ideally, implementation science should include early and continuous stakeholder involvement and the use of conceptual frameworks (i.e., models to systematize the conduct of studies and standardize the communication of findings). Multiple conceptual frameworks are available, and we suggest the selection of one or more frameworks on the basis of the specific research question and setting. Professional medical societies such as the ATS can have an important role in promoting implementation science. Recommendations for professional societies to consider include: unifying implementation science activities through a single organizational structure, linking front-line clinicians with implementation scientists, seeking collaborations to prioritize and conduct implementation science studies, supporting implementation science projects through funding opportunities, working with research funding bodies to set the research agenda in the field, collaborating with external bodies responsible for health care delivery, disseminating results of implementation science through scientific journals and conferences, and teaching the next generation about implementation science through courses and other media. Conclusions: Implementation science plays an increasingly important role in health care. Through support of implementation science, the ATS and other professional medical societies can work with other stakeholders to lead this effort. PMID:27739895
Models and Frameworks: A Synergistic Association for Developing Component-Based Applications
Alonso, Diego; Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A.; Álvarez, Bárbara
2014-01-01
The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use both an existing framework and an ad hoc framework, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development. PMID:25147858
Evolution of a multilevel framework for health program evaluation.
Masso, Malcolm; Quinsey, Karen; Fildes, Dave
2017-07-01
A well-conceived evaluation framework increases understanding of a program's goals and objectives, facilitates the identification of outcomes and can be used as a planning tool during program development. Herein we describe the origins and development of an evaluation framework that recognises that implementation is influenced by the setting in which it takes place, the individuals involved and the processes by which implementation is accomplished. The framework includes an evaluation hierarchy that focuses on outcomes for consumers, providers and the care delivery system, and is structured according to six domains: program delivery, impact, sustainability, capacity building, generalisability and dissemination. These components of the evaluation framework fit into a matrix structure, and cells within the matrix are supported by relevant evaluation tools. The development of the framework has been influenced by feedback from various stakeholders, existing knowledge of the evaluators and the literature on health promotion and implementation science. Over the years, the framework has matured and is generic enough to be useful in a wide variety of circumstances, yet specific enough to focus data collection, data analysis and the presentation of findings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veeramany, Arun; Coles, Garill A.; Unwin, Stephen D.
2017-08-25
The Pacific Northwest National Laboratory developed a risk framework for modeling high-impact, low-frequency power grid events to support risk-informed decisions. In this paper, we briefly recap the framework and demonstrate its implementation for seismic and geomagnetic hazards using a benchmark reliability test system. We describe integration of a collection of models implemented to perform hazard analysis, fragility evaluation, consequence estimation, and postevent restoration. We demonstrate the value of the framework as a multihazard power grid risk assessment and management tool. As a result, the research will benefit transmission planners and emergency planners by improving their ability to maintain a resilient grid infrastructure against impacts from major events.
BioNet Digital Communications Framework
NASA Technical Reports Server (NTRS)
Gifford, Kevin; Kuzminsky, Sebastian; Williams, Shea
2010-01-01
BioNet v2 is a peer-to-peer middleware that enables digital communication devices to talk to each other. It provides a software development framework, standardized application, network-transparent device integration services, a flexible messaging model, and network communications for distributed applications. BioNet is an implementation of the Constellation Program Command, Control, Communications and Information (C3I) Interoperability specification, given in CxP 70022-01. The system architecture provides the necessary infrastructure for the integration of heterogeneous wired and wireless sensing and control devices into a unified data system with a standardized application interface, providing plug-and-play operation for hardware and software systems. BioNet v2 features a naming schema for mobility and coarse-grained localization information, data normalization within a network-transparent device driver framework, support for network communications to non-IP devices, and fine-grained application control of data-subscription bandwidth usage. BioNet directly integrates Disruption Tolerant Networking (DTN) as a communications technology, enabling networked communications with assets that are only intermittently connected, including orbiting relay satellites and planetary rover vehicles.
A hybrid parallel framework for the cellular Potts model simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Yi; He, Kejing; Dong, Shoubin
2009-01-01
The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximate, and cannot be used for large-scale, complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving, and SMP systems are increasingly common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for large-scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).
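As a rough illustration of the hybrid pattern described above, the Python sketch below (assuming mpi4py and a working MPI installation) uses MPI to distribute subdomain work across ranks and a thread pool standing in for the shared-memory OpenMP sweep; all names and sizes are hypothetical, and it is not the authors' CPM code.

    # Hybrid-parallel sketch: MPI distributes per-subdomain work across
    # ranks, while a thread pool mimics the shared-memory lattice sweep.
    from concurrent.futures import ThreadPoolExecutor
    import random

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    LATTICE = 64  # per-rank subdomain size (toy value)

    def solve_pde_subdomain(seed):
        # Stand-in for the expensive chemical-field (PDE) solve on this rank.
        random.seed(seed)
        return [random.random() for _ in range(LATTICE)]

    def mc_sweep(chunk):
        # Stand-in for one Monte Carlo lattice-update sweep over a slice.
        return sum(1 for v in chunk if v > 0.5)

    field = solve_pde_subdomain(seed=rank)            # distributed via MPI
    chunks = [field[i::4] for i in range(4)]          # shared-memory split
    with ThreadPoolExecutor() as pool:
        flips = sum(pool.map(mc_sweep, chunks))
    total = comm.reduce(flips, op=MPI.SUM, root=0)    # combine across ranks
    if rank == 0:
        print("accepted flips across all subdomains:", total)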
Heartbeat-based error diagnosis framework for distributed embedded systems
NASA Astrophysics Data System (ADS)
Mishra, Swagat; Khilar, Pabitra Mohan
2012-01-01
Distributed embedded systems have significant applications in the automobile industry, such as steer-by-wire, fly-by-wire, and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real-time system. We use heartbeat monitoring, checkpointing, and model-based redundancy to design a scalable framework that takes care of task scheduling, temperature control, and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosing and shutting down faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message-passing system.
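A minimal Python sketch of the heartbeat-monitoring idea, with hypothetical node names; the actual framework also covers scheduling, temperature control, and model-based redundancy.

    # Sketch of heartbeat-based fault detection: a node is suspected faulty
    # if its last heartbeat is older than a timeout.
    import time

    TIMEOUT = 0.5  # seconds without a heartbeat before a node is suspected

    class HeartbeatMonitor:
        def __init__(self, nodes):
            now = time.monotonic()
            self.last_seen = {node: now for node in nodes}

        def beat(self, node):
            self.last_seen[node] = time.monotonic()

        def faulty_nodes(self):
            now = time.monotonic()
            return [n for n, t in self.last_seen.items() if now - t > TIMEOUT]

    monitor = HeartbeatMonitor(["steer", "brake", "throttle"])
    time.sleep(0.6)          # "brake" and "throttle" miss their heartbeats
    monitor.beat("steer")
    print(monitor.faulty_nodes())  # ['brake', 'throttle'] -> shut down actuators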
Shea, Christopher M; Young, Tiffany L; Powell, Byron J; Rohweder, Catherine; Enga, Zoe K; Scott, Jennifer E; Carter-Edwards, Lori; Corbie-Smith, Giselle
2017-09-01
Participating in community-engaged dissemination and implementation (CEDI) research is challenging for a variety of reasons. Currently, there is not specific guidance or a tool available for researchers to assess their readiness to conduct CEDI research. We propose a conceptual framework that identifies detailed competencies for researchers participating in CEDI and maps these competencies to domains. The framework is a necessary step toward developing a CEDI research readiness survey that measures a researcher's attitudes, willingness, and self-reported ability for acquiring the knowledge and performing the behaviors necessary for effective community engagement. The conceptual framework for CEDI competencies was developed by a team of eight faculty and staff affiliated with a university's Clinical and Translational Science Award (CTSA). The authors developed CEDI competencies by identifying the attitudes, knowledge, and behaviors necessary for carrying out commonly accepted CE principles. After collectively developing an initial list of competencies, team members individually mapped each competency to a single domain that provided the best fit. Following the individual mapping, the group held two sessions in which the sorting preferences were shared and discrepancies were discussed until consensus was reached. During this discussion, modifications to wording of competencies and domains were made as needed. The team then engaged five community stakeholders to review and modify the competencies and domains. The CEDI framework consists of 40 competencies organized into nine domains: perceived value of CE in D&I research, introspection and openness, knowledge of community characteristics, appreciation for stakeholder's experience with and attitudes toward research, preparing the partnership for collaborative decision-making, collaborative planning for the research design and goals, communication effectiveness, equitable distribution of resources and credit, and sustaining the partnership. Delineation of CEDI competencies advances the broader CE principles and D&I research goals found in the literature and facilitates development of readiness assessments tied to specific training resources for researchers interested in conducting CEDI research.
Hammer, Monica; Balfors, Berit; Mörtberg, Ulla; Petersson, Mona; Quin, Andrew
2011-03-01
In this article, focusing on the ongoing implementation of the EU Water Framework Directive, we analyze some of the opportunities and challenges for a sustainable governance of water resources from an ecosystem management perspective. In the face of uncertainty and change, the ecosystem approach as a holistic and integrated management framework is increasingly recognized. The ongoing implementation of the Water Framework Directive (WFD) could be viewed as a reorganization phase in the process of change in institutional arrangements and ecosystems. In this case study from the Northern Baltic Sea River Basin District, Sweden, we focus in particular on data and information management from a multi-level governance perspective from the local stakeholder to the River Basin level. We apply a document analysis, hydrological mapping, and GIS models to analyze some of the institutional framework created for the implementation of the WFD. The study underlines the importance of institutional arrangements that can handle variability of local situations and trade-offs between solutions and priorities on different hierarchical levels.
Places to Intervene to Make Complex Food Systems More Healthy, Green, Fair, and Affordable
Malhi, Luvdeep; Karanfil, Özge; Merth, Tommy; Acheson, Molly; Palmer, Amanda; Finegood, Diane T.
2009-01-01
A Food Systems and Public Health conference was convened in April 2009 to consider research supporting food systems that are healthy, green, fair, and affordable. We used a complex systems framework to examine the contents of background material provided to conference participants. Application of our intervention-level framework (paradigm, goals, system structure, feedback and delays, structural elements) enabled comparison of the conference themes of healthy, green, fair, and affordable. At the level of system structure suggested actions to achieve these goals are fairly compatible, including broad public discussion and implementation of policies and programs that support sustainable food production and distribution. At the level of paradigm and goals, the challenge of making healthy and green food affordable becomes apparent as some actions may be in conflict. Systems thinking can provide insight into the challenges and opportunities to act to make the food supply more healthy, green, fair, and affordable. PMID:23173029
NASA Technical Reports Server (NTRS)
Gorski, K. M.; Hivon, Eric; Banday, A. J.; Wandelt, Benjamin D.; Hansen, Frode K.; Reinecke, Martin; Bartelmann, Matthias
2005-01-01
HEALPix, the Hierarchical Equal Area isoLatitude Pixelization, is a versatile structure for the pixelization of data on the sphere. An associated library of computational algorithms and visualization software supports fast scientific applications executable directly on discretized spherical maps generated from very large volumes of astronomical data. Originally developed to address the data processing and analysis needs of the present generation of cosmic microwave background experiments (e.g., BOOMERANG, WMAP), HEALPix can be expanded to meet many of the profound challenges that will arise in confrontation with the observational output of future missions and experiments, including, e.g., Planck, Herschel, SAFIR, and the Beyond Einstein inflation probe. In this paper we consider the requirements and implementation constraints on a framework that simultaneously enables an efficient discretization with associated hierarchical indexation and fast analysis/synthesis of functions defined on the sphere. We demonstrate how these are explicitly satisfied by HEALPix.
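For readers who want to experiment, healpy is the common Python binding for HEALPix; a minimal sketch, assuming healpy and numpy are installed:

    # HEALPix pixelization via healpy: resolution is set by nside, with
    # 12 * nside**2 equal-area pixels covering the sphere.
    import numpy as np
    import healpy as hp

    nside = 64
    npix = hp.nside2npix(nside)            # 12 * 64**2 = 49152 pixels
    print("pixels:", npix)

    # Map a sky direction (colatitude theta, longitude phi, radians)
    # to its pixel index.
    theta, phi = np.radians(90.0), np.radians(45.0)
    pix = hp.ang2pix(nside, theta, phi)
    print("pixel index:", pix)

    # Coarsening by one level (nside -> nside // 2) merges 4 child pixels
    # into their parent, which is what makes the indexation hierarchical.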
Programmable multi-node quantum network design and simulation
NASA Astrophysics Data System (ADS)
Dasari, Venkat R.; Sadlier, Ronald J.; Prout, Ryan; Williams, Brian P.; Humble, Travis S.
2016-05-01
Software-defined networking offers a device-agnostic programmable framework to encode new network functions. Externally centralized control plane intelligence allows programmers to write network applications and to build functional network designs. OpenFlow is a key protocol widely adopted to build programmable networks because of its programmability, flexibility and ability to interconnect heterogeneous network devices. We simulate the functional topology of a multi-node quantum network that uses programmable network principles to manage quantum metadata for protocols such as teleportation, superdense coding, and quantum key distribution. We first show how the OpenFlow protocol can manage the quantum metadata needed to control the quantum channel. We then use numerical simulation to demonstrate robust programmability of a quantum switch via the OpenFlow network controller while executing an application of superdense coding. We describe the software framework implemented to carry out these simulations and we discuss near-term efforts to realize these applications.
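The match-action logic that OpenFlow generalizes can be sketched in a few lines of plain Python; the rule set below is a hypothetical illustration of steering quantum-protocol metadata, not the authors' controller code or the OpenFlow API.

    # Illustrative match-action flow table: the first matching rule wins,
    # mimicking how a controller could steer quantum metadata (key bits,
    # teleportation corrections) alongside classical traffic.
    flow_table = [
        # (match predicate, action)
        (lambda pkt: pkt.get("proto") == "qkd",      "forward:port2"),
        (lambda pkt: pkt.get("proto") == "teleport", "forward:port3"),
        (lambda pkt: True,                           "forward:port1"),  # default
    ]

    def switch(pkt):
        for match, action in flow_table:
            if match(pkt):
                return action

    print(switch({"proto": "qkd", "key_bits": "0110"}))   # forward:port2
    print(switch({"proto": "http"}))                      # forward:port1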
Global Optimization of Emergency Evacuation Assignments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Lee; Yuan, Fang; Chin, Shih-Miao
2006-01-01
Conventional emergency evacuation plans often assign evacuees to fixed routes or destinations based mainly on geographic proximity. Such approaches can be inefficient if the roads are congested, blocked, or otherwise dangerous because of the emergency. By not constraining evacuees to prespecified destinations, a one-destination evacuation approach provides flexibility in the optimization process. We present a framework for the simultaneous optimization of evacuation-traffic distribution and assignment. Based on the one-destination evacuation concept, we can obtain the optimal destination and route assignment by solving a one-destination traffic-assignment problem on a modified network representation. In a county-wide, large-scale evacuation case study, the one-destination model yields substantial improvement over the conventional approach, with the overall evacuation time reduced by more than 60 percent. More importantly, emergency planners can easily implement this framework by instructing evacuees to go to destinations that the one-destination optimization process selects.
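The one-destination trick amounts to adding a zero-cost virtual sink behind all real destinations; a toy Python sketch with networkx (node names and travel times hypothetical, and congestion effects omitted):

    # One-destination assignment: connect every real shelter to a virtual
    # super-destination with zero cost, then route each origin by least
    # travel time; the last real node on the path is the assigned shelter.
    import networkx as nx

    G = nx.DiGraph()
    edges = [  # (from, to, travel time in minutes) - toy numbers
        ("zoneA", "shelter1", 30), ("zoneA", "shelter2", 20),
        ("zoneB", "shelter1", 10), ("zoneB", "shelter2", 45),
    ]
    G.add_weighted_edges_from(edges)
    for shelter in ("shelter1", "shelter2"):
        G.add_edge(shelter, "SUPER_DEST", weight=0)  # virtual sink

    for origin in ("zoneA", "zoneB"):
        path = nx.shortest_path(G, origin, "SUPER_DEST", weight="weight")
        print(origin, "->", path[-2])  # zoneA -> shelter2, zoneB -> shelter1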
NASA Astrophysics Data System (ADS)
Argoneto, Pierluigi; Renna, Paolo
2016-02-01
This paper proposes a Framework for Capacity Sharing in Cloud Manufacturing (FCSCM) able to support capacity sharing among independent firms. The success of geographically distributed plants depends strongly on the use of suitable tools to integrate their resources and demand forecasts in order to meet a specific production objective. The proposed framework is based on two different tools: a cooperative game algorithm, based on the Gale-Shapley model, and a fuzzy engine. The capacity-allocation policy takes into account the utility functions of the involved firms, and it is shown that the proposed policy induces all firms to report their requirements truthfully. A discrete-event simulation environment has been developed to test the proposed FCSCM. The numerical results show the drastic reduction in unsatisfied capacity obtained by the model of cooperation implemented in this work.
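The Gale-Shapley step can be sketched directly; below is a minimal Python implementation of the classic stable-matching algorithm on toy firm/plant preference lists (the paper's fuzzy engine and utility-function layers are omitted):

    # Gale-Shapley stable matching: firms with spare capacity "propose"
    # to firms requesting capacity, following preference lists.
    def gale_shapley(proposer_prefs, acceptor_prefs):
        free = list(proposer_prefs)                 # unmatched proposers
        next_choice = {p: 0 for p in proposer_prefs}
        engaged = {}                                # acceptor -> proposer
        rank = {a: {p: i for i, p in enumerate(prefs)}
                for a, prefs in acceptor_prefs.items()}
        while free:
            p = free.pop(0)
            a = proposer_prefs[p][next_choice[p]]   # best acceptor not yet tried
            next_choice[p] += 1
            if a not in engaged:
                engaged[a] = p
            elif rank[a][p] < rank[a][engaged[a]]:  # acceptor prefers newcomer
                free.append(engaged[a])
                engaged[a] = p
            else:
                free.append(p)
        return engaged

    suppliers = {"plant1": ["firmX", "firmY"], "plant2": ["firmY", "firmX"]}
    requesters = {"firmX": ["plant2", "plant1"], "firmY": ["plant1", "plant2"]}
    print(gale_shapley(suppliers, requesters))  # stable supplier/requester pairs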
Stakeholder management for conservation projects: a case study of Ream National Park, Cambodia.
De Lopez, T T
2001-07-01
The paper gives an account of the development and implementation of a stakeholder management framework at Ream National Park, Cambodia. Firstly, the concept of stakeholder is reviewed in management and in conservation literatures. Secondly, the context in which the stakeholder framework was implemented is described. Thirdly, a five-step methodological framework is suggested: (1) stakeholder analysis, (2) stakeholder mapping, (3) development of generic strategies and workplan, (4) presentation of the workplan to stakeholders, and (5) implementation of the workplan. This framework classifies stakeholders according to their level of influence on the project and their potential for the conservation of natural resources. In a situation characterized by conflicting claims on natural resources, park authorities were able to successfully develop specific strategies for the management of stakeholders. The conclusion discusses the implications of the Ream experience and the generalization of the framework to other protected areas.
Cane, James; O'Connor, Denise; Michie, Susan
2012-04-24
An integrative theoretical framework, developed for cross-disciplinary implementation and other behaviour change research, has been applied across a wide range of clinical situations. This study tests the validity of this framework. Validity was investigated by behavioural experts sorting 112 unique theoretical constructs using closed and open sort tasks. The extent of replication was tested by Discriminant Content Validation and Fuzzy Cluster Analysis. There was good support for a refinement of the framework comprising 14 domains of theoretical constructs (average silhouette value 0.29): 'Knowledge', 'Skills', 'Social/Professional Role and Identity', 'Beliefs about Capabilities', 'Optimism', 'Beliefs about Consequences', 'Reinforcement', 'Intentions', 'Goals', 'Memory, Attention and Decision Processes', 'Environmental Context and Resources', 'Social Influences', 'Emotions', and 'Behavioural Regulation'. The refined Theoretical Domains Framework has a strengthened empirical base and provides a method for theoretically assessing implementation problems, as well as professional and other health-related behaviours as a basis for intervention development.
Alvioli, M.; Baum, R.L.
2016-01-01
We describe a parallel implementation of TRIGRS, the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model for the timing and distribution of rainfall-induced shallow landslides. We have parallelized the four time-demanding execution modes of TRIGRS, namely both the saturated and unsaturated model with finite and infinite soil depth options, within the Message Passing Interface framework. In addition to new features of the code, we outline details of the parallel implementation and show the performance gain with respect to the serial code. Results are obtained both on commercial hardware and on a high-performance multi-node machine, showing the different limits of applicability of the new code. We also discuss the implications for the application of the model on large-scale areas and as a tool for real-time landslide hazard monitoring.
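The domain-decomposition pattern, scattering grid cells across MPI ranks and gathering per-cell results, can be sketched with mpi4py (assumed installed); the stability computation below is a placeholder, not the TRIGRS physics.

    # Root rank splits the grid cells, MPI scatters one chunk per rank,
    # each rank computes a (placeholder) factor of safety, and results
    # are gathered back on the root.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        cells = list(range(100))                        # toy grid-cell ids
        chunks = [cells[i::size] for i in range(size)]  # one chunk per rank
    else:
        chunks = None

    my_cells = comm.scatter(chunks, root=0)
    my_fs = {c: 1.0 + (c % 7) * 0.1 for c in my_cells}  # placeholder stability

    all_fs = comm.gather(my_fs, root=0)
    if rank == 0:
        fs = {c: v for part in all_fs for c, v in part.items()}
        print("unstable cells:", [c for c, v in fs.items() if v < 1.2])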
Domitrovich, Celene E.; Bradshaw, Catherine P.; Poduska, Jeanne M.; Hoagwood, Kimberly; Buckley, Jacquelyn A.; Olin, Serene; Romanelli, Lisa Hunter; Leaf, Philip J.; Greenberg, Mark T.; Ialongo, Nicholas S.
2011-01-01
Increased availability of research-supported, school-based prevention programs, coupled with the growing national policy emphasis on use of evidence-based practices, has contributed to a shift in research priorities from efficacy to implementation and dissemination. A critical issue in moving research to practice is ensuring high-quality implementation of both the intervention model and the support system for sustaining it. The paper describes a three-level framework for considering the implementation quality of school-based interventions. Future directions for research on implementation are discussed. PMID:27182282
Adriaens, Peter; Goovaerts, Pierre; Skerlos, Steven; Edwards, Elizabeth; Egli, Thomas
2003-12-01
Recent commercial and residential development has substantially impacted the fluxes and quality of water that recharges aquifers, discharges to streams, lakes, and wetlands, and, ultimately, is recycled for potable use. Whereas the contaminant sources may be varied in scope and composition, these issues of urban water sustainability are of public health concern at all levels of economic development worldwide, and require cheap and innovative environmental sensing capabilities and interactive monitoring networks, as well as tailored distributed water treatment technologies. To address this need, a roundtable was organized to explore the potential role of advances in biotechnology and bioengineering to aid in developing causative relationships between spatial and temporal changes in urbanization patterns and groundwater and surface water quality parameters, and to address aspects of socioeconomic constraints in implementing sustainable exploitation of water resources. An interactive framework for quantitative analysis of the coupling between human and natural systems requires integrating information derived from online and offline point measurements with Geographic Information Systems (GIS)-based remote sensing imagery analysis, groundwater-surface water hydrologic fluxes and water quality data to assess the vulnerability of potable water supplies. Spatially referenced data to inform uncertainty-based dynamic models can be used to rank watershed-specific stressors and receptors to guide researchers and policymakers in the development of targeted sensing and monitoring technologies, as well as tailored control measures for risk mitigation of potable water from microbial and chemical environmental contamination. The enabling technologies encompass: (i) distributed sensing approaches for microbial and chemical contamination (e.g. pathogens, endocrine disruptors); (ii) distributed application-specific, and infrastructure-adaptive water treatment systems; (iii) geostatistical integration of monitoring data and GIS layers; and (iv) systems analysis of microbial and chemical proliferation in distribution systems. This operational framework is aimed at technology implementation while maximizing economic and public health benefits. The outcomes of the roundtable will further research agendas in information technology-based monitoring infrastructure development, integration of processes and spatial analysis, as well as in new educational and training platforms for students, practitioners and regulators. The potential for technology diffusion to emerging economies with limited financial resources is substantial.
Simulating competitive egress of noncircular pedestrians.
Hidalgo, R C; Parisi, D R; Zuriguel, I
2017-04-01
We present a numerical framework to simulate pedestrian dynamics in highly competitive conditions by means of a force-based model implemented with spherocylindrical particles instead of the traditional, symmetric disks. This modification of the individuals' shape allows one to naturally reproduce recent experimental findings of room evacuations through narrow doors in situations where the contact pressure among the pedestrians was rather large. In particular, we obtain a power-law tail distribution of the time lapses between the passage of consecutive individuals. In addition, we show that this improvement leads to new features where the particles' rotation acquires great significance.
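The reported power-law tail can be checked with the standard maximum-likelihood (Hill) estimator for the tail exponent, alpha = 1 + n / sum(ln(x_i / x_min)); a self-contained Python sketch on synthetic time lapses (x_min and the true exponent are arbitrary choices, not the paper's measured values):

    # Estimate the exponent of a power-law tail of time lapses between
    # consecutive egressing pedestrians from synthetic data.
    import math
    import random

    random.seed(1)
    x_min, alpha_true = 0.1, 2.5
    # Lapses drawn from p(x) ~ x^(-alpha) via inverse-CDF sampling.
    lapses = [x_min * (1 - random.random()) ** (-1 / (alpha_true - 1))
              for _ in range(5000)]

    tail = [x for x in lapses if x >= x_min]
    alpha_hat = 1 + len(tail) / sum(math.log(x / x_min) for x in tail)
    print(f"estimated tail exponent: {alpha_hat:.2f}")  # close to 2.5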
A UIMA wrapper for the NCBO annotator.
Roeder, Christophe; Jonquet, Clement; Shah, Nigam H; Baumgartner, William A; Verspoor, Karin; Hunter, Lawrence
2010-07-15
The Unstructured Information Management Architecture (UIMA) framework and web services are emerging as useful tools for integrating biomedical text mining tools. This note describes our work, which wraps the National Center for Biomedical Ontology (NCBO) Annotator, an ontology-based annotation service, to make it available as a component in UIMA workflows. This wrapper is freely available on the web at http://bionlp-uima.sourceforge.net/ as part of the UIMA tools distribution from the Center for Computational Pharmacology (CCP) at the University of Colorado School of Medicine. It has been implemented in Java for support on Mac OS X, Linux and MS Windows.
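For orientation, the NCBO Annotator itself is also reachable directly over REST; the Python sketch below uses the current BioPortal endpoint and apikey parameter, which postdate the wrapper described here, so treat the URL, parameters, and response fields as assumptions to verify against the service documentation.

    # Sketch of a direct REST call to the NCBO Annotator (endpoint and
    # response structure assumed from the current BioPortal API).
    import requests

    API_KEY = "your-bioportal-api-key"  # hypothetical placeholder
    resp = requests.get(
        "https://data.bioontology.org/annotator",
        params={"text": "melanoma of the skin", "apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    for hit in resp.json():
        for ann in hit.get("annotations", []):
            print(ann.get("text"), ann.get("matchType"))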
Comprehensive multiplatform collaboration
NASA Astrophysics Data System (ADS)
Singh, Kundan; Wu, Xiaotao; Lennox, Jonathan; Schulzrinne, Henning G.
2003-12-01
We describe the architecture and implementation of our comprehensive multi-platform collaboration framework known as Columbia InterNet Extensible Multimedia Architecture (CINEMA). It provides a distributed architecture for collaboration using synchronous communications like multimedia conferencing, instant messaging, shared web-browsing, and asynchronous communications like discussion forums, shared files, voice and video mails. It allows seamless integration with various communication means like telephones, IP phones, web and electronic mail. In addition, it provides value-added services such as call handling based on location information and presence status. The paper discusses the media services needed for collaborative environment, the components provided by CINEMA and the interaction among those components.
NASA Astrophysics Data System (ADS)
Berlin, Julian; Bogaard, Thom; Van Westen, Cees; Bakker, Wim; Mostert, Eric; Dopheide, Emile
2014-05-01
Cost-benefit analysis (CBA) is a well-known method used widely for the assessment of investments in both the private and public sectors. In the context of risk mitigation and the evaluation of risk-reduction alternatives for natural hazards, it is very important for evaluating the effectiveness of such efforts in terms of avoided monetary losses. However, the current method has some disadvantages related to the spatial distribution of the costs and benefits, the geographical distribution of the avoided damage and losses, and the variation in the areas that benefit in terms of invested money and avoided monetary risk. Decision-makers are often interested in how the costs and benefits are distributed among the administrative units of a large area or region, so that they can compare and analyse the costs and benefits per administrative unit resulting from the implementation of risk-reduction projects. In this work we first examine the cost-benefit procedure for natural hazards and how the costs are assessed for several structural and non-structural risk-reduction alternatives. We also examine the current problems of the method, such as the inclusion of cultural and social considerations that are complex to monetize, the problem of discounting future values using a defined interest rate, and the spatial distribution of costs and benefits. We further examine the additional benefits and the indirect costs associated with implementing the risk-reduction alternatives, such as the cost of a degraded landscape (also called negative benefits). In the last part we examine the current tools and software used in natural hazard assessment that support CBA, and we propose design considerations for the implementation of the CBA module for the CHANGES-SDSS platform, an initiative of the ongoing 7th Framework Programme project "CHANGES" of the European Commission. Keywords: Risk management, Economics of risk mitigation, EU Flood Directive, resilience, prevention, cost benefit analysis, spatial distribution of costs and benefits
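The discounting step at the heart of such a CBA reduces to a net-present-value sum, NPV = sum over t of (B_t - C_t) / (1 + r)^t; a worked Python sketch with entirely hypothetical figures:

    # Benefits here are avoided annual losses from a mitigation work.
    def npv(cashflows, rate):
        # cashflows[t] = benefits - costs in year t (year 0 = construction)
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    capital_cost = 2_000_000          # year-0 investment in a levee (toy)
    avoided_losses = 250_000          # expected annual damage avoided
    horizon, rate = 30, 0.04          # appraisal period, discount rate

    flows = [-capital_cost] + [avoided_losses] * horizon
    print(f"NPV: {npv(flows, rate):,.0f}")   # > 0 -> mitigation pays off
    # Splitting 'flows' per administrative unit would give the spatial
    # cost-benefit distribution the abstract calls for.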
NASA Astrophysics Data System (ADS)
Havens, Scott; Marks, Danny; Kormos, Patrick; Hedrick, Andrew
2017-12-01
In the Western US and many mountainous regions of the world, critical water resources and climate conditions are difficult to monitor because the observation network is generally very sparse. The critical resource from the mountain snowpack is water flowing into streams and reservoirs that will provide for irrigation, flood control, power generation, and ecosystem services. Water supply forecasting in a rapidly changing climate has become increasingly difficult because of non-stationary conditions. In response, operational water supply managers have begun to move from statistical techniques towards the use of physically based models. As we begin to transition physically based models from research to operational use, we must address the most difficult and time-consuming aspect of model initiation: the need for robust methods to develop and distribute the input forcing data. In this paper, we present a new open source framework, the Spatial Modeling for Resources Framework (SMRF), which automates and simplifies the common forcing data distribution methods. It is computationally efficient and can be implemented for both research and operational applications. We present an example of how SMRF is able to generate all of the forcing data required to run a physically based snow model at 50-100 m resolution over regions of 1000-7000 km2. The approach has been successfully applied in real time and historical applications for both the Boise River Basin in Idaho, USA and the Tuolumne River Basin in California, USA. These applications use meteorological station measurements and numerical weather prediction model outputs as input. SMRF has significantly streamlined the modeling workflow, decreased model set-up time from weeks to days, and made near real-time application of a physically based snow model possible.
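One of the common distribution methods such a framework automates is inverse-distance weighting of station measurements onto the model grid; a toy Python sketch (illustrative only, not necessarily SMRF's own default scheme):

    # Inverse-distance-weighted interpolation of station values to a point.
    def idw(stations, x, y, power=2.0):
        num = den = 0.0
        for sx, sy, value in stations:
            d2 = (x - sx) ** 2 + (y - sy) ** 2
            if d2 == 0:
                return value                 # grid point sits on a station
            w = d2 ** (-power / 2)
            num += w * value
            den += w
        return num / den

    # (x_km, y_km, air temperature in degC) for three hypothetical stations
    stations = [(0.0, 0.0, -2.0), (10.0, 0.0, -5.0), (0.0, 10.0, -4.0)]
    print(round(idw(stations, 3.0, 3.0), 2))  # interpolated value at (3, 3)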
Almond, H; Cummings, E; Turner, P
2013-01-01
The Australian Government launched a personally controlled electronic health record (PCEHR) system in July 2012, committing $466.7m. Currently, Australia lacks a clearly articulated implementation and evaluation framework, and there remains limited detail on how this system's success will be determined. These problems are especially visible in primary healthcare. The UK and US have been advocated as models; however, they have started to report points of failure arising from their approaches. Evidence suggests that alternatives need to be considered if mistakes are not to be replicated. Insights from e-health record implementation and evaluation approaches in Denmark and the Netherlands provide Australia with other approaches. The PCEHR requires different and radical thinking around the delivery of health services. Drawing on a range of English language articles identified between 1996 and 2012, the paper generates a conceptual framework for implementation and evaluation of the PCEHR. The generation of a grounded implementation and evaluation framework in primary healthcare will reduce provider scepticism and facilitate complex changes associated with PCEHR uptake.
Evaluation Framework for Telemedicine Using the Logical Framework Approach and a Fishbone Diagram.
Chang, Hyejung
2015-10-01
Technological advances using telemedicine and telehealth are growing in healthcare fields, but the evaluation frameworks for them are inconsistent and limited. This paper suggests a comprehensive evaluation framework for telemedicine system implementation that will support related stakeholders' decision-making by promoting general understanding and resolving arguments and controversies. This study focused on developing a comprehensive evaluation framework by summarizing themes across the range of evaluation techniques and by organizing foundational evaluation frameworks generally applicable across studies and cases of diverse telemedicine. Evaluation factors, covering aspects of information technology, the satisfaction of service providers and consumers, cost, quality, and information security, are organized using the fishbone diagram. It was not easy to develop a monitoring and evaluation framework for telemedicine, since evaluation frameworks for telemedicine are very complex, with many potential inputs, activities, outputs, outcomes, and stakeholders. A conceptual framework was developed that incorporates the key dimensions that need to be considered in the evaluation of telehealth implementation, providing a formal, structured approach to the evaluation of a service. The suggested framework consists of six major dimensions and subsequent branches for each dimension. To implement telemedicine and telehealth services, stakeholders should make decisions based on sufficient evidence of quality and safety, as measured by the comprehensive evaluation framework. Further work would be valuable in applying more comprehensive evaluations to verify and improve the framework across a variety of contexts with more factors and participant-group dimensions.
Rangachari, Pavani
2014-12-01
Despite the federal policy momentum towards "meaningful use" of Electronic Health Records, the healthcare organizational literature remains replete with reports of unintended adverse consequences of implementing Electronic Health Records, including: increased work for clinicians, unfavorable workflow changes, and unexpected changes in communication patterns & practices. In addition to being costly and unsafe, these unintended adverse consequences may pose a formidable barrier to "meaningful use" of Electronic Health Records. Correspondingly, it is essential for hospital administrators to understand and detect the causes of unintended adverse consequences, to ensure successful implementation of Electronic Health Records. The longstanding Technology-in-Practice framework emphasizes the role of human agency in enacting structures of technology use or "technologies-in-practice." Given a set of unintended adverse consequences from health information technology implementation, this framework could help trace them back to specific actions (types of technology-in-practice) and institutional conditions (social structures). On the other hand, the more recent Knowledge-in-Practice framework helps understand how information and communication technologies (e.g., social knowledge networking systems) could be implemented alongside existing technology systems, to create new social structures, generate new knowledge-in-practice, and transform technology-in-practice. Therefore, integrating the two literature streams could serve the dual purpose of understanding and overcoming unintended adverse consequences of Electronic Health Record implementation. This paper seeks to: (1) review the theoretical literatures on technology use & implementation, and identify a framework for understanding & overcoming unintended adverse consequences of implementing Electronic Health Records; (2) outline a broad project proposal to test the applicability of the framework in enabling "meaningful use" of Electronic Health Records in a healthcare context; and (3) identify strategies for successful implementation of Electronic Health Records in hospitals & health systems, based on the literature review and application.
Kallam, Brianne; Pettitt-Schieber, Christie; Owen, Medge; Agyare Asante, Rebecca; Darko, Elizabeth; Ramaswamy, Rohit
2018-05-19
Low-resource clinical settings often face obstacles that challenge the implementation of recommended evidence-based practices (EBPs). Implementation science approaches are useful in identifying barriers and developing strategies to address them. Ridge Regional Hospital (RRH), a tertiary referral hospital in Accra, Ghana, experienced a spike in rates of neonatal sepsis and launched a quality improvement (QI) initiative that identified poor adherence to hand hygiene in the neonatal intensive care unit as a potential source of infections. A multi-modal change package of World Health Organization-recommended solutions was created to address this issue. To ensure that the outputs of the QI effort were adopted within the organization, leaders at RRH and Kybele, Inc. used an implementation science framework called the 'Interactive Systems Framework for Dissemination and Implementation' (ISF) to create a package of locally acceptable implementation strategies. The ISF had never before been used to guide implementation in low-resource settings. Hand hygiene compliance rose from 67% to 92% overall, including a 36% increase during the night shifts, a group of healthcare workers with typically very low levels of compliance. The drastic improvement in adherence to hand hygiene suggests the potential value of the joint use of QI and implementation science to promote the creation and application of contextually appropriate EBPs in low-resource settings. Our results also suggest that using an implementation framework such as the ISF could rapidly increase the uptake of other evidence-based interventions in low-resource settings.
2007-09-01
Advanced, Adaptive, Modular, Distributed, Generic Universal FADEC Framework for Intelligent Propulsion Control (AFRL-RZ-WP-TP-2008-2044)
[Abstract fragment recovered from the report documentation page:] Each FADEC is unique and expensive to develop, produce, maintain, and upgrade for its particular application. Each FADEC is a centralized system, with a ...
Combining Fog Computing with Sensor Mote Machine Learning for Industrial IoT.
Lavassani, Mehrzad; Forsström, Stefan; Jennehag, Ulf; Zhang, Tingting
2018-05-12
Digitalization is a global trend becoming ever more important to our connected and sustainable society. This trend also affects industry where the Industrial Internet of Things is an important part, and there is a need to conserve spectrum as well as energy when communicating data to a fog or cloud back-end system. In this paper we investigate the benefits of fog computing by proposing a novel distributed learning model on the sensor device and simulating the data stream in the fog, instead of transmitting all raw sensor values to the cloud back-end. To save energy and to communicate as few packets as possible, the updated parameters of the learned model at the sensor device are communicated in longer time intervals to a fog computing system. The proposed framework is implemented and tested in a real world testbed in order to make quantitative measurements and evaluate the system. Our results show that the proposed model can achieve a 98% decrease in the number of packets sent over the wireless link, and the fog node can still simulate the data stream with an acceptable accuracy of 97%. We also observe an end-to-end delay of 180 ms in our proposed three-layer framework. Hence, the framework shows that a combination of fog and cloud computing with a distributed data modeling at the sensor device for wireless sensor networks can be beneficial for Industrial Internet of Things applications.
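The core idea, learn on the device and ship only model parameters, can be sketched with a running mean and variance maintained via Welford's update; the Python below is a hypothetical simplification of the paper's learning model, not its actual implementation.

    # Sensor keeps raw readings on-device and periodically uplinks only
    # summary parameters; the fog node resamples a synthetic stream.
    import random

    class SensorModel:
        def __init__(self):
            self.n, self.mean, self.m2 = 0, 0.0, 0.0

        def update(self, x):                 # one raw reading, kept on-device
            self.n += 1
            d = x - self.mean
            self.mean += d / self.n
            self.m2 += d * (x - self.mean)

        def params(self):                    # the only payload sent uplink
            var = self.m2 / (self.n - 1) if self.n > 1 else 0.0
            return {"mean": self.mean, "var": var}

    model = SensorModel()
    for _ in range(1000):                    # 1000 readings, 1 packet
        model.update(random.gauss(20.0, 0.5))
    p = model.params()
    fog_stream = [random.gauss(p["mean"], p["var"] ** 0.5) for _ in range(5)]
    print(p, fog_stream)                     # fog simulates the stream locally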
NASA Astrophysics Data System (ADS)
Modarres, M.; Masouminia, M. R.; Hosseinkhani, H.; Olanj, N.
2016-01-01
In the spirit of performing a complete phenomenological investigation of the merits of Kimber-Martin-Ryskin (KMR) and Martin-Ryskin-Watt (MRW) unintegrated parton distribution functions (UPDF), we have computed the longitudinal structure function of the proton, FL(x, Q2), in the so-called dipole approximation, using the LO and NLO UPDF prepared in the respective frameworks. The preparation process uses the PDF of Martin et al., MSTW2008-LO and MSTW2008-NLO, as inputs. The numerical results are then compared against the exact kt-factorization and the kt-approximate results derived from the work of Golec-Biernat and Stasto, against each other, and against the experimental data from the ZEUS and H1 Collaborations at HERA. Interestingly, our results show much better agreement with the exact kt-factorization than with the kt-approximate outcome. In addition, our results are completely consistent with those obtained by embedding the KMR and MRW UPDF directly into the kt-factorization framework. Notably, the FL prepared from the KMR UPDF shows better agreement with the exact kt-factorization, despite the fact that the MRW formalism employs a better theoretical description of the DGLAP evolution equation and has an NLO expansion. This unexpected outcome arises from the different implementation of the angular-ordering constraint in the KMR approach, which automatically includes the resummation of the ln(1/x) BFKL logarithms in the LO DGLAP evolution equation.
Introducing high performance distributed logging service for ACS
NASA Astrophysics Data System (ADS)
Avarias, Jorge A.; López, Joao S.; Maureira, Cristián; Sommer, Heiko; Chiozzi, Gianluca
2010-07-01
The ALMA Common Software (ACS) is a software framework that provides the infrastructure for the Atacama Large Millimeter Array and other projects. ACS, based on CORBA, offers basic services and common design patterns for distributed software. Every properly built system needs to be able to log status and error information. Logging in a single-computer scenario can be as easy as using fprintf statements. However, in a distributed system, it must provide a way to centralize all logging data in a single place without overloading the network or complicating the applications. ACS provides a complete logging service infrastructure in which every log has an associated priority and timestamp, allowing filtering at different levels of the system (application, service and clients). Currently the ACS logging service uses an implementation of the CORBA Telecom Log Service in a customized way, using only a minimal subset of the features provided by the standard. The most relevant feature used by ACS is the ability to treat the logs as event data that gets distributed over the network in a publisher-subscriber paradigm. For this purpose the CORBA Notification Service, which is resource intensive, is used. On the other hand, the Data Distribution Service (DDS) provides an alternative standard for publisher-subscriber communication for real-time systems, offering better performance and featuring decentralized message processing. This document describes how the new high-performance logging service of ACS has been modeled and developed using DDS, replacing the Telecom Log Service. Benefits and drawbacks are analyzed. A benchmark is presented comparing the differences between the implementations.
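The publisher-subscriber decoupling that makes such a logging service scale can be sketched without any middleware at all; the plain-Python queues below stand in for DDS topics and are not the ACS or DDS API.

    # Producers never block on consumers, and filtering by priority
    # happens on the subscriber side, as in topic-based pub-sub logging.
    import queue
    import threading

    log_topic = queue.Queue()

    def publish(level, msg):
        log_topic.put((level, msg))          # non-blocking for the application

    def subscriber(min_level):
        while True:
            level, msg = log_topic.get()
            if level >= min_level:           # per-subscriber filtering
                print(f"[{level}] {msg}")
            log_topic.task_done()

    threading.Thread(target=subscriber, args=(2,), daemon=True).start()
    publish(1, "debug: antenna 3 heartbeat")     # filtered out
    publish(3, "error: correlator timeout")      # delivered
    log_topic.join()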
Virtual shelves in a digital library: a framework for access to networked information sources.
Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E
1995-01-01
Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses a metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems. One system is based on the Open System Foundation/Distributed Computing Environment and the other is based on the World Wide Web. This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with the two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library-information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources.
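The two-step, late-binding resolution the framework describes can be sketched in Python as a pair of mappings; all identifiers below are hypothetical.

    # Step 1: a location-independent call number maps to a virtual-shelf
    # (subject class) identifier. Step 2: a location directory maps that
    # identifier to a current network address.
    call_number_index = {
        "WG-210-0042": "shelf:cardiology",    # vocabulary-coded call numbers
        "QS-504-0017": "shelf:histology",
    }
    location_directory = {                    # updated when servers move
        "shelf:cardiology": "https://shelves.example.org/cardio",
        "shelf:histology": "https://shelves.example.org/histo",
    }

    def resolve(call_number):
        shelf = call_number_index[call_number]    # location-independent
        return location_directory[shelf]          # location-dependent, bound late

    print(resolve("WG-210-0042"))  # the server can move without changing call numbers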
Boersma, Petra; van Weert, Julia C M; Lakerveld, Jeroen; Dröes, Rose-Marie
2015-01-01
In the past decades, many psychosocial interventions for elderly people with dementia have been developed and implemented. Relatively little research has been done on the extent to which these interventions were implemented in daily care. The aim of this study was to obtain insight into strategies for the successful implementation of psychosocial interventions in daily residential dementia care. Using a modified RE-AIM framework, the indicators considered important for effective and sustainable implementation were defined. A systematic literature search was undertaken in PubMed, PsycINFO, and Cinahl, followed by a hand search for key papers. The included publications were mapped based on the dimensions of the RE-AIM framework: Reach, Effectiveness, Adoption, Implementation, and Maintenance. Fifty-four papers met the inclusion criteria and described various psychosocial interventions. A distinction was made between studies that used one implementation strategy and studies that used multiple strategies. This review shows that to improve caregivers' knowledge, multiple implementation strategies are needed; education alone is not enough. For fostering a more person-centered attitude, different types of knowledge transfer can be effective. Little consideration is given to the adoption of the method by caregivers or to long-term sustainability (maintenance). This review shows that to successfully implement a psychosocial method, the use of multiple implementation strategies is recommended. To ensure the sustainability of a psychosocial care method in daily nursing home care, innovators as well as researchers should pay specific attention to the Adoption, Implementation, and Maintenance dimensions of the RE-AIM implementation framework.
Arenas-Castro, Salvador; Gonçalves, João; Alves, Paulo; Alcaraz-Segura, Domingo; Honrado, João P
2018-01-01
Global environmental changes are rapidly affecting species' distributions and habitat suitability worldwide, requiring a continuous update of biodiversity status to support effective decisions on conservation policy and management. In this regard, satellite-derived Ecosystem Functional Attributes (EFAs) offer a more integrative and quicker evaluation of ecosystem responses to environmental drivers and changes than climate and structural or compositional landscape attributes. Thus, EFAs may hold advantages as predictors in Species Distribution Models (SDMs) and for implementing multi-scale species monitoring programs. Here we describe a modelling framework to assess the predictive ability of EFAs as Essential Biodiversity Variables (EBVs) against traditional datasets (climate, land-cover) at several scales. We test the framework with a multi-scale assessment of habitat suitability for two plant species of conservation concern, both protected under the EU Habitats Directive, differing in terms of life history, range and distribution pattern (Iris boissieri and Taxus baccata). We fitted four sets of SDMs for the two test species, calibrated with: interpolated climate variables; landscape variables; EFAs; and a combination of climate and landscape variables. EFA-based models performed very well at the several scales (median AUC from 0.881±0.072 to 0.983±0.125), and similarly to traditional climate-based models, individually or in combination with land-cover predictors (median AUC from 0.882±0.059 to 0.995±0.083). Moreover, EFA-based models identified additional suitable areas and provided valuable information on functional features of habitat suitability for both test species (narrowly vs. widely distributed), for both coarse and fine scales. Our results suggest a relatively small scale-dependence of the predictive ability of satellite-derived EFAs, supporting their use as meaningful EBVs in SDMs from regional and broader scales to more local and finer scales. Since the evaluation of species' conservation status and habitat quality should as far as possible be performed based on scalable indicators linking to meaningful processes, our framework may guide conservation managers in decision-making related to biodiversity monitoring and reporting schemes.
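As a generic illustration of fitting and scoring an SDM by AUC, a scikit-learn sketch on random toy data follows; the authors' actual models, predictors, and datasets are not reproduced here.

    # Presence/absence modeled from a few predictor columns (standing in
    # for climate, land-cover, or EFA layers) and scored by AUC.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))                 # 4 EFA-like predictors
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    sdm = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, sdm.predict_proba(X_te)[:, 1])
    print(f"AUC: {auc:.3f}")                      # habitat-suitability skill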
A Reference Implementation of the OGC CSW EO Standard for the ESA HMA-T project
NASA Astrophysics Data System (ADS)
Bigagli, Lorenzo; Boldrini, Enrico; Papeschi, Fabrizio; Vitale, Fabrizio
2010-05-01
This work was developed in the context of the ESA Heterogeneous Missions Accessibility (HMA) project, whose main objective is to involve the stakeholders, namely national space agencies and satellite or mission owners and operators, in a harmonization and standardization process for their ground segment services and related interfaces. Among the HMA objectives was the specification, conformance testing, and experimentation of two Extension Packages (EPs) of the ebRIM Application Profile (AP) of the OGC Catalog Service for the Web (CSW) specification: the Earth Observation Products (EO) EP (OGC 06-131) and the Cataloguing of ISO Metadata (CIM) EP (OGC 07-038). Our contributions have included the development and deployment of Reference Implementations (RIs) for both of the above specifications, and their integration with the ESA Service Support Environment (SSE). The RIs are based on the GI-cat framework, an implementation of a distributed catalog service able to query disparate Earth and Space Science data sources (e.g. OGC Web Services, Unidata THREDDS) and to expose several standard interfaces for data discovery (e.g. OGC CSW ISO AP). Following our initial planning, the GI-cat framework has been extended in order to expose the CSW.ebRIM-CIM and CSW.ebRIM-EO interfaces, and to distribute queries to CSW.ebRIM-CIM and CSW.ebRIM-EO data sources. We expected that a mapping strategy would suffice for accommodating CIM, but this proved impractical during implementation. Hence, a model extension strategy was eventually implemented for both the CIM and EO EPs, and the GI-cat federal model was enhanced in order to support the underlying ebRIM AP. This work has provided us with new insights into the different data models for geospatial data and the technologies for their implementation. The extension is used by suitable CIM and EO profilers (front-end mediator components) and accessors (back-end mediator components) that relate ISO 19115 concepts to EO and CIM ones. Moreover, a mapping to the GI-cat federal model was developed for each EP (quite limited for EO; complete for CIM), in order to enable the discovery of resources through any of the GI-cat profilers. The query manager was also improved. GI-cat-EO and -CIM installation packages were made available for distribution, and two RI instances were deployed on the Amazon EC2 facility (plus an ad-hoc instance returning incorrect control data). Integration activities of the EO RI with the ESA SSE Portal for Earth Observation Products were also successfully carried out. During our work, we have contributed feedback and comments to the CIM and EO EP specification working groups. Our contributions resulted in version 0.2.5 of the EO EP, recently approved as an OGC standard, and were useful for consolidating version 0.1.11 of the CIM EP (still under development).
2013-01-01
Background The case has been made for more and better theory-informed process evaluations within trials in an effort to facilitate insightful understandings of how interventions work. In this paper, we provide an explanation of implementation processes from one of the first national implementation research randomized controlled trials with embedded process evaluation conducted within acute care, and a proposed extension to the Promoting Action on Research Implementation in Health Services (PARIHS) framework. Methods The PARIHS framework was prospectively applied to guide decisions about intervention design, data collection, and analysis processes in a trial focussed on reducing peri-operative fasting times. In order to capture a holistic picture of implementation processes, the same data were collected across 19 participating hospitals irrespective of allocation to intervention. This paper reports on findings from data collected from a purposive sample of 151 staff and patients pre- and post-intervention. Data were analysed using content analysis within, and then across, data sets. Results A robust and uncontested evidence base was a necessary, but not sufficient, condition for practice change, in that individual staff and patient responses such as caution influenced decision making. The implementation context was challenging, in which individuals and teams were bounded by professional issues, communication challenges, power, and a lack of clarity about authority and responsibility for practice change. Progress was made in sites where processes were aligned with existing initiatives. Additionally, facilitators reported engaging in many intervention implementation activities, some of which resulted in practice changes, but not in significant improvements to outcomes. Conclusions This study provided an opportunity for reflection on the comprehensiveness of the PARIHS framework. Consistent with the underlying tenet of PARIHS, a multi-faceted and dynamic story of implementation was evident. However, the prominent role that individuals played as part of the interaction between evidence and context is not currently explicit within the framework. We propose that successful implementation of evidence into practice is a planned facilitated process involving an interplay between individuals, evidence, and context to promote evidence-informed practice. This proposal will enhance the potential of the PARIHS framework for explanation, and ensure theoretical development both informs and responds to the evidence base for implementation. Trial registration ISRCTN18046709 - Peri-operative Implementation Study Evaluation (PoISE). PMID:23497438
Slaughter, Susan E; Bampton, Erin; Erin, Daniel F; Ickert, Carla; Jones, C Allyson; Estabrooks, Carole A
2017-06-01
Innovative approaches are required to facilitate the adoption and sustainability of evidence-based care practices. We propose a novel implementation strategy, a peer reminder role, which involves offering a brief formal reminder to peers during structured unit meetings. This study aims to (a) identify healthcare aide (HCA) perceptions of a peer reminder role for HCAs, and (b) develop a conceptual framework for the role based on these perceptions. In 2013, a qualitative focus group study was conducted in five purposively sampled residential care facilities in western Canada. A convenience sample of 24 HCAs agreed to participate in five focus groups. Concurrent with data collection, two researchers coded the transcripts and identified themes by consensus. They jointly determined when saturation was achieved and took steps to optimize the trustworthiness of the findings. Five HCAs from the original focus groups commented on the resulting conceptual framework. HCAs were cautious about accepting a role that might alienate them from their co-workers. They emphasized feeling comfortable with the peer reminder role and identified circumstances that would optimize their comfort, including: effective implementation strategies, perceptions of the role, role credibility, and a supportive context. These intersecting themes formed a peer reminder conceptual framework. We identified HCAs' perspectives on a new peer reminder role designed specifically for them. Based on their perceptions, a conceptual framework was developed to guide the implementation of a peer reminder role for HCAs. This role may be a useful implementation strategy for optimizing the sustainability of new practices in residential care settings, and the related framework could offer guidance on how to implement this role. © 2017 Sigma Theta Tau International.
An approach in building a chemical compound search engine in oracle database.
Wang, H; Volarath, P; Harrison, R
2005-01-01
Searching for and identifying chemical compounds are important processes in drug design and in chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches for representing, storing, and retrieving structures in a database system. In this paper, a general database framework for working as a chemical compound search engine in an Oracle database is described. The framework is designed to eliminate data-type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation emphasizes the efficiency and simplicity of the framework.
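A toy sketch of the algorithm/database coupling the abstract describes, using SQLite and invented integer fingerprints rather than the paper's Oracle framework: the SQL prescreen keeps only compounds whose fingerprint contains every bit of the query fingerprint, before any expensive exact structure comparison:

    # Toy fingerprint prescreen; schema and fingerprints are invented.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE compounds (id INTEGER PRIMARY KEY, name TEXT, fp INTEGER)")
    db.executemany("INSERT INTO compounds VALUES (?, ?, ?)",
                   [(1, "aspirin", 0b10110),
                    (2, "caffeine", 0b01101),
                    (3, "ibuprofen", 0b10111)])

    query_fp = 0b10110  # all query bits must be present in a candidate
    hits = db.execute("SELECT name FROM compounds WHERE (fp & ?) = ?",
                      (query_fp, query_fp)).fetchall()
    print(hits)  # [('aspirin',), ('ibuprofen',)]

A real system would follow the prescreen with a full substructure match on the surviving candidates; the bitwise filter only guarantees no false negatives, not no false positives.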
Towards a Framework for Managing Risk Associated with Technology-Induced Error.
Borycki, Elizabeth M; Kushniruk, Andre W
2017-01-01
Health information technologies (HIT) promised to streamline and modernize healthcare processes. However, a growing body of research has indicated that if such technologies are not designed, implemented, or maintained properly, this may lead to an increased incidence of new types of errors which the authors have referred to as "technology-induced errors". In this paper, a framework is presented that can be used to manage HIT risk. The framework considers the reduction of technology-induced errors at different stages by managing risks associated with the implementation of HIT. Frameworks that allow health information technology managers to employ proactive and preventative approaches to the risks associated with technology-induced errors are critical to improving HIT safety and managing the risk of implementing new technologies.
Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework
Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique
2016-01-01
Aim Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location Eastern North America (as an example). Methods Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698
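A crude stand-in for the paper's hierarchical Bayesian metamodel (the actual machinery is far richer): pooling a correlative SDM output and a process-model output on the logit scale, a product-of-experts style fusion; all numbers are invented:

    # Logit pooling of two probabilistic suitability estimates per grid cell.
    import numpy as np

    def logit(p):
        return np.log(p / (1 - p))

    def expit(z):
        return 1 / (1 + np.exp(-z))

    p_correlative = np.array([0.9, 0.6, 0.2])  # correlative SDM output
    p_process     = np.array([0.8, 0.3, 0.3])  # physiological model output

    p_combined = expit(logit(p_correlative) + logit(p_process))
    print(p_combined)  # agreement sharpens the estimate; disagreement moderates it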
Theoretical Framework for Integrating Distributed Energy Resources into Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, Jianming; Wu, Di; Kalsi, Karanjit
This paper focuses on developing a novel theoretical framework for effective coordination and control of a large number of distributed energy resources (DERs) in distribution systems in order to more reliably manage the future U.S. electric power grid under high penetration of renewable generation. The proposed framework provides a systematic view of the overall structure of future distribution systems along with the underlying information flow, functional organization, and operational procedures. It is characterized by being open, flexible, and interoperable, with the potential to support dynamic system configuration. Under the proposed framework, the energy consumption of various DERs is coordinated and controlled in a hierarchical way by using market-based approaches. Real-time voltage control is simultaneously considered to complement the real power control in order to keep nodal voltages within acceptable ranges in real time. In addition, computational challenges associated with the proposed framework are also discussed with recommended practices.
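For flavor only, a generic sketch of market-based coordination, not the paper's algorithm: a coordinator iterates a price signal until aggregate DER demand meets a feeder limit; all parameters are invented:

    # Dual-ascent style price iteration for DER coordination (toy numbers).
    capacity = 500.0                      # feeder capacity, kW
    devices = [(80.0, 6.0)] * 10          # (max demand kW, price slope) per DER

    def demand(price):
        # each DER responds with a simple linear demand curve
        return sum(max(0.0, a - b * price) for a, b in devices)

    price, step = 0.0, 0.001
    for _ in range(10_000):
        excess = demand(price) - capacity
        if abs(excess) < 1e-3:            # converged: demand matches capacity
            break
        price = max(0.0, price + step * excess)
    print(f"clearing price {price:.3f}, demand {demand(price):.1f} kW")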
Brown, Jason L; Weber, Jennifer J; Alvarado-Serrano, Diego F; Hickerson, Michael J; Franks, Steven J; Carnaval, Ana C
2016-01-01
Climate change is a widely accepted threat to biodiversity. Species distribution models (SDMs) are used to forecast whether and how species distributions may track these changes. Yet, SDMs generally fail to account for genetic and demographic processes, limiting population-level inferences. We still do not understand how predicted environmental shifts will impact the spatial distribution of genetic diversity within taxa. We propose a novel method that predicts spatially explicit genetic and demographic landscapes of populations under future climatic conditions. We use carefully parameterized SDMs as estimates of the spatial distribution of suitable habitats and landscape dispersal permeability under present-day, past, and future conditions. We use empirical genetic data and approximate Bayesian computation to estimate unknown demographic parameters. Finally, we employ these parameters to simulate realistic and complex models of responses to future environmental shifts. We contrast parameterized models under current and future landscapes to quantify the expected magnitude of change. We implement this framework on neutral genetic data available from Penstemon deustus. Our results predict that future climate change will result in geographically widespread declines in genetic diversity in this species. The extent of reduction will heavily depend on the continuity of population networks and deme sizes. To our knowledge, this is the first study to provide spatially explicit predictions of within-species genetic diversity using climatic, demographic, and genetic data. Our approach accounts for climatic, geographic, and biological complexity. This framework is promising for understanding evolutionary consequences of climate change, and guiding conservation planning. © 2016 Botanical Society of America.
van der Togt, Remko; Bakker, Piet J M; Jaspers, Monique W M
2011-04-01
RFID offers great opportunities to health care. Nevertheless, prior experiences also show that RFID systems have not been designed and tested in response to the particular needs of health care settings and might introduce new risks. The aim of this study is to present a framework that can be used to assess the performance of RFID systems, particularly in health care settings. We developed a framework describing a systematic approach that can be used for assessing the feasibility of using an RFID technology in a particular healthcare setting, and more specifically for testing the impact of environmental factors on the quality of RFID-generated data and vice versa. This framework is based on our own experiences with an RFID pilot implementation in an academic hospital in The Netherlands and a literature review concerning RFID test methods and current insights into RFID implementations in healthcare. The implementation of an RFID system within the blood transfusion chain inside a hospital setting was used as a show case to explain the different phases of the framework. The framework consists of nine phases, including an implementation development plan, RFID and medical equipment interference tests, and data accuracy and data completeness tests to be run in laboratory, simulated field, and real field settings. The potential risks that RFID technologies may bring to the healthcare setting should be thoroughly evaluated before they are introduced into a vital environment. The RFID performance assessment framework that we present can act as a reference model to start an RFID development, engineering, implementation and testing plan and, more specifically, to assess the potential risks of interference and to test the quality of the RFID-generated data potentially influenced by physical objects in specific health care environments. Copyright © 2010 Elsevier Inc. All rights reserved.
Cloud computing strategic framework (FY13 - FY15).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arellano, Lawrence R.; Arroyo, Steven C.; Giese, Gerald J.
This document presents an architectural framework (plan) and roadmap for the implementation of a robust Cloud Computing capability at Sandia National Laboratories. It is intended to be a living document and serve as the basis for detailed implementation plans, project proposals and strategic investment requests.
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Reuthler, James J.; McDaniel, Ryan D.
2003-01-01
A flexible framework for the development of block structured volume grids for hypersonic Navier-Stokes flow simulations was developed for analysis of the Shuttle Orbiter Columbia. The development of the flexible framework resulted in an ability to quickly generate meshes and to directly correlate solutions contributed by participating groups on a common surface mesh, providing confidence for the extension of the envelope of solutions and damage scenarios. The framework draws on the experience of the NASA Langley and NASA Ames Research Centers in structured grid generation, and consists of a grid generation process that is implemented through a division of responsibilities. The nominal division of labor consisted of NASA Johnson Space Center coordinating the damage scenarios to be analyzed by the Aerothermodynamics Columbia Accident Investigation (CAI) team, Ames developing the surface grids that described the computational volume about the orbiter, and Langley improving grid quality of Ames-generated data and constructing the final volume grids. Distributing the work among the participants in the Aerothermodynamics CAI team resulted in significantly less time required to construct complete meshes than would have been possible for any individual participant. The approach demonstrated that the One-NASA grid generation team could sustain the demand for new meshes to explore new damage scenarios within an aggressive timeline.
Sideloading - Ingestion of Large Point Clouds Into the Apache Spark Big Data Engine
NASA Astrophysics Data System (ADS)
Boehm, J.; Liu, K.; Alis, C.
2016-06-01
In the geospatial domain we have now reached the point where the data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore naturally attractive to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not naturally supported by existing big data frameworks. Instead, such file formats are supported by software libraries that are restricted to single-CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capabilities of the proposed method to load billions of points into a commodity hardware compute cluster and we discuss the implications for scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
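A hedged PySpark sketch of the map-based ingestion pattern: the driver distributes file paths and each worker runs an existing single-CPU reader locally. The paths and the reader are placeholders, not the paper's implementation:

    # Map-based ingestion sketch: distribute paths, read locally per worker.
    from pyspark import SparkContext

    def read_points(path):
        # Placeholder for a single-CPU format library call, e.g. laspy.read(path);
        # here we emit a dummy point so the pipeline is runnable end-to-end.
        yield (0.0, 0.0, 0.0)

    sc = SparkContext(appName="pointcloud-ingest")
    paths = ["hdfs:///data/tile_%04d.las" % i for i in range(1024)]  # hypothetical
    points = sc.parallelize(paths, numSlices=len(paths)).flatMap(read_points)
    print(points.count())  # action that forces the distributed ingestion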
Web Based Prognostics and 24/7 Monitoring
NASA Technical Reports Server (NTRS)
Strautkalns, Miryam; Robinson, Peter
2013-01-01
We created a general framework for analysts to store and view data in a way that removes the boundaries created by operating systems, programming languages, and proximity. With the advent of HTML5 and CSS3 with JavaScript, the distribution of information is limited only to those who lack a browser. We created a framework based on the methodology: one server, one web-based application. Additional benefits are increased opportunities for collaboration. Today the idea of a group working in a single room is antiquated. Groups communicate and collaborate with others from other universities and organizations, as well as from other continents across time zones. There are many varieties of data-gathering and condition-monitoring software available, as well as companies who specialize in customizing software to individual applications. A single group may depend on multiple languages, environments, and computers to oversee recording and to collaborate with one another in a single lab. The heterogeneous nature of such a system creates challenges for the seamless exchange of data and ideas between members. To address these limitations we designed a framework to allow users seamless accessibility to their data. Our framework was deployed using the data feed on the NASA Ames planetary rover testbed. Our paper demonstrates the process and implementation we followed on the rover.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinnon, Archibald D.; Thompson, Seth R.; Doroshchuk, Ruslan A.
Smart grid technologies are transforming the electric power grid into a grid with bi-directional flows of both power and information. Operating millions of new smart meters and smart appliances will significantly impact electric distribution systems, resulting in greater efficiency. However, the scale of the grid and the new types of information transmitted will potentially introduce several security risks that cannot be addressed by traditional, centralized security techniques. We propose a new bio-inspired cyber security approach. Social insects, such as ants and bees, have developed complex-adaptive systems that emerge from the collective application of simple, light-weight behaviors. The Digital Ants framework is a bio-inspired framework that uses mobile light-weight agents. Sensors within the framework use digital pheromones to communicate with each other and to alert each other of possible cyber security issues. All communication and coordination is both localized and decentralized, thereby allowing the framework to scale across the large numbers of devices that will exist in the smart grid. Furthermore, the sensors are light-weight and therefore suitable for implementation on devices with limited computational resources. This paper will provide a brief overview of the Digital Ants framework and then present results from test bed-based demonstrations that show that Digital Ants can identify a cyber attack scenario against smart meter deployments.
NASA Astrophysics Data System (ADS)
Tien Bui, Dieu; Hoang, Nhat-Duc
2017-09-01
In this study, a probabilistic model, named BayGmmKda, is proposed for flood susceptibility assessment in a study area in central Vietnam. The new model is a Bayesian framework constructed by a combination of a Gaussian mixture model (GMM), radial-basis-function Fisher discriminant analysis (RBFDA), and a geographic information system (GIS) database. In the Bayesian framework, GMM is used for modeling the data distribution of flood-influencing factors in the GIS database, whereas RBFDA is utilized to construct a latent variable that aims at enhancing the model performance. As a result, the posterior probabilistic output of the BayGmmKda model is used as a flood susceptibility index. Experimental results showed that the proposed hybrid framework is superior to other benchmark models, including the adaptive neuro-fuzzy inference system and the support vector machine. To facilitate the model implementation, a BayGmmKda software program has been developed in MATLAB. The BayGmmKda program can accurately establish a flood susceptibility map for the study region. Accordingly, local authorities can overlay this susceptibility map onto various land-use maps for the purpose of land-use planning or management.
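The authors' tool is written in MATLAB; the scikit-learn sketch below only illustrates the GMM-plus-Bayes idea behind such a susceptibility index (the RBFDA latent variable is omitted, and all data are synthetic):

    # Fit one GMM per class; apply Bayes' rule for a posterior flood probability.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    X_flood = rng.normal(1.0, 1.0, size=(300, 4))   # factors at flooded cells
    X_dry   = rng.normal(-1.0, 1.0, size=(300, 4))  # factors at non-flooded cells

    gmm_flood = GaussianMixture(n_components=3, random_state=0).fit(X_flood)
    gmm_dry   = GaussianMixture(n_components=3, random_state=0).fit(X_dry)

    X_new = rng.normal(0.0, 1.0, size=(5, 4))       # cells to be mapped
    log_pf = gmm_flood.score_samples(X_new) + np.log(0.5)  # p(x|flood)P(flood)
    log_pd = gmm_dry.score_samples(X_new) + np.log(0.5)
    posterior = np.exp(log_pf) / (np.exp(log_pf) + np.exp(log_pd))
    print(posterior)  # flood susceptibility index per cell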
Scovil, Carol Y; Flett, Heather M; McMillan, Lan T; Delparte, Jude J; Leber, Diane J; Brown, Jacquie; Burns, Anthony S
2014-09-01
To implement pressure ulcer (PU) prevention best practices in spinal cord injury (SCI) rehabilitation using implementation science frameworks. Quality improvement. SCI Rehabilitation Center. Inpatients admitted January 2012 to July 2013. The implementation of two PU best practices was targeted: (1) completing a comprehensive PU risk assessment and individualized interprofessional PU prevention plan (PUPP); and (2) providing patient education for PU prevention, as part of the pan-Canadian SCI Knowledge Mobilization Network. At our center, the SCI Pressure Ulcer Scale replaced the Braden risk assessment scale and an interprofessional PUPP form was implemented. Comprehensive educational programming existed, so efforts focused on improving documentation. Implementation science frameworks provided structure for a systematic approach to best practice implementation (BPI): (1) site implementation team, (2) implementation drivers, (3) stages of implementation, and (4) improvement cycles. Strategies were developed to address key implementation drivers (staff competency, organizational supports, and leadership) through the four stages of implementation: exploration, installation, initial implementation, and full implementation. Improvement cycles were used to address BPI challenges. Outcomes measured included implementation processes (e.g. staff training) and BPI outcomes (completion rates). Following BPI, risk assessment completion rates improved from 29 to 82%. The PUPP completion rate was 89%. PU education was documented for 45% of patients (vs. 21% pre-implementation). Implementation science provided a framework and effective tools for successful pressure ulcer BPI in SCI rehabilitation. Ongoing improvement cycles will target timeliness of tool completion and documentation of patient education.
The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and the Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate $\mathcal{O}$(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
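A hedged sketch of the aggregation idea with PySpark; the HDFS layout and field names below are invented, not the actual WMArchive schema:

    # Summarize JSON job reports stored on HDFS with Spark SQL.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("wmarchive-aggregate").getOrCreate()
    docs = spark.read.json("hdfs:///archive/fwjr/2018/03/*.json")  # hypothetical path
    (docs.groupBy("site", "exit_code")                             # hypothetical fields
         .count()
         .orderBy("count", ascending=False)
         .show(20))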
NASA Astrophysics Data System (ADS)
Polyakov, Evgeny A.; Vorontsov-Velyaminov, Pavel N.
2014-08-01
Properties of a ferrofluid bilayer (modeled as a system of two planar layers separated by a distance h, each layer carrying a soft-sphere dipolar liquid) are calculated in the framework of inhomogeneous Ornstein-Zernike equations with reference hypernetted chain closure (RHNC). The bridge functions are taken from a soft-sphere (1/r^12) reference system in the pressure-consistent closure approximation. In order to make the RHNC problem tractable, the angular dependence of the correlation functions is expanded into special orthogonal polynomials according to Lado. The resulting equations are solved using the Newton-GMRES algorithm as implemented in the public-domain solver NITSOL. Orientational densities and pair distribution functions of dipoles are compared with Monte Carlo simulation results. A numerical algorithm for the Fourier-Hankel transform of any positive integer order on a uniform grid is presented.
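For orientation, the homogeneous forms of the relations involved (standard liquid-state theory; the paper solves the inhomogeneous, orientation-expanded analogue):

    % Ornstein-Zernike relation for the total and direct correlation functions
    h(r_{12}) = c(r_{12}) + \rho \int c(r_{13})\, h(r_{32})\, \mathrm{d}\mathbf{r}_3

    % RHNC closure, with bridge function B_0(r) from the reference system
    g(r) = \exp\!\left[-\beta u(r) + h(r) - c(r) + B_0(r)\right], \qquad h(r) = g(r) - 1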
Laszlo, Sarah; Plaut, David C
2012-03-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition which simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision, a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture which are sufficiently domain-general to be relevant to cognitive modeling more generally. Copyright © 2011 Elsevier Inc. All rights reserved.
Hadoop-BAM: directly manipulating next generation sequencing data in the cloud.
Niemenmaa, Matti; Kallio, Aleksi; Schumacher, André; Klemelä, Petri; Korpelainen, Eija; Heljanko, Keijo
2012-03-15
Hadoop-BAM is a novel library for the scalable manipulation of aligned next-generation sequencing data in the Hadoop distributed computing framework. It acts as an integration layer between analysis applications and BAM files that are processed using Hadoop. Hadoop-BAM solves the issues related to BAM data access by presenting a convenient API for implementing map and reduce functions that can directly operate on BAM records. It builds on top of the Picard SAM JDK, so tools that rely on the Picard API are expected to be easily convertible to support large-scale distributed processing. In this article we demonstrate the use of Hadoop-BAM by building a coverage summarizing tool for the Chipster genome browser. Our results show that Hadoop offers good scalability, and one should avoid moving data in and out of Hadoop between analysis steps.
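Hadoop-BAM itself is a Java library; the single-machine pysam sketch below only mirrors the per-record map step of a coverage summary that Hadoop-BAM would distribute (file name and bin size are invented):

    # Bin mapped read starts into fixed windows to summarize coverage.
    from collections import Counter
    import pysam

    BIN = 10_000
    coverage = Counter()
    with pysam.AlignmentFile("sample.bam", "rb") as bam:
        for read in bam.fetch():                  # requires a .bai index
            if not read.is_unmapped:
                coverage[(read.reference_name, read.reference_start // BIN)] += 1

    for (chrom, b), n in sorted(coverage.items())[:10]:
        print(chrom, b * BIN, n)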
Collective Framework and Performance Optimizations to Open MPI for Cray XT Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ladd, Joshua S; Gorentla Venkata, Manjunath; Shamis, Pavel
2011-01-01
The performance and scalability of collective operations play a key role in the performance and scalability of many scientific applications. Within the Open MPI code base we have developed a general-purpose hierarchical collective operations framework called Cheetah, and applied it at large scale on the Oak Ridge Leadership Computing Facility (OLCF) Jaguar platform, obtaining better performance and scalability than the native MPI implementation. This paper discusses Cheetah's design and implementation, and optimizations to the framework for Cray XT5 platforms. Our results show that Cheetah's Broadcast and Barrier perform better than the native MPI implementation. For medium data, Cheetah's Broadcast outperforms the native MPI implementation by 93% at the 49,152-process problem size. For small and large data, it outperforms the native MPI implementation by 10% and 9%, respectively, at the 24,576-process problem size. Cheetah's Barrier performs 10% better than the native MPI implementation at the 12,288-process problem size.
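A minimal mpi4py sketch of the kind of collective benchmark reported here (generic MPI, not Cheetah-specific); run with, e.g., mpiexec -n 8 python bench.py:

    # Time repeated broadcasts of a 1 MiB buffer across all ranks.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    buf = np.zeros(1 << 20, dtype="b")   # 1 MiB payload

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(100):
        comm.Bcast(buf, root=0)          # broadcast from rank 0
    comm.Barrier()
    if comm.rank == 0:
        print("Bcast avg:", (MPI.Wtime() - t0) / 100, "s")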
Durojaye, Ebenezer
2017-07-05
The Tobacco Convention was adopted by the World Health Organization (WHO) in 2003. Nikogosian and Kickbusch examine five potential impacts of the Tobacco Convention and its Protocol on public health. These include: the adoption of the Convention appears to have unlocked the treaty-making powers of WHO; the Convention's impact on the global health architecture has been phenomenal; the Convention has facilitated the adoption of further instruments to strengthen its implementation at the national level; the Convention has led to the adoption of appropriate legal frameworks to combat tobacco use at the national level; and the Convention's impact appears to go beyond public health, having also led to the adoption of the Protocol to Eliminate Illicit Trade in Tobacco Products. However, the article by Nikogosian and Kickbusch would seem to overlook some of the challenges that may militate against the effective implementation of international law, including the Tobacco Convention, at the national level. © 2018 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ERIC Educational Resources Information Center
Palmer, Jackie; Powell, Mary Jo
The Laboratory Network Program and the National Network of Eisenhower Mathematics and Science Regional Consortia, operating as the Curriculum Frameworks Task Force, jointly convened a group of educators involved in implementing state-level mathematics or science curriculum frameworks (CF). The Hilton Head (South Carolina) conference had a dual…
A framework for the design, implementation, and evaluation of interprofessional education.
Pardue, Karen T
2015-01-01
The growing emphasis on teamwork and care coordination within health care delivery is sparking interest in interprofessional education (IPE) among nursing and health profession faculty. Faculty often lack firsthand IPE experience, which hinders pedagogical reform. This article proposes a theoretically grounded framework for the design, implementation, and evaluation of IPE. Supporting literature and practical advice are interwoven. The proposed framework guides faculty in the successful creation and evaluation of collaborative learning experiences.
Becan, Jennifer E; Bartkowski, John P; Knight, Danica K; Wiley, Tisha R A; DiClemente, Ralph; Ducharme, Lori; Welsh, Wayne N; Bowser, Diana; McCollister, Kathryn; Hiller, Matthew; Spaulding, Anne C; Flynn, Patrick M; Swartzendruber, Andrea; Dickson, Megan F; Fisher, Jacqueline Horan; Aarons, Gregory A
2018-04-13
This paper describes the means by which a United States National Institute on Drug Abuse (NIDA)-funded cooperative, Juvenile Justice-Translational Research on Interventions for Adolescents in the Legal System (JJ-TRIALS), utilized an established implementation science framework in conducting a multi-site, multi-research center implementation intervention initiative. The initiative aimed to bolster the ability of juvenile justice agencies to address unmet client needs related to substance use while enhancing inter-organizational relationships between juvenile justice and local behavioral health partners. The EPIS (Exploration, Preparation, Implementation, Sustainment) framework was selected and utilized as the guiding model from inception through project completion, including the mapping of implementation strategies to EPIS stages, articulation of research questions, and selection, content, and timing of measurement protocols. Among other key developments, the project led to a reconceptualization of its governing implementation science framework into cyclical form as the EPIS Wheel. The EPIS Wheel is more consistent with rapid-cycle testing principles and permits researchers to track both progressive and recursive movement through EPIS. Moreover, because this randomized controlled trial was predicated on a bundled strategy method, JJ-TRIALS was designed to rigorously test progress through the EPIS stages as promoted by facilitation of data-driven decision making principles. The project extended EPIS by (1) elucidating the role and nature of recursive activity in promoting change (yielding the circular EPIS Wheel), (2) expanding the applicability of the EPIS framework beyond a single evidence-based practice (EBP) to address varying process improvement efforts (representing varying EBPs), and (3) disentangling outcome measures of progression through EPIS stages from the a priori established study timeline. The utilization of EPIS in JJ-TRIALS provides a model for practical and applied use of implementation frameworks in real-world settings that span outer service system and inner organizational contexts in improving care for vulnerable populations. NCT02672150. Retrospectively registered on 22 January 2016.
Distributed software framework and continuous integration in hydroinformatics systems
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao
2017-08-01
When encountering multiple and complicated models, multisource structured and unstructured data, and complex requirements analysis, the platform design and integration of hydroinformatics systems become a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process in hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node, and clients. Based on it, a GIS-based decision support system for jointly regulating the water quantity and water quality of a group of lakes in Wuhan, China, is established.
The Applications of Model-Based Geostatistics in Helminth Epidemiology and Control
Magalhães, Ricardo J. Soares; Clements, Archie C.A.; Patil, Anand P.; Gething, Peter W.; Brooker, Simon
2011-01-01
Funding agencies are dedicating substantial resources to tackle helminth infections. Reliable maps of the distribution of helminth infection can assist these efforts by targeting control resources to areas of greatest need. The ability to define the distribution of infection at regional, national and subnational levels has been enhanced greatly by the increased availability of good quality survey data and the use of model-based geostatistics (MBG), enabling spatial prediction in unsampled locations. A major advantage of MBG risk mapping approaches is that they provide a flexible statistical platform for handling and representing different sources of uncertainty, providing plausible and robust information on the spatial distribution of infections to inform the design and implementation of control programmes. Focussing on schistosomiasis and soil-transmitted helminthiasis, with additional examples for lymphatic filariasis and onchocerciasis, we review the progress made to date with the application of MBG tools in large-scale, real-world control programmes and propose a general framework for their application to inform integrative spatial planning of helminth disease control programmes. PMID:21295680
A component-based, distributed object services architecture for a clinical workstation.
Chueh, H C; Raila, W F; Pappas, J J; Ford, M; Zatsman, P; Tu, J; Barnett, G O
1996-01-01
Attention to an architectural framework in the development of clinical applications can promote reusability of legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces.
Defects controlled wrinkling and topological design in graphene
NASA Astrophysics Data System (ADS)
Zhang, Teng; Li, Xiaoyan; Gao, Huajian
2014-07-01
Due to its atomic scale thickness, the deformation energy in a free standing graphene sheet can be easily released through out-of-plane wrinkles which, if controllable, may be used to tune the electrical and mechanical properties of graphene. Here we adopt a generalized von Karman equation for a flexible solid membrane to describe graphene wrinkling induced by a prescribed distribution of topological defects such as disclinations (heptagons or pentagons) and dislocations (heptagon-pentagon dipoles). In this framework, a given distribution of topological defects in a graphene sheet is represented as an eigenstrain field which is determined from a Poisson equation and can be conveniently implemented in finite element (FEM) simulations. Comparison with atomistic simulations indicates that the proposed model, with only three parameters (i.e., bond length, stretching modulus and bending stiffness), is capable of accurately predicting the atomic scale wrinkles near disclination/dislocation cores while also capturing the large scale graphene configurations under specific defect distributions such as those leading to a sinusoidal surface ruga.
The AI Bus architecture for distributed knowledge-based systems
NASA Technical Reports Server (NTRS)
Schultz, Roger D.; Stobie, Iain
1991-01-01
The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large scale, robust, production quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes or demons provide an event-driven means for providing active objects with shared access to resources, and each other, while not violating their security.
A Framework for Distributed Problem Solving
NASA Astrophysics Data System (ADS)
Leone, Joseph; Shin, Don G.
1989-03-01
This work explores a distributed problem solving (DPS) approach, namely the AM/AG model, to cooperative memory recall. The AM/AG model is a hierarchic social system metaphor for DPS based on Mintzberg's model of organizations. At the core of the model are information flow mechanisms, named amplification and aggregation. Amplification is a process of expounding a given task, called an agenda, into a set of subtasks with a magnified degree of specificity and distributing them to multiple processing units downward in the hierarchy. Aggregation is a process of combining the results reported from multiple processing units into a unified view, called a resolution, and promoting the conclusion upward in the hierarchy. The combination of amplification and aggregation can account for a memory recall process which primarily relies on the ability to make associations between vast amounts of related concepts, sort out the combined results, and promote the most plausible ones. The amplification process is discussed in detail. An implementation of the amplification process is presented. The process is illustrated by an example.
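A toy rendering of the amplification/aggregation flow (the split and combine rules here are placeholders, not the model's mechanisms):

    # Amplify an agenda into subtasks down a hierarchy; aggregate results upward.
    def solve(agenda, agents, depth=0):
        if depth == 2 or len(agents) == 1:       # leaf: a unit handles the task
            return [f"{agents[0]} handled {agenda}"]
        mid = len(agents) // 2
        subtasks = [f"{agenda}/part{i}" for i in (1, 2)]          # amplification
        results = (solve(subtasks[0], agents[:mid], depth + 1) +
                   solve(subtasks[1], agents[mid:], depth + 1))
        return [f"resolution({'; '.join(results)})"]              # aggregation

    print(solve("recall-query", ["a1", "a2", "a3", "a4"]))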
Varsi, Cecilie; Ekstedt, Mirjam; Gammon, Deede
2015-01-01
Background Although there is growing evidence of the positive effects of Internet-based patient-provider communication (IPPC) services for both patients and health care providers, their implementation into clinical practice continues to be a challenge. Objective The 3 aims of this study were to (1) identify and compare barriers and facilitators influencing the implementation of an IPPC service in 5 hospital units using the Consolidated Framework for Implementation Research (CFIR), (2) assess the ability of the different constructs of CFIR to distinguish between high and low implementation success, and (3) compare our findings with those from other studies that used the CFIR to discriminate between high and low implementation success. Methods This study was based on individual interviews with 10 nurses, 6 physicians, and 1 nutritionist who had used the IPPC to answer messages from patients. Results Of the 36 CFIR constructs, 28 were addressed in the interviews, of which 12 distinguished between high and low implementation units. Most of the distinguishing constructs were related to the inner setting domain of CFIR, indicating that institutional factors were particularly important for successful implementation. Health care providers’ beliefs in the intervention as useful for themselves and their patients as well as the implementation process itself were also important. A comparison of constructs across ours and 2 other studies that also used the CFIR to discriminate between high and low implementation success showed that 24 CFIR constructs distinguished between high and low implementation units in at least 1 study; 11 constructs distinguished in 2 studies. However, only 2 constructs (patient need and resources and available resources) distinguished consistently between high and low implementation units in all 3 studies. Conclusions The CFIR is a helpful framework for illuminating barriers and facilitators influencing IPPC implementation. However, CFIR’s strength of being broad and comprehensive also limits its usefulness as an implementation framework because it does not discriminate between the relative importance of its many constructs for implementation success. This is the first study to identify which CFIR constructs are the most promising to distinguish between high and low implementation success across settings and interventions. Findings from this study can contribute to the refinement of CFIR toward a more succinct and parsimonious framework for planning and evaluation of the implementation of clinical interventions. ClinicalTrial Clinicaltrials.gov NCT00971139; http://clinicaltrial.gov/ct2/show/NCT00971139 (Archived by WebCite at http://www.webcitation.org/6cWeqN1uY) PMID:26582138
Martínez-Pardo, María Esther; Mariano-Magaña, David
2007-01-01
Tissue banking is a complex operation concerned with the organisation and coordination of all the steps, that is, from donor selection up to storage and distribution of the final products for therapeutic, diagnostic, instruction and research purposes. An appropriate quality framework should be established in order to cover all the specific methodology as well as the general aspects of quality management, such as research and development, design, instruction and training, specific documentation, traceability, corrective action, client satisfaction, and the like. Such a framework can be obtained by developing a quality management system (QMS) in accordance with a suitable international standard: ISO 9001:2000. This paper presents the implementation process of the tissue bank QMS at the Instituto Nacional de Investigaciones Nucleares in Mexico. The objective of the paper is to share the experience gained by the tissue bank personnel [radiosterilised tissue bank (BTR)] at the Instituto Nacional de Investigaciones Nucleares (ININ, National Institute of Nuclear Research), during implementation of the ISO 9001:2000 certification process. At present, the quality management system (QMS) of ININ also complies with the Mexican standard NMX-CC-9001:2000. The scope of this QMS is Research, Development and Processing of Biological Tissues Sterilised by Gamma Radiation, among others.
Interrater reliability of visually evaluated high frequency oscillations.
Spring, Aaron M; Pittman, Daniel J; Aghakhani, Yahya; Jirsch, Jeffrey; Pillay, Neelan; Bello-Espinosa, Luis E; Josephson, Colin; Federico, Paolo
2017-03-01
High frequency oscillations (HFOs) and interictal epileptiform discharges (IEDs) have been shown to be markers of epileptogenic regions. However, there is currently no 'gold standard' for identifying HFOs. Accordingly, we aimed to formally characterize the interrater reliability of HFO markings to validate the current practices. A morphology detector was implemented to detect events (candidate HFOs, lower-threshold events, and distractors) from the intracranial EEG (iEEG) of ten patients. Six electroencephalographers visually evaluated these events for the presence of HFOs and IEDs. Interrater reliability was calculated using pairwise Cohen's Kappa (κ) and intraclass correlation coefficients (ICC). The HFO evaluation distributions were significantly different for most pairs of reviewers (p<0.05; 11/15 pairs). Interrater reliability was poor for HFOs alone (κ_mean = 0.403; ICC = 0.401) and for HFOs+IEDs (κ_mean = 0.568; ICC = 0.570). The current practice of using two visual reviewers to identify HFOs is prone to bias arising from the poor agreement between reviewers, limiting the extrinsic validity of studies using these markers. The poor interrater reliability underlines the need for a framework to reconcile the important findings of existing studies. The present epoched design is an ideal candidate for the implementation of such a framework. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
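The pairwise-agreement computation is standard; a minimal sketch with invented yes/no markings for two reviewers of the same events:

    # Cohen's kappa for two reviewers' binary HFO markings (toy labels).
    from sklearn.metrics import cohen_kappa_score

    reviewer_a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
    reviewer_b = [1, 0, 0, 0, 1, 0, 0, 0, 1, 1]
    print(cohen_kappa_score(reviewer_a, reviewer_b))  # 1 = perfect, 0 = chance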
USDA-ARS?s Scientific Manuscript database
Streambank stabilization techniques are often implemented to reduce sediment loads from unstable streambanks. Process-based models can predict sediment yields with stabilization scenarios prior to implementation. However, a framework does not exist on how to effectively utilize these models to evalu...
Implementing the Biological Condition Gradient Framework for Management of Estuaries and Coasts
The Biological Condition Gradient (BCG) is a scientific approach to consistent bioassessment that was developed by the U.S. EPA's Office of Water (Office of Science and Technology) and partners. This report describes implementation of the BCG framework for estuaries and coasts ...
Vision: A Conceptual Framework for School Counselors
ERIC Educational Resources Information Center
Watkinson, Jennifer Scaturo
2013-01-01
Vision is essential to the implementation of the American School Counselor Association (ASCA) National Model. Drawing from research in organizational leadership, this article provides a conceptual framework for how school counselors can incorporate vision as a strategy for implementing school counseling programs within the context of practice.…
Forum on Implementing Accessibility Frameworks for ALL Students
ERIC Educational Resources Information Center
Warren, S.; Christensen, L.; Chartrand, A.; Shyyan, V.; Lazarus, S.; Thurlow, M.
2015-01-01
Sixty individuals representing staff from state departments of education, school districts, other countries, testing and testing-related companies, and other educational organizations participated in a forum on June 22, 2015 in San Diego, California, to discuss implementing accessibility frameworks for all students, including students in general…
Support for User Interfaces for Distributed Systems
NASA Technical Reports Server (NTRS)
Eychaner, Glenn; Niessner, Albert
2005-01-01
An extensible Java(TM) software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoration of users' configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed in any operating system with a Java run-time environment, without recompilation or code changes.
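A Python analogue of the dynamic class loading the framework relies on (the original is Java/JavaBeans); module and class names below are placeholders:

    # Instantiate a GUI component named in configuration, resolved at run time.
    import importlib

    def load_component(spec: str):
        module_name, _, class_name = spec.rpartition(".")
        cls = getattr(importlib.import_module(module_name), class_name)
        return cls()  # assumes a no-argument constructor, as JavaBeans do

    # panel = load_component("myapp.widgets.TelemetryGauge")  # hypothetical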
High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids
2015-06-22
We construct these schemes based on the Low-Diffusion-A and the Streamwise-Upwind-Petrov-Galerkin methodology formulated in the framework of the residual-distribution method. For both second- and third-order schemes, we construct a fully implicit scheme.
NASA Astrophysics Data System (ADS)
Berthou, B.; Binosi, D.; Chouika, N.; Colaneri, L.; Guidal, M.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.; Sabatié, F.; Sznajder, P.; Wagner, J.
2018-06-01
We describe the architecture and functionalities of a C++ software framework, coined PARTONS, dedicated to the phenomenology of Generalized Parton Distributions. These distributions describe the three-dimensional structure of hadrons in terms of quarks and gluons, and can be accessed in deeply exclusive lepto- or photo-production of mesons or photons. PARTONS provides a necessary bridge between models of Generalized Parton Distributions and experimental data collected in various exclusive production channels. We outline the specification of the PARTONS framework in terms of practical needs, physical content and numerical capacity. This framework will be useful for physicists - theorists or experimentalists - not only to develop new models, but also to interpret existing measurements and even design new experiments.
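For context, the leading-order DVCS amplitude illustrates how such distributions enter observables (textbook form, not a statement about PARTONS internals):

    % LO Compton form factor as a convolution over the quark GPDs H^q
    \mathcal{H}(\xi, t) = \sum_q e_q^2 \int_{-1}^{1} \mathrm{d}x
      \left( \frac{1}{\xi - x - \mathrm{i}\varepsilon}
           - \frac{1}{\xi + x - \mathrm{i}\varepsilon} \right) H^q(x, \xi, t)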