A three-level atomicity model for decentralized workflow management systems
NASA Astrophysics Data System (ADS)
Ben-Shaul, Israel Z.; Heineman, George T.
1996-12-01
A workflow management system (WFMS) employs a workflow manager (WM) to execute and automate the various activities within a workflow. To protect the consistency of data, the WM encapsulates each activity with a transaction; a transaction manager (TM) then guarantees the atomicity of activities. Since workflows often group several activities together, the TM is responsible for guaranteeing the atomicity of these units. There are scalability issues, however, with centralized WFMSs. Decentralized WFMSs provide an architecture for multiple autonomous WFMSs to interoperate, thus accommodating multiple workflows and geographically-dispersed teams. When atomic units are composed of activities spread across multiple WFMSs, however, there is a conflict between global atomicity and local autonomy of each WFMS. This paper describes a decentralized atomicity model that enables workflow administrators to specify the scope of multi-site atomicity based upon the desired semantics of multi-site tasks in the decentralized WFMS. We describe an architecture that realizes our model and execution paradigm.
Guest Editor's Introduction
NASA Astrophysics Data System (ADS)
Chrysanthis, Panos K.
1996-12-01
Computer Science Department, University of Pittsburgh, Pittsburgh, PA 15260, USA This special issue focuses on current efforts to represent and support workflows that integrate information systems and human resources within a business or manufacturing enterprise. Workflows may also be viewed as an emerging computational paradigm for effective structuring of cooperative applications involving human users and access to diverse data types not necessarily maintained by traditional database management systems. A workflow is an automated organizational process (also called business process) which consists of a set of activities or tasks that need to be executed in a particular controlled order over a combination of heterogeneous database systems and legacy systems. Within workflows, tasks are performed cooperatively by either human or computational agents in accordance with their roles in the organizational hierarchy. The challenge in facilitating the implementation of workflows lies in developing efficient workflow management systems. A workflow management system (also called workflow server, workflow engine or workflow enactment system) provides the necessary interfaces for coordination and communication among human and computational agents to execute the tasks involved in a workflow and controls the execution orderings of tasks as well as the flow of data that these tasks manipulate. That is, the workflow management system is responsible for correctly and reliably supporting the specification, execution, and monitoring of workflows. The six papers selected (out of the twenty-seven submitted for this special issue of Distributed Systems Engineering) address different aspects of these three functional components of a workflow management system. In the first paper, `Correctness issues in workflow management', Kamath and Ramamritham discuss the important issue of correctness in workflow management that constitutes a prerequisite for the use of workflows in the automation of the critical organizational/business processes. In particular, this paper examines the issues of execution atomicity and failure atomicity, differentiating between correctness requirements of system failures and logical failures, and surveys techniques that can be used to ensure data consistency in workflow management systems. While the first paper is concerned with correctness assuming transactional workflows in which selective transactional properties are associated with individual tasks or the entire workflow, the second paper, `Scheduling workflows by enforcing intertask dependencies' by Attie et al, assumes that the tasks can be either transactions or other activities involving legacy systems. This second paper describes the modelling and specification of conditions involving events and dependencies among tasks within a workflow using temporal logic and finite state automata. It also presents a scheduling algorithm that enforces all stated dependencies by executing at any given time only those events that are allowed by all the dependency automata and in an order as specified by the dependencies. In any system with decentralized control, there is a need to effectively cope with the tension that exists between autonomy and consistency requirements. In `A three-level atomicity model for decentralized workflow management systems', Ben-Shaul and Heineman focus on the specific requirement of enforcing failure atomicity in decentralized, autonomous and interacting workflow management systems. 
Their paper describes a model in which each workflow manager must be able to specify the sequence of tasks that comprise an atomic unit for the purposes of correctness, and the degrees of local and global atomicity for the purpose of cooperation with other workflow managers. The paper also discusses a realization of this model in which treaties and summits provide an agreement mechanism, while underlying transaction managers are responsible for maintaining failure atomicity. The fourth and fifth papers are experience papers describing a workflow management system and a large scale workflow application, respectively. Schill and Mittasch, in `Workflow management systems on top of OSF DCE and OMG CORBA', describe a decentralized workflow management system and discuss its implementation using two standardized middleware platforms, namely, OSF DCE and OMG CORBA. The system supports a new approach to workflow management, introducing several new concepts such as data type management for integrating various types of data and quality of service for various services provided by servers. A problem common to both database applications and workflows is the handling of missing and incomplete information. This is particularly pervasive in an `electronic market' with a huge number of retail outlets producing and exchanging volumes of data, the application discussed in `Information flow in the DAMA project beyond database managers: information flow managers'. Motivated by the need for a method that allows a task to proceed in a timely manner if not all data produced by other tasks are available by its deadline, Russell et al propose an architectural framework and a language that can be used to detect, approximate and, later on, to adjust missing data if necessary. The final paper, `The evolution towards flexible workflow systems' by Nutt, is complementary to the other papers and is a survey of issues and of work related to both workflow and computer supported collaborative work (CSCW) areas. In particular, the paper provides a model and a categorization of the dimensions which workflow management and CSCW systems share. Besides summarizing the recent advancements towards efficient workflow management, the papers in this special issue suggest areas open to investigation and it is our hope that they will also provide the stimulus for further research and development in the area of workflow management systems.
Wave scheduling - Decentralized scheduling of task forces in multicomputers
NASA Technical Reports Server (NTRS)
Van Tilborg, A. M.; Wittie, L. D.
1984-01-01
Decentralized operating systems that control large multicomputers need techniques to schedule competing parallel programs called task forces. Wave scheduling is a probabilistic technique that uses a hierarchical distributed virtual machine to schedule task forces by recursively subdividing and issuing wavefront-like commands to processing elements capable of executing individual tasks. Wave scheduling is highly resistant to processing element failures because it uses many distributed schedulers that dynamically assign scheduling responsibilities among themselves. The scheduling technique is trivially extensible as more processing elements join the host multicomputer. A simple model of scheduling cost is used by every scheduler node to distribute scheduling activity and minimize wasted processing capacity by using perceived workload to vary decentralized scheduling rules. At low to moderate levels of network activity, wave scheduling is only slightly less efficient than a central scheduler in its ability to direct processing elements to accomplish useful work.
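The recursive subdivision described in this abstract can be sketched in a few lines. The following toy, written under the assumption of a simple two-level scheduler hierarchy, shows a scheduler node splitting a task force among child schedulers and assigning work to idle processing elements at the leaves; the class and field names (SchedulerNode, free_pes) are invented for illustration and this is not the paper's implementation.

```python
# Hypothetical sketch of wave-style recursive task-force scheduling.
# Names and structure are illustrative only, not the paper's implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SchedulerNode:
    name: str
    children: List["SchedulerNode"] = field(default_factory=list)
    free_pes: int = 0  # idle processing elements managed by a leaf scheduler

    def schedule(self, tasks: List[str]) -> List[tuple]:
        """Recursively subdivide a task force, wavefront-style."""
        if not self.children:
            # Leaf scheduler: assign as many tasks as it has idle PEs.
            return [(t, self.name) for t in tasks[: self.free_pes]]
        placements = []
        remaining = list(tasks)
        # Issue "wavefront" commands: divide the remaining work among children.
        for i, child in enumerate(self.children):
            share = len(remaining) // (len(self.children) - i) or len(remaining)
            placements += child.schedule(remaining[:share])
            remaining = remaining[share:]
        return placements

# Usage: a two-level hierarchy scheduling a four-task task force.
leaves = [SchedulerNode(f"pe-cluster-{i}", free_pes=2) for i in range(2)]
root = SchedulerNode("root", children=leaves)
print(root.schedule([f"task{i}" for i in range(4)]))
```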
Application of decentralized cooperative problem solving in dynamic flexible scheduling
NASA Astrophysics Data System (ADS)
Guan, Zai-Lin; Lei, Ming; Wu, Bo; Wu, Ya; Yang, Shuzi
1995-08-01
The object of this study is to discuss an intelligent solution to the problem of task-allocation in shop floor scheduling. For this purpose, the technique of distributed artificial intelligence (DAI) is applied. Intelligent agents (IAs) are used to realize decentralized cooperation, and negotiation is realized by using message passing based on the contract net model. Multiple agents, such as manager agents, workcell agents, and workstation agents, make game-like decisions based on multiple criteria evaluations. This procedure of decentralized cooperative problem solving makes local scheduling possible. And by integrating such multiple local schedules, dynamic flexible scheduling for the whole shop floor production can be realized.
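As a rough illustration of the contract-net negotiation described above, the sketch below has a manager agent announce a task, collect bids from workcell agents, and award the task to the best bid. The agent classes and the bid criterion (earliest estimated completion time) are assumptions for illustration, not the authors' system.

```python
# Toy contract-net negotiation: announce, bid, award.
# Agent roles and the bid criterion (earliest finish time) are illustrative.
import random

class WorkcellAgent:
    def __init__(self, name, queue_len):
        self.name = name
        self.queue_len = queue_len  # pending work already on this workcell

    def bid(self, task):
        # Bid = estimated completion time (queue backlog + task length).
        return self.queue_len + task["duration"]

class ManagerAgent:
    def __init__(self, workcells):
        self.workcells = workcells

    def allocate(self, task):
        bids = {wc.name: wc.bid(task) for wc in self.workcells}  # announce + collect bids
        winner = min(bids, key=bids.get)                         # award to the best bid
        for wc in self.workcells:
            if wc.name == winner:
                wc.queue_len += task["duration"]
        return winner, bids

workcells = [WorkcellAgent(f"workcell-{i}", random.randint(0, 5)) for i in range(3)]
manager = ManagerAgent(workcells)
print(manager.allocate({"id": "job-1", "duration": 4}))
```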
Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.
Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel
2014-01-01
With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage the various workflows, virtual machines (VMs), and workflow executions on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost, and the price/performance ratio via experimental studies.
Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments
Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems such as cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost, without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different supply voltage levels by sacrificing clock frequency; operating at multiple voltages thus involves a compromise between schedule quality and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems such as cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost, without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different supply voltage levels by sacrificing clock frequency; operating at multiple voltages thus involves a compromise between schedule quality and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
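The DVFS trade-off exploited here can be made concrete with the standard dynamic-power approximation P ≈ C·V²·f: lowering supply voltage and clock frequency reduces the energy spent on a task while stretching its execution time. The sketch below evaluates that trade-off for one task across a few hypothetical voltage/frequency levels; the constants and levels are invented, not taken from the paper.

```python
# Energy/makespan trade-off of a single task under DVFS, using the standard
# dynamic-power model P ~ C * V^2 * f. Levels and constants are illustrative.
C_EFF = 1.0e-9           # effective switched capacitance (F), assumed
WORK_CYCLES = 2.0e9      # cycles the task needs, assumed

# (voltage in volts, frequency in Hz) pairs for hypothetical DVFS levels
LEVELS = [(1.2, 2.0e9), (1.0, 1.6e9), (0.8, 1.2e9)]

for volt, freq in LEVELS:
    exec_time = WORK_CYCLES / freq              # seconds
    power = C_EFF * volt**2 * freq              # watts (dynamic power only)
    energy = power * exec_time                  # joules
    print(f"V={volt:.1f}V f={freq/1e9:.1f}GHz  "
          f"time={exec_time:.2f}s  energy={energy:.3f}J")
```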
Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms
Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel
2017-01-01
With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage the various workflows, virtual machines (VMs), and workflow executions on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost, and the price/performance ratio via experimental studies. PMID:29399237
Schedule-Aware Workflow Management Systems
NASA Astrophysics Data System (ADS)
Mans, Ronny S.; Russell, Nick C.; van der Aalst, Wil M. P.; Moleman, Arnold J.; Bakker, Piet J. M.
Contemporary workflow management systems offer work-items to users through specific work-lists. Users select the work-items they will perform without having a specific schedule in mind. However, in many environments work needs to be scheduled and performed at particular times. For example, in hospitals many work-items are linked to appointments, e.g., a doctor cannot perform surgery without reserving an operating theater and making sure that the patient is present. One of the problems when applying workflow technology in such domains is the lack of calendar-based scheduling support. In this paper, we present an approach that supports the seamless integration of unscheduled (flow) and scheduled (schedule) tasks. Using CPN Tools we have developed a specification and simulation model for schedule-aware workflow management systems. Based on this a system has been realized that uses YAWL, Microsoft Exchange Server 2007, Outlook, and a dedicated scheduling service. The approach is illustrated using a real-life case study at the AMC hospital in the Netherlands. In addition, we elaborate on the experiences obtained when developing and implementing a system of this scale using formal techniques.
Li, Xuejun; Xu, Jia; Yang, Yun
2015-01-01
A cloud workflow system is a kind of platform service based on cloud computing that facilitates the automation of workflow applications. Compared with its counterparts, the market-oriented business model is one of a cloud workflow system's most prominent distinguishing factors. The optimization of task-level scheduling in cloud workflow systems is a hot topic. Because this scheduling problem is NP-hard, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, both suffer from premature convergence during optimization and therefore cannot effectively reduce the cost. To address these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The highly random chaotic sequence improves the diversity of solutions, while its underlying regularity helps ensure good global convergence. The adaptive inertia weight factor depends on the estimated cost and helps the scheduling avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by our scheduling is consistently lower than that of the two representative counterparts.
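To make the two named ingredients concrete, the fragment below generates a chaotic sequence with the logistic map and computes a cost-dependent adaptive inertia weight, two commonly used forms of these ideas. It is a generic sketch under those assumptions, not the authors' exact formulation.

```python
# Sketch of the two CPSO ingredients: a logistic-map chaotic sequence and a
# cost-dependent adaptive inertia weight. Parameter values are illustrative.
def logistic_map(x0=0.37, n=10, r=4.0):
    """Chaotic sequence in (0,1); r=4 gives fully chaotic behaviour."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def adaptive_inertia(cost, best_cost, avg_cost, w_min=0.4, w_max=0.9):
    """Smaller inertia for better-than-average particles (exploit),
    larger inertia for worse ones (explore)."""
    if cost <= avg_cost and avg_cost > best_cost:
        return w_min + (w_max - w_min) * (cost - best_cost) / (avg_cost - best_cost)
    return w_max

print(logistic_map(n=5))
print(adaptive_inertia(cost=12.0, best_cost=10.0, avg_cost=15.0))
```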
Li, Xuejun; Xu, Jia; Yang, Yun
2015-01-01
A cloud workflow system is a kind of platform service based on cloud computing that facilitates the automation of workflow applications. Compared with its counterparts, the market-oriented business model is one of a cloud workflow system's most prominent distinguishing factors. The optimization of task-level scheduling in cloud workflow systems is a hot topic. Because this scheduling problem is NP-hard, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, both suffer from premature convergence during optimization and therefore cannot effectively reduce the cost. To address these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The highly random chaotic sequence improves the diversity of solutions, while its underlying regularity helps ensure good global convergence. The adaptive inertia weight factor depends on the estimated cost and helps the scheduling avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by our scheduling is consistently lower than that of the two representative counterparts. PMID:26357510
Operator Objective Function Guidance for a Real-Time Unmanned Vehicle Scheduling Algorithm
2012-12-01
"Consensus-Based Decentralized Auctions for Robust Task Allocation," IEEE Transactions on Robotics and Automation, Vol. 25, No. 4, 2009, pp. 912… … planning for the fleet. The decentralized task planner used in OPS-USERS is the consensus-based bundle algorithm (CBBA), a decentralized, polynomial… … and surveillance (OPS-USERS), which leverages decentralized algorithms for vehicle routing and task allocation.
Decentralizing the Team Station: Simulation before Reality as a Best-Practice Approach.
Charko, Jackie; Geertsen, Alice; O'Brien, Patrick; Rouse, Wendy; Shahid, Ammarah; Hardenne, Denise
2016-01-01
The purpose of this article is to share the logistical planning requirements and simulation experience of one Canadian hospital as it prepared its staff for the change from a centralized inpatient unit model to the decentralized design planned for its new community hospital. With the commitment and support of senior leadership, project management resources and clinical leads worked collaboratively to design a decentralized prototype in the form of a pod-style environment in the hospital's current setting. Critical success factors included engaging the right stakeholders, providing an opportunity to test new workflows and technology, creating a strong communication plan and building on lessons learned as subsequent pod prototypes are launched.
MaGate Simulator: A Simulation Environment for a Decentralized Grid Scheduler
NASA Astrophysics Data System (ADS)
Huang, Ye; Brocco, Amos; Courant, Michele; Hirsbrunner, Beat; Kuonen, Pierre
This paper presents a simulator for a decentralized modular grid scheduler named MaGate. MaGate's design emphasizes scheduler interoperability, providing intelligent scheduling that serves the grid community as a whole. Each MaGate scheduler instance is able to deal with dynamic scheduling conditions, with continuously arriving grid jobs. Received jobs are either allocated on local resources or delegated to other MaGates for remote execution. The proposed MaGate simulator is based on the GridSim toolkit and the Alea simulator, and abstracts the features and behaviors of complex fundamental grid elements, such as grid jobs, grid resources, and grid users. Simulation of scheduling tasks is supported by a grid network overlay simulator executing distributed ant-based swarm intelligence algorithms to provide services such as group communication and resource discovery. For evaluation, a comparison of the behaviors of different collaborative policies among a community of MaGates is provided. Results support the use of the proposed approach as a functional, ready-to-use grid scheduler simulator.
ERIC Educational Resources Information Center
Li, Wenhao
2011-01-01
Distributed workflow technology has been widely used in modern education and e-business systems. Distributed web applications have shown cross-domain and cooperative characteristics to meet the need of current distributed workflow applications. In this paper, the author proposes a dynamic and adaptive scheduling algorithm PCSA (Pre-Calculated…
Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization
Malawski, Maciej; Figiela, Kamil; Bubak, Marian; ...
2015-01-01
This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs, as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL) and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from the synthetic workflows and from general purpose cloud benchmarks, as well as from data measured in our own experiments with Montage, an astronomical application, executed on the Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
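The core of such a model — choose, for a level of identical tasks, an instance type and count so the level meets its deadline at minimum hourly-billed cost — can be sketched by brute force for a tiny case. The instance types, prices, and deadline below are invented; the paper's actual model is expressed in AMPL/CMPL rather than in code like this.

```python
# Brute-force sketch of level-wise instance selection under a deadline
# with hourly billing. Instance data and deadlines are illustrative only.
import math

INSTANCE_TYPES = {"small": (0.10, 1.0), "large": (0.40, 3.0)}  # ($/hour, speedup)

def level_cost(n_tasks, task_hours, deadline_hours):
    """Cheapest (type, count, cost) finishing n identical tasks on time."""
    best = None
    for name, (price, speed) in INSTANCE_TYPES.items():
        per_task = task_hours / speed
        for count in range(1, n_tasks + 1):
            makespan = math.ceil(n_tasks / count) * per_task
            if makespan <= deadline_hours:
                cost = count * math.ceil(makespan) * price  # hourly billing
                if best is None or cost < best[2]:
                    best = (name, count, cost)
    return best

# A level of 8 identical 1-hour tasks that must finish within 3 hours.
print(level_cost(n_tasks=8, task_hours=1.0, deadline_hours=3.0))
```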
A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.
Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary
2017-12-01
Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that increases programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching, the HTGS implementation achieves similar performance to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k matrices, respectively.
A De-centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Arora, Manish; Das, Sajal K.; Biswas, Rupak
2002-01-01
In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper, we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is decentralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.
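A minimal way to picture the N-resource idea: rather than ranking nodes by CPU load alone, each node advertises a utilization vector, and a ready job goes to the node whose weighted multi-resource cost plus estimated coordination overhead is lowest. The weights and the overhead term below are assumptions for illustration, not the paper's formula.

```python
# Multi-resource node selection sketch: combine CPU, memory and I/O
# utilization with an estimated coordination overhead. Weights are assumed.
WEIGHTS = {"cpu": 0.5, "mem": 0.3, "io": 0.2}

def node_cost(utilization, coordination_overhead):
    """Weighted multi-resource utilization plus per-node coordination cost."""
    return sum(WEIGHTS[r] * utilization[r] for r in WEIGHTS) + coordination_overhead

nodes = {
    "grid-node-a": ({"cpu": 0.80, "mem": 0.40, "io": 0.10}, 0.05),
    "grid-node-b": ({"cpu": 0.30, "mem": 0.70, "io": 0.50}, 0.15),
}
target = min(nodes, key=lambda n: node_cost(*nodes[n]))
print(target)  # the node with the lowest combined cost receives the next ready job
```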
A De-Centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Arora, Manish; Das, Sajal K.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2002-01-01
In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is de-centralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.
Decentralized Real-Time Scheduling
1990-08-01
… must provide several alternative resource management policies, including FIFO and deadline queueing for shared resources that are not available. … When demand exceeds the supply of shared resources (even within a single switch), some calls cannot be completed. In that case, a call's priority … … associated chiefly with the need to manage resources in a timely and decentralized fashion. The Alpha programming model permits the convenient expression of …
Integrating prediction, provenance, and optimization into high energy workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schram, M.; Bansal, V.; Friese, R. D.
We propose a novel approach for efficient execution of workflows on distributed resources. The key components of this framework include: performance modeling to quantitatively predict workflow component behavior; optimization-based scheduling, such as choosing an optimal subset of resources to meet demand and assigning tasks to resources; distributed I/O optimizations such as prefetching; and provenance methods for collecting performance data. In preliminary results, these techniques improve throughput on a small Belle II workflow by 20%.
Command in Air War: Centralized vs. Decentralized Control of Combat Airpower
2005-05-19
… centralized control of these missions, requiring a full day for scheduling a target, was ineffective at supporting the D-day invasion and even proved … dangerous to friendly troops. Americans developed a method of scheduling a steady stream of … controller took over this function. Thus, although the aircraft were still scheduled and routed by a centralized "Combined Operations Center," they …
Multi-core processing and scheduling performance in CMS
NASA Astrophysics Data System (ADS)
Hernández, J. M.; Evans, D.; Foulkes, S.
2012-12-01
Commodity hardware is going many-core. We might soon not be able to satisfy the per-core job memory needs of the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the allocation unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g., I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.
A mission planning concept and mission planning system for future manned space missions
NASA Technical Reports Server (NTRS)
Wickler, Martin
1994-01-01
The international character of future manned space missions will compel the involvement of several international space agencies in mission planning tasks. Additionally, the community of users requires a higher degree of freedom for experiment planning. Both of these problems can be solved by a decentralized mission planning concept using the so-called 'envelope method,' by which resources are allocated to users by distributing resource profiles ('envelopes') which define resource availabilities at specified times. The users are essentially free to plan their activities independently of each other, provided that they stay within their envelopes. The new developments were aimed at refining the existing vague envelope concept into a practical method for decentralized planning. Selected critical functions were exercised by planning an example, founded on experience acquired by the MSCC during the Spacelab missions D-1 and D-2. The main activity regarding future mission planning tasks was to improve the existing MSCC mission planning system, using new techniques. An electronic interface was developed to collect all formalized user inputs more effectively, along with an 'envelope generator' for generation and manipulation of the resource envelopes. The existing scheduler and its database were successfully replaced by an artificial intelligence scheduler. This scheduler is not only capable of handling resource envelopes, but also uses a new technology based on neural networks. Therefore, it is very well suited to solve future scheduling problems more efficiently. This prototype mission planning system was used to gain new practical experience with decentralized mission planning, using the envelope method. In future steps, software tools will be optimized, and all data management planning activities will be embedded into the scheduler.
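A tiny illustration of the envelope method: an envelope gives, per time slot, how much of a resource a user may consume, and a user's independently planned activity schedule is acceptable as long as its aggregated per-slot demand never exceeds that envelope. The data layout and numbers below are invented for illustration.

```python
# Envelope-method feasibility check: a user's plan is valid if its summed
# resource demand stays within the allocated envelope in every time slot.
# Slot granularity and the numbers are illustrative.
def within_envelope(envelope, activities):
    """envelope: per-slot limits; activities: (start_slot, duration, demand)."""
    demand = [0.0] * len(envelope)
    for start, duration, amount in activities:
        for slot in range(start, start + duration):
            demand[slot] += amount
    return all(d <= limit for d, limit in zip(demand, envelope))

power_envelope = [5.0, 5.0, 3.0, 3.0]          # kW available in four slots
experiment_plan = [(0, 2, 4.0), (2, 2, 2.5)]   # (start, duration, kW)
print(within_envelope(power_envelope, experiment_plan))  # True: the plan fits
```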
Expansion of a residency program through provision of second-shift decentralized services.
Host, Brian D; Anderson, Michael J; Lucas, Paul D
2014-12-15
The rationale for and logistics of the expansion of a postgraduate year 1 (PGY1) residency program in a community hospital are described. Baptist Health Lexington, a nonprofit community hospital in Lexington, Kentucky, sought to expand the PGY1 program by having residents perform second-shift decentralized pharmacist functions. Program expansion was predicated on aligning resident staffing functions with current hospitalwide initiatives involving medication reconciliation and patient education. The focus was to integrate residents into the workflow while allowing them more time to practice as pharmacists and contribute to departmental objectives. The staffing function would increase residents' overall knowledge of departmental operations and foster their sense of independence and ownership. The decentralized functions would include initiation of clinical pharmacokinetic consultations, admission medication reconciliation, discharge teaching for patients with heart failure, and order-entry support from decentralized locations. The program grew from three to five residents and established a staffing rotation for second-shift decentralized coverage. The increased time spent staffing did not detract from the time allotted to previously established learning experiences and enhanced overall continuity of the staffing experience. The change also emphasized to the residents the importance of integration of distributive and clinical functions within the department. Pharmacist participation in admission and discharge medication reconciliation activities has also increased patient satisfaction, evidenced by follow-up surveys conducted by the hospital. A PGY1 residency program was expanded through the provision of second-shift decentralized clinical services, which helped provide residents with increased patient exposure and enhanced staffing experience. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
An Organizational and Qualitative Approach to Improving University Course Scheduling
ERIC Educational Resources Information Center
Hill, Duncan L.
2010-01-01
Focusing on the current timetabling process at the University of Toronto Mississauga (UTM), I apply David Wesson's theoretical framework in order to understand (1) how increasing enrollment interacts with a decentralized timetabling process to limit the flexibility of course schedules and (2) the resultant impact on educational quality. I then…
ERIC Educational Resources Information Center
Hill, Duncan L.
2008-01-01
Focusing on the current timetabling process at the University of Toronto Mississauga, I apply David Wesson's theoretical framework in order to understand how increasing enrolment interacts with a decentralized timetabling process to limit the flexibility of course schedules, and the resultant impact on educational quality. I then apply Robert…
Multi-core processing and scheduling performance in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, J. M.; Evans, D.; Foulkes, S.
2012-01-01
Commodity hardware is going many-core. We might soon not be able to satisfy the per-core job memory needs of the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the allocation unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g., I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.
Design and implementation of workflow engine for service-oriented architecture
NASA Astrophysics Data System (ADS)
Peng, Shuqing; Duan, Huining; Chen, Deyun
2009-04-01
As computer networks develop rapidly and enterprise applications become increasingly distributed, traditional workflow engines show deficiencies such as complex structure, poor stability, poor portability, limited reusability, and difficult maintenance. In this paper, in order to improve the stability, scalability, and flexibility of workflow management systems, a four-layer workflow engine architecture based on SOA is put forward according to the XPDL standard of the Workflow Management Coalition; the route control mechanism of the control model is implemented, a scheduling strategy for cyclic and acyclic routing is designed, and the workflow engine is implemented using technologies such as XML, JSP, and EJB.
Decentralized Control of Scheduling in Distributed Systems.
1983-03-18
… the job scheduling algorithm adapts to the changing busyness of the various hosts in the system. The environment in which the job scheduling entities … … resources and processes that constitute the node and a set of interfaces for accessing these processes and resources. The structure of a node could change … … parallel. Chang [CHNG82] has also described some algorithms for detecting properties of general graphs by traversing paths in a graph in parallel. One of …
Decentralized Control of Scheduling in Distributed Systems.
1983-12-15
… does not perform quite as well as the 10-state system, but is less sensitive to changes in scheduling period. It performs best when scheduling is … … intra-process concerns. We extend their concept of a process to include inter-process communication; that is, various forms of send and receive primitives … … current busyness of each site based on some responses to requests for bids. A received bid is a utilization factor, adjusted by incrementing it by a …
Integrating Behavioral Health in Primary Care Using Lean Workflow Analysis: A Case Study
van Eeghen, Constance; Littenberg, Benjamin; Holman, Melissa D.; Kessler, Rodger
2016-01-01
Background: Primary care offices are integrating behavioral health (BH) clinicians into their practices. Implementing such a change is complex, difficult, and time consuming. Lean workflow analysis may be an efficient, effective, and acceptable method for integration. Objective: Observe BH integration into primary care and measure its impact. Design: Prospective, mixed-methods case study in a primary care practice. Measurements: Change in treatment initiation (referrals generating BH visits within the system). Secondary measures: primary care visits resulting in BH referrals, referrals resulting in scheduled appointments, time from referral to scheduled appointment, and time from referral to first visit. Providers and staff were surveyed on the Lean method. Results: Referrals increased from 23 to 37/1000 visits (P<.001). Referrals resulted in more scheduled (60% to 74%, P<.001) and arrived visits (44% to 53%, P=.025). Time from referral to first scheduled visit decreased (hazard ratio (HR) 1.60; 95% confidence interval (CI) 1.37, 1.88; P<.001), as did time to first arrived visit (HR 1.36; 95% CI 1.14, 1.62; P=.001). Surveys and comments were positive. Conclusions: This pilot integration of BH showed significant improvements in treatment initiation and other measures. Strengths of Lean included workflow improvement, system perspective, and project success. Further evaluation is indicated. PMID:27170796
Game-Based Virtual Worlds as Decentralized Virtual Activity Systems
NASA Astrophysics Data System (ADS)
Scacchi, Walt
There is widespread interest in the development and use of decentralized systems and virtual world environments as possible new places for engaging in collaborative work activities. Similarly, there is widespread interest in stimulating new technological innovations that enable people to come together through social networking, file/media sharing, and networked multi-player computer game play. A decentralized virtual activity system (DVAS) is a networked computer supported work/play system whose elements and social activities can be both virtual and decentralized (Scacchi et al. 2008b). Massively multi-player online games (MMOGs) such as World of Warcraft and online virtual worlds such as Second Life are each popular examples of a DVAS. Furthermore, these systems are beginning to be used for research, development, and education activities in different science, technology, and engineering domains (Bainbridge 2007, Bohannon et al. 2009; Rieber 2005; Scacchi and Adams 2007; Shaffer 2006), which are also of interest here. This chapter explores two case studies of DVASs developed at the University of California at Irvine that employ game-based virtual worlds to support collaborative work/play activities in different settings. The settings include those that model and simulate practical or imaginative physical worlds in different domains of science, technology, or engineering through alternative virtual worlds where players/workers engage in different kinds of quests or quest-like workflows (Jakobsson 2006).
Optimal Decentralized Protocol for Electric Vehicle Charging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gan, LW; Topcu, U; Low, SH
We propose a decentralized algorithm to optimally schedule electric vehicle (EV) charging. The algorithm exploits the elasticity of electric vehicle loads to fill the valleys in electric load profiles. We first formulate the EV charging scheduling problem as an optimal control problem, whose objective is to impose a generalized notion of valley-filling, and study properties of optimal charging profiles. We then give a decentralized algorithm to iteratively solve the optimal control problem. In each iteration, EVs update their charging profiles according to the control signal broadcast by the utility company, and the utility company alters the control signal to guide their updates. The algorithm converges to optimal charging profiles (that are as "flat" as they can possibly be) irrespective of the specifications (e.g., maximum charging rate and deadline) of EVs, even if EVs do not necessarily update their charging profiles in every iteration, and use potentially outdated control signal when they update. Moreover, the algorithm only requires each EV solving its local problem, hence its implementation requires low computation capability. We also extend the algorithm to track a given load profile and to real-time implementation.
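The broadcast-and-update iteration can be caricatured in a few lines: the aggregate load acts as the control signal, each EV greedily shifts its charging toward lightly loaded slots subject to its energy need and rate limit, and the process repeats. This greedy response and the numbers below are illustrative simplifications, not the paper's provably convergent update rule.

```python
# Toy decentralized valley-filling iteration for EV charging.
# The "price" signal and each EV's greedy response are simplifications.
BASE_LOAD = [3.0, 2.0, 1.0, 1.5, 3.5]   # non-EV load per slot (kW), assumed
EVS = [{"energy": 4.0, "max_rate": 2.0}, {"energy": 3.0, "max_rate": 2.0}]
profiles = [[0.0] * len(BASE_LOAD) for _ in EVS]

def aggregate():
    return [BASE_LOAD[t] + sum(p[t] for p in profiles) for t in range(len(BASE_LOAD))]

for _ in range(10):                                   # broadcast/update rounds
    for ev, profile in zip(EVS, profiles):
        # Signal seen by this EV: total load minus its own contribution.
        price = [tot - mine for tot, mine in zip(aggregate(), profile)]
        profile[:] = [0.0] * len(price)
        remaining = ev["energy"]
        for t in sorted(range(len(price)), key=lambda i: price[i]):  # cheapest slots first
            profile[t] = min(ev["max_rate"], remaining)
            remaining -= profile[t]
            if remaining <= 0:
                break

print([round(x, 2) for x in aggregate()])             # the valley slots absorb the EV load
```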
Clinic Workflow Simulations using Secondary EHR Data
Hribar, Michelle R.; Biermann, David; Read-Brown, Sarah; Reznick, Leah; Lombardi, Lorinna; Parikh, Mansi; Chamberlain, Winston; Yackel, Thomas R.; Chiang, Michael F.
2016-01-01
Clinicians today face increased patient loads, decreased reimbursements and potential negative productivity impacts of using electronic health records (EHR), but have little guidance on how to improve clinic efficiency. Discrete event simulation models are powerful tools for evaluating clinical workflow and improving efficiency, particularly when they are built from secondary EHR timing data. The purpose of this study is to demonstrate that these simulation models can be used for resource allocation decision making as well as for evaluating novel scheduling strategies in outpatient ophthalmology clinics. Key findings from this study are that: 1) secondary use of EHR timestamp data in simulation models represents clinic workflow, 2) simulations provide insight into the best allocation of resources in a clinic, 3) simulations provide critical information for schedule creation and decision making by clinic managers, and 4) simulation models built from EHR data are potentially generalizable. PMID:28269861
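A bare-bones version of the approach: treat EHR timestamps as empirical service-time samples and drive a discrete-event simulation of patients flowing through a limited resource such as exam rooms. The event-queue skeleton below is generic, and the arrival times, durations, and room count are invented rather than taken from the study.

```python
# Minimal discrete-event simulation of a clinic with a fixed number of exam
# rooms. Arrival times and exam durations are illustrative stand-ins for
# values that would be mined from EHR timestamps.
import heapq

def simulate(arrivals, service_minutes, n_rooms=2):
    """Return per-patient completion times under first-come room allocation."""
    room_free = [0.0] * n_rooms          # next time each exam room becomes free
    heapq.heapify(room_free)
    done = []
    for arrive, service in zip(arrivals, service_minutes):
        start = max(arrive, heapq.heappop(room_free))   # earliest available room
        finish = start + service
        heapq.heappush(room_free, finish)
        done.append(finish)
    return done

arrivals = [0, 10, 20, 30, 40]           # scheduled arrival times (minutes)
service = [35, 25, 30, 20, 25]           # exam durations (minutes)
print(simulate(arrivals, service))
```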
A Two-Stage Probabilistic Approach to Manage Personal Worklist in Workflow Management Systems
NASA Astrophysics Data System (ADS)
Han, Rui; Liu, Yingbo; Wen, Lijie; Wang, Jianmin
The application of workflow scheduling to managing an individual actor's personal worklist is one area that can bring great improvement to business processes. However, current deterministic work cannot adapt to the dynamics and uncertainties in the management of personal worklists. For this issue, this paper proposes a two-stage probabilistic approach which aims at assisting actors to flexibly manage their personal worklists. Specifically, at the first stage the approach analyzes every activity instance's continuous probability of satisfying its deadline. Based on this stochastic analysis result, at the second stage an innovative scheduling strategy is proposed to minimize the overall deadline-violation cost for an actor's personal worklist. Simultaneously, the strategy recommends to the actor a feasible worklist of activity instances that meet the required threshold for successful execution. The effectiveness of our approach is evaluated in a real-world workflow management system and with large-scale simulation experiments.
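As a rough sketch of the two stages: stage one estimates each work item's probability of meeting its deadline, here under an assumed normal duration model; stage two orders the worklist by expected deadline-violation cost. Both the distributional assumption and the ordering heuristic are illustrative, not the paper's strategy.

```python
# Two-stage sketch: (1) probability of meeting the deadline under a normal
# duration model, (2) order items by expected violation cost. Assumed model.
from math import erf, sqrt

def p_on_time(mean, std, time_left):
    """P(duration <= time_left) for a normally distributed duration."""
    return 0.5 * (1.0 + erf((time_left - mean) / (std * sqrt(2.0))))

def prioritize(items):
    """items: (name, mean, std, time_left, violation_cost); riskiest cost first."""
    def expected_loss(it):
        _, mean, std, time_left, cost = it
        return (1.0 - p_on_time(mean, std, time_left)) * cost
    return sorted(items, key=expected_loss, reverse=True)

worklist = [
    ("approve-order", 3.0, 1.0, 4.0, 10.0),
    ("review-claim",  5.0, 2.0, 5.0, 50.0),
    ("file-report",   1.0, 0.5, 6.0,  5.0),
]
for name, *_ in prioritize(worklist):
    print(name)   # items with the highest expected violation cost come first
```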
Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz
2016-01-01
As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions. PMID:27896971
Kaushik, Gaurav; Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz
2017-01-01
As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions.
van Veen-Berkx, Elizabeth; van Dijk, Menno V; Cornelisse, Diederich C; Kazemier, Geert; Mokken, Fleur C
2016-08-01
A new method of scheduling anesthesia-controlled time (ACT) was implemented on July 1, 2012 in an academic inpatient operating room (OR) department. This study examined the relationship between this new scheduling method and OR performance. The new method comprised the development of predetermined time frames per anesthetic technique based on historical data of the actual time needed for anesthesia induction and emergence. Seven "anesthesia scheduling packages" (0 to 6) were established. Several options based on the quantity of anesthesia monitoring and the complexity of the patient were differentiated in time within each package. This was a quasi-experimental time-series design. Relevant data were divided into 4 equal periods of time. These time periods were compared with ANOVA with contrast analysis: intervention, pre-intervention, and post-intervention contrasts were tested. All emergency cases were excluded. A total of 34,976 inpatient elective cases performed from January 1, 2010 to December 31, 2014 were included for statistical analyses. The intervention contrast showed a significant decrease (p < 0.001) of 4.5% in the prediction error. The total number of cancellations decreased to 19.9%. The ANOVA with contrast analyses showed no significant differences with respect to under- and over-used OR time and raw use. Unanticipated benefits also derived from this study, allowing for a smoother workflow: e.g., anesthesia nurses know exactly which medical equipment and devices need to be assembled and tested beforehand, based on the scheduled anesthesia package. Scheduling the 2 major components of a procedure (anesthesia- and surgeon-controlled time) more accurately leads to fewer case cancellations, lower prediction errors, and smoother OR workflow in a university hospital setting. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
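To illustrate the scheduling idea only (not the study's packages or figures), the sketch below builds a scheduled case duration from a predetermined anesthesia time frame plus the surgeon-controlled estimate, and computes the relative prediction error against the actual duration; all numbers are invented.

```python
# Scheduled duration = predetermined anesthesia package time + surgical estimate.
# Package times and the case numbers are illustrative, not the study's values.
ANESTHESIA_PACKAGES = {0: 10, 1: 20, 2: 30, 3: 40}   # minutes of ACT per package

def scheduled_duration(package, surgical_estimate_min):
    return ANESTHESIA_PACKAGES[package] + surgical_estimate_min

def prediction_error(scheduled_min, actual_min):
    """Relative prediction error of the scheduled case duration."""
    return abs(scheduled_min - actual_min) / actual_min

sched = scheduled_duration(package=2, surgical_estimate_min=90)
print(sched, round(prediction_error(sched, actual_min=130), 3))
```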
Finding idle machines in a workstation-based distributed system
NASA Technical Reports Server (NTRS)
Theimer, Marvin M.; Lantz, Keith A.
1989-01-01
The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
Development of a decentralized multi-axis synchronous control approach for real-time networks.
Xu, Xiong; Gu, Guo-Ying; Xiong, Zhenhua; Sheng, Xinjun; Zhu, Xiangyang
2017-05-01
The message scheduling and network-induced delays of real-time networks, together with the different inertias and disturbances of different axes, make synchronous control of real-time network-based systems quite challenging. To address this challenge, a decentralized multi-axis synchronous control approach is developed in this paper. Due to the limitations of message scheduling and network bandwidth, the position synchronization error is first defined in the proposed control approach using a subset of preceding-axis pairs. Then, a motion message estimator is designed to reduce the effect of network delays. It is proven that position and synchronization errors asymptotically converge to zero in the proposed controller with delay compensation. Finally, simulation and experimental results show that the developed control approach achieves good position synchronization performance for multi-axis motion over a real-time network. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
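The preceding-axis-pair construction can be shown numerically: each axis's synchronization error couples its own tracking error with that of the preceding axis, so axes that drift apart are penalized even when each tracks its own reference reasonably well. The adjacent-coupling formula below is a common form, stated here as an assumption about the paper's definition.

```python
# Position and synchronization errors for a chain of axes, using the common
# preceding-axis coupling e_sync[i] = e[i] - e[i-1]. Illustrative numbers.
references = [10.0, 10.0, 10.0]     # desired positions per axis
positions  = [ 9.8, 10.3,  9.9]     # measured positions per axis

tracking = [x - r for x, r in zip(positions, references)]           # per-axis error
sync = [tracking[i] - tracking[i - 1] for i in range(1, len(tracking))]

print("tracking errors:", tracking)   # [-0.2, 0.3, -0.1]
print("sync errors:    ", sync)       # [0.5, -0.4] between adjacent axis pairs
```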
Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin; ...
2016-10-06
The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules—an in-memory data store—with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin
The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules—an in-memory data store—with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.
Integrating Behavioral Health in Primary Care Using Lean Workflow Analysis: A Case Study.
van Eeghen, Constance; Littenberg, Benjamin; Holman, Melissa D; Kessler, Rodger
2016-01-01
Primary care offices are integrating behavioral health (BH) clinicians into their practices. Implementing such a change is complex, difficult, and time consuming. Lean workflow analysis may be an efficient, effective, and acceptable method for use during integration. The objectives of this study were to observe BH integration into primary care and to measure its impact. This was a prospective, mixed-methods case study in a primary care practice that served 8,426 patients over a 17-month period, with 652 patients referred to BH services. Secondary measures included primary care visits resulting in BH referrals, referrals resulting in scheduled appointments, time from referral to the scheduled appointment, and time from the referral to the first visit. Providers and staff were surveyed on the Lean method. Referrals increased from 23 to 37 per 1000 visits (P < .001). Referrals resulted in more scheduled (60% to 74%; P < .001) and arrived visits (44% to 53%; P = .025). Time from referral to the first scheduled visit decreased (hazard ratio, 1.60; 95% confidence interval, 1.37-1.88) as did time to first arrived visit (hazard ratio, 1.36; 95% confidence interval, 1.14-1.62). Survey responses and comments were positive. This pilot integration of BH showed significant improvements in treatment initiation and other measures. Strengths of Lean analysis included workflow improvement, system perspective, and project success. Further evaluation is indicated. © Copyright 2016 by the American Board of Family Medicine.
Electronic workflow for imaging in clinical research.
Hedges, Rebecca A; Goodman, Danielle; Sachs, Peter B
2014-08-01
In the transition from paper to electronic workflow, the University of Colorado Health System's implementation of a new electronic health record system (EHR) forced all clinical groups to reevaluate their practices including the infrastructure surrounding clinical trials. Radiological imaging is an important piece of many clinical trials and requires a high level of consistency and standardization. With EHR implementation, paper orders were manually transcribed into the EHR, digitizing an inefficient work flow. A team of schedulers, radiologists, technologists, research personnel, and EHR analysts worked together to optimize the EHR to accommodate the needs of research imaging protocols. The transition to electronic workflow posed several problems: (1) there needed to be effective communication throughout the imaging process from scheduling to radiologist interpretation. (2) The exam ordering process needed to be automated to allow scheduling of specific research studies on specific equipment. (3) The billing process needed to be controlled to accommodate radiologists already supported by grants. (4) There needed to be functionality allowing exams to finalize automatically skipping the PACS and interpretation process. (5) There needed to be a way to alert radiologists that a specialized research interpretation was needed on a given exam. These issues were resolved through the optimization of the "visit type," allowing a high-level control of an exam at the time of scheduling. Additionally, we added columns and fields to work queues displaying grant identification numbers. The build solutions we implemented reduced the mistakes made and increased imaging quality and compliance.
Improving patient access to an interventional US clinic.
Steele, Joseph R; Clarke, Ryan K; Terrell, John A; Brightmon, Tonya R
2014-01-01
A continuous quality improvement project was conducted to increase patient access to a neurointerventional ultrasonography (US) clinic. The clinic was experiencing major scheduling delays because of an increasing patient volume. A multidisciplinary team was formed that included schedulers, medical assistants, nurses, technologists, and physicians. The team created an Ishikawa diagram of the possible causes of the long wait time to the next available appointment and developed a flowchart of the steps involved in scheduling and completing a diagnostic US examination and biopsy. The team then implemented a staged intervention that included adjustments to staffing and room use (stage 1); new procedures for scheduling same-day add-on appointments (stage 2); and a lead technician rotation to optimize patient flow, staffing, and workflow (stage 3). Six months after initiation of the intervention, the mean time to the next available appointment had decreased from 25 days at baseline to 1 day, and the number of available daily appointments had increased from 38 to 55. These improvements resulted from a coordinated provider effort and had a net present value of more than $275,000. This project demonstrates that structural changes in staffing, workflow, and room use can substantially reduce scheduling delays for critical imaging procedures. © RSNA, 2014.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duro, Francisco Rodrigo; Garcia Blas, Javier; Isaila, Florin
This paper explores novel techniques for improving the performance of many-task workflows based on the Swift scripting language. We propose novel programmer options for automated distributed data placement and task scheduling. These options trigger a data placement mechanism used for distributing intermediate workflow data over the servers of Hercules, a distributed key-value store that can be used to cache file system data. We demonstrate that these new mechanisms can significantly improve the aggregated throughput of many-task workflows by up to 86x, reduce the contention on the shared file system, exploit data locality, and trade off locality and load balance.
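The placement idea described above can be illustrated with a small, hedged sketch: intermediate workflow data is assigned to one of several key-value servers, preferring the server closest to the producing task and otherwise falling back to hashing for balance. All names (server labels, dataset names, the `place` function) are illustrative assumptions and do not reflect the actual Hercules or Swift interfaces.

```python
# Illustrative locality-aware placement of intermediate workflow data over
# key-value servers; a sketch under assumed names, not the Hercules API.
import hashlib
from typing import Optional

SERVERS = ["kv-node-0", "kv-node-1", "kv-node-2", "kv-node-3"]

def place(dataset_name: str, preferred_server: Optional[str] = None) -> str:
    """Pick a server for an intermediate dataset (locality first, hash second)."""
    if preferred_server in SERVERS:
        return preferred_server
    digest = int(hashlib.md5(dataset_name.encode()).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]

# A scheduler could then co-locate consumer tasks with the chosen server,
# trading data locality against load balance as the abstract discusses.
print(place("stage1/out-0042.dat"))                                 # hashed placement
print(place("stage1/out-0042.dat", preferred_server="kv-node-2"))   # locality-preferred
```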
N, Sadhasivam; R, Balamurugan; M, Pandi
2018-01-27
Objective: Epigenetic modifications involving DNA methylation and histone status are responsible for the stable maintenance of cellular phenotypes. Abnormalities may be causally involved in cancer development and therefore could have diagnostic potential. The field of epigenomics refers to all epigenetic modifications implicated in control of gene expression, with a focus on better understanding of human biology in both normal and pathological states. An epigenomics scientific workflow is essentially a data processing pipeline to automate the execution of various genome sequencing operations or tasks. The cloud is a popular computing platform for deploying large-scale epigenomics scientific workflows. Its dynamic environment provides various resources to scientific users on a pay-per-use billing model. Scheduling epigenomics scientific workflow tasks is a complicated problem in the cloud. We here focused on application of an improved particle swarm optimization (IPSO) algorithm for this purpose. Methods: The IPSO algorithm was applied to find suitable resources and allocate epigenomics tasks so that the total cost was minimized for detection of epigenetic abnormalities of potential application for cancer diagnosis. Result: The results showed that IPSO based task to resource mapping reduced total cost by 6.83 percent as compared to the traditional PSO algorithm. Conclusion: The results for various cancer diagnosis tasks showed that IPSO based task to resource mapping can achieve better costs when compared to PSO based mapping for epigenomics scientific application workflow. Creative Commons Attribution License
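The underlying scheduling problem lends itself to a compact illustration. The sketch below runs a plain PSO over a synthetic cost matrix, mapping tasks to resources so that total cost is minimized; the cost values, swarm parameters, and the rounding-based decoding are assumptions, and the IPSO improvements from the paper are not reproduced.

```python
# Minimal PSO sketch for task-to-resource mapping under a cost objective.
import random

random.seed(1)
N_TASKS, N_RES = 8, 3
# COST[t][r]: price of running task t on resource r (synthetic numbers)
COST = [[random.uniform(1.0, 10.0) for _ in range(N_RES)] for _ in range(N_TASKS)]

def decode(position):
    # Clamp each continuous dimension to a valid resource index.
    return [min(N_RES - 1, max(0, int(round(x)))) for x in position]

def total_cost(position):
    return sum(COST[t][r] for t, r in enumerate(decode(position)))

# Initialise a swarm of candidate mappings.
swarm = [[random.uniform(0, N_RES - 1) for _ in range(N_TASKS)] for _ in range(20)]
vel = [[0.0] * N_TASKS for _ in swarm]
pbest = [p[:] for p in swarm]
gbest = min(pbest, key=total_cost)

for _ in range(200):
    for i, p in enumerate(swarm):
        for d in range(N_TASKS):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - p[d])
                         + 1.5 * r2 * (gbest[d] - p[d]))
            p[d] += vel[i][d]
        if total_cost(p) < total_cost(pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest, key=total_cost)

print("best mapping:", decode(gbest), "cost:", round(total_cost(gbest), 2))
```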
Real-Time Electronic Dashboard Technology and Its Use to Improve Pediatric Radiology Workflow.
Shailam, Randheer; Botwin, Ariel; Stout, Markus; Gee, Michael S
The purpose of our study was to create a real-time electronic dashboard in the pediatric radiology reading room providing a visual display of updated information regarding scheduled and in-progress radiology examinations that could help radiologists to improve clinical workflow and efficiency. To accomplish this, a script was set up to automatically send real-time HL7 messages from the radiology information system (Epic Systems, Verona, WI) to an Iguana Interface engine, with relevant data regarding examinations stored in an SQL Server database for visual display on the dashboard. Implementation of an electronic dashboard in the reading room of a pediatric radiology academic practice has led to several improvements in clinical workflow, including a shorter time interval for radiologist protocol entry for computed tomography or magnetic resonance imaging examinations and fewer telephone calls related to unprotocoled examinations. Other advantages include the enhanced ability of radiologists to anticipate and attend to examinations requiring radiologist monitoring or scanning, as well as to work with technologists and operations managers to optimize scheduling of radiology resources. We foresee increased utilization of electronic dashboard technology in the future as a method to improve radiology workflow and quality of patient care. Copyright © 2017 Elsevier Inc. All rights reserved.
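The data path the abstract describes (HL7 messages feeding a database that backs the dashboard) can be sketched generically. The snippet below parses a pipe-delimited HL7 v2 message and stores a few fields a reading-room display might need; the segment and field positions, sample message, and table layout are illustrative assumptions, not the site's actual interface specification.

```python
# Hedged sketch: parse an HL7 v2 order message and store dashboard fields.
import sqlite3

def parse_hl7(message: str) -> dict:
    segments = {}
    for line in message.strip().split("\n"):
        fields = line.split("|")
        segments[fields[0]] = fields
    obr = segments.get("OBR", [])
    orc = segments.get("ORC", [])
    return {
        "accession": obr[3] if len(obr) > 3 else "",   # OBR-3 (assumed accession)
        "procedure": obr[4] if len(obr) > 4 else "",   # OBR-4 universal service id
        "status": orc[5] if len(orc) > 5 else "",      # ORC-5 order status
    }

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exams (accession TEXT, procedure_desc TEXT, status TEXT)")

sample = ("MSH|^~\\&|RIS|HOSP|DASH|RAD|202301011200||ORM^O01|1|P|2.3\n"
          "ORC|NW|12345|||SC\n"
          "OBR|1|12345|ACC-0042|MRI BRAIN W/O CONTRAST")
row = parse_hl7(sample)
conn.execute("INSERT INTO exams VALUES (?, ?, ?)",
             (row["accession"], row["procedure"], row["status"]))
print(conn.execute("SELECT * FROM exams").fetchall())
```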
A framework for service enterprise workflow simulation with multi-agents cooperation
NASA Astrophysics Data System (ADS)
Tan, Wenan; Xu, Wei; Yang, Fujun; Xu, Lida; Jiang, Chuanqun
2013-11-01
Dynamic process modelling for service businesses is a key technique for service-oriented information systems and service business management, and the workflow model of business processes is the core part of service systems. Service business workflow simulation is the prevalent approach for analysing service business processes dynamically. The generic method for service business workflow simulation is based on discrete-event queuing theory, which lacks flexibility and scalability. In this paper, we propose a service workflow-oriented framework for the process simulation of service businesses using multi-agent cooperation to address these issues. The social rationality of agents is introduced into the proposed framework. Adopting rationality as a social factor in decision-making strategies, flexible scheduling of activity instances has been implemented. A system prototype has been developed to validate the proposed simulation framework through a business case study.
Autonomous mission planning and scheduling: Innovative, integrated, responsive
NASA Technical Reports Server (NTRS)
Sary, Charisse; Liu, Simon; Hull, Larry; Davis, Randy
1994-01-01
Autonomous mission scheduling, a new concept for NASA ground data systems, is a decentralized and distributed approach to scientific spacecraft planning, scheduling, and command management. Systems and services are provided that enable investigators to operate their own instruments. In autonomous mission scheduling, separate nodes exist for each instrument and one or more operations nodes exist for the spacecraft. Each node is responsible for its own operations which include planning, scheduling, and commanding; and for resolving conflicts with other nodes. One or more database servers accessible to all nodes enable each to share mission and science planning, scheduling, and commanding information. The architecture for autonomous mission scheduling is based upon a realistic mix of state-of-the-art and emerging technology and services, e.g., high performance individual workstations, high speed communications, client-server computing, and relational databases. The concept is particularly suited to the smaller, less complex missions of the future.
Simulation of Etching in Chlorine Discharges Using an Integrated Feature Evolution-Plasma Model
NASA Technical Reports Server (NTRS)
Hwang, Helen H.; Bose, Deepak; Govindan, T. R.; Meyyappan, M.; Biegel, Bryan (Technical Monitor)
2002-01-01
To better utilize its vast collection of heterogeneous resources that are geographically distributed across the United States, NASA is constructing a computational grid called the Information Power Grid (IPG). This paper describes various tools and techniques that we are developing to measure and improve the performance of a broad class of NASA applications when run on the IPG. In particular, we are investigating the areas of grid benchmarking, grid monitoring, user-level application scheduling, and decentralized system-level scheduling.
Low Latency Workflow Scheduling and an Application of Hyperspectral Brightness Temperatures
NASA Astrophysics Data System (ADS)
Nguyen, P. T.; Chapman, D. R.; Halem, M.
2012-12-01
New system analytics for Big Data computing holds the promise of major scientific breakthroughs and discoveries from the exploration and mining of the massive data sets becoming available to the science community. However, such data-intensive scientific applications face severe challenges in accessing, managing and analyzing petabytes of data. While the Hadoop MapReduce environment has been successfully applied to data-intensive problems arising in business, there are still many scientific problem domains where limitations in the functionality of MapReduce systems prevent its wide adoption by those communities. This is mainly because MapReduce does not readily support unique science discipline needs such as special science data formats, graphic and computational data analysis tools, maintaining high degrees of computational accuracy, and interfacing with an application's existing components across heterogeneous computing processors. We address some of these limitations by exploiting the MapReduce programming model for satellite data-intensive scientific problems and address scalability, reliability, scheduling, and data management issues when dealing with climate data records and their complex observational challenges. In addition, we present techniques to support unique Earth science discipline needs such as dealing with special science data formats (HDF and NetCDF). We have developed a Hadoop task scheduling algorithm that improves latency by 2x for a scientific workflow including the gridding of the EOS AIRS hyperspectral Brightness Temperatures (BT). This workflow processing algorithm has been tested on the Multicore Computing Center's private Hadoop-based Intel Nehalem cluster, as well as in a virtual mode under the open-source Eucalyptus cloud. The 55 TB AIRS hyperspectral L1b Brightness Temperature record has been gridded at a resolution of 0.5 x 1.0 degrees, and we have computed a 0.9 annual anti-correlation with the El Niño Southern Oscillation in the Niño 4 region, as well as a 1.9 Kelvin decadal Arctic warming in the 4 µm and 12 µm spectral regions. Additionally, we present the frequency of extreme global warming events based on the normalized maximum BT in a grid cell relative to its local standard deviation. A low-latency Hadoop scheduling environment maintains data integrity and fault tolerance in a MapReduce data-intensive cloud environment while improving the "time to solution" metric by 35% when compared to a more traditional parallel processing system for the same dataset. Our next step will be to improve the usability of our Hadoop task scheduling system, to enable rapid prototyping of data-intensive experiments by means of processing "kernels". We will report on the performance and experience of implementing these experiments on the NEX testbed, and propose the use of a graphical directed acyclic graph (DAG) interface to help us develop on-demand scientific experiments. Our workflow system works within the Hadoop infrastructure as a replacement for the FIFO or Fair Scheduler; thus, the use of Apache Pig Latin or other Apache tools may also be worth investigating on the NEX system to improve the usability of our workflow scheduling infrastructure for rapid experimentation.
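The extreme-event statistic mentioned above (the maximum brightness temperature in each grid cell, normalized by that cell's local standard deviation) reduces to a few array operations. The sketch below uses synthetic data; the array shape and the 3-sigma flagging threshold are illustrative assumptions, not the AIRS processing configuration.

```python
# Per-grid-cell normalized maximum brightness temperature (BT) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
# bt[time, lat, lon]: gridded brightness temperatures (synthetic values here)
bt = 250.0 + 5.0 * rng.standard_normal((120, 180, 360))

mean = bt.mean(axis=0)               # per-cell climatological mean
std = bt.std(axis=0)                 # per-cell local standard deviation
normalized_max = (bt.max(axis=0) - mean) / std

# Cells whose maximum BT exceeds, say, 3 local standard deviations could be
# flagged as candidate extreme events.
extreme = normalized_max > 3.0
print("fraction of cells flagged:", extreme.mean())
```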
The role of human-automation consensus in multiple unmanned vehicle scheduling.
Cummings, M L; Clare, Andrew; Hart, Christin
2010-02-01
This study examined the impact of increasing automation replanning rates on operator performance and workload when supervising a decentralized network of heterogeneous unmanned vehicles. Futuristic unmanned vehicle systems will invert the operator-to-vehicle ratio so that one operator can control multiple dissimilar vehicles connected through a decentralized network. Significant human-automation collaboration will be needed because of automation brittleness, but such collaboration could cause high workload. Three increasing levels of replanning were tested on an existing multiple unmanned vehicle simulation environment that leverages decentralized algorithms for vehicle routing and task allocation in conjunction with human supervision. Rapid replanning can cause high operator workload, ultimately resulting in poorer overall system performance. Poor performance was associated with a lack of operator consensus for when to accept the automation's suggested prompts for new plan consideration as well as negative attitudes toward unmanned aerial vehicles in general. Participants with video game experience tended to collaborate more with the automation, which resulted in better performance. In decentralized unmanned vehicle networks, operators who ignore the automation's requests for new plan consideration and impose rapid replans both increase their own workload and reduce the ability of the vehicle network to operate at its maximum capacity. These findings have implications for personnel selection and training for futuristic systems involving human collaboration with decentralized algorithms embedded in networks of autonomous systems.
Evaluation of the IWS Model 6000 SBR began in April 2004 when one SBR was taken off line and cleaned. The verification testing started July 1, 2004 and proceeded without interruption through June 30, 2005. All sixteen four-day sampling events were completed as scheduled, yielding...
Cho, Soojin; Park, Jong-Woong; Sim, Sung-Han
2015-01-01
Wireless sensor networks (WSNs) facilitate a new paradigm to structural identification and monitoring for civil infrastructure. Conventional structural monitoring systems based on wired sensors and centralized data acquisition systems are costly for installation as well as maintenance. WSNs have emerged as a technology that can overcome such difficulties, making deployment of a dense array of sensors on large civil structures both feasible and economical. However, as opposed to wired sensor networks in which centralized data acquisition and processing is common practice, WSNs require decentralized computing algorithms to reduce data transmission due to the limitation associated with wireless communication. In this paper, the stochastic subspace identification (SSI) technique is selected for system identification, and SSI-based decentralized system identification (SDSI) is proposed to be implemented in a WSN composed of Imote2 wireless sensors that measure acceleration. The SDSI is tightly scheduled in the hierarchical WSN, and its performance is experimentally verified in a laboratory test using a 5-story shear building model. PMID:25856325
Optimizing Multiple QoS for Workflow Applications using PSO and Min-Max Strategy
NASA Astrophysics Data System (ADS)
Umar Ambursa, Faruku; Latip, Rohaya; Abdullah, Azizol; Subramaniam, Shamala
2017-08-01
Workflow scheduling under multiple QoS constraints is a complicated optimization problem. Metaheuristic techniques are excellent approaches for dealing with such problems. Many metaheuristic-based algorithms have been proposed that consider various economic and trust-related QoS dimensions. However, most of these approaches lead to high violation of user-defined QoS requirements in tight situations. Recently, a new Particle Swarm Optimization (PSO)-based QoS-aware workflow scheduling strategy (LAPSO) was proposed to improve performance in such situations. The LAPSO algorithm is designed around a synergy between a violation-handling method and a hybrid of PSO and the min-max heuristic. Simulation results showed the great potential of the LAPSO algorithm to handle user requirements even in tight situations. In this paper, the performance of the algorithm is analysed further. Specifically, the impact of the min-max strategy on the performance of the algorithm is revealed. This is achieved by removing the violation handling from the operation of the algorithm. The results show that LAPSO based on only the min-max method still outperforms the benchmark, even though the LAPSO variant with violation handling performs significantly better.
Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses
Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T
2014-01-01
Due to the upcoming deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analysis tools, and efficient data sharing and retrieval presents significant challenges. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600
[Integration of the radiotherapy irradiation planning in the digital workflow].
Röhner, F; Schmucker, M; Henne, K; Momm, F; Bruggmoser, G; Grosu, A-L; Frommhold, H; Heinemann, F E
2013-02-01
At the Clinic of Radiotherapy at the University Hospital Freiburg, all relevant workflows are paperless. After implementing the Operating Schedule System (OSS) as a framework, all processes are being implemented into the departmental system MOSAIQ. Designing a digital workflow for radiotherapy irradiation planning is a major challenge: it requires interdisciplinary expertise, and therefore the interfaces between the professions also have to be interdisciplinary. For every single step of radiotherapy irradiation planning, distinct responsibilities have to be defined and documented. All aspects of digital storage, backup and long-term availability of data were considered and had already been realized during the OSS project. After an analysis of the complete workflow and the statutory requirements, a detailed project plan was designed. In an interdisciplinary workgroup, problems were discussed and a detailed flowchart was developed. The new functionalities were implemented in a testing environment by the Clinical and Administrative IT Department (CAI). After extensive tests, they were integrated into the new modular department system. The Clinic of Radiotherapy succeeded in realizing a completely digital workflow for radiotherapy irradiation planning. During the testing phase, our digital workflow was examined and was subsequently approved by the responsible authority.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhiat, A.; Elekta, Sunnyvale, CA; Kanis, A.P.
Purpose: To extend a clinical Record and Verify (R&V) system to enable a safe and fast workflow for Plan-of-the-Day (PotD) adaptive treatments based on patient-specific plan libraries. Methods: Plan libraries for PotD adaptive treatments contain several pre-treatment generated treatment plans for each patient. They may be generated for various patient anatomies or CTV-PTV margins. For each fraction, a Cone Beam CT scan is acquired to support the selection of the plan that best fits the patient's anatomy-of-the-day. To date, there are no commercial R&V systems that support PotD delivery strategies. Consequently, the clinical workflow requires many manual interventions. Moreover, multiple scheduled plans carry a high risk of excessive dose delivery. In this work we extended a commercial R&V system (MOSAIQ) to support PotD workflows using IQ-scripting. The PotD workflow was designed after extensive risk analysis of the manual procedure, and all identified risks were incorporated as logical checks. Results: All manual PotD activities were automated. The workflow first identifies whether the patient is scheduled for PotD, then performs safety checks, and continues to treatment plan selection only if no issues were found. The user selects the plan to deliver from a list of candidate plans. After plan selection, the workflow makes the treatment fields of the selected plan available for delivery by adding them to the treatment calendar. Finally, control is returned to the R&V system to commence treatment. Additional logic was added to incorporate off-line changes such as updating the plan library. After extensive testing, including treatment fraction interrupts and plan-library updates during the treatment course, the workflow is running successfully in a clinical pilot in which 35 patients have been treated since October 2014. Conclusion: We have extended a commercial R&V system for improved safety and efficiency in library-based adaptive strategies, enabling widespread implementation of those strategies. This work was in part funded by a research grant of Elekta AB, Stockholm, Sweden.
Development of a pharmacy resident rotation to expand decentralized clinical pharmacy services.
Hill, John D; Williams, Jonathan P; Barnes, Julie F; Greenlee, Katie M; Leonard, Mandy C
2017-07-15
The development of a pharmacy resident rotation to expand decentralized clinical pharmacy services is described. In an effort to align with the initiatives proposed within the ASHP Practice Advancement Initiative, the department of pharmacy at Cleveland Clinic, a 1,400-bed academic, tertiary acute care medical center in Cleveland, Ohio, established a goal to provide decentralized clinical pharmacy services for 100% of patient care units within the hospital. Patient care units that previously had no decentralized pharmacy services were evaluated to identify opportunities for expansion. Metrics analyzed included number of medication orders verified per hour, number of pharmacy dosing consultations, and number of patient discharge counseling sessions. A pilot study was conducted to assess the feasibility of this service and potential resident learning opportunities. A learning experience description was drafted, and feedback was solicited regarding the development of educational components utilized throughout the rotation. Pharmacists who were providing services to similar patient populations were identified to serve as preceptors. Staff pharmacists were deployed to previously uncovered patient care units, with pharmacy residents providing decentralized services on previously covered areas. A rotating preceptor schedule was developed based on geographic proximity and clinical expertise. An initial postimplementation assessment of this resident-driven service revealed that pharmacy residents provided a comparable level of pharmacy services to that of staff pharmacists. Feedback collected from nurses, physicians, and pharmacy staff also supported residents' ability to operate sufficiently in this role to optimize patient care. A learning experience developed for pharmacy residents in a large medical center enabled the expansion of decentralized clinical services without requiring additional pharmacist full-time equivalents. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
Potential of knowledge discovery using workflows implemented in the C3Grid
NASA Astrophysics Data System (ADS)
Engel, Thomas; Fink, Andreas; Ulbrich, Uwe; Schartner, Thomas; Dobler, Andreas; Fritzsch, Bernadette; Hiller, Wolfgang; Bräuer, Benny
2013-04-01
With the increasing number of climate simulations, reanalyses and observations, new infrastructures to search and analyse distributed data are necessary. In recent years, the Grid architecture became an important technology to fulfill these demands. For the German project "Collaborative Climate Community Data and Processing Grid" (C3Grid), computer scientists and meteorologists developed a system that offers its users a web interface to search and download climate data and to use implemented analysis tools (called workflows) to further investigate them. In this contribution, two workflows implemented in the C3Grid architecture are presented: the Cyclone Tracking (CT) and Stormtrack workflows. They serve as an example of how to perform numerous investigations of midlatitude winter storms on large amounts of analysis and climate model data without insight into the data source or program code, and with only a low-to-moderate understanding of the theoretical background. CT is based on the work of Murray and Simmonds (1991) and identifies and tracks local minima in the mean sea level pressure (MSLP) field of the selected dataset. Adjustable thresholds for the curvature of the isobars as well as the minimum lifetime of a cyclone allow weak subtropical heat-low systems to be distinguished from stronger midlatitude cyclones, e.g. in the Northern Atlantic. The user gets the resulting track data, including statistics about the track density, average central pressure, average central curvature, cyclogenesis and cyclolysis, as well as pre-built visualizations of these results. Stormtrack calculates the 2.5-6 day bandpass-filtered standard deviation of the geopotential height on a selected pressure level. Although this workflow needs much less computational effort than CT, it shows structures that are in good agreement with the track density of the CT workflow. It can thus be used to examine to what extent changes in the mid-level tropospheric storm track are reflected in changes in the density and intensity of surface cyclones. A specific feature of C3Grid is the flexible Workflow Scheduling Service (WSS), which also allows automated nightly analysis runs of CT, Stormtrack, etc. with different input parameter sets. The statistical results of these workflows can be accumulated afterwards by a scheduled final analysis step, thereby providing a tool for data-intensive analytics on the massive amounts of climate model data accessible through C3Grid. First tests with these automated analysis workflows show promising results for speeding up the investigation of high-volume modeling data. This example is relevant to the thorough analysis of future changes in storminess in Europe and is just one example of the potential of knowledge discovery using automated workflows implemented in the C3Grid architecture.
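The first step of the CT workflow, locating candidate cyclone centres as local minima of the MSLP field, can be illustrated in a few lines. The neighbourhood size and the crude depth criterion below are illustrative stand-ins for the curvature and lifetime thresholds of the full Murray and Simmonds scheme, and the pressure field is synthetic.

```python
# Sketch: candidate cyclone centres as local minima of a synthetic MSLP field.
import numpy as np
from scipy.ndimage import minimum_filter

rng = np.random.default_rng(42)
# mslp[lat, lon] in hPa (synthetic field for illustration)
mslp = 1013.0 + 8.0 * rng.standard_normal((90, 180))

local_min = mslp == minimum_filter(mslp, size=5)   # minimum within a 5x5 box
deep_enough = mslp < 1000.0                        # crude depth criterion
candidates = np.argwhere(local_min & deep_enough)

print(f"{len(candidates)} candidate cyclone centres")
# A tracking step would then link centres between consecutive time steps and
# discard tracks shorter than the configured minimum lifetime.
```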
Exploring Dental Providers’ Workflow in an Electronic Dental Record Environment
Schwei, Kelsey M; Cooper, Ryan; Mahnke, Andrea N.; Ye, Zhan
2016-01-01
Background: A workflow is defined as a predefined set of work steps and partial ordering of these steps in any environment to achieve the expected outcome. Few studies have investigated the workflow of providers in a dental office. It is important to understand the interaction of dental providers with the existing technologies at point of care to assess breakdown in the workflow which could contribute to better technology designs. Objective: The study objective was to assess electronic dental record (EDR) workflows using time and motion methodology in order to identify breakdowns and opportunities for process improvement. Methods: A time and motion methodology was used to study the human-computer interaction and workflow of dental providers with an EDR in four dental centers at a large healthcare organization. A data collection tool was developed to capture the workflow of dental providers and staff while they interacted with an EDR during initial, planned, and emergency patient visits, and at the front desk. Qualitative and quantitative analysis was conducted on the observational data. Results: Breakdowns in workflow were identified while posting charges, viewing radiographs, e-prescribing, and interacting with patient scheduler. EDR interaction time was significantly different between dentists and dental assistants (6:20 min vs. 10:57 min, p = 0.013) and between dentists and dental hygienists (6:20 min vs. 9:36 min, p = 0.003). Conclusions: On average, a dentist spent far less time than dental assistants and dental hygienists in data recording within the EDR. PMID:27437058
Workflow continuity--moving beyond business continuity in a multisite 24-7 healthcare organization.
Kolowitz, Brian J; Lauro, Gonzalo Romero; Barkey, Charles; Black, Harry; Light, Karen; Deible, Christopher
2012-12-01
As hospitals move towards providing in-house 24 × 7 services, there is an increasing need for information systems to be available around the clock. This study investigates one organization's need for a workflow continuity solution that provides around the clock availability for information systems that do not provide highly available services. The organization investigated is a large multifacility healthcare organization that consists of 20 hospitals and more than 30 imaging centers. A case analysis approach was used to investigate the organization's efforts. The results show an overall reduction in downtimes where radiologists could not continue their normal workflow on the integrated Picture Archiving and Communications System (PACS) solution by 94 % from 2008 to 2011. The impact of unplanned downtimes was reduced by 72 % while the impact of planned downtimes was reduced by 99.66 % over the same period. Additionally more than 98 h of radiologist impact due to a PACS upgrade in 2008 was entirely eliminated in 2011 utilizing the system created by the workflow continuity approach. Workflow continuity differs from high availability and business continuity in its design process and available services. Workflow continuity only ensures that critical workflows are available when the production system is unavailable due to scheduled or unscheduled downtimes. Workflow continuity works in conjunction with business continuity and highly available system designs. The results of this investigation revealed that this approach can add significant value to organizations because impact on users is minimized if not eliminated entirely.
Exploring Two Approaches for an End-to-End Scientific Analysis Workflow
NASA Astrophysics Data System (ADS)
Dodelson, Scott; Kent, Steve; Kowalkowski, Jim; Paterno, Marc; Sehrish, Saba
2015-12-01
The scientific discovery process can be advanced by the integration of independently-developed programs run on disparate computing facilities into coherent workflows usable by scientists who are not experts in computing. For such advancement, we need a system which scientists can use to formulate analysis workflows, to integrate new components to these workflows, and to execute different components on resources that are best suited to run those components. In addition, we need to monitor the status of the workflow as components get scheduled and executed, and to access the intermediate and final output for visual exploration and analysis. Finally, it is important for scientists to be able to share their workflows with collaborators. We have explored two approaches for such an analysis framework for the Large Synoptic Survey Telescope (LSST) Dark Energy Science Collaboration (DESC); the first one is based on the use and extension of Galaxy, a web-based portal for biomedical research, and the second one is based on a programming language, Python. In this paper, we present a brief description of the two approaches, describe the kinds of extensions to the Galaxy system we have found necessary in order to support the wide variety of scientific analysis in the cosmology community, and discuss how similar efforts might be of benefit to the HEP community.
Jones, Ryan T; Handsfield, Lydia; Read, Paul W; Wilson, David D; Van Ausdal, Ray; Schlesinger, David J; Siebers, Jeffrey V; Chen, Quan
2015-01-01
The clinical challenge of radiation therapy (RT) for painful bone metastases requires clinicians to consider both treatment efficacy and patient prognosis when selecting a radiation therapy regimen. The traditional RT workflow requires several weeks for common palliative RT schedules of 30 Gy in 10 fractions or 20 Gy in 5 fractions. At our institution, we have created a new RT workflow termed "STAT RAD" that allows clinicians to perform computed tomographic (CT) simulation, planning, and highly conformal single-fraction treatment delivery within 2 hours. In this study, we evaluate the safety and feasibility of the STAT RAD workflow. A failure mode and effects analysis (FMEA) was performed on the STAT RAD workflow, including development of a process map; identification of potential failure modes; description of the cause and effect, temporal occurrence, and team member involvement for each failure mode; and examination of existing safety controls. A risk probability number (RPN) was calculated for each failure mode. As necessary, workflow adjustments were then made to safeguard failure modes with significant RPN values. After workflow alterations, RPNs were recomputed. A total of 72 potential failure modes were identified in the pre-FMEA STAT RAD workflow, of which 22 met the RPN threshold for clinical significance. Workflow adjustments included the addition of a team member checklist, changing simulation from megavoltage CT to kilovoltage CT, alteration of patient-specific quality assurance testing, and allocating increased time for critical workflow steps. After these modifications, only one failure mode maintained RPN significance: patient motion after alignment or during treatment. Performing the FMEA for the STAT RAD workflow before clinical implementation has significantly strengthened the safety and feasibility of STAT RAD. The FMEA proved a valuable evaluation tool, identifying potential problem areas so that we could create a safer workflow. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
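The RPN bookkeeping in an FMEA is straightforward to illustrate: each failure mode is typically scored for severity, occurrence, and detectability, and the RPN is their product. The sketch below uses made-up scores and an illustrative threshold; it does not reproduce the scores or threshold of the STAT RAD analysis.

```python
# FMEA sketch: RPN = severity x occurrence x detectability (illustrative scores).
failure_modes = [
    {"step": "CT simulation",  "severity": 7, "occurrence": 3, "detectability": 4},
    {"step": "plan approval",  "severity": 9, "occurrence": 2, "detectability": 2},
    {"step": "patient motion", "severity": 8, "occurrence": 4, "detectability": 6},
]

THRESHOLD = 100  # modes at or above this RPN get additional safeguards (assumed)

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detectability"]

significant = [fm for fm in failure_modes if fm["rpn"] >= THRESHOLD]
for fm in sorted(significant, key=lambda f: f["rpn"], reverse=True):
    print(f'{fm["step"]}: RPN = {fm["rpn"]}')
```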
NASA Astrophysics Data System (ADS)
Suftin, I.; Read, J. S.; Walker, J.
2013-12-01
Scientists prefer not having to be tied down to a specific machine or operating system in order to analyze local and remote data sets or publish work. Increasingly, analysis has been migrating to decentralized web services and data sets, using web clients to provide the analysis interface. While simplifying workflow access, analysis, and publishing of data, the move does bring with it its own unique set of issues. Web clients used for analysis typically offer workflows geared towards a single user, with steps and results that are often difficult to recreate and share with others. Furthermore, workflow results often may not be easily used as input for further analysis. Older browsers further complicate things by having no way to maintain larger chunks of information, often offloading the job of storage to the back-end server or trying to squeeze it into a cookie. It has been difficult to provide a concept of "session storage" or "workflow sharing" without a complex orchestration of the back-end for storage depending on either a centralized file system or database. With the advent of HTML5, browsers gained the ability to store more information through the use of the Web Storage API (a browser-cookie holds a maximum of 4 kilobytes). Web Storage gives us the ability to store megabytes of arbitrary data in-browser either with an expiration date or just for a session. This allows scientists to create, update, persist and share their workflow without depending on the backend to store session information, providing the flexibility for new web-based workflows to emerge. In the DSASWeb portal ( http://cida.usgs.gov/DSASweb/ ), using these techniques, the representation of every step in the analyst's workflow is stored as plain-text serialized JSON, which we can generate as a text file and provide to the analyst as an upload. This file may then be shared with others and loaded back into the application, restoring the application to the state it was in when the session file was generated. A user may then view results produced during that session or go back and alter input parameters, creating new results and producing new, unique sessions which they can then again share. This technique not only provides independence for the user to manage their session as they like, but also allows much greater freedom for the application provider to scale out without having to worry about carrying over user information or maintaining it in a central location.
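The session-file idea described above is back-end agnostic: every step of the analyst's workflow is serialized as plain-text JSON, saved as a file the user can download and share, and later reloaded to restore the session. The sketch below shows the round trip in Python; the step structure and tool names are illustrative assumptions, not the DSASweb schema, and the browser-side Web Storage details are not shown.

```python
# Hedged sketch: serialize a workflow session as JSON, then restore and replay it.
import json

session = {
    "version": 1,
    "steps": [
        {"tool": "shoreline_extraction", "params": {"threshold": 0.4}},
        {"tool": "transect_generation", "params": {"spacing_m": 50}},
    ],
}

# "Download" the session: write it out as a shareable text file.
with open("session.json", "w") as fh:
    json.dump(session, fh, indent=2)

# "Upload" the session later (possibly by a collaborator) and replay it.
with open("session.json") as fh:
    restored = json.load(fh)

for step in restored["steps"]:
    print("re-running", step["tool"], "with", step["params"])
```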
Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messer, Bronson; Sewell, Christopher; Heitmann, Katrin
2015-01-01
Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.
Exploring Two Approaches for an End-to-End Scientific Analysis Workflow
Dodelson, Scott; Kent, Steve; Kowalkowski, Jim; ...
2015-12-23
The scientific discovery process can be advanced by the integration of independently-developed programs run on disparate computing facilities into coherent workflows usable by scientists who are not experts in computing. For such advancement, we need a system which scientists can use to formulate analysis workflows, to integrate new components to these workflows, and to execute different components on resources that are best suited to run those components. In addition, we need to monitor the status of the workflow as components get scheduled and executed, and to access the intermediate and final output for visual exploration and analysis. Finally, it is important for scientists to be able to share their workflows with collaborators. We have explored two approaches for such an analysis framework for the Large Synoptic Survey Telescope (LSST) Dark Energy Science Collaboration (DESC): the first is based on the use and extension of Galaxy, a web-based portal for biomedical research, and the second is based on a programming language, Python. In this paper, we present a brief description of the two approaches, describe the kinds of extensions to the Galaxy system we have found necessary in order to support the wide variety of scientific analysis in the cosmology community, and discuss how similar efforts might be of benefit to the HEP community.
Partnership For Edge Physics Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parashar, Manish
In this effort, we will extend our prior work as part of CPES (i.e., DART and DataSpaces) to support in-situ tight coupling between application codes that exploits data locality and core-level parallelism to maximize on-chip data exchange and reuse. This will be accomplished by mapping coupled simulations so that the data exchanges are more localized within the nodes. Coupled simulation workflows can more effectively utilize the resources available on emerging HEC platforms if they can be mapped and executed to exploit data locality as well as the communication patterns between application components. Scheduling and running such workflows requires an extended framework that should provide a unified hybrid abstraction to enable coordination and data sharing across computation tasks that run on heterogeneous multi-core-based systems, and a data-locality based dynamic task scheduling approach to increase on-chip or intra-node data exchanges and in-situ execution. This effort will extend our prior work as part of CPES (i.e., DART and DataSpaces), which provided a simple virtual shared-space abstraction hosted at the staging nodes, to support application coordination, data sharing and active data processing services. Moreover, it will transparently manage the low-level operations associated with the inter-application data exchange, such as data redistributions, and will enable running coupled simulation workflows on multi-core computing platforms.
Analyzing data flows of WLCG jobs at batch job level
NASA Astrophysics Data System (ADS)
Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas
2015-05-01
With the introduction of federated data access to the workflows of WLCG, it is becoming increasingly important for data centers to understand specific data flows regarding storage element accesses, firewall configurations, as well as the scheduling of batch jobs themselves. As existing batch system monitoring and related system monitoring tools do not support measurements at batch job level, a new tool has been developed and put into operation at the GridKa Tier 1 center for monitoring continuous data streams and characteristics of WLCG jobs and pilots. Long-term measurements and data collection are in progress. These measurements have already proven useful for analyzing misbehaviors and various issues. We therefore aim for an automated, real-time approach to anomaly detection. As a requirement, prototypes for standard workflows have to be examined. Based on several months of measurements, different features of HEP jobs are evaluated regarding their effectiveness for data-mining approaches to identify these common workflows. The paper introduces the actual measurement approach and statistics as well as the general concept and first results on classifying different HEP job workflows derived from the measurements at GridKa.
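One way to picture the classification idea outlined above is to represent each batch job by a few monitored features and cluster jobs to recover common workflow types. The sketch below does this with synthetic feature vectors; the feature names, the two synthetic job populations, and the cluster count are assumptions, not the GridKa measurements or the paper's method.

```python
# Hedged sketch: cluster batch jobs by monitored features to find common workflows.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# One row per job: [cpu_time_s, bytes_read, bytes_written, network_MB]
jobs = np.vstack([
    rng.normal([3600, 2e9, 1e8, 500], [300, 2e8, 1e7, 50], size=(100, 4)),  # reco-like jobs
    rng.normal([600, 1e7, 5e8, 20], [60, 1e6, 5e7, 5], size=(100, 4)),      # merge-like jobs
])

features = StandardScaler().fit_transform(jobs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("jobs per cluster:", np.bincount(labels))
```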
New data model with better functionality for VLab
NASA Astrophysics Data System (ADS)
da Silveira, P. R.; Wentzcovitch, R. M.; Karki, B. B.
2009-12-01
The VLab infrastructure and architecture were further developed to allow for several new features. First, workflows for first-principles calculations of thermodynamic properties and static elasticity, programmed in Java as Web Services, can now be executed by multiple users. Second, jobs generated by these workflows can now be executed in batch on multiple servers. A simple internal scheduler was implemented to handle hundreds of execution packages generated by multiple users and to avoid overloading the servers. Third, a new data model was implemented to guarantee the integrity of a project (workflow execution) in case of failure, which can occur in an execution package or in a workflow phase. By recording all executed steps of a project, its execution can be resumed after dynamic alteration of parameters through the VLab Portal. Fourth, batch jobs can also be monitored through the portal. Better and faster interaction with servers is now achieved using Ajax technology. Finally, plots are now created on the VLab server using Gnuplot 4.2.2. Research supported by NSF grant ATM 0428774 (VLab). VLab is hosted by the Minnesota Supercomputing Institute.
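The kind of internal scheduler described above can be sketched as a queue of execution packages dispatched to servers without exceeding a per-server slot limit. All names, limits, and the dispatch policy below are illustrative; the VLab implementation details are not shown.

```python
# Minimal sketch of an internal batch scheduler with per-server slot limits.
from collections import deque

SLOTS = {"server-a": 4, "server-b": 4}      # concurrent packages per server (assumed)
running = {s: 0 for s in SLOTS}
queue = deque(f"package-{i:03d}" for i in range(10))

def dispatch():
    """Start queued packages wherever a slot is free."""
    started = []
    for server, limit in SLOTS.items():
        while queue and running[server] < limit:
            pkg = queue.popleft()
            running[server] += 1
            started.append((pkg, server))
    return started

def finish(server):
    """Called when a package completes; frees a slot."""
    running[server] -= 1

print(dispatch())      # fills all free slots
finish("server-a")
print(dispatch())      # one more package starts on server-a
```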
NASA Astrophysics Data System (ADS)
Delventhal, D.; Schultz, D.; Diaz Velez, J. C.
2017-10-01
IceProd is a data processing and management framework developed by the IceCube Neutrino Observatory for processing of Monte Carlo simulations, detector data, and data driven analysis. It runs as a separate layer on top of grid and batch systems. This is accomplished by a set of daemons which process job workflow, maintaining configuration and status information on the job before, during, and after processing. IceProd can also manage complex workflow DAGs across distributed computing grids in order to optimize usage of resources. IceProd has recently been rewritten to increase its scaling capabilities, handle user analysis workflows together with simulation production, and facilitate the integration with 3rd party scheduling tools. IceProd 2, the second generation of IceProd, has been running in production for several months now. We share our experience setting up the system and things we’ve learned along the way.
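The DAG-ordered execution that such a framework manages across grid and batch systems boils down to dispatching tasks whose dependencies are satisfied, in topological order. The sketch below is illustrative only: the task graph and the `run` placeholder are made up and do not reflect IceProd's internals.

```python
# Sketch: submit workflow tasks in dependency (topological) order.
from graphlib import TopologicalSorter   # Python 3.9+

# task -> set of tasks it depends on (illustrative simulation-style chain)
dag = {
    "generate": set(),
    "propagate": {"generate"},
    "detector_sim": {"propagate"},
    "filter": {"detector_sim"},
    "merge": {"filter"},
}

def run(task):
    print("submitting", task)

ts = TopologicalSorter(dag)
ts.prepare()
while ts.is_active():
    ready = list(ts.get_ready())   # tasks whose dependencies are satisfied
    for task in ready:             # these could be submitted in parallel
        run(task)
        ts.done(task)
```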
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fries, Samuel B.; French, Shelane
2014-10-01
These Drupal modules extend the functionality of Drupal by including specific styles for dates and tabs, publishing options for scheduled and immediate publication of content nodes, field visibility in content forms, keyword block filters (taxonomy based), adding content nodes to a specified queue for display in views, and status display of workflow settings.
Using lean methodology to improve productivity in a hospital oncology pharmacy.
Sullivan, Peter; Soefje, Scott; Reinhart, David; McGeary, Catherine; Cabie, Eric D
2014-09-01
Quality improvements achieved by a hospital pharmacy through the use of lean methodology to guide i.v. compounding workflow changes are described. The outpatient oncology pharmacy of Yale-New Haven Hospital conducted a quality-improvement initiative to identify and implement workflow changes to support a major expansion of chemotherapy services. Applying concepts of lean methodology (i.e., elimination of non-value-added steps and waste in the production process), the pharmacy team performed a failure mode and effects analysis, workflow mapping, and impact analysis; staff pharmacists and pharmacy technicians identified 38 opportunities to decrease waste and increase efficiency. Three workflow processes (order verification, compounding, and delivery) accounted for 24 of 38 recommendations and were targeted for lean process improvements. The workflow was decreased to 14 steps, eliminating 6 non-value-added steps, and pharmacy staff resources and schedules were realigned with the streamlined workflow. The time required for pharmacist verification of patient-specific oncology orders was decreased by 33%; the time required for product verification was decreased by 52%. The average medication delivery time was decreased by 47%. The results of baseline and postimplementation time trials indicated a decrease in overall turnaround time to about 70 minutes, compared with a baseline time of about 90 minutes. The use of lean methodology to identify non-value-added steps in oncology order processing and the implementation of staff-recommended workflow changes resulted in an overall reduction in the turnaround time per dose. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Technology-driven dietary assessment: a software developer’s perspective
Buday, Richard; Tapia, Ramsey; Maze, Gary R.
2015-01-01
Dietary researchers need new software to improve nutrition data collection and analysis, but creating information technology is difficult. Software development projects may be unsuccessful due to inadequate understanding of needs, management problems, technology barriers or legal hurdles. Cost overruns and schedule delays are common. Barriers facing scientific researchers developing software include workflow, cost, schedule, and team issues. Different methods of software development and the role that intellectual property rights play are discussed. A dietary researcher must carefully consider multiple issues to maximize the likelihood of success when creating new software. PMID:22591224
Supporting Real-Time Operations and Execution through Timeline and Scheduling Aids
NASA Technical Reports Server (NTRS)
Marquez, Jessica J.; Pyrzak, Guy; Hashemi, Sam; Ahmed, Samia; McMillin, Kevin Edward; Medwid, Joseph Daniel; Chen, Diana; Hurtle, Esten
2013-01-01
Since 2003, the NASA Ames Research Center has been actively involved in researching and advancing the state of the art of planning and scheduling tools for NASA mission operations. Our planning toolkit SPIFe (Scheduling and Planning Interface for Exploration) has supported a variety of missions and field tests, scheduling activities for Mars rovers as well as crew on board the International Space Station and at NASA earth analogs. The scheduled plan is the integration of all the activities for the day(s). In turn, the agents (rovers, landers, spaceships, crew) execute from this schedule while the mission support team members (e.g., flight controllers) follow the schedule during execution. Over the last couple of years, our team has begun to research and validate methods that will better support users during real-time operations and execution of scheduled activities. Our team utilizes human-computer interaction principles to research user needs, identify workflow processes, prototype software aids, and user-test them. This paper discusses three specific prototypes developed and user tested to support real-time operations: Score Mobile, Playbook, and Mobile Assistant for Task Execution (MATE).
Decentralized Grid Scheduling with Evolutionary Fuzzy Systems
NASA Astrophysics Data System (ADS)
Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander
In this paper, we address the problem of finding workload exchange policies for decentralized Computational Grids using an Evolutionary Fuzzy System. To this end, we establish a non-invasive collaboration model on the Grid layer which requires minimal information about the participating High Performance and High Throughput Computing (HPC/HTC) centers and which leaves the local resource managers completely untouched. In this environment of fully autonomous sites, independent users are assumed to submit their jobs to the Grid middleware layer of their local site, which in turn decides on the delegation and execution either on the local system or on remote sites in a situation-dependent, adaptive way. We find for different scenarios that the exchange policies show good performance characteristics not only with respect to traditional metrics such as average weighted response time and utilization, but also in terms of robustness and stability in changing environments.
Exploiting multicore compute resources in the CMS experiment
NASA Astrophysics Data System (ADS)
Ramírez, J. E.; Pérez-Calero Yzquierdo, A.; Hernández, J. M.; CMS Collaboration
2016-10-01
CMS has developed a strategy to efficiently exploit the multicore architecture of the compute resources accessible to the experiment. A coherent use of the multiple cores available in a compute node yields substantial gains in terms of resource utilization. The implemented approach makes use of the multithreading support of the event processing framework and the multicore scheduling capabilities of the resource provisioning system. Multicore slots are acquired and provisioned by means of multicore pilot agents which internally schedule and execute single and multicore payloads. Multicore scheduling and multithreaded processing are currently used in production for online event selection and prompt data reconstruction. More workflows are being adapted to run in multicore mode. This paper presents a review of the experience gained in the deployment and operation of the multicore scheduling and processing system, the current status and future plans.
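The pilot-level scheduling described above, fitting single- and multi-core payloads into one multicore slot, can be illustrated with a simple packing sketch. Core counts, payload names, and the greedy first-fit policy are made-up assumptions and do not reflect the actual CMS pilot implementation.

```python
# Illustrative sketch: greedy packing of payloads into a multicore pilot slot.
SLOT_CORES = 8

def schedule(payloads, slot_cores=SLOT_CORES):
    """First-fit packing of (name, cores) payloads into the slot."""
    free = slot_cores
    started, waiting = [], []
    for name, cores in sorted(payloads, key=lambda p: -p[1]):  # big payloads first
        if cores <= free:
            free -= cores
            started.append(name)
        else:
            waiting.append(name)
    return started, waiting, free

payloads = [("reco-multithread", 4), ("analysis-1", 1), ("analysis-2", 1),
            ("merge", 1), ("reco-2", 4)]
started, waiting, idle = schedule(payloads)
print("started:", started, "| waiting:", waiting, "| idle cores:", idle)
```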
Recht, Michael; Macari, Michael; Lawson, Kirk; Mulholland, Tom; Chen, David; Kim, Danny; Babb, James
2013-03-01
The aim of this study was to evaluate all aspects of workflow in a large academic MRI department to determine whether process improvement (PI) efforts could improve key performance indicators (KPIs). KPI metrics in the investigators' MR imaging department include daily inpatient backlogs, on-time performance for outpatient examinations, examination volumes, appointment backlogs for pediatric anesthesia cases, and scan duration relative to time allotted for an examination. Over a 3-week period in April 2011, key members of the MR imaging department (including technologists, nurses, schedulers, physicians, and administrators) tracked all aspects of patient flow through the department, from scheduling to examination interpretation. Data were analyzed by the group to determine where PI could improve KPIs. Changes to MRI workflow were subsequently implemented, and KPIs were compared before (January 1, 2011, to April 30, 2011) and after (August 1, 2011, to December 31, 2011) using Mann-Whitney and Fisher's exact tests. The data analysis done during this PI led to multiple changes in the daily workflow of the MR department. In addition, a new sense of teamwork and empowerment was established within the MR staff. All of the measured KPIs showed statistically significant changes after the reengineering project. Intradepartmental PI efforts can significantly affect KPI metrics within an MR imaging department, making the process more patient centered. In addition, the process allowed significant growth without the need for additional equipment or personnel. Copyright © 2013 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Pathmanathan, Angela U; van As, Nicholas J; Kerkmeijer, Linda G W; Christodouleas, John; Lawton, Colleen A F; Vesprini, Danny; van der Heide, Uulke A; Frank, Steven J; Nill, Simeon; Oelfke, Uwe; van Herk, Marcel; Li, X Allen; Mittauer, Kathryn; Ritter, Mark; Choudhury, Ananya; Tree, Alison C
2018-02-01
Radiation therapy to the prostate involves increasingly sophisticated delivery techniques and changing fractionation schedules. With a low estimated α/β ratio, a larger dose per fraction would be beneficial, with moderate fractionation schedules rapidly becoming a standard of care. The integration of a magnetic resonance imaging (MRI) scanner and linear accelerator allows for accurate soft tissue tracking with the capacity to replan for the anatomy of the day. Extreme hypofractionation schedules become a possibility using the potentially automated steps of autosegmentation, MRI-only workflow, and real-time adaptive planning. The present report reviews the steps involved in hypofractionated adaptive MRI-guided prostate radiation therapy and addresses the challenges for implementation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ferreira da Silva, R.; Filgueira, R.; Deelman, E.; Atkinson, M.
2016-12-01
We present Asterism, an open source data-intensive framework, which combines the Pegasus and dispel4py workflow systems. Asterism aims to simplify the effort required to develop data-intensive applications that run across multiple heterogeneous resources, without users having to: re-formulate their methods according to different enactment systems; manage the data distribution across systems; parallelize their methods; co-place and schedule their methods with computing resources; and store and transfer large/small volumes of data. Asterism's key element is to leverage the strengths of each workflow system: dispel4py allows developing scientific applications locally and then automatically parallelize and scale them on a wide range of HPC infrastructures with no changes to the application's code; Pegasus orchestrates the distributed execution of applications while providing portability, automated data management, recovery, debugging, and monitoring, without users needing to worry about the particulars of the target execution systems. Asterism leverages the level of abstractions provided by each workflow system to describe hybrid workflows where no information about the underlying infrastructure is required beforehand. The feasibility of Asterism has been evaluated using the seismic ambient noise cross-correlation application, a common data-intensive analysis pattern used by many seismologists. The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The Asterism workflow is implemented as a Pegasus workflow composed of two tasks (Phase1 and Phase2), where each phase represents a dispel4py workflow. Pegasus tasks describe the in/output data at a logical level, the data dependency between tasks, and the e-Infrastructures and the execution engine to run each dispel4py workflow. We have instantiated the workflow using data from 1000 stations from the IRIS services, and run it across two heterogeneous resources described as Docker containers: MPI (Container2) and Storm (Container3) clusters (Figure 1). Each dispel4py workflow is mapped to a particular execution engine, and data transfers between resources are automatically handled by Pegasus. Asterism is freely available online at http://github.com/dispel4py/pegasus_dispel4py.
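A hedged sketch of the hybrid workflow structure described above: instead of the real Pegasus or dispel4py APIs, a plain Python dictionary records the two tasks, their logical inputs/outputs, and the execution engine assumed for each sub-workflow, and a small helper returns a dependency-respecting execution order. All names are invented.

```python
# Hypothetical, engine-agnostic description of the two-task hybrid workflow;
# in Asterism each task would be a dispel4py workflow submitted by Pegasus.

hybrid_workflow = {
    "phase1_preprocess": {
        "inputs": ["raw_traces.tar"],        # logical file names only
        "outputs": ["preprocessed.tar"],
        "engine": "mpi_cluster",             # e.g. the MPI container
        "depends_on": [],
    },
    "phase2_crosscorrelate": {
        "inputs": ["preprocessed.tar"],
        "outputs": ["correlations.h5"],
        "engine": "storm_cluster",           # e.g. the Storm container
        "depends_on": ["phase1_preprocess"],
    },
}

def topological_order(workflow):
    """Return tasks in an order that respects the declared dependencies."""
    done, order = set(), []
    while len(order) < len(workflow):
        for task, spec in workflow.items():
            if task not in done and all(d in done for d in spec["depends_on"]):
                order.append(task)
                done.add(task)
    return order

print(topological_order(hybrid_workflow))
# ['phase1_preprocess', 'phase2_crosscorrelate']
```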
Muirhead, David; Aoun, Patricia; Powell, Michael; Juncker, Flemming; Mollerup, Jens
2010-08-01
The need for higher efficiency, maximum quality, and faster turnaround time is a continuous focus for anatomic pathology laboratories and drives changes in work scheduling, instrumentation, and management control systems. To determine the costs of generating routine, special, and immunohistochemical microscopic slides in a large, academic anatomic pathology laboratory using a top-down approach. The Pathology Economic Model Tool was used to analyze workflow processes at The Nebraska Medical Center's anatomic pathology laboratory. Data from the analysis were used to generate complete cost estimates, which included not only materials, consumables, and instrumentation but also specific labor and overhead components for each of the laboratory's subareas. The cost data generated by the Pathology Economic Model Tool were compared with the cost estimates generated using relative value units. Despite the use of automated systems for different processes, the workflow in the laboratory was found to be relatively labor intensive. The effect of labor and overhead on per-slide costs was significantly underestimated by traditional relative-value unit calculations when compared with the Pathology Economic Model Tool. Specific workflow defects with significant contributions to the cost per slide were identified. The cost of providing routine, special, and immunohistochemical slides may be significantly underestimated by traditional methods that rely on relative value units. Furthermore, a comprehensive analysis may identify specific workflow processes requiring improvement.
Communication network for decentralized remote tele-science during the Spacelab mission IML-2
NASA Technical Reports Server (NTRS)
Christ, Uwe; Schulz, Klaus-Juergen; Incollingo, Marco
1994-01-01
The ESA communication network for decentralized remote telescience during the Spacelab mission IML-2, called Interconnection Ground Subnetwork (IGS), provided data, voice conferencing, video distribution/conferencing and high rate data services to 5 remote user centers in Europe. The combination of services allowed the experimenters to interact with their experiments as they would normally do from the Payload Operations Control Center (POCC) at MSFC. In addition, to enhance their science results, they were able to make use of reference facilities and computing resources in their home laboratory, which typically are not available in the POCC. Characteristics of the IML-2 communications implementation were the adaptation to the different user needs based on modular service capabilities of IGS and the cost optimization for the connectivity. This was achieved by using a combination of traditional leased lines, satellite based VSAT connectivity and N-ISDN according to the simulation and mission schedule for each remote site. The central management system of IGS allows minimization of staffing and the involvement of communications personnel at the remote sites. The successful operation of IGS for IML-2 as a precursor network for the Columbus Orbital Facility (COF) has proven the concept for communications to support the operation of the COF decentralized scenario.
1980-12-01
Figure 3.11-3 shows this process: methane fermentation (anaerobic digestion), a three-stage process. The plant will provide electricity for about 45,000 people and is scheduled for completion in 1982. Figure 3.12-4 illustrates a two-stage (high-pressure and …) flashed-steam power process. The brine, after passing through the heat exchanger, is reinjected into the ground.
NASA Astrophysics Data System (ADS)
Walker, J. I.; Blodgett, D. L.; Suftin, I.; Kunicki, T.
2013-12-01
High-resolution data for use in environmental modeling is increasingly becoming available at broad spatial and temporal scales. Downscaled climate projections, remotely sensed landscape parameters, and land-use/land-cover projections are examples of datasets that may exceed an individual investigation's data management and analysis capacity. To allow projects on limited budgets to work with many of these data sets, the burden of working with them must be reduced. The approach being pursued at the U.S. Geological Survey Center for Integrated Data Analytics uses standard self-describing web services that allow machine to machine data access and manipulation. These techniques have been implemented and deployed in production level server-based Web Processing Services that can be accessed from a web application or scripted workflow. Data publication techniques that allow machine-interpretation of large collections of data have also been implemented for numerous datasets at U.S. Geological Survey data centers as well as partner agencies and academic institutions. Discovery of data services is accomplished using a method in which a machine-generated metadata record holds content--derived from the data's source web service--that is intended for human interpretation as well as machine interpretation. A distributed search application has been developed that demonstrates the utility of a decentralized search of data-owner metadata catalogs from multiple agencies. The integrated but decentralized system of metadata, data, and server-based processing capabilities will be presented. The design, utility, and value of these solutions will be illustrated with applied science examples and success stories. Datasets such as the EPA's Integrated Climate and Land Use Scenarios, USGS/NASA MODIS derived land cover attributes, and downscaled climate projections from several sources are examples of data this system includes. These and other datasets, have been published as standard, self-describing, web services that provide the ability to inspect and subset the data. This presentation will demonstrate this file-to-web service concept and how it can be used from script-based workflows or web applications.
A Decentralized Scheduling Policy for a Dynamically Reconfigurable Production System
NASA Astrophysics Data System (ADS)
Giordani, Stefano; Lujak, Marin; Martinelli, Francesco
In this paper, the static layout of a traditional multi-machine factory producing a set of distinct goods is integrated with a set of mobile production units (robots). The robots dynamically change their work positions to increase the production rate of the different product types in response to fluctuations in demand and production costs over a given time horizon. Assuming the planning horizon is subdivided into a finite number of time periods, this flexible layout gives rise to a complex scheduling problem: for each period, the positions of the robots, i.e., their assignment to tasks, must be determined so as to minimize production costs given the product demand rates.
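For a single time period, placing the mobile units can be viewed as a classic assignment problem. The sketch below solves one such period with SciPy's Hungarian-algorithm routine on an invented cost matrix; it illustrates only the per-period subproblem, not the authors' multi-period scheduling method.

```python
# One period of a robot-to-position assignment, minimising an assumed
# production-cost matrix (rows: robots, columns: work positions).
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4.0, 2.0, 8.0],    # hypothetical per-period costs
                 [6.0, 4.0, 3.0],
                 [5.0, 7.0, 6.0]])

robots, positions = linear_sum_assignment(cost)   # Hungarian algorithm
for r, p in zip(robots, positions):
    print(f"robot {r} -> position {p} (cost {cost[r, p]})")
print("total cost:", cost[robots, positions].sum())
```

In the paper's setting this subproblem would be solved jointly over all periods, with repositioning costs linking consecutive periods.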
Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers
NASA Astrophysics Data System (ADS)
Dreher, Patrick; Scullin, William; Vouk, Mladen
2015-09-01
Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.
Strategies of organization and service for the critical-care laboratory.
Fleisher, M; Schwartz, M K
1990-08-01
Critical-care medicine requires rapidity of treatment decisions and clinical management. To meet the objectives of critical-care medicine, the critical-care laboratory must consider four major aspects of laboratory organization in addition to analytical responsibilities: specimen collection and delivery, training of technologists, selection of reliable instrumentation, and efficient data dissemination. One must also consider the advantages and disadvantages of centralization vs decentralization, the influence of such a laboratory on patient care and personnel needs, and the space required for optimal operation. Centralization may lead to workflow interruption and increased turnaround time (TAT); decentralization requires redundancy of instrumentation and staff but may shorten TAT. Minimal TAT is the hallmark of efficient laboratory service. We surveyed 55 laboratories in 33 hospitals and found that virtually all hospitals with 200 or more beds had a critical-care laboratory operating as a satellite of the main laboratory. We present data on actual TAT, although these were available in only eight of the 15 routine laboratories that provided emergency service and in eight of the 40 critical-care laboratories. In meeting the challenges of an increasing workload, a reduced clinical laboratory work force, and the need to reduce TAT, changes in traditional laboratory practice are mandatory. An increased reliance on whole-blood analysis, for example, should eliminate delays associated with sample preparation, reduce the potential hazards associated with centrifugation, and eliminate excess specimen handling.
From chart tracking to workflow management.
Srinivasan, P.; Vignes, G.; Venable, C.; Hazelwood, A.; Cade, T.
1994-01-01
The current interest in system-wide integration appears to be based on the assumption that an organization, by digitizing information and accepting a common standard for the exchange of such information, will improve the accessibility of this information and automatically experience benefits resulting from its more productive use. We do not dispute this reasoning, but assert that an organization's capacity for effective change is proportional to the understanding of the current structure among its personnel. Our workflow manager is based on the use of a Parameterized Petri Net (PPN) model which can be configured to represent an arbitrarily detailed picture of an organization. The PPN model can be animated to observe the model organization in action, and the results of the animation analyzed. This simulation is a dynamic ongoing process which changes with the system and allows members of the organization to pose "what if" questions as a means of exploring opportunities for change. We present the "workflow management system" as the natural successor to the tracking program, incorporating modeling, scheduling, reactive planning, performance evaluation, and simulation. This workflow management system is more than adequate for meeting the needs of a paper chart tracking system, and, as the patient record is computerized, will serve as a planning and evaluation tool in converting the paper-based health information system into a computer-based system. PMID:7950051
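To make the token-based behaviour of a Petri net concrete, here is a minimal "token game" for a single chart-delivery transition. The places, transition, and marking are invented for illustration and say nothing about the actual parameterisation of the authors' PPN model.

```python
# Minimal Petri-net "token game": a transition fires when every input place
# holds a token, consuming them and producing tokens on its output places.

marking = {"chart_requested": 1, "clerk_free": 1, "chart_at_clinic": 0}

transitions = {
    "deliver_chart": {"inputs": ["chart_requested", "clerk_free"],
                      "outputs": ["chart_at_clinic", "clerk_free"]},
}

def fire(name):
    t = transitions[name]
    if all(marking[p] > 0 for p in t["inputs"]):
        for p in t["inputs"]:
            marking[p] -= 1
        for p in t["outputs"]:
            marking[p] += 1
        return True
    return False

print(fire("deliver_chart"), marking)
# True {'chart_requested': 0, 'clerk_free': 1, 'chart_at_clinic': 1}
```

Animating such a net, as the abstract describes, amounts to repeatedly firing enabled transitions and observing how the marking evolves.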
Improved compliance by BPM-driven workflow automation.
Holzmüller-Laue, Silke; Göde, Bernd; Fleischer, Heidi; Thurow, Kerstin
2014-12-01
Using methods and technologies of business process management (BPM) for laboratory automation has important benefits (i.e., the agility of high-level automation processes, rapid interdisciplinary prototyping and implementation of laboratory tasks and procedures, and efficient real-time process documentation). A principal goal of model-driven development is improved transparency of processes and the alignment of process diagrams with technical code. First experiences of using the business process model and notation (BPMN) show that easy-to-read graphical process models can achieve and provide standardization of laboratory workflows. Model-based development allows processes to be changed quickly and adapted easily to changing requirements. The process models are able to host work procedures and their scheduling in compliance with predefined guidelines and policies. Finally, the process-controlled documentation of complex workflow results addresses modern laboratory needs for quality assurance. BPMN 2.0, as an automation language that can control every kind of activity or subprocess, is directed at complete workflows in end-to-end relationships. BPMN is applicable as a system-independent and cross-disciplinary graphical language to document all methods in laboratories (i.e., screening procedures or analytical processes). That means that, with the BPM standard, a communication method for sharing laboratory process knowledge is also available. © 2014 Society for Laboratory Automation and Screening.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheu, R; Ghafar, R; Powers, A
Purpose: Demonstrate the effectiveness of in-house software in ensuring EMR workflow efficiency and safety. Methods: A web-based dashboard system (WBDS) was developed to monitor clinical workflow in real time using web technology (WAMP) through ODBC (Open Database Connectivity). Within Mosaiq (Elekta Inc), operational workflow is driven and indicated by Quality Check Lists (QCLs), which are triggered by the automation software IQ Scripts (Elekta Inc); QCLs rely on user completion to propagate. The WBDS retrieves data directly from the Mosaiq SQL database and tracks clinical events in real time. For example, the necessity of a physics initial chart check can be determined by screening all patients on treatment who have received their first fraction and who have not yet had their first chart check. Monitoring similar "real" events with our in-house software creates a safety net, as its propagation does not rely on individual users' input. Results: The WBDS monitors the following: patient care workflow (initial consult to end of treatment), daily treatment consistency (scheduling, technique, charges), physics chart checks (initial, EOT, weekly), new starts, missing treatments (>3 warning/>5 fractions, action required), and machine overrides. The WBDS can be launched from any web browser, which allows the end user complete transparency and timely information. Since the creation of the dashboards, workflow interruptions due to accidental deletion or completion of QCLs have been eliminated. Additionally, all physics chart checks were completed in a timely manner. Prompt notifications of treatment record inconsistency and machine overrides have decreased the amount of time between occurrence and execution of corrective action. Conclusion: Our clinical workflow relies primarily on QCLs and IQ Scripts; however, this functionality is not a panacea for safety and efficiency. The WBDS creates a more thorough system of checks to provide a safer and nearly error-free working environment.
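The screening logic in the example above (patients who have received a first fraction but have no initial physics chart check) reduces to one SQL query issued over ODBC. The sketch below is hypothetical throughout: the DSN, table, and column names are invented, and only the pyodbc access pattern mirrors the kind of direct database query the abstract describes.

```python
# Hypothetical dashboard check: list patients treated at least once but
# lacking an initial physics chart check. Table and column names are invented;
# only the ODBC access pattern mirrors the approach in the abstract.
import pyodbc

conn = pyodbc.connect("DSN=oncology_emr;UID=dashboard;PWD=***")
cursor = conn.cursor()

cursor.execute("""
    SELECT p.patient_id, p.last_name
    FROM   patients p
    JOIN   treatments t ON t.patient_id = p.patient_id
    WHERE  t.fractions_delivered >= 1
      AND  NOT EXISTS (SELECT 1 FROM chart_checks c
                       WHERE c.patient_id = p.patient_id
                         AND c.check_type = 'initial_physics')
""")

for patient_id, last_name in cursor.fetchall():
    print(f"Initial chart check pending: {patient_id} {last_name}")
```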
Analysis of electric power industry restructuring
NASA Astrophysics Data System (ADS)
Al-Agtash, Salem Yahya
1998-10-01
This thesis evaluates alternative structures of the electric power industry in a competitive environment. One structure is based on the principle of creating a mandatory power pool to foster competition and manage system economics. This structure is PoolCo (pool coordination). A second structure is based on the principle of allowing independent multilateral trading and decentralized market coordination. This structure is DecCo (decentralized coordination). The criteria I use to evaluate these two structures are economic efficiency, system reliability, and freedom of choice. The economic efficiency evaluation considers the strategic behavior of individual generators as well as behavioral variations among different classes of consumers. A supply-function equilibrium model is characterized for deriving bidding strategies of competing generators under PoolCo. It is shown that asymmetric equilibria can exist within the capacities of generators. An augmented Lagrangian approach is introduced to solve iteratively for globally optimal operations schedules. Under DecCo, the process involves solving iteratively for system operations schedules. The schedules reflect generators' strategic behavior and brokers' interactions for arranging profitable trades, allocating losses, and managing network congestion. In determining PoolCo and DecCo operations schedules, the overall costs of power generation (start-up and shut-down costs and the availability of hydroelectric power) as well as the losses and costs of the transmission network are considered. For the system reliability evaluation, I examine the effect of PoolCo and DecCo operating conditions on system security. Random component-failure perturbations are generated to simulate actual system behavior, using Monte Carlo simulation. The freedom-of-choice evaluation accounts for each scheme's beneficial opportunities and its capability to respond to consumers' expressed preferences. An IEEE 24-bus test system is used to illustrate the concepts developed for the economic efficiency evaluation. The system was tested over a two-year period. The results indicate efficiency losses of 2.6684 and 2.7269 percent on average for PoolCo and DecCo, respectively. These values, however, do not represent forecasts of efficiency losses for PoolCo- and DecCo-based competitive industries. Rather, they illustrate the efficiency losses for the given IEEE test system under the modeling assumptions underlying the framework development.
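The reliability part of the evaluation relies on Monte Carlo sampling of random component failures. As a toy illustration, not the thesis model, the sketch below estimates a loss-of-load probability by sampling independent generator outages against a fixed demand; the capacities, outage rates, and demand figure are invented.

```python
# Toy Monte Carlo adequacy check: sample independent generator outages and
# count how often surviving capacity falls below demand. All numbers invented.
import random

capacities   = [400, 300, 300, 200]    # MW per generator
outage_rates = [0.05, 0.04, 0.04, 0.08]
demand       = 900                     # MW

def loss_of_load_probability(trials=100_000, seed=1):
    random.seed(seed)
    shortfalls = 0
    for _ in range(trials):
        available = sum(c for c, q in zip(capacities, outage_rates)
                        if random.random() > q)   # unit survives outage draw
        if available < demand:
            shortfalls += 1
    return shortfalls / trials

print(f"estimated LOLP: {loss_of_load_probability():.4f}")
```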
Collaborative Resource Allocation
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Wax, Allan; Lam, Raymond; Baldwin, John; Borden, Chester
2007-01-01
Collaborative Resource Allocation Networking Environment (CRANE) Version 0.5 is a prototype created to prove the newest concept of using a distributed environment to schedule Deep Space Network (DSN) antenna times in a collaborative fashion. This program is for all space-flight and terrestrial science project users and DSN schedulers to perform scheduling activities and conflict resolution, both synchronously and asynchronously. Project schedulers can, for the first time, participate directly in scheduling their tracking times into the official DSN schedule, and negotiate directly with other projects in an integrated scheduling system. A master schedule covers long-range, mid-range, near-real-time, and real-time scheduling time frames all in one, rather than the current method of separate functions that are supported by different processes and tools. CRANE also provides private workspaces (both dynamic and static), data sharing, scenario management, user control, rapid messaging (based on Java Message Service), data/time synchronization, workflow management, notification (including emails), conflict checking, and a linkage to a schedule generation engine. The data structure with corresponding database design combines object trees with multiple associated mortal instances and relational database to provide unprecedented traceability and simplify the existing DSN XML schedule representation. These technologies are used to provide traceability, schedule negotiation, conflict resolution, and load forecasting from real-time operations to long-range loading analysis up to 20 years in the future. CRANE includes a database, a stored procedure layer, an agent-based middle tier, a Web service wrapper, a Windows Integrated Analysis Environment (IAE), a Java application, and a Web page interface.
Camporese, Alessandro
2004-06-01
The diagnosis of infectious diseases and the role of the microbiology laboratory are currently undergoing a process of change. The need for overall efficiency in providing results is now given the same importance as accuracy. This means that laboratories must be able to produce quality results in less time with the capacity to interpret the results clinically. To improve the clinical impact of microbiology results, the new challenge facing the microbiologist has become one of process management instead of pure analysis. A proper project management process designed to improve workflow, reduce analytical time, and provide the same high quality results without losing valuable time treating the patient, has become essential. Our objective was to study the impact of introducing automation and computerization into the microbiology laboratory, and the reorganization of the laboratory workflow, i.e. scheduling personnel to work shifts covering both the entire day and the entire week. In our laboratory, the introduction of automation and computerization, as well as the reorganization of personnel, thus the workflow itself, has resulted in an improvement in response time and greater efficiency in diagnostic procedures.
Scaling up ATLAS Event Service to production levels on opportunistic computing platforms
NASA Astrophysics Data System (ADS)
Benjamin, D.; Caballero, J.; Ernst, M.; Guan, W.; Hover, J.; Lesny, D.; Maeno, T.; Nilsson, P.; Tsulaia, V.; van Gemmeren, P.; Vaniachine, A.; Wang, F.; Wenaus, T.; ATLAS Collaboration
2016-10-01
Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, Edison Cray XC30 supercomputer, backfill at Tier 2 and Tier 3 sites, opportunistic resources at the Open Science Grid (OSG), and ATLAS High Level Trigger farm between the data taking periods. Because of specific aspects of opportunistic resources such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.
High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away
NASA Astrophysics Data System (ADS)
Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.
2012-09-01
By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data, and so the costs of running applications vary widely according to how they use resources. The cloud is well suited to processing CPU-bound (and memory bound) workflows such as the periodogram code, given the relatively low cost of processing in comparison with I/O operations. I/O-bound applications such as Montage perform best on high-performance clusters with fast networks and parallel file-systems. Science-driven Cyberinfrastructure: Montage has been widely used as a driver application to develop workflow management services, such as task scheduling in distributed environments, designing fault tolerance techniques for job schedulers, and developing workflow orchestration techniques. Running Parallel Applications Across Distributed Cloud Environments: Data processing will eventually take place in parallel distributed across cyber infrastructure environments having different architectures. We have used the Pegasus Work Management System (WMS) to successfully run applications across three very different environments: TeraGrid, OSG (Open Science Grid), and FutureGrid. Provisioning resources across different grids and clouds (also referred to as Sky Computing), involves establishing a distributed environment, where issues of, e.g, remote job submission, data management, and security need to be addressed. This environment also requires building virtual machine images that can run in different environments. Usually, each cloud provides basic images that can be customized with additional software and services. 
In most of our work, we provisioned compute resources using a custom application, called Wrangler. Pegasus WMS abstracts the architectures of the compute environments away from the end-user, and can be considered a first-generation tool suitable for scientists to run their applications on disparate environments.
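Because commercial cloud providers charge separately for processing, data transfer, and storage, the relative cost of CPU-bound and I/O-bound workflows can be compared with a back-of-the-envelope model. All rates and workload figures in the sketch below are invented; it only illustrates how data movement can dominate the bill for an I/O-bound workflow such as an image mosaic run.

```python
# Back-of-the-envelope cloud cost model (all rates and workload numbers are
# invented) comparing a CPU-bound and an I/O-bound workflow.

RATES = {"cpu_hour": 0.10, "gb_transferred": 0.09, "gb_month_stored": 0.02}

def monthly_cost(cpu_hours, gb_transferred, gb_stored):
    return (cpu_hours      * RATES["cpu_hour"] +
            gb_transferred * RATES["gb_transferred"] +
            gb_stored      * RATES["gb_month_stored"])

periodogram = monthly_cost(cpu_hours=5000, gb_transferred=50,    gb_stored=100)
mosaic      = monthly_cost(cpu_hours=800,  gb_transferred=10000, gb_stored=5000)

print(f"CPU-bound periodogram run: ${periodogram:,.2f}")   # dominated by CPU hours
print(f"I/O-bound mosaic run:      ${mosaic:,.2f}")        # dominated by transfer/storage
```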
Flight and Operational Medicine Clinic (FOMC) Workflow Analysis
2014-03-14
Base 4: MSME schedules all appointments required in the IFC (i.e., Flight Medicine, Optometry, and Dental). Note: Base 1 examinees complete Optometry, Dental, and Immunizations on the day of the Flight Medicine appointment; Base 2 examinees complete Optometry and Immunizations prior to being seen in Flight Medicine; Base 4 examinees complete Optometry, Dental, and Immunizations on the day of the Flight Medicine appointment.
Scheduling Operational-Level Courses of Action
2003-10-01
Topics include process modelling and analysis (process synchronisation techniques) and information and knowledge management (collaborative planning systems, workflow, logistics). Some tasks may consume resources, and the military user may wish to impose synchronisation constraints among tasks. A course of action comprises tasks that produce effects, are constrained by resource and synchronisation considerations, and lead to the achievement of conditions set in the military end state.
geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling
NASA Astrophysics Data System (ADS)
Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.
2015-12-01
The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON, KML, and using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing ultimately extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use Data Assimilation to ingest real-time weather data into wildfire simulations, and that apply data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing. geoKepler workflows can be executed via an iPython notebook as a part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from various runs of geoKepler workflows. The communication between iPython and Kepler workflow executions is established through an iPython magic function for Kepler that we have implemented. In summary, geoKepler is an ecosystem that makes geospatial processing and analysis of any kind programmable, reusable, scalable and sharable.
Cotes-Ruiz, Iván Tomás; Prado, Rocío P.; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás
2017-01-01
Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique. PMID:28085932
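The DVFS strategy at the heart of the extension trades clock frequency against dynamic power, which scales roughly as V² · f. The sketch below illustrates that trade-off with invented voltage/frequency operating points and an arbitrary capacitance constant; it is not code from the WorkflowSim extension itself.

```python
# Generic DVFS illustration: dynamic power ~ C * V^2 * f, so running slower at
# a lower voltage can cut energy even though the task takes longer.
# Operating points are invented; they are not WorkflowSim parameters.

OPERATING_POINTS = {          # frequency (GHz) -> supply voltage (V)
    2.6: 1.20,
    1.8: 1.00,
    1.2: 0.85,
}
CAPACITANCE = 10.0            # arbitrary lumped constant
TASK_CYCLES = 3.0e9           # work to be done, in cycles

def energy_at(freq_ghz):
    volt = OPERATING_POINTS[freq_ghz]
    power = CAPACITANCE * volt**2 * freq_ghz     # dynamic power (arbitrary units)
    runtime = TASK_CYCLES / (freq_ghz * 1e9)     # seconds
    return power * runtime, runtime

for f in sorted(OPERATING_POINTS, reverse=True):
    energy, runtime = energy_at(f)
    print(f"{f} GHz: {runtime:.2f} s, energy {energy:.1f} (a.u.)")
```

A DVFS governor, as simulated in the extension, chooses among such operating points at run time to balance energy savings against deadlines and quality of service.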
Hayashi, Ken; Yoshida, Motoaki; Hayashi, Hideyuki
2009-03-01
To compare visual acuity (VA) from far to near distances, photopic and mesopic contrast VA, and contrast VA in the presence of a glare source (glare VA), between eyes with a new refractive multifocal intraocular lens (IOL) with added power of only +3.0 diopters and those with a monofocal IOL. Comparative, nonrandomized, interventional study. Forty-four eyes of 22 patients who were scheduled for implantation of a refractive multifocal IOL (Hoya SFX MV1; Tokyo, Japan) and 44 eyes of 22 patients scheduled for implantation of a monofocal IOL. All patients underwent phacoemulsification with bilateral implantation of either multifocal or monofocal IOLs. At approximately 3 months after surgery, monocular and binocular VA from far to near distances was measured using the all-distance vision tester (Kowa AS-15; Tokyo, Japan), whereas photopic and mesopic contrast VA and glare VA were examined using the Contrast Sensitivity Accurate Tester (Menicon CAT-2000, Nagoya, Japan). Pupillary diameter and the degree of IOL decentration and tilt were correlated with VA at all distances. Mean VA in both the multifocal and monofocal IOL groups decreased gradually from far to near distances. When comparing the 2 groups, however, both uncorrected and best distance-corrected intermediate VA at 0.5 m and near VA at 0.3 m in the multifocal IOL group were significantly better than those in the monofocal IOL group (P
A customizable, scalable scheduling and reporting system.
Wood, Jody L; Whitman, Beverly J; Mackley, Lisa A; Armstrong, Robert; Shotto, Robert T
2014-06-01
Scheduling is essential for running a facility smoothly and for summarizing activities in use reports. The Penn State Hershey Clinical Simulation Center has developed a scheduling interface that uses off-the-shelf components, with customizations that adapt to each institution's data collection and reporting needs. The system is designed using programs within the Microsoft Office 2010 suite. Outlook provides the scheduling component, while the reporting is performed using Access or Excel. An account with a calendar is created for the main schedule, with separate resource accounts created for each room within the center. The Outlook appointment form's 2 default tabs are used, in addition to a customized third tab. The data are then copied from the calendar into either a database table or a spreadsheet, where the reports are generated. Incorporating this system into an institution-wide structure allows integration of personnel lists and potentially enables all users to check the schedule from their desktop. Outlook also has a Web-based application for viewing the basic schedule from outside the institution, although customized data cannot be accessed. The scheduling and reporting functions have been used for a year at the Penn State Hershey Clinical Simulation Center. The schedule has increased workflow efficiency, improved the quality of recorded information, and provided more accurate reporting. The Penn State Hershey Clinical Simulation Center's scheduling and reporting system can be adapted easily to most simulation centers and can expand and change to meet future growth with little or no expense to the center.
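The reporting half of such a system reduces to aggregating exported appointment records, for example by room. The sketch below assumes the appointments have already been exported from the calendar to a CSV file with hypothetical column names; it mirrors the spirit of the Access/Excel reporting step rather than the center's actual templates.

```python
# Aggregate exported calendar appointments into per-room usage totals.
# The CSV layout (room, start, end columns) is a hypothetical export format.
import csv
from collections import defaultdict
from datetime import datetime

def room_hours(csv_path):
    totals = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            start = datetime.fromisoformat(row["start"])
            end = datetime.fromisoformat(row["end"])
            totals[row["room"]] += (end - start).total_seconds() / 3600
    return dict(totals)

# Example: room_hours("sim_center_schedule.csv")
# -> {"OR Suite": 42.5, "Debrief Room": 18.0, ...}
```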
Automating Mid- and Long-Range Scheduling for NASA's Deep Space Network
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Tran, Daniel; Arroyo, Belinda; Sorensen, Sugi; Tay, Peter; Carruth, Butch; Coffman, Adam; Wallace, Mike
2012-01-01
NASA has recently deployed a new mid-range scheduling system for the antennas of the Deep Space Network (DSN), called Service Scheduling Software, or S³. This system is architected as a modern web application containing a central scheduling database integrated with a collaborative environment, exploiting the same technologies as social web applications but applied to a space operations context. This is highly relevant to the DSN domain since the network schedule of operations is developed in a peer-to-peer negotiation process among all users who utilize the DSN (representing 37 projects including international partners and ground-based science and calibration users). The initial implementation of S³ is complete and the system has been operational since July 2011. S³ has been used for negotiating schedules since April 2011, including the baseline schedules for three launching missions in late 2011. S³ supports a distributed scheduling model, in which changes can potentially be made by multiple users based on multiple schedule "workspaces" or versions of the schedule. This has led to several challenges in the design of the scheduling database, and of a change proposal workflow that allows users to concur with or to reject proposed schedule changes, and then counter-propose with alternative or additional suggested changes. This paper describes some key aspects of the S³ system and lessons learned from its operational deployment to date, focusing on the challenges of multi-user collaborative scheduling in a practical and mission-critical setting. We will also describe the ongoing project to extend S³ to encompass long-range planning, downtime analysis, and forecasting, as the next step in developing a single integrated DSN scheduling tool suite to cover all time ranges.
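The concur/reject/counter-propose cycle described for S³ can be pictured as a small state machine over a proposed schedule change. The sketch below is an invented simplification for illustration and does not reflect the real S³ data model or workflow.

```python
# Invented simplification of a change-proposal workflow: a proposal is accepted
# only once every affected mission has concurred; any rejection either ends the
# proposal or replaces it with a counter-proposal for renegotiation.

class ChangeProposal:
    def __init__(self, change, affected_missions):
        self.change = change
        self.pending = set(affected_missions)
        self.state = "proposed"

    def concur(self, mission):
        self.pending.discard(mission)
        if not self.pending and self.state == "proposed":
            self.state = "accepted"

    def reject(self, mission, counter_change=None):
        if counter_change:
            self.state = "countered"
            self.change = counter_change
            self.pending = set()        # renegotiation would repopulate this
        else:
            self.state = "rejected"

p = ChangeProposal("move track to DSS-14, 02:00-06:00", {"MSL", "Voyager"})
p.concur("MSL"); p.concur("Voyager")
print(p.state)   # accepted
```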
MO-E-BRD-01: Is Non-Invasive Image-Guided Breast Brachytherapy Good?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiatt, J.
2015-06-15
Is Non-invasive Image-Guided Breast Brachytherapy Good? – Jess Hiatt, MS Non-invasive Image-Guided Breast Brachytherapy (NIBB) is an emerging therapy for breast boost treatments as well as Accelerated Partial Breast Irradiation (APBI) using HDR surface breast brachytherapy. NIBB allows for smaller treatment volumes while maintaining optimal target coverage. Considering the real-time image-guidance and immobilization provided by the NIBB modality, minimal margins around the target tissue are necessary. Accelerated Partial Breast Irradiation in brachytherapy: is shorter better? – Dorin Todor, PhD, VCU A review of balloon and strut devices will be provided together with the origins of APBI: the interstitial multi-catheter implant. A dosimetric and radiobiological perspective will help point out the evolution in breast brachytherapy, both in terms of devices and the protocols/clinical trials under which these devices are used. Improvements in imaging, delivery modalities and convenience are among the factors driving the ultrashort fractionation schedules but our understanding of both local control and toxicities associated with various treatments is lagging. A comparison between various schedules, from a radiobiological perspective, will be given together with a critical analysis of the issues. Learning Objectives: to review and understand the evolution and development of APBI using brachytherapy methods; to understand the basis and limitations of radio-biological 'equivalence' between fractionation schedules; to review commonly used and proposed fractionation schedules. Intra-operative breast brachytherapy: Is one stop shopping best? – Bruce Libby, PhD, University of Virginia A review of intraoperative breast brachytherapy will be presented, including the Targit-A and other trials that have used electronic brachytherapy. More modern approaches, in which the lumpectomy procedure is integrated into an APBI workflow, will also be discussed. Learning Objectives: To review past and current clinical trials for IORT; to discuss the lumpectomy-scan-plan-treat workflow for IORT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Libby, B.
2015-06-15
Is Non-invasive Image-Guided Breast Brachytherapy Good? – Jess Hiatt, MS Non-invasive Image-Guided Breast Brachytherapy (NIBB) is an emerging therapy for breast boost treatments as well as Accelerated Partial Breast Irradiation (APBI) using HDR surface breast brachytherapy. NIBB allows for smaller treatment volumes while maintaining optimal target coverage. Considering the real-time image-guidance and immobilization provided by the NIBB modality, minimal margins around the target tissue are necessary. Accelerated Partial Breast Irradiation in brachytherapy: is shorter better? – Dorin Todor, PhD, VCU A review of balloon and strut devices will be provided together with the origins of APBI: the interstitial multi-catheter implant. A dosimetric and radiobiological perspective will help point out the evolution in breast brachytherapy, both in terms of devices and the protocols/clinical trials under which these devices are used. Improvements in imaging, delivery modalities and convenience are among the factors driving the ultrashort fractionation schedules but our understanding of both local control and toxicities associated with various treatments is lagging. A comparison between various schedules, from a radiobiological perspective, will be given together with a critical analysis of the issues. Learning Objectives: to review and understand the evolution and development of APBI using brachytherapy methods; to understand the basis and limitations of radio-biological 'equivalence' between fractionation schedules; to review commonly used and proposed fractionation schedules. Intra-operative breast brachytherapy: Is one stop shopping best? – Bruce Libby, PhD, University of Virginia A review of intraoperative breast brachytherapy will be presented, including the Targit-A and other trials that have used electronic brachytherapy. More modern approaches, in which the lumpectomy procedure is integrated into an APBI workflow, will also be discussed. Learning Objectives: To review past and current clinical trials for IORT; to discuss the lumpectomy-scan-plan-treat workflow for IORT.
MO-E-BRD-02: Accelerated Partial Breast Irradiation in Brachytherapy: Is Shorter Better?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Todor, D.
2015-06-15
Is Non-invasive Image-Guided Breast Brachytherapy Good? – Jess Hiatt, MS Non-invasive Image-Guided Breast Brachytherapy (NIBB) is an emerging therapy for breast boost treatments as well as Accelerated Partial Breast Irradiation (APBI) using HDR surface breast brachytherapy. NIBB allows for smaller treatment volumes while maintaining optimal target coverage. Considering the real-time image-guidance and immobilization provided by the NIBB modality, minimal margins around the target tissue are necessary. Accelerated Partial Breast Irradiation in brachytherapy: is shorter better? – Dorin Todor, PhD, VCU A review of balloon and strut devices will be provided together with the origins of APBI: the interstitial multi-catheter implant. A dosimetric and radiobiological perspective will help point out the evolution in breast brachytherapy, both in terms of devices and the protocols/clinical trials under which these devices are used. Improvements in imaging, delivery modalities and convenience are among the factors driving the ultrashort fractionation schedules but our understanding of both local control and toxicities associated with various treatments is lagging. A comparison between various schedules, from a radiobiological perspective, will be given together with a critical analysis of the issues. Learning Objectives: to review and understand the evolution and development of APBI using brachytherapy methods; to understand the basis and limitations of radio-biological 'equivalence' between fractionation schedules; to review commonly used and proposed fractionation schedules. Intra-operative breast brachytherapy: Is one stop shopping best? – Bruce Libby, PhD, University of Virginia A review of intraoperative breast brachytherapy will be presented, including the Targit-A and other trials that have used electronic brachytherapy. More modern approaches, in which the lumpectomy procedure is integrated into an APBI workflow, will also be discussed. Learning Objectives: To review past and current clinical trials for IORT; to discuss the lumpectomy-scan-plan-treat workflow for IORT.
Workflow-Based Software Development Environment
NASA Technical Reports Server (NTRS)
Izygon, Michel E.
2013-01-01
The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process used to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (collaborative web environment).
Patterson, Emily S.; Lowry, Svetlana Z.; Ramaiah, Mala; Gibbons, Michael C.; Brick, David; Calco, Robert; Matton, Greg; Miller, Anne; Makar, Ellen; Ferrer, Jorge A.
2015-01-01
Introduction: Human factors workflow analyses in healthcare settings prior to technology implementation are recommended to improve workflow in ambulatory care settings. In this paper we describe how insights from a workflow analysis conducted by NIST were implemented in a software prototype developed for a Veterans Health Administration (VHA) VAi2 innovation project and the associated lessons learned. Methods: We organize the original recommendations and associated stages and steps visualized in process maps from NIST and the VA's lessons learned from implementing the recommendations in the VAi2 prototype according to four stages: 1) before the patient visit, 2) during the visit, 3) discharge, and 4) visit documentation. NIST recommendations to improve workflow in ambulatory care (outpatient) settings and process map representations were based on reflective statements collected during one-hour discussions with three physicians. The development of the VAi2 prototype was initially conducted independently of the NIST recommendations, but at a midpoint in the process development, all of the implementation elements were compared with the NIST recommendations and lessons learned were documented. Findings: Story-based displays and templates with default preliminary order sets were used to support scheduling, time-critical notifications, drafting medication orders, and supporting a diagnosis-based workflow. These templates enabled customization to the level of diagnostic uncertainty. Functionality was designed to support cooperative work across interdisciplinary team members, including shared documentation sessions with tracking of text modifications, medication lists, and patient education features. Displays were customized to the role and included access for consultants and site-defined educator teams. Discussion: Workflow, usability, and patient safety can be enhanced through clinician-centered design of electronic health records. The lessons learned from implementing NIST recommendations to improve workflow in ambulatory care using an EHR provide a first step in moving from a billing-centered perspective on how to maintain accurate, comprehensive, and up-to-date information about a group of patients to a clinician-centered perspective. These recommendations point the way towards a "patient visit management system," which incorporates broader notions of supporting workload management, supporting flexible flow of patients and tasks, enabling accountable distributed work across members of the clinical team, and supporting dynamic tracking of steps in tasks that have longer time distributions. PMID:26290887
Making Sense of Complexity with FRE, a Scientific Workflow System for Climate Modeling (Invited)
NASA Astrophysics Data System (ADS)
Langenhorst, A. R.; Balaji, V.; Yakovlev, A.
2010-12-01
A workflow is a description of a sequence of activities that is both precise and comprehensive. Capturing the workflow of climate experiments provides a record which can be queried or compared, and allows reproducibility of the experiments - sometimes even to the bit level of the model output. This reproducibility helps to verify the integrity of the output data, and enables easy perturbation experiments. GFDL's Flexible Modeling System Runtime Environment (FRE) is a production-level software project which defines and implements building blocks of the workflow as command line tools. The scientific, numerical and technical input needed to complete the workflow of an experiment is recorded in an experiment description file in XML format. Several key features add convenience and automation to the FRE workflow:
● Experiment inheritance makes it possible to define a new experiment with only a reference to the parent experiment and the parameters to override (illustrated in the sketch after this list).
● Testing is a basic element of the FRE workflow: experiments define short test runs which are verified before the main experiment is run, and a set of standard experiments are verified with new code releases.
● FRE is flexible enough to support everything from short runs with mere megabytes of data to high-resolution experiments that run on thousands of processors for months, producing terabytes of output data. Experiments run in segments of model time; after each segment, the state is saved and the model can be checkpointed at that level. Segment length is defined by the user, but the number of segments per system job is calculated to fit optimally in the batch scheduler requirements. FRE provides job control across multiple segments, and tools to monitor and alter the state of long-running experiments.
● Experiments are entered into a Curator Database, which stores query-able metadata about the experiment and the experiment's output.
● FRE includes a set of standardized post-processing functions as well as the ability to incorporate user-level functions. FRE post-processing can take us all the way to preparing graphical output for a scientific audience, and publication of data on a public portal.
● Recent FRE development includes incorporating a distributed workflow to support remote computing.
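The experiment-inheritance feature listed above amounts to overlaying a child experiment's overrides on its parent's definition. In the sketch below, plain dictionaries stand in for the XML experiment-description files; the experiment names and parameters are invented.

```python
# Toy version of experiment inheritance: a child experiment names its parent
# and overrides only the parameters it changes. Dicts stand in for the XML
# experiment-description files; all names are invented.

EXPERIMENTS = {
    "CM_control": {
        "parent": None,
        "params": {"ocean_resolution": "1deg", "co2_ppm": 286, "years": 100},
    },
    "CM_2xCO2": {
        "parent": "CM_control",
        "params": {"co2_ppm": 572},          # everything else is inherited
    },
}

def resolve(name):
    """Walk up the parent chain and overlay each child's overrides."""
    exp = EXPERIMENTS[name]
    base = resolve(exp["parent"]) if exp["parent"] else {}
    merged = dict(base)
    merged.update(exp["params"])
    return merged

print(resolve("CM_2xCO2"))
# {'ocean_resolution': '1deg', 'co2_ppm': 572, 'years': 100}
```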
NASA Year 2000 (Y2K) Program Plan
NASA Technical Reports Server (NTRS)
1998-01-01
NASA initiated the Year 2000 (Y2K) program in August 1996 to address the challenges imposed on Agency software, hardware, and firmware systems by the new millennium. The Agency program is centrally managed by the NASA Chief Information Officer, with decentralized execution of program requirements at each of the nine NASA Centers, Headquarters and the Jet Propulsion Laboratory. The purpose of this Program Plan is to establish Program objectives and performance goals; identify Program requirements; describe the management structure; and detail Program resources, schedules, and controls. Project plans are established for each NASA Center, Headquarters, and the Jet Propulsion Laboratory.
Optimizing Perioperative Decision Making: Improved Information for Clinical Workflow Planning
Doebbeling, Bradley N.; Burton, Matthew M.; Wiebke, Eric A.; Miller, Spencer; Baxter, Laurence; Miller, Donald; Alvarez, Jorge; Pekny, Joseph
2012-01-01
Perioperative care is complex and involves multiple interconnected subsystems. Delayed starts, prolonged cases and overtime are common. Surgical procedures account for 40–70% of hospital revenues and 30–40% of total costs. Most planning and scheduling in healthcare is done without modern planning tools, which have potential for improving access by assisting in operations planning support. We identified key planning scenarios of interest to perioperative leaders in order to examine the feasibility of applying combinatorial optimization software to some of those planning issues in the operative setting. Perioperative leaders desire a broad range of tools for planning and assessing alternate solutions. Our models generated feasible solutions that varied as expected based on resource and policy assumptions, and found better utilization of scarce resources. Combinatorial optimization modeling can effectively evaluate alternatives to support key decisions for planning clinical workflow and improving care efficiency and satisfaction. PMID:23304284
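As a rough illustration of the kind of combinatorial formulation described above, the sketch below assigns surgical cases to operating rooms while minimizing the longest room schedule. It uses the open-source PuLP modeling library; the case durations, room names, and objective are invented, not the authors' actual model.

```python
# Toy operating-room assignment model (illustrative only; data are invented).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

cases = {"hernia": 90, "knee": 120, "cataract": 45, "bypass": 240, "appy": 60}  # minutes
rooms = ["OR1", "OR2"]

prob = LpProblem("or_assignment", LpMinimize)
x = {(c, r): LpVariable(f"x_{c}_{r}", cat=LpBinary) for c in cases for r in rooms}
makespan = LpVariable("makespan", lowBound=0)

for c in cases:                                   # every case goes to exactly one room
    prob += lpSum(x[c, r] for r in rooms) == 1
for r in rooms:                                   # each room's workload bounded by the makespan
    prob += lpSum(cases[c] * x[c, r] for c in cases) <= makespan

prob += makespan                                  # objective: minimize the longest room schedule
prob.solve()
for (c, r), var in x.items():
    if var.value() > 0.5:
        print(f"{c} -> {r}")
```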
Katayama, Toshiaki; Arakawa, Kazuharu; Nakao, Mitsuteru; Ono, Keiichiro; Aoki-Kinoshita, Kiyoko F; Yamamoto, Yasunori; Yamaguchi, Atsuko; Kawashima, Shuichi; Chun, Hong-Woo; Aerts, Jan; Aranda, Bruno; Barboza, Lord Hendrix; Bonnal, Raoul Jp; Bruskiewich, Richard; Bryne, Jan C; Fernández, José M; Funahashi, Akira; Gordon, Paul Mk; Goto, Naohisa; Groscurth, Andreas; Gutteridge, Alex; Holland, Richard; Kano, Yoshinobu; Kawas, Edward A; Kerhornou, Arnaud; Kibukawa, Eri; Kinjo, Akira R; Kuhn, Michael; Lapp, Hilmar; Lehvaslaiho, Heikki; Nakamura, Hiroyuki; Nakamura, Yasukazu; Nishizawa, Tatsuya; Nobata, Chikashi; Noguchi, Tamotsu; Oinn, Thomas M; Okamoto, Shinobu; Owen, Stuart; Pafilis, Evangelos; Pocock, Matthew; Prins, Pjotr; Ranzinger, René; Reisinger, Florian; Salwinski, Lukasz; Schreiber, Mark; Senger, Martin; Shigemoto, Yasumasa; Standley, Daron M; Sugawara, Hideaki; Tashiro, Toshiyuki; Trelles, Oswaldo; Vos, Rutger A; Wilkinson, Mark D; York, William; Zmasek, Christian M; Asai, Kiyoshi; Takagi, Toshihisa
2010-08-21
Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems that avoid transferring entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues that arose from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security, are discussed. Consequently, we improved the interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.
Patient-Centered Appointment Scheduling Using Agent-Based Simulation
Turkcan, Ayten; Toscos, Tammy; Doebbeling, Brad N.
2014-01-01
Enhanced access and continuity are key components of patient-centered care. Existing studies show that several interventions, such as providing same-day appointments, walk-in services, after-hours care, and group appointments, have been used to redesign healthcare systems for improved access to primary care. However, an intervention focusing on a single component of care delivery (e.g., improving access to acute care) might have a negative impact on other components of the system (e.g., reduced continuity of care for chronic patients). Therefore, primary care clinics should consider implementing multiple interventions tailored to their patient population needs. We used rapid ethnography and observation to better understand clinic workflow and key constraints. We then developed an agent-based simulation model that includes all access modalities (appointments, walk-ins, and after-hours access), incorporates resources and key constraints, and determines the best appointment scheduling method for improving access and continuity of care. This paper demonstrates the value of simulation models for testing a variety of alternative strategies to improve access to care through scheduling. PMID:25954423
A scalable approach to solving dense linear algebra problems on hybrid CPU-GPU systems
Song, Fengguang; Dongarra, Jack
2014-10-01
Aiming to fully exploit the computing power of all CPUs and all graphics processing units (GPUs) on hybrid CPU-GPU systems to solve dense linear algebra problems, in this paper we design a class of heterogeneous tile algorithms to maximize the degree of parallelism, to minimize the communication volume, and to accommodate the heterogeneity between CPUs and GPUs. The new heterogeneous tile algorithms are executed upon our decentralized dynamic scheduling runtime system, which schedules a task graph dynamically and transfers data between compute nodes automatically. The runtime system uses a new distributed task assignment protocol to solve data dependencies between tasks without any coordination between processing units. By overlapping computation and communication through dynamic scheduling, we are able to attain scalable performance for the double-precision Cholesky factorization and QR factorization. Finally, our approach demonstrates a performance comparable to Intel MKL on shared-memory multicore systems and better performance than both vendor (e.g., Intel MKL) and open source libraries (e.g., StarPU) in the following three environments: heterogeneous clusters with GPUs, conventional clusters without GPUs, and shared-memory systems with multiple GPUs.
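The "distributed task assignment protocol ... without any coordination" can be illustrated by a deterministic owner-compute rule: if every node applies the same function mapping a tile to a node, all nodes agree on task placement and data dependencies can be resolved locally. The sketch below is a generic 2-D block-cyclic mapping for illustration only, not the authors' actual protocol.

```python
# Generic 2-D block-cyclic owner-compute rule: every node can compute the
# owner of any tile without communicating (illustrative, not the paper's code).
def owner(i, j, prow, pcol):
    """Node that owns tile (i, j) on a prow x pcol process grid."""
    return (i % prow) * pcol + (j % pcol)

prow, pcol = 2, 3          # 6 nodes arranged as a 2 x 3 grid
for i in range(4):
    print([owner(i, j, prow, pcol) for j in range(4)])
```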
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chase Qishi; Zhu, Michelle Mengxia
The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. The goal of this project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. In particular, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, cataloguing, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; while for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
An analysis of disruptions in aerospace/defense organizations that affect the supply chain
NASA Astrophysics Data System (ADS)
Dickerson, Toscha L.
The purpose of this quantitative study was to determine whether or not the functions of procurement organizations' structures and aerospace suppliers were perceived as disruptions, and to identify their effects on lead time and costs within a supply chain. An analysis of employees' perceptions of centralized and decentralized procurement functions, aerospace and defense suppliers, lead times of goods and services, price increases, and schedule delays was conducted. Prior studies are limited with regard to understanding how specific procurement functions affect an organization's procurement structure. This non-experimental quantitative study administered a survey to aerospace and defense companies throughout the United States to obtain information from sourcing and procurement professionals with 5 or more years of experience. The study utilized a 10-question survey based on a 5-point Likert-type scale. Through descriptive and inferential statistics, using regression analysis, standard deviation, and P-values, the findings indicated that the majority of participants surveyed perceived that both centralized and decentralized procurement functions had a positive effect on the lead time and cost of goods and services and were considered supply chain disruptions.
Robust decentralized power system controller design: Integrated approach
NASA Astrophysics Data System (ADS)
Veselý, Vojtech
2017-09-01
A unique approach to the design of a gain scheduled controller (GSC) is presented. The proposed design procedure is based on the Bellman-Lyapunov equation, guaranteed cost, and robust stability conditions using the parameter-dependent quadratic stability approach. The obtained feasible design procedures for robust GSC design are in the form of BMIs with guaranteed convex stability conditions. The obtained design results and their properties are illustrated by the simultaneous design of controllers for a simple (6th-order) turbogenerator model. The results of the design procedure are a PI automatic voltage regulator (AVR) for the synchronous generator, a PI governor controller, and a power system stabilizer for the excitation system.
Evolution of Query Optimization Methods
NASA Astrophysics Data System (ADS)
Hameurlain, Abdelkader; Morvan, Franck
Query optimization is the most critical phase in query processing. In this paper, we provide a synthetic overview of the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems, through parallel, distributed and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).
NASA Astrophysics Data System (ADS)
Laban, Shaban; El-Desouky, Aly
2014-05-01
To achieve rapid, simple and reliable parallel processing of different types of tasks and big data processing on any compute cluster, a lightweight messaging-based distributed applications processing and workflow execution framework model is proposed. The framework is based on Apache ActiveMQ and the Simple (or Streaming) Text Oriented Message Protocol (STOMP). ActiveMQ, a popular and powerful open source persistent messaging and integration patterns server with scheduler capabilities, acts as the message broker in the framework. STOMP provides an interoperable wire format that allows framework programs to talk and interact with each other and with ActiveMQ easily. In order to use the message broker efficiently, a unified message and topic naming pattern is utilized to achieve the required operation. Only three Python programs and a simple library, used to unify and simplify the use of ActiveMQ and the STOMP protocol, are needed to use the framework. A watchdog program is used to monitor, remove, add, start and stop any machine and/or its different tasks when necessary. For every machine, a single dedicated zoo keeper program is used to start the different functions or tasks (stompShell programs) needed to execute the user's required workflow. The stompShell instances execute any workflow jobs based on the received messages. A well-defined, simple and flexible message structure, based on JavaScript Object Notation (JSON), is used to build complex workflow systems. JSON is also used for configuration and for communication between machines and programs. The framework is platform independent. Although the framework is built using Python, the actual workflow programs or jobs can be implemented in any programming language. The generic framework can be used in small national data centres for processing seismological and radionuclide data received from the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). It is also possible to extend the use of the framework to monitoring the IDC pipeline. The detailed design, implementation, conclusions and future work of the proposed framework will be presented.
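A minimal sketch of the messaging pattern described above, assuming a local ActiveMQ broker on its default STOMP port (61613) and the widely used stomp.py client; the destination name and the JSON message fields are invented, since the abstract does not give the framework's exact schema.

```python
# Sketch: publish a JSON-encoded workflow job to ActiveMQ over STOMP.
# Broker address, credentials, destination, and message fields are assumptions.
import json
import stomp

job = {
    "workflow": "radionuclide_daily",     # hypothetical workflow name
    "task": "filter_spectra",
    "machine": "node-07",
    "args": {"station": "RN38", "date": "2014-05-01"},
}

conn = stomp.Connection([("localhost", 61613)])
conn.connect("admin", "admin", wait=True)
conn.send(destination="/queue/workflow.node-07", body=json.dumps(job))
conn.disconnect()
```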
Back pressure based multicast scheduling for fair bandwidth allocation.
Sarkar, Saswati; Tassiulas, Leandros
2005-09-01
We study the fair allocation of bandwidth in multicast networks with multirate capabilities. In multirate transmission, each source encodes its signal in layers. The lowest layer contains the most important information and all receivers of a session should receive it. If a receiver's data path has additional bandwidth, it receives higher layers which leads to a better quality of reception. The bandwidth allocation objective is to distribute the layers fairly. We present a computationally simple, decentralized scheduling policy that attains the maxmin fair rates without using any knowledge of traffic statistics and layer bandwidths. This policy learns the congestion level from the queue lengths at the nodes, and adapts the packet transmissions accordingly. When the network is congested, packets are dropped from the higher layers; therefore, the more important lower layers suffer negligible packet loss. We present analytical and simulation results that guarantee the maxmin fairness of the resulting rate allocation, and upper bound the packet loss rates for different layers.
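The queue-length-driven policy described above can be caricatured in a few lines: a node forwards the lowest (most important) layers first and drops higher layers whenever the downstream queue is already long. This toy sketch only conveys the back-pressure intuition; it is not the paper's provably max-min fair policy.

```python
# Toy back-pressure intuition: forward low layers first, drop enhancement
# layers when the downstream queue is congested. Thresholds are invented.
from collections import deque

def forward(layer_queues, downstream, capacity, congestion_threshold):
    """Move up to `capacity` packets downstream, lowest layer first."""
    sent = 0
    for layer, q in enumerate(layer_queues):        # layer 0 = most important
        while q and sent < capacity:
            if layer > 0 and len(downstream) >= congestion_threshold:
                q.popleft()                          # congested: drop enhancement packet
            else:
                downstream.append(q.popleft())
                sent += 1
    return sent

layers = [deque(["b1", "b2"]), deque(["e1", "e2", "e3"])]
down = deque()
forward(layers, down, capacity=3, congestion_threshold=2)
print(list(down))   # base-layer packets forwarded; enhancement packets dropped under congestion
```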
Electronic data capture and DICOM data management in multi-center clinical trials
NASA Astrophysics Data System (ADS)
Haak, Daniel; Page, Charles-E.; Deserno, Thomas M.
2016-03-01
Providing eligibility, efficacy and security evaluation by quantitative and qualitative disease findings, medical imaging has become increasingly important in clinical trials. Today, subjects' data are captured in electronic case report forms (eCRFs), which are offered by electronic data capture (EDC) systems. However, integration of subjects' medical image data into eCRFs is insufficiently supported. Neither integration of subjects' digital imaging and communications in medicine (DICOM) data, nor communication with picture archiving and communication systems (PACS), is possible. This complicates the workflow of the study personnel, especially for studies with distributed data capture across multiple sites. Hence, in this work, a system architecture is presented which connects an EDC system, a PACS and a DICOM viewer via the web access to DICOM objects (WADO) protocol. The architecture is implemented using the open source tools OpenClinica, DCM4CHEE and Weasis. The eCRF forms the primary endpoint for the study personnel, where subjects' image data are stored and retrieved. Background communication with the PACS is completely hidden from the users. Data privacy and consistency are ensured by automatic de-identification of DICOM data and re-labelling with context information (e.g. study and subject identifiers), respectively. The system is exemplarily demonstrated in a clinical trial, where computer tomography (CT) data is de-centrally captured from the subjects and centrally read by a chief radiologist to decide on inclusion of the subjects in the trial. Errors, latency and costs in the EDC workflow are reduced, while a research database is implicitly built up in the background.
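For readers unfamiliar with WADO, the retrieval step in such an architecture reduces to an HTTP GET against the PACS. The sketch below uses the Python requests library with a placeholder server URL and placeholder UIDs; in the architecture described above the real identifiers would come from the eCRF, and the request would be issued by the system rather than by hand.

```python
# Sketch of a WADO-URI retrieval (DICOM PS3.18). Server URL and UIDs are placeholders.
import requests

params = {
    "requestType": "WADO",
    "studyUID":  "1.2.840.0000.1",     # placeholder study instance UID
    "seriesUID": "1.2.840.0000.1.1",   # placeholder series instance UID
    "objectUID": "1.2.840.0000.1.1.1", # placeholder SOP instance UID
    "contentType": "application/dicom",
}
resp = requests.get("https://pacs.example.org/wado", params=params, timeout=30)
resp.raise_for_status()
with open("subject_ct.dcm", "wb") as f:
    f.write(resp.content)              # de-identification would happen before storage
```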
Experimenting with Decentralization: The Politics of Change.
ERIC Educational Resources Information Center
Wohlstetter, Priscilla
The relationship between the political context of school districts and their choices of decentralization policy is explored in this paper. It was expected that district politics would affect decentralization policies in two ways: the form of decentralization adopted and the degree of change. The decision to decentralize in three large urban school…
Why We Don’t Come: Patient Perceptions on No-Shows
Lacy, Naomi L.; Paulman, Audrey; Reuter, Matthew D.; Lovejoy, Bruce
2004-01-01
PURPOSE Patients who schedule clinic appointments and fail to keep them have a negative impact on the workflow of a clinic in many ways. This study was conducted to identify the reasons patients in an urban family practice setting give for not keeping scheduled appointments. METHODS Semistructured interviews were conducted with 34 adult patients coming to the clinic for outpatient care. Interviews were audiotaped and transcribed verbatim. A multidisciplinary team used an immersion-crystallization organizing style to analyze the content of the qualitative interviews individually and in team meetings. RESULTS Participants identified 3 types of issues related to missing appointments without notifying the clinic staff: emotions, perceived disrespect, and not understanding the scheduling system. Although they discussed logistical issues of appointment keeping, participants did not identify these issues as key reasons for nonattendance. Appointment making among these participants was driven by immediate symptoms and a desire for self-care. At the same time, many of these participants experienced anticipatory fear and anxiety about both procedures and bad news. Participants did not feel obligated to keep a scheduled appointment in part because they felt disrespected by the health care system. The effect of this feeling was compounded by participants’ lack of understanding of the scheduling system. CONCLUSIONS The results of this study suggest that reducing no-show rates among patients who sometimes attend might be addressed by reviewing waiting times and participants’ perspectives of personal respect. PMID:15576538
Sweet, Burgunda V; Tamer, Helen R; Siden, Rivka; McCreadie, Scott R; McGregory, Michael E; Benner, Todd; Tankanow, Roberta M
2008-05-15
The development of a computerized system for protocol management, dispensing, inventory accountability, and billing by the investigational drug service (IDS) of a university health system is described. After an unsuccessful search for a commercial system that would accommodate the variation among investigational protocols and meet regulatory requirements, the IDS worked with the health-system pharmacy's information technology staff and informatics pharmacists to develop its own system. The informatics pharmacists observed work-flow and information capture in the IDS and identified opportunities for improved efficiency with an automated system. An iterative build-test-design process was used to provide the flexibility needed for individual protocols. The intent was to design a system that would support most IDS processes, using components that would allow automated backup and redundancies. A browser-based system was chosen to allow remote access. Servers, bar-code scanners, and printers were integrated into the final system design. Initial implementation involved 10 investigational protocols chosen on the basis of dispensing volume and complexity of study design. Other protocols were added over a two-year period; all studies whose drugs were dispensed from the IDS were added, followed by those for which the drugs were dispensed from decentralized pharmacy areas. The IDS briefly used temporary staff to free pharmacist and technician time for system implementation. Decentralized pharmacy areas that rarely dispense investigational drugs continue to use manual processes, with subsequent data transcription into the system. Through the university's technology transfer division, the system was licensed by an external company for sale to other IDSs. The WebIDS system has improved daily operations, enhanced safety and efficiency, and helped meet regulatory requirements for investigational drugs.
Fillatre, Yoann; Rondeau, David; Daguin, Antoine; Communal, Pierre-Yves
2016-01-01
This paper describes the determination of 256 multiclass pesticides in cypress and lemon essential oils (EOs) by liquid chromatography-electrospray ionization tandem mass spectrometry (LC-ESI/MS/MS) analysis using the scheduled selected reaction monitoring mode (sSRM) available on a hybrid quadrupole linear ion trap (QLIT) mass spectrometer. The performance of sample preparations of lemon and cypress EOs based on dilution or on evaporation under nitrogen assisted by controlled heating was assessed. The best limits of quantification (LOQs) were achieved with the evaporation-under-nitrogen method, giving LOQs≤10µgL(-1) for 91% of the pesticides. In addition, the very satisfactory results obtained for recovery, repeatability and linearity showed that for EOs with relatively low evaporation temperatures, a sample preparation based on evaporation under nitrogen is well adapted and preferable to dilution. By combining these results with those previously published by some of us on lavandin EO, we propose a workflow dedicated to multiresidue determination of pesticides in various EOs by LC-ESI/sSRM. Among the steps involved in this workflow, the protocol related to mass spectrometry proposes an alternative confirmation method to the classical SRM ratio criteria, based on an sSRM survey scan followed by an information-dependent acquisition using the sensitive enhanced product ion (EPI) scan to generate MS/MS spectra that are then compared to a reference. The proposed workflow was applied to lemon EO samples, highlighting for the first time the simultaneous detection of 20 multiclass pesticides in one EO. Some pesticides showed very high concentration levels, with amounts greatly exceeding the mg L(-1) level. Copyright © 2015 Elsevier B.V. All rights reserved.
Scheduling in the context of resident duty hour reform
2014-01-01
Fuelled by concerns about resident health and patient safety, there is a general trend in many jurisdictions toward limiting the maximum duration of consecutive work to between 14 and 16 hours. The goal of this article is to assist institutions and residency programs to make a smooth transition from the previous 24- to 36-hour call system to this new model. We will first give an overview of the main types of coverage systems and their relative merits when considering various aspects of patient care and resident pedagogy. We will then suggest a practical step-by-step approach to designing, implementing, and monitoring a scheduling system centred on clinical and educational needs in the context of resident duty hour reform. The importance of understanding the impetus for change and of assessing the need for overall workflow restructuring will be explored throughout this process. Finally, as a practical example, we will describe a large, university-based teaching hospital network’s transition from a traditional call-based system to a novel schedule that incorporates the new 16-hour duty limit. PMID:25561221
Fuzzy Adaptive Decentralized Optimal Control for Strict Feedback Nonlinear Large-Scale Systems.
Sun, Kangkang; Sui, Shuai; Tong, Shaocheng
2018-04-01
This paper considers the optimal decentralized fuzzy adaptive control design problem for a class of interconnected large-scale nonlinear systems in strict feedback form with unknown nonlinear functions. Fuzzy logic systems are introduced to learn the unknown dynamics and cost functions, respectively, and a state estimator is developed. By applying the state estimator and the backstepping recursive design algorithm, a decentralized feedforward controller is established. By using the backstepping decentralized feedforward control scheme, the considered interconnected large-scale nonlinear system in strict feedback form is transformed into an equivalent affine large-scale nonlinear system. Subsequently, an optimal decentralized fuzzy adaptive control scheme is constructed. The overall optimal decentralized fuzzy adaptive controller is composed of a decentralized feedforward control and an optimal decentralized control. It is proved that the developed optimal decentralized controller ensures that all variables of the control system are uniformly ultimately bounded and that the cost functions are minimized. Two simulation examples are provided to illustrate the validity of the developed optimal decentralized fuzzy adaptive control scheme.
Autonomic Management of Application Workflows on Hybrid Computing Infrastructure
Kim, Hyunjoo; el-Khamra, Yaakoub; Rodero, Ivan; ...
2011-01-01
In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its ability, we employ a workflow used to characterize an oil reservoir executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives such as acceleration, conservation and resilience can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high performance computing infrastructure.
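A crude way to convey the "appropriate mix of resources" idea is a greedy provisioner that keeps adding the cheapest available resource type until the predicted completion time meets the deadline, stopping if the budget would be exceeded. The resource classes, prices, and throughput figures below are invented; the framework described above uses far richer application and system models than this toy.

```python
# Toy greedy provisioner under deadline and budget constraints (all data invented).
def provision(total_tasks, deadline_hours, budget):
    # (name, tasks/hour, $/hour, how many can be acquired)
    resource_types = [
        ("hpc_node",  40, 0.00, 2),    # fixed allocation, already paid for
        ("ec2_small", 10, 0.10, 50),
        ("ec2_large", 35, 0.45, 20),
    ]
    chosen, throughput, cost_per_hour = [], 0.0, 0.0
    for name, rate, price, avail in resource_types:
        for _ in range(avail):
            if throughput * deadline_hours >= total_tasks:
                break                                  # deadline already met
            if (cost_per_hour + price) * deadline_hours > budget:
                break                                  # would exceed the budget
            chosen.append(name)
            throughput += rate
            cost_per_hour += price
    meets_deadline = throughput * deadline_hours >= total_tasks
    return chosen, meets_deadline, cost_per_hour * deadline_hours

print(provision(total_tasks=1000, deadline_hours=10, budget=25.0))
```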
U-Form vs. M-Form: How to Understand Decision Autonomy Under Healthcare Decentralization?
Bustamante, Arturo Vargas
2016-01-01
For more than three decades healthcare decentralization has been promoted in developing countries as a way of improving the financing and delivery of public healthcare. Decision autonomy under healthcare decentralization would determine the role and scope of responsibility of local authorities. Jalal Mohammed, Nicola North, and Toni Ashton analyze decision autonomy within decentralized services in Fiji. They conclude that the narrow decision space allowed to local entities might have limited the benefits of decentralization on users and providers. To discuss the costs and benefits of healthcare decentralization this paper uses the U-form and M-form typology to further illustrate the role of decision autonomy under healthcare decentralization. This paper argues that when evaluating healthcare decentralization, it is important to determine whether the benefits from decentralization are greater than its costs. The U-form and M-form framework is proposed as a useful typology to evaluate different types of institutional arrangements under healthcare decentralization. Under this model, the more decentralized organizational form (M-form) is superior if the benefits from flexibility exceed the costs of duplication and the more centralized organizational form (U-form) is superior if the savings from economies of scale outweigh the costly decision-making process from the center to the regions. Budgetary and financial autonomy and effective mechanisms to maintain local governments accountable for their spending behavior are key decision autonomy variables that could sway the cost-benefit analysis of healthcare decentralization. PMID:27694684
Bradshaw, Jonathan L; Luthy, Richard G
2017-10-17
Infrastructure systems that use stormwater and recycled water to augment groundwater recharge through spreading basins represent cost-effective opportunities to diversify urban water supplies. However, technical questions remain about how these types of managed aquifer recharge systems should be designed; furthermore, existing planning tools are insufficient for performing robust design comparisons. Addressing this need, we present a model for identifying the best-case design and operation schedule for systems that deliver recycled water to underutilized stormwater spreading basins. Resulting systems are optimal with respect to life cycle costs and water deliveries. Through a case study of Los Angeles, California, we illustrate how delivering recycled water to spreading basins could be optimally implemented. Results illustrate trade-offs between centralized and decentralized configurations. For example, while a centralized Hyperion system could deliver more recycled water to the Hansen Spreading Grounds, this system incurs approximately twice the conveyance cost of a decentralized Tillman system (mean of 44% vs 22% of unit life cycle costs). Compared to existing methods, our model allows for more comprehensive and precise analyses of cost, water volume, and energy trade-offs among different design scenarios. This model can inform decisions about spreading basin operation policies and the development of new water supplies.
PLAStiCC: Predictive Look-Ahead Scheduling for Continuous dataflows on Clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor K.
2014-05-27
Scalable stream processing and continuous dataflow systems are gaining traction with the rise of big data due to the need for processing high velocity data in near real time. Unlike batch processing systems such as MapReduce and workflows, static scheduling strategies fall short for continuous dataflows due to the variations in the input data rates and the need for sustained throughput. The elastic resource provisioning of cloud infrastructure is valuable to meet the changing resource needs of such continuous applications. However, multi-tenant cloud resources introduce yet another dimension of performance variability that impacts the application's throughput. In this paper we propose PLAStiCC, an adaptive scheduling algorithm that balances resource cost and application throughput using a prediction-based look-ahead approach. It not only addresses variations in the input data rates but also in the underlying cloud infrastructure. In addition, we also propose several simpler static scheduling heuristics that operate in the absence of an accurate performance prediction model. These static and adaptive heuristics are evaluated through extensive simulations using performance traces obtained from public and private IaaS clouds. Our results show an improvement of up to 20% in the overall profit as compared to the reactive adaptation algorithm.
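The difference between reactive and look-ahead provisioning can be shown in a few lines: a look-ahead scaler sizes the deployment for the predicted peak rate over a horizon instead of the current rate. The rates, VM capacity, and window length below are invented and the sketch is not the PLAStiCC algorithm itself.

```python
# Toy look-ahead scaling decision (illustrative; not the PLAStiCC algorithm).
import math

def vms_needed(predicted_rates, vm_capacity, lookahead):
    """Provision for the peak predicted input rate over the look-ahead window."""
    peak = max(predicted_rates[:lookahead])
    return math.ceil(peak / vm_capacity)

forecast = [800, 950, 1400, 1300, 900, 700]   # tuples/second over the next intervals
print("reactive:",   vms_needed(forecast, vm_capacity=500, lookahead=1))  # sizes for 800/s -> 2 VMs
print("look-ahead:", vms_needed(forecast, vm_capacity=500, lookahead=4))  # sizes for 1400/s -> 3 VMs
```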
Optical zone centration: a retrospective analysis of the excimer laser after three years
NASA Astrophysics Data System (ADS)
Vervecken, Filip; Trau, Rene; Mertens, Erik L.; Vanhorenbeeck, R.; Van Aerde, F.; Zen, J.; Haustrate, F.; Tassignon, Marie J.
1996-12-01
The aim of this study was to evaluate the implication of the mechanical factor 'decentration' on the visual outcome after PRK. 100 eyes of 70 patients were included. The mean decentration was 0.27 mm +/- 0.18. Decentration was less than 0.5 mm in 84 percent of the cases. The importance of decentration was investigated through the statistical correlation between decentration from the pupil center and the visual outcome. We did not find any statistically significant association for decentrations less than 1 mm. Our conclusion is that decentration, if less than 1 mm, does not play an important role in the final visual outcome after PRK.
Madduri, Ravi K.; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J.; Foster, Ian T.
2014-01-01
We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads. PMID:25342933
Balancing Officer Community Manpower through Decentralization: Granular Programming Revisited (1REV)
2017-08-01
supply-demand imbalances Economic theory identifies costs and benefits associated with decentralization. On the benefits side, decentralized decision...patterns rather than costs . Granular programming as a decentralized, market-based initiative The costs and benefits of decentralized (instead of...paygrade-specific rates were based on average MPN costs by paygrade. The benefits of this approach to granular programming are that it is conceptually
Decentralization and primary health care: some negative implications in developing countries.
Collins, C; Green, A
1994-01-01
Decentralization is a highly popular concept, being a key element of Primary Health Care policies. There are, however, certain negative implications of decentralization that must be taken into account. These are analyzed in this article with particular reference to developing countries. The authors criticize the tendency for decentralization to be associated with state limitations, and discuss the dilemma of relating decentralization, which is the enhancement of the different, to equity, which is the promotion of equivalence. Those situations in which decentralization can strengthen political domination are described. The authors conclude by setting out a checklist of warning questions and issues to be taken into account to ensure that decentralization genuinely facilitates the Primary Health Care orientation of health policy.
What supervisors want to know about decentralization.
Boissoneau, R; Belton, P
1991-06-01
Many organizations in various industries have tended to move away from strict centralization, yet some centralization is still vital to top management. With 19 of the 22 executives interviewed favoring or implementing some form of decentralization, it is probable that traditionally centralized organizations will follow the trend and begin to decentralize their organizational structures. The incentives and advantages of decentralization are too attractive to ignore. Decentralization provides responsibility, clear objectives, accountability for results, and more efficient and effective decision making. However, one must remember that decentralization can be overextended and that centralization is still viable in certain functions. Finding the correct balance between control and autonomy is a key to decentralization. Too much control and too much autonomy are the primary reasons for decentralization failures. In today's changing, competitive environment, structures must be continuously redefined, with the goal of finding an optimal balance between centralization and decentralization. Organizations are cautioned not to seek out and install a single philosopher-king to impose unified direction, but to unify leadership goals, participation, style, and control to develop improved methods of making all responsible leaders of one mind about the organization's needs and goals.
Decentralized control of units in smart grids for the support of renewable energy supply
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonnenschein, Michael, E-mail: Michael.Sonnenschein@Uni-Oldenburg.DE; Lünsdorf, Ontje, E-mail: Ontje.Luensdorf@OFFIS.DE; Bremer, Jörg, E-mail: Joerg.Bremer@Uni-Oldenburg.DE
Due to the significant environmental impact of power production from fossil fuels and nuclear fission, future energy systems will increasingly rely on distributed and renewable energy sources (RES). The electrical feed-in from photovoltaic (PV) systems and wind energy converters (WEC) varies greatly both over short and long time periods (from minutes to seasons), and (not only) by this effect the supply of electrical power from RES and the demand for electrical power are not per se matching. In addition, with a growing share of generation capacity especially in distribution grids, the top-down paradigm of electricity distribution is gradually replaced by a bottom-up power supply. This altogether leads to new problems regarding the safe and reliable operation of power grids. In order to address these challenges, the notion of Smart Grids has been introduced. The inherent flexibilities, i.e. the set of feasible power schedules, of distributed power units have to be controlled in order to support demand–supply matching as well as stable grid operation. Controllable power units are e.g. combined heat and power plants, power storage systems such as batteries, and flexible power consumers such as heat pumps. By controlling the flexibilities of these units we are particularly able to optimize the local utilization of RES feed-in in a given power grid by integrating both supply and demand management measures with special respect to the electrical infrastructure. In this context, decentralized systems, autonomous agents and the concept of self-organizing systems will become key elements of the ICT based control of power units. In this contribution, we first show how a decentralized load management system for battery charging/discharging of electrical vehicles (EVs) can increase the locally used share of supply from PV systems in a low voltage grid. For a reliable demand side management of large sets of appliances, dynamic clustering of these appliances into uniformly controlled appliance sets is necessary. We introduce a method for self-organized clustering for this purpose and show how control of such clusters can affect load peaks in distribution grids. Subsequently, we give a short overview on how we are going to expand the idea of self-organized clusters of units into creating a virtual control center for dynamic virtual power plants (DVPP) offering products at a power market. For an efficient organization of DVPPs, the flexibilities of units have to be represented in a compact and easy to use manner. We give an introduction to how the problem of representing a set of possibly 10^100 feasible schedules can be solved by a machine-learning approach. In summary, this article provides an overall impression of how we use agent based control techniques and methods of self-organization to support the further integration of distributed and renewable energy sources into power grids and energy markets. - Highlights: • Distributed load management for electrical vehicles supports local supply from PV. • Appliances can self-organize into so called virtual appliances for load control. • Dynamic VPPs can be controlled by extensively decentralized control centers. • Flexibilities of units can efficiently be represented by support-vector descriptions.
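As an illustration of the decentralized load-management idea above (each vehicle shifts its charging into the hours where local PV feed-in exceeds the baseline load), the toy sketch below greedily charges an EV in the hours with the largest predicted PV surplus. The PV forecast, baseline load, and battery figures are invented and no coordination protocol is modeled.

```python
# Toy example: schedule EV charging into the hours with the largest PV surplus.
# All numbers (forecast, load, energy need, charge rate) are invented.
def schedule_charging(pv_forecast_kw, base_load_kw, energy_needed_kwh, max_rate_kw):
    surplus = [pv - load for pv, load in zip(pv_forecast_kw, base_load_kw)]
    hours = sorted(range(len(surplus)), key=lambda h: surplus[h], reverse=True)
    plan = [0.0] * len(surplus)
    remaining = energy_needed_kwh
    for h in hours:
        if remaining <= 0:
            break
        charge = min(max_rate_kw, remaining)   # 1-hour slots
        plan[h] = charge
        remaining -= charge
    return plan

pv   = [0, 0, 1, 3, 5, 6, 5, 3, 1, 0]          # kW of PV feed-in per hour
load = [1, 1, 1, 2, 2, 2, 2, 2, 1, 1]          # kW of baseline household load
print(schedule_charging(pv, load, energy_needed_kwh=12, max_rate_kw=3.7))
```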
Cheung, Yvonne Y; Goodman, Eric M; Osunkoya, Tomiwa O
2016-01-01
Long wait times limit our ability to provide the right care at the right time and are commonly products of inefficient workflow. In 2013, the demand for musculoskeletal (MSK) procedures increased beyond our department's ability to provide efficient and timely service. We initiated a quality improvement (QI) project to increase efficiency and decrease patient time of stay. Our project team included three MSK radiologists, one senior resident, one technologist, one administrative assistant/scheduler, and the lead technologist. We adopted and followed the Lean Six Sigma DMAIC (define, measure, analyze, improve, and control) approach. The team used tools such as voice of the customer (VOC), along with affinity and SIPOC (supplier, input, process, output, customer) diagrams, to understand the current process, identify our customers, and develop a project charter in the define stage. During the measure stage, the team collected data, created a detailed process map, and identified wastes with the value stream mapping technique. Within the analyze phase, a fishbone diagram helped the team to identify critical root causes for long wait times. Scatter plots revealed relationships among time variables. Team brainstorming sessions generated improvement ideas, and selected ideas were piloted via plan, do, study, act (PDSA) cycles. The control phase continued to enable the team to monitor progress using box plots and scheduled reviews. Our project successfully decreased patient time of stay. The highly structured and logical Lean Six Sigma approach was easy to follow and provided a clear course of action with positive results. (©)RSNA, 2016.
Kolawole, Grace O.; Gilbert, Hannah N.; Dadem, Nancin Y.; Genberg, Becky L.; Agaba, Patricia A.; Okonkwo, Prosper; Agbaji, Oche O.; Ware, Norma C.
2017-01-01
Background. Decentralization of care and treatment for HIV infection in Africa makes services available in local health facilities. Decentralization has been associated with improved retention and comparable or superior treatment outcomes, but patient experiences are not well understood. Methods. We conducted a qualitative study of patient experiences in decentralized HIV care in Plateau State, north central Nigeria. Five decentralized care sites in the Plateau State Decentralization Initiative were purposefully selected. Ninety-three patients and 16 providers at these sites participated in individual interviews and focus groups. Data collection activities were audio-recorded and transcribed. Transcripts were inductively content analyzed to derive descriptive categories representing patient experiences of decentralized care. Results. Patient participants in this study experienced the transition to decentralized care as a series of “trade-offs.” Advantages cited included saving time and money on travel to clinic visits, avoiding dangers on the road, and the “family-like atmosphere” found in some decentralized clinics. Disadvantages were loss of access to ancillary services, reduced opportunities for interaction with providers, and increased risk of disclosure. Participants preferred decentralized services overall. Conclusion. Difficulty and cost of travel remain a fundamental barrier to accessing HIV care outside urban centers, suggesting increased availability of community-based services will be enthusiastically received. PMID:28331636
Bühren, Jens; Yoon, Geunyoung; MacRae, Scott; Huxlin, Krystel
2010-01-01
PURPOSE To simulate the simultaneous contribution of optical zone decentration and pupil dilation on retinal image quality using wavefront error data from a myopic photorefractive keratectomy (PRK) cat model. METHODS Wavefront error differences were obtained from five cat eyes 19±7 weeks (range: 12 to 24 weeks) after spherical myopic PRK for −6.00 diopters (D) (three eyes) and −10.00 D (two eyes). A computer model was used to simulate decentration of a 6-mm sub-aperture relative to the measured wavefront error difference. Changes in image quality (visual Strehl ratio based on the optical transfer function [VSOTF]) were computed for simulated decentrations from 0 to 1500 μm over pupil diameters of 3.5 to 6.0 mm in 0.5-mm steps. For each eye, a bivariate regression model was applied to calculate the simultaneous contribution of pupil dilation and decentration on the pre- to postoperative change of the log VSOTF. RESULTS Pupil diameter and decentration explained up to 95% of the variance of VSOTF change (adjusted R2=0.95). Pupil diameter had a higher impact on VSOTF (median β=−0.88, P<.001) than decentration (median β= −0.45, P<.001). If decentration-induced lower order aberrations were corrected, the impact of decentration further decreased (β= −0.26) compared to the influence of pupil dilation (β= −0.95). CONCLUSIONS Both pupil dilation and decentration of the optical zone affected the change of retinal image quality (VSOTF) after myopic PRK with decentration exerting a lower impact on VSOTF change. Thus, under physiological conditions pupil dilation is likely to have more effect on VSOTF change after PRK than optical zone decentration. PMID:20229950
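As a rough illustration of the kind of bivariate regression on standardized variables described above, the sketch below fits standardized coefficients (betas) for pupil diameter and decentration by ordinary least squares; the data and coefficient magnitudes are invented and do not reproduce the study's measurements.

```python
import numpy as np

np.random.seed(0)

# Hypothetical per-eye observations (not the study's data): change in log VSOTF
# as a function of pupil diameter (mm) and optical-zone decentration (micrometres).
pupil = np.array([3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 3.5, 4.5, 5.5, 6.0])
decentration = np.array([0, 250, 500, 750, 1000, 1250, 1500, 1000, 500, 250])
d_log_vsotf = -0.10 * pupil - 0.0003 * decentration + np.random.normal(0, 0.02, pupil.size)

def standardize(a):
    return (a - a.mean()) / a.std(ddof=1)

# Regressing the standardized outcome on standardized predictors yields
# standardized coefficients (betas) that are directly comparable in magnitude.
X = np.column_stack([standardize(pupil), standardize(decentration)])
y = standardize(d_log_vsotf)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print("beta(pupil diameter) = %.2f, beta(decentration) = %.2f" % tuple(betas))
```

Because both predictors and the outcome are standardized, the fitted betas can be compared directly, which is how the relative influence of pupil dilation and decentration is judged.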
Rethinking Partnerships on a Decentralized Campus
ERIC Educational Resources Information Center
Dufault, Katie H.
2017-01-01
Decentralization is an effective approach for structuring campus learning and success centers. McShane & Von Glinow (2007) describe decentralization as "an organizational model where decision authority and power are dispersed among units rather than held by a single small group of administrators" (p. 237). A decentralized structure…
Barbarito, Fulvio; Pinciroli, Francesco; Mason, John; Marceglia, Sara; Mazzola, Luca; Bonacina, Stefano
2012-08-01
Information technologies (ITs) have now entered the everyday workflow in a variety of healthcare providers with a certain degree of independence. This independence may be the cause of difficulty in interoperability between information systems and it can be overcome through the implementation and adoption of standards. Here we present the case of the Lombardy Region, in Italy, that has been able, in the last 10 years, to set up the Regional Social and Healthcare Information System, connecting all the healthcare providers within the region, and providing full access to clinical and health-related documents independently from the healthcare organization that generated the document itself. This goal, in a region with almost 10 million citizens, was achieved through a twofold approach: first, the political and operative push towards the adoption of the Health Level 7 (HL7) standard within single hospitals and, second, providing a technological infrastructure for data sharing based on interoperability specifications recognized at the regional level for messages transmitted from healthcare providers to the central domain. The adoption of such regional interoperability specifications enabled the communication among heterogeneous systems placed in different hospitals in Lombardy. Integrating the Healthcare Enterprise (IHE) integration profiles, which refer to HL7 standards, are adopted within hospitals for message exchange and for the definition of integration scenarios. The IHE patient administration management (PAM) profile with its different workflows is adopted for patient management, whereas the Scheduled Workflow (SWF), the Laboratory Testing Workflow (LTW), and the Ambulatory Testing Workflow (ATW) are adopted for order management. At present, the system manages 4,700,000 pharmacological e-prescriptions and 1,700,000 e-prescriptions for laboratory exams per month. It produces, monthly, 490,000 laboratory medical reports, 180,000 radiology medical reports, 180,000 first aid medical reports, and 58,000 discharge summaries. Hence, although work is still in progress, the Lombardy Region healthcare system is a fully interoperable social healthcare system connecting patients, healthcare providers, healthcare organizations, and healthcare professionals in a large and heterogeneous territory through the implementation of international health standards. Copyright © 2012 Elsevier Inc. All rights reserved.
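The abstract above refers to HL7-based message exchange among hospitals. The following sketch assembles a minimal HL7 v2-style message as pipe-delimited segments purely for illustration; the application and facility names, the ADT^A01 trigger, and the field contents are hypothetical and are not taken from the Lombardy Region's actual interoperability specifications.

```python
from datetime import datetime

def hl7_adt_message(patient_id, family, given, birth_date, sex):
    """Assemble a minimal, illustrative HL7 v2-style ADT^A01 message.

    Segments are pipe-delimited with '^' as component separator, following
    general HL7 v2 conventions; this is a sketch, not a validated profile.
    """
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    msh = "|".join([
        "MSH", "^~\\&",
        "DEPT_SYS", "HOSPITAL_A",        # sending application / facility (hypothetical)
        "REGIONAL_HUB", "LOMBARDY_DOMAIN",  # receiving application / facility (hypothetical)
        ts, "", "ADT^A01", "MSG00001", "P", "2.5",
    ])
    pid = "|".join([
        "PID", "1", "", patient_id, "", f"{family}^{given}", "", birth_date, sex,
    ])
    return "\r".join([msh, pid])   # HL7 v2 segments are separated by carriage returns

print(hl7_adt_message("1234567", "Rossi", "Mario", "19700101", "M"))
```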
NASA Astrophysics Data System (ADS)
Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Strawhacker, C.; Pulsifer, P. L.; Thurmes, N.
2016-12-01
The United States National Science Foundation-funded PermaData project, led by the National Snow and Ice Data Center (NSIDC) with a team from the Global Terrestrial Network for Permafrost (GTN-P), aimed to improve permafrost data access and discovery. We developed a Data Integration Tool (DIT) to significantly reduce the manual processing time needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the GTN-P. We leverage this data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets. Each widget does a specific operation, such as read, multiply by a constant, sort, plot, and write data. DIT allows the user to select and order the widgets as desired to meet their specific needs. Originally it was written to capture a scientist's personal, iterative, data manipulation and quality control process of visually and programmatically iterating through inconsistent input data, examining it to find problems, adding operations to address the problems, and rerunning until the data could be translated into the GTN-P standard format. Iterative development of this tool led first to a Fortran/Python hybrid and then, with consideration of users, licensing, version control, packaging, and workflow, to a publicly available, robust, usable application. Transitioning to Python allowed the use of open source frameworks for the workflow core and integration with a JavaScript graphical workflow interface. DIT is targeted to automatically handle 90% of the data processing for field scientists, modelers, and non-discipline scientists. It is available as an open source tool on GitHub, packaged for a subset of Mac, Windows, and UNIX systems as a desktop application with a graphical workflow manager. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (~1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
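A minimal sketch of the widget idea described above follows: each widget performs one operation (read, multiply by a constant, sort, write) and the user chains them in any order. The class names and the run_pipeline helper are hypothetical and do not correspond to DIT's actual interfaces.

```python
# Widget-style workflow sketch: each widget does one operation; the user
# selects and orders widgets to build a pipeline (names are hypothetical).
class Widget:
    def run(self, data):
        raise NotImplementedError

class Read(Widget):
    def __init__(self, path):
        self.path = path
    def run(self, _):
        with open(self.path) as f:
            return [float(line) for line in f if line.strip()]

class MultiplyByConstant(Widget):
    def __init__(self, c):
        self.c = c
    def run(self, data):
        return [x * self.c for x in data]

class Sort(Widget):
    def run(self, data):
        return sorted(data)

class Write(Widget):
    def __init__(self, path):
        self.path = path
    def run(self, data):
        with open(self.path, "w") as f:
            f.writelines(f"{x}\n" for x in data)
        return data

def run_pipeline(widgets, data=None):
    for w in widgets:            # widgets execute in the order the user selected
        data = w.run(data)
    return data

# Example: convert depths from centimetres to metres and sort them.
result = run_pipeline([MultiplyByConstant(0.01), Sort()], data=[250.0, 120.0, 430.0])
print(result)   # [1.2, 2.5, 4.3]
```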
Atkinson, Sarah; Haran, Dave
2004-01-01
OBJECTIVE: To examine whether decentralization has improved health system performance in the State of Ceara, north-east Brazil. METHODS: Ceara is strongly committed to decentralization. A survey across 45 local (municipio) health systems collected data on performance and formal organization, including decentralization, informal management and local political culture. The indicators for informal management and local political culture were based on prior ethnographic research. Data were analysed using analysis of variance, Duncan's post-hoc test and multiple regression. FINDINGS: Decentralization was associated with improved performance, but only for 5 of our 22 performance indicators. Moreover, in the multiple regression, decentralization explained the variance in only one performance indicator; indicators for informal management and political culture appeared to be more important influences. However, some indicators for informal management were themselves associated with decentralization but not any of the political culture indicators. CONCLUSION: Good management practices in the study led to decentralized local health systems rather than vice versa. Any apparent association between decentralization and performance seems to be an artefact of the informal management, and the wider political culture in which a local health system is embedded strongly influences the performance of local health systems. PMID:15640917
Comparative Perspectives on Educational Decentralization: An Exercise in Contradiction?
ERIC Educational Resources Information Center
Weiler, Hans N.
1990-01-01
It is argued that policies decentralizing the governance of educational systems, although appealing in the abstract, tend to be fundamentally ambivalent and in conflict with powerful forces favoring centralization. Tensions surrounding the issue of decentralization are discussed, with emphasis on the relationship between decentralization and…
Dwicaksono, Adenantera; Fox, Ashley M
2018-06-01
Policy Points: For more than 3 decades, international development agencies have advocated health system decentralization to improve health system performance in low- and middle-income countries. We found little rigorous evidence documenting the impact of decentralization processes on health system performance or outcomes in part due to challenges in measuring such far-reaching and multifaceted system-level changes. We propose a renewed research agenda that focuses on discrete definitions of decentralization and how institutional factors and mechanisms affect health system performance and outcomes within the general context of decentralized governance structures. Despite the widespread adoption of decentralization reforms as a means to improve public service delivery in developing countries since the 1980s, empirical evidence of the role of decentralization on health system improvement is still limited and inconclusive. This study reviewed studies published from 2000 to 2016 with adequate research designs to identify evidence on whether and how decentralization processes have impacted health systems. We conducted a systematic review of peer-reviewed journal articles from the public health and social science literature. We searched for articles within 9 databases using predefined search terms reflecting decentralization and health system constructs. Inclusion criteria were original research articles, low- and middle-income country settings, quantifiable outcome measures, and study designs that use comparisons or statistical adjustments. We excluded studies in high-income country settings and/or published in a non-English language. Sixteen studies met our prespecified inclusion and exclusion criteria and were grouped based on outcomes measured: health system inputs (n = 3), performance (n = 7), and health outcomes (n = 7). Numerous studies addressing conceptual issues related to decentralization but without any attempt at empirical estimation were excluded. Overall, we found mixed results regarding the effects of decentralization on health system indicators with seemingly beneficial effects on health system performance and health outcomes. Only 10 studies were considered to have relatively low risks of bias. This study reveals the limited empirical knowledge of the impact of decentralization on health system performance. Mixed empirical findings on the role of decentralization on health system performance and outcomes highlight the complexity of decentralization processes and their systemwide effects. Thus, we propose a renewed research agenda that focuses on discrete definitions of decentralization and how institutional factors and mechanisms affect health system performance and outcomes within the general context of decentralized governance structures. © 2018 Milbank Memorial Fund.
Theory and applications survey of decentralized control methods
NASA Technical Reports Server (NTRS)
Athans, M.
1975-01-01
A nonmathematical overview is presented of trends in the general area of decentralized control strategies which are suitable for hierarchical systems. Advances in decentralized system theory are closely related to advances in the so-called stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools pertaining to the classical stochastic control problem are outlined. Particular attention is devoted to pitfalls in the mathematical problem formulation for decentralized control. Major conclusions are that any purely deterministic approach to multilevel hierarchical dynamic systems is unlikely to lead to realistic theories or designs, that the flow of measurements and decisions in a decentralized system should not be instantaneous and error-free, and that delays in information exchange in a decentralized system lead to reasonable approaches to decentralized control. A mathematically precise notion of aggregating information is not yet available.
Remote Monitoring for Follow-up of Patients with Cardiac Implantable Electronic Devices
Morichelli, Loredana; Varma, Niraj
2014-01-01
Follow-up of patients with cardiac implantable electronic devices is challenging due to the increasing number and technical complexity of devices, coupled with the increasing clinical complexity of patients. Remote monitoring (RM) offers the opportunity to optimise clinic workflow and to improve device monitoring and patient management. Several randomised clinical trials and registries have demonstrated that RM may reduce the number of hospital visits, the time required for patient follow-up, physician and nurse time, and hospital and social costs. Furthermore, patient retention and adherence to the follow-up schedule are significantly improved by RM. Continuous wireless monitoring of data stored in the device memory, with automatic alerts, allows early detection of device malfunctions and of events requiring clinical reaction, such as atrial fibrillation, ventricular arrhythmias and heart failure. Early reaction may improve patient outcome. RM is easy to use, and patients showed a high level of acceptance and satisfaction. Implementing RM in daily practice may require changes in clinic workflow. To this purpose, new organisational models have been introduced. In spite of a favourable cost:benefit ratio, RM reimbursement still represents an issue in several European countries. PMID:26835079
Comparative LCA of decentralized wastewater treatment alternatives for non-potable urban reuse.
Opher, Tamar; Friedler, Eran
2016-11-01
Municipal wastewater (WW) effluent represents a reliable and significant source for reclaimed water, very much needed nowadays. Water reclamation and reuse has become an attractive option for conserving and extending available water sources. The decentralized approach to domestic WW treatment benefits from the advantages of source separation, which enables simple small-scale systems and on-site reuse that can be constructed on a short time schedule and occasionally upgraded with new technological developments. In this study we perform a Life Cycle Assessment to compare the environmental impacts of four alternatives for a hypothetical city's water-wastewater service system. The baseline alternative is the most common, centralized approach for WW treatment, in which WW is conveyed to and treated in a large wastewater treatment plant (WWTP) and is then discharged to a stream. The three alternatives represent different scales of distribution of the WW treatment phase, along with urban irrigation and domestic non-potable water reuse (toilet flushing). The first alternative includes centralized treatment at a WWTP, with part of the reclaimed WW (RWW) supplied back to the urban consumers. The second and third alternatives implement decentralized greywater (GW) treatment with local reuse, one at cluster level (320 households) and one at building level (40 households). Life cycle impact assessment results show a consistent disadvantage of the prevailing centralized approach under local conditions in Israel, where seawater desalination is the marginal source of water supply. The alternative of source separation and GW reuse at cluster level seems to be the most preferable one, though its environmental performance is only slightly better than GW reuse at building level. Centralized WW treatment with urban reuse of WWTP effluents is not advantageous over decentralized treatment of GW because the supply of RWW back to consumers is very costly in materials and energy. Electricity is a major driver of the impacts in most categories, pertaining mostly to potable water production and supply. Infrastructure was found to have a notable effect on metal depletion, human toxicity and freshwater and marine ecotoxicity. Sensitivity to major model parameters was analyzed. A shift to a larger share of renewable energy sources in the electricity mix results in a dramatic improvement in most impact categories. Switching to a mix of water sources, rather than the marginal source, leads to a significant reduction in most impacts. It is concluded that under the conditions tested, a decentralized approach to urban wastewater management is environmentally preferable to the common centralized system. It is worth exploring such options under different conditions as well, in cases in which new urban infrastructure is planned or replacement of old infrastructure is required. Copyright © 2016 Elsevier Ltd. All rights reserved.
Walden, Anita; Nahm, Meredith; Barnett, M. Edwina; Conde, Jose G.; Dent, Andrew; Fadiel, Ahmed; Perry, Theresa; Tolk, Chris; Tcheng, James E.; Eisenstein, Eric L.
2012-01-01
Background New data management models are emerging in multi-center clinical studies. We evaluated the incremental costs associated with decentralized vs. centralized models. Methods We developed clinical research network economic models to evaluate three data management models: centralized, decentralized with local software, and decentralized with shared database. Descriptive information from three clinical research studies served as inputs for these models. Main Outcome Measures The primary outcome was total data management costs. Secondary outcomes included: data management costs for sites, local data centers, and central coordinating centers. Results Both decentralized models were more costly than the centralized model for each clinical research study: the decentralized with local software model was the most expensive. Decreasing the number of local data centers and case book pages reduced cost differentials between models. Conclusion Decentralized vs. centralized data management in multi-center clinical research studies is associated with increases in data management costs. PMID:21335692
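A toy arithmetic illustration of why fewer local data centers and fewer case book pages shrink the cost differential between decentralized and centralized data management follows; every cost parameter is invented and the model is far simpler than the economic models used in the study.

```python
# Toy cost model (every parameter invented): the centralized model pays a
# central fixed cost plus a per-page processing cost; the decentralized model
# additionally pays a fixed cost per local data center and a slightly higher
# per-page cost for the duplicated local handling.
def data_management_costs(n_sites, pages_per_casebook, n_local_dcs,
                          central_page_cost=2.0, local_page_cost=2.5,
                          local_dc_fixed=50_000.0, central_fixed=120_000.0):
    page_volume = n_sites * pages_per_casebook
    centralized = central_fixed + page_volume * central_page_cost
    decentralized = (central_fixed + n_local_dcs * local_dc_fixed
                     + page_volume * local_page_cost)
    return centralized, decentralized

# Fewer local data centers and fewer case book pages both narrow the gap.
for n_dcs, pages in [(8, 400), (4, 400), (4, 200)]:
    c, d = data_management_costs(n_sites=40, pages_per_casebook=pages, n_local_dcs=n_dcs)
    print(f"{n_dcs} local DCs, {pages} pages/casebook: differential = {d - c:,.0f}")
```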
Networked Microgrids for Self-healing Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui
This paper proposes a transformative architecture for the normal operation and self-healing of networked microgrids (MGs). MGs can support and interchange electricity with each other in the proposed infrastructure. The networked MGs are connected by a physical common bus and a designed two-layer cyber communication network. The lower layer is within each MG where the energy management system (EMS) schedules the MG operation; the upper layer links a number of EMSs for global optimization and communication. In the normal operation mode, the objective is to schedule dispatchable distributed generators (DGs), energy storage systems (ESs) and controllable loads to minimize the operation costs and maximize the supply adequacy of each MG. When a generation deficiency or fault happens in an MG, the model switches to the self-healing mode and the local generation capacities of other MGs can be used to support the on-emergency portion of the system. A consensus algorithm is used to distribute portions of the desired power support to each individual MG in a decentralized way. The allocated portion corresponds to each MG’s local power exchange target which is used by its EMS to perform the optimal schedule. The resultant aggregated power output of networked MGs will be used to provide the requested power support. Test cases demonstrate the effectiveness of the proposed methodology.
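The sketch below illustrates the general idea of an averaging consensus among neighbouring microgrids, from which each MG derives a power-exchange target proportional to its spare capacity; the topology, capacities and step size are hypothetical and this is not the paper's actual algorithm.

```python
import numpy as np

# Average-consensus sketch (hypothetical network and capacities): each MG only
# exchanges values with its neighbours, yet all converge to the network average,
# from which each MG computes its share of a requested power support.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # line topology, 4 MGs
spare_capacity = np.array([3.0, 1.0, 2.0, 4.0])        # MW, hypothetical
requested_support = 5.0                                 # MW needed by a faulted MG

x = spare_capacity.copy()          # local consensus states
eps = 0.3                          # step size, below 1 / (max node degree)
for _ in range(200):
    x_new = x.copy()
    for i, nbrs in neighbours.items():
        x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
    x = x_new

# After convergence every MG holds (approximately) the mean spare capacity,
# so each can compute its proportional power-exchange target locally.
n = len(spare_capacity)
targets = requested_support * spare_capacity / (n * x)
print("local power-exchange targets (MW):", np.round(targets, 3))
print("sum of targets (MW):", round(targets.sum(), 3))
```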
High-throughput bioinformatics with the Cyrille2 pipeline system
Fiers, Mark WEJ; van der Burgt, Ate; Datema, Erwin; de Groot, Joost CW; van Ham, Roeland CHJ
2008-01-01
Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1) a web based, graphical user interface (GUI) that enables a pipeline operator to manage the system; 2) the Scheduler, which forms the functional core of the system and which tracks what data enters the system and determines what jobs must be scheduled for execution, and; 3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high throughput, flexible bioinformatics pipelines. PMID:18269742
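A minimal sketch of the scheduler/executor split described above is shown below: the scheduler queues a job once all of its inputs are available, and the executor runs queued jobs and publishes their outputs as new inputs. The job names and data structures are hypothetical and do not reflect Cyrille2's actual interfaces.

```python
import queue

# Scheduler/executor sketch (hypothetical names): the scheduler decides which
# jobs become runnable as their input data appears; the executor drains the queue.
job_queue = queue.Queue()

PIPELINE = [
    {"name": "blast_search", "needs": {"sequences.fasta"}},
    {"name": "parse_hits",   "needs": {"blast_search"}},
    {"name": "load_db",      "needs": {"parse_hits"}},
]

def scheduler(available, already_scheduled):
    """Queue every job whose inputs are all available and not yet scheduled."""
    for job in PIPELINE:
        if job["name"] not in already_scheduled and job["needs"] <= available:
            job_queue.put(job["name"])
            already_scheduled.add(job["name"])

def executor(available):
    """Run queued jobs; on a cluster this would submit them to compute nodes."""
    while not job_queue.empty():
        name = job_queue.get()
        print(f"executing {name}")
        available.add(name)        # the job's output becomes new input data

available, scheduled = {"sequences.fasta"}, set()
for _ in range(len(PIPELINE)):      # iterate until the pipeline drains
    scheduler(available, scheduled)
    executor(available)
```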
Using Economic Experiments to Test Electricity Policy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiesling, Lynne
2005-11-01
The industry's history of central generation, coordination, and regulation breeds a natural suspicion of whether or not decentralized coordination and a more market-based, decentralized regulatory approach can work. To see how people will behave in a decentralized environment with decentralized institutions, one must test the environment and institutions experimentally, with real people.
Educational Decentralization, Public Spending, and Social Justice in Nigeria
ERIC Educational Resources Information Center
Geo-Jaja, Macleans A.
2006-01-01
This study situates the process of educational decentralization in the narrower context of social justice. Its main object, however, is to analyze the implications of decentralization for strategies of equity and social justice in Nigeria. It starts from the premise that the early optimism that supported decentralization as an efficient and…
ERIC Educational Resources Information Center
Stinnette, Lynn J.
Administrators are looking at decentralization as a solution to issues troubling schools, teachers, and students. The notion of decentralization is accompanied by two assumptions. First, decentralization will produce an improvement in education because classroom decision making will be more responsive to the specific needs of a school. Second, in…
Decentralized Planning for Autonomous Agents Cooperating in Complex Missions
2010-09-01
The political economy of decentralization of health and social services in Canada.
Tsalikis, G
1989-01-01
A trend to decentralization in Canada's 'welfare state' has received support from the Left and from the Right. Some social critics of the Left expect decentralization to result in holistic services adjusted to local needs. Others, moreover, feel we are in the dawn of a new epoch in which major economic transformations are to bring about, through new class alliances and conflict, decentralization of power and a better quality of life in communities. These assumptions and their theoretical pitfalls are discussed here following an historical overview of the centralization/decentralization issue in Canadian social policy. It is argued that recent proposals of decentralization are a continuation of reactionary tendencies to constrain social expenditures, but not a path to better quality of life.
NASA Astrophysics Data System (ADS)
Barreiro, F. H.; Borodin, M.; De, K.; Golubkov, D.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Padolski, S.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as Grid sites, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resource topology was predefined along national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which allows multi-terabyte tasks to be run efficiently without human intervention. We have implemented a “train” model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available, and physics-group data processing and analysis to be chained with the experiment’s central production. We present an overview of the ATLAS Production System and the features and architecture of its major components: task definition, web user interface and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run 2. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte Carlo and physics-group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.
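The sketch below illustrates requirement-driven brokerage in the spirit described above: tasks are matched to whichever resource satisfies their core and memory needs rather than to a predefined site. The resource and task attributes are invented and the matching rule is far simpler than ProdSys2's actual brokerage logic.

```python
# Requirement-driven task brokerage sketch (hypothetical attributes): tasks are
# matched to any resource that satisfies their core and memory requirements.
resources = [
    {"name": "grid_site_A", "free_cores": 500,  "mem_per_core_gb": 2},
    {"name": "cloud_B",     "free_cores": 2000, "mem_per_core_gb": 4},
    {"name": "hpc_C",       "free_cores": 8000, "mem_per_core_gb": 8},
]

tasks = [
    {"name": "mc_simulation",     "cores": 4000, "mem_per_core_gb": 2},
    {"name": "data_reprocessing", "cores": 300,  "mem_per_core_gb": 4},
    {"name": "group_production",  "cores": 100,  "mem_per_core_gb": 8},
]

def assign(tasks, resources):
    plan = []
    for task in sorted(tasks, key=lambda t: t["cores"], reverse=True):
        for res in resources:
            fits = (res["free_cores"] >= task["cores"]
                    and res["mem_per_core_gb"] >= task["mem_per_core_gb"])
            if fits:
                res["free_cores"] -= task["cores"]
                plan.append((task["name"], res["name"]))
                break
        else:
            plan.append((task["name"], "pending"))  # wait for resources to free up
    return plan

for task_name, resource_name in assign(tasks, resources):
    print(f"{task_name} -> {resource_name}")
```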
ERIC Educational Resources Information Center
Williams, R. David
This study reviews the literature on public school administration and on decentralization to establish the groundwork for an analysis of the administration of a decentralized school system and its media services, discusses some of the confusion in the centralization vs. decentralization debate, and presents a heuristic study of the administration…
ERIC Educational Resources Information Center
Welsh, Thomas; McGinn, Noel F.
Decentralization is arguably one of the most important phenomena to come on to the educational planning agenda in the last 15 years. Why a country should decentralize its educational decision-making process and which decisions should be decentralized are two questions that many decision-makers raise. This booklet is intended to provide educational…
Vargas Bustamante, Arturo
2010-09-01
This study investigates the effectiveness of centralized and decentralized health care providers in rural Mexico. It compares provider performance since both centralized and decentralized providers co-exist in rural areas of the country. The data are drawn from the 2003 household survey of Oportunidades, a comprehensive study of rural families from seven states in Mexico. The analyses compare out-of-pocket health care expenditures and utilization of preventive care among rural households with access to either centralized or decentralized health care providers. This study benefits from differences in timing of health care decentralization and from a quasi-random distribution of providers. Results show that overall centralized providers perform better. Households served by this organization report less regressive out-of-pocket health care expenditures (32% lower), and observe higher utilization of preventive services (3.6% more). Decentralized providers that were devolved to state governments in the early 1980s observe a slightly better performance than providers that were decentralized in the mid-1990s. These findings are robust to decentralization timing, heterogeneity in per capita government health expenditures, state and health infrastructure effects, and other confounders. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Decentralization of health care systems and health outcomes: Evidence from a natural experiment.
Jiménez-Rubio, Dolores; García-Gómez, Pilar
2017-09-01
While many countries worldwide are shifting responsibilities for their health systems to local levels of government, there is to date insufficient evidence about the potential impact of these policy reforms. We estimate the impact of decentralization of the health services on infant and neonatal mortality using a natural experiment: the devolution of health care decision making powers to Spanish regions. The devolution was implemented gradually and asymmetrically over a twenty-year period (1981-2002). The order in which the regions were decentralized was driven by political factors and hence can be considered exogenous to health outcomes. In addition, we exploit the dynamic effect of decentralization of health services and allow for heterogeneous effects by the two main types of decentralization implemented across regions: full decentralization (political and fiscal powers) versus political decentralization only. Our difference in differences results based on a panel dataset for the 50 Spanish provinces over the period 1980 to 2010 show that the lasting benefit of decentralization accrues only to regions which enjoy almost full fiscal and political powers and which are also among the richest regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Patzer, Karl-Heinz; Ardjomand, Payam; Göhring, Katharina; Klempt, Guido; Patzelt, Andreas; Redzich, Markus; Zebrowski, Mathias; Emmerich, Susanne; Schnell, Oliver
2018-05-01
Medical practices face challenges of time and cost pressures with scarce resources. Point-of-care testing (POCT) has the potential to accelerate processes compared to central laboratory testing and can increase satisfaction of physicians, staff members, and patients. The objective of this study was to evaluate the effects of introducing HbA1c POCT in practices specialized in diabetes. Three German practices that manage 400, 550, and 950 diabetes patients per year participated in this evaluation. The workflow and required time before and after POCT implementation (device: Alere Afinion AS100 Analyzer) was evaluated in each practice. Physician (n = 5), staff (n = 9), and patient (n = 298) satisfaction was assessed with questionnaires and interviews. After POCT implementation the number of required visits scheduled was reduced by 80% (88% vs 17.6%, P < .0001), the number of venous blood collections by 75% (91% vs 23%, P < .0001). Of patients, 82% (vs 13% prior to POCT implementation) were able to discuss their HbA1c values with treating physicians immediately during their first visit ( P < .0001). In two of the practices the POCT process resulted in significant time savings of approximately 20 and 22 working days per 1000 patients per year (95% CI 2-46; 95% CI 10-44). All physicians indicated that POCT HbA1c implementation improved the practice workflow and all experienced a relief of burden for the office and the patients. All staff members indicated that they found the POCT measurement easy to perform and experienced a relief of burden. The majority (61.3%) of patients found the capillary blood collection more pleasant and 83% saw an advantage in the immediate availability of HbA1c results. The implementation of HbA1c POCT leads to an improved practice workflow and increases satisfaction of physicians, staff members and patients.
The snow system: A decentralized medical data processing system.
Bellika, Johan Gustav; Henriksen, Torje Starbo; Yigzaw, Kassaye Yitbarek
2015-01-01
Systems for large-scale reuse of electronic health record data are claimed to have the potential to transform the current health care delivery system. In principle, three alternative solutions for reuse exist: centralized, data warehouse, and decentralized solutions. This chapter focuses on the decentralized system alternative. Decentralized systems may be categorized into approaches that move data to enable computations and approaches that move computations to where the data are located. We describe a system that moves computations to where the data are located. Only this kind of decentralized solution has the capability to become an ideal system for reuse, as it enables computation on and reuse of electronic health record data without moving the information or exposing it to outsiders. This chapter describes the Snow system, a decentralized medical data processing system, its components and how it has been used. It also describes the requirements such systems need to support to become sustainable and successful in recruiting voluntary participation from health institutions.
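A minimal sketch of the "move the computation to the data" idea follows: a query function is evaluated locally at each institution and only aggregate counts leave the site. The records and the query are invented and this is not the Snow system's actual protocol.

```python
# "Move the computation to the data" sketch (hypothetical records and query):
# each institution runs the query locally and returns only an aggregate count,
# so no patient-level record leaves the site.
local_ehr_nodes = {
    "clinic_north": [{"age": 67, "dx": "influenza"}, {"age": 34, "dx": "asthma"}],
    "clinic_south": [{"age": 71, "dx": "influenza"}, {"age": 52, "dx": "influenza"}],
    "clinic_east":  [{"age": 45, "dx": "copd"}],
}

def local_query(records):
    """Runs inside the institution: count influenza cases in patients over 60."""
    return sum(1 for r in records if r["dx"] == "influenza" and r["age"] > 60)

def distributed_count(nodes, query):
    # Only the per-site aggregates (plain integers) cross institutional borders.
    partial_counts = {site: query(records) for site, records in nodes.items()}
    return sum(partial_counts.values()), partial_counts

total, per_site = distributed_count(local_ehr_nodes, local_query)
print(per_site, "total:", total)
```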
Mohammed, Abrar Juhar; Inoue, Makoto
2014-06-15
This paper posits a Modified Actor-Power-Accountability Framework (MAPAF) that makes three major improvements on the Actor-Power-Accountability Framework (APAF) developed by Agrawal and Ribot (1999). These improvements emphasize the nature of decentralized property rights, linking the outputs of decentralization with its outcomes and the inclusion of contextual factors. Applying MAPAF to analyze outputs and outcomes from two major decentralized forest policies in Ethiopia, i.e., delegation and devolution, has demonstrated the following strengths of the framework. First, by incorporating vital bundles of property rights into APAF, MAPAF creates a common ground for exploring and comparing the extent of democratization achieved by different decentralizing reforms. Second, the inclusion of social and environmental outcomes in MAPAF makes it possible to link the output of decentralization with local level outcomes. Finally, the addition of contextual factors enhances MAPAF's explanatory power by providing room for investigating exogenous factors other than democratization that contribute to the outcomes of decentralization reforms. Copyright © 2014 Elsevier Ltd. All rights reserved.
Intraocular lens design for treating high myopia based on individual eye model
NASA Astrophysics Data System (ADS)
Wang, Yang; Wang, Zhaoqi; Wang, Yan; Zuo, Tong
2007-02-01
In this research, we first design a phakic intraocular lens (PIOL) based on an individual eye model with the optical design software ZEMAX. The individual PIOL is designed to correct defocus and astigmatism, and we then compare the PIOL power calculated from the individual eye model with that from the empirical formula. The PIOL powers obtained from the individual eye model and from the formula are close, but the suggested method is more accurate and offers more functionality. The impact of PIOL decentration on the human eye is evaluated, including rotational decentration, flat-axis decentration, steep-axis decentration and axial movement of the PIOL, which is impossible with the traditional method. To control the PIOL decentration errors, we give the limit values of PIOL decentration for the specific eye in this study.
Decentralization, democratization, and health: the Philippine experiment.
Langran, Irene V
2011-01-01
In 1991, the Philippines joined a growing list of countries that reformed health planning through decentralization. Reformers viewed decentralization as a tool that would solve multiple problems, leading to more meaningful democracy and more effective health planning. Today, nearly two decades after the passage of decentralization legislation, questions about the effectiveness of the reforms persist. Inadequate financing, inequity, and a lack of meaningful participation remain challenges, in many ways mirroring broader weaknesses of Philippine democracy. These concerns pose questions regarding the nature of contemporary decentralization, democratization, and health planning and whether these three strategies are indeed mutually enforcing.
The Impact of Human-Automation Collaboration in Decentralized Multiple Unmanned Vehicle Control
2011-01-01
Operators can aid such systems by bringing their knowledge-based reasoning and experience to bear. Given a decentralized task planner and a goal-based operator interface for a network of unmanned vehicles in a search, track,…
Tediosi, Fabrizio; Gabriele, Stefania; Longo, Francesco
2009-05-01
In many European countries, since World War II, there has been a trend towards decentralization of health policy to lower levels of government, while more recently there have been re-centralization processes. Whether re-centralization will be the new paradigm of European health policy or not is difficult to say. In the Italian National Health Service (SSN), decentralization raised two related questions that might be interesting for the international debate on decentralization in health care: (a) what sort of regulatory framework and institutional balances are required to govern decentralization in health care in a heterogeneous country under tough budget constraints? (b) how can it be ensured that the most advanced parts of the country remain committed to solidarity, supporting the weakest ones? To address these questions this article describes the recent trends in SSN funding and expenditure, reviews the strategy adopted by the Italian government for governing the decentralization process, and discusses the findings to draw policy conclusions. The main lessons emerging from this experience are that: (1) when the differences in administrative and policy skills, in socio-economic standards and social capital are wide, decentralization may lead to undesirable divergent evolution paths; (2) even in decentralized systems, the role of the Central government can be very important to contain health expenditure; (3) a strong governance of the Central government may help and not hinder the enforcement of decentralization; and (4) supporting the weakest Regions and maintaining inter-regional solidarity is hard but possible. In Italy, despite an increasing role of the Central government in steering the SSN, the pattern of regional decentralization of health sector decision making does not seem at risk. Nevertheless, the Italian case confirms the complexity of decentralization and re-centralization processes, which can sometimes paradoxically reinforce each other.
Bossert, Thomas John; Mitchell, Andrew David
2011-01-01
Health sector decentralization has been widely adopted to improve delivery of health services. While many argue that the institutional capacities and mechanisms of accountability required to transform decentralized decision-making into improvements in local health systems are lacking, few empirical studies exist which measure these concepts or relate them to one another. Based on research instruments administered to a sample of 91 health sector decision-makers in 17 districts of Pakistan, this study analyzes relationships between three dimensions of decentralization: decentralized authority (referred to as "decision space"), institutional capacities, and accountability to local officials. Composite quantitative indicators of these three dimensions were constructed within four broad health functions (strategic and operational planning, budgeting, human resources management, and service organization/delivery) and on an overall/cross-function basis. Three main findings emerged. First, district-level respondents report varying degrees of each dimension despite being under a single decentralization regime and facing similar rules across provinces. Second, within dimensions of decentralization, particularly decision space and capacities, synergies exist between levels reported by respondents in one function and those reported in other functions (statistically significant coefficients of correlation ranging from ρ=0.22 to ρ=0.43). Third, synergies exist across dimensions of decentralization, particularly in terms of an overall indicator of institutional capacities (significantly correlated with both overall decision space (ρ=0.39) and accountability (ρ=0.23)). This study demonstrates that decentralization is a varied experience, with some district-level officials making greater use of decision space than others, and that those who do so also tend to have more capacity to make decisions and are held more accountable to elected local officials for such choices. These findings suggest that Pakistan's decentralization policy should focus on synergies among dimensions of decentralization to encourage more use of de jure decision space, work toward more uniform institutional capacity, and promote greater accountability to local elected officials. Copyright © 2010 Elsevier Ltd. All rights reserved.
Highways and Urban Decentralization
DOT National Transportation Integrated Search
1998-01-01
This report documents a retrospective study of the relationship between highways and urban decentralization. We see decentralization as caused largely by the increased consumption of land by residents and businesses which occurs mainly because of hig...
Cole, Charles; Krampis, Konstantinos; Karagiannis, Konstantinos; Almeida, Jonas S; Faison, William J; Motwani, Mona; Wan, Quan; Golikov, Anton; Pan, Yang; Simonyan, Vahan; Mazumder, Raja
2014-01-27
Next-generation sequencing (NGS) technologies have resulted in petabytes of scattered data, decentralized in archives, databases and sometimes in isolated hard-disks which are inaccessible for browsing and analysis. It is expected that curated secondary databases will help organize some of this Big Data, thereby allowing users to better navigate, search and compute on it. To address the above challenge, we have implemented an NGS biocuration workflow and are analyzing short read sequences and associated metadata from cancer patients to better understand the human variome. Curation of variation and other related information from control (normal tissue) and case (tumor) samples will provide comprehensive background information that can be used in genomic medicine research and application studies. Our approach includes a CloudBioLinux Virtual Machine which is used upstream of an integrated High-performance Integrated Virtual Environment (HIVE) that encapsulates the Curated Short Read archive (CSR) and a proteome-wide variation effect analysis tool (SNVDis). As a proof-of-concept, we have curated and analyzed control and case breast cancer datasets from the NCI cancer genomics program - The Cancer Genome Atlas (TCGA). Our efforts include reviewing and recording in CSR available clinical information on patients, mapping of the reads to the reference followed by identification of non-synonymous Single Nucleotide Variations (nsSNVs), and integrating the data with tools that allow analysis of the effect of nsSNVs on the human proteome. Furthermore, we have also developed a novel phylogenetic analysis algorithm that uses SNV positions and can be used to classify the patient population. The workflow described here lays the foundation for analysis of short read sequence data to identify rare and novel SNVs that are not present in dbSNP and therefore provides a more comprehensive understanding of the human variome. Variation results for single genes as well as the entire study are available from the CSR website (http://hive.biochemistry.gwu.edu/dna.cgi?cmd=csr). Availability of thousands of sequenced samples from patients provides a rich repository of sequence information that can be utilized to identify individual-level SNVs and their effect on the human proteome beyond what the dbSNP database provides.
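As a rough illustration of classifying patients from nsSNV positions, the sketch below builds a binary patient-by-variant matrix, computes Hamming distances and clusters them hierarchically; the variant calls are invented and this is ordinary hierarchical clustering, not the novel phylogenetic algorithm developed by the authors.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Invented nsSNV calls: rows are patients, columns are variant positions,
# 1 = variant present, 0 = absent.
patients = ["case_01", "case_02", "case_03", "control_01", "control_02"]
snv_matrix = np.array([
    [1, 1, 0, 1, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 1, 0, 1],
])

# Hamming distance = fraction of SNV positions at which two patients differ.
distances = pdist(snv_matrix, metric="hamming")
tree = linkage(distances, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")

for patient, cluster_id in zip(patients, labels):
    print(patient, "-> cluster", cluster_id)
```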
Peeling the Onion: Why Centralized Control / Decentralized Execution Works
2014-04-01
Docauer, Alan. Air & Space Power Journal, March–April 2014. What is centralized control / decentralized execution? Emerging in the aftermath of…
Deelman, E.; Callaghan, S.; Field, E.; Francoeur, H.; Graves, R.; Gupta, N.; Gupta, V.; Jordan, T.H.; Kesselman, C.; Maechling, P.; Mehringer, J.; Mehta, G.; Okaya, D.; Vahi, K.; Zhao, L.
2006-01-01
This paper discusses the process of building an environment where large-scale, complex, scientific analysis can be scheduled onto a heterogeneous collection of computational and storage resources. The example application is the Southern California Earthquake Center (SCEC) CyberShake project, an analysis designed to compute probabilistic seismic hazard curves for sites in the Los Angeles area. We explain which software tools were used to build the system and describe their functionality and interactions. We show the results of running the CyberShake analysis that included over 250,000 jobs using resources available through SCEC and the TeraGrid. © 2006 IEEE.
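For readers unfamiliar with the quantity being computed, the toy sketch below builds a probabilistic seismic hazard curve from two hypothetical events with lognormal ground-motion distributions, converting annual exceedance rates to a 50-year exceedance probability under a Poisson assumption; all numbers are invented and the actual CyberShake calculation relies on large-scale physics-based simulations.

```python
import numpy as np
from scipy.stats import lognorm

# Toy hazard curve (all numbers invented): for each hypothetical event,
# rate * P(ground motion > x) is summed into an annual exceedance rate,
# then converted to a 50-year exceedance probability (Poisson assumption).
events = [
    {"annual_rate": 0.010, "median_g": 0.15, "sigma_ln": 0.6},
    {"annual_rate": 0.002, "median_g": 0.40, "sigma_ln": 0.6},
]

x = np.linspace(0.01, 1.0, 100)               # spectral acceleration levels (g)
annual_rate_exceed = sum(
    ev["annual_rate"] * lognorm.sf(x, s=ev["sigma_ln"], scale=ev["median_g"])
    for ev in events
)
prob_exceed_50yr = 1.0 - np.exp(-annual_rate_exceed * 50.0)

for level in (0.1, 0.3, 0.5):
    p = np.interp(level, x, prob_exceed_50yr)
    print(f"P(SA > {level:.1f} g in 50 yr) = {p:.3f}")
```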
Taking stock of decentralized disaster risk reduction in Indonesia
NASA Astrophysics Data System (ADS)
Grady, Anthony; Gersonius, Berry; Makarigakis, Alexandros
2016-09-01
The Sendai Framework, which outlines the global course on disaster risk reduction until 2030, places strong importance on the role of local government in disaster risk reduction. An aim of decentralization is to increase the influence and authority of local government in decision making. Yet, there is limited empirical evidence of the extent, character and effects of decentralization in current disaster risk reduction implementation, and of the barriers that are most critical to this. This paper evaluates decentralization in relation to disaster risk reduction in Indonesia, chosen for its recent actions to decentralize governance of DRR coupled with a high level of disaster risk. An analytical framework was developed to evaluate the various dimensions of decentralized disaster risk reduction, which necessitated the use of a desk study, semi-structured interviews and a gap analysis. Key barriers to implementation in Indonesia included: capacity gaps at lower institutional levels, low compliance with legislation, disconnected policies, issues in communication and coordination, and inadequate resourcing. However, none of these barriers is unique to disaster risk reduction, and similar barriers have been observed for decentralization in other public sectors in other developing countries.
Dynamic Centralized and Decentralized Control Systems
DOT National Transportation Integrated Search
1977-09-01
This report develops a systematic method for designing suboptimal decentralized control systems. The method is then applied to the design of a decentralized controller for a freeway-corridor system. A freeway corridor is considered to be a system of ...
[Analysis of the healthcare service decentralization process in Côte d'Ivoire].
Soura, B D; Coulibaly, S S
2014-01-01
The decentralization of healthcare services is becoming increasingly important in strategies of public sector management. This concept is analyzed from various points of view, including legal, economic, political, and sociological. Several typologies have been proposed in the literature to analyze this decentralization process, which can take different forms ranging from simple deconcentration to more elaborate devolution. In some instances, decentralization can be analyzed by the degree of autonomy given to local authorities. This article applies these typologies to analyze the healthcare system decentralization process in Côte d'Ivoire. Special attention is paid to the new forms of community healthcare organizations. These decentralized structures enjoy a kind of autonomy, with characteristics closer to those of devolution. The model might serve as an example for population involvement in defining and managing healthcare problems in Côte d'Ivoire. We end with proposals for the improvement of the process.
A comparison of decentralized, distributed, and centralized vibro-acoustic control.
Frampton, Kenneth D; Baumann, Oliver N; Gardonio, Paolo
2010-11-01
Direct velocity feedback control of structures is well known to increase structural damping and thus reduce vibration. In multi-channel systems the way in which the velocity signals are used to inform the actuators ranges from decentralized control, through distributed or clustered control to fully centralized control. The objective of distributed controllers is to exploit the anticipated performance advantage of the centralized control while maintaining the scalability, ease of implementation, and robustness of decentralized control. However, and in seeming contradiction, some investigations have concluded that decentralized control performs as well as distributed and centralized control, while other results have indicated that distributed control has significant performance advantages over decentralized control. The purpose of this work is to explain this seeming contradiction in results, to explore the effectiveness of decentralized, distributed, and centralized vibro-acoustic control, and to expand the concept of distributed control to include the distribution of the optimization process and the cost function employed.
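A minimal simulation sketch of decentralized direct velocity feedback follows, using a two-mass spring system in which each actuator is driven only by its collocated velocity sensor, so the control law adds damping and the vibration energy decays. All parameters are hypothetical.

```python
import numpy as np

# Decentralized direct velocity feedback on a two-mass spring system
# (ground--m1--m2--ground, all parameters hypothetical): each actuator
# sees only its collocated velocity, u_i = -g * v_i.
m, k, g = 1.0, 100.0, 4.0          # mass, spring stiffness, feedback gain
dt, steps = 1e-3, 5000

x = np.array([0.01, -0.01])        # initial displacements (m)
v = np.zeros(2)                    # initial velocities (m/s)

def total_energy(x, v):
    return 0.5 * m * (v @ v) + 0.5 * k * (x @ x + (x[0] - x[1]) ** 2)

e0 = total_energy(x, v)
for _ in range(steps):
    u = -g * v                                          # decentralized control law
    f = np.array([-k * x[0] - k * (x[0] - x[1]) + u[0],
                  -k * x[1] - k * (x[1] - x[0]) + u[1]])
    v = v + dt * f / m                                  # semi-implicit Euler step
    x = x + dt * v

print("vibration energy: %.2e J -> %.2e J" % (e0, total_energy(x, v)))
```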
The Effect of Fiscal Decentralization on Under-five Mortality in Iran: A Panel Data Analysis.
Samadi, Ali Hussein; Keshtkaran, Ali; Kavosi, Zahra; Vahedi, Sajad
2013-11-01
Fiscal Decentralization (FD) in many cases is encouraged as a strong means of improving the efficiency and equity in the provision of public goods, such as healthcare services. This issue has urged the researchers to experimentally examine the relationship between fiscal decentralization indicators and health outcomes. In this study we examine the effect of Fiscal Decentralization in Medical Universities (FDMU) and Fiscal Decentralization in Provincial Revenues (FDPR) on Under-Five Mortality Rate (U5M) in provinces of Iran over the period between 2007 and 2010. We employed panel data methods in this article. The results of the Pesaran CD test demonstrated that most of the variables used in the analysis were cross-sectionally dependent. The Hausman test results suggested that fixed-effects were more appropriate to estimate our model. We estimated the fixed-effect model by using Driscoll-Kraay standard errors as a remedy for cross-sectional dependency. According to the findings of this research, fiscal decentralization in the health sector had a negative impact on U5M. On the other hand, fiscal decentralization in provincial revenues had a positive impact on U5M. In addition, U5M had a negative association with the density of physicians, hospital beds, and provincial GDP per capita, but a positive relationship with Gini coefficient and unemployment. The findings of our study indicated that fiscal decentralization should be emphasized in the health sector. The results suggest the need for caution in the implementation of fiscal decentralization in provincial revenues.
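The sketch below illustrates the within (fixed-effects) transformation on an invented province-year panel; the Driscoll-Kraay correction for cross-sectional dependence is omitted, so it only shows how the point estimate is obtained.

```python
import numpy as np

# Fixed-effects (within) estimator on an invented province-year panel:
# demeaning by province removes time-invariant provincial heterogeneity.
rng = np.random.default_rng(1)
n_provinces, n_years = 10, 4
province = np.repeat(np.arange(n_provinces), n_years)

fiscal_decent = rng.uniform(0, 1, n_provinces * n_years)      # FD indicator
province_effect = rng.normal(0, 2, n_provinces)[province]      # unobserved heterogeneity
u5m = 30 - 5 * fiscal_decent + province_effect + rng.normal(0, 0.5, province.size)

def demean_by_group(values, groups):
    """Subtract each province's mean (the within transformation)."""
    out = values.astype(float).copy()
    for gr in np.unique(groups):
        out[groups == gr] -= values[groups == gr].mean()
    return out

y_w = demean_by_group(u5m, province)
x_w = demean_by_group(fiscal_decent, province)
beta_fe = (x_w @ y_w) / (x_w @ x_w)
print("within estimate of the FD effect on U5M: %.2f (true value -5)" % beta_fe)
```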
Fleming, Neil S; Becker, Edmund R; Culler, Steven D; Cheng, Dunlei; McCorkle, Russell; da Graca, Briget; Ballard, David J
2014-02-01
To estimate a commercially available ambulatory electronic health record's (EHR's) impact on workflow and financial measures. Administrative, payroll, and billing data were collected for 26 primary care practices in a fee-for-service network that rolled out an EHR on a staggered schedule from June 2006 through December 2008. An interrupted time series design was used. Staffing, visit intensity, productivity, volume, practice expense, payments received, and net income data were collected monthly for 2004-2009. Changes were evaluated 1-6, 7-12, and >12 months postimplementation. Data were accessed through a SQL Server database, transformed into SAS®, and aggregated by practice. Practice-level data were divided by full-time physician equivalents for comparisons across practices by month. Staffing and practice expenses increased following EHR implementation (3 and 6 percent after 12 months). Productivity, volume, and net income decreased initially but recovered to, or close to, preimplementation levels after 12 months. Visit intensity did not change significantly, and a secular trend offset the decrease in payments received. Expenses increased and productivity decreased following EHR implementation, but not as much or as persistently as might be expected. Longer-term effects still need to be examined. © Health Research and Educational Trust.
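The interrupted time series design mentioned above is commonly implemented as a segmented regression with a level change and a slope change at the implementation date. The sketch below shows that form under assumed column names and synthetic data; it is not the authors' model specification.

```python
# A minimal sketch of an interrupted time series (segmented regression) analysis.
# The go-live month, column names, and data below are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = np.arange(72)                          # 2004-2009, monthly
post = (months >= 30).astype(int)               # hypothetical EHR go-live at month 30
months_since = np.where(post == 1, months - 30, 0)
df = pd.DataFrame({
    "month": months, "post": post, "months_since": months_since,
    # net income per FTE physician: level drop at go-live, gradual recovery afterwards
    "net_income": 100 + 0.2 * months - 8 * post + 0.5 * months_since + rng.normal(0, 2, 72),
})

# 'post' captures the immediate level change; 'months_since' the change in trend.
fit = smf.ols("net_income ~ month + post + months_since", data=df).fit()
print(fit.params)
```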
Disturbance decoupling, decentralized control and the Riccati equation
NASA Technical Reports Server (NTRS)
Garzia, M. R.; Loparo, K. A.; Martin, C. F.
1981-01-01
The disturbance decoupling and optimal decentralized control problems are looked at using identical mathematical techniques. A statement of the problems and the development of their solution approach is presented. Preliminary results are given for the optimal decentralized control problem.
Influencing Trust for Human-Automation Collaborative Scheduling of Multiple Unmanned Vehicles.
Clare, Andrew S; Cummings, Mary L; Repenning, Nelson P
2015-11-01
We examined the impact of priming on operator trust and system performance when supervising a decentralized network of heterogeneous unmanned vehicles (UVs). Advances in autonomy have enabled a future vision of single-operator control of multiple heterogeneous UVs. Real-time scheduling for multiple UVs in uncertain environments requires the computational ability of optimization algorithms combined with the judgment and adaptability of human supervisors. Because of system and environmental uncertainty, appropriate operator trust will be instrumental to maintain high system performance and prevent cognitive overload. Three groups of operators experienced different levels of trust priming prior to conducting simulated missions in an existing, multiple-UV simulation environment. Participants who play computer and video games frequently were found to have a higher propensity to overtrust automation. By priming gamers to lower their initial trust to a more appropriate level, system performance was improved by 10% as compared to gamers who were primed to have higher trust in the automation. Priming was successful at adjusting the operator's initial and dynamic trust in the automated scheduling algorithm, which had a substantial impact on system performance. These results have important implications for personnel selection and training for futuristic multi-UV systems under human supervision. Although gamers may bring valuable skills, they may also be potentially prone to automation bias. Priming during training and regular priming throughout missions may be one potential method for overcoming this propensity to overtrust automation. © 2015, Human Factors and Ergonomics Society.
Decentralized Quasi-Newton Methods
NASA Astrophysics Data System (ADS)
Eisen, Mark; Mokhtari, Aryan; Ribeiro, Alejandro
2017-05-01
We introduce the decentralized Broyden-Fletcher-Goldfarb-Shanno (D-BFGS) method as a variation of the BFGS quasi-Newton method for solving decentralized optimization problems. The D-BFGS method is of interest in problems that are not well conditioned, making first order decentralized methods ineffective, and in which second order information is not readily available, making second order decentralized methods impossible. D-BFGS is a fully distributed algorithm in which nodes approximate curvature information of themselves and their neighbors through the satisfaction of a secant condition. We additionally provide a formulation of the algorithm in asynchronous settings. Convergence of D-BFGS is established formally in both the synchronous and asynchronous settings and strong performance advantages relative to first order methods are shown numerically.
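The secant condition at the heart of any BFGS-type method can be illustrated with the classical (centralized) inverse-Hessian update; the decentralized, per-node variant described in the abstract, in which each node uses only its own and its neighbors' information, is not reproduced here.

```python
# A minimal sketch of the classical BFGS inverse-Hessian update, which enforces the
# secant condition H_new @ y = s. D-BFGS (per the abstract) maintains such an
# approximation locally at each node; that distributed bookkeeping is not shown.
import numpy as np

def bfgs_update(H, s, y):
    """Update inverse-Hessian approximation H given step s = x_new - x_old
    and gradient difference y = g_new - g_old."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Quick check on a quadratic f(x) = 0.5 x'Ax: the updated H satisfies H @ y = s.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
x0, x1 = np.array([1.0, 1.0]), np.array([0.2, 0.5])
s, y = x1 - x0, A @ x1 - A @ x0
H = bfgs_update(np.eye(2), s, y)
print(np.allclose(H @ y, s))   # True: secant condition holds
```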
Decentralization can help reduce deforestation when user groups engage with local government.
Wright, Glenn D; Andersson, Krister P; Gibson, Clark C; Evans, Tom P
2016-12-27
Policy makers around the world tout decentralization as an effective tool in the governance of natural resources. Despite the popularity of these reforms, there is limited scientific evidence on the environmental effects of decentralization, especially in tropical biomes. This study presents evidence on the institutional conditions under which decentralization is likely to be successful in sustaining forests. We draw on common-pool resource theory to argue that the environmental impact of decentralization hinges on the ability of reforms to engage local forest users in the governance of forests. Using matching techniques, we analyze longitudinal field observations on both social and biophysical characteristics in a large number of local government territories in Bolivia (a country with a decentralized forestry policy) and Peru (a country with a much more centralized forestry policy). We find that territories with a decentralized forest governance structure have more stable forest cover, but only when local forest user groups actively engage with the local government officials. We provide evidence in support of a possible causal process behind these results: When user groups engage with the decentralized units, it creates a more enabling environment for effective local governance of forests, including more local government-led forest governance activities, fora for the resolution of forest-related conflicts, intermunicipal cooperation in the forestry sector, and stronger technical capabilities of the local government staff.
Hamood, Albert W.; Haddad, Sara A.; Otopalik, Adriane G.; Rosenbaum, Philipp
2015-01-01
The crustacean stomatogastric ganglion (STG) receives descending neuromodulatory inputs from three anterior ganglia: the paired commissural ganglia (CoGs), and the single esophageal ganglion (OG). In this paper, we provide the first detailed and quantitative analyses of the short- and long-term effects of removal of these descending inputs (decentralization) on the pyloric rhythm of the STG. Thirty minutes after decentralization, the mean frequency of the pyloric rhythm dropped from 1.20 Hz in control to 0.52 Hz. Whereas the relative phase of pyloric neuron activity was approximately constant across frequency in the controls, after decentralization this changed markedly. Nine control preparations kept for 5–6 d in vitro maintained pyloric rhythm frequencies close to their initial values. Nineteen decentralized preparations kept for 5–6 d dropped slightly in frequency from those seen at 30 min following decentralization, but then displayed stable activity over 6 d. Bouts of higher frequency activity were intermittently seen in both control and decentralized preparations, but the bouts began earlier and were more frequent in the decentralized preparations. Although the bouts may indicate that the removal of the modulatory inputs triggered changes in neuronal excitability, these changes did not produce obvious long-lasting changes in the frequency of the decentralized preparations. PMID:25914899
Decentralization can help reduce deforestation when user groups engage with local government
Wright, Glenn D.; Gibson, Clark C.; Evans, Tom P.
2016-01-01
Policy makers around the world tout decentralization as an effective tool in the governance of natural resources. Despite the popularity of these reforms, there is limited scientific evidence on the environmental effects of decentralization, especially in tropical biomes. This study presents evidence on the institutional conditions under which decentralization is likely to be successful in sustaining forests. We draw on common-pool resource theory to argue that the environmental impact of decentralization hinges on the ability of reforms to engage local forest users in the governance of forests. Using matching techniques, we analyze longitudinal field observations on both social and biophysical characteristics in a large number of local government territories in Bolivia (a country with a decentralized forestry policy) and Peru (a country with a much more centralized forestry policy). We find that territories with a decentralized forest governance structure have more stable forest cover, but only when local forest user groups actively engage with the local government officials. We provide evidence in support of a possible causal process behind these results: When user groups engage with the decentralized units, it creates a more enabling environment for effective local governance of forests, including more local government-led forest governance activities, fora for the resolution of forest-related conflicts, intermunicipal cooperation in the forestry sector, and stronger technical capabilities of the local government staff. PMID:27956644
Strategies of Educational Decentralization: Key Questions and Core Issues.
ERIC Educational Resources Information Center
Hanson, E. Mark
1998-01-01
Explains key issues and forces that shape organization and management strategies of educational decentralization, using examples from Colombia, Venezuela, Argentina, Nicaragua, and Spain. Core decentralization issues include national and regional goals, planning, political stress, resource distribution, infrastructure development, and job…
Bossert, Thomas J; Bowser, Diana M; Amenyah, Johnnie K
2007-03-01
Efficient logistics systems move essential medicines down the supply chain to the service delivery point, and then to the end user. Experts on logistics systems tend to see the supply chain as requiring centralized control to be most effective. However, many health reforms have involved decentralization, which experts fear has disrupted the supply chain and made systems less effective. There is no consensus on an appropriate methodology for assessing the effectiveness of decentralization in general, and only a few studies have attempted to address decentralization of logistics systems. This paper sets out a framework and methodology of a pioneering exploratory study that examines the experiences of decentralization in two countries, Guatemala and Ghana, and presents suggestive results of how decentralization affected the performance of their logistics systems. The analytical approach assessed decentralization using the principal author's 'decision space' approach, which defines decentralization as the degree of choice that local officials have over different health system functions. In this case the approach focused on 15 different logistics functions and measured the relationship between the degree of choice and indicators of performance for each of the functions. The results of both studies indicate that less choice (i.e. more centralized) was associated with better performance for two key functions (inventory control and information systems), while more choice (i.e. more decentralized) over planning and budgeting was associated with better performance. With different systems of procurement in Ghana and Guatemala, we found that a system with some elements of procurement that are centralized (selection of firms and prices fixed by national tender) was positively related to performance in Guatemala but negatively related in Ghana, where a system of 'cash and carry' cost recovery allowed more local choice. The authors conclude that logistics systems can be effectively decentralized for some functions while others should remain centralized. These preliminary findings, however, should be confirmed using alternative methodologies.
An Identity Based Key Exchange Protocol in Cloud Computing
NASA Astrophysics Data System (ADS)
Molli, Venkateswara Rao; Tiwary, Omkar Nath
2012-10-01
Workflow systems often use delegation to enhance the flexibility of authorization; delegation transfers privileges among users across different administrative domains and facilitates information sharing. We present an independently verifiable delegation mechanism, where a delegation credential can be verified without the participation of domain administrators. This protocol, called role-based cascaded delegation (RBCD), supports simple and efficient cross-domain delegation of authority. RBCD enables a role member to create delegations based on the dynamic needs of collaboration; in the meantime, a delegation chain can be verified by anyone without the participation of role administrators. We also propose a Measurable Risk Adaptive decentralized Role-based Delegation framework to manage the risk that such delegation introduces. We describe an efficient realization of RBCD using aggregate signatures, where the authentication information for an arbitrarily long role-based delegation chain is captured by one short signature of constant size. The protocol is general and can be realized by any signature scheme. We also describe a specific realization with a hierarchical certificate-based encryption scheme that yields compact delegation credentials.
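The idea of a delegation chain that anyone can verify without contacting domain administrators can be sketched as follows. The paper's realization uses aggregate signatures to compress the whole chain into one constant-size signature; this sketch instead keeps one ordinary Ed25519 signature per hop, an assumption made purely for illustration.

```python
# A minimal sketch of cascaded delegation with independent verification. Each delegator
# signs (privilege, delegatee, previous link's signature); anyone holding the chain can
# verify it with public keys only. This is NOT the paper's aggregate-signature scheme.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def raw(pub):
    return pub.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)

def delegate(chain, delegator_key, delegatee_pub, privilege: bytes):
    """Append a delegation link signed by the delegator, bound to the previous link."""
    prev_sig = chain[-1][2] if chain else b""
    payload = privilege + raw(delegatee_pub) + prev_sig
    return chain + [(delegator_key.public_key(), payload, delegator_key.sign(payload))]

def verify_chain(chain):
    """Verify every hop without contacting any domain or role administrator."""
    for delegator_pub, payload, sig in chain:
        delegator_pub.verify(sig, payload)   # raises InvalidSignature on failure
    return True

# Role member A delegates a privilege to B, who re-delegates to C.
a, b, c = (Ed25519PrivateKey.generate() for _ in range(3))
chain = delegate([], a, b.public_key(), b"approve-order")
chain = delegate(chain, b, c.public_key(), b"approve-order")
print(verify_chain(chain))
```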
Linear time-invariant controller design for two-channel decentralized control systems
NASA Technical Reports Server (NTRS)
Desoer, Charles A.; Gundes, A. Nazli
1987-01-01
This paper analyzes a linear time-invariant two-channel decentralized control system with a 2 x 2 strictly proper plant. It presents an algorithm for the algebraic design of a class of decentralized compensators which stabilize the given plant.
Household Schooling Behaviors and Decentralization.
ERIC Educational Resources Information Center
Behrman, Jere R.; King, Elizabeth M.
2001-01-01
Presents a simple framework for (1) demonstrating how households determine schooling investments through choice and voice; and (2) considering effects of decentralization on household behaviors, given information problems. Some aspects of decentralization may increase efficiency; others may be neutral or decrease efficiency. Further research is…
Leadership in Decentralized Schools.
ERIC Educational Resources Information Center
Madsen, Jean
1997-01-01
Summarizes a study that examined principals' leadership in three private schools and its implications for decentralized public schools. With the increase of charter and privatized managed schools, principals will need to redefine their leadership styles. Private schools, as decentralized entities, offer useful perspectives on developing school…
Control and stabilization of decentralized systems
NASA Technical Reports Server (NTRS)
Byrnes, Christopher I.; Gilliam, David; Martin, Clyde F.
1989-01-01
Proceeding from the problem posed by the need to stabilize the motion of two helicopters maneuvering a single load, a methodology is developed for the stabilization of classes of decentralized systems based on a more algebraic approach, which involves the external symmetries of decentralized systems. Stabilizing local-feedback laws are derived for any class of decentralized systems having a semisimple algebra of symmetries; the helicopter twin-lift problem, as well as certain problems involving the stabilization of discretizations of distributed parameter problems, have just such algebras of symmetries.
2017-04-28
This excerpt addresses USAF command and control challenges in anti-access/area-denial (A2/AD) environments, proposing a shift from the traditional framework of "Centralized Control, Decentralized Execution" to a three-part framework of "Centralized Command, Distributed Control, and Decentralized Execution" (CC-DC-DE), in which the Regional Air Component Commander serves as the Leader and a distributed Theater Air Control System serves as the System.
Anokbonggo, W W; Ogwal-Okeng, J W; Ross-Degnan, D; Aupont, O
2004-02-01
In Uganda, the decentralization of administrative functions, management, and responsibility for health care to districts, which began in 1994, resulted in fundamental changes in health care delivery. Since the introduction of the policy in Uganda, little information has been available on stakeholders' perceptions about the benefits of the policy and how decentralization affected health care delivery. The aim was to identify the perceptions and beliefs of key stakeholders on the impact and process of decentralization and on the operations of health services in two districts in Uganda, and to report their suggestions to improve future implementation of similar policies. We used qualitative research methods that included focus group discussions with 90 stakeholders from both study districts. The sample comprised 12 health workers from the two hospitals, 11 district health administrators, and 67 Local Council Leaders. The main outcomes of interest were the perceptions and concerns of stakeholders about the impact of decentralization on district health services. There was a general consensus that decentralization empowered local administrative and political decision-making. Among stakeholders, the policy was perceived to have created a sense of ownership and responsibility. Major problems that were said to be associated with decentralization included political harassment of civil servants, increased nepotism, inadequate financial resources, and mismanagement of resources. This study elicited perceptions about critical factors upon which successful implementation of the decentralization policy depended. These included: appreciation of the role of all stakeholders by district politicians; adequate availability and efficient utilization of resources; reasonably developed infrastructure prior to the policy change; appropriate sensitisation and training of those implementing policies; and the good will and active involvement of the local community. In the absence of these factors, implementation of decentralization of services to districts may not immediately make economic and administrative sense.
On decentralized design: Rationale, dynamics, and effects on decision-making
NASA Astrophysics Data System (ADS)
Chanron, Vincent
The focus of this dissertation is the design of complex systems, including engineering systems such as cars, airplanes, and satellites. Companies that design these systems are under constant pressure to design better products that meet customer expectations, and competition forces them to develop them faster. One of the responses of the industry to these conflicting challenges has been the decentralization of design responsibilities. The current lack of understanding of the dynamics of decentralized design processes is the main motivation for this research, and places value on the descriptive base. It identifies the main reasons and the true benefits for companies to decentralize the design of their products. It also demonstrates the limitations of this approach by listing the relevant issues and problems created by the decentralization of decisions. Based on these observations, a game-theoretic approach to decentralized design is proposed to model the decisions made during the design process. The dynamics are modeled using mathematical formulations inspired by control theory. Building upon this formalism, the issue of convergence in decentralized design is analyzed: the equilibrium points of the design space are identified and convergent and divergent patterns are recognized. This rigorous investigation of the design process provides motivation and support for proposing new approaches to decentralized design problems. Two methods are developed, which aim at improving the design process in two ways: decreasing the product development time, and increasing the optimality of the final design. These methods draw on eigenstructure decomposition and set-based design, respectively. The value of the research detailed within this dissertation is in the proposed methods, which are built upon the sound mathematical formalism developed. The contribution of this work is twofold: rigorous investigation of the design process, and practical support for decision-making in decentralized environments.
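One way to picture the control-theoretic treatment of convergence is a linear best-response iteration between two design teams, which converges exactly when the spectral radius of the coupling matrix is below one. The matrices and values below are made-up illustrations, not the dissertation's formulation.

```python
# An illustrative sketch of decentralized design as a fixed-point iteration
# x_{k+1} = A x_k + b: each team updates its design variable using the other team's
# latest decision. Convergence holds iff the spectral radius of A is below one.
import numpy as np

A = np.array([[0.0, 0.6],
              [0.4, 0.0]])      # how strongly each team's choice depends on the other's
b = np.array([1.0, 2.0])

print("spectral radius:", max(abs(np.linalg.eigvals(A))))   # < 1 here, so the loop converges

x = np.zeros(2)
for _ in range(50):
    x = A @ x + b                # alternating best responses
print("converged design variables:", x)
print("fixed point (direct solve):", np.linalg.solve(np.eye(2) - A, b))
```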
Educational decentralization, public spending, and social justice in Nigeria
NASA Astrophysics Data System (ADS)
Geo-Jaja, Macleans A.
2007-01-01
This study situates the process of educational decentralization in the narrower context of social justice. Its main object, however, is to analyze the implications of decentralization for strategies of equity and social justice in Nigeria. It starts from the premise that the early optimism that supported decentralization as an efficient and effective educational reform tool has been disappointed. The author maintains that decentralization — on its own — cannot improve education service delivery, the capacities of subordinate governments, or the integration of social policy in broader development goals. If the desired goals are to be met, public spending must be increased, greater tax revenues must be secured, and macro-economic stabilization must be achieved without re-instituting the welfare state.
Multi-level meta-workflows: new concept for regularly occurring tasks in quantum chemistry.
Arshad, Junaid; Hoffmann, Alexander; Gesing, Sandra; Grunzke, Richard; Krüger, Jens; Kiss, Tamas; Herres-Pawlis, Sonja; Terstyanszky, Gabor
2016-01-01
In Quantum Chemistry, many tasks recur frequently, e.g. geometry optimizations, benchmarking series, etc. Here, workflows can help to reduce the time of manual job definition and output extraction. These workflows are executed on computing infrastructures and may require large computing and data resources. Scientific workflows hide these infrastructures and the resources needed to run them. It requires significant efforts and specific expertise to design, implement and test these workflows. Many of these workflows are complex and monolithic entities that can be used for particular scientific experiments. Hence, their modification is not straightforward, which makes it almost impossible to share them. To address these issues, we propose developing atomic workflows and embedding them in meta-workflows. Atomic workflows deliver a well-defined, research-domain-specific function. Publishing workflows in repositories enables workflow sharing inside and/or among scientific communities. We formally specify atomic and meta-workflows in order to define data structures to be used in repositories for uploading and sharing them. Additionally, we present a formal description focused on the orchestration of atomic workflows into meta-workflows. We investigated the operations that represent basic functionalities in Quantum Chemistry, developed the relevant atomic workflows and combined them into meta-workflows. Having these workflows, we defined the structure of the Quantum Chemistry workflow library and uploaded these workflows to the SHIWA Workflow Repository. Graphical Abstract: Meta-workflows and embedded workflows in the template representation.
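The atomic-workflow/meta-workflow idea can be sketched as a small data structure in which each atomic workflow declares its inputs and outputs and a meta-workflow runs them in dependency order. The names and steps below are illustrative assumptions, not the paper's formal specification or the SHIWA repository schema.

```python
# A minimal sketch: atomic workflows expose named inputs/outputs; a meta-workflow wires
# them together and executes whichever steps have their inputs available.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class AtomicWorkflow:
    name: str
    inputs: List[str]
    outputs: List[str]
    run: Callable[[Dict[str, object]], Dict[str, object]]

@dataclass
class MetaWorkflow:
    steps: List[AtomicWorkflow]

    def execute(self, data: Dict[str, object]) -> Dict[str, object]:
        pending = list(self.steps)
        while pending:                       # simple data-driven ordering
            ready = [s for s in pending if all(k in data for k in s.inputs)]
            if not ready:
                raise RuntimeError("unsatisfied inputs: " + str([s.name for s in pending]))
            for step in ready:
                data.update(step.run(data))
                pending.remove(step)
        return data

# e.g. a geometry optimization followed by a benchmarking step (illustrative only)
geom_opt = AtomicWorkflow("geometry_optimization", ["structure"], ["optimized"],
                          lambda d: {"optimized": d["structure"] + "_opt"})
benchmark = AtomicWorkflow("benchmark", ["optimized"], ["energy"],
                           lambda d: {"energy": -42.0})
print(MetaWorkflow([benchmark, geom_opt]).execute({"structure": "Cu_complex"}))
```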
Decentralized Decision Making Toward Educational Goals.
ERIC Educational Resources Information Center
Monahan, William W.; Johnson, Homer M.
This monograph provides guidelines to help those school districts considering a more decentralized form of management. The authors discuss the levels at which different types of decisions should be made, describe the changing nature of the educational environment, identify different centralization-decentralization models, and suggest a flexible…
Decentralized Budgeting in Education: Model Variations and Practitioner Perspectives.
ERIC Educational Resources Information Center
Hall, George; Metsinger, Jackie; McGinnis, Patricia
In educational settings, decentralized budgeting refers to various fiscal practices that disperse budgeting responsibility away from central administration to the line education units. This distributed decision-making is common to several financial management models. Among the many financial management models that employ decentralized budgeting…
Decentralization and equity of resource allocation: evidence from Colombia and Chile.
Bossert, Thomas J.; Larrañaga, Osvaldo; Giedion, Ursula; Arbelaez, José Jesus; Bowser, Diana M.
2003-01-01
OBJECTIVE: To investigate the relation between decentralization and equity of resource allocation in Colombia and Chile. METHODS: The "decision space" approach and analysis of expenditures and utilization rates were used to provide a comparative analysis of decentralization of the health systems of Colombia and Chile. FINDINGS: Evidence from Colombia and Chile suggests that decentralization, under certain conditions and with some specific policy mechanisms, can improve equity of resource allocation. In these countries, equitable levels of per capita financial allocations at the municipal level were achieved through different forms of decentralization--the use of allocation formulae, adequate local funding choices and horizontal equity funds. Findings on equity of utilization of services were less consistent, but they did show that increased levels of funding were associated with increased utilization. This suggests that improved equity of funding over time might reduce inequities of service utilization. CONCLUSION: Decentralization can contribute to, or at least maintain, equitable allocation of health resources among municipalities of different incomes. PMID:12751417
Reliable Decentralized Control of Fuzzy Discrete-Event Systems and a Test Algorithm.
Liu, Fuchun; Dziong, Zbigniew
2013-02-01
A framework for decentralized control of fuzzy discrete-event systems (FDESs) has been recently presented to guarantee the achievement of a given specification under the joint control of all local fuzzy supervisors. As a continuation, this paper addresses the reliable decentralized control of FDESs in face of possible failures of some local fuzzy supervisors. Roughly speaking, for an FDES equipped with n local fuzzy supervisors, a decentralized supervisor is called k-reliable (1 ≤ k ≤ n) provided that the control performance will not be degraded even when n - k local fuzzy supervisors fail. A necessary and sufficient condition for the existence of k-reliable decentralized supervisors of FDESs is proposed by introducing the notions of M̃uc-controllability and k-reliable coobservability of fuzzy language. In particular, a polynomial-time algorithm to test the k-reliable coobservability is developed by a constructive methodology, which indicates that the existence of k-reliable decentralized supervisors of FDESs can be checked with a polynomial complexity.
Joseph, T K; Kartha, C P
1982-01-01
Centring of spectacle lenses is a much neglected field of ophthalmology. The prismatic effect caused by wrong centring imposes a phoria on the eye muscles, which in turn causes persistent eyestrain. The theory of visual axis, optical axis and angle alpha is discussed. Using new methods, the visual axis and optical axis of 35 subjects were measured. The results were computed for facial asymmetry, parallax error, angle alpha and also decentration for near vision. The results show that decentration is required on account of each of these factors. Considerable correction is needed in the vertical direction, a fact much neglected nowadays; and vertical decentration results in a vertical phoria, which is more symptomatic than horizontal phorias. Angle alpha was computed for each of these patients. A new device called 'The Kerala Decentration Meter', using the pinhole method for measuring the degree of decentration from the datum centre of the frame, and capable of correcting all the factors described above, is shown with diagrams.
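The prismatic effect of decentration that motivates this work follows the standard Prentice's rule (induced prism in prism dioptres equals lens power in dioptres times decentration in centimetres). The rule and the numbers below are supplied as standard optics background, since the abstract does not state the formula explicitly.

```python
# A minimal sketch of the prismatic effect of decentration via Prentice's rule.
def prismatic_effect(lens_power_d: float, decentration_mm: float) -> float:
    """Induced prism (prism dioptres) for a given decentration of a spectacle lens."""
    return abs(lens_power_d) * (decentration_mm / 10.0)

# e.g. a -6.00 D lens decentred vertically by 3 mm induces 1.8 prism dioptres,
# enough to produce a symptomatic vertical phoria.
print(prismatic_effect(-6.0, 3.0))
```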
SHIWA Services for Workflow Creation and Sharing in Hydrometeorology
NASA Astrophysics Data System (ADS)
Terstyanszky, Gabor; Kiss, Tamas; Kacsuk, Peter; Sipos, Gergely
2014-05-01
Researchers want to run scientific experiments on Distributed Computing Infrastructures (DCI) to access large pools of resources and services. Running these experiments requires specific expertise that they may not have. Workflows can hide resources and services as a virtualisation layer providing a user interface that researchers can use. There are many scientific workflow systems, but they are not interoperable. Learning a workflow system and creating workflows may require significant effort. Considering this effort, it is not reasonable to expect that researchers will learn new workflow systems if they want to run workflows developed in other workflow systems. Overcoming this requires workflow interoperability solutions that allow workflow sharing. The FP7 'Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs' (SHIWA) project developed the Coarse-Grained Interoperability concept (CGI). It enables recycling and sharing workflows of different workflow systems and executing them on different DCIs. SHIWA developed the SHIWA Simulation Platform (SSP) to implement the CGI concept, integrating three major components: the SHIWA Science Gateway, the workflow engines supported by the CGI concept, and the DCI resources where workflows are executed. The science gateway contains a portal, a submission service, a workflow repository and a proxy server to support the whole workflow life-cycle. The SHIWA Portal allows workflow creation, configuration, execution and monitoring through a Graphical User Interface using the WS-PGRADE workflow system as the host workflow system. The SHIWA Repository stores the formal description of workflows and workflow engines plus executables and data needed to execute them. It offers a wide range of browse and search operations. To support non-native workflow execution, the SHIWA Submission Service imports the workflow and workflow engine from the SHIWA Repository. This service either invokes locally or remotely pre-deployed workflow engines or submits workflow engines with the workflow to local or remote resources to execute workflows. The SHIWA Proxy Server manages certificates needed to execute the workflows on different DCIs. Currently, SSP supports sharing of ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflows. Further workflow systems can be added to the simulation platform as required by research communities. The FP7 'Building a European Research Community through Interoperable Workflows and Data' (ER-flow) project disseminates the achievements of the SHIWA project to build workflow user communities across Europe. ER-flow provides application support to research communities within the project (Astrophysics, Computational Chemistry, Heliophysics and Life Sciences) and beyond it (Hydrometeorology and Seismology) to develop, share and run workflows through the simulation platform. The simulation platform supports four usage scenarios: creating and publishing workflows in the repository, searching and selecting workflows in the repository, executing non-native workflows and creating and running meta-workflows. The presentation will outline the CGI concept, the SHIWA Simulation Platform, the ER-flow usage scenarios and how the Hydrometeorology research community runs simulations on SSP.
Ertan, Aylin; Karacal, Humeyra
2008-10-01
To compare the accuracy of LASIK flap and INTACS centration following femtosecond laser application in normal and keratoconic eyes. This is a retrospective case series comprising 133 eyes of 128 patients referred for refractive surgery. All eyes were divided into two groups according to preoperative diagnosis: group 1 (LASIK group) comprised 74 normal eyes of 72 patients undergoing LASIK with a femtosecond laser (IntraLase), and group 2 (INTACS group) consisted of 59 eyes of 39 patients with keratoconus for whom INTACS were implanted using a femtosecond laser (IntraLase). Decentration of the LASIK flap and INTACS was analyzed using Pentacam. Temporal decentration was 612.56 +/- 384.24 microm (range: 30 to 2120 microm) in the LASIK group and 788.33 +/- 500.34 microm (range: 30 to 2450 microm) in the INTACS group. A statistically significant difference was noted between the groups in terms of decentration (P < .05). Regression analysis showed that the amount of decentration of the LASIK flap and INTACS correlated with the central corneal thickness in the LASIK group and preoperative sphere and cylinder in the INTACS group, respectively. Decentration with the IntraLase occurred in most cases, especially in keratoconic eyes. The applanation performed for centration during IntraLase application may flatten and shift the pupil center, and thus cause decentration of the LASIK flap and INTACS. Central corneal thickness in the LASIK group and preoperative sphere and cylinder in the INTACS group proved to be statistically significant parameters associated with decentration.
Decentralization or centralization: striking a balance.
Dirschel, K M
1994-09-01
An Executive Vice President for Nursing can provide the necessary link to meet diverse clinical demands when encountering centralization--decentralization decisions. Centralized communication links hospital departments giving nurses a unified voice. Decentralization acknowledges the need for diversity and achieves the right balance of uniformity through a responsive communications network.
Centralization vs. Decentralization: A Location Analysis Approach for Librarians
ERIC Educational Resources Information Center
Raffel, Jeffrey; Shishko, Robert
1972-01-01
An application of location theory to the question of centralized versus decentralized library facilities for a university, with relevance for special libraries, is presented. The analysis provides models for a single library, for two or more libraries, or for decentralized facilities. (6 references) (Author/NH)
The Paradox of Decentralizing Schools: Lessons from Business, Government, and the Catholic Church.
ERIC Educational Resources Information Center
Murphy, Jerome T.
1989-01-01
By the year 2000, school decentralization could become another unfortunate, ineffectual pendulum swing. According to this article, a dynamic, ever-changing system of decentralization and centralization balances the benefits of local administrative autonomy with the pursuit of unified goals and helps each leadership level understand its…
On Deciding How to Decide: To Centralize or Decentralize.
ERIC Educational Resources Information Center
Chaffee, Ellen Earle
Issues concerning whether to centralize or decentralize decision-making are addressed, with applications for colleges. Centralization/decentralization (C/D) must be analyzed with reference to a particular decision. Three components of C/D are locus of authority, breadth of participation, and relative contribution by the decision-maker's staff. C/D…
Centralization Versus Decentralization: A Location Analysis Approach for Librarians.
ERIC Educational Resources Information Center
Shishko, Robert; Raffel, Jeffrey
One of the questions that seems to perplex many university and special librarians is whether to move in the direction of centralizing or decentralizing the library's collections and facilities. Presented is a theoretical approach, employing location theory, to the library centralization-decentralization question. Location theory allows the analyst…
Effects of Decentralization on School Resources
ERIC Educational Resources Information Center
Ahlin, Asa; Mork, Eva
2008-01-01
Sweden has undertaken major national reforms of its school sector, which, consequently, has been classified as one of the most decentralized ones in the OECD. This paper investigates whether local tax base, grants, and preferences affected local school resources differently as decentralization took place. We find that municipal tax base affects…
Responsibility Center Management: Lessons from 25 Years of Decentralized Management.
ERIC Educational Resources Information Center
Strauss, Jon C.; Curry, John R.
Decentralization of authority is a natural act in universities, but decentralization of responsibility is not. A problem faced by universities is the decoupling of academic authority from financial responsibility. The solution proposed in this book for the coupling is Responsibility Center Management (RCM), also called Revenue Responsibility…
A Review of Characteristics and Experiences of Decentralization of Education
ERIC Educational Resources Information Center
Mwinjuma, Juma Saidi; Kadir, Suhaida bte Abd.; Hamzah, Azimi; Basri, Ramli
2015-01-01
This paper scrutinizes decentralization of education with reference to some countries around the world. We consider discussion on decentralization to be complex, critical and broad question in the contemporary education planning, administration and politics of education reforms. Even though the debate on and implementation of decentralization…
40 CFR 51.353 - Network type and program evaluation.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., decentralized, or a hybrid of the two at the State's discretion, but shall be demonstrated to achieve the same... § 51.351 or 51.352 of this subpart. For decentralized programs other than those meeting the design.... (a) Presumptive equivalency. A decentralized network consisting of stations that only perform...
Visualizing the Collective Learner through Decentralized Networks
ERIC Educational Resources Information Center
Castro, Juan Carlos
2015-01-01
Understandings of decentralized networks are increasingly used to describe a way to structure curriculum and pedagogy. It is often understood as a structural model to organize pedagogical and curricular relationships in which there is no center. While this is important it also bears introducing into the discourse that decentralized networks are…
On ℓ1 optimal decentralized performance
NASA Technical Reports Server (NTRS)
Sourlas, Dennis; Manousiouthakis, Vasilios
1993-01-01
In this paper, the Manousiouthakis parametrization of all decentralized stabilizing controllers is employed in mathematically formulating the ℓ1 optimal decentralized controller synthesis problem. The resulting optimization problem is infinite dimensional and therefore not directly amenable to computations. It is shown that finite dimensional optimization problems that have value arbitrarily close to the infinite dimensional one can be constructed. Based on this result, an algorithm that solves the ℓ1 decentralized performance problem is presented. A global optimization approach to the solution of the finite dimensional approximating problems is also discussed.
Survey of decentralized control methods. [for large scale dynamic systems
NASA Technical Reports Server (NTRS)
Athans, M.
1975-01-01
An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.
Formation Flying With Decentralized Control in Libration Point Orbits
NASA Technical Reports Server (NTRS)
Folta, David; Carpenter, J. Russell; Wagner, Christoph
2000-01-01
A decentralized control framework is investigated for its applicability to formation flying control in libration orbits. The decentralized approach, being non-hierarchical, processes only direct measurement data, in parallel with the other spacecraft. Control is accomplished via linearization about a reference libration orbit, with feedback computed by a standard Linear Quadratic Regulator (LQR) or the GSFC control algorithm. Both are linearized about the current state estimate as with the extended Kalman filter. Based on this preliminary work, the decentralized approach appears to be feasible for upcoming libration missions using distributed spacecraft.
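The LQR portion of such a scheme reduces to solving a continuous algebraic Riccati equation for the feedback gain. The double-integrator dynamics and the weights below are illustrative assumptions, not the GSFC formulation or the actual libration-point dynamics.

```python
# A minimal LQR sketch: linearized dynamics, Riccati solve, state-feedback gain.
import numpy as np
from scipy.linalg import solve_continuous_are

# One-axis relative motion as a double integrator: x = [position error, velocity error]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])      # penalize position error more than velocity error
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # control law u = -K x
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))  # all in the left half-plane
```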
Organizational decentralization in radiology.
Aas, I H Monrad
2006-01-01
At present, most hospitals have a department of radiology where images are captured and interpreted. Decentralization is the opposite of centralization and means 'away from the centre'. With a Picture Archiving and Communication System (PACS) and broadband communications, transmitting radiology images between sites will be far easier than before. Qualitative interviews of 26 resource persons were performed in Norway. There was a response rate of 90%. Decentralization of radiology interpretations seems less relevant than centralization, but several forms of decentralization have a role to play. The respondents mentioned several advantages, including exploitation of capacity and competence. They also mentioned several disadvantages, including splitting professional communities and reduced contact between radiologists and clinicians. With the new technology decentralization and centralization of image interpretation are important possibilities in organizational change. This will be important for the future of teleradiology.
Effects of health care decentralization in Spain from a citizens' perspective.
Antón, José-Ignacio; Muñoz de Bustillo, Rafael; Fernández Macías, Enrique; Rivera, Jesús
2014-05-01
The aim of this article is to analyze the impact of the decentralization of the public national health system in Spain on citizens' satisfaction with different dimensions of primary and hospital care. Using micro-data from the Health Barometer 1996-2009 and exploiting the exogenous variation in the pace of decentralization across Spain in a difference-in-differences strategy, we find that, in general, decentralization has not improved citizens' satisfaction with different features of the health services. In our base model, we find that there are even some small negative effects on a subset of variables. Sensitivity analysis confirms that there is no empirical evidence supporting a positive impact of decentralization on citizens' satisfaction with health care. We outline several possible reasons for this.
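The difference-in-differences comparison can be sketched as a regression with a treated-by-post interaction term; the variable names and synthetic data below stand in for the Health Barometer microdata and are not the authors' specification.

```python
# A minimal difference-in-differences sketch: the coefficient on treated:post is the
# DiD estimate of the effect of decentralization on satisfaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 4000
treated = rng.integers(0, 2, n)             # region decentralized during the study window
post = rng.integers(0, 2, n)                # observation after that region's transfer
satisfaction = 6 + 0.3 * treated + 0.2 * post + 0.0 * treated * post + rng.normal(0, 1, n)
df = pd.DataFrame({"satisfaction": satisfaction, "treated": treated, "post": post})

fit = smf.ols("satisfaction ~ treated + post + treated:post", data=df).fit()
print(fit.params["treated:post"])
```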
Partially Decentralized Control Architectures for Satellite Formations
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Bauer, Frank H.
2002-01-01
In a partially decentralized control architecture, more than one but fewer than all nodes have supervisory capability. This paper describes an approach to choosing the number of supervisors in such an architecture, based on a reliability vs. cost trade. It also considers the implications of these results for the design of navigation systems for satellite formations that could be controlled with a partially decentralized architecture. Using an assumed cost model, analytic and simulation-based results indicate that it may be cheaper to achieve a given overall system reliability with a partially decentralized architecture containing only a few supervisors, than with either fully decentralized or purely centralized architectures. Nominally, the subset of supervisors may act as centralized estimation and control nodes for corresponding subsets of the remaining subordinate nodes, and act as decentralized estimation and control peers with respect to each other. However, in the context of partially decentralized satellite formation control, the absolute positions and velocities of each spacecraft are unique, so that correlations which make estimates using only local information suboptimal only occur through common biases and process noise. Covariance and Monte Carlo analysis of a simplified system show that this lack of correlation may allow simplification of the local estimators while preserving the global optimality of the maneuvers commanded by the supervisors.
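A reliability-versus-cost trade of this kind can be illustrated with a deliberately simple model in which the formation survives as long as at least one supervisor node is healthy and supervisor nodes cost more than subordinates; this model is an assumption made only for illustration and is not the paper's cost model.

```python
# An illustrative reliability-versus-cost sketch (assumed model, not the paper's):
# reliability grows quickly with the first few supervisors, while cost grows linearly.
def system_reliability(m_supervisors: int, p_node: float = 0.95) -> float:
    """P(at least one of m independent supervisors remains healthy)."""
    return 1.0 - (1.0 - p_node) ** m_supervisors

def system_cost(m_supervisors: int, n_total: int = 10,
                c_supervisor: float = 3.0, c_subordinate: float = 1.0) -> float:
    """Total cost for m supervisor nodes among n_total spacecraft."""
    return m_supervisors * c_supervisor + (n_total - m_supervisors) * c_subordinate

# m = 1 is purely centralized, m = n_total is fully decentralized.
for m in (1, 2, 3, 10):
    print(m, round(system_reliability(m), 5), system_cost(m))
```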
Lee, Emily; Grooms, Richard; Mamidala, Soumya; Nagy, Paul
2014-12-01
Value stream mapping (VSM) is a very useful technique to visualize and quantify the complex workflows often seen in clinical environments. VSM brings together multidisciplinary teams to identify parts of processes, collect data, and develop interventional ideas. An example involving VSM for pediatric MRI with general anesthesia is outlined. As the process progresses, the map shows a large delay between the fax referral and the date of the scheduled and registered appointment. Ideas for improved efficiency and metrics were identified to measure improvement within a 6-month period, and an intervention package was developed for the department. Copyright © 2014. Published by Elsevier Inc.
Data Integration Tool: Permafrost Data Debugging
NASA Astrophysics Data System (ADS)
Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Pulsifer, P. L.; Strawhacker, C.; Yarmey, L.; Basak, R.
2017-12-01
We developed a Data Integration Tool (DIT) to greatly reduce the manual processing time needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the Global Terrestrial Network-Permafrost (GTN-P). The United States National Science Foundation funded this project through the National Snow and Ice Data Center (NSIDC) with the GTN-P to improve permafrost data access and discovery. We leverage this data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets (https://github.com/PermaData/DIT). Each widget does a specific operation, such as read, multiply by a constant, sort, plot, and write data. DIT allows the user to select and order the widgets as desired to meet their specific needs, incrementally interact with and evolve the widget workflows, and save those workflows for reproducibility. Taking ideas from visual programming found in the art and design domain, debugging and iterative design principles from software engineering, and the scientific data processing and analysis power of Fortran and Python, DIT was written for interactive, iterative data manipulation, quality control, processing, and analysis of inconsistent data in an easily installable application. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (~1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
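The widget-pipeline structure described above can be sketched as a list of single-purpose functions applied in user-chosen order; the widget names below are illustrative stand-ins, not DIT's actual widget set.

```python
# A minimal sketch of a widget pipeline: each widget performs one operation and the
# user chains them in whatever order the data requires.
from functools import reduce

def drop_missing(values):
    return [v for v in values if v is not None]

def multiply_by(constant):
    return lambda values: [v * constant for v in values]

def sort_values(values):
    return sorted(values)

def run_workflow(widgets, data):
    """Apply each widget in order, as a user would arrange them in the tool."""
    return reduce(lambda acc, widget: widget(acc), widgets, data)

# e.g. clean a column of borehole temperatures, rescale by a constant, and sort
raw = [None, -3.2, -1.1, None, -7.8]
workflow = [drop_missing, multiply_by(0.1), sort_values]
print(run_workflow(workflow, raw))
```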
de Castro, Alberto; Rosales, Patricia; Marcos, Susana
2007-03-01
To measure tilt and decentration of intraocular lenses (IOLs) with Scheimpflug and Purkinje imaging systems in physical model eyes with known amounts of tilt and decentration, and in patients. Instituto de Optica Daza de Valdés, Consejo Superior de Investigaciones Científicas, Madrid, Spain. Measurements of IOL tilt and decentration were obtained using a commercial Scheimpflug system (Pentacam, Oculus), custom algorithms, and a custom-built Purkinje imaging apparatus. Twenty-five Scheimpflug images of the anterior segment of the eye were obtained at different meridians. Custom algorithms were used to process the images (correction of geometrical distortion, edge detection, and curve fitting). Intraocular lens tilt and decentration were estimated by fitting sinusoidal functions to the projections of the pupillary axis and IOL axis in each image. The Purkinje imaging system captures pupil images showing reflections of light from the anterior corneal surface and anterior and posterior lens surfaces. Custom algorithms were used to detect the Purkinje image locations and estimate IOL tilt and decentration based on a linear system equation and computer eye models with individual biometry. Both methods were validated with a physical model eye in which IOL tilt and decentration can be set nominally. Twenty-one eyes of 12 patients with IOLs were measured with both systems. Measurements of the physical model eye showed an absolute discrepancy between nominal and measured values of 0.279 degree (Purkinje) and 0.243 degree (Scheimpflug) for tilt and 0.094 mm (Purkinje) and 0.228 mm (Scheimpflug) for decentration. In patients, the mean tilt was less than 2.6 degrees and the mean decentration less than 0.4 mm. Both techniques showed mirror symmetry between right eyes and left eyes for tilt around the vertical axis and for decentration in the horizontal axis. Both systems showed high reproducibility. Validation experiments on physical model eyes showed slightly higher accuracy with the Purkinje method than the Scheimpflug imaging method. Horizontal measurements of patients with both techniques were highly correlated. The IOLs tended to be tilted and decentered nasally in most patients.
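The sinusoidal fit used to recover tilt from the meridional Scheimpflug sections can be sketched as a least-squares fit of amplitude and phase across meridians; the synthetic measurements below stand in for the processed images and are not the authors' data or algorithm.

```python
# A minimal sketch: the projection of a tilted IOL axis onto a meridional section varies
# sinusoidally with the meridian angle, so fitting amplitude and phase across the 25
# meridians recovers tilt magnitude and orientation.
import numpy as np
from scipy.optimize import curve_fit

def projected_tilt(theta, amplitude, phase):
    return amplitude * np.cos(theta - phase)

rng = np.random.default_rng(3)
meridians = np.linspace(0, np.pi, 25, endpoint=False)
true_tilt_deg, true_axis = 2.1, np.deg2rad(30)          # hypothetical tilt and orientation
measured = projected_tilt(meridians, true_tilt_deg, true_axis) + rng.normal(0, 0.1, 25)

(amplitude, phase), _ = curve_fit(projected_tilt, meridians, measured, p0=[1.0, 0.0])
print(f"tilt ~ {abs(amplitude):.2f} deg about axis {np.degrees(phase) % 180:.1f} deg")
```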
Agile parallel bioinformatics workflow management using Pwrake.
Mishima, Hiroyuki; Sasaki, Kensaku; Tanaka, Masahiro; Tatebe, Osamu; Yoshiura, Koh-Ichiro
2011-09-08
In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability and maintainability of rakefiles may facilitate sharing workflows among the scientific community. Workflows for GATK and Dindel are available at http://github.com/misshie/Workflows.
Agile parallel bioinformatics workflow management using Pwrake
2011-01-01
Background In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. Findings We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Conclusions Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability and maintainability of rakefiles may facilitate sharing workflows among the scientific community. Workflows for GATK and Dindel are available at http://github.com/misshie/Workflows. PMID:21899774
Predicting crystalline lens fall caused by accommodation from changes in wavefront error
He, Lin; Applegate, Raymond A.
2011-01-01
PURPOSE To illustrate and develop a method for estimating crystalline lens decentration as a function of accommodative response using changes in wavefront error, and to show the method and its limitations using previously published data (2004) from 2 iridectomized monkey eyes, so that clinicians understand how spherical aberration can induce coma, in particular in intraocular lens surgery. SETTINGS College of Optometry, University of Houston, Houston, USA. DESIGN Evaluation of diagnostic test or technology. METHODS Lens decentration was estimated by displacing downward the wavefront error of the lens with respect to the limiting aperture (7.0 mm) and ocular first surface wavefront error for each accommodative response (0.00 to 11.00 diopters) until measured values of vertical coma matched previously published experimental data (2007). Lens decentration was also calculated using an approximation formula that only included spherical aberration and vertical coma. RESULTS The change in calculated vertical coma was consistent with downward lens decentration. Calculated downward lens decentration peaked at approximately 0.48 mm of vertical decentration in the right eye and approximately 0.31 mm of decentration in the left eye using all Zernike modes through the 7th radial order. Calculated lens decentration using only coma and spherical aberration formulas peaked at approximately 0.45 mm in the right eye and approximately 0.23 mm in the left eye. CONCLUSIONS Lens fall as a function of accommodation was quantified noninvasively using changes in vertical coma driven principally by the accommodation-induced changes in spherical aberration. The newly developed method was valid for a large pupil only. PMID:21700108
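The estimation idea, shifting a spherical-aberration-dominated lens wavefront relative to the limiting aperture until the refit vertical coma matches the measurement, can be sketched numerically as below. The wavefront model, aperture handling, and magnitudes are simplified assumptions and not the authors' computation.

```python
# A rough numerical sketch: decentre a pure-spherical-aberration lens wavefront, refit
# Zernike modes over the centred 7.0 mm pupil, and scan the decentration until the refit
# vertical coma matches a "measured" value.
import numpy as np

PUPIL_RADIUS = 3.5  # mm, half of the 7.0 mm limiting aperture mentioned in the abstract

def zernike_basis(rho, theta):
    """A small Zernike basis: tilts, vertical/horizontal coma, and spherical aberration."""
    return np.stack([
        2 * rho * np.sin(theta),
        2 * rho * np.cos(theta),
        np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.sin(theta),   # vertical coma Z(3,-1)
        np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta),   # horizontal coma Z(3,1)
        np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),            # spherical aberration Z(4,0)
    ], axis=-1)

def vertical_coma_after_shift(shift_mm, c40_lens=0.8):
    """Refit vertical coma over the pupil after decentring the lens wavefront downward."""
    grid = np.linspace(-PUPIL_RADIUS, PUPIL_RADIUS, 201)
    x, y = np.meshgrid(grid, grid)
    inside = x**2 + y**2 <= PUPIL_RADIUS**2
    xs, ys = x[inside], y[inside]
    rho_lens = np.hypot(xs, ys + shift_mm) / PUPIL_RADIUS      # lens seen displaced downward
    wavefront = c40_lens * np.sqrt(5) * (6 * rho_lens**4 - 6 * rho_lens**2 + 1)
    basis = zernike_basis(np.hypot(xs, ys) / PUPIL_RADIUS, np.arctan2(ys, xs))
    coeffs, *_ = np.linalg.lstsq(basis, wavefront, rcond=None)
    return coeffs[2]

# Invert: scan the decentration until the refit vertical coma matches the "measurement".
measured_coma = vertical_coma_after_shift(0.4)                 # pretend 0.4 mm is the truth
shifts = np.linspace(0.0, 1.0, 101)
comas = np.array([vertical_coma_after_shift(s) for s in shifts])
print("estimated lens fall:", shifts[np.argmin(np.abs(comas - measured_coma))], "mm")
```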
Optical performance of toric intraocular lenses in the presence of decentration.
Zhang, Bin; Ma, Jin-Xue; Liu, Dan-Yan; Du, Ying-Hua; Guo, Cong-Rong; Cui, Yue-Xian
2015-01-01
To evaluate the optical performance of toric intraocular lenses (IOLs) after decentration and with different pupil diameters, but with the IOL astigmatic axis aligned. Optical performances of toric T5 and SN60AT spherical IOLs after decentration were tested on a theoretical pseudophakic model eye based on the Hwey-Lan Liou schematic eye using the Zemax ray-tracing program. Changes in optical performance were analyzed in model eyes with 3-mm, 4-mm, and 5-mm pupil diameters and decentered from 0.25 mm to 0.75 mm with an interval of 5° at the meridian direction from 0° to 90°. The ratio of the modulation transfer function (MTF) between a decentered and a centered IOL (MTFDecentration/MTFCentration) was calculated to analyze the decrease in optical performance. Optical performance of the toric IOL remained unchanged when IOLs were decentered in any meridian direction. The MTFs of the two IOLs decreased, whereas optical performance remained equivalent after decentration. The MTFDecentration/MTFCentration ratios of the IOLs at a decentration from 0.25 mm to 0.75 mm were comparable in the toric and SN60AT IOLs. After decentration, MTF decreased further, with the MTF of the toric IOL being slightly lower than that of the SN60AT IOL. Imaging qualities of the two IOLs decreased when the pupil diameter and the degree of decentration increased, but the decrease was similar in the toric and spherical IOLs. Toric IOLs were comparable to spherical IOLs in terms of tolerance to decentration at the correct axial position.
Optical performance of toric intraocular lenses in the presence of decentration
Zhang, Bin; Ma, Jin-Xue; Liu, Dan-Yan; Du, Ying-Hua; Guo, Cong-Rong; Cui, Yue-Xian
2015-01-01
AIM To evaluate the optical performance of toric intraocular lenses (IOLs) after decentration and with different pupil diameters, but with the IOL astigmatic axis aligned. METHODS Optical performances of toric T5 and SN60AT spherical IOLs after decentration were tested on a theoretical pseudophakic model eye based on the Hwey-Lan Liou schematic eye using the Zemax ray-tracing program. Changes in optical performance were analyzed in model eyes with 3-mm, 4-mm, and 5-mm pupil diameters and decentered from 0.25 mm to 0.75 mm with an interval of 5° at the meridian direction from 0° to 90°. The ratio of the modulation transfer function (MTF) between a decentered and a centered IOL (MTFDecentration/MTFCentration) was calculated to analyze the decrease in optical performance. RESULTS Optical performance of the toric IOL remained unchanged when IOLs were decentered in any meridian direction. The MTFs of the two IOLs decreased, whereas optical performance remained equivalent after decentration. The MTFDecentration/MTFCentration ratios of the IOLs at a decentration from 0.25 mm to 0.75 mm were comparable in the toric and SN60AT IOLs. After decentration, MTF decreased further, with the MTF of the toric IOL being slightly lower than that of the SN60AT IOL. Imaging qualities of the two IOLs decreased when the pupil diameter and the degree of decentration increased, but the decrease was similar in the toric and spherical IOLs. CONCLUSIONS Toric IOLs were comparable to spherical IOLs in terms of tolerance to decentration at the correct axial position. PMID:26309871
ERIC Educational Resources Information Center
Walker, William G.
This report outlines the history of the centralization-decentralization dilemma in the governance of organizations, discusses two types of centralization-decentralization continua, and suggests further research. The first type of continuum discussed -- the traditional American -- refers to decisionmaking in the areas of public debate and partisan…
Providing leadership to a decentralized total quality process.
Diederich, J J; Eisenberg, M
1993-01-01
Integrating total quality management into the culture of an organization and the daily work of employees requires a decentralized leadership structure that encourages all employees to become involved. This article, based upon the experience of the University of Michigan Hospitals Professional Services Divisional Lead Team, outlines a process for decentralizing the total quality management process.
The Effect of Political Decentralization on School Leadership in German Vocational Schools
ERIC Educational Resources Information Center
Gessler, Michael; Ashmawy, Iman K.
2016-01-01
In this explorative qualitative study the effect of political decentralization on vocational school leadership is investigated. Through conducting structural interviews with 15 school principals in the states of Bremen and Lower Saxony in Germany, the study was able to conclude that political decentralization entails the creation of elected bodies…
ERIC Educational Resources Information Center
Huerta, Luis A.
2009-01-01
This article analyzes how macrolevel institutional forces persist and limit the expansion of decentralized schools that attempt to challenge normative definitions and practices of traditional school organizations. Using qualitative case study methodology, the analysis focuses on how one decentralized charter school navigated and reconciled its…
After Decentralization: Delimitations and Possibilities within New Fields
ERIC Educational Resources Information Center
Wahlstrom, Ninni
2008-01-01
The shift from a centralized to a decentralized school system can be seen as a solution to an uncertain problem. Through analysing the displacements in the concept of equivalence within Sweden's decentralized school system, this study illustrates how the meaning of the concept of equivalence shifts over time, from a more collective target…
ERIC Educational Resources Information Center
Winardi
2017-01-01
Decentralization is acknowledged as the handover of authority from central government to local government, including giving broader authority to local governments to manage education. This study aims to discover the education development gap between regions in Indonesia that has resulted from decentralization. This research method uses descriptive…
ERIC Educational Resources Information Center
Nasrullah
2016-01-01
The roles of educational, political, and community leaders in the decentralization of education in Khyber Pakhtunkhwa (KPK) province of Pakistan were explored in this phenomenological research. Examined were leaders' perceptions and understandings of decentralization and its effects on teacher instructional practices, process of student learning,…
Decentralization in Education: Technical Demands as a Critical Ingredient.
ERIC Educational Resources Information Center
Hannaway, Jane
The implications of decentralization reform on the amount of serious attention and effort that teachers give to teaching and learning activities are explored in this paper. The discussion is informed by the results of two case studies of school districts recognized as exemplary cases of decentralization. The first section describes limitations of…
Lega, Federico; Sargiacomo, Massimo; Ianni, Luca
2010-11-01
In this paper, we aim to discuss the implications and lessons that can be learnt from the ongoing process of federalism affecting the Italian National Health System (INHS). Many countries are currently taking decisions concerning the decentralization or re-centralization of their health-care systems, with several key issues that are illustrated in the recent history of the INHS. The decentralization process of the INHS has produced mixed results, as some regions took advantage of it to strengthen their systems, whereas others were not capable of developing an effective steering role. We argue that the mutual reinforcement of the decentralization and recentralization processes is not paradoxical, but is actually an effective way for the State to maintain control over the equity and efficiency of its health-care system while decentralizing at a regional level. In this perspective, we provide evidence backing up some of the assumptions made in previous works as well as new food for thought - specifically on how governmentality and federalism should meet - to reshape the debate on decentralization in health care.
Indirect decentralized repetitive control
NASA Technical Reports Server (NTRS)
Lee, Soo Cheol; Longman, Richard W.
1993-01-01
Learning control refers to controllers that learn to improve their performance at executing a given task, based on experience performing this specific task. In a previous work, the authors presented a theory of indirect decentralized learning control based on use of indirect adaptive control concepts employing simultaneous identification and control. This paper extends these results to apply to the indirect repetitive control problem in which a periodic (i.e., repetitive) command is given to a control system. Decentralized indirect repetitive control algorithms are presented that have guaranteed convergence to zero tracking error under very general conditions. The original motivation of the repetitive control and learning control fields was learning in robots doing repetitive tasks such as on an assembly line. This paper starts with decentralized discrete time systems, and progresses to the robot application, modeling the robot as a time varying linear system in the neighborhood of the desired trajectory. Decentralized repetitive control is natural for this application because the feedback control for link rotations is normally implemented in a decentralized manner, treating each link as if it is independent of the other links.
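As a rough illustration of the repetitive-control idea in a decentralized, discrete-time setting, the sketch below applies a textbook P-type iterative learning update to a toy coupled plant; it is not the indirect adaptive algorithm of the paper, and all numbers are invented. Each channel stores its own command over one period and corrects it using only its own tracking error from the previous repetition.

```python
import numpy as np

# Two weakly coupled first-order subsystems tracking a periodic command.
# Each local controller keeps its own input profile over one period and
# updates it after every repetition using only its own error (decentralized).
A = np.array([[0.70, 0.05],
              [0.05, 0.70]])
B = np.diag([0.5, 0.4])
T = 50
t = np.arange(1, T + 1)
yd = np.column_stack([np.sin(2 * np.pi * t / T),
                      np.cos(2 * np.pi * t / T)])   # periodic reference

gains = np.array([1.0, 1.2])      # local learning gains, |1 - gain*b_ii| < 1
u = np.zeros((T, 2))              # stored command for one repetition

for rep in range(25):
    x = np.zeros(2)
    y = np.zeros((T, 2))
    for k in range(T):
        x = A @ x + B @ u[k]      # simulate one repetition of the plant
        y[k] = x
    e = yd - y
    u += gains * e                # P-type update, channel by channel
    if rep % 5 == 0:
        print(rep, float(np.max(np.abs(e))))   # error shrinks over repetitions
```

The error contracts from repetition to repetition because each local gain satisfies |1 - gain*b_ii| < 1 and the cross-coupling is weak; the paper's indirect scheme additionally identifies the plant online rather than assuming it.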
Tong, Shao Cheng; Li, Yong Ming; Zhang, Hua-Guang
2011-07-01
In this paper, two adaptive neural network (NN) decentralized output feedback control approaches are proposed for a class of uncertain nonlinear large-scale systems with immeasurable states and unknown time delays. Using NNs to approximate the unknown nonlinear functions, an NN state observer is designed to estimate the immeasurable states. By combining the adaptive backstepping technique with decentralized control design principle, an adaptive NN decentralized output feedback control approach is developed. In order to overcome the problem of "explosion of complexity" inherent in the proposed control approach, the dynamic surface control (DSC) technique is introduced into the first adaptive NN decentralized control scheme, and a simplified adaptive NN decentralized output feedback DSC approach is developed. It is proved that the two proposed control approaches can guarantee that all the signals of the closed-loop system are semi-globally uniformly ultimately bounded, and the observer errors and the tracking errors converge to a small neighborhood of the origin. Simulation results are provided to show the effectiveness of the proposed approaches.
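The building blocks described here can be sketched in generic, textbook form (not the paper's exact observer and controller equations). The unknown nonlinearity of the i-th subsystem is approximated on a compact set by a linearly parameterized NN, the weights are adapted with a σ-modification law driven by an available error signal z_i, and the DSC filter replaces analytic differentiation of the virtual control α_i:

\[
f_i(x_i) = W_i^{*\top}\varphi_i(x_i) + \varepsilon_i,\quad |\varepsilon_i|\le \bar\varepsilon_i,
\qquad
\hat f_i = \hat W_i^{\top}\varphi_i(\hat x_i),
\qquad
\dot{\hat W}_i = \Gamma_i\bigl(\varphi_i(\hat x_i)\,z_i - \sigma_i \hat W_i\bigr),
\]
\[
\tau_i\,\dot\beta_i + \beta_i = \alpha_i,\qquad \beta_i(0)=\alpha_i(0),
\]

so that the next backstepping step uses the filtered signal β_i and its readily available derivative instead of repeatedly differentiating α_i, which is the source of the "explosion of complexity" the DSC variant avoids.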
The effect of social influence on cognitive development and school performance.
Hartmann, E; Eri, T J; Skinstad, A H
1989-01-01
Three months before school entrance a sample of 29 children and their mothers was tested for degree of decentred child educability and degree of decentred maternal teaching. Mother and child were tested in two different situations, thus preventing interdependency between the measures of mother and child. Four months after school entrance, teacher judgements of school performance were obtained. A strong correspondence between degree of decentred child educability and degree of decentred maternal teaching was demonstrated. Degree of decentred maternal teaching and degree of decentred child educability were found to be good predictors of school performance, accounting for respectively 45 and 33% of the variance in school performance. In contrast a test of school readiness only accounted for 2% of the variance. A test of intelligence given after the teacher judgement accounted for 31% of the variance. The fact that the mother seems to be a better predictor of her child's school performance than the child himself, supports the assumption that parents, particularly mothers, are important mediators between the child and the outer world.
Centralized versus decentralized decision-making for recycled material flows.
Hong, I-Hsuan; Ammons, Jane C; Realff, Matthew J
2008-02-15
A reverse logistics system is a network of transportation logistics and processing functions that collect, consolidate, refurbish, and demanufacture end-of-life products. This paper examines centralized and decentralized models of decision-making for material flows and associated transaction prices in reverse logistics networks. We compare the application of a centralized model for planning reverse production systems, where a single planner is acquainted with all of the system information and has the authority to determine decision variables for the entire system, to a decentralized approach. In the decentralized approach, the entities coordinate between tiers of the system using a parametrized flow function and compete within tiers based on reaching a price equilibrium. We numerically demonstrate the increase in the total net profit of the centralized system relative to the decentralized one. This implies that one may overestimate the system material flows and profit if the system planner utilizes a centralized view to predict behaviors of independent entities in the system and that decentralized contract mechanisms will require careful design to avoid losses in the efficiency and scope of these systems.
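The efficiency gap can be illustrated with a deliberately tiny two-tier example (a generic double-marginalization toy, not the paper's network model; all parameters are invented): a collector supplies recovered material to a processor, and the decentralized price-setting equilibrium moves less material and earns less total profit than the centralized plan.

```python
# Toy two-tier reverse-logistics chain.  Centralized planning picks the flow q
# that maximizes total profit; in the decentralized case the collector sets a
# transfer price w and the processor reacts, which lowers both the flow and the
# total profit (classic double marginalization).  Parameters are illustrative.
a, b = 10.0, 1.0        # processor revenue: (a - b*q) * q
c = 2.0                 # collector's unit collection/consolidation cost

# centralized: maximize (a - b*q)*q - c*q over q
q_cen = (a - c) / (2 * b)
profit_cen = (a - b * q_cen) * q_cen - c * q_cen

# decentralized: collector chooses w anticipating the reaction q(w) = (a - w)/(2b)
w_dec = (a + c) / 2
q_dec = (a - w_dec) / (2 * b)
profit_processor = (a - b * q_dec) * q_dec - w_dec * q_dec
profit_collector = (w_dec - c) * q_dec
profit_dec = profit_processor + profit_collector

print(q_cen, profit_cen)    # 4.0, 16.0
print(q_dec, profit_dec)    # 2.0, 12.0 -> lower flow and lower total profit
```

In this toy instance the centralized plan moves 4 units for a total profit of 16, while the decentralized equilibrium moves 2 units for a total profit of 12, mirroring the overestimation risk noted above when a centralized view is used to predict decentralized behavior.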
[Multimodal document management in radiotherapy].
Fahrner, H; Kirrmann, S; Röhner, F; Schmucker, M; Hall, M; Heinemann, F
2013-12-01
After incorporating treatment planning and the organisational model of treatment planning in the operating schedule system (BAS, "Betriebsablaufsystem"), complete document qualities were embedded in the digital environment. The aim of this project was to integrate all documents independent of their source (paper-bound or digital) and to make content from the BAS available in a structured manner. As many workflow steps as possible should be automated, e.g. assigning a document to a patient in the BAS. Additionally it must be guaranteed that at all times it could be traced who, when, how and from which source documents were imported into the departmental system. Furthermore work procedures should be changed that the documentation conducted either directly in the departmental system or from external systems can be incorporated digitally and paper document can be completely avoided (e.g. documents such as treatment certificate, treatment plans or documentation). It was a further aim, if possible, to automate the removal of paper documents from the departmental work flow, or even to make such paper documents superfluous. In this way patient letters for follow-up appointments should automatically generated from the BAS. Similarly patient record extracts in the form of PDF files should be enabled, e.g. for controlling purposes. The available document qualities were analysed in detail by a multidisciplinary working group (BAS-AG) and after this examination and assessment of the possibility of modelling in our departmental workflow (BAS) they were transcribed into a flow diagram. The gathered specifications were implemented in a test environment by the clinical and administrative IT group of the department of radiation oncology and subsequent to a detailed analysis introduced into clinical routine. The department has succeeded under the conditions of the aforementioned criteria to embed all relevant documents in the departmental workflow via continuous processes. Since the completion of the concepts and the implementation in our test environment 15,000 documents were introduced into the departmental workflow following routine approval. Furthermore approximately 5000 appointment letters for patient aftercare per year were automatically generated by the BAS. In addition patient record extracts in the form of PDF files for the medical services of the healthcare insurer can be generated.
NASA Astrophysics Data System (ADS)
Gali, Raja L.; Roth, Christopher G.; Smith, Elizabeth; Dave, Jaydev K.
2018-03-01
In digital radiography, computed radiography (CR) technology is based on latent image capture by storage phosphors whereas direct radiography (DR) technology is based either on indirect conversion using a scintillator or direct conversion using a photoconductor. DR-based portable imaging systems may enhance workflow efficiency. The purpose of this work was to investigate changes in workflow efficiency at a tertiary healthcare center after transitioning from CR to DR technology for imaging with portable x-ray units. An IRB exemption was obtained. Data for all inpatient-radiographs acquired with portable x-ray units from July-2014 till June-2015 (period 1) with CR technology (AMX4 or AMX4+ portable unit from GE Healthcare, NX workstation from Agfa Healthcare for digitization), from July-2015 till June-2016 (period 2) with DR technology (Carestream DRX-Revolution x-ray units and DRX-1C image receptors) and from July-2016 till January-2017 (period 3; same DR technology) were extracted using Centricity RIS-IC (GE Healthcare). Duration between the imaging-examination scheduled time and completed time (time_sch-com) was calculated and compared using non-parametric tests (between the three time periods with corrections for multiple comparisons; three time periods were used to identify if there were any other potential temporal trends not related to transitioning from CR to DR). IBM's SPSS package was used for statistical analysis. Overall data was obtained from 33131, 32194, and 18015 cases in periods 1, 2 and 3, respectively. Independent-Samples Kruskal-Wallis test revealed a statistically significant difference in time_sch-com across the three time periods (χ²(2, n = 83,340) = 2053, p < 0.001). The time_sch-com was highest for period 1 i.e., radiographs acquired with CR technology (median: 64 minutes) and it decreased significantly for radiographs acquired with DR technology in periods 2 (median: 49 minutes; p < 0.001) and 3 (median: 44 minutes; p < 0.001). Overall, adoption of DR technology resulted in a drop in time_sch-com by 27% relative to the use of CR technology. Transitioning from CR to DR was associated with improved workflow efficiency for radiographic imaging with portable x-ray units.
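The comparison described above boils down to a Kruskal-Wallis test on per-examination durations grouped by period. A minimal sketch on synthetic data (the study's per-exam durations are not reproduced here; only the sample sizes and approximate medians are borrowed) might look as follows.

```python
import numpy as np
from scipy.stats import kruskal

# Hypothetical scheduled-to-completed durations (minutes) for three periods.
rng = np.random.default_rng(0)
period1 = rng.lognormal(mean=np.log(64), sigma=0.6, size=33131)   # CR
period2 = rng.lognormal(mean=np.log(49), sigma=0.6, size=32194)   # DR, year 1
period3 = rng.lognormal(mean=np.log(44), sigma=0.6, size=18015)   # DR, year 2

h, p = kruskal(period1, period2, period3)   # nonparametric test across periods
print(f"H = {h:.1f}, p = {p:.3g}")
print([round(float(np.median(x))) for x in (period1, period2, period3)])
```

Pairwise follow-up comparisons with a multiple-comparison correction, as used in the study, would then localize which periods differ.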
Using Analytics to Support Petabyte-Scale Science on the NASA Earth Exchange (NEX)
NASA Astrophysics Data System (ADS)
Votava, P.; Michaelis, A.; Ganguly, S.; Nemani, R. R.
2014-12-01
NASA Earth Exchange (NEX) is a data, supercomputing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to address large-scale challenges in Earth sciences. Analytics within NEX occurs at several levels - data, workflows, science and knowledge. At the data level, we are focusing on collecting and analyzing any information that is relevant to efficient acquisition, processing and management of data at the smallest granularity, such as files or collections. This includes processing and analyzing all local and many external metadata that are relevant to data quality, size, provenance, usage and other attributes. This then helps us better understand usage patterns and improve efficiency of data handling within NEX. When large-scale workflows are executed on NEX, we capture information that is relevant to processing and that can be analyzed in order to improve efficiencies in job scheduling, resource optimization, or data partitioning that would improve processing throughput. At this point we also collect data provenance as well as basic statistics of intermediate and final products created during the workflow execution. These statistics and metrics form basic process and data QA that, when combined with analytics algorithms, helps us identify issues early in the production process. We have already seen impact in some petabyte-scale projects, such as global Landsat processing, where we were able to reduce processing times from days to hours and enhance process monitoring and QA. While the focus so far has been mostly on support of NEX operations, we are also building a web-based infrastructure that enables users to perform direct analytics on science data - such as climate predictions or satellite data. Finally, as one of the main goals of NEX is knowledge acquisition and sharing, we began gathering and organizing information that associates users and projects with data, publications, locations and other attributes that can then be analyzed as a part of the NEX knowledge graph and used to greatly improve advanced search capabilities. Overall, we see data analytics at all levels as an important part of NEX as we are continuously seeking improvements in data management, workflow processing, use of resources, usability and science acceleration.
Electronic Medical Business Operations System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cannon, D. T.; Metcalf, J. R.; North, M. P.
Electronic management of medical records has taken a back seat both in private industry and in the government. Record volumes continue to rise every day and management of these paper records is inefficient and very expensive. In 2005, the White House announced support for the development of electronic medical records across the federal government. In 2006, the DOE issued 10 CFR 851 requiring all medical records to be electronically available by 2015. The Y-12 National Security Complex is currently investing funds to develop a comprehensive EMR to incorporate the requirements of an occupational health facility which are common across the Nuclear Weapons Complex (NWC). Scheduling, workflow, and data capture from medical surveillance, certification, and qualification examinations are core pieces of the system. The Electronic Medical Business Operations System (EMBOS) will provide a comprehensive health tool solution to 10 CFR 851 for Y-12 and can be leveraged across the NWC; all sites in the NWC must meet the requirements of 10 CFR 851, which states that all medical records must be electronically available by 2015. There is also potential to leverage EMBOS to the private sector. EMBOS is being developed and deployed in phases. When fully deployed, EMBOS will be a state-of-the-art web-enabled integrated electronic solution providing a complete electronic medical record (EMR). EMBOS has been deployed and provides a dynamic electronic medical history and surveillance program (e.g., Asbestos, Hearing Conservation, and Respirator Wearer) questionnaire. The following summarizes EMBOS capabilities and the data to be tracked. Data to be tracked: Patient Demographics Current/Historical; Physical Examination Data; Employee Medical Health History; Medical Surveillance Programs; Patient and Provider Schedules; Medical Qualification/Certifications; Laboratory Data; Standardized Abnormal Lab Notifications; Prescription Medication Tracking and Dispensing; Allergies; Non-Occupational Illness and Injury Visits; Occupational Recommendations/Restrictions; Diagnosis/Vital Signs/Blood Pressures; Immunizations; Return to Work Visits. Capabilities: Targeted Health Assessments; Patient Input Capabilities for Questionnaires; Medical Health History; Surveillance Programs; Human Reliability Program; Scheduling; Automated Patient Check-in/Check-out; Provider & Patient Workflow; Laboratory Interface & Device Integration; Human Reliability Program Processing; Interoperability with SAP, IH, IS, RADCON; Coding: ICD-9/10; Desktop Integration; Interface/Storage of Digital X-Rays (PACS)
REEF: Retainable Evaluator Execution Framework
Weimer, Markus; Chen, Yingda; Chun, Byung-Gon; Condie, Tyson; Curino, Carlo; Douglas, Chris; Lee, Yunseong; Majestro, Tony; Malkhi, Dahlia; Matusevych, Sergiy; Myers, Brandon; Narayanamurthy, Shravan; Ramakrishnan, Raghu; Rao, Sriram; Sears, Russell; Sezgin, Beysim; Wang, Julia
2015-01-01
Resource Managers like Apache YARN have emerged as a critical layer in the cloud computing system stack, but the developer abstractions for leasing cluster resources and instantiating application logic are very low-level. This flexibility comes at a high cost in terms of developer effort, as each application must repeatedly tackle the same challenges (e.g., fault-tolerance, task scheduling and coordination) and re-implement common mechanisms (e.g., caching, bulk-data transfers). This paper presents REEF, a development framework that provides a control-plane for scheduling and coordinating task-level (data-plane) work on cluster resources obtained from a Resource Manager. REEF provides mechanisms that facilitate resource re-use for data caching, and state management abstractions that greatly ease the development of elastic data processing workflows on cloud platforms that support a Resource Manager service. REEF is being used to develop several commercial offerings such as the Azure Stream Analytics service. Furthermore, we demonstrate REEF development of a distributed shell application, a machine learning algorithm, and a port of the CORFU [4] system. REEF is also currently an Apache Incubator project that has attracted contributors from several institutions. PMID:26819493
Decentralized Online Social Networks
NASA Astrophysics Data System (ADS)
Datta, Anwitaman; Buchegger, Sonja; Vu, Le-Hung; Strufe, Thorsten; Rzadca, Krzysztof
Current online social networks (OSNs) are web services run on logically centralized infrastructure. Large OSN sites use content distribution networks and thus distribute some of the load by caching for performance reasons; nevertheless, there is a central repository for user and application data. This centralized nature of OSNs has several drawbacks including scalability, privacy, dependence on a provider, need for being online for every transaction, and a lack of locality. There have thus been several efforts toward decentralizing OSNs while retaining the functionalities offered by centralized OSNs. A decentralized online social network (DOSN) is a distributed system for social networking with no or limited dependency on any dedicated central infrastructure. In this chapter we explore the various motivations of a decentralized approach to online social networking, discuss several concrete proposals and types of DOSN as well as challenges and opportunities associated with decentralization.
Kolehmainen-Aitken, Riitta-Liisa
2004-01-01
Designers and implementers of decentralization and other reform measures have focused much attention on financial and structural reform measures, but ignored their human resource implications. Concern is mounting about the impact that the reallocation of roles and responsibilities has had on the health workforce and its management, but the experiences and lessons of different countries have not been widely shared. This paper examines evidence from published literature on decentralization's impact on the demand side of the human resource equation, as well as the factors that have contributed to the impact. The elements that make such an impact analysis exceptionally complex are identified. They include the mode of decentralization that a country is implementing, the level of responsibility for the salary budget and pay determination, and the civil service status of transferred health workers. The main body of the paper is devoted to examining decentralization's impact on human resource issues from three different perspectives: that of local health managers, health workers themselves, and national health leaders. These three groups have different concerns in the human resource realm, and consequently, have been differently affected by decentralization processes. The paper concludes with recommendations regarding three key concerns that national authorities and international agencies should give prompt attention to. They are (1) defining the essential human resource policy, planning and management skills for national human resource managers who work in decentralized countries, and developing training programs to equip them with such skills; (2) supporting research that focuses on improving the knowledge base of how different modes of decentralization impact on staffing equity; and (3) identifying factors that most critically influence health worker motivation and performance under decentralization, and documenting the most cost-effective best practices to improve them. Notable experiences from South Africa, Ghana, Indonesia and Mexico are shared in an annex. PMID:15144558
Value Driven Information Processing and Fusion
2016-03-01
The objective of the project is to develop a general framework for value-driven decentralized information processing, including optimal data reduction in a network setting for decentralized inference with quantization constraints, and interactive fusion that allows queries. A consensus approach allows a decentralized scheme to achieve the optimal error exponent of the centralized counterpart, a conclusion that is significant.
Optimizing MRI Logistics: Prospective Analysis of Performance, Efficiency, and Patient Throughput.
Beker, Kevin; Garces-Descovich, Alejandro; Mangosing, Jason; Cabral-Goncalves, Ines; Hallett, Donna; Mortele, Koenraad J
2017-10-01
The objective of this study is to optimize MRI logistics through evaluation of MRI workflow and analysis of performance, efficiency, and patient throughput in a tertiary care academic center. For 2 weeks, workflow data from two outpatient MRI scanners were prospectively collected and stratified by value added to the process (i.e., value-added time, business value-added time, or non-value-added time). Two separate time cycles were measured: the actual MRI process cycle as well as the complete length of patient stay in the department. In addition, the impact and frequency of delays across all observations were measured. A total of 305 MRI examinations were evaluated, including body (34.1%), neurologic (28.9%), musculoskeletal (21.0%), and breast examinations (16.1%). The MRI process cycle lasted a mean of 50.97 ± 24.4 (SD) minutes per examination; the mean non-value-added time was 13.21 ± 18.77 minutes (25.87% of the total process cycle time). The mean length-of-stay cycle was 83.51 ± 33.63 minutes; the mean non-value-added time was 24.33 ± 24.84 minutes (29.14% of the total patient stay). The delay with the highest frequency (5.57%) was IV or port placement, which had a mean delay of 22.82 minutes. The delay with the greatest impact on time was MRI arthrography for which joint injection of contrast medium was necessary but was not accounted for in the schedule (mean delay, 42.2 minutes; frequency, 1.64%). Of 305 patients, 34 (11.15%) did not arrive at or before their scheduled time. Non-value-added time represents approximately one-third of the total MRI process cycle and patient length of stay. Identifying specific delays may expedite the application of targeted improvement strategies, potentially increasing revenue, efficiency, and overall patient satisfaction.
Decentralized control of large flexible structures by joint decoupling
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Juang, Jer-Nan
1994-01-01
This paper presents a novel method to design decentralized controllers for large complex flexible structures by using the idea of joint decoupling. Decoupling of joint degrees of freedom from the interior degrees of freedom is achieved by setting the joint actuator commands to cancel the internal forces exerting on the joint degrees of freedom. By doing so, the interactions between substructures are eliminated. The global structure control design problem is then decomposed into several substructure control design problems. Control commands for interior actuators are set to be localized state feedback using decentralized observers for state estimation. The proposed decentralized controllers can operate successfully at the individual substructure level as well as at the global structure level. Not only control design but also control implementation is decentralized. A two-component mass-spring-damper system is used as an example to demonstrate the proposed method.
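The decoupling step can be sketched for an undamped two-block partition of the structural coordinates into joint (J) and interior (I) degrees of freedom; this is a schematic rendering of the idea, while the paper's formulation includes damping and the substructure-level bookkeeping:

\[
\begin{bmatrix} M_{JJ} & M_{JI}\\ M_{IJ} & M_{II}\end{bmatrix}
\begin{bmatrix} \ddot q_J\\ \ddot q_I\end{bmatrix}
+
\begin{bmatrix} K_{JJ} & K_{JI}\\ K_{IJ} & K_{II}\end{bmatrix}
\begin{bmatrix} q_J\\ q_I\end{bmatrix}
=
\begin{bmatrix} u_J\\ u_I\end{bmatrix},
\qquad
u_J = M_{JI}\,\ddot q_I + K_{JI}\,q_I + v_J .
\]

With this choice the joint equation reduces to \(M_{JJ}\ddot q_J + K_{JJ} q_J = v_J\), which no longer depends on the interior coordinates; each substructure's interior then couples only to its own, independently governed, joint motion and can be handled by localized state feedback with a decentralized observer.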
COINSTAC: Decentralizing the future of brain imaging analysis
Ming, Jing; Verner, Eric; Sarwate, Anand; Kelly, Ross; Reed, Cory; Kahleck, Torran; Silva, Rogers; Panta, Sandeep; Turner, Jessica; Plis, Sergey; Calhoun, Vince
2017-01-01
In the era of Big Data, sharing neuroimaging data across multiple sites has become increasingly important. However, researchers who want to engage in centralized, large-scale data sharing and analysis must often contend with problems such as high database cost, long data transfer time, extensive manual effort, and privacy issues for sensitive data. To remove these barriers to enable easier data sharing and analysis, we introduced a new, decentralized, privacy-enabled infrastructure model for brain imaging data called COINSTAC in 2016. We have continued development of COINSTAC since this model was first introduced. One of the challenges with such a model is adapting the required algorithms to function within a decentralized framework. In this paper, we report on how we are solving this problem, along with our progress on several fronts, including additional decentralized algorithms implementation, user interface enhancement, decentralized regression statistic calculation, and complete pipeline specifications. PMID:29123643
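One way to see how a regression statistic can be computed without pooling raw data is a sufficient-statistics exchange: each site shares only X_i^T X_i and X_i^T y_i, and their sums reproduce the centralized least-squares fit exactly. This is a generic sketch, not COINSTAC's actual pipeline or message format.

```python
import numpy as np

# Each "site" holds its own (X_i, y_i) and shares only small summary matrices.
rng = np.random.default_rng(1)
beta_true = np.array([2.0, -1.0, 0.5])

def make_site(n):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    y = X @ beta_true + 0.1 * rng.normal(size=n)
    return X, y

sites = [make_site(n) for n in (120, 80, 200)]

# local computation; raw X and y never leave the site
XtX = sum(X.T @ X for X, y in sites)
Xty = sum(X.T @ y for X, y in sites)
beta_decentralized = np.linalg.solve(XtX, Xty)

# reference: pooled (centralized) fit on the concatenated data
X_all = np.vstack([X for X, _ in sites])
y_all = np.concatenate([y for _, y in sites])
beta_centralized = np.linalg.lstsq(X_all, y_all, rcond=None)[0]

print(np.allclose(beta_decentralized, beta_centralized))   # True
```

More elaborate decentralized statistics (iterative or privacy-preserving ones) replace the single exchange with repeated rounds, which is where pipeline specification and coordination become important.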
Hamed, Kaveh Akbari; Gregg, Robert D.
2016-01-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:27990059
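A simplified, generic rendering of the bilinear-matrix-inequality step may help; the paper's conditions are posed for hybrid periodic orbits and carry additional structure. For a discrete-time linearization \(x_{k+1} = A x_k + B u_k\) (for example of a Poincaré return map) with decentralized feedback \(u_k = K x_k\), \(K = \mathrm{blkdiag}(K_1,\dots,K_m)\), exponential stability is implied by the existence of \(P \succ 0\) with

\[
\begin{bmatrix} P & (A+BK)^{\top}P\\[2pt] P(A+BK) & P\end{bmatrix} \succ 0 ,
\]

which is bilinear in the pair (P, K) but becomes an LMI in P for fixed K and an LMI in K for fixed P, so it can be attacked by the kind of iterative sequence of LMI problems described above. Convergence of such alternation is not automatic in general, which is why sufficient conditions for convergence matter.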
The social control of energy: A case for the promise of decentralized solar technologies
NASA Astrophysics Data System (ADS)
Gilmer, R. W.
1980-05-01
Decentralized solar technology and centralized electric utilities were contrasted in the ways they assign property rights in capital and energy output; in the assignment of operational control; and in the means of monitoring, policing, and enforcing property rights. An analogy was drawn between the decision of an energy consumer to use decentralized solar and the decision of a firm to vertically integrate, that is, to extend the boundary of the firm by making inputs or further processing output. Decentralized solar energy production offers the small energy consumer the chance to cut ties to outside suppliers--to vertically integrate energy production into the home or business. The development of this analogy provides insight into important noneconomic aspects of solar energy, and it points clearly to the lighter burdens of social management offered by decentralized solar technology.
The Impact of Electronic Health Records on Workflow and Financial Measures in Primary Care Practices
Fleming, Neil S; Becker, Edmund R; Culler, Steven D; Cheng, Dunlei; McCorkle, Russell; da Graca, Briget; Ballard, David J
2014-01-01
Objective To estimate a commercially available ambulatory electronic health record’s (EHR’s) impact on workflow and financial measures. Data Sources/Study Setting Administrative, payroll, and billing data were collected for 26 primary care practices in a fee-for-service network that rolled out an EHR on a staggered schedule from June 2006 through December 2008. Study Design An interrupted time series design was used. Staffing, visit intensity, productivity, volume, practice expense, payments received, and net income data were collected monthly for 2004–2009. Changes were evaluated 1–6, 7–12, and >12 months postimplementation. Data Collection/Extraction Methods Data were accessed through a SQLserver database, transformed into SAS®, and aggregated by practice. Practice-level data were divided by full-time physician equivalents for comparisons across practices by month. Principal Findings Staffing and practice expenses increased following EHR implementation (3 and 6 percent after 12 months). Productivity, volume, and net income decreased initially but recovered to/close to preimplementation levels after 12 months. Visit intensity did not change significantly, and a secular trend offset the decrease in payments received. Conclusions Expenses increased and productivity decreased following EHR implementation, but not as much or as persistently as might be expected. Longer term effects still need to be examined. PMID:24359533
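An interrupted time series of this kind is commonly estimated as a segmented regression with a level change and a slope change at implementation. The sketch below uses synthetic monthly data with a single implementation month; variable names and all numbers are illustrative and do not reflect the study's staggered rollout or its actual estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

months = np.arange(72)                         # six years of monthly data
implementation_month = 30                      # single cut point for illustration
post = (months >= implementation_month).astype(int)
months_since = np.where(post == 1, months - implementation_month, 0)

rng = np.random.default_rng(2)
# synthetic outcome, e.g. a practice-level measure per physician FTE
y = 100 + 0.3 * months - 8 * post + 0.5 * months_since + rng.normal(0, 3, size=72)

X = sm.add_constant(pd.DataFrame({
    "time": months,                # secular (pre-existing) trend
    "post": post,                  # immediate level change at implementation
    "time_since": months_since,    # change in slope after implementation
}))
fit = sm.OLS(y, X).fit()
print(fit.params)                  # segmented-regression estimates of the effects
```

Evaluating effects at 1-6, 7-12, and >12 months, as in the study, amounts to reading the fitted level and slope changes at those horizons (or using period indicator terms instead of a single slope change).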
The LHCb software and computing upgrade for Run 3: opportunities and challenges
NASA Astrophysics Data System (ADS)
Bozzi, C.; Roiser, S.; LHCb Collaboration
2017-10-01
The LHCb detector will be upgraded for the LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications on the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi-, many-core architectures, and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will allow a reasonable parameterization of the detector response to be obtained in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, test and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.
Enabling Efficient Climate Science Workflows in High Performance Computing Environments
NASA Astrophysics Data System (ADS)
Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.
2015-12-01
A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provide a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.
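The "task parallel way (MPI)" ingredient can be sketched as an embarrassingly parallel per-file analysis in mpi4py. The glob pattern and the analyze() stub below are placeholders, not CASCADE's actual code; the point is only the work-partitioning pattern.

```python
from mpi4py import MPI
import glob
import os

def analyze(path):
    # placeholder for the real per-file analysis (statistics, regridding, ...)
    return os.path.getsize(path)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# rank 0 discovers the work list; the path pattern is illustrative only
files = sorted(glob.glob("/scratch/cascade/*.nc")) if rank == 0 else None
files = comm.bcast(files, root=0)             # every rank receives the same list

local = [(p, analyze(p)) for p in files[rank::size]]   # round-robin share
gathered = comm.gather(local, root=0)
if rank == 0:
    merged = [item for chunk in gathered for item in chunk]
    print(len(merged), "files processed across", size, "ranks")
```

Launched with a scheduler-provided `mpirun`/`srun`, this pattern scales the per-file step across nodes while data movement (HTAR/Globus) and publication (ESGF) remain separate stages of the pipeline.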
Fischman, Daniel
2010-01-01
Patients' connectedness to their providers has been shown to influence the success of preventive health and disease management programs. Lean Six Sigma methodologies were employed to study workflow processes, patient-physician familiarity, and appointment compliance to improve continuity of care in an internal medicine residency clinic. We used a rapid-cycle test to evaluate proposed improvements to the baseline-identified factors impeding efficient clinic visits. Time-study, no-show, and patient-physician familiarity data were collected to evaluate the effect of interventions to improve clinic efficiency and continuity of medical care. Forty-seven patients were seen in each of the intervention and control groups. The wait duration between the end of triage and the resident-patient encounter was statistically shorter for the intervention group. Trends toward shorter wait times for medical assistant triage and total encounter were also seen in the intervention group. On all measures of connectedness, both the physicians and patients in the intervention group showed a statistically significant increased familiarity with each other. This study shows that incremental changes in workflow processes in a residency clinic can have a significant impact on practice efficiency and adherence to scheduled visits for preventive health care and chronic disease management. This project used a structured "Plan-Do-Study-Act" approach.
2010-01-01
Background Histologic samples all funnel through the H&E microtomy staining area. Here manual processes intersect with semi-automated processes creating a bottleneck. We compare alternate work processes in anatomic pathology primarily in the H&E staining work cell. Methods We established a baseline measure of H&E process impact on personnel, information management and sample flow from historical workload and production data and direct observation. We compared this to performance after implementing initial Lean process modifications, including workstation reorganization, equipment relocation and workflow levelling, and the Ventana Symphony stainer to assess the impact on productivity in the H&E staining work cell. Results Average time from gross station to assembled case decreased by 2.9 hours (12%). Total process turnaround time (TAT) exclusive of processor schedule changes decreased 48 minutes/case (4%). Mean quarterly productivity increased 8.5% with the new methods. Process redesign reduced the number of manual steps from 219 to 182, a 17% reduction. Specimen travel distance was reduced from 773 ft/case to 395 ft/case (49%) overall, and from 92 to 53 ft/case in the H&E cell (42% improvement). Conclusions Implementation of Lean methods in the H&E work cell of histology can result in improved productivity, improved through-put and case availability parameters including TAT. PMID:20181123
The Effect of School Autonomy and School Internal Decentralization on Students' Reading Literacy
ERIC Educational Resources Information Center
Maslowski, Ralf; Scheerens, Jaap; Luyten, Hans
2007-01-01
Over the past 2 decades, a large number of countries have been engaged in the decentralization of decision-making to schools. Although the motives and incentives for school autonomy are often diverse, it is commonly believed that decentralization will enhance the quality of schooling. Based on a secondary analysis of data from OECD's Programme for…
ERIC Educational Resources Information Center
Candoli, I. C.; Leu, Donald J.
This analysis draws on a variety of experiences with and models of centralized and decentralized school systems now in existence. The decentralized model or profile posed for consideration is intended as a basis for the development of a process by which indigenous models can be established for any locale as unique local variables are identified…
Multi-Agent Task Negotiation Among UAVs to Defend Against Swarm Attacks
2012-03-01
Efficient decentralized consensus protocols
NASA Technical Reports Server (NTRS)
Lakshman, T. V.; Agrawala, A. K.
1986-01-01
Decentralized consensus protocols are characterized by successive rounds of message interchanges. Protocols which achieve a consensus in one round of message interchange require O(N^2) messages, where N is the number of participants. In this paper, a communication scheme, based on finite projective planes, which requires only O(N√N) messages for each round is presented. Using this communication scheme, decentralized consensus protocols which achieve a consensus within two rounds of message interchange are developed. The protocols are symmetric, and the communication scheme does not impose any hierarchical structure. The scheme is illustrated using blocking and nonblocking commit protocols, decentralized extrema finding, and computation of the sum function.
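One concrete way to realize a projective-plane communication structure (a sketch of the combinatorics only, not the two-round protocols built on top of it) is to take the points of PG(2, q) for a prime q as participants, let participant i send to the q + 1 points on line i, and rely on the fact that any two lines meet in exactly one point, so every pair of participants' message sets overlap. A round then costs N(q + 1) = O(N√N) messages instead of N(N - 1).

```python
from itertools import product

def projective_points(q):
    """Points of PG(2, q) as normalized homogeneous triples, q prime."""
    pts = []
    for a, b, c in product(range(q), repeat=3):
        if (a, b, c) == (0, 0, 0):
            continue
        for lead in (a, b, c):          # normalize first nonzero coord to 1
            if lead != 0:
                inv = pow(lead, q - 2, q)
                break
        p = (a * inv % q, b * inv % q, c * inv % q)
        if p not in pts:
            pts.append(p)
    return pts

q = 5                       # prime order; N = q^2 + q + 1 participants
points = projective_points(q)
lines = points              # in PG(2, q) lines have the same coordinate form
N = len(points)             # 31 for q = 5

def on_line(p, l):
    return sum(pi * li for pi, li in zip(p, l)) % q == 0

# participant i's message set: the points incident to line i (size q + 1)
quorums = [[j for j, p in enumerate(points) if on_line(p, lines[i])]
           for i in range(N)]

assert all(len(qr) == q + 1 for qr in quorums)
# any two lines of a projective plane meet in exactly one point,
# so any two participants' message sets overlap
assert all(len(set(quorums[i]) & set(quorums[j])) == 1
           for i in range(N) for j in range(i + 1, N))

print(N, N * (q + 1), N * (N - 1))   # 31 participants: 186 vs 930 messages
```

For q = 5 this gives 31 participants and 186 messages per round versus 930 for all-to-all exchange, which is the O(N√N) versus O(N^2) saving noted above.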
Vazquez, Luis A; Jurado, Francisco; Castaneda, Carlos E; Santibanez, Victor
2018-02-01
This paper presents a continuous-time decentralized neural control scheme for trajectory tracking of a two degrees of freedom direct drive vertical robotic arm. A decentralized recurrent high-order neural network (RHONN) structure is proposed to identify online, in a series-parallel configuration and using the filtered error learning law, the dynamics of the plant. Based on the RHONN subsystems, a local neural controller is derived via backstepping approach. The effectiveness of the decentralized neural controller is validated on a robotic arm platform, of our own design and unknown parameters, which uses industrial servomotors to drive the joints.
A review on full-scale decentralized wastewater treatment systems: techno-economical approach.
Singh, Nitin Kumar; Kazmi, A A; Starkl, M
2015-01-01
As a solution to the shortcomings of centralized systems, over the last two decades large numbers of decentralized wastewater treatment plants of different technology types have been installed all over the world. This paper aims at deriving lessons learned from existing decentralized wastewater treatment plants that are relevant for smaller towns (and peri-urban areas) as well as rural communities in developing countries, such as India. Only full-scale implemented decentralized wastewater treatment systems are reviewed in terms of performance, land area requirement, capital cost, and operation and maintenance costs. The results are presented in tables comparing different technology types with respect to those parameters.
A modified approach to controller partitioning
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Veillette, Robert J.
1993-01-01
The idea of computing a decentralized control law for the integrated flight/propulsion control of an aircraft by partitioning a given centralized controller is investigated. An existing controller partitioning methodology is described, and a modified approach is proposed with the objective of simplifying the associated controller approximation problem. Under the existing approach, the decentralized control structure is a variable in the partitioning process; by contrast, the modified approach assumes that the structure is fixed a priori. Hence, the centralized controller design may take the decentralized control structure into account. Specifically, the centralized controller may be designed to include all the same inputs and outputs as the decentralized controller; then, the two controllers may be compared directly, simplifying the partitioning process considerably. Following the modified approach, a centralized controller is designed for an example aircraft model. The design includes all the inputs and outputs to be used in a specified decentralized control structure. However, it is shown that the resulting centralized controller is not well suited for approximation by a decentralized controller of the given structure. The results indicate that it is not practical in general to cast the controller partitioning problem as a direct controller approximation problem.
Fully decentralized estimation and control for a modular wheeled mobile robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mutambara, A.G.O.; Durrant-Whyte, H.F.
2000-06-01
In this paper, the problem of fully decentralized data fusion and control for a modular wheeled mobile robot (WMR) is addressed. This is a vehicle system with nonlinear kinematics, distributed multiple sensors, and nonlinear sensor models. The problem is solved by applying fully decentralized estimation and control algorithms based on the extended information filter. This is achieved by deriving a modular, decentralized kinematic model by using plane motion kinematics to obtain the forward and inverse kinematics for a generalized simple wheeled vehicle. This model is then used in the decentralized estimation and control algorithms. WMR estimation and control are thus obtained locally using reduced-order models with reduced communication. When communication of information between nodes is carried out after every measurement (full-rate communication), the estimates and control signals obtained at each node are equivalent to those obtained by a corresponding centralized system. Transputer architecture is used as the basis for hardware and software design as it supports the extensive communication and concurrency requirements that characterize modular and decentralized systems. The advantages of a modular WMR vehicle include scalability, application flexibility, low prototyping costs, and high reliability.
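The equivalence claimed under full-rate communication rests on the additive structure of the information filter. The sketch below is a linear, static illustration of that fusion step only (the paper uses the extended information filter on nonlinear vehicle kinematics): summing each node's locally computed information contributions reproduces the centralized estimate exactly.

```python
import numpy as np

# Each node computes H_i^T R_i^{-1} H_i and H_i^T R_i^{-1} z_i locally and only
# these small summaries are communicated; their sums give the central estimate.
rng = np.random.default_rng(3)
x_true = np.array([1.0, -2.0, 0.5])          # e.g. a planar pose
sigma = 0.05

nodes = []
for _ in range(4):                            # four sensing nodes
    H = rng.normal(size=(2, 3))               # local (linearized) sensor model
    z = H @ x_true + rng.normal(0, sigma, size=2)
    nodes.append((H, z))

Y0 = 1e-3 * np.eye(3)                         # weak prior, information form
y0 = np.zeros(3)

# decentralized: add the communicated information contributions
Y = Y0 + sum(H.T @ H / sigma**2 for H, z in nodes)
y = y0 + sum(H.T @ z / sigma**2 for H, z in nodes)
x_dec = np.linalg.solve(Y, y)

# centralized reference: stack every raw measurement into one update
H_all = np.vstack([H for H, z in nodes])
z_all = np.concatenate([z for H, z in nodes])
Y_c = Y0 + H_all.T @ H_all / sigma**2
x_cen = np.linalg.solve(Y_c, y0 + H_all.T @ z_all / sigma**2)

print(np.allclose(x_dec, x_cen))              # True: full-rate fusion is exact
```

With less frequent communication the local estimates diverge from the centralized one between exchanges, which is the trade-off the reduced-communication variants accept.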
On decentralized adaptive full-order sliding mode control of multiple UAVs.
Xiang, Xianbo; Liu, Chao; Su, Housheng; Zhang, Qin
2017-11-01
In this study, a novel decentralized adaptive full-order sliding mode control framework is proposed for the robust synchronized formation motion of multiple unmanned aerial vehicles (UAVs) subject to system uncertainty. First, a full-order sliding mode surface in a decentralized manner is designed to incorporate both the individual position tracking error and the synchronized formation error while the UAV group is engaged in building a certain desired geometric pattern in three dimensional space. Second, a decentralized virtual plant controller is constructed which allows the embedded low-pass filter to attain the chattering free property of the sliding mode controller. In addition, robust adaptive technique is integrated in the decentralized chattering free sliding control design in order to handle unknown bounded uncertainties, without requirements for assuming a priori knowledge of bounds on the system uncertainties as stated in conventional chattering free control methods. Subsequently, system robustness as well as stability of the decentralized full-order sliding mode control of multiple UAVs is synthesized. Numerical simulation results illustrate the effectiveness of the proposed control framework to achieve robust 3D formation flight of the multi-UAV system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
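A schematic second-order rendering of the kind of surface described may help; the paper's full-order surface, adaptive terms, and filter are more elaborate. With e_i the individual tracking error of UAV i and \(\mathcal{N}_i\) its formation neighbours,

\[
\epsilon_i = \sum_{j\in\mathcal N_i}\bigl(e_i - e_j\bigr),\qquad
E_i = e_i + \beta\,\epsilon_i,\qquad
s_i = \ddot E_i + c_2\,\dot E_i + c_1\,E_i,\quad c_1, c_2 > 0,
\]

so that keeping s_i at zero imposes the full-order stable error dynamics \(\ddot E_i + c_2\dot E_i + c_1 E_i = 0\) on a combination of the individual and synchronization errors, computed from UAV i's own states and those of its neighbours only; routing the switching term through the virtual plant's low-pass filter is what removes chattering, and the adaptive terms bound the uncertainty without assuming its bounds are known a priori.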
Flexible workflow sharing and execution services for e-scientists
NASA Astrophysics Data System (ADS)
Kacsuk, Péter; Terstyanszky, Gábor; Kiss, Tamas; Sipos, Gergely
2013-04-01
The sequence of computational and data manipulation steps required to perform a specific scientific analysis is called a workflow. Workflows that orchestrate data and/or compute intensive applications on Distributed Computing Infrastructures (DCIs) recently became standard tools in e-science. At the same time the broad and fragmented landscape of workflows and DCIs slows down the uptake of workflow-based work. The development, sharing, integration and execution of workflows is still a challenge for many scientists. The FP7 "Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs" (SHIWA) project significantly improved the situation, with a simulation platform that connects different workflow systems, different workflow languages, different DCIs and workflows into a single, interoperable unit. The SHIWA Simulation Platform is a service package, already used by various scientific communities, and used as a tool by the recently started ER-flow FP7 project to expand the use of workflows among European scientists. The presentation will introduce the SHIWA Simulation Platform and the services that ER-flow provides based on the platform to space and earth science researchers. The SHIWA Simulation Platform includes: 1. SHIWA Repository: A database where workflows and meta-data about workflows can be stored. The database is a central repository to discover and share workflows within and among communities. 2. SHIWA Portal: A web portal that is integrated with the SHIWA Repository and includes a workflow executor engine that can orchestrate various types of workflows on various grid and cloud platforms. 3. SHIWA Desktop: A desktop environment that provides access capabilities similar to the SHIWA Portal; however, it runs on the users' desktops/laptops instead of a portal server. 4. Workflow engines: the ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflow engines are already integrated with the execution engine of the SHIWA Portal. Other engines can be added when required. Through the SHIWA Portal one can define and run simulations on the SHIWA Virtual Organisation, an e-infrastructure that gathers computing and data resources from various DCIs, including the European Grid Infrastructure. The Portal, via third party workflow engines, provides support for the most widely used academic workflow engines and it can be extended with other engines on demand. Such extensions translate between workflow languages and facilitate the nesting of workflows into larger workflows even when those are written in different languages and require different interpreters for execution. Through the workflow repository and the portal, individual scientists and scientific collaborations can share and offer workflows for reuse and execution. Given the integrated nature of the SHIWA Simulation Platform, the shared workflows can be executed online, without installing any special client environment or downloading workflows. The FP7 "Building a European Research Community through Interoperable Workflows and Data" (ER-flow) project disseminates the achievements of the SHIWA project and uses these achievements to build workflow user communities across Europe. ER-flow provides application support to research communities within and beyond the project consortium to develop, share and run workflows with the SHIWA Simulation Platform.
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Barrett, Anthony C.
2003-01-01
Interacting agents that interleave planning and execution must reach consensus on their commitments to each other. In domains where agents have varying degrees of interaction and different constraints on communication and computation, agents will require different coordination protocols in order to efficiently reach consensus in real time. We briefly describe a largely unexplored class of real-time, distributed planning problems (inspired by interacting spacecraft missions), new challenges they pose, and a general approach to solving the problems. These problems involve self-interested agents that have infrequent communication but collaborate on joint activities. We describe a Shared Activity Coordination (SHAC) framework that provides a decentralized algorithm for negotiating the scheduling of shared activities in a dynamic environment, a soft, real-time approach to reaching consensus during execution with limited communication, and a foundation for customizing protocols for negotiating planner interactions. We apply SHAC to a realistic simulation of interacting Mars missions and illustrate the simplicity of protocol development.
Continual coordination through shared activities
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Barrett, Anthony C.
2003-01-01
Interacting agents that interleave planning and execution must reach consensus on their commitments to each other. In domains where agents have varying degrees of interaction and different constraints on communication and computation, agents will require different coordination protocols in order to efficiently reach consensus in real time. We briefly describe a largely unexplored class of realtime, distributed planning problems (inspired by interacting spacecraft missions), new challenges they pose, and a general approach to solving the problems. These problems involve self-interested agents that have infrequent communication but collaborate on joint activities. We describe a Shared Activity Coordination (SHAC) framework that provides a decentralized algorithm for negotiating the scheduling of shared activities over the lifetimes of separate missions, a soft, real-time approach to reaching consensus during execution with limited communication, and a foundation for customizing protocols for negotiating planner interactions. We apply SHAC to a realistic simulation of interacting Mars missions and illustrate the simplicity of protocol development.
Precision Formation Keeping at L2 Using the Autonomous Formation Flying Sensor
NASA Technical Reports Server (NTRS)
McLoughlin, Terence H.; Campbell, Mark
2004-01-01
Recent advances in formation keeping for large numbers of spacecraft using the Autonomous Formation Flying sensor are presented. This sensor, currently under development at JPL, has been identified as a key component in future formation flying spacecraft missions. The sensor provides accurate range and bearing measurements between pairs of spacecraft using GPS technology. Previous theoretical work by the authors has focused on developing a decentralized scheduling algorithm to control the tasking of such a sensor between the relative range and bearing measurements to each node in the formation. The resulting algorithm has been modified to include switching constraints in the sensor. This paper also presents a testbed for real-time validation of a sixteen-node formation based on the Stellar Imager mission. Key aspects of the simulation include minimum-fuel maneuvers based on free-body dynamics and a three-body propagator for simulating the formation at L2.
Argumentation for coordinating shared activities
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Barrett, Anthony C.; Schaffer, Steven R.
2004-01-01
There is an increasing need for space missions to be able to collaboratively (and competitively) develop plans both within and across missions. In addition, interacting spacecraft that interleave onboard planning and execution must reach consensus on their commitments to each other prior to execution. In domains where missions have varying degrees of interaction and different constraints on communication and computation, the missions will require different coordination protocols in order to efficiently reach consensus within their imposed deadlines. We describe a Shared Activity Coordination (SHAC) framework that provides a decentralized algorithm for negotiating the scheduling of shared activities over the lifetimes of multiple agents and a foundation for customizing protocols for negotiating planner interactions. We investigate variations of a few simple protocols based on argumentation and distributed constraint satisfaction techniques and evaluate their abilities to reach consistent solutions according to computation, time, and communication costs in an abstract domain where spacecraft propose joint measurements.
Lessons learned from a pharmacy practice model change at an academic medical center.
Knoer, Scott J; Pastor, John D; Phelps, Pamela K
2010-11-01
The development and implementation of a new pharmacy practice model at an academic medical center are described. Before the model change, decentralized pharmacists responsible for order entry and verification and clinical specialists were both present on the care units. Staff pharmacists were responsible for medication distribution and sterile product preparation. The decentralized pharmacists handling orders were not able to use their clinical training, the practice model was inefficient, and few clinical services were available during evenings and weekends. A task force representing all pharmacy department roles developed a process and guiding principles for the model change, collected data, and decided on a model. Teams consisting of decentralized pharmacists, decentralized pharmacy technicians, and team leaders now work together to meet patients' pharmacy needs and further departmental safety, quality, and cost-saving goals. Decentralized service hours have been expanded through operational efficiencies, including use of automation (e.g., computerized provider order entry, wireless computers on wheels used during rounds with physician teams). Nine clinical specialist positions were replaced by five team leader positions and four pharmacists functioning in decentralized roles. Additional staff pharmacist positions were shifted into decentralized roles, and the hospital was divided into areas served by teams including five to eight pharmacists. Technicians are directly responsible for medication distribution. No individual's job was eliminated. The new practice model allowed better alignment of staff with departmental goals, expanded pharmacy hours and services, more efficient medication distribution, improved employee engagement, and a staff succession plan.
Decentralized care for multidrug-resistant tuberculosis: a systematic review and meta-analysis.
Ho, Jennifer; Byrne, Anthony L; Linh, Nguyen N; Jaramillo, Ernesto; Fox, Greg J
2017-08-01
To assess the effectiveness of decentralized treatment and care for patients with multidrug-resistant (MDR) tuberculosis, in comparison with centralized approaches. We searched ClinicalTrials.gov, the Cochrane library, Embase®, Google Scholar, LILACS, PubMed®, Web of Science and the World Health Organization's portal of clinical trials for studies reporting treatment outcomes for decentralized and centralized care of MDR tuberculosis. The primary outcome was treatment success. When possible, we also evaluated death, loss to follow-up, treatment adherence and health-system costs. To obtain pooled relative risk (RR) estimates, we performed random-effects meta-analyses. Eight studies met the eligibility criteria for review inclusion. Six cohort studies, with 4026 participants in total, reported on treatment outcomes. The pooled RR estimate for decentralized versus centralized care for treatment success was 1.13 (95% CI: 1.01-1.27). The corresponding estimate for loss to follow-up was RR: 0.66 (95% CI: 0.38-1.13), for death RR: 1.01 (95% CI: 0.67-1.52) and for treatment failure was RR: 1.07 (95% CI: 0.48-2.40). Two of three studies evaluating health-care costs reported lower costs for the decentralized models of care than for the centralized models. Treatment success was more likely among patients with MDR tuberculosis treated using a decentralized approach. Further studies are required to explore the effectiveness of decentralized MDR tuberculosis care in a range of different settings.
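The pooled relative risks above are the output of a random-effects meta-analysis. As an illustration only (a generic DerSimonian-Laird pooling sketch with made-up study inputs, not the authors' actual computation), the calculation on the log relative-risk scale looks like this:

```python
import math

def pooled_rr_dersimonian_laird(log_rrs, variances):
    """Pool per-study log relative risks with a DerSimonian-Laird
    random-effects model; returns (pooled RR, 95% CI low, 95% CI high)."""
    k = len(log_rrs)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, log_rrs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    y_pooled = sum(wi * yi for wi, yi in zip(w_star, log_rrs)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return (math.exp(y_pooled),
            math.exp(y_pooled - 1.96 * se),
            math.exp(y_pooled + 1.96 * se))

# Hypothetical study-level inputs (log RR and its variance), for illustration only.
print(pooled_rr_dersimonian_laird([0.10, 0.15, 0.08], [0.01, 0.02, 0.015]))
```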
Decentralized care for multidrug-resistant tuberculosis: a systematic review and meta-analysis
Byrne, Anthony L; Linh, Nguyen N; Jaramillo, Ernesto; Fox, Greg J
2017-01-01
Abstract Objective To assess the effectiveness of decentralized treatment and care for patients with multidrug-resistant (MDR) tuberculosis, in comparison with centralized approaches. Methods We searched ClinicalTrials.gov, the Cochrane library, Embase®, Google Scholar, LILACS, PubMed®, Web of Science and the World Health Organization’s portal of clinical trials for studies reporting treatment outcomes for decentralized and centralized care of MDR tuberculosis. The primary outcome was treatment success. When possible, we also evaluated death, loss to follow-up, treatment adherence and health-system costs. To obtain pooled relative risk (RR) estimates, we performed random-effects meta-analyses. Findings Eight studies met the eligibility criteria for review inclusion. Six cohort studies, with 4026 participants in total, reported on treatment outcomes. The pooled RR estimate for decentralized versus centralized care for treatment success was 1.13 (95% CI: 1.01–1.27). The corresponding estimate for loss to follow-up was RR: 0.66 (95% CI: 0.38–1.13), for death RR: 1.01 (95% CI: 0.67–1.52) and for treatment failure was RR: 1.07 (95% CI: 0.48–2.40). Two of three studies evaluating health-care costs reported lower costs for the decentralized models of care than for the centralized models. Conclusion Treatment success was more likely among patients with MDR tuberculosis treated using a decentralized approach. Further studies are required to explore the effectiveness of decentralized MDR tuberculosis care in a range of different settings. PMID:28804170
Nathan, Lisa M.; Shi, Quihu; Plewniak, Kari; Zhang, Charles; Nsabimana, Damien; Sklar, Marc; Mutimura, Eugene; Merkatz, Irwin R.; Einstein, Mark H.; Anastos, Kathryn
2015-01-01
To evaluate the effectiveness of decentralizing ambulatory reproductive and intrapartum services to increase rates of antenatal care (ANC) utilization and skilled attendance at birth (SAB) in Rwanda. A prospective cohort study was implemented with one control and two intervention sites: decentralized ambulatory reproductive healthcare and decentralized intrapartum care. Multivariate logistic regression analysis was performed with the primary outcome of lack of SAB and the secondary outcome of ≥3 ANC visits. In total, 536 women were enrolled in the study. Distance of residence from the delivery site significantly predicted SAB (p = 0.007); however, distance of residence from the ANC site did not predict ≥3 ANC visits (p = 0.81). Neither decentralization of ambulatory reproductive healthcare (p = 0.10) nor intrapartum care (p = 0.40) was significantly associated with SAB. The control site had the greatest percentage of women receiving ≥3 ANC visits (p < 0.001). Receiving <3 ANC visits was associated with 3.98 times greater odds of not having SAB (p = 0.001). No increase in adverse outcomes was found with decentralization of ambulatory reproductive health care or intrapartum care. The factors that predict utilization of physically accessible services in rural Africa are complex. Decentralization of services may be one strategy to increase rates of SAB and ANC utilization, but selection biases may have precluded accurate analysis. Efforts to increase ANC utilization may be a worthwhile investment to increase SAB. PMID:25652061
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, George; Sivaramakrishnan, Chandrika; Critchlow, Terence J.
2011-07-04
A drawback of existing scientific workflow systems is the lack of support to domain scientists in designing and executing their own scientific workflows. Many domain scientists avoid developing and using workflows because the basic objects of workflows are too low-level and high-level tools and mechanisms to aid in workflow construction and use are largely unavailable. In our research, we are prototyping higher-level abstractions and tools to better support scientists in their workflow activities. Specifically, we are developing generic actors that provide abstract interfaces to specific functionality, workflow templates that encapsulate workflow and data patterns that can be reused and adapted by scientists, and context-awareness mechanisms to gather contextual information from the workflow environment on behalf of the scientist. To evaluate these scientist-centered abstractions on real problems, we apply them to construct and execute scientific workflows in the specific domain area of groundwater modeling and analysis.
DEWEY: the DICOM-enabled workflow engine system.
Erickson, Bradley J; Langer, Steve G; Blezek, Daniel J; Ryan, William J; French, Todd L
2014-06-01
Workflow is a widely used term to describe the sequence of steps to accomplish a task. The use of workflow technology in medicine and medical imaging in particular is limited. In this article, we describe the application of a workflow engine to improve workflow in a radiology department. We implemented a DICOM-enabled workflow engine system in our department. We designed it in a way to allow for scalability, reliability, and flexibility. We implemented several workflows, including one that replaced an existing manual workflow and measured the number of examinations prepared in time without and with the workflow system. The system significantly increased the number of examinations prepared in time for clinical review compared to human effort. It also met the design goals defined at its outset. Workflow engines appear to have value as ways to efficiently assure that complex workflows are completed in a timely fashion.
Extended Decentralized Linear-Quadratic-Gaussian Control
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2000-01-01
A straightforward extension of a solution to the decentralized Linear-Quadratic-Gaussian problem is proposed that allows its use for commonly encountered classes of problems that are currently solved with the extended Kalman filter. This extension allows the system to be partitioned in such a way as to exclude the nonlinearities from the essential algebraic relationships that allow the estimation and control to be optimally decentralized.
Tran, Bach Xuan; Nguyen, Long Hoang; Phan, Huong Thu Thi; Nguyen, Linh Khanh; Latkin, Carl A
2015-09-17
Integrating and decentralizing services are essential to increase the accessibility and provide comprehensive care for methadone patients. Moreover, they assure the sustainability of an HIV/AIDS prevention program by reducing the implementation cost. This study aimed to measure the preference of patients enrolling in an MMT program for integrated and decentralized MMT clinics and then further examine related factors. A cross-sectional study was conducted among 510 patients receiving methadone at 3 clinics in Hanoi. Structured questionnaires were used to collect data about the preference for integrated and decentralized MMT services. Covariates including socio-economic status; health-related quality of life (using the EQ-5D-5L instrument) and HIV status; history of drug use along with MMT treatment; and exposure to discrimination within family and community were also investigated. Multivariate logistic regression with fractional polynomials was used to identify the determinants of preference for integrative and decentralized models. Of 510 patients enrolled, 66.7% and 60.8% preferred integrated and decentralized models, respectively. The main reason for preferring the integrative model was the convenience of use of various services (53.2%), while more privacy (43.5%) was the primary reason to select the stand-alone model. People preferred the decentralized model primarily because of travel cost reduction (95.0%), while the main reason for not selecting the model was increased privacy (7.7%). After adjusting for covariates, factors influencing the preference for the integrative model were poor socioeconomic status, anxiety/depression, history of drug rehabilitation, and ever having disclosed health status, while exposure to community discrimination was inversely associated with this preference. In addition, people who were self-employed, had a longer duration of MMT, and currently used MMT with comprehensive HIV services were less likely to select the decentralized model. In conclusion, the study confirmed the high preference of MMT patients for the integrative and decentralized MMT service delivery models. The convenience of healthcare services utilization and the reduction of geographical barriers were the main reasons to use those models within drug use populations in Vietnam. Countering community stigma and encouraging communication between patients and their communities need to be considered when implementing those models.
Tavaxy: integrating Taverna and Galaxy workflows with cloud computing support.
Abouelhoda, Mohamed; Issa, Shadi Alaa; Ghanem, Moustafa
2012-05-04
Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org.
Barista: A Framework for Concurrent Speech Processing by USC-SAIL
Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G.; Narayanan, Shrikanth S.
2016-01-01
We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista lets demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0. PMID:27610047
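Barista itself is a C++ framework built on Kaldi and libcppa; the sketch below only illustrates the general idea the abstract describes, a network of actors that communicate by message passing, using plain Python threads and queues. All stage names and messages here are hypothetical.

```python
import queue, threading

def actor(inbox, outbox, transform, stop="STOP"):
    """A minimal actor: read messages, transform them, forward the result."""
    while True:
        msg = inbox.get()
        if msg == stop:
            if outbox is not None:
                outbox.put(stop)   # propagate shutdown downstream
            break
        if outbox is not None:
            outbox.put(transform(msg))

# A two-stage pipeline: "feature extraction" followed by "decoding" (both fake).
q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
stages = [
    threading.Thread(target=actor, args=(q_in, q_mid, lambda x: ("feats", x))),
    threading.Thread(target=actor, args=(q_mid, q_out, lambda x: ("decoded", x))),
]
for t in stages:
    t.start()
for chunk in ["audio-0", "audio-1", "STOP"]:
    q_in.put(chunk)
for t in stages:
    t.join()
while not q_out.empty():
    print(q_out.get())
```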
Job-sharing in nuclear medicine: an 8-year experience (1998-2006).
Als, Claudine; Brautigam, Peter
2006-01-01
Job-sharing is generally defined as a situation in which a single professional position is held in common by two separate individuals, who alternately, over time, deal with the workload and the responsibilities. The aim of the present paper is to discuss prerequisites and characteristics of job-sharing by medical doctors and its implications in a department of nuclear medicine. Job-sharing facilitates the combination of family life with professional occupation and prevents burnout. The time schedule applied by the job-sharers is relevant: will both partners work for half-days, half-weeks, or rather alternate during one to two consecutive weeks? This crucial choice, which depends on personal as well as professional circumstances, certainly influences the workflow of the department.
NASA Technical Reports Server (NTRS)
2000-01-01
Oak Grove Reactor, developed by Oak Grove Systems, is a new software program that allows users to integrate workflow processes. It can be used with portable communication devices. The software can join e-mail, calendar/scheduling and legacy applications into one interactive system via the web. Priority tasks and due dates are organized and highlighted to keep the user up to date with developments. Reactor works with existing software and few new skills are needed to use it. Using a web browser, a user can work on something while other users work on the same procedure or view its status while it is being worked on at another site. The software was developed by the Jet Propulsion Lab and originally put to use at Johnson Space Center.
ERP (enterprise resource planning) systems can streamline healthcare business functions.
Jenkins, E K; Christenson, E
2001-05-01
Enterprise resource planning (ERP) software applications are designed to facilitate the systemwide integration of complex processes and functions across a large enterprise consisting of many internal and external constituents. Although most currently available ERP applications generally are tailored to the needs of the manufacturing industry, many large healthcare systems are investigating these applications. Due to the significant differences between manufacturing and patient care, ERP-based systems do not easily translate to the healthcare setting. In particular, the lack of clinical standardization impedes the use of ERP systems for clinical integration. Nonetheless, an ERP-based system can help a healthcare organization integrate many functions, including patient scheduling, human resources management, workload forecasting, and management of workflow, that are not directly dependent on clinical decision making.
Barista: A Framework for Concurrent Speech Processing by USC-SAIL.
Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G; Narayanan, Shrikanth S
2014-05-01
We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista lets demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0.
Decentralized Interleaving of Paralleled Dc-Dc Buck Converters: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Rodriguez, Miguel; Sinha, Mohit
We present a decentralized control strategy that yields switch interleaving among parallel connected dc-dc buck converters without communication. The proposed method is based on the digital implementation of the dynamics of a nonlinear oscillator circuit as the controller. Each controller is fully decentralized, i.e., it only requires the locally measured output current to synthesize the pulse width modulation (PWM) carrier waveform. By virtue of the intrinsic electrical coupling between converters, the nonlinear oscillator-based controllers converge to an interleaved state with uniform phase-spacing across PWM carriers. To the knowledge of the authors, this work represents the first fully decentralized strategy for switch interleaving of paralleled dc-dc buck converters.
Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu
2013-08-01
High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/.
Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian
2011-08-30
Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.
Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu
2013-01-01
High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/. PMID:23657089
Fonseca, Elize Massard da; Nunn, Amy; Souza-Junior, Paulo Borges; Bastos, Francisco Inácio; Ribeiro, José Mendes
2007-09-01
This paper assesses how decentralization of resources and initiatives by the Brazilian National STD/AIDS Program has impacted the transfer of funds for programs to prevent HIV/AIDS among injecting drug users in Rio de Janeiro, Brazil (1999-2006). The effects of the decentralization policy on Rio de Janeiro's Syringe Exchange Programs (SEPs) are assessed in detail. Decentralization effectively took place in Rio de Janeiro in 2006, with the virtual elimination of any direct transfer from the Federal government. The elimination of direct transfers forced SEPs to seek alternative funding sources. The structure of local SEPs appears to be weak and has been further undermined by current funding constraints. Of 22 SEPs operating in 2002, only two were still operational in 2006, basically funded by the State Health Secretariat and one municipal government. The current discontinuity of SEP operations may favor the resurgence of AIDS in the IDU population. A more uniform, regulated decentralization process is thus needed.
Decentralized Patrolling Under Constraints in Dynamic Environments.
Shaofei Chen; Feng Wu; Lincheng Shen; Jing Chen; Ramchurn, Sarvapali D
2016-12-01
We investigate a decentralized patrolling problem for dynamic environments where information is distributed alongside threats. In this problem, agents obtain information at a location, but may suffer attacks from the threat at that location. In a decentralized fashion, each agent patrols in a designated area of the environment and interacts with a limited number of agents. Therefore, the goal of these agents is to coordinate to gather as much information as possible while limiting the damage incurred. Hence, we model this class of problem as a transition-decoupled partially observable Markov decision process with health constraints. Furthermore, we propose scalable decentralized online algorithms based on Monte Carlo tree search and a factored belief vector. We empirically evaluate our algorithms on decentralized patrolling problems and benchmark them against the state-of-the-art online planning solver. The results show that our approach outperforms the state-of-the-art by more than 56% for six-agent patrolling problems and can scale up to 24 agents in reasonable time.
Decentralized indirect methods for learning automata games.
Tilak, Omkar; Martin, Ryan; Mukhopadhyay, Snehasis
2011-10-01
We discuss the application of indirect learning methods in zero-sum and identical payoff learning automata games. We propose a novel decentralized version of the well-known pursuit learning algorithm. Such a decentralized algorithm has significant computational advantages over its centralized counterpart. The theoretical study of such a decentralized algorithm requires the analysis to be carried out in a nonstationary environment. We use a novel bootstrapping argument to prove the convergence of the algorithm. To our knowledge, this is the first time that such analysis has been carried out for zero-sum and identical payoff games. Extensive simulation studies are reported, which demonstrate the proposed algorithm's fast and accurate convergence in a variety of game scenarios. We also introduce the framework of partial communication in the context of identical payoff games of learning automata. In such games, the automata may not communicate with each other or may communicate selectively. This comprehensive framework has the capability to model both centralized and decentralized games discussed in this paper.
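The pursuit algorithm mentioned above keeps, per automaton, an action-probability vector and running reward estimates, and moves probability mass toward the action currently estimated to be best. A minimal single-automaton sketch against a stationary random environment (a toy, not the decentralized game setting analyzed in the paper):

```python
import random

def pursuit_automaton(reward_probs, rate=0.05, steps=5000):
    """Pursuit learning: shift probability mass toward the action whose
    estimated reward is currently highest."""
    n = len(reward_probs)
    p = [1.0 / n] * n            # action probabilities
    est = [0.0] * n              # running reward estimates
    counts = [0] * n
    for _ in range(steps):
        a = random.choices(range(n), weights=p)[0]
        reward = 1.0 if random.random() < reward_probs[a] else 0.0
        counts[a] += 1
        est[a] += (reward - est[a]) / counts[a]          # running average
        best = max(range(n), key=lambda i: est[i])
        p = [(1 - rate) * pi + (rate if i == best else 0.0)
             for i, pi in enumerate(p)]
    return p

# The automaton should concentrate probability on the 0.8-reward action.
print(pursuit_automaton([0.2, 0.5, 0.8]))
```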
The recent process of decentralization and democratic management of education in Brazil
NASA Astrophysics Data System (ADS)
Santos Filho, José Camilo Dos
1993-09-01
Brazilian society is beginning a new historical period in which the principle of decentralization is beginning to predominate over centralization, which held sway during the last 25 years. In contrast to recent Brazilian history, there is now a search for political, democratic and participatory decentralization more consonant with grass-roots aspirations. The first section of this article presents a brief analysis of some decentralization policies implemented by the military regime of 1964, and discusses relevant facts related to the resistance of civil society to state authoritarianism, and to the struggle for the democratization and organization of civil society up to the end of the 1970s. The second section analyzes some new experiences of democratic public school administration initiated in the 1970s and 1980s. The final section discusses the move toward decentralization and democratization of public school administration in the new Federal and State Constitutions, and in the draft of the new Law of National Education.
Taming instabilities in power grid networks by decentralized control
NASA Astrophysics Data System (ADS)
Schäfer, B.; Grabow, C.; Auer, S.; Kurths, J.; Witthaut, D.; Timme, M.
2016-05-01
Renewables will soon dominate energy production in our electric power system. And yet, how to integrate renewable energy into the grid and the market is still a subject of major debate. Decentral Smart Grid Control (DSGC) was recently proposed as a robust and decentralized approach to balance supply and demand and to guarantee a grid operation that is both economically and dynamically feasible. Here, we analyze the impact of network topology by assessing the stability of essential network motifs using both linear stability analysis and basin volume for delay systems. Our results indicate that if frequency measurements are averaged over sufficiently large time intervals, DSGC enhances the stability of extended power grid systems. We further investigate whether DSGC supports centralized and/or decentralized power production and find it to be applicable to both. However, our results on cycle-like systems suggest that DSGC favors systems with decentralized production. Here, lower line capacities and lower averaging times are required compared to those with centralized production.
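Schematically, and with notation assumed rather than taken from the paper, the setting is a network of swing-equation oscillators in which each node feeds back its own frequency deviation averaged over a window T:

\[
M_i \ddot{\theta}_i = P_i - \gamma_i \dot{\theta}_i + \sum_j K_{ij} \sin(\theta_j - \theta_i) - c_i\,\bar{\omega}_i(t),
\qquad
\bar{\omega}_i(t) = \frac{1}{T}\int_{t-T}^{t} \dot{\theta}_i(s)\,\mathrm{d}s .
\]

The abstract's finding can then be read as: for sufficiently large averaging windows T, this local feedback enhances rather than undermines the stability of the networked system.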
Inferring Clinical Workflow Efficiency via Electronic Medical Record Utilization
Chen, You; Xie, Wei; Gunter, Carl A; Liebovitz, David; Mehrotra, Sanjay; Zhang, He; Malin, Bradley
2015-01-01
Complexity in clinical workflows can lead to inefficiency in making diagnoses, ineffectiveness of treatment plans and uninformed management of healthcare organizations (HCOs). Traditional strategies to manage workflow complexity are based on measuring the gaps between workflows defined by HCO administrators and the actual processes followed by staff in the clinic. However, existing methods tend to neglect the influences of EMR systems on the utilization of workflows, which could be leveraged to optimize workflows facilitated through the EMR. In this paper, we introduce a framework to infer clinical workflows through the utilization of an EMR and show how such workflows roughly partition into four types according to their efficiency. Our framework infers workflows at several levels of granularity through data mining technologies. We study four months of EMR event logs from a large medical center, including 16,569 inpatient stays, and illustrate that over approximately 95% of workflows are efficient and that 80% of patients are on such workflows. At the same time, we show that the remaining 5% of workflows may be inefficient due to a variety of factors, such as complex patients. PMID:26958173
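A common first step when inferring workflows from EMR event logs is to count transitions between consecutive events within each inpatient stay and treat the resulting frequency graph as the observed workflow. The sketch below is a hedged illustration of that step (toy event names, not the authors' framework):

```python
from collections import Counter

def transition_frequencies(event_log):
    """event_log: {stay_id: [event, event, ...]} ordered by timestamp.
    Returns counts of consecutive event pairs across all stays."""
    transitions = Counter()
    for events in event_log.values():
        for a, b in zip(events, events[1:]):
            transitions[(a, b)] += 1
    return transitions

# Hypothetical toy log of three inpatient stays.
log = {
    "stay-1": ["admit", "order-labs", "review-results", "discharge"],
    "stay-2": ["admit", "order-labs", "order-imaging", "review-results", "discharge"],
    "stay-3": ["admit", "order-labs", "review-results", "discharge"],
}
for (a, b), n in transition_frequencies(log).most_common():
    print(f"{a} -> {b}: {n}")
```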
Workflow management systems in radiology
NASA Astrophysics Data System (ADS)
Wendler, Thomas; Meetz, Kirsten; Schmidt, Joachim
1998-07-01
In a situation of shrinking health care budgets, increasing cost pressure and growing demands to increase the efficiency and the quality of medical services, health care enterprises are forced to optimize or completely re-design their processes. Although information technology is agreed to potentially contribute to cost reduction and efficiency improvement, the real success factors are the re-definition and automation of processes: Business Process Re-engineering and Workflow Management. In this paper we discuss architectures for the use of workflow management systems in radiology. We propose to move forward from information systems in radiology (RIS, PACS) to Radiology Management Systems, in which workflow functionality (process definitions and process automation) is implemented through autonomous workflow management systems (WfMS). In a workflow-oriented architecture, an autonomous workflow enactment service communicates with workflow client applications via standardized interfaces. In this paper, we discuss the need for and the benefits of such an approach. The separation of workflow management system and application systems is emphasized, as are the consequences that arise for the architecture of workflow-oriented information systems. This includes an appropriate workflow terminology and the definition of standard interfaces for workflow-aware application systems. Workflow studies in various institutions have shown that most of the processes in radiology are well structured and suited for a workflow management approach. Numerous commercially available Workflow Management Systems (WfMS) were investigated, and some of them, which are process-oriented and application independent, appear suitable for use in radiology.
Jealousy Graphs: Structure and Complexity of Decentralized Stable Matching
2013-01-01
The stable matching ... Using this structure, we are able to provide a finer analysis of the complexity of a subclass of decentralized matching markets.
Algorithms for output feedback, multiple-model, and decentralized control problems
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The optimal stochastic output feedback, multiple-model, and decentralized control problems with dynamic compensation are formulated and discussed. Algorithms for each problem are presented, and their relationship to a basic output feedback algorithm is discussed. An aircraft control design problem is posed as a combined decentralized, multiple-model, output feedback problem. A control design is obtained using the combined algorithm. An analysis of the design is presented.
Centralization or decentralization of facial structures in Korean young adults.
Yoo, Ja-Young; Kim, Jeong-Nam; Shin, Kang-Jae; Kim, Soon-Heum; Choi, Hyun-Gon; Jeon, Hyun-Soo; Koh, Ki-Seok; Song, Wu-Chul
2013-05-01
It is well known that facial beauty is dictated by facial type and by harmony between the eyes, nose, and mouth. Furthermore, facial impression is judged according to the overall facial contour and the relationship between the facial structures. The aims of the present study were to determine the optimal criteria for the assessment of gathering or separation of the facial structures and to define standardized ratios for centralization or decentralization of the facial structures. Four different lengths were measured, and 2 indexes were calculated from standardized photographs of 551 volunteers. Centralization and decentralization were assessed using the width index (interpupillary distance / facial width) and height index (eyes-mouth distance / facial height). The mean ranges of the width index and height index were 42.0 to 45.0 and 36.0 to 39.0, respectively. The width index did not differ with sex, but males had more decentralized faces, and females had more centralized faces, vertically. The incidence rate of decentralized faces among the men was 30.3%, and that of centralized faces among the women was 25.2%. The mean ranges in width and height indexes have been determined in a Korean population. Faces with width and height index scores under and over the median ranges are determined to be "centralized" and "decentralized," respectively.
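The two indexes are plain ratios; the sketch below computes them and applies the reported mean ranges (42.0-45.0 for width, 36.0-39.0 for height) to label a face along each axis. Scaling the ratios by 100 so they fall in those ranges is an assumption about the paper's convention, and the measurements are hypothetical.

```python
def classify(index, low, high):
    """Below the mean range -> 'centralized'; above -> 'decentralized'."""
    if index < low:
        return "centralized"
    if index > high:
        return "decentralized"
    return "average"

def facial_indexes(interpupillary, facial_width, eyes_mouth, facial_height):
    # Ratios are multiplied by 100 here so they land in the reported
    # 42-45 / 36-39 ranges (an assumed convention, not stated in the abstract).
    width_index = 100.0 * interpupillary / facial_width
    height_index = 100.0 * eyes_mouth / facial_height
    return (width_index, classify(width_index, 42.0, 45.0),
            height_index, classify(height_index, 36.0, 39.0))

# Hypothetical measurements in millimetres.
print(facial_indexes(interpupillary=62, facial_width=140,
                     eyes_mouth=66, facial_height=185))
```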
Kavvada, Olga; Horvath, Arpad; Stokes-Draut, Jennifer R; Hendrickson, Thomas P; Eisenstein, William A; Nelson, Kara L
2016-12-20
Nonpotable water reuse (NPR) is one option for conserving valuable freshwater resources. Decentralization can improve distribution system efficiency by locating treatment closer to the consumer; however, small treatment systems may have higher unit energy and greenhouse-gas (GHG) emissions. This research explored the trade-off between residential NPR systems using a life-cycle approach to analyze the energy use and GHG emissions. Decentralized and centralized NPR options are compared to identify where decentralized systems achieve environmental advantages over centralized reuse alternatives, and vice versa, over a range of scales and spatial and demographic conditions. For high-elevation areas far from the centralized treatment plant, decentralized NPR could lower energy use by 29% and GHG emissions by 28%, but in low-elevation areas close to the centralized treatment plant, decentralized reuse could be higher by up to 85% (energy) and 49% (GHG emissions) for the scales assessed (20-2000 m³/day). Direct GHG emissions from the treatment processes were found to be highly uncertain and variable and were not included in the analysis. The framework presented can be used as a planning support tool to reveal the environmental impacts of integrating decentralized NPR with existing centralized wastewater infrastructure and can be adapted to evaluate different treatment technology scales for reuse.
Financial management systems under decentralization and their effect on malaria control in Uganda.
Kivumbi, George W; Nangendo, Florence; Ndyabahika, Boniface Rutagira
2004-01-01
A descriptive case study with multiple sites and a single level of analysis was carried out in four purposefully selected administrative districts of Uganda to investigate the effect of financial management systems under decentralization on malaria control. Data were primarily collected from 36 interviews with district managers, staff at health units and local leaders. A review of records and documents related to decentralization at the central and district level was also used to generate data for the study. We found that a long, tedious, and bureaucratic process combined with lack of knowledge in working with new financial systems by several actors characterized financial flow under decentralization. This affected the timely use of financial resources for malaria control in that there were funds in the system that could not be accessed for use. We were also told that sometimes these funds were returned to the central government because of non-use due to difficulties in accessing them and/or stringent conditions not to divert them to other uses. Our data showed that a cocktail of bureaucratic control systems, corruption and incompetence make the financial management system under decentralization counter-productive for malaria control. The main conclusion is that good governance through appropriate and efficient financial management systems is very important for effective malaria control under decentralization.
Bioinformatics workflows and web services in systems biology made easy for experimentalists.
Jimenez, Rafael C; Corpas, Manuel
2013-01-01
Workflows are useful to perform data analysis and integration in systems biology. Workflow management systems can help users create workflows without any previous knowledge of programming and web services. However, the computational skills required to build such workflows are usually above the level most biological experimentalists are comfortable with. In this chapter we introduce workflow management systems that reuse existing workflows instead of creating them, making it easier for experimentalists to perform computational tasks.
Valdes-Stauber, J; Putzhammer, A; Kilian, R
2014-05-01
Psychiatric outpatient clinics (PIAs) are an indispensable care service for crisis intervention and multidisciplinary treatment of people suffering from severe and persistent mental disorders. The decentralization of outpatient clinics can be understood as a further step in the deinstitutionalization process. This cross-sectional study (n=1,663) compared the central outpatient clinic with the decentralized teams for the year 2010 by means of analyses of variance, χ²-tests and robust multivariate regression models. The longitudinal assessment (descriptively and by means of Prais-Winsten regression models for time series) was based on all hospitalizations for the two decentralized teams (n = 6,693) according to partial catchment areas for the time period 2002-2010, in order to examine trends after their installation in the year 2007. Decentralized teams were found to be similar with respect to the care profile, but cared for relatively more patients suffering from dementia, addictive disorders and mood disorders, though not for those suffering from schizophrenia and personality disorders. Decentralized teams showed lower outpatient care costs and psychopharmacological expenses, but also a lower contact frequency than the central outpatient clinic. Total expenses for psychiatric care were not significantly different, and the assessed hospitalization variables (e.g. total number of annual admissions, cumulative length of inpatient stay and annual hospitalizations per patient) changed only slightly 3 years after installation of the decentralized teams. The number of admissions of people suffering from schizophrenia decreased, whereas those for mood and stress disorders increased. Decentralized outpatient teams seemed to reach patients in rural regions who previously were not reached by the central outpatient clinic. Economic figures indicate advantages for the installation of such teams, because care expenses are not higher than for patients treated in centralized outpatient clinics and because hospitalization figures for the whole catchment area did not increase.
Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support
2012-01-01
Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org. PMID:22559942
TU-H-CAMPUS-JeP3-01: Towards Robust Adaptive Radiation Therapy Strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boeck, M; KTH Royal Institute of Technology, Stockholm; Eriksson, K
Purpose: To set up a framework combining robust treatment planning with adaptive reoptimization in order to maintain high treatment quality, to respond to interfractional variations and to identify those patients who will benefit the most from an adaptive fractionation schedule. Methods: We propose adaptive strategies based on stochastic minimax optimization for a series of simulated treatments on a one-dimensional patient phantom. The plan should be able to handle anticipated systematic and random errors and is applied during the first fractions. Information on the individual geometric variations is gathered at each fraction. At scheduled fractions, the impact of the measured errors on the delivered dose distribution is evaluated. For a patient that receives a dose that does not satisfy specified plan quality criteria, the plan is reoptimized based on these individual measurements using one of three different adaptive strategies. The reoptimized plan is then applied during future fractions until a new scheduled adaptation becomes necessary. In the first adaptive strategy the measured systematic and random error scenarios and their assigned probabilities are updated to guide the robust reoptimization. The focus of the second strategy lies on variation of the fraction of the worst scenarios taken into account during robust reoptimization. In the third strategy the uncertainty margins around the target are recalculated with the measured errors. Results: By studying the effect of the three adaptive strategies combined with various adaptation schedules on the same patient population, the group which benefits from adaptation is identified together with the most suitable strategy and schedule. Preliminary computational results indicate when and how best to adapt for the three different strategies. Conclusion: A workflow is presented that provides robust adaptation of the treatment plan throughout the course of treatment and useful measures to identify patients in need for an adaptive treatment strategy.
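The stochastic minimax idea can be written schematically (symbols assumed, not taken from the abstract) as optimizing the plan against the worst anticipated error scenario:

\[
\min_{x \in \mathcal{X}} \; \max_{s \in \mathcal{S}} \; \mathbb{E}_{r \sim \pi_s}\!\left[ f\bigl(d(x; s, r)\bigr) \right],
\]

where $x$ is the treatment plan, $\mathcal{S}$ the set of systematic-error scenarios, $\pi_s$ the distribution of random errors within scenario $s$, $d(x;s,r)$ the delivered dose, and $f$ a plan-quality objective. The adaptive strategies then amount to re-estimating $\mathcal{S}$ and $\pi_s$, restricting the maximization to a fraction of the worst scenarios, or recomputing margins from the errors measured during the first fractions.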
Kwf-Grid workflow management system for Earth science applications
NASA Astrophysics Data System (ADS)
Tran, V.; Hluchy, L.
2009-04-01
In this paper, we present a workflow management tool for Earth science applications in EGEE. The workflow management tool was originally developed within the K-wf Grid project for the GT4 middleware and has many advanced features such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to the gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within the "Knowledge-based Workflow System for Grid Applications" project under the 6th Framework Programme. The workflow management system was intended to semi-automatically compose a workflow of Grid services, execute the composed workflow application in a Grid computing environment, monitor the performance of the Grid infrastructure and the Grid applications, analyze the resulting monitoring information, capture the knowledge contained in the information by means of intelligent agents, and finally reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g. GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, which allows the system to manage and execute gLite jobs on the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite would allow EGEE users to use the system and benefit from its advanced features. The system is primarily tested and evaluated with applications from the ES clusters.
Sheriff, R; Banks, A
2001-01-01
Organization change efforts have led to critically examining the structure of education and development departments within hospitals. This qualitative study evaluated an education and development model in an academic health sciences center. The model combines centralization and decentralization. The study results can be used by staff development educators and administrators when organization structure is questioned. This particular model maximizes the benefits and minimizes the limitations of centralized and decentralized structures.
Decentralized Control of Autonomous Vehicles
2003-01-01
By John S. Baras, Xiaobo Tan, and Pedram Hovareshti. CSHCN TR 2003-8 (ISR TR 2003-14).
Decentralized regulation of dynamic systems. [for controlling large scale linear systems
NASA Technical Reports Server (NTRS)
Chu, K. C.
1975-01-01
A special class of decentralized control problem is discussed in which the objectives of the control agents are to steer the state of the system to desired levels. Each agent is concerned about certain aspects of the state of the entire system. The state and control equations are given for linear time-invariant systems. Stability and coordination, and the optimization of decentralized control are analyzed, and the information structure design is presented.
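As a generic illustration of this setting (notation assumed, not quoted from the paper), a linear time-invariant system with N control agents, each acting through its own local output, can be written as

\[
\dot{x}(t) = A\,x(t) + \sum_{i=1}^{N} B_i\,u_i(t), \qquad y_i(t) = C_i\,x(t), \qquad u_i(t) = -K_i\,y_i(t),
\]

so each agent closes a loop only through the part of the state it observes, and the design questions are whether the coupled closed-loop matrix $A - \sum_i B_i K_i C_i$ can be made stable and how the agents' local objectives are coordinated.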
Decentralized Interleaving of Paralleled Dc-Dc Buck Converters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Rodriguez, Miguel; Sinha, Mohit
We present a decentralized control strategy that yields switch interleaving among parallel-connected dc-dc buck converters. The proposed method is based on the digital implementation of the dynamics of a nonlinear oscillator circuit as the controller. Each controller is fully decentralized, i.e., it only requires the locally measured output current to synthesize the pulse width modulation (PWM) carrier waveform and no communication between different controllers is needed. By virtue of the intrinsic electrical coupling between converters, the nonlinear oscillator-based controllers converge to an interleaved state with uniform phase-spacing across PWM carriers. To the knowledge of the authors, this work presents the first fully decentralized strategy for switch interleaving in paralleled dc-dc buck converters.
Decentralized state estimation for a large-scale spatially interconnected system.
Liu, Huabo; Yu, Haisheng
2018-03-01
A decentralized state estimator is derived for the spatially interconnected systems composed of many subsystems with arbitrary connection relations. An optimization problem on the basis of linear matrix inequality (LMI) is constructed for the computations of improved subsystem parameter matrices. Several computationally effective approaches are derived which efficiently utilize the block-diagonal characteristic of system parameter matrices and the sparseness of subsystem connection matrix. Moreover, this decentralized state estimator is proved to converge to a stable system and obtain a bounded covariance matrix of estimation errors under certain conditions. Numerical simulations show that the obtained decentralized state estimator is attractive in the synthesis of a large-scale networked system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Decentralized Estimation and Control for Preserving the Strong Connectivity of Directed Graphs.
Sabattini, Lorenzo; Secchi, Cristian; Chopra, Nikhil
2015-10-01
In order to accomplish cooperative tasks, decentralized systems are required to communicate among each other. Thus, maintaining the connectivity of the communication graph is a fundamental issue. Connectivity maintenance has been extensively studied in the last few years, but generally considering undirected communication graphs. In this paper, we introduce a decentralized control and estimation strategy to maintain the strong connectivity property of directed communication graphs. In particular, we introduce a hierarchical estimation procedure that implements power iteration in a decentralized manner, exploiting an algorithm for balancing strongly connected directed graphs. The output of the estimation system is then utilized for guaranteeing preservation of the strong connectivity property. The control strategy is validated by means of analytical proofs and simulation results.
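Power iteration, which the estimation layer implements in a decentralized way, repeatedly multiplies a vector by the matrix of interest and renormalizes it; in the decentralized version each agent stores one component and exchanges values only with its graph neighbors. For reference, a centralized toy version on a small hypothetical adjacency matrix:

```python
def power_iteration(matrix, iters=200):
    """Estimate the dominant eigenvector of a square non-negative matrix."""
    n = len(matrix)
    v = [1.0 / n] * n
    for _ in range(iters):
        # Component i only needs row i of the matrix and its neighbors' values.
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    return v

# Adjacency matrix of a small strongly connected directed graph (hypothetical).
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 1, 0]]
print(power_iteration(A))
```

The global renormalization step is the part that requires extra decentralized machinery in practice, which is what the hierarchical estimation procedure in the paper addresses.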
Decentralized Bayesian search using approximate dynamic programming methods.
Zhao, Yijia; Patek, Stephen D; Beling, Peter A
2008-08-01
We consider decentralized Bayesian search problems that involve a team of multiple autonomous agents searching for targets on a network of search points operating under the following constraints: 1) interagent communication is limited; 2) the agents do not have the opportunity to agree in advance on how to resolve equivalent but incompatible strategies; and 3) each agent lacks the ability to control or predict with certainty the actions of the other agents. We formulate the multiagent search-path-planning problem as a decentralized optimal control problem and introduce approximate dynamic programming heuristics that can be implemented in a decentralized fashion. After establishing some analytical properties of the heuristics, we present computational results for a search problem involving two agents on a 5 x 5 grid.
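A minimal single-agent sketch of the Bayesian ingredient of such a search is given below: a belief over a 5 x 5 grid is updated after each unsuccessful look, and the next cell is chosen greedily. The multi-agent, decentralized aspects and the approximate dynamic programming heuristics of the paper are not reproduced, and the detection probability is assumed.

import numpy as np

# Minimal single-agent Bayesian search sketch on a 5 x 5 grid (illustrative only).
rng = np.random.default_rng(0)
belief = np.full((5, 5), 1 / 25)        # prior: target equally likely in every cell
p_detect = 0.8                          # probability of detecting the target when its cell is searched (assumed)
target = (rng.integers(5), rng.integers(5))

for step in range(50):
    cell = np.unravel_index(np.argmax(belief), belief.shape)   # greedy: search the most likely cell
    if cell == target and rng.random() < p_detect:
        print(f"target found at {cell} after {step + 1} searches")
        break
    # Bayes update after an unsuccessful search of `cell`
    belief[cell] = belief[cell] * (1 - p_detect)
    belief /= belief.sum()
else:
    print("target not found within the search budget")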
Information logistics: A production-line approach to information services
NASA Technical Reports Server (NTRS)
Adams, Dennis; Lee, Chee-Seng
1991-01-01
Logistics can be defined as the process of strategically managing the acquisition, movement, and storage of materials, parts, and finished inventory (and the related information flow) through the organization and its marketing channels in a cost effective manner. It is concerned with delivering the right product to the right customer in the right place at the right time. The logistics function is composed of inventory management, facilities management, communications, unitization, transportation, materials management, and production scheduling. The relationship between logistics and information systems is clear. Systems such as Electronic Data Interchange (EDI), Point of Sale (POS) systems, and Just in Time (JIT) inventory management systems are important elements in the management of product development and delivery. With improved access to market demand figures, logisticians can decrease inventory sizes and better service customer demand. However, without accurate, timely information, little, if any, of this would be feasible in today's global markets. Information systems specialists can learn from logisticians. In a manner similar to logistics management, information logistics is concerned with the delivery of the right data, to the right customer, at the right time. As such, information systems are integral components of the information logistics system charged with providing customers with accurate, timely, cost-effective, and useful information. Information logistics is a management style and is composed of elements similar to those associated with the traditional logistics activity: inventory management (data resource management), facilities management (distributed, centralized and decentralized information systems), communications (participative design and joint application development methodologies), unitization (input/output system design, i.e., packaging or formatting of the information), transportation (voice, data, image, and video communication systems), materials management (data acquisition, e.g., EDI, POS, external data bases, data entry) and production scheduling (job, staff, and project scheduling).
Intraoperative centration during small incision lenticule extraction (SMILE)
Wong, John X.; Wong, Elizabeth P.; Htoon, Hla M.; Mehta, Jodhbir S.
2017-01-01
To evaluate intraoperative decentration from the pupil center and kappa intercept during small incision lenticule extraction (SMILE) and its impact on visual outcomes. This was a retrospective noncomparative case series. A total of 164 eyes that underwent SMILE at the Singapore National Eye Center were included. Screen captures of intraoperative videos were analyzed. Preoperative and 3 month postoperative vision and refractive data were analyzed against decentration. The mean preoperative spherical equivalent (SE) was −5.84 ± 1.77 D. The mean decentrations from the pupil center and from the kappa intercept were 0.13 ± 0.06 mm and 0.47 ± 0.25 mm, respectively. For efficacy and predictability, 69.6% and 95.0% of eyes achieved a visual acuity (VA) of 20/20 and 20/30, respectively, while 83.8% and 97.2% of eyes were within ±0.5D and ±1.0D of the targeted SE. When analyzed across 3 groups of decentration from the pupil center (<0.1 mm, 0.1–0.2 mm, and >0.2 mm), there was no statistically significant association between decentration, safety, efficacy, and predictability. When analyzed across 4 groups of decentration from kappa intercept (<0.2 mm, 0.2–<0.4 mm, 0.4–<0.6 mm, and ≥0.6 mm), there was a trend toward higher efficacy for eyes with decentration of kappa intercept between 0.4 and <0.6 mm (P = .097). A total of 85.4% of eyes in the 0.4 to <0.6 mm group had unaided distance VA of 20/20 or better, as compared to only 57.8% of eyes in the ≥0.6 mm group. Decentration of 0.13 mm from the pupil center does not result in compromised visual outcomes. Decentration of greater than 0.6 mm from the kappa intercept may result in compromised visual outcomes. There was a trend toward better efficacy in eyes which had decentered treatment from 0.4 to <0.6 mm from the kappa intercept. Patients with a large kappa intercept (>0.6 mm) should have their lenticule created 0.4 to 0.6 mm from the kappa intercept and not close to the pupil. PMID:28422822
Achieving production-level use of HEP software at the Argonne Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Uram, T. D.; Childers, J. T.; LeCompte, T. J.; Papka, M. E.; Benjamin, D.
2015-12-01
HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten petaFLOPs supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters, and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators, such as Sherpa, typically have a split workload: a small scale integration phase, and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system will be discussed.
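The Balsam and ARGO interfaces are not reproduced here; as a generic sketch of the split workload the abstract describes, the code below runs a small serial "integration" stage followed by an embarrassingly parallel "event generation" stage using Python's multiprocessing. All function names and payloads are placeholders.

from multiprocessing import Pool

# Illustrative split workload: a small serial integration phase followed by a
# scalable, embarrassingly parallel event-generation phase (all functions hypothetical).

def integrate_phase_space(config):
    # serial, small-scale stage (placeholder computation)
    return {"config": config, "grid": sum(range(1000))}

def generate_events(args):
    grid, seed = args
    # parallel stage: each task produces an independent batch of mock events (placeholder)
    return [hash((grid["grid"], seed, i)) % 1000 for i in range(5)]

if __name__ == "__main__":
    grid = integrate_phase_space({"process": "example"})        # serial stage (e.g. on a local cluster)
    with Pool(processes=4) as pool:                              # parallel stage (e.g. on the large machine)
        batches = pool.map(generate_events, [(grid, seed) for seed in range(8)])
    print("generated", sum(len(b) for b in batches), "mock events")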
[Development of an ophthalmological clinical information system for inpatient eye clinics].
Kortüm, K U; Müller, M; Babenko, A; Kampik, A; Kreutzer, T C
2015-12-01
In times of increased digitalization in healthcare, departments of ophthalmology are faced with the challenge of introducing electronic clinical health records (EHR); however, specialized software for ophthalmology is not available with most major EHR systems. The aim of this project was to create specific ophthalmological user interfaces for large inpatient eye care providers within a hospital-wide EHR. Additionally, the integration of ophthalmic imaging systems, scheduling, and surgical documentation should be achieved. The existing EHR i.s.h.med (Siemens, Germany) was modified using the advanced business application programming (ABAP) language to create specific ophthalmological user interfaces that reproduce and, moreover, optimize the clinical workflow. A user interface for documentation of ambulatory patients with eight tabs was designed. From June 2013 to October 2014 a total of 61,551 patient contacts were documented. For surgical documentation a separate user interface was set up. User interfaces for digital clinical orders, registration documentation, and scheduling of operations were also set up. A direct integration of ophthalmic imaging modalities could be established. An ophthalmologist-orientated EHR for outpatient and surgical documentation for inpatient clinics was created and successfully implemented. By incorporating imaging procedures, the foundation for future smart/big data analyses was created.
Schoenrock, Danielle L; Hartkopf, Katherine; Boeckelman, Carrie
2016-12-01
The development and implementation of a centralized, pharmacist-run population health program were pursued within a health system to increase patient exposure to comprehensive medication reviews (CMRs) and improve visit processes. Program implementation included choosing appropriate pilot pharmacy locations, developing a feasible staffing model, standardizing the workflow, and creating a patient referral process. The impact on patient exposure, specific interventions, and the sustainability of the program were evaluated over a seven-month period. A total of 96 CMRs were scheduled during the data collection period. Attendance at scheduled CMRs was 54% (52 visits); there were 25 cancellations (26%) and 19 no-shows (20%). Since program implementation, there has been more than a twofold increase (2.08) in the number of CMRs completed within the health system. On average, all aspects of each patient visit took 1.78 hours to complete. Pharmacists spent 28% of scheduled time on CMR tasks and 72% of time on telephone calls and technical tasks to maintain appointments. A pharmacist-run CMR program helped to elevate the role of the community pharmacist in a health system and to improve patient exposure to CMRs. Sustaining a centralized CMR program requires support from other members of the health-system team so that pharmacists can spend more time providing patient care and less time on the technical tasks involved. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
A real-time architecture for time-aware agents.
Prouskas, Konstantinos-Vassileios; Pitt, Jeremy V
2004-06-01
This paper describes the specification and implementation of a new three-layer time-aware agent architecture. This architecture is designed for applications and environments where societies of humans and agents play equally active roles, but interact and operate in completely different time frames. The architecture consists of three layers: the April real-time run-time (ART) layer, the time aware layer (TAL), and the application agents layer (AAL). The ART layer forms the underlying real-time agent platform. An original online, real-time, dynamic priority-based scheduling algorithm is described for scheduling the computation time of agent processes, and it is shown that the algorithm's O(n) complexity and scalable performance are sufficient for application in real-time domains. The TAL layer forms an abstraction layer through which human and agent interactions are temporally unified, that is, handled in a common way irrespective of their temporal representation and scale. A novel O(n²) interaction scheduling algorithm is described for predicting and guaranteeing interactions' initiation and completion times. The time-aware predicting component of a workflow management system is also presented as an instance of the AAL layer. The described time-aware architecture addresses two key challenges in enabling agents to be effectively configured and applied in environments where humans and agents play equally active roles. It provides flexibility and adaptability in its real-time mechanisms while placing them under direct agent control, and it temporally unifies human and agent interactions.
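The ART layer's actual O(n) algorithm is not reproduced here; as a simple illustration of dynamic priority-based scheduling of agent processes, the sketch below serves tasks in order of nearest deadline using a heap. Task names, deadlines, and costs are made up.

import heapq

# Illustrative dynamic-priority scheduling of agent processes (not the ART layer's
# algorithm): tasks closest to their deadline are served first.
tasks = [
    {"name": "negotiate", "deadline": 5.0, "work": 2.0},
    {"name": "notify_user", "deadline": 2.0, "work": 0.5},
    {"name": "update_workflow", "deadline": 8.0, "work": 1.0},
]

heap = [(t["deadline"], i, t) for i, t in enumerate(tasks)]
heapq.heapify(heap)

clock = 0.0
while heap:
    _, _, task = heapq.heappop(heap)       # highest dynamic priority = earliest deadline
    clock += task["work"]
    status = "met" if clock <= task["deadline"] else "missed"
    print(f"{task['name']:16s} finished at t={clock:.1f} (deadline {status})")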
Li, Meiyan; Zhao, Jing; Miao, Huamao; Shen, Yang; Sun, Ling; Tian, Mi; Wadium, Elizabeth; Zhou, Xingtao
2014-05-20
To measure decentration following femtosecond laser small incision lenticule extraction (SMILE) for the correction of myopia and myopic astigmatism in the early learning curve, and to investigate its impact on visual quality. A total of 55 consecutive patients (100 eyes) who underwent the SMILE procedure were included. Decentration was measured using a Scheimpflug camera 6 months after surgery. Uncorrected and corrected distance visual acuity (UDVA, CDVA), manifest refraction, and wavefront errors were also measured. Associations between decentration and the preoperative spherical equivalent were analyzed, as well as the associations between decentration and wavefront aberrations. Regarding efficacy and safety, 40 eyes (40%) had an unchanged CDVA; 32 eyes (32%) gained one line; and 11 eyes (11%) gained two lines. Fifteen eyes (15%) lost one line of CDVA, and two eyes (2%) lost two lines. Ninety-nine of the treated eyes (99%) had a postoperative UDVA better than 1.0, and 100 eyes (100%) had a UDVA better than 0.8. The mean decentered displacement was 0.17 ± 0.09 mm. The decentered displacement of all treated eyes (100%) was within 0.50 mm; 70 eyes (70%) were within 0.20 mm; and 90 eyes (90%) were within 0.30 mm. The vertical coma showed the greatest increase in magnitude. The magnitude of horizontal decentration was found to be associated with an induced horizontal coma. This study suggests that, although mild decentration occurred in the early learning curve, good visual outcomes were achieved after the SMILE surgery. Special efforts to minimize induced vertical coma are necessary. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
Decentralization and central and regional coordination of health services: the case of Switzerland.
Wyss, K; Lorenz, N
2000-01-01
As part of reforms in the health care delivery sector, decentralization is currently promoted in many countries as a means to improve performance and outcomes of national health care systems. Switzerland is an example of a country with a long-standing tradition of decentralized organization for many purposes, including health care delivery. Apart from the few aspects where the responsibility is at the federal level, it is the task of the 26 cantons to organize the provision of health services for the population of around 7 million people. This permits the system to be responsive to local priorities and interests as well as to new developments in medical and public health know-how. However, the increasing and complex difficulties of most health care delivery systems raise questions about the need for mechanisms for coordination at federal level, as well as about the equity and the effectiveness of the decentralized approach. The Swiss case shows that in a strongly decentralized system, health policy and strategy elaboration, as well as coordination mechanisms among the regional components of the system, are very hard to establish. This situation may lead to strong regional inequities in the financing of health care as well as to differences in the distribution of financial, human and material inputs into the health system. The study of the Swiss health system also reveals that, within a decentralized framework, the promotion of cost-effective interventions through a well-balanced approach towards promotional, preventive and curative services, or towards ambulatory and hospital care, is difficult to achieve, as agreements between relatively autonomous regions are difficult to obtain. Therefore, a decentralized system is not necessarily the most equitable and cost-effective way to deliver health care.
McGuire, Megan; Pinoges, Loretxu; Kanapathipillai, Rupa; Munyenyembe, Tamika; Huckabee, Martha; Makombe, Simon; Szumilin, Elisabeth; Heinzelmann, Annette; Pujades-Rodríguez, Mar
2012-01-01
To describe patient combination antiretroviral therapy (cART) outcomes associated with intensive decentralization of services in a rural HIV program in Malawi. Longitudinal analysis of data from HIV-infected patients starting cART between August 2001 and December 2008 and of a cross-sectional immunovirological assessment conducted 12 (±2) months after therapy start. One-year mortality, lost to follow-up, and attrition (deaths and lost to follow-up) rates were estimated with exact Poisson 95% confidence intervals (CI) by type of care delivery and year of initiation. Association of virological suppression (<50 copies/mL) and immunological success (CD4 gain ≥100 cells/µL) with type of care was investigated using multiple logistic regression. During the study period, 4322 cART patients received centralized care and 11,090 decentralized care. At therapy start, patients treated in decentralized health facilities had higher median CD4 count levels (167 vs. 130 cell/µL, P<0.0001) than other patients. Two years after cART start, program attrition was lower in decentralized than centralized facilities (9.9 per 100 person-years, 95% CI: 9.5-10.4 vs. 20.8 per 100 person-years, 95% CI: 19.7-22.0). One year after treatment start, differences in immunological success (adjusted OR=1.23, 95% CI: 0.83-1.83) and viral suppression (adjusted OR=0.80, 95% CI: 0.56-1.14) between patients followed at centralized and decentralized facilities were not statistically significant. In rural Malawi, 1- and 2-year program attrition was lower in decentralized than in centralized health facilities and no statistically significant differences in one-year immunovirological outcomes were observed between the two health care levels. Longer follow-up is needed to confirm these results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Copps, Kevin D.
The Sandia Analysis Workbench (SAW) project has developed and deployed a production capability for SIERRA computational mechanics analysis workflows. However, the electrical analysis workflow capability requirements have only been demonstrated in early prototype states, with no real capability deployed for analysts' use. This milestone aims to improve the electrical analysis workflow capability (via SAW and related tools) and deploy it for ongoing use. We propose to focus on a QASPR electrical analysis calibration workflow use case. We will include a number of new capabilities (versus today's SAW), such as: 1) support for the XYCE code workflow component, 2) data management coupled to electrical workflow, 3) human-in-the-loop workflow capability, and 4) electrical analysis workflow capability deployed on the restricted (and possibly classified) network at Sandia. While far from the complete set of capabilities required for electrical analysis workflow over the long term, this is a substantial first step toward full production support for the electrical analysts.
Bayramzadeh, Sara; Alkazemi, Mariam F
2014-01-01
This study aims to explore the relationship between nursing station design and the use of communication technologies by comparing centralized and decentralized nursing stations. The rapid changes in communication technologies in healthcare are inevitable. Communication methods can change the way occupants use a space. In the meantime, decentralized nursing stations are emerging as a replacement for the traditional centralized nursing stations; however, not much research has been done on how the design of nursing stations can impact the use of communication technologies. A cross-sectional study was conducted using an Internet-based survey among registered nurses in a Southeastern hospital in the United States. Two units with centralized nursing stations and two units with decentralized nursing stations were compared in terms of the application of communication technologies. A total of 70 registered nurses completed the survey in a 2-week period. The results revealed no significant differences between centralized and decentralized nursing stations in terms of the frequency of communication technologies used. However, a difference was found between the perception of nurses toward communication technologies and perceptions of the use of communication technologies in decentralized nursing stations. Although the study was limited to one hospital, the results indicate that nurses hold positive attitudes toward communication technologies. The results also reveal the strengths and weaknesses of each nursing station design with regard to communication technologies. Keywords: hospital, interdisciplinary, nursing, technology, work environment.
Skaalvik, Mari Wolff; Gaski, Margrete; Norbye, Bente
2014-01-01
Ensuring a sufficient nursing workforce, with respect to both number and relevant professional competencies, is crucial in rural Arctic regions in Norway. This study examines the continuing education (CE) of nurses who graduated from a decentralized nursing programme between 1994 and 2011. This study aims to measure the extent to which the decentralized nursing education (DNE) in question has served as a basis for CE that is adapted to current and future community health care service needs in rural Arctic regions in northern Norway. More specifically, the study aims to investigate the frequency and scope of CE courses among the graduates of a DNE, the choice of study model and the degree of employment with respect to the relevant CE. This study is a quantitative survey providing descriptive statistics. The primary finding in this study is that 56% of the participants had engaged in CE and that they were employed in positions related to their education. The majority of students with decentralized bachelor's degrees engaged in CE that was part-time and/or decentralized. More than half of the population in this study had completed CE despite no mandatory obligation in order to maintain licensure. Furthermore, 31% of the participants had completed more than one CE programme. The findings show that the participants preferred CE organized as part-time and/or decentralized studies.
James, Jean-Ann; Sung, Sangwoo; Jeong, Hyunju; Broesicke, Osvaldo A; French, Steven P; Li, Duo; Crittenden, John C
2018-01-02
The purpose of this study is to explore the potential water, CO2 and NOx emission, and cost savings that the deployment of decentralized water and energy technologies within two urban growth scenarios can achieve. We assess the effectiveness of urban growth, technological, and political strategies to reduce these burdens in the 13-county Atlanta metropolitan region. The urban growth between 2005 and 2030 was modeled for a business as usual (BAU) scenario and a more compact growth (MCG) scenario. We considered combined cooling, heating and power (CCHP) systems using microturbines for our decentralized energy technology and rooftop rainwater harvesting and low flow fixtures for the decentralized water technologies. Decentralized water and energy technologies had more of an impact in reducing the CO2 and NOx emissions and water withdrawal and consumption than an MCG growth scenario (which does not consider energy for transit). Decentralized energy can reduce the CO2 and NOx emissions by 8% and 63%, respectively. Decentralized energy and water technologies can reduce the water withdrawal and consumption in the MCG scenario by 49% and 50%, respectively. Installing CCHP systems on both the existing and new building stocks with a net metering policy could reduce the CO2, NOx, and water consumption by 50%, 90%, and 75%, respectively.
Hayashi, K.; Hayashi, H.; Nakao, F.; Hayashi, F.
2001-01-01
AIM—To prospectively investigate changes in the area of the anterior capsule opening, and intraocular lens (IOL) decentration and tilt after implantation of a hydrogel IOL. METHODS—100 patients underwent implantation of a hydrogel IOL in one eye and an acrylic IOL implantation in the opposite eye. The area of the anterior capsule opening, and the degree of IOL decentration and tilt were measured using the Scheimpflug videophotography system at 3 days, and at 1, 3, and 6 months postoperatively. RESULTS—The mean anterior capsule opening area decreased significantly in both groups. At 6 months postoperatively, the area in the hydrogel group was significantly smaller than that in the acrylic group. The mean percentage of the area reduction in the hydrogel group was also significantly greater than that in the acrylic group, being 16.9% in the hydrogel group and 8.8% in the acrylic group. In contrast, IOL decentration and tilt did not progress in either group. No significant differences were found in the degree of IOL decentration and tilt throughout the follow up period. CONCLUSIONS—Contraction of the anterior capsule opening was more extensive with the hydrogel IOL than with the acrylic IOL, but the degree of IOL decentration and tilt were similar for the two types of lenses studied. PMID:11673291
Fully decentralized control of a soft-bodied robot inspired by true slime mold.
Umedachi, Takuya; Takeda, Koichi; Nakagaki, Toshiyuki; Kobayashi, Ryo; Ishiguro, Akio
2010-03-01
Animals exhibit astoundingly adaptive and supple locomotion under real world constraints. In order to endow robots with similar capabilities, we must implement many degrees of freedom, equivalent to animals, into the robots' bodies. For taming many degrees of freedom, the concept of autonomous decentralized control plays a pivotal role. However a systematic way of designing such autonomous decentralized control system is still missing. Aiming at understanding the principles that underlie animals' locomotion, we have focused on a true slime mold, a primitive living organism, and extracted a design scheme for autonomous decentralized control system. In order to validate this design scheme, this article presents a soft-bodied amoeboid robot inspired by the true slime mold. Significant features of this robot are twofold: (1) the robot has a truly soft and deformable body stemming from real-time tunable springs and protoplasm, the former is used for an outer skin of the body and the latter is to satisfy the law of conservation of mass; and (2) fully decentralized control using coupled oscillators with completely local sensory feedback mechanism is realized by exploiting the long-distance physical interaction between the body parts stemming from the law of conservation of protoplasmic mass. Simulation results show that this robot exhibits highly supple and adaptive locomotion without relying on any hierarchical structure. The results obtained are expected to shed new light on design methodology for autonomous decentralized control system.
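As a toy illustration (not the authors' model) of oscillators coupled only through a shared conserved quantity, the sketch below lets each unit's phase respond locally to a global "pressure" that arises from conserving total volume, loosely echoing how protoplasm conservation mediates long-distance physical coupling in the robot. All parameters are assumed.

import numpy as np

# Toy coupled-oscillator ring with local feedback through a conserved shared quantity.
# Each unit contracts according to its phase; a global pressure enforcing volume
# conservation is the only signal each unit senses locally (illustrative only).
N = 8
rng = np.random.default_rng(1)
phase = rng.uniform(0, 2 * np.pi, N)
omega = 1.0          # intrinsic frequency (assumed)
sigma = 0.5          # strength of the local feedback (assumed)

dt = 0.01
for _ in range(5000):
    contraction = 1 + 0.3 * np.cos(phase)          # desired local volume of each unit
    pressure = contraction.mean() - 1.0            # deviation from the conserved total volume
    # local feedback: each unit only senses how the shared pressure loads it
    phase = phase + dt * (omega - sigma * pressure * np.sin(phase))

print("phase spread [rad]:", np.ptp(phase % (2 * np.pi)))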
Haston, Elspeth; Cubey, Robert; Pullan, Martin; Atkins, Hannah; Harris, David J
2012-01-01
Digitisation programmes in many institutes frequently involve disparate and irregular funding, diverse selection criteria and scope, with different members of staff managing and operating the processes. These factors have influenced the decision at the Royal Botanic Garden Edinburgh to develop an integrated workflow for the digitisation of herbarium specimens which is modular and scalable to enable a single overall workflow to be used for all digitisation projects. This integrated workflow is comprised of three principal elements: a specimen workflow, a data workflow and an image workflow. The specimen workflow is strongly linked to curatorial processes which will impact on the prioritisation, selection and preparation of the specimens. The importance of including a conservation element within the digitisation workflow is highlighted. The data workflow includes the concept of three main categories of collection data: label data, curatorial data and supplementary data. It is shown that each category of data has its own properties which influence the timing of data capture within the workflow. Development of software has been carried out for the rapid capture of curatorial data, and optical character recognition (OCR) software is being used to increase the efficiency of capturing label data and supplementary data. The large number and size of the images has necessitated the inclusion of automated systems within the image workflow.
Decentralization, stabilization, and estimation of large-scale linear systems
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Vukcevic, M. B.
1976-01-01
In this short paper we consider three closely related aspects of large-scale systems: decentralization, stabilization, and estimation. A method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs. The procedure is preliminary to the hierarchic stabilization and estimation of linear systems and is performed on the subsystem level. A multilevel control scheme based upon the decomposition-aggregation method is developed for stabilization of input-decentralized linear systems. Local linear feedback controllers are used to stabilize each decoupled subsystem, while global linear feedback controllers are utilized to minimize the coupling effect among the subsystems. Systems stabilized by the method have a tolerance to a wide class of nonlinearities in subsystem coupling and high reliability with respect to structural perturbations. The proposed output-decentralization and stabilization schemes can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level. The problem of dimensionality is resolved by constructing a number of low-order estimators, thus avoiding the design of a single estimator for the overall system.
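A minimal sketch of the idea of decentralized stabilization, with hypothetical matrices and gains and none of the paper's decomposition-aggregation machinery: local feedback stabilizes each decoupled subsystem, and the eigenvalues of the weakly coupled closed loop are then checked.

import numpy as np

# Two interconnected subsystems with decentralized (local) state feedback; all
# matrices and gains are hypothetical. Local controllers stabilize each decoupled
# subsystem, and the interconnected closed loop is then checked by its eigenvalues.
A1 = np.array([[0.0, 1.0], [1.0, 0.0]]); B1 = np.array([[0.0], [1.0]])
A2 = np.array([[0.5]]);                  B2 = np.array([[1.0]])
A12 = np.array([[0.0], [0.1]])           # weak coupling of subsystem 2 into subsystem 1
A21 = np.array([[0.1, 0.0]])             # weak coupling of subsystem 1 into subsystem 2

K1 = np.array([[2.0, 2.0]])              # local gain for subsystem 1 (places its poles in the left half-plane)
K2 = np.array([[2.0]])                   # local gain for subsystem 2

A_cl = np.block([
    [A1 - B1 @ K1, A12],
    [A21,          A2 - B2 @ K2],
])
# With weak coupling, all eigenvalues printed here remain in the left half-plane.
print("closed-loop eigenvalues:", np.linalg.eigvals(A_cl))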
Centralized, decentralized, and independent control of a flexible manipulator on a flexible base
NASA Technical Reports Server (NTRS)
Li, Feiyue; Bainum, Peter M.; Xu, Jianke
1991-01-01
The dynamics and control of a flexible manipulator arm with payload mass on a flexible base in space are considered. The controllers are provided by one torquer at the center of the base and one torquer at the connection joint of the robot and the base. The nonlinear dynamics of the system is modeled by applying the finite element method and Lagrangian formula. Three control strategies are considered and compared, i.e., centralized control, decentralized control, and independent control. All these control designs are based on linear quadratic regulator theory. A mathematical decomposition is used in the decentralization process so that the coupling between the subsystems is weak, while a physical decomposition is used in the independent control design process. For both the decentralized and the independent controls, the stability of the overall linear system is checked before numerical simulations are initiated. Two numerical examples show that the responses of the independent control system are close to those of the centralized control system, while the responses of the decentralized control system are not.
Rudnick, Paul A
2015-04-01
Multiple-reaction monitoring (MRM) of peptides has been recognized as a promising technology because it is sensitive and robust. Borrowed from stable-isotope dilution (SID) methodologies in the field of small molecules, MRM is now routinely used in proteomics laboratories. While its usefulness validating candidate targets is widely accepted, it has not been established as a discovery tool. Traditional thinking has been that MRM workflows cannot be multiplexed high enough to efficiently profile. This is due to slower instrument scan rates and the complexities of developing increasingly large scheduling methods. In this issue, Colangelo et al. (Proteomics 2015, 15, 1202-1214) describe a pipeline (xMRM) for discovery-style MRM using label-free methods (i.e. relative quantitation). Label-free comes with cost benefits as does MRM, where data are easier to analyze than full-scan. Their paper offers numerous improvements in method design and data analysis. The robustness of their pipeline was tested on rodent postsynaptic density fractions. There, they were able to accurately quantify 112 proteins at a CV% of 11.4, with only 2.5% of the 1697 transitions requiring user intervention. Colangelo et al. aim to extend the reach of MRM deeper into the realm of discovery proteomics, an area that is currently dominated by data-dependent and data-independent workflows. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
López-Campos, Jose Luis; Abad Arranz, María; Calero Acuña, Carmen; Romero Valero, Fernando; Ayerbe García, Ruth; Hidalgo Molina, Antonio; Aguilar Pérez-Grovas, Ricardo Ismael; García Gil, Francisco; Casas Maldonado, Francisco; Caballero Ballesteros, Laura; Sánchez Palop, María; Pérez-Tejero, Dolores; Segado, Alejandro; Calvo Bonachera, Jose; Hernández Sierra, Bárbara; Doménech, Adolfo; Arroyo Varela, Macarena; González Vargas, Francisco; Cruz Rueda, Juan Jose
2015-01-01
Previous clinical audits for chronic obstructive pulmonary disease (COPD) have provided valuable information on the clinical care delivered to patients admitted to medical wards because of COPD exacerbations. However, clinical audits of COPD in an outpatient setting are scarce and no methodological guidelines are currently available. Based on our previous experience, herein we describe a clinical audit for COPD patients in specialized outpatient clinics with the overall goal of establishing a potential methodological workflow. A pilot clinical audit of COPD patients referred to respiratory outpatient clinics in the region of Andalusia, Spain (over 8 million inhabitants), was performed. The audit took place between October 2013 and September 2014, and 10 centers (20% of all public hospitals) were invited to participate. Cases with an established diagnosis of COPD based on risk factors, clinical symptoms, and a post-bronchodilator FEV1/FVC ratio of less than 0.70 were deemed eligible. The usefulness of formally scheduled regular follow-up visits was assessed. Two different databases (resources and clinical database) were constructed. Assessments were planned over a year divided by 4 three-month periods, with the goal of determining seasonal-related changes. Exacerbations and survival served as the main endpoints. This paper describes a methodological framework for conducting a clinical audit of COPD patients in an outpatient setting. Results from such audits can guide health information systems development and implementation in real-world settings.
Transition-Independent Decentralized Markov Decision Processes
NASA Technical Reports Server (NTRS)
Becker, Raphen; Zilberstein, Shlomo; Lesser, Victor; Goldman, Claudia V.; Morris, Robert (Technical Monitor)
2003-01-01
There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of multi-agent systems is lacking. A recent complexity result, showing that solving decentralized MDPs is NEXP-hard, provides a partial explanation. To overcome this complexity barrier, we identify a general class of transition-independent decentralized MDPs that is widely applicable. The class consists of independent collaborating agents that are tied together by a global reward function that depends on both of their histories. We present a novel algorithm for solving this class of problems and examine its properties. The result is the first effective technique to optimally solve a class of decentralized MDPs. This lays the foundation for further work in this area on both exact and approximate solutions.
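A minimal data-structure sketch of the transition-independent structure is shown below: each agent has its own local transition model and local reward, and only a joint reward couples the agents' local states. The sizes and values are toy assumptions, and no solution algorithm is included.

import numpy as np

# Minimal sketch of a transition-independent decentralized MDP: each agent's
# transitions depend only on its own local state and action; only the joint
# reward couples the agents (all sizes and values are toy assumptions).
n_states, n_actions = 3, 2
rng = np.random.default_rng(0)

def random_transitions():
    T = rng.random((n_states, n_actions, n_states))
    return T / T.sum(axis=2, keepdims=True)       # P(s' | s, a) for one agent

T1, T2 = random_transitions(), random_transitions()   # independent local dynamics
R1 = rng.random((n_states, n_actions))                 # local reward of agent 1
R2 = rng.random((n_states, n_actions))                 # local reward of agent 2
R_joint = rng.random((n_states, n_states))             # coupling reward on the joint local states

def step(s1, a1, s2, a2):
    s1_next = rng.choice(n_states, p=T1[s1, a1])       # agent 1 evolves independently
    s2_next = rng.choice(n_states, p=T2[s2, a2])       # agent 2 evolves independently
    reward = R1[s1, a1] + R2[s2, a2] + R_joint[s1_next, s2_next]
    return s1_next, s2_next, reward

print(step(0, 1, 2, 0))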
NASA Astrophysics Data System (ADS)
Huayang, Yin; Di, Zhou; Bing, Cui
2018-02-01
Soft budget constraint theory is used to explore the formation mechanism and the underlying institutional incentives behind the debt expansion of local government financing platforms from the perspective of fiscal and financial decentralization. A theoretical framework explaining the debt expansion of local financing platforms is constructed and tested empirically. The results show that the higher the degree of fiscal decentralization, the stronger the fiscal autonomy of local governments acting under soft budget constraints and the larger the debt of local financing platforms; the higher the degree of financial decentralization, the greater the autonomy of local governments and financial institutions relative to the central government and the larger the debt of local financing platforms; and the stronger the degree of financial synergy, the more mutual financial supervision among local governments increases the transparency of local government debt and the smaller the debt of local financing platforms.
RESTFul based heterogeneous Geoprocessing workflow interoperation for Sensor Web Service
NASA Astrophysics Data System (ADS)
Yang, Chao; Chen, Nengcheng; Di, Liping
2012-10-01
Advanced sensors on board satellites offer detailed Earth observations. A workflow is one approach for designing, implementing and constructing a flexible and live link between these sensors' resources and users. It can coordinate, organize and aggregate the distributed sensor Web services to meet the requirement of a complex Earth observation scenario. A RESTFul based workflow interoperation method is proposed to integrate heterogeneous workflows into an interoperable unit. The Atom protocols are applied to describe and manage workflow resources. The XML Process Definition Language (XPDL) and Business Process Execution Language (BPEL) workflow standards are applied to structure a workflow that accesses sensor information and one that processes it separately. Then, a scenario for nitrogen dioxide (NO2) from a volcanic eruption is used to investigate the feasibility of the proposed method. The RESTFul based workflows interoperation system can describe, publish, discover, access and coordinate heterogeneous Geoprocessing workflows.
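As a small illustration of describing a workflow resource with the Atom syndication format, the sketch below builds an Atom entry with Python's standard library; the identifier, category, and link values are hypothetical, and the actual service endpoints of the paper's system are not shown.

import xml.etree.ElementTree as ET

# Illustrative Atom entry describing a Geoprocessing workflow resource.
# Only the Atom element names are standard; all values are made up.
ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "NO2 plume processing workflow"
ET.SubElement(entry, f"{{{ATOM}}}id").text = "urn:example:workflow:no2-volcano"   # assumed identifier
ET.SubElement(entry, f"{{{ATOM}}}updated").text = "2012-10-01T00:00:00Z"
ET.SubElement(entry, f"{{{ATOM}}}category", term="BPEL")           # workflow type, hypothetical
ET.SubElement(entry, f"{{{ATOM}}}link", rel="edit",
              href="http://example.org/workflows/no2-volcano")     # hypothetical endpoint

print(ET.tostring(entry, encoding="unicode"))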
Scientific Data Management (SDM) Center for Enabling Technologies. 2007-2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ludascher, Bertram; Altintas, Ilkay
Over the past five years, our activities have both established Kepler as a viable scientific workflow environment and demonstrated its value across multiple science applications. We have published numerous peer-reviewed papers on the technologies highlighted in this short paper and have given Kepler tutorials at SC06, SC07, SC08, and SciDAC 2007. Our outreach activities have allowed scientists to learn best practices and better utilize Kepler to address their individual workflow problems. Our contributions to advancing the state-of-the-art in scientific workflows have focused on the following areas; progress in each is described in subsequent sections. Workflow development: the development of a deeper understanding of scientific workflows "in the wild" and of the requirements for support tools that allow easy construction of complex scientific workflows. Generic workflow components and templates: the development of generic actors (i.e. workflow components and processes) which can be broadly applied to scientific problems. Provenance collection and analysis: the design of a flexible provenance collection and analysis infrastructure within the workflow environment. Workflow reliability and fault tolerance: the improvement of the reliability and fault-tolerance of workflow environments.
Yuan, Michael Juntao; Finley, George Mike; Long, Ju; Mills, Christy; Johnson, Ron Kim
2013-01-31
Clinical decision support systems (CDSS) are important tools to improve health care outcomes and reduce preventable medical adverse events. However, the effectiveness and success of CDSS depend on their implementation context and usability in complex health care settings. As a result, usability design and validation, especially in real world clinical settings, are crucial aspects of successful CDSS implementations. Our objective was to develop a novel CDSS to help frontline nurses better manage critical symptom changes in hospitalized patients, hence reducing preventable failure to rescue cases. A robust user interface and implementation strategy that fit into existing workflows was key for the success of the CDSS. Guided by a formal usability evaluation framework, UFuRT (user, function, representation, and task analysis), we developed a high-level specification of the product that captures key usability requirements and is flexible to implement. We interviewed users of the proposed CDSS to identify requirements, listed functions, and operations the system must perform. We then designed visual and workflow representations of the product to perform the operations. The user interface and workflow design were evaluated via heuristic and end user performance evaluation. The heuristic evaluation was done after the first prototype, and its results were incorporated into the product before the end user evaluation was conducted. First, we recruited 4 evaluators with strong domain expertise to study the initial prototype. Heuristic violations were coded and rated for severity. Second, after development of the system, we assembled a panel of nurses, consisting of 3 licensed vocational nurses and 7 registered nurses, to evaluate the user interface and workflow via simulated use cases. We recorded whether each session was successfully completed and its completion time. Each nurse was asked to use the National Aeronautics and Space Administration (NASA) Task Load Index to self-evaluate the amount of cognitive and physical burden associated with using the device. A total of 83 heuristic violations were identified in the studies. The distribution of the heuristic violations and their average severity are reported. The nurse evaluators successfully completed all 30 sessions of the performance evaluations. All nurses were able to use the device after a single training session. On average, the nurses took 111 seconds (SD 30 seconds) to complete the simulated task. The NASA Task Load Index results indicated that the work overhead on the nurses was low. In fact, most of the burden measures were consistent with zero. The only potentially significant burden was temporal demand, which was consistent with the primary use case of the tool. The evaluation has shown that our design was functional and met the requirements demanded by the nurses' tight schedules and heavy workloads. The user interface embedded in the tool provided compelling utility to the nurse with minimal distraction.
Decentralization: Another Perspective
ERIC Educational Resources Information Center
Chapman, Robin
1973-01-01
This paper attempts to pursue the centralization-decentralization dilemma. A setting for this discussion is provided by noting some of the uses of terminology, followed by a consideration of inherent difficulties in conceptualizing. (Author)
32 CFR Appendix B to Part 324 - System of Records Notice
Code of Federal Regulations, 2013 CFR
2013-07-01
... organizationally decentralized system, describe each level of organization or element that maintains a portion of... manager should be indicated. For geographically separated or organizationally decentralized activities...
32 CFR Appendix B to Part 324 - System of Records Notice
Code of Federal Regulations, 2011 CFR
2011-07-01
... organizationally decentralized system, describe each level of organization or element that maintains a portion of... manager should be indicated. For geographically separated or organizationally decentralized activities...
32 CFR Appendix B to Part 324 - System of Records Notice
Code of Federal Regulations, 2012 CFR
2012-07-01
... organizationally decentralized system, describe each level of organization or element that maintains a portion of... manager should be indicated. For geographically separated or organizationally decentralized activities...
32 CFR Appendix B to Part 324 - System of Records Notice
Code of Federal Regulations, 2014 CFR
2014-07-01
... organizationally decentralized system, describe each level of organization or element that maintains a portion of... manager should be indicated. For geographically separated or organizationally decentralized activities...
Kumar, Rajiv B; Goren, Nira D; Stark, David E; Wall, Dennis P; Longhurst, Christopher A
2016-01-01
The diabetes healthcare provider plays a key role in interpreting blood glucose trends, but few institutions have successfully integrated patient home glucose data in the electronic health record (EHR). Published implementations to date have required custom interfaces, which limit wide-scale replication. We piloted automated integration of continuous glucose monitor data in the EHR using widely available consumer technology for 10 pediatric patients with insulin-dependent diabetes. Establishment of a passive data communication bridge via a patient’s/parent’s smartphone enabled automated integration and analytics of patient device data within the EHR between scheduled clinic visits. It is feasible to utilize available consumer technology to assess and triage home diabetes device data within the EHR, and to engage patients/parents and improve healthcare provider workflow. PMID:27018263
NASA Astrophysics Data System (ADS)
Lary, D. J.
2013-12-01
A BigData case study is described in which multiple datasets from several satellites, high-resolution global meteorological data, social media, and in-situ observations are combined using machine learning on a distributed cluster with an automated workflow. The global particulate dataset is relevant to global public health studies and would not be possible to produce without the use of the multiple big datasets, in-situ data, and machine learning. To greatly reduce development time and enhance functionality, a high-level language capable of parallel processing has been used (Matlab). Key considerations for the system are high-speed access due to the large data volume, persistence of the large data volumes, and a precise process-time scheduling capability.
Data management integration for biomedical core facilities
NASA Astrophysics Data System (ADS)
Zhang, Guo-Qiang; Szymanski, Jacek; Wilson, David
2007-03-01
We present the design, development, and pilot-deployment experiences of MIMI, a web-based, Multi-modality Multi-Resource Information Integration environment for biomedical core facilities. This is an easily customizable, web-based software tool that integrates scientific and administrative support for a biomedical core facility involving a common set of entities: researchers; projects; equipment and devices; support staff; services; samples and materials; experimental workflow; large and complex data. With this software, one can: register users; manage projects; schedule resources; bill services; perform site-wide search; archive, back-up, and share data. With its customizable, expandable, and scalable characteristics, MIMI not only provides a cost-effective solution to the overarching data management problem of biomedical core facilities unavailable in the market place, but also lays a foundation for data federation to facilitate and support discovery-driven research.
[Decentralization of psychiatric health service].
Dabrowski, S
1996-01-01
The article discusses two stages of de-centralization of psychiatric hospitals: the first consists in further division into sub-districts, the second one includes successive establishment of psychiatric wards in general hospitals. With the growth of their number these wards are to take over more and more general psychiatric tasks from the specialized psychiatric hospitals. These wards will not substitute psychiatric hospitals completely. The hospitals, though decreasing in size and number, will be a necessary element of the de-centralized and versatile psychiatric care for a long time to come.
Reduced modeling of flexible structures for decentralized control
NASA Technical Reports Server (NTRS)
Yousuff, A.; Tan, T. M.; Bahar, L. Y.; Konstantinidis, M. F.
1986-01-01
Based upon the modified finite element-transfer matrix method, this paper presents a technique for reduced modeling of flexible structures for decentralized control. The modeling decisions are carried out at (finite-) element level, and are dictated by control objectives. A simply supported beam with two sets of actuators and sensors (linear force actuator and linear position and velocity sensors) is considered for illustration. In this case, it is conjectured that the decentrally controlled closed loop system is guaranteed to be at least marginally stable.
Decentralized digital adaptive control of robot motion
NASA Technical Reports Server (NTRS)
Tarokh, M.
1990-01-01
A decentralized model reference adaptive scheme is developed for digital control of robot manipulators. The adaptation laws are derived using hyperstability theory, which guarantees asymptotic trajectory tracking despite gross robot parameter variations. The control scheme has a decentralized structure in the sense that each local controller receives only its joint angle measurement to produce its joint torque. The independent joint controllers have simple structures and can be programmed using a very simple and computationally fast algorithm. As a result, the scheme is suitable for real-time motion control.
Fischer, Sven; Grechenig, Kristoffel; Meier, Nicolas
2016-01-01
We run several experiments which allow us to compare cooperation under perfect and imperfect information in a centralized and a decentralized punishment regime. Under perfect and extremely noisy information, aggregate behavior does not differ between institutions. Under intermediate noise, punishment escalates in the decentralized peer-to-peer punishment regime, which badly affects efficiency while sustaining cooperation for longer. Only in the decentralized regime is punishment often directed at cooperators (perverse punishment). We report several, sometimes subtle, differences in punishment behavior and in how contributions react. PMID:27746725
Decentralized control of Markovian decision processes: Existence of Sigma-admissible policies
NASA Technical Reports Server (NTRS)
Greenland, A.
1980-01-01
The problem of formulating and analyzing Markov decision models having decentralized information and decision patterns is examined. Included are basic examples as well as the mathematical preliminaries needed to understand Markov decision models and, further, to superimpose decentralized decision structures on them. The notion of a variance admissible policy for the model is introduced and it is proved that there exist (possibly nondeterministic) optimal policies from the class of variance admissible policies. Directions for further research are explored.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, W; Bayhealth Medical Center, Dover, DE; Chu, A
Purpose: Quality assurance of a large quantity of retrospective treatment cases by random sampling is inefficient. Here we provide a method to efficiently monitor and investigate the QA of the SBRT workflow over Mosaiq. Methods: Code developed with Microsoft SQL Server Management Studio 2008R2 and VBA was used for retrieving and sorting data from Mosaiq (versions 2.3–2.6 during 2012–2015). SBRT patients were filtered by a fractional dose over 350 cGy and a total fraction number of less than 6, which defined the SBRT prescriptions. The quality assurance of the SBRT workflow focused on treatment deliveries, such as patient positioning setup, CBCT-indicated offsets, and couch-shift corrections. Treatment delivery was performed on Varian Truebeam systems, with record/verify by Mosaiq. Results: A total of 82 SBRT patients, corresponding to 103 courses and 854 CBCT images, were found by the retrieval query. Most centers record daily pre-treatment (Pre-Tx: before treatment shift) image-guided shifts along the treatment course as a record of inter-fraction motion, and it is useful to also verify them with post-treatment imaging (Post-Tx: CBCT verification after treatment) to capture intra-fraction motion. Analyzing the details of the daily recorded shifts can reveal information about patient setup and staff record/verify behaviors. Three examples were provided as solid evidence, with ongoing rectification to prevent future mistakes. Conclusions: The report gives feasible examples for an inspector to verify a large amount of data during a site investigation. This program can also be extended to scheduled data mining that periodically analyzes records in Mosaiq, for example with various control charts for different QA purposes. Given the current trend toward automation in the radiation therapy field, data mining will be a necessary tool in the future, just as automatic plan quality evaluation is already under development in Eclipse.
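The real Mosaiq schema is not reproduced here; the sketch below only illustrates, with made-up table and column names, the kind of SQL filter described above (fractional dose over 350 cGy and fewer than 6 fractions), issued from Python through pyodbc.

import pyodbc  # assumes a SQL Server ODBC driver is installed

# Hypothetical query in the spirit of the SBRT filter described above.
# Table and column names are made up; the real Mosaiq schema differs.
query = """
SELECT Pat_ID, Course_ID, Fraction_Dose_cGy, Total_Fractions
FROM   Prescriptions
WHERE  Fraction_Dose_cGy > 350
  AND  Total_Fractions < 6
"""

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=mosaiq-db;DATABASE=MOSAIQ;Trusted_Connection=yes")  # placeholder connection string
for row in conn.cursor().execute(query):
    print(row.Pat_ID, row.Course_ID)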
HIS-Based Support of Follow-Up Documentation – Concept and Implementation for Clinical Studies
Herzberg, S.; Fritz, F.; Rahbar, K.; Stegger, L.; Schäfers, M.; Dugas, M.
2011-01-01
Objective: Follow-up data must be collected according to the protocol of each clinical study, i.e. at certain time points. Missing follow-up information is a critical problem and may impede or bias the analysis of study data and result in delays. Moreover, additional patient recruitment may be necessary due to incomplete follow-up data. Current electronic data capture (EDC) systems in clinical studies are usually separated from hospital information systems (HIS) and therefore can provide limited functionality to support clinical workflow. In two case studies, we assessed the feasibility of HIS-based support of follow-up documentation. Methods: We have developed a data model and a HIS-based workflow to provide follow-up forms according to clinical study protocols. If a follow-up form was due, a database procedure created a follow-up event which was translated by a communication server into an HL7 message and transferred to the import interface of the clinical information system (CIS). This procedure generated the required follow-up form and enqueued a link to it in a work list of the respective study nurses and study physicians. Results: A HIS-based follow-up system automatically generated follow-up forms as defined by a clinical study protocol. These forms were scheduled into work lists of study nurses and study physicians. This system was integrated into the clinical workflow of two clinical studies. In a study from nuclear medicine, each scenario from the test concept according to the protocol of the single photon emission computed tomography/computed tomography (SPECT/CT) study was simulated and each scenario passed the test. For a study in psychiatry, 128 follow-up forms were automatically generated within 27 weeks, on average five forms per week (maximum 12, minimum 1 form per week). Conclusion: HIS-based support of follow-up documentation in clinical studies is technically feasible and can support compliance with study protocols. PMID:23616857
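As a much simplified sketch of composing an HL7 v2-style message for a due follow-up form, the code below assembles a few pipe-delimited segments; the segment contents are placeholders, many required fields are omitted, and this is not the interface actually used by the study systems.

from datetime import datetime

# Simplified HL7 v2-style message for a due follow-up form (illustrative only;
# field contents are placeholders and many required fields are omitted).
def follow_up_message(patient_id, study_id, form_name, due_date):
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        f"MSH|^~\\&|STUDY_DB|SITE|CIS|SITE|{ts}||OMG^O19|{study_id}-{ts}|P|2.5",
        f"PID|1||{patient_id}",
        f"OBR|1||{study_id}|{form_name}|||{due_date}",
    ]
    return "\r".join(segments)

msg = follow_up_message("12345", "SPECT-CT-01", "FOLLOWUP_3M", "20110301")
print(msg.replace("\r", "\n"))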
Kranzfelder, Michael; Schneider, Armin; Gillen, Sonja; Feussner, Hubertus
2011-03-01
Technical progress in the operating room (OR) increases constantly, but advanced techniques for error prevention are lacking. It has been the vision to create intelligent OR systems ("autopilot") that not only collect intraoperative data but also interpret whether the course of the operation is normal or deviating from the schedule ("situation awareness"), to recommend the adequate next steps of the intervention, and to identify imminent risky situations. Recently introduced technologies in health care for real-time data acquisition (bar code, radiofrequency identification [RFID], voice and emotion recognition) may have the potential to meet these demands. This report aims to identify, based on the authors' institutional experience and a review of the literature (MEDLINE search 2000-2010), which technologies are currently most promising for providing the required data and to describe their fields of application and potential limitations. Retrieval of information on the functional state of the peripheral devices in the OR is technically feasible by continuous sensor-based data acquisition and online analysis. Using bar code technologies, automatic instrument identification seems conceivable, with information given about the actual part of the procedure and indication of any change in the routine workflow. The dynamics of human activities also comprise key information. A promising technology for continuous personnel tracking is data acquisition with RFID. Emotional data capture and analysis in the OR are difficult. Although technically feasible, nonverbal emotion recognition is difficult to assess. In contrast, emotion recognition by speech seems to be a promising technology for further workflow prediction. The presented technologies are a first step to achieving an increased situational awareness in the OR. However, workflow definition in surgery is feasible only if the procedure is standardized, the peculiarities of the individual patient are taken into account, the level of the surgeon's expertise is regarded, and a comprehensive data capture can be obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boddu, S; Morrow, A; Krishnamurthy, N
Purpose: Our goal is to implement lean methodology to make our current process, from CT simulation to treatment, more efficient. Methods: In this study, we implemented lean methodology and tools and employed flowcharts in Excel for process mapping. We formed a group of physicians, physicists, dosimetrists, therapists, and a clinical physics assistant and huddled bi-weekly to map current value streams. We performed GEMBA walks and observed current processes from scheduling patient CT simulations to treatment plan approval. From this, the entire workflow was categorized into processes, sub-processes, and tasks. For each process we gathered data on touch time, first-time quality, undesirable effects (UDEs), and wait times from relevant members of each task. UDEs were binned by frequency of occurrence. We huddled to map the future state and to find solutions to high-frequency UDEs. We implemented visual controls and hard stops, and documented issues found during chart checks prior to treatment plan approval. Results: We identified approximately 64 UDEs in our current workflow that could cause delays or re-work, compromise the quality and safety of patient treatments, or cause wait times between 1 and 6 days. While some UDEs are unavoidable, such as re-planning due to patient weight loss, eliminating avoidable UDEs is our goal. In 2015, we found 399 issues with patient treatment plans, of which 261, 95, and 43 were of low, medium, and high severity, respectively. We also mapped patient-specific QA processes for IMRT/Rapid Arc and SRS/SBRT, involving 10 and 18 steps, respectively. From these, 13 UDEs were found and 5 were addressed, resolving 20% of the issues. Conclusion: We have successfully implemented lean methodology and tools. We are further mapping treatment-site-specific workflows to identify bottlenecks, potential breakdowns, and personnel allocation, and will employ tools such as failure mode and effects analysis to mitigate risk factors and make this process more efficient.
Haston, Elspeth; Cubey, Robert; Pullan, Martin; Atkins, Hannah; Harris, David J
2012-01-01
Digitisation programmes in many institutes frequently involve disparate and irregular funding, diverse selection criteria and scope, with different members of staff managing and operating the processes. These factors have influenced the decision at the Royal Botanic Garden Edinburgh to develop an integrated workflow for the digitisation of herbarium specimens which is modular and scalable, enabling a single overall workflow to be used for all digitisation projects. This integrated workflow comprises three principal elements: a specimen workflow, a data workflow and an image workflow. The specimen workflow is strongly linked to curatorial processes which will impact on the prioritisation, selection and preparation of the specimens. The importance of including a conservation element within the digitisation workflow is highlighted. The data workflow includes the concept of three main categories of collection data: label data, curatorial data and supplementary data. It is shown that each category of data has its own properties which influence the timing of data capture within the workflow. Software has been developed for the rapid capture of curatorial data, and optical character recognition (OCR) software is being used to increase the efficiency of capturing label data and supplementary data. The large number and size of the images have necessitated the inclusion of automated systems within the image workflow. PMID:22859881
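To make the OCR-assisted step of such a data workflow concrete, the following is a minimal sketch of automated label-data capture, assuming the pytesseract and Pillow packages (and a local Tesseract installation) are available. The regex-based field extraction is purely illustrative and is not the Edinburgh software described in the paper.

```python
import re
from PIL import Image
import pytesseract

def capture_label_data(image_path: str) -> dict:
    """Run OCR on a specimen label image and pull out a few candidate fields."""
    text = pytesseract.image_to_string(Image.open(image_path))
    record = {"raw_text": text}
    # Illustrative heuristics: collectors often record a date and a collector number.
    date = re.search(r"\b\d{1,2}\s+\w+\s+\d{4}\b", text)
    coll_no = re.search(r"\b[Nn]o\.?\s*(\d+)\b", text)
    record["collection_date"] = date.group(0) if date else None
    record["collector_number"] = coll_no.group(1) if coll_no else None
    return record

# Example (hypothetical file name):
# print(capture_label_data("specimen_0001_label.jpg"))
```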
Northeastern Illinois RTA Decentralized Paratransit Brokerage Program
DOT National Transportation Integrated Search
1982-09-01
This document presents a review and assessment of the Northeastern Illinois Regional Transportation Authority's (RTA) Paratransit Brokerage Demonstration Program which involved six projects implemented by local governments under RTA's decentralized b...
Programs Related to Septic Systems
There are many programs, both at the EPA and elsewhere, that relate to the decentralized wastewater program and provide information about how decentralized wastewater is integrated in environmental quality, planning, protection, and conservation.
20 CFR 375.7 - Operating regulations.
Code of Federal Regulations, 2012 CFR
2012-04-01
... offices. (2) To provide the necessary authority for a decentralized program as outlined in this paragraph...) To provide the necessary authority for a decentralized program as outlined in paragraph (b) of this...
20 CFR 375.7 - Operating regulations.
Code of Federal Regulations, 2013 CFR
2013-04-01
... offices. (2) To provide the necessary authority for a decentralized program as outlined in this paragraph...) To provide the necessary authority for a decentralized program as outlined in paragraph (b) of this...
20 CFR 375.7 - Operating regulations.
Code of Federal Regulations, 2014 CFR
2014-04-01
... offices. (2) To provide the necessary authority for a decentralized program as outlined in this paragraph...) To provide the necessary authority for a decentralized program as outlined in paragraph (b) of this...
Building asynchronous geospatial processing workflows with web services
NASA Astrophysics Data System (ADS)
Zhao, Peisheng; Di, Liping; Yu, Genong
2012-02-01
Geoscience research and applications often involve a geospatial processing workflow. This workflow includes a sequence of operations that use a variety of tools to collect, translate, and analyze distributed heterogeneous geospatial data. Asynchronous mechanisms, by which clients initiate a request and then resume their processing without waiting for a response, are very useful for complicated workflows that take a long time to run. Geospatial contents and capabilities are increasingly becoming available online as interoperable Web services. This online availability significantly enhances the ability to use Web service chains to build distributed geospatial processing workflows. This paper focuses on how to orchestrate Web services for implementing asynchronous geospatial processing workflows. The theoretical bases for asynchronous Web services and workflows, including asynchrony patterns and message transmission, are examined to explore different asynchronous approaches to and architecture of workflow code for the support of asynchronous behavior. A sample geospatial processing workflow, issued by the Open Geospatial Consortium (OGC) Web Service, Phase 6 (OWS-6), is provided to illustrate the implementation of asynchronous geospatial processing workflows and the challenges in using Web Services Business Process Execution Language (WS-BPEL) to develop them.
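One common asynchronous pattern discussed in this context is request/poll: the client submits a processing request, receives a status location, continues its own work, and checks back until the job completes. Below is a minimal sketch of that pattern against a WPS-like service; the endpoint URL, request parameters, and JSON status fields are hypothetical assumptions, since OGC WPS deployments differ in detail.

```python
import time
import requests

SERVICE_URL = "https://example.org/wps"   # hypothetical processing service

def run_async(process_id: str, inputs: dict, poll_seconds: float = 5.0) -> dict:
    # Submit the request and ask for asynchronous execution (payload shape assumed).
    job = requests.post(SERVICE_URL, json={
        "process": process_id, "inputs": inputs, "mode": "async"}).json()
    status_url = job["statusLocation"]     # hypothetical field name
    # The client could resume other work here; this sketch simply polls until done.
    while True:
        status = requests.get(status_url).json()
        if status["state"] in ("succeeded", "failed"):
            return status
        time.sleep(poll_seconds)

# Example (hypothetical process and data URLs):
# result = run_async("gml:Reproject", {"data": "https://example.org/roads.gml",
#                                      "targetCRS": "EPSG:4326"})
```

Callback-based variants, in which the service notifies the client when results are ready, trade the polling overhead for the need to expose a client endpoint; WS-BPEL engines typically support both styles.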
Zborowsky, Terri; Bunker-Hellmich, Lou; Morelli, Agneta; O'Neill, Mike
2010-01-01
Evidence-based findings of the effects of nursing station design on nurses' work environment and work behavior are essential to improve conditions and increase retention among these fundamental members of the healthcare delivery team. The purpose of this exploratory study was to investigate how nursing station design (i.e., centralized and decentralized nursing station layouts) affected nurses' use of space, patient visibility, noise levels, and perceptions of the work environment. Advances in information technology have enabled nurses to move away from traditional centralized paper-charting stations to smaller decentralized work stations and charting substations located closer to, or inside of, patient rooms. Improved understanding of the trade-offs presented by centralized and decentralized nursing station design has the potential to provide useful information for future nursing station layouts. This information will be critical for understanding the nurse environment "fit." The study used an exploratory design with both qualitative and quantitative methods. Qualitative data regarding the effects of nursing station design on nurses' health and work environment were gathered by means of focus group interviews. Quantitative data-gathering techniques included place- and person-centered space use observations, patient visibility assessments, sound level measurements, and an online questionnaire regarding perceptions of the work environment. Nurses on all units were observed most frequently performing telephone, computer, and administrative duties. Time spent using telephones, computers, and performing other administrative duties was significantly higher in the centralized nursing stations. Consultations with medical staff and social interactions were significantly less frequent in decentralized nursing stations. There were no indications that either centralized or decentralized nursing station designs resulted in superior visibility. Sound levels measured in all nursing stations exceeded recommended levels during all shifts. No significant differences were identified in nurses' perceptions of work control-demand-support in centralized and decentralized nursing station designs. The "hybrid" nursing design model in which decentralized nursing stations are coupled with centralized meeting rooms for consultation between staff members may strike a balance between the increase in computer duties and the ongoing need for communication and consultation that addresses the conflicting demands of technology and direct patient care.
A Closed-Loop Hardware Simulation of Decentralized Satellite Formation Control
NASA Technical Reports Server (NTRS)
Ebimuma, Takuji; Lightsey, E. Glenn; Baur, Frank (Technical Monitor)
2002-01-01
In recent years, there has been significant interest in the use of formation flying spacecraft for a variety of earth and space science missions. Formation flying may provide smaller and cheaper satellites that, working together, have more capability than larger and more expensive satellites. Several decentralized architectures have been proposed for autonomous establishment and maintenance of satellite formations. In such architectures, each satellite cooperatively maintains the shape of the formation without a central supervisor, processing only local measurement information. Global Positioning System (GPS) sensors are ideally suited to provide such local position and velocity measurements to the individual satellites. An investigation of the feasibility of a decentralized approach to satellite formation flying was originally presented by Carpenter. He extended a decentralized linear-quadratic-Gaussian (LQG) framework proposed by Speyer in a fashion similar to an extended Kalman filter (EKF) which processed GPS position fix solutions. The new decentralized LQG architecture was demonstrated in a numerical simulation for a realistic scenario that is similar to missions that have been proposed by NASA and the U.S. Air Force. Another decentralized architecture was proposed by Park et al. using carrier differential-phase GPS (CDGPS). Recently, Busse et al. demonstrated the decentralized CDGPS architecture in a hardware-in-the-loop simulation on the Formation Flying TestBed (FFTB) at Goddard Space Flight Center (GSFC), which features two Spirent Cox 16-channel GPS signal generators. Although this represented a step forward by utilizing GPS signal simulators for spacecraft formation flying simulation, only open-loop performance, in which no maneuvers were executed based on the real-time state estimates, was considered. In this research, hardware experimentation has been extended to include closed-loop integrated guidance and navigation of multiple spacecraft formations using GPS receivers and real-time vehicle telemetry. A hardware closed-loop simulation has been performed using the decentralized LQG architecture proposed by Carpenter in the GPS test facility at the Center for Space Research (CSR). This is the first presentation using this type of hardware for demonstration of closed-loop spacecraft formation flying.
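The following toy sketch illustrates the per-satellite estimate-then-control step implied by a decentralized LQG-style architecture: each vehicle filters only its own GPS position fixes and applies its own feedback gain, with no central supervisor. The one-axis double-integrator model, gains, and noise levels are illustrative assumptions, not the architectures of Carpenter, Park et al., or Busse et al.

```python
import numpy as np

dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])        # position/velocity along one axis
H = np.array([[1.0, 0.0]])                   # a GPS fix observes position only
Q = 1e-4 * np.eye(2)                         # process noise covariance (assumed)
R = np.array([[4.0]])                        # GPS fix variance (assumed)
K_ctrl = np.array([[0.2, 0.6]])              # local feedback gain (assumed)

def local_step(x_hat, P, z, x_ref):
    """One Kalman predict/update on a local GPS fix, then local feedback control.

    The effect of the previous control input on the prediction is omitted
    for brevity.
    """
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    u = -K_ctrl @ (x_new - x_ref)            # steer toward the assigned formation slot
    return x_new, P_new, u

x_hat, P = np.zeros((2, 1)), np.eye(2)
x_hat, P, u = local_step(x_hat, P, z=np.array([[3.0]]), x_ref=np.zeros((2, 1)))
print("commanded acceleration:", u.ravel())
```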
Scientific Workflows and the Sensor Web for Virtual Environmental Observatories
NASA Astrophysics Data System (ADS)
Simonis, I.; Vahed, A.
2008-12-01
Virtual observatories are maturing beyond their original domain and becoming common practice for earth observation research and policy building. The term Virtual Observatory originally came from the astronomical research community. Here, virtual observatories provide universal access to the available astronomical data archives of space and ground-based observatories. Furthermore, as those virtual observatories aim at integrating heterogeneous resources provided by a number of participating organizations, the virtual observatory acts as a coordinating entity that strives for common data analysis techniques and tools based on common standards. The Sensor Web is on its way to becoming one of the major virtual observatories outside of the astronomical research community. Like the original observatory that consists of a number of telescopes, each observing a specific part of the wave spectrum and with a collection of astronomical instruments, the Sensor Web provides a multi-eye perspective on the current, past, as well as future situation of our planet and its surrounding spheres. The current view of the Sensor Web is that of a single worldwide collaborative, coherent, consistent and consolidated sensor data collection, fusion and distribution system. The Sensor Web can perform as an extensive monitoring and sensing system that provides timely, comprehensive, continuous and multi-mode observations. This technology is key to monitoring and understanding our natural environment, including key areas such as climate change, biodiversity, or natural disasters on local, regional, and global scales. The Sensor Web concept has been well established with ongoing global research and deployment of Sensor Web middleware and standards and represents the foundation layer of systems like the Global Earth Observation System of Systems (GEOSS). The Sensor Web consists of a huge variety of physical and virtual sensors as well as observational data, made available on the Internet at standardized interfaces. All data sets and sensor communication follow well-defined abstract models and corresponding encodings, mostly developed by the OGC Sensor Web Enablement initiative. Scientific progress is currently accelerated by an emerging new concept called scientific workflows, which organize and manage complex distributed computations. A scientific workflow represents and records the highly complex processes that a domain scientist typically would follow in exploration, discovery and, ultimately, transformation of raw data to publishable results. The challenge is now to integrate the benefits of scientific workflows with those provided by the Sensor Web in order to leverage all resources for scientific exploration, problem solving, and knowledge generation. Scientific workflows for the Sensor Web represent the next evolutionary step towards efficient, powerful, and flexible earth observation frameworks and platforms. Those platforms support the entire process from capturing data, sharing and integrating, to requesting additional observations. Multiple sites and organizations will participate on single platforms, and scientists from different countries and organizations will interact and contribute to large-scale research projects. Simultaneously, the data and information overload becomes manageable, as multiple layers of abstraction will free scientists from dealing with underlying data, processing, or storage peculiarities.
The vision is automated investigation and discovery mechanisms that allow scientists to pose queries to the system, which in turn would identify potentially related resources, schedule processing tasks, and assemble all parts into workflows that may satisfy the query.
NASA Astrophysics Data System (ADS)
Clempner, Julio B.
2017-01-01
This paper presents a novel analytical method for soundness verification of workflow nets and reset workflow nets, using the well-known stability results of Lyapunov for Petri nets. We also prove that the soundness property is decidable for workflow nets and reset workflow nets. In addition, we provide evidence of several outcomes related to properties such as boundedness, liveness, reversibility and blocking, using stability. Our approach is validated theoretically and by a numerical example related to traffic signal-control synchronisation.
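For readers unfamiliar with workflow-net soundness, the sketch below gives an elementary, brute-force check of the classical property (every marking reachable from the initial marking can still reach the final marking) for a tiny example net. The exhaustive search over set-valued (1-safe) markings is only an illustration for small, bounded toy nets and omits the dead-transition check; it is not the analytical Lyapunov-based method of the paper.

```python
from collections import deque

# Transitions as (consumed places, produced places) for a toy workflow net
# with source place "i" and sink place "o" (an assumed example).
TRANSITIONS = {
    "t_split": ({"i"}, {"a", "b"}),
    "t_a":     ({"a"}, {"c"}),
    "t_b":     ({"b"}, {"d"}),
    "t_join":  ({"c", "d"}, {"o"}),
}

def fire(marking, consumed, produced):
    return frozenset((marking - consumed) | produced)

def reachable(start):
    """Breadth-first exploration of all markings reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        m = queue.popleft()
        for consumed, produced in TRANSITIONS.values():
            if consumed <= m:                  # transition is enabled
                n = fire(m, consumed, produced)
                if n not in seen:
                    seen.add(n)
                    queue.append(n)
    return seen

initial, final = frozenset({"i"}), frozenset({"o"})
# Option to complete: from every reachable marking, the final marking is still reachable.
sound = all(final in reachable(m) for m in reachable(initial))
print("sound:", sound)   # True for this toy net
```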
Biowep: a workflow enactment portal for bioinformatics applications.
Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano
2007-03-08
The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not usable by the majority of researchers who lack these skills. A portal enabling such researchers to benefit from these new technologies is still missing. We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software, and the creation of effective workflows, can significantly improve automation of in-silico analysis. Biowep is available to interested researchers as a reference portal. They are invited to submit their workflows to the workflow repository. Biowep is being further developed within the Laboratory of Interdisciplinary Technologies in Bioinformatics - LITBIO.
Biowep: a workflow enactment portal for bioinformatics applications
Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano
2007-01-01
Background The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not usable by the majority of researchers who lack these skills. A portal enabling such researchers to benefit from these new technologies is still missing. Results We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. Conclusion We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software, and the creation of effective workflows, can significantly improve automation of in-silico analysis. Biowep is available to interested researchers as a reference portal. They are invited to submit their workflows to the workflow repository. Biowep is being further developed within the Laboratory of Interdisciplinary Technologies in Bioinformatics – LITBIO. PMID:17430563
Papers by the Decentralized Wastewater Management MOU Partnership
Four position papers for state, local, and tribal government officials and interested stakeholders. These papers include information on the uses and benefits of decentralized wastewater treatment and examples of its effective use.
Clean Water State Revolving Fund (CWSRF): Decentralized Wastewater Treatment
Decentralized wastewater treatment is an onsite or clustered system used to collect, treat, and disperse or reclaim wastewater from a small community or service area (e.g., septic systems, cluster systems, lagoons).
Modeling and stability of segmented reflector telescopes - A decentralized approach
NASA Technical Reports Server (NTRS)
Ryaciotaki-Boussalis, Helen A.; Ih, Che-Hang Charles
1990-01-01
The decentralized control of a segmented reflector telescope, based on a finite-element model of its structure, is considered at the panel level. Each panel is first treated as an isolated subsystem so that the controller design is performed independently at the local level; the design is then applied to the composite system for stability analysis. The panel-level control laws were designed by means of pole placement using local output feedback. Simulation results show better than 1000:1 vibration attenuation in panel position compared with the open-loop system. It is shown that the overall closed-loop system is exponentially stable provided that certain conditions are met. The advantage of the decentralized approach is that the design is performed in terms of the low-dimensionality subsystems, thus drastically reducing the computational complexity of the design.
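The sketch below illustrates the panel-level design step in isolation: a single panel mode is modelled as a lightly damped second-order subsystem and its closed-loop poles are placed with local feedback. The panel model and desired pole locations are toy assumptions, and full state feedback is used here for simplicity, whereas the paper designs local output feedback and then verifies stability of the coupled composite system.

```python
import numpy as np
from scipy.signal import place_poles

# Assumed single-panel mode: 1.5 Hz, very lightly damped.
wn, zeta = 2.0 * np.pi * 1.5, 0.005
A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
B = np.array([[0.0], [1.0]])

# Illustrative target poles: add damping while keeping a similar frequency.
desired = np.array([-3.0 + 9.0j, -3.0 - 9.0j])
K = place_poles(A, B, desired).gain_matrix

print("local gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```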
Experimental Verification of Fully Decentralized Control Inspired by Plasmodium of True Slime Mold
NASA Astrophysics Data System (ADS)
Umedachi, Takuya; Takeda, Koichi; Nakagaki, Toshiyuki; Kobayashi, Ryo; Ishiguro, Akio
This paper presents a fully decentralized control scheme inspired by the plasmodium of true slime mold and demonstrates its validity using a soft-bodied amoeboid robot. The notable features of this paper are twofold: (1) the robot has a truly soft and deformable body stemming from real-time tunable springs and a balloon, the former utilized as an outer skin of the body and the latter serving as protoplasm; and (2) a fully decentralized control using coupled oscillators with a completely local sensory feedback mechanism is realized by exploiting the long-distance physical interaction between the body parts induced by the law of conservation of protoplasmic mass. Experimental results show that this robot exhibits truly supple locomotion without relying on any hierarchical structure. The results obtained are expected to shed new light on design schemes for autonomous decentralized control systems.
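As a generic illustration of the control idea, the toy sketch below couples one phase oscillator per body segment to its neighbours and modulates each oscillator with a purely local feedback term (a stand-in for locally sensed tension). The equations, parameters, and feedback form are illustrative assumptions, not the authors' specific controller.

```python
import numpy as np

N = 8                       # number of body segments (assumed)
omega = 2.0 * np.pi * 0.5   # nominal oscillation frequency (assumed)
eps, sigma = 0.5, 0.3       # coupling and feedback strengths (assumed)
rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, N)

def step(phi, local_tension, dt=0.01):
    """One Euler step: nearest-neighbour coupling plus local sensory feedback only."""
    left, right = np.roll(phi, 1), np.roll(phi, -1)
    coupling = np.sin(left - phi) + np.sin(right - phi)
    feedback = -sigma * local_tension * np.cos(phi)
    return phi + dt * (omega + eps * coupling + feedback)

for _ in range(1000):
    tension = 0.1 * np.sin(phi)          # stand-in for the physically mediated interaction
    phi = step(phi, tension)
print(np.round(np.mod(phi, 2 * np.pi), 2))
```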
Skaalvik, Mari Wolff; Gaski, Margrete; Norbye, Bente
2014-01-01
Background Ensuring a sufficient nursing workforce, with respect to both number and relevant professional competencies, is crucial in rural Arctic regions in Norway. This study examines the continuing education (CE) of nurses who graduated from a decentralized nursing programme between 1994 and 2011. Objective This study aims to measure the extent to which the decentralized nursing education (DNE) in question has served as a basis for CE that is adapted to current and future community health care service needs in rural Arctic regions in northern Norway. More specifically, the study aims to investigate the frequency and scope of CE courses among the graduates of a DNE, the choice of study model and the degree of employment with respect to the relevant CE. Design This study is a quantitative survey providing descriptive statistics. Results The primary finding in this study is that 56% of the participants had engaged in CE and that they were employed in positions related to their education. The majority of students with decentralized bachelor's degrees engaged in CE that was part-time and/or decentralized. Conclusions More than half of the population in this study had completed CE despite there being no mandatory requirement to maintain licensure. Furthermore, 31% of the participants had completed more than one CE programme. The findings show that the participants preferred CE organized as part-time and/or decentralized studies. PMID:25279355
Suvorova, Alena; Belyakov, Andrey; Makhamatova, Aliia; Ustinov, Andrey; Levina, Olga; Tulupyev, Alexander; Niccolai, Linda; Rassokhin, Vadim; Heimer, Robert
2015-01-01
Prior to 2010, medical care for people living with HIV/AIDS was provided at an outpatient facility near the center of St. Petersburg. Since then, HIV specialty clinics have been established in more outlying regions of the city. The study examined the effect of this decentralization of HIV care on patients' satisfaction with care in clinics of St. Petersburg, Russia. We conducted a cross-sectional study with 418 HIV-positive patients receiving care at the St. Petersburg AIDS Center or at District Infectious Disease Departments (centralized and decentralized models, respectively). Face-to-face interviews included questions about psychosocial characteristics, patient's satisfaction with care, and clinic-related patient experience. Abstraction of medical records provided information on patients' viral load. To compare centralized and decentralized models of care delivery, we performed bivariate and multivariate analysis. Clients of District Infectious Disease Departments spent less time in lines and traveling to reach the clinic, and they had stronger relationships with their doctor. The overall satisfaction with care was high, with 86% of the sample reporting high level of satisfaction. Nevertheless, satisfaction with care was strongly and positively associated with the decentralized model of care and Patient-Doctor Relationship Score. Patient experience elements such as waiting time, travel time, and number of services used were not significant factors related to satisfaction. Given the positive association of satisfaction with decentralized service delivery, it is worth exploring decentralization as one way of improving healthcare services for people living with HIV/AIDS.
Walsh, Kristin E; Chui, Michelle Anne; Kieser, Mara A; Williams, Staci M; Sutter, Susan L; Sutter, John G
2011-01-01
To explore community pharmacy technician workflow change after implementation of an automated robotic prescription-filling device. At an independent community pharmacy in rural Mayville, WI, pharmacy technicians were observed before and 3 months after installation of an automated robotic prescription-filling device. The main outcome measures were sequences and timing of technician workflow steps, workflow interruptions, automation surprises, and workarounds. Of the 77 and 80 observations made before and 3 months after robot installation, respectively, 17 different workflow sequences were observed before installation and 38 after installation. Average prescription filling time was reduced by 40 seconds per prescription with use of the robot. Workflow interruptions per observation increased from 1.49 to 1.79 (P = 0.11), and workarounds increased from 10% to 36% after robot use. Although automated prescription-filling devices can increase efficiency, workflow interruptions and workarounds may negate that efficiency. Assessing changes in workflow and sequencing of tasks that may result from the use of automation can help uncover opportunities for workflow policy and procedure redesign.
NASA Astrophysics Data System (ADS)
Elag, M.; Kumar, P.
2016-12-01
Hydrologists today have to integrate resources such as data and models, which originate and reside in multiple autonomous and heterogeneous repositories over the Web. Several resource management systems have emerged within geoscience communities for sharing long-tail data, which are collected by individual researchers or small research groups, and long-tail models, which are developed by scientists or small modeling communities. While these systems have increased the availability of resources within geoscience domains, deficiencies remain due to the heterogeneity in the methods which are used to describe, encode, and publish information about resources over the Web. This heterogeneity limits our ability to access the right information in the right context so that it can be efficiently retrieved and understood without the hydrologist's mediation. A primary challenge of the Web today is the lack of semantic interoperability among the massive number of resources, which already exist and are continually being generated at rapid rates. To address this challenge, we have developed a decentralized GeoSemantic (GS) framework, which provides three sets of micro-web services to support (i) semantic annotation of resources, (ii) semantic alignment between the metadata of two resources, and (iii) semantic mediation among Standard Names. Here we present the design of the framework and demonstrate its application for semantic integration between data and models used in the IML-CZO. First we show how the IML-CZO data are annotated using the Semantic Annotation Services. Then we illustrate how the Resource Alignment Services and Knowledge Integration Services are used to create a semantic workflow between the TopoFlow model, a spatially distributed hydrologic model, and the annotated data. The results of this work are (i) a demonstration of how the GS framework advances the integration of heterogeneous data and models of water-related disciplines by seamless handling of their semantic heterogeneity, (ii) the introduction of a new paradigm for reusing existing and new standards, as well as tools and models, without the need for their implementation in the cyberinfrastructures of water-related disciplines, and (iii) an investigation of a methodology by which distributed models can be coupled in a workflow using the GS services.
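To make the micro-web-service idea concrete, the sketch below shows how a client might call annotation and alignment services of the kind described. The base URL, endpoint paths, payload fields, and response shapes are hypothetical assumptions for illustration only; they are not the published GS framework API.

```python
import requests

GS_BASE = "https://example.org/geosemantics"   # hypothetical deployment

def annotate(resource_url: str, variable: str) -> dict:
    """Ask the (hypothetical) annotation service to tag a dataset or model variable."""
    return requests.post(f"{GS_BASE}/annotate",
                         json={"resource": resource_url, "variable": variable}).json()

def align(metadata_a: dict, metadata_b: dict) -> dict:
    """Ask the (hypothetical) alignment service to map one resource's metadata to another's."""
    return requests.post(f"{GS_BASE}/align",
                         json={"source": metadata_a, "target": metadata_b}).json()

# Example (hypothetical resources and variable names):
# data_meta  = annotate("https://example.org/imlczo/soil_moisture.csv", "VWC")
# model_meta = annotate("https://example.org/models/topoflow", "soil__water_content")
# mapping    = align(data_meta, model_meta)
```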
Generic worklist handler for workflow-enabled products
NASA Astrophysics Data System (ADS)
Schmidt, Joachim; Meetz, Kirsten; Wendler, Thomas
1999-07-01
Workflow management (WfM) is an emerging field of medical information technology. It appears to be a promising key technology for modelling, optimizing and automating processes, for the sake of improved efficiency, reduced costs and improved patient care. The application of WfM concepts requires the standardization of architectures and interfaces. A component of central interest proposed in this report is a generic work list handler: a standardized interface between a workflow enactment service and an application system. Application systems with embedded work list handlers will be called 'workflow-enabled application systems'. In this paper we discuss the functional requirements of work list handlers, as well as their integration into workflow architectures and interfaces. To lay the foundation for this specification, basic workflow terminology, the fundamentals of workflow management and - later in the paper - the available standards as defined by the Workflow Management Coalition are briefly reviewed.
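The following is a schematic sketch of the kind of generic work list handler interface argued for above: the workflow enactment service pushes work items, the handler embedded in the application exposes them, and the application reports completion. Method names and the work-item fields are illustrative assumptions, not the Workflow Management Coalition interface definitions themselves.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class WorkItem:
    item_id: str
    activity: str          # e.g. "report-findings" (illustrative)
    patient_id: str
    state: str = "offered" # offered -> started -> completed

class WorklistHandler:
    """Embedded in a workflow-enabled application system."""

    def __init__(self) -> None:
        self._items: Dict[str, WorkItem] = {}

    # Called by the workflow enactment service.
    def assign(self, item: WorkItem) -> None:
        self._items[item.item_id] = item

    # Called by the application / user interface.
    def pending(self) -> List[WorkItem]:
        return [i for i in self._items.values() if i.state != "completed"]

    def start(self, item_id: str) -> WorkItem:
        self._items[item_id].state = "started"
        return self._items[item_id]

    def complete(self, item_id: str, result: dict) -> None:
        self._items[item_id].state = "completed"
        # In a full system the result and state change would be reported back
        # to the enactment service here.

handler = WorklistHandler()
handler.assign(WorkItem("wi-1", "report-findings", "patient-42"))
print([i.activity for i in handler.pending()])
```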
Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems
Hendrix, Valerie; Fox, James; Ghoshal, Devarshi; ...
2016-07-21
The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad hoc; they involve an iterative development process that includes users composing and testing their workflows on desktops and scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates (i.e., sequence, parallel, split, merge) that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows, showing that Tigres performs with minimal template overheads (mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.
Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrix, Valerie; Fox, James; Ghoshal, Devarshi
The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad hoc; they involve an iterative development process that includes users composing and testing their workflows on desktops and scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates (i.e., sequence, parallel, split, merge) that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows, showing that Tigres performs with minimal template overheads (mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.
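The stand-in sketch below illustrates the template idea (sequence, parallel, split, merge) behind a library like Tigres. Because the library's exact API is not reproduced here, these helper functions are written from scratch and merely mimic how such templates compose computational and data pipelines; they are not Tigres calls.

```python
from concurrent.futures import ThreadPoolExecutor

def sequence(tasks, data):
    """Run tasks one after another, feeding each output to the next."""
    for task in tasks:
        data = task(data)
    return data

def parallel(tasks, data):
    """Run independent tasks concurrently on the same input."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda t: t(data), tasks))

def split_merge(split_task, branch_tasks, merge_task, data):
    """Split the input, process the branches concurrently, then merge the results."""
    parts = split_task(data)
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda tp: tp[0](tp[1]), zip(branch_tasks, parts)))
    return merge_task(results)

# Toy pipeline: clean -> (two parallel analyses)
clean = lambda xs: [x for x in xs if x is not None]
mean = lambda xs: sum(xs) / len(xs)
spread = lambda xs: max(xs) - min(xs)
print(sequence([clean, lambda xs: parallel([mean, spread], xs)], [3, None, 7, 10]))
```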
Indirect decentralized learning control
NASA Technical Reports Server (NTRS)
Longman, Richard W.; Lee, Soo C.; Phan, M.
1992-01-01
The new field of learning control develops controllers that learn to improve their performance at executing a given task, based on experience performing this specific task. In a previous work, the authors presented a theory of indirect learning control based on use of indirect adaptive control concepts employing simultaneous identification and control. This paper develops improved indirect learning control algorithms, and studies the use of such controllers in decentralized systems. The original motivation of the learning control field was learning in robots doing repetitive tasks such as on an assembly line. This paper starts with decentralized discrete time systems, and progresses to the robot application, modeling the robot as a time varying linear system in the neighborhood of the nominal trajectory, and using the usual robot controllers that are decentralized, treating each link as if it is independent of any coupling with other links. The basic result of the paper is to show that stability of the indirect learning controllers for all subsystems when the coupling between subsystems is turned off, assures convergence to zero tracking error of the decentralized indirect learning control of the coupled system, provided that the sample time in the digital learning controller is sufficiently short.
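The trial-to-trial idea behind learning control can be shown with a far simpler direct update than the indirect (identification-based) scheme of the paper: after each repetition, every joint adjusts its own command from its own tracking error, u_{k+1} = u_k + L e_k, without using the coupling to other joints. The first-order plant, gain, and reference below are illustrative assumptions.

```python
import numpy as np

T, trials = 50, 10
ref = np.sin(np.linspace(0, np.pi, T))       # desired single-joint trajectory (assumed)
a, b, L = 0.9, 0.5, 1.2                      # toy plant parameters and learning gain

def run_trial(u):
    """Simulate one repetition of the task for a single joint."""
    y, x = np.zeros(T), 0.0
    for t in range(T):
        x = a * x + b * u[t]
        y[t] = x
    return y

u = np.zeros(T)
for k in range(trials):
    e = ref - run_trial(u)
    u = u + L * e                            # decentralized, per-joint learning update
    print(f"trial {k}: RMS tracking error = {np.sqrt(np.mean(e**2)):.4f}")
```

Running the loop shows the tracking error shrinking from one repetition to the next, which is the behavior the learning-control convergence results formalize.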
Arredondo, A; Parada, I
2001-01-01
This article presents the results from an evaluative longitudinal study with before-after design. The main objective was to determine the effects of health care decentralization on changes in health financing. Taking into account feasibility, political and technical criteria, three Latin American countries were selected as study populations: Mexico, Nicaragua and Peru. The methodology had two main phases. In the first phase, the study referred to secondary sources of data and documents to obtain information about the following variables: type of decentralization implemented, source of finance, funds of financing, providers, final use of resources and mechanisms for resource allocation. In the second phase, the study referred to primary data collected in a survey of key personnel from the health sectors of each country. Taking into account the changes implemented in the three countries, as well as the strengths and weaknesses of each country in financing and decentralization, a rule for decision-making is proposed that attempts to identify the main financial changes implemented in each country and the basic indicators that can be used in future years to direct the planning, assessment, adjustment and correction of health financing and decentralization.
Towards a Decentralized Magnetic Indoor Positioning System
Kasmi, Zakaria; Norrdine, Abdelmoumen; Blankenbach, Jörg
2015-01-01
Decentralized magnetic indoor localization is a sophisticated method for processing sampled magnetic data directly on a mobile station (MS), thereby decreasing or even avoiding the need for communication with the base station. In contrast to central-oriented positioning systems, which transmit raw data to a base station, decentralized indoor localization pushes application-level knowledge into the MS. A decentralized position solution thus has strong potential to increase energy efficiency and to prolong the lifetime of the MS. In this article, we present a complete architecture and an implementation for a decentralized positioning system. Furthermore, we introduce a technique for the synchronization of the observed magnetic field on the MS with the artificially-generated magnetic field from the coils. Based on real-time clocks (RTCs) and a preemptive operating system, this method allows a stand-alone control of the coils and a proper assignment of the measured magnetic fields on the MS. Stand-alone control and synchronization of the coils and the MS offer exceptional potential for implementing a positioning system without the need for wired or wireless communication, and enable the deployment of applications for rescue scenarios, such as the localization of miners or firefighters. PMID:26690145
Towards a Decentralized Magnetic Indoor Positioning System.
Kasmi, Zakaria; Norrdine, Abdelmoumen; Blankenbach, Jörg
2015-12-04
Decentralized magnetic indoor localization is a sophisticated method for processing sampled magnetic data directly on a mobile station (MS), thereby decreasing or even avoiding the need for communication with the base station. In contrast to central-oriented positioning systems, which transmit raw data to a base station, decentralized indoor localization pushes application-level knowledge into the MS. A decentralized position solution thus has strong potential to increase energy efficiency and to prolong the lifetime of the MS. In this article, we present a complete architecture and an implementation for a decentralized positioning system. Furthermore, we introduce a technique for the synchronization of the observed magnetic field on the MS with the artificially-generated magnetic field from the coils. Based on real-time clocks (RTCs) and a preemptive operating system, this method allows a stand-alone control of the coils and a proper assignment of the measured magnetic fields on the MS. Stand-alone control and synchronization of the coils and the MS offer exceptional potential for implementing a positioning system without the need for wired or wireless communication, and enable the deployment of applications for rescue scenarios, such as the localization of miners or firefighters.
Standardizing clinical trials workflow representation in UML for international site comparison.
de Carvalho, Elias Cesar Araujo; Jayanti, Madhav Kishore; Batilana, Adelia Portero; Kozan, Andreia M O; Rodrigues, Maria J; Shah, Jatin; Loures, Marco R; Patil, Sunita; Payne, Philip; Pietrobon, Ricardo
2010-11-09
With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows.
Standardizing Clinical Trials Workflow Representation in UML for International Site Comparison
de Carvalho, Elias Cesar Araujo; Jayanti, Madhav Kishore; Batilana, Adelia Portero; Kozan, Andreia M. O.; Rodrigues, Maria J.; Shah, Jatin; Loures, Marco R.; Patil, Sunita; Payne, Philip; Pietrobon, Ricardo
2010-01-01
Background With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. Methods Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. Results Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. Conclusions This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows. PMID:21085484
32 CFR Appendix F to Part 505 - Example of a System of Records Notice
Code of Federal Regulations, 2012 CFR
2012-07-01
...) System Location: Specify the address of the primary system and any decentralized elements, including... title and duty address of the system manager. For decentralized systems, show the locations, the...
32 CFR Appendix F to Part 505 - Example of a System of Records Notice
Code of Federal Regulations, 2014 CFR
2014-07-01
...) System Location: Specify the address of the primary system and any decentralized elements, including... title and duty address of the system manager. For decentralized systems, show the locations, the...
Siewert, Bettina; Brook, Olga R; Hochman, Mary; Eisenberg, Ronald L
2016-03-01
The purpose of this study is to analyze the impact of communication errors on patient care, customer satisfaction, and work-flow efficiency and to identify opportunities for quality improvement. We performed a search of our quality assurance database for communication errors submitted from August 1, 2004, through December 31, 2014. Cases were analyzed regarding the step in the imaging process at which the error occurred (i.e., ordering, scheduling, performance of examination, study interpretation, or result communication). The impact on patient care was graded on a 5-point scale from none (0) to catastrophic (4). The severity of impact between errors in result communication and those that occurred at all other steps was compared. Error evaluation was performed independently by two board-certified radiologists. Statistical analysis was performed using the chi-square test and kappa statistics. Three hundred eighty of 422 cases were included in the study. One hundred ninety-nine of the 380 communication errors (52.4%) occurred at steps other than result communication, including ordering (13.9%; n = 53), scheduling (4.7%; n = 18), performance of examination (30.0%; n = 114), and study interpretation (3.7%; n = 14). Result communication was the single most common step, accounting for 47.6% (181/380) of errors. There was no statistically significant difference in impact severity between errors that occurred during result communication and those that occurred at other times (p = 0.29). In 37.9% of cases (144/380), there was an impact on patient care, including 21 minor impacts (5.5%; result communication, n = 13; all other steps, n = 8), 34 moderate impacts (8.9%; result communication, n = 12; all other steps, n = 22), and 89 major impacts (23.4%; result communication, n = 45; all other steps, n = 44). In 62.1% (236/380) of cases, no impact was noted, but 52.6% (200/380) of cases had the potential for an impact. Among 380 communication errors in a radiology department, 37.9% had a direct impact on patient care, with an additional 52.6% having a potential impact. Most communication errors (52.4%) occurred at steps other than result communication, with similar severity of impact.
On decentralized control of large-scale systems
NASA Technical Reports Server (NTRS)
Siljak, D. D.
1978-01-01
A scheme is presented for decentralized control of large-scale linear systems which are composed of a number of interconnected subsystems. By ignoring the interconnections, local feedback controls are chosen to optimize each decoupled subsystem. Conditions are provided to establish compatibility of the individual local controllers and achieve stability of the overall system. Besides computational simplifications, the scheme is attractive because of its structural features and the fact that it produces a robust decentralized regulator for large dynamic systems, which can tolerate a wide range of nonlinearities and perturbations among the subsystems.
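The toy sketch below illustrates the first step of such a scheme: design a local LQR for each decoupled subsystem while ignoring the interconnections, then examine the eigenvalues of the coupled closed-loop system under the resulting block-diagonal (decentralized) feedback. The subsystem models, weights, and coupling term are illustrative assumptions, not the paper's compatibility conditions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, block_diag

def local_lqr(A, B, Q, R):
    """LQR gain for one decoupled subsystem: K = R^{-1} B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Two decoupled second-order subsystems (assumed models).
A1 = np.array([[0.0, 1.0], [-1.0, -0.2]]); B1 = np.array([[0.0], [1.0]])
A2 = np.array([[0.0, 1.0], [-4.0, -0.1]]); B2 = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

K1, K2 = local_lqr(A1, B1, Q, R), local_lqr(A2, B2, Q, R)

# Coupled system: block-diagonal local dynamics plus a weak interconnection.
A = block_diag(A1, A2)
A[1, 2] = A[3, 0] = 0.1                      # assumed coupling between the subsystems
B = block_diag(B1, B2)
K = block_diag(K1, K2)                       # decentralized (block-diagonal) feedback
print("coupled closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```

For weak enough coupling the closed-loop eigenvalues stay in the left half-plane, which is the kind of robustness the scheme's compatibility conditions are meant to guarantee in general.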
Two controller design approaches for decentralized systems
NASA Technical Reports Server (NTRS)
Ozguner, U.; Khorrami, F.; Iftar, A.
1988-01-01
Two different philosophies for designing the controllers of decentralized systems are considered within a quadratic regulator framework which is generalized to admit decentralized frequency weighting. In the first approach, the total system model is examined, and the feedback strategy for each channel or subsystem is determined. In the second approach, separate, possibly overlapping, and uncoupled models are analyzed for each channel, and the results can be combined to study the original system. The two methods are applied to the example of a model of the NASA COFS Mast Flight System.
Decentralized control of the COFS-I Mast using linear dc motors
NASA Technical Reports Server (NTRS)
Lindner, Douglas K.; Celano, Tom; Ide, Eric
1989-01-01
Consideration is given to a decentralized control design for vibration suppression in the COFS-I Mast using linear dc motors as actuators. The decentralized control design is based on results from power systems, using root locus techniques that are not well known. The approach is effective because the loop gain is low due to low actuator authority. The frequency-dependent nonlinearities of the actuator are taken into account. Because of the tendency of the transients to saturate the stroke length of the actuator, its effectiveness is limited.
Quantitative workflow based on NN for weighting criteria in landfill suitability mapping
NASA Astrophysics Data System (ADS)
Abujayyab, Sohaib K. M.; Ahamad, Mohd Sanusi S.; Yahya, Ahmad Shukri; Ahmad, Siti Zubaidah; Alkhasawneh, Mutasem Sh.; Aziz, Hamidi Abdul
2017-10-01
Our study aims to introduce a new quantitative workflow that integrates neural networks (NNs) and multi-criteria decision analysis (MCDA). Existing MCDA workflows reveal a number of drawbacks because of their reliance on human knowledge in the weighting stage. Thus, a new workflow is presented to form suitability maps at the regional scale for solid waste planning based on NNs. A feed-forward neural network is employed in the workflow. A total of 34 criteria were pre-processed to establish the input dataset for NN modelling. The final learned network is used to acquire the weights of the criteria. Accuracies of 95.2% and 93.2% were achieved for the training dataset and testing dataset, respectively. The workflow was found to be capable of reducing human interference to generate highly reliable maps. The proposed workflow demonstrates the applicability of NNs in generating landfill suitability maps and the feasibility of integrating them with existing MCDA workflows.
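The sketch below shows one way criterion weights can be derived from a trained feed-forward network. The training data are synthetic, and the weight-extraction rule (a Garson-style aggregation of connection-weight magnitudes) is one common choice assumed here for illustration; it is not necessarily the aggregation used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_criteria = 500, 6
X = rng.random((n_samples, n_criteria))                  # stand-in criterion layers
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 4] > 0.55).astype(int)  # synthetic suitability labels

net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)

# Garson-style aggregation: each input's share of every hidden unit,
# weighted by that hidden unit's connection to the output.
W_in, W_out = np.abs(net.coefs_[0]), np.abs(net.coefs_[1])
share = W_in / W_in.sum(axis=0, keepdims=True)
importance = (share * W_out.ravel()).sum(axis=1)
weights = importance / importance.sum()
print("criterion weights:", np.round(weights, 3))
```

On this synthetic example the largest derived weights land on the criteria that actually generated the labels, which is the sanity check one would also want before feeding such weights into an MCDA overlay.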
Morizane, Yuki; Shiode, Yusuke; Hirano, Masayuki; Doi, Shinichiro; Toshima, Shinji; Fujiwara, Atsushi; Shiraga, Fumio
2017-01-01
Purpose To investigate the tilt and decentration of the crystalline lens and the intraocular lens (IOL) relative to the corneal topographic axis using anterior segment ocular coherence tomography (AS-OCT). Methods A sample set of 100 eyes from 49 subjects (41 eyes with crystalline lenses and 59 eyes with IOLs) were imaged using second generation AS-OCT (CASIA2, TOMEY) in June and July 2016 at Okayama University. Both mydriatic and non-mydriatic images were obtained, and the tilt and decentration of the crystalline lens and the IOL were quantified. The effects of pupil dilation on measurements were also assessed. Results The crystalline lens showed an average tilt of 5.15° towards the inferotemporal direction relative to the corneal topographic axis under non-mydriatic conditions and 5.25° under mydriatic conditions. Additionally, an average decentration of 0.11 mm towards the temporal direction was observed under non-mydriatic conditions and 0.08 mm under mydriatic conditions. The average tilt for the IOL was 4.31° towards the inferotemporal direction relative to the corneal topographic axis under non-mydriatic conditions and 4.65° in the same direction under mydriatic conditions. The average decentration was 0.05 mm towards the temporal direction under non-mydriatic conditions and 0.08 mm in the same direction under mydriatic conditions. A strong correlation was found between the average tilt and decentration values of the crystalline lens and the IOL under both non-mydriatic and mydriatic conditions (all Spearman correlation coefficients, r ≥ 0.800; all P < 0.001). Conclusion When measured using second generation AS-OCT, both the crystalline lens and the IOL showed an average tilt of 4–6° toward the inferotemporal direction relative to the corneal topographic axis and an average decentration of less than 0.12 mm towards the temporal direction. These results were not influenced by pupil dilation and they showed good repeatability. PMID:28863141
Kimura, Shuhei; Morizane, Yuki; Shiode, Yusuke; Hirano, Masayuki; Doi, Shinichiro; Toshima, Shinji; Fujiwara, Atsushi; Shiraga, Fumio
2017-01-01
To investigate the tilt and decentration of the crystalline lens and the intraocular lens (IOL) relative to the corneal topographic axis using anterior segment ocular coherence tomography (AS-OCT). A sample set of 100 eyes from 49 subjects (41 eyes with crystalline lenses and 59 eyes with IOLs) were imaged using second generation AS-OCT (CASIA2, TOMEY) in June and July 2016 at Okayama University. Both mydriatic and non-mydriatic images were obtained, and the tilt and decentration of the crystalline lens and the IOL were quantified. The effects of pupil dilation on measurements were also assessed. The crystalline lens showed an average tilt of 5.15° towards the inferotemporal direction relative to the corneal topographic axis under non-mydriatic conditions and 5.25° under mydriatic conditions. Additionally, an average decentration of 0.11 mm towards the temporal direction was observed under non-mydriatic conditions and 0.08 mm under mydriatic conditions. The average tilt for the IOL was 4.31° towards the inferotemporal direction relative to the corneal topographic axis under non-mydriatic conditions and 4.65° in the same direction under mydriatic conditions. The average decentration was 0.05 mm towards the temporal direction under non-mydriatic conditions and 0.08 mm in the same direction under mydriatic conditions. A strong correlation was found between the average tilt and decentration values of the crystalline lens and the IOL under both non-mydriatic and mydriatic conditions (all Spearman correlation coefficients, r ≥ 0.800; all P < 0.001). When measured using second generation AS-OCT, both the crystalline lens and the IOL showed an average tilt of 4-6° toward the inferotemporal direction relative to the corneal topographic axis and an average decentration of less than 0.12 mm towards the temporal direction. These results were not influenced by pupil dilation and they showed good repeatability.
Decentralized Multisensory Information Integration in Neural Systems.
Zhang, Wen-Hao; Chen, Aihua; Rasch, Malte J; Wu, Si
2016-01-13
How multiple sensory cues are integrated in neural circuitry remains a challenge. The common hypothesis is that information integration might be accomplished in a dedicated multisensory integration area receiving feedforward inputs from the modalities. However, recent experimental evidence suggests that it is not a single multisensory brain area, but rather many multisensory brain areas that are simultaneously involved in the integration of information. Why many mutually connected areas should be needed for information integration is puzzling. Here, we investigated theoretically how information integration could be achieved in a distributed fashion within a network of interconnected multisensory areas. Using biologically realistic neural network models, we developed a decentralized information integration system that comprises multiple interconnected integration areas. Studying an example of combining visual and vestibular cues to infer heading direction, we show that such a decentralized system is in good agreement with anatomical evidence and experimental observations. In particular, we show that this decentralized system can integrate information optimally. The decentralized system predicts that optimally integrated information should emerge locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas. To extract information reliably from ambiguous environments, the brain integrates multiple sensory cues, which provide different aspects of information about the same entity of interest. Here, we propose a decentralized architecture for multisensory integration. In such a system, no processor is in the center of the network topology and information integration is achieved in a distributed manner through reciprocally connected local processors. Through studying the inference of heading direction with visual and vestibular cues, we show that the decentralized system can integrate information optimally, with the reciprocal connections between processors determining the extent of cue integration. Our model reproduces known multisensory integration behaviors observed in experiments and sheds new light on our understanding of how information is integrated in the brain.
Decentralized Multisensory Information Integration in Neural Systems
Zhang, Wen-hao; Chen, Aihua
2016-01-01
How multiple sensory cues are integrated in neural circuitry remains a challenge. The common hypothesis is that information integration might be accomplished in a dedicated multisensory integration area receiving feedforward inputs from the modalities. However, recent experimental evidence suggests that it is not a single multisensory brain area, but rather many multisensory brain areas that are simultaneously involved in the integration of information. Why many mutually connected areas should be needed for information integration is puzzling. Here, we investigated theoretically how information integration could be achieved in a distributed fashion within a network of interconnected multisensory areas. Using biologically realistic neural network models, we developed a decentralized information integration system that comprises multiple interconnected integration areas. Studying an example of combining visual and vestibular cues to infer heading direction, we show that such a decentralized system is in good agreement with anatomical evidence and experimental observations. In particular, we show that this decentralized system can integrate information optimally. The decentralized system predicts that optimally integrated information should emerge locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas. SIGNIFICANCE STATEMENT To extract information reliably from ambiguous environments, the brain integrates multiple sensory cues, which provide different aspects of information about the same entity of interest. Here, we propose a decentralized architecture for multisensory integration. In such a system, no processor is in the center of the network topology and information integration is achieved in a distributed manner through reciprocally connected local processors. Through studying the inference of heading direction with visual and vestibular cues, we show that the decentralized system can integrate information optimally, with the reciprocal connections between processors determining the extent of cue integration. Our model reproduces known multisensory integration behaviors observed in experiments and sheds new light on our understanding of how information is integrated in the brain. PMID:26758843
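As a rough illustration of the principle summarized above, the sketch below shows how two reciprocally connected local processors, each seeing only its own noisy cue, can converge on the same reliability-weighted (statistically optimal) estimate by exchanging messages. The two-area setup, the cue precisions, and the mixing matrix are illustrative assumptions; this is not the biologically realistic network model used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    true_heading = 30.0
    # noisy visual and vestibular cues with different reliabilities (assumed values)
    cues = np.array([true_heading + rng.normal(0, 2.0),
                     true_heading + rng.normal(0, 5.0)])
    prec = np.array([1 / 2.0**2, 1 / 5.0**2])   # precisions = 1 / variance

    # each "area" starts from its own cue statistics only
    num = prec * cues          # local precision-weighted cue
    den = prec.copy()          # local precision

    # reciprocal message passing: repeated averaging between the two areas
    W = np.array([[0.5, 0.5],
                  [0.5, 0.5]])                  # doubly stochastic mixing matrix
    for _ in range(20):
        num, den = W @ num, W @ den

    local_estimates = num / den                 # each area's local readout
    optimal = (prec @ cues) / prec.sum()        # centralized Bayesian benchmark
    print(local_estimates, optimal)             # local readouts match the optimum

Both areas end up holding the estimate a centralized, feedforward integrator would compute, which is the qualitative point the abstract makes about optimal integration emerging from inter-area communication.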
McGuire, Megan; Pinoges, Loretxu; Kanapathipillai, Rupa; Munyenyembe, Tamika; Huckabee, Martha; Makombe, Simon; Szumilin, Elisabeth; Heinzelmann, Annette; Pujades-Rodríguez, Mar
2012-01-01
Objective To describe patient combination antiretroviral therapy (cART) outcomes associated with intensive decentralization of services in a rural HIV program in Malawi. Methods Longitudinal analysis of data from HIV-infected patients starting cART between August 2001 and December 2008 and of a cross-sectional immunovirological assessment conducted 12 (±2) months after therapy start. One-year mortality, loss to follow-up, and attrition (deaths and losses to follow-up) rates were estimated with exact Poisson 95% confidence intervals (CI) by type of care delivery and year of initiation. Association of virological suppression (<50 copies/mL) and immunological success (CD4 gain ≥100 cells/µL) with type of care was investigated using multiple logistic regression. Results During the study period, 4322 cART patients received centralized care and 11,090 decentralized care. At therapy start, patients treated in decentralized health facilities had higher median CD4 count levels (167 vs. 130 cell/µL, P<0.0001) than other patients. Two years after cART start, program attrition was lower in decentralized than centralized facilities (9.9 per 100 person-years, 95% CI: 9.5–10.4 vs. 20.8 per 100 person-years, 95% CI: 19.7–22.0). One year after treatment start, differences in immunological success (adjusted OR = 1.23, 95% CI: 0.83–1.83) and viral suppression (adjusted OR = 0.80, 95% CI: 0.56–1.14) between patients followed at centralized and decentralized facilities were not statistically significant. Conclusions In rural Malawi, 1- and 2-year program attrition was lower in decentralized than in centralized health facilities and no statistically significant differences in one-year immunovirological outcomes were observed between the two health care levels. Longer follow-up is needed to confirm these results. PMID:23077473
Agaba, Patricia A; Genberg, Becky L; Sagay, Atiene S; Agbaji, Oche O; Meloni, Seema T; Dadem, Nancin Y; Kolawole, Grace O; Okonkwo, Prosper; Kanki, Phyllis J; Ware, Norma C
2018-01-01
Objective Differentiated care refers collectively to flexible service models designed to meet the differing needs of HIV-infected persons in resource-scarce settings. Decentralization is one such service model. Retention is a key indicator for monitoring the success of HIV treatment and care programs. We used multiple measures to compare retention in a cohort of patients receiving HIV care at “hub” (central) and “spoke” (decentralized) sites in a large public HIV treatment program in north central Nigeria. Methods This retrospective cohort study utilized longitudinal program data representing central and decentralized levels of care in the Plateau State Decentralization Initiative, north central Nigeria. We examined retention with patient- level (retention at fixed times, loss-to-follow-up [LTFU]) and visit-level (gaps-in-care, visit constancy) measures. Regression models with generalized estimating equations (GEE) were used to estimate the effect of decentralization on visit-level measures. Patient-level measures were examined using survival methods with Cox regression models, controlling for baseline variables. Results Of 15,650 patients, 43% were enrolled at the hub. Median time in care was 3.1 years. Hub patients were less likely to be LTFU (adjusted hazard ratio (AHR)=0.91, 95% CI: 0.85-0.97), compared to spoke patients. Visit constancy was lower at the hub (−4.5%, 95% CI: −3.5, −5.5), where gaps in care were also more likely to occur (adjusted odds ratio=1.95, 95% CI: 1.83-2.08). Conclusion Decentralized sites demonstrated better retention outcomes using visit-level measures, while the hub achieved better retention outcomes using patient-level measures. Retention estimates produced by incorporating multiple measures showed substantial variation, confirming the influence of measurement strategies on the results of retention research. Future studies of retention in HIV care in sub-Saharan Africa will be well-served by including multiple measures. PMID:29682399
Agaba, Patricia A; Genberg, Becky L; Sagay, Atiene S; Agbaji, Oche O; Meloni, Seema T; Dadem, Nancin Y; Kolawole, Grace O; Okonkwo, Prosper; Kanki, Phyllis J; Ware, Norma C
2018-01-01
Differentiated care refers collectively to flexible service models designed to meet the differing needs of HIV-infected persons in resource-scarce settings. Decentralization is one such service model. Retention is a key indicator for monitoring the success of HIV treatment and care programs. We used multiple measures to compare retention in a cohort of patients receiving HIV care at "hub" (central) and "spoke" (decentralized) sites in a large public HIV treatment program in north central Nigeria. This retrospective cohort study utilized longitudinal program data representing central and decentralized levels of care in the Plateau State Decentralization Initiative, north central Nigeria. We examined retention with patient- level (retention at fixed times, loss-to-follow-up [LTFU]) and visit-level (gaps-in-care, visit constancy) measures. Regression models with generalized estimating equations (GEE) were used to estimate the effect of decentralization on visit-level measures. Patient-level measures were examined using survival methods with Cox regression models, controlling for baseline variables. Of 15,650 patients, 43% were enrolled at the hub. Median time in care was 3.1 years. Hub patients were less likely to be LTFU (adjusted hazard ratio (AHR)=0.91, 95% CI: 0.85-0.97), compared to spoke patients. Visit constancy was lower at the hub (-4.5%, 95% CI: -3.5, -5.5), where gaps in care were also more likely to occur (adjusted odds ratio=1.95, 95% CI: 1.83-2.08). Decentralized sites demonstrated better retention outcomes using visit-level measures, while the hub achieved better retention outcomes using patient-level measures. Retention estimates produced by incorporating multiple measures showed substantial variation, confirming the influence of measurement strategies on the results of retention research. Future studies of retention in HIV care in sub-Saharan Africa will be well-served by including multiple measures.
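For readers who want to see how such patient-level and visit-level retention measures can be operationalized, the sketch below computes loss to follow-up, gaps in care, and visit constancy from a toy table of visit dates. The 90-day threshold, the closure date, and the column names are assumptions made for illustration, not the definitions used in the study.

    import pandas as pd

    # toy visit records: one row per clinic visit (assumed schema)
    visits = pd.DataFrame({
        "patient_id": [1, 1, 1, 2, 2],
        "visit_date": pd.to_datetime(["2015-01-10", "2015-04-02", "2015-11-20",
                                      "2015-02-01", "2015-03-01"]),
    })
    CLOSE_DATE = pd.Timestamp("2016-01-01")   # administrative censoring date (assumed)
    GAP_DAYS = 90                             # gap/LTFU threshold (assumed)

    def retention_measures(g):
        g = g.sort_values("visit_date")
        gaps = g["visit_date"].diff().dt.days.dropna()
        follow_up_days = (CLOSE_DATE - g["visit_date"].iloc[0]).days
        n_intervals = max(follow_up_days // GAP_DAYS, 1)
        # visit constancy: share of 90-day intervals containing at least one visit
        interval_idx = (g["visit_date"] - g["visit_date"].iloc[0]).dt.days // GAP_DAYS
        return pd.Series({
            "ltfu": (CLOSE_DATE - g["visit_date"].max()).days > GAP_DAYS,  # patient-level
            "gaps_in_care": int((gaps > GAP_DAYS).sum()),                  # visit-level
            "visit_constancy": interval_idx.nunique() / n_intervals,
        })

    print(visits.groupby("patient_id").apply(retention_measures))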
EVALUATION OF ECONOMIC INCENTIVES FOR DECENTRALIZED STORMWATER RUNOFF MANAGEMENT
Impervious surfaces in urban and suburban areas can lead to excess stormwater runoff throughout a watershed, typically resulting in widespread hydrologic and ecological alteration of receiving streams. Decentralized stormwater management may improve stream ecosystems by reducing ...
Engaging Social Capital for Decentralized Urban Stormwater Management
Decentralized approaches to urban stormwater management, whereby installations of green infrastructure (e.g., rain gardens, bioswales, and constructed wetlands) are dispersed throughout a management area, are cost-effective solutions with co-benefits beyond water abatement. Inste...
Scientific Data Management (SDM) Center for Enabling Technologies. Final Report, 2007-2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ludascher, Bertram; Altintas, Ilkay
Our contributions to advancing the State of the Art in scientific workflows have focused on the following areas: Workflow development; Generic workflow components and templates; Provenance collection and analysis; and, Workflow reliability and fault tolerance.
A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus
NASA Astrophysics Data System (ADS)
Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir
2016-07-01
This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
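A minimal numerical sketch of the underlying idea, a decentralized power iteration in which the only global step is an average-consensus exchange, is given below. It is not the paper's GPM and omits the tree topology and CoMAC acceleration; the network size, data, and complete-graph mixing matrix are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    M, d = 8, 4                                  # number of sensing nodes, signal dimension
    A = rng.normal(size=(d, d))
    C_true = A @ A.T                             # "true" covariance (toy data)
    local_C = [C_true + 0.1 * rng.normal(size=(d, d)) for _ in range(M)]
    local_C = [(Ci + Ci.T) / 2 for Ci in local_C]   # keep local estimates symmetric

    W = np.full((M, M), 1.0 / M)                 # complete-graph consensus weights (assumed)

    def average_consensus(rows, iters=30):
        """Every node ends up holding (approximately) the network-wide average."""
        x = np.array(rows)
        for _ in range(iters):
            x = W @ x
        return x

    x = rng.normal(size=d)                       # shared initial vector
    for _ in range(30):                          # decentralized power iteration
        local_products = [Ci @ x for Ci in local_C]
        x = average_consensus(local_products)[0]   # consensus replaces the global sum
        x /= np.linalg.norm(x)

    C_avg = sum(local_C) / M
    print(x @ C_avg @ x, np.linalg.eigvalsh(C_avg)[-1])   # estimate vs. reference eigenvalue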
Polya's bees: A model of decentralized decision-making.
Golman, Russell; Hagmann, David; Miller, John H
2015-09-01
How do social systems make decisions with no single individual in control? We observe that a variety of natural systems, including colonies of ants and bees and perhaps even neurons in the human brain, make decentralized decisions using common processes involving information search with positive feedback and consensus choice through quorum sensing. We model this process with an urn scheme that runs until hitting a threshold, and we characterize an inherent tradeoff between the speed and the accuracy of a decision. The proposed common mechanism provides a robust and effective means by which a decentralized system can navigate the speed-accuracy tradeoff and make reasonably good, quick decisions in a variety of environments. Additionally, consensus choice exhibits systemic risk aversion even while individuals are idiosyncratically risk-neutral. This too is adaptive. The model illustrates how natural systems make decentralized decisions, illuminating a mechanism that engineers of social and artificial systems could imitate.
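The urn scheme lends itself to a few lines of simulation. The sketch below uses a simplified positive-feedback urn with an assumed evidence bias and a stopping threshold to reproduce the qualitative speed-accuracy tradeoff; the update rule and parameter values are illustrative, not the exact model in the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    def urn_decision(bias=0.55, threshold=50):
        """Each draw adds a ball of the drawn colour (positive feedback); `bias` tilts
        the external evidence toward option A; stop when one colour leads by `threshold`."""
        a = b = 1
        steps = 0
        while abs(a - b) < threshold:
            steps += 1
            p_a = 0.5 * a / (a + b) + 0.5 * bias   # consensus term + evidence term
            if rng.random() < p_a:
                a += 1
            else:
                b += 1
        return ("A" if a > b else "B"), steps

    for t in (10, 50, 200):          # larger threshold: slower but more accurate decisions
        runs = [urn_decision(threshold=t) for _ in range(200)]
        accuracy = np.mean([choice == "A" for choice, _ in runs])
        mean_steps = np.mean([steps for _, steps in runs])
        print(f"threshold={t}: accuracy={accuracy:.2f}, mean steps={mean_steps:.0f}")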
Hamed, Kaveh Akbari; Gregg, Robert D
2017-07-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially and robustly stabilize periodic orbits for hybrid dynamical systems against possible uncertainties in discrete-time phases. The algorithm assumes a family of parameterized and decentralized nonlinear controllers to coordinate interconnected hybrid subsystems based on a common phasing variable. The exponential and H2 robust stabilization problems of periodic orbits are translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities. By investigating the properties of the Poincaré map, some sufficient conditions for the convergence of the iterative algorithm are presented. The power of the algorithm is finally demonstrated through designing a set of robust stabilizing local nonlinear controllers for walking of an underactuated 3D autonomous bipedal robot with 9 degrees of freedom, impact model uncertainties, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.
Decentralized and Tactical Air Traffic Flow Management
NASA Technical Reports Server (NTRS)
Odoni, Amedeo R.; Bertsimas, Dimitris
1997-01-01
This project dealt with the following topics: 1. Review and description of the existing air traffic flow management system (ATFM) and identification of aspects with potential for improvement. 2. Identification and review of existing models and simulations dealing with all system segments (enroute, terminal area, ground) 3. Formulation of concepts for overall decentralization of the ATFM system, ranging from moderate decentralization to full decentralization 4. Specification of the modifications to the ATFM system required to accommodate each of the alternative concepts. 5. Identification of issues that need to be addressed with regard to: determination of the way the ATFM system would be operating; types of flow management strategies that would be used; and estimation of the effectiveness of ATFM with regard to reducing delay and re-routing costs. 6. Concept evaluation through identification of criteria and methodologies for accommodating the interests of stakeholders and of approaches to optimization of operational procedures for all segments of the ATFM system.
Decentralized control of sound radiation using iterative loop recovery.
Schiller, Noah H; Cabell, Randolph H; Fuller, Chris R
2010-10-01
A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.
Decentralized Control of Sound Radiation Using Iterative Loop Recovery
NASA Technical Reports Server (NTRS)
Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.
2009-01-01
A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.
Hamed, Kaveh Akbari; Gregg, Robert D.
2016-01-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially and robustly stabilize periodic orbits for hybrid dynamical systems against possible uncertainties in discrete-time phases. The algorithm assumes a family of parameterized and decentralized nonlinear controllers to coordinate interconnected hybrid subsystems based on a common phasing variable. The exponential and H2 robust stabilization problems of periodic orbits are translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities. By investigating the properties of the Poincaré map, some sufficient conditions for the convergence of the iterative algorithm are presented. The power of the algorithm is finally demonstrated through designing a set of robust stabilizing local nonlinear controllers for walking of an underactuated 3D autonomous bipedal robot with 9 degrees of freedom, impact model uncertainties, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:28959117
Green, A; Ali, B; Naeem, A; Ross, D
2000-01-01
This paper identifies key political and technical issues involved in the development of an appropriate resource allocation and budgetary system for the public health sector, using experience gained in the Province of Balochistan, Pakistan. The resource allocation and budgetary system is a critical, yet often neglected, component of any decentralization policy. Current systems are often based on historical incrementalism that is neither efficient nor equitable. This article describes technical work carried out in Balochistan to develop a system of resource allocation and budgeting that is needs-based, in line with policies of decentralization, and implementable within existing technical constraints. However, the development of technical systems, while necessary, is not a sufficient condition for the implementation of a resource allocation and decentralized budgeting system. This is illustrated by analysing the constraints that have been encountered in the development of such a system in Balochistan.
Polya’s bees: A model of decentralized decision-making
Golman, Russell; Hagmann, David; Miller, John H.
2015-01-01
How do social systems make decisions with no single individual in control? We observe that a variety of natural systems, including colonies of ants and bees and perhaps even neurons in the human brain, make decentralized decisions using common processes involving information search with positive feedback and consensus choice through quorum sensing. We model this process with an urn scheme that runs until hitting a threshold, and we characterize an inherent tradeoff between the speed and the accuracy of a decision. The proposed common mechanism provides a robust and effective means by which a decentralized system can navigate the speed-accuracy tradeoff and make reasonably good, quick decisions in a variety of environments. Additionally, consensus choice exhibits systemic risk aversion even while individuals are idiosyncratically risk-neutral. This too is adaptive. The model illustrates how natural systems make decentralized decisions, illuminating a mechanism that engineers of social and artificial systems could imitate. PMID:26601255
Green, A.; Ali, B.; Naeem, A.; Ross, D.
2000-01-01
This paper identifies key political and technical issues involved in the development of an appropriate resource allocation and budgetary system for the public health sector, using experience gained in the Province of Balochistan, Pakistan. The resource allocation and budgetary system is a critical, yet often neglected, component of any decentralization policy. Current systems are often based on historical incrementalism that is neither efficient nor equitable. This article describes technical work carried out in Balochistan to develop a system of resource allocation and budgeting that is needs-based, in line with policies of decentralization, and implementable within existing technical constraints. However, the development of technical systems, while necessary, is not a sufficient condition for the implementation of a resource allocation and decentralized budgeting system. This is illustrated by analysing the constraints that have been encountered in the development of such a system in Balochistan. PMID:10994286
Walsh, Kristin E.; Chui, Michelle Anne; Kieser, Mara A.; Williams, Staci M.; Sutter, Susan L.; Sutter, John G.
2012-01-01
Objective To explore community pharmacy technician workflow change after implementation of an automated robotic prescription-filling device. Methods At an independent community pharmacy in rural Mayville, WI, pharmacy technicians were observed before and 3 months after installation of an automated robotic prescription-filling device. The main outcome measures were sequences and timing of technician workflow steps, workflow interruptions, automation surprises, and workarounds. Results Of the 77 and 80 observations made before and 3 months after robot installation, respectively, 17 different workflow sequences were observed before installation and 38 after installation. Average prescription filling time was reduced by 40 seconds per prescription with use of the robot. Workflow interruptions per observation increased from 1.49 to 1.79 (P = 0.11), and workarounds increased from 10% to 36% after robot use. Conclusion Although automated prescription-filling devices can increase efficiency, workflow interruptions and workarounds may negate that efficiency. Assessing changes in workflow and sequencing of tasks that may result from the use of automation can help uncover opportunities for workflow policy and procedure redesign. PMID:21896459
NASA Astrophysics Data System (ADS)
Pan, Tianheng
2018-01-01
In recent years, the combination of workflow management systems and multi-agent technology has become an active research field. The lack of flexibility in workflow management systems can be mitigated by introducing multi-agent collaborative management. The workflow management system described here adopts a distributed structure, which avoids the fragility of the traditional centralized workflow architecture. In this paper, the agents of the distributed workflow management system are divided according to their functions, the execution process of each type of agent is analyzed, and key technologies such as process execution and resource management are discussed.
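To make the division of agents by function concrete, the sketch below shows one plausible decomposition: a process agent that drives execution, a resource agent that grants resources, and a task agent that performs work. The roles, class names, and retry policy are assumptions made for illustration rather than the paper's design.

    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class Task:
        name: str
        resource: str

    class ResourceAgent:
        """Functional agent responsible for resource management."""
        def __init__(self, available):
            self.available = set(available)
        def acquire(self, resource):
            if resource in self.available:
                self.available.remove(resource)
                return True
            return False
        def release(self, resource):
            self.available.add(resource)

    class TaskAgent:
        """Functional agent that executes a single task once its resource is granted."""
        def run(self, task):
            print(f"executing {task.name} on {task.resource}")

    class ProcessAgent:
        """Functional agent that drives process execution in a distributed workflow."""
        def __init__(self, resource_agent, task_agent):
            self.resources, self.worker = resource_agent, task_agent
        def execute(self, tasks):
            pending = Queue()
            for t in tasks:
                pending.put(t)
            while not pending.empty():
                task = pending.get()
                if self.resources.acquire(task.resource):
                    self.worker.run(task)
                    self.resources.release(task.resource)
                else:
                    pending.put(task)   # retry later rather than failing the whole flow

    process = ProcessAgent(ResourceAgent({"scanner", "printer"}), TaskAgent())
    process.execute([Task("register order", "scanner"), Task("print report", "printer")])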
NASA Astrophysics Data System (ADS)
McCarthy, Ann
2006-01-01
The ICC Workflow WG serves as the bridge between ICC color management technologies and use of those technologies in real world color production applications. ICC color management is applicable to and is used in a wide range of color systems, from highly specialized digital cinema color special effects to high volume publications printing to home photography. The ICC Workflow WG works to align ICC technologies so that the color management needs of these diverse use case systems are addressed in an open, platform independent manner. This report provides a high level summary of the ICC Workflow WG objectives and work to date, focusing on the ways in which workflow can impact image quality and color systems performance. The 'ICC Workflow Primitives' and 'ICC Workflow Patterns and Dimensions' workflow models are covered in some detail. Consider the questions, "How much of dissatisfaction with color management today is the result of 'the wrong color transformation at the wrong time' and 'I can't get to the right conversion at the right point in my work process'?" Put another way, consider how image quality through a workflow can be negatively affected when the coordination and control level of the color management system is not sufficient.
Parker, Pete; Thapa, Brijesh; Jacob, Aerin
2015-12-01
To alleviate poverty and enhance conservation in resource dependent communities, managers must identify existing livelihood strategies and the associated factors that impede household access to livelihood assets. Researchers increasingly advocate reallocating management power from exclusionary central institutions to a decentralized system of management based on local and inclusive participation. However, it is yet to be shown if decentralizing conservation leads to diversified livelihoods within a protected area. The purpose of this study was to identify and assess factors affecting household livelihood diversification within Nepal's Kanchenjunga Conservation Area Project, the first protected area in Asia to decentralize conservation. We randomly surveyed 25% of Kanchenjunga households to assess household socioeconomic and demographic characteristics and access to livelihood assets. We used a cluster analysis with the ten most common income generating activities (both on- and off-farm) to group the strategies households use to diversify livelihoods, and a multinomial logistic regression to identify predictors of livelihood diversification. We found four distinct groups of household livelihood strategies with a range of diversification that directly corresponded to household income. The predictors of livelihood diversification were more related to pre-existing socioeconomic and demographic factors (e.g., more landholdings and livestock, fewer dependents, receiving remittances) than activities sponsored by decentralizing conservation (e.g., microcredit, training, education, interaction with project staff). Taken together, our findings indicate that without direct policies to target marginalized groups, decentralized conservation in Kanchenjunga will continue to exclude marginalized groups, limiting a household's ability to diversify their livelihood and perpetuating their dependence on natural resources. Copyright © 2015 Elsevier Ltd. All rights reserved.
Decentralization and health resource allocation: a case study at the district level in Indonesia.
Abdullah, Asnawi; Stoelwinder, Johannes
2008-01-01
Health resource allocation has been an issue of political debate in many health systems. However, the debate has tended to concentrate on vertical allocation from the national to regional level. Allocation within regions or institutions has been largely ignored. This study was conducted to contribute analysis to this gap. The objective was to investigate health resource allocation within District Health Offices (DHOs) and to compare the trends and patterns of several budget categories before and after decentralization. The study was conducted in three districts in the Province of Nanggroe Aceh Darussalam. Six fiscal year budgets, two before decentralization and four after, were studied. Data were collected from the Local Government Planning Office and DHOs. Results indicated that in the first year of implementing a decentralization policy, the local government budget rose sharply, particularly in the wealthiest district. In contrast, in relatively poor districts the budget was only boosted slightly. Increasing total local government budgets had a positive impact on increasing the health budget. The absolute amount of the health budget increased significantly, but its percentage share changed very little. Budgets for several projects and budget items increased significantly, but others, such as health promotion, monitoring and evaluation, and public-goods-related activities, decreased. This study concluded that decentralization in Indonesia had made a positive impact on district government fiscal capacity and had affected DHO budgets positively. However, an imbalanced budget allocation between projects and budget items was obvious, and this needs serious attention from policy makers. Otherwise, decentralization will not significantly improve the health system in Indonesia.
An empirical examination of the impacts of decentralized nursing unit design.
Pati, Debajyoti; Harvey, Thomas E; Redden, Pamela; Summers, Barbara; Pati, Sipra
2015-01-01
The objective of the study was to examine the impact of decentralization on operational efficiency, staff well-being, and teamwork on three inpatient units. Decentralized unit operations and the corresponding physical design solution were hypothesized to positively affect several concerns: productive use of nursing time, staff stress, walking distances, and teamwork, among others. With wide adoption of the concept, empirical evidence on the impact of decentralization was warranted. A multimethod, before-and-after, quasi-experimental design was adopted for the study, focusing on five issues, namely, (1) how nurses spend their time, (2) walking distance, (3) acute stress, (4) productivity, and (5) teamwork. Data on all five issues were collected on three older units with a centralized operational model (before the move). The same set of data, with identical tools and measures, was collected after the units moved into new physical units with a decentralized operational model. Data were collected during spring and fall of 2011. Documentation, nurse station use, medication room use, and supplies room use showed consistent change across the three units. Walking distance increased (statistically significantly) on two of the three units. Self-reported level of collaboration decreased, although assessment of the physical facility for collaboration increased. Decentralized nursing and physical design models potentially result in quality-of-work improvements associated with documentation, medication, and supplies. However, there are unexpected consequences associated with walking, and staff collaboration and teamwork. The solution to the unexpected consequences may lie in operational interventions and greater emphasis on culture change. © The Author(s) 2015.
Chan, Adrienne K; Mateyu, Gabriel; Jahn, Andreas; Schouten, Erik; Arora, Paul; Mlotha, William; Kambanji, Marion; van Lettow, Monique
2010-06-01
To assess the effect of decentralization (DC) of antiretroviral therapy (ART) provision in a rural district of Malawi using an integrated primary care model. Between October 2004 and December 2008, 8093 patients (63% women) were registered for ART. Of these, 3440 (43%) were decentralized to health centres for follow-up ART care. We applied multivariate regression analysis that adjusted for sex, age, clinical stage at initiation, type of regimen, presence of side effects because of ART, and duration of treatment and follow-up at site of analysis. Patients managed at health centres had lower mortality [adjusted OR 0.19 (95% C.I. 0.15-0.25)] and lower loss to follow-up (defaulted from treatment) [adjusted OR 0.48 (95% C.I. 0.40-0.58)]. During the first 10 months of follow-up, those decentralized to health centres were approximately 60% less likely to default than those not decentralized; and after 10 months of follow-up, 40% less likely to default. DC was significantly associated with a reduced risk of death from 0 to 25 months of follow-up. The lower mortality may be explained by the selection of stable patients for DC, and the mentorship and supportive supervision of lower cadre health workers to identify and refer complicated cases. Decentralization of follow-up ART care to rural health facilities, using an integrated primary care model, appears a safe and effective way to rapidly scale-up ART and improves both geographical equity in access to HIV-related services and adherence to ART.
Real, Kevin; Fay, Lindsey; Isaacs, Kathy; Carll-White, Allison; Schadler, Aric
2018-01-01
This study utilizes systems theory to understand how changes to physical design structures impact communication processes and patient and staff design-related outcomes. Many scholars and researchers have noted the importance of communication and teamwork for patient care quality. Few studies have examined changes to nursing station design within a systems theory framework. This study employed a multimethod, before-and-after, quasi-experimental research design. Nurses completed surveys in centralized units and later in decentralized units (N = 26 pre, N = 51 post). Patients completed surveys (N = 62 pre) in centralized units and later in decentralized units (N = 49 post). Surveys included quantitative measures and qualitative open-ended responses. Patients preferred the decentralized units because of larger single-occupancy rooms, greater privacy/confidentiality, and overall satisfaction with design. Nurses had a more complex response. Nurses approved of the patient rooms, unit environment, and noise levels in decentralized units. However, they reported reduced access to support spaces, lower levels of team/mentoring communication, and less satisfaction with design than in centralized units. Qualitative findings supported these results. Nurses were more positive about centralized units and patients were more positive toward decentralized units. The results of this study suggest a need to understand how system components operate in concert. A major contribution of this study is the inclusion of patient satisfaction with design, an important yet overlooked factor in patient satisfaction. Healthcare design researchers and practitioners may consider how changing system interdependencies can lead to unexpected changes to communication processes and system outcomes in complex systems.
2011-01-01
Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines that are distributed pre-packaged with pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105
Radiology information system: a workflow-based approach.
Zhang, Jinyan; Lu, Xudong; Nie, Hongchao; Huang, Zhengxing; van der Aalst, W M P
2009-09-01
Introducing workflow management technology into healthcare appears promising for addressing the problem that current healthcare information systems cannot provide sufficient support for process management, although several challenges still exist. The purpose of this paper is to study a method of developing a workflow-based information system, using the radiology department as a use case. First, a workflow model of a typical radiology process was established. Second, based on the model, the system could be designed and implemented as a group of loosely coupled components. Each component corresponded to one task in the process and could be assembled by the workflow management system. Legacy systems could be treated as special components, which also corresponded to tasks and were integrated by converting non-workflow-aware interfaces into the standard ones. Finally, a workflow dashboard was designed and implemented to provide an integral view of radiology processes. The workflow-based Radiology Information System was deployed in the radiology department of Zhejiang Chinese Medicine Hospital in China. The results showed that it could be adjusted flexibly in response to the needs of changing processes, and that it enhanced process management in the department. It also provides a more workflow-aware integration method compared with other approaches such as IHE-based ones. The workflow-based approach is a new method of developing radiology information systems with more flexibility, more process management functionality, and more workflow-aware integration. The work of this paper is an initial endeavor toward introducing workflow management technology into healthcare.
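A small sketch of the "loosely coupled components assembled by a workflow engine" idea is given below, including a legacy system wrapped behind the standard component interface. The component names and the fake legacy client are hypothetical, and the real system is built on workflow management technology rather than the hand-rolled loop shown here.

    from abc import ABC, abstractmethod

    class WorkflowComponent(ABC):
        """Standard interface every task component exposes to the workflow engine."""
        @abstractmethod
        def execute(self, context: dict) -> dict: ...

    class RegistrationComponent(WorkflowComponent):
        def execute(self, context):
            context["accession_no"] = "A-001"          # placeholder value
            return context

    class LegacyRISAdapter(WorkflowComponent):
        """Wraps a legacy, non-workflow-aware system behind the standard interface."""
        def __init__(self, legacy_client):
            self.legacy = legacy_client
        def execute(self, context):
            context["report"] = self.legacy.fetch_report(context["accession_no"])
            return context

    class Engine:
        """Assembles loosely coupled components into the radiology process."""
        def __init__(self, steps):
            self.steps = steps
        def run(self, context=None):
            context = context or {}
            for step in self.steps:
                context = step.execute(context)
            return context

    class FakeLegacyRIS:                               # stand-in for a real legacy system
        def fetch_report(self, accession_no):
            return f"report for {accession_no}"

    print(Engine([RegistrationComponent(), LegacyRISAdapter(FakeLegacyRIS())]).run())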
The Rhetoric of Decentralization
ERIC Educational Resources Information Center
Ravitch, Diane
1974-01-01
Questions the rationale for and possible consequences of political decentralization of New York City. Suggests that the disadvantages--reduced level of professionalism, increased expense in multiple government operation, "stabilization" of residential segregation, necessity for budget negotiations because of public disclosure of tax…
PARTICIPATORY STORM WATER MANAGEMENT AND SUSTAINABILITY – WHAT ARE THE CONNECTIONS?
Urban stormwater is typically conveyed to centralized infrastructure, and there is great potential for reducing stormwater runoff quantity through decentralization. For areas which are already developed, decentralization of stormwater management involves private property and poss...
Suthar, Amitabh B; Rutherford, George W; Horvath, Tara; Doherty, Meg C; Negussie, Eyerusalem K
2014-03-01
Current service delivery systems do not reach all people in need of antiretroviral therapy (ART). In order to inform the operational and service delivery section of the WHO 2013 consolidated antiretroviral guidelines, our objective was to summarize systematic reviews on integrating ART delivery into maternal, newborn, and child health (MNCH) care settings in countries with generalized epidemics, tuberculosis (TB) treatment settings in which the burden of HIV and TB is high, and settings providing opiate substitution therapy (OST); and decentralizing ART into primary health facilities and communities. A summary of systematic reviews. The reviewers searched PubMed, Embase, PsycINFO, Web of Science, CENTRAL, and the WHO Index Medicus databases. Randomized controlled trials and observational cohort studies were included if they compared ART coverage, retention in HIV care, and/or mortality in MNCH, TB, or OST facilities providing ART with MNCH, TB, or OST facilities providing ART services separately; or primary health facilities or communities providing ART with hospitals providing ART. The reviewers identified 28 studies on integration and decentralization. Antiretroviral therapy integration into MNCH facilities improved ART coverage (relative risk [RR] 1.37, 95% confidence interval [CI] 1.05-1.79) and led to comparable retention in care. ART integration into TB treatment settings improved ART coverage (RR 1.83, 95% CI 1.48-2.23) and led to a nonsignificant reduction in mortality (RR 0.55, 95% CI 0.29-1.05). The limited data on ART integration into OST services indicated comparable rates of ART coverage, retention, and mortality. Partial decentralization into primary health facilities improved retention (RR 1.05, 95% CI 1.01-1.09) and reduced mortality (RR 0.34, 95% CI 0.13-0.87). Full decentralization improved retention (RR 1.12, 95% CI 1.08-1.17) and led to comparable mortality. Community-based ART led to comparable rates of retention and mortality. Integrating ART into MNCH, TB, and OST services was often associated with improvements in ART coverage, and decentralization of ART into primary health facilities and communities was often associated with improved retention. Neither integration nor decentralization was associated with adverse outcomes. These data contributed to recommendations in the WHO 2013 consolidated antiretroviral guidelines to integrate ART delivery into MNCH, TB, and OST services and to decentralize ART.
Overnight shift work: factors contributing to diagnostic discrepancies.
Hanna, Tarek N; Loehfelm, Thomas; Khosa, Faisal; Rohatgi, Saurabh; Johnson, Jamlik-Omari
2016-02-01
The aims of the study are to identify factors contributing to preliminary interpretive discrepancies on overnight radiology resident shifts and to apply these data, in the context of the known literature, to draw parallels to attending overnight shift work schedules. Residents in one university-based training program provided preliminary interpretations of 18,488 overnight (11 pm–8 am) studies at a level 1 trauma center between July 1, 2013 and December 31, 2014. As part of their normal workflow and feedback, attendings scored the reports as major discrepancy, minor discrepancy, agree, and agree (good job). We retrospectively obtained the preliminary interpretation scores for each study. Total relative value units (RVUs) per shift were calculated as an indicator of overnight workload. The dataset was supplemented with information on trainee level, number of consecutive nights on night float, hour, modality, and per-shift RVU. The data were analyzed with proportional logistic regression and Fisher's exact test. There were 233 major discrepancies (1.26 %). Trainee level (senior vs. junior residents; 1.08 vs. 1.38 %; p < 0.05) and modality were significantly associated with performance. Increased workload affected more junior residents' performance, with R3 residents performing significantly worse on busier nights. Hour of the night was not significantly associated with performance, but there was a trend toward best performance at 2 am, with subsequent decreased accuracy throughout the remaining shift hours. Improved performance occurred after the first six night float shifts, presumably as residents acclimated to a night schedule. As overnight shift work schedules increase in popularity for residents and attendings, focused attention to factors impacting interpretive accuracy is warranted.
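As an illustration of the kind of comparison reported above, the snippet below applies Fisher's exact test to a 2x2 table of major discrepancies by trainee level. The counts are invented for demonstration and are not the study's data.

    from scipy.stats import fisher_exact

    # rows: senior residents, junior residents
    # columns: major discrepancy, no major discrepancy (hypothetical counts)
    table = [[60, 5440],
             [173, 12815]]
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")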
Cornett, Alex; Kuziemsky, Craig
2015-01-01
Implementing team based workflows can be complex because of the scope of providers involved and the extent of information exchange and communication that needs to occur. While a workflow may represent the ideal structure of communication that needs to occur, information issues and contextual factors may impact how the workflow is implemented in practice. Understanding these issues will help us better design systems to support team based workflows. In this paper we use a case study of palliative sedation therapy (PST) to model a PST workflow and then use it to identify purposes of communication, information issues and contextual factors that impact them. We then suggest how our findings could inform health information technology (HIT) design to support team based communication workflows.
A Workflow to Improve the Alignment of Prostate Imaging with Whole-mount Histopathology.
Yamamoto, Hidekazu; Nir, Dror; Vyas, Lona; Chang, Richard T; Popert, Rick; Cahill, Declan; Challacombe, Ben; Dasgupta, Prokar; Chandra, Ashish
2014-08-01
Evaluation of prostate imaging tests against whole-mount histology specimens requires accurate alignment between radiologic and histologic data sets. Misalignment results in false-positive and -negative zones as assessed by imaging. We describe a workflow for three-dimensional alignment of prostate imaging data against whole-mount prostatectomy reference specimens and assess its performance against a standard workflow. Ethical approval was granted. Patients underwent motorized transrectal ultrasound (Prostate Histoscanning) to generate a three-dimensional image of the prostate before radical prostatectomy. The test workflow incorporated steps for axial alignment between imaging and histology, size adjustments following formalin fixation, and use of custom-made parallel cutters and digital caliper instruments. The control workflow comprised freehand cutting and assumed homogeneous block thicknesses at the same relative angles between pathology and imaging sections. Thirty radical prostatectomy specimens were histologically and radiologically processed, either by an alignment-optimized workflow (n = 20) or a control workflow (n = 10). The optimized workflow generated tissue blocks of heterogeneous thicknesses but with no significant drifting in the cutting plane. The control workflow resulted in significantly nonparallel blocks, accurately matching only one out of four histology blocks to their respective imaging data. The image-to-histology alignment accuracy was 20% greater in the optimized workflow (P < .0001), with higher sensitivity (85% vs. 69%) and specificity (94% vs. 73%) for margin prediction in a 5 × 5-mm grid analysis. A significantly better alignment was observed in the optimized workflow. Evaluation of prostate imaging biomarkers using whole-mount histology references should include a test-to-reference spatial alignment workflow. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Conceptual-level workflow modeling of scientific experiments using NMR as a case study
Verdi, Kacy K; Ellis, Heidi JC; Gryk, Michael R
2007-01-01
Background Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. Results We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Conclusion Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy experiment. PMID:17263870
FAST: A fully asynchronous and status-tracking pattern for geoprocessing services orchestration
NASA Astrophysics Data System (ADS)
Wu, Huayi; You, Lan; Gui, Zhipeng; Gao, Shuang; Li, Zhenqiang; Yu, Jingmin
2014-09-01
Geoprocessing service orchestration (GSO) provides a unified and flexible way to implement cross-application, long-lived, and multi-step geoprocessing service workflows by coordinating geoprocessing services collaboratively. Usually, geoprocessing services and geoprocessing service workflows are data and/or computing intensive, which may make the execution process of a workflow time-consuming. Since it initiates an execution request without blocking other interactions on the client side, an asynchronous mechanism is especially appropriate for GSO workflows. Many critical problems remain to be solved in existing asynchronous patterns for GSO, including difficulties in improving performance, tracking status, and clarifying the workflow structure. These problems make it challenging to orchestrate workflows efficiently, to make statuses instantly available, and to construct clearly structured GSO workflows. A Fully Asynchronous and Status-Tracking (FAST) pattern that adopts asynchronous interactions throughout the whole communication tier of a workflow is proposed for GSO. The proposed FAST pattern includes a mechanism that actively pushes the latest status to clients instantly and economically. An independent proxy was designed to isolate the status-tracking logic from the geoprocessing business logic, which helps form a clear GSO workflow structure. A workflow was implemented in the FAST pattern to simulate the flooding process in the Poyang Lake region. Experimental results show that the proposed FAST pattern can efficiently tackle data/computing-intensive geoprocessing tasks. The performance of all collaborative partners was improved due to the asynchronous mechanism throughout the communication tier. The status-tracking mechanism helps users retrieve the latest running status of a GSO workflow in an efficient and instant way. The clear structure of the GSO workflow lowers the barriers for geospatial domain experts and model designers to compose asynchronous GSO workflows. Most importantly, it provides better support for locating and diagnosing potential exceptions.
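The essence of the FAST pattern, clients submitting work without blocking while an independent proxy pushes every status change back to them, can be sketched with asyncio as below. The class and task names are illustrative assumptions; the actual pattern targets Web-based geoprocessing services rather than in-process coroutines.

    import asyncio

    class StatusProxy:
        """Independent proxy that isolates status tracking and pushes updates to clients."""
        def __init__(self):
            self.subscribers = []
        def subscribe(self, callback):
            self.subscribers.append(callback)
        async def push(self, task_id, status):
            for cb in self.subscribers:
                cb(task_id, status)

    async def geoprocess(task_id, proxy, seconds):
        """Stand-in for a long-running, data/computing-intensive geoprocessing step."""
        await proxy.push(task_id, "running")
        await asyncio.sleep(seconds)               # simulated work
        await proxy.push(task_id, "succeeded")

    async def main():
        proxy = StatusProxy()
        proxy.subscribe(lambda tid, st: print(f"client sees {tid}: {st}"))
        # the client submits both tasks and is immediately free; statuses are pushed later
        await asyncio.gather(geoprocess("flood-simulation", proxy, 0.2),
                             geoprocess("mosaic", proxy, 0.1))

    asyncio.run(main())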
Conceptual-level workflow modeling of scientific experiments using NMR as a case study.
Verdi, Kacy K; Ellis, Heidi Jc; Gryk, Michael R
2007-01-30
Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy experiment.
The future of scientific workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Peterka, Tom; Altintas, Ilkay
Today’s computational, experimental, and observational sciences rely on computations that involve many related tasks. The success of a scientific mission often hinges on the computer automation of these workflows. In April 2015, the US Department of Energy (DOE) invited a diverse group of domain and computer scientists from national laboratories supported by the Office of Science, the National Nuclear Security Administration, from industry, and from academia to review the workflow requirements of DOE’s science and national security missions, to assess the current state of the art in science workflows, to understand the impact of emerging extreme-scale computing systems on those workflows, and to develop requirements for automated workflow management in future and existing environments. This article is a summary of the opinions of over 50 leading researchers attending this workshop. We highlight use cases, computing systems, workflow needs and conclude by summarizing the remaining challenges this community sees that inhibit large-scale scientific workflows from becoming a mainstream tool for extreme-scale science.
NASA Astrophysics Data System (ADS)
Wang, Ximing; Martinez, Clarisa; Wang, Jing; Liu, Ye; Liu, Brent
2014-03-01
Clinical trials usually have a demand to collect, track and analyze multimedia data according to the workflow. Currently, the clinical trial data management requirements are normally addressed with custom-built systems. Challenges occur in the workflow design within different trials. The traditional pre-defined custom-built system is usually limited to a specific clinical trial and normally requires time-consuming and resource-intensive software development. To provide a solution, we present a user customizable imaging informatics-based intelligent workflow engine system for managing stroke rehabilitation clinical trials with intelligent workflow. The intelligent workflow engine provides flexibility in building and tailoring the workflow in various stages of clinical trials. By providing a solution to tailor and automate the workflow, the system will save time and reduce errors for clinical trials. Although our system is designed for clinical trials for rehabilitation, it may be extended to other imaging based clinical trials as well.
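One way to realize the "user customizable" aspect is to declare the trial workflow as data that a generic engine interprets, so coordinators can reorder or extend stages without new software development. The step registry and step names below are illustrative assumptions, not the described system's actual design.

    STEP_REGISTRY = {}

    def step(name):
        """Register a function as a reusable workflow step."""
        def register(fn):
            STEP_REGISTRY[name] = fn
            return fn
        return register

    @step("enroll")
    def enroll(record):
        record["status"] = "enrolled"
        return record

    @step("upload_imaging")
    def upload_imaging(record):
        record.setdefault("images", []).append("baseline_mri")   # placeholder item
        return record

    @step("analyze")
    def analyze(record):
        record["analysis"] = f"{len(record.get('images', []))} series analyzed"
        return record

    def run_workflow(step_names, record):
        """Execute whatever sequence of registered steps the trial has configured."""
        for name in step_names:
            record = STEP_REGISTRY[name](record)
        return record

    # a site-specific workflow tailored by configuration rather than by reprogramming
    custom_workflow = ["enroll", "upload_imaging", "analyze"]
    print(run_workflow(custom_workflow, {"patient_id": "P-01"}))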
Elementary Introduction to the Green Management of the Construction in Whole Process
NASA Astrophysics Data System (ADS)
Wu, Y. N. (Yun Na); Yan, H. Y. (Hong Yu); Huang, Z. J. (Zhi Jun)
Construction industries consume more energy resources than necessary, so it is essential to establish a management system that resolves pollution problems in order to construct green buildings. By applying whole-life-cycle theory, this paper divides the whole process of construction into four sub-phases, which are further subdivided into more concrete working procedures. On this basis, a systematic framework is proposed for the green management of construction that, notably, treats green aims as being as important as the three traditional aims of quality, schedule and cost. Adhering to the integration principle of "customers first, whole optimal", the framework regards green control and workflow as an organic whole, in order to build green, sustainable and healthy architecture and to provide a practical guide and reference for green management.
LHCb migration from Subversion to Git
NASA Astrophysics Data System (ADS)
Clemencic, M.; Couturier, B.; Closier, J.; Cattaneo, M.
2017-10-01
Due to user demand and to support new development workflows based on code review and multiple development streams, LHCb decided to port the source code management from Subversion to Git, using the CERN GitLab hosting service. Although tools exist for this kind of migration, LHCb specificities and development models required careful planning of the migration, development of migration tools, changes to the development model, and redefinition of the release procedures. Moreover, we had to support a hybrid situation with some software projects hosted in Git and others still in Subversion, or even branches of one project hosted in different systems. We present the way we addressed the special LHCb requirements, the technical details of migrating large non-standard Subversion repositories, and how we managed to smoothly migrate the software projects following the schedule of each project manager.
The standard-based open workflow system in GeoBrain (Invited)
NASA Astrophysics Data System (ADS)
Di, L.; Yu, G.; Zhao, P.; Deng, M.
2013-12-01
GeoBrain is an Earth science Web-service system developed and operated by the Center for Spatial Information Science and Systems, George Mason University. In GeoBrain, a standard-based open workflow system has been implemented to accommodate the automated processing of geospatial data through a set of complex geoprocessing functions for advanced product generation. GeoBrain models complex geoprocessing at two levels, conceptual and concrete. At the conceptual level, workflows exist in the form of data and service types defined by ontologies. Workflows at the conceptual level are called geoprocessing models and are catalogued in GeoBrain as virtual product types. A conceptual workflow is instantiated into a concrete, executable workflow when a user requests a product that matches a virtual product type. Both conceptual and concrete workflows are encoded in the Business Process Execution Language (BPEL). A BPEL workflow engine, called BPELPower, has been implemented to execute the workflows for product generation. A provenance-capturing service has been implemented to generate complete, ISO 19115-compliant product provenance metadata before and after the workflow execution. Generating provenance metadata before the workflow execution allows users to examine the usability of the final product before the lengthy and expensive execution takes place. The three modes of workflow execution defined in ISO 19119, transparent, translucent, and opaque, are available in GeoBrain. A geoprocessing modeling portal has been developed to allow domain experts to develop geoprocessing models at the type level with the support of both data and service/processing ontologies. The geoprocessing models capture the knowledge of the domain experts and become the operational offerings for the products after a proper peer review of the models is conducted. Automated workflow composition based on ontologies and artificial intelligence technology has also been demonstrated successfully. The GeoBrain workflow system has been used in multiple Earth science applications, including the monitoring of global agricultural drought, the assessment of flood damage, the derivation of national crop condition and progress information, and the detection of nuclear proliferation facilities and events.
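As a rough, hypothetical illustration of the two-level modelling described above (not GeoBrain's actual BPEL machinery), the sketch below binds a conceptual workflow defined over data and service types to a concrete, executable chain when a matching virtual product is requested; all type names, endpoints, and identifiers are invented.

    # Hypothetical sketch: instantiate a conceptual geoprocessing workflow
    # (defined over types) into a concrete workflow (defined over instances).
    CONCEPTUAL_WORKFLOWS = {
        # virtual product type -> ordered list of service types to apply
        "vegetation_drought_index": [
            "reflectance_correction_service",
            "ndvi_service",
            "drought_index_service",
        ],
    }

    SERVICE_CATALOGUE = {  # service type -> concrete endpoint (invented URLs)
        "reflectance_correction_service": "http://example.org/wps/correct",
        "ndvi_service": "http://example.org/wps/ndvi",
        "drought_index_service": "http://example.org/wps/drought",
    }

    def instantiate(product_type, scene_id):
        """Turn a conceptual workflow into a concrete, executable step list."""
        steps, current_input = [], scene_id
        for service_type in CONCEPTUAL_WORKFLOWS[product_type]:
            endpoint = SERVICE_CATALOGUE[service_type]
            output_id = current_input + "|" + service_type  # placeholder output id
            steps.append({"endpoint": endpoint, "input": current_input, "output": output_id})
            current_input = output_id
        return steps

    print(instantiate("vegetation_drought_index", "SCENE_001"))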
Engaging Social Capital for Decentralized Urban Stormwater Management (Paper in Non-EPA Proceedings)
Decentralized approaches to urban stormwater management, whereby installations of green infrastructure (e.g., rain gardens, bioswales, constructed wetlands) are dispersed throughout a management area, are cost-effective solutions with co-benefits beyond just water abatement. Inst...
Decentralized Modular Systems Versus Centralized Systems.
ERIC Educational Resources Information Center
Crossey, R. E.
Building design, planning, and construction programing for modular decentralized mechanical building systems are outlined in terms of costs, performance, expansion and flexibility. Design strategy, approach, and guidelines for implementing such systems for buildings are suggested, with emphasis on mechanical equipment and building element…
Educational Decentralization Policies in Argentina and Brazil: Exploring the New Trends.
ERIC Educational Resources Information Center
Derqui, Jorge M. Gorostiaga
2001-01-01
Analyzes educational decentralization trends and policies in Argentina and Brazil during the 1990s, including case studies. Discusses the historical background and rationales behind "provincialization" in Argentina and "municipalization" in Brazil; identifies commonalities, including centralization of curriculum and evaluation…
Satellite Power System (SPS) centralization/decentralization
NASA Technical Reports Server (NTRS)
Naisbitt, J.
1978-01-01
The decentralization of government in the United States of America is described and its effect on the solution of energy problems is given. The human response to the introduction of new technologies is considered as well as the behavioral aspects of multiple options.
NASA Technical Reports Server (NTRS)
Steffen, Chris
1990-01-01
An overview of the time-delay problem and the reliability problem, which arise in trying to perform robotic construction operations at a remote space location, is presented. The effects of the time delay upon the control system design are itemized. A high-level overview of a decentralized method of control, which is expected to perform better than the centralized approach in solving the time-delay problem, is given. The lower-level, decentralized, autonomous Troter Move-Bar algorithm is also presented (Troters are coordinated independent robots). The solution of the reliability problem is connected to adding redundancy to the system. One method of adding redundancy is given.
Faguet, Jean-Paul
2016-06-22
Mohammed, North, and Ashton find that decentralization in Fiji shifted health-sector workloads from tertiary hospitals to peripheral health centres, but with little transfer of administrative authority from the centre. Decision-making in the five functional areas analysed remains highly centralized. They surmise that the benefits of decentralization in terms of services and outcomes will be limited. This paper invokes Faguet's (2012) model of local government responsiveness and accountability to explain why this is so - not only for Fiji, but in any country that decentralizes workloads but not the decision space of local governments. A competitive dynamic between economic and civic actors, which interact to generate an open, competitive politics and in turn accountable, responsive government, can only occur where real power and resources have been devolved to local governments. Where local decision space is lacking, by contrast, decentralization is bound to fail because it has not really happened in the first place. © 2016 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
NASA Astrophysics Data System (ADS)
Inguane, Ronaldo; Gallego-Ayala, Jordi; Juízo, Dinis
In the context of integrated water resources management implementation, the decentralization of water resources management (DWRM) at the river basin level is crucial to its success. However, decentralization requires the creation of new institutions on the ground to stimulate an environment that enables stakeholder participation and integration into the water management decision-making process. In 1991, Mozambique began restructuring its water sector toward operational decentralized water resources management. Within this context of decentralization, new legal and institutional frameworks have been created, e.g., Regional Water Administrations (RWAs) and River Basin Committees. This paper identifies and analyzes the key institutional challenges and opportunities of DWRM implementation in Mozambique. The paper uses a critical social science research methodology for in-depth analysis of the roots of the factors constraining the implementation of DWRM. The results obtained suggest that RWAs should be designed with the specific geographic and infrastructural conditions of their jurisdictional areas in mind, and that priorities in their institutional capacity-building strategies should be selected to match local realities. Furthermore, the results also indicate that RWAs have enjoyed limited support from basin stakeholders, mainly in basins with less hydraulic infrastructure, in securing water availability for their users and minimizing the effect of climate variability.
Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua
2011-07-01
In this paper, a digital redesign methodology of the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference of each sub-system and simplify the controller design, the proposed model reference decentralized adaptive control scheme constructs a decoupled well-designed reference model first. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on the optimal analog control and prediction-based digital redesign technique for the sampled-data large-scale coupling system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply the iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
A characterization of workflow management systems for extreme-scale applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over recent years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
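As a purely illustrative sketch (not the paper's survey data), the classification axes named in the abstract can be captured in a small data structure like the following; the two example systems and their attribute values are invented.

    from dataclasses import dataclass

    @dataclass
    class WMSProfile:
        name: str
        execution_model: str        # e.g. task-level DAG, in situ, streaming
        computing_environment: str  # e.g. HPC cluster, cloud, hybrid
        data_access: str            # e.g. shared filesystem, object store

    profiles = [
        WMSProfile("ExampleWMS-A", "task-level DAG", "HPC cluster", "shared filesystem"),
        WMSProfile("ExampleWMS-B", "streaming", "cloud", "object store"),
    ]

    # Group systems by execution model, the first classification axis.
    by_model = {}
    for p in profiles:
        by_model.setdefault(p.execution_model, []).append(p.name)
    print(by_model)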
Metaworkflows and Workflow Interoperability for Heliophysics
NASA Astrophysics Data System (ADS)
Pierantoni, Gabriele; Carley, Eoin P.
2014-06-01
Heliophysics is a relatively new branch of physics that investigates the relationship between the Sun and the other bodies of the solar system. To investigate such relationships, heliophysicists can rely on various tools developed by the community. Some of these tools are on-line catalogues that list events (such as Coronal Mass Ejections, CMEs) and their characteristics as they were observed on the surface of the Sun or on the other bodies of the Solar System. Other tools offer on-line data analysis and access to images and data catalogues. During their research, heliophysicists often perform investigations that need to coordinate several of these services and to repeat these complex operations until the phenomena under investigation are fully analyzed. Heliophysicists combine the results of these services; this kind of service orchestration is well suited to workflows. This approach has been investigated in the HELIO project. The HELIO project developed an infrastructure for a Virtual Observatory for Heliophysics and implemented service orchestration using TAVERNA workflows. HELIO developed a set of workflows that proved to be useful but lacked flexibility and re-usability. The TAVERNA workflows also needed to be executed directly in the TAVERNA workbench, and this forced all users to learn how to use the workbench. Within the SCI-BUS and ER-FLOW projects, we have started an effort to re-think and re-design the heliophysics workflows with the aim of fostering re-usability and ease of use. We base our approach on two key concepts, that of meta-workflows and that of workflow interoperability. We have divided the produced workflows into three different layers. The first layer is Basic Workflows, developed both in the TAVERNA and WS-PGRADE languages. They are building blocks that users compose to address their scientific challenges. They implement well-defined Use Cases that usually involve only one service. The second layer is Science Workflows, usually developed in TAVERNA. They implement Science Cases (the definition of a scientific challenge) by composing different Basic Workflows. The third and last layer, Iterative Science Workflows, is developed in WS-PGRADE. It executes sub-workflows (either Basic or Science Workflows) as parameter sweep jobs to investigate Science Cases on large multiple data sets. So far, this approach has proven fruitful for three Science Cases, of which one has been completed and two are still being tested.
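As a purely illustrative sketch of the layering idea (basic workflows composed into science workflows, then swept over parameters), and not of the actual TAVERNA or WS-PGRADE implementations, consider the following; all workflow and parameter names are hypothetical.

    # Layer 1: Basic Workflows, each (notionally) wrapping a single service.
    def fetch_cme_catalogue(start, end):
        return [{"event": "CME", "time": t} for t in range(start, end)]

    def fetch_solar_wind(start, end):
        return {"speed_km_s": 450, "window": (start, end)}

    # Layer 2: a Science Workflow composing Basic Workflows for one Science Case.
    def cme_impact_case(start, end):
        events = fetch_cme_catalogue(start, end)
        context = fetch_solar_wind(start, end)
        return {"n_events": len(events), "context": context}

    # Layer 3: an Iterative Science Workflow sweeping the case over many windows.
    def iterative_cme_impact(windows):
        return [cme_impact_case(s, e) for (s, e) in windows]

    print(iterative_cme_impact([(0, 5), (5, 12)]))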
Welker, A; Wolcke, B; Schleppers, A; Schmeck, S B; Focke, U; Gervais, H W; Schmeck, J
2010-10-01
The introduction of the diagnosis-related groups reimbursement system has increased cost pressures. Due to the interaction of many different professional groups, analysis and optimization of internal coordination and scheduling in the operating room (OR) is mandatory. The aim of this study was to analyze the processes at a university hospital in order to optimize strategies by identifying potential weak points. Over a period of 6 weeks before and 4 weeks after the intervention, process time intervals in the OR of a tertiary care hospital (university hospital) were documented on a structured data collection sheet. The main reason for the inefficient use of labor was underutilization of OR capacity. Multifactorial reasons, particularly in the management of perioperative interfaces, led to vacant ORs. A significant deficit was the use of OR capacity at the end of the daily OR schedule. After harmonization of the working hours of the different staff groups and implementation of several other changes, an increase in efficiency could be verified. These results indicate that optimization of perioperative processes contributes considerably to the success of OR organization. Additionally, the implementation of standard operating procedures and a generally accepted OR statute are mandatory. In this way an efficient OR management can contribute to the economic success of a hospital.
NASA Astrophysics Data System (ADS)
Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal
2015-05-01
When running data-intensive applications on distributed computational resources, long I/O overheads may be observed as remotely stored data is accessed. Latency and bandwidth can become the major limiting factors for the overall computation performance and can reduce the CPU/wall-time ratio through excessive I/O wait. Building on our previous research, we propose a constraint-programming-based planner that schedules computational jobs and data placements (transfers) in a distributed environment in order to optimize resource utilization and reduce the overall processing completion time. The optimization is achieved by ensuring that none of the resources (network links, data storage, and CPUs) is oversaturated at any moment in time and either (a) that the data is pre-placed at the site where the job runs or (b) that jobs are scheduled where the data is already present. Such an approach eliminates the idle CPU cycles that occur when a job is waiting for I/O from a remote site and would have wide application in the community. Our planner was evaluated in simulation based on data extracted from the log files of the batch and data management systems of the STAR experiment. The results of the evaluation and an estimation of the performance improvements are discussed in this paper.
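The toy formulation below illustrates the general idea of co-scheduling jobs and data placement with constraint programming; it is not the authors' planner, it assumes Google OR-Tools (the ortools package) as a dependency, and all job, site, capacity and cost figures are invented.

    # Toy constraint-programming sketch (requires: pip install ortools).
    from ortools.sat.python import cp_model

    jobs = ["j1", "j2", "j3"]
    sites = ["siteA", "siteB"]
    cpu_slots = {"siteA": 2, "siteB": 1}                        # CPUs available per site
    data_site = {"j1": "siteA", "j2": "siteB", "j3": "siteA"}   # where each job's input lives
    run_cost, transfer_cost = 10, 7                             # invented time units

    model = cp_model.CpModel()
    assign = {(j, s): model.NewBoolVar(f"{j}_at_{s}") for j in jobs for s in sites}

    for j in jobs:                                 # each job runs at exactly one site
        model.Add(sum(assign[j, s] for s in sites) == 1)
    for s in sites:                                # do not oversaturate site CPUs
        model.Add(sum(assign[j, s] for j in jobs) <= cpu_slots[s])

    # Pay a transfer penalty whenever a job runs away from its data.
    total = sum(
        assign[j, s] * (run_cost + (0 if data_site[j] == s else transfer_cost))
        for j in jobs for s in sites
    )
    model.Minimize(total)

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        plan = [(j, s) for (j, s) in assign if solver.Value(assign[j, s])]
        print(plan)   # jobs stay with their data unless capacity forces a transfer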
Wu, Danny T Y; Smart, Nikolas; Ciemins, Elizabeth L; Lanham, Holly J; Lindberg, Curt; Zheng, Kai
2017-01-01
To develop a workflow-supported clinical documentation system, understanding clinical workflow is a critical first step. While time-and-motion studies have been regarded as the gold standard of workflow analysis, this method can be resource-consuming and its data may be biased owing to the cognitive limitations of human observers. In this study, we aimed to evaluate the feasibility and validity of using EHR audit trail logs to analyze clinical workflow. Specifically, we compared three known workflow changes from our previous study with the corresponding EHR audit trail logs of the study participants. The results showed that EHR audit trail logs can be a valid source for clinical workflow analysis and can provide an objective view of clinicians' behaviors, multi-dimensional comparisons, and a highly extensible analysis framework.
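A minimal sketch of the kind of analysis implied above: turning raw audit trail rows into per-clinician action sequences and transition counts. This is not the authors' method, and the log fields and action names are hypothetical.

    from collections import defaultdict

    # Hypothetical audit-trail rows: (timestamp, clinician_id, action).
    log = [
        (1, "rn01", "open_chart"), (2, "rn01", "review_meds"),
        (3, "rn01", "document_note"), (4, "rn02", "open_chart"),
        (5, "rn02", "document_note"),
    ]

    # 1) Rebuild each clinician's action sequence from timestamp-ordered rows.
    sequences = defaultdict(list)
    for _ts, clinician, action in sorted(log):
        sequences[clinician].append(action)

    # 2) Count action-to-action transitions, a simple proxy for observed workflow.
    transitions = defaultdict(int)
    for actions in sequences.values():
        for a, b in zip(actions, actions[1:]):
            transitions[(a, b)] += 1

    print(dict(transitions))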
Schweitzer, M; Lasierra, N; Hoerbst, A
2015-01-01
Increasing flexibility from a user perspective and enabling workflow-based interaction facilitates an easy, user-friendly utilization of EHRs for healthcare professionals' daily work. To offer such versatile EHR functionality, our approach is based on the execution of clinical workflows by means of a composition of semantic web services. The backbone of such an architecture is an ontology which enables the representation of clinical workflows and facilitates the selection of suitable services. In this paper we present the methods and results of observations of routine diabetes consultations, conducted in order to identify these workflows and the relations among the included tasks. The observed workflows were first modeled in BPMN and then generalized. As a next step in our study, interviews will be conducted with clinical personnel to validate the modeled workflows.
Reengineering observatory operations for the time domain
NASA Astrophysics Data System (ADS)
Seaman, Robert L.; Vestrand, W. T.; Hessman, Frederic V.
2014-07-01
Observatories are complex scientific and technical institutions serving diverse users and purposes. Their telescopes, instruments, software, and human resources engage in interwoven workflows over a broad range of timescales. These workflows have been tuned to be responsive to concepts of observatory operations that were applicable when the various assets were commissioned, years or decades in the past. The astronomical community is entering an era of rapid change increasingly characterized by large time-domain surveys, robotic telescopes and automated infrastructures, and - most significantly - by operating modes and scientific consortia that span our individual facilities, joining them into complex network entities. Observatories must adapt, and numerous initiatives are in progress that focus on redesigning individual components of the astronomical toolkit. New instrumentation is both more capable and more complex than ever, and even simple instruments may have powerful observation-scripting capabilities. Remote and queue observing modes are now widespread. Data archives are becoming ubiquitous. Virtual observatory standards and protocols, and the astroinformatics data-mining techniques layered on them, are areas of active development. Indeed, new large-aperture ground-based telescopes may be as expensive as space missions and have similarly formal project management processes and large data management requirements. This piecewise approach is not enough. Whatever challenges of funding or politics face the national and international astronomical communities, it will be more efficient - scientifically as well as in the usual figures of merit of cost, schedule, performance, and risk - to explicitly address the systems engineering of the astronomical community as a whole.
Automatic Integration Testbeds validation on Open Science Grid
NASA Astrophysics Data System (ADS)
Caballero, J.; Thapa, S.; Gardner, R.; Potekhin, M.
2011-12-01
A recurring challenge in deploying high-quality production middleware is the extent to which realistic testing occurs before release of the software into the production environment. We describe here an automated system for validating releases of the Open Science Grid software stack that leverages the (pilot-based) PanDA job management system developed and used by the ATLAS experiment. The system was motivated by a desire to subject the OSG Integration Testbed to more realistic validation tests, in particular tests that resemble as closely as possible the actual job workflows used by the experiments, thus exercising job scheduling at the compute element (CE), use of the worker-node execution environment, transfer of data to/from the local storage element (SE), etc. The context is that candidate releases of OSG compute and storage elements can be tested by injecting large numbers of synthetic jobs varying in complexity and coverage of services tested. The native capabilities of the PanDA system can thus be used to define jobs, monitor their execution, and archive the resulting run statistics, including success and failure modes. A repository of generic workflows and job types to measure various metrics of interest has been created. A command-line toolset has been developed so that testbed managers can quickly submit "VO-like" jobs into the system when newly deployed services are ready for testing. A system for automatic submission has been crafted to send jobs to integration testbed sites, collecting the results in a central service and generating regular reports on performance and reliability.
Arrott, M.; Alexander, Corrine; Graybeal, J.; Mueller, C.; Signell, R.; de La Beaujardière, J.; Taylor, A.; Wilkin, J.; Powell, B.; Orcutt, J.
2011-01-01
The NOAA-led U.S. Integrated Ocean Observing System (IOOS) and the National Science Foundation's Ocean Observatories Initiative (OOI) have been collaborating since 2007 on advanced tools and technologies that ensure open access to ocean observations and models. Initial collaboration focused on serving ocean data via cloud computing, a key component of the OOI cyberinfrastructure (CI) architecture. As the OOI transitioned from planning to execution in the Fall of 2009, an OOI/IOOS team developed a customer-based "use case" to align more closely with the emerging objectives of the OOI-CI team's first software release, scheduled for Summer 2011, and to provide a quantitative capacity for stress-testing these tools and protocols. A requirements process was initiated with coastal modelers, focusing on improved workflows to deliver ocean observation data. Accomplishments to date include the documentation and assessment of scientific workflows for two "early adopter" modeling teams from IOOS Regional partners (Rutgers, The State University of New Jersey, and the University of Hawaii's School of Ocean and Earth Science and Technology) to enable full understanding of data sources and needs; generation of all-inclusive lists of the data sets required and those obtainable through IOOS; a more complete understanding of areas where IOOS can expand data access capabilities to better serve the needs of the modeling community; and development of "data set agents" (software) to facilitate data acquisition from numerous data providers and conversion of the data formats to the OOI-CI canonical form. © 2011 MTS.
Building CHAOS: An Operating System for Livermore Linux Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garlick, J E; Dunlap, C M
2003-02-21
The Livermore Computing (LC) Linux Integration and Development Project (the Linux Project) produces and supports the Clustered High Availability Operating System (CHAOS), a cluster operating environment based on Red Hat Linux. Each CHAOS release begins with a set of requirements and ends with a formally tested, packaged, and documented release suitable for use on LC's production Linux clusters. One characteristic of CHAOS is that component software packages come from different sources under varying degrees of project control. Some are developed by the Linux Project, some are developed by other LC projects, some are external open source projects, and some are commercial software packages. A challenge to the Linux Project is to adhere to release schedules and testing disciplines in a diverse, highly decentralized development environment. Communication channels are maintained for externally developed packages in order to obtain support, influence development decisions, and coordinate/understand release schedules. The Linux Project embraces open source by releasing locally developed packages under open source license, by collaborating with open source projects where mutually beneficial, and by preferring open source over proprietary software. Project members generally use open source development tools. The Linux Project requires system administrators and developers to work together to resolve problems that arise in production. This tight coupling of production and development is a key strategy for making a product that directly addresses LC's production requirements. It is another challenge to balance support and development activities in such a way that one does not overwhelm the other.
ERIC Educational Resources Information Center
Cullen, John B.; Perrewe, Pamela L.
1981-01-01
Used factors identified in the literature as predictors of centralization/decentralization as potential discriminating variables among several decision making configurations in university affiliated professional schools. The model developed from multiple discriminant analysis had reasonable success in classifying correctly only the decentralized…
Adult Education in Modern Greece.
ERIC Educational Resources Information Center
Boucouvalas, Marcie
1982-01-01
Greece's adult education enterprise is well-organized, well-thought-out, and well-grounded. A sound philosophical, theoretical, and conceptual foundation supports its operational components. A unique feature is its blend of centralization and decentralization: centralized policy and guidance and decentralized decision making and autonomy over…
The Organization of Correctional Education Services
ERIC Educational Resources Information Center
Gehring, Thom
2007-01-01
There have been five major types of correctional education organizations over the centuries: Sabbath school, traditional or decentralized, bureau, correctional school district (CSD), and integral education. The middle three are modern organizational patterns that can be implemented throughout a system: Decentralized, bureau, and CSD. The…
Regionalism, Devolution and Education
ERIC Educational Resources Information Center
Bogdanor, Vernon
1977-01-01
Described are effects of political decentralization in the United Kingdom on political and social institutions, particularly education. The author concludes that regionalism could yield advantages of power decentralization, diversity of decision making, and educational systems which are more closely connected to regional and local traditions.…
Yuan, Michael Juntao; Finley, George Mike; Mills, Christy; Johnson, Ron Kim
2013-01-01
Background Clinical decision support systems (CDSS) are important tools to improve health care outcomes and reduce preventable medical adverse events. However, the effectiveness and success of CDSS depend on their implementation context and usability in complex health care settings. As a result, usability design and validation, especially in real world clinical settings, are crucial aspects of successful CDSS implementations. Objective Our objective was to develop a novel CDSS to help frontline nurses better manage critical symptom changes in hospitalized patients, hence reducing preventable failure to rescue cases. A robust user interface and implementation strategy that fit into existing workflows was key for the success of the CDSS. Methods Guided by a formal usability evaluation framework, UFuRT (user, function, representation, and task analysis), we developed a high-level specification of the product that captures key usability requirements and is flexible to implement. We interviewed users of the proposed CDSS to identify requirements, listed functions, and operations the system must perform. We then designed visual and workflow representations of the product to perform the operations. The user interface and workflow design were evaluated via heuristic and end user performance evaluation. The heuristic evaluation was done after the first prototype, and its results were incorporated into the product before the end user evaluation was conducted. First, we recruited 4 evaluators with strong domain expertise to study the initial prototype. Heuristic violations were coded and rated for severity. Second, after development of the system, we assembled a panel of nurses, consisting of 3 licensed vocational nurses and 7 registered nurses, to evaluate the user interface and workflow via simulated use cases. We recorded whether each session was successfully completed and its completion time. Each nurse was asked to use the National Aeronautics and Space Administration (NASA) Task Load Index to self-evaluate the amount of cognitive and physical burden associated with using the device. Results A total of 83 heuristic violations were identified in the studies. The distribution of the heuristic violations and their average severity are reported. The nurse evaluators successfully completed all 30 sessions of the performance evaluations. All nurses were able to use the device after a single training session. On average, the nurses took 111 seconds (SD 30 seconds) to complete the simulated task. The NASA Task Load Index results indicated that the work overhead on the nurses was low. In fact, most of the burden measures were consistent with zero. The only potentially significant burden was temporal demand, which was consistent with the primary use case of the tool. Conclusions The evaluation has shown that our design was functional and met the requirements demanded by the nurses’ tight schedules and heavy workloads. The user interface embedded in the tool provided compelling utility to the nurse with minimal distraction. PMID:23612350
Boyer, Sylvie; Abu-Zaineh, Mohammad; Blanche, Jérôme; Loubière, Sandrine; Bonono, Renée-Cécile; Moatti, Jean-Paul; Ventelou, Bruno
2011-12-01
Scaling up antiretroviral treatment (ART) through decentralization of HIV care is increasingly recommended as a strategy toward ensuring equitable access to treatment. However, there have been hitherto few attempts to empirically examine the performance of this policy, and particularly its role in protecting against the risk of catastrophic health expenditures (CHE). This article therefore seeks to assess whether HIV care decentralization has a protective effect against the risk of CHE associated with HIV infection. DATA SOURCE AND STUDY DESIGN: We use primary data from the cross-sectional EVAL-ANRS 12-116 survey, conducted in 2006-2007 among a random sample of 3,151 HIV-infected outpatients followed up in 27 hospitals in Cameroon. DATA COLLECTION AND METHODS: Data collected contain sociodemographic, economic, and clinical information on patients as well as health care supply-related characteristics. We assess the determinants of CHE among the ART-treated patients using a hierarchical logistic model (n = 2,412), designed to adequately investigate the separate effects of patients and supply-related characteristics. Expenditures for HIV care exceed 17 percent of household income for 50 percent of the study population. After adjusting for individual characteristics and technological level, decentralization of HIV services emerges as the main health system factor explaining interclass variance, with a protective effect on the risk of CHE. The findings suggest that HIV care decentralization is likely to enhance equity in access to ART. Decentralization appears, however, to be a necessary but insufficient condition to fully remove the risk of CHE, unless other innovative reforms in health financing are introduced. © Health Research and Educational Trust.
Decentralization in Zambia: resource allocation and district performance.
Bossert, Thomas; Chitah, Mukosha Bona; Bowser, Diana
2003-12-01
Zambia implemented an ambitious process of health sector decentralization in the mid 1990s. This article presents an assessment of the degree of decentralization, called 'decision space', that was allowed to districts in Zambia, and an analysis of data on districts available at the national level to assess allocation choices made by local authorities and some indicators of the performance of the health systems under decentralization. The Zambian officials in health districts had a moderate range of choice over expenditures, user fees, contracting, targeting and governance. Their choices were quite limited over salaries and allowances and they did not have control over additional major sources of revenue, like local taxes. The study found that the formula for allocation of government funding which was based on population size and hospital beds resulted in relatively equal per capita expenditures among districts. Decentralization allowed the districts to make decisions on internal allocation of resources and on user fee levels and expenditures. General guidelines for the allocation of resources established a maximum and minimum percentage to be allocated to district offices, hospitals, health centres and communities. Districts tended to exceed the maximum for district offices, but the large urban districts and those without public district hospitals were not even reaching the minimum for hospital allocations. Wealthier and urban districts were more successful in raising revenue through user fees, although the proportion of total expenditures that came from user fees was low. An analysis of available indicators of performance, such as the utilization of health services, immunization coverage and family planning activities, found little variation during the period 1995-98 except for a decline in immunization coverage, which may have also been affected by changes in donor funding. These findings suggest that decentralization may not have had either a positive or negative impact on services.
Mester, U; Heinen, S; Kaymak, H
2010-09-01
Aspheric intraocular lenses (IOLs) aim to improve visual function and particularly contrast vision by neutralizing spherical aberration. One drawback of such IOLs is the enhanced sensitivity to decentration and tilt, which can deteriorate image quality. A total of 30 patients who received bilateral phacoemulsification before implantation of the aspheric lens FY-60AD (Hoya) were included in a prospective study. In 25 of the patients (50 eyes) the following parameters could be assessed 3 months after surgery: visual acuity, refraction, contrast sensitivity, pupil size, wavefront errors and decentration and tilt using a newly developed device. The functional results were very satisfying and comparable to results gained with other aspheric IOLs. The mean refraction was sph + 0.1 D (±0.7 D) and cyl 0.6 D (±0.8 D). The spherical equivalent was −0.2 D (±0.6 D). Wavefront measurements revealed a good compensation of the corneal spherical aberration but vertical and horizontal coma also showed opposing values in the cornea and IOL. The assessment of the lens position using the Purkinje meter demonstrated uncritical amounts of decentration and tilt. The mean amount of decentration was 0.2 mm±0.2 mm in the horizontal and vertical directions. The mean amount of tilt was 4.0±2.1° in horizontal and 3.0±2.5° in vertical directions. In a normal dioptric power range the aspheric IOL FY-60AD compensates the corneal spherical aberration very well with only minimal decentration. The slight tilt is symmetrical in both eyes and corresponds to the position of the crystalline lens in young eyes. This may contribute to our findings of compensated corneal coma.
A Tool Supporting Collaborative Data Analytics Workflow Design and Management
NASA Astrophysics Data System (ADS)
Zhang, J.; Bao, Q.; Lee, T. J.
2016-12-01
Collaborative experiment design could significantly enhance the sharing and adoption of the data analytics algorithms and models emerging in Earth science. Existing data-oriented workflow tools, however, are not suitable for supporting collaborative design of such workflows: for example, supporting real-time co-design; tracking how a workflow evolves over time based on changing designs contributed by multiple Earth scientists; and capturing and retrieving collaboration knowledge on workflow design (the discussions that lead to a design). To address these challenges, we have designed and developed a technique supporting collaborative data-oriented workflow composition and management, as a key component toward supporting big-data collaboration through the Internet. Reproducibility and scalability are two major targets demanding fundamental infrastructural support. One outcome of the project is a software tool that supports an elastic number of groups of Earth scientists collaboratively designing and composing data analytics workflows through the Internet. Instead of recreating the wheel, we have extended an existing workflow tool, VisTrails, into an online collaborative environment as a proof of concept.
Grid-based platform for training in Earth Observation
NASA Astrophysics Data System (ADS)
Petcu, Dana; Zaharie, Daniela; Panica, Silviu; Frincu, Marc; Neagul, Marian; Gorgan, Dorian; Stefanut, Teodor
2010-05-01
The GiSHEO platform [1], which provides on-demand services for training and higher education in Earth Observation, has been developed, in the frame of an ESA-funded project through its PECS programme, to respond to the need for powerful education resources in the remote sensing field. It is intended to be a Grid-based platform whose potential for experimentation and extensibility are the key benefits compared with a desktop software solution. Near-real-time applications requiring simultaneous multiple short-time-response data-intensive tasks, as in the case of a short training event, have proved to be ideal for this platform. The platform is based on Globus Toolkit 4 facilities for security and process management, and on the clusters of the four academic institutions involved in the project. Authorization uses a VOMS service. The main public services are the following: the EO processing services (represented through special WSRF-type services); the workflow service exposing a particular workflow engine; the data indexing and discovery service for accessing the data management mechanisms; and the processing services, a collection allowing easy access to the processing platform. The WSRF-type services for basic satellite image processing reuse free image processing tools, OpenCV and GDAL. New algorithms and workflows were developed to tackle challenging problems like detecting the underground remains of old fortifications, walls or houses. More details can be found in [2]. Composed services can be specified through workflows and are easy to deploy. The workflow engine, OSyRIS (Orchestration System using a Rule based Inference Solution), is based on DROOLS, and a new rule-based workflow language, SILK (SImple Language for worKflow), has been built. Workflow creation in SILK can be done with or without visual design tools. The basics of SILK are tasks and the relations (rules) between them. It is similar to the SCUFL language, but does not rely on XML, in order to allow the introduction of more workflow-specific features. Moreover, an event-condition-action (ECA) approach allows greater flexibility when expressing data and task dependencies, as well as the creation of adaptive workflows which can react to changes in the configuration of the Grid or in the workflow itself. Changes inside the Grid are handled by creating specific rules which allow resource selection based on various task scheduling criteria. Modifications of the workflow are usually accomplished either by inserting or retracting rules belonging to it at runtime, or by modifying the executor of a task in case a better one is found. The former implies changes in the workflow's structure, while the latter does not necessarily mean a change of resource but rather a change of the algorithm used for solving the task. More details can be found in [3]. Another important platform component is the data indexing and storage service, GDIS, providing features for data storage, indexing data using a specialized RDBMS, finding data by various conditions, querying external services and keeping track of temporary data generated by other components. The data storage component of GDIS is responsible for storing the data by using available storage backends such as local disk file systems (ext3), local cluster storage (GFS) or distributed file systems (HDFS).
A front-end GridFTP service interacts with the storage domains on behalf of clients in a uniform way and also enforces the data-access security restrictions provided by other specialized services. The data indexing is performed by PostGIS. An advanced and flexible interface for searching the project's geographical repository is built around a custom query language (LLQL - Lisp Like Query Language) designed to provide fine-grained access to the data in the repository and to query external services (e.g. for exploiting the connection with the GENESI-DR catalog). More details can be found in [4]. The Workload Management System (WMS) provides two types of resource managers. The first one will be based on Condor HTC and will use Condor as a job manager for task dispatching and worker nodes (for development purposes), while the second one will use GT4 GRAM (for production purposes). The WMS main component, the Grid Task Dispatcher (GTD), is responsible for the interaction with other internal services, such as the composition engine, in order to facilitate access to the processing platform. Its main responsibilities are to receive tasks from the workflow engine or directly from the user interface, to use a task description language (the ClassAd meta-language in the case of Condor HTC) for job units, to submit and check the status of jobs inside the workload management system, and to retrieve job logs for debugging purposes. More details can be found in [4]. A particular component of the platform is eGLE, the eLearning environment. It provides the functionalities necessary to create the visual appearance of the lessons through the use of visual containers like tools, patterns and templates. The teacher uses the platform for testing the already created lessons, as well as for developing new lesson resources, such as new images and workflows describing graph-based processing. The students execute the lessons or describe and experiment with new workflows or different data. The eGLE database includes several workflow-based lesson descriptions, teaching materials and lesson resources, and selected satellite and spatial data. More details can be found in [5]. A first training event using the platform was organized in September 2009 during the 11th SYNASC symposium (links to the demos, testing interface, and exercises are available on the project site [1]). The eGLE component was presented at the 4th GPC conference in May 2009. Moreover, the functionality of the platform will be presented as a demo in April 2010 at the 5th EGEE User Forum. References: [1] GiSHEO consortium, Project site, http://gisheo.info.uvt.ro [2] D. Petcu, D. Zaharie, M. Neagul, S. Panica, M. Frincu, D. Gorgan, T. Stefanut, V. Bacu, Remote Sensed Image Processing on Grids for Training in Earth Observation. In Image Processing, V. Kordic (ed.), In-Tech, January 2010. [3] M. Neagul, S. Panica, D. Petcu, D. Zaharie, D. Gorgan, Web and Grid Services for Training in Earth Observation, IDAACS 2009, IEEE Computer Press, 241-246. [4] M. Frincu, S. Panica, M. Neagul, D. Petcu, GiSHEO: On Demand Grid Service Based Platform for EO Data Processing, HiperGrid 2009, Politehnica Press, 415-422. [5] D. Gorgan, T. Stefanut, V. Bacu, Grid Based Training Environment for Earth Observation, GPC 2009, LNCS 5529, 98-109.
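To make the event-condition-action idea concrete, here is a small, language-agnostic sketch of ECA-style workflow rules in Python; it does not use SILK or DROOLS syntax, and the event names, conditions, and actions are invented.

    # Hypothetical ECA rules for an adaptive workflow: each rule fires an action
    # when its event arrives and its condition holds on the workflow state.
    rules = [
        {
            "event": "task_finished",
            "condition": lambda state, ev: ev["task"] == "correct_image",
            "action": lambda state, ev: state["ready"].append("detect_walls"),
        },
        {
            "event": "resource_overloaded",
            "condition": lambda state, ev: ev["load"] > 0.9,
            "action": lambda state, ev: state.update(executor="backup_cluster"),
        },
    ]

    def handle(state, event):
        """Apply every matching rule to the incoming event (one forward-chaining pass)."""
        for rule in rules:
            if rule["event"] == event["type"] and rule["condition"](state, event):
                rule["action"](state, event)

    state = {"ready": [], "executor": "main_cluster"}
    handle(state, {"type": "task_finished", "task": "correct_image"})
    handle(state, {"type": "resource_overloaded", "load": 0.95})
    print(state)   # next task enqueued and executor switched by the two rules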
77 FR 56625 - Privacy Act of 1974; Systems of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-13
... Internet at http://www.regulations.gov as they are received without change, including any personal.... George G. Meade, MD 20755-6000. Decentralized segments: Defense Intelligence Agency (DIA) Headquarters... decentralized system locations, write to the National Security Agency/Central Security Service, Freedom of...
OPTIMIZATION OF DECENTRALIZED BMP CONTROLS IN URBAN AREAS
This paper will present an overview of a recently completed project for the US EPA entitled Optimization of Urban Wet-weather Flow Control Systems. The focus of this effort is on techniques that are suitable for evaluating decentralized BMP controls. The four major components ...
Endogenous System Microbes as Treatment Process Indicators for Decentralized Non-potable Water Reuse
Monitoring the efficacy of treatment strategies to remove pathogens in decentralized systems remains a challenge. Evaluating log reduction targets by measuring pathogen levels is hampered by their sporadic and low occurrence rates. Fecal indicator bacteria are used in centraliz...
Strategic Alignment: Recruiting Students in a Highly Decentralized Environment
ERIC Educational Resources Information Center
Levin, Richard
2016-01-01
All enrollment managers face some level of challenge related to decentralized decision making and operations. Policies and practices can vary considerably by academic area, creating administrative complexity, restricting the scope and speed of institutional initiatives, and limiting potential efficiencies. Central attempts to standardize or…
ERIC Educational Resources Information Center
Ouchi, William G.
2004-01-01
Argues that school systems are so centralized that they waste money on bureaucratic operations and lack the capacity to respond rapidly to changing circumstances. A study of nine school systems that vary dramatically in their degree of decentralization demonstrates that true decentralization yields benefits in both efficiency and performance. (MLF)
DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro
2016-10-01
This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.
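For orientation, the decentralized consensus optimization problem referred to above is conventionally written as follows (our notation, not necessarily the paper's), where node i holds only its own summand f_i and exchanges its local copy x_i with its neighbours over the edge set of the network graph:

\[
\min_{x} \; \sum_{i=1}^{n} f_i(x)
\quad \Longleftrightarrow \quad
\min_{x_1,\dots,x_n} \; \sum_{i=1}^{n} f_i(x_i)
\quad \text{s.t.} \quad x_i = x_j \;\; \forall (i,j) \in \mathcal{E} ,
\]

The equivalence holds when the communication graph is connected. DADMM solves an exact local subproblem at each node in every iteration; DQM replaces that subproblem with a quadratic approximation to lower the per-iteration cost while, as stated above, retaining a linear convergence rate.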
Fujiwara, T
2012-01-01
Unlike in urban areas where intensive water reclamation systems are available, development of decentralized technologies and systems is required for water use to be sustainable in agricultural areas. To overcome various water quality issues in those areas, a research project entitled 'Development of an innovative water management system with decentralized water reclamation and cascading material-cycle for agricultural areas under the consideration of climate change' was launched in 2009. This paper introduces the concept of this research and provides detailed information on each of its research areas: (1) development of a diffuse agricultural pollution control technology using catch crops; (2) development of a decentralized differentiable treatment system for livestock and human excreta; and (3) development of a cascading material-cycle system for water pollution control and value-added production. The author also emphasizes that the innovative water management system for agricultural areas should incorporate a strategy for the voluntary collection of bio-resources.
On the decentralized control of large-scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chong, C.
1973-01-01
The decentralized control of stochastic large-scale systems was considered. Particular emphasis was given to control strategies which utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case, in which each decision variable depends on different information and the constraint is only required to be satisfied on average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled and then certain constraints are required to be satisfied, either in an off-line or an on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained. The lower-level problems are all uncoupled. For on-line coordination, a distinction is made between open-loop feedback optimal coordination and closed-loop optimal coordination.
Decentralized DC Microgrid Monitoring and Optimization via Primary Control Perturbations
NASA Astrophysics Data System (ADS)
Angjelichinoski, Marko; Scaglione, Anna; Popovski, Petar; Stefanovic, Cedomir
2018-06-01
We treat emerging power systems with direct current (DC) MicroGrids, characterized by a high penetration of power electronic converters. We rely on the power electronics to propose a decentralized solution for autonomous learning of and adaptation to the operating conditions of DC MicroGrids; the goal is to eliminate the need to rely on an external communication system for this purpose. The solution works within the primary droop control loops and uses only local bus voltage measurements. Each controller is able to estimate (i) the generation capacities of power sources, (ii) the load demands, and (iii) the conductances of the distribution lines. To define a well-conditioned estimation problem, we employ a decentralized strategy in which the primary droop controllers temporarily switch between operating points in a coordinated manner, following amplitude-modulated training sequences. We study the use of the estimator in a decentralized solution of the Optimal Economic Dispatch problem. The evaluations confirm the usefulness of the proposed solution for autonomous MicroGrid operation.
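For context, primary droop control in a DC MicroGrid is conventionally written in the textbook form below (our notation, not the paper's); the decentralized estimator sketched in the abstract works by having each controller perturb its droop set-points in coordinated, amplitude-modulated training sequences and observe the resulting changes in its locally measured bus voltage:

\[
v_i = v_i^{\mathrm{ref}} - r_i^{d}\, i_i ,
\]

where \(v_i\) and \(i_i\) are the output voltage and current of converter \(i\), \(v_i^{\mathrm{ref}}\) is its voltage reference, and \(r_i^{d}\) is its virtual (droop) resistance.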
Liu, Derong; Wang, Ding; Li, Hongliang
2014-02-01
In this paper, using a neural-network-based online learning optimal control approach, a novel decentralized control strategy is developed to stabilize a class of continuous-time nonlinear interconnected large-scale systems. First, optimal controllers of the isolated subsystems are designed with cost functions that reflect the bounds of the interconnections. Then, it is proven that the decentralized control strategy for the overall system can be established by adding appropriate feedback gains to the optimal control policies of the isolated subsystems. Next, an online policy iteration algorithm is presented to solve the Hamilton-Jacobi-Bellman equations related to the optimal control problem. By constructing a set of critic neural networks, the cost functions, and subsequently the control policies, can be obtained approximately. Furthermore, the dynamics of the estimation errors of the critic networks are verified to be uniformly ultimately bounded. Finally, a simulation example is provided to illustrate the effectiveness of the proposed decentralized control scheme.
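As background, the Hamilton-Jacobi-Bellman equation that the critic networks approximate can be written, for a generic affine-in-control subsystem with dynamics \(\dot{x} = f(x) + g(x)u\) and running cost \(r(x,u)\), in the standard continuous-time form (our notation, not the paper's exact formulation):

\[
0 = \min_{u} \Big[ r(x,u) + \nabla V(x)^{\top} \big( f(x) + g(x)\,u \big) \Big] ,
\]

where \(V\) is the optimal cost-to-go; policy iteration alternates between evaluating \(V\) for the current policy and improving the policy through the minimization above.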
Carden, Robert; DelliFraine, Jami L
2005-01-01
The cost of blood and blood products has increased rapidly over the last several years while the supply of available blood donors has simultaneously decreased. Higher blood costs and donor shortages have put a strain on the relationship between blood suppliers and their hospital customers. This study examines the association between blood center centralization or decentralization and several aspects of hospital satisfaction. Centralized and decentralized blood centers have significant differences in various aspects of hospital customer satisfaction. Advantages and disadvantages of the two structures are discussed, as well as areas for future research.
Decentralized stochastic control
NASA Technical Reports Server (NTRS)
Speyer, J. L.
1980-01-01
Decentralized stochastic control is characterized by information patterns in which the information available to one controller is not the same as the information available to another controller. The system, including its information, has a stochastic or uncertain component. This complicates the development of decision rules, which one would otherwise determine under the assumption that the system is deterministic. The system is dynamic, which means that present decisions affect future system responses and the information in the system. This circumstance presents a complex problem in which tools like dynamic programming are no longer applicable. These difficulties are discussed from an intuitive viewpoint. Particular assumptions are introduced which allow a limited theory that produces mechanizable affine decision rules.
Kumar, Santosh; Prakash, Nishith
2017-07-01
In this paper, we investigate the impacts of political decentralization and the reservation of seats for women in local governance on institutional births and child mortality in the state of Bihar, India. Using the difference-in-differences methodology, we find a significant positive association between political decentralization and institutional births. We also find that the increased participation of women in local governance led to an increased survival rate of children belonging to richer households. We argue that our results are consistent with female leaders having policy preferences for women's and children's well-being. Copyright © 2017 Elsevier Ltd. All rights reserved.
Centralized vs. decentralized child mental health services.
Adams, M S
1977-09-01
One of the basic tenets of the Community Mental Health Center movement is that services should be provided in the consumers' community. Various centers across the country have attempted to do this in either a centralized or decentralized fashion. Historically, most health services have been provided centrally, a good example being the traditional general hospital with its centralized medical services. Over the years, some of these services have become decentralized to take the form of local health centers, health maintenance organizations, community clinics, etc, and now various large mental health centers are also being broken down into smaller community units. An example of each type of mental health facility is delineated here.
A decentralized process for finding equilibria given by linear equations.
Reiter, S
1994-01-01
I present a decentralized process for finding the equilibria of an economy characterized by a finite number of linear equilibrium conditions. The process finds all equilibria or, if there are none, reports this fact, in a finite number of steps at most equal to the number of equations. The communication and computational complexity compare favorably with those of other decentralized processes. The process may also be interpreted as an algorithm for solving a distributed system of linear equations. Comparisons with the Linpack program for LU decomposition (lower and upper triangular decomposition of the matrix of the equation system, a version of Gaussian elimination) are presented. PMID:11607486
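Purely to illustrate a decentralized-style computation on linear equations (this is not Reiter's process), the sketch below solves Ax = b with Jacobi iteration, in which each agent updates only its own unknown from the latest values announced by the others; the matrix and tolerance are invented, and the method converges only under conditions such as diagonal dominance.

    # Jacobi iteration: agent i sets x_i = (b_i - sum_{j != i} A[i][j] * x[j]) / A[i][i].
    A = [[4.0, 1.0, 0.0],
         [1.0, 5.0, 2.0],
         [0.0, 2.0, 6.0]]    # diagonally dominant, so the iteration converges
    b = [9.0, 13.0, 20.0]    # exact solution is x = [2, 1, 3]

    x = [0.0, 0.0, 0.0]
    for _ in range(200):
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(len(b)) if j != i)) / A[i][i]
            for i in range(len(b))
        ]
        if max(abs(n - o) for n, o in zip(x_new, x)) < 1e-12:
            x = x_new
            break
        x = x_new

    print([round(v, 6) for v in x])   # -> [2.0, 1.0, 3.0]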
Interaction Metrics for Feedback Control of Sound Radiation from Stiffened Panels
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.; Cox, David E.; Gibbs, Gary P.
2003-01-01
Interaction metrics developed for the process control industry are used to evaluate decentralized control of sound radiation from bays on an aircraft fuselage. The metrics are applied to experimentally measured frequency response data from a model of an aircraft fuselage. The purpose is to understand how coupling between multiple bays of the fuselage can destabilize or limit the performance of a decentralized active noise control system. The metrics quantitatively verify observations from a previous experiment, in which decentralized controllers performed worse than centralized controllers. The metrics do not appear to be useful for explaining control spillover which was observed in a previous experiment.