Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work
NASA Technical Reports Server (NTRS)
Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David
2005-01-01
Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.
Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity
NASA Technical Reports Server (NTRS)
Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy
2005-01-01
Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capabilities for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision to provide self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management and its evolution to include reflex reactions via pulse monitoring.
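The reflex reaction via pulse monitoring described above can be sketched as a loop that tracks node heartbeats and triggers a recovery action when a pulse goes overdue. This is a toy illustration of the idea, not the ACMS implementation; all names and the restart action are assumptions.

```python
import time

class PulseMonitor:
    """Minimal reflex-style pulse monitor: nodes send periodic heartbeats,
    and a node whose pulse is overdue triggers a reflex reaction (here,
    simply marking it for restart)."""

    def __init__(self, timeout):
        self.timeout = timeout   # seconds a pulse may be late
        self.last_pulse = {}     # node -> time of last heartbeat
        self.restarted = []      # reflex actions taken, in order

    def heartbeat(self, node, now=None):
        self.last_pulse[node] = time.time() if now is None else now

    def check(self, now=None):
        now = time.time() if now is None else now
        for node, last in self.last_pulse.items():
            if now - last > self.timeout:
                self.restarted.append(node)  # reflex reaction fires
                self.last_pulse[node] = now  # assume the restart succeeded

mon = PulseMonitor(timeout=5)
mon.heartbeat("node-a", now=0)
mon.heartbeat("node-b", now=0)
mon.heartbeat("node-a", now=7)   # node-b goes silent after t=0
mon.check(now=10)                # node-b's pulse is 10s old: reflex fires
print(mon.restarted)             # ['node-b']
```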
A comparison of queueing, cluster and distributed computing systems
NASA Technical Reports Server (NTRS)
Kaplan, Joseph A.; Nelson, Michael L.
1993-01-01
Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.
Scientific Cluster Deployment and Recovery - Using puppet to simplify cluster management
NASA Astrophysics Data System (ADS)
Hendrix, Val; Benjamin, Doug; Yao, Yushu
2012-12-01
Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time-consuming task requiring the assistance of Linux system administrators and network engineers as well as domain experts. Universities and small institutions that have a part-time FTE with limited time for and knowledge of the administration of such clusters can be strained by such maintenance tasks. This work is the result of an effort to maintain a data analysis cluster (DAC) with minimal effort by a local system administrator. The realized benefit is that the scientist, who is also the local system administrator, is able to focus on the data analysis instead of the intricacies of managing a cluster. Our work provides a cluster deployment and recovery process (CDRP) based on the puppet configuration engine, allowing a part-time FTE to easily deploy and recover entire clusters with minimal effort. Puppet is a configuration management system (CMS) used widely in computing centers for the automatic management of resources. Domain experts use Puppet's declarative language to define reusable modules for service configuration and deployment. Our CDRP has three actors: domain experts, a cluster designer and a cluster manager. The domain experts first write the puppet modules for the cluster services. A cluster designer would then define a cluster. This includes the creation of cluster roles, mapping the services to those roles and determining the relationships between the services. Finally, a cluster manager would acquire the resources (machines, networking), enter the cluster input parameters (hostnames, IP addresses) and automatically generate deployment scripts used by puppet to configure each machine to act in its designated role. In the event of a machine failure, the originally generated deployment scripts along with puppet can be used to easily reconfigure a new machine.
The cluster definition produced in our CDRP is an integral part of automating cluster deployment in a cloud environment. Our future cloud efforts will further build on this work.
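The role-to-service mapping step of a CDRP like the one above can be sketched as a small generator that renders a puppet-style node definition from a role. The role and service class names here are hypothetical placeholders, not the modules from the paper:

```python
# Hypothetical roles and service classes for illustration; a real CDRP
# would draw these from puppet modules written by domain experts.
CLUSTER_ROLES = {
    "head":   ["nfs_server", "scheduler"],
    "worker": ["nfs_client", "execd"],
}

def node_manifest(hostname, role):
    """Render a minimal puppet-style node definition that maps a host
    to the service classes its cluster role requires."""
    services = CLUSTER_ROLES[role]
    body = "\n".join(f"  include {s}" for s in services)
    return f"node '{hostname}' {{\n{body}\n}}\n"

print(node_manifest("dac-head-01", "head"))
```

A cluster manager would feed each acquired machine's hostname and assigned role through such a generator to produce its deployment manifest, and rerun it unchanged to reconfigure a replacement machine after a failure.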
5 CFR 9701.211 - Occupational clusters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 9701.211 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Classification Classification Structure § 9701.211 Occupational clusters. For...
Data Management as a Cluster Middleware Centerpiece
NASA Technical Reports Server (NTRS)
Zero, Jose; McNab, David; Sawyer, William; Cheung, Samson; Duffy, Daniel; Rood, Richard; Webster, Phil; Palm, Nancy; Salmon, Ellen; Schardt, Tom
2004-01-01
Through earth and space modeling and the ongoing launches of satellites to gather data, NASA has become one of the largest producers of data in the world. These large data sets necessitated the creation of a Data Management System (DMS) to assist both the users and the administrators of the data. Halcyon Systems Inc. was contracted by the NASA Center for Computational Sciences (NCCS) to produce a Data Management System. The prototype of the DMS was produced by Halcyon Systems Inc. (Halcyon) for the Global Modeling and Assimilation Office (GMAO). The system, which was implemented and deployed within a relatively short period of time, has proven to be highly reliable and deployable. Following the prototype deployment, Halcyon was contacted by the NCCS to produce a production DMS version for their user community. The system is composed of several existing open source or government-sponsored components such as the San Diego Supercomputer Center's (SDSC) Storage Resource Broker (SRB), the Distributed Oceanographic Data System (DODS), and other components. Since data management is one of the foremost problems in cluster computing, the final package serves not only as a Data Management System but also as a cluster management system. This Cluster/Data Management System (CDMS) can be envisioned as the integration of existing packages.
Katz, R
1992-11-01
Cluster management is a management model that fosters decentralization of management, develops leadership potential of staff, and creates ownership of unit-based goals. Unlike shared governance models, there is no formal structure created by committees and it is less threatening for managers. There are two parts to the cluster management model. One is the formation of cluster groups, consisting of all staff and facilitated by a cluster leader. The cluster groups function for communication and problem-solving. The second part of the cluster management model is the creation of task forces. These task forces are designed to work on short-term goals, usually in response to solving one of the unit's goals. Sometimes the task forces are used for quality improvement or system problems. Clusters are groups of not more than five or six staff members, facilitated by a cluster leader. A cluster is made up of individuals who work the same shift. For example, staff of all job titles who work days would be in one cluster: registered nurses, licensed practical nurses, nursing assistants, and unit clerks. The cluster leader is chosen by the manager based on certain criteria and is trained for this specialized role. The concept of cluster management, criteria for choosing leaders, training for leaders, using cluster groups to solve quality improvement issues, and the learning process necessary for manager support are described.
5 CFR 9701.355 - Setting pay upon movement to a different occupational cluster.
Code of Federal Regulations, 2010 CFR
2010-01-01
... occupational cluster. 9701.355 Section 9701.355 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Pay and Pay Administration Pay Administration § 9701...
3D Viewer Platform of Cloud Clustering Management System: Google Map 3D
NASA Astrophysics Data System (ADS)
Choi, Sung-Ja; Lee, Gang-Soo
A new management-system framework for cloud environments is needed as computing environments converge across platforms. It is hard for an ISV or a small business to adapt the management-system platforms offered by large enterprises. This article proposes a clustering management system for cloud computing environments aimed at ISVs and small-business enterprises. It applies a 3D viewer adapted from Google Maps 3D and Google Earth, and is called 3DV_CCMS as an extension of the CCMS [1].
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Grondona, M
2002-12-19
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Grondona, M
2003-04-22
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
NASA Astrophysics Data System (ADS)
Brenden, T. O.; Clark, R. D.; Wiley, M. J.; Seelbach, P. W.; Wang, L.
2005-05-01
Remote sensing and geographic information systems have made it possible to attribute variables for streams at increasingly detailed resolutions (e.g., individual river reaches). Nevertheless, management decisions still must be made at large scales because land and stream managers typically lack sufficient resources to manage on an individual reach basis. Managers thus require a method for identifying stream management units that are ecologically similar and that can be expected to respond similarly to management decisions. We have developed a spatially-constrained clustering algorithm that can merge neighboring river reaches with similar ecological characteristics into larger management units. The clustering algorithm is based on the Cluster Affinity Search Technique (CAST), which was developed for clustering gene expression data. Inputs to the clustering algorithm are the neighbor relationships of the reaches that comprise the digital river network, the ecological attributes of the reaches, and an affinity value, which identifies the minimum similarity for merging river reaches. In this presentation, we describe the clustering algorithm in greater detail and contrast its use with other methods (expert opinion, classification approach, regular clustering) for identifying management units using several Michigan watersheds as a backdrop.
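The spatially constrained merging described above can be illustrated with a simplified, CAST-inspired sketch: greedily merge neighboring reaches whose attribute similarity meets the affinity threshold. This is not the published algorithm; the similarity function and data are illustrative assumptions.

```python
def similarity(a, b):
    """Toy similarity on equal-length attribute vectors:
    1 / (1 + Euclidean distance), so identical reaches score 1.0."""
    d = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + d)

def merge_reaches(attrs, neighbors, affinity):
    """Greedily merge neighboring reaches whose similarity meets the
    affinity threshold, using union-find to build management units.
    attrs: reach id -> attribute vector; neighbors: adjacency pairs
    from the digital river network."""
    parent = {r: r for r in attrs}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]  # path halving
            r = parent[r]
        return r

    for a, b in neighbors:
        if similarity(attrs[a], attrs[b]) >= affinity:
            parent[find(a)] = find(b)      # merge the two units

    units = {}
    for r in attrs:
        units.setdefault(find(r), []).append(r)
    return sorted(sorted(u) for u in units.values())

attrs = {1: [0.1, 0.2], 2: [0.1, 0.25], 3: [0.9, 0.9]}
edges = [(1, 2), (2, 3)]                   # reach adjacency in the network
print(merge_reaches(attrs, edges, affinity=0.8))  # [[1, 2], [3]]
```

Reaches 1 and 2 are ecologically similar neighbors and merge into one management unit; reach 3, though adjacent to 2, falls below the affinity threshold and stays separate.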
Dell, Emily A; Bowman, Daniel; Rufty, Thomas; Shi, Wei
2008-07-01
Turfgrass is a highly managed ecosystem subject to frequent fertilization, mowing, irrigation, and application of pesticides. Turf management practices may create a perturbed environment for ammonia oxidizers, a key microbial group responsible for nitrification. To elucidate the long-term effects of turf management on these bacteria, we assessed the composition of betaproteobacterial ammonia oxidizers in a chronosequence of turfgrass systems (i.e., 1, 6, 23, and 95 years old) and the adjacent native pines by using both 16S rRNA and amoA gene fragments specific to ammonia oxidizers. Based on the Shannon-Wiener diversity index of denaturing gradient gel electrophoresis patterns and the rarefaction curves of amoA clones, turf management did not change the relative diversity and richness of ammonia oxidizers in turf soils as compared to native pine soils. Ammonia oxidizers in turfgrass systems comprised a suite of phylogenetic clusters common to other terrestrial ecosystems. Nitrosospira clusters 0, 2, 3, and 4; Nitrosospira sp. Nsp65-like sequences; and Nitrosomonas clusters 6 and 7 were detected in the turfgrass chronosequence with Nitrosospira clusters 3 and 4 being dominant. However, both turf age and land change (pine to turf) effected minor changes in ammonia oxidizer composition. Nitrosospira cluster 0 was observed only in older turfgrass systems (i.e., 23 and 95 years old); fine-scale differences within Nitrosospira cluster 3 were seen between native pines and turf. Further investigations are needed to elucidate the ecological implications of the compositional differences.
Systematic and Scalable Testing of Concurrent Programs
2013-12-16
The evaluation of CHESS [107] checked eight different programs ranging from process management libraries to a distributed execution engine to a research...tool (§3.1) targets systematic testing of scheduling nondeterminism in multi- threaded components of the Omega cluster management system [129], while...tool for systematic testing of multithreaded com- ponents of the Omega cluster management system [129]. In particular, §3.1.1 defines a model for
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-07-08
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. The design also includes a scalable, general-purpose communication infrastructure. This paper presents an overview of the SLURM architecture and functionality.
Tremblay, Marlène; Hess, Justin P; Christenson, Brock M; McIntyre, Kolby K; Smink, Ben; van der Kamp, Arjen J; de Jong, Lisanne G; Döpfer, Dörte
2016-07-01
Automatic milking systems (AMS) are implemented in a variety of situations and environments. Consequently, there is a need to characterize individual farming practices and regional challenges to streamline management advice and objectives for producers. Benchmarking is often used in the dairy industry to compare farms by computing percentile ranks of the production values of groups of farms. Grouping for conventional benchmarking is commonly limited to the use of a few factors such as farms' geographic region or breed of cattle. We hypothesized that herds' production data and management information could be clustered in a meaningful way using cluster analysis and that this clustering approach would yield better peer groups of farms than benchmarking methods based on criteria such as country, region, breed, or breed and region. By applying mixed latent-class model-based cluster analysis to 529 North American AMS dairy farms with respect to 18 significant risk factors, 6 clusters were identified. Each cluster (i.e., peer group) represented unique management styles, challenges, and production patterns. When compared with peer groups based on criteria similar to the conventional benchmarking standards, the 6 clusters better predicted milk produced (kilograms) per robot per day. Each cluster represented a unique management and production pattern that requires specialized advice. For example, cluster 1 farms were those that recently installed AMS robots, whereas cluster 3 farms (the most northern farms) fed high amounts of concentrates through the robot to compensate for low-energy feed in the bunk. In addition to general recommendations for farms within a cluster, individual farms can generate their own specific goals by comparing themselves to farms within their cluster. This is very comparable to benchmarking but adds the specific characteristics of the peer group, resulting in better farm management advice. 
The improvement that cluster analysis allows for is characterized by the multivariable approach and the fact that comparisons between production units can be accomplished within a cluster and between clusters as a choice. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
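The within-cluster benchmarking step described above reduces to computing a farm's percentile rank among its cluster peers rather than among all farms. A minimal sketch, with hypothetical milk-per-robot-per-day values:

```python
def percentile_rank(value, peers):
    """Percentile rank of one farm's production within its peer group:
    the share of peer values at or below it, as a percentage."""
    at_or_below = sum(1 for p in peers if p <= value)
    return 100.0 * at_or_below / len(peers)

# Hypothetical milk (kg) per robot per day for the farms in one cluster.
cluster_peers = [1500, 1720, 1800, 1950, 2100]
print(percentile_rank(1800, cluster_peers))  # 60.0
```

The same farm benchmarked against a conventional region-wide group could land at a very different rank, which is why comparing within a management-style cluster yields more actionable advice.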
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-04-24
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.
System and method for merging clusters of wireless nodes in a wireless network
Budampati, Ramakrishna S [Maple Grove, MN; Gonia, Patrick S [Maplewood, MN; Kolavennu, Soumitri N [Blaine, MN; Mahasenan, Arun V [Kerala, IN
2012-05-29
A system includes a first cluster having multiple first wireless nodes. One first node is configured to act as a first cluster master, and other first nodes are configured to receive time synchronization information provided by the first cluster master. The system also includes a second cluster having one or more second wireless nodes. One second node is configured to act as a second cluster master, and any other second nodes configured to receive time synchronization information provided by the second cluster master. The system further includes a manager configured to merge the clusters into a combined cluster. One of the nodes is configured to act as a single cluster master for the combined cluster, and the other nodes are configured to receive time synchronization information provided by the single cluster master.
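The merge operation claimed above can be sketched in miniature: combine two clusters, elect a single master, and treat every other node as a time-sync receiver. The master-election rule (smallest node id) is an assumption for illustration; the patent does not specify it here.

```python
class Cluster:
    """A cluster of wireless nodes with one master that provides
    time synchronization information to the other members."""
    def __init__(self, master, members):
        self.master = master
        self.members = set(members) | {master}

def merge_clusters(c1, c2, choose_master=min):
    """Merge two clusters into a combined cluster: one node becomes
    the single cluster master (here, hypothetically, the smaller of
    the two old masters' ids) and all other nodes become receivers
    of its time synchronization information."""
    combined = Cluster(choose_master(c1.master, c2.master),
                       c1.members | c2.members)
    receivers = combined.members - {combined.master}
    return combined, receivers

a = Cluster("n1", ["n2", "n3"])
b = Cluster("n5", ["n6"])
merged, receivers = merge_clusters(a, b)
print(merged.master)      # n1
print(sorted(receivers))  # ['n2', 'n3', 'n5', 'n6']
```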
Dynamically allocated virtual clustering management system
NASA Astrophysics Data System (ADS)
Marcus, Kelvin; Cannata, Jess
2013-05-01
The U.S. Army Research Laboratory (ARL) has built a "Wireless Emulation Lab" to support research in wireless mobile networks. In our current experimentation environment, our researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks where each node could contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user's experiment due to undesirable network crosstalk, thus only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering management system (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to efficiently allocate virtual machines to idle systems and networks. The system deploys stateless nodes via network booting. The system uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex, private networks, eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of the experiment. Users also control when to shutdown their clusters.
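The VLAN-based isolation described above amounts to handing each experiment an unused 802.1Q VLAN ID and reclaiming it at shutdown. A minimal sketch; the ID range and bookkeeping are assumptions, not DAVC's actual scheme:

```python
class VlanAllocator:
    """Keep experiments isolated by giving each private cluster an
    unused 802.1Q VLAN ID from a fixed range; the ID is returned to
    the pool when the cluster shuts down."""

    def __init__(self, first=100, last=199):
        self.available = list(range(first, last + 1))
        self.assigned = {}  # experiment name -> VLAN ID

    def allocate(self, experiment):
        vid = self.available.pop(0)      # take the next unused ID
        self.assigned[experiment] = vid
        return vid

    def release(self, experiment):
        # Return the experiment's VLAN ID to the tail of the pool.
        self.available.append(self.assigned.pop(experiment))

v = VlanAllocator()
print(v.allocate("exp-alpha"))  # 100
print(v.allocate("exp-beta"))   # 101
v.release("exp-alpha")          # 100 rejoins the pool
print(v.allocate("exp-gamma"))  # 102 (next fresh ID from the front)
```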
Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M.
2009-09-09
SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
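The three functions listed above (allocate nodes, run work on them, queue conflicting requests) can be shown in miniature. This is a toy model of the idea, not SLURM code; the FIFO policy and job names are assumptions.

```python
from collections import deque

class TinyResourceManager:
    """Toy resource manager: exclusive node allocation, a FIFO queue
    of pending work, and admission of queued jobs as nodes free up."""

    def __init__(self, nodes):
        self.free = set(nodes)
        self.queue = deque()   # pending (job, nodes_wanted) requests
        self.log = []          # (job, allocated nodes), in start order

    def submit(self, job, n_nodes):
        self.queue.append((job, n_nodes))
        self._schedule()

    def finish(self, job, nodes):
        self.free |= set(nodes)  # nodes return to the free pool
        self._schedule()

    def _schedule(self):
        # Start queued jobs in order while enough nodes are free.
        while self.queue and len(self.free) >= self.queue[0][1]:
            job, n = self.queue.popleft()
            alloc = sorted(self.free)[:n]
            self.free -= set(alloc)   # exclusive allocation
            self.log.append((job, alloc))

rm = TinyResourceManager(["n1", "n2", "n3"])
rm.submit("job-a", 2)              # starts on n1, n2
rm.submit("job-b", 2)              # conflicts: only n3 free, so it queues
rm.finish("job-a", ["n1", "n2"])   # freeing nodes lets job-b start
print(rm.log)  # [('job-a', ['n1', 'n2']), ('job-b', ['n1', 'n2'])]
```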
NASA Astrophysics Data System (ADS)
Marcus, Kelvin
2014-06-01
The U.S. Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve their ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine if automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could potentially increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary in conducting their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load, available RAM and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy and manage multiple experimentation clusters to support their experimentation goals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Amjad Majid; Albert, Don; Andersson, Par
SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan
2015-01-01
Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.
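The multi-stage workflow idea described above can be sketched as a pipeline where each stage consumes the previous stage's output. This is a minimal illustration of the concept, not JMS's API; the stage names and chaining interface are assumptions.

```python
class Workflow:
    """Sketch of a multi-stage computational pipeline: stages run in
    order, each consuming the previous stage's output, with a history
    kept for inspection (as a web interface might display per-stage
    results)."""

    def __init__(self, name):
        self.name = name
        self.stages = []  # ordered list of (stage_name, callable)

    def add_stage(self, name, func):
        self.stages.append((name, func))
        return self       # allow fluent chaining

    def run(self, data):
        history = []
        for name, func in self.stages:
            data = func(data)
            history.append((name, data))
        return data, history

wf = (Workflow("toy-pipeline")
      .add_stage("clean", lambda xs: [x for x in xs if x is not None])
      .add_stage("square", lambda xs: [x * x for x in xs])
      .add_stage("total", sum))
result, history = wf.run([1, None, 2, 3])
print(result)  # 14
```

In a JMS-like system each stage would be submitted to the HPC resource manager as a batch job rather than called inline, but the data flow between stages is the same.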
Job Management Requirements for NAS Parallel Systems and Clusters
NASA Technical Reports Server (NTRS)
Saphir, William; Tanner, Leigh Ann; Traversat, Bernard
1995-01-01
A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.
NAS Requirements Checklist for Job Queuing/Scheduling Software
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The increasing reliability of parallel systems and clusters of computers has resulted in these systems becoming more attractive for true production workloads. Today, the primary obstacle to production use of clusters of computers is the lack of a functional and robust Job Management System for parallel applications. This document provides a checklist of NAS requirements for job queuing and scheduling in order to make most efficient use of parallel systems and clusters for parallel applications. Future requirements are also identified to assist software vendors with design planning.
Key Management Scheme Based on Route Planning of Mobile Sink in Wireless Sensor Networks.
Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Jiang, Shengming; Chen, Wei
2016-01-29
In many wireless sensor network application scenarios, the key management scheme with a Mobile Sink (MS) should be fully investigated. This paper proposes a key management scheme based on dynamic clustering and optimal-routing choice of MS. The concept of Traveling Salesman Problem with Neighbor areas (TSPN) in dynamic clustering for data exchange is proposed, and the selection probability is used in MS route planning. The proposed scheme extends static key management to dynamic key management by considering the dynamic clustering and mobility of MSs, which can effectively balance the total energy consumption during the activities. Considering the different resources available to the member nodes and sink node, the session key between cluster head and MS is established by a modified ECC encryption with Diffie-Hellman key exchange (ECDH) algorithm, and the session key between member node and cluster head is built with a binary symmetric polynomial. By analyzing the security of data storage, data transfer and the mechanism of dynamic key management, the proposed scheme has more advantages to help improve the resilience of the key management system of the network on the premise of satisfying higher connectivity and storage efficiency.
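The binary symmetric polynomial mentioned above works because f(x, y) = f(y, x): each node receives the univariate share f(id, y), and two nodes evaluating their shares at each other's ids derive the same pairwise key. A toy sketch with deliberately tiny, insecure demo parameters (the paper's actual polynomial degree and field are not reproduced here):

```python
# Binary symmetric polynomial key sketch: f(x, y) = A + B*(x + y) + C*x*y
# is symmetric in x and y, so f(i, j) == f(j, i). Parameters are tiny
# demonstration values only, not secure choices.
P = 7919            # small prime modulus (demo only)
A, B, C = 11, 23, 5

def share(i):
    """Share given to node i: the univariate polynomial f(i, y)."""
    return lambda y: (A + B * (i + y) + C * i * y) % P

head, member = share(10), share(42)   # hypothetical node ids
k1 = head(42)    # cluster head evaluates its share at the member's id
k2 = member(10)  # member evaluates its share at the head's id
print(k1 == k2)  # True: both sides hold the same session key
```

A real deployment would use a degree-t symmetric polynomial over a large field so that any coalition of at most t compromised nodes learns nothing about other pairs' keys; the degree-1 form here is only to show the symmetry.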
Logistics Enterprise Evaluation Model Based On Fuzzy Clustering Analysis
NASA Astrophysics Data System (ADS)
Fu, Pei-hua; Yin, Hong-bo
In this thesis, we introduce an evaluation model for logistics enterprises based on a fuzzy clustering algorithm. First, we present an evaluation index system covering basic information, management level, technical strength, transport capacity, informatization level, market competition, and customer service. We assign index weights according to grades and evaluate the integrated capability of logistics enterprises using fuzzy cluster analysis. We describe the system evaluation module and the cluster analysis module in detail, explain how both were implemented, and finally report the results produced by the system.
NASA Astrophysics Data System (ADS)
Gleason, J. L.; Hillyer, T. N.; Wilkins, J.
2012-12-01
The CERES Science Team integrates data from 5 CERES instruments onboard the Terra, Aqua and NPP missions. The processing chain fuses CERES observations with data from 19 other unique sources. The addition of CERES Flight Model 5 (FM5) onboard NPP, coupled with ground processing system upgrades further emphasizes the need for an automated job-submission utility to manage multiple processing streams concurrently. The operator-driven, legacy-processing approach relied on manually staging data from magnetic tape to limited spinning disk attached to a shared memory architecture system. The migration of CERES production code to a distributed, cluster computing environment with approximately one petabyte of spinning disk containing all precursor input data products facilitates the development of a CERES-specific, automated workflow manager. In the cluster environment, I/O is the primary system resource in contention across jobs. Therefore, system load can be maximized with a throttling workload manager. This poster discusses a Java and Perl implementation of an automated job management tool tailored for CERES processing.
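Since I/O is named as the contended resource, the throttling behavior of such a workload manager can be sketched with a semaphore that caps how many jobs touch the disk at once while a larger worker pool keeps jobs queued. The slot count, worker count, and job body are illustrative assumptions, not details of the CERES tool.

```python
# Minimal sketch of a throttling workload manager: cap the number of
# concurrently running I/O-heavy jobs so the contended resource
# (spinning disk) is not oversubscribed. Limits are illustrative.
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_IO = 4
io_slots = threading.Semaphore(MAX_CONCURRENT_IO)

def run_job(job_id, results):
    with io_slots:              # block until an I/O slot is free
        results.append(job_id)  # stand-in for staging input + running the job

def submit_all(n_jobs):
    """Submit n_jobs to a worker pool; only MAX_CONCURRENT_IO run at once."""
    results = []
    with ThreadPoolExecutor(max_workers=16) as pool:
        for j in range(n_jobs):
            pool.submit(run_job, j, results)
    return results  # pool exit waits for all jobs to finish
```

The design choice is that admission control lives in the manager, not the jobs: jobs are submitted freely, and the semaphore serializes only the I/O-bound phase.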
Jung, Youngmee Tiffany; Narayanan, N C; Cheng, Yu-Ling
2018-05-01
There is growing interest in decentralized wastewater management (DWWM) as a potential alternative to centralized wastewater management (CWWM) in developing countries. However, the comparative cost of CWWM and DWWM is not well understood. In this study, the cost of cluster-type DWWM is simulated and compared to the cost of CWWM in Alibag, India. A three-step model is built to simulate a broad range of potential DWWM configurations with varying numbers and layouts of cluster subsystems. The DWWM scheme considered consists of cluster subsystems, each of which uses simplified sewers and DEWATS (Decentralized Wastewater Treatment Systems); the CWWM comparison uses conventional sewers and an activated sludge plant. The results show that the cost of DWWM can vary significantly with the number and layout of its cluster subsystems. The cost of DWWM increased nonlinearly with the number of clusters, mainly because of lost economies of scale for DEWATS. For configurations with the same number of cluster subsystems, the cost of DWWM varied by ±5% around the mean depending on the layout of the subsystems. DWWM was of lower cost than CWWM when configured with fewer than 16 clusters in Alibag, with significantly lower operation and maintenance requirements but higher capital and land requirements for construction. The study demonstrates that cluster-type DWWM using simplified sewers and DEWATS can be a cost-competitive alternative to CWWM when carefully configured to lower its cost.
Knowledge Management in Acquisition and Program Management (KM in the AM and PM)
2002-01-01
...a clumping of clusters. If all the planets in a solar system had moons, the moons would be the people, each planet would be a discipline or cluster... In data exploration, one looks for non-obvious, unknown relationships in a data set, such as the discovery that customers frequently buy beer and diapers together...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Lee H.; Laros, James H., III
This paper describes a methodology for implementing disk-less cluster systems using the Network File System (NFS) that scales to thousands of nodes. This method has been successfully deployed and is currently in use on several production systems at Sandia National Labs. This paper will outline our methodology and implementation, discuss hardware and software considerations in detail and present cluster configurations with performance numbers for various management operations like booting.
Naver: a PC-cluster-based VR system
NASA Astrophysics Data System (ADS)
Park, ChangHoon; Ko, HeeDong; Kim, TaiYun
2003-04-01
In this paper, we present NAVER, a new framework for virtual reality applications. NAVER runs on a cluster of low-cost personal computers. Its goal is to provide a flexible, extensible, scalable, and re-configurable framework for virtual environments, defined as the integration of a 3D virtual space with external modules: various input or output devices and applications on remote hosts. From a system view, the personal computers are divided into three servers according to their functions: Render Server, Device Server, and Control Server. The Device Server hosts external modules that require event-based communication, while the Control Server hosts external modules that require synchronous communication every frame. The Render Server consists of five managers: Scenario Manager, Event Manager, Command Manager, Interaction Manager, and Sync Manager. These managers support the declaration and operation of the virtual environment and its integration with external modules on remote servers.
Research on elastic resource management for multi-queue under cloud computing environment
NASA Astrophysics Data System (ADS)
CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang
2017-10-01
As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments, but this method adapts poorly to volatile computing resource requirements. To solve this problem, an elastic computing resource management system for the cloud computing environment has been designed. The system performs unified management of virtual computing nodes on the basis of the HTCondor job queues, driven by dual resource thresholds and a quota service, and a two-stage pool improves the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. In practice, virtual computing resources expand or shrink dynamically as computing requirements change, and the CPU utilization of computing resources increased significantly compared with traditional resource management. The system also performs well with multiple HTCondor schedulers and multiple job queues.
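The dual-threshold policy described above can be sketched as a small decision function: expand the pool when queued jobs exceed a high-water mark (subject to the queue's quota), shrink it when idle virtual machines exceed a low-water mark. The threshold and quota values are illustrative assumptions, not the IHEPCloud configuration.

```python
# Sketch of dual-threshold elastic scaling for one job queue:
# positive return = VMs to add, negative = VMs to remove, 0 = no change.
# All constants are illustrative placeholders.

HIGH_WATER = 10   # queued jobs that trigger expansion
LOW_WATER = 3     # idle VMs that trigger shrinking
QUOTA = 50        # per-queue VM quota enforced by the quota service

def rescale(queued_jobs, idle_vms, total_vms):
    """Return the VM delta for one queue under the dual thresholds."""
    if queued_jobs > HIGH_WATER and total_vms < QUOTA:
        # Expand toward demand, but never beyond the quota.
        return min(queued_jobs - HIGH_WATER, QUOTA - total_vms)
    if idle_vms > LOW_WATER:
        # Release idle capacity above the low-water mark.
        return -(idle_vms - LOW_WATER)
    return 0
```

Keeping a few idle VMs (the low-water mark) rather than shrinking to zero is what lets newly arriving jobs start without waiting for a fresh expansion.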
NASA Astrophysics Data System (ADS)
Markov, N. G.; Vasilyeva, E. E.; Evsyutkin, I. V.
2017-01-01
An intelligent information system for managing geological and technical measures during oil field exploitation has been developed. A distinctive feature of the system is the service-oriented architecture of its software. Results of cluster analysis of real field data obtained with the system are presented.
Anchala, Raghupathy; Kaptoge, Stephen; Pant, Hira; Di Angelantonio, Emanuele; Franco, Oscar H.; Prabhakaran, D.
2015-01-01
Background Randomized controlled trials from the developed world report that clinical decision support systems (DSS) can provide an effective means to improve the management of hypertension (HTN). However, evidence from developing countries in this regard is limited, and there is a need to assess the impact of a clinical DSS on managing HTN in primary health care center (PHC) settings. Methods and Results We performed a cluster randomized trial to test the effectiveness and cost‐effectiveness of a clinical DSS among Indian adult hypertensive patients (between 35 and 64 years of age), wherein 16 PHC clusters from a district of Telangana state, India, were randomized to receive either a DSS or a chart‐based support (CBS) system. Each intervention arm had 8 PHC clusters, with a mean of 102 hypertensive patients per cluster (n=845 in the DSS and 783 in the CBS groups). Mean change in systolic blood pressure (SBP) from baseline to 12 months was the primary endpoint. The mean difference in SBP change from baseline between the DSS and CBS groups at the 12th month of follow‐up, adjusted for age, sex, height, waist, body mass index, alcohol consumption, vegetable intake, pickle intake, and baseline differences in blood pressure, was −6.59 mm Hg (95% confidence interval: −12.18 to −1.42; P=0.021). The cost‐effectiveness ratios for the CBS and DSS groups were $96.01 and $36.57 per mm Hg of SBP reduction, respectively. Conclusion Clinical DSS are effective and cost‐effective in the management of HTN in resource‐constrained PHC settings. Clinical Trial Registration URL: http://www.ctri.nic.in. Unique identifier: CTRI/2012/03/002476. PMID:25559011
Typology of Ohio, USA, Tree Farmers Based Upon Forestry Outreach Needs
NASA Astrophysics Data System (ADS)
Starr, SE; McConnell, TE; Bruskotter, JS; Williams, RA
2015-02-01
This study differentiated groups of Ohio tree farmers through multivariate clustering of their perceived needs for forest management outreach. Tree farmers were surveyed via a mailed questionnaire. Respondents were asked to rate, on a 1-7 scale, their informational needs for 26 outreach topics, which were reduced to six factors. Based on these factors, three clusters were identified: holistic managers, environmental stewards, and pragmatic tree farmers. Cluster assignment of individuals depended on a tree farmer's age, acreage owned, and number of years enrolled in the American Tree Farm System. Holistic managers showed a greater interest in the outreach topics, while pragmatic tree farmers displayed an overall lesser interest. Across clusters, print media and in-person workshops were preferred over emails and webinars for receiving forest management information. In-person workshops should be no more than 1-day events, held on a weekday, during the daytime, at a cost not exceeding $35. Programming related to environmental influences, which included managing for forest insects and diseases, was concluded to have the greatest potential to impact clientele among all outreach factors because the information is applicable across demographics and management objectives.
Dynamically Allocated Virtual Clustering Management System Users Guide
2016-11-01
This report provides usage instructions for the DAVC (Dynamically Allocated Virtual Clustering) version 2.0 web application and is separated into sections that detail its use.
Agent-based method for distributed clustering of textual information
Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN
2010-09-28
A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
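The routing step the agents perform (compare a new document vector against clusters, forward to the best match, or spawn a new cluster) can be sketched with cosine similarity against cluster centroids. The similarity threshold and vectors are illustrative assumptions; the patent does not specify these values.

```python
# Sketch of the agents' similarity evaluation: route a new document
# vector to the most similar cluster centroid, or signal that a new
# cluster agent should be created. Threshold is illustrative.
import math

SIM_THRESHOLD = 0.8

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def route(doc_vec, centroids):
    """Return the index of the best cluster, or -1 to spawn a new one."""
    scored = [(cosine(doc_vec, c), i) for i, c in enumerate(centroids)]
    best_sim, best_i = max(scored, default=(0.0, -1))
    return best_i if best_sim >= SIM_THRESHOLD else -1
```

In the patented system this comparison is distributed: each cluster agent scores the vector against its own documents and returns the value upstream, so only the winning agent ever receives the full document.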
Roca, Josep; Vargas, Claudia; Cano, Isaac; Selivanov, Vitaly; Barreiro, Esther; Maier, Dieter; Falciani, Francesco; Wagner, Peter; Cascante, Marta; Garcia-Aymerich, Judith; Kalko, Susana; De Mas, Igor; Tegnér, Jesper; Escarrabill, Joan; Agustí, Alvar; Gomez-Cabrero, David
2014-11-28
Heterogeneity in clinical manifestations and disease progression in Chronic Obstructive Pulmonary Disease (COPD) has consequences for patient health-risk assessment, stratification, and management. Implicit in the classical "spill over" hypothesis is that COPD heterogeneity is driven by the pulmonary events of the disease. Alternatively, we hypothesized that COPD heterogeneity results from the interplay of mechanisms governing three conceptually different phenomena, each with its own dynamics: 1) pulmonary disease, 2) systemic effects of COPD, and 3) co-morbidity clustering. The aim was to explore the potential of a systems analysis of COPD heterogeneity, focused on skeletal muscle dysfunction and on co-morbidity clustering, to generate predictive models with impact on patient management. To this end, strategies combining deterministic modeling and network medicine analyses of the Biobridge dataset were used to investigate the mechanisms of skeletal muscle dysfunction. An independent data-driven analysis of co-morbidity clustering, examining associated genes and pathways, was performed using a large dataset (ICD9-CM data from Medicare, 13 million people). Finally, a targeted network analysis using the outcomes of the two approaches (skeletal muscle dysfunction and co-morbidity clustering) explored shared pathways between these phenomena. (1) Abnormal regulation of skeletal muscle bioenergetics and skeletal muscle remodeling, showing a significant association with nitroso-redox disequilibrium, was observed in COPD; (2) COPD patients presented a higher risk of co-morbidity clustering than non-COPD patients, increasing with aging; and (3) the ongoing targeted network analyses suggest shared pathways between skeletal muscle dysfunction and co-morbidity clustering. The results indicate the high potential of a systems approach to address COPD heterogeneity.
Significant knowledge gaps were identified that are relevant to shape strategies aiming at fostering 4P Medicine for patients with COPD.
Study on Global GIS architecture and its key technologies
NASA Astrophysics Data System (ADS)
Cheng, Chengqi; Guan, Li; Lv, Xuefeng
2009-09-01
Global GIS (G2IS) is a system that supports massive data processing and direct global manipulation on a global grid based on a spheroid or ellipsoid surface. Building on the global subdivision grid (GSG), this paper presents a Global GIS architecture that draws on computer cluster theory, space-time integration technology, and virtual reality technology. The architecture is composed of five layers: a data storage layer, data representation layer, network and cluster layer, data management layer, and data application layer. Within it, a four-level protocol framework and a three-layer data management pattern are designed for the organization, management, and publication of spatial information. The three core supporting technologies (computer cluster theory, space-time integration, and virtual reality) and their application patterns in Global GIS are introduced in detail. The ideas presented here point to an important development direction for GIS.
Greenhouse tomato limited cluster production systems: crop management practices affect yield
NASA Technical Reports Server (NTRS)
Logendra, L. S.; Gianfagna, T. J.; Specca, D. R.; Janes, H. W.
2001-01-01
Limited-cluster production systems may be a useful strategy to increase crop production and profitability for the greenhouse tomato (Lycopersicon esculentum Mill). In this study, using an ebb-and-flood hydroponics system, we modified plant architecture and spacing and determined the effects on fruit yield and harvest index at two light levels. Single-cluster plants pruned to allow two leaves above the cluster had 25% higher fruit yields than did plants pruned directly above the cluster; this was due to an increase in fruit weight, not fruit number. Both fruit yield and harvest index were greater for all single-cluster plants at the higher light level because of increases in both fruit weight and fruit number. Fruit yield for two-cluster plants was 30% to 40% higher than for single-cluster plants, and there was little difference in the dates or length of the harvest period. Fruit yield for three-cluster plants was not significantly different from that of two-cluster plants; moreover, the harvest period was delayed by 5 days. Plant density (5.5, 7.4, 9.2 plants/m2) affected fruit yield/plant, but not fruit yield/unit area. Given the higher costs for materials and labor associated with higher plant densities, a two-cluster crop at 5.5 plants/m2 with two leaves above the cluster was the best of the production system strategies tested.
Study of systems and techniques for data base management
NASA Technical Reports Server (NTRS)
1976-01-01
Data management areas were studied to identify pertinent problems and issues that will affect future NASA data users in terms of performance and cost. Specific topics discussed include the identification of potential NASA data users other than those normally considered, considerations affecting the clustering of minicomputers, low-cost computer systems for information retrieval and analysis, the testing of minicomputer-based data base management systems, ongoing work related to the use of dedicated systems for data base management, and the problems of data interchange among a community of NASA data users.
Gathering Real World Evidence with Cluster Analysis for Clinical Decision Support.
Xia, Eryu; Liu, Haifeng; Li, Jing; Mei, Jing; Li, Xuejun; Xu, Enliang; Li, Xiang; Hu, Gang; Xie, Guotong; Xu, Meilin
2017-01-01
Clinical decision support systems are information technology systems that assist clinical decision-making tasks and have been shown to enhance clinical performance. Cluster analysis, which groups similar patients together, can separate a phenotypically heterogeneous patient population into therapeutically homogeneous subclasses. Useful as it is, the application of cluster analysis in clinical decision support systems is rarely reported. Here, we describe the use of cluster analysis in a clinical decision support system that first divides patient cases into similar groups and then provides diagnosis or treatment suggestions based on the group profiles. This integration provides data for clinical decisions and compiles a wide range of clinical practices to inform the performance of individual clinicians. We also include an example use of the system in the scenario of blood lipid management in type 2 diabetes. These efforts represent a step toward promoting patient-centered care and enabling precision medicine.
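The described integration (assign a new case to its nearest cluster, then surface that cluster's typical treatment) can be sketched in a few lines. The features, centroids, and treatment labels below are invented placeholders for illustration only; they are not clinical guidance or data from the paper.

```python
# Sketch of cluster-profile-based decision support: assign a patient
# to the nearest cluster of past cases, then suggest that cluster's
# majority treatment. All data are illustrative placeholders.
from collections import Counter

def nearest_cluster(patient, centroids):
    """Index of the centroid with the smallest squared distance."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(range(len(centroids)), key=lambda i: d2(patient, centroids[i]))

def suggest(patient, centroids, cluster_treatments):
    """Majority treatment among cases in the patient's cluster."""
    i = nearest_cluster(patient, centroids)
    return Counter(cluster_treatments[i]).most_common(1)[0][0]

centroids = [[5.2, 1.1], [7.8, 2.4]]         # hypothetical lipid features
treatments = [["statin", "statin", "diet"],  # outcomes seen in cluster 0
              ["fibrate", "fibrate"]]        # outcomes seen in cluster 1
assert suggest([5.0, 1.0], centroids, treatments) == "statin"
```

The point of the design is that the suggestion is backed by real-world evidence from the cluster profile rather than by a hand-authored rule.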
The web site provides guidance and technical assistance for homeowners, government officials, industry professionals, and EPA partners about how to properly develop and manage individual onsite and community cluster systems that treat domestic wastewater.
Diagnosis, pathophysiology, and management of cluster headache.
Hoffmann, Jan; May, Arne
2018-01-01
Cluster headache is a trigeminal autonomic cephalalgia characterised by extremely painful, strictly unilateral, short-lasting headache attacks accompanied by ipsilateral autonomic symptoms or the sense of restlessness and agitation, or both. The severity of the disorder has major effects on the patient's quality of life and, in some cases, might lead to suicidal ideation. Cluster headache is now thought to involve a synchronised abnormal activity in the hypothalamus, the trigeminovascular system, and the autonomic nervous system. The hypothalamus appears to play a fundamental role in the generation of a permissive state that allows the initiation of an episode, whereas the attacks are likely to require the involvement of the peripheral nervous system. Triptans are the most effective drugs to treat an acute cluster headache attack. Monoclonal antibodies against calcitonin gene-related peptide, a crucial neurotransmitter of the trigeminal system, are under investigation for the preventive treatment of cluster headache. These studies will increase our understanding of the disorder and perhaps reveal other therapeutic targets.
A Clustering Methodology of Web Log Data for Learning Management Systems
ERIC Educational Resources Information Center
Valsamidis, Stavros; Kontogiannis, Sotirios; Kazanidis, Ioannis; Theodosiou, Theodosios; Karakos, Alexandros
2012-01-01
Learning Management Systems (LMS) collect large amounts of data. Data mining techniques can be applied to analyse their web data log files. The instructors may use this data for assessing and measuring their courses. In this respect, we have proposed a methodology for analysing LMS courses and students' activity. This methodology uses a Markov…
ERIC Educational Resources Information Center
Morgan, Robert L., Ed.; And Others
The document contains 12 papers. Two of the papers present opening and closing remarks to the conference. The other 10 deal with their State's management information system (MIS) in vocational education. The 10 papers are clustered according to whether they are primarily descriptive of student accounting (four papers), manpower supply and demand…
Integrated management of thesis using clustering method
NASA Astrophysics Data System (ADS)
Astuti, Indah Fitri; Cahyadi, Dedy
2017-02-01
A thesis is one of the major requirements for students pursuing a bachelor's degree. Finishing a thesis involves a long process that includes consultation, writing the manuscript, conducting the chosen method, seminar scheduling, searching for references, and appraisal by the board of mentors and examiners. Unfortunately, most students find it hard to match all the lecturers' free time so they can sit together in a seminar room to examine a thesis, so the seminar scheduling process should be the top priority to solve. A manual mechanism for this task no longer fulfills the need: students, staff, and lecturers demand a system in which all stakeholders can interact with each other and manage the thesis process without timetable conflicts. Management Information Systems (MIS), a branch of computer science, could be a breakthrough in dealing with thesis management. This research applies a clustering method to distinguish categories using mathematical formulas. A system was then developed along with the method to provide a well-managed tool offering facilities such as seminar scheduling, consultation and review, thesis approval, assessment, and a reliable thesis database. The database plays an important role for present and future purposes.
Supporting scalability and flexibility in a distributed management platform
NASA Astrophysics Data System (ADS)
Jardin, P.
1996-06-01
The TeMIP management platform was developed to manage very large distributed systems such as telecommunications networks. Managing these networks imposes a number of fairly stringent requirements, including partitioning of the network, division of work based on skills and target system types, and the ability to adjust functions to specific operational requirements. This in turn requires the ability to cluster managed resources into domains that are defined entirely at runtime based on operator policies. This paper addresses some of the issues involved in adding such a dynamic dimension to a management solution.
NASA Technical Reports Server (NTRS)
1973-01-01
Results of the design and manufacturing reviews on the maturity of the Skylab modules are presented along with results of investigations on the scope of the cluster risk assessment efforts. The technical management system and its capability to assess and resolve problems are studied.
Integration of Openstack cloud resources in BES III computing cluster
NASA Astrophysics Data System (ADS)
Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan
2017-10-01
Cloud computing provides a new technical means for data processing in high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and used statically. To make the system simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) that integrates IHEPCloud with different batch systems such as Torque and HTCondor. Vpmanager schedules virtual machines dynamically according to the job queue. Results from the BES III use case show that resource efficiency is greatly improved.
Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay
2014-01-01
The rapid development of picture archiving and communication systems (PACSs) has thoroughly changed the way medical information is communicated and managed. However, as the scale of a hospital's operations increases, the large volume of digital images transferred over the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed, with network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation, and the average transmission rate (ATR) was compared between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas in the upload scenario the ATRs increased by 23.0%, 39.2%, and 24.9%. In the mixed scenario, transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained system availability and image integrity. The server cluster can improve transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.
Ergatis: a web interface and scalable software system for bioinformatics workflows
Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.
2010-01-01
Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible to a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net Contact: jorvis@users.sourceforge.net PMID:20413634
Automated clustering-based workload characterization
NASA Technical Reports Server (NTRS)
Pentakalos, Odysseas I.; Menasce, Daniel A.; Yesha, Yelena
1996-01-01
The demands placed on the mass storage systems at various federal agencies and national laboratories are continuously increasing in intensity. This forces system managers to constantly monitor the system, evaluate the demand placed on it, and tune it appropriately using either heuristics based on experience or analytic models. Performance models require an accurate workload characterization, which can be a laborious and time-consuming process, and it became evident from our experience that a tool is necessary to automate it. This paper presents the design and discusses the implementation of a tool for workload characterization of mass storage systems. The main features of the tool are: (1) Automatic support for peak-period determination: histograms of system activity are generated and presented to the user. (2) Automatic clustering analysis: the data collected from the mass storage system logs is clustered using clustering algorithms and tightness measures to limit the number of generated clusters. (3) Reporting of varied file statistics: the tool computes several statistics on file sizes, such as average, standard deviation, minimum, maximum, and frequency, as well as average transfer time, all on a per-cluster basis. (4) Portability: the tool can easily be used to characterize the workload in mass storage systems of different vendors; the user specifies, through a simple log description language, how a specific log should be interpreted. The rest of this paper is organized as follows. Section two presents basic concepts in workload characterization as they apply to mass storage systems. Section three describes clustering algorithms and tightness measures. The following section presents the architecture of the tool. Section five presents some results of workload characterization using the tool. Finally, section six presents some concluding remarks.
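The "tightness measure to limit the number of generated clusters" idea can be sketched as follows: grow k until a tightness measure (here, mean within-cluster squared deviation) drops below a threshold. The 1-D k-means, the threshold, and the file-size data are illustrative assumptions, not the tool's actual algorithm or parameters.

```python
# Sketch of tightness-limited clustering on 1-D data (e.g. file sizes):
# increase the cluster count k until the clusters are "tight enough".
# All constants are illustrative.

def kmeans_1d(data, k, iters=20):
    """Tiny 1-D k-means; returns (centroids, groups)."""
    cents = sorted(data)[:: max(1, len(data) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in cents]
        for x in data:
            i = min(range(len(cents)), key=lambda i: abs(x - cents[i]))
            groups[i].append(x)
        cents = [sum(g) / len(g) if g else c for g, c in zip(groups, cents)]
    return cents, groups

def tightness(groups, cents):
    """Mean within-cluster squared deviation across all points."""
    sse = sum((x - c) ** 2 for g, c in zip(groups, cents) for x in g)
    return sse / sum(len(g) for g in groups)

def cluster_until_tight(data, threshold, max_k=10):
    """Smallest k whose clustering meets the tightness threshold."""
    for k in range(1, max_k + 1):
        cents, groups = kmeans_1d(data, k)
        if tightness(groups, cents) <= threshold:
            return k, cents
    return max_k, cents
```

Stopping at the smallest acceptable k is what keeps the characterization compact: the tool reports per-cluster statistics, so fewer, tighter clusters make the workload model easier to feed into a performance model.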
Setting Up a Public Use Local Area Network.
ERIC Educational Resources Information Center
Flower, Eric; Thulstrup, Lisa
1988-01-01
Describes a public use microcomputer cluster at the University of Maine, Orono. Various network topologies, hardware and software options, installation problems, system management, and performance are discussed. (MES)
REEF: Retainable Evaluator Execution Framework
Weimer, Markus; Chen, Yingda; Chun, Byung-Gon; Condie, Tyson; Curino, Carlo; Douglas, Chris; Lee, Yunseong; Majestro, Tony; Malkhi, Dahlia; Matusevych, Sergiy; Myers, Brandon; Narayanamurthy, Shravan; Ramakrishnan, Raghu; Rao, Sriram; Sears, Russell; Sezgin, Beysim; Wang, Julia
2015-01-01
Resource Managers like Apache YARN have emerged as a critical layer in the cloud computing system stack, but the developer abstractions for leasing cluster resources and instantiating application logic are very low-level. This flexibility comes at a high cost in terms of developer effort, as each application must repeatedly tackle the same challenges (e.g., fault-tolerance, task scheduling and coordination) and re-implement common mechanisms (e.g., caching, bulk-data transfers). This paper presents REEF, a development framework that provides a control-plane for scheduling and coordinating task-level (data-plane) work on cluster resources obtained from a Resource Manager. REEF provides mechanisms that facilitate resource re-use for data caching, and state management abstractions that greatly ease the development of elastic data processing workflows on cloud platforms that support a Resource Manager service. REEF is being used to develop several commercial offerings such as the Azure Stream Analytics service. Furthermore, we demonstrate REEF development of a distributed shell application, a machine learning algorithm, and a port of the CORFU [4] system. REEF is also currently an Apache Incubator project that has attracted contributors from several institutions. PMID:26819493
Commanding and Controlling Satellite Clusters (IEEE Intelligent Systems, November/December 2000)
2000-01-01
ObjectAgent runs on the OSE real-time operating system, a message-passing OS well suited for distributed systems. The architecture spans ground and flight processors: ObjectAgent and SCL (Space Command Language) run on the RTOS, with a TS-21 relational database management system on the ground. The author, an engineer with Princeton Satellite Systems, is working with others to develop ObjectAgent software to run on the OSE real-time operating system.
Stokes, Jonathan; Kristensen, Søren Rud; Checkland, Kath; Cheraghi-Sohi, Sudeh; Bower, Peter
2017-08-03
Health systems must transition from catering primarily to acute conditions, to meet the increasing burden of chronic disease and multimorbidity. Case management is a popular method of integrating care, seeking to accomplish this goal. However, the intervention has shown limited effectiveness. We explore whether the effects of case management vary in patients with different types of multimorbidity. We extended a previously published quasi-experiment (difference-in-differences analysis) with 2049 propensity matched case management intervention patients, adding an additional interaction term to determine subgroup effects (difference-in-difference-in-differences) by different conceptualisations of multimorbidity: 1) Mental-physical comorbidity versus others; 2) 3+ chronic conditions versus <3; 3) Discordant versus concordant conditions; 4) Cardiovascular/metabolic cluster conditions only versus others; 5) Mental health-associated cluster conditions only versus others; 6) Musculoskeletal disorder cluster conditions only versus others 7) Charlson index >5 versus others. Outcome measures included a variety of secondary care utilisation and cost measures. The majority of conceptualisations suggested little to no difference in effect between subgroups. Where results were significant, the vast majority of effect sizes identified in either direction were very small. The trend across the majority of the results appeared to show very slight increases of admissions with treatment for the most complex patients (highest risk). The exceptions to this, patients with a Charlson index >5 may benefit slightly more from case management with decreased ACSC admissions (effect size (ES): −0.06) and inpatient re-admissions (30 days, ES: −0.05), and patients with only cardiovascular/metabolic cluster conditions may benefit slightly more with decreased inpatient non-elective admissions (ES: −0.12). 
Only the three significant estimates for the musculoskeletal disorder cluster met the minimum requirement for at least a ‘small’ effect, and two of these estimates were very large. This cluster represented only 0.5% of the total patients analysed, however, so it is highly vulnerable to outlier effects, and we are very cautious about interpreting these as ‘real’ effects. Our results indicate no appropriate multimorbidity subgroup at which to target the case management intervention in terms of secondary care utilisation/cost outcomes. The most complex, highest-risk patients may legitimately require hospitalisation, and the intensified management may better identify these unmet needs. End-of-life patients (e.g. Charlson index >5) and those with only conditions particularly amenable to primary care management (e.g. cardiovascular/metabolic cluster conditions) may benefit very slightly more than others.
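The subgroup logic above (difference-in-difference-in-differences) can be illustrated with a tiny numeric sketch. The group means below are invented for illustration; the study itself estimates these effects with propensity-matched regression, not raw means.

```python
# Sketch of the DDD logic: the subgroup-specific effect is the
# difference-in-differences (DiD) for one multimorbidity subgroup
# minus the DiD for its complement.
def did(pre_treat, post_treat, pre_ctrl, post_ctrl):
    # classic DiD: change in the treated group minus change in the controls
    return (post_treat - pre_treat) - (post_ctrl - pre_ctrl)

# hypothetical mean admissions per patient-year
did_subgroup   = did(2.0, 2.1, 2.0, 2.3)  # e.g. mental-physical comorbidity
did_complement = did(1.5, 1.6, 1.5, 1.6)  # everyone else
ddd = did_subgroup - did_complement       # subgroup-specific effect
```

Here the subgroup shows a relative reduction of 0.2 admissions over its complement; a trial would attach standard errors to this via the interaction term.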
Nonlinear management of the angular momentum of soliton clusters: Theory and experiment
NASA Astrophysics Data System (ADS)
Fratalocchi, Andrea; Piccardi, Armando; Peccianti, Marco; Assanto, Gaetano
2007-06-01
We demonstrate, both theoretically and experimentally, how to acquire nonlinear control over the angular momentum of a cluster of solitary waves. Our results, stemming from a universal theoretical model, show that the angular momentum can be adjusted by acting on the global energy input in the system. The phenomenon is experimentally ascertained in nematic liquid crystals by observing a power-dependent rotation of a two-soliton ensemble.
Physical properties of star clusters in the outer LMC as observed by the DES
Pieres, A.; Santiago, B.; Balbinot, E.; ...
2016-05-26
The Large Magellanic Cloud (LMC) harbors a rich and diverse system of star clusters, whose ages, chemical abundances, and positions provide information about the LMC history of star formation. We use Science Verification imaging data from the Dark Energy Survey to increase the census of known star clusters in the outer LMC and to derive physical parameters for a large sample of such objects using a spatially and photometrically homogeneous data set. Our sample contains 255 visually identified cluster candidates, of which 109 were not listed in any previous catalog. We quantify the crowding effect for the stellar sample produced by the DES Data Management pipeline and conclude that the stellar completeness is < 10% inside typical LMC cluster cores. We therefore develop a pipeline to sample and measure stellar magnitudes and positions around the cluster candidates using DAOPHOT. We also implement a maximum-likelihood method to fit individual density profiles and colour-magnitude diagrams. For 117 (from a total of 255) of the cluster candidates (28 uncatalogued clusters), we obtain reliable ages, metallicities, distance moduli and structural parameters, confirming their nature as physical systems. The distribution of cluster metallicities shows a radial dependence, with no clusters more metal-rich than [Fe/H] ~ -0.7 beyond 8 kpc from the LMC center. Furthermore, the age distribution has two peaks at ≃ 1.2 Gyr and ≃ 2.7 Gyr.
Physical properties of star clusters in the outer LMC as observed by the DES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pieres, A.; Santiago, B.; Balbinot, E.
Critical Infrastructure Protection and Resilience Literature Survey: Modeling and Simulation
2014-11-01
Below the yellow set is a purple cluster bringing together detection, anomaly, intrusion, sensors, monitoring and alerting (early warning) ... hazards and threats to security. For the water sector, the tools ADWICE and PSS®SINCAL are cited, with ADWICE used for real-time anomaly detection in water management systems ... (Raciti M, Cucurull J, Nadjm-Tehrani S. Anomaly detection in water management systems. Cybernetics and Information Technologies. 2008;8(4):57-68.)
A new Self-Adaptive disPatching System for local clusters
NASA Astrophysics Data System (ADS)
Kan, Bowen; Shi, Jingyan; Lei, Xiaofeng
2015-12-01
The scheduler is one of the most important components of a high performance cluster. This paper introduces a self-adaptive dispatching system (SAPS) based on Torque [1] and Maui [2]. It improves cluster resource utilization and the overall speed of tasks, and provides extra functions for administrators and users. First, to allow the scheduling of GPUs, a GPU scheduling module based on Torque and Maui has been developed. Second, SAPS analyses the relationship between the number of queueing jobs and the idle job slots, and then tunes the priority of users' jobs dynamically, so that more jobs run and fewer job slots are idle. Third, integrating with the monitoring function, SAPS excludes nodes in error states as detected by the monitor, and returns them to the cluster after the nodes have recovered. In addition, SAPS provides a series of function modules including a batch monitoring management module, a comprehensive scheduling accounting module and a real-time alarm module. The aim of SAPS is to enhance the reliability and stability of Torque and Maui. Currently, SAPS has been running stably on a local cluster at IHEP (Institute of High Energy Physics, Chinese Academy of Sciences), with more than 12,000 CPU cores and 50,000 jobs running each day. Monitoring has shown that resource utilization has been improved by more than 26%, and the management work for both administrators and users has been reduced greatly.
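The second point, dynamic priority tuning from queue length versus idle slots, can be sketched as follows. The scaling rule, function name, and numbers are illustrative assumptions, not SAPS's actual policy.

```python
# Sketch: when queued demand exceeds idle capacity, boost the priority of
# users with many queued jobs so idle slots are filled faster.
def tune_priorities(users, idle_slots):
    """users: {name: {"queued": int, "priority": int}} -> {name: new_priority}"""
    total_queued = sum(u["queued"] for u in users.values())
    adjusted = {}
    for name, u in users.items():
        if total_queued > idle_slots and u["queued"] > 0:
            # boost proportional to this user's share of the queued demand
            boost = u["queued"] * idle_slots // total_queued
            adjusted[name] = u["priority"] + boost
        else:
            adjusted[name] = u["priority"]
    return adjusted

users = {"alice": {"queued": 80, "priority": 100},
         "bob":   {"queued": 20, "priority": 100}}
adjusted = tune_priorities(users, idle_slots=50)
```

A real deployment would feed these values to the scheduler's priority weights (Maui exposes such knobs) rather than computing them inline.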
Wang, Shen-Tsu; Li, Meng-Hua
2014-01-01
When an enterprise has thousands of varieties in its inventory, a single management method is unlikely to be feasible. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the problem of clustering inventory items. Without requiring prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives simultaneously and obtains better convergence results and inventory decisions.
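The multiobjective equation the DPSO optimises can be sketched as a weighted score. The weights, the [0, 1] normalisation, and the function name are assumptions for illustration; the paper combines the same four objectives but its exact formulation is not reproduced here.

```python
# Sketch: aggregate the four inventory objectives into one fitness value.
# Cost and backorder rate are to be minimised (so we reward 1 - value);
# demand relevance and turnover rate are to be maximised.
def inventory_objective(metrics, weights=(0.4, 0.2, 0.2, 0.2)):
    """metrics: normalised [0, 1] values for one candidate clustering."""
    w_cost, w_back, w_rel, w_turn = weights
    return (w_cost * (1 - metrics["cost"])
            + w_back * (1 - metrics["backorder"])
            + w_rel * metrics["relevance"]
            + w_turn * metrics["turnover"])

good = inventory_objective({"cost": 0.2, "backorder": 0.1,
                            "relevance": 0.8, "turnover": 0.7})
bad = inventory_objective({"cost": 0.9, "backorder": 0.8,
                           "relevance": 0.2, "turnover": 0.3})
```

A DPSO run would evaluate this fitness for each particle's candidate clustering and move particles toward the best-scoring configurations.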
Scheduling for energy and reliability management on multiprocessor real-time systems
NASA Astrophysics Data System (ADS)
Qi, Xuan
Scheduling algorithms for multiprocessor real-time systems have been studied for years, with many well-recognized algorithms proposed. However, it is still an evolving research area and many problems remain open due to their intrinsic complexities. With the emergence of multicore processors, it is necessary to re-investigate the scheduling problems and design/develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. As commonly deployed energy-saving techniques (e.g. dynamic voltage and frequency scaling (DVFS)) can significantly affect system reliability, we study schedulers that have intelligent mechanisms to recuperate system reliability and satisfy quality assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms on reduction of scheduling overhead, energy saving, and reliability improvement. The simulation results show that the proposed reliability-aware power management schemes can preserve system reliability while still achieving substantial energy savings.
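The utilization-bound analysis mentioned above can be illustrated with the classic feasibility condition for an optimal scheduler on a cluster of m processors: a periodic task set is schedulable if no task exceeds utilization 1 and the total utilization does not exceed m. This sketch assumes implicit-deadline periodic tasks; it is the textbook bound, not this dissertation's specific derivation.

```python
# Sketch: feasibility check for a cluster of m processors under an
# optimal global scheduler (implicit-deadline periodic tasks).
def fits_cluster(tasks, m):
    """tasks: list of (wcet, period) pairs; m: processors in the cluster."""
    utils = [c / t for c, t in tasks]
    # every task must fit on one processor, and total demand within capacity
    return all(u <= 1 for u in utils) and sum(utils) <= m

tasks = [(2, 5), (3, 10), (4, 8), (6, 12)]  # total utilization U = 1.7
```

With U = 1.7, the set fits a 2-processor cluster but not a single processor, which is the kind of partitioning decision a cluster-optimal scheduler makes.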
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-17
...; Fox Canyon Cluster Allotment Management Plan Project EIS AGENCY: Forest Service, USDA. ACTION: Notice... preparing an environmental impact statement (EIS) to analyze the effects of changing grazing management in four allotments on the Paulina Ranger District. The Fox Canyon Cluster project area is located...
Automated rice leaf disease detection using color image analysis
NASA Astrophysics Data System (ADS)
Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.
2011-06-01
In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, usually done as a manual eyeball exercise, is important for developing good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from the rice leaf image under test using histogram intersection between the test and healthy rice leaf images. The outlier is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Finally, these clusters are further analyzed to determine the suspected diseases of the rice leaf.
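The histogram-intersection step can be sketched as follows. The five-bin colour histograms below are invented for illustration; the idea is that bins where the test leaf exceeds the healthy reference mark the outlier (possibly diseased) colours.

```python
# Sketch: normalised histogram intersection between a test leaf and a
# healthy reference; bins over-represented in the test leaf flag outliers.
def histogram_intersection(h1, h2):
    # overlap of two histograms with identical binning, normalised to [0, 1]
    inter = sum(min(a, b) for a, b in zip(h1, h2))
    return inter / max(sum(h1), 1)

healthy = [0, 10, 80, 10, 0]   # colour histogram of a healthy leaf (greens)
test    = [15, 10, 60, 10, 5]  # brown/yellow bins appear in the test leaf
overlap = histogram_intersection(test, healthy)
outlier_bins = [i for i, (a, b) in enumerate(zip(test, healthy)) if a > b]
```

A low overlap (here 0.8 rather than 1.0) plus the flagged bins would hand the corresponding pixel regions to the K-means stage for grouping.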
Image Information Mining Utilizing Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai
2002-01-01
The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.
Efficient Deployment of Key Nodes for Optimal Coverage of Industrial Mobile Wireless Networks
Li, Xiaomin; Li, Di; Dong, Zhijie; Hu, Yage; Liu, Chengliang
2018-01-01
In recent years, industrial wireless networks (IWNs) have been transformed by the introduction of mobile nodes, and they now offer increased extensibility, mobility, and flexibility. Nevertheless, mobile nodes pose efficiency and reliability challenges. Efficient node deployment and management of channel interference directly affect network system performance, particularly for key node placement in clustered wireless networks. This study analyzes the corresponding system model, considering both the industrial properties of wireless networks and their mobility. Then, static and mobile node coverage problems are unified and simplified to target coverage problems. We propose a novel strategy for the deployment of cluster heads in grouped industrial mobile wireless networks (IMWNs) based on the improved maximal clique model and the iterative computation of new candidate cluster head positions. The maximal cliques are obtained via a double-layer Tabu search. Each cluster head updates its new position via an improved virtual force while moving with full coverage to find the minimal inter-cluster interference. Finally, we develop a simulation environment. The simulation results, based on a performance comparison, show the efficacy of the proposed strategies and their superiority over current approaches. PMID:29439439
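The maximal-clique component can be illustrated with a much simpler greedy search than the paper's double-layer Tabu search. This sketch assumes an adjacency map of nodes that can share a cluster; it stands in for the real algorithm only to show what a maximal clique is in this setting.

```python
# Sketch: greedily grow a clique from each vertex of the
# "can-cluster-together" graph and keep the largest one found.
def maximal_clique(adj):
    """adj: {node: set of neighbours}; returns the largest greedy clique."""
    best = []
    for start in adj:
        clique = [start]
        for v in adj:
            # add v only if it is adjacent to every current clique member
            if v not in clique and all(v in adj[u] for u in clique):
                clique.append(v)
        if len(clique) > len(best):
            best = clique
    return sorted(best)

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
```

Greedy growth is not guaranteed to find the maximum clique in general graphs, which is precisely why the paper resorts to a Tabu search.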
Clustering-based urbanisation to improve enterprise information systems agility
NASA Astrophysics Data System (ADS)
Imache, Rabah; Izza, Said; Ahmed-Nacer, Mohamed
2015-11-01
Enterprises face daily pressure to demonstrate their ability to adapt quickly to the unpredictable changes of their environment in technological, social, legislative, competitive and global terms. Thus, to secure its place in this hard context, an enterprise must remain agile and must ensure its sustainability through continuous improvement of its information system (IS). The agility of enterprise information systems (EISs) can therefore be considered today a primary objective of any enterprise. One way of achieving this objective is the urbanisation of the EIS in a context of continuous improvement, making it a real asset serving enterprise strategy. This paper investigates the benefits of EIS urbanisation based on clustering techniques as a driver for producing and/or improving agility, to help managers and IT departments continuously improve the performance of the enterprise and make appropriate decisions within the scope of the enterprise's objectives and strategy. The approach is applied to the urbanisation of a tour operator's EIS.
De Brún, Aoife; McAuliffe, Eilish
2018-03-13
Health systems research recognizes the complexity of healthcare, and the interacting and interdependent nature of components of a health system. To better understand such systems, innovative methods are required to depict and analyze their structures. This paper describes social network analysis as a methodology to depict, diagnose, and evaluate health systems and networks therein. Social network analysis is a set of techniques to map, measure, and analyze social relationships between people, teams, and organizations. Through use of a case study exploring support relationships among senior managers in a newly established hospital group, this paper illustrates some of the commonly used network- and node-level metrics in social network analysis, and demonstrates the value of these maps and metrics to understand systems. Network analysis offers a valuable approach to health systems and services researchers as it offers a means to depict activity relevant to network questions of interest, to identify opinion leaders, influencers, clusters in the network, and those individuals serving as bridgers across clusters. The strengths and limitations inherent in the method are discussed, and the applications of social network analysis in health services research are explored.
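The node-level metrics described above can be sketched with a small computation. The support ties among managers below are invented for illustration; degree centrality is one of the standard metrics the case study reports for identifying opinion leaders.

```python
# Sketch: degree centrality over a support network of senior managers;
# the highest-centrality node is a candidate opinion leader.
def degree_centrality(edges):
    """edges: list of (a, b) undirected ties -> {node: centrality in [0, 1]}"""
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    # normalise by the maximum possible number of ties (n - 1)
    return {n: d / (len(nodes) - 1) for n, d in deg.items()}

# hypothetical support relationships in a hospital group
edges = [("ceo", "coo"), ("ceo", "cfo"), ("coo", "ops1"),
         ("coo", "ops2"), ("ops1", "ops2")]
c = degree_centrality(edges)
leader = max(c, key=c.get)
```

Libraries such as NetworkX provide this and the clustering/bridging metrics (e.g. betweenness) mentioned in the abstract out of the box.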
High Speed White Dwarf Asteroseismology with the Herty Hall Cluster
NASA Astrophysics Data System (ADS)
Gray, Aaron; Kim, A.
2012-01-01
Asteroseismology is the process of using observed oscillations of stars to infer their interior structure. In high-speed asteroseismology, we accomplish this by quickly computing hundreds of thousands of models to match the observed period spectra. Each model takes five to ten seconds to run on a single processor. Therefore, we use a cluster of sixteen Dell workstations with dual-core processors. The computers use the Ubuntu operating system and Apache Hadoop software to manage workloads.
MASPECTRAS: a platform for management and analysis of proteomics LC-MS/MS data
Hartler, Jürgen; Thallinger, Gerhard G; Stocker, Gernot; Sturn, Alexander; Burkard, Thomas R; Körner, Erik; Rader, Robert; Schmidt, Andreas; Mechtler, Karl; Trajanoski, Zlatko
2007-01-01
Background The advancements of proteomics technologies have led to a rapid increase in the number, size and rate at which datasets are generated. Managing and extracting valuable information from such datasets requires the use of data management platforms and computational approaches. Results We have developed the MAss SPECTRometry Analysis System (MASPECTRAS), a platform for management and analysis of proteomics LC-MS/MS data. MASPECTRAS is based on the Proteome Experimental Data Repository (PEDRo) relational database schema and follows the guidelines of the Proteomics Standards Initiative (PSI). Analysis modules include: 1) import and parsing of the results from the search engines SEQUEST, Mascot, Spectrum Mill, X! Tandem, and OMSSA; 2) peptide validation; 3) clustering of proteins based on Markov Clustering and multiple alignments; and 4) quantification using the Automated Statistical Analysis of Protein Abundance Ratios algorithm (ASAPRatio). The system provides customizable data retrieval and visualization tools, as well as export to the PRoteomics IDEntifications public repository (PRIDE). MASPECTRAS is freely available. Conclusion Given the unique features and the flexibility due to the use of standard software technology, our platform represents a significant advance and could be of great interest to the proteomics community. PMID:17567892
NASA Astrophysics Data System (ADS)
Kireev, V.; Silenko, A.; Guseva, A.
2017-01-01
This article describes an approach to determining the level of competence formation among university graduates oriented toward work at the state corporation "Rosatom", within a knowledge management system. Using cluster analysis, classes of graduates were identified that focus on knowledge transfer, on analysis and the search for new knowledge, and on the creative transformation of knowledge. In addition, a class of innovators was identified, in which the necessary cognitive competences were fully formed.
An adolescent suicide cluster and the possible role of electronic communication technology.
Robertson, Lindsay; Skegg, Keren; Poore, Marion; Williams, Sheila; Taylor, Barry
2012-01-01
Since the development of Centers for Disease Control's (CDC) guidelines for the management of suicide clusters, the use of electronic communication technologies has increased dramatically. To describe an adolescent suicide cluster that drew our attention to the possible role of online social networking and SMS text messaging as sources of contagion after a suicide and obstacles to recognition of a potential cluster. A public health approach involving a multidisciplinary community response was used to investigate a group of suicides of New Zealand adolescents thought to be a cluster. Difficulties in identifying and managing contagion posed by use of electronic communications were assessed. The probability of observing a time-space cluster such as this by chance alone was p = .009. The cases did not belong to a single school, rather several were linked by social networking sites, including sites created in memory of earlier suicide cases, as well as mobile telephones. These facilitated the rapid spread of information and rumor about the deaths throughout the community. They made the recognition and management of a possible cluster more difficult. Relevant community agencies should proactively develop a strategy to enable the identification and management of suicide contagion. Guidelines to assist communities in managing clusters should be updated to reflect the widespread use of communication technologies in modern society.
Study on Adaptive Parameter Determination of Cluster Analysis in Urban Management Cases
NASA Astrophysics Data System (ADS)
Fu, J. Y.; Jing, C. F.; Du, M. Y.; Fu, Y. L.; Dai, P. P.
2017-09-01
The fine management of cities is an important way to realize the smart city. Data mining using spatial clustering analysis of urban management cases can support the evaluation of urban public facility deployment and policy decisions, and provides technical support for the fine management of the city. Aiming at the problem that the density-based DBSCAN algorithm cannot determine its parameters adaptively, this paper proposes an optimization method for adaptive parameter determination based on spatial analysis. First, Ripley's K function is analysed for the data set to determine the global parameter Eps adaptively, setting the maximum aggregation scale as the range of data clustering. Then, each point object's most frequent neighbour count within the range of Eps is calculated using a K-D tree and set as the clustering density, to determine the global parameter MinPts adaptively. The R language was used to implement this process and accomplish precise clustering of typical urban management cases. Experimental results based on typical urban management cases in the XiCheng district of Beijing show that the new DBSCAN clustering algorithm presented here takes full account of the data's spatial and statistical characteristics, exhibits an obvious clustering feature, and has better applicability and high quality. The results of the study are helpful not only for formulating urban management policies and allocating urban management supervisors in XiCheng District of Beijing, but also for other cities and related fields.
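The adaptive-parameter idea can be sketched in miniature. As assumptions for illustration, a k-distance heuristic stands in for the Ripley's K analysis when choosing Eps, and a brute-force neighbour count stands in for the K-D tree; only the overall scheme (derive Eps from the point pattern's aggregation scale, then derive MinPts as the typical neighbour count within Eps) follows the paper.

```python
# Sketch: estimate DBSCAN's Eps and MinPts from the data itself.
from collections import Counter

def adaptive_dbscan_params(points, k=4):
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    # Eps: median distance to the k-th nearest neighbour (aggregation scale)
    kth = sorted(sorted(dist(p, q) for q in points if q != p)[k - 1]
                 for p in points)
    eps = kth[len(kth) // 2]
    # MinPts: most frequent number of neighbours within Eps
    counts = [sum(1 for q in points if q != p and dist(p, q) <= eps)
              for p in points]
    minpts = Counter(counts).most_common(1)[0][0]
    return eps, minpts

# two tight 2x2 grids of points, far apart
points = [(0, 0), (0, 1), (1, 0), (1, 1),
          (10, 10), (10, 11), (11, 10), (11, 11)]
eps, minpts = adaptive_dbscan_params(points, k=3)
```

The derived pair (Eps ≈ 1.41, MinPts = 3) could then be passed straight to a standard DBSCAN implementation such as scikit-learn's.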
Chinman, Matthew; McCarthy, Sharon; Hannah, Gordon; Byrne, Thomas Hugh; Smelson, David A
2017-03-09
Incorporating evidence-based integrated treatment for dual disorders into typical care settings has been challenging, especially among those serving Veterans who are homeless. This paper presents an evaluation of an effort to incorporate an evidence-based, dual disorder treatment called Maintaining Independence and Sobriety Through Systems Integration, Outreach, and Networking-Veterans Edition (MISSION-Vet) into case management teams serving Veterans who are homeless, using an implementation strategy called Getting To Outcomes (GTO). This Hybrid Type III, cluster-randomized controlled trial assessed the impact of GTO over and above MISSION-Vet Implementation as Usual (IU). Both conditions received standard MISSION-Vet training and manuals. The GTO group received an implementation manual, training, technical assistance, and data feedback. The study occurred in teams at three large VA Medical Centers over 2 years. Within each team, existing sub-teams (case managers and Veterans they serve) were the clusters randomly assigned. The trial assessed MISSION-Vet services delivered and collected via administrative data and implementation barriers and facilitators, via semi-structured interview. No case managers in the IU group initiated MISSION-Vet while 68% in the GTO group did. Seven percent of Veterans with case managers in the GTO group received at least one MISSION-Vet session. Most case managers appreciated the MISSION-Vet materials and felt the GTO planning meetings supported using MISSION-Vet. Case manager interviews also showed that MISSION-Vet could be confusing; there was little involvement from leadership after their initial agreement to participate; the data feedback system had a number of difficulties; and case managers did not have the resources to implement all aspects of MISSION-Vet. 
This project shows that GTO-like support can help launch new practices but that multiple implementation facilitators are needed for successful execution of a complex evidence-based program like MISSION-Vet. ClinicalTrials.gov NCT01430741.
Kussaga, Jamal B; Luning, Pieternel A; Tiisekwa, Bendantunguka P M; Jacxsens, Liesbeth
2014-04-01
This study provides insight for food safety (FS) performance in light of the current performance of core FS management system (FSMS) activities and context riskiness of these systems to identify the opportunities for improvement of the FSMS. A FSMS diagnostic instrument was applied to assess the performance levels of FSMS activities regarding context riskiness and FS performance in 14 fish processing companies in Tanzania. Two clusters (cluster I and II) with average FSMS (level 2) operating under moderate-risk context (score 2) were identified. Overall, cluster I had better (score 3) FS performance than cluster II (score 2 to 3). However, a majority of the fish companies need further improvement of their FSMS and reduction of context riskiness to assure good FS performance. The FSMS activity levels could be improved through hygienic design of equipment and facilities, strict raw material control, proper follow-up of critical control point analysis, developing specific sanitation procedures and company-specific sampling design and measuring plans, independent validation of preventive measures, and establishing comprehensive documentation and record-keeping systems. The risk level of the context could be reduced through automation of production processes (such as filleting, packaging, and sanitation) to restrict people's interference, recruitment of permanent high-skilled technological staff, and setting requirements on product use (storage and distribution conditions) on customers. However, such intervention measures for improvement could be taken in phases, starting with less expensive ones (such as sanitation procedures) that can be implemented in the short term to more expensive interventions (setting up assurance activities) to be adopted in the long term. These measures are essential for fish processing companies to move toward FSMS that are more effective.
NASA Astrophysics Data System (ADS)
Chao, Woodrew; Ho, Bruce K. T.; Chao, John T.; Sadri, Reza M.; Huang, Lu J.; Taira, Ricky K.
1995-05-01
Our tele-medicine/PACS archive system is based on a three-tier distributed hierarchical architecture, including magnetic disk farms, an optical jukebox, and tape jukebox sub-systems. The hierarchical storage management (HSM) architecture, built around a low-cost, high-performance platform [personal computers (PC) and Microsoft Windows NT], presents a highly scalable and distributed solution ideal for meeting the needs of client/server environments such as tele-medicine, tele-radiology, and PACS. These image-based systems typically require storage capacities mirroring those of film-based technology (multi-terabyte, with 10+ years of storage) and patient data retrieval times at near on-line performance, as demanded by radiologists. With this scalable architecture, storage requirements can be easily configured to meet the needs of the small clinic (multi-gigabyte) or those of a major hospital (multi-terabyte). The patient data retrieval performance requirement was achieved by employing system intelligence to manage migration and caching of archived data. Relevant information from HIS/RIS triggers prefetching of data whenever possible based on simple rules. System intelligence embedded in the migration manager allows the clustering of patient data onto a single tape during data migration from optical to tape media. Clustering of patient data on the same tape eliminates multiple tape loads and the associated seek time during patient data retrieval. Optimal tape performance can then be achieved by utilizing the tape drives' high-performance data-streaming capabilities, thereby reducing the data retrieval delays typically associated with streaming tape devices.
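The tape-clustering idea in the migration manager can be sketched as a simple greedy packing: keep each patient's studies together and open a new tape when capacity would be exceeded, so a later retrieval needs only one tape load per patient. This is our own illustrative simplification of the described behavior; `TAPE_CAPACITY_MB`, the study tuples, and `cluster_onto_tapes` are hypothetical names, not part of the actual system.

```python
from collections import defaultdict

TAPE_CAPACITY_MB = 50_000  # assumed capacity per tape cartridge (illustrative)

def cluster_onto_tapes(studies):
    """studies: list of (patient_id, size_mb). Returns a list of tapes,
    each tape being the list of studies packed onto it."""
    # Group all studies belonging to the same patient.
    by_patient = defaultdict(list)
    for pid, size in studies:
        by_patient[pid].append((pid, size))

    tapes, current, used = [], [], 0
    # Greedy packing: a patient's studies always stay together; open a new
    # tape when the next patient group would overflow the current one.
    for pid in sorted(by_patient):
        group = by_patient[pid]
        group_size = sum(s for _, s in group)
        if used + group_size > TAPE_CAPACITY_MB and current:
            tapes.append(current)
            current, used = [], 0
        current.extend(group)
        used += group_size
    if current:
        tapes.append(current)
    return tapes

tapes = cluster_onto_tapes([("P1", 20_000), ("P2", 25_000),
                            ("P1", 10_000), ("P3", 30_000)])
```

With these toy sizes, both of patient P1's studies land on the same tape, so retrieving P1 loads one cartridge instead of two.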
2000-04-01
be an extension of Utah’s nascent Quarks system, oriented to closely coupled cluster environments. However, the grant did not actually begin until... Intel x86, implemented ten virtual machine monitors and servers, including a virtual memory manager, a checkpointer, a process manager, a file server...Fluke, we developed a novel hierarchical processor scheduling framework called CPU inheritance scheduling [5]. This is a framework for scheduling
Integration of virtualized worker nodes in standard batch systems
NASA Astrophysics Data System (ADS)
Büge, Volker; Hessling, Hermann; Kemp, Yves; Kunze, Marcel; Oberst, Oliver; Quast, Günter; Scheurer, Armin; Synge, Owen
2010-04-01
Current experiments in HEP only use a limited number of operating system flavours. Their software might only be validated on a single OS platform. Resource providers might have other operating systems of choice for the installation of the batch infrastructure. This is especially the case if a cluster is shared with other communities, or with communities that have stricter security requirements. One solution would be to statically divide the cluster into separated sub-clusters. In such a scenario, no opportunistic distribution of the load can be achieved, resulting in a poor overall utilization efficiency. Another approach is to make the batch system aware of virtualization, and to provide each community with its favoured operating system in a virtual machine. Here, the scheduler has full flexibility, resulting in a better overall efficiency of the resources. In our contribution, we present a lightweight concept for the integration of virtual worker nodes into standard batch systems. The virtual machines are started on the worker nodes just before jobs are executed there. No meta-scheduling is introduced. We demonstrate two prototype implementations, one based on the Sun Grid Engine (SGE), the other using Maui/Torque as a batch system. Both solutions support local as well as Grid job submission. The hypervisors currently used are Xen and KVM; a port to other systems is easily conceivable. To better handle different virtual machines on the physical host, the management solution VmImageManager was developed. We present first experiences from running the two prototype implementations. Finally, we show the potential future use of this lightweight concept when integrated into high-level (i.e. Grid) work-flows.
Chisholm, Alison; Price, David B; Pinnock, Hilary; Lee, Tan Tze; Roa, Camilo; Cho, Sang-Heon; David-Wang, Aileen; Wong, Gary; van der Molen, Thys; Ryan, Dermot; Castillo-Carandang, Nina; Yong, Yee Vern
2017-01-05
REALISE Asia, an online questionnaire-based study of Asian asthma patients, identified five patient clusters defined in terms of their control status and attitude towards their asthma (categorised as: 'Well-adjusted and at least partly controlled'; 'In denial about symptoms'; 'Tolerating with poor control'; 'Adrift and poorly controlled'; 'Worried with multiple symptoms'). We developed consensus recommendations for tailoring management of these attitudinal-control clusters. An expert panel undertook a three-round electronic Delphi (e-Delphi): Round 1: panellists received descriptions of the attitudinal-control clusters and provided free text recommendations for their assessment and management. Round 2: panellists prioritised Round 1 recommendations and met (or joined a teleconference) to consolidate the recommendations. Round 3: panellists voted and prioritised the remaining recommendations. Consensus was defined as Round 3 recommendations endorsed by >50% of panellists. Highest priority recommendations were those receiving the highest score. The multidisciplinary panellists (9 clinicians, 1 pharmacist and 1 health social scientist; 7 from Asia) identified consensus recommendations for all clusters. Recommended pharmacological (e.g., step-up/down; self-management; simplified regimen) and non-pharmacological approaches (e.g., trigger management, education, social support; inhaler technique) varied substantially according to each cluster's attitude to asthma and associated psychosocial drivers of behaviour. The attitudinal-control clusters defined by REALISE Asia resonated with the international panel. Consensus was reached on appropriate tailored management approaches for all clusters. Summarised and incorporated into a structured management pathway, these recommendations could facilitate personalised care. Generalisability of these patient clusters should be assessed in other socio-economic, cultural and literacy groups and nationalities in Asia.
Taming Pipelines, Users, and High Performance Computing with Rector
NASA Astrophysics Data System (ADS)
Estes, N. M.; Bowley, K. S.; Paris, K. N.; Silva, V. H.; Robinson, M. S.
2018-04-01
Rector is a high-performance job management system created by the LROC SOC team to enable processing of thousands of observations and ancillary data products, as well as ad hoc user jobs, across a 634-CPU-core processing cluster.
WIS Implementation Study Report. Volume 2. Resumes.
1983-10-01
WIS modernization that major attention be paid to interface definition and design, system integration and test, and configuration management of the...Estimates -- Computer Corporation of America -- 155 Test Processing Systems -- Newburyport Computer Associates, Inc. -- 183 Cluster II Papers -- Standards...enhancements of the SPL/I compiler system, development of test systems for the verification of SDEX/M and the timing and architecture of the AN/U YK-20 and
Time series clustering analysis of health-promoting behavior
NASA Astrophysics Data System (ADS)
Yang, Chi-Ta; Hung, Yu-Shiang; Deng, Guang-Feng
2013-10-01
Health promotion must be emphasized to achieve the World Health Organization goal of health for all. Since the global population is aging rapidly, the ComCare elder health-promoting service was developed by the Taiwan Institute for Information Industry in 2011. Based on the Pender health promotion model, the ComCare service offers five categories of health-promoting functions to address the everyday needs of seniors: nutrition management, social support, exercise management, health responsibility, and stress management. To assess the overall ComCare service and to improve understanding of the health-promoting behavior of elders, this study analyzed health-promoting behavioral data automatically collected by the ComCare monitoring system. In the 30,638 session records collected for 249 elders from January 2012 to March 2013, behavior patterns were identified by a fuzzy c-means time-series clustering algorithm combined with autocorrelation-based representation schemes. The analysis showed that the time-series data for elder health-promoting behavior can be classified into four different clusters. Each type reveals different health-promoting needs, frequencies, function numbers and behaviors. The results of the data analysis can assist policymakers, health-care providers, and experts in medicine, public health, nursing and psychology, and have been provided to the Taiwan National Health Insurance Administration to assess elder health-promoting behavior.
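The clustering pipeline described above, an autocorrelation-based representation followed by fuzzy c-means, can be sketched roughly as follows. This is a minimal reconstruction with toy series; the function names, parameter values (number of lags, number of clusters, fuzzifier m) and the data are our assumptions, not taken from the study.

```python
import numpy as np

def autocorr_features(series, n_lags=3):
    """Represent a time series by its first few autocorrelation coefficients."""
    x = np.asarray(series, float) - np.mean(series)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, n_lags + 1)])

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: alternate weighted-centroid and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # random initial memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # fuzzy-weighted centroids
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        # u_ik proportional to d_ik^(-2/(m-1)), normalized over clusters
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)),
                                               axis=1, keepdims=True))
    return U, centers

# Toy usage patterns: two alternating series vs. two trending series.
series = [[1, 0, 1, 0, 1, 0, 1, 0],
          [0, 1, 0, 1, 0, 1, 0, 1],
          [1, 2, 3, 4, 5, 6, 7, 8],
          [2, 3, 4, 5, 6, 7, 8, 9]]
X = np.array([autocorr_features(s) for s in series])
U, _ = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)   # hard labels from the soft memberships
```

On this toy data, the alternating series share strongly negative lag-1 autocorrelation and separate cleanly from the trending series.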
Decentralized Formation Flying Control in a Multiple-Team Hierarchy
NASA Technical Reports Server (NTRS)
Mueller, Joseph; Thomas, Stephanie J.
2005-01-01
This paper presents the prototype of a system that addresses these objectives: a decentralized guidance and control system that is distributed across spacecraft using a multiple-team framework. The objective is to divide large clusters into teams of manageable size, so that the communication and computational demands driven by N decentralized units are related to the number of satellites in a team rather than the entire cluster. The system is designed to provide a high level of autonomy, to support clusters with large numbers of satellites, to enable the number of spacecraft in the cluster to change post-launch, and to provide for on-orbit software modification. The distributed guidance and control system will be implemented in an object-oriented style using MANTA (Messaging Architecture for Networking and Threaded Applications). In this architecture, tasks may be remotely added, removed or replaced post-launch to increase mission flexibility and robustness. This built-in adaptability will allow software modifications to be made on-orbit in a robust manner. The prototype system, which is implemented in MATLAB, emulates the object-oriented and message-passing features of the MANTA software. In this paper, the multiple-team organization of the cluster is described, and the modular software architecture is presented. The relative dynamics in eccentric reference orbits are reviewed, and families of periodic relative trajectories are identified and expressed as sets of static geometric parameters. The guidance law design is presented, and an example reconfiguration scenario is used to illustrate the distributed process of assigning geometric goals to the cluster. Next, a decentralized maneuver planning approach is presented that utilizes linear-programming methods to enact reconfiguration and coarse formation-keeping maneuvers. Finally, a method for performing online collision avoidance is discussed, and an example is provided to gauge its performance.
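The linear-programming flavor of such maneuver planning can be illustrated with a deliberately simplified 1-D double-integrator model: split each impulse into positive and negative parts so that total delta-v becomes a linear objective, and constrain the final state. This is only a sketch of the general technique under our own assumptions (horizon, dynamics, targets), not the authors' formulation.

```python
import numpy as np
from scipy.optimize import linprog

N = 4                       # planning horizon in steps (dt = 1), assumed
p0, v0 = 0.0, 0.0           # initial relative position / velocity
pT, vT = 1.0, 0.0           # reconfiguration target state

# Split u_k = up_k - um_k so |u_k| = up_k + um_k is linear ("fuel").
c = np.ones(2 * N)                              # minimize total delta-v
row_v = np.concatenate([np.ones(N), -np.ones(N)])
w = np.array([N - k for k in range(N)], float)  # position influence of u_k
row_p = np.concatenate([w, -w])
A_eq = np.vstack([row_v, row_p])
b_eq = np.array([vT - v0, pT - p0 - N * v0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
u = res.x[:N] - res.x[N:]                       # recovered impulse sequence

# Verify by forward simulation of the impulsive dynamics.
p, v = p0, v0
for k in range(N):
    v += u[k]
    p += v
```

The same split-variable trick extends to multi-satellite, multi-axis reconfiguration, which is what makes linear programming attractive for decentralized planners.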
Mansour, Ahmad M; Hamade, Haya; Ghaddar, Ayman; Mokadem, Ahmad Samih; El Hajj Ali, Mohamad; Awwad, Shady
2012-01-01
To present the visual outcomes and ocular sequelae of victims of cluster bombs. This retrospective, multicenter case series of ocular injury due to cluster bombs was conducted for 3 years after the war in South Lebanon (July 2006). Data were gathered from the reports to the Information Management System for Mine Action. There were 308 victims of cluster bombs: 36 individuals were killed, of whom 2 received ocular lacerations, and 272 individuals were injured, of whom 18 received ocular injury. These 18 surviving individuals were assessed by the authors. Ocular injury occurred in 6.5% (20/308) of cluster bomb victims. Trauma to multiple organs occurred in 12 of 18 cases (67%) with ocular injury. Ocular findings included corneal or scleral lacerations (16 eyes), corneal foreign bodies (9 eyes), corneal decompensation (2 eyes), ruptured cataract (6 eyes), and intravitreal foreign bodies (10 eyes). The corneas of one patient had extreme attenuation of the endothelium. Ocular injury occurred in 6.5% of cluster bomb victims, and 67% of the patients with ocular injury sustained trauma to multiple organs. Visual morbidity in civilians is an additional reason for a global ban on the use of cluster bombs.
NASA Astrophysics Data System (ADS)
Li, Xiwang
Buildings consume about 41.1% of primary energy and 74% of the electricity in the U.S. Moreover, it is estimated by the National Energy Technology Laboratory that more than 1/4 of the 713 GW of U.S. electricity demand in 2010 could be dispatchable if only buildings could respond to that dispatch through advanced building energy control and operation strategies and smart grid infrastructure. In this study, it is envisioned that neighboring buildings will have the tendency to form a cluster, an open cyber-physical system to exploit the economic opportunities provided by a smart grid, distributed power generation, and storage devices. Through optimized demand management, these building clusters will then reduce overall primary energy consumption and peak-time electricity consumption, and be more resilient to power disruptions. Therefore, this project seeks to develop a Net-zero building cluster simulation testbed and high-fidelity energy forecasting models for adaptive and real-time control and decision-making strategy development that can be used in a Net-zero building cluster. The following research activities are summarized in this thesis: 1) Development of a building cluster emulator for building cluster control and operation strategy assessment. 2) Development of a novel building energy forecasting methodology using active system identification and data fusion techniques; this methodology includes a systematic approach for building energy system characteristic evaluation, system excitation and model adaptation, and is compared with other literature-reported building energy forecasting methods. 3) Development of high-fidelity on-line building cluster energy forecasting models, which include energy forecasting models for buildings, PV panels, batteries and ice-tank thermal storage systems. 4) A small-scale real-building validation study to verify the performance of the developed building energy forecasting methodology.
The outcomes of this thesis can be used for building cluster energy forecasting model development and model based control and operation optimization. The thesis concludes with a summary of the key outcomes of this research, as well as a list of recommendations for future work.
NASA Astrophysics Data System (ADS)
Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua
2017-05-01
To strengthen the correlation analysis of threat factors in risk assessment, a dynamic safety-risk assessment method based on particle filtering is proposed, with threat analysis at its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of those indicators, and determines information system risk levels by combining them with state estimation theory. To improve the computational efficiency of the particle filter, the k-means clustering algorithm is introduced: by clustering all particles and using each cluster centroid as a representative in subsequent computation, the amount of calculation is reduced. Empirical results indicate that the method reasonably captures the mutual dependence and influence among risk elements. Under conditions of limited information, it provides a scientific basis for formulating a risk management control strategy.
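The k-means particle-reduction idea can be sketched as follows: after clustering, each centroid carries the summed weight of its members and stands in for the whole cluster, shrinking the set the update step must process. This is our illustrative reading of the speed-up; the toy particles, weights and function names are assumptions.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and centroid updates."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

def reduce_particles(particles, weights, k):
    """Replace the particle set by k centroids, each carrying its cluster's
    total weight (renormalized)."""
    labels, centers = kmeans(particles, k)
    w = np.array([weights[labels == j].sum() for j in range(k)])
    return centers, w / w.sum()

# Toy 1-D particle cloud: two tight groups near 0 and near 10.
particles = np.array([[0.0], [0.1], [0.2], [10.0], [10.1], [10.2]])
centers, w = reduce_particles(particles, np.full(6, 1 / 6), 2)
```

Six particles collapse to two weighted representatives, so the (expensive) measurement-update step runs on 2 points instead of 6 while preserving the total probability mass per mode.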
NASA Astrophysics Data System (ADS)
Erberich, Stephan G.; Hoppe, Martin; Jansen, Christian; Schmidt, Thomas; Thron, Armin; Oberschelp, Walter
2001-08-01
In the last few years, more and more university hospitals as well as private hospitals have changed to digital information systems for patient records, diagnostic files and digital images. Not only does patient management become easier; it is also remarkable how much clinical research can profit from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially image databases. Since images are available at one's fingertips, difficulties arise when image data need to be processed, e.g. segmented, classified or co-registered, which usually demands a lot of computational power. Today's clinical environment supports PACS very well, but real image processing is still underdeveloped. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components, and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique, we present our clinical experience with the crucial but cost-intensive motion correction of routine clinical and research functional MRI (fMRI) data, as processed in our lab on a daily basis.
Interactive Query Processing in Big Data Systems: A Cross Industry Study of MapReduce Workloads
2012-04-02
invite cluster operators and the broader data management community to share additional knowledge about their MapReduce workloads. 9. ACKNOWLEDGMENTS...against real-life production MapReduce workloads. Knowledge of such workloads is currently limited to a handful of technology companies [19, 8, 48, 41...database management insights would benefit from checking workload assumptions against empirical measurements. The broad spectrum of workloads analyzed allows
Caeiro, Sandra; Goovaerts, Pierre; Painho, Marco; Costa, M Helena
2003-09-15
The Sado Estuary is a coastal zone located in the south of Portugal where conflicts between conservation and development exist because of its location near industrialized urban zones and its designation as a natural reserve. The aim of this paper is to evaluate a set of multivariate geostatistical approaches to delineate spatially contiguous regions of sediment structure for the Sado Estuary. These areas will be the supporting infrastructure of an environmental management system for this estuary. The boundaries of each homogeneous area were derived from three sediment characterization attributes through three different approaches: (1) cluster analysis of a dissimilarity matrix that is a function of geographical separation, followed by indicator kriging of the cluster data, (2) discriminant analysis of kriged values of the three sediment attributes, and (3) a combination of methods 1 and 2. Final maximum likelihood classification was integrated into a geographical information system. All methods generated fairly spatially contiguous management areas that reproduce well the environment of the estuary. Map comparison techniques based on kappa statistics showed that the resultant three maps are similar, supporting the choice of any of the methods as appropriate for management of the Sado Estuary. However, the results of method 1 seem to be in better agreement with estuary behavior, assessment of contamination sources, and previous work conducted at this site.
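Approach (1), clustering a dissimilarity that mixes attribute differences with geographical separation, can be sketched as follows. This is our own construction with toy data; the mixing weight `alpha`, the coordinates and the attribute values are assumptions, not the paper's data.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Toy sampling stations: two spatial groups with distinct sediment attributes.
coords = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10.0]])
attrs = np.array([[2.0], [2.1], [1.9], [7.0], [7.2], [6.9]])  # e.g. % fines

alpha = 0.5  # assumed weight on geographic separation vs. attribute difference
D = (1 - alpha) * pdist(attrs) + alpha * pdist(coords)

# Hierarchical clustering of the mixed dissimilarity, cut into two zones.
zones = fcluster(linkage(D, method="average"), t=2, criterion="maxclust")
```

Because geographic separation enters the dissimilarity directly, the resulting zones tend to be spatially contiguous, which is the property the management areas require.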
Kecskeméti, Elizabeth; Berkelmann-Löhnertz, Beate; Reineke, Annette
2016-01-01
Using barcoded pyrosequencing, fungal and bacterial communities associated with grape berry clusters (Vitis vinifera L.) obtained from conventional, organic and biodynamic vineyard plots were investigated in two consecutive years at different stages during berry ripening. The four most abundant operational taxonomic units (OTUs) based on fungal ITS data were Botrytis cinerea, Cladosporium spp., Aureobasidium pullulans and Alternaria alternata, which represented 57% and 47% of the total reads in 2010 and 2011, respectively. Members of the genera Sphingomonas, Gluconobacter, Pseudomonas, Erwinia, and Massilia constituted 67% of the total number of bacterial 16S DNA reads in 2010 samples and 78% in 2011 samples. The viticultural management system had no significant effect on the abundance of fungi or bacteria in both years and at all three sampling dates. Exceptions were A. alternata and Pseudomonas spp., which were more abundant in the carposphere of conventional compared to biodynamic berries, as well as Sphingomonas spp., which was significantly less abundant on conventional compared to organic berries at an early ripening stage in 2011. In general, there were no significant differences in fungal and bacterial diversity indices or richness evident between management systems. No distinct fungal or bacterial communities were associated with the different maturation stages or management systems, respectively. An exception was the last stage of berry maturation in 2011, where the Simpson diversity index was significantly higher for fungal communities on biodynamic compared to conventional grapes. Our study highlights the existence of complex and dynamic microbial communities in the grape cluster carposphere, including both phytopathogenic and potentially antagonistic microorganisms, that can have a significant impact on grape production.
Such knowledge is particularly relevant for the development, selection and application of effective control measures against economically important pathogens present in the grape carposphere. PMID:27500633
Eradication of Transboundary Animal Diseases: Can the Rinderpest Success Story be Repeated?
Thomson, G R; Penrith, M-L
2017-04-01
A matrix system was developed to aid in the evaluation of the technical amenability to eradication, through mass vaccination, of transboundary animal diseases (TADs). The system involved evaluation of three basic criteria - disease management efficiency, surveillance and epidemiological factors - each in turn comprising a number of elements (17 in all). On that basis, 25 TADs that have occurred or do occur in southern Africa and for which vaccines are available, in addition to rinderpest (incorporated as a yardstick because it has been eradicated worldwide), were ranked. Cluster analysis was also applied using the same criteria to the 26 diseases, creating division into three groups. One cluster contained only diseases transmitted by arthropods (e.g. African horse sickness and Rift Valley fever) and considered difficult to eradicate because technologies for managing parasitic arthropods on a large scale are unavailable, while a second cluster contained diseases that have been widely considered to be eradicable [rinderpest, canine rabies, the Eurasian serotypes of foot and mouth disease virus (O, A, C & Asia 1) and peste des petits ruminants] as well as classical swine fever, Newcastle disease and lumpy skin disease. The third cluster contained all the other TADs evaluated, with the implication that these constitute TADs that would be more difficult to eradicate. However, it is acknowledged that the scores assigned in the course of this study may be biased. The key point is that the proposed system offers an objective method for assessment of the technical eradicability of TADs; the rankings and groupings derived during this study are less important than the provision of a systematic approach for further development and evaluation. © 2015 Blackwell Verlag GmbH.
Description and typology of intensive Chios dairy sheep farms in Greece.
Gelasakis, A I; Valergakis, G E; Arsenos, G; Banos, G
2012-06-01
The aim was to assess the intensified dairy sheep farming systems of the Chios breed in Greece, establishing a typology that may properly describe and characterize them. The study included all 66 farms of the Chios sheep breeders' cooperative Macedonia. Data were collected using a structured direct questionnaire for in-depth interviews, including questions properly selected to obtain a general description of farm characteristics and overall management practices. A multivariate statistical analysis was used on the data to obtain the most appropriate typology. Initially, principal component analysis was used to produce uncorrelated variables (principal components), which were then used for the subsequent cluster analysis. The number of clusters was decided using hierarchical cluster analysis, whereas the farms were allocated to 4 clusters using k-means cluster analysis. The identified clusters were described and afterward compared using one-way ANOVA or a chi-squared test. The main differences were evident in land availability and use, facility and equipment availability and type, expansion rates, and application of preventive flock health programs. In general, cluster 1 included newly established, intensive, well-equipped, specialized farms and cluster 2 included well-established farms with balanced sheep and feed/crop production. To cluster 3 were assigned small flock farms focusing more on arable crops than on sheep farming, with a tendency to evolve toward cluster 2, whereas cluster 4 included farms representing a rather conservative form of Chios sheep breeding with low/intermediate inputs that chose not to focus on feed/crop production.
In the studied set of farms, 4 different farmer attitudes were evident: 1) farming disrupts sheep breeding; feed should be purchased and economies of scale will decrease costs (mainly cluster 1), 2) only exercise/pasture land is necessary; at least part of the feed (pasture) must be home-grown to decrease costs (clusters 1 and 4), 3) providing pasture to sheep is essential; on-farm feed production decreases costs (mainly cluster 3), and 4) large-scale farming (feed production and cash crops) does not disrupt sheep breeding; all feed must be produced on-farm to decrease costs (mainly cluster 3). Conducting a profitability analysis among different clusters, exploring and discovering the most beneficial levels of intensified management and capital investment should now be considered. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
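The typology pipeline used above (standardize survey variables, extract principal components, then run k-means on the component scores) can be sketched as below. The toy data, the two-component projection and the cluster count are our illustrative choices; the study's actual variables and four-cluster solution differ.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy survey matrix: 12 farms x 5 management variables, drawn from two
# artificial farm profiles (invented for illustration).
base = np.vstack([rng.normal(0, 1, (6, 5)), rng.normal(5, 1, (6, 5))])

Z = (base - base.mean(axis=0)) / base.std(axis=0)   # standardize variables
_, _, Vt = np.linalg.svd(Z, full_matrices=False)    # PCA via SVD
scores = Z @ Vt[:2].T                               # first two PC scores

def kmeans(X, k, n_iter=50, seed=0):
    """Lloyd's algorithm on the (uncorrelated) component scores."""
    c = X[np.random.default_rng(seed).choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        lab = np.argmin(((X[:, None] - c[None]) ** 2).sum(axis=2), axis=1)
        c = np.array([X[lab == j].mean(axis=0) for j in range(k)])
    return lab

clusters = kmeans(scores, 2)
```

Running k-means on PC scores rather than raw survey answers removes correlation between variables, which is why the paper performs PCA before the cluster analysis.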
Mena, Carlos; Sepúlveda, Cesar; Fuentes, Eduardo; Ormazábal, Yony; Palomo, Iván
2018-05-07
Cardiovascular diseases (CVDs) are the primary cause of death and disability in the world, and the detection of populations at risk as well as the localization of vulnerable areas is essential for adequate epidemiological management. Techniques developed for spatial analysis, among them geographical information systems and spatial statistics, such as cluster detection and spatial correlation, are useful for the study of the distribution of CVDs. These techniques, enabling recognition of events at different geographical levels of study (e.g., rural, deprived neighbourhoods, etc.), make it possible to relate CVDs to factors present in the immediate environment. The systematic literature review presented here shows that this group of diseases is clustered with regard to incidence, mortality and hospitalization as well as obesity, smoking, increased glycated haemoglobin levels, hypertension, physical activity and age. In addition, acquired variables such as income, residency (rural or urban) and education contribute to CVD clustering. Both local cluster detection and spatial regression techniques give statistical weight to the findings, providing valuable information that can influence response mechanisms in the health services by indicating locations in need of intervention and guiding the assignment of available resources.
Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop
NASA Astrophysics Data System (ADS)
Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.
2018-04-01
The data center is a new concept of data processing and application proposed in recent years, based on data-driven processing, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes the cluster's computing nodes and improves the efficiency of data-parallel applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls on many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and meeting the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by building an actual Hadoop service system, testing the storage efficiency for different image data and numbers of concurrent users, and analyzing how the distributed storage architecture improves the application efficiency of remote sensing images.
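The pyramid-building step that such MapReduce workers parallelize can be illustrated with simple 2x2 block aggregation: each coarser level halves the resolution, and every block average is an independent task a mapper could compute for its assigned tile. This is a generic sketch of the technique, not the paper's code.

```python
import numpy as np

def next_level(img):
    """One pyramid step: average disjoint 2x2 blocks (the per-block work
    that a single mapper could do independently for its tile)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_pyramid(img, levels):
    """Full pyramid: level 0 is the base image, each level halves resolution."""
    out = [img]
    for _ in range(levels):
        out.append(next_level(out[-1]))
    return out

base = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
pyr = build_pyramid(base, 2)
```

Because each 2x2 block only reads its own pixels, the blocks of one level can be computed on different nodes and merged, which is what makes the pyramid a natural MapReduce workload.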
1990-12-02
Onboard the Space Shuttle Orbiter Columbia (STS-35), the various components of the Astro-1 payload are seen backdropped against dark space. Parts of the Hopkins Ultraviolet Telescope (HUT), Ultraviolet Imaging Telescope (UIT), and the Wisconsin Ultraviolet Photo-Polarimetry Experiment (WUPPE) are visible on the Spacelab pallet. The Broad-Band X-Ray Telescope (BBXRT) is behind the pallet and is not visible in this scene. The smaller cylinder in the foreground is the igloo. The igloo was a pressurized container housing the Command Data Management System, which interfaced with the in-cabin controllers to control the Instrument Pointing System (IPS) and the telescopes. The Astro Observatory was designed to explore the universe by observing and measuring the ultraviolet radiation from celestial objects. Astronomical targets of observation selected for Astro missions included planets, stars, star clusters, galaxies, clusters of galaxies, quasars, remnants of exploded stars (supernovae), clouds of gas and dust (nebulae), and the interstellar medium. Managed by the Marshall Space Flight Center, the Astro-1 was launched aboard the Space Shuttle Orbiter Columbia (STS-35) on December 2, 1990.
N-Screen Aware Multicriteria Hybrid Recommender System Using Weight Based Subspace Clustering
Ullah, Farman; Lee, Sungchang
2014-01-01
This paper presents a recommender system for N-screen services in which users have multiple devices with different capabilities. In N-screen services, a user can use various devices at different locations and times and can change devices while the service is running. N-screen aware recommendation seeks to improve the user experience with recommended content by considering user N-screen device attributes, such as screen resolution, media codec, remaining battery time, and access network, as well as user temporal usage pattern information that existing recommender systems do not consider. For N-screen aware recommendation support, this work introduces a user device profile collaboration agent, a manager, and an N-screen control server to acquire and manage user N-screen device profiles. Furthermore, a multicriteria hybrid framework is suggested that incorporates N-screen device information with user preferences and demographics. In addition, we propose individual feature and subspace weight based clustering (IFSWC) to assign different weights to each subspace and to each feature within a subspace in the hybrid framework. The proposed system improves accuracy and precision and mitigates scalability, sparsity, and cold-start issues. Simulation results demonstrate its effectiveness. PMID:25152921
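The abstract does not give the IFSWC formulas, but the core idea of weighting both subspaces and the features within them can be sketched as a weighted distance; the function name, subspace layout, and weights below are hypothetical illustrations, not the paper's definitions:

```python
import math

def subspace_weighted_distance(u, v, subspaces, subspace_w, feature_w):
    """Distance combining per-subspace and per-feature weights, in the
    spirit of weight-based subspace clustering (hypothetical sketch).
    `subspaces` maps a subspace name to the feature indices it contains."""
    total = 0.0
    for name, idxs in subspaces.items():
        # weighted squared distance within this subspace
        d = sum(feature_w[name][k] * (u[i] - v[i]) ** 2
                for k, i in enumerate(idxs))
        total += subspace_w[name] * d
    return math.sqrt(total)

# toy profile vectors: features 0-1 form a "device" subspace, feature 2 a "usage" one
d = subspace_weighted_distance(
    (1.0, 0.0, 2.0), (0.0, 0.0, 0.0),
    subspaces={"device": [0, 1], "usage": [2]},
    subspace_w={"device": 1.0, "usage": 0.5},
    feature_w={"device": [1.0, 1.0], "usage": [1.0]})
print(round(d, 3))
```

Such a distance can then be plugged into any standard clustering routine in place of the plain Euclidean metric.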
Schafrick, Nathaniel H.; Milbrath, Meghan O.; Berrocal, Veronica J.; Wilson, Mark L.; Eisenberg, Joseph N. S.
2013-01-01
Mosquito management within households remains central to the control of dengue virus transmission. An important factor in these management decisions is the spatial clustering of Aedes aegypti. We measured spatial clustering of Ae. aegypti in the town of Borbón, Ecuador and assessed what characteristics of breeding containers influenced the clustering. We used logistic regression to assess the spatial extent of that clustering. We found strong evidence for juvenile mosquito clustering within 20 m and for adult mosquito clustering within 10 m, and stronger clustering associations for containers ≥ 40 L than those < 40 L. Aedes aegypti clusters persisted after adjusting for various container characteristics, suggesting that patterns are likely attributable to short dispersal distances rather than shared characteristics of containers in cluster areas. These findings have implications for targeting Ae. aegypti control efforts. PMID:24002483
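Distance-threshold clustering of the kind used to detect aggregation within 10-20 m can be sketched with a union-find pass over pairwise distances; the coordinates and threshold below are illustrative only, not the study's data:

```python
import math

def spatial_clusters(points, threshold):
    """Group points into clusters in which every member is within
    `threshold` of at least one other member (single linkage, union-find)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# three containers chained within 20 m of each other, one isolated
houses = [(0.0, 0.0), (15.0, 0.0), (25.0, 5.0), (200.0, 200.0)]
print(spatial_clusters(houses, 20.0))  # [[0, 1, 2], [3]]
```

Real analyses of geographic coordinates would use great-circle rather than planar distances, but the grouping logic is the same.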
Cluster analysis and prediction of treatment outcomes for chronic rhinosinusitis.
Soler, Zachary M; Hyer, J Madison; Rudmik, Luke; Ramakrishnan, Viswanathan; Smith, Timothy L; Schlosser, Rodney J
2016-04-01
Current clinical classifications of chronic rhinosinusitis (CRS) have weak prognostic utility regarding treatment outcomes. Simplified discriminant analysis based on unsupervised clustering has identified novel phenotypic subgroups of CRS, but its prognostic utility is unknown. We sought to determine whether discriminant analysis allows prognostication in patients choosing surgery versus continued medical management. In a multi-institutional prospective study, patients with CRS in whom initial medical therapy had failed, and who then self-selected either continued medical management or surgical treatment, were separated into 5 clusters based on a previously described discriminant analysis using total Sino-Nasal Outcome Test-22 (SNOT-22) score, age, and missed productivity. Patients completed the SNOT-22 at baseline and over 18 months of follow-up. Baseline demographic and objective measures included olfactory testing, computed tomography, and endoscopy scoring. SNOT-22 outcomes for surgical versus continued medical treatment were compared across clusters. Data were available on 690 patients. Baseline differences in demographics, comorbidities, objective disease measures, and patient-reported outcomes were similar to previous clustering reports. Three of 5 clusters identified by means of discriminant analysis had improved SNOT-22 outcomes with surgical intervention compared with continued medical management (surgery was a mean of 21.2 points better across these 3 clusters at 6 months, P < .05). These differences were sustained at 18 months of follow-up. Two of 5 clusters had similar outcomes when comparing surgery with continued medical management. A simplified discriminant analysis based on 3 common clinical variables is able to cluster patients and provide prognostic information regarding surgical treatment versus continued medical management in patients with CRS. Copyright © 2015 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
Seizure clusters: A common, understudied and undertreated phenomenon in refractory epilepsy.
Komaragiri, Arpitha; Detyniecki, Kamil; Hirsch, Lawrence J
2016-06-01
Epilepsy is widely prevalent globally and has emerged as a well-studied neurological condition in the recent past. Seizure clusters, however, remain a form of seizure activity whose etiopathogenesis and management are yet to be fully elucidated. This review attempts to recapitulate the current understanding of seizure clusters based on the research performed to date. The article provides a comprehensive review of various aspects of clusters and discusses definitions, prevalence, risk factors, impact on quality of life, approved treatment modalities, and recent advances in management. Copyright © 2016 Elsevier Inc. All rights reserved.
Analyzing human errors in flight mission operations
NASA Technical Reports Server (NTRS)
Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef
1993-01-01
A long-term program is in progress at JPL to reduce the cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISAs) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISAs) is presented here. The resulting clusters described the underlying relationships among the ISAs. Initial models of human error in flight mission operations are presented. Next, the Voyager ISAs will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.
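A Pareto analysis of the kind mentioned above amounts to ranking error categories by frequency and tracking their cumulative share; the category labels and counts in this sketch are invented for illustration and are not the actual ISA taxonomy:

```python
from collections import Counter

def pareto_table(error_categories):
    """Rank error categories by frequency and report each category's
    count and the cumulative percentage of all reports covered so far."""
    counts = Counter(error_categories).most_common()
    total, cumulative, rows = sum(c for _, c in counts), 0.0, []
    for category, count in counts:
        cumulative += 100.0 * count / total
        rows.append((category, count, round(cumulative, 1)))
    return rows

# toy report labels, not the real ISA data
reports = ["human"] * 38 + ["hardware"] * 30 + ["software"] * 32
for row in pareto_table(reports):
    print(row)
```

The classic use of such a table is to identify the few categories that account for most of the defects and target them first.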
Rocketdyne automated dynamics data analysis and management system
NASA Technical Reports Server (NTRS)
Tarn, Robert B.
1988-01-01
An automated dynamics data analysis and management system implemented on a DEC VAX minicomputer cluster is described. Multichannel acquisition, Fast Fourier Transform analysis, and an online database have significantly improved the analysis of wideband transducer responses from Space Shuttle Main Engine testing. Leakage error correction to recover sinusoid amplitudes and to correct for frequency slewing is described. The phase errors caused by FM recorder/playback head misalignment are automatically measured and used to correct the data. Data compression methods are described and compared. The system hardware is described. Applications using the database are introduced, including software for power spectral density, instantaneous time history, amplitude histogram, fatigue analysis, and rotordynamics expert system analysis.
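The abstract does not specify Rocketdyne's correction algorithm; as a minimal sketch of the general idea, a windowed DFT bin can be compensated by the window's coherent gain (0.5 for a Hann window) to recover the amplitude of a bin-centered sinusoid. The 64-sample test signal is invented for illustration:

```python
import cmath, math

def hann_corrected_amplitude(samples, k):
    """Estimate the amplitude of a real sinusoid near DFT bin k after a
    Hann window, dividing out the window's coherent gain of 0.5."""
    n = len(samples)
    windowed = [s * 0.5 * (1 - math.cos(2 * math.pi * i / n))
                for i, s in enumerate(samples)]
    X = sum(w * cmath.exp(-2j * math.pi * k * i / n)
            for i, w in enumerate(windowed))
    # single-sided amplitude: factor 2/N for a real sinusoid,
    # then divide by the Hann coherent gain of 0.5
    return 2 * abs(X) / (n * 0.5)

# a bin-centered, unit-amplitude sinusoid at bin 8 of 64 samples
n = 64
sig = [math.sin(2 * math.pi * 8 * i / n) for i in range(n)]
print(round(hann_corrected_amplitude(sig, 8), 3))  # ≈ 1.0
```

For a sinusoid that falls between bins, energy leaks into neighboring bins and a broadband (incoherent-gain) correction or interpolation across bins is needed instead.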
The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi
2018-03-19
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as the central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop ecosystem to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
Data-Driven Packet Loss Estimation for Node Healthy Sensing in Decentralized Cluster.
Fan, Hangyu; Wang, Huandong; Li, Yong
2018-01-23
Decentralized clustering in modern information technology has been widely adopted in various fields in recent years. One of the main reasons is its high availability and failure tolerance, which can prevent a single point of failure from bringing down the entire system. Recently, toolkits such as Akka have been commonly used to easily build this kind of cluster. However, clusters of this kind, which use Gossip as their membership management protocol and a link-failure detection mechanism to detect link failures, cannot deal with the scenario in which a node stochastically drops packets and corrupts the membership status of the cluster. In this paper, we formulate the problem as evaluating link quality and finding a maximum clique (an NP-complete problem) in the connectivity graph. We then propose an algorithm consisting of two models, driven by application-layer data, to solve these two problems. Through simulations with statistical data and a real-world product, we demonstrate that our algorithm performs well.
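The abstract does not spell out the clique search; a standard way to enumerate maximal cliques in a healthy-link connectivity graph, from which the maximum clique is then picked, is the Bron-Kerbosch recursion, sketched here on a toy adjacency set:

```python
def bron_kerbosch(r, p, x, adj, cliques):
    """Enumerate maximal cliques: r is the growing clique, p the candidate
    vertices, x the vertices already excluded from extension."""
    if not p and not x:
        cliques.append(sorted(r))
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, cliques)
        p.remove(v)
        x.add(v)

# toy healthy-link graph: nodes 0-2 form a clique, node 3 hangs off node 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(max(cliques, key=len))  # [0, 1, 2]
```

Since maximum clique is NP-complete, this exact search is only practical for the modest cluster sizes typical of membership management; larger graphs need pivoting or heuristics.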
Testing the Archivas Cluster (Arc) for Ozone Monitoring Instrument (OMI) Scientific Data Storage
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2005-01-01
The Ozone Monitoring Instrument (OMI) was launched on July 15, 2004 aboard NASA's Aura spacecraft, the third of the major platforms of the EOS program. In addition to the long-term archive and distribution of data from OMI through the Goddard Earth Science Distributed Active Archive Center (GES DAAC), we are evaluating other archive mechanisms that can store the data in a more immediately available form, where it can be used for further data production and analysis. In 2004, Archivas, Inc. was selected by NASA's Small Business Innovative Research (SBIR) program for the development of their Archivas Cluster (ArC) product. ArC is an online, disk-based system utilizing self-management and automation on a Linux cluster. Its goal is to produce a low-cost solution coupled with ease of management. The OMI project is an application partner of the SBIR program and has deployed a small cluster (5 TB) based on the beta Archivas software. We performed extensive testing of the unit using production OMI data since launch. In 2005, Archivas, Inc. was funded in SBIR Phase II for further development, which will include testing scalability with the deployment of a larger (35 TB) cluster at Goddard. We plan to include ArC in the OMI Team Leader Computing Facility (TLCF), hosting OMI data for direct access and analysis by the OMI Science Team. This presentation will include a brief technical description of the Archivas Cluster, a summary of the SBIR Phase I beta testing results, and an overview of the OMI ground data processing architecture, including its interaction with the Phase II Archivas Cluster and the hosting of OMI data for the scientists.
ERIC Educational Resources Information Center
Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.
This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for two programs in the state's postsecondary-level computer information systems technology cluster: computer programming and network support. Presented in the introduction are program descriptions and suggested course…
Eric J. Gustafson
1998-01-01
To integrate multiple uses (mature forest and commodity production) better on forested lands, timber management strategies that cluster harvests have been proposed. One such approach clusters harvest activity in space and time, and rotates timber production zones across the landscape with a long temporal period (dynamic zoning). Dynamic zoning has...
2017-06-30
Clustered Regularly Interspaced Short Palindromic Repeat/CRISPR-associated protein 9 (CRISPR/Cas9)-based Gene Drives for Invasive Species Management on Military Lands (Environmental Laboratory, ERDC/EL SR-17-2). Ping... Abstract: Applications of genetic engineering
Irrigation effects on soil attributes and grapevine performance in a 'Godello' vineyard of NW Spain
NASA Astrophysics Data System (ADS)
Fandiño, María; Trigo-Córdoba, Emiliano; Martínez, Emma M.; Bouzas-Cid, Yolanda; Rey, Benjamín J.; Cancela, Javier J.; Mirás-Avalos, Jose M.
2014-05-01
Irrigation systems are increasingly being used in Galician vineyards. However, a lack of information about irrigation management can lead to misuse of these systems and, consequently, to reductions in berry quality and loss of water resources. In this context, experiences with Galician cultivars may provide useful information. A field experiment was carried out over two seasons (2012-2013) on Vitis vinifera (L.) cv. 'Godello' in order to assess the effects of irrigation on soil attributes, grapevine performance and berry composition. The field site was a commercial vineyard located in A Rúa (Ourense-NW Spain). Rain-fed vines (R) were compared with two irrigation systems: surface drip irrigation (DI) and subsurface drip irrigation (SDI). Physical and chemical characteristics of soil were analyzed after installing irrigation systems at the beginning of each season, in order to assess the effects that irrigation might have on soil attributes. Soil water content, leaf and stem water potentials and stomatal conductance were periodically measured over the two seasons. Yield components including number of clusters, yield per plant and cluster average weight were taken. Soluble solids, pH, total acidity and amino acids contents were measured on the grapes at harvest. Pruning weight was also recorded. Soil attributes did not significantly vary due to the irrigation treatments. Stem water potentials were significantly lower for R plants on certain dates through the season; stomatal conductance was similar for the three treatments in 2013, whereas in 2012 SDI plants showed greater values. SDI plants yielded more than R plants, due both to a greater number of clusters per plant and to heavier clusters. Pruning weight was significantly higher in SDI plants. Berry composition was similar for the three treatments except for the amino acids content, which was higher under SDI conditions.
These results may be helpful for a sustainable management of irrigation in Galician vineyards.
Hahus, Ian; Migliaccio, Kati; Douglas-Mankin, Kyle; Klarenberg, Geraldine; Muñoz-Carpena, Rafael
2018-04-27
Hierarchical and partitional cluster analyses were used to compartmentalize Water Conservation Area 1, a managed wetland within the Arthur R. Marshall Loxahatchee National Wildlife Refuge in southeast Florida, USA, based on physical, biological, and climatic geospatial attributes. Single, complete, average, and Ward's linkages were tested during the hierarchical cluster analyses, with average linkage providing the best results. In general, the partitional method, partitioning around medoids, found clusters that were more evenly sized and more spatially aggregated than those resulting from the hierarchical analyses. However, hierarchical analysis appeared to be better suited to identify outlier regions that were significantly different from other areas. The clusters identified by geospatial attributes were similar to clusters developed for the interior marsh in a separate study using water quality attributes, suggesting that similar factors have influenced variations in both the set of physical, biological, and climatic attributes selected in this study and water quality parameters. However, geospatial data allowed further subdivision of several interior marsh clusters identified from the water quality data, potentially indicating zones with important differences in function. Identification of these zones can be useful to managers and modelers by informing the distribution of monitoring equipment and personnel as well as delineating regions that may respond similarly to future changes in management or climate.
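The average-linkage agglomerative step that performed best in this study can be sketched in a few lines of pure Python; the point coordinates and target cluster count below are invented for illustration, and a real geospatial analysis would use dedicated libraries:

```python
import math

def average_linkage(points, n_clusters):
    """Agglomerative clustering with average linkage: repeatedly merge the
    two clusters with the smallest mean pairwise distance until
    n_clusters remain. Pure-Python sketch, O(n^3), for small datasets."""
    clusters = [[p] for p in points]

    def avg_link(c1, c2):
        return sum(math.dist(a, b) for a in c1 for b in c2) / (len(c1) * len(c2))

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: avg_link(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# two well-separated toy groups
pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(average_linkage(pts, 2))
```

Partitioning around medoids, the partitional method the study found gave more evenly sized clusters, instead fixes k representative points and swaps medoids to minimize total dissimilarity.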
Production Experiences with the Cray-Enabled TORQUE Resource Manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezell, Matthew A; Maxwell, Don E; Beer, David
High performance computing resources utilize batch systems to manage the user workload. Cray systems are uniquely different from typical clusters due to Cray's Application Level Placement Scheduler (ALPS). ALPS manages binary transfer, job launch and monitoring, and error handling. Batch systems require special support to integrate with ALPS using an XML protocol called BASIL. Previous versions of Adaptive Computing's TORQUE and Moab batch suite integrated with ALPS from within Moab, using Perl scripts to interface with BASIL. This would occasionally lead to problems when the components became unsynchronized. Version 4.1 of the TORQUE Resource Manager introduced new features that allow it to integrate directly with ALPS using BASIL. This paper describes production experiences at Oak Ridge National Laboratory using the new TORQUE software versions, as well as ongoing and future work to improve TORQUE.
Massive problem reports mining and analysis based parallelism for similar search
NASA Astrophysics Data System (ADS)
Zhou, Ya; Hu, Cailin; Xiong, Han; Wei, Xiafei; Li, Ling
2017-05-01
Massive numbers of problem reports and their solutions, accumulated over time and continuously collected in XML Spreadsheet (XMLSS) format from enterprises and organizations, record comprehensive descriptions of problems that can help technicians trace problems and their solutions. Effectively managing and analyzing these massive semi-structured data, in order to provide solutions to similar problems, support decisions on immediate problems, and assist product optimization during hardware and software maintenance, is a significant and challenging issue. For this purpose, we build a data management system to manage, mine, and analyze these data, whose search results can be categorized and organized into several categories so that users can quickly find where the results of interest are located. Experimental results demonstrate that this system greatly outperforms a traditional centralized management system in performance and in its capability to adapt to heterogeneous data. Moreover, by re-extracting topics, it enables each cluster to be described more precisely and reasonably.
TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling
NASA Astrophysics Data System (ADS)
Nelson, J.; Jones, N.; Ames, D. P.
2015-12-01
Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leveraging these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, which have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage HTCondor, the open-source computing-resource and job-management software, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
Mansour, Ahmad M.; Hamade, Haya; Ghaddar, Ayman; Mokadem, Ahmad Samih; El Hajj Ali, Mohamad; Awwad, Shady
2012-01-01
Purpose: To present the visual outcomes and ocular sequelae of victims of cluster bombs. Materials and Methods: This retrospective, multicenter case series of ocular injury due to cluster bombs was conducted for 3 years after the war in South Lebanon (July 2006). Data were gathered from the reports to the Information Management System for Mine Action. Results: There were 308 victims of cluster bombs; 36 individuals were killed, 2 of whom had ocular lacerations, and 272 individuals were injured, 18 of whom sustained ocular injury. These 18 surviving individuals were assessed by the authors. Ocular injury occurred in 6.5% (20/308) of cluster bomb victims. Trauma to multiple organs occurred in 12 of 18 cases (67%) with ocular injury. Ocular findings included corneal or scleral lacerations (16 eyes), corneal foreign bodies (9 eyes), corneal decompensation (2 eyes), ruptured cataract (6 eyes), and intravitreal foreign bodies (10 eyes). The corneas of one patient had extreme attenuation of the endothelium. Conclusions: Ocular injury occurred in 6.5% of cluster bomb victims, and 67% of the patients with ocular injury sustained trauma to multiple organs. Visual morbidity in civilians is an additional reason for a global ban on the use of cluster bombs. PMID:22346132
Business, Marketing and Management Occupations. Education for Employment Task Lists.
ERIC Educational Resources Information Center
Lake County Area Vocational Center, Grayslake, IL.
The duties and tasks found in these task lists form the basis of instructional content for secondary, postsecondary, and adult occupational training programs for business, marketing, and management occupations. The business, marketing, and management occupations are divided into eight clusters. The clusters and occupations are:…
Classification Scheme for Centuries of Reconstructed Streamflow Droughts in Water Resources Planning
NASA Astrophysics Data System (ADS)
Stagge, J.; Rosenberg, D. E.
2017-12-01
New advances in reconstructing streamflow from tree rings have permitted the reconstruction of flows back to the 1400s or earlier at a monthly, rather than annual, time scale. This is a critical step for incorporating centuries of streamflow reconstructions into water resources planning. Expanding the historical record is particularly important where the observed record contains few of these rare, but potentially disastrous extreme events. We present how a paleo-drought clustering approach was incorporated alongside more traditional water management planning in the Weber River basin, northern Utah. This study used newly developed monthly reconstructions of flow since 1430 CE and defined drought events as flow less than the 50th percentile during at least three contiguous months. Characteristics for each drought event included measures of drought duration, severity, cumulative loss, onset, seasonality, recession rate, and recovery rate. Reconstructed drought events were then clustered by hierarchical clustering to determine distinct drought "types" and the historical event that best represents the centroid of each cluster. The resulting 144 reconstructed drought events in the Weber basin clustered into nine distinct types, of which four were severe enough to potentially require drought management. Using the characteristic drought event for each of the severe drought clusters, water managers were able to estimate system reliability and the historical return frequency for each drought type. Plotting drought duration and severity from centuries of historical reconstructed events alongside observed events and climate change projections further placed recent events into a historical context. 
For example, the drought of record for the Weber River remains the most severe event in the record with regard to minimum flow percentile (1930, 7 years), but is far from the longest event in the longer historical record, where events beginning in 1658 and 1705 both lasted longer than 13 years. The proposed drought clustering approach provides a powerful tool for merging historical reconstructions, observations, and climate change projections in water resources planning, while also providing a framework to make use of valuable and increasingly available tree-ring reconstructions of monthly streamflow.
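The event definition above (at least three contiguous months below the 50th flow percentile) is straightforward to operationalize; this sketch extracts events and a simple severity measure from a toy series of monthly flow percentiles, which is invented for illustration:

```python
def find_droughts(flow_pct, min_len=3, threshold=0.5):
    """Identify drought events: runs of >= min_len contiguous months with
    flow percentile below `threshold`. Returns (start_index, duration,
    severity), where severity is the mean shortfall below the threshold."""
    events, start = [], None
    for i, p in enumerate(flow_pct + [1.0]):  # sentinel closes a trailing run
        if p < threshold and start is None:
            start = i
        elif p >= threshold and start is not None:
            duration = i - start
            if duration >= min_len:
                run = flow_pct[start:i]
                events.append((start, duration,
                               sum(threshold - x for x in run) / duration))
            start = None
    return events

# toy monthly flow percentiles: one 3-month event, one 2-month run (too short)
flows = [0.6, 0.4, 0.3, 0.2, 0.7, 0.45, 0.44, 0.8]
print(find_droughts(flows))
```

Characteristics such as onset, recession rate, and recovery rate can then be computed per event before feeding the events into a hierarchical clustering step like the one described above.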
Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun
2012-01-01
Next Generation Sequencing is highly resource intensive. NGS tasks related to data processing, management and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource-intensive nature of NGS secondary analysis, built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network attached storage device expandable up to 40TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.
Traffic Flow Management: Data Mining Update
NASA Technical Reports Server (NTRS)
Grabbe, Shon R.
2012-01-01
This presentation provides an update on recent data mining efforts that have been designed to (1) identify like/similar days in the national airspace system, (2) cluster/aggregate national-level rerouting data, and (3) apply machine learning techniques to predict when Ground Delay Programs are required at a weather-impacted airport.
Advanced Cyber Attack Modeling Analysis and Visualization
2010-03-01
[Only extraction fragments of this report survive: data sources for security management (web logs, NetFlow data, TCP dump data, system logs), a caption for Figure 8 ("TVA attack graphs"), and citations to the Symposium on Graph Drawing (1996) and to NVisionIP, a NetFlow visualization tool.]
Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system
NASA Astrophysics Data System (ADS)
Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd
2016-10-01
Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine.
This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable for other cluster operators. This contribution will report on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.
Berkeley lab checkpoint/restart (BLCR) for Linux clusters
Hargrove, Paul H.; Duell, Jason C.
2006-09-01
This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time- and space-efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to fault precursors (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance, reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters. © 2006 IOP Publishing Ltd.
Autonomous distributed self-organization for mobile wireless sensor networks.
Wen, Chih-Yu; Tang, Hung-Kai
2009-01-01
This paper presents an adaptive combined-metrics-based clustering scheme for mobile wireless sensor networks, which manages the mobile sensors by utilizing the hierarchical network structure and allocates network resources efficiently. A local criterion is used to help mobile sensors form a new cluster or join a current cluster. The messages transmitted during hierarchical clustering are applied to choose distributed gateways such that communication for adjacent clusters and distributed topology control can be achieved. In order to balance the load among clusters and govern topology change, a cluster reformation scheme using localized criteria is implemented. The proposed scheme is simulated and analyzed to abstract the network behaviors in a number of settings. The experimental results show that the proposed algorithm provides efficient network topology management and achieves high scalability in mobile sensor networks.
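The local join-or-form rule described above can be illustrated with a minimal sketch, assuming a simple distance-based criterion: a mobile sensor joins the nearest cluster head within radio range, and otherwise declares itself a new cluster head. The function name and the distance criterion are illustrative assumptions, not the paper's exact metric.

```python
import math

def join_or_form(sensor, cluster_heads, radio_range):
    """Return the chosen head (x, y), or None if the sensor forms a new cluster.

    Hypothetical local criterion: join the nearest head within radio range.
    """
    best, best_d = None, float("inf")
    for head in cluster_heads:
        d = math.dist(sensor, head)
        if d <= radio_range and d < best_d:
            best, best_d = head, d
    return best  # None -> the sensor becomes a cluster head itself

heads = [(0.0, 0.0), (10.0, 0.0)]
assert join_or_form((1.0, 1.0), heads, radio_range=5.0) == (0.0, 0.0)
assert join_or_form((20.0, 20.0), heads, radio_range=5.0) is None
```

A real scheme would combine several metrics (residual energy, mobility, connectivity) rather than distance alone, but the decision structure is the same.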
Browning, Colette; Chapman, Anna; Cowlishaw, Sean; Li, Zhixin; Thomas, Shane A; Yang, Hui; Zhang, Tuohong
2011-02-09
The Happy Life Club™ is an intervention that utilises health coaches trained in behavioural change and motivational interviewing techniques to assist with the management of type 2 diabetes mellitus (T2DM) in primary care settings in China. Health coaches will support participants to improve modifiable risk factors and adhere to effective self-management treatments associated with T2DM. A cluster randomised controlled trial involving 22 Community Health Centres (CHCs) in Fengtai District of Beijing, China. CHCs will be randomised into a control or intervention group, facilitating recruitment of at least 1320 individual participants with T2DM into the study. Participants in the intervention group will receive a combination of both telephone and face-to-face health coaching over 18 months, in addition to usual care received by the control group. Health coaching will be performed by CHC doctors and nurses certified in coach-assisted chronic disease management. Outcomes will be assessed at baseline and again at 6, 12 and 18 months by means of a clinical health check and self-administered questionnaire. The primary outcome measure is HbA1c level. Secondary outcomes include metabolic, physiological and psychological variables. This cluster RCT has been developed to suit the Chinese health care system and will contribute to the evidence base for the management of patients with T2DM. With a strong focus on self-management and health coach support, the study has the potential to be adapted to other chronic diseases, as well as other regions of China. Current Controlled Trials ISRCTN01010526.
Advanced Approach of Multiagent Based Buoy Communication
Gricius, Gediminas; Drungilas, Darius; Dzemydiene, Dale
2015-01-01
Usually, a hydrometeorological information system is faced with great data flows, but the data levels are often excessive, depending on the observed region of the water. The paper presents advanced buoy communication technologies based on multiagent interaction and data exchange between several monitoring system nodes. The proposed management of buoy communication is based on a clustering algorithm, which enables the performance of the hydrometeorological information system to be enhanced. The experiment is based on the design and analysis of the inexpensive but reliable Baltic Sea autonomous monitoring network (buoys), which would be able to continuously monitor and collect temperature, waviness, and other required data. The proposed approach of multiagent based buoy communication enables all the data from the coastal-based station to be monitored with limited transition speed by setting different tasks for the agent-based buoy system according to the clustering information. PMID:26345197
Quinn, Emma; Johnstone, Travers; Najjar, Zeina; Cains, Toni; Tan, Geoff; Huhtinen, Essi; Nilsson, Sven; Burgess, Stuart; Dunn, Matthew; Gupta, Leena
2017-09-05
The incident command system (ICS) provides a common structure to control and coordinate an emergency response, regardless of scale or predicted impact. The lessons learned from the application of an ICS for large infectious disease outbreaks are documented. However, there is scant evidence on the application of an ICS to manage a local multiagency response to a disease cluster with environmental health risks. The Sydney Local Health District Public Health Unit (PHU) in New South Wales, Australia, was notified of 5 cases of Legionnaires' disease during 2 weeks in May 2016. This unusual incident triggered a multiagency investigation involving an ICS with staff from the PHU, 3 local councils, and the state health department to help prevent any further public health risk. The early and judicious use of ICS enabled a timely and effective response by supporting clear communication lines between the incident controller and field staff. The field team was key in preventing any ongoing public health risk through inspection, sampling, testing, and management of water systems identified to be at-risk for transmission of legionella. Good working relationships between partner agencies and trust in the technical proficiency of environmental health staff aided in the effective management of the response. (Disaster Med Public Health Preparedness. 2017;page 1 of 4).
Globular cluster systems as tracers of environmental effects on Virgo early-type dwarfs
NASA Astrophysics Data System (ADS)
Sánchez-Janssen, R.; Aguerri, J. A. L.
2012-08-01
Early-type dwarfs (dEs) are by far the most abundant galaxy population in nearby clusters. Whether these objects are primordial, or the recent end products of the different physical mechanisms that can transform galaxies once they enter these high-density environments, is still a matter of debate. Here we present a novel approach to test these scenarios by comparing the properties of the globular cluster systems (GCSs) of Virgo dEs and their potential progenitors with simple predictions from gravitational and hydrodynamical interaction models. We show that low-mass (M★ ≲ 2 × 108 M⊙) dEs have GCSs consistent with the descendants of gas-stripped late-type dwarfs. On the other hand, higher mass dEs have properties - including the high mass specific frequencies of their GCSs and their concentrated spatial distribution within Virgo - incompatible with a recent, environmentally driven evolution. They mostly comprise nucleated systems, but also dEs with recent star formation and/or disc features. Bright, nucleated dEs appear to be a population that has long resided within the cluster potential well, but have surprisingly managed to retain very rich and spatially extended GCSs - possibly an indication of high total masses. Our analysis does not favour violent evolutionary mechanisms that result in significant stellar mass-losses, but more gentle processes involving gas removal by a combination of internal and external factors, and highlights the relevant role of initial conditions. Additionally, we briefly comment on the origin of luminous cluster S0 galaxies.
Software system for data management and distributed processing of multichannel biomedical signals.
Franaszczuk, P J; Jouny, C C
2004-01-01
The presented software is designed for efficient utilization of a cluster of PC computers for signal analysis of multichannel physiological data. The system consists of three main components: 1) a library of input and output procedures, 2) a database storing additional information about location in a storage system, and 3) a user interface for selecting data for analysis, choosing programs for analysis, and distributing computing and output data on cluster nodes. The system allows for processing multichannel time series data in multiple binary formats. The description of the data format, channels and time of recording is included in separate text files. Definition and selection of multiple channel montages is possible. Epochs for analysis can be selected both manually and automatically. Implementation of new signal processing procedures is possible with minimal programming overhead for the input/output processing and user interface. The number of nodes in the cluster used for computations and the amount of storage can be changed with no major modification to the software. Current implementations include the time-frequency analysis of multiday, multichannel recordings of intracranial EEG of epileptic patients as well as evoked response analyses of repeated cognitive tasks.
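The distribution of selected epochs across cluster nodes can be sketched as a simple round-robin allocation. This is a hedged illustration of the kind of work assignment such a system performs; the function and node names are assumptions, not the authors' implementation.

```python
def distribute(epochs, nodes):
    """Round-robin assignment of analysis epochs to cluster nodes.

    Illustrative sketch only; a production system would also weight
    assignments by node load and data locality.
    """
    plan = {n: [] for n in nodes}
    for i, epoch in enumerate(epochs):
        plan[nodes[i % len(nodes)]].append(epoch)
    return plan

plan = distribute(["e1", "e2", "e3", "e4", "e5"], ["node0", "node1"])
assert plan["node0"] == ["e1", "e3", "e5"]
assert plan["node1"] == ["e2", "e4"]
```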
Dynamic Airspace Configuration
NASA Technical Reports Server (NTRS)
Bloem, Michael J.
2014-01-01
In air traffic management systems, airspace is partitioned into regions in part to distribute the tasks associated with managing air traffic among different systems and people. These regions, as well as the systems and people allocated to each, are changed dynamically so that air traffic can be safely and efficiently managed. It is expected that new air traffic control systems will enable greater flexibility in how airspace is partitioned and how resources are allocated to airspace regions. In this talk, I will begin by providing an overview of some previous work and open questions in Dynamic Airspace Configuration research, which is concerned with how to partition airspace and assign resources to regions of airspace. For example, I will introduce airspace partitioning algorithms based on clustering, integer programming optimization, and computational geometry. I will conclude by discussing the development of a tablet-based tool that is intended to help air traffic controller supervisors configure airspace and controllers in current operations.
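The clustering-based partitioning mentioned above can be sketched with a tiny k-means over aircraft positions, where each resulting cluster stands in for one airspace region. This is a minimal sketch under stated assumptions: a real partitioner would also enforce controller workload limits and sector geometry constraints, which are omitted here.

```python
import math

def kmeans(points, centers, iters=20):
    """Naive k-means: returns (centers, groups); each group is one 'region'."""
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: math.dist(p, centers[i]))
            groups[i].append(p)
        # move each center to the mean of its group (keep it if the group is empty)
        centers = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g)) if g else c
            for g, c in zip(groups, centers)
        ]
    return centers, groups

# two spatially separated traffic clumps -> two regions of three flights each
pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centers, regions = kmeans(pts, centers=[(0, 0), (10, 10)])
assert sorted(len(g) for g in regions) == [3, 3]
```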
Continuous Security and Configuration Monitoring of HPC Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia-Lomeli, H. D.; Bertsch, A. D.; Fox, D. M.
Continuous security and configuration monitoring of information systems has been a time-consuming and laborious task for system administrators at the High Performance Computing (HPC) center. Prior to this project, system administrators had to manually check the settings of thousands of nodes, which required a significant number of hours, rendering the old process ineffective and inefficient. This paper explains the application of Splunk Enterprise, a software agent, and a reporting tool in the development of a user application interface to track and report on critical system updates and security compliance status of HPC clusters. In conjunction with other configuration management systems, the reporting tool is to provide continuous situational awareness to system administrators of the compliance state of information systems. Our approach consisted of the development, testing, and deployment of an agent to collect any arbitrary information across a massively distributed computing center, and organize that information into a human-readable format. Using Splunk Enterprise, this raw data was then gathered into a central repository and indexed for search, analysis, and correlation. Following acquisition and accumulation, the reporting tool generated and presented actionable information by filtering the data according to command line parameters passed at run time. Preliminary data showed results for over six thousand nodes. Further research and expansion of this tool could lead to the development of a series of agents to gather and report critical system parameters. However, in order to make use of the flexibility and resourcefulness of the reporting tool, the agent must conform to specifications set forth in this paper. This project has simplified the way system administrators gather, analyze, and report on the configuration and security state of HPC clusters, maintaining ongoing situational awareness.
Rather than querying each cluster independently, compliance checking can be managed from one central location.
Security clustering algorithm based on reputation in hierarchical peer-to-peer network
NASA Astrophysics Data System (ADS)
Chen, Mei; Luo, Xin; Wu, Guowen; Tan, Yang; Kita, Kenji
2013-03-01
For the security problems of the hierarchical P2P network (HPN), the paper presents a security clustering algorithm based on reputation (CABR). In the algorithm, we adopt a reputation mechanism to ensure the security of transactions and use clusters to manage the reputation mechanism. In order to improve security, reduce the network cost brought by reputation management, and enhance the stability of clusters, we select reputation, the historical average online time, and the network bandwidth as the basic factors of the comprehensive performance of a node. Simulation results showed that the proposed algorithm improved security, reduced the network overhead, and enhanced the stability of clusters.
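The "comprehensive performance" idea can be sketched as a weighted combination of the three factors named above, with the top-scoring node elected to manage its cluster's reputation data. The weights and field names are illustrative assumptions; the paper's exact scoring function may differ.

```python
# Assumed weights over the three factors, each normalized to [0, 1].
WEIGHTS = {"reputation": 0.5, "online": 0.3, "bandwidth": 0.2}

def score(node):
    """Weighted comprehensive-performance score of a node (hypothetical weights)."""
    return sum(WEIGHTS[k] * node[k] for k in WEIGHTS)

def elect_head(nodes):
    """Elect the node with the highest comprehensive performance as cluster head."""
    return max(nodes, key=score)["id"]

nodes = [
    {"id": "A", "reputation": 0.9, "online": 0.8, "bandwidth": 0.5},
    {"id": "B", "reputation": 0.6, "online": 0.9, "bandwidth": 0.9},
]
assert elect_head(nodes) == "A"  # 0.79 beats 0.75
```

Weighting reputation most heavily reflects the paper's emphasis on transaction security; online time and bandwidth then bias the election toward stable, well-connected heads.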
Open Mess Management Career Ladder AFS 742X0 and CEM Code 74200.
1980-12-01
I. OPEN MESS MANAGERS (SPC049, N=187) II. FOOD/BEVERAGE OPERATIONS ASSISTANT MANAGERS CLUSTER (GRP076, N=92) a. Bar and Operations Managers (GKP085… said they will or probably will reenlist. II. FOOD/BEVERAGE OPERATIONS ASSISTANT MANAGERS CLUSTER (GRP076). This cluster of 92 respondents (23… operation of open mess food and beverage functions. The majority of these airmen identify themselves as Assistant Managers of open mess facilities and are
Portfolio of automated trading systems: complexity and learning set size issues.
Raudys, Sarunas
2013-03-01
In this paper, we consider using profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influences of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that degradation in portfolio performance due to inexact estimation of N means and N(N - 1)/2 correlations is proportional to N/L; however, estimation of N variances does not worsen the result. To reduce unhelpful sample size/dimensionality effects, we perform a clustering of N time series and split them into a small number of blocks. Each block is composed of mutually correlated ATSs. It generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. In the output of the portfolio management system, the regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with the real financial data (2003-2012) confirm the effectiveness of the suggested approach.
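The block construction described above can be sketched as follows: group mutually correlated profit/loss series, then apply the nontrainable 1/N rule within each block. The greedy correlation-threshold grouping below is an assumed stand-in for the paper's clustering step, not its actual algorithm.

```python
def corr(a, b):
    """Pearson correlation of two equal-length series (assumes non-constant input)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def blocks(series, threshold=0.8):
    """Greedily group correlated series; weight each series 1/N inside its block."""
    groups = []
    for s in series:
        for g in groups:
            if corr(s, g[0]) >= threshold:
                g.append(s)
                break
        else:
            groups.append([s])
    return [(g, [1 / len(g)] * len(g)) for g in groups]

s1 = [1, 2, 3, 4]
s2 = [2, 4, 6, 8]   # perfectly correlated with s1 -> same block
s3 = [4, 3, 2, 1]   # anti-correlated -> its own block
out = blocks([s1, s2, s3])
assert len(out) == 2
assert out[0][1] == [0.5, 0.5]
```

Grouping before weighting is what limits the N/L estimation-error effect: within a block no means or correlations need to be estimated at all.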
Kwf-Grid workflow management system for Earth science applications
NASA Astrophysics Data System (ADS)
Tran, V.; Hluchy, L.
2009-04-01
In this paper, we present a workflow management tool for Earth science applications in EGEE. The workflow management tool was originally developed within the K-wf Grid project for GT4 middleware and has many advanced features, such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within "Knowledge-based Workflow System for Grid Applications" under the 6th Framework Programme. The workflow management system was intended to: semi-automatically compose a workflow of Grid services; execute the composed workflow application in a Grid computing environment; monitor the performance of the Grid infrastructure and the Grid applications; analyze the resulting monitoring information; capture the knowledge that is contained in the information by means of intelligent agents; and finally reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g. GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, allowing it to manage and execute gLite jobs in the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite would allow EGEE users to use the system and benefit from its advanced features. The system is primarily tested and evaluated with applications from ES clusters.
Further Structural Intelligence for Sensors Cluster Technology in Manufacturing
Mekid, Samir
2006-01-01
With ever more complex sensing and actuating tasks in manufacturing plants, intelligent sensor clusters in hybrid networks have become a rapidly expanding area. They play a dominant role in many fields, from the macro to the micro scale. Global object control and the ability to self-organize into fault-tolerant and scalable systems are expected for high-level applications. In this paper, new structural concepts of intelligent sensors and networks with new intelligent agents are presented. Embedding new functionalities to dynamically manage cooperative agents for autonomous machines is a key enabling technology much needed in manufacturing for zero-defect production.
Tile-based Level of Detail for the Parallel Age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niski, K; Cohen, J D
Today's PCs incorporate multiple CPUs and GPUs and are easily arranged in clusters for high-performance, interactive graphics. We present an approach based on hierarchical, screen-space tiles to parallelizing rendering with level of detail. Adapt tiles, render tiles, and machine tiles are associated with CPUs, GPUs, and PCs, respectively, to efficiently parallelize the workload with good resource utilization. Adaptive tile sizes provide load balancing while our level of detail system allows total and independent management of the load on CPUs and GPUs. We demonstrate our approach on parallel configurations consisting of both single PCs and a cluster of PCs.
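The adaptive tile sizing described above can be illustrated with a quadtree-style split: a screen-space tile is subdivided whenever its estimated workload exceeds a per-tile budget, so heavy regions receive more (smaller) tiles to balance across processors. The workload function and budget here are assumptions for the sketch, not the paper's cost model.

```python
def split(tile, load, budget):
    """Recursively split a screen-space tile (x, y, w, h) until each leaf's
    estimated workload fits the budget. Returns the list of leaf tiles."""
    x, y, w, h = tile
    if load(tile) <= budget or w <= 1 or h <= 1:
        return [tile]
    hw, hh = w // 2, h // 2
    leaves = []
    for child in [(x, y, hw, hh), (x + hw, y, w - hw, hh),
                  (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]:
        leaves += split(child, load, budget)
    return leaves

# toy workload: proportional to tile area
area = lambda t: t[2] * t[3]
tiles = split((0, 0, 8, 8), area, budget=16)
assert len(tiles) == 4                      # one split into four 4x4 tiles
assert all(area(t) <= 16 for t in tiles)    # every leaf fits the budget
```

In the paper's terms, such leaves would then be assigned as adapt/render tiles to CPUs and GPUs; a nonuniform workload would drive deeper subdivision only where needed.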
A sequential-move game for enhancing safety and security cooperation within chemical clusters.
Pavlova, Yulia; Reniers, Genserik
2011-02-15
The present paper provides a game theoretic analysis of strategic cooperation on safety and security among chemical companies within a chemical industrial cluster. We suggest a two-stage sequential move game between adjacent chemical plants and the so-called Multi-Plant Council (MPC). The MPC is considered in the game as a leader player who makes the first move, and the individual chemical companies are the followers. The MPC's objective is to achieve full cooperation among players through establishing a subsidy system at minimum expense. The rest of the players rationally react to the subsidies proposed by the MPC and play Nash equilibrium. We show that such a case of conflict between safety and security, and social cooperation, belongs to the 'coordination with assurance' class of games, and we explore the role of cluster governance (fulfilled by the MPC) in achieving a full cooperative outcome in domino effects prevention negotiations. The paper proposes an algorithm that can be used by the MPC to develop the subsidy system. Furthermore, a stepwise plan to improve cross-company safety and security management in a chemical industrial cluster is suggested and an illustrative example is provided. Copyright © 2010 Elsevier B.V. All rights reserved.
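The leader-follower structure can be sketched with a toy two-plant assurance game: the MPC searches for the smallest per-firm subsidy that makes mutual cooperation a Nash equilibrium. The payoff numbers and the fixed search step are invented for illustration; the paper's game and subsidy algorithm are richer.

```python
def is_coop_nash(payoff, subsidy):
    """payoff[(a1, a2)] = (u1, u2) with actions 'C' (cooperate) / 'D' (defect).
    Mutual cooperation is Nash if neither plant gains by unilateral defection."""
    cc1, cc2 = payoff[("C", "C")]
    return (cc1 + subsidy >= payoff[("D", "C")][0] and
            cc2 + subsidy >= payoff[("C", "D")][1])

def min_subsidy(payoff, step=0.5, cap=100.0):
    """Leader's move: smallest subsidy (on a step grid) supporting cooperation."""
    s = 0.0
    while not is_coop_nash(payoff, s):
        s += step
        if s > cap:
            raise ValueError("no subsidy within cap")
    return s

# assurance-type payoffs: cooperation pays 3 each, unilateral defection pays 4
payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 4),
          ("D", "C"): (4, 0), ("D", "D"): (1, 1)}
assert min_subsidy(payoff) == 1.0
```

This captures the 'coordination with assurance' flavor: a modest subsidy closes the gap between cooperative and defection payoffs, after which cooperation is self-enforcing.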
Dalal, Anuj K; Roy, Christopher L; Poon, Eric G; Williams, Deborah H; Nolido, Nyryan; Yoon, Cathy; Budris, Jonas; Gandhi, Tejal; Bates, David W; Schnipper, Jeffrey L
2014-01-01
Physician awareness of the results of tests pending at discharge (TPADs) is poor. We developed an automated system that notifies responsible physicians of TPAD results via secure, network email. We sought to evaluate the impact of this system on self-reported awareness of TPAD results by responsible physicians, a necessary intermediary step to improve management of TPAD results. We conducted a cluster-randomized controlled trial at a major hospital affiliated with an integrated healthcare delivery network in Boston, Massachusetts. Adult patients with TPADs who were discharged from inpatient general medicine and cardiology services were assigned to the intervention or usual care arm if their inpatient attending physician and primary care physician (PCP) were both randomized to the same study arm. Patients of physicians randomized to discordant study arms were excluded. We surveyed these physicians 72 h after all TPAD results were finalized. The primary outcome was awareness of TPAD results by attending physicians. Secondary outcomes included awareness of TPAD results by PCPs, awareness of actionable TPAD results, and provider satisfaction. We analyzed data on 441 patients. We sent 441 surveys to attending physicians and 353 surveys to PCPs and received 275 and 152 responses from 83 different attending physicians and 112 different PCPs, respectively (attending physician survey response rate of 63%). Intervention attending physicians and PCPs were significantly more aware of TPAD results (76% vs 38%, adjusted/clustered OR 6.30 (95% CI 3.02 to 13.16), p<0.001; 57% vs 33%, adjusted/clustered OR 3.08 (95% CI 1.43 to 6.66), p=0.004, respectively). Intervention attending physicians tended to be more aware of actionable TPAD results (59% vs 29%, adjusted/clustered OR 4.25 (0.65, 27.85), p=0.13). One hundred and eighteen (85%) and 43 (63%) intervention attending physician and PCP survey respondents, respectively, were satisfied with this intervention. 
Automated email notification represents a promising strategy for managing TPAD results, potentially mitigating an unresolved patient safety concern. ClinicalTrials.gov (NCT01153451).
Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan
While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
Delivery of Learning Knowledge Objects Using Fuzzy Clustering
ERIC Educational Resources Information Center
Sabitha, A. Sai; Mehrotra, Deepti; Bansal, Abhay
2016-01-01
The e-Learning industry is rapidly changing, and current learning trends are based on personalized, social and mobile learning, content reusability, cloud-based learning and talent management. Learning systems have attained significant growth, catering to the needs of a wide range of learners with different approaches and styles of learning. Objects…
OCEANIDS: Autonomous Data Acquisition, Management and Distribution System
NASA Technical Reports Server (NTRS)
Bingham, Andrew; Rigor, Eric; Cervantes, Alex; Armstrong, Edward
2004-01-01
OCEANIDS is a clearinghouse for mission essential and near-real-time satellite data streams. This viewgraph presentation describes this mission, and includes the following topics: 1) OCEANIDS Motivation; 2) High-Level Architecture; 3) OCEANIDS Features; 4) OCEANIDS GUI: Nodes; 5) OCEANIDS GUI: Cluster; 6) Data Streams; 7) Statistics; and 8) GHRSST-PP.
Medical Laboratory Assistant. Laboratory Occupations Cluster.
ERIC Educational Resources Information Center
Michigan State Univ., East Lansing. Coll. of Agriculture and Natural Resources Education Inst.
This task-based curriculum guide for medical laboratory assistant is intended to help the teacher develop a classroom management system where students learn by doing. Introductory materials include a Dictionary of Occupational Titles job code and title sheet, a career ladder, a matrix relating duty/task numbers to job titles, and a task list. Each…
Auditing Management Practices in Schools: Recurring Communication Problems and Solutions
ERIC Educational Resources Information Center
Zwijze-Koning, Karen H.; de Jong, Menno D. T.
2009-01-01
Purpose: Over the past ten years, most Dutch high schools have been confronted with mergers, curriculum reforms, and managerial changes. As a result, the pressure on the schools' communication systems has increased and several problems have emerged. This paper aims to examine recurring clusters of communication problems in high schools.…
Histologic Technician. Laboratory Occupations Cluster.
ERIC Educational Resources Information Center
Michigan State Univ., East Lansing. Coll. of Agriculture and Natural Resources Education Inst.
This task-based curriculum guide for histologic technician is intended to help the teacher develop a classroom management system where students learn by doing. Introductory materials include a Dictionary of Occupational Titles job code and title sheet, a career ladder, a matrix relating duty/task numbers to job titles, and a task list. Each task…
Onboard photo:Astro-1 in Cargo Bay
NASA Technical Reports Server (NTRS)
1990-01-01
Onboard the Space Shuttle Orbiter Columbia (STS-35), the various components of the Astro-1 payload are seen backdropped against dark space. Parts of the Hopkins Ultraviolet Telescope (HUT), Ultraviolet Imaging Telescope (UIT), and the Wisconsin Ultraviolet Photo-Polarimetry Experiment (WUPPE) are visible on the Spacelab pallet. The Broad-Band X-Ray Telescope (BBXRT) is behind the pallet and is not visible in this scene. The smaller cylinder in the foreground is the igloo. The igloo was a pressurized container housing the Command Data Management System, which interfaced with the in-cabin controllers to control the Instrument Pointing System (IPS) and the telescopes. The Astro Observatory was designed to explore the universe by observing and measuring the ultraviolet radiation from celestial objects. Astronomical targets of observation selected for Astro missions included planets, stars, star clusters, galaxies, clusters of galaxies, quasars, remnants of exploded stars (supernovae), clouds of gas and dust (nebulae), and the interstellar medium. Managed by the Marshall Space Flight Center, the Astro-1 was launched aboard the Space Shuttle Orbiter Columbia (STS-35) on December 2, 1990.
Onboard Photo:Astro-1 Ultraviolet Telescope in Cargo Bay
NASA Technical Reports Server (NTRS)
1990-01-01
Onboard the Space Shuttle Orbiter Columbia (STS-35), the various components of the Astro-1 payload are seen backdropped against a blue and white Earth. Parts of the Hopkins Ultraviolet Telescope (HUT), the Ultraviolet Imaging Telescope (UIT), and the Wisconsin Ultraviolet Photo-Polarimetry Experiment (WUPPE) are visible on the Spacelab pallet. The Broad-Band X-Ray Telescope (BBXRT) is behind the pallet and is not visible in this scene. The smaller cylinder in the foreground is the igloo. The igloo was a pressurized container housing the Command Data Management System, which interfaced with the in-cabin controllers to control the Instrument Pointing System (IPS) and the telescopes. The Astro Observatory was designed to explore the universe by observing and measuring the ultraviolet radiation from celestial objects. Astronomical targets of observation selected for Astro missions included planets, stars, star clusters, galaxies, clusters of galaxies, quasars, remnants of exploded stars (supernovae), clouds of gas and dust (nebulae), and the interstellar medium. Managed by the Marshall Space Flight Center, the Astro-1 was launched aboard the Space Shuttle Orbiter Columbia (STS-35) on December 2, 1990.
Tang, Haijing; Wang, Siye; Zhang, Yanjun
2013-01-01
Clustering has become a common trend in very long instruction word (VLIW) architectures to address area, energy consumption, and design complexity. Register-file-connected clustered (RFCC) VLIW architecture uses a global register file for inter-cluster data communication, thus eliminating the performance and energy penalties caused by explicit inter-cluster data move operations in the traditional bus-connected clustered (BCC) VLIW architecture. However, the limited number of access ports to the global register file is an issue that must be addressed carefully; otherwise, both performance and energy consumption suffer. In this paper, we present compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim to optimize performance and energy consumption for the Lily architecture by manipulating the code generation process so as to better manage accesses to the global register file. All of the techniques have been implemented and evaluated. The results show that our techniques can significantly reduce the performance and energy penalties caused by the access-port limitation of the global register file. PMID:23970841
Using background knowledge for picture organization and retrieval
NASA Astrophysics Data System (ADS)
Quintana, Yuri
1997-01-01
A picture knowledge base management system is described that is used to represent, organize, and retrieve pictures from a frame knowledge base. Experiments with human test subjects were conducted to obtain descriptions of pictures from news magazines. These descriptions were used to represent the semantic content of pictures in frame representations. A conceptual clustering algorithm is described that organizes pictures not only on observable features but also on implicit properties derived from the frame representations. The algorithm uses inheritance reasoning to take background knowledge into account in the clustering, and it creates clusters of pictures using a group similarity function based on the gestalt theory of picture perception. For each cluster created, a frame is generated that describes the semantic content of the pictures in the cluster. Clustering and retrieval experiments were conducted with and without background knowledge. The paper shows how the use of background knowledge and semantic similarity heuristics improves the speed, precision, and recall of queries processed. The paper concludes with a discussion of how natural language processing can be used to assist in the development of knowledge bases and the processing of user queries.
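The inheritance-reasoning step described above can be made concrete: features inherited along is-a links enter the similarity computation even when they are not directly observable in a picture. A minimal sketch, in which the frames and the Jaccard measure are illustrative assumptions (the paper's actual group similarity function is gestalt-based):

```python
# Illustrative frames with is-a links; not the paper's actual
# knowledge base, similarity function, or clustering algorithm.
FRAMES = {
    "animal": {"isa": None, "features": {"alive"}},
    "dog": {"isa": "animal", "features": {"furry"}},
    "cat": {"isa": "animal", "features": {"furry"}},
}

def expanded_features(concept):
    """Inheritance reasoning: collect features along the is-a chain."""
    feats = set()
    while concept is not None:
        feats |= FRAMES[concept]["features"]
        concept = FRAMES[concept]["isa"]
    return feats

def similarity(a, b):
    """Jaccard similarity over the inherited feature sets."""
    fa, fb = expanded_features(a), expanded_features(b)
    return len(fa & fb) / len(fa | fb)
```

With such expanded feature sets, two pictures can cluster together on background knowledge alone, which is the effect the retrieval experiments measure.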
Data-Driven Packet Loss Estimation for Node Healthy Sensing in Decentralized Cluster
Fan, Hangyu; Wang, Huandong; Li, Yong
2018-01-01
Decentralized clustering in modern information technology has been widely adopted in various fields in recent years. One of the main reasons is its high availability and failure tolerance, which prevent the entire system from breaking down because of a single point of failure. Recently, toolkits such as Akka have made it easy to build this kind of cluster. However, clusters that use Gossip as their membership protocol and rely on link-failure detection cannot handle the scenario in which a node stochastically drops packets and corrupts the membership status of the cluster. In this paper, we formulate the problem as evaluating link quality and finding a maximum clique (an NP-complete problem) in the connectivity graph. We then propose an algorithm consisting of two models, driven by application-layer data, to solve these two problems. Through simulations with statistical data and a real-world product, we demonstrate that our algorithm performs well. PMID:29360792
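The maximum-clique formulation described above can be sketched directly: edges join node pairs whose estimated link quality clears a threshold, and the healthy membership is the largest all-pairs-connected group. A brute-force sketch, with node names, the threshold, and the data layout as illustrative assumptions rather than the paper's models:

```python
from itertools import combinations

def healthy_members(nodes, link_quality, threshold=0.9):
    """Largest group of nodes that are all pairwise well connected.

    `link_quality` maps frozenset({a, b}) to an estimated packet
    delivery ratio in [0, 1]; pairs at or above `threshold` count
    as good links. Brute force is acceptable for small clusters,
    since maximum clique is NP-complete in general.
    """
    def good(a, b):
        return link_quality.get(frozenset((a, b)), 0.0) >= threshold

    # Try candidate groups from largest to smallest; the first
    # group whose members are all pairwise connected is a maximum clique.
    for size in range(len(nodes), 0, -1):
        for group in combinations(nodes, size):
            if all(good(a, b) for a, b in combinations(group, 2)):
                return set(group)
    return set()
```

The paper's contribution lies in the data-driven estimation of the link qualities that feed this graph; the clique search itself is the standard combinatorial step.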
Distributed controller clustering in software defined networks.
Abdelaziz, Ahmed; Fong, Ang Tan; Gani, Abdullah; Garba, Usman; Khan, Suleman; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond
2017-01-01
Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance, and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism significantly reduces the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to distributed controllers without clustering running on the HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method also shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when a controller fails. The paper is a potential contribution toward addressing the issues of reliability, scalability, fault tolerance, and interoperability.
Integrated cluster management at Manchester
NASA Astrophysics Data System (ADS)
McNab, Andrew; Forti, Alessandra
2012-12-01
We describe an integrated management system built from third-party, open-source components and used in operating a large Tier-2 site for particle physics. This system tracks individual assets and records their attributes, such as MAC and IP addresses; derives DNS and DHCP configurations from this database; creates each host's installation and re-configuration scripts; monitors the services on each host according to the records of what should be running; and cross-references tickets with asset records and per-asset monitoring pages. In addition, scripts that detect problems and automatically remove hosts record these new states in the database, making them available to operators immediately through the same interface as tickets and monitoring.
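The derive-configuration-from-database step described above can be sketched in a few lines. The record fields, domain, and dnsmasq-style output format here are illustrative assumptions, not the site's actual schema or tooling:

```python
# Hypothetical asset records, one per tracked host.
assets = [
    {"host": "node001", "mac": "52:54:00:aa:bb:01", "ip": "10.0.1.1"},
    {"host": "node002", "mac": "52:54:00:aa:bb:02", "ip": "10.0.1.2"},
]

def dhcp_entries(assets):
    """Render one dnsmasq-style dhcp-host line per asset record."""
    return [f"dhcp-host={a['mac']},{a['ip']},{a['host']}" for a in assets]

def dns_entries(assets, domain="example.org"):
    """Render forward DNS A records from the same database."""
    return [f"{a['host']}.{domain}. IN A {a['ip']}" for a in assets]
```

The point of the design is that the asset database is the single source of truth: DHCP, DNS, installation scripts, and monitoring expectations are all projections of the same records.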
Duality in Phase Space and Complex Dynamics of an Integrated Pest Management Network Model
NASA Astrophysics Data System (ADS)
Yuan, Baoyin; Tang, Sanyi; Cheke, Robert A.
Fragmented habitat patches between which plants and animals can disperse can be modeled as networks with varying degrees of connectivity. A predator-prey model with network structures is proposed for integrated pest management (IPM) with impulsive control actions. The model was analyzed using numerical methods to investigate how factors such as the impulsive period, the releasing constant of natural enemies, and the mode of connections between the patches affect pest outbreak patterns and the success or failure of pest control. The concept of the cluster, as defined by Holland and Hastings, is used to describe variations in results ranging from global synchrony, when all patches have identical fluctuations, to n-cluster solutions with all patches having different dynamics. Heterogeneity in the initial densities of either pest or natural enemy generally resulted in a variety of cluster oscillations. Surprisingly, if n > 1, the clusters fall into two groups, one with low-amplitude fluctuations and the other with high-amplitude fluctuations (i.e., duality in phase space), implying that control actions radically alter the system's characteristics by inducing duality and more complex dynamics. When the impulsive period is small enough, i.e., the control strategy is undertaken frequently, the pest can be eradicated. As the period increases, the pest's dynamics shift from a steady state to chaos with periodic windows, and more multi-cluster oscillations arise for heterogeneous initial density distributions. Period-doubling and period-halving cascades occur as the releasing constant of the natural enemy increases. For the same ecological system with five differently connected networks, as the randomness of the connections increases, the transient duration becomes shorter and the probability of multi-cluster oscillations appearing becomes higher.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springmeyer, R R; Brugger, E; Cook, R
The Data group provides data analysis and visualization support to its customers. This consists primarily of the development and support of VisIt, a data analysis and visualization tool. Support includes answering questions about the tool, providing classes on how to use it, and performing data analysis and visualization for customers. The Information Management and Graphics Group supports and develops tools that enhance our ability to access, display, and understand large, complex data sets. Activities include applying visualization software for large-scale data exploration; running video production labs on two networks; supporting graphics libraries and tools for end users; maintaining PowerWalls and assorted other displays; and developing software for searching and managing scientific data. Researchers in the Center for Applied Scientific Computing (CASC) work on various projects, including the development of visualization techniques for large-scale data exploration funded by the ASC program, among others. The researchers also have LDRD projects and collaborations with other lab researchers, academia, and industry. The IMG group is located in the Terascale Simulation Facility, home to Dawn, Atlas, BGL, and others, which includes both classified and unclassified visualization theaters, a visualization computer floor and deployment workshop, and video production labs. We continued to provide the traditional graphics group consulting and video production support. We maintained five PowerWalls and many other displays. We deployed a 576-node Opteron/IB cluster with 72 TB of memory providing a visualization production server on our classified network. We continue to support a 128-node Opteron/IB cluster providing a visualization production server for our unclassified systems and an older 256-node Opteron/IB cluster for the classified systems, as well as several smaller clusters to drive the PowerWalls.
The visualization production systems include NFS servers that provide dedicated storage for data analysis and visualization. The ASC projects have delivered new versions of visualization and scientific data management tools to end users and continue to refine them. VisIt had four releases during the past year, ending with VisIt 2.0. We released version 2.4 of Hopper, a Java application for managing and transferring files. This release included a graphical disk-usage view, which works on all types of connections, and an aggregated copy feature for transferring massive datasets quickly and efficiently to HPSS. We continue to use and develop Blockbuster and Telepath. Both the VisIt and IMG teams were engaged in a variety of movie production efforts during the past year in addition to their development tasks.
Karavetian, Mirey; de Vries, Nanne; Elzein, Hafez; Rizk, Rana; Bechwaty, Fida
2015-09-01
Assess the effect of intensive nutrition education by trained dedicated dietitians on osteodystrophy management among hemodialysis patients. Randomized controlled trial in 12 hospital-based hemodialysis units equally distributed over clusters 1 and 2. Cluster 1 patients were either assigned to usual care (n=96) or to individualized, intensive, stage-based nutrition education by a dedicated renal dietitian (n=88). Cluster 2 patients (n=210) received nutrition education from general hospital dietitians, who educated patients in their spare time from hospital duties. Main outcomes were: (1) dietary knowledge (%), (2) behavioral change, and (3) serum phosphorus (mmol/L), each measured at T0 (baseline), T1 (after the 6-month intervention), and T2 (after 6 months of follow-up). Significant improvement was found only among patients receiving intensive education from a dedicated dietitian at T1; the change regressed at T2 without statistical significance: knowledge (T0: 40.3; T1: 64; T2: 63) and serum phosphorus (T0: 1.79; T1: 1.65; T2: 1.70); behavioral stages changed significantly throughout the study (T0: Preparation, T1: Action, T2: Preparation). The intensive protocol proved to be the most effective. Integrating dedicated dietitians and stage-based education in hemodialysis units may improve the nutritional management of patients in Lebanon and in countries with similar health care systems. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Exploration of a leadership competency model for medical school faculties in Korea.
Lee, Yong Seok; Oh, Dong Keun; Kim, Myungun; Lee, Yoon Seong; Shin, Jwa Seop
2010-12-01
To adapt to rapid and turbulent changes in the fields of medicine, education, and society, medical school faculties need appropriate leadership. To develop leadership competencies through education, coaching, and mentoring, we need a leadership competency model. The purpose of this study was to develop a new leadership competency model that is suitable for medical school faculties in Korea. To collect behavioral episodes with regard to leadership, we interviewed 54 subjects (faculty members, residents, and nurses) and surveyed 41 faculty members with open-ended questionnaires. We classified the behavioral episodes based on Quinn and Cameron's leadership competency model and developed a Likert-scale questionnaire to perform a confirmatory factor analysis. Two hundred seven medical school faculty members responded to the questionnaire. The competency clusters identified by factor analysis were professionalism, citizenship, leadership, and membership in an organization. Accordingly, each cluster was linked with a dimension: self, society, team (that he/she is leading), and organization (to which he/she belongs). The clusters of competencies were: professional ability, ethics/morality, self-management, self-development, and passion; public interest, networking, social participation, and active service; motivating, caring, promoting teamwork, nurturing, conflict management, directing, performance management, and systems thinking; organizational orientation, collaboration, voluntary participation, and cost-benefit orientation. This competency model, which fits medical school faculties in Korea, can be used to design and develop selection plans, education programs, feedback tools, diagnostic evaluation tools, and career plan support programs.
Shewchuk, Richard M; O'Connor, Stephen J; Fine, David J
2006-01-01
Healthcare organizations, health management professional associations, and educational institutions have begun to examine carefully what it means to be a fully competent healthcare executive. As a result, an upsurge in interest in healthcare management competencies has been observed recently. The present study uses two critically important groups of informants as participants: health management practitioners and faculty. Using the nominal group process, health administrators identified critical environmental issues perceived to have an impact on healthcare executives today. These issues were employed in a card-sort assessment and a survey was administered to a nationwide sample of health administrators. These data were used to create a map and five clusters of the environmental landscape of healthcare management. These clusters of environmental issues provided a framework for having groups of administrators and faculty members generate and rank perceived behavioral competencies relative to each cluster. Implications for healthcare management practice, education, and research are discussed.
Martínez-García, Carlos Galdino; Ugoretz, Sarah Janes; Arriaga-Jordán, Carlos Manuel; Wattiaux, Michel André
2015-02-01
This study explored whether technology adoption and changes in management practices were associated with farm structure, household, and farmer characteristics, and aimed to identify processes that may foster the productivity and sustainability of small-scale dairy farming in the central highlands of Mexico. Factor analysis of survey data from 44 smallholders identified three factors, related to farm size, farmer's engagement, and household structure, that explained 70% of cumulative variance. The subsequent hierarchical cluster analysis yielded three clusters. Cluster 1 included the most senior farmers, with the fewest years of education but the greatest years of experience. Cluster 2 included farmers who reported access to extension and cooperative services and more management changes; these farmers obtained 25% and 35% more milk than farmers in clusters 1 and 3, respectively. Cluster 3 included the youngest farmers, with the most years of education and the greatest availability of family labor. Access to a network and membership in a community of peers appeared to be important contributors to success. Smallholders gravitated towards easy-to-implement technologies with immediate benefits. Nonusers of high-investment technologies found them unaffordable because of cost, insufficient farm size, and lack of knowledge or reliable electricity. Multivariate analysis may be a useful tool for planning extension activities and organizing channels of communication to effectively target farmers with varying needs, constraints, and motivations for change, and for identifying farmers who may exemplify models of change for others who manage structurally similar farms performing at a lower level.
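The factor-then-cluster workflow used above (hierarchical cluster analysis on farm-level factor scores) can be sketched with a naive average-linkage agglomeration; the toy 2-D points below stand in for the survey factor scores and are purely illustrative:

```python
def avg_dist(c1, c2):
    """Average Euclidean distance between two clusters of points."""
    total = 0.0
    for p in c1:
        for q in c2:
            total += sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return total / (len(c1) * len(c2))

def agglomerate(points, k):
    """Naive average-linkage agglomerative clustering into k clusters."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        # Find and merge the closest pair of clusters (j > i).
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: avg_dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters
```

Stopping at k = 3 mirrors the study's three-cluster solution; production analyses would use a library implementation and inspect the dendrogram rather than fix k in advance.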
1989-01-01
In 1986, NASA introduced a Shuttle-borne ultraviolet observatory called Astro. The Astro Observatory was designed to explore the universe by observing and measuring the ultraviolet radiation from celestial objects. Astronomical targets of observation selected for Astro missions included planets, stars, star clusters, galaxies, clusters of galaxies, quasars, remnants of exploded stars (supernovae), clouds of gas and dust (nebulae), and the interstellar medium. Astro-1 used a Spacelab pallet system with an instrument pointing system and a cruciform structure for bearing the three ultraviolet instruments mounted in a parallel configuration. The three instruments were: The Hopkins Ultraviolet Telescope (HUT), which was designed to obtain far-ultraviolet spectroscopic data from white dwarfs, emission nebulae, active galaxies, and quasars; the Wisconsin Ultraviolet Photo-Polarimeter Experiment (WUPPE) which was to study polarized ultraviolet light from magnetic white dwarfs, binary stars, reflection nebulae, and active galaxies; and the Ultraviolet Imaging Telescope (UIT) which was to record photographic images in ultraviolet light of galaxies, star clusters, and nebulae. The star trackers that supported the instrument pointing system were also mounted on the cruciform. Also in the payload bay was the Broad Band X-Ray Telescope (BBXRT), which was designed to obtain high-resolution x-ray spectra from stellar corona, x-ray binary stars, active galactic nuclei, and galaxy clusters. Managed by the Marshall Space Flight Center, the Astro-1 observatory was launched aboard the Space Shuttle Orbiter Columbia (STS-35) on December 2, 1990.
Centralized Fabric Management Using Puppet, Git, and GLPI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William
2012-12-01
Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-16
..., Automotive Holding Group, Instrument Cluster Plant, Currently Known as General Motors Corporation, Including... Corporation, Automotive Holding Group, Instrument Cluster Plant, including on-site leased workers from... Material Management working on-site at Delphi Corporation, Automotive Holding Group, Instrument Cluster...
van Grieken, Rosa A; Kirkenier, Anneloes C E; Koeter, Maarten W J; Schene, Aart H
2014-12-13
Despite the development of various self-management programmes that attempt to ameliorate symptoms of patients with chronic major depressive disorder (MDD), little is known about what these patients perceive as helpful in their struggles of daily life. The present study aims to explore what patients believe they can do themselves to cope with enduring MDD besides professional treatment, and which self-management strategies patients perceive as most helpful in coping with their MDD. We used concept mapping, a method specifically designed for the conceptualisation of a specific subject, in this case patients' points of view (n = 25) on helpful self-management strategies for coping with enduring MDD. A purposive sample of participants was invited at the Academic Medical Center and through requests on several MDD-patient websites in the Netherlands. Participants generated strategies in focus group discussions, which were successively clustered on a two-dimensional concept map by hierarchical cluster analysis. Fifty strategies were perceived as helpful. They were combined into three meta-clusters, each comprising two clusters: A focus on the depression (sub-clusters: Being aware that my depression needs active coping; and Active coping with professional treatment); An active lifestyle (sub-clusters: Active self-care, structure and planning; and Free time activities); and Participation in everyday social life (sub-clusters: Social engagement; and Work-related activities). MDD patients believe they can use various strategies to cope with enduring MDD in daily life. Despite current developments in e-health, patients emphasised face-to-face treatment and long-term therapeutic relationships, being engaged in social and working life, and involving their family, friends, colleagues and clinicians in their disease management.
Our findings may help clinicians to improve their knowledge about what patients consider beneficial to cope with enduring MDD and to incorporate these suggested self-management strategies in their treatments.
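Concept mapping as used above rests on a statement-by-statement similarity matrix built from participants' pile sorts, which hierarchical cluster analysis then turns into the two-dimensional concept map. A minimal sketch of the matrix-building step; the pile structure is illustrative, not the study's data:

```python
def cooccurrence(sorts, n_statements):
    """Concept-mapping similarity matrix.

    Entry (i, j) is the fraction of participants who placed
    statements i and j in the same pile. `sorts` holds one pile
    sort per participant: a list of piles, each a list of
    statement indices.
    """
    counts = [[0] * n_statements for _ in range(n_statements)]
    for piles in sorts:
        for pile in piles:
            for i in pile:
                for j in pile:
                    counts[i][j] += 1
    n = len(sorts)
    return [[c / n for c in row] for row in counts]
```

Statements that many participants sorted together get high similarity and end up in the same cluster of the map; the meta-cluster labels are then assigned by the researchers.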
NASA Astrophysics Data System (ADS)
Ebrahimi, A.; Pahlavani, P.; Masoumi, Z.
2017-09-01
Traffic monitoring and management in urban intelligent transportation systems (ITS) can be carried out with vehicular sensor networks. In a vehicular sensor network, vehicles equipped with sensors such as GPS can act as mobile sensors, sensing urban traffic and sending reports to a traffic monitoring center (TMC) for traffic estimation. Energy consumption by the sensor nodes is a major problem in wireless sensor networks (WSNs) and one of the most important considerations in designing them. Clustering the sensor nodes is an effective way to reduce the energy consumption of a WSN. Each cluster has a cluster head (CH) and a number of nodes located within its supervision area. The cluster heads are responsible for gathering and aggregating the information of their clusters and transmitting it to the data collection center. Hence, clustering decreases the volume of transmitted information and, consequently, reduces the energy consumption of the network. In this paper, the Fuzzy C-Means (FCM) and Fuzzy Subtractive algorithms are employed to cluster sensors, and their effects on the energy consumption of the sensors are investigated. The FCM and Fuzzy Subtractive algorithms reduced the energy consumption of vehicle sensors by up to 90.68% and 92.18%, respectively, with the Fuzzy Subtractive algorithm showing a 1.5 percentage point improvement over FCM.
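Fuzzy C-Means, as applied above, assigns each sensor a graded membership in every cluster rather than a hard label, and cluster heads can then be chosen near the resulting centers. A compact NumPy sketch of the standard FCM iteration; the fuzziness exponent m = 2 and the fixed iteration count are simplifications:

```python
import numpy as np

def fuzzy_c_means(x, c, m=2.0, iters=100, seed=0):
    """Standard FCM iteration.

    x: (n, d) array of sensor positions/readings; c: number of
    clusters; m: fuzziness exponent (> 1). Returns the cluster
    centers and the (n, c) membership matrix.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)        # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m
        # Centers are membership-weighted means of the points.
        centers = (w.T @ x) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)             # guard against zero distance
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

In the clustering-for-energy setting, the membership matrix also indicates how strongly each vehicle should associate with each cluster head, which a hard k-means assignment cannot express.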
Su, Fangli; Kaplan, David; Li, Lifeng; Li, Haifu; Song, Fei; Liu, Haisheng
2017-03-03
In many locations around the globe, large reservoir sustainability is threatened by land use change and direct pollution loading from the upstream watershed. However, the size and complexity of upstream basins makes the planning and implementation of watershed-scale pollution management a challenge. In this study, we established an evaluation system based on 17 factors, representing the potential point and non-point source pollutants and the environmental carrying capacity which are likely to affect the water quality in the Dahuofang Reservoir and watershed in northeastern China. We used entropy methods to rank 118 subwatersheds by their potential pollution threat and clustered subwatersheds according to the potential pollution type. Combining ranking and clustering analyses allowed us to suggest specific areas for prioritized watershed management (in particular, two subwatersheds with the greatest pollution potential) and to recommend the conservation of current practices in other less vulnerable locations (91 small watersheds with low pollution potential). Finally, we identified the factors most likely to influence the water quality of each of the 118 subwatersheds and suggested adaptive control measures for each location. These results provide a scientific basis for improving the watershed management and sustainability of the Dahuofang reservoir and a framework for identifying threats and prioritizing the management of watersheds of large reservoirs around the world.
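The entropy-based ranking described above weights each evaluation factor by how much it differentiates the subwatersheds: a factor that is nearly constant across subwatersheds carries almost no weight. A standard entropy-weight-method sketch; the toy matrix and the max-normalized composite score are illustrative, not the study's exact procedure:

```python
import numpy as np

def entropy_weights(x):
    """Entropy weight method.

    Columns of x are evaluation factors, rows are subwatersheds.
    Factors that vary more across rows receive larger weights;
    a constant factor gets weight close to zero.
    """
    p = x / x.sum(axis=0)                                # per-column proportions
    p = np.where(p <= 0, 1e-12, p)                       # avoid log(0)
    e = -(p * np.log(p)).sum(axis=0) / np.log(len(x))    # entropy per factor
    d = 1.0 - e                                          # divergence degree
    return d / d.sum()

def rank_subwatersheds(x):
    """Composite pollution-potential score, worst first."""
    scores = (x / x.max(axis=0)) @ entropy_weights(x)
    return np.argsort(-scores)
```

Ranking by such composite scores, and then clustering on the factor profiles, is what allows the two highest-threat subwatersheds to be singled out for prioritized management.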
Evaluation of Job Queuing/Scheduling Software: Phase I Report
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The recent proliferation of high-performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, the Numerical Aerodynamic Simulation (NAS) supercomputer facility compiled a requirements checklist for job queuing/scheduling software. Next, NAS began an evaluation of the leading job management system (JMS) software packages against the checklist. This report describes the three-phase evaluation process and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still insufficient, even in the leading JMSs. However, by ranking each JMS evaluated against the requirements, we provide data that will be useful to other sites in selecting a JMS.
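The checklist-driven ranking described above is, at its core, weighted scoring of boolean capability matrices. A toy sketch; the requirement names and weights are invented for illustration and are not NAS's actual checklist:

```python
# Hypothetical weighted requirements for a job management system (JMS).
REQUIREMENTS = {"parallel_jobs": 3, "checkpointing": 2, "fair_share": 1}

def score(support):
    """Sum the weights of the requirements a JMS satisfies.

    `support` maps requirement name -> bool.
    """
    return sum(w for req, w in REQUIREMENTS.items() if support.get(req, False))

def rank_jms(candidates):
    """candidates: {jms_name: support dict}. Best-scoring first."""
    return sorted(candidates, key=lambda name: score(candidates[name]),
                  reverse=True)
```

A site can reuse the same matrix with its own weights, which is exactly why the report's per-requirement rankings are useful beyond NAS.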
Hydroperiod regime controls the organization of plant species in wetlands
Foti, Romano; del Jesus, Manuel; Rinaldo, Andrea; Rodriguez-Iturbe, Ignacio
2012-01-01
With urban, agricultural, and industrial needs growing throughout the past decades, wetland ecosystems have experienced profound changes. Most critically, the biodiversity of a wetland is intimately linked to its hydrologic dynamics, which in turn are being drastically altered by ongoing climate changes. Hydroperiod regimes, e.g., the percentage of time a site is inundated, exert critical control over the creation of niches for different plant species in wetlands. However, the spatial signatures of the organization of plant species in wetlands, and how the different drivers interact to yield such signatures, are unknown. Focusing on Everglades National Park (ENP) in Florida, we show here that the cluster sizes of each species follow a power-law probability distribution and that such clusters have well-defined fractal characteristics. Moreover, we identify and model those signatures via the interplay between global forcings arising from the hydroperiod regime and local controls exerted by neighboring vegetation. With power-law clustering often associated with systems near critical transitions, our findings are highly relevant for the management of wetland ecosystems. In addition, our results show that changes in climate and land management have a quantifiable, predictable impact on the type of vegetation and its spatial organization in wetlands. PMID:23150589
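A power-law claim like the one above is typically checked with a maximum-likelihood estimate of the exponent. A minimal continuous-approximation (Hill-type) sketch, not the authors' actual fitting procedure:

```python
import math

def powerlaw_alpha(sizes, xmin=1.0):
    """Continuous MLE of alpha in P(s) ~ s^(-alpha) for s >= xmin.

    Requires at least some sizes strictly above xmin, otherwise the
    log-sum is zero. Discrete data and xmin selection need the
    refinements described in the power-law fitting literature.
    """
    tail = [s for s in sizes if s >= xmin]
    return 1.0 + len(tail) / sum(math.log(s / xmin) for s in tail)
```

Fitting the exponent per species, then testing the fit against alternatives (e.g., lognormal), is the usual route to statements about cluster-size distributions and proximity to critical transitions.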
Dong, Skye T; Butow, Phyllis N; Agar, Meera; Lovell, Melanie R; Boyle, Frances; Stockler, Martin; Forster, Benjamin C; Tong, Allison
2016-04-01
Managing symptom clusters or multiple concurrent symptoms in patients with advanced cancer remains a clinical challenge. The optimal processes constituting effective management of symptom clusters remain uncertain. To describe the attitudes and strategies of clinicians in managing multiple co-occurring symptoms in patients with advanced cancer. Semistructured interviews were conducted with 48 clinicians (palliative care physicians [n = 10], oncologists [n = 6], general practitioners [n = 6], nurses [n = 12], and allied health providers [n = 14]), purposively recruited from two acute hospitals, two palliative care centers, and four community general practices in Sydney, Australia. Transcripts were analyzed using thematic analysis and adapted grounded theory. Six themes were identified: uncertainty in decision making (inadequacy of scientific evidence, relying on experiential knowledge, and pressure to optimize care); attunement to patient and family (sensitivity to multiple cues, prioritizing individual preferences, addressing psychosocial and physical interactions, and opening Pandora's box); deciphering cause to guide intervention (disaggregating symptoms and interactions, flexibility in assessment, and curtailing investigative intrusiveness); balancing complexities in medical management (trading off side effects, minimizing mismatched goals, and urgency in resolving severe symptoms); fostering hope and empowerment (allaying fear of the unknown, encouraging meaning making, championing patient empowerment, and truth telling); and depending on multidisciplinary expertise (maximizing knowledge exchange, sharing management responsibility, contending with hierarchical tensions, and isolation and discontinuity of care). Management of symptom clusters, as both an art and a science, is currently fraught with uncertainty in decision making. 
Strengthening multidisciplinary collaboration, continuity of care, more pragmatic planning of clinical trials to address more than one symptom, and training in symptom cluster management are required. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Bouhlal, Sofia; McBride, Colleen M.; Trivedi, Niraj S.; Agurs-Collins, Tanya; Persky, Susan
2017-01-01
Common reports of over-response to food cues, difficulties with calorie restriction, and difficulty adhering to dietary guidelines suggest that eating behaviors could be interrelated in ways that influence weight management efforts. The feasibility of identifying robust eating phenotypes (showing face, content, and criterion validity) was explored based on well-validated individual eating behavior assessments. Adults (n=260; mean age 34 years) completed online questionnaires with measurements of nine eating behaviors including appetite for palatable foods, binge eating, bitter taste sensitivity, disinhibition, food neophobia, pickiness and satiety responsiveness. Discovery-based visualization procedures that have the combined strengths of heatmaps and hierarchical clustering were used to investigate: 1) how eating behaviors cluster, 2) how participants can be grouped within eating behavior clusters, and 3) whether group clustering is associated with body mass index (BMI) and dietary self-efficacy levels. Two distinct eating behavior clusters and participant groups that aligned within these clusters were identified: one with higher drive to eat and another with food avoidance behaviors. Participants’ BMI (p=.0002) and dietary self-efficacy (p<.0001) were associated with cluster membership. Eating behavior clusters showed content and criterion validity based on their association with BMI (associated, but not entirely overlapping) and dietary self-efficacy. Identifying eating behavior phenotypes appears viable. These efforts could be expanded and ultimately inform tailored weight management interventions. PMID:28043857
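The hierarchical-clustering step behind a clustered heatmap can be sketched as follows. The behavior scales, the two latent factors, and the noise level are hypothetical stand-ins for the study's data, and the distance measure (1 minus absolute correlation) is one common choice, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical scores: rows are participants, columns are eating-behavior
# scales; two latent factors ("drive to eat", "food avoidance") generate
# four and three scales respectively.
rng = np.random.default_rng(1)
n = 200
drive = rng.normal(size=(n, 1))
avoid = rng.normal(size=(n, 1))
X = np.hstack([
    drive + 0.3 * rng.normal(size=(n, 4)),  # e.g. appetite, binge eating, disinhibition, ...
    avoid + 0.3 * rng.normal(size=(n, 3)),  # e.g. neophobia, pickiness, satiety responsiveness
])

# Cluster the behaviors (columns) on a 1 - |correlation| distance, the
# column-ordering step behind a clustered heatmap.
corr = np.corrcoef(X, rowvar=False)
dist = 1.0 - np.abs(corr)
condensed = dist[np.triu_indices_from(dist, k=1)]  # condensed form for linkage
Z = linkage(condensed, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # the first four scales share one label, the last three the other
```

Cutting the dendrogram at two clusters recovers the planted "drive to eat" versus "food avoidance" split, mirroring the two-cluster structure the abstract reports.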
Population substructure and space use of Foxe Basin polar bears.
Sahanatien, Vicki; Peacock, Elizabeth; Derocher, Andrew E
2015-07-01
Climate change has been identified as a major driver of habitat change, particularly for sea ice-dependent species such as the polar bear (Ursus maritimus). Population structure and space use of polar bears have been challenging to quantify because of their circumpolar distribution and tendency to range over large areas. Knowledge of movement patterns, home range, and habitat is needed for conservation and management. This is the first study to examine the spatial ecology of polar bears in the Foxe Basin management unit of Nunavut, Canada. Foxe Basin is in the mid-Arctic, part of the seasonal sea ice ecoregion, and it is being negatively affected by climate change. Our objectives were to examine intrapopulation spatial structure, to determine movement patterns, and to consider how polar bear movements may respond to changing sea ice habitat conditions. Hierarchical and fuzzy cluster analyses were used to assess intrapopulation spatial structure of global positioning system (GPS) satellite-collared female polar bears. Seasonal and annual movement metrics (home range, movement rates, time on ice) and home-range fidelity (static and dynamic overlap) were compared to examine the influence of regional sea ice on movements. The polar bears were distributed in three spatial clusters, and there were differences in the movement metrics between clusters that may reflect sea ice habitat conditions. Within the clusters, bears moved independently of each other. Annual and seasonal home-range fidelity was observed, and the bears used two movement patterns: on-ice range residency and annual migration. We predict that home-range fidelity may decline as the spatial and temporal predictability of sea ice changes. These new findings also provide baseline information for managing and monitoring this polar bear population.
Decentralized control of units in smart grids for the support of renewable energy supply
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonnenschein, Michael, E-mail: Michael.Sonnenschein@Uni-Oldenburg.DE; Lünsdorf, Ontje, E-mail: Ontje.Luensdorf@OFFIS.DE; Bremer, Jörg, E-mail: Joerg.Bremer@Uni-Oldenburg.DE
Due to the significant environmental impact of power production from fossil fuels and nuclear fission, future energy systems will increasingly rely on distributed and renewable energy sources (RES). The electrical feed-in from photovoltaic (PV) systems and wind energy converters (WEC) varies greatly both over short and long time periods (from minutes to seasons), and (not only) because of this effect, the supply of electrical power from RES and the demand for electrical power do not match per se. In addition, with a growing share of generation capacity especially in distribution grids, the top-down paradigm of electricity distribution is gradually replaced by a bottom-up power supply. This altogether leads to new problems regarding the safe and reliable operation of power grids. In order to address these challenges, the notion of Smart Grids has been introduced. The inherent flexibilities, i.e. the set of feasible power schedules, of distributed power units have to be controlled in order to support demand–supply matching as well as stable grid operation. Controllable power units are e.g. combined heat and power plants, power storage systems such as batteries, and flexible power consumers such as heat pumps. By controlling the flexibilities of these units we are particularly able to optimize the local utilization of RES feed-in in a given power grid by integrating both supply and demand management measures with special respect to the electrical infrastructure. In this context, decentralized systems, autonomous agents and the concept of self-organizing systems will become key elements of the ICT based control of power units. In this contribution, we first show how a decentralized load management system for battery charging/discharging of electrical vehicles (EVs) can increase the locally used share of supply from PV systems in a low voltage grid.
For reliable demand-side management of large sets of appliances, dynamic clustering of these appliances into uniformly controlled appliance sets is necessary. We introduce a method for self-organized clustering for this purpose and show how control of such clusters can affect load peaks in distribution grids. Subsequently, we give a short overview of how we are going to expand the idea of self-organized clusters of units into creating a virtual control center for dynamic virtual power plants (DVPP) offering products at a power market. For an efficient organization of DVPPs, the flexibilities of units have to be represented in a compact and easy-to-use manner. We give an introduction to how the problem of representing a set of possibly 10^100 feasible schedules can be solved by a machine-learning approach. In summary, this article provides an overall impression of how we use agent-based control techniques and methods of self-organization to support the further integration of distributed and renewable energy sources into power grids and energy markets.
Highlights:
• Distributed load management for electrical vehicles supports local supply from PV.
• Appliances can self-organize into so-called virtual appliances for load control.
• Dynamic VPPs can be controlled by extensively decentralized control centers.
• Flexibilities of units can efficiently be represented by support-vector descriptions.
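The idea of representing a huge feasible-schedule set by a compact support-vector description can be sketched with scikit-learn's OneClassSVM as an SVDD-like stand-in. The four-slot schedule model, its feasibility rule (total energy between 3 and 5 kWh, per-slot power in [0, 2] kW), and all hyperparameters are invented for illustration; the paper's actual learner and unit models may differ.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Toy stand-in for a unit's feasible schedules: 4-slot power profiles whose
# total energy lies in [3.0, 5.0] kWh and per-slot power in [0, 2] kW.
rng = np.random.default_rng(2)
def sample_feasible(n):
    out = []
    while len(out) < n:
        s = rng.uniform(0.0, 2.0, size=4)
        if 3.0 <= s.sum() <= 5.0:
            out.append(s)
    return np.array(out)

train = sample_feasible(2000)

# One-class SVM as a support-vector description of the feasible set:
# it learns a compact boundary around the sampled schedules, so feasibility
# of a candidate schedule can be checked without enumerating the set.
svdd = OneClassSVM(kernel="rbf", gamma=2.0, nu=0.05).fit(train)

feasible_probe = np.array([[1.0, 1.0, 1.0, 1.0]])    # sum = 4.0, inside
infeasible_probe = np.array([[2.0, 2.0, 2.0, 2.0]])  # sum = 8.0, outside
print(svdd.predict(feasible_probe), svdd.predict(infeasible_probe))
```

The fitted model classifies the in-range probe as +1 (inside the description) and the out-of-range probe as -1, which is exactly the cheap membership test a DVPP optimizer would need.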
Symptom Clusters and Quality of Life in Hospice Patients with Cancer
Omran, Suha; Khader, Yousef; McMillan, Susan
2017-01-01
Background: Symptom control is an important part of palliative care and important to achieve optimal quality of life (QOL). Studies have shown that patients with advanced cancer suffer from diverse and often severe physical and psychological symptoms. The aim is to explore the influence of symptom clusters on QOL among patients with advanced cancer. Materials and Methods: 709 patients with advanced cancer were recruited to participate in a clinical trial focusing on symptom management and QOL. Patients were adults newly admitted to hospice home care in one of two hospices in southwest Florida, who could pass mental status screening. The instruments used for data collection were the Demographic Data Form, Memorial Symptom Assessment Scale (MSAS), and the Hospice Quality of Life Index-14. Results: Exploratory factor analysis and multiple regression were used to identify symptom clusters and their influence on QOL. The results revealed that the participants experienced multiple concurrent symptoms. There were four symptom clusters found among these cancer patients. Individual symptom distress scores that were the strongest predictors of QOL were: feeling pain; dry mouth; feeling drowsy; nausea; difficulty swallowing; worrying and feeling nervous. Conclusions: Patients with advanced cancer reported various concurrent symptoms, and these form symptom clusters of four main categories. The four symptom clusters have a negative influence on patients’ QOL and require specific care from different members of the hospice healthcare team. The results of this study should be used to guide health care providers’ symptom management. Proper attention to symptom clusters should be the basis for accurate planning of effective interventions to manage the symptom clusters experienced by advanced cancer patients. The health care provider needs to plan ahead for these symptoms and manage any concurrent symptoms for successful promotion of their patient’s QOL. PMID:28950683
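The exploratory-factor-analysis step used to derive symptom clusters can be sketched as below. The latent clusters, item names, loadings structure, and two-factor solution are hypothetical (the study found four clusters); this only illustrates how items group onto factors.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical distress scores: two latent symptom clusters (say, a
# GI-related cluster and a psychological cluster) drive six items.
rng = np.random.default_rng(3)
n = 700
gi = rng.normal(size=(n, 1))
psych = rng.normal(size=(n, 1))
X = np.hstack([
    gi + 0.5 * rng.normal(size=(n, 3)),     # e.g. nausea, dry mouth, swallowing
    psych + 0.5 * rng.normal(size=(n, 3)),  # e.g. worrying, feeling nervous, sadness
])

# Exploratory factor analysis with varimax rotation; each item is assigned
# to the factor on which it loads most strongly.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
loadings = fa.components_           # shape (n_factors, n_items)
assignment = np.abs(loadings).argmax(axis=0)
print(assignment)                   # first three items on one factor, last three on the other
```

In the real analysis the number of factors would be chosen from the data (e.g. eigenvalues or scree plot) rather than fixed in advance.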
Döpp, Carola M E; Graff, Maud J L; Teerenstra, Steven; Nijhuis-van der Sanden, Maria W G; Olde Rikkert, Marcel G M; Vernooij-Dassen, Myrra J F J
2013-05-30
To evaluate the effectiveness of a multifaceted implementation strategy on physicians' referral rate to, and knowledge of, the community occupational therapy in dementia program (COTiD program). A cluster randomized controlled trial with 28 experimental and 17 control clusters was conducted. Each cluster included a minimum of one physician, one manager, and two occupational therapists. In the control group, physicians and managers received no interventions and occupational therapists received a postgraduate course. In the experimental group, physicians and managers had access to a website, received newsletters, and were approached by telephone. In addition, physicians were offered one outreach visit. In the experimental group, occupational therapists received the postgraduate course, training days, outreach visits, regional meetings, and access to a reporting system. The main outcome measure was the number of COTiD referrals received by each cluster, assessed at 6 and 12 months after the start of the intervention. Referrals were included from both participating physicians (enrolled in the study and receiving either the control or experimental intervention) and non-participating physicians (not enrolled, but whose referrals were received by participating occupational therapists). Mixed model analyses were used to analyze the data. All analyses were based on the principle of intention-to-treat. At 12 months, experimental clusters received significantly more referrals, with an average of 5.24 referrals (SD 5.75) to the COTiD program compared to 2.07 referrals in the control group (SD 5.14). The effect size at 12 months was 0.58. Although no difference in referral rate was found for the physicians participating in the study, the number of referrals from non-participating physicians differed significantly at 12 months (t = -2.55, df = 43, p = 0.02). Passive dissemination strategies are less likely to result in changes in professional behavior.
The number of physicians exposed to active strategies was limited. In spite of this, we found a significant difference in the number of referrals, which was accounted for by more referrals from non-participating physicians in the experimental clusters. We hypothesize that the increase in referrals was caused by an increase in occupational therapists' efforts to promote their services within their network. NCT01117285.
Wang, Wei-Ming; Zhou, Hua-Yun; Liu, Yao-Bao; Li, Ju-Lin; Cao, Yuan-Yuan; Cao, Jun
2013-04-01
To explore a new mode of malaria elimination through the application of a digital earth system in malaria epidemic management and surveillance. While investigating malaria cases and dealing with the epidemic areas in Jiangsu Province in 2011, we used a JISIBAO UniStrong G330 GIS data acquisition unit (GPS) to collect the latitude and longitude of case locations, and then established a landmark library of early-warning areas and an image management system using Google Earth Free 6.2 and its image processing software. A total of 374 malaria cases were reported in Jiangsu Province in 2011. Among them, there were 13 local vivax malaria cases, 11 vivax malaria cases imported from other provinces, 20 abroad-imported vivax malaria cases, 309 abroad-imported falciparum malaria cases, 7 abroad-imported quartan malaria cases (Plasmodium malariae infection), and 14 abroad-imported ovale malaria cases (P. ovale infection). Through the analysis of the Google Earth mapping system, these malaria cases showed a certain degree of aggregation, except the abroad-imported quartan malaria cases, which were highly sporadic. The local vivax malaria cases mainly concentrated in Sihong County, the vivax malaria cases imported from other provinces mainly concentrated in Suzhou City and Wuxi City, the abroad-imported vivax malaria cases concentrated in Nanjing City, the abroad-imported falciparum malaria cases clustered in the middle parts of Jiangsu Province, and the abroad-imported ovale malaria cases clustered in Liyang City. The operation of Google Earth Free 6.2 is simple, convenient and quick, and could help public health authorities make decisions on malaria prevention and control, including the use of funds and other health resources.
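The landmark-library step can be sketched by exporting geocoded cases to KML, the XML format Google Earth opens directly. The case records, identifiers, and coordinates below are hypothetical stand-ins, not data from the study.

```python
# Hypothetical geocoded cases to be shown as placemarks in Google Earth.
cases = [
    {"id": "case-001", "type": "vivax (local)", "lat": 33.47, "lon": 118.22},
    {"id": "case-002", "type": "falciparum (imported)", "lat": 32.06, "lon": 118.80},
]

def to_kml(cases):
    # Build one <Placemark> per case; KML expects "lon,lat,altitude".
    placemarks = "\n".join(
        f"""  <Placemark>
    <name>{c['id']}</name>
    <description>{c['type']}</description>
    <Point><coordinates>{c['lon']},{c['lat']},0</coordinates></Point>
  </Placemark>"""
        for c in cases
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
        f"{placemarks}\n</Document>\n</kml>\n"
    )

kml = to_kml(cases)
print(kml.count("<Placemark>"))  # one placemark per case
```

Writing the returned string to a `.kml` file and opening it in Google Earth plots each case at its recorded position, which is the basis for the visual aggregation analysis the abstract describes.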
Building Capacity in a Self-Managing Schooling System: The New Zealand Experience
ERIC Educational Resources Information Center
Robinson, Viviane M. J.; McNaughton, Stuart; Timperley, Helen
2011-01-01
Purpose: The purpose of this paper is to evaluate two recent examples of the New Zealand Ministry of Education's approach to reducing the persistent disparities in achievement between students of different social and ethnic groups. The first example is cluster-based school improvement, and the second is the development of national standards for…
A framework using cluster-based hybrid network architecture for collaborative virtual surgery.
Qin, Jing; Choi, Kup-Sze; Poon, Wai-Sang; Heng, Pheng-Ann
2009-12-01
Research on collaborative virtual environments (CVEs) opens the opportunity for simulating the cooperative work in surgical operations. It is however a challenging task to implement a high performance collaborative surgical simulation system because of the difficulty in maintaining state consistency with minimum network latencies, especially when sophisticated deformable models and haptics are involved. In this paper, an integrated framework using cluster-based hybrid network architecture is proposed to support collaborative virtual surgery. Multicast transmission is employed to transmit updated information among participants in order to reduce network latencies, while system consistency is maintained by an administrative server. Reliable multicast is implemented using distributed message acknowledgment based on cluster cooperation and sliding window technique. The robustness of the framework is guaranteed by the failure detection chain which enables smooth transition when participants join and leave the collaboration, including normal and involuntary leaving. Communication overhead is further reduced by implementing a number of management approaches such as computational policies and collaborative mechanisms. The feasibility of the proposed framework is demonstrated by successfully extending an existing standalone orthopedic surgery trainer into a collaborative simulation system. A series of experiments have been conducted to evaluate the system performance. The results demonstrate that the proposed framework is capable of supporting collaborative surgical simulation.
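The sliding-window acknowledgment idea behind the reliable multicast can be sketched at the sender side. The class, window size, and payloads are illustrative only; the paper's actual protocol distributes acknowledgment across clusters rather than tracking it in one object.

```python
# Minimal sketch of sliding-window reliable delivery: the sender keeps at
# most `window` unacknowledged updates in flight; anything still pending
# is a candidate for retransmission.
class SlidingWindowSender:
    def __init__(self, window=4):
        self.window = window
        self.next_seq = 0
        self.unacked = {}  # seq -> payload awaiting acknowledgment

    def can_send(self):
        return len(self.unacked) < self.window

    def send(self, payload):
        assert self.can_send(), "window full; wait for acknowledgments"
        seq = self.next_seq
        self.unacked[seq] = payload
        self.next_seq += 1
        return seq

    def ack(self, seq):
        self.unacked.pop(seq, None)

    def pending(self):
        return sorted(self.unacked)  # sequence numbers to retransmit

s = SlidingWindowSender(window=3)
for update in ["u0", "u1", "u2"]:
    s.send(update)
print(s.can_send())  # False: window full until acknowledgments arrive
s.ack(0); s.ack(1)
print(s.pending())   # [2]
```

The window bounds the state the sender must keep and the burst it can inject, which is why the technique keeps latency low while still allowing lost updates to be detected and retransmitted.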
Illinois Occupational Skill Standards: Housekeeping Management Cluster.
ERIC Educational Resources Information Center
Illinois Occupational Skill Standards and Credentialing Council, Carbondale.
This document contains 44 occupational skill standards for the housekeeping management occupational cluster, as required for the state of Illinois. Skill standards, which were developed by committees that included educators and representatives from business, industry, and labor, are intended to promote education and training investment and ensure…
Lutfey, Karen E; Gerstenberger, Eric; McKinlay, John B
2013-06-01
To identify styles of physician decision making (as opposed to singular clinical actions) and to analyze their association with variations in the management of a vignette presentation of coronary heart disease (CHD). Primary data were collected from primary care physicians in North and South Carolina. In a balanced factorial experimental design, primary care physicians viewed one of 16 (2^4) video vignette presentations of CHD and provided detailed information about how they would manage the case. 256 MD primary care physicians were interviewed face-to-face in North and South Carolina. We identify three clusters depicting unique styles of CHD management that are robust to controls for physician (gender and level of experience) and patient characteristics (age, gender, socioeconomic status, and race) as well as key organizational features of physicians' work settings. Physicians in Cluster 1 "Cardiac" (N = 92) were more likely to focus on cardiac issues compared with their counterparts; physicians in Cluster 2 "Talkers" (N = 93) were more likely to give advice and take additional medical history; whereas physicians in Cluster 3 "Minimalists" (N = 71) were less likely than their counterparts to take action on any of the types of management behavior. Variations in styles of decision making, which encompass multiple outcome variables and extend beyond individual-level demographic predictors, may add to our understanding of disparities in health quality and outcomes. © Health Research and Educational Trust.
Knox, Stephanie A; Chondros, Patty
2004-01-01
Background: Cluster sample study designs are cost-effective; however, cluster samples violate the simple random sample assumption of independence of observations. Failure to account for the intra-cluster correlation of observations when sampling through clusters may lead to an under-powered study. Researchers therefore need estimates of intra-cluster correlation for a range of outcomes to calculate sample size. We report intra-cluster correlation coefficients observed within a large-scale cross-sectional study of general practice in Australia, where the general practitioner (GP) was the primary sampling unit and the patient encounter was the unit of inference. Methods: Each year the Bettering the Evaluation and Care of Health (BEACH) study recruits a random sample of approximately 1,000 GPs across Australia. Each GP completes details of 100 consecutive patient encounters. Intra-cluster correlation coefficients were estimated for patient demographics, morbidity managed and treatments received. Intra-cluster correlation coefficients were estimated for descriptive outcomes and for associations between outcomes and predictors, and were compared across two independent samples of GPs drawn three years apart. Results: Between April 1999 and March 2000, a random sample of 1,047 Australian general practitioners recorded details of 104,700 patient encounters. Intra-cluster correlation coefficients for patient demographics ranged from 0.055 for patient sex to 0.451 for language spoken at home. Intra-cluster correlations for morbidity variables ranged from 0.005 for the management of eye problems to 0.059 for management of psychological problems. Intra-cluster correlation for the association between two variables was smaller than the descriptive intra-cluster correlation of each variable. When compared with the April 2002 to March 2003 sample (1,008 GPs), the estimated intra-cluster correlation coefficients were found to be consistent across samples.
Conclusions: The demonstrated precision and reliability of the estimated intra-cluster correlations indicate that these coefficients will be useful for calculating sample sizes in future general practice surveys that use the GP as the primary sampling unit. PMID:15613248
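The intra-cluster correlation coefficient reported above can be sketched with the standard one-way ANOVA estimator on balanced data. The simulated "GP clusters", variance components, and sample sizes are illustrative, not BEACH data.

```python
import numpy as np

def icc_oneway(groups):
    # One-way ANOVA estimator of the intra-cluster correlation for balanced
    # data: ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), with m clusters of
    # size k, MSB the between-cluster and MSW the within-cluster mean square.
    data = np.asarray(groups, dtype=float)
    m, k = data.shape
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (m - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (m * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Simulated clusters: each "GP" contributes k encounters sharing a GP-level
# random effect; the true ICC is var_b / (var_b + var_w) = 0.25 / 1.0 = 0.25.
rng = np.random.default_rng(4)
m, k = 1000, 100
gp_effect = rng.normal(0, np.sqrt(0.25), size=(m, 1))
y = gp_effect + rng.normal(0, np.sqrt(0.75), size=(m, k))
print(round(icc_oneway(y), 2))  # close to the true ICC of 0.25
```

The same estimator feeds directly into the design-effect formula for cluster-sample size calculations, DEFF = 1 + (k - 1) * ICC, which is why tabulated ICCs like those in the abstract are useful to other survey designers.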
Mustian, Karen M.; Cole, Calvin L.; Lin, Po Ju; Asare, Matt; Fung, Chunkit; Janelsins, Michelle C.; Kamen, Charles S.; Peppone, Luke J.; Magnuson, Allison
2017-01-01
Objective: To review existing exercise guidelines for cancer patients and survivors for the management of symptom clusters. Data Sources: Review of PubMed literature and published exercise guidelines. Conclusion: Cancer and its treatments are responsible for a copious number of incapacitating symptoms that markedly impair quality of life (QOL). The exercise oncology literature provides consistent support for the safety and efficacy of exercise interventions in managing cancer- and treatment-related symptoms as well as improving quality of life in cancer patients and survivors. Implications for Nursing Practice: Effective management of symptoms enhances recovery, resumption of normal life activities and QOL for patients and survivors. Exercise is a safe, appropriate and effective therapeutic option before, during, and after the completion of treatment for alleviating symptoms and symptom clusters. PMID:27776835
National disease management plans for key chronic non-communicable diseases in Singapore.
Tan, C C
2002-07-01
In Singapore, chronic, non-communicable diseases, namely coronary heart disease, stroke and cancer, account for more than 60% of all deaths and a high burden of disability and healthcare expenditure. The burden of these diseases is likely to rise with our rapidly ageing population and changing lifestyles, and will present profound challenges to our healthcare delivery and financing systems over the next 20 to 30 years. The containment and optimal management of these conditions require a strong emphasis on patient education and the development of integrated models of healthcare delivery in place of the present uncoordinated, compartmentalised way of delivering healthcare. To meet these challenges, the Ministry of Health's major thrusts are disease control measures which focus mainly on primary prevention; and disease management, which coordinates the national effort to reduce the incidence of these key diseases and their predisposing factors and to ameliorate their long-term impact by optimising control to reduce mortality, morbidity and complications, and improving functional status through rehabilitation. The key initiatives include restructuring of the public sector healthcare institutions into two clusters, each comprising a network of primary health care polyclinics, regional hospitals and tertiary institutions. The functional integration of these healthcare elements within each cluster under a common senior administrative and professional management, and the development of common clinical IT systems will greatly facilitate the implementation of disease management programmes. Secondly, the Ministry is establishing National Disease Registries in coronary heart disease, cancer, stroke, myopia and kidney failure, which will be valuable sources of clinical and outcomes data. 
Thirdly, in partnership with expert groups, national committees and professional agencies, the Ministry will produce clinical practice guidelines which will assist doctors and healthcare professionals to better manage important aspects of the key diseases. Finally, the Ministry has committed funds to support selected National Disease Management programmes, illustrated by the disease management plan for asthma.
Data Mining and Knowledge Management in Higher Education -Potential Applications.
ERIC Educational Resources Information Center
Luan, Jing
This paper introduces a new decision support tool, data mining, in the context of knowledge management. The most striking features of data mining techniques are clustering and prediction. The clustering aspect of data mining offers comprehensive characteristics analysis of students, while the predicting function estimates the likelihood for a…
Classified and clustered data constellation: An efficient approach of 3D urban data management
NASA Astrophysics Data System (ADS)
Azri, Suhaibah; Ujang, Uznir; Castro, Francesc Antón; Rahman, Alias Abdul; Mioc, Darka
2016-03-01
The growth of urban areas has resulted in massive urban datasets and difficulties in handling and managing urban data. Huge and massive datasets can degrade data retrieval and information analysis performance. In addition, the urban environment is very difficult to manage because it involves various types of data, such as multiple types of zoning themes in the case of urban mixed-use development. Thus, a special technique for efficient handling and management of urban data is necessary. This paper proposes a structure called Classified and Clustered Data Constellation (CCDC) for urban data management. CCDC operates on the basis of two filters: classification and clustering. To boost the performance of information retrieval, CCDC maintains a minimal percentage of overlap among nodes and of coverage area, avoiding repetitive data entry and multipath queries. The results of tests conducted on several urban mixed-use development datasets using CCDC verify that it efficiently retrieves their semantic and spatial information. Further, comparisons between CCDC and existing clustering and data constellation techniques, in terms of preservation of minimal overlap and coverage, confirm that the proposed structure is capable of preserving the minimum overlap and coverage area among nodes. Our overall results indicate that CCDC is efficient in handling and managing urban data, especially for urban mixed-use development applications.
Gholami, Mohammad; Brennan, Robert W
2016-01-06
In this paper, we investigate alternative distributed clustering techniques for wireless sensor node tracking in an industrial environment. The research builds on extant work on wireless sensor node clustering by reporting on: (1) the development of a novel distributed management approach for tracking mobile nodes in an industrial wireless sensor network; and (2) an objective comparison of alternative cluster management approaches for wireless sensor networks. To perform this comparison, we focus on two main clustering approaches proposed in the literature: pre-defined clusters and ad hoc clusters. These approaches are compared in the context of their reconfigurability: more specifically, we investigate the trade-off between the cost and the effectiveness of competing strategies aimed at adapting to changes in the sensing environment. To support this work, we introduce three new metrics: a cost/efficiency measure, a performance measure, and a resource consumption measure. The results of our experiments show that ad hoc clusters adapt more readily to changes in the sensing environment, but this higher level of adaptability is at the cost of overall efficiency.
McGregor, Karla K.; Oleson, Jacob
2017-01-01
Purpose The purpose of this study is to determine whether deficits in executive function and lexical-semantic memory compromise the linguistic performance of young adults with specific learning disabilities (LD) enrolled in postsecondary studies. Method One hundred eighty-five students with LD (n = 53) or normal language development (ND, n = 132) named items in the categories animals and food for 1 minute for each category and completed tests of lexical-semantic knowledge and executive control of memory. Groups were compared on total names, mean cluster size, frequency of embedded clusters, frequency of cluster switches, and change in fluency over time. Secondary analyses of variability within the LD group were also conducted. Results The LD group was less fluent than the ND group. Within the LD group, lexical-semantic knowledge predicted semantic fluency and cluster size; executive control of memory predicted semantic fluency and cluster switches. The LD group produced smaller clusters and fewer embedded clusters than the ND group. Groups did not differ in switching or change over time. Conclusions Deficits in the lexical-semantic system associated with LD may persist into young adulthood, even among those who have managed their disability well enough to attend college. Lexical-semantic deficits are associated with compromised semantic fluency, and the two problems are more likely among students with more severe disabilities. PMID:28267833
Hall, Jessica; McGregor, Karla K; Oleson, Jacob
2017-03-01
The purpose of this study is to determine whether deficits in executive function and lexical-semantic memory compromise the linguistic performance of young adults with specific learning disabilities (LD) enrolled in postsecondary studies. One hundred eighty-five students with LD (n = 53) or normal language development (ND, n = 132) named items in the categories animals and food for 1 minute for each category and completed tests of lexical-semantic knowledge and executive control of memory. Groups were compared on total names, mean cluster size, frequency of embedded clusters, frequency of cluster switches, and change in fluency over time. Secondary analyses of variability within the LD group were also conducted. The LD group was less fluent than the ND group. Within the LD group, lexical-semantic knowledge predicted semantic fluency and cluster size; executive control of memory predicted semantic fluency and cluster switches. The LD group produced smaller clusters and fewer embedded clusters than the ND group. Groups did not differ in switching or change over time. Deficits in the lexical-semantic system associated with LD may persist into young adulthood, even among those who have managed their disability well enough to attend college. Lexical-semantic deficits are associated with compromised semantic fluency, and the two problems are more likely among students with more severe disabilities.
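For readers unfamiliar with fluency scoring, the clustering-and-switching measures above can be sketched in code. This is a minimal illustration under one common convention (a cluster is a maximal run of items from the same subcategory; a switch is a transition between runs); the item list and subcategory labels below are hypothetical, not the study's scoring protocol.

```python
def fluency_metrics(names, subcategory):
    """Score a 1-minute naming list.

    names: ordered list of produced items
    subcategory: dict mapping item -> semantic subcategory label
    Returns (total names, mean cluster size, number of switches).
    """
    if not names:
        return 0, 0.0, 0
    runs = [1]  # lengths of maximal same-subcategory runs
    for prev, cur in zip(names, names[1:]):
        if subcategory[prev] == subcategory[cur]:
            runs[-1] += 1
        else:
            runs.append(1)
    total = len(names)
    mean_cluster = sum(runs) / len(runs)
    switches = len(runs) - 1  # transitions between runs
    return total, mean_cluster, switches

# Hypothetical animal-naming transcript
sub = {"dog": "pets", "cat": "pets", "lion": "wild", "tiger": "wild",
       "zebra": "wild", "hen": "farm"}
seq = ["dog", "cat", "lion", "tiger", "zebra", "hen"]
print(fluency_metrics(seq, sub))  # (6, 2.0, 2)
```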
Bouhlal, Sofia; McBride, Colleen M; Trivedi, Niraj S; Agurs-Collins, Tanya; Persky, Susan
2017-04-01
Common reports of over-response to food cues, difficulties with calorie restriction, and difficulty adhering to dietary guidelines suggest that eating behaviors could be interrelated in ways that influence weight management efforts. The feasibility of identifying robust eating phenotypes (showing face, content, and criterion validity) was explored based on well-validated individual eating behavior assessments. Adults (n = 260; mean age 34 years) completed online questionnaires measuring nine eating behaviors, including appetite for palatable foods, binge eating, bitter taste sensitivity, disinhibition, food neophobia, pickiness, and satiety responsiveness. Discovery-based visualization procedures that combine the strengths of heatmaps and hierarchical clustering were used to investigate: 1) how eating behaviors cluster, 2) how participants can be grouped within eating behavior clusters, and 3) whether group clustering is associated with body mass index (BMI) and dietary self-efficacy levels. Two distinct eating behavior clusters, and participant groups that aligned within them, were identified: one with a higher drive to eat and another with food avoidance behaviors. Participants' BMI (p = 0.0002) and dietary self-efficacy (p < 0.0001) were associated with cluster membership. Eating behavior clusters showed content and criterion validity based on their association with BMI (associated, but not entirely overlapping) and dietary self-efficacy. Identifying eating behavior phenotypes appears viable. These efforts could be expanded and ultimately inform tailored weight management interventions. Published by Elsevier Ltd.
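The hierarchical-clustering step described above can be illustrated with a minimal single-linkage implementation. The behavior names and score vectors below are hypothetical stand-ins; a real analysis would use a library routine (e.g. `scipy.cluster.hierarchy`) rather than this sketch.

```python
import itertools

def single_linkage(points, k):
    """Agglomerative clustering with single linkage, merging the two
    closest clusters until k clusters remain.
    points: dict mapping name -> score vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    clusters = [{n} for n in points]
    while len(clusters) > k:
        # pick the pair of clusters with the closest pair of members
        i, j = min(
            itertools.combinations(range(len(clusters)), 2),
            key=lambda ij: min(dist(points[a], points[b])
                               for a in clusters[ij[0]]
                               for b in clusters[ij[1]]))
        clusters[i] |= clusters[j]
        del clusters[j]
    return clusters

# Hypothetical per-behavior mean scores (e.g. averaged over participants)
behaviors = {
    "palatable_appetite": [4.1, 3.9, 4.3],
    "binge_eating":       [3.8, 4.0, 4.2],
    "disinhibition":      [4.0, 4.1, 3.9],
    "food_neophobia":     [1.2, 1.0, 1.4],
    "pickiness":          [1.1, 1.3, 1.0],
    "satiety_response":   [1.4, 1.2, 1.1],
}
for c in single_linkage(behaviors, 2):
    print(sorted(c))
```

On these toy scores the two recovered groups mirror the "drive to eat" versus "food avoidance" split described in the abstract.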
Dynamic PROOF clusters with PoD: architecture and user experience
NASA Astrophysics Data System (ADS)
Manafov, Anar
2011-12-01
PROOF on Demand (PoD) is a tool-set that sets up a PROOF cluster on any resource management system. PoD is a user-oriented product with an easy-to-use GUI and a command-line interface. It is fully automated: no administrative privileges or special knowledge is required to use it. PoD utilizes a plug-in system to support different job submission front-ends. The current PoD distribution ships with LSF, Torque (PBS), Grid Engine, Condor, gLite, and SSH plug-ins, and we plan to implement a plug-in for the AliEn Grid as well. Recently developed algorithms make it possible to efficiently maintain two types of connections: packet-forwarding and native PROOF connections. This helps to properly handle most kinds of workers, with and without firewalls. PoD maintains the PROOF environment automatically and, for example, prevents resource misuse when workers idle for too long. As PoD matures and provides more plug-ins, it is increasingly used as a standard for setting up dynamic PROOF clusters in many different institutions. The GSI Analysis Facility (GSIAF) has been in production since 2007. The static PROOF cluster was phased out at the end of 2009, and GSIAF is now completely based on PoD. Users create private dynamic PROOF clusters on the general purpose batch farm, which simplifies resource sharing between interactive, local batch, and Grid usage. The main user communities are FAIR and ALICE.
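The plug-in mechanism described above can be pictured as a small registry that maps a front-end name to a submission class. This is purely an illustrative sketch: the class names, method names, and emitted commands are invented and do not reflect PoD's actual interface.

```python
class SubmitPlugin:
    """Base class for job-submission front-ends (hypothetical API)."""
    registry = {}

    @classmethod
    def register(cls, name):
        """Class decorator that records a plug-in under a backend name."""
        def wrap(plugin):
            cls.registry[name] = plugin
            return plugin
        return wrap

    def submit(self, n_workers):
        raise NotImplementedError

@SubmitPlugin.register("ssh")
class SshPlugin(SubmitPlugin):
    def submit(self, n_workers):
        # one start command per worker host (illustrative only)
        return [f"ssh worker-{i}: start-worker" for i in range(n_workers)]

@SubmitPlugin.register("lsf")
class LsfPlugin(SubmitPlugin):
    def submit(self, n_workers):
        return [f"bsub pod-worker.sh  # job {i}" for i in range(n_workers)]

def start_cluster(backend, n_workers):
    plugin = SubmitPlugin.registry[backend]()  # look up the front-end
    return plugin.submit(n_workers)

print(start_cluster("ssh", 2))
```

The point of the design is that adding a new resource manager (Grid Engine, Condor, ...) only requires registering one more class, leaving the cluster-startup logic untouched.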
NASA Astrophysics Data System (ADS)
Kim, Chan Moon; Parnichkun, Manukid
2017-11-01
Coagulation is an important process in drinking water treatment to attain acceptable treated water quality. However, the determination of coagulant dosage is still a challenging task for operators, because coagulation is a nonlinear and complicated process. Feedback control to achieve the desired treated water quality is difficult due to the lengthy process time. In this research, a hybrid of k-means clustering and an adaptive neuro-fuzzy inference system (k-means-ANFIS) is proposed for settled water turbidity prediction and optimal coagulant dosage determination using full-scale historical data. To build a model that adapts well to the different process states arising from influent water, raw water quality data are classified into four clusters according to their properties by a k-means clustering technique. The sub-models are then developed individually on the basis of each clustered data set. Results reveal that the sub-models constructed by the hybrid k-means-ANFIS outperform not only a single ANFIS model but also seasonal artificial neural network (ANN) models. The complete model, consisting of these sub-models, shows more accurate and consistent prediction ability than a single ANFIS model or a single ANN model on all five evaluation indices. Therefore, the hybrid k-means-ANFIS model can be employed as a robust tool for managing both treated water quality and production costs simultaneously.
Distributed controller clustering in software defined networks
Gani, Abdullah; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond
2017-01-01
Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance, and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism significantly reduces the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when a controller fails. The paper is a potential contribution toward addressing the issues of reliability, scalability, fault tolerance, and interoperability. PMID:28384312
Medical Named Entity Recognition for Indonesian Language Using Word Representations
NASA Astrophysics Data System (ADS)
Rahman, Arief
2018-03-01
Nowadays, Named Entity Recognition (NER) systems are used on medical texts to obtain important medical information, like diseases, symptoms, and drugs. While most NER systems are applied to formal medical texts, informal ones like those from social media (also called semi-formal texts) are starting to gain recognition as a gold mine for medical information. We propose a theoretical Named Entity Recognition (NER) model for semi-formal medical texts in our medical knowledge management system by comparing two kinds of word representations: cluster-based word representation and distributed representation.
Reefing Line Tension in CPAS Main Parachute Clusters
NASA Technical Reports Server (NTRS)
Ray, Eric S.
2013-01-01
Reefing lines are an essential feature to manage inflation loads. During each Engineering Development Unit (EDU) test of the Capsule Parachute Assembly System (CPAS), a chase aircraft is staged to be level with the cluster of Main ringsail parachutes during the initial inflation and reefed stages. This allows for capturing high-quality still photographs of the reefed skirt, suspension line, and canopy geometry. The over-inflation angles are synchronized with measured loads data in order to compute the tension force in the reefing line. The traditional reefing tension equation assumes radial symmetry, but cluster effects cause the reefed skirt of each parachute to elongate to a more elliptical shape. This effect was considered in evaluating multiple parachutes to estimate the semi-major and semi-minor axes. Three flight tests are assessed, including one with a skipped first stage, which had peak reefing line tension over three times higher than the nominal parachute disreef sequence.
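As a rough sketch of the symmetry assumption being relaxed (a generic hoop-tension argument, not the CPAS derivation itself): for a reefed skirt idealized as a circle of radius $r$ carrying a uniform inward radial line load $w$ from the canopy, half-ring equilibrium gives a constant line tension, while an elliptical skirt makes the tension vary with local curvature.

```latex
% Radially symmetric (circular) reefed skirt: half-ring equilibrium
T = w\,r = \frac{F_r}{2\pi}, \qquad F_r = 2\pi r\,w .

% Elliptical skirt with semi-axes a, b (cluster effects): a local
% funicular estimate T(\theta) \approx w/\kappa(\theta), with
\kappa(\theta) = \frac{ab}{\bigl(a^{2}\sin^{2}\theta + b^{2}\cos^{2}\theta\bigr)^{3/2}} .
```

Under this estimate the tension peaks where the curvature is smallest, i.e. along the flattened sides of the elongated skirt, which is why the elliptical geometry matters for the load calculation.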
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas
2012-07-14
The UCoMS research cluster has spearheaded three research areas since August 2004: wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefronts on pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made by research investigators in their respective areas of expertise, working cooperatively on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly as relevant to petroleum applications.
Galaxy CloudMan: delivering cloud compute clusters.
Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James
2010-12-21
Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.
Galaxy CloudMan: delivering cloud compute clusters
2010-01-01
Background Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is “cloud computing”, which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate “as is” use by experimental biologists. Results We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon’s EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. Conclusions The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge. PMID:21210983
Decentralized formation flying control in a multiple-team hierarchy.
Mueller, Joseph B; Thomas, Stephanie J
2005-12-01
In recent years, formation flying has been recognized as an enabling technology for a variety of mission concepts in both the scientific and defense arenas. Examples of developing missions at NASA include magnetospheric multiscale (MMS), solar imaging radio array (SIRA), and terrestrial planet finder (TPF). For each of these missions, a multiple satellite approach is required in order to accomplish the large-scale geometries imposed by the science objectives. In addition, the paradigm shift of using a multiple satellite cluster rather than a large, monolithic spacecraft has also been motivated by the expected benefits of increased robustness, greater flexibility, and reduced cost. However, the operational costs of monitoring and commanding a fleet of close-orbiting satellites are likely to be unreasonable unless the onboard software is sufficiently autonomous, robust, and scalable to large clusters. This paper presents the prototype of a system that addresses these objectives: a decentralized guidance and control system that is distributed across spacecraft using a multiple-team framework. The objective is to divide large clusters into teams of "manageable" size, so that the communication and computation demands driven by N decentralized units are related to the number of satellites in a team rather than the entire cluster. The system is designed to provide a high level of autonomy, to support clusters with large numbers of satellites, to enable the number of spacecraft in the cluster to change post-launch, and to provide for on-orbit software modification. The distributed guidance and control system will be implemented in an object-oriented style using a messaging architecture for networking and threaded applications (MANTA). In this architecture, tasks may be remotely added, removed, or replaced post launch to increase mission flexibility and robustness. This built-in adaptability will allow software modifications to be made on-orbit in a robust manner.
The prototype system, which is implemented in Matlab, emulates the object-oriented and message-passing features of the MANTA software. In this paper, the multiple team organization of the cluster is described, and the modular software architecture is presented. The relative dynamics in eccentric reference orbits is reviewed, and families of periodic, relative trajectories are identified, expressed as sets of static geometric parameters. The guidance law design is presented, and an example reconfiguration scenario is used to illustrate the distributed process of assigning geometric goals to the cluster. Next, a decentralized maneuver planning approach is presented that utilizes linear-programming methods to enact reconfiguration and coarse formation keeping maneuvers. Finally, a method for performing online collision avoidance is discussed, and an example is provided to gauge its performance.
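The linear-programming maneuver-planning step described above is typically posed along the following lines (a generic sketch under linearized relative dynamics, not the paper's exact formulation):

```latex
% Fuel-optimal reconfiguration over N steps, linearized dynamics
\min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1} \lVert u_k \rVert_1
\quad \text{s.t.} \quad
x_{k+1} = A_k x_k + B_k u_k, \qquad
x_N = x_{\text{goal}}, \qquad
\lvert u_k \rvert \le u_{\max},
```

where the 1-norm objective (a proxy for total impulse, hence fuel) is made linear by the standard split $u_k = u_k^{+} - u_k^{-}$ with $u_k^{\pm} \ge 0$, whose components enter the cost as a plain sum. Posed this way, each spacecraft can solve its own small LP, which is what makes the decentralized planning tractable.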
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
Information-educational environment with adaptive control of learning process
NASA Astrophysics Data System (ADS)
Modjaev, A. D.; Leonova, N. M.
2017-01-01
In recent years, a new scientific branch connected with the management of the social sphere, called "Social Cybernetics", has been developing intensively. Within this branch, the theory and methods of social-sphere management are being formed. Considerable attention is paid to management directly in real time. However, the solution of such management tasks is largely constrained by the absence, or insufficiently deep study, of the relevant sections of management theory and methods. The article discusses the use of cybernetic principles in solving control problems in social systems. Applied to educational activities, a model of composite interrelated objects representing the behaviour of students at various stages of the educational process is introduced. Statistical processing of experimental data obtained during the actual learning process is performed. When the number of features used is increased, additionally taking into account the degree and nature of variability in students' current progress during various types of studies, new grouping properties of students are discovered. L-clusters were identified, reflecting the behaviour of learners with similar characteristics during lectures. It was established that the characteristics of the clusters contain information about the dynamics of learners' behaviour, allowing them to be used in additional lessons. Ways of solving the problem of adaptive control based on the identified dynamic characteristics of the learners are outlined.
Karn, Elizabeth; Jasieniuk, Marie
2017-07-01
Management of agroecosystems with herbicides imposes strong selection pressures on weedy plants, leading to the evolution of resistance against those herbicides. Resistance to glyphosate in populations of Lolium perenne L. ssp. multiflorum is increasingly common in California, USA, causing economic losses and the loss of effective management tools. To gain insights into the recent evolution of glyphosate resistance in L. perenne in perennial cropping systems of northwest California and to inform management, we investigated the frequency of glyphosate resistance and the genetic diversity and structure of 14 populations. The sampled populations contained frequencies of resistant plants ranging from 10% to 89%. Analyses of neutral genetic variation using microsatellite markers indicated very high genetic diversity within all populations regardless of resistance frequency. Genetic variation was distributed predominantly among individuals within populations rather than among populations or sampled counties, as would be expected for a wide-ranging outcrossing weed species. Bayesian clustering analysis provided evidence of population structuring with extensive admixture between two genetic clusters or gene pools. High genetic diversity and admixture, and low differentiation between populations, strongly suggest the potential for spread of resistance through gene flow and the need for management that limits seed and pollen dispersal in L. perenne.
Global Gradients of Coral Exposure to Environmental Stresses and Implications for Local Management
Maina, Joseph; McClanahan, Tim R.; Venus, Valentijn; Ateweberhan, Mebrahtu; Madin, Joshua
2011-01-01
Background The decline of coral reefs globally underscores the need for a spatial assessment of their exposure to multiple environmental stressors to estimate vulnerability and evaluate potential counter-measures. Methodology/Principal Findings This study combined global spatial gradients of coral exposure to radiation stress factors (temperature, UV light and doldrums), stress-reinforcing factors (sedimentation and eutrophication), and stress-reducing factors (temperature variability and tidal amplitude) to produce a global map of coral exposure and identify areas where exposure depends on factors that can be locally managed. A systems analytical approach was used to define interactions between radiation stress variables, stress reinforcing variables and stress reducing variables. Fuzzy logic and spatial ordinations were employed to quantify coral exposure to these stressors. Globally, corals are exposed to radiation and reinforcing stress, albeit with high spatial variability within regions. Based on ordination of exposure grades, regions group into two clusters. The first cluster was composed of severely exposed regions with high radiation and low reducing stress scores (South East Asia, Micronesia, Eastern Pacific and the central Indian Ocean) or alternatively high reinforcing stress scores (the Middle East and the Western Australia). The second cluster was composed of moderately to highly exposed regions with moderate to high scores in both radiation and reducing factors (Caribbean, Great Barrier Reef (GBR), Central Pacific, Polynesia and the western Indian Ocean) where the GBR was strongly associated with reinforcing stress. Conclusions/Significance Despite radiation stress being the most dominant stressor, the exposure of coral reefs could be reduced by locally managing chronic human impacts that act to reinforce radiation stress. 
Future research and management efforts should focus on incorporating the factors that mitigate the effect of coral stressors until long-term carbon reductions are achieved through global negotiations. PMID:21860667
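The fuzzy-logic aggregation of stress grades used in the coral study might be sketched as below. The membership breakpoints, the max/min combination rule, and the regional grades are all invented for illustration; the paper's actual variables and rule base are more elaborate.

```python
def ramp(x, lo, hi):
    """Piecewise-linear fuzzy membership in [0, 1]:
    0 at or below lo, 1 at or above hi, linear in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def exposure(radiation, reinforcing, reducing):
    """Toy aggregation: reinforcing factors raise exposure (fuzzy OR
    via max); reducing factors cap it (fuzzy AND with the complement)."""
    stress = max(radiation, reinforcing)
    return min(stress, 1.0 - reducing)

# Hypothetical grades for two regions:
# region 1: hot, sedimented, little mitigation -> high exposure
r1 = exposure(ramp(30.2, 26, 31), ramp(0.7, 0, 1), ramp(0.2, 0, 1))
# region 2: cooler, high temperature variability -> low exposure
r2 = exposure(ramp(27.0, 26, 31), ramp(0.3, 0, 1), ramp(0.8, 0, 1))
print(round(r1, 2), round(r2, 2))
```

The design point mirrors the paper's conclusion: even with radiation stress fixed, lowering the locally manageable reinforcing term (or raising the reducing term) pulls the aggregate exposure down.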
Land management: data availability and process understanding for global change studies.
Erb, Karl-Heinz; Luyssaert, Sebastiaan; Meyfroidt, Patrick; Pongratz, Julia; Don, Axel; Kloster, Silvia; Kuemmerle, Tobias; Fetzel, Tamara; Fuchs, Richard; Herold, Martin; Haberl, Helmut; Jones, Chris D; Marín-Spiotta, Erika; McCallum, Ian; Robertson, Eddy; Seufert, Verena; Fritz, Steffen; Valade, Aude; Wiltshire, Andrew; Dolman, Albertus J
2017-02-01
In the light of daunting global sustainability challenges such as climate change, biodiversity loss and food security, improving our understanding of the complex dynamics of the Earth system is crucial. However, large knowledge gaps related to the effects of land management persist, in particular those human-induced changes in terrestrial ecosystems that do not result in land-cover conversions. Here, we review the current state of knowledge of ten common land management activities for their biogeochemical and biophysical impacts, the level of process understanding and data availability. Our review shows that ca. one-tenth of the ice-free land surface is under intense human management, half under medium and one-fifth under extensive management. Based on our review, we cluster these ten management activities into three groups: (i) management activities for which data sets are available, and for which a good knowledge base exists (cropland harvest and irrigation); (ii) management activities for which sufficient knowledge on biogeochemical and biophysical effects exists but robust global data sets are lacking (forest harvest, tree species selection, grazing and mowing harvest, N fertilization); and (iii) land management practices with severe data gaps concomitant with an unsatisfactory level of process understanding (crop species selection, artificial wetland drainage, tillage and fire management and crop residue management, an element of crop harvest). Although we identify multiple impediments to progress, we conclude that the current status of process understanding and data availability is sufficient to advance with incorporating management in, for example, Earth system or dynamic vegetation models in order to provide a systematic assessment of their role in the Earth system. This review contributes to a strategic prioritization of research efforts across multiple disciplines, including land system research, ecological research and Earth system modelling. 
© 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Uchegbu, Smart N.
Plan and policy development usually define the course, goal, execution, and success or failure of any public utilities initiative, and urban water supply is no exception. Planning and management in public water supply systems often determine the quality of service the water supply authorities can render. This paper therefore addresses the issue of effective planning and management as critical determinants of urban water supply and management with respect to two Nigerian cities, Umuahia and Aba, both in Abia State. Appropriate sampling methods (systematic sampling and cluster techniques) were employed to collect data for the study. The collected data were analyzed using multiple linear regression. The findings indicate that planning and management indices such as funding, manpower, and water storage tank capacity greatly influence the volume of water supplied in the study areas. Funding was identified as a major determinant of the efficiency of the water supply system. The study therefore advocates sector reforms that would usher in private participants in the water sector, both for improved funding and for enhanced productivity.
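The multiple-linear-regression step can be illustrated with a small ordinary-least-squares solver. The district data below are invented and exactly linear, so the fitted coefficients recover the generating values; the study's actual data and indices differ.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations
    (X^T X) beta = X^T y, solved by Gaussian elimination."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    # forward elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(XtX[r][col]))
        XtX[col], XtX[piv] = XtX[piv], XtX[col]
        Xty[col], Xty[piv] = Xty[piv], Xty[col]
        for r in range(col + 1, p):
            f = XtX[r][col] / XtX[col][col]
            for c in range(col, p):
                XtX[r][c] -= f * XtX[col][c]
            Xty[r] -= f * Xty[col]
    # back substitution
    beta = [0.0] * p
    for r in reversed(range(p)):
        beta[r] = (Xty[r] - sum(XtX[r][c] * beta[c]
                                for c in range(r + 1, p))) / XtX[r][r]
    return beta

# Hypothetical district rows: [intercept, funding, tank capacity]
X = [[1, 2, 5], [1, 3, 6], [1, 5, 9], [1, 4, 7], [1, 6, 11]]
y = [15, 20, 31, 25, 37]   # volume supplied (generated as 2 + 4f + t)
print([round(b, 2) for b in ols(X, y)])  # ≈ [2.0, 4.0, 1.0]
```

In a real study one would also report standard errors and fit diagnostics, but the coefficient vector is the quantity the abstract's "indices greatly influence volume" claim rests on.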
Hammer, Antje; Arah, Onyebuchi A; Dersarkissian, Maral; Thompson, Caroline A; Mannion, Russell; Wagner, Cordula; Ommen, Oliver; Sunol, Rosa; Pfaff, Holger
2013-01-01
Strategic leadership is an important organizational capability and is essential for quality improvement in hospital settings. Furthermore, the quality of leadership depends crucially on a common set of shared values and mutual trust between hospital management board members. According to the concept of social capital, these are essential requirements for successful cooperation and coordination within groups. We assume that social capital within hospital management boards is an important factor in the development of effective organizational systems for overseeing health care quality. We hypothesized that the degree of social capital within the hospital management board is associated with the effectiveness and maturity of the quality management system in European hospitals. We used a mixed-method approach to data collection and measurement in 188 hospitals in 7 European countries. For this analysis, we used responses from hospital managers. To test our hypothesis, we conducted a multilevel linear regression analysis of the association between social capital and the quality management system score at the hospital level, controlling for hospital ownership, teaching status, number of beds, number of board members, organizational culture, and country clustering. The average social capital score within a hospital management board was 3.3 (standard deviation: 0.5; range: 1-4) and the average hospital score for the quality management index was 19.2 (standard deviation: 4.5; range: 0-27). Higher social capital was associated with higher quality management system scores (regression coefficient: 1.41; standard error: 0.64, p=0.029). The results suggest that a higher degree of social capital exists in hospitals that exhibit higher maturity in their quality management systems. 
Although uncontrolled confounding and reverse causation cannot be completely ruled out, our new findings, along with the results of previous research, could have important implications for the work of hospital managers and the design and evaluation of hospital quality management systems.
3D Geomarketing Segmentation: A Higher Spatial Dimension Planning Perspective
NASA Astrophysics Data System (ADS)
Suhaibah, A.; Uznir, U.; Rahman, A. A.; Anton, F.; Mioc, D.
2016-09-01
Geomarketing is a discipline that uses geographic information in the planning and implementation of marketing activities. It can be used in any aspect of marketing, such as pricing, promotion or geo-targeting. Geomarketing analysis draws on a huge data pool, such as the locations of residential areas and topography, and also analyzes demographic information such as age, gender, annual income and lifestyle. This information can help users develop successful promotional campaigns in order to achieve marketing goals. One of the common activities in geomarketing is market segmentation, which clusters the data into several groups based on geographic criteria. To refine the search operation during analysis, we proposed an approach to cluster the data using a clustering algorithm. However, with a huge data pool, overlap among clusters may occur and lead to inefficient analysis. Moreover, geomarketing is usually active in urban areas and requires clusters to be organized in a three-dimensional (3D) way (i.e. multi-level shop lots, residential apartments), which is a constraint of the current Geographic Information System (GIS) framework. To address this issue, we proposed a combination of market segmentation based on geographic criteria and a clustering algorithm for 3D geomarketing data management. The proposed approach is capable of minimizing the overlap region during market segmentation. In this paper, geomarketing in an urban area is used as a case study, with several locations of customers and stores in 3D used in the test. The experiments demonstrated that the proposed approach is capable of minimizing overlapping segmentation and reducing repetitive data entries. The structure was also tested for retrieving spatial records from the database. For marketing purposes, a certain radius around a point is used to analyze marketing targets.
Based on the tests presented in this paper, we strongly believe that the structure is capable of handling and managing a huge pool of geomarketing data. As a future outlook, this paper also discusses the possibilities of expanding the structure.
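The segmentation step described above can be sketched as a plain k-means clustering of customer locations in three dimensions (easting, northing, and height, so that multi-level shop lots and apartments separate vertically). This is only an illustrative sketch: the data, the number of segments, and the helper names are assumptions, not the paper's actual algorithm.

```python
import math
import random

def kmeans_3d(points, k, iters=50, seed=42):
    """Plain k-means over (x, y, z) tuples; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid (3D Euclidean).
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: move each centroid to the mean of its member points.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centroids[c] = tuple(sum(v) / len(members) for v in zip(*members))
    return centroids, labels

# Hypothetical customer locations: (easting, northing, floor height in metres).
customers = [(0, 0, 0), (1, 0, 3), (0, 1, 6),
             (10, 10, 0), (11, 10, 3), (10, 11, 6)]
cents, labs = kmeans_3d(customers, k=2)
```

With two well-separated blocks of customers, the two resulting segments do not overlap spatially, which is the property the paper's approach aims to preserve at scale.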
Cluster of the Technische Universität Dresden for greenhouse gas and water fluxes
NASA Astrophysics Data System (ADS)
Moderow, Uta; Eichelmann, Uwe; Grünwald, Thomas; Prasse, Heiko; Queck, Ronald; Spank, Uwe; Bernhofer, Christian
2017-04-01
How different land uses change CO2 fluxes under similar climatic conditions is a core question for the estimation of carbon sinks. Here, the TUD-cluster forms an excellent basis, since it provides long-term measurements of eddy-covariance fluxes for different land uses. Measurements started at the Anchor Station Tharandter Wald (spruce) in 1996. Since then, the TUD-cluster has been successively complemented by continuous greenhouse gas flux observatories at Grillenburg (grassland), Klingenberg (crop rotation) and Spreewald (wetland), which have been operated since 2002, 2004 and 2010, respectively. The results of the TUD-cluster have been shared internationally in research frameworks such as EUROFLUX and its successors, and the cluster is now part of ICOS-D (Integrated Carbon Observation System), the German branch of ICOS Europe. This contribution focuses on presenting the different sites, which have comparatively similar climatic conditions but different CO2, water and energy fluxes. Influences of management and climatic conditions that are apparent in the long-term data will be shown, as well as interesting aspects of distinct land uses.
Analysis of Multi-Flight Common Routes for Traffic Flow Management
NASA Technical Reports Server (NTRS)
Sheth, Kapil; Clymer, Alexis; Morando, Alex; Shih, Fu-Tai
2016-01-01
This paper presents an approach for creating common weather-avoidance reroutes for multiple flights, along with the associated benefits analysis; it extends the single-flight advisories generated using the Dynamic Weather Routes (DWR) concept. These multiple-flight advisories are implemented in the National Airspace System (NAS) Constraint Evaluation and Notification Tool (NASCENT), a nation-wide simulation environment for generating time- and fuel-saving alternate routes for flights during severe weather events. Single-flight advisories within the same Center are clustered together by considering parameters such as a common return capture fix. The clustering yields proposed routes, called Multi-Flight Common Routes (MFCR), that avoid weather and other airspace constraints and save time and fuel. These routes are also expected to lower workload for traffic managers and controllers, since a common route is found for several flights and the route clearances would presumably be easier and faster. This study was based on 30 days in each of 2014 and 2015 that had the most delays attributed to convective weather. The results indicate that many opportunities exist where individual flight routes can be clustered to fly along a common route, saving a significant amount of time and fuel and potentially reducing the amount of coordination needed.
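The grouping step, clustering single-flight advisories that share a Center and a common return capture fix, can be sketched as a simple keyed grouping. The advisory tuples, field layout, and the minimum-group-size parameter below are hypothetical illustrations, not NASCENT's actual data model.

```python
from collections import defaultdict

# Hypothetical single-flight advisories: (flight_id, center, return_capture_fix, savings_min).
advisories = [
    ("AAL12", "ZFW", "TXK", 9.0),
    ("UAL88", "ZFW", "TXK", 12.5),
    ("DAL33", "ZFW", "BYP", 4.0),
    ("SWA51", "ZTL", "ODF", 7.5),
]

def multi_flight_common_routes(advisories, min_flights=2):
    """Group advisories that share a Center and a return capture fix,
    keeping only groups large enough to justify a common reroute."""
    groups = defaultdict(list)
    for flight, center, fix, saving in advisories:
        groups[(center, fix)].append((flight, saving))
    return {key: flights for key, flights in groups.items()
            if len(flights) >= min_flights}

mfcr = multi_flight_common_routes(advisories)
```

Only the two ZFW flights returning via TXK form an MFCR candidate here; the singleton advisories remain individual DWR-style reroutes.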
Kruis, Annemarije L; Boland, Melinde R S; Assendelft, Willem J J; Gussekloo, Jacobijn; Tsiachristas, Apostolos; Stijnen, Theo; Blom, Coert; Sont, Jacob K; Rutten-van Mölken, Maureen P H M; Chavannes, Niels H
2014-09-10
To investigate the long term effectiveness of integrated disease management delivered in primary care on quality of life in patients with chronic obstructive pulmonary disease (COPD) compared with usual care. The study was a 24 month, multicentre, pragmatic cluster randomised controlled trial in 40 general practices in the western part of the Netherlands, enrolling patients with COPD according to GOLD (Global Initiative for COPD) criteria. Exclusion criteria were terminal illness, cognitive impairment, alcohol or drug misuse, and inability to fill in Dutch questionnaires. Practices were included if they were willing to create a multidisciplinary COPD team. General practitioners, practice nurses, and specialised physiotherapists in the intervention group received a two day training course on incorporating integrated disease management in practice, including early recognition of exacerbations and self management, smoking cessation, physiotherapeutic reactivation, optimal diagnosis, and drug adherence. Additionally, the course served as a network platform, and collaborating healthcare providers designed an individual practice plan to incorporate integrated disease management into daily practice. The control group continued usual care (based on international guidelines). The primary outcome was the difference in health status at 12 months, measured by the Clinical COPD Questionnaire (CCQ); quality of life, Medical Research Council dyspnoea, exacerbation related outcomes, self management, physical activity, and level of integrated care (PACIC) were also assessed as secondary outcomes. Of a total of 1086 patients from 40 clusters, 20 practices (554 patients) were randomly assigned to the intervention group and 20 practices (532 patients) to the usual care group. No difference was seen between groups in the CCQ at 12 months (mean difference -0.01, 95% confidence interval -0.10 to 0.08; P=0.8).
After 12 months, no differences were seen in secondary outcomes between groups, except for the PACIC domain "follow-up/coordination" (indicating improved integration of care) and the proportion of physically active patients. Exacerbation rates and number of days in hospital did not differ between groups. After 24 months, no differences were seen in outcomes, except for the PACIC follow-up/coordination domain. In this pragmatic study, an integrated disease management approach delivered in primary care showed no additional benefit compared with usual care, except an improved level of integrated care and a self reported higher degree of daily activities. These findings, which contradict earlier positive studies, could be explained by differences between interventions (provider versus patient targeted), selective reporting of positive trials, or little room for improvement in the already well developed Dutch healthcare system. Netherlands Trial Register NTR2268. © Kruis et al 2014.
Comparison of Clustering Techniques for Residential Energy Behavior using Smart Meter Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Ling; Lee, Doris; Sim, Alex
Current practice in whole time series clustering of residential meter data focuses on aggregated or subsampled load data at the customer level, which ignores day-to-day differences within customers. This information is critical to determine each customer's suitability to various demand side management strategies that support intelligent power grids and smart energy management. Clustering daily load shapes provides fine-grained information on customer attributes and sources of variation for subsequent models and customer segmentation. In this paper, we apply 11 clustering methods to daily residential meter data. We evaluate their parameter settings and suitability based on 6 generic performance metrics and post-checking of resulting clusters. Finally, we recommend suitable techniques and parameters based on the goal of discovering diverse daily load patterns among residential customers. To the authors' knowledge, this paper is the first robust comparative review of clustering techniques applied to daily residential load shape time series in the power systems literature.
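One generic internal metric of the kind used for such comparisons is the silhouette coefficient, which scores a clustering without ground-truth labels. The sketch below, with toy daily load shapes and two candidate labelings, illustrates how an internal metric can rank clusterings; it is an illustrative assumption, not the paper's actual evaluation code or metric set.

```python
import math

def silhouette(points, labels):
    """Mean silhouette coefficient over all points: compares each point's
    mean distance to its own cluster (a) against the nearest other
    cluster (b); higher is better."""
    score = 0.0
    for i, p in enumerate(points):
        same = [math.dist(p, q) for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        a = sum(same) / len(same) if same else 0.0
        b = min(
            sum(math.dist(p, q) for j, q in enumerate(points) if labels[j] == lab)
            / sum(1 for j in range(len(points)) if labels[j] == lab)
            for lab in set(labels) if lab != labels[i]
        )
        score += (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return score / len(points)

# Toy daily load shapes (kW sampled at four times of day) and two labelings.
shapes = [(0.2, 0.3, 1.5, 0.9), (0.25, 0.3, 1.4, 1.0),
          (1.1, 0.9, 0.3, 0.2), (1.0, 1.0, 0.35, 0.2)]
good = [0, 0, 1, 1]   # evening-peak vs morning-peak households
bad = [0, 1, 0, 1]    # mixes the two shape families
```

The labeling that separates the two load-shape families scores markedly higher, which is how such metrics steer the choice of method and parameters when no ground truth exists.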
Close Encounters of the Stellar Kind
NASA Astrophysics Data System (ADS)
2003-07-01
NASA's Chandra X-ray Observatory has confirmed that close encounters between stars form X-ray emitting, double-star systems in dense globular star clusters. These X-ray binaries have a different birth process than their cousins outside globular clusters, and should have a profound influence on the cluster's evolution. A team of scientists led by David Pooley of the Massachusetts Institute of Technology in Cambridge took advantage of Chandra's unique ability to precisely locate and resolve individual sources to determine the number of X-ray sources in 12 globular clusters in our Galaxy. Most of the sources are binary systems containing a collapsed star such as a neutron star or a white dwarf star that is pulling matter off a normal, Sun-like companion star. "We found that the number of X-ray binaries is closely correlated with the rate of encounters between stars in the clusters," said Pooley. "Our conclusion is that the binaries are formed as a consequence of these encounters. It is a case of nurture not nature." A similar study led by Craig Heinke of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. confirmed this conclusion, and showed that roughly 10 percent of these X-ray binary systems contain neutron stars. Most of these neutron stars are usually quiet, spending less than 10% of their time actively feeding from their companion. A globular cluster is a spherical collection of hundreds of thousands or even millions of stars buzzing around each other in a gravitationally-bound stellar beehive that is about a hundred light years in diameter. The stars in a globular cluster are often only about a tenth of a light year apart. For comparison, the nearest star to the Sun, Proxima Centauri, is 4.2 light years away. With so many stars moving so close together, interactions between stars occur frequently in globular clusters.
The stars, while rarely colliding, do get close enough to form binary star systems or cause binary stars to exchange partners in intricate dances. The data suggest that X-ray binary systems are formed in dense clusters known as globular clusters about once a day somewhere in the universe. Observations by NASA's Uhuru X-ray satellite in the 1970s showed that globular clusters seemed to contain a disproportionately large number of X-ray binary sources compared to the Galaxy as a whole. Normally only one in a billion stars is a member of an X-ray binary system containing a neutron star, whereas in globular clusters, the fraction is more like one in a million. The present research confirms earlier suggestions that the chance of forming an X-ray binary system is dramatically increased by the congestion in a globular cluster. Under these conditions two processes, known as three-star exchange collisions and tidal captures, can lead to a thousandfold increase in the number of X-ray sources in globular clusters. In an exchange collision, a lone neutron star encounters a pair of ordinary stars. The intense gravity of the neutron star can induce the most massive ordinary star to "change partners," and pair up with the neutron star while ejecting the lighter star. A neutron star could also make a grazing collision with a single normal star, and the intense gravity of the neutron star could distort the normal star in the process. The energy lost in the distortion could prevent the normal star from escaping from the neutron star, leading to what is called tidal capture. "In addition to solving a long-standing mystery, Chandra data offer an opportunity for a deeper understanding of globular cluster evolution," said Heinke. "For example, the energy released in the formation of close binary systems could keep the central parts of the cluster from collapsing to form a massive black hole."
NASA's Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for the Office of Space Science, NASA Headquarters, Washington. Northrop Grumman of Redondo Beach, Calif., formerly TRW, Inc., was the prime development contractor for the observatory. The Smithsonian Astrophysical Observatory controls science and flight operations from the Chandra X-ray Center in Cambridge, Mass. The image and additional information are available at: http://chandra.harvard.edu and http://chandra.nasa.gov
Robinson, Jo; Too, Lay San; Pirkis, Jane; Spittal, Matthew J
2016-11-22
A suicide cluster has been defined as a group of suicides that occur closer together in time and space than would normally be expected. We aimed to examine the extent to which suicide clusters exist among young people and adults in Australia and to determine whether differences exist between cluster and non-cluster suicides. Suicide data were obtained from the National Coronial Information System for the period 2010 to 2012. Data on date of death, postcode, age at the time of death, sex, suicide method, ICD-10 code for cause of death, marital status, employment status, and aboriginality were retrieved. We examined the presence of spatial clusters separately for youth suicides and adult suicides using the Scan statistic. Pearson's chi-square was used to compare the characteristics of cluster suicides with non-cluster suicides. We identified 12 spatial clusters between 2010 and 2012. Five occurred among young people (n = 53, representing 5.6% [53/940] of youth suicides) and seven occurred among adults (n = 137, representing 2.3% [137/5939] of adult suicides). Clusters ranged in size from three to 21 for youth and from three to 31 for adults. When compared to adults, suicides by young people were significantly more likely to occur as part of a cluster (difference = 3.3%, 95% confidence interval [CI] = 1.8 to 4.8, p < 0.0001). Suicides by people with an Indigenous background were also significantly more likely to occur in a cluster than suicides by non-Indigenous people, and this was the case among both young people and adults. Suicide clusters have a significant negative impact on the communities in which they occur. As a result, it is important to find effective ways of managing and containing suicide clusters. To date there is limited evidence for the effectiveness of the strategies typically employed, in particular in Indigenous settings, and developing this evidence base needs to be a future priority.
Future research that examines in more depth the socio-demographic and clinical factors associated with suicide clusters is also warranted in order that appropriate interventions can be developed.
Game engines and immersive displays
NASA Astrophysics Data System (ADS)
Chang, Benjamin; Destefano, Marc
2014-02-01
While virtual reality and digital games share many core technologies, the programming environments, toolkits, and workflows for developing games and VR environments are often distinct. VR toolkits designed for applications in visualization and simulation often have a different feature set or design philosophy than game engines, while popular game engines often lack support for VR hardware. Extending a game engine to support systems such as the CAVE gives developers a unified development environment and the ability to easily port projects, but involves challenges beyond just adding stereo 3D visuals. In this paper we outline the issues involved in adapting a game engine for use with an immersive display system including stereoscopy, tracking, and clustering, and present example implementation details using Unity3D. We discuss application development and workflow approaches including camera management, rendering synchronization, GUI design, and issues specific to Unity3D, and present examples of projects created for a multi-wall, clustered, stereoscopic display.
The design of multiplayer online video game systems
NASA Astrophysics Data System (ADS)
Hsu, Chia-chun A.; Ling, Jim; Li, Qing; Kuo, C.-C. J.
2003-11-01
The distributed Multiplayer Online Game (MOG) system is complex, since it involves technologies in computer graphics, multimedia, artificial intelligence, computer networking, embedded systems, etc. Due to the large scope of this problem, the design of MOG systems has not yet been widely addressed in the literature. In this paper, we review and analyze the current MOG system architecture and then evaluate it. Furthermore, we propose a clustered-server architecture to provide a scalable solution, together with a region-oriented allocation strategy. Two key issues, interest management and synchronization, are discussed in depth. Some preliminary ideas to deal with the identified problems are described.
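The region-oriented allocation and interest-management ideas can be sketched as follows: the world is split into square regions, each region is allocated to a cluster server, and a player only receives state updates from players in the same or an adjacent region. Region size, server names, and function names are illustrative assumptions, not the paper's design.

```python
REGION = 100          # region edge length in world units (assumed)
SERVERS = ["s1", "s2", "s3", "s4"]

def region_of(x, y):
    """Map a world position to its region id."""
    return (int(x) // REGION, int(y) // REGION)

def server_for(region):
    # Simple static allocation: hash the region id onto the server list.
    return SERVERS[hash(region) % len(SERVERS)]

def interest_set(player_pos, all_players):
    """Players whose state updates this player must receive: those in the
    same region or one of the eight neighbouring regions."""
    rx, ry = region_of(*player_pos)
    nearby = {(rx + dx, ry + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    return [pid for pid, pos in all_players.items()
            if region_of(*pos) in nearby]

players = {"p1": (10, 10), "p2": (150, 40), "p3": (900, 900)}
```

Here p2, one region over from p1, falls inside p1's interest set, while the distant p3 does not, so its updates never cross the network to p1's server; this is the traffic reduction that makes the clustered-server approach scale.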
Impacts of storm chronology on the morphological changes of the Formby beach and dune system, UK
NASA Astrophysics Data System (ADS)
Dissanayake, P.; Brown, J.; Karunarathna, H.
2015-07-01
Impacts of storm chronology within a storm cluster on beach/dune erosion are investigated by applying the state-of-the-art numerical model XBeach to the Sefton coast, northwest England. Six temporal storm clusters of different storm chronologies were formulated using three storms observed during the 2013/2014 winter. The storm power values of these three events nearly halve from the first to second event and from the second to third event. Cross-shore profile evolution was simulated in response to the tide, surge and wave forcing during these storms. The model was first calibrated against the available post-storm survey profiles. Cumulative impacts of beach/dune erosion during each storm cluster were simulated by using the post-storm profile of an event as the pre-storm profile for each subsequent event. For the largest event the water levels caused noticeable retreat of the dune toe due to the high water elevation. For the other events the greatest evolution occurs over the bar formations (erosion) and within the corresponding troughs (deposition) of the upper-beach profile. The sequence of events impacting the size of this ridge-runnel feature is important as it consequently changes the resilience of the system to the most extreme event that causes dune retreat. The highest erosion during each single storm event was always observed when that storm initialised the storm cluster. The most severe storm always resulted in the most erosion during each cluster, no matter when it occurred within the chronology, although the erosion volume due to this storm was reduced when it was not the primary event. The greatest cumulative cluster erosion occurred with increasing storm severity; however, the variability in cumulative cluster impact over a beach/dune cross section due to storm chronology is minimal. 
Initial storm impact can act to enhance or reduce the system resilience to subsequent impact, but overall the cumulative impact is controlled by the magnitude and number of the storms. This model application provides inter-survey information about morphological response to repeated storm impact. This will inform local managers of the potential beach response and dune vulnerability to variable storm configurations.
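The cumulative-impact procedure, feeding each storm's post-storm profile into the next run, can be sketched with a toy erosion rule standing in for XBeach. The proportional-erosion model and every number below are assumptions for illustration only; the real study ran full morphodynamic simulations.

```python
from itertools import permutations

def run_storm(volume, power, k=0.01):
    """Hypothetical stand-in for an XBeach run: erosion grows with storm
    power and with the sand volume still available on the profile."""
    eroded = k * power * volume
    return volume - eroded, eroded

def cluster_erosion(initial_volume, storm_powers):
    """Chain the storms so that the post-storm profile of one event becomes
    the pre-storm profile of the next, as in the cumulative-impact runs."""
    volume, per_storm = initial_volume, []
    for power in storm_powers:
        volume, eroded = run_storm(volume, power)
        per_storm.append(eroded)
    return volume, per_storm

storms = (30.0, 15.0, 7.5)   # powers that roughly halve, as in the text
initial = 1000.0             # schematic beach/dune sand volume
orders = {o: cluster_erosion(initial, o) for o in permutations(storms)}
```

Even this toy model reproduces two of the paper's qualitative findings: the most severe storm erodes less when it is not the primary event (less volume is left for it to act on), while the cumulative cluster total is nearly insensitive to chronology.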
The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform
NASA Astrophysics Data System (ADS)
Xie, Qingyun
2016-06-01
This paper summarizes the general requirements and specific characteristics of both a geospatial raster database management system and a raster data processing platform, from a domain-specific perspective as well as from a computing point of view. It also discusses the need for tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global-scale, high-performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. As a database management system, GeoRaster defines an integrated raster data model and supports image compression, data manipulation, general and spatial indices, content- and context-based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.
The Evolution of Globular Cluster Systems In Early-Type Galaxies
NASA Astrophysics Data System (ADS)
Grillmair, Carl
1999-07-01
We will measure structural parameters (core radii and concentrations) of globular clusters in three early-type galaxies using deep, four-point dithered observations. We have chosen globular cluster systems with young, medium-age and old cluster populations, as indicated by cluster colors and luminosities. Our primary goal is to test the hypothesis that globular cluster luminosity functions evolve towards a "universal" form. Previous observations have shown that young cluster systems have exponential luminosity functions rather than the characteristic log-normal luminosity function of old cluster systems. We will test whether young systems exhibit a wider range of structural parameters than old systems, and whether and at what rate plausible disruption mechanisms will cause the luminosity function to evolve towards a log-normal form. A simple observational comparison of structural parameters between cluster populations of different ages, and between different sub-populations within the same galaxy, will also provide clues concerning the formation and destruction mechanisms of star clusters, the distinction between open and globular clusters, and the advisability of using globular cluster luminosity functions as distance indicators.
Knowledge Management for Command and Control
2004-06-01
…interfaces relies on rich visual and conceptual understanding of what is sketched, rather than the pattern-recognition technologies that most systems use…
…recognizers) required by other approaches.
• The underlying conceptual representations that nuSketch uses enable it to serve as a front end to knowledge…
…constructing enemy-intent hypotheses via mixed visual and conceptual analogies.
II.C. Multi-ViewPoint Clustering Analysis (MVP-CA) technology…
Second Evaluation of Job Queuing/Scheduling Software. Phase 1
NASA Technical Reports Server (NTRS)
Jones, James Patton; Brickell, Cristy; Chancellor, Marisa (Technical Monitor)
1997-01-01
The recent proliferation of high performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, NAS compiled a requirements checklist for job queuing/scheduling software. Next, NAS evaluated the leading job management system (JMS) software packages against the checklist. A year has now elapsed since the first comparison was published, and NAS has repeated the evaluation. This report describes the second evaluation and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still lacking; however, the vendors have made definite progress in correcting the deficiencies. This report is supplemented by a WWW interface to the collected data, to aid other sites in extracting evaluation information on specific requirements of interest.
van der Molen, Thys; Fletcher, Monica; Price, David
Asthma is a highly heterogeneous disease that can be classified into different clinical phenotypes, and treatment may be tailored accordingly. However, factors beyond purely clinical traits, such as patient attitudes and behaviors, can also have a marked impact on treatment outcomes. The objective of this study was to further analyze data from the REcognise Asthma and LInk to Symptoms and Experience (REALISE) Europe survey, to identify distinct patient groups sharing common attitudes toward asthma and its management. Factor analysis of respondent data (N = 7,930) from the REALISE Europe survey consolidated the 34 attitudinal variables provided by the study population into a set of 8 summary factors. Cluster analyses were used to identify patient clusters that showed similar attitudes and behaviors toward each of the 8 summary factors. Five distinct patient clusters were identified and named according to the key characteristics comprising that cluster: "Confident and self-managing," "Confident and accepting of their asthma," "Confident but dependent on others," "Concerned but confident in their health care professional (HCP)," and "Not confident in themselves or their HCP." Clusters showed clear variability in attributes such as the degree of confidence in managing their asthma, use of reliever and preventer medication, and level of asthma control. The 5 patient clusters identified in this analysis displayed distinctly different personal attitudes that would require different approaches in the consultation room, certainly for asthma but probably also for other chronic diseases. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Pyglidein - A Simple HTCondor Glidein Service
NASA Astrophysics Data System (ADS)
Schultz, D.; Riedel, B.; Merino, G.
2017-10-01
A major challenge for data processing and analysis at the IceCube Neutrino Observatory is connecting a large set of individual clusters together to form a computing grid. Most of these clusters do not provide a "standard" grid interface. Using a local account on each submit machine, HTCondor glideins can be submitted to virtually any type of scheduler. The glideins then connect back to a main HTCondor pool, where jobs can run normally with no special syntax. To respond to dynamic load, a simple server advertises the number of idle jobs in the queue and the resources they request. The submit script can query this server to tailor glideins to what is needed, or not submit at all if there is no demand. Configuring HTCondor dynamic slots in the glideins allows us to efficiently handle varying memory requirements as well as whole-node jobs. One step of the IceCube simulation chain, photon propagation in the ice, relies heavily on GPUs for faster execution. Therefore, one important requirement for any workload management system in IceCube is to handle GPU resources properly. Within the pyglidein system, we have successfully configured HTCondor glideins to use any GPUs allocated to them, with jobs using the standard HTCondor GPU syntax to request and use a GPU. This mechanism allows us to seamlessly integrate our local GPU cluster with remote non-Grid GPU clusters, including specially allocated resources at XSEDE supercomputers.
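The demand-driven submission logic described above can be sketched in a few lines. The JSON schema, the per-class cap, and the function name are illustrative assumptions, not the actual pyglidein protocol:

```python
import json

def glideins_needed(state, default_cap=10):
    """Number of glideins to submit: one per idle job per resource
    class, capped per class so remote schedulers are not flooded.
    (Hypothetical policy, not pyglidein's actual algorithm.)"""
    return sum(min(entry["idle_jobs"], entry.get("cap", default_cap))
               for entry in state)

# `state` mimics a reply from the demand server advertising idle jobs.
state = json.loads(
    '[{"resource": "cpu", "idle_jobs": 3},'
    ' {"resource": "gpu", "idle_jobs": 25, "cap": 10}]')
n = glideins_needed(state)  # 3 CPU-class + 10 capped GPU-class
```

A real submit script would poll such a server periodically and skip submission entirely when the returned demand is zero.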
Improving the Statistical Modeling of the TRMM Extreme Precipitation Monitoring System
NASA Astrophysics Data System (ADS)
Demirdjian, L.; Zhou, Y.; Huffman, G. J.
2016-12-01
This project improves upon an existing extreme precipitation monitoring system based on the Tropical Rainfall Measuring Mission (TRMM) daily product (3B42) using new statistical models. The proposed system utilizes a regional modeling approach, where data from similar grid locations are pooled to increase the quality and stability of the resulting model parameter estimates to compensate for the short data record. The regional frequency analysis is divided into two stages. In the first stage, the region defined by the TRMM measurements is partitioned into approximately 27,000 non-overlapping clusters using a recursive k-means clustering scheme. In the second stage, a statistical model is used to characterize the extreme precipitation events occurring in each cluster. Instead of utilizing the block-maxima approach used in the existing system, where annual maxima are fit to the Generalized Extreme Value (GEV) probability distribution at each cluster separately, the present work adopts the peaks-over-threshold (POT) method of classifying points as extreme if they exceed a pre-specified threshold. Theoretical considerations motivate the use of the Generalized-Pareto (GP) distribution for fitting threshold exceedances. The fitted parameters can be used to construct simple and intuitive average recurrence interval (ARI) maps which reveal how rare a particular precipitation event is given its spatial location. The new methodology eliminates much of the random noise that was produced by the existing models due to a short data record, producing more reasonable ARI maps when compared with NOAA's long-term Climate Prediction Center (CPC) ground-based observations. The resulting ARI maps can be useful for disaster preparation, warning, and management, as well as increased public awareness of the severity of precipitation events. Furthermore, the proposed methodology can be applied to various other extreme climate records.
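The POT/GP pipeline described above can be illustrated with a minimal sketch: fit a Generalized Pareto distribution to threshold exceedances and convert its tail probability into an average recurrence interval. The synthetic rainfall record, the 95th-percentile threshold choice, and the helper function are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for a daily precipitation record at one cluster (mm/day).
daily_rain = rng.gamma(shape=0.5, scale=8.0, size=6000)

threshold = np.quantile(daily_rain, 0.95)            # pre-specified threshold
excess = daily_rain[daily_rain > threshold] - threshold

# Fit the GP distribution to the exceedances; location is 0 by construction.
shape, loc, scale = stats.genpareto.fit(excess, floc=0)

def ari_years(amount_mm, obs_per_year=365.25):
    """Average recurrence interval (years) for a daily total >= amount_mm."""
    rate = len(excess) / len(daily_rain)             # P(exceed threshold)
    tail = stats.genpareto.sf(amount_mm - threshold, shape, loc, scale)
    return 1.0 / (rate * tail * obs_per_year)
```

Mapping `ari_years` over every cluster's fitted parameters would yield the kind of ARI map the abstract describes.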
Spatial clustering of pixels of a multispectral image
Conger, James Lynn
2014-08-19
A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and its most similar neighboring pixel. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the similarity score that represents the highest similarity as the maximum similarity score. The clustering system may apply a filtering criterion based on the maximum similarity score so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
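The maximum-spectral-similarity step can be sketched as follows. Cosine similarity between spectral vectors over an 8-neighborhood is used here as an illustrative metric; the patented scoring function may differ:

```python
import numpy as np

def max_neighbor_similarity(img):
    """img: (rows, cols, bands) array -> (rows, cols) array holding, for
    each pixel, the highest cosine similarity to any 8-neighbor."""
    rows, cols, _ = img.shape
    out = np.zeros((rows, cols))
    norms = np.linalg.norm(img, axis=2)
    for r in range(rows):
        for c in range(cols):
            best = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue  # skip the pixel itself
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        denom = norms[r, c] * norms[nr, nc]
                        if denom > 0:
                            sim = float(img[r, c] @ img[nr, nc]) / denom
                            best = max(best, sim)
            out[r, c] = best
    return out

# Tiny 2x2 image with 2 spectral bands.
img = np.array([[[1.0, 0.0], [1.0, 0.1]],
                [[0.0, 1.0], [1.0, 0.0]]])
sims = max_neighbor_similarity(img)
```

A filtering pass would then drop pixels whose `sims` value falls below the minimum threshold before clustering, as the abstract describes.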
NASA Astrophysics Data System (ADS)
Asmi, A.; Sorvari, S.; Kutsch, W. L.; Laj, P.
2017-12-01
European long-term environmental research infrastructures (often referred to as ESFRI RIs) are the core facilities providing services for scientists in their quest to understand and predict the complex Earth system and its functioning, which requires long-term efforts to identify environmental changes (trends, thresholds and resilience, interactions and feedbacks). Many of the research infrastructures were originally developed to respond to the needs of their specific research communities; however, strong collaboration among research infrastructures is clearly needed to serve trans-boundary research, which requires exploring scientific questions at the intersection of different scientific fields, conducting joint research projects, and developing concepts, devices, and methods that can be used to integrate knowledge. European environmental research infrastructures have already worked together successfully for many years and have established a cluster - the ENVRI cluster - for their collaborative work. The ENVRI cluster acts as a collaborative platform where the RIs can jointly agree on common solutions for their operations, draft strategies and policies, and share best practices and knowledge. The supporting project for the ENVRI cluster, the ENVRIplus project, brings together 21 European research infrastructures and infrastructure networks to work on joint technical solutions, data interoperability, access management, training, strategies and dissemination efforts. The ENVRI cluster acts as a one-stop shop for multidisciplinary RI users, other collaborative initiatives, projects and programmes, and coordinates and implements jointly agreed RI strategies.
NASA Astrophysics Data System (ADS)
Varua, M. E.; Ward, J.; Maheshwari, B.; Oza, S.; Purohit, R.; Hakimuddin; Chinnasamy, P.
2016-06-01
The absence of either state regulations or markets to coordinate the operation of individual wells has focussed attention on community level institutions as the primary loci for sustainable groundwater management in Rajasthan and Gujarat, India. The reported research relied on theoretical propositions that livelihood strategies, groundwater management and the propensity to cooperate are associated with the attitudinal orientations of well owners in the Meghraj and Dharta watersheds, located in Gujarat and Rajasthan respectively. The research tested the hypothesis that attitudes to groundwater management and farming practices, household income and trust levels of assisting agencies were not consistent across the watersheds, implying that a targeted approach, in contrast to default uniform programs, would assist communities craft rules to manage groundwater across multiple hydro-geological settings. Hierarchical cluster analysis of attitudes held by survey respondents revealed four statistically significant discrete clusters, supporting acceptance of the hypothesis. Further analyses revealed significant differences in farming practices, household wealth and willingness to adapt across the four groundwater management clusters. In conclusion, the need to account for attitudinal diversity is highlighted, and a framework is described to guide the design of processes that assist communities in crafting coordinating instruments to sustainably manage local aquifers.
Marc G. Genton; David T. Butry; Marcia L. Gumpertz; Jeffrey P. Prestemon
2006-01-01
We analyse the spatio-temporal structure of wildfire ignitions in the St. Johns River Water Management District in north-eastern Florida. We show, using tools to analyse point patterns (e.g. the L-function), that wildfire events occur in clusters. Clustering of these events correlates with irregular distribution of fire ignitions, including lightning...
ERIC Educational Resources Information Center
Illinois Univ., Urbana. Office of Agricultural Communications and Education.
This curriculum guide contains 5 teaching units for 44 agricultural business and management cluster problem areas. These problem areas have been selected as suggested areas of study to be included in a core curriculum for secondary students enrolled in an agricultural education program. The five units are as follows: (1) agribusiness operation and…
Yu, Ming; Cao, Qi-chen; Su, Yu-xi; Sui, Xin; Yang, Hong-jun; Huang, Lu-qi; Wang, Wen-ping
2015-08-01
Malignant tumor is one of the main causes of death in the world at present, as well as a major disease that seriously harms human health and life and restricts social and economic development. There are many kinds of reports about traditional Chinese medicine patent prescriptions, empirical prescriptions and self-made prescriptions treating cancer, and prescription rules were often analyzed based on medication frequency. Such methods were applicable for discovering dominant experience but hard pressed to produce innovative discoveries and knowledge. In this paper, based on the traditional Chinese medicine inheritance assistance system, software integrating the mutual information improvement method, complex system entropy clustering, and unsupervised entropy-level clustering data mining methods was adopted to analyze the rules of traditional Chinese medicine prescriptions for cancer. Totally 114 prescriptions were selected, the frequency of herbs in prescription was determined, and 85 core combinations and 13 new prescriptions were identified. The traditional Chinese medicine inheritance assistance system, as a valuable traditional Chinese medicine research-supporting tool, can be used to record, manage, inquire and analyze prescription data.
Extracting the Textual and Temporal Structure of Supercomputing Logs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, S; Singh, I; Chandra, A
2009-05-26
Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
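The syntactic-structure idea can be illustrated with a toy sketch: reduce each message to a template by masking volatile tokens, then group messages that share a template. The regexes and masking rules here are illustrative stand-ins, not the paper's clustering algorithm:

```python
import re
from collections import defaultdict

def template(msg):
    """Mask volatile tokens so syntactically similar messages collide.
    Hex addresses are masked before plain numbers to avoid mangling."""
    msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)
    msg = re.sub(r"\d+", "<NUM>", msg)
    return msg

def cluster_logs(lines):
    """Group raw log lines by their extracted template."""
    groups = defaultdict(list)
    for line in lines:
        groups[template(line)].append(line)
    return groups

logs = [
    "node 17 failed at 0x7fa3: ECC error",
    "node 942 failed at 0x1b2c: ECC error",
    "fan speed 3200 rpm on node 17",
]
groups = cluster_logs(logs)
```

In this toy run the two ECC messages collapse onto one template while the fan message forms its own group; temporal correlation between groups would then be computed over message timestamps.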
Risk Evaluation for Cyclic Aliphatic Bromide Cluster (HBCD Cluster)
EPA's existing chemicals programs address pollution prevention, risk assessment, hazard and exposure assessment and/or characterization, and risk management for chemicals substances in commercial use.
Murphy, Matthew; MacCarthy, M Jayne; McAllister, Lynda; Gilbert, Robert
2014-12-05
Competency profiles for occupational clusters within Canada's substance abuse workforce (SAW) define the need for skill and knowledge in evidence-based practice (EBP) across all its members. Members of the Senior Management occupational cluster hold ultimate responsibility for decisions made within addiction services agencies and therefore must possess the highest level of proficiency in EBP. The objective of this study was to assess the knowledge of the principles of EBP, and use of the components of the evidence-based decision making (EBDM) process in members of this occupational cluster from selected addiction services agencies in Nova Scotia. A convenience sampling method was used to recruit participants from addiction services agencies. Semi-structured qualitative interviews were conducted with eighteen Senior Management. The interviews were audio-recorded, transcribed verbatim and checked by the participants. Interview transcripts were coded and analyzed for themes using content analysis and assisted by qualitative data analysis software (NVivo 9.0). Data analysis revealed four main themes: 1) Senior Management believe that addictions services agencies are evidence-based; 2) Consensus-based decision making is the norm; 3) Senior Management understand the principles of EBP and; 4) Senior Management do not themselves use all components of the EBDM process when making decisions, oftentimes delegating components of this process to decision support staff. Senior Management possess an understanding of the principles of EBP, however, when making decisions they often delegate components of the EBDM process to decision support staff. Decision support staff are not defined as an occupational cluster in Canada's SAW and have not been ascribed a competency profile. As such, there is no guarantee that this group possesses competency in EBDM. 
There is a need to advocate for the development of a defined occupational cluster and associated competency profile for this critical group.
Beating the tyranny of scale with a private cloud configured for Big Data
NASA Astrophysics Data System (ADS)
Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag
2015-04-01
The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks - and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which by April 2015 will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment - ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively - even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. 
There are some limitations of the JASMIN environment: the high performance disk environment is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet fully available. There are load balancing and performance issues that need to be understood. We will conclude with projections for future usage, and our plans to meet those requirements.
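A data-management task like the checksumming workload mentioned above amounts to hashing files concurrently and collecting fixity digests. The sketch below fakes file contents in memory so it is self-contained; the paths, worker count, and hash choice are illustrative assumptions, not JASMIN's actual tooling:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def sha256_hex(data: bytes) -> str:
    """Digest for one file's contents (read from disk in a real run)."""
    return hashlib.sha256(data).hexdigest()

# In-memory stand-ins for archive files; a real run would stream
# bytes from the parallel filesystem instead.
fake_files = {f"/archive/file{i}.nc": bytes([i]) * 1024 for i in range(8)}

# Hash concurrently and pair each path with its digest.
with ThreadPoolExecutor(max_workers=4) as pool:
    digests = dict(zip(fake_files,
                       pool.map(sha256_hex, fake_files.values())))
```

At archive scale the same pattern is typically sharded across batch jobs, with each job checksumming a disjoint slice of the file catalogue.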
Kadambala, Ravi; Powell, Jon; Singh, Karamjit; Townsend, Timothy G
2016-12-01
Vertical liquids addition systems have been used at municipal landfills as a leachate management method and to enhance biostabilization of waste. Drawbacks of these systems include a limitation on pressurized injection and the occurrence of seepage. A novel vertical well system that employed buried wells constructed below a lift of compacted waste was operated for 153 days at a landfill in Florida, USA. The system included 54 wells installed in six clusters of nine wells connected with a horizontally-oriented manifold system. A cumulative volume of 8430 m³ of leachate was added intermittently into the well clusters over the duration of the project with no incidence of surface seeps. Achievable average flow rates ranged from 9.3 × 10⁻⁴ m³ s⁻¹ to 14.2 × 10⁻⁴ m³ s⁻¹, which was similar to or greater than flow rates achieved in a previous study using traditional vertical wells at the same landfill site. The results demonstrated that pressurized liquids addition in vertical wells at municipal solid waste landfills can be achieved while avoiding typical operational and maintenance issues associated with seeps. © The Author(s) 2016.
Experience on HTCondor batch system for HEP and other research fields at KISTI-GSDC
NASA Astrophysics Data System (ADS)
Ahn, S. U.; Jaikar, A.; Kong, B.; Yeo, I.; Bae, S.; Kim, J.
2017-10-01
Global Science experimental Data hub Center (GSDC) at Korea Institute of Science and Technology Information (KISTI), located at Daejeon in South Korea, is the only datacenter in the country that supports, with its computing resources, fundamental research fields dealing with large-scale data. For historical reasons it has run the Torque batch system, while recently it has started running HTCondor for new systems. Having different kinds of batch systems implies inefficiency in terms of resource management and utilization. We conducted a research on resource management with HTCondor for several user scenarios corresponding to the user environments that GSDC currently supports. A recent research on the resource usage patterns at GSDC is considered in this research to build the possible user scenarios. Checkpointing and the Super-Collector model of HTCondor give us a more efficient and flexible way to manage resources, and the Grid Gate provided by HTCondor helps to interface with the Grid environment. In this paper, the overview on the essential features of HTCondor exploited in this work is described and the practical examples for HTCondor cluster configuration in our cases are presented.
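For flavor, an HTCondor execute-node configuration of the kind such clusters rely on might look like the fragment below. This is an illustrative sketch using standard HTCondor knobs, not a configuration taken from the paper:

```
# condor_config fragment (illustrative): one partitionable slot per
# machine, so each job carves out exactly the CPUs, memory, and GPUs
# it requests, and leftover resources stay schedulable.
NUM_SLOTS = 1
NUM_SLOTS_TYPE_1 = 1
SLOT_TYPE_1 = cpus=100%,mem=100%,gpus=100%
SLOT_TYPE_1_PARTITIONABLE = TRUE

# Detect and advertise GPUs so jobs can use request_gpus.
use feature : GPUs
```

Jobs then request resources in their submit files (e.g. `request_cpus`, `request_memory`, `request_gpus`) and the negotiator matches them against the partitionable slot.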
Web Service Distributed Management Framework for Autonomic Server Virtualization
NASA Astrophysics Data System (ADS)
Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea
Virtualization for the x86 platform has imposed itself recently as a new technology that can improve the usage of machines in data centers and decrease the cost and energy of running a high number of servers. Similar to virtualization, autonomic computing, and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines - real or virtual - to use at a given time, and add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, so is the way the autonomic system is built: a robust and open framework is needed. One such management framework is the Web Service Distributed Management (WSDM) system, which is an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for application servers residing on virtual machines.
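The self-optimization loop at the heart of such a system reduces to a sizing policy plus provisioning actions. The thresholds and function below are illustrative assumptions, not the WSDM framework's API:

```python
def target_instances(current, utilization, low=0.30, high=0.75):
    """Desired cluster size given average utilization (0.0-1.0).

    Scale out when the cluster is hot, scale in when it is cold,
    and never drop below a single instance. Thresholds are
    hypothetical; a production policy would also damp oscillation.
    """
    if utilization > high:
        return current + 1          # provision one more server
    if utilization < low and current > 1:
        return current - 1          # deprovision one server
    return current                  # within the comfort band
```

An autonomic manager would call this each monitoring interval, then invoke the hypervisor's provisioning interface to converge the cluster toward the returned size.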
Clustering execution in a processing system to increase power savings
Bose, Pradip; Buyuktosunoglu, Alper; Jacobson, Hans M.; Vega, Augusto J.
2018-03-20
Embodiments relate to clustering execution in a processing system. An aspect includes accessing a control flow graph that defines a data dependency and an execution sequence of a plurality of tasks of an application that executes on a plurality of system components. The execution sequence of the tasks in the control flow graph is modified as a clustered control flow graph that clusters active and idle phases of a system component while maintaining the data dependency. The clustered control flow graph is sent to an operating system, where the operating system utilizes the clustered control flow graph for scheduling the tasks.
Region Templates: Data Representation and Management for High-Throughput Image Analysis
Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Klasky, Scott; Saltz, Joel
2015-01-01
We introduce a region template abstraction and framework for the efficient storage, management and processing of common data types in analysis of large datasets of high resolution images on clusters of hybrid computing nodes. The region template abstraction provides a generic container template for common data structures, such as points, arrays, regions, and object sets, within a spatial and temporal bounding box. It allows for different data management strategies and I/O implementations, while providing a homogeneous, unified interface to applications for data storage and retrieval. A region template application is represented as a hierarchical dataflow in which each computing stage may be represented as another dataflow of finer-grain tasks. The execution of the application is coordinated by a runtime system that implements optimizations for hybrid machines, including performance-aware scheduling for maximizing the utilization of computing devices and techniques to reduce the impact of data transfers between CPUs and GPUs. An experimental evaluation on a state-of-the-art hybrid cluster using a microscopy imaging application shows that the abstraction adds negligible overhead (about 3%) and achieves good scalability and high data transfer rates. Optimizations in a high speed disk based storage implementation of the abstraction to support asynchronous data transfers and computation result in an application performance gain of about 1.13×. Finally, a processing rate of 11,730 4K×4K tiles per minute was achieved for the microscopy imaging application on a cluster with 100 nodes (300 GPUs and 1,200 CPU cores). This computation rate enables studies with very large datasets. PMID:26139953
Security practices and regulatory compliance in the healthcare industry.
Kwon, Juhee; Johnson, M Eric
2013-01-01
Securing protected health information is a critical responsibility of every healthcare organization. We explore information security practices and identify practice patterns that are associated with improved regulatory compliance. We employed Ward's cluster analysis using minimum variance based on the adoption of security practices. Variance between organizations was measured using dichotomous data indicating the presence or absence of each security practice. Using t tests, we identified the relationships between the clusters of security practices and their regulatory compliance. We utilized the results from the Kroll/Healthcare Information and Management Systems Society telephone-based survey of 250 US healthcare organizations including adoption status of security practices, breach incidents, and perceived compliance levels on Health Information Technology for Economic and Clinical Health, Health Insurance Portability and Accountability Act, Red Flags rules, Centers for Medicare and Medicaid Services, and state laws governing patient information security. Our analysis identified three clusters (which we call leaders, followers, and laggers) based on the variance of security practice patterns. The clusters have significant differences among non-technical practices rather than technical practices, and the highest level of compliance was associated with hospitals that employed a balanced approach between technical and non-technical practices (or between one-off and cultural practices). Hospitals in the highest level of compliance were significantly managing third parties' breaches and training. Audit practices were important to those who scored in the middle of the pack on compliance. Our results provide security practice benchmarks for healthcare administrators and can help policy makers in developing strategic and practical guidelines for practice adoption.
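The clustering approach described above can be sketched with synthetic data: Ward's minimum-variance linkage over dichotomous adoption vectors, cut into three groups. The planted adoption rates and cluster count are illustrative, not the survey's actual structure:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)

# 30 synthetic organizations x 12 security practices, 0/1 adoption,
# with three planted adoption levels mimicking leaders/followers/laggers.
adoption = np.vstack([
    rng.random((10, 12)) < 0.9,   # near-universal adoption
    rng.random((10, 12)) < 0.5,   # mixed adoption
    rng.random((10, 12)) < 0.1,   # sparse adoption
]).astype(float)

# Ward's minimum-variance hierarchical clustering, cut at 3 clusters.
Z = linkage(adoption, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
```

Group-level compliance scores could then be compared across the resulting labels with t tests, as in the study.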
Alawneh, J I; Barnes, T S; Parke, C; Lapuz, E; David, E; Basinang, V; Baluyut, A; Villar, E; Lopez, E L; Blackall, P J
2014-05-01
A cross-sectional study was conducted between October 2011 and March 2012 in two major pig producing provinces in the Philippines. Four hundred and seventy-one pig farms slaughtering finisher pigs at government operated abattoirs participated in this study. The objectives of this study were to: (a) group smallholder (S) and commercial (C) production systems into patterns according to their herd health providers (HHPs), and obtain descriptive information about the grouped S and C production systems; and (b) identify key HHPs within each production system using social network analysis. On-farm veterinarians, private consultants, pharmaceutical company representatives, government veterinarians, livestock and agricultural technicians, and agricultural supply stores were found to be actively interacting with pig farmers. Four clusters were identified based on production system and choice of HHPs. Differences in management and biosecurity practices were found between S and C clusters. Private HHPs provided a service to larger C and some larger S farms, and had little or no interaction with the other HHPs. Government HHPs provided herd health service mainly to S farms and small C farms. Agricultural supply stores were identified as a dominant solitary HHP and provided herd health services to the majority of farmers. Increased knowledge of the routine management and biosecurity practices of S and C farmers and the key HHPs that are likely to be associated with those practices would be of value, as this information could be used to inform a risk-based approach to disease surveillance and control. Copyright © 2014 Elsevier B.V. All rights reserved.
Alcaraz, Saul; González-Saiz, Francisco; Trujols, Joan; Vergara-Moragues, Esperanza; Siñol, Núria; Pérez de Los Cobos, José
2018-06-01
Buprenorphine dosage is a crucial factor influencing outcomes of buprenorphine treatment for heroin use disorders. Therefore, the aim of the present study is to identify naturally occurring profiles of heroin-dependent patients regarding individualized management of buprenorphine dosage in clinical practice of buprenorphine-naloxone maintenance treatment. 316 patients receiving buprenorphine-naloxone maintenance treatment were surveyed at 16 Spanish centers during the stabilization phase of this treatment. Patients were grouped using cluster analysis based on three key indicators of buprenorphine dosage management: dose, adequacy according to physician, and adjustment according to patient. The clusters obtained were compared regarding different facets of patient clinical condition. Four clusters were identified and labeled as follows (buprenorphine average dose and percentage of participants in each cluster are given in brackets): "Clinically Adequate and Adjusted to Patient Desired Low Dosage" (2.60 mg/d, 37.05%); "Clinically Adequate and Adjusted to Patient Desired High Dosage" (10.71 mg/d, 29.18%); "Clinically Adequate and Patient Desired Reduction of Low Dosage" (3.38 mg/d, 20.0%); and "Clinically Inadequate and Adjusted to Patient Desired Moderate Dosage" (7.55 mg/d, 13.77%). Compared to patients from the other three clusters, participants in the latter cluster reported more frequent use of heroin and cocaine during last week, lower satisfaction with buprenorphine-naloxone as a medication, higher prevalence of buprenorphine-naloxone adverse effects and poorer psychological adjustment. Our results show notable differences between clusters of heroin-dependent patients regarding buprenorphine dosage management. We also identified a group of patients receiving clinically inadequate buprenorphine dosage, which was related to poorer clinical condition. Copyright © 2018 Elsevier B.V. All rights reserved.
Resource utilization groups. A patient classification system for long-term care.
Fries, B E; Cooney, L M
1985-02-01
The ability to understand, control, manage, regulate, and reimburse nursing home care has been hampered by the unavailability of a classification system of long-term care patients. A study of 1,469 patients in Connecticut nursing homes has resulted in such a classification system that clusters patients with similar relative needs for resources, in particular, for nursing time. The nine groups formed can be used to develop a case-mix profile of the relative care needs of these patients, and their development demonstrates that only a few measures of the functional status of patients, rather than diagnosis or psychosocial/behavioral problems, are sufficient to form such a system.
Certification of Completion of ASC FY08 Level-2 Milestone ID #2933
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipari, D A
2008-06-12
This report documents the satisfaction of the completion criteria associated with ASC FY08 Milestone ID No. 2933: 'Deploy Moab resource management services on BlueGene/L'. Specifically, this milestone represents LLNL efforts to enhance both SLURM and Moab to extend Moab's capabilities to schedule and manage BlueGene/L, and to increase the portability of user scripts between ASC systems. The completion criteria for the milestone are the following: (1) Batch jobs can be specified, submitted to Moab, scheduled and run on the BlueGene/L system; (2) Moab will be able to support the markedly increased scale in node count as well as the wiring geometry that is unique to BlueGene/L; and (3) Moab will also prepare and report statistics of job CPU usage just as it does for the current systems it supports. This document presents the completion evidence for both of the stated milestone certification methods: Completion evidence for this milestone will be in the form of (1) documentation--a report that certifies that the completion criteria have been met; and (2) user hand-off. As the selected Tri-Lab workload manager, Moab was chosen to replace LCRM as the enterprise-wide scheduler across Livermore Computing (LC) systems. While LCRM/SLURM successfully scheduled jobs on BG/L, the effort to replace LCRM with Moab on BG/L represented a significant challenge. Moab is a commercial product developed and sold by Cluster Resources, Inc. (CRI). Moab receives the users' batch job requests and dispatches these jobs to run on a specific cluster. SLURM is an open-source resource manager whose development is managed by members of the Integrated Computational Resource Management Group (ICRMG) within the Services and Development Division at LLNL. SLURM is responsible for launching and running jobs on an individual cluster. Replacing LCRM with Moab on BG/L required substantial changes to both Moab and SLURM.
While the ICRMG could directly manage the SLURM development effort, the work to enhance Moab had to be done by Moab's vendor. Members of the ICRMG held many meetings with CRI developers to develop the design and specify the requirements for what Moab needed to do. Extensions to SLURM are used to run jobs on the BlueGene/L architecture. These extensions support the three-dimensional network topology unique to BG/L. While BG/L geometry support was already in SLURM, enhancements were needed to provide backfill capability and answer 'will-run' queries from Moab. For its part, the Moab architecture needed to be modified to interact with SLURM in a more coordinated way. It needed enhancements to support SLURM's shorthand notation for representing thousands of compute nodes and report this information using Moab's existing status commands. The LCRM wrapper scripts that emulated LCRM commands also needed to be enhanced to support BG/L usage. The effort was successful as Moab 5.2.2 and SLURM 1.3 were installed on the 106,496-node BG/L machine on May 21, 2008, and turned over to the users to run production.
Massoud, May A; Tarhini, Akram; Nasr, Joumana A
2009-01-01
Providing reliable and affordable wastewater treatment in rural areas is a challenge in many parts of the world, particularly in developing countries. The problems and limitations of the centralized approaches for wastewater treatment are progressively surfacing. Centralized wastewater collection and treatment systems are costly to build and operate, especially in areas with low population densities and dispersed households. Developing countries lack both the funding to construct centralized facilities and the technical expertise to manage and operate them. Alternatively, the decentralized approach for wastewater treatment, which employs a combination of onsite and/or cluster systems, is gaining more attention. Such an approach allows for flexibility in management, and simple as well as complex technologies are available. The decentralized system is not only a long-term solution for small communities but is more reliable and cost-effective. This paper presents a review of the various decentralized approaches to wastewater treatment and management. Their applicability in developing countries, primarily in rural areas, and the challenges faced are discussed throughout the paper. While there are many impediments and challenges towards wastewater management in developing countries, these can be overcome by suitable planning and policy implementation. Understanding the receiving environment is crucial for technology selection and should be accomplished by conducting a comprehensive site evaluation process. Centralized management of the decentralized wastewater treatment systems is essential to ensure they are inspected and maintained regularly. Management strategies should be site specific accounting for social, cultural, environmental and economic conditions in the target area.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-07
... economically dynamic regional innovation cluster focused on energy efficient buildings technologies and systems... DEPARTMENT OF ENERGY Energy Efficient Building Systems Regional Innovation Cluster Initiative... February 8, 2010, titled the Energy Efficient Building Systems Regional Innovation Cluster Initiative. A...
How Teachers Use and Manage Their Blogs? A Cluster Analysis of Teachers' Blogs in Taiwan
ERIC Educational Resources Information Center
Liu, Eric Zhi-Feng; Hou, Huei-Tse
2013-01-01
The development of Web 2.0 has ushered in a new set of web-based tools, including blogs. This study focused on how teachers use and manage their blogs. A sample of 165 teachers' blogs in Taiwan was analyzed by factor analysis, cluster analysis and qualitative content analysis. First, the teachers' blogs were analyzed according to six criteria…
Dynamic Task Optimization in Remote Diabetes Monitoring Systems.
Suh, Myung-Kyung; Woodbridge, Jonathan; Moin, Tannaz; Lan, Mars; Alshurafa, Nabil; Samy, Lauren; Mortazavi, Bobak; Ghasemzadeh, Hassan; Bui, Alex; Ahmadi, Sheila; Sarrafzadeh, Majid
2012-09-01
Diabetes is the seventh leading cause of death in the United States, but careful symptom monitoring can prevent adverse events. A real-time patient monitoring and feedback system is one of the solutions to help patients with diabetes and their healthcare professionals monitor health-related measurements and provide dynamic feedback. However, data-driven methods to dynamically prioritize and generate tasks are not well investigated in the domain of remote health monitoring. This paper presents a wireless health project (WANDA) that leverages sensor technology and wireless communication to monitor the health status of patients with diabetes. The WANDA dynamic task management function applies data analytics in real time to discretize continuous features, applying data clustering and association rule mining techniques to manage a sliding window size dynamically and to prioritize required user tasks. The developed algorithm minimizes the number of daily action items required by patients with diabetes using association rules that satisfy minimum support, confidence, and conditional probability thresholds. Each of these tasks maximizes information gain, thereby improving the overall level of patient adherence and satisfaction. Experimental results from applying EM-based clustering and Apriori algorithms show that the developed algorithm can predict further events with higher confidence levels and reduce the number of user tasks by up to 76.19%.
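The rule-filtering step the abstract describes, keeping only association rules that clear minimum support and confidence thresholds, can be sketched as follows. The transactions, item names, and threshold values are invented for illustration and are not WANDA's actual features:

```python
from itertools import combinations

# Hypothetical daily logs: each transaction is the set of observations
# recorded for one patient-day (item names are invented).
transactions = [
    {"high_glucose", "missed_exercise", "weight_gain"},
    {"high_glucose", "missed_exercise"},
    {"normal_glucose", "exercise_done"},
    {"high_glucose", "weight_gain"},
    {"high_glucose", "missed_exercise", "weight_gain"},
]

MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.7
n = len(transactions)

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / n

# Apriori-style pass: frequent single items first, then frequent pairs
# built only from frequent singles (the downward-closure pruning step).
items = {i for t in transactions for i in t}
freq1 = {frozenset([i]) for i in items if support({i}) >= MIN_SUPPORT}
freq2 = {a | b for a, b in combinations(freq1, 2)
         if support(a | b) >= MIN_SUPPORT}

# Keep rules A -> B whose confidence = support(A∪B)/support(A) clears the bar.
rules = []
for pair in freq2:
    for antecedent in pair:
        A, B = frozenset([antecedent]), pair - {antecedent}
        conf = support(pair) / support(A)
        if conf >= MIN_CONFIDENCE:
            rules.append((antecedent, ",".join(sorted(B)), round(conf, 2)))

print(sorted(rules))
```

In a system like the one described, only the surviving high-confidence rules would drive which daily tasks are pushed to the patient.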
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
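The multi-key idea, one scan of the input serving several registered algorithms with intermediate values tagged by algorithm name, can be sketched in a few lines. This is a single-process illustration of the concept, not MRPack's Hadoop implementation, and the algorithm set is invented:

```python
from collections import defaultdict

# Toy records; in a real MR job these would come from input splits on the DFS.
records = [3, 1, 4, 1, 5, 9, 2, 6]

# Several related "algorithms" registered as name -> (map_fn, reduce_fn).
# Tagging each intermediate pair with the algorithm name is the multi-key
# idea: one map pass over the data serves every algorithm in the set.
algorithms = {
    "sum":   (lambda x: x, lambda vals: sum(vals)),
    "count": (lambda x: 1, lambda vals: sum(vals)),
    "max":   (lambda x: x, lambda vals: max(vals)),
}

# Map phase: a single scan over the data emits (algo_name, value) pairs.
intermediate = defaultdict(list)
for rec in records:
    for name, (map_fn, _) in algorithms.items():
        intermediate[name].append(map_fn(rec))

# Reduce phase: each algorithm's reducer sees only its own key's values.
results = {name: reduce_fn(intermediate[name])
           for name, (_, reduce_fn) in algorithms.items()}
print(results)  # {'sum': 31, 'count': 8, 'max': 9}
```

The savings come from reading the input once instead of once per algorithm; skew mitigation would further balance the per-key value lists across reducers.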
Clustering execution in a processing system to increase power savings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Pradip; Buyuktosunoglu, Alper; Jacobson, Hans M.
Embodiments relate to clustering execution in a processing system. An aspect includes accessing a control flow graph that defines a data dependency and an execution sequence of a plurality of tasks of an application that executes on a plurality of system components. The execution sequence of the tasks in the control flow graph is modified as a clustered control flow graph that clusters active and idle phases of a system component while maintaining the data dependency. The clustered control flow graph is sent to an operating system, where the operating system utilizes the clustered control flow graph for scheduling the tasks.
Performance indicators and indices of sludge management in urban wastewater treatment plants.
Silva, C; Saldanha Matos, J; Rosa, M J
2016-12-15
Sludge (or biosolids) management is highly complex and has a significant cost associated with the biosolids disposal, as well as with the energy and flocculant consumption in the sludge processing units. The sludge management performance indicators (PIs) and indices (PXs) are thus core measures of the performance assessment system developed for urban wastewater treatment plants (WWTPs). The key PIs proposed cover the sludge unit production and dry solids concentration (DS), disposal/beneficial use, quality compliance for agricultural use and costs, whereas the complementary PIs assess the plant reliability and the chemical reagents' use. A key PI was also developed for assessing the phosphorus reclamation, namely through the beneficial use of the biosolids and the reclaimed water in agriculture. The results of a field study with 17 Portuguese urban WWTPs in a 5-year period were used to derive the PI reference values, which are neither inherent to the PI formulation nor literature-based. Clusters by sludge type (primary, activated, trickling filter and mixed sludge) and by digestion and dewatering processes were analysed and the reference values for sludge production and dry solids were proposed for two clusters: activated sludge or biofilter WWTPs with primary sedimentation, sludge anaerobic digestion and centrifuge dewatering; activated sludge WWTPs without primary sedimentation and anaerobic digestion and with centrifuge dewatering. The key PXs are computed for the DS after each processing unit and the complementary PXs for the energy consumption and the operating conditions that determine DS. The PX reference values are treatment-specific and literature-based. The PI and PX system was applied to a WWTP and the results demonstrate that it diagnoses the situation and indicates opportunities and measures for improving the WWTP performance in sludge management. Copyright © 2016 Elsevier Ltd. All rights reserved.
Alternative states of a semiarid grassland ecosystem: implications for ecosystem services
Miller, Mark E.; Belote, R. Travis; Bowker, Matthew A.; Garman, Steven L.
2011-01-01
Ecosystems can shift between alternative states characterized by persistent differences in structure, function, and capacity to provide ecosystem services valued by society. We examined empirical evidence for alternative states in a semiarid grassland ecosystem where topographic complexity and contrasting management regimes have led to spatial variations in levels of livestock grazing. Using an inventory data set, we found that plots (n = 72) cluster into three groups corresponding to generalized alternative states identified in an a priori conceptual model. One cluster (biocrust) is notable for high coverage of a biological soil crust functional group in addition to vascular plants. Another (grass-bare) lacks biological crust but retains perennial grasses at levels similar to the biocrust cluster. A third (annualized-bare) is dominated by invasive annual plants. Occurrence of grass-bare and annualized-bare conditions in areas where livestock have been excluded for over 30 years demonstrates the persistence of these states. Significant differences among all three clusters were found for percent bare ground, percent total live cover, and functional group richness. Using data for vegetation structure and soil erodibility, we also found large among-cluster differences in average levels of dust emissions predicted by a wind-erosion model. Predicted emissions were highest for the annualized-bare cluster and lowest for the biocrust cluster, which was characterized by zero or minimal emissions even under conditions of extreme wind. Results illustrate potential trade-offs among ecosystem services including livestock production, soil retention, carbon storage, and biodiversity conservation. Improved understanding of these trade-offs may assist ecosystem managers when evaluating alternative management strategies.
Community involvement in dengue vector control: cluster randomised trial.
Vanlerberghe, V; Toledo, M E; Rodríguez, M; Gómez, D; Baly, A; Benítez, J R; Van der Stuyft, P
2010-01-01
To assess the effectiveness of an integrated community based environmental management strategy to control Aedes aegypti, the vector of dengue, compared with a routine strategy. Design: Cluster randomised trial. Setting: Guantanamo, Cuba. Participants: 32 circumscriptions (around 2000 inhabitants each). Interventions: The circumscriptions were randomly allocated to control clusters (n=16) comprising routine Aedes control programme (entomological surveillance, source reduction, selective adulticiding, and health education) and to intervention clusters (n=16) comprising the routine Aedes control programme combined with a community based environmental management approach. The primary outcome was levels of Aedes infestation: house index (number of houses positive for at least one container with immature stages of Ae aegypti per 100 inspected houses), Breteau index (number of containers positive for immature stages of Ae aegypti per 100 inspected houses), and the pupae per inhabitant statistic (number of Ae aegypti pupae per inhabitant). All clusters were subjected to the intended intervention; all completed the study protocol up to February 2006 and all were included in the analysis. At baseline the Aedes infestation levels were comparable between intervention and control clusters: house index 0.25% v 0.20%, pupae per inhabitant 0.44 × 10⁻³ v 0.29 × 10⁻³. At the end of the intervention these indices were significantly lower in the intervention clusters: rate ratio for house indices 0.49 (95% confidence interval 0.27 to 0.88) and rate ratio for pupae per inhabitant 0.27 (0.09 to 0.76). A community based environmental management embedded in a routine control programme was effective at reducing levels of Aedes infestation. Trial Registration: Current Controlled Trials ISRCTN88405796.
Webster, Gordon; Embley, T Martin; Prosser, James I
2002-01-01
The impact of soil management practices on ammonia oxidizer diversity and spatial heterogeneity was determined in improved (addition of N fertilizer), unimproved (no additions), and semi-improved (intermediate management) grassland pastures at the Sourhope Research Station in Scotland. Ammonia oxidizer diversity within each grassland soil was assessed by PCR amplification of microbial community DNA with both ammonia oxidizer-specific, 16S rRNA gene (rDNA) and functional, amoA, gene primers. PCR products were analysed by denaturing gradient gel electrophoresis, phylogenetic analysis of partial 16S rDNA and amoA sequences, and hybridization with ammonia oxidizer-specific oligonucleotide probes. Ammonia oxidizer populations in unimproved soils were more diverse than those in improved soils and were dominated by organisms representing Nitrosospira clusters 1 and 3 and Nitrosomonas cluster 7 (closely related phylogenetically to Nitrosomonas europaea). Improved soils were only dominated by Nitrosospira cluster 3 and Nitrosomonas cluster 7. These differences were also reflected in functional gene (amoA) diversity, with amoA gene sequences of both Nitrosomonas and Nitrosospira species detected. Replicate 0.5-g samples of unimproved soil demonstrated significant spatial heterogeneity in 16S rDNA-defined ammonia oxidizer clusters, which was reflected in heterogeneity in ammonium concentration and pH. Heterogeneity in soil characteristics and ammonia oxidizer diversity were lower in improved soils. The results therefore demonstrate significant effects of soil management on diversity and heterogeneity of ammonia oxidizer populations that are related to similar changes in relevant soil characteristics.
Distributed cluster management techniques for unattended ground sensor networks
NASA Astrophysics Data System (ADS)
Essawy, Magdi A.; Stelzig, Chad A.; Bevington, James E.; Minor, Sharon
2005-05-01
Smart Sensor Networks are becoming important target detection and tracking tools. The challenging problems in such networks include sensor fusion, data management, and communication schemes. This work discusses techniques used to distribute sensor management and multi-target tracking responsibilities across an ad hoc, self-healing cluster of sensor nodes. Although miniaturized computing resources possess the ability to host complex tracking and data fusion algorithms, there still exist inherent bandwidth constraints on the RF channel. Therefore, special attention is placed on the reduction of node-to-node communications within the cluster by minimizing unsolicited messaging, and distributing the sensor fusion and tracking tasks onto local portions of the network. Several challenging problems are addressed in this work including track initialization and conflict resolution, track ownership handling, and communication control optimization. Emphasis is also placed on increasing the overall robustness of the sensor cluster through independent decision capabilities on all sensor nodes. Track initiation is performed using collaborative sensing within a neighborhood of sensor nodes, allowing each node to independently determine if initial track ownership should be assumed. This autonomous track initiation prevents the formation of duplicate tracks while eliminating the need for a central "management" node to assign tracking responsibilities. Track update is performed as an ownership node requests sensor reports from neighboring nodes based on track error covariance and the neighboring nodes' geo-positional locations. Track ownership is periodically recomputed using propagated track states to determine which sensing node provides the desired coverage characteristics.
High fidelity multi-target simulation results are presented, indicating the distribution of sensor management and tracking capabilities to not only reduce communication bandwidth consumption, but to also simplify multi-target tracking within the cluster.
MPIGeneNet: Parallel Calculation of Gene Co-Expression Networks on Multicore Clusters.
Gonzalez-Dominguez, Jorge; Martin, Maria J
2017-10-10
In this work we present MPIGeneNet, a parallel tool that applies Pearson's correlation and Random Matrix Theory to construct gene co-expression networks. It is based on the state-of-the-art sequential tool RMTGeneNet, which provides networks with high robustness and sensitivity at the expense of relatively long runtimes for large-scale input datasets. MPIGeneNet returns the same results as RMTGeneNet but improves the memory management, reduces the I/O cost, and accelerates the two most computationally demanding steps of co-expression network construction by exploiting the compute capabilities of common multicore CPU clusters. Our performance evaluation on two different systems using three typical input datasets shows that MPIGeneNet is significantly faster than RMTGeneNet. As an example, our tool is up to 175.41 times faster on a cluster with eight nodes, each one containing two 12-core Intel Haswell processors. The source code of MPIGeneNet, as well as a reference manual, is available at https://sourceforge.net/projects/mpigenenet/.
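The core computation such tools parallelize, an all-pairs Pearson correlation over an expression matrix thresholded into a network adjacency matrix, can be sketched with NumPy. The matrix below is synthetic, and the fixed 0.8 cutoff merely stands in for the RMT-derived threshold the real tool computes:

```python
import numpy as np

# Hypothetical expression matrix: genes x samples (values invented).
rng = np.random.default_rng(1)
expr = rng.normal(size=(6, 20))                            # 6 genes, 20 samples
expr[1] = expr[0] * 0.9 + rng.normal(scale=0.1, size=20)   # plant a correlated pair

# All-pairs Pearson correlation; this O(genes^2 * samples) step is what
# a parallel tool distributes across ranks/threads.
corr = np.corrcoef(expr)

# Threshold |r| into an adjacency matrix, excluding self-loops. (RMT-based
# tools derive the cutoff from eigenvalue statistics; 0.8 is arbitrary here.)
adj = (np.abs(corr) >= 0.8) & ~np.eye(len(expr), dtype=bool)
print(adj[0, 1], int(adj.sum()) // 2)  # planted edge present?, total edge count
```

Real gene sets have tens of thousands of rows, which is why the correlation and thresholding steps dominate the runtime being parallelized.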
High performance data transfer
NASA Astrophysics Data System (ADS)
Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.
2017-10-01
The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy to deploy and use way while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, between clusters we have achieved almost 200 Gbps memory to memory over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5,000-mile 100 Gbps link.
MWAHCA: a multimedia wireless ad hoc cluster architecture.
Diaz, Juan R; Lloret, Jaime; Jimenez, Jose M; Sendra, Sandra
2014-01-01
Wireless ad hoc networks provide a flexible and adaptable infrastructure to transport data over a great variety of environments. Recently, real-time audio and video data transmission has increased due to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams when they have passed through a wireless ad hoc network. It requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture to organize and manage cluster-based ad hoc networks in order to provide multimedia streams. The proposed architecture adapts the network wireless topology in order to improve the quality of audio and video transmissions. In order to achieve this goal, the architecture uses some information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters which are specialized in specific multimedia traffic. The real system performance study provided at the end of the paper demonstrates the feasibility of the proposal.
A qualitative evaluation of medication management services in six Minnesota health systems.
Sorensen, Todd D; Pestka, Deborah; Sorge, Lindsay A; Wallace, Margaret L; Schommer, Jon
2016-03-01
The initiation, establishment, and sustainability of medication management programs in six Minnesota health systems are described. Six Minnesota health systems with well-established medication management programs were invited to participate in this study: Essentia Health, Fairview Health Services, HealthPartners, Hennepin County Medical Center, Mayo Clinic, and Park Nicollet Health Services. Qualitative methods were employed by conducting group interviews with key staff from each institution who were influential in the development of medication management services within their organization. Kotter's theory of eight steps for leading organizational change served as the framework for the question guide. The interviews were audio recorded, transcribed, and analyzed for recurring and emergent themes. A total of 13 distinct themes were associated with the successful integration of medication management services across the six healthcare systems. Identified themes clustered within three stages of Kotter's model for leading organizational change: creating a climate for change, engaging and enabling the whole organization, and implementing and sustaining change. The 13 themes included (1) external influences, (2) pharmacists as an untapped resource, (3) principles and professionalism, (4) organizational culture, (5) momentum champions, (6) collaborative relationships, (7) service promotion, (8) team-based care, (9) implementation strategies, (10) overcoming challenges, (11) supportive care model process, (12) measuring and reporting results, and (13) sustainability strategies. A qualitative survey of six health systems that successfully implemented medication management services in ambulatory care clinics revealed that a supportive culture and team-based collaborative care are among the themes identified as necessary for service sustainability. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Identification and validation of asthma phenotypes in Chinese population using cluster analysis.
Wang, Lei; Liang, Rui; Zhou, Ting; Zheng, Jing; Liang, Bing Miao; Zhang, Hong Ping; Luo, Feng Ming; Gibson, Peter G; Wang, Gang
2017-10-01
Asthma is a heterogeneous airway disease, so it is crucial to clearly identify clinical phenotypes to achieve better asthma management. To identify and prospectively validate asthma clusters in a Chinese population. Two hundred eighty-four patients were consecutively recruited and 18 sociodemographic and clinical variables were collected. Hierarchical cluster analysis was performed by the Ward method followed by k-means cluster analysis. Then, a prospective 12-month cohort study was used to validate the identified clusters. Five clusters were successfully identified. Clusters 1 (n = 71) and 3 (n = 81) were mild asthma phenotypes with slight airway obstruction and low exacerbation risk, but with a sex differential. Cluster 2 (n = 65) described an "allergic" phenotype, cluster 4 (n = 33) featured a "fixed airflow limitation" phenotype with smoking, and cluster 5 (n = 34) was a "low socioeconomic status" phenotype. Patients in clusters 2, 4, and 5 had distinctly lower socioeconomic status and more psychological symptoms. Cluster 2 had a significantly increased risk of exacerbations (risk ratio [RR] 1.13, 95% confidence interval [CI] 1.03-1.25), unplanned visits for asthma (RR 1.98, 95% CI 1.07-3.66), and emergency visits for asthma (RR 7.17, 95% CI 1.26-40.80). Cluster 4 had an increased risk of unplanned visits (RR 2.22, 95% CI 1.02-4.81), and cluster 5 had increased emergency visits (RR 12.72, 95% CI 1.95-69.78). Kaplan-Meier analysis confirmed that cluster grouping was predictive of time to the first asthma exacerbation, unplanned visit, emergency visit, and hospital admission (P < .0001 for all comparisons). We identified 3 clinical clusters as "allergic asthma," "fixed airflow limitation," and "low socioeconomic status" phenotypes that are at high risk of severe asthma exacerbations and that have management implications for clinical practice in developing countries.
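The two-stage procedure described above (Ward hierarchical clustering to choose the number of clusters, followed by k-means refinement) can be sketched in Python. This is a minimal illustration of the k-means stage only; the data points and variable names are hypothetical, not the study's dataset, and a real analysis would first standardize the 18 clinical variables and run the hierarchical step.

```python
import math

def kmeans(points, k, iters=100):
    """Plain Lloyd's algorithm: assign each point to the nearest
    centroid, then recompute centroids, until assignments settle."""
    # Naive init: first k points (a real implementation would randomize
    # or seed from the hierarchical-clustering result).
    centroids = list(points[:k])
    assign = [0] * len(points)
    for _ in range(iters):
        new_assign = [
            min(range(k), key=lambda c: math.dist(p, centroids[c]))
            for p in points
        ]
        if new_assign == assign:
            break
        assign = new_assign
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:  # keep old centroid if the cluster emptied
                centroids[c] = tuple(
                    sum(x) / len(members) for x in zip(*members)
                )
    return assign, centroids

# Hypothetical standardized clinical variables (e.g. FEV1 % predicted
# vs. symptom score) for six patients forming two obvious groups.
pts = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels, cents = kmeans(pts, k=2)
```

In the study's pipeline, the hierarchical step supplies the number of clusters (five) and plausible starting centroids, which makes the k-means refinement far less sensitive to initialization than the naive version above.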
Roets-Merken, Lieve M; Graff, Maud J L; Zuidema, Sytse U; Hermsen, Pieter G J M; Teerenstra, Steven; Kempen, Gertrudis I J M; Vernooij-Dassen, Myrra J F J
2013-10-07
Five to 25 percent of residents in aged care settings have a combined hearing and visual sensory impairment. Usual care is generally restricted to single sensory impairment, neglecting the consequences of dual sensory impairment on social participation and autonomy. The aim of this study is to evaluate the effectiveness of a self-management program for seniors who acquired dual sensory impairment at old age. In a cluster randomized, single-blind controlled trial, with aged care settings as the unit of randomization, the effectiveness of a self-management program will be compared to usual care. A minimum of 14 and maximum of 20 settings will be randomized to either the intervention cluster or the control cluster, aiming to include a total of 132 seniors with dual sensory impairment. Each senior will be linked to a licensed practical nurse working at the setting. During a five to six month intervention period, nurses at the intervention clusters will be trained in a self-management program to support and empower seniors to use self-management strategies. In two separate diaries, nurses keep track of the interviews with the seniors and their reflections on their own learning process. Nurses of the control clusters offer care as usual. At senior level, the primary outcome is the social participation of the seniors measured using the Hearing Handicap Questionnaire and the Activity Card Sort, and secondary outcomes are mood, autonomy and quality of life. At nurse level, the outcome is job satisfaction. Effectiveness will be evaluated using linear mixed model analysis. The results of this study will provide evidence for the effectiveness of the Self-Management Program for seniors with dual sensory impairment living in aged care settings. The findings are expected to contribute to the knowledge on the program's potential to enhance social participation and autonomy of the seniors, as well as increasing the job satisfaction of the licensed practical nurses. 
Furthermore, an extensive process evaluation will take place, which will offer insight into the quality and feasibility of the sampling and intervention process. If it is shown to be effective and feasible, this Self-Management Program could be widely disseminated. ClinicalTrials.gov, NCT01217502.
2013-01-01
Background: Five to 25 percent of residents in aged care settings have a combined hearing and visual sensory impairment. Usual care is generally restricted to single sensory impairment, neglecting the consequences of dual sensory impairment on social participation and autonomy. The aim of this study is to evaluate the effectiveness of a self-management program for seniors who acquired dual sensory impairment at old age. Methods/Design: In a cluster randomized, single-blind controlled trial, with aged care settings as the unit of randomization, the effectiveness of a self-management program will be compared to usual care. A minimum of 14 and maximum of 20 settings will be randomized to either the intervention cluster or the control cluster, aiming to include a total of 132 seniors with dual sensory impairment. Each senior will be linked to a licensed practical nurse working at the setting. During a five to six month intervention period, nurses at the intervention clusters will be trained in a self-management program to support and empower seniors to use self-management strategies. In two separate diaries, nurses keep track of the interviews with the seniors and their reflections on their own learning process. Nurses of the control clusters offer care as usual. At senior level, the primary outcome is the social participation of the seniors measured using the Hearing Handicap Questionnaire and the Activity Card Sort, and secondary outcomes are mood, autonomy and quality of life. At nurse level, the outcome is job satisfaction. Effectiveness will be evaluated using linear mixed model analysis. Discussion: The results of this study will provide evidence for the effectiveness of the Self-Management Program for seniors with dual sensory impairment living in aged care settings.
The findings are expected to contribute to the knowledge on the program’s potential to enhance social participation and autonomy of the seniors, as well as to increase the job satisfaction of the licensed practical nurses. Furthermore, an extensive process evaluation will take place, which will offer insight into the quality and feasibility of the sampling and intervention process. If it is shown to be effective and feasible, this Self-Management Program could be widely disseminated. Clinical trials registration: ClinicalTrials.gov, NCT01217502. PMID:24099315
Computational strategies for three-dimensional flow simulations on distributed computer systems
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Weed, Richard A.
1995-01-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing distributed computing in a device-independent fashion, as well as load balancing. A flow solver called TEAM, presently in use at Lockheed Aeronautical Systems Company, was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms, including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha workstations in the graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. (5) The implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
Computational strategies for three-dimensional flow simulations on distributed computer systems
NASA Astrophysics Data System (ADS)
Sankar, Lakshmi N.; Weed, Richard A.
1995-08-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing distributed computing in a device-independent fashion, as well as load balancing. A flow solver called TEAM, presently in use at Lockheed Aeronautical Systems Company, was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms, including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha workstations in the graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. (5) The implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
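The manager-worker communication strategy evaluated for the TEAM solver can be illustrated with a minimal task-queue sketch. This is a hypothetical stand-in using Python threads rather than PVM processes, and the "work" here is a placeholder for a flow sub-domain computation:

```python
import queue
import threading

def manager_worker(tasks, n_workers, work_fn):
    """Manager places tasks on a shared queue; idle workers pull the
    next task, so faster workers naturally take on more work
    (the dynamic load balancing property of the manager-worker scheme)."""
    task_q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                item = task_q.get_nowait()
            except queue.Empty:
                return  # queue drained: this worker is done
            r = work_fn(item)
            with lock:
                results.append(r)

    for t in tasks:                 # manager enqueues all tasks first
        task_q.put(t)
    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

# Placeholder "sub-domain" work: square each block id.
out = manager_worker(range(10), n_workers=4, work_fn=lambda i: i * i)
```

The contrast with a static scheme is that no worker is pre-assigned a fixed share of tasks; load imbalance is absorbed by the shared queue, which is the behavior the task-queue balancing experiments above evaluate.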
Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko
2004-03-22
ClusterControl is a web interface that simplifies distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies: Apache as the web server, PHP as the server-side scripting language, and OpenPBS as the queuing system. It is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl
Cluster dynamics and cluster size distributions in systems of self-propelled particles
NASA Astrophysics Data System (ADS)
Peruani, F.; Schimansky-Geier, L.; Bär, M.
2010-12-01
Systems of self-propelled particles (SPP) interacting by a velocity alignment mechanism in the presence of noise exhibit rich clustering dynamics. Often, clusters are responsible for the distribution of (local) information in these systems. Here, we investigate the properties of individual clusters in SPP systems, in particular the asymmetric spreading behavior of clusters with respect to their direction of motion. In addition, we formulate a Smoluchowski-type kinetic model to describe the evolution of the cluster size distribution (CSD). This model predicts the emergence of steady-state CSDs in SPP systems. We test our theoretical predictions in simulations of SPP with nematic interactions and find that our simple kinetic model reproduces qualitatively the transition to aggregation observed in simulations.
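A Smoluchowski-type description, as used above, evolves the number density n_k of clusters of size k through binary merging (and, in the full model, fragmentation). A minimal forward-Euler sketch of the pure-coagulation part with a constant kernel K is given below; the paper's actual size-dependent merging and splitting rates for SPP are not reproduced here.

```python
def smoluchowski_step(n, K, dt):
    """One forward-Euler step of the pure-coagulation Smoluchowski
    equations, truncated at maximum size N = len(n):
        dn_k/dt = (1/2) * sum_{i+j=k} K n_i n_j  -  n_k * sum_j K n_j
    Merges that would exceed size N are forbidden, so total mass
    (sum of k * n_k) is conserved exactly.
    Convention: n[k-1] is the number density of clusters of size k."""
    N = len(n)
    dn = [0.0] * N
    for i in range(1, N + 1):
        for j in range(1, N + 1 - i):       # only merges with i + j <= N
            rate = K * n[i - 1] * n[j - 1]
            dn[i + j - 1] += 0.5 * rate     # gain of an (i+j)-cluster
            dn[i - 1] -= 0.5 * rate         # loss of the i-cluster
            dn[j - 1] -= 0.5 * rate         # loss of the j-cluster
    return [x + dt * d for x, d in zip(n, dn)]

# Start from monomers only and iterate toward a steady shape.
n = [1.0] + [0.0] * 19
for _ in range(200):
    n = smoluchowski_step(n, K=1.0, dt=0.01)
```

In the full kinetic model for SPP, adding size-dependent fragmentation terms balances coagulation and yields the steady-state cluster size distributions whose emergence the paper predicts.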
Boyack, Kevin W; Chen, Mei-Ching; Chacko, George
2014-01-01
The National Institutes of Health (NIH) is the largest source of funding for biomedical research in the world. This funding is largely effected through a competitive grants process. Each year the Center for Scientific Review (CSR) at NIH manages the evaluation, by peer review, of more than 55,000 grant applications. A relevant management question is how this scientific evaluation system, supported by finite resources, could be continuously evaluated and improved for maximal benefit to the scientific community and the taxpaying public. Towards this purpose, we have created the first system-level description of peer review at CSR by applying text analysis, bibliometric, and graph visualization techniques to administrative records. We identify otherwise latent relationships across scientific clusters, which in turn suggest opportunities for structural reorganization of the system based on expert evaluation. Such studies support the creation of monitoring tools and provide transparency and knowledge to stakeholders.
The Chandra Source Catalog 2.0: Building The Catalog
NASA Astrophysics Data System (ADS)
Grier, John D.; Plummer, David A.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
To build release 2.0 of the Chandra Source Catalog (CSC2), we require scientific software tools and processing pipelines to evaluate and analyze the data. Additionally, software and hardware infrastructure is needed to coordinate and distribute pipeline execution, manage data I/O, and handle data for Quality Assurance (QA) intervention. We also provide data product staging for archive ingestion. Release 2 utilizes a database-driven system for integration and production. Included are four distinct instances of the Automatic Processing (AP) system (Source Detection, Master Match, Source Properties and Convex Hulls) and a high-performance computing (HPC) cluster that is managed to provide efficient catalog processing. In this poster we highlight the internal systems developed to meet the CSC2 challenge. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
Data Analytics for Smart Parking Applications.
Piovesan, Nicola; Turi, Leo; Toigo, Enrico; Martinez, Borja; Rossi, Michele
2016-09-23
We consider real-life smart parking systems where parking lot occupancy data are collected from field sensor devices and sent to backend servers for further processing and usage for applications. Our objective is to make these data useful to end users, such as parking managers, and, ultimately, to citizens. To this end, we concoct and validate an automated classification algorithm having two objectives: (1) outlier detection: to detect sensors with anomalous behavioral patterns, i.e., outliers; and (2) clustering: to group the parking sensors exhibiting similar patterns into distinct clusters. We first analyze the statistics of real parking data, obtaining suitable simulation models for parking traces. We then consider a simple classification algorithm based on the empirical complementary distribution function of occupancy times and show its limitations. Hence, we design a more sophisticated algorithm exploiting unsupervised learning techniques (self-organizing maps). These are tuned following a supervised approach using our trace generator and are compared against other clustering schemes, namely expectation maximization, k-means clustering and DBSCAN, considering six months of data from a real sensor deployment. Our approach is found to be superior in terms of classification accuracy, while also being capable of identifying all of the outliers in the dataset.
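The self-organizing-map step described above can be sketched as follows. This is a tiny 1-D SOM with linear initialization, and the training data are hypothetical occupancy-duration feature vectors, not the paper's parking traces:

```python
import math

def train_som(data, n_units, epochs=50, lr0=0.5, radius0=1.5):
    """Tiny 1-D self-organizing map. Each step: find the best-matching
    unit (BMU) for a sample, then pull the BMU and its line neighbours
    toward the sample, with the learning rate and neighbourhood radius
    decaying over epochs."""
    dim = len(data[0])
    # Linear initialization along the data diagonal (a common
    # deterministic alternative to random initialization).
    w = [[u / (n_units - 1)] * dim for u in range(n_units)]
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        radius = max(radius0 * (1 - e / epochs), 0.5)
        for x in data:
            bmu = min(range(n_units), key=lambda u: math.dist(w[u], x))
            for u in range(n_units):
                # Gaussian neighbourhood on the 1-D unit lattice.
                h = math.exp(-((u - bmu) ** 2) / (2 * radius ** 2))
                w[u] = [wi + lr * h * (xi - wi) for wi, xi in zip(w[u], x)]
    return w

# Two hypothetical behaviour patterns: "short stays" vs "long stays".
data = [(0.1, 0.1), (0.15, 0.05), (0.9, 0.95), (0.85, 0.9)]
weights = train_som(data, n_units=4)
```

After training, units at opposite ends of the map specialize on the two patterns; in the paper's pipeline this unsupervised step is then tuned with labeled traces from the generator, which is the supervised part of the approach.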
OCCAM: a flexible, multi-purpose and extendable HPC cluster
NASA Astrophysics Data System (ADS)
Aldinucci, M.; Bagnasco, S.; Lusso, S.; Pasteris, P.; Rabellino, S.; Vallero, S.
2017-10-01
The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multipurpose flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Sezione di Torino of the Istituto Nazionale di Fisica Nucleare. It is aimed at providing a flexible, reconfigurable and extendable infrastructure to cater to a wide range of different scientific computing use cases, including ones from solid-state chemistry, high-energy physics, computer science, big data analytics, computational biology, genomics and many others. Furthermore, it will serve as a platform for R&D activities on computational technologies themselves, with topics ranging from GPU acceleration to Cloud Computing technologies. A heterogeneous and reconfigurable system like this poses a number of challenges related to the frequency at which heterogeneous hardware resources might change their availability and shareability status, which in turn affect methods and means to allocate, manage, optimize, bill, monitor VMs, containers, virtual farms, jobs, interactive bare-metal sessions, etc. This work describes some of the use cases that prompted the design and construction of the HPC cluster, its architecture and resource provisioning model, along with a first characterization of its performance by some synthetic benchmark tools and a few realistic use-case tests.
Data Analytics for Smart Parking Applications
Piovesan, Nicola; Turi, Leo; Toigo, Enrico; Martinez, Borja; Rossi, Michele
2016-01-01
We consider real-life smart parking systems where parking lot occupancy data are collected from field sensor devices and sent to backend servers for further processing and usage for applications. Our objective is to make these data useful to end users, such as parking managers, and, ultimately, to citizens. To this end, we concoct and validate an automated classification algorithm having two objectives: (1) outlier detection: to detect sensors with anomalous behavioral patterns, i.e., outliers; and (2) clustering: to group the parking sensors exhibiting similar patterns into distinct clusters. We first analyze the statistics of real parking data, obtaining suitable simulation models for parking traces. We then consider a simple classification algorithm based on the empirical complementary distribution function of occupancy times and show its limitations. Hence, we design a more sophisticated algorithm exploiting unsupervised learning techniques (self-organizing maps). These are tuned following a supervised approach using our trace generator and are compared against other clustering schemes, namely expectation maximization, k-means clustering and DBSCAN, considering six months of data from a real sensor deployment. Our approach is found to be superior in terms of classification accuracy, while also being capable of identifying all of the outliers in the dataset. PMID:27669259
Grošelj, Petra; Zadnik Stirn, Lidija
2015-09-15
Environmental management problems can be dealt with by combining participatory methods, which make it possible to include various stakeholders in a decision-making process, and multi-criteria methods, which offer a formal model for structuring and solving a problem. This paper proposes a three-phase decision-making approach based on the analytic network process and SWOT (strengths, weaknesses, opportunities and threats) analysis. The approach enables inclusion of various stakeholders or groups of stakeholders in particular stages of decision making. The structure of the proposed approach is composed of a network consisting of an objective cluster, a cluster of strategic goals, a cluster of SWOT factors and a cluster of alternatives. The suggested approach is applied to a management problem of Pohorje, a mountainous area in Slovenia. Stakeholders from sectors that are important for Pohorje (forestry, agriculture, tourism and nature protection agencies) who can offer a wide range of expert knowledge were included in the decision-making process. The results identify the alternative of "sustainable development" as the most appropriate for development of Pohorje. The application in the paper offers an example of employing the new approach to an environmental management problem. This can also be applied to decision-making problems in various other fields.
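At the core of the analytic network process, priorities for the elements of a cluster (e.g. the SWOT factors) are derived from pairwise-comparison matrices, typically as the principal eigenvector. A minimal power-iteration sketch is shown below; the comparison matrix is an invented, perfectly consistent example, not data from the Pohorje study.

```python
def principal_eigenvector(A, iters=100):
    """Power iteration on a positive pairwise-comparison matrix;
    returns the normalized principal eigenvector, which ANP/AHP
    interpret as the priority weights."""
    n = len(A)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        v = [x / s for x in v]  # normalize so priorities sum to 1
    return v

# Consistent matrix built from hypothetical weights (0.5, 0.3, 0.2):
# A[i][j] = w[i] / w[j], so power iteration should recover w exactly.
w = [0.5, 0.3, 0.2]
A = [[wi / wj for wj in w] for wi in w]
priorities = principal_eigenvector(A)
```

Real stakeholder judgments are rarely perfectly consistent, which is why ANP implementations also report a consistency ratio before accepting the derived priorities.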
A resilient and secure software platform and architecture for distributed spacecraft
NASA Astrophysics Data System (ADS)
Otte, William R.; Dubey, Abhishek; Karsai, Gabor
2014-06-01
A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, and information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in itself. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objective of this layer.
Roets-Merken, Lieve M; Zuidema, Sytse U; Vernooij-Dassen, Myrra J F J; Teerenstra, Steven; Hermsen, Pieter G J M; Kempen, Gertrudis I J M; Graff, Maud J L
2018-01-01
Objective: To evaluate the effectiveness of a nurse-supported self-management programme to improve social participation of dual sensory impaired older adults in long-term care homes. Design: Cluster randomised controlled trial. Setting: Thirty long-term care homes across the Netherlands. Participants: Long-term care homes were randomised into intervention clusters (n=17) and control clusters (n=13), involving 89 dual sensory impaired older adults and 56 licensed practical nurses. Intervention: Nurse-supported self-management programme. Measurements: Effectiveness was evaluated by the primary outcome, social participation, using a participation scale adapted for visually impaired older adults distinguishing four domains: instrumental activities of daily living, social-cultural activities, high-physical-demand and low-physical-demand leisure activities. A questionnaire assessing hearing-related participation problems was added as a supportive outcome. Secondary outcomes were autonomy, control, mood and quality of life, and nurses’ job satisfaction. For effectiveness analyses, linear mixed models were used. Sampling and intervention quality were analysed using descriptive statistics. Results: Self-management did not affect all four domains of social participation; however, the domain ‘instrumental activities of daily living’ showed a significant effect in favour of the intervention group (P=0.04; 95% CI 0.12 to 8.5). Sampling and intervention quality was adequate. Conclusions: A nurse-supported self-management programme was effective in empowering the dual sensory impaired older adults to address the domain ‘instrumental activities of daily living’, but no differences were found in the other three participation domains. Self-management proved beneficial for managing practical problems, but not for problems requiring behavioural adaptations by other persons. Trial registration number: NCT01217502; Results. PMID:29371264
Musekamp, Gunda; Gerlich, Christian; Ehlebracht-König, Inge; Faller, Hermann; Reusch, Andrea
2016-02-03
Fibromyalgia syndrome (FMS) is a complex chronic condition that makes high demands on patients' self-management skills. Thus, patient education is considered an important component of multimodal therapy, although evidence regarding its effectiveness is scarce. The main objective of this study is to assess the effectiveness of an advanced self-management patient education program for patients with FMS as compared to usual care in the context of inpatient rehabilitation. We conducted a multicenter cluster randomized controlled trial in 3 rehabilitation clinics. Clusters are groups of patients with FMS consecutively recruited within one week after admission. Patients of the intervention group receive the advanced multidisciplinary self-management patient education program (considering new knowledge on FMS, with a focus on transfer into everyday life), whereas patients in the control group receive standard patient education programs including information on FMS and coping with pain. A total of 566 patients are assessed at admission, at discharge and after 6 and 12 months, using patient-reported questionnaires. Primary outcomes are patients' disease- and treatment-specific knowledge at discharge and self-management skills after 6 months. Secondary outcomes include satisfaction, attitudes and coping competences, health-promoting behavior, psychological distress, health impairment and participation. Treatment effects between groups are evaluated using multilevel regression analysis adjusting for baseline values. The study evaluates the effectiveness of a self-management patient education program for patients with FMS in the context of inpatient rehabilitation in a cluster randomized trial. Study results will show whether self-management patient education is beneficial for this group of patients. German Clinical Trials Register, DRKS00008782, registered 8 July 2015.
Profiles of criminal justice system involvement of mentally ill homeless adults.
Roy, Laurence; Crocker, Anne G; Nicholls, Tonia L; Latimer, Eric; Gozdzik, Agnes; O'Campo, Patricia; Rae, Jennifer
2016-01-01
This study aims to examine the rates of self-reported contacts with the criminal justice system among homeless adults with mental illness, to identify the characteristics of participants who have had contacts with the criminal justice system, to report the dimensional structure of criminal justice system involvement in this sample, and to identify typologies of justice-involved participants. Self-report data on criminal justice system involvement of 2221 adults participating in a Canadian Housing First trial were analyzed using multiple correspondence and cluster analysis. Almost half of the participants had at least one contact with the criminal justice system in the 6 months prior to study enrollment. Factors associated with justice involvement included age, gender, ethnic background, diagnosis, substance misuse, impulse control, compliance, victimization, service use, and duration of homelessness. A typology of criminal justice involvement was developed. Seven criminal justice system involvement profiles emerged; substance use and impulse control distinguished the clusters, whereas demographic and contextual variables did not. The large number of profiles indicates the need for a diverse and flexible range of interventions that could be integrated within or in addition to current support of housing services, including integrated substance use and mental health interventions, risk management strategies, and trauma-oriented services.
78 FR 32161 - Oklahoma: Final Authorization of State Hazardous Waste Management Program Revision
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-29
... statutory and regulatory provisions necessary to administer the provisions of RCRA Cluster XXI, and... July 1, 2010 Through June 30, 2011 RCRA Cluster XXI prepared on June 14, 2012. The DEQ incorporates the... the authorizations at 77 FR 1236-1262, 75 FR 15273 through 15276 for RCRA Cluster XXI. The Federal...
Patient Loyalty in a Mature IDS Market: Is Population Health Management Worth It?
Carlin, Caroline S
2014-01-01
Objective: To understand patient loyalty to providers over time, informing effective population health management. Study Setting: Patient care-seeking patterns over a 6-year timeframe in Minnesota, where care systems have a significant portion of their revenue generated by shared-saving contracts with public and private payers. Study Design: Weibull duration and probit models were used to examine patterns of patient attribution to a care system and the continuity of patient affiliation with a care system. Clustering of errors within family unit was used to account for within-family correlation in unobserved characteristics that affect patient loyalty. Data Collection: The payer provided data from health plan administrative files, matched to U.S. Census-based characteristics of the patient's neighborhood. Patients were retrospectively attributed to health care systems based on patterns of primary care. Principal Findings: I find significant patient loyalty, with past loyalty a very strong predictor of future relationship. Relationships were shorter when the patient's health status was complex and when the patient's care system was smaller. Conclusions: Population health management can be beneficial to the care system making this investment, particularly for patients exhibiting prior continuity in care system choice. The results suggest that co-located primary and specialty services are important in maintaining primary care loyalty. PMID:24461030
Patient loyalty in a mature IDS market: is population health management worth it?
Carlin, Caroline S
2014-06-01
To understand patient loyalty to providers over time, informing effective population health management. Patient care-seeking patterns over a 6-year timeframe in Minnesota, where care systems have a significant portion of their revenue generated by shared-saving contracts with public and private payers. Weibull duration and probit models were used to examine patterns of patient attribution to a care system and the continuity of patient affiliation with a care system. Clustering of errors within family unit was used to account for within-family correlation in unobserved characteristics that affect patient loyalty. The payer provided data from health plan administrative files, matched to U.S. Census-based characteristics of the patient's neighborhood. Patients were retrospectively attributed to health care systems based on patterns of primary care. I find significant patient loyalty, with past loyalty a very strong predictor of future relationship. Relationships were shorter when the patient's health status was complex and when the patient's care system was smaller. Population health management can be beneficial to the care system making this investment, particularly for patients exhibiting prior continuity in care system choice. The results suggest that co-located primary and specialty services are important in maintaining primary care loyalty. © Health Research and Educational Trust.
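The Weibull duration model used above parametrizes the hazard of a patient relationship ending. As an illustration of what fitting such a model involves, stripped of the paper's covariates, censoring, and family-clustered errors, the Weibull shape parameter can be estimated from uncensored durations via a damped fixed-point iteration on the MLE score equation; the simulated durations here are invented data, not the study's claims records.

```python
import math
import random

def weibull_shape_mle(x, iters=200):
    """Estimate the Weibull shape k for uncensored durations x by
    iterating the standard MLE score equation
        1/k = sum(x^k ln x) / sum(x^k) - mean(ln x),
    with damping (averaging old and new k) for stable convergence."""
    mean_log = sum(math.log(v) for v in x) / len(x)
    k = 1.0
    for _ in range(iters):
        sk = sum(v ** k for v in x)
        skl = sum((v ** k) * math.log(v) for v in x)
        k = 0.5 * (k + 1.0 / (skl / sk - mean_log))
    return k

# Simulated durations from a Weibull with shape 1.5, scale 2.0,
# via inverse-CDF sampling: x = scale * (-ln U)^(1/shape).
rng = random.Random(7)
durations = [2.0 * (-math.log(rng.random())) ** (1 / 1.5)
             for _ in range(2000)]
k_hat = weibull_shape_mle(durations)
```

A shape above 1 implies a rising hazard of "defection" over time; the paper's finding that past loyalty predicts future affiliation corresponds to covariates shifting this hazard, which this covariate-free sketch omits.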
Westbrook, J I; Li, L; Raban, M Z; Baysari, M T; Prgomet, M; Georgiou, A; Kim, T; Lake, R; McCullagh, C; Dalla-Pozza, L; Karnon, J; O'Brien, T A; Ambler, G; Day, R; Cowell, C T; Gazarian, M; Worthington, R; Lehmann, C U; White, L; Barbaric, D; Gardo, A; Kelly, M; Kennedy, P
2016-01-01
Introduction: Medication errors are the most frequent cause of preventable harm in hospitals. Medication management in paediatric patients is particularly complex, and consequently the potential for harm is greater than in adults. Electronic medication management (eMM) systems are heralded as a highly effective intervention to reduce adverse drug events (ADEs), yet internationally evidence of their effectiveness in paediatric populations is limited. This study will assess the effectiveness of an eMM system to reduce medication errors, ADEs and length of stay (LOS). The study will also investigate system impact on clinical work processes. Methods and analysis: A stepped-wedge cluster randomised controlled trial (SWCRCT) will measure changes pre-eMM and post-eMM system implementation in prescribing and medication administration error (MAE) rates, potential and actual ADEs, and average LOS. In stage 1, 8 wards within the first paediatric hospital will be randomised to receive the eMM system 1 week apart. In stage 2, the second paediatric hospital will randomise implementation of a modified eMM and outcomes will be assessed. Prescribing errors will be identified through record reviews, and MAEs through direct observation of nurses and record reviews. Actual and potential severity will be assigned. Outcomes will be assessed at the patient level using mixed models, taking into account correlation of admissions within wards and multiple admissions for the same patient, with adjustment for potential confounders. Interviews and direct observation of clinicians will investigate the effects of the system on workflow. Data from site 1 will be used to develop improvements in the eMM that will be implemented at site 2, where the SWCRCT design will be repeated (stage 2). Ethics and dissemination: The research has been approved by the Human Research Ethics Committee of the Sydney Children's Hospitals Network and Macquarie University.
Results will be reported through academic journals and seminar and conference presentations. Trial registration number: Australian New Zealand Clinical Trials Registry (ANZCTR) 370325. PMID:27797997
Westbrook, J I; Li, L; Raban, M Z; Baysari, M T; Mumford, V; Prgomet, M; Georgiou, A; Kim, T; Lake, R; McCullagh, C; Dalla-Pozza, L; Karnon, J; O'Brien, T A; Ambler, G; Day, R; Cowell, C T; Gazarian, M; Worthington, R; Lehmann, C U; White, L; Barbaric, D; Gardo, A; Kelly, M; Kennedy, P
2016-10-21
Medication errors are the most frequent cause of preventable harm in hospitals. Medication management in paediatric patients is particularly complex, and consequently the potential for harm is greater than in adults. Electronic medication management (eMM) systems are heralded as a highly effective intervention to reduce adverse drug events (ADEs), yet internationally evidence of their effectiveness in paediatric populations is limited. This study will assess the effectiveness of an eMM system in reducing medication errors, ADEs and length of stay (LOS). The study will also investigate system impact on clinical work processes. A stepped-wedge cluster randomised controlled trial (SWCRCT) will measure changes pre-eMM and post-eMM system implementation in prescribing and medication administration error (MAE) rates, potential and actual ADEs, and average LOS. In stage 1, 8 wards within the first paediatric hospital will be randomised to receive the eMM system 1 week apart. In stage 2, the second paediatric hospital will randomise implementation of a modified eMM and outcomes will be assessed. Prescribing errors will be identified through record reviews, and MAEs through direct observation of nurses and record reviews. Actual and potential severity will be assigned. Outcomes will be assessed at the patient level using mixed models, taking into account correlation of admissions within wards and multiple admissions for the same patient, with adjustment for potential confounders. Interviews and direct observation of clinicians will investigate the effects of the system on workflow. Data from site 1 will be used to develop improvements in the eMM and implemented at site 2, where the SWCRCT design will be repeated (stage 2). The research has been approved by the Human Research Ethics Committee of the Sydney Children's Hospitals Network and Macquarie University. Results will be reported through academic journals and seminar and conference presentations. 
Australian New Zealand Clinical Trials Registry (ANZCTR) 370325. Published by the BMJ Publishing Group Limited.
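The protocol above specifies a patient-level analysis with mixed models that account for the correlation of admissions within wards. A simplified illustration on synthetic stepped-wedge data, using a linear mixed model from statsmodels; the variable names, effect sizes, and the continuous outcome are hypothetical stand-ins, not the trial's actual analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stepped-wedge data (hypothetical): 8 wards cross over to the eMM
# system one step apart; the outcome is simplified to a continuous per-admission
# error rate so a linear mixed model can illustrate the idea.
rng = np.random.default_rng(42)
rows = []
for ward in range(8):
    ward_effect = rng.normal(0, 0.5)               # ward-level heterogeneity
    for period in range(9):
        treated = int(period > ward)               # ward crosses over after its step
        for _ in range(10):                        # 10 admissions per ward-period
            rate = 3.0 - 0.8 * treated + 0.1 * period + ward_effect + rng.normal(0, 1)
            rows.append({"ward": ward, "period": period,
                         "treated": treated, "error_rate": rate})
df = pd.DataFrame(rows)

# Mixed model: fixed effects for the intervention and time, random intercept
# per ward to account for correlation of admissions within wards.
fit = smf.mixedlm("error_rate ~ treated + period", df, groups=df["ward"]).fit()
print(fit.params["treated"])   # estimated eMM effect (negative = fewer errors)
```

The trial's binary and count outcomes would call for generalised linear mixed models, but the fixed-plus-random-effects structure is the same.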
Determining requirements for patient-centred care: a participatory concept mapping study.
Ogden, Kathryn; Barr, Jennifer; Greenfield, David
2017-11-28
Recognition of a need for patient-centred care is not new; however, making patient-centred care a reality remains a challenge for organisations. We need empirical studies to extend current understandings, create new representations of the complexity of patient-centred care, and guide collective action toward patient-centred health care. To achieve these ends, the research aim was to empirically determine what organisational actions are required for patient-centred care to be achieved. We used an established participatory concept mapping methodology. Cross-sector stakeholders contributed to the development of statements of patient-centred care requirements, sorted statements into groupings according to similarity, and rated each statement according to importance, feasibility, and achievement. The resultant data were analysed to produce a visual concept map representing participants' conceptualisation of patient-centred care requirements. Analysis included the development of a similarity matrix, multidimensional scaling, hierarchical cluster analysis, selection of the number of clusters and their labels, identification of overarching domains, and quantitative representation of rating data. The outcome was a conceptual map of the Requirements of Patient-Centred Care Systems (ROPCCS). ROPCCS incorporates 123 statements sorted into 13 clusters. Cluster labels were: shared responsibility for personalised health literacy; patient provider dynamic for care partnership; collaboration; shared power and responsibility; resources for coordination of care; recognition of humanity - skills and attributes; knowing and valuing the patient; relationship building; system review evaluation and new models; commitment to supportive structures and processes; elements to facilitate change; professional identity and capability development; and explicit education and learning. 
The clusters were grouped into three overarching domains, representing a cross-sectoral approach: humanity and partnership; career spanning education and training; and health systems, policy and management. Rating of statements allowed the generation of go-zone maps for further interrogation of the relative importance, feasibility, and achievement of each patient-centred care requirement and cluster. The study has empirically determined requirements for patient-centred care through the development of ROPCCS. The unique map emphasises collaborative responsibility of stakeholders to ensure that patient-centred care is comprehensively progressed. ROPCCS allows the complex requirements for patient-centred care to be understood, implemented, evaluated, measured, and shown to be occurring.
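The analysis pipeline described above (a similarity matrix from participants' co-sorting, multidimensional scaling, then hierarchical clustering) can be sketched as follows, using synthetic sorting data rather than the study's 123 statements:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy co-sorting data (hypothetical): sorts[p, s] is the pile that participant
# p placed statement s into during the sorting task.
rng = np.random.default_rng(0)
n_statements = 12
sorts = rng.integers(0, 3, size=(20, n_statements))

# Similarity matrix: fraction of participants who sorted two statements together.
sim = np.zeros((n_statements, n_statements))
for p in sorts:
    sim += (p[:, None] == p[None, :])
sim /= len(sorts)

# Classical multidimensional scaling (MDS) on the dissimilarity 1 - sim.
d2 = (1.0 - sim) ** 2
J = np.eye(n_statements) - np.ones((n_statements, n_statements)) / n_statements
B = -0.5 * J @ d2 @ J                     # double-centred squared dissimilarities
evals, evecs = np.linalg.eigh(B)          # eigenvalues in ascending order
coords = evecs[:, -2:] * np.sqrt(np.maximum(evals[-2:], 0))   # 2-D point map

# Hierarchical (Ward) clustering of the map coordinates into labelled clusters.
clusters = fcluster(linkage(coords, method="ward"), t=4, criterion="maxclust")
print(clusters)
```

Selecting the number of clusters and labelling them remains, as in the study, a participatory judgement rather than a purely algorithmic step.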
Soulat, J; Picard, B; Léger, S; Monteils, V
2018-06-01
In this study, four prediction models were developed by logistic regression using individual data from 96 heifers. Carcass and sensory rectus abdominis quality clusters were identified and then predicted using the rearing factors data. The models obtained from rearing factors applied during the fattening period were compared with those characterising the heifers' whole life. The highest prediction power for the carcass and meat quality clusters was obtained from the models considering the whole life, with success rates of 62.8% and 54.9%, respectively. Rearing factors applied during both the pre-weaning and fattening periods influenced carcass and meat quality. According to the models, carcass traits were improved when the heifer's dam was older at first calving, when calves ingested concentrates at pasture before weaning, and when heifers were slaughtered older. Meat traits were improved by the genetics of the heifers' parents (i.e., calving ease and early muscularity) and when heifers were slaughtered older. Management of carcass and meat quality traits is therefore possible at different periods of the heifers' life. Copyright © 2018 Elsevier Ltd. All rights reserved.
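The modelling approach (logistic regression predicting quality clusters from rearing factors, evaluated by a success rate) might be sketched as below; the variables and data are hypothetical stand-ins, not the study's 96-heifer dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical rearing-factor data for 96 animals (stand-in variables).
rng = np.random.default_rng(1)
n = 96
X = np.column_stack([
    rng.normal(36, 6, n),        # dam age at first calving (months)
    rng.normal(32, 3, n),        # slaughter age (months)
    rng.integers(0, 2, n),       # concentrates at pasture before weaning (0/1)
])
# Synthetic binary quality cluster, loosely driven by slaughter age.
y = (X[:, 1] + rng.normal(0, 2, n) > 32).astype(int)

model = LogisticRegression(max_iter=1000)
# The "success rate" reported in the abstract corresponds to classification
# accuracy; cross-validation gives an honest estimate of it.
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated success rate: {acc:.1%}")
```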
Scalable and cost-effective NGS genotyping in the cloud.
Souilmi, Yassine; Lancaster, Alex K; Jung, Jae-Yoon; Rizzo, Ettore; Hawkins, Jared B; Powles, Ryan; Amzazi, Saaïd; Ghazal, Hassan; Tonellato, Peter J; Wall, Dennis P
2015-10-15
While next-generation sequencing (NGS) costs have plummeted in recent years, the cost and complexity of computation remain substantial barriers to the use of NGS in routine clinical care. The clinical potential of NGS will not be realized until robust and routine whole genome sequencing data can be accurately rendered to medically actionable reports within a time window of hours and at a cost in the tens of dollars. We take a step towards addressing this challenge by using COSMOS, a cloud-enabled workflow management system, to develop GenomeKey, an NGS whole genome analysis workflow. COSMOS implements complex workflows, making optimal use of high-performance compute clusters. Here we show that the Amazon Web Services (AWS) implementation of GenomeKey via COSMOS provides a fast, scalable, and cost-effective analysis of both public benchmarking and large-scale heterogeneous clinical NGS datasets. Our systematic benchmarking reveals important new insights into, and considerations for, optimizing the clinical turn-around of whole genome analysis and workflow management, including strategic batching of individual genomes and efficient cluster resource configuration.
The ALICE Software Release Validation cluster
NASA Astrophysics Data System (ADS)
Berzano, D.; Krzewicki, M.
2015-12-01
One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, all of which must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample “golden” dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, and with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how Release Validation cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future.
Walker, Anne-Sophie; Gladieux, Pierre; Decognet, Véronique; Fermaud, Marc; Confais, Johann; Roudet, Jean; Bardin, Marc; Bout, Alexandre; Nicot, Philippe C; Poncet, Christine; Fournier, Elisabeth
2015-04-01
Understanding the causes of population subdivision is of fundamental importance, as studying barriers to gene flow between populations may reveal key aspects of the process of adaptive divergence and, for pathogens, may help forecasting disease emergence and implementing sound management strategies. Here, we investigated population subdivision in the multihost fungus Botrytis cinerea based on comprehensive multiyear sampling on different hosts in three French regions. Analyses revealed a weak association between population structure and geography, but a clear differentiation according to the host plant of origin. This was consistent with adaptation to hosts, but the distribution of inferred genetic clusters and the frequency of admixed individuals indicated a lack of strict host specificity. Differentiation between individuals collected in the greenhouse (on Solanum) and outdoor (on Vitis and Rubus) was stronger than that observed between individuals from the two outdoor hosts, probably reflecting an additional isolating effect associated with the cropping system. Three genetic clusters coexisted on Vitis but did not persist over time. Linkage disequilibrium analysis indicated that outdoor populations were regularly recombining, whereas clonality was predominant in the greenhouse. Our findings open up new perspectives for disease control by managing plant debris in outdoor conditions and reinforcing prophylactic measures indoor. © 2014 Society for Applied Microbiology and John Wiley & Sons Ltd.
Recalculation of dose for each fraction of treatment on TomoTherapy.
Thomas, Simon J; Romanchikova, Marina; Harrison, Karl; Parker, Michael A; Bates, Amy M; Scaife, Jessica E; Sutcliffe, Michael P F; Burnet, Neil G
2016-01-01
The VoxTox study, linking delivered dose to toxicity, requires recalculation of typically 20-37 fractions per patient for nearly 2000 patients. This requires a non-interactive interface permitting batch calculation on multiple computers. Data are extracted from the TomoTherapy(®) archive and processed using the computational task-management system GANGA. Doses are calculated for each fraction of radiotherapy using the daily megavoltage (MV) CT images. The calculated dose cube is saved as a Digital Imaging and Communications in Medicine (DICOM) RTDOSE object, which can then be read by utilities that calculate dose-volume histograms or dose surface maps. The rectum is delineated on daily MV images using an implementation of the Chan-Vese algorithm. On a cluster of up to 117 central processing units, dose cubes for all fractions of 151 patients took 12 days to calculate. Outlining the rectum on all slices and fractions of the 151 patients took 7 h. We also present results of the Hounsfield unit (HU) calibration of TomoTherapy MV images, measured over an 8-year period, showing that the HU calibration has become less variable over time, with no large changes observed after 2011. We have developed a system for automatic recalculation of TomoTherapy dose distributions. This does not tie up the clinically needed planning system but can be run on a cluster of independent machines, enabling recalculation of delivered dose without user intervention. The use of a task-management system for automation of dose calculation and outlining enables the work to be scaled up to the level required for large studies.
Chen, Yi-Bu; Chattopadhyay, Ansuman; Bergen, Phillip; Gadd, Cynthia; Tannery, Nancy
2007-01-01
To bridge the gap between the rising information needs of biological and medical researchers and the rapidly growing number of online bioinformatics resources, we have created the Online Bioinformatics Resources Collection (OBRC) at the Health Sciences Library System (HSLS) at the University of Pittsburgh. The OBRC, containing 1542 major online bioinformatics databases and software tools, was constructed using the HSLS content management system built on the Zope Web application server. To enhance the output of search results, we further implemented the Vivísimo Clustering Engine, which automatically organizes the search results into categories created dynamically based on the textual information of the retrieved records. As the largest online collection of its kind and the only one with advanced search results clustering, OBRC is aimed at becoming a one-stop guided information gateway to the major bioinformatics databases and software tools on the Web. OBRC is available at the University of Pittsburgh's HSLS Web site (http://www.hsls.pitt.edu/guides/genetics/obrc).
NASA Astrophysics Data System (ADS)
Hooijer, A.; van Os, A. G.
Recent flood events and socio-economic developments have increased awareness of the need for improved flood risk management along the Rhine and Meuse Rivers. In response to this, the IRMA-SPONGE program incorporated 13 research projects in which over 30 organisations from all 6 river basin countries co-operated. The program is financed partly by the European INTERREG Rhine-Meuse Activities (IRMA). The main aim of IRMA-SPONGE is defined as: "The development of methodologies and tools to assess the impact of flood risk reduction measures and of land-use and climate change scenarios. This to support the spatial planning process in establishing alternative strategies for an optimal realisation of the hydraulic, economical and ecological functions of the Rhine and Meuse River Basins." Further important objectives are to promote transboundary co-operation in flood risk management by both scientific and management organisations, and to promote public participation in flood management issues. The projects in the program are grouped in three clusters, looking at measures from different scientific angles. The results of the projects in each cluster have been evaluated to define recommendations for flood risk management; some of these outcomes call for a change to current practices, e.g.: 1. (Flood Risk and Hydrology cluster): hydrological changes due to climate change exceed those due to further land use change, and are significant enough to necessitate a change in flood risk management strategies if the currently claimed protection levels are to be sustained. 2. (Flood Protection and Ecology cluster): to not only provide flood protection but also enhance the ecological quality of rivers and floodplains, new flood risk management concepts ought to integrate ecological knowledge from start to finish, with a clear perspective on the type of nature desired and the spatial and time scales considered. 3. (Flood Risk Management and Spatial Planning cluster): extreme floods cannot be prevented by taking mainly upstream measures; significant and space-consuming local measures will therefore be needed in the lower Rhine and Meuse deltas. However, there is also a need for improved flood risk management upstream, which calls for better spatial planning procedures. More detailed information on the IRMA-SPONGE program can be found on our website: www.irma-sponge.org.
Community involvement in dengue vector control: cluster randomised trial
Toledo, M E; Rodríguez, M; Gomez, D; Baly, A; Benitez, J R; Van der Stuyft, P
2009-01-01
Objective: To assess the effectiveness of an integrated community based environmental management strategy to control Aedes aegypti, the vector of dengue, compared with a routine strategy. Design: Cluster randomised trial. Setting: Guantanamo, Cuba. Participants: 32 circumscriptions (around 2000 inhabitants each). Interventions: The circumscriptions were randomly allocated to control clusters (n=16) comprising the routine Aedes control programme (entomological surveillance, source reduction, selective adulticiding, and health education) and to intervention clusters (n=16) comprising the routine Aedes control programme combined with a community based environmental management approach. Main outcome measures: The primary outcome was levels of Aedes infestation: house index (number of houses positive for at least one container with immature stages of Ae aegypti per 100 inspected houses), Breteau index (number of containers positive for immature stages of Ae aegypti per 100 inspected houses), and the pupae per inhabitant statistic (number of Ae aegypti pupae per inhabitant). Results: All clusters were subjected to the intended intervention; all completed the study protocol up to February 2006 and all were included in the analysis. At baseline the Aedes infestation levels were comparable between intervention and control clusters: house index 0.25% v 0.20%, pupae per inhabitant 0.44×10−3 v 0.29×10−3. At the end of the intervention these indices were significantly lower in the intervention clusters: rate ratio for house indices 0.49 (95% confidence interval 0.27 to 0.88) and rate ratio for pupae per inhabitant 0.27 (0.09 to 0.76). Conclusion: Community based environmental management embedded in a routine control programme was effective at reducing levels of Aedes infestation. Trial registration: Current Controlled Trials ISRCTN88405796. PMID:19509031
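The primary comparison above is reported as a rate ratio with a 95% confidence interval. A standard log-scale (Wald) interval for a ratio of two Poisson rates can be sketched as follows, with illustrative counts rather than the trial's data; the trial's actual cluster-level analysis may differ:

```python
import math

def rate_ratio_ci(events_a, denom_a, events_b, denom_b, z=1.96):
    """Rate ratio (a vs b) with a Wald confidence interval on the log scale,
    using the Poisson approximation var(log RR) ~ 1/events_a + 1/events_b."""
    rr = (events_a / denom_a) / (events_b / denom_b)
    se = math.sqrt(1 / events_a + 1 / events_b)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Illustrative counts (hypothetical, not the trial's raw data):
# positive houses per houses inspected, intervention vs control.
rr, lo, hi = rate_ratio_ci(20, 8000, 41, 8000)
print(f"rate ratio {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```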
Hahn, Noel G.
2017-01-01
Geospatial analyses were used to investigate the spatial distribution of populations of Halyomorpha halys, an important invasive agricultural pest in mid-Atlantic peach orchards. This spatial analysis will improve efficiency by allowing growers and farm managers to predict insect arrangement and target management strategies. Data on the presence of H. halys were collected from 2012-2014 in five peach orchards at four farms in New Jersey located in different land-use contexts. A point pattern analysis, using Ripley's K function, was used to describe clustering of H. halys. In addition, the clustering of damage indicative of H. halys feeding was described. With low populations early in the growing season, H. halys did not exhibit signs of clustering in the orchards at most distances. At sites with low populations throughout the season, clustering was not apparent. However, later in the season, high infestation levels led to more evident clustering of H. halys. Damage, although present throughout the entire orchard, was found at low levels. When looking at trees with greater than 10% fruit damage, damage was shown to cluster in orchards. The Moran's I statistic showed that spatial autocorrelation of H. halys was present within the orchards on the August sample dates, in relation to both population density and levels of damage. Kriging the abundance of H. halys and the severity of damage to peaches revealed that high estimates of both are generally found in the same region of the orchards. This information on the clustering of H. halys populations will be useful in predicting the presence of insects for management or scouting programs. PMID:28362797
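Global Moran's I, used above to test for spatial autocorrelation, can be computed directly from a vector of observations and a spatial weight matrix. A minimal sketch with a hypothetical transect of trees, not the study's orchard data:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for spatial autocorrelation.

    values:  1-D array of observations (e.g. insect counts per tree).
    weights: n x n spatial weight matrix with a zero diagonal.
    """
    z = values - values.mean()
    n = len(values)
    return (n / weights.sum()) * (z @ weights @ z) / (z @ z)

# Toy 1-D transect of 10 trees; adjacent trees are neighbours with weight 1.
counts = np.array([8.0, 9.0, 7.0, 1.0, 0.0, 2.0, 8.0, 9.0, 10.0, 1.0])
n = len(counts)
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

I = morans_i(counts, W)
print(f"Moran's I = {I:.2f}")   # > 0 indicates clustering of similar values
```

Significance would normally be assessed against the expectation -1/(n-1) via a permutation or normal approximation, which this sketch omits.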
Guwatudde, David; Absetz, Pilvikki; Delobelle, Peter; Östenson, Claes-Göran; Olmen Van, Josefien; Alvesson, Helle Molsted; Mayega, Roy William; Ekirapa Kiracho, Elizabeth; Kiguli, Juliet; Sundberg, Carl Johan; Sanders, David; Tomson, Göran; Puoane, Thandi; Peterson, Stefan; Daivadanam, Meena
2018-03-17
Type 2 diabetes (T2D) is increasingly contributing to the global burden of disease. Health systems in most parts of the world are struggling to diagnose and manage T2D, especially in low-income and middle-income countries, and among disadvantaged populations in high-income countries. The aim of this study is to determine the added benefit of community interventions on top of health facility interventions, towards glycaemic control among persons with diabetes, and towards reduction in plasma glucose among persons with prediabetes. An adaptive implementation cluster randomised trial is being implemented in two rural districts in Uganda with three clusters per study arm, in an urban township in South Africa with one cluster per study arm, and in socially disadvantaged suburbs in Stockholm, Sweden with one cluster per study arm. Clusters are communities within the catchment areas of participating primary healthcare facilities. There are two study arms comprising a facility plus community interventions arm and a facility-only interventions arm. Uganda has a third arm comprising usual care. Intervention strategies focus on organisation of care, linkage between health facility and the community, and strengthening the patient's role in self-management, community mobilisation and a supportive environment. Among T2D participants, the primary outcome is controlled plasma glucose; whereas among prediabetes participants the primary outcome is reduction in plasma glucose. The study has received approval in Uganda from the Higher Degrees, Research and Ethics Committee of Makerere University School of Public Health and from the Uganda National Council for Science and Technology; in South Africa from the Biomedical Science Research Ethics Committee of the University of the Western Cape; and in Sweden from the Regional Ethical Board in Stockholm. Findings will be disseminated through peer-reviewed publications and scientific meetings. ISRCTN11913581; Pre-results. 
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Discourse coalitions in Swiss waste management: gridlock or winds of change?
Duygan, Mert; Stauffacher, Michael; Meylan, Grégoire
2018-02-01
As a complex socio-technical system, waste management is crucially important for the sustainable management of material and energy flows. Transition to better performing waste management systems requires not only determining what needs to be changed but also finding out how this change can be realized. Without understanding the political context, insights from decision support tools such as life cycle assessment (LCA) are likely to be lost in translation to decision and policy making. This study strives to provide a first insight into the political context and address the opportunities and barriers pertinent to initiating a change in Swiss waste management. For this purpose, the discourses around a major policy process are analysed to uncover the policy beliefs and preferences of actors. Discourse coalitions are delineated by referring to the Advocacy Coalition Framework (Sabatier, 1998) and using the Discourse Network Analysis (Leifeld and Haunss, 2012) method. The results display an incoherent regime (Fuenfschilling and Truffer, 2014) with divergent belief clusters on core issues in waste management. Yet, some actors holding different beliefs appear to have overlapping interests on secondary issues such as the treatment of biogenic waste or plastics. Although the current political context hinders a system-wide disruptive change, transitions can be initiated at local or regional scale by utilizing the shared interest across different discourse coalitions. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Kempler, Steve; Mathews, Tiffany
2016-01-01
The continuum of ever-evolving data management systems affords great opportunities for the enhancement of knowledge and the facilitation of science research. To take advantage of these opportunities, it is essential to understand and develop methods that enable data relationships to be examined and the information to be manipulated. This presentation describes the efforts of the Earth Science Information Partners (ESIP) Federation Earth Science Data Analytics (ESDA) Cluster to understand, define, and facilitate the implementation of ESDA to advance science research. Because little has been published on Earth science data analytics, the cluster has defined ESDA along with 10 goals to set the framework for a common understanding of the tools and techniques that are available, and still needed, to support ESDA.
A taxonomy of hospitals participating in Medicare accountable care organizations.
Bazzoli, Gloria J; Harless, David W; Chukmaitov, Askar S
2017-03-03
Medicare was an early innovator of accountable care organizations (ACOs), establishing the Medicare Shared Savings Program (MSSP) and Pioneer programs in 2012-2013. Existing research has documented that ACOs bring together an array of health providers, with hospitals serving as important participants. Hospitals vary markedly in their service structure and organizational capabilities, and thus one would expect hospital ACO participants to vary in these regards. Our research identifies hospital subgroups that share certain capabilities and competencies. Such research, in conjunction with existing ACO research, provides deeper understanding of the structure and operation of these organizations. Given that Medicare was an initiator of the ACO concept, our findings provide a baseline to track the evolution of ACO hospitals over time. Hierarchical clustering methods are used in separate analyses of MSSP and Pioneer ACO hospitals. Hospitals participating in ACOs with 2012-2013 start dates are identified through multiple sources. Study data come from the Centers for Medicare and Medicaid Services, the American Hospital Association, and the Healthcare Information and Management Systems Society. Five-cluster solutions were developed separately for the MSSP and Pioneer hospital samples. Both the MSSP and Pioneer taxonomies had several clusters with high levels of health information technology capabilities. Distinct clusters with strong physician linkages were also present. We examined Pioneer ACO hospitals that subsequently left the program and found that they commonly had low levels of ambulatory care services or health information technology. Distinct subgroups of hospitals exist in both the MSSP and Pioneer programs, suggesting that individual hospitals serve different roles within an ACO. Health information technology and physician linkages appear to be particularly important features in ACO hospitals. 
ACOs need to consider not only geographic and service mix when selecting hospital participants but also their vertical integration features and management competencies.
Salimi, Parisa; Hamedi, Mohsen; Jamshidi, Nima; Vismeh, Milad
2017-04-01
Diabetes and its associated complications are recognized as one of the most challenging medical conditions, threatening more than 29 million people in the USA alone. Forecasts suggest more than half a billion sufferers worldwide by 2030. Among all diabetic complications, diabetic foot ulcer (DFU) has attracted much scientific investigation aimed at better management of this disease. In this paper, a system-thinking methodology is adopted to investigate the dynamic nature of ulceration. The causal loop diagram is utilized as a tool to illustrate the well-researched relations and interrelations between causes of DFU. The result of the clustering causality evaluation suggests a vicious loop that relates external trauma to callus. Consequently, a hypothesis is presented that localizes development of foot ulceration by considering the distribution of normal and shear stress. It specifies that normal and tangential forces, as the main representatives of external trauma, play the most important role in foot ulceration. The evaluation of this hypothesis suggests the significance of information on both normal and shear stress for managing DFU. The results also discuss how these two act at different locations on the foot, such as the metatarsal heads, heel and hallux. The findings of this study can facilitate tackling the complexity of the DFU problem and the search for constructive mitigation measures. Moreover, they point towards a more promising methodology for managing DFU, including better prognosis, design of prostheses and insoles for DFU, and patient-care recommendations. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.
2011-12-01
With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Units) and GPUs (Graphics Processing Units) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware, including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe the Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third-party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. 
Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
The COMPTEL Processing and Analysis Software system (COMPASS)
NASA Astrophysics Data System (ADS)
de Vries, C. P.; COMPTEL Collaboration
The data analysis system of the gamma-ray Compton Telescope (COMPTEL) onboard the Compton-GRO spacecraft is described. A continuous stream of data on the order of 1 kbyte per second is generated by the instrument. The data processing and analysis software is built around a relational database management system (RDBMS) in order to trace the heritage and processing status of all data in the processing pipeline. Four institutes cooperate in this effort, requiring procedures to keep local RDBMS contents identical between the sites and to exchange data swiftly using network facilities. Lately, there has been a gradual move of the system from central processing facilities towards clusters of workstations.
Health Care Leadership: Managing Knowledge Bases as Stakeholders.
Rotarius, Timothy
Communities are composed of many organizations. These organizations naturally form clusters based on common patterns of knowledge, skills, and abilities of the individual organizations. Each of these spontaneous clusters represents a distinct knowledge base. The health care knowledge base is shown to be the natural leader of any community. Using the Central Florida region's 5 knowledge bases as an example, each knowledge base is categorized as a distinct type of stakeholder, and then a specific stakeholder management strategy is discussed to facilitate managing both the cooperative potential and the threatening potential of each "knowledge base" stakeholder.
Xing, Jian; Burkom, Howard; Moniz, Linda; Edgerton, James; Leuze, Michael; Tokars, Jerome
2009-01-01
Background: The Centers for Disease Control and Prevention's (CDC's) BioSense system provides near-real time situational awareness for public health monitoring through analysis of electronic health data. Determination of anomalous spatial and temporal disease clusters is a crucial part of the daily disease monitoring task. Our study focused on finding useful anomalies at manageable alert rates according to available BioSense data history. Methods: The study dataset included more than 3 years of daily counts of military outpatient clinic visits for respiratory and rash syndrome groupings. We applied four spatial estimation methods in implementations of space-time scan statistics cross-checked in Matlab and C. We compared the utility of these methods according to the resultant background cluster rate (a false alarm surrogate) and sensitivity to injected cluster signals. The comparison runs used a spatial resolution based on the facility zip code in the patient record and a finer resolution based on the residence zip code. Results: Simple estimation methods that account for day-of-week (DOW) data patterns yielded a clear advantage both in background cluster rate and in signal sensitivity. A 28-day baseline gave the most robust results for this estimation; the preferred baseline is long enough to remove daily fluctuations but short enough to reflect recent disease trends and data representation. Background cluster rates were lower for the rash syndrome counts than for the respiratory counts, likely because of seasonality and the large scale of the respiratory counts. Conclusion: The spatial estimation method should be chosen according to characteristics of the selected data streams. In this dataset with strong day-of-week effects, the overall best detection performance was achieved using subregion averages over a 28-day baseline stratified by weekday or weekend/holiday behavior.
Changing the estimation method for particular scenarios involving different spatial resolution or other syndromes can yield further improvement. PMID:19615075
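The preferred estimator, a 28-day subregion average stratified by weekday versus weekend/holiday behavior, can be sketched as follows. Function and variable names are illustrative, not from the BioSense system.

```python
# Sketch of the baseline estimation the study found most robust: the expected
# count for a subregion on a given day is the average of the previous 28 days,
# restricted to days in the same stratum (weekday vs. weekend/holiday).
from datetime import date, timedelta

def expected_count(history, day, is_weekend_or_holiday, baseline_days=28):
    """history: dict mapping date -> observed count for one subregion."""
    window = [day - timedelta(days=k) for k in range(1, baseline_days + 1)]
    # Keep only baseline days in the same stratum as the day being evaluated.
    same_stratum = [history[d] for d in window
                    if d in history
                    and is_weekend_or_holiday(d) == is_weekend_or_holiday(day)]
    if not same_stratum:
        return 0.0
    return sum(same_stratum) / len(same_stratum)

def is_weekend(d):
    return d.weekday() >= 5  # Saturday/Sunday; real use would add holidays

# Toy data: weekday visits of 10, weekend visits of 2.
start = date(2009, 1, 1)  # a Thursday
hist = {start + timedelta(days=i):
        2 if is_weekend(start + timedelta(days=i)) else 10
        for i in range(28)}
target = start + timedelta(days=28)  # the following Thursday
```

The anomaly statistic would then compare the observed count against this expectation; the stratification is what absorbs the strong day-of-week effects noted above.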
The observed clustering of damaging extra-tropical cyclones in Europe
NASA Astrophysics Data System (ADS)
Cusack, S.
2015-12-01
The clustering of severe European windstorms on annual timescales has substantial impacts on the re/insurance industry. Management of the risk is impaired by large uncertainties in estimates of clustering from historical storm datasets typically covering the past few decades. The uncertainties are unusually large because clustering depends on the variance of storm counts. Eight storm datasets are gathered for analysis in this study in order to reduce these uncertainties. Six of the datasets contain more than 100 years of severe storm information to reduce sampling errors, while the diversity of information sources and analysis methods across datasets samples observational errors. All storm severity measures used in this study reflect damage, to suit re/insurance applications. It is found that the shortest storm dataset, 42 years in length, provides estimates of clustering with very large sampling and observational errors. The dataset does provide some useful information: indications of stronger clustering for more severe storms, particularly for southern countries off the main storm track. However, substantially different results are produced by removal of one stormy season, 1989/1990, which illustrates the large uncertainties from a 42-year dataset. The extended storm records place 1989/1990 into a much longer historical context to produce more robust estimates of clustering. All the extended storm datasets show a greater degree of clustering with increasing storm severity and suggest clustering of severe storms is much more material than that of weaker storms. Further, they contain signs of stronger clustering in areas off the main storm track, and weaker clustering for smaller-sized areas, though these signals are smaller than uncertainties in actual values. Both the improvement of existing storm records and the development of new historical storm datasets would help to improve management of this risk.
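The dependence of clustering estimates on the variance of storm counts can be made concrete with the dispersion (variance-to-mean) index, a standard clustering summary for count data; values above 1 indicate clustering relative to a Poisson process. This is an illustrative sketch, not necessarily the paper's exact estimator.

```python
# Dispersion index of annual storm counts: sample variance divided by mean.
# A Poisson (unclustered) process gives values near 1; clustered seasons
# inflate the variance and push the index above 1.
def dispersion_index(counts):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean

poisson_like = [3, 2, 4, 3, 3, 2, 4, 3]   # steady season-to-season counts
clustered = [0, 0, 9, 0, 1, 10, 0, 4]     # a few very stormy seasons
```

Because the index is a ratio of a variance to a mean, a single extreme season (such as 1989/1990 above) can move it substantially in a short record, which is exactly the sampling-uncertainty problem the extended datasets address.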
Dymond, Caren C; Field, Robert D; Roswintiarti, Orbita; Guswanto
2005-04-01
Vegetation fires have become an increasing problem in tropical environments as a consequence of socioeconomic pressures and subsequent land-use change. In response, fire management systems are being developed. This study set out to determine the relationships between two aspects of the fire problems in western Indonesia and Malaysia, and two components of the Canadian Forest Fire Weather Index System. The study resulted in a new method for calibrating components of fire danger rating systems based on satellite fire detection (hotspot) data. Once the climate was accounted for, a problematic number of fires were related to high levels of the Fine Fuel Moisture Code. The relationship between climate, Fine Fuel Moisture Code, and hotspot occurrence was used to calibrate Fire Occurrence Potential classes where low accounted for 3% of the fires from 1994 to 2000, moderate accounted for 25%, high 26%, and extreme 38%. Further problems arise when there are large clusters of fires burning that may consume valuable land or produce local smoke pollution. Once the climate was taken into account, the hotspot load (number and size of clusters of hotspots) was related to the Fire Weather Index. The relationship between climate, Fire Weather Index, and hotspot load was used to calibrate Fire Load Potential classes. Low Fire Load Potential conditions (75% of an average year) corresponded with 24% of the hotspot clusters, which had an average size of 30% of the largest cluster. In contrast, extreme Fire Load Potential conditions (1% of an average year) corresponded with 30% of the hotspot clusters, which had an average size of 58% of the maximum. Both Fire Occurrence Potential and Fire Load Potential calibrations were successfully validated with data from 2001. This study showed that when ground measurements are not available, fire statistics derived from satellite fire detection archives can be reliably used for calibration. 
More importantly, as a result of this work, Malaysia and Indonesia have two new sources of information to initiate fire prevention and suppression activities.
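The calibration idea above, choosing index thresholds so that each fire-potential class covers a chosen fraction of the climate and then checking the classes against satellite hotspot statistics, can be sketched as follows. The class fractions and thresholds here are illustrative, not the paper's calibrated values.

```python
# Illustrative percentile-based calibration of fire danger classes from a
# climate of daily index values (e.g., Fire Weather Index). Fractions and
# labels are hypothetical, not the study's calibrated classes.
def calibrate_classes(index_values, climate_fractions=(0.75, 0.15, 0.09, 0.01)):
    """Return thresholds so classes cover the given fractions of days."""
    ordered = sorted(index_values)
    n = len(ordered)
    thresholds, cum = [], 0.0
    for frac in climate_fractions[:-1]:
        cum += frac
        thresholds.append(ordered[min(int(cum * n), n - 1)])
    return thresholds  # boundaries between low/moderate/high/extreme

def classify(value, thresholds, labels=("low", "moderate", "high", "extreme")):
    for t, label in zip(thresholds, labels):
        if value < t:
            return label
    return labels[-1]

daily_fwi = list(range(100))   # a toy "climate" of 100 daily index values
th = calibrate_classes(daily_fwi)
```

Validation in the study then amounts to checking that the hotspot load observed under each class matches its intended severity, as was done with the 2001 data.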
Eher, R; Windhaber, J; Rau, H; Schmitt, M; Kellner, E
2000-05-01
Conflict and conflict resolution in intimate relationships are not only among the most important factors influencing relationship satisfaction but are also seen in association with clinical symptoms. Styles of conflict were assessed in patients suffering from panic disorder with and without agoraphobia, in alcoholics, and in patients suffering from rheumatoid arthritis. 176 patients and healthy controls filled out the Styles of Conflict Inventory and questionnaires concerning the severity of clinical symptoms. A cluster analysis revealed five types of conflict management. Healthy controls showed predominantly assertive and constructive styles, while patients with panic disorder showed high levels of cognitive and/or behavioral aggression. Alcoholics showed high levels of repressed aggression, and patients with rheumatoid arthritis often did not exhibit any aggression during conflict. Five clusters of conflict patterns were thus identified by cluster analysis, and each patient group showed considerably different patterns of conflict management.
Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L
2008-01-15
The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
Koyama, Nao; Ueno, Yoshikazu; Eguchi, Yusuke; Uetake, Katsuji; Tanaka, Toshio
2012-07-01
This study investigated the effects of changes in daily management on behavior of a solitary female elephant in a zoo. The activity budget and space utilization of the subject and the management changes were recorded for 1 year after the conspecific male died. The observation days could be categorized into five clusters (C1-C5) by the characteristic behavioral pattern of each day. C1 had the highest percentage of resting of all clusters, and was observed after the loss of the conspecific and the beginning of use of the indoor exhibition room at night. C2, which had the highest percentage of stereotypy of any cluster, was observed after the beginning of habituation to the indoor exhibition room. Also, when the time schedule of management was changed irregularly, the subject frequently exhibited stereotypic pacing (C2, C4). The subject tended to rest when exhibiting lameness in the left hind limb (C3). In C5, activity reached a high level when she could utilize a familiar place under a stable management schedule. These results indicate that management changes affected the mental stability of an elephant in the early stage of social isolation. © 2012 The Authors. Animal Science Journal © 2012 Japanese Society of Animal Science.
Agricultural land usage transforms nitrifier population ecology.
Bertagnolli, Anthony D; McCalmont, Dylan; Meinhardt, Kelley A; Fransen, Steven C; Strand, Stuart; Brown, Sally; Stahl, David A
2016-06-01
Application of nitrogen fertilizer has altered terrestrial ecosystems. Ammonia is nitrified by ammonia and nitrite-oxidizing microorganisms, converting ammonia to highly mobile nitrate, contributing to the loss of nitrogen, soil nutrients and production of detrimental nitrogen oxides. Mitigating these costs is of critical importance to a growing bioenergy industry. To resolve the impact of management on nitrifying populations, amplicon sequencing of markers associated with ammonia and nitrite-oxidizing taxa (ammonia monooxygenase-amoA, nitrite oxidoreductase-nxrB, respectively) was conducted from long-term managed and nearby native soils in Eastern Washington, USA. Native nitrifier population structure was altered profoundly by management. The native ammonia-oxidizing archaeal community (comprised primarily by Nitrososphaera sister subclusters 1.1 and 2) was displaced by populations of Nitrosopumilus, Nitrosotalea and different assemblages of Nitrososphaera (subcluster 1.1, and unassociated lineages of Nitrososphaera). A displacement of ammonia-oxidizing bacterial taxa was associated with management, with native groups of Nitrosospira (cluster 2 related, cluster 3A.2) displaced by Nitrosospira clusters 8B and 3A.1. A shift in nitrite-oxidizing bacteria (NOB) was correlated with management, but distribution patterns could not be linked exclusively to management. Dominant nxrB sequences displayed only distant relationships to other NOB isolates and environmental clones. © 2015 Society for Applied Microbiology and John Wiley & Sons Ltd.
Huntink, E; Wensing, M; Klomp, M A; van Lieshout, J
2015-12-15
Although conditions for high-quality cardiovascular risk management in primary care in the Netherlands are favourable, there still remains a gap between practice guideline recommendations and practice. The aim of the current study was to identify determinants of cardiovascular primary care in the Netherlands. We performed a qualitative study, using semi-structured interviews with healthcare professionals and patients with established cardiovascular diseases or at high cardiovascular risk. A framework analysis was used to cluster the determinants into seven domains: 1) guideline factors, 2) individual healthcare professional factors, 3) patient factors, 4) professional interaction, 5) incentives and resources, 6) mandate, authority and accountability, and 7) social, political and legal factors. Twelve healthcare professionals and 16 patients were interviewed. Healthcare professionals and patients mentioned a variety of factors concerning all seven domains. Determinants of practice according to the healthcare professionals were related to communication between healthcare professionals, patients' lack of knowledge and self-management, time management, market mechanisms in the Dutch healthcare system and motivational interviewing skills of healthcare professionals. Patients mentioned determinants related to their knowledge of risk factors for cardiovascular diseases, medication adherence and self-management as key determinants. A key finding is the mismatch between healthcare professionals' and patients' views on patients' knowledge and self-management. Perceived determinants of cardiovascular risk management were mainly related to patient behaviors and, for healthcare professionals only, to the healthcare system. Though healthcare professionals and patients agree upon the importance of patients' knowledge and self-management, their judgments of its current state are entirely different.
Berwanger, Otávio; Guimarães, Hélio P; Laranjeira, Ligia N; Cavalcanti, Alexandre B; Kodama, Alessandra; Zazula, Ana Denise; Santucci, Eliana; Victor, Elivane; Flato, Uri A; Tenuta, Marcos; Carvalho, Vitor; Mira, Vera Lucia; Pieper, Karen S; Mota, Luiz Henrique; Peterson, Eric D; Lopes, Renato D
2012-03-01
Translating evidence into clinical practice in the management of acute coronary syndromes (ACS) is challenging. Few ACS quality improvement interventions have been rigorously evaluated to determine their impact on patient care and clinical outcomes. We designed a pragmatic, 2-arm, cluster-randomized trial involving 34 clusters (Brazilian public hospitals). Clusters were randomized to receive a multifaceted quality improvement intervention (experimental group) or routine practice (control group). The 6-month educational intervention included reminders, care algorithms, a case manager, and distribution of educational materials to health care providers. The primary end point was a composite of evidence-based post-ACS therapies within 24 hours of admission, with the secondary measure of major cardiovascular clinical events (death, nonfatal myocardial infarction, nonfatal cardiac arrest, and nonfatal stroke). Prescription of evidence-based therapies at hospital discharge was also evaluated as part of the secondary outcomes. All analyses were performed by the intention-to-treat principle and took the cluster design into account using individual-level regression modeling (generalized estimating equations). If proven effective, this multifaceted intervention would have wide use as a means of promoting optimal use of evidence-based interventions for the management of ACS. Copyright © 2012 Mosby, Inc. All rights reserved.
Management of district hospitals--exploring success.
Couper, Ian D; Hugo, Jannie F M
2005-01-01
The aim of the study was to explore and document what assists a rural district hospital to function well. The lessons learned may be applicable to similar hospitals all over the world. A cross-sectional exploratory study was carried out using in-depth interviews with 21 managers of well-functioning district hospitals in two districts in South Africa. Thirteen themes were identified, integrated into three clusters, namely 'Teams working together for a purpose', 'Foundational framework and values' and 'Health Service and the community'. Teamwork and teams was a dominant theme. Teams working together are held together by the cement of good relationships and are enhanced by purposeful meetings. Unity is grown through solving difficult problems together and commitment to serving the community guides commitment towards each other, and towards patients and staff. Open communication and sharing lots of information between people and teams is the way in which these things happen. The structure and systems that have developed over years form the basis for teamwork. The different management structures and processes are developed with a view to supporting service and teamwork. A long history of committed people who hand over the baton when they leave creates a stable context. The health service and community theme cluster describes how integration in the community and community services is important for these managers. There is also a focus on involving community representatives in the hospital development and governance. Capacity building for staff is seen in the same spirit of serving people and thus serving staff, all aimed at reaching out to people in need in the community. The three clusters and thirteen themes and the relationships between them are described in detail through diagrams and narrative in the article. Much can be learned from the experience of these managers. 
The key issue is the development of a team in the hospital, a team with a unified vision of giving patients priority, respecting each other as well as patients, and working in and with the community to achieve optimal health care in the district hospital.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-19
... DEPARTMENT OF ENERGY Energy Efficient Building Systems Regional Innovation Cluster Initiative... Energy Efficient Building Systems Regional Innovation Cluster Initiative. A single proposal submitted by... systems design. The DOE funded Energy Efficient Building Systems Design Hub (the ``Hub'') will serve as a...
MSFC Skylab program engineering and integration
NASA Technical Reports Server (NTRS)
1974-01-01
A technical history and managerial critique of the MSFC role in the Skylab program is presented. The George C. Marshall Space Flight Center had primary hardware development responsibility for the Saturn Workshop Modules and many of the designated experiments in addition to the system integration responsibility for the entire Skylab Orbital Cluster. The report also includes recommendations and conclusions applicable to hardware design, test program philosophy and performance, and program management techniques with potential application to future programs.
Qin, J; Choi, K S; Ho, Simon S M; Heng, P A
2008-01-01
A force prediction algorithm is proposed to facilitate virtual-reality (VR) based collaborative surgical simulation by reducing the effect of network latencies. State regeneration is used to correct the estimated prediction. This algorithm is incorporated into an adaptive transmission protocol in which auxiliary features such as view synchronization and coupling control are equipped to ensure the system consistency. We implemented this protocol using multi-threaded technique on a cluster-based network architecture.
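One simple way to realize the prediction-plus-correction scheme described above is linear extrapolation of the most recent authoritative force samples, with "state regeneration" replacing the estimate whenever a fresh server state arrives. The sketch below is an illustrative assumption, not the authors' algorithm.

```python
# Illustrative latency-hiding force predictor: between network updates the
# client extrapolates from the last two authoritative samples; an arriving
# server state corrects ("regenerates") the predicted state. Names and the
# linear predictor are assumptions, not taken from the paper.
class ForcePredictor:
    def __init__(self):
        self.samples = []  # (time, force) authoritative samples from server

    def regenerate(self, t, force):
        # Authoritative update: correct the estimated prediction.
        self.samples.append((t, force))
        if len(self.samples) > 2:
            self.samples.pop(0)  # keep only the two most recent samples

    def predict(self, t):
        # Between updates, extrapolate linearly from the last two samples.
        if len(self.samples) < 2:
            return self.samples[-1][1] if self.samples else 0.0
        (t0, f0), (t1, f1) = self.samples
        slope = (f1 - f0) / (t1 - t0)
        return f1 + slope * (t - t1)
```

In a collaborative simulation each client would run such a predictor per haptic channel, so the rendered force stays smooth even when packets arrive late; the adaptive transmission protocol then handles view synchronization and coupling control on top.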
DPM — efficient storage in diverse environments
NASA Astrophysics Data System (ADS)
Hellmich, Martin; Furano, Fabrizio; Smith, David; Brito da Rocha, Ricardo; Álvarez Ayllón, Alejandro; Manzi, Andrea; Keeble, Oliver; Calvet, Ivan; Regala, Miguel Antonio
2014-06-01
Recent developments, including low power devices, cluster file systems and cloud storage, represent an explosion in the possibilities for deploying and managing grid storage. In this paper we present how different technologies can be leveraged to build a storage service with differing cost, power, performance, scalability and reliability profiles, using the popular storage solution Disk Pool Manager (DPM/dmlite) as the enabling technology. The storage manager DPM is designed for these new environments, allowing users to scale up and down as they need, and optimizing their computing centers' energy efficiency and costs. DPM runs on high-performance machines, profiting from multi-core and multi-CPU setups. It supports separating the database from the metadata server, the head node, largely reducing its hard disk requirements. Since version 1.8.6, DPM has been released in EPEL and Fedora, simplifying distribution and maintenance, and it supports the ARM architecture besides i386 and x86_64, allowing it to run on the smallest low-power machines such as the Raspberry Pi or the CuBox. This usage is facilitated by the ability to scale horizontally using a main database and a distributed memcached-powered namespace cache. Additionally, DPM supports a variety of storage pools in the backend, most importantly HDFS, S3-enabled storage, and cluster file systems, allowing users to fit their DPM installation exactly to their needs. In this paper, we investigate the power efficiency and total cost of ownership of various DPM configurations. We develop metrics to evaluate the expected performance of a setup both in terms of namespace and disk access, considering the overall cost including equipment, power consumption, and data/storage fees. The setups tested range from the lowest scale, using Raspberry Pis with single 700 MHz cores and 100 Mbps network connections, through conventional multi-core servers, to typical virtual machine instances in cloud settings.
We evaluate the combinations of different name server setups, for example load-balanced clusters, with different storage setups, from using a classic local configuration to private and public clouds.
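A minimal version of such a cost metric combines one-off equipment cost with running power cost. All figures in the example below are hypothetical, chosen only to show the shape of the comparison.

```python
# Illustrative total-cost-of-ownership model (hypothetical figures): one-off
# equipment cost plus electricity cost over the service lifetime.
def total_cost_of_ownership(equipment_eur, power_watts, years, eur_per_kwh=0.20):
    hours = years * 365 * 24
    energy_cost = power_watts / 1000.0 * hours * eur_per_kwh
    return equipment_eur + energy_cost

# Hypothetical comparison: a Raspberry Pi-class node vs. a conventional server.
pi_tco = total_cost_of_ownership(35, 3.5, 3)       # cheap, few watts
server_tco = total_cost_of_ownership(2000, 250, 3)  # costly, power-hungry
```

Data/storage fees and the measured namespace or disk throughput per euro would be folded into the metric in the same way; the interesting question the paper addresses is how much performance each configuration buys per unit of this cost.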
Solving the scalability issue in quantum-based refinement: Q|R#1.
Zheng, Min; Moriarty, Nigel W; Xu, Yanting; Reimers, Jeffrey R; Afonine, Pavel V; Waller, Mark P
2017-12-01
Accurately refining biomacromolecules using a quantum-chemical method is challenging because the cost of a quantum-chemical calculation scales approximately as n^m, where n is the number of atoms and m (≥3) is based on the quantum method of choice. This fundamental problem means that quantum-chemical calculations become intractable when the size of the system requires more computational resources than are available. In the development of the software package called Q|R, this issue is referred to as Q|R#1. A divide-and-conquer approach has been developed that fragments the atomic model into small manageable pieces in order to solve Q|R#1. Firstly, the atomic model of a crystal structure is analyzed to detect noncovalent interactions between residues, and the results of the analysis are represented as an interaction graph. Secondly, a graph-clustering algorithm is used to partition the interaction graph into a set of clusters in such a way as to minimize disruption to the noncovalent interaction network. Thirdly, the environment surrounding each individual cluster is analyzed and any residue that is interacting with a particular cluster is assigned to the buffer region of that particular cluster. A fragment is defined as a cluster plus its buffer region. The gradients for all atoms from each of the fragments are computed, and only the gradients from each cluster are combined to create the total gradients. A quantum-based refinement is carried out using the total gradients as chemical restraints. In order to validate this interaction graph-based fragmentation approach in Q|R, the entire atomic model of an amyloid cross-β spine crystal structure (PDB entry 2oNA) was refined.
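The fragmentation step, partitioning the interaction graph into clusters and then growing each cluster by a buffer of interacting residues, can be sketched as follows. The partitioning itself is left to the caller here (Q|R uses a graph-clustering algorithm that minimizes broken noncovalent interactions); this sketch only shows buffer assignment and is not Q|R code.

```python
# Illustrative buffer assignment for interaction-graph fragmentation:
# a fragment is a cluster plus every outside residue that interacts with it.
def fragments(edges, clusters):
    """edges: set of frozenset residue-id pairs; clusters: list of sets."""
    neighbors = {}
    for e in edges:
        a, b = tuple(e)
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    result = []
    for cluster in clusters:
        buffer = set()
        for r in cluster:
            # Residues interacting with the cluster but outside it.
            buffer |= neighbors.get(r, set()) - cluster
        result.append((cluster, buffer))  # fragment = cluster + buffer
    return result
```

In the refinement itself, gradients would be computed for every atom of each fragment, but only the cluster atoms' gradients are kept when assembling the total gradient, so the buffer serves purely to preserve each cluster's chemical environment.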
Organizing and Typing Persistent Objects Within an Object-Oriented Framework
NASA Technical Reports Server (NTRS)
Madany, Peter W.; Campbell, Roy H.
1991-01-01
Conventional operating systems provide little or no direct support for the services required for an efficient persistent object system implementation. We have built a persistent object scheme using a customization and extension of an object-oriented operating system called Choices. Choices includes a framework for the storage of persistent data that is suited to the construction of both conventional file systems and persistent object systems. In this paper we describe three areas in which persistent object support differs from file system support: storage organization, storage management, and typing. Persistent object systems must support various sizes of objects efficiently. Customizable containers, which are themselves persistent objects and can be nested, support a wide range of object sizes in Choices. Collections of persistent objects that are accessed as an aggregate and collections of light-weight persistent objects can be clustered in containers that are nested within containers for larger objects. Automated garbage collection schemes are added to storage management and have a major impact on persistent object applications. The Choices persistent object store provides extensible sets of persistent object types. The store contains not only the data for persistent objects but also the names of the classes to which they belong and the code for the operation of the classes. Besides presenting persistent object storage organization, storage management, and typing, this paper discusses how persistent objects are named and used within the Choices persistent data/file system framework.
QoE collaborative evaluation method based on fuzzy clustering heuristic algorithm.
Bao, Ying; Lei, Weimin; Zhang, Wei; Zhan, Yuzhuo
2016-01-01
At present, realizing or improving the quality of experience (QoE) is a major goal of network media transmission services, and QoE evaluation is the basis for adjusting the transmission control mechanism. This paper therefore proposes a QoE collaborative evaluation method based on a fuzzy clustering heuristic algorithm, which concentrates on service score calculation at the server side. The server side collects network transmission quality of service (QoS) parameters, node location data, and user expectation values from client feedback information. It then manages the historical data in a database using a "big data" processing mode, and predicts user scores according to heuristic rules. On this basis, it completes fuzzy clustering analysis and generates a service QoE score and a management message, which are finally fed back to clients. The paper also discusses service evaluation generation rules, heuristic evaluation rules and fuzzy clustering analysis methods, and presents service-based QoE evaluation processes. Simulation experiments have verified the effectiveness of the QoE collaborative evaluation method based on fuzzy clustering heuristic rules.
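The fuzzy clustering step can be illustrated with the standard fuzzy c-means membership formula, in which each sample receives a degree of membership in every cluster instead of a single hard label. One-dimensional scores and fixed centers keep the sketch short; this is a generic illustration, not the paper's exact algorithm.

```python
# Standard fuzzy c-means membership calculation for one sample against fixed
# cluster centers (1-D for brevity). A full implementation would iterate
# centers and memberships to convergence; this only shows the membership step.
def memberships(x, centers, m=2.0):
    """Membership degree of sample x in each cluster (FCM formula)."""
    dists = [abs(x - c) for c in centers]
    if any(d == 0 for d in dists):           # sample coincides with a center
        return [1.0 if d == 0 else 0.0 for d in dists]
    exp = 2.0 / (m - 1.0)
    # u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)); memberships sum to 1.
    return [1.0 / sum((di / dj) ** exp for dj in dists) for di in dists]
```

For QoE scoring, a sample near a "good experience" center would get a high membership there and a low one in the "poor experience" cluster, and the graded memberships rather than hard labels feed the collaborative score.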
Novel high-fidelity realistic explosion damage simulation for urban environments
NASA Astrophysics Data System (ADS)
Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya
2010-04-01
Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these simulation systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and the surrounding entities. However, none of the existing building damage simulation systems sufficiently realizes the criteria of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity and runtime-efficient explosion simulation system to realistically simulate destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also accounts for rubble pile formation and applies a generic and scalable multi-component-based object representation to describe scene entities, along with a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles and pedestrians in clusters of sequential and parallel damage events.
There is a need for new systemic sclerosis subset criteria. A content analytic approach.
Johnson, S R; Soowamber, M L; Fransen, J; Khanna, D; Van Den Hoogen, F; Baron, M; Matucci-Cerinic, M; Denton, C P; Medsger, T A; Carreira, P E; Riemekasten, G; Distler, J; Gabrielli, A; Steen, V; Chung, L; Silver, R; Varga, J; Müller-Ladner, U; Vonk, M C; Walker, U A; Wollheim, F A; Herrick, A; Furst, D E; Czirjak, L; Kowal-Bielecka, O; Del Galdo, F; Cutolo, M; Hunzelmann, N; Murray, C D; Foeldvari, I; Mouthon, L; Damjanov, N; Kahaleh, B; Frech, T; Assassi, S; Saketkoo, L A; Pope, J E
2018-01-01
Systemic sclerosis (SSc) is heterogeneous. The objectives of this study were to evaluate the purpose, strengths and limitations of existing SSc subset criteria, and to identify ideas among experts about subsets. We conducted semi-structured interviews with randomly sampled international SSc experts. The interview transcripts underwent an iterative process, with text deconstructed to single thought units, until a saturated conceptual framework with coding was achieved and respondent occurrence tabulated. Serial cross-referential analyses of clusters were developed. Thirty experts from 13 countries were included; 67% were male, 63% were from Europe and 37% from North America, with a median experience of 22.5 years and a median of 55 new SSc patients annually. Three thematic clusters regarding subsetting were identified: research and communication; management; and prognosis (prediction of internal organ involvement, survival). The strength of the limited/diffuse system was its ease of use; however, 10% stated that this system had only marginal value. Shortcomings of the diffuse/limited classification were the risk of misclassification, that predictions/generalizations did not always hold true, and that the elbow or knee threshold was arbitrary. Eighty-seven percent use more than two subsets, including SSc sine scleroderma, overlap conditions, antibody-determined subsets, speed of progression, and age of onset (juvenile, elderly). We have synthesized an international view of the construct of SSc subsets in the modern era and found a number of factors underlying that construct. Considerations for the next phase include rate of change and hierarchical clustering (e.g. limited/diffuse, then by antibodies).
Distant Galaxy Clusters Hosting Extreme Central Galaxies
NASA Astrophysics Data System (ADS)
McDonald, Michael
2014-09-01
The recently-discovered Phoenix cluster harbors the most star-forming central cluster galaxy of any cluster in the known Universe, by nearly a factor of 10. This extreme system appears to be fulfilling early cooling flow predictions, although the lack of similar systems makes any interpretation difficult. In an attempt to find other "Phoenix-like" clusters, we have cross-correlated archival all-sky surveys (in which Phoenix was detected) and isolated 4 similarly-extreme systems which are also coincident in position and redshift with an overdensity of red galaxies. We propose here to obtain Chandra observations of these extreme, Phoenix-like systems, in order to confirm them as relaxed, rapidly-cooling galaxy clusters.
P43-S Computational Biology Applications Suite for High-Performance Computing (BioHPC.net)
Pillardy, J.
2007-01-01
One of the challenges of high-performance computing (HPC) is user accessibility. At the Cornell University Computational Biology Service Unit, which is also a Microsoft HPC institute, we have developed a computational biology application suite that allows researchers from biological laboratories to submit their jobs to the parallel cluster through an easy-to-use Web interface. Through this system, we are providing users with popular bioinformatics tools including BLAST, HMMER, InterproScan, and MrBayes. The system is flexible and can be easily customized to include other software. It is also scalable; the installation on our servers currently processes approximately 8500 job submissions per year, many of them requiring massively parallel computations. It also has a built-in user management system, which can limit software and/or database access to specified users. TAIR, the major database of the plant model organism Arabidopsis, and SGN, the international tomato genome database, are both using our system for storage and data analysis. The system consists of a Web server running the interface (ASP.NET C#), Microsoft SQL server (ADO.NET), compute cluster running Microsoft Windows, ftp server, and file server. Users can interact with their jobs and data via a Web browser, ftp, or e-mail. The interface is accessible at http://cbsuapps.tc.cornell.edu/.
Psychiatrist-patient verbal and nonverbal communications during split-treatment appointments.
Cruz, Mario; Roter, Debra; Cruz, Robyn Flaum; Wieland, Melissa; Cooper, Lisa A; Larson, Susan; Pincus, Harold Alan
2011-11-01
This study characterized psychiatrist and patient communication behaviors and affective voice tones during pharmacotherapy appointments with depressed patients at four community-based mental health clinics where psychiatrists provided medication management and other mental health professionals provided therapy ("split treatment"). Audiorecordings of 84 unique pairs of psychiatrists and patients with a depressive disorder were analyzed with the Roter Interaction Analysis System, which identifies 41 discrete speech categories that can be grouped into composites representing broad conceptual communication domains. Cluster analysis identified psychiatrist communication patterns. T test and chi square analyses compared the clusters for verbal dominance, affective voice tone, and characteristics of psychiatrist and patients. On average, 53% of psychiatrist talk was devoted to partnering and relationship building, and 67% of patient talk was about biomedical subjects, such as depression symptoms, and psychosocial information giving. Psychiatrist communication patterns were characterized by two clusters, a biomedical-centered cluster that emphasized biomedical questions (η²=.22, df=82, p<.001) and education or counseling (η²=.20, df=82, p<.001) and a patient-centered cluster focused on psychosocial and lifestyle questions (η²=.24, df=82, p<.001) and information giving (η²=.17, df=82, p<.001). The patient-centered cluster was associated with patients' expression of distress, anger, or other negative affects (t=3.22, df= 82, p=.002). Psychiatrists devoted much of their talk to partnering and relationship building while maintaining a focus on symptoms or psychosocial issues. However, patient behaviors did not reflect a similar level of partnering. Future studies should identify psychiatrist communication behaviors that activate collaborative patient communications or improve treatment outcomes.
Kaisey, Marwa; Mittman, Brian; Pearson, Marjorie; Connor, Karen I; Chodosh, Joshua; Vassar, Stefanie D; Nguyen, France T; Vickrey, Barbara G
2012-10-01
Care management approaches have been proven to improve outcomes for patients with dementia and their family caregivers (dyads). However, acceptance of services in these programs is incomplete, impacting effectiveness. Acceptance may be related to dyad as well as healthcare system characteristics, but knowledge about factors associated with program acceptance is lacking. This study investigates patient, caregiver, and healthcare system characteristics associated with acceptance of offered care management services. This study analyzed data from the intervention arm of a cluster randomized controlled trial of a comprehensive dementia care management intervention. There were 408 patient-caregiver dyads enrolled in the study, of which 238 dyads were randomized to the intervention. Caregiver, patient, and health system factors associated with participation in offered care management services were assessed through bivariate and multivariate regression analyses. Out of the 238 dyads, 9 were ineligible for this analysis, leaving data of 229 dyads in this sample. Of these, 185 dyads accepted offered care management services, and 44 dyads did not. Multivariate analyses showed that higher likelihood of acceptance of care management services was uniquely associated with cohabitation of caregiver and patient (p < 0.001), lesser severity of dementia (p = 0.03), and higher patient comorbidity (p = 0.03); it also varied across healthcare organization sites. Understanding factors that influence care management participation could result in increased adoption of successful programs to improve quality of care. Using these factors to revise both program design as well as program promotion may also benefit external validity of future quality improvement research trials. Copyright © 2011 John Wiley & Sons, Ltd.
Reconstruction of cluster masses using particle based lensing
NASA Astrophysics Data System (ADS)
Deb, Sanghamitra
Clusters of galaxies are among the richest astrophysical data systems, but to truly understand these systems, we need a detailed study of the relationship between observables and the underlying cluster dark matter distribution. Gravitational lensing is the most direct probe of dark matter, but many mass reconstruction techniques assume that cluster light traces mass, or combine different lensing signals in an ad hoc way. In this talk, we will describe "Particle Based Lensing" (PBL), a new method for cluster mass reconstruction that avoids many of the pitfalls of previous techniques. PBL optimally combines lensing information of varying signal-to-noise, and makes no assumptions about the relationship between mass and light. We will describe mass reconstructions in three very different, but very illuminating cluster systems: the "Bullet Cluster" (1E 0657-56), A901/902, and A1689. The "Bullet Cluster" is a system of merging clusters made famous by the first unambiguous lensing detection of dark matter. A901/902 is a multi-cluster system with four peaks, and provides an ideal laboratory for studying cluster interaction. We are particularly interested in measuring and correlating the dark matter clump ellipticities. A1689 is one of the richest clusters known, and has significant substructure at the core. It is also our first exercise in optimally combining weak and strong gravitational lensing in a cluster reconstruction. We find that the dark matter distribution is significantly clumpier than indicated by X-ray maps of the gas. We conclude by discussing various potential applications of PBL to existing and future data.
MWAHCA: A Multimedia Wireless Ad Hoc Cluster Architecture
Diaz, Juan R.; Jimenez, Jose M.; Sendra, Sandra
2014-01-01
Wireless ad hoc networks provide a flexible and adaptable infrastructure to transport data over a great variety of environments. Recently, real-time audio and video transmission has increased owing to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams after they have passed through a wireless ad hoc network; this requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture to organize and manage cluster-based ad hoc networks in order to deliver multimedia streams. The proposed architecture adapts the wireless network topology to improve the quality of audio and video transmissions. To achieve this goal, the architecture uses information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters that are specialized in specific multimedia traffic. The performance study of a real system provided at the end of the paper demonstrates the feasibility of the proposal. PMID:24737996
Results from DESDM Pipeline on Data From Blanco Cosmology Survey
NASA Astrophysics Data System (ADS)
Desai, Shantanu; Mohr, J.; Armstrong, R.; Bertin, E.; Zenteno, A.; Tucker, D.; Song, J.; Ngeow, C.; Lin, H.; Bazin, G.; Liu, J.; Cosmology Survey, Blanco
2011-01-01
The Blanco Cosmology Survey (BCS) is a 60-night survey of the southern skies using the CTIO Blanco 4 m telescope, whose main goal is to study cosmic acceleration using galaxy clusters. BCS has carried out observations in griz bands of two 50-degree patches of the southern skies centered at 23 hr and 5 hr. These fields were chosen to maximize overlap with the South Pole Telescope. The data from this survey have been processed using the Dark Energy Data Management System (DESDM) on TeraGrid resources at NCSA and CCT. DESDM was developed to analyze data from the Dark Energy Survey (DES), which begins around 2011, and the analysis of real data provides a valuable warm-up exercise before the DES survey starts. We describe in detail the key steps in producing science-ready catalogs from the raw data, including detrending, astrometric calibration, photometric calibration, and co-addition with PSF homogenization. The final catalogs are constructed using model-fitting photometry, which includes detailed galaxy-fitting models convolved with the local PSF. We illustrate how photometric redshifts of galaxy clusters are estimated using red-sequence fitting and show results from a few clusters.
Cluster analysis of Pinus taiwanensis for its ex situ conservation in China.
Gao, X; Shi, L; Wu, Z
2015-06-01
Pinus taiwanensis Hayata is one of the most famous sights in the Huangshan Scenic Resort, China, because of its strong adaptability and ability to survive; however, this endemic species is currently under threat in China. Relationships between different P. taiwanensis populations have been well-documented; however, few studies have been conducted on how to protect this rare pine. In the present study, we propose the ex situ conservation of this species using geographical information system (GIS) cluster and genetic diversity analyses. The GIS cluster method was conducted as a preliminary analysis for establishing a sampling site category based on climatic factors. Genetic diversity was analyzed using morphological and genetic traits. By combining geographical information with genetic data, we demonstrate that growing conditions, morphological traits, and the genetic make-up of the population in the Huangshan Scenic Resort were most similar to conditions on Tianmu Mountain. Therefore, we suggest that Tianmu Mountain is the best choice for the ex situ conservation of P. taiwanensis. Our results provide a molecular basis for the sustainable management, utilization, and conservation of this species in Huangshan Scenic Resort.
NASA Technical Reports Server (NTRS)
Iverson, David L. (Inventor)
2008-01-01
The present invention relates to an Inductive Monitoring System (IMS), its software implementations, hardware embodiments, and applications. Training data is received, typically nominal system data acquired from sensors in normally operating systems or from detailed system simulations. The training data is formed into vectors that are used to generate a knowledge database having clusters of nominal operating regions therein. IMS monitors a system's performance or health by comparing cluster parameters in the knowledge database with incoming sensor data from a monitored system, formed into vectors. Nominal performance is concluded when a monitored-system vector is determined to lie within a nominal operating region cluster or lies sufficiently close to such a cluster, as determined by a threshold value and a distance metric. Some embodiments of IMS include cluster indexing and retrieval methods that increase the execution speed of IMS.
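The monitoring step described above — comparing an incoming sensor vector against stored nominal clusters using a distance metric and a threshold — can be sketched as follows. This is a generic illustration, not the patented IMS implementation: the cluster shape (axis-aligned bounding boxes), the Euclidean metric, and all numbers are assumptions.

```python
import math

def box_distance(vec, lo, hi):
    """Euclidean distance from a vector to an axis-aligned box [lo, hi]
    (zero if the vector lies inside the box)."""
    return math.sqrt(sum(max(l - v, 0.0, v - h) ** 2
                         for v, l, h in zip(vec, lo, hi)))

def is_nominal(vec, clusters, threshold):
    """Nominal if the vector lies inside, or within `threshold` of,
    any nominal operating region cluster."""
    return any(box_distance(vec, lo, hi) <= threshold for lo, hi in clusters)

# two hypothetical nominal operating regions, each a (lo, hi) corner pair
clusters = [((0.0, 10.0), (1.0, 20.0)),
            ((5.0, 50.0), (6.0, 60.0))]
print(is_nominal((0.5, 15.0), clusters, 0.5))   # inside the first box -> True
print(is_nominal((3.0, 35.0), clusters, 0.5))   # far from both boxes -> False
```

Real IMS embodiments additionally index clusters for fast retrieval; a linear scan suffices for this sketch.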
Cascading failure in scale-free networks with tunable clustering
NASA Astrophysics Data System (ADS)
Zhang, Xue-Jun; Gu, Bo; Guan, Xiang-Min; Zhu, Yan-Bo; Lv, Ren-Li
2016-02-01
Cascading failure is ubiquitous in many networked infrastructure systems, such as power grids, Internet and air transportation systems. In this paper, we extend the cascading failure model to a scale-free network with tunable clustering and focus on the effect of clustering coefficient on system robustness. It is found that the network robustness undergoes a nonmonotonic transition with the increment of clustering coefficient: both highly and lowly clustered networks are fragile under the intentional attack, and the network with moderate clustering coefficient can better resist the spread of cascading. We then provide an extensive explanation for this constructive phenomenon via the microscopic point of view and quantitative analysis. Our work can be useful to the design and optimization of infrastructure systems.
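The load-redistribution mechanism behind such cascades can be illustrated with a minimal Motter-Lai-style sketch: initial load taken as node degree, capacity proportional to load via a tolerance parameter. The star-graph example, the `alpha` values, and the even load split among surviving neighbours are assumptions for illustration, not the paper's exact model.

```python
def cascade(adj, alpha, attacked):
    """Sketch of a cascading failure: initial load = degree,
    capacity = (1 + alpha) * load; a failed node's load is split
    evenly among its surviving neighbours, which may overload them."""
    load = {n: float(len(nbrs)) for n, nbrs in adj.items()}
    cap = {n: (1.0 + alpha) * load[n] for n in adj}
    failed, frontier = set(), {attacked}
    while frontier:
        failed |= frontier
        nxt = set()
        for f in frontier:
            survivors = [n for n in adj[f] if n not in failed]
            if not survivors:
                continue
            share = load[f] / len(survivors)
            for n in survivors:
                load[n] += share
                if load[n] > cap[n]:
                    nxt.add(n)
        frontier = nxt - failed
    return failed

# star graph: hub 0 with leaves 1-4; attack the hub
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(len(cascade(star, alpha=0.1, attacked=0)))   # low tolerance: all 5 fail
print(len(cascade(star, alpha=2.0, attacked=0)))   # high tolerance: only the hub fails
```

Varying `alpha` plays the role of the robustness knob; the clustering coefficient studied in the paper would enter through the topology of `adj`.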
Roets-Merken, Lieve M; Zuidema, Sytse U; Vernooij-Dassen, Myrra J F J; Teerenstra, Steven; Hermsen, Pieter G J M; Kempen, Gertrudis I J M; Graff, Maud J L
2018-01-24
To evaluate the effectiveness of a nurse-supported self-management programme to improve social participation of dual sensory impaired older adults in long-term care homes. Cluster randomised controlled trial. Thirty long-term care homes across the Netherlands. Long-term care homes were randomised into intervention clusters (n=17) and control clusters (n=13), involving 89 dual sensory impaired older adults and 56 licensed practical nurses. Nurse-supported self-management programme. Effectiveness was evaluated by the primary outcome, social participation, using a participation scale adapted for visually impaired older adults distinguishing four domains: instrumental activities of daily living, social-cultural activities, and high-physical-demand and low-physical-demand leisure activities. A questionnaire assessing hearing-related participation problems was added as a supportive outcome. Secondary outcomes were autonomy, control, mood, quality of life, and nurses' job satisfaction. For effectiveness analyses, linear mixed models were used. Sampling and intervention quality were analysed using descriptive statistics. Self-management did not affect all four domains of social participation; however, the domain 'instrumental activities of daily living' showed a significant effect in favour of the intervention group (P=0.04; 95% CI 0.12 to 8.5). Sampling and intervention quality were adequate. A nurse-supported self-management programme was effective in empowering the dual sensory impaired older adults to address the domain 'instrumental activities of daily living', but no differences were found in the other three participation domains. Self-management proved beneficial for managing practical problems, but not for problems requiring behavioural adaptations of other persons. NCT01217502; Results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Competency Index. [Health Technology Cluster.]
ERIC Educational Resources Information Center
Ohio State Univ., Columbus. Center on Education and Training for Employment.
This competency index lists the competencies included in the 62 units of the Tech Prep Competency Profiles within the Health Technologies Cluster. The unit topics are as follows: employability skills; professionalism; teamwork; computer literacy; documentation; infection control and risk management; medical terminology; anatomy, physiology, and…
Agricultural Occupations. Education for Employment Task Lists.
ERIC Educational Resources Information Center
Lake County Area Vocational Center, Grayslake, IL.
The duties and tasks found in these task lists form the basis of instructional content for secondary, postsecondary, and adult occupational training programs for agricultural occupations. The agricultural occupations are divided into three clusters. The clusters and occupations are: agricultural business and management cluster…
The signatures of the parental cluster on field planetary systems
NASA Astrophysics Data System (ADS)
Cai, Maxwell Xu; Portegies Zwart, Simon; van Elteren, Arjen
2018-03-01
Due to the high stellar densities in young clusters, planetary systems formed in these environments are likely to have experienced perturbations from encounters with other stars. We carry out direct N-body simulations of multiplanet systems in star clusters to study the combined effects of stellar encounters and internal planetary dynamics. These planetary systems eventually become part of the Galactic field population as the parental cluster dissolves, which is where most presently known exoplanets are observed. We show that perturbations induced by stellar encounters lead to distinct signatures in the field planetary systems, most prominently, the excited orbital inclinations and eccentricities. Planetary systems that form within the cluster's half-mass radius are more prone to such perturbations. The orbital elements are most strongly excited in the outermost orbit, but the effect propagates to the entire planetary system through secular evolution. Planet ejections may occur long after a stellar encounter. The surviving planets in these reduced systems tend to have, on average, higher inclinations and larger eccentricities compared to systems that were perturbed less strongly. As soon as the parental star cluster dissolves, external perturbations stop affecting the escaped planetary systems, and further evolution proceeds on a relaxation time-scale. The outer regions of these ejected planetary systems tend to relax so slowly that their state carries the memory of their last strong encounter in the star cluster. Regardless of the stellar density, we observe a robust anticorrelation between multiplicity and mean inclination/eccentricity. We speculate that the `Kepler dichotomy' observed in field planetary systems is a natural consequence of their early evolution in the parental cluster.
Cluster formation by allelomimesis in real-world complex adaptive systems
NASA Astrophysics Data System (ADS)
Juanico, Dranreb Earl; Monterola, Christopher; Saloma, Caesar
2005-04-01
Animal and human clusters are complex adaptive systems and many organize in cluster sizes s that obey the frequency distribution D(s) ∝ s^(−τ). The exponent τ describes the relative abundance of the cluster sizes in a given system. Data analyses reveal that real-world clusters exhibit a broad spectrum of τ values, 0.7 (tuna fish schools) ≤ τ ≤ 4.61 (T4 bacteriophage gene family sizes). Allelomimesis is proposed as an underlying mechanism for adaptation that explains the observed broad τ spectrum. Allelomimesis is the tendency of an individual to imitate the actions of others, and two cluster systems have different τ values when their component agents display unequal degrees of allelomimetic tendencies. Cluster formation by allelomimesis is shown to be of three general types: namely, blind copying, information-use copying, and noncopying. Allelomimetic adaptation also reveals that the most stable cluster size is formed by three strongly allelomimetic individuals. Our finding is consistent with available field data taken from killer whales and marmots.
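The exponent τ of a cluster-size distribution D(s) ∝ s^(−τ) can be estimated from sampled sizes with the standard continuous-approximation maximum-likelihood estimator, τ̂ = 1 + n / Σ ln(s_i / s_min). Strictly, discrete size data call for the discrete MLE; the synthetic sample below is an assumption for illustration only.

```python
import math, random

def tau_mle(sizes, s_min=1.0):
    """Continuous-approximation MLE for the exponent of D(s) ~ s^(-tau),
    using only sizes at or above s_min."""
    xs = [s for s in sizes if s >= s_min]
    return 1.0 + len(xs) / sum(math.log(s / s_min) for s in xs)

# draw from a continuous power law (s_min = 1) via inverse-transform sampling
random.seed(1)
tau_true = 2.5
samples = [(1.0 - random.random()) ** (-1.0 / (tau_true - 1.0))
           for _ in range(20000)]
print(tau_mle(samples))   # close to the true value 2.5
```

With exponents spanning 0.7 to 4.61 as in the abstract, one would fit each empirical system separately and compare the resulting τ̂ values.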
NASA Astrophysics Data System (ADS)
Savage, J. C.; Simpson, R. W.
2013-09-01
The deformation across the Sierra Nevada Block, the Walker Lane Belt, and the Central Nevada Seismic Belt (CNSB) between 38.5°N and 40.5°N has been analyzed by clustering GPS velocities to identify coherent blocks. Cluster analysis determines the number of clusters required and assigns the GPS stations to the proper clusters. The clusters are shown on a fault map by symbols located at the positions of the GPS stations, each symbol representing the cluster to which the velocity of that GPS station belongs. Fault systems that separate the clusters are readily identified on such a map. Four significant clusters are identified. Those clusters are strips separated by (from west to east) the Mohawk Valley-Genoa fault system, the Pyramid Lake-Wassuk fault system, and the Central Nevada Seismic Belt. The strain rates within the westernmost three clusters approximate simple right-lateral shear (~13 nstrain/a) across vertical planes roughly parallel to the cluster boundaries. Clustering does not recognize the longitudinal segmentation of the Walker Lane Belt into domains dominated by either northwesterly trending, right-lateral faults or northeasterly trending, left-lateral faults.
Savage, James C.; Simpson, Robert W.
2013-01-01
The deformation across the Sierra Nevada Block, the Walker Lane Belt, and the Central Nevada Seismic Belt (CNSB) between 38.5°N and 40.5°N has been analyzed by clustering GPS velocities to identify coherent blocks. Cluster analysis determines the number of clusters required and assigns the GPS stations to the proper clusters. The clusters are shown on a fault map by symbols located at the positions of the GPS stations, each symbol representing the cluster to which the velocity of that GPS station belongs. Fault systems that separate the clusters are readily identified on such a map. Four significant clusters are identified. Those clusters are strips separated by (from west to east) the Mohawk Valley-Genoa fault system, the Pyramid Lake-Wassuk fault system, and the Central Nevada Seismic Belt. The strain rates within the westernmost three clusters approximate simple right-lateral shear (~13 nstrain/a) across vertical planes roughly parallel to the cluster boundaries. Clustering does not recognize the longitudinal segmentation of the Walker Lane Belt into domains dominated by either northwesterly trending, right-lateral faults or northeasterly trending, left-lateral faults.
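Grouping GPS velocities into coherent blocks can be sketched with plain k-means on the (east, north) velocity components. Note that the cluster analysis used in the study also determines the number of clusters, which this sketch takes as given; the velocities below are synthetic, not Walker Lane data.

```python
import math

def kmeans(points, k, iters=50):
    """Plain k-means on 2-D velocity vectors (east, north) with naive
    first-k initialization; returns final centers and member groups."""
    centers = points[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# two synthetic "blocks": one moving ~(10, 2) mm/yr, one ~(2, 1) mm/yr
pts = [(10.1, 2.0), (9.9, 2.1), (10.0, 1.9),
       (2.0, 1.0), (2.1, 0.9), (1.9, 1.1)]
centers, groups = kmeans(pts, k=2)
print(sorted(len(g) for g in groups))   # [3, 3]
```

Stations falling in the same group would then be drawn with the same symbol on the fault map, making block-bounding fault systems visible.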
Clustering high dimensional data using RIA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Nazrina
2015-05-15
Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional dissimilarity functions cannot capture the pattern dissimilarity among objects. In this article, we use an alternative dissimilarity measurement called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We observe that it can obtain clusters easily and hence avoid the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.
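RIA itself is built from the eigenstructure of the covariance matrix and robust principal-component scores, which the abstract does not spell out; as a hedged stand-in, the sketch below shows the simpler underlying idea of an angle-based dissimilarity between observation vectors, which likewise compares patterns rather than raw magnitudes.

```python
import math

def angle_dissimilarity(a, b):
    """Angle (radians) between two observation vectors: 0 for parallel
    patterns regardless of scale, pi/2 for orthogonal ones. A generic
    stand-in for the paper's RIA measure, not its actual definition."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

a = (1.0, 0.0, 0.0)
b = (0.0, 1.0, 0.0)
c = (2.0, 0.0, 0.0)
print(angle_dissimilarity(a, b))   # orthogonal -> pi/2
print(angle_dissimilarity(a, c))   # parallel, different scale -> 0.0
```

A partitioning method (e.g. k-medoids) would then minimize total within-cluster dissimilarity using such a measure instead of Euclidean distance.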
A spatial analysis of hierarchical waste transport structures under growing demand.
Tanguy, Audrey; Glaus, Mathias; Laforest, Valérie; Villot, Jonathan; Hausler, Robert
2016-10-01
The design of waste management systems rarely accounts for the spatio-temporal evolution of the demand. However, recent studies suggest that this evolution affects the planning of waste management activities, such as the choice and location of treatment facilities. As a result, the transport structure could also be affected by these changes. The objective of this paper is to study the influence of the spatio-temporal evolution of the demand on the strategic planning of a waste transport structure. More particularly, this study aims at evaluating the effect of varying spatial parameters on the economic performance of hierarchical structures (with one transfer station). To this end, three consecutive generations of three different spatial distributions were tested for hierarchical and non-hierarchical transport structures based on cost minimization. Results showed that a hierarchical structure is economically viable for large and clustered spatial distributions. The distance parameter was decisive, but the loading ratio of trucks and the formation of clusters of sources also affected the attractiveness of the transfer station. Thus the territories' morphology should influence strategies with regard to the installation of transfer stations. Spatially explicit tools that take the territory's evolution into account, such as the transport model presented in this work, are needed to help waste managers in the strategic planning of waste transport structures. © The Author(s) 2016.
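The economic comparison at the heart of such a study — direct haul versus consolidation at a transfer station — can be sketched with simple per-kilometre costs. All cost figures, distances, and the loading ratio below are illustrative assumptions, not values from the paper.

```python
def direct_cost(d_source_facility, c_collect):
    """Cost of hauling one collection load straight to the treatment facility."""
    return c_collect * d_source_facility

def transfer_cost(d_source_station, d_station_facility,
                  c_collect, c_haul, loads_per_truck, station_fee):
    """Cost via a transfer station: several collection loads are consolidated
    onto one cheaper long-haul truck, so the long-haul leg is shared."""
    return (c_collect * d_source_station
            + (c_haul * d_station_facility) / loads_per_truck
            + station_fee)

# illustrative numbers: distant facility, nearby station, 4:1 loading ratio
direct = direct_cost(40.0, c_collect=3.0)                    # 3 * 40 = 120.0
via = transfer_cost(8.0, 32.0, c_collect=3.0, c_haul=5.0,
                    loads_per_truck=4, station_fee=20.0)     # 24 + 40 + 20 = 84.0
print(direct, via, via < direct)
```

The break-even point shifts with the distance parameter and the loading ratio, which is why clustered, distant source distributions favour the hierarchical structure.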
Zodiacal Exoplanets in Time: Searching for Young Stars in K2
NASA Astrophysics Data System (ADS)
Morris, Nathan Ryan; Mann, Andrew; Rizzuto, Aaron
2018-01-01
Observations of planetary systems around young stars provide insight into the early stages of planetary system formation. Nearby young open clusters such as the Hyades, Pleiades, and Praesepe provide important benchmarks for the properties of stellar systems in general. These clusters are all known to be less than 1 Gyr old, making them ideal targets for a survey of young planetary systems. Few transiting planets have been detected around cluster stars, however, so this alone is too small a sample. K2, the revived Kepler mission, has provided a vast number of light curves for young stars in clusters and elsewhere in the K2 field. This gives us the opportunity to extend the sample of young systems to field stars while calibrating with cluster stars. We compute rotational periods from starspot patterns for ~36,000 K2 targets and use gyrochronological relationships derived from cluster stars to determine their ages. From there, we have begun searching for planets around young stars outside the clusters, with the ultimate goal of shedding light on how planets and planetary systems evolve in their early, most formative years.
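Rotation-period measurement from starspot modulation can be sketched with a simple phase-dispersion-minimization search: fold the light curve at trial periods and keep the period that makes the folded curve tightest. A real survey would use a more robust periodogram (e.g. Lomb-Scargle); the light curve and trial periods below are synthetic.

```python
import math

def phase_dispersion(times, fluxes, period, nbins=10):
    """Total within-bin flux variance of the light curve folded at `period`;
    small when the fold lines up repeating starspot features."""
    bins = [[] for _ in range(nbins)]
    for t, f in zip(times, fluxes):
        bins[int((t / period) % 1.0 * nbins)].append(f)
    disp = 0.0
    for b in bins:
        if len(b) > 1:
            m = sum(b) / len(b)
            disp += sum((f - m) ** 2 for f in b)
    return disp

def best_period(times, fluxes, trials):
    return min(trials, key=lambda p: phase_dispersion(times, fluxes, p))

# synthetic spot-modulated light curve with a 3.0-day rotation period
times = [0.05 * i for i in range(600)]          # 30 days of cadence
fluxes = [1.0 + 0.01 * math.sin(2 * math.pi * t / 3.0) for t in times]
trials = [2.0, 2.5, 3.0, 3.5, 4.0]
print(best_period(times, fluxes, trials))        # 3.0
```

The recovered periods would then feed a gyrochronological relation (period and color to age), calibrated on cluster stars of known age.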
Bergin, Sarah M; Periaswamy, Balamurugan; Barkham, Timothy; Chua, Hong Choon; Mok, Yee Ming; Fung, Daniel Shuen Sheng; Su, Alex Hsin Chuan; Lee, Yen Ling; Chua, Ming Lai Ivan; Ng, Poh Yong; Soon, Wei Jia Wendy; Chu, Collins Wenhan; Tan, Siyun Lucinda; Meehan, Mary; Ang, Brenda Sze Peng; Leo, Yee Sin; Holden, Matthew T G; De, Partha; Hsu, Li Yang; Chen, Swaine L; de Sessions, Paola Florez; Marimuthu, Kalisvar
2018-05-09
OBJECTIVE We report the utility of whole-genome sequencing (WGS) conducted in a clinically relevant time frame (i.e., sufficient for guiding management decisions) in managing a Streptococcus pyogenes outbreak, and present a comparison of its performance with emm typing. SETTING A 2,000-bed tertiary-care psychiatric hospital. METHODS Active surveillance was conducted to identify new cases of S. pyogenes. WGS guided targeted epidemiological investigations, and infection control measures were implemented. Single-nucleotide polymorphism (SNP)-based genome phylogeny, emm typing, and multilocus sequence typing (MLST) were performed. We compared the ability of WGS and emm typing to correctly identify person-to-person transmission and to guide the management of the outbreak. RESULTS The study included 204 patients and 152 staff. We identified 35 patients and 2 staff members with S. pyogenes. WGS revealed polyclonal S. pyogenes infections with 3 genetically distinct phylogenetic clusters (C1-C3). Cluster C1 isolates were all emm type 4, sequence type 915, and had pairwise SNP differences of 0-5, which suggested recent person-to-person transmissions. Epidemiological investigation revealed that cluster C1 was mediated by dermal colonization and transmission of S. pyogenes in a male residential ward. Clusters C2 and C3 were genomically diverse, with pairwise SNP differences of 21-45 and 26-58, and were emm11 and mostly emm120, respectively. Clusters C2 and C3, which might have been considered person-to-person transmissions by emm typing, were shown by WGS to be unlikely transmissions when pairwise SNP differences were integrated with epidemiology. CONCLUSIONS WGS had higher resolution than emm typing in identifying clusters with recent and ongoing person-to-person transmissions, which allowed implementation of targeted interventions to control the outbreak. Infect Control Hosp Epidemiol 2018;1-9.
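The clustering logic — grouping isolates whose pairwise SNP distance falls below a plausible transmission threshold — can be sketched with single-linkage clustering via union-find. The isolate names, distances, and the 5-SNP threshold below are illustrative assumptions echoing the abstract's 0-5 versus 20+ SNP split, not the study's data.

```python
def snp_clusters(dist, threshold):
    """Single-linkage clustering sketch: isolates whose pairwise SNP
    distance is <= threshold end up in the same putative transmission
    cluster (union-find with path compression)."""
    names = sorted(dist)
    parent = {n: n for n in names}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for a in names:
        for b, d in dist[a].items():
            if d <= threshold:
                parent[find(a)] = find(b)

    groups = {}
    for n in names:
        groups.setdefault(find(n), []).append(n)
    return sorted(groups.values())

# hypothetical pairwise SNP differences between three isolates
dist = {"iso1": {"iso2": 2, "iso3": 40},
        "iso2": {"iso1": 2, "iso3": 38},
        "iso3": {"iso1": 40, "iso2": 38}}
print(snp_clusters(dist, threshold=5))   # [['iso1', 'iso2'], ['iso3']]
```

The epidemiological step then checks whether each genomic cluster is also linked in time and place before calling it a transmission chain.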
Olson, Ryan; Thompson, Sharon V.; Wipfli, Brad; Hanson, Ginger; Elliot, Diane L.; Anger, W. Kent; Bodner, Todd; Hammer, Leslie B.; Hohn, Elliot; Perrin, Nancy A.
2015-01-01
Objective Our objectives were to describe a sample of truck drivers, identify clusters of drivers with similar patterns in behaviors affecting energy balance (sleep, diet, and exercise), and test for cluster differences in health and psychosocial factors. Methods Participants’ (n=452, BMI M=37.2, 86.4% male) self-reported behaviors were dichotomized prior to hierarchical cluster analysis, which identified groups with similar behavior co-variation. Cluster differences were tested with generalized estimating equations. Results Five behavioral clusters were identified that differed significantly in age, smoking status, diabetes prevalence, lost work days, stress, and social support, but not in BMI. Cluster 2, characterized by the best sleep quality, had significantly lower lost workdays and stress than other clusters. Conclusions Weight management interventions for drivers should explicitly address sleep, and may be maximally effective after establishing socially supportive work environments that reduce stress exposures. PMID:26949883
Olson, Ryan; Thompson, Sharon V; Wipfli, Brad; Hanson, Ginger; Elliot, Diane L; Anger, W Kent; Bodner, Todd; Hammer, Leslie B; Hohn, Elliot; Perrin, Nancy A
2016-03-01
The objectives of the study were to describe a sample of truck drivers, identify clusters of drivers with similar patterns in behaviors affecting energy balance (sleep, diet, and exercise), and test for cluster differences in health, safety, and psychosocial factors. Participants' (n = 452, body mass index M = 37.2, 86.4% male) self-reported behaviors were dichotomized prior to hierarchical cluster analysis, which identified groups with similar behavior covariation. Cluster differences were tested with generalized estimating equations. Five behavioral clusters were identified that differed significantly in age, smoking status, diabetes prevalence, lost work days, stress, and social support, but not in body mass index. Cluster 2, characterized by the best sleep quality, had significantly lower lost workdays and stress than other clusters. Weight management interventions for drivers should explicitly address sleep, and may be maximally effective after establishing socially supportive work environments that reduce stress exposures.
Provision of an X-environment using the HEPiX-X11 scripts
NASA Astrophysics Data System (ADS)
Jones, R. W. L.; Cons, L.; Taddei, A.
1997-02-01
At CERN, we have created a user X11 environment within the HEPiX framework. Customisation is possible at the HEPiX, site, cluster, machine, group and user level, in order of increasing priority. The management of the X11 session is divorced from the window management. FVWM is the default window manager, being light on system resources while providing most of the desired functionality. The assembly of a correctly ordered .fvwmrc is done automatically by the scripts, with customisation allowed at all of the above levels. Two tools are provided to query aspects of that environment. These may be used both at the start of the X-session and when starting any application. The first is guesskbd, a tool to identify the user's keyboard. The second provides useful information about a given display.
Rahman, Quazi Abidur; Pirbaglou, Meysam; Ritvo, Paul; Heffernan, Jane M; Clarke, Hance; Katz, Joel
2017-01-01
Background Pain is one of the most prevalent health-related concerns and is among the top 3 most common reasons for seeking medical help. Scientific publications of data collected from pain tracking and monitoring apps are important to help consumers and healthcare professionals select the right app for their use. Objective The main objectives of this paper were to (1) discover user engagement patterns of the pain management app, Manage My Pain, using data mining methods; and (2) identify the association between several attributes characterizing individual users and their levels of engagement. Methods User engagement was defined by 2 key features of the app: longevity (number of days between the first and last pain record) and number of records. Users were divided into 5 user engagement clusters employing the k-means clustering algorithm. Each cluster was characterized by 6 attributes: gender, age, number of pain conditions, number of medications, pain severity, and opioid use. Z tests and chi-square tests were used for analyzing categorical attributes. Effects of gender and cluster on numerical attributes were analyzed using 2-way analysis of variances (ANOVAs) followed up by pairwise comparisons using Tukey honest significant difference (HSD). Results The clustering process produced 5 clusters representing different levels of user engagement. The proportion of males and females was significantly different in 4 of the 5 clusters (all P ≤.03). The proportion of males was higher than females in users with relatively high longevity. Mean ages of users in 2 clusters with high longevity were higher than users from other 3 clusters (all P <.001). Overall, males were significantly older than females (P <.001). Across clusters, females reported more pain conditions than males (all P <.001). Users from highly engaged clusters reported taking more medication than less engaged users (all P <.001). Females reported taking a greater number of medications than males (P =.04). 
In 4 of 5 clusters, the percentage of males taking an opioid was significantly greater (all P ≤.05) than that of females. The proportion of males with mild pain was significantly higher than that of females in 3 clusters (all P ≤.008). Conclusions Although most users of the app reported being female, male users were more likely to be highly engaged in the app. Users in the most engaged clusters self-reported a higher number of pain conditions, a higher number of current medications, and a higher incidence of opioid usage. The high engagement by males in these clusters does not appear to be driven by pain severity which may, in part, be the case for females. Use of a mobile pain app may be relatively more attractive to highly-engaged males than highly-engaged females, and to those with relatively more complex chronic pain problems. PMID:28701291
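The engagement clustering described above can be sketched with a toy k-means over the two features the authors name, longevity and number of records. This is a minimal illustration, not the authors' pipeline: the user data, the deterministic initialisation, and k=3 are assumptions for the example (the study used k=5, and a real analysis would standardise features first).

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm on (x, y) tuples with a simple
    deterministic spread initialisation (real analyses would use
    k-means++ and standardised features)."""
    centers = points[::max(1, len(points) // k)][:k]
    for _ in range(iters):
        # assign each point to its nearest centre (squared Euclidean distance)
        labels = [min(range(k),
                      key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                    (p[1] - centers[c][1]) ** 2)
                  for p in points]
        # move each centre to the mean of its members
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = (sum(m[0] for m in members) / len(members),
                              sum(m[1] for m in members) / len(members))
    return labels, centers

# hypothetical users as (longevity in days, number of pain records)
users = [(3, 5), (4, 6), (200, 150), (210, 160), (400, 20), (390, 25)]
labels, centers = kmeans(users, k=3)
```

Each resulting label identifies an engagement cluster, e.g. short-lived users, long-lived frequent recorders, and long-lived infrequent recorders, which can then be cross-tabulated against attributes such as gender or opioid use.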
Rahman, Quazi Abidur; Janmohamed, Tahir; Pirbaglou, Meysam; Ritvo, Paul; Heffernan, Jane M; Clarke, Hance; Katz, Joel
2017-07-12
Pain is one of the most prevalent health-related concerns and is among the top 3 most common reasons for seeking medical help. Scientific publications of data collected from pain tracking and monitoring apps are important to help consumers and healthcare professionals select the right app for their use. The main objectives of this paper were to (1) discover user engagement patterns of the pain management app, Manage My Pain, using data mining methods; and (2) identify the association between several attributes characterizing individual users and their levels of engagement. User engagement was defined by 2 key features of the app: longevity (number of days between the first and last pain record) and number of records. Users were divided into 5 user engagement clusters employing the k-means clustering algorithm. Each cluster was characterized by 6 attributes: gender, age, number of pain conditions, number of medications, pain severity, and opioid use. Z tests and chi-square tests were used for analyzing categorical attributes. Effects of gender and cluster on numerical attributes were analyzed using 2-way analysis of variances (ANOVAs) followed up by pairwise comparisons using Tukey honest significant difference (HSD). The clustering process produced 5 clusters representing different levels of user engagement. The proportion of males and females was significantly different in 4 of the 5 clusters (all P ≤.03). The proportion of males was higher than females in users with relatively high longevity. Mean ages of users in 2 clusters with high longevity were higher than users from other 3 clusters (all P <.001). Overall, males were significantly older than females (P <.001). Across clusters, females reported more pain conditions than males (all P <.001). Users from highly engaged clusters reported taking more medication than less engaged users (all P <.001). Females reported taking a greater number of medications than males (P =.04). 
In 4 of 5 clusters, the percentage of males taking an opioid was significantly greater (all P ≤.05) than that of females. The proportion of males with mild pain was significantly higher than that of females in 3 clusters (all P ≤.008). Although most users of the app reported being female, male users were more likely to be highly engaged in the app. Users in the most engaged clusters self-reported a higher number of pain conditions, a higher number of current medications, and a higher incidence of opioid usage. The high engagement by males in these clusters does not appear to be driven by pain severity which may, in part, be the case for females. Use of a mobile pain app may be relatively more attractive to highly-engaged males than highly-engaged females, and to those with relatively more complex chronic pain problems.
The Newly-Discovered Outer Halo Globular Cluster System of M31
NASA Astrophysics Data System (ADS)
Mackey, D.; Huxor, A.; Ferguson, A.
2012-08-01
In this contribution we describe the discovery of a large number of globular clusters in the outer halo of M31 from the Pan-Andromeda Archaeological Survey (PAndAS). New globular clusters have also been found in the outskirts of M33, NGC 147, and NGC 185. Many of the remote M31 clusters are observed to preferentially project onto tidal debris streams in the stellar halo, suggesting that much of the outer M31 globular cluster system has been assembled via the accretion of satellite galaxies. We briefly discuss the global properties of the M31 halo globular cluster system.
Integrative analysis of the Lake Simcoe watershed (Ontario, Canada) as a socio-ecological system.
Neumann, Alex; Kim, Dong-Kyun; Perhar, Gurbir; Arhonditsis, George B
2017-03-01
Striving for long-term sustainability in catchments dominated by human activities requires development of interdisciplinary research methods to account for the interplay between environmental concerns and socio-economic pressures. In this study, we present an integrative analysis of the Lake Simcoe watershed, Ontario, Canada, as viewed from the perspective of a socio-ecological system. Key features of our analysis are (i) the equally weighted consideration of environmental attributes with socioeconomic priorities and (ii) the identification of the minimal number of key socio-hydrological variables that should be included in a parsimonious watershed management framework, aiming to establish linkages between urbanization trends and nutrient export. Drawing parallels with the concept of Hydrological Response Units, we used Self-Organizing Mapping to delineate spatial organizations with similar socio-economic and environmental attributes, also referred to as Socio-Environmental Management Units (SEMUs). Our analysis provides evidence of two SEMUs with contrasting features, the "undisturbed" and "anthropogenically-influenced", within the Lake Simcoe watershed. The "undisturbed" cluster occupies approximately half of the Lake Simcoe catchment (45%) and is characterized by low landscape diversity and low average population density (<0.4 humans per hectare). By contrast, the socio-environmental functional properties of the "anthropogenically-influenced" cluster highlight the likelihood of a stability loss in the long run, as inferred from the distinct signature of urbanization activities on the tributary nutrient export, and the loss of subwatershed sensitivity to natural mechanisms that may ameliorate the degradation patterns.
Our study also examines how the SEMU concept can augment the contemporary integrated watershed management practices and provides directions in order to promote environmental programs for lake conservation and to increase public awareness and engagement in stewardship initiatives.
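The Self-Organizing Mapping used above to delineate the SEMUs can be illustrated with a minimal Kohonen SOM. This is a generic NumPy sketch, not the authors' implementation: the grid size, decay schedules, and the two-feature synthetic data (standing in for normalised socio-environmental attributes such as population density and nutrient export) are all assumptions.

```python
import numpy as np

def train_som(data, grid=(3, 3), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Self-Organizing Map (Kohonen) for illustration only."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # grid coordinates of each node, used for the neighbourhood function
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            frac = step / n_steps
            lr = lr0 * (1 - frac)                 # linearly decaying learning rate
            sigma = sigma0 * (1 - frac) + 1e-3    # shrinking neighbourhood radius
            # best-matching unit (BMU): node whose weight vector is closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # pull the BMU and its grid neighbours towards x (Gaussian neighbourhood)
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

def quantization_error(data, weights):
    """Mean distance from each sample to its nearest node."""
    flat = weights.reshape(-1, weights.shape[-1])
    return float(np.mean(np.min(np.linalg.norm(data[:, None] - flat[None], axis=-1), axis=1)))

# two synthetic groups of subwatersheds: (population density, nutrient export), rescaled to [0, 1]
data = np.array([[0.10, 0.10], [0.15, 0.12], [0.90, 0.95], [0.85, 0.90]])
weights = train_som(data)
```

After training, samples mapping to nearby nodes form the clusters; here the two synthetic groups would land on opposite regions of the 3x3 grid, analogous to the "undisturbed" versus "anthropogenically-influenced" units.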
Clustering of GPS velocities in the Mojave Block, southeastern California
Savage, James C.; Simpson, Robert W.
2013-01-01
We find subdivisions within the Mojave Block using cluster analysis to identify groupings in the velocities observed at GPS stations there. The clusters are represented on a fault map by symbols located at the positions of the GPS stations, each symbol representing the cluster to which the velocity of that GPS station belongs. Fault systems that separate the clusters are readily identified on such a map. The most significant representation as judged by the gap test involves 4 clusters within the Mojave Block. The fault systems bounding the clusters from east to west are 1) the faults defining the eastern boundary of the Northeast Mojave Domain extended southward to connect to the Hector Mine rupture, 2) the Calico-Paradise fault system, 3) the Landers-Blackwater fault system, and 4) the Helendale-Lockhart fault system. This division of the Mojave Block is very similar to that proposed by Meade and Hager. However, no cluster boundary coincides with the Garlock Fault, the northern boundary of the Mojave Block. Rather, the clusters appear to continue without interruption from the Mojave Block north into the southern Walker Lane Belt, similar to the continuity across the Garlock Fault of the shear zone along the Blackwater-Little Lake fault system observed by Peltzer et al. Mapped traces of individual faults in the Mojave Block terminate within the block and do not continue across the Garlock Fault [Dokka and Travis].
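The idea of grouping stations by similarity of their velocity vectors can be sketched with complete-linkage agglomerative clustering. The east/north velocities below are invented, and the paper's actual clustering procedure and gap-test selection of the number of clusters are not reproduced here.

```python
def agglomerate(vecs, k):
    """Complete-linkage agglomerative clustering of 2-D velocity vectors,
    returning k clusters of station indices."""
    clusters = [[i] for i in range(len(vecs))]

    def dist(a, b):
        # complete linkage: distance between the farthest pair of members
        return max(((vecs[i][0] - vecs[j][0]) ** 2 +
                    (vecs[i][1] - vecs[j][1]) ** 2) ** 0.5
                   for i in a for j in b)

    while len(clusters) > k:
        # merge the two clusters whose farthest members are closest
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

# hypothetical east/north GPS velocities (mm/yr) for six stations
v = [(2.0, 1.0), (2.1, 1.1), (5.0, 3.0), (5.2, 3.1), (8.0, 6.0), (8.1, 5.9)]
groups = sorted(sorted(g) for g in agglomerate(v, 3))
```

Plotting each group's symbol at its station location would then reveal the fault systems separating the groups, as described above.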
CRISPR-Cas Technologies and Applications in Food Bacteria.
Stout, Emily; Klaenhammer, Todd; Barrangou, Rodolphe
2017-02-28
Clustered regularly interspaced short palindromic repeats (CRISPRs) and CRISPR-associated (Cas) proteins form adaptive immune systems that occur in many bacteria and most archaea. In addition to protecting bacteria from phages and other invasive mobile genetic elements, CRISPR-Cas molecular machines can be repurposed as tool kits for applications relevant to the food industry. A primary concern of the food industry has long been the proper management of food-related bacteria, with a focus on both enhancing the outcomes of beneficial microorganisms such as starter cultures and probiotics and limiting the presence of detrimental organisms such as pathogens and spoilage microorganisms. This review introduces CRISPR-Cas as a novel set of technologies to manage food bacteria and offers insights into CRISPR-Cas biology. It primarily focuses on the applications of CRISPR-Cas systems and tools in starter cultures and probiotics, encompassing strain-typing, phage resistance, plasmid vaccination, genome editing, and antimicrobial activity.
UniGene Tabulator: a full parser for the UniGene format.
Lenzi, Luca; Frabetti, Flavia; Facchin, Federica; Casadei, Raffaella; Vitale, Lorenza; Canaider, Silvia; Carinci, Paolo; Zannotti, Maria; Strippoli, Pierluigi
2006-10-15
UniGene Tabulator 1.0 provides a solution for full parsing of the UniGene flat file format; it implements a structured graphical representation of each data field present in UniGene following import into a common database management system usable on a personal computer. This database includes related tables for sequence, protein similarity, sequence-tagged site (STS) and transcript map interval (TXMAP) data, plus a summary table in which each record represents a UniGene cluster. UniGene Tabulator enables full local management of UniGene data, allowing parsing, querying, indexing, retrieving, exporting and analysis of UniGene data in relational database form, usable on computers running Macintosh (OS X 10.3.9 or later) or Windows (2000 with Service Pack 4, or XP with Service Pack 2 or later). The current release, including both FileMaker runtime applications, is freely available at http://apollo11.isto.unibo.it/software/
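A rough sketch of the kind of flat-file parsing such a tool performs. The record layout below (`FIELD value` lines terminated by `//`) is a simplified stand-in for the real UniGene format, which has more, and more deeply structured, fields; the sample data is illustrative only.

```python
def parse_unigene(text):
    """Parse a simplified UniGene-style flat file into a list of dicts,
    one dict per cluster record (records end with '//')."""
    records, current = [], {}
    for line in text.splitlines():
        line = line.rstrip()
        if line == "//":
            if current:
                records.append(current)
            current = {}
        elif line:
            field, _, value = line.partition(" ")
            # repeated fields (e.g. SEQUENCE) accumulate into a list
            current.setdefault(field, []).append(value.strip())
    return records

sample = """\
ID          Hs.1
TITLE       alpha-2-macroglobulin
SEQUENCE    ACC=M11313
SEQUENCE    ACC=BC026246
//
ID          Hs.2
TITLE       example cluster
//
"""
clusters = parse_unigene(sample)
```

Each resulting dict maps naturally onto the relational tables described above: singleton fields feed the per-cluster summary table, while repeated fields such as SEQUENCE become rows in a related table.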
Factors influencing the quality of life of haemodialysis patients according to symptom cluster.
Shim, Hye Yeung; Cho, Mi-Kyoung
2018-05-01
To identify the characteristics in each symptom cluster and factors influencing the quality of life of haemodialysis patients in Korea according to cluster. Despite developments in renal replacement therapy, haemodialysis still restricts the activities of daily living due to pain and impairs physical functioning induced by the disease and its complications. Descriptive survey. Two hundred and thirty dialysis patients aged >18 years. They completed self-administered questionnaires of Dialysis Symptom Index and Kidney Disease Quality of Life instrument-Short Form 1.3. To determine the optimal number of clusters, the collected data were analysed using polytomous variable latent class analysis in R software (poLCA) to estimate the latent class models and the latent class regression models for polytomous outcome variables. Differences in characteristics, symptoms and QOL according to the symptom cluster of haemodialysis patients were analysed using the independent t test and chi-square test. The factors influencing the QOL according to symptom cluster were identified using hierarchical multiple regression analysis. Physical and emotional symptoms were significantly more severe, and the QOL was significantly worse in Cluster 1 than in Cluster 2. The factors influencing the QOL were spouse, job, insurance type and physical and emotional symptoms in Cluster 1, with these variables having an explanatory power of 60.9%. Physical and emotional symptoms were the only influencing factors in Cluster 2, and they had an explanatory power of 37.4%. Mitigating the symptoms experienced by haemodialysis patients and improving their QOL require educational and therapeutic symptom management interventions that are tailored according to the characteristics and symptoms in each cluster. 
The findings of this study are expected to lead to practical guidelines for addressing the symptoms experienced by haemodialysis patients, and they provide basic information for developing nursing interventions to manage these symptoms and improve the QOL of these patients.
Yuan, Soe-Tsyr; Sun, Jerry
2005-10-01
Development of algorithms for automated text categorization in massive text document sets is an important research area of data mining and knowledge discovery. Most text-clustering methods are grounded in a term-based measurement of distance or similarity, ignoring the structure of the documents. In this paper, we present a novel method named structured cosine similarity (SCS) that furnishes document clustering with a new way of modeling document summarization, considering the structure of the documents so as to improve the performance of document clustering in terms of quality, stability, and efficiency. This study was motivated by the problem of clustering speech documents (which lack rich document features) obtained from wireless oral experience sharing by the mobile workforces of enterprises, fulfilling audio-based knowledge management. In other words, this problem aims to facilitate knowledge acquisition and sharing by speech. The evaluations show fairly promising results for our method of structured cosine similarity.
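For reference, the plain term-based cosine similarity that SCS extends can be computed as follows. The token lists are invented examples, and SCS itself additionally models document structure (e.g. summarized segments), which is not shown here.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Plain term-based cosine similarity between two token lists."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)          # shared-term products
    na = sqrt(sum(v * v for v in ca.values()))    # vector norms
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# invented speech-transcript snippets
doc1 = "share field experience by voice notes".split()
doc2 = "field experience shared through voice recordings".split()
doc3 = "quarterly revenue grew by ten percent".split()
```

Under this baseline, doc1 scores higher against doc2 (three shared terms) than against doc3 (one shared term); SCS would further weight where in the document structure those terms occur.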
Hubble Revisits a Globular Cluster’s Age
2014-08-13
This new NASA/ESA Hubble Space Telescope image shows the globular cluster IC 4499. Globular clusters are big balls of old stars that orbit around their host galaxy. It has long been believed that all the stars within a globular cluster form at about the same time, a property which can be used to determine the cluster's age. For more massive globulars, however, detailed observations have shown that this is not entirely true — there is evidence that they instead consist of multiple populations of stars born at different times. One of the driving forces behind this behavior is thought to be gravity: more massive globulars manage to grab more gas and dust, which can then be transformed into new stars. IC 4499 is a somewhat special case. Its mass lies somewhere between low-mass globulars, which show a single generation build-up, and the more complex and massive globulars which can contain more than one generation of stars. By studying objects like IC 4499 astronomers can therefore explore how mass affects a cluster's contents. Astronomers found no sign of multiple generations of stars in IC 4499 — supporting the idea that less massive clusters in general only consist of a single stellar generation. Hubble observations of IC 4499 have also helped to pinpoint the cluster's age: observations of this cluster from the 1990s suggested a puzzlingly young age when compared to other globular clusters within the Milky Way. However, since those first estimates new Hubble data have been obtained, and it now appears much more likely that IC 4499 is roughly the same age as other Milky Way clusters, at approximately 12 billion years old. Credit: ESA and NASA
Analytical network process based optimum cluster head selection in wireless sensor network.
Farman, Haleem; Javed, Huma; Jan, Bilal; Ahmad, Jamil; Ali, Shaukat; Khalil, Falak Naz; Khan, Murad
2017-01-01
Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable health-monitoring sensors, and a plethora of other areas. A WSN comprises hundreds to thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time, and limited memory become even more pronounced. In such a case, network lifetime mainly depends on efficient use of available resources. Organizing nearby nodes into clusters makes it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work on a grid-based hybrid network deployment approach, in which a merge-and-split technique was proposed to construct the network topology. Using the topology constructed by this technique, we apply an analytical network process (ANP) model for cluster head (CH) selection in the WSN. Five distinct parameters are considered for CH selection: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), number of times the node has been selected as cluster head (TCH), and merged node (MN). CH selection based on these parameters is treated as a multi-criteria decision problem, for which the ANP method is used to select the optimum cluster head. The main contribution of this work is to check the applicability of the ANP model for cluster head selection in WSNs. In addition, a sensitivity analysis is carried out to check the stability of the alternatives (available candidate nodes) and their ranking for different scenarios. The simulation results show that the proposed method outperforms existing energy-efficient clustering protocols in terms of optimum CH selection and in minimizing the CH reselection process, which extends overall network lifetime. The analysis also shows that the ANP method provides a better understanding of the dependencies among the different components involved in the evaluation process.
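A simplified sketch of multi-criteria CH scoring over the parameters named above. This is a weighted-sum stand-in, not the paper's full ANP supermatrix computation; the node values and weights are invented, and the merged-node (MN) parameter is omitted for brevity.

```python
def select_cluster_head(nodes, weights):
    """Score candidate nodes by weighted min-max-normalised criteria:
    benefit criteria are maximised, cost criteria minimised.
    A simplification of ANP, which also models criteria interdependencies."""
    benefit = {"REL"}                       # higher residual energy is better
    cost = {"DistNode", "DistCent", "TCH"}  # lower values are better

    def norm(crit):
        vals = [n[crit] for n in nodes]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0
        return [(v - lo) / span for v in vals]

    scores = [0.0] * len(nodes)
    for crit, w in weights.items():
        col = norm(crit)
        for i, v in enumerate(col):
            scores[i] += w * (v if crit in benefit else 1.0 - v)
    best = max(range(len(nodes)), key=scores.__getitem__)
    return nodes[best]["id"], scores

# hypothetical candidate nodes and criteria weights
nodes = [
    {"id": "n1", "REL": 0.9, "DistNode": 12.0, "DistCent": 5.0, "TCH": 1},
    {"id": "n2", "REL": 0.4, "DistNode": 8.0,  "DistCent": 3.0, "TCH": 4},
    {"id": "n3", "REL": 0.8, "DistNode": 20.0, "DistCent": 9.0, "TCH": 0},
]
weights = {"REL": 0.4, "DistNode": 0.2, "DistCent": 0.2, "TCH": 0.2}
ch, scores = select_cluster_head(nodes, weights)
```

Here the energy-rich, moderately central node wins; in the full ANP model the weights themselves emerge from pairwise comparisons and a supermatrix rather than being fixed a priori.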
Analytical network process based optimum cluster head selection in wireless sensor network
Javed, Huma; Jan, Bilal; Ahmad, Jamil; Ali, Shaukat; Khalil, Falak Naz; Khan, Murad
2017-01-01
Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable health-monitoring sensors, and a plethora of other areas. A WSN comprises hundreds to thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time, and limited memory become even more pronounced. In such a case, network lifetime mainly depends on efficient use of available resources. Organizing nearby nodes into clusters makes it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work on a grid-based hybrid network deployment approach, in which a merge-and-split technique was proposed to construct the network topology. Using the topology constructed by this technique, we apply an analytical network process (ANP) model for cluster head (CH) selection in the WSN. Five distinct parameters are considered for CH selection: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), number of times the node has been selected as cluster head (TCH), and merged node (MN). CH selection based on these parameters is treated as a multi-criteria decision problem, for which the ANP method is used to select the optimum cluster head. The main contribution of this work is to check the applicability of the ANP model for cluster head selection in WSNs. In addition, a sensitivity analysis is carried out to check the stability of the alternatives (available candidate nodes) and their ranking for different scenarios. The simulation results show that the proposed method outperforms existing energy-efficient clustering protocols in terms of optimum CH selection and in minimizing the CH reselection process, which extends overall network lifetime. The analysis also shows that the ANP method provides a better understanding of the dependencies among the different components involved in the evaluation process. PMID:28719616
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haas, Nicholas Q; Gillen, Robert E; Karnowski, Thomas P
MathWorks' MATLAB is widely used in academia and industry for prototyping, data analysis, data processing, etc. Many users compile their programs using the MATLAB Compiler to run on workstations/computing clusters via the free MATLAB Compiler Runtime (MCR). The MCR facilitates the execution of code calling Application Programming Interface (API) functions from both base MATLAB and MATLAB toolboxes. In a Linux environment, a sizable number of third-party runtime dependencies (i.e., shared libraries) are necessary. Unfortunately, to the MATLAB community's knowledge, these dependencies are not documented, leaving system administrators and/or end users to find and install the necessary libraries either by chasing the runtime errors that result from their absence or by inspecting the header information of the Executable and Linkable Format (ELF) libraries of the MCR to determine which ones are missing from the system. To address these shortcomings, Docker images based on Community Enterprise Operating System (CentOS) 7, a derivative of Red Hat Enterprise Linux (RHEL) 7, containing recent (2015-2017) MCR releases and their dependencies were created. These images, along with a provided sample Docker Compose YAML script, can be used to create a simulated computing cluster where binaries created with the MATLAB Compiler can be executed using a sample Slurm Workload Manager script.
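A hedged sketch of what such a Docker Compose file might look like. The image names, tags, and paths below are hypothetical illustrations, not the actual artifacts the abstract describes.

```yaml
# Hypothetical docker-compose sketch: a two-node simulated Slurm cluster
# whose compute image bundles a CentOS 7 base plus an MCR release.
version: "3"
services:
  controller:
    image: centos7-slurm-controller:latest   # hypothetical: runs slurmctld
    hostname: controller
  compute:
    image: centos7-mcr:2017a                 # hypothetical: CentOS 7 + MCR + slurmd
    hostname: compute1
    volumes:
      - ./jobs:/jobs                         # compiled MATLAB binaries and sbatch scripts
    depends_on:
      - controller
```

The point of the design is that the compute image carries the otherwise undocumented MCR shared-library dependencies, so compiled MATLAB binaries run without per-host library hunting.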
Presentation on systems cluster research
NASA Technical Reports Server (NTRS)
Morgenthaler, George W.
1989-01-01
This viewgraph presentation presents an overview of systems cluster research performed by the Center for Space Construction. The goals of the research are to develop concepts, insights, and models for space construction and to develop systems engineering/analysis curricula for training future aerospace engineers. The following topics are covered: CSC systems analysis/systems engineering (SIMCON) model, CSC systems cluster schedule, system life-cycle, model optimization techniques, publications, cooperative efforts, and sponsored research.
Geographic distribution of trauma centers and injury-related mortality in the United States.
Brown, Joshua B; Rosengart, Matthew R; Billiar, Timothy R; Peitzman, Andrew B; Sperry, Jason L
2016-01-01
Regionalized trauma care improves outcomes; however, access to care is not uniform across the United States. The objective was to evaluate whether geographic distribution of trauma centers correlates with injury mortality across state trauma systems. Level I or II trauma centers in the contiguous United States were mapped. State-level age-adjusted injury fatality rates per 100,000 people were obtained and evaluated for spatial autocorrelation. Nearest neighbor ratios (NNRs) were generated for each state. An NNR less than 1 indicates clustering, while an NNR greater than 1 indicates dispersion. NNRs were tested for difference from random geographic distribution. Fatality rates and NNRs were examined for correlation. Fatality rates were compared between states with trauma center clustering versus dispersion. Trauma center distribution and population density were evaluated. Spatial-lag regression determined the association between fatality rate and NNR, controlling for state-level demographics, population density, injury severity, trauma system resources, and socioeconomic factors. Fatality rates were spatially autocorrelated (Moran's I = 0.35, p < 0.01). Nine states had a clustered pattern (median NNR, 0.55; interquartile range [IQR], 0.48-0.60), 22 had a dispersed pattern (median NNR, 2.00; IQR, 1.68-3.99), and 10 had a random pattern (median NNR, 0.90; IQR, 0.85-1.00) of trauma center distribution. Fatality rate and NNR were correlated (ρ = 0.34, p = 0.03). Clustered states had a lower median injury fatality rate compared with dispersed states (56.9 [IQR, 46.5-58.9] vs. 64.9 [IQR, 52.5-77.1]; p = 0.04). Dispersed compared with clustered states had more counties without a trauma center that had higher population density than counties with a trauma center (5.7% vs. 1.2%, p < 0.01). Spatial-lag regression demonstrated that fatality rates increased by 0.02 per 100,000 persons for each unit increase in NNR (p < 0.01).
Geographic distribution of trauma centers correlates with injury mortality, with more clustered state trauma centers associated with lower fatality rates. This may be a result of access relative to population density. These results may have implications for trauma system planning and require further study to investigate underlying mechanisms. Therapeutic/care management study, level IV.
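The nearest neighbor ratio used above is the Clark-Evans statistic: the observed mean nearest-neighbour distance divided by its expectation 0.5/sqrt(n/A) under complete spatial randomness. A minimal sketch with invented coordinates; the study's state-level computation and significance testing are not reproduced.

```python
from math import sqrt, hypot

def nearest_neighbor_ratio(points, area):
    """Clark-Evans nearest-neighbour ratio: <1 clustered, >1 dispersed.
    Assumes n >= 2 distinct (x, y) points in a region of the given area."""
    n = len(points)
    mean_nn = sum(
        min(hypot(x - u, y - v) for (u, v) in points if (u, v) != (x, y))
        for (x, y) in points
    ) / n
    expected = 0.5 / sqrt(n / area)  # expected mean NN distance under randomness
    return mean_nn / expected

# hypothetical trauma-centre coordinates in a 100 x 100 unit state
clustered = [(10, 10), (11, 10), (10, 11), (12, 11), (11, 12)]
dispersed = [(10, 10), (90, 10), (10, 90), (90, 90), (50, 50)]
```

The tightly packed configuration yields a ratio well below 1, while the spread-out one exceeds 1, matching the clustered/dispersed state classification described above.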
Integral field spectroscopy with GEMINI: Extragalactic star cluster in NGC1275
NASA Astrophysics Data System (ADS)
Trancho, Gelys; Miller, Bryan; García-Lorenzo, Begoña; Sánchez, Sebastián F.
2006-01-01
Studies of globular cluster systems play a critical role in our understanding of galaxy formation. Imaging with the Hubble Space Telescope has revealed that young star clusters are formed copiously in galaxy mergers, strengthening theories in which giant elliptical galaxies are formed by the merger of spirals [e.g. Whitmore, B.C., Schweizer, F., Leitherer, C., Borne, K., Robert, C., 1993. Astronomical Journal. 106, 1354; Miller, B.W., Whitmore, B.C., Schweizer, F., Fall, S.M., 1997. Astronomical Journal. 114, 2381; Zepf, S.E., Ashman, K.M., English, J., Freeman, K.C., Sharples, R.M., 1999. Astronomical Journal. 118, 752; Ashman, K.M., Zepf, S.E., 1992. Astrophysical Journal. 384, 50]. However, the formation and evolution of globular cluster systems is still not well understood. Ages and metallicities of the clusters are uncertain either because of degeneracy in the broad-band colors or due to variable reddening. Also, the luminosity function of the young clusters, which depends critically on the metallicities and ages of the clusters, appears to follow a single power law, while the luminosity function of old clusters has a well-defined break. Either there is significant dynamical evolution of the cluster systems or metallicity affects the mass function of forming clusters. Spectroscopy of these clusters is needed to improve the metallicity and age measurements and to study the kinematics of young cluster systems. Therefore, we have obtained GMOS IFU data for 4 clusters in NGC1275. We will present preliminary results, such as metallicities, ages, and velocities of the star clusters, from IFU spectroscopy.
Many-objective optimization and visual analytics reveal key trade-offs for London's water supply
NASA Astrophysics Data System (ADS)
Matrosov, Evgenii S.; Huskova, Ivana; Kasprzyk, Joseph R.; Harou, Julien J.; Lambert, Chris; Reed, Patrick M.
2015-12-01
In this study, we link a water resource management simulator to multi-objective search to reveal the key trade-offs inherent in planning a real-world water resource system. We consider new supplies and demand management (conservation) options while seeking to elucidate the trade-offs between the best portfolios of schemes to satisfy projected water demands. Alternative system designs are evaluated using performance measures that minimize capital and operating costs and energy use while maximizing resilience, engineering and environmental metrics, subject to supply reliability constraints. Our analysis shows that many-objective evolutionary optimization coupled with state-of-the-art visual analytics can help planners discover more diverse water supply system designs and better understand their inherent trade-offs. The approach is used to explore future water supply options for the Thames water resource system (including London's water supply). New supply options include a new reservoir, water transfers, artificial recharge, wastewater reuse and brackish groundwater desalination. Demand management options include leakage reduction, compulsory metering and seasonal tariffs. The Thames system's Pareto-approximate portfolios cluster into distinct groups of water supply options; for example, implementing a pipe refurbishment program leads to higher capital costs but greater reliability. This study highlights that traditional least-cost, reliability-constrained design of water supply systems masks asset combinations whose benefits only become apparent when more planning objectives are considered.
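The abstract's central point, that least-cost design masks portfolios which are only visible under more objectives, rests on Pareto dominance: a portfolio survives only if no alternative is at least as good on every objective and strictly better on one. As an illustrative sketch (a generic non-dominated filter, not the evolutionary search or visual-analytics tooling used in the study; the toy cost matrix is an assumption), the idea looks like:

```python
import numpy as np

def pareto_front(costs):
    """Return a boolean mask of non-dominated rows (all objectives minimized).

    costs: (n_portfolios, n_objectives) array, e.g. columns for capital
    cost, operating cost, energy use (resilience-type objectives would be
    negated so that everything is minimized).
    """
    n = costs.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # Another row dominates i if it is <= everywhere and < somewhere.
        dominates = (np.all(costs <= costs[i], axis=1)
                     & np.any(costs < costs[i], axis=1))
        if dominates.any():
            mask[i] = False
    return mask
```

A single-objective (least-cost) ranking would keep only the cheapest row; the Pareto filter keeps every trade-off-optimal portfolio, which is what makes the clustering of portfolios discussed above possible.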
ERIC Educational Resources Information Center
Fonseca, Linda Lafferty
Developed in Illinois, this document contains three components. The first component consists of employability task lists for the business, marketing, and management occupations of first-line supervisors and manager/supervisors; file clerks; traffic, shipping, and receiving clerks; records management analysts; adjustment clerks; and customer…
Design notes for the next generation persistent object manager for CAP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isely, M.; Fischler, M.; Galli, M.
1995-05-01
The CAP query system software at Fermilab has several major components, including SQS (for managing the query), the retrieval system (for fetching auxiliary data), and the query software itself. The central query software in particular is essentially a modified version of the `ptool` product created at UIC (University of Illinois at Chicago) as part of the PASS project under Bob Grossman. The original UIC version was designed for use in a single-user, non-distributed Unix environment. The Fermi modifications were an attempt to permit multi-user access to a data set distributed over a set of storage nodes. (The hardware is an IBM SP-x system: a cluster of AIX POWER2 nodes with an IBM-proprietary high-speed switch interconnect.) Since the implementation work of the Fermi-ized ptool, the CAP members have learned quite a bit about the nature of queries and where the current performance bottlenecks exist. This has led them to design a persistent object manager that will overcome these problems. For backwards compatibility with ptool, the ptool persistent object API will largely be retained, but the implementation will be entirely different.
Understanding growers' decisions to manage invasive pathogens at the farm level.
Breukers, Annemarie; van Asseldonk, Marcel; Bremmer, Johan; Beekman, Volkert
2012-06-01
Globalization causes plant production systems to be increasingly threatened by invasive pests and pathogens. Much research is devoted to supporting the management of these risks. Yet, the role of growers' perceptions and behavior in risk management has remained insufficiently analyzed. This article aims to fill this gap by addressing risk management of invasive pathogens from a sociopsychological perspective. An analytical framework based on the Theory of Planned Behavior was used to explain growers' decisions on voluntary risk management measures. Survey information from 303 Dutch horticultural growers was statistically analyzed, including regression and cluster analysis. It appeared that growers were generally willing to apply risk management measures, and that poor risk management was mainly due to perceived barriers, such as high costs and doubts regarding the efficacy of management measures. The management measures applied varied considerably among growers, depending on production sector and farm-specific circumstances. Growers' risk perception was found to play a role in their risk management, although the causal relation remained unclear. These results underscore the need to apply a holistic perspective to farm-level management of invasive pathogen risk, considering the entire package of management measures and accounting for sector- and farm-specific circumstances. Moreover, they demonstrate that invasive pathogen risk management can benefit from a multidisciplinary approach that incorporates growers' perceptions and behavior.
1993-03-01
[Figure 4-4: HOST_ALL Implementation] HOST_ALL is implemented as follows. The kernel looks up the... it includes the HOST_ALL request as an argument. The generic CronusHost object is managed by the Cronus Kernel. A kernel that receives a ProxyDistribute request uses its cached service information to send the HOST_ALL request to each host in its cluster via UDP. If the kernel has no cached information
Bulger, Carrie A; Matthews, Russell A; Hoffman, Mark E
2007-10-01
While researchers are increasingly interested in understanding the boundaries surrounding the work and personal life domains, few have tested the propositions set forth by theory. Boundary theory proposes that individuals manage the boundaries between work and personal life through processes of segmenting and/or integrating the domains. The authors investigated the boundary management profiles of 332 workers to examine the segmentation-integration continuum. Cluster analysis indicated consistent clusters of boundary management practices related to varying segmentation and integration of the work and personal life domains. However, the authors suggest that the segmentation-integration continuum may be more complicated. Results also indicated relationships between boundary management practices and work-personal life interference and work-personal life enhancement. Less flexible and more permeable boundaries were related to more interference, while more flexible and more permeable boundaries were related to more enhancement.
Lee, Yii-Ching; Huang, Shian-Chang; Huang, Chih-Hsuan; Wu, Hsin-Hung
2016-01-01
This study uses kernel k-means cluster analysis to identify medical staff with high burnout. The data, collected in October and November 2014, come from the emotional exhaustion dimension of the Chinese version of the Safety Attitudes Questionnaire in a regional teaching hospital in Taiwan. A total of 680 valid questionnaires were collected from staff across the hospital, including physicians, nurses, technicians, pharmacists, medical administrators, and respiratory therapists. The results show that 8 clusters are generated by the kernel k-means method. Employees in clusters 1, 4, and 5 are in relatively good condition, whereas employees in clusters 2, 3, 6, 7, and 8 need to be closely monitored from time to time because they have a relatively higher degree of burnout. When employees with a higher degree of burnout are identified, the hospital management can take actions to improve resilience, reduce potential medical errors, and, eventually, enhance patient safety. This study also suggests that the hospital management needs to keep track of medical staff's fatigue conditions and provide timely assistance for burnout recovery through employee assistance programs, mindfulness-based stress reduction programs, positivity currency buildup, and forming appreciative inquiry groups. PMID:27895218
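Kernel k-means, the method named in the abstract, runs the usual k-means assignment step implicitly in a kernel-induced feature space, which lets it separate groups that are not linearly separable in the raw questionnaire scores. The following is a generic NumPy sketch with an RBF kernel and deterministic farthest-point seeding, not the authors' implementation; the kernel choice, gamma, seeding, and the two synthetic "burnout" groups are assumptions for illustration:

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Gaussian (RBF) kernel matrix from pairwise squared distances
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_kmeans(K, n_clusters, n_iter=100):
    """Cluster points given a precomputed kernel matrix K.

    Distances to centroids are computed implicitly in feature space; the
    constant K_ii term is dropped since it does not affect the argmin.
    """
    n = K.shape[0]
    diag = np.diag(K)
    # Deterministic farthest-point seeding in feature space
    seeds = [0]
    while len(seeds) < n_clusters:
        d = np.min(np.stack([diag + diag[s] - 2.0 * K[:, s] for s in seeds]),
                   axis=0)
        seeds.append(int(d.argmax()))
    labels = np.stack([diag + diag[s] - 2.0 * K[:, s] for s in seeds],
                      axis=1).argmin(axis=1)
    for _ in range(n_iter):
        dist = np.full((n, n_clusters), np.inf)
        for k in range(n_clusters):
            mask = labels == k
            nk = mask.sum()
            if nk == 0:
                continue  # leave empty clusters at infinite distance
            dist[:, k] = (-2.0 * K[:, mask].sum(axis=1) / nk
                          + K[np.ix_(mask, mask)].sum() / nk ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```

On real survey data one would cluster the emotional-exhaustion item scores per respondent; here the point is only the mechanics of the kernelized assignment/update loop.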
Kazi, A M; Ali, M; K, Ayub; Kalimuddin, H; Zubair, K; Kazi, A N; A, Artani; Ali, S A
2017-11-01
The addition of a Global Positioning System (GPS) receiver to a mobile phone makes it a very powerful tool for surveillance and monitoring coverage of health programs. This technology enables transfer of data directly into computer applications and cross-referencing to Geographic Information Systems (GIS) maps, which enhances assessment of coverage and trends. Utilization of these systems in low- and middle-income countries is currently limited, particularly for immunization coverage assessments and polio vaccination campaigns. We piloted the use of this system and discuss its potential to improve the efficiency of field-based health providers and health managers for monitoring of the immunization program. Using the WHO "30×7" sampling technique, a survey of children less than five years of age was conducted in random clusters in three high-risk towns of Karachi, Pakistan, where a polio case was detected in 2011. The center point of each cluster was calculated by the application on the mobile phone. Data and location coordinates were collected through a mobile phone. These data were linked with an automated mHealth-based monitoring system for monitoring of Supplementary Immunization Activities (SIAs) in Karachi. After each SIA, a visual report was generated from the coordinates collected in the survey. A total of 3535 participants consented to answer a baseline survey. We found that mobile phones incorporating GIS maps can improve the efficiency of health providers through real-time reporting and by replacing paper-based questionnaires for collection of data at the household level. Visual maps generated from the data and geospatial analysis can also give a better assessment of immunization coverage and polio vaccination campaigns.
The study supports a model system in resource constrained settings that allows routine capture of individual level data through GPS enabled mobile phone providing actionable information and geospatial maps to local public health managers, policy makers and study staff monitoring immunization coverage. Copyright © 2017 Elsevier B.V. All rights reserved.
SANs and Large Scale Data Migration at the NASA Center for Computational Sciences
NASA Technical Reports Server (NTRS)
Salmon, Ellen M.
2004-01-01
Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.
Managing Clustered Data Using Hierarchical Linear Modeling
ERIC Educational Resources Information Center
Warne, Russell T.; Li, Yan; McKyer, E. Lisako J.; Condie, Rachel; Diep, Cassandra S.; Murano, Peter S.
2012-01-01
Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistical methods share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence…
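The independence violation described above can be quantified with the intraclass correlation coefficient (ICC): when participants are sampled in clusters (e.g. classrooms or schools), outcomes within a cluster correlate, and ordinary regression understates standard errors by roughly the design effect 1 + (n - 1) × ICC. A minimal NumPy simulation (the variance parameters and cluster sizes are assumptions chosen for illustration, not from the article) estimates the ICC with the standard one-way ANOVA estimator:

```python
import numpy as np

rng = np.random.default_rng(42)
n_clusters, n_per = 50, 30
# Random-intercept model: y_ij = u_j + e_ij with
# var(u) = 4 (between clusters) and var(e) = 1 (within clusters),
# so the true ICC is 4 / (4 + 1) = 0.8.
u = rng.normal(0.0, 2.0, n_clusters)
y = u[:, None] + rng.normal(0.0, 1.0, (n_clusters, n_per))

# One-way ANOVA estimator of the intraclass correlation
grand = y.mean()
msb = n_per * np.sum((y.mean(axis=1) - grand) ** 2) / (n_clusters - 1)
msw = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (n_clusters * (n_per - 1))
icc = (msb - msw) / (msb + (n_per - 1) * msw)

# Design effect: how much the effective sample size shrinks
deff = 1.0 + (n_per - 1) * icc
```

A hierarchical linear model handles this structure directly by estimating the between- and within-cluster variance components rather than pooling all observations as if independent.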
Hospitality, Recreation, and Personal Service Occupations: Grade 8. Cluster V.
ERIC Educational Resources Information Center
Calhoun, Olivia H.
A curriculum guide for grade 8, the document is devoted to the occupational cluster "Hospitality, Recreation, and Personal Service Occupations." It is divided into four units: recreational resources for education, employment, and professional opportunities; barbering and cosmetology; mortuary science; hotel-motel management. Each unit is…
Development of Metal Cluster-Based Energetic Materials at NSWC-IHD
2011-01-01
...clusters make up the subunit of a molecular metal-based energetic material. The reactivity of NixAly+ clusters with nitromethane was investigated using a gas-phase molecular beam system. Results indicate that nitromethane is highly reactive toward the NixAly+ clusters and suggests it would not make
Nathan, Hannah L; Duhig, Kate; Vousden, Nicola; Lawley, Elodie; Seed, Paul T; Sandall, Jane; Bellad, Mrutyunjaya B; Brown, Adrian C; Chappell, Lucy C; Goudar, Shivaprasad S; Gidiri, Muchabayiwa F; Shennan, Andrew H
2018-03-27
Obstetric haemorrhage, sepsis and pregnancy hypertension account for more than 50% of maternal deaths worldwide. Early detection and effective management of these conditions relies on vital signs. The Microlife® CRADLE Vital Sign Alert (VSA) is an easy-to-use, accurate device that measures blood pressure and pulse. It incorporates a traffic-light early warning system that alerts all levels of healthcare provider to the need for escalation of care in women with obstetric haemorrhage, sepsis or pregnancy hypertension, thereby aiding early recognition of haemodynamic instability and preventing maternal mortality and morbidity. The aim of the trial was to determine whether implementation of the CRADLE intervention (the Microlife® CRADLE VSA device and CRADLE training package) into routine maternity care in place of existing equipment will reduce a composite outcome of maternal mortality and morbidity in low- and middle-income country populations. The CRADLE-3 trial was a stepped-wedge cluster-randomised controlled trial of the CRADLE intervention compared to routine maternity care. Each cluster crossed from routine maternity care to the intervention at 2-monthly intervals over the course of 20 months (April 2016 to November 2017). All women identified as pregnant or within 6 weeks postpartum, presenting for maternity care in cluster catchment areas were eligible to participate. Primary outcome data (composite of maternal death, eclampsia and emergency hysterectomy per 10,000 deliveries) were collected at 10 clusters (Gokak, Belgaum, India; Harare, Zimbabwe; Ndola, Zambia; Lusaka, Zambia; Freetown, Sierra Leone; Mbale, Uganda; Kampala, Uganda; Cap Haitien, Haiti; South West, Malawi; Addis Ababa, Ethiopia). This trial was informed by the Medical Research Council guidance for complex interventions. A process evaluation was undertaken to evaluate implementation in each site and a cost-effectiveness evaluation will be undertaken.
All aspects of this protocol have been evaluated in a feasibility study, with subsequent optimisation of the intervention. This trial will demonstrate the potential impact of the CRADLE intervention on reducing maternal mortality and morbidity in low-resource settings. It is anticipated that the relatively low cost of the intervention and ease of integration into existing health systems will be of significant interest to local, national and international health policy-makers. ISRCTN41244132. Registered on 2 February 2016. Prospective protocol modifications have been recorded and were communicated to the Ethics Committees and Trials Committees. The adapted Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) Checklist and the SPIRIT Checklist are attached as Additional file 1.
Ajay, Vamadevan S; Tian, Maoyi; Chen, Hao; Wu, Yangfeng; Li, Xian; Dunzhu, Danzeng; Ali, Mohammed K; Tandon, Nikhil; Krishnan, Anand; Prabhakaran, Dorairaj; Yan, Lijing L
2014-09-06
In resource-poor areas of China and India, the cardiovascular disease burden is high, but availability of and access to quality healthcare is limited. Establishing a management scheme that utilizes the local infrastructure and builds healthcare capacity is essential for cardiovascular disease prevention and management. The study aims to develop, implement, and evaluate the feasibility and effectiveness of a simplified, evidence-based cardiovascular management program delivered by community healthcare workers in resource-constrained areas in Tibet, China and Haryana, India. This yearlong cluster-randomized controlled trial will be conducted in 20 villages in Tibet and 20 villages in Haryana. Randomization of villages to usual care or intervention will be stratified by country. High cardiovascular disease risk individuals (aged 40 years or older, history of heart disease, stroke, diabetes, or measured systolic blood pressure of 160 mmHg or higher) will be screened at baseline. Community health workers in the intervention villages will be trained to manage and follow up high-risk patients on a monthly basis following a simplified '2+2' intervention model involving two lifestyle recommendations and the appropriate prescription of two medications. A customized electronic decision support system based on the intervention strategy will be developed to assist the community health workers with patient management. Baseline and follow-up surveys will be conducted in a standardized fashion in all villages. The primary outcome will be the net difference between-group in the proportion of high-risk patients taking antihypertensive medication pre- and post-intervention. Secondary outcomes will include the proportion of patients taking aspirin and changes in blood pressure. Process and economic evaluations will also be conducted. 
To our knowledge, this will be the first study to evaluate the effect of a simplified management program delivered by community health workers with the help of electronic decision support system on improving the health of high cardiovascular disease risk patients. If effective, this intervention strategy can serve as a model that can be implemented, where applicable, in rural China, India, and other resource-constrained areas. The trial was registered in the clinicaltrials.gov database on 30 December, 2011 and the registration number is NCT01503814.
Nonpharmacologic Pain Management Interventions in German Nursing Homes: A Cluster Randomized Trial.
Kalinowski, Sonja; Budnick, Andrea; Kuhnert, Ronny; Könner, Franziska; Kissel-Kröll, Angela; Kreutz, Reinhold; Dräger, Dagmar
2015-08-01
The reported prevalence of pain among nursing home residents (NHRs) is high. Insufficient use of analgesics, the conventional pain management strategy, is often reported. Whether and to what extent nonpharmacologic therapies (NPTs) are used to manage the pain of NHRs in Germany is largely unknown. The aim of this cluster-randomized trial was to assess the NPTs provided and to enhance the application and prescription of NPTs in NHRs on an individual level. There were six nursing homes in the intervention group and six in the control group. There were 239 NHRs, aged ≥65 years, with an average Mini-Mental State Examination score of at least 18 at baseline. Pain management interventions (cluster level) included an online course for physicians and 1-day seminar for nurses. Data on NPT applied by nurses and therapeutic NPT prescribed by physicians were obtained from residents' nursing documentation. Face-to-face interviews with NHRs assessed the NPT received. At baseline, 82.6% of NHR (mean age 83 years) were affected by pain, but less than 1 in 10 received NPT. The intervention did not result in a significant increase in the NPT applied by nurses, but did significantly increase the therapeutic NPT prescribed by physicians. Residents were active in using NPT to self-manage their pain. Given the prevalence of pain in NHRs, there is a clear need to improve pain management in this population. Extended use of NPT offers a promising approach. We recommend that nurses provide residents with education on pain-management techniques to support them in taking a proactive role in managing their pain. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
Lipper, Colin H; Paddock, Mark L; Onuchic, José N; Mittler, Ron; Nechushtai, Rachel; Jennings, Patricia A
2015-01-01
Iron-sulfur cluster biogenesis is executed by distinct protein assembly systems. Mammals have two systems, the mitochondrial Fe-S cluster assembly system (ISC) and the cytosolic assembly system (CIA), that are connected by an unknown mechanism. The human members of the NEET family of 2Fe-2S proteins, nutrient-deprivation autophagy factor-1 (NAF-1) and mitoNEET (mNT), are located at the interface between the mitochondria and the cytosol. These proteins have been implicated in cancer cell proliferation, and they can transfer their 2Fe-2S clusters to a standard apo-acceptor protein. Here we identify human Anamorsin (also known as cytokine-induced apoptosis inhibitor-1; CIAPIN-1) as the first physiological 2Fe-2S cluster acceptor for both NEET proteins. Anamorsin is an electron transfer protein containing two iron-sulfur cluster-binding sites that is required for cytosolic Fe-S cluster assembly. We show, using UV-Vis spectroscopy, that both NAF-1 and mNT can transfer their 2Fe-2S clusters to apo-Anamorsin with second-order rate constants similar to those of other known human 2Fe-2S transfer proteins. A direct protein-protein interaction of the NEET proteins with apo-Anamorsin was detected using biolayer interferometry. Furthermore, electrospray mass spectrometry of holo-Anamorsin prepared by cluster transfer shows that it receives both of its 2Fe-2S clusters from the NEETs. We propose that mNT and NAF-1 can provide parallel routes connecting the mitochondrial ISC system and the CIA. 2Fe-2S clusters assembled in the mitochondria are received by NEET proteins and, when needed, transferred to Anamorsin, activating the CIA.
NASA Astrophysics Data System (ADS)
Stroobant, M.; Locritani, M.; Marini, D.; Sabbadini, L.; Carmisciano, C.; Manzella, G.; Magaldi, M.; Aliani, S.
2012-04-01
DLTM is the Ligurian Region (northern Italy) Centre of Excellence (CoE) cluster in waterborne technologies. It involves about 120 enterprises (more than 100 of them SMEs), the University of Genoa, all the main national research centres dealing with maritime and marine technologies established in Liguria (CNR, INGV, ENEA-UTMAR), the NATO Undersea Research Centre (NURC) and the Experimental Centre of the Italian Navy (CSSN), as well as the Bank, the Port Authority and the Chamber of Commerce of the city of La Spezia. Following its mission, DLTM has recently established three collaborative research laboratories focused on: 1. Computational Fluid Dynamics (CFD_Lab); 2. High Performance Computing (HPC_Lab); 3. Monitoring and Analysis of Marine Ecosystems (MARE_Lab). Their main role is to improve the relationships among the research centres and the enterprises, encouraging a systematic networking approach and the sharing of knowledge, data, services, tools and human resources. Two of the key objectives of MARE_Lab are the establishment of: - an integrated system of observation and sea forecasting; - a Regional Marine Instrument Centre (RMIC) for oceanographic and meteorological instruments (assembled using 'shared' tools and facilities). In addition, an important and innovative research project has recently been submitted to the Italian Ministry for Education, University and Research (MIUR). This project, in agreement with the European Directives (COM2009 (544)), aims to develop a Management Information System (MIS) for oceanographic and meteorological data in the Mediterranean Sea. The availability of adequate HPC inside DLTM is, of course, an important asset for achieving useful results; for example, the Regional Ocean Modeling System (ROMS) model is currently running on a high-resolution mesh on the cluster to simulate and reproduce the circulation within the Ligurian Sea.
ROMS outputs will have broad and multidisciplinary impacts because ocean circulation affects the dispersion of different substances such as oil spills and other pollutants, but also sediments, nutrients and larvae. This could be an important tool for environmental preservation, prevention and remediation, laying the groundwork for the integrated management of the ocean.
Biogenesis of [Fe-S] cluster in Firmicutes: an unexploited field of investigation.
Riboldi, Gustavo Pelicioli; de Mattos, Eduardo Preusser; Frazzon, Jeverson
2013-09-01
Iron-sulfur clusters (ISC) ([Fe-S]) are evolutionarily ancient and ubiquitous inorganic prosthetic groups present in almost all living organisms, whose biosynthetic assembly is dependent on complex protein machineries. [Fe-S] clusters are involved in biologically important processes, ranging from electron transfer catalysis to transcriptional regulatory roles. Three different systems involved in [Fe-S] cluster assembly have already been characterized in Proteobacteria, namely, the nitrogen fixation system, the ISC system and the sulfur assimilation system. Although they are well described in various microorganisms, these machineries are poorly characterized in members of the Firmicutes phylum, to which several groups of pathogenic bacteria belong. Recently, several research groups have made efforts to elucidate the biogenesis of [Fe-S] clusters at the molecular level in Firmicutes, and many important characteristics have been described. Considering the pivotal role of [Fe-S] clusters in a number of biological processes, the review presented here focuses on the description of the biosynthetic machineries for [Fe-S] cluster biogenesis in prokaryotes, followed by a discussion on recent results observed for Firmicutes [Fe-S] cluster assembly.
Artim-Esen, Bahar; Çene, Erhan; Şahinkaya, Yasemin; Ertan, Semra; Pehlivan, Özlem; Kamali, Sevil; Gül, Ahmet; Öcal, Lale; Aral, Orhan; Inanç, Murat
2014-07-01
Associations between autoantibodies and clinical features have been described in systemic lupus erythematosus (SLE). Herein, we aimed to define autoantibody clusters and their clinical correlations in a large cohort of patients with SLE. We analyzed 852 patients with SLE who attended our clinic. Seven autoantibodies were selected for cluster analysis: anti-DNA, anti-Sm, anti-RNP, anticardiolipin (aCL) immunoglobulin (Ig)G or IgM, lupus anticoagulant (LAC), anti-Ro, and anti-La. Two-step clustering and Kaplan-Meier survival analyses were used. Five clusters were identified: a cluster of patients with only anti-dsDNA antibodies, a cluster of anti-Sm and anti-RNP, a cluster of aCL IgG/M and LAC, and a cluster of anti-Ro and anti-La antibodies. The analysis revealed one more cluster, consisting of patients who did not belong to any of the clusters formed by the antibodies chosen for cluster analysis. The Sm/RNP cluster had a significantly higher incidence of pulmonary hypertension and Raynaud phenomenon. The dsDNA cluster had the highest incidence of renal involvement. In the aCL/LAC cluster, there were significantly more patients with neuropsychiatric involvement, antiphospholipid syndrome, autoimmune hemolytic anemia, and thrombocytopenia. According to the Systemic Lupus International Collaborating Clinics damage index, the highest frequency of damage was in the aCL/LAC cluster. Comparison of 10- and 20-year survival showed reduced survival in the aCL/LAC cluster. This study supports the existence of autoantibody clusters with distinct clinical features in SLE and shows that forming clinical subsets according to autoantibody clusters may be useful in predicting the outcome of the disease. Autoantibody clusters in SLE may exhibit differences according to the clinical setting or population.
Combining Surveillance Systems: Effective Merging of U.S. Veteran and Military Health Data
Pavlin, Julie A.; Burkom, Howard S.; Elbert, Yevgeniy; Lucero-Obusan, Cynthia; Winston, Carla A.; Cox, Kenneth L.; Oda, Gina; Lombardo, Joseph S.; Holodniy, Mark
2013-01-01
Background: The U.S. Department of Veterans Affairs (VA) and Department of Defense (DoD) had more than 18 million healthcare beneficiaries in 2011. Both Departments conduct individual surveillance for disease events and health threats. Methods: We performed joint and separate analyses of VA and DoD outpatient visit data from October 2006 through September 2010 to demonstrate geographic and demographic coverage, timeliness of influenza epidemic awareness, and impact on spatial cluster detection achieved from a joint VA and DoD biosurveillance platform. Results: Although VA coverage is greater, DoD visit volume is comparable or greater. Detection of outbreaks was better in DoD data for 58% and 75% of geographic areas surveyed for seasonal and pandemic influenza, respectively, and better in VA data for 34% and 15%. The VA system tended to alert earlier with a typical H3N2 seasonal influenza affecting older patients, and the DoD performed better during the H1N1 pandemic which affected younger patients more than normal influenza seasons. Retrospective analysis of known outbreaks demonstrated clustering evidence found in separate DoD and VA runs, which persisted with combined data sets. Conclusion: The analyses demonstrate two complementary surveillance systems with evident benefits for the national health picture. Relative timeliness of reporting could be improved in 92% of geographic areas with access to both systems, and more information provided in areas where only one type of facility exists. Combining DoD and VA data enhances geographic cluster detection capability without loss of sensitivity to events isolated in either population and has a manageable effect on customary alert rates. PMID:24386335
Chang, Soju; Pool, Vitali; O'Connell, Kathryn; Polder, Jacquelyn A; Iskander, John; Sweeney, Colleen; Ball, Robert; Braun, M Miles
2008-01-01
Errors involving the mix-up of tuberculin purified protein derivative (PPD) and vaccines leading to adverse reactions and unnecessary medical management have been reported previously. To determine the frequency of PPD-vaccine mix-ups reported to the US Vaccine Adverse Event Reporting System (VAERS) and the Adverse Event Reporting System (AERS), characterize adverse events and clusters involving mix-ups and describe reported contributory factors. We reviewed AERS reports from 1969 to 2005 and VAERS reports from 1990 to 2005. We defined a mix-up error event as an incident in which a single patient or a cluster of patients inadvertently received vaccine instead of a PPD product or received a PPD product instead of vaccine. We defined a cluster as inadvertent administration of PPD or vaccine products to more than one patient in the same facility within 1 month. Of 115 mix-up events identified, 101 involved inadvertent administration of vaccines instead of PPD. Product confusion involved PPD and multiple vaccines. The annual number of reported mix-ups increased from an average of one event per year in the early 1990s to an average of ten events per year in the early part of this decade. More than 240 adults and children were affected and the majority reported local injection site reactions. Four individuals were hospitalized (all recovered) after receiving the wrong products. Several patients were inappropriately started on tuberculosis prophylaxis as a result of a vaccine local reaction being interpreted as a positive tuberculin skin test. Reported potential contributory factors involved both system factors (e.g. similar packaging) and human errors (e.g. failure to read label before product administration). To prevent PPD-vaccine mix-ups, proper storage, handling and administration of vaccine and PPD products is necessary.
Cluster redshifts in five suspected superclusters
NASA Technical Reports Server (NTRS)
Ciardullo, R.; Ford, H.; Harms, R.
1985-01-01
Redshift surveys for rich superclusters were carried out in five regions of the sky containing surface-density enhancements of Abell clusters. While several superclusters are identified, projection effects dominate each field, and no system contains more than five rich clusters. Two systems are found to be especially interesting. The first, field 0136 10, is shown to contain a superposition of at least four distinct superclusters, with the richest system possessing a small velocity dispersion. The second system, 2206 - 22, though a region of exceedingly high Abell cluster surface density, appears to be a remarkable superposition of 23 rich clusters almost uniformly distributed in redshift space between 0.08 and 0.24. The new redshifts significantly increase the three-dimensional information available for the distance class 5 and 6 Abell clusters and allow the spatial correlation function around rich superclusters to be estimated.
Clustering of GPS velocities in the Mojave Block, southeastern California
NASA Astrophysics Data System (ADS)
Savage, J. C.; Simpson, R. W.
2013-04-01
We find subdivisions within the Mojave Block using cluster analysis to identify groupings in the velocities observed at GPS stations there. The clusters are represented on a fault map by symbols located at the positions of the GPS stations, each symbol representing the cluster to which the velocity of that GPS station belongs. Fault systems that separate the clusters are readily identified on such a map. The most significant representation, as judged by the gap test, involves 4 clusters within the Mojave Block. The fault systems bounding the clusters from east to west are 1) the faults defining the eastern boundary of the Northeast Mojave Domain extended southward to connect to the Hector Mine rupture, 2) the Calico-Paradise fault system, 3) the Landers-Blackwater fault system, and 4) the Helendale-Lockhart fault system. This division of the Mojave Block is very similar to that proposed by Meade and Hager []. However, no cluster boundary coincides with the Garlock Fault, the northern boundary of the Mojave Block. Rather, the clusters appear to continue without interruption from the Mojave Block north into the southern Walker Lane Belt, similar to the continuity across the Garlock Fault of the shear zone along the Blackwater-Little Lake fault system observed by Peltzer et al. []. Mapped traces of individual faults in the Mojave Block terminate within the block and do not continue across the Garlock Fault [Dokka and Travis, ].
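The clustering-plus-gap-test workflow described above can be sketched with a toy example. The snippet below is a minimal illustration, not the authors' code: it runs a plain k-means on synthetic two-dimensional "station velocities" and uses a simple gap statistic (after Tibshirani et al.) to judge how many clusters are supported. The data, seed, and parameter choices are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means; returns labels and within-cluster dispersion W."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    W = sum(((X[labels == j] - centers[j]) ** 2).sum() for j in range(k))
    return labels, W

def gap_statistic(X, k, n_ref=20):
    """Compare log(W) against uniform reference data drawn over the data range."""
    _, W = kmeans(X, k)
    lo, hi = X.min(axis=0), X.max(axis=0)
    ref = [kmeans(rng.uniform(lo, hi, X.shape), k)[1] for _ in range(n_ref)]
    return np.mean(np.log(ref)) - np.log(W)

# Synthetic "station velocities": four well-separated groups of 2-D vectors.
means = np.array([[0, 0], [5, 0], [0, 5], [5, 5]], dtype=float)
X = np.vstack([rng.normal(m, 0.3, (30, 2)) for m in means])

gaps = {k: gap_statistic(X, k) for k in range(2, 7)}
best_k = max(gaps, key=gaps.get)
```

A cluster map would then plot each station's symbol by its label, so that fault systems separating the groupings stand out visually.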
NASA Astrophysics Data System (ADS)
Krakovsky, Y. M.; Luzgin, A. N.; Mikhailova, E. A.
2018-05-01
At present, cyber-security issues associated with the informatization objects of industry occupy one of the key niches in the state management system. Functional disruption of these systems via cyberattacks may cause an emergency involving loss of life, environmental disaster, major financial and economic damage, or disruption of the activities of cities and settlements. When cyberattacks occur with high intensity, there is a need to develop protection against them based on machine learning methods. This paper examines interval forecasting and presents results for a pre-set intensity level. The interval forecasting is carried out using a probabilistic cluster model. The method forecasts which of two predetermined intervals a future value of the indicator will fall into, using probability estimates for this purpose. The dividing bound between these intervals is determined by a calculation method based on statistical characteristics of the indicator. The source data comprise hourly counts of cyberattacks recorded by a honeypot from March to September 2013.
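One way to realize the two-interval forecast described above is sketched below. This is an illustrative assumption, not the paper's model: the dividing bound is taken to be the sample median, and the probability that the next value lands in the upper interval is estimated from empirical transition frequencies between the two intervals. A synthetic Poisson series stands in for the honeypot data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic hourly attack counts standing in for the honeypot series.
counts = rng.poisson(lam=20, size=500)

# Dividing bound chosen from statistical characteristics of the indicator
# (here: the sample median, one plausible choice).
bound = np.median(counts)

# Two-state view of the series: 0 = lower interval, 1 = upper interval.
states = (counts > bound).astype(int)

# Empirical transition probabilities between the two intervals.
T = np.zeros((2, 2))
for a, b in zip(states[:-1], states[1:]):
    T[a, b] += 1
T = T / T.sum(axis=1, keepdims=True)

# Forecast the interval of the next value from the current state.
p_upper_next = T[states[-1], 1]
forecast = "upper" if p_upper_next >= 0.5 else "lower"
```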
Peiris, David; Usherwood, Tim; Panaretto, Kathryn; Harris, Mark; Hunt, Jennifer; Redfern, Julie; Zwar, Nicholas; Colagiuri, Stephen; Hayman, Noel; Lo, Serigne; Patel, Bindu; Lyford, Marilyn; MacMahon, Stephen; Neal, Bruce; Sullivan, David; Cass, Alan; Jackson, Rod; Patel, Anushka
2015-01-01
Despite effective treatments to reduce cardiovascular disease risk, their translation into practice is limited. Using a parallel-arm cluster-randomized controlled trial in 60 Australian primary healthcare centers, we tested whether a multifaceted quality improvement intervention comprising computerized decision support, audit/feedback tools, and staff training improved (1) guideline-indicated risk factor measurements and (2) guideline-indicated medications for those at high cardiovascular disease risk. Centers had to use a compatible software system, and eligible patients were regular attendees (Aboriginal and Torres Strait Islander people aged ≥ 35 years and others aged ≥ 45 years). Patient-level analyses were conducted using generalized estimating equations to account for clustering. Median follow-up for 38,725 patients (mean age, 61.0 years; 42% men) was 17.5 months. Mean monthly staff support was <1 hour/site. For the coprimary outcomes, the intervention was associated with improved overall risk factor measurements (62.8% versus 53.4%; risk ratio, 1.25; 95% confidence interval, 1.04-1.50; P=0.02), but there was no significant difference in recommended prescriptions for the high-risk cohort (n=10,308; 56.8% versus 51.2%; P=0.12). There were significant treatment escalations (new prescriptions or increased numbers of medicines) for antiplatelet (17.9% versus 2.7%; P<0.001), lipid-lowering (19.2% versus 4.8%; P<0.001), and blood pressure-lowering medications (23.3% versus 12.1%; P=0.02). In Australian primary healthcare settings, a computer-guided quality improvement intervention, requiring minimal support, improved cardiovascular disease risk measurement but did not increase prescription rates in the high-risk group. Computerized quality improvement tools offer an important, albeit partial, solution to improving primary healthcare system capacity for cardiovascular disease risk management. https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=336630.
Australian New Zealand Clinical Trials Registry No. 12611000478910. © 2015 American Heart Association, Inc.
Rejani, R; Rao, K V; Osman, M; Srinivasa Rao, Ch; Reddy, K Sammi; Chary, G R; Pushpanjali; Samuel, Josily
2016-03-01
The ungauged wet semi-arid watershed cluster, Seethagondi, lies in the Adilabad district of Telangana in India and is prone to severe erosion and water scarcity. Runoff and soil loss data at the watershed, catchment, and field level are necessary for planning soil and water conservation interventions. In this study, an attempt was made to develop a spatial soil loss estimation model for the Seethagondi cluster using RUSLE coupled with ArcGIS, and the model was used to estimate the soil loss spatially and temporally. Daily APHRODITE rainfall data for the period from 1951 to 2007 were used; the annual rainfall varied from 508 to 1351 mm, with a mean annual rainfall of 950 mm and a mean erosivity of 6789 MJ mm ha(-1) h(-1) year(-1). Considerable variation in land use and land cover, especially in crop land and fallow land, was observed between normal and drought years, and corresponding variation in the erosivity, C factor, and soil loss was also noted. The mean value of the C factor derived from NDVI for crop land was 0.42 and 0.22 in normal and drought years, respectively. The topography is undulating, a major portion of the cluster has slopes of less than 10°, and 85.3% of the cluster has soil loss below 20 t ha(-1) year(-1). The soil loss from crop land varied from 2.9-3.6 t ha(-1) year(-1) in low rainfall years to 31.8-34.7 t ha(-1) year(-1) in high rainfall years, with a mean annual soil loss of 12.2 t ha(-1) year(-1). The soil loss from crop land was highest in the month of August, with an annual soil loss of 13.1 and 2.9 t ha(-1) year(-1) in normal and drought years, respectively.
Based on the soil loss in a normal year, the interventions recommended for 85.3% of area of the watershed includes agronomic measures such as contour cultivation, graded bunds, strip cropping, mixed cropping, crop rotations, mulching, summer plowing, vegetative bunds, agri-horticultural system, and management practices such as broad bed furrow, raised sunken beds, and harvesting available water using farm ponds and percolation tanks. This methodology can be adopted for estimating the soil loss from similar ungauged watersheds with deficient data and for planning suitable soil and water conservation interventions for the sustainable management of the watersheds.
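The RUSLE calculation underlying such studies multiplies per-cell factor grids, A = R · K · LS · C · P. A minimal sketch on a tiny raster is shown below; the grids and the K, LS, and P values are illustrative assumptions, not the Seethagondi data (only the erosivity and normal-year C factor echo figures quoted above).

```python
import numpy as np

# Hypothetical per-cell factor grids for a small 3x3 raster.
R = np.full((3, 3), 6789.0)        # rainfall erosivity, MJ mm ha^-1 h^-1 yr^-1
K = np.full((3, 3), 0.01)          # soil erodibility (illustrative value)
LS = np.array([[0.5, 1.0, 1.5],
               [0.8, 1.2, 2.0],
               [0.4, 0.9, 1.1]])   # slope length/steepness factor
C = np.full((3, 3), 0.42)          # cover-management factor (normal-year crop land)
P = np.full((3, 3), 1.0)           # support-practice factor (none assumed)

# RUSLE: per-cell annual soil loss, t ha^-1 yr^-1.
A = R * K * LS * C * P
mean_loss = A.mean()
```

In a GIS workflow each factor would be a raster layer; the cell-wise product is the same operation.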
Phung, Dung; Huang, Cunrui; Rutherford, Shannon; Dwirahmadi, Febi; Chu, Cordia; Wang, Xiaoming; Nguyen, Minh; Nguyen, Nga Huy; Do, Cuong Manh; Nguyen, Trung Hieu; Dinh, Tuan Anh Diep
2015-05-01
The present study is an evaluation of temporal/spatial variations of surface water quality using multivariate statistical techniques, comprising cluster analysis (CA), principal component analysis (PCA), factor analysis (FA) and discriminant analysis (DA). Eleven water quality parameters were monitored at 38 different sites in Can Tho City, a Mekong Delta area of Vietnam from 2008 to 2012. Hierarchical cluster analysis grouped the 38 sampling sites into three clusters, representing mixed urban-rural areas, agricultural areas and industrial zone. FA/PCA resulted in three latent factors for the entire research location, three for cluster 1, four for cluster 2, and four for cluster 3 explaining 60, 60.2, 80.9, and 70% of the total variance in the respective water quality. The varifactors from FA indicated that the parameters responsible for water quality variations are related to erosion from disturbed land or inflow of effluent from sewage plants and industry, discharges from wastewater treatment plants and domestic wastewater, agricultural activities and industrial effluents, and contamination by sewage waste with faecal coliform bacteria through sewer and septic systems. Discriminant analysis (DA) revealed that nephelometric turbidity units (NTU), chemical oxygen demand (COD) and NH₃ are the discriminating parameters in space, affording 67% correct assignation in spatial analysis; pH and NO₂ are the discriminating parameters according to season, assigning approximately 60% of cases correctly. The findings suggest a possible revised sampling strategy that can reduce the number of sampling sites and the indicator parameters responsible for large variations in water quality. This study demonstrates the usefulness of multivariate statistical techniques for evaluation of temporal/spatial variations in water quality assessment and management.
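The CA/PCA portion of such an analysis can be sketched as follows. This is a hedged illustration on synthetic data, not the study's dataset: 38 "sites" with 11 "parameters" are grouped by Ward-linkage hierarchical clustering, and the variance explained by three principal components is computed via SVD of the centered matrix.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Synthetic standardized water-quality matrix: 38 sites x 11 parameters,
# drawn from three site groups (stand-ins for urban-rural, agricultural,
# and industrial areas).
centers = rng.normal(0, 2, (3, 11))
sites = np.vstack([rng.normal(centers[i % 3], 0.3, (1, 11))
                   for i in range(38)])

# Hierarchical cluster analysis (Ward linkage), cut into 3 clusters.
Z = linkage(sites, method="ward")
groups = fcluster(Z, t=3, criterion="maxclust")

# PCA via SVD on the centered matrix; variance explained by 3 components.
Xc = sites - sites.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (s**2 / (s**2).sum())[:3].sum()
```

Loadings in `Vt` would then be inspected (as varifactors are in FA) to name the parameters driving each latent factor.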
Galvan, T L; Burkness, E C; Hutchison, W D
2007-06-01
To develop a practical integrated pest management (IPM) system for the multicolored Asian lady beetle, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), in wine grapes, we assessed the spatial distribution of H. axyridis and developed eight sampling plans to estimate adult density or infestation level in grape clusters. We used 49 data sets collected from commercial vineyards in 2004 and 2005, in Minnesota and Wisconsin. Enumerative plans were developed using two precision levels (0.10 and 0.25); the six binomial plans reflected six unique action thresholds (3, 7, 12, 18, 22, and 31% of cluster samples infested with at least one H. axyridis). The spatial distribution of H. axyridis in wine grapes was aggregated, independent of cultivar and year, but it was more randomly distributed as mean density declined. The average sample number (ASN) for each sampling plan was determined using resampling software. For research purposes, an enumerative plan with a precision level of 0.10 (SE/X) resulted in a mean ASN of 546 clusters. For IPM applications, the enumerative plan with a precision level of 0.25 resulted in a mean ASN of 180 clusters. In contrast, the binomial plans resulted in much lower ASNs and provided high probabilities of arriving at correct "treat or no-treat" decisions, making these plans more efficient for IPM applications. For a tally threshold of one adult per cluster, the operating characteristic curves for the six action thresholds provided binomial sequential sampling plans with mean ASNs of only 19-26 clusters, and probabilities of making correct decisions between 83 and 96%. The benefits of the binomial sampling plans are discussed within the context of improving IPM programs for wine grapes.
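For a fixed-sample binomial plan, the operating characteristic at a given action threshold follows directly from the binomial distribution. The sketch below is a simplified stand-in for the paper's sequential plans (the 25-cluster sample size is an assumption): it computes the probability of a "treat" decision when the true infestation level is well below or well above a 12% action threshold.

```python
from math import ceil, comb

def p_treat(n, action_threshold, p):
    """Probability a fixed-n binomial plan calls 'treat':
    P(X >= ceil(t * n)), with X ~ Binomial(n, p) counting infested clusters
    (tally threshold: at least one adult per cluster)."""
    k_min = ceil(action_threshold * n)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# OC-style check for a 12% action threshold with a 25-cluster sample:
low = p_treat(25, 0.12, 0.03)   # true infestation well below threshold
high = p_treat(25, 0.12, 0.31)  # true infestation well above threshold
```

A full OC curve sweeps `p` from 0 to 1; a steep rise around the action threshold corresponds to the high probability of correct "treat or no-treat" decisions reported above.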
Growth, Yield and Fruit Quality of Grapevines under Organic and Biodynamic Management
Döring, Johanna; Frisch, Matthias; Tittmann, Susanne; Stoll, Manfred; Kauer, Randolf
2015-01-01
The main objective of this study was to determine growth, yield and fruit quality of grapevines under organic and biodynamic management in relation to integrated viticultural practices. Furthermore, the mechanisms for the observed changes in growth, yield and fruit quality were investigated by determining nutrient status, physiological performance of the plants and disease incidence on bunches in three consecutive growing seasons. A field trial (Vitis vinifera L. cv. Riesling) was set up at Hochschule Geisenheim University, Germany. The integrated treatment was managed according to the code of good practice. Organic and biodynamic plots were managed according to Regulation (EC) No 834/2007 and Regulation (EC) No 889/2008 and according to ECOVIN- and Demeter-Standards, respectively. The growth and yield of the grapevines differed strongly among the different management systems, whereas fruit quality was not affected by the management system. The organic and the biodynamic treatments showed significantly lower growth and yield in comparison to the integrated treatment. The physiological performance was significantly lower in the organic and the biodynamic systems, which may account for differences in growth and cluster weight and might therefore induce lower yields of the respective treatments. Soil management and fertilization strategy could be responsible factors for these changes. Yields of the organic and the biodynamic treatments partially decreased due to higher disease incidence of downy mildew. The organic and the biodynamic plant protection strategies that exclude the use of synthetic fungicides are likely to induce higher disease incidence and might partially account for differences in the nutrient status of vines under organic and biodynamic management. Use of the biodynamic preparations had little influence on vine growth and yield. 
By investigating the parameters that induce changes, especially in growth and yield, of grapevines under organic and biodynamic management, the study can potentially provide guidance for defining more effective farming systems. PMID:26447762
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chun, K.C.; Chiu, S.Y.; Ditmars, J.D.
1994-05-01
The MIDAS (Munition Items Disposition Action System) database system is an electronic data management system capable of storage and retrieval of information on the detailed structures and material compositions of munitions items designated for demilitarization. The types of such munitions range from bulk propellants and small arms to projectiles and cluster bombs. The database system is also capable of processing data on the quantities of inert, PEP (propellant, explosives and pyrotechnics) and packaging materials associated with munitions, components, or parts, and the quantities of chemical compounds associated with parts made of PEP materials. Development of the MIDAS database system has been undertaken by the US Army to support disposition of unwanted ammunition stockpiles. The inventory of such stockpiles currently includes several thousand items, which total tens of thousands of tons, and is still growing. Providing systematic procedures for disposing of all unwanted conventional munitions is the mission of the MIDAS Demilitarization Program. To carry out this mission, all munitions listed in the Single Manager for Conventional Ammunition inventory must be characterized, and alternatives for resource recovery and recycling and/or disposal of munitions in the demilitarization inventory must be identified.
Double Cluster Heads Model for Secure and Accurate Data Fusion in Wireless Sensor Networks
Fu, Jun-Song; Liu, Yun
2015-01-01
Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Different from traditional clustering models in WSNs, two cluster heads are selected after clustering for each cluster based on the reputation and trust system and they perform data fusion independently of each other. Then, the results are sent to the base station where the dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds the threshold preset by the users, the cluster heads will be added to blacklist, and the cluster heads must be reelected by the sensor nodes in a cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which can help us to identify and delete the compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performed very well in data fusion security and accuracy. PMID:25608211
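The base-station dissimilarity check in the DCHM can be sketched as below. The metric is an assumption (a normalized mean absolute difference between the two heads' fusion results), as is the 0.05 threshold; the paper leaves both as user-set choices.

```python
import numpy as np

def dissimilarity(f1, f2):
    """Dissimilarity coefficient between the two cluster heads' fusion
    results: normalized mean absolute difference (one plausible choice)."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return np.abs(f1 - f2).mean() / (np.abs(f1).mean() + np.abs(f2).mean() + 1e-12)

def check_cluster(f1, f2, threshold=0.05):
    """Accept matching fusion results; otherwise blacklist both heads,
    triggering re-election and feedback to the reputation/trust system."""
    return "accept" if dissimilarity(f1, f2) <= threshold else "blacklist"

# Two heads agreeing (honest) versus one reporting forged values.
honest = check_cluster([20.1, 20.3, 19.9], [20.0, 20.2, 20.0])
forged = check_cluster([20.1, 20.3, 19.9], [35.0, 34.0, 36.0])
```

The "blacklist" outcome models the re-election step: the base station cannot tell which head is compromised, so both are replaced and the trust system updated.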
Scaling of cluster growth for coagulating active particles
NASA Astrophysics Data System (ADS)
Cremer, Peet; Löwen, Hartmut
2014-02-01
Cluster growth in a coagulating system of active particles (such as microswimmers in a solvent) is studied by theory and simulation. In contrast to passive systems, the net velocity of a cluster can have various scalings dependent on the propulsion mechanism and alignment of individual particles. Additionally, the persistence length of the cluster trajectory typically increases with size. As a consequence, a growing cluster collects neighboring particles in a very efficient way and thus amplifies its growth further. This results in unusual large growth exponents for the scaling of the cluster size with time and, for certain conditions, even leads to "explosive" cluster growth where the cluster becomes macroscopic in a finite amount of time.
Exploiting volatile opportunistic computing resources with Lobster
NASA Astrophysics Data System (ADS)
Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2015-12-01
Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.
78 FR 54178 - Virginia: Final Authorization of State Hazardous Waste Management Program Revisions
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-03
...] Perform inspections, and require monitoring, tests, analyses or reports; [cir] Enforce RCRA requirements... (revision Federal Register Analogous Virginia checklists \\1\\) authority RCRA Cluster XVII Hazardous Waste... Cluster XVIII Regulation of Oil-Bearing 73 FR 57, January 9 VAC Sec. Sec. 20- Hazardous Secondary...
Identifying Peer Institutions Using Cluster Analysis
ERIC Educational Resources Information Center
Boronico, Jess; Choksi, Shail S.
2012-01-01
The New York Institute of Technology's (NYIT) School of Management (SOM) wishes to develop a list of peer institutions for the purpose of benchmarking and monitoring/improving performance against other business schools. The procedure utilizes relevant criteria for the purpose of establishing this peer group by way of a cluster analysis. The…
Accounting Cluster Demonstration Program at Aloha High School. Final Report.
ERIC Educational Resources Information Center
Beaverton School District 48, OR.
A model high school accounting cluster program was planned, developed, implemented, and evaluated in the Beaverton, Oregon, school district. The curriculum was developed with the help of representatives from the accounting occupations in the Portland metropolitan area. Through management interviews, identification of on-the job requirements, and…
Li, Yan; Shi, Zhou; Wu, Hao-Xiang; Li, Feng; Li, Hong-Yi
2013-10-01
The loss of cultivated land has increasingly become an issue of regional and national concern in China. Definition of management zones is an important measure to protect the limited cultivated land resource. In this study, combined spatial data were applied to define management zones in Fuyang city, China. The yield of cultivated land was first calculated and evaluated and its spatial distribution pattern mapped; the limiting factors affecting the yield were then explored, and maps of their spatial variability were produced using geostatistical analysis. The data were jointly analyzed for management zone definition using a combination of principal component analysis and a fuzzy clustering method, and two cluster validity functions were used to determine the optimal number of clusters. Finally, one-way variance analysis was performed on 3,620 soil sampling points to assess how well the defined management zones reflected the soil properties and productivity level. It was shown that there existed great potential for increasing grain production, and that the amount of cultivated land played a key role in maintaining security in grain production. Organic matter, total nitrogen, available phosphorus, elevation, thickness of the plow layer, and probability of irrigation guarantee were the main limiting factors affecting the yield. The optimal number of management zones was three, and there were statistically significant differences between the crop yield and field parameters in each defined management zone. Management zone I presented the highest potential crop yield, fertility level, and best agricultural production conditions, whereas management zone III presented the lowest.
The study showed that the procedures used may be effective in automatically defining management zones; by developing different management zones, different strategies of cultivated land management and practice in each zone can be determined, which is of great importance for enhancing cultivated land conservation, stabilizing agricultural production, promoting sustainable use of cultivated land, and guaranteeing food security.
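The fuzzy clustering step used for zone definition can be sketched with a basic fuzzy c-means on synthetic field attributes. The implementation, data, and parameters below are illustrative assumptions, not the study's procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

def fuzzy_cmeans(X, c, m=2.0, iters=100):
    """Basic fuzzy c-means: returns membership matrix U (n x c) and centers."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)            # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                # standard FCM update
        U = U / U.sum(axis=1, keepdims=True)
    return U, centers

# Synthetic field attributes (e.g. organic matter, total N) for 90 sampling
# points drawn from three hypothetical productivity zones.
X = np.vstack([rng.normal(mu, 0.2, (30, 2))
               for mu in ([0, 0], [2, 0], [1, 2])])
U, centers = fuzzy_cmeans(X, c=3)
zones = U.argmax(axis=1)
```

In practice the inputs would be principal component scores of the combined spatial data, and cluster validity functions would be evaluated over a range of `c` to pick the optimal zone count.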
2011-01-01
Background Chronic diseases are a leading contributor to work disability and job loss in Europe. Recent EU policies aim to improve job retention among chronically ill employees. Disability and occupational health researchers argue that this requires a coordinated and pro-active approach at the workplace by occupational health professionals, line managers (LMs) and human resource managers (HRM). Little is known about the perspectives of LMs and HRM on what is needed to facilitate job retention among chronically ill employees. The aim of this qualitative study was to explore and compare the perspectives of Dutch LMs and HRM on this issue. Methods Concept mapping methodology was used to elicit and map statements (ideas) from 10 LMs and 17 HRM about what is needed to ensure continued employment for chronically ill employees. Study participants were recruited through a higher education and an occupational health services organization. Results Participants generated 35 statements. Each group (LMs and HRM) sorted these statements into six thematic clusters. LMs and HRM identified four similar clusters: LMs and HRM must be knowledgeable about the impact of chronic disease on the employee; employees must accept responsibility for work retention; work adaptations must be implemented; and clear company policy. Thematic clusters identified only by LMs were: good manager/employee cooperation and knowledge transfer within the company. Unique clusters identified by HRM were: company culture and organizational support. Conclusions There were both similarities and differences between the views of LMs and HRM on what may facilitate job retention for chronically ill employees. LMs perceived manager/employee cooperation as the most important mechanism for enabling continued employment for these employees. HRM perceived organizational policy and culture as the most important mechanism.
The findings provide information about topics that occupational health researchers and planners should address in developing job retention programs for chronically ill workers. PMID:21586139
Abeyewickreme, W; Wickremasinghe, A R; Karunatilake, K; Sommerfeld, J; Kroeger, Axel
2012-12-01
Waste management through community mobilization to reduce breeding places at household level could be an effective and sustainable dengue vector control strategy in areas where vector breeding takes place in small discarded water containers. The objective of this study was to assess the validity of this assumption. An intervention study was conducted from February 2009 to February 2010 in the populous Gampaha District of Sri Lanka. Eight neighborhoods (clusters) with roughly 200 houses each were selected randomly from high and low dengue endemic areas; 4 of them were allocated to the intervention arm (2 in the high and 2 in the low endemicity areas) and in the same way 4 clusters to the control arm. A baseline household survey was conducted and entomological and sociological surveys were carried out simultaneously at baseline, at 3 months, at 9 months and at 15 months after the start of the intervention. The intervention programme in the treatment clusters consisted of building partnerships of local stakeholders, waste management at household level, the promotion of composting biodegradable household waste, raising awareness on the importance of solid waste management in dengue control and improving garbage collection with the assistance of local government authorities. The intervention and control clusters were very similar and there were no significant differences in pupal and larval indices of Aedes mosquitoes. The establishment of partnerships among local authorities was well accepted and sustainable; the involvement of communities and households was successful. Waste management with the elimination of the most productive water container types (bowls, tins, bottles) led to a significant reduction of pupal indices as a proxy for adult vector densities. 
The coordination of local authorities along with increased household responsibility for targeted vector interventions (in our case solid waste management due to the type of preferred vector breeding places) is vital for effective and sustained dengue control.
Abeyewickreme, W; Wickremasinghe, A R; Karunatilake, K; Sommerfeld, Johannes; Kroeger, Axel
2012-01-01
Introduction Waste management through community mobilization to reduce breeding places at household level could be an effective and sustainable dengue vector control strategy in areas where vector breeding takes place in small discarded water containers. The objective of this study was to assess the validity of this assumption. Methods An intervention study was conducted from February 2009 to February 2010 in the populous Gampaha District of Sri Lanka. Eight neighborhoods (clusters) with roughly 200 houses each were selected randomly from high and low dengue endemic areas; 4 of them were allocated to the intervention arm (2 in the high and 2 in the low endemicity areas) and in the same way 4 clusters to the control arm. A baseline household survey was conducted and entomological and sociological surveys were carried out simultaneously at baseline, at 3 months, at 9 months and at 15 months after the start of the intervention. The intervention programme in the treatment clusters consisted of building partnerships of local stakeholders, waste management at household level, the promotion of composting biodegradable household waste, raising awareness on the importance of solid waste management in dengue control and improving garbage collection with the assistance of local government authorities. Results The intervention and control clusters were very similar and there were no significant differences in pupal and larval indices of Aedes mosquitoes. The establishment of partnerships among local authorities was well accepted and sustainable; the involvement of communities and households was successful. Waste management with the elimination of the most productive water container types (bowls, tins, bottles) led to a significant reduction of pupal indices as a proxy for adult vector densities. 
Conclusion The coordination of local authorities along with increased household responsibility for targeted vector interventions (in our case solid waste management due to the type of preferred vector breeding places) is vital for effective and sustained dengue control. PMID:23318240
Haafkens, Joke A; Kopnina, Helen; Meerman, Martha G M; van Dijk, Frank J H
2011-05-17
Chronic diseases are a leading contributor to work disability and job loss in Europe. Recent EU policies aim to improve job retention among chronically ill employees. Disability and occupational health researchers argue that this requires a coordinated and pro-active approach at the workplace by occupational health professionals, line managers (LMs) and human resource managers (HRM). Little is known about the perspectives of LMs and HRM on what is needed to facilitate job retention among chronically ill employees. The aim of this qualitative study was to explore and compare the perspectives of Dutch LMs and HRM on this issue. Concept mapping methodology was used to elicit and map statements (ideas) from 10 LMs and 17 HRM about what is needed to ensure continued employment for chronically ill employees. Study participants were recruited through a higher education and an occupational health services organization. Participants generated 35 statements. Each group (LMs and HRM) sorted these statements into six thematic clusters. LMs and HRM identified four similar clusters: LMs and HRM must be knowledgeable about the impact of chronic disease on the employee; employees must accept responsibility for work retention; work adaptations must be implemented; and clear company policy. Thematic clusters identified only by LMs were: good manager/employee cooperation and knowledge transfer within the company. Unique clusters identified by HRM were: company culture and organizational support. There were both similarities and differences between the views of LMs and HRM on what may facilitate job retention for chronically ill employees. LMs perceived manager/employee cooperation as the most important mechanism for enabling continued employment for these employees. HRM perceived organizational policy and culture as the most important mechanism.
The findings provide information about topics that occupational health researchers and planners should address in developing job retention programs for chronically ill workers.
Images of Leadership and their Effect Upon School Principals' Performance
NASA Astrophysics Data System (ADS)
Gaziel, Haim
2003-09-01
The purpose of the present study is to identify how school principals perceive their world and how their perceptions influence their effectiveness as managers and leaders. The principals' views of their world were categorised into four different metaphorical ways of describing the workings of organisations: (1) the structural model (organisations as machines); (2) the human-resource model (organisations as organisms); (3) the political model (organisations as political systems); (4) the symbolic model (organisations as cultural patterns and clusters of myths and symbols). The results reveal that the best predictors of school principals' effectiveness as managers, according to their own assessments and teachers' reports, are the structural and human resource models, while the best predictors of effective leadership are the political and human-resource models.
NASA Technical Reports Server (NTRS)
1983-01-01
The overall configuration and modules of the initial and evolved space station are described as well as tended industrial and polar platforms. The mass properties that are the basis for costing are summarized. User friendly attributes (interfaces, resources, and facilities) are identified for commercial; science and applications; industrial park; international participation; national security; and the external tank option. Configuration alternates studied to determine a baseline are examined. Commonality for clustered 3-man and 9-man stations is considered as well as the use of tethered platforms. Subsystem requirements for electrical power, data management, communication and tracking, environment control/life support, and guidance, navigation and control are identified.
NASA Astrophysics Data System (ADS)
1983-04-01
The overall configuration and modules of the initial and evolved space station are described as well as tended industrial and polar platforms. The mass properties that are the basis for costing are summarized. User friendly attributes (interfaces, resources, and facilities) are identified for commercial; science and applications; industrial park; international participation; national security; and the external tank option. Configuration alternates studied to determine a baseline are examined. Commonality for clustered 3-man and 9-man stations is considered as well as the use of tethered platforms. Subsystem requirements for electrical power, data management, communication and tracking, environment control/life support, and guidance, navigation and control are identified.
Conceptual model and map of financial exploitation of older adults.
Conrad, Kendon J; Iris, Madelyn; Ridings, John W; Fairman, Kimberly P; Rosen, Abby; Wilber, Kathleen H
2011-10-01
This article describes the processes and outcomes of three-dimensional concept mapping to conceptualize financial exploitation of older adults. Statements were generated from a literature review and by local and national panels consisting of 16 experts in the field of financial exploitation. These statements were sorted and rated using Concept Systems software, which grouped the statements into clusters and depicted them as a map. Statements were grouped into six clusters, and ranked by the experts as follows in descending severity: (a) theft and scams, (b) financial victimization, (c) financial entitlement, (d) coercion, (e) signs of possible financial exploitation, and (f) money management difficulties. The hierarchical model can be used to identify elder financial exploitation and differentiate it from related but distinct areas of victimization. The severity hierarchy may be used to develop measures that will enable more precise screening for triage of clients into appropriate interventions.
Optimizing CMS build infrastructure via Apache Mesos
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; Eulisse, Giulio; Mendez, David; Muzaffar, Shahzad
2015-12-01
The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.
Pezeshki, Z; Tafazzoli-Shadpour, M; Mansourian, A; Eshrati, B; Omidi, E; Nejadqoli, I
2012-10-01
Cholera is spread by drinking water or eating food that is contaminated by bacteria, and is related to climate changes. Several epidemics have occurred in Iran, the most recent of which was in 2005 with 1133 cases and 12 deaths. This study investigated the incidence of cholera over a 10-year period in Chabahar district, a region with one of the highest incidence rates of cholera in Iran. Descriptive retrospective study on data of patients with El Tor and NAG cholera reported to the Iranian Centre of Disease Control between 1997 and 2006. Data on the prevalence of cholera were gathered through a surveillance system, and a spatial database was developed using geographic information systems (GIS) to describe the relation of spatial and climate variables to cholera incidence. The fuzzy clustering (fuzzy C) method and statistical analysis based on logistic regression were used to develop a model of cholera dissemination. The variables were demographic characteristics, specifications of cholera infection, climate conditions and some geographical parameters. The incidence of cholera was found to be significantly related to higher temperature and humidity, lower precipitation, shorter distance to the eastern border of Iran and local health centres, and longer distance to the district health centre. The fuzzy C means algorithm showed that clusters were geographically distributed in distinct regions. In order to plan, manage and monitor any public health programme, GIS provide ideal platforms for the convergence of disease-specific information, analysis and computation of new data for statistical analysis. Copyright © 2012 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
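The fuzzy C-means step described in this abstract can be sketched with a generic NumPy implementation; the data, function names, and parameter choices below are illustrative stand-ins, not the study's epidemiological dataset or code.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means: returns cluster centres and the membership matrix."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # each row is a membership distribution
    for _ in range(n_iter):
        W = U ** m                                # fuzzified memberships
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                     # avoid division by zero at a centre
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # standard FCM membership update
    return centres, U

# Two well-separated synthetic groups of 2-D points (hypothetical data)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centres, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)                         # hard assignment from soft memberships
```

Unlike hard k-means, the membership matrix `U` retains how strongly each point belongs to every cluster, which is what makes fuzzy clustering useful for mapping gradual spatial transitions of disease risk.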
Said, Halima M; Krishnamani, Keshav; Omar, Shaheed V; Dreyer, Andries W; Sansom, Bianca; Fallows, Dorothy; Ismail, Nazir A
2016-10-01
The manual IS6110-based restriction fragment length polymorphism (RFLP) typing method is highly discriminatory; however, it is laborious and technically demanding, and data exchange remains a challenge. In an effort to make IS6110-based RFLP faster, DuPont Molecular Diagnostics recently introduced the IS6110-PvuII kit for semiautomated typing of Mycobacterium tuberculosis using the RiboPrinter microbial characterization system. This study aimed to evaluate the semiautomated RFLP typing against the standard manual method. A total of 112 isolates collected between 2013 and 2014 were included. All isolates were genotyped using manual and semiautomated RFLP typing methods. Clustering rates and discriminatory indices were compared between methods. The overall performance of semiautomated RFLP compared to manual typing was excellent, with high discriminatory index (0.990 versus 0.995, respectively) and similar numbers of unique profiles (72 versus 74, respectively), numbers of clustered isolates (33 versus 31, respectively), cluster sizes (2 to 6 and 2 to 5 isolates, respectively), and clustering rates (21.9% and 17.1%, respectively). The semiautomated RFLP system is technically simple and significantly faster than the manual RFLP method (8 h versus 5 days). The analysis is fully automated and generates easily manageable databases of standardized fingerprints that can be easily exchanged between laboratories. Based on its high-throughput processing with minimal human effort, the semiautomated RFLP can be a very useful tool as a first-line method for routine typing of M. tuberculosis isolates, especially where Beijing strains are highly prevalent, followed by manual RFLP typing if resolution is not achieved, thereby saving time and labor. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Reimer, Joachim; Vogel, Frédéric; Steele-MacInnis, Matthew
2016-05-18
Aqueous solutions of salts at elevated pressures and temperatures play a key role in geochemical processes and in applications of supercritical water to waste and biomass treatment, for which salt management is crucial for performance. A major question in predicting salt behavior in such processes is how different salts affect the phase equilibria. Herein, molecular dynamics (MD) simulations are used to investigate molecular-scale structures of solutions of sodium and/or potassium sulfate, which show contrasting macroscopic behavior. Solutions of Na-SO4 exhibit a tendency towards forming large ionic clusters with increasing temperature, whereas solutions of K-SO4 show significantly less clustering under equivalent conditions. In mixed systems (NaxK2-xSO4), cluster formation is dramatically reduced with decreasing Na/(K+Na) ratio; this indicates a structure-breaking role of K. The MD results allow these phenomena to be related to the characteristics of electrostatic interactions between K+ and SO4^2-, compared with the analogous Na+-SO4^2- interactions. The results suggest a mechanism underlying the experimentally observed increasing solubility in ternary Na-K-SO4 solutions. Specifically, the propensity of sodium to associate with sulfate, versus that of potassium to break up the sodium-sulfate clusters, may explain the contrasting behavior of these salts. Thus, mutual salting-in in ternary hydrothermal Na-K-SO4 solutions reflects the opposing, but complementary, natures of Na-SO4 versus K-SO4 interactions. The results also provide clues towards the reported liquid immiscibility in this ternary system. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Bonilla, I.; Martínez De Toda, F.; Martínez-Casasnovas, J. A.
2014-10-01
Vineyard variability within fields is well known to grape growers, producing different plant responses and fruit characteristics. Many technologies have been developed in recent decades to assess this spatial variability, including remote sensing and soil sensors. In this paper we study the possibility of creating a stable classification system that provides useful information for the grower, especially in terms of grape batch quality sorting. The work was carried out over 4 years in a rain-fed Tempranillo vineyard located in Rioja (Spain). NDVI was extracted from airborne imagery, and soil electrical conductivity (EC) data were acquired with an EM38 sensor. Fifty-four vines were sampled at véraison for vegetative parameters and before harvest for yield and grape analysis. An Isocluster unsupervised classification into two classes was performed in 5 different ways, combining NDVI maps individually, collectively and combined with EC. The target vines were assigned to different zones depending on the clustering combination. Analysis of variance was performed to verify the ability of the combinations to provide the most accurate information. All combinations showed similar behaviour concerning vegetative parameters. Yield parameters were classified better by the EC-based clustering, whilst grape maturity parameters seemed to be classified more accurately by combining all NDVIs and EC. Grape quality parameters (anthocyanins and phenolics) presented similar results for all combinations except for the NDVI map of the individual year, where the results were poorer. These results reveal that clustering on stable parameters (EC and/or NDVI from all years together) yields better information for a vineyard zonal management strategy.
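As a rough illustration of zoning on stable layers, the following sketch combines standardized EC and multi-year NDVI values and splits pixels into two zones at the median of the combined score. This is a simplified stand-in for the 2-class Isocluster runs described in the abstract, and all data and names are hypothetical.

```python
import numpy as np

def two_zone_map(*layers):
    """Standardize each input layer, average them, and split the pixels into
    two zones at the median of the combined score (a stand-in for a 2-class
    unsupervised classification such as Isocluster)."""
    z = [(L - L.mean()) / L.std() for L in layers]
    score = np.mean(z, axis=0)
    return (score > np.median(score)).astype(int)

rng = np.random.default_rng(0)
# Hypothetical field of 54 sampled vines: a stable soil gradient (EC)
# plus yearly NDVI that tracks the gradient with noise.
ec = np.linspace(-1, 1, 54)
ndvi_years = [ec + rng.normal(0, 0.3, 54) for _ in range(4)]

zones_ec = two_zone_map(ec)                    # EC-only zoning
zones_all = two_zone_map(ec, *ndvi_years)      # EC plus all NDVI years
agreement = (zones_ec == zones_all).mean()     # fraction of vines zoned alike
```

A high `agreement` between the EC-only map and the combined map is one simple way to check that a zoning scheme is stable, which is the practical property the study is after.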
One hundred years of work design research: Looking back and looking forward.
Parker, Sharon K; Morgeson, Frederick P; Johns, Gary
2017-03-01
In this article we take a big-picture perspective on work design research. In the first section of the paper we identify influential work design articles and use scientific mapping to identify distinct clusters of research. Pulling this material together, we identify five key work design perspectives that map onto distinct historical developments: (a) sociotechnical systems and autonomous work groups, (b) job characteristics model, (c) job demands-control model, (d) job demands-resources model, and (e) role theory. The grounding of these perspectives in the past is understandable, but we suggest that some of the distinction between clusters is convenient rather than substantive. Thus we also identify contemporary integrative perspectives on work design that build connections across the clusters and we argue that there is scope for further integration. In the second section of the paper, we review the role of Journal of Applied Psychology (JAP) in shaping work design research. We conclude that JAP has played a vital role in the advancement of this topic over the last 100 years. Nevertheless, we suspect that to continue to play a leading role in advancing the science and practice of work design, the journal might need to publish research that is broader, more contextualized, and team-oriented. In the third section, we address the impact of work design research on: applied psychology and management, disciplines beyond our own, management thinking, work practice, and national policy agendas. Finally, we draw together observations from our analysis and identify key future directions for the field. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Lattice animals in diffusion limited binary colloidal system
NASA Astrophysics Data System (ADS)
Shireen, Zakiya; Babu, Sujin B.
2017-08-01
In soft matter systems, controlling the structure of amorphous materials has been a key challenge. In this work, we have modeled irreversible diffusion limited cluster aggregation of binary colloids, which serves as a model for chemical gels. Irreversible aggregation of binary colloidal particles leads to the formation of a percolating cluster of one species or of both species, the latter also called bigels. Before the formation of the percolating cluster, the system forms a self-similar structure defined by a fractal dimension. For a one component system at very small volume fractions, the clusters are far apart from each other and the system has a fractal dimension of 1.8. Contrary to this, we show that for the binary system we observe the presence of lattice animals, which have a fractal dimension of 2 irrespective of the volume fraction. When the clusters start inter-penetrating, we observe a fractal dimension of 2.5, the same as in the one component system. We are also able to predict the formation of bigels using a simple inequality relation. We further show that the growth of clusters follows the kinetic equations introduced by Smoluchowski for diffusion limited cluster aggregation, and that the chemical distance of a cluster in the flocculation regime follows the same scaling law as predicted for lattice animals. Finally, we show that irreversible binary aggregation falls within the universality class of percolation theory.
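The Smoluchowski kinetics referenced in this abstract can be illustrated with the constant-kernel simplification (the Brownian DLCA kernel differs, but the qualitative behavior is the same): the total cluster concentration N obeys dN/dt = -K N^2 / 2, so the mean cluster size N0/N grows linearly in time. The sketch below integrates this ODE with a forward Euler step and checks it against the exact solution; all parameter values are illustrative.

```python
# Constant-kernel Smoluchowski coagulation: dN/dt = -K * N**2 / 2,
# with exact solution N(t) = N0 / (1 + K * N0 * t / 2), so the mean
# cluster size N0/N = 1 + K * N0 * t / 2 grows linearly in time.
K, N0, dt, steps = 1.0, 1.0, 1e-3, 10000

N = N0
for _ in range(steps):
    N += dt * (-K * N * N / 2.0)   # forward Euler integration step

t = dt * steps
mean_size = N0 / N                  # mean cluster size from the simulation
analytic = 1.0 + K * N0 * t / 2.0   # exact mean size from the ODE solution
```

The linear growth of the mean size is the signature of constant-kernel coagulation; for DLCA the kernel is size-dependent, which modifies the growth exponent but not the overall Smoluchowski framework.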
Hydration of a Large Anionic Charge Distribution - Naphthalene-Water Cluster Anions
NASA Astrophysics Data System (ADS)
Weber, J. Mathias; Adams, Christopher L.
2010-06-01
We report the infrared spectra of anionic clusters of naphthalene with up to three water molecules. Comparison of the experimental infrared spectra with spectra predicted by quantum chemistry calculations allows conclusions regarding the structures of the clusters under study. The first water molecule forms two hydrogen bonds with the π electron system of the naphthalene moiety. Subsequent water ligands interact with both the naphthalene and the other water ligands to form hydrogen-bonded networks, similar to other hydrated anion clusters. Naphthalene-water anion clusters illustrate how water interacts with negative charge delocalized over a large π electron system. The clusters are interesting model systems that are discussed in the context of wetting of graphene surfaces and polyaromatic hydrocarbons.
Dynamical evolution of globular-cluster systems in clusters of galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muzzio, J.C.
1987-04-01
The dynamical processes that affect globular-cluster systems in clusters of galaxies are analyzed. Two-body and impulsive approximations are utilized to study dynamical friction, drag force, tidal stripping, tidal radii, globular-cluster swapping, tidal accretion, and galactic cannibalism. The evolution of galaxies and the collision of galaxies are simulated numerically; the steps involved in the simulation are described. The simulated data are compared with observations. Consideration is given to the number of galaxies, halo extension, location of the galaxies, distribution of the missing mass, nonequilibrium initial conditions, mass dependence, massive central galaxies, globular-cluster distribution, and lost globular clusters. 116 references.
2012-01-01
Background Malaria case management is a key strategy for malaria control. Effective coverage of parasite-based malaria diagnosis (PMD) remains limited in malaria endemic countries. This study assessed the health system's capacity to absorb PMD at primary health care facilities in Uganda. Methods In a cross sectional survey, using multi-stage cluster sampling, lower level health facilities (LLHF) in 11 districts in Uganda were assessed for 1) tools, 2) skills, 3) staff and infrastructure, and 4) structures, systems and roles necessary for implementing PMD. Results Tools for PMD (microscopy and/or RDTs) were available at 30 (24%) of the 125 LLHF. All LLHF had patient registers and 15% had functional in-patient facilities. Stock-out periods of three months were reported for oral and parenteral quinine at 39% and 47% of LLHF respectively. Out of 131 health workers interviewed, 86 (66%) were nursing assistants; 56 (43%) had received on-job training on malaria case management and 47 (36%) had adequate knowledge in malaria case management. Overall, only 18% (131/730) of Ministry of Health approved staff positions were filled by qualified personnel and 12% were recruited or transferred within six months preceding the survey. Of 186 patients that received referrals from LLHF, 130 (70%) had received pre-referral antimalarial drugs, none received pre-referral rectal artesunate and 35% had been referred due to poor response to antimalarial drugs. Conclusion Primary health care facilities had inadequate human and infrastructural capacity to effectively implement universal parasite-based malaria diagnosis. The priority capacity building needs identified were: 1) recruitment and retention of qualified staff, 2) comprehensive training of health workers in fever management, 3) malaria diagnosis quality control systems and 4) strengthening of supply chain, stock management and referral systems. PMID:22920954
Schneider, Dominik; Engelhaupt, Martin; Allen, Kara; Kurniawan, Syahrul; Krashevska, Valentyna; Heinemann, Melanie; Nacke, Heiko; Wijayanti, Marini; Meryandini, Anja; Corre, Marife D.; Scheu, Stefan; Daniel, Rolf
2015-01-01
Prokaryotes are the most abundant and diverse group of microorganisms in soil and mediate virtually all biogeochemical cycles in terrestrial ecosystems. Thereby, they influence aboveground plant productivity and diversity. In this study, the impact of rainforest transformation to intensively managed cash crop systems on soil prokaryotic communities was investigated. The studied managed land use systems comprised rubber agroforests (jungle rubber), rubber plantations and oil palm plantations within two Indonesian landscapes Bukit Duabelas and Harapan. Soil prokaryotic community composition and diversity were assessed by pyrotag sequencing of bacterial and archaeal 16S rRNA genes. The curated dataset contained 16,413 bacterial and 1679 archaeal operational taxonomic units at species level (97% genetic identity). Analysis revealed changes in indigenous taxon-specific patterns of soil prokaryotic communities accompanying lowland rainforest transformation to jungle rubber, and intensively managed rubber and oil palm plantations. Distinct clustering of the rainforest soil communities indicated that these are different from the communities in the studied managed land use systems. The predominant bacterial taxa in all investigated soils were Acidobacteria, Actinobacteria, Alphaproteobacteria, Betaproteobacteria, and Gammaproteobacteria. Overall, the bacterial community shifted from proteobacterial groups in rainforest soils to Acidobacteria in managed soils. The archaeal soil communities were mainly represented by Thaumarchaeota and Euryarchaeota. Members of the Terrestrial Group and South African Gold Mine Group 1 (Thaumarchaeota) dominated in the rainforest and members of Thermoplasmata in the managed land use systems. The alpha and beta diversity of the soil prokaryotic communities was higher in managed land use systems than in rainforest. 
In the case of bacteria, this was related to soil characteristics such as pH value, exchangeable Ca and Fe content, C to N ratio, and extractable P content. Archaeal community composition and diversity were correlated with pH value, exchangeable Fe content, water content, and total N. The distribution of bacterial and archaeal taxa involved in the biological N cycle indicated functional shifts of this cycle during the conversion of rainforest to plantations. PMID:26696965
MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank Mueller
2009-02-05
MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (forum to address scalable technology for runtime and operating systems) and HECRTF (high-end computing revitalization task force) activities by providing a modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, ease-of-use, and provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high availability without single points of failure and without single points of control.
Federated and Cloud Enabled Resources for Data Management and Utilization
NASA Astrophysics Data System (ADS)
Rankin, R.; Gordon, M.; Potter, R. G.; Satchwill, B.
2011-12-01
The emergence of cloud computing over the past three years has led to a paradigm shift in how data can be managed, processed and made accessible. Building on the federated data management system offered through the Canadian Space Science Data Portal (www.cssdp.ca), we demonstrate how heterogeneous and geographically distributed data sets and modeling tools have been integrated to form a virtual data center and computational modeling platform that has services for data processing and visualization embedded within it. We also discuss positive and negative experiences in utilizing Eucalyptus and OpenStack cloud applications, and job scheduling facilitated by Condor and Star Cluster. We summarize our findings by demonstrating use of these technologies in the Cloud Enabled Space Weather Data Assimilation and Modeling Platform CESWP (www.ceswp.ca), which is funded through Canarie's (canarie.ca) Network Enabled Platforms program in Canada.
Detection of Anomalies in Hydrometric Data Using Artificial Intelligence Techniques
NASA Astrophysics Data System (ADS)
Lauzon, N.; Lence, B. J.
2002-12-01
This work focuses on the detection of anomalies in hydrometric data sequences, such as 1) outliers, which are individual data having statistical properties that differ from those of the overall population; 2) shifts, which are sudden changes over time in the statistical properties of the historical records of data; and 3) trends, which are systematic changes over time in the statistical properties. For the purpose of the design and management of water resources systems, it is important to be aware of these anomalies in hydrometric data, for they can induce a bias in the estimation of water quantity and quality parameters. These anomalies may be viewed as specific patterns affecting the data, and therefore pattern recognition techniques can be used for identifying them. However, the number of possible patterns is very large for each type of anomaly and consequently large computing capacities are required to account for all possibilities using the standard statistical techniques, such as cluster analysis. Artificial intelligence techniques, such as the Kohonen neural network and fuzzy c-means, are clustering techniques commonly used for pattern recognition in several areas of engineering and have recently begun to be used for the analysis of natural systems. They require much less computing capacity than the standard statistical techniques, and therefore are well suited for the identification of outliers, shifts and trends in hydrometric data. This work constitutes a preliminary study, using synthetic data representing hydrometric data that can be found in Canada. The analysis of the results obtained shows that the Kohonen neural network and fuzzy c-means are reasonably successful in identifying anomalies. This work also addresses the problem of uncertainties inherent to the calibration procedures that fit the clusters to the possible patterns for both the Kohonen neural network and fuzzy c-means. 
Indeed, for the same database, different sets of clusters can be established with these calibration procedures. A simple method for analyzing uncertainties associated with the Kohonen neural network and fuzzy c-means is developed here. The method combines the results from several sets of clusters, either from the Kohonen neural network or fuzzy c-means, so as to provide an overall diagnosis as to the identification of outliers, shifts and trends. The results indicate an improvement in the performance for identifying anomalies when the method of combining cluster sets is used, compared with when only one cluster set is used.
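One way to combine several cluster sets into an overall diagnosis, in the spirit of the method described above (though sketched here with plain k-means rather than the Kohonen network or fuzzy c-means), is a majority vote across independently seeded clustering runs. Everything below, including the data and the vote threshold, is an illustrative sketch, not the authors' procedure.

```python
import numpy as np

def kmeans(X, k, seed, n_iter=50):
    """Plain Lloyd's k-means with random initial centres."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.linalg.norm(X[:, None] - centres[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):            # skip empty clusters
                centres[j] = X[labels == j].mean(axis=0)
    return centres, labels

def consensus_outliers(X, k=2, runs=7, q=95):
    """Flag a point as an outlier when a majority of independently seeded
    clusterings leave it unusually far from its nearest cluster centre."""
    votes = np.zeros(len(X), dtype=int)
    for seed in range(runs):
        centres, labels = kmeans(X, k, seed)
        dist = np.linalg.norm(X - centres[labels], axis=1)
        votes += dist > np.percentile(dist, q)  # one vote per suspicious run
    return votes > runs // 2                    # majority vote across runs

# Synthetic stand-in for hydrometric records: two regimes plus three outliers
rng = np.random.default_rng(0)
inliers = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
outliers = np.array([[20.0, 20.0], [-20.0, 15.0], [15.0, -20.0]])
X = np.vstack([inliers, outliers])
flags = consensus_outliers(X)
```

Combining runs this way makes the diagnosis less sensitive to any single calibration of the clustering, which is the uncertainty the paragraph above is concerned with.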
Analysis of ground-motion simulation big data
NASA Astrophysics Data System (ADS)
Maeda, T.; Fujiwara, H.
2016-12-01
We developed a parallel distributed processing system that applies big data analysis to large-scale ground motion simulation data. The system uses ground-motion index values and earthquake scenario parameters as input. We used the peak ground velocity value and velocity response spectra as the ground-motion index. The ground-motion index values are calculated from our simulation data. We used simulated long-period ground motion waveforms at about 80,000 meshes calculated by a three dimensional finite difference method based on 369 earthquake scenarios of a great earthquake in the Nankai Trough. These scenarios were constructed by considering the uncertainty of source model parameters such as source area, rupture starting point, asperity location, rupture velocity, fmax and slip function. We used these parameters as the earthquake scenario parameters. The system first carries out the clustering of the earthquake scenarios in each mesh by the k-means method. The number of clusters is determined in advance using hierarchical clustering by Ward's method. The scenario clustering results are converted to a 1-D feature vector. The dimension of the feature vector is the number of scenario combinations. If two scenarios belong to the same cluster the component of the feature vector is 1, and otherwise the component is 0. The feature vector shows the 'response' of a mesh to the assumed earthquake scenario group. Next, the system performs the clustering of the meshes by the k-means method using the feature vector of each mesh previously obtained. Here the number of clusters is arbitrarily given. The clustering of scenarios and meshes is performed by parallel distributed processing with Hadoop and Spark, respectively. In this study, we divided the meshes into 20 clusters. The meshes in each cluster are geometrically concentrated. Thus this system can extract regions, in which the meshes have a similar 'response', as clusters.
For each cluster, it is possible to determine the particular scenario parameters that characterize the cluster. In other words, by utilizing this system, we can objectively obtain the critical scenario parameters of the ground-motion simulation for each evaluation point. This research was supported by CREST, JST.
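The feature-vector construction described above can be sketched directly: for each mesh, a binary vector over all scenario pairs records whether the pair fell into the same cluster at that mesh. The toy labels below are hypothetical; in the study they would come from the per-mesh k-means step.

```python
import numpy as np
from itertools import combinations

def co_membership_vector(labels):
    """Binary vector over all scenario pairs: 1 if the pair falls in the
    same cluster for this mesh, else 0 (the mesh's 'response' vector)."""
    return np.array([int(labels[i] == labels[j])
                     for i, j in combinations(range(len(labels)), 2)])

# Hypothetical example: 3 meshes, 4 scenarios; each row holds one mesh's
# per-scenario cluster labels from a clustering of ground-motion indices.
mesh_scenario_labels = np.array([
    [0, 0, 1, 1],   # mesh A: scenarios {1,2} vs {3,4}
    [0, 0, 1, 1],   # mesh B: same response as mesh A
    [0, 1, 0, 1],   # mesh C: a different split
])
features = np.vstack([co_membership_vector(l) for l in mesh_scenario_labels])
```

Meshes with identical feature vectors respond alike across the scenario set; clustering the rows of `features` (k-means in the study) then groups meshes into geographic regions of similar response.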
An Analysis of Category Management of Service Contracts
2017-12-01
management teams a way to make informed, data-driven decisions. Data-driven decisions derived from clustering not only align with Category...savings. Furthermore, this methodology provides a data-driven visualization to inform sound business decisions on potential Category Management ...Category Management initiatives. The Maptitude software will allow future research to collect data and develop visualizations to inform Category
Scalable cluster administration - Chiba City I approach and lessons learned.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, J. P.; Evard, R.; Nurmi, D.
2002-07-01
Systems administrators of large clusters often need to perform the same administrative activity hundreds or thousands of times. Often such activities are time-consuming, especially the tasks of installing and maintaining software. By combining network services such as DHCP, TFTP, FTP, HTTP, and NFS with remote hardware control, cluster administrators can automate all administrative tasks. Scalable cluster administration addresses the following challenge: What systems design techniques can cluster builders use to automate cluster administration on very large clusters? We describe the approach used in the Mathematics and Computer Science Division of Argonne National Laboratory on Chiba City I, a 314-node Linux cluster; and we analyze the scalability, flexibility, and reliability benefits and limitations of that approach.
The properties of the disk system of globular clusters
NASA Technical Reports Server (NTRS)
Armandroff, Taft E.
1989-01-01
A large refined data sample is used to study the properties and origin of the disk system of globular clusters. A scale height for the disk cluster system of 800-1500 pc is found, which is consistent with scale-height determinations for samples of field stars identified with the Galactic thick disk. A rotational velocity of 193 ± 29 km/s and a line-of-sight velocity dispersion of 59 ± 14 km/s have been found for the metal-rich clusters.
A single population of red globular clusters around the massive compact galaxy NGC 1277
NASA Astrophysics Data System (ADS)
Beasley, Michael A.; Trujillo, Ignacio; Leaman, Ryan; Montes, Mireia
2018-03-01
Massive galaxies are thought to form in two phases: an initial collapse of gas and giant burst of central star formation, followed by the later accretion of material that builds up their stellar and dark-matter haloes. The systems of globular clusters within such galaxies are believed to form in a similar manner. The initial central burst forms metal-rich (spectrally red) clusters, whereas more metal-poor (spectrally blue) clusters are brought in by the later accretion of less-massive satellites. This formation process is thought to result in the multimodal optical colour distributions that are seen in the globular cluster systems of massive galaxies. Here we report optical observations of the massive relic-galaxy candidate NGC 1277—a nearby, un-evolved example of a high-redshift ‘red nugget’ galaxy. We find that the optical colour distribution of the cluster system of NGC 1277 is unimodal and entirely red. This finding is in strong contrast to other galaxies of similar and larger stellar mass, the cluster systems of which always exhibit (and are generally dominated by) blue clusters. We argue that the colour distribution of the cluster system of NGC 1277 indicates that the galaxy has undergone little (if any) mass accretion after its initial collapse, and use simulations of possible merger histories to show that the stellar mass due to accretion is probably at most ten per cent of the total stellar mass of the galaxy. These results confirm that NGC 1277 is a genuine relic galaxy and demonstrate that blue clusters constitute an accreted population in present-day massive galaxies.
An Efficient Method for Detecting Misbehaving Zone Manager in MANET
NASA Astrophysics Data System (ADS)
Rafsanjani, Marjan Kuchaki; Pakzad, Farzaneh; Asadinia, Sanaz
In recent years, one of the wireless technologies that has grown tremendously is mobile ad hoc networks (MANETs), in which mobile nodes organize themselves without the help of any predefined infrastructure. MANETs are highly vulnerable to attack due to the open medium, dynamically changing network topology, cooperative algorithms, lack of centralized monitoring and management, and lack of a clear defense line. In this paper, we report our progress in developing intrusion detection (ID) capabilities for MANETs. In our proposed scheme, a network with a distributed hierarchical architecture is partitioned into zones, each with one zone manager. The zone manager is responsible for monitoring the cluster heads in its zone, and cluster heads are in charge of monitoring their members. The most important problem, however, is how the trustworthiness of the zone manager can be established. We therefore propose a scheme in which the "honest neighbors" of a zone manager validate their zone manager. These honest neighbors prevent false accusations and tolerate a manager that is wrongly reported as misbehaving; however, if the manager repeats its misbehavior, it loses its management role. Our scheme thus improves intrusion detection and provides a more reliable network.
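A minimal sketch of the honest-neighbour validation idea: a manager is flagged only when a majority of honest neighbours report misbehaviour, and is demoted only after repeated flags. The two-strike threshold is an assumption for illustration; the paper does not fix an exact value.

```python
def majority_misbehaving(reports):
    """True when a majority of honest neighbours report misbehaviour.

    reports maps neighbour id -> True (manager behaved) / False (misbehaved).
    Requiring a majority guards against a single false accusation.
    """
    votes = list(reports.values())
    return votes.count(False) > len(votes) / 2

class ZoneManager:
    def __init__(self, strikes_allowed=2):  # hypothetical demotion threshold
        self.strikes = 0
        self.strikes_allowed = strikes_allowed
        self.is_manager = True

    def record_round(self, reports):
        """Tally one monitoring round of neighbour reports."""
        if majority_misbehaving(reports):
            self.strikes += 1
            if self.strikes >= self.strikes_allowed:
                self.is_manager = False  # repeated misbehaviour: loses the role
```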
Buglione, Michela; Cavagnini, Roberta; Di Rosario, Federico; Maddalo, Marta; Vassalli, Lucia; Grisanti, Salvatore; Salgarello, Stefano; Orlandi, Ester; Bossi, Paolo; Majorana, Alessandra; Gastaldi, Giorgio; Berruti, Alfredo; Trippa, Fabio; Nicolai, Pietro; Barasch, Andrei; Russi, Elvio G; Raber-Durlacher, Judith; Murphy, Barbara; Magrini, Stefano M
2016-06-01
Radiotherapy alone or in combination with chemotherapy and/or surgery is a well-known radical treatment for head and neck cancer patients. Nevertheless, acute side effects (such as moist desquamation, skin erythema, loss of taste, mucositis, etc.) and in particular late toxicities (osteoradionecrosis, xerostomia, trismus, radiation caries, etc.) are often debilitating and underestimated. A multidisciplinary group of head and neck cancer specialists from Italy met in Milan with the aim of reaching a consensus on a clinical definition and management of these toxicities. The Delphi Appropriateness method was used for this consensus and external experts evaluated the conclusions. The paper contains 20 clusters of statements about the clinical definition and management of stomatological issues that reached consensus, and offers a review of the literature about these topics. The review was split into two parts: the first part dealt with dental pathologies and osteoradionecrosis (10 clusters of statements), whereas this second part deals with trismus and xerostomia (10 clusters of statements). Copyright © 2016. Published by Elsevier Ireland Ltd.
X-Ray Morphological Analysis of the Planck ESZ Clusters
NASA Astrophysics Data System (ADS)
Lovisari, Lorenzo; Forman, William R.; Jones, Christine; Ettori, Stefano; Andrade-Santos, Felipe; Arnaud, Monique; Démoclès, Jessica; Pratt, Gabriel W.; Randall, Scott; Kraft, Ralph
2017-09-01
X-ray observations show that galaxy clusters have a very large range of morphologies. The most disturbed systems, which are good to study how clusters form and grow and to test physical models, may potentially complicate cosmological studies because the cluster mass determination becomes more challenging. Thus, we need to understand the cluster properties of our samples to reduce possible biases. This is complicated by the fact that different experiments may detect different cluster populations. For example, Sunyaev-Zeldovich (SZ) selected cluster samples have been found to include a greater fraction of disturbed systems than X-ray selected samples. In this paper we determine eight morphological parameters for the Planck Early Sunyaev-Zeldovich (ESZ) objects observed with XMM-Newton. We found that two parameters, concentration and centroid shift, are the best to distinguish between relaxed and disturbed systems. For each parameter we provide the values that allow selecting the most relaxed or most disturbed objects from a sample. We found that there is no mass dependence on the cluster dynamical state. By comparing our results with what was obtained with REXCESS clusters, we also confirm that the ESZ clusters indeed tend to be more disturbed, as found by previous studies.
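Of the eight parameters, the centroid shift is commonly defined as the scatter of the offsets between the X-ray peak and the flux-weighted centroid measured in apertures of decreasing radius, normalised by the largest aperture radius. The sketch below follows that common definition under simplifying assumptions; the pixel lists and the 10% aperture shrink step are illustrative, not this paper's exact prescription.

```python
import math

def centroid(pixels):
    """Flux-weighted centroid of (x, y, flux) pixels."""
    total = sum(f for _, _, f in pixels)
    cx = sum(x * f for x, _, f in pixels) / total
    cy = sum(y * f for _, y, f in pixels) / total
    return cx, cy

def centroid_shift(pixels, peak, r_ap, n_apertures=5):
    """Scatter of centroid-peak offsets over shrinking apertures,
    normalised by the largest aperture radius (standing in for R500)."""
    offsets = []
    for i in range(n_apertures):
        r = r_ap * (1 - 0.1 * i)  # shrink the aperture by 10% per step
        inside = [(x, y, f) for x, y, f in pixels
                  if math.hypot(x - peak[0], y - peak[1]) <= r]
        cx, cy = centroid(inside)
        offsets.append(math.hypot(cx - peak[0], cy - peak[1]))
    mean = sum(offsets) / len(offsets)
    var = sum((d - mean) ** 2 for d in offsets) / (len(offsets) - 1)
    return math.sqrt(var) / r_ap
```

A relaxed, symmetric cluster yields a shift near zero; a disturbed one, whose centroid wanders as the aperture shrinks, yields a larger value.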
A new flight control and management system architecture and configuration
NASA Astrophysics Data System (ADS)
Kong, Fan-e.; Chen, Zongji
2006-11-01
The advanced fighter should possess capabilities such as supersonic cruise, stealth, agility, STOVL (Short Take-Off and Vertical Landing), and powerful communication and information processing. For this purpose, it is not enough only to improve the aerodynamic and propulsion systems; more importantly, it is necessary to enhance the control system. A complete flight control system provides not only autopilot, auto-throttle, and control augmentation, but also mission management. The F-22 and JSF possess considerably advanced flight control systems on the basis of the Pave Pillar and Pave Pace avionics architectures, but their control architectures are not sufficiently integrated. The main purpose of this paper is to build a novel fighter control system architecture. A control system constructed on this architecture should be integrated, inexpensive, fault-tolerant, safe, reliable, and effective, and it will take charge of both flight control and mission management. Starting from this purpose, this paper proceeds as follows. First, based on human nervous control, a three-level hierarchical control architecture is proposed. At the top of the architecture, the decision level is in charge of decision-making; in the middle, the organization and coordination level schedules resources, monitors the states of the fighter, switches control modes, and so on; and at the bottom, the execution level handles the concrete actuation and measurement. Then, according to their function and resources, all tasks involving flight control and mission management are assigned to the individual levels. Finally, to validate the three-level architecture, a physical configuration is also shown. The configuration is distributed and applies recent advances from the information technology industry, such as line-replaceable modules and cluster technology.
Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao
2014-01-01
Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and error margins; therefore, clustering methods are widely applied as a solution. Traditional clustering methods in positioning systems, though, can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Moreover, outages of access points can result in asymmetric matching problems, which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing the online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability with respect to the asymmetric matching problem. PMID:24451470
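The core spatial-division idea, clustering reference points under a physical-distance constraint rather than by RSS similarity alone, can be approximated by binning physical coordinates. This is only a rough stand-in: the cell size and fingerprints below are invented, and the paper's actual SDC additionally employs a Genetic Algorithm and SVM for coarse matching.

```python
def spatial_division(fingerprints, cell):
    """Group fingerprint reference points into spatially compact clusters
    by binning their physical coordinates into square cells of side `cell`.

    fingerprints: list of (rss_vector, (x, y)) pairs.
    Binning guarantees the physical-continuity property that RSS-only
    clustering lacks, at the cost of fixed cell boundaries.
    """
    clusters = {}
    for rss, (x, y) in fingerprints:
        key = (int(x // cell), int(y // cell))
        clusters.setdefault(key, []).append((rss, (x, y)))
    return list(clusters.values())
```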
Lift Off for first pair of Cluster II spacecraft
NASA Astrophysics Data System (ADS)
2000-07-01
At 14:39 CEST, a Soyuz-Fregat launch vehicle provided by the French-Russian Starsem consortium lifted off with FM 6 and FM 7, the first pair of Cluster II satellites. Approximately 90 minutes into the mission, the rocket's Fregat fourth stage fired for a second time to insert the spacecraft into a 240 km - 18,000 km parking orbit. A few minutes later, the ground station in Kiruna, Sweden, acquired the two spacecraft and started to receive telemetry, confirming that the satellites had successfully separated from the Fregat and that they were now flying independently. "This has been an excellent start and we look forward to the second launch next month," said Professor Roger-Maurice Bonnet, ESA Director of Science. "Cluster is one of the key Cornerstone missions in our Horizons 2000 long-term scientific programme and it will provide unique insights that will revolutionise our understanding of near-Earth space." ESA's Cluster II project manager, Dr John Ellwood, paid tribute to the hundreds of scientists and engineers in many countries who have worked so hard to rebuild the four Cluster satellites since the tragic loss of the first group in 1996. "Without the dedication and teamwork of these people, today's success would not have been possible," he said. "Only three years after we began the Cluster II programme, we are already starting to see the fruits of all our efforts." Cluster II deputy project manager, Alberto Gianolio, also expressed his full satisfaction with the successful launch. "This launch marks a milestone in the cooperation between the European Space Agency and our Russian partners. We are looking forward to the continuation of this fruitful joint effort in the years to come". UK Winner For Cluster Competition - Rumba, Salsa, Samba, Tango into space! The winner of ESA's "Name The Cluster Quartet" competition was announced today, during a special launch event for the media at the European Space Operations Centre (ESOC) in Darmstadt, Germany.
After an exhaustive examination of more than 5,000 entries from all 15 ESA member states, Professor Bonnet selected the winning entry from a shortlist recommended by the international jury. The lucky winner is Raymond Cotton of Bristol, who suggested the names of four dances - RUMBA, SALSA, SAMBA and TANGO - for the individual satellites of the Cluster quartet. "We thought of these because my wife and I both like ballroom dancing, and they seemed to fit with the movement of the satellites through space," he said. "The names are also international and will be recognised in any country." "It was an extremely hard decision," commented Professor Bonnet, "There were some excellent suggestions, but I considered the shortlisted entry from the UK to be the best because it is catchy, easy to remember, and reflects the way the four satellites will dance in formation around the heavens during their mission." The spacecraft will now be named as follows: FM 5, Rumba; FM 6, Salsa; FM 7, Samba; FM 8, Tango. Future operations: over the next week, the FM 6 (Salsa) and FM 7 (Samba) spacecraft will use their own onboard propulsion systems to reach their operational orbits, 19,000 km - 119,000 km above the Earth. At their furthest point (apogee) from the Earth, the Cluster satellites will be almost one third of the distance to the Moon. Six engine firings will be required to enlarge the current orbits and change their inclination so that the spacecraft will eventually pass over the Earth's polar regions. These major manoeuvres are only possible because of the large amount of fuel they carry, which accounts for more than half the launch mass of each Cluster satellite. The second pair of Cluster spacecraft is scheduled for launch on 9 August. After they rendezvous with the spacecraft that were launched today, the quartet will undergo three months of instrument calibration and systems checkouts before beginning their scientific programme.
They will then spend the next two years investigating the interaction between the Sun and our planet in unprecedented detail.
The Observed Relationship between Management Styles and Resource Adequacy.
ERIC Educational Resources Information Center
Lynch, David M.; And Others
This descriptive study surveyed deans (N=142), department chairs (N=392), and faculty (N=1173) to examine their perceptions of the relationship between resource adequacy within institutions of higher education and administrators' management styles. The clusters of variables examined were: (1) management style (use of communication and…
Management of a Learning Resource Center: A Seven-Year Study.
ERIC Educational Resources Information Center
Hampton, Carol L.; And Others
1979-01-01
Data compiled over seven years present evidence that small-group or "cluster" carrels are successfully utilized by medical students in a learning resource center and should be considered to be an efficient method of managing space, software, and hardware. Three management concepts are reported. (Author/LBH)
Breland, Jessica Y; Hundt, Natalie E; Barrera, Terri L; Mignogna, Joseph; Petersen, Nancy J; Stanley, Melinda A; Cully, Jeffery A
2015-10-01
Treatment of chronic obstructive pulmonary disease (COPD) is palliative, and quality of life is important. Increased understanding of correlates of quality of life and its domains could help clinicians and researchers better tailor COPD treatments and better support patients engaging in those treatments or other important self-management behaviors. Anxiety is common in those with COPD; however, overlap of physical and emotional symptoms complicates its assessment. The current study aimed to identify anxiety symptom clusters and to assess the association of these symptom clusters with COPD-related quality of life. Participants (N = 162) with COPD completed the Beck Anxiety Inventory (BAI), Chronic Respiratory Disease Questionnaire, Patient Health Questionnaire-9, and Medical Research Council dyspnea scale. Anxiety clusters were identified, using principal component analysis (PCA) on the BAI's 21 items. Anxiety clusters, along with factors previously associated with quality of life, were entered into a multiple regression designed to predict COPD-related quality of life. PCA identified four symptom clusters related to (1) general somatic distress, (2) fear, (3) nervousness, and (4) respiration-related distress. Multiple regression analyses indicated that greater fear was associated with less perceived mastery over COPD (β = -0.19, t(149) = -2.69, p < 0.01). Anxiety symptoms associated with fear appear to be an important indicator of anxiety in patients with COPD. In particular, fear was associated with perceptions of mastery, an important psychological construct linked to disease self-management. Assessing the BAI symptom cluster associated with fear (five items) may be a valuable rapid assessment tool to improve COPD treatment and physical health outcomes.
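The PCA step behind the symptom clusters can be sketched with a power-iteration extraction of the leading component. This is only the skeleton: the study extracts and interprets four components over the 21 BAI items, whereas this sketch pulls out one direction from made-up data.

```python
import math

def first_principal_component(rows, iters=200):
    """Leading PCA direction of mean-centred data via power iteration
    on the sample covariance matrix.

    rows: list of equal-length numeric records (e.g. item scores).
    Returns a unit vector of item loadings for the dominant component.
    """
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    x = [[r[j] - means[j] for j in range(d)] for r in rows]
    # Sample covariance matrix of the centred data.
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    # Power iteration converges to the dominant eigenvector.
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v
```

Items with large loadings on the same component form a symptom cluster, such as the fear cluster highlighted in the study.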
Army Officers’ Attitudes of Conflict Management.
1976-06-11
The purpose of this study was to measure the attitudes of middle-level career Army officers relative to the concepts of conflict management. The... the literature concerning conflict management and its related fields of study, an exploratory analysis employing Hierarchical Clustering Schemes, and... conflict management. (2) No difference exists in the attitudes of conflict management according to the sample's three branch groups: combat arms
Clustervision: Visual Supervision of Unsupervised Clustering.
Kwon, Bum Chul; Eysenbach, Ben; Verma, Janu; Ng, Kenney; De Filippi, Christopher; Stewart, Walter F; Perer, Adam
2018-01-01
Clustering, the process of grouping together similar items into distinct partitions, is a common type of unsupervised machine learning that can be useful for summarizing and aggregating complex multi-dimensional data. However, data can be clustered in many ways, and there exists a large body of algorithms designed to reveal different patterns. While having access to a wide variety of algorithms is helpful, in practice it is quite difficult for data scientists to choose and parameterize algorithms to get the clustering results relevant for their dataset and analytical tasks. To alleviate this problem, we built Clustervision, a visual analytics tool that helps ensure data scientists find the right clustering among the large number of techniques and parameters available. Our system clusters data using a variety of clustering techniques and parameters and then ranks clustering results utilizing five quality metrics. In addition, users can guide the system to produce more relevant results by providing task-relevant constraints on the data. Our visual user interface allows users to find high quality clustering results, explore the clusters using several coordinated visualization techniques, and select the cluster result that best suits their task. We demonstrate this novel approach using a case study with a team of researchers in the medical domain and showcase that our system empowers users to choose an effective representation of their complex data.
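The rank-by-quality idea can be shown in miniature with a single metric, within-cluster sum of squares, standing in for Clustervision's five; the 2-D points and candidate labelings are illustrative only.

```python
def inertia(points, labels, k):
    """Within-cluster sum of squared distances to cluster centroids,
    one simple clustering quality metric (lower is better)."""
    total = 0.0
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        if not members:
            continue
        cx = sum(x for x, _ in members) / len(members)
        cy = sum(y for _, y in members) / len(members)
        total += sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in members)
    return total

def rank_clusterings(points, candidates):
    """Order candidate label assignments from best (lowest inertia) to worst."""
    scored = [(inertia(points, labels, max(labels) + 1), labels)
              for labels in candidates]
    return [labels for _, labels in sorted(scored, key=lambda t: t[0])]
```

A tool like Clustervision generates the candidates by sweeping algorithms and parameters, then surfaces the top-ranked results for visual inspection.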
Jeon, Yun-Hee; Simpson, Judy M; Chenoweth, Lynn; Cunich, Michelle; Kendig, Hal
2013-10-25
A plethora of observational evidence exists concerning the impact of management and leadership on workforce, work environment, and care quality. Yet, no randomised controlled trial has been conducted to test the effectiveness of leadership and management interventions in aged care. An innovative aged care clinical leadership program (Clinical Leadership in Aged Care--CLiAC) was developed to improve managers' leadership capacities to support the delivery of quality care in Australia. This paper describes the study design of the cluster randomised controlled trial testing the effectiveness of the program. Twenty-four residential and community aged care sites were recruited; managers at each site agreed in writing to participate in the study and to ensure that leaders allocated to the control arm would not be offered the intervention program. Sites undergoing major managerial or structural changes were excluded. The 24 sites were randomly allocated to receive the CLiAC program (intervention) or usual care (control), stratified by type (residential vs. community, six each for each arm). Treatment allocation was masked to assessors and staff of all participating sites. The objective is to establish the effectiveness of the CLiAC program in improving work environment, workforce retention, as well as care safety and quality, when compared to usual care. The primary outcomes are measures of work environment, care quality and safety, and staff turnover rates. Secondary outcomes include manager leadership capacity, staff absenteeism, intention to leave, stress levels, and job satisfaction. Differences between intervention and control groups will be analysed by researchers blinded to treatment allocation using linear regression of individual results adjusted for stratification and clustering by site (primary analysis), and additionally for baseline values and potential confounders (secondary analysis).
Outcomes measured at the site level will be compared by cluster-level analysis. The overall costs and benefits of the program will also be assessed. The outcomes of the trial have the potential to inform actions to enhance leadership and management capabilities of the aged care workforce, address pressing issues about workforce shortages, and increase the quality of aged care services. Australian New Zealand Clinical Trials Registry (ACTRN12611001070921).
Horizontally scaling dCache SRM with the Terracotta platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perelmutov, T.; Crawford, M.; Moibenko, A.
2011-01-01
The dCache disk caching file system has been chosen by a majority of LHC experiments Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. The Storage Resource Manager (SRM) is a standardized grid storage interface and a single point of remote entry into dCache, and hence is a critical component. SRM must scale to increasing transaction rates and remain resilient against changing usage patterns. The initial implementation of the SRM service in dCache suffered from an inability to support clustered deployment, and its performance was limited by the hardware of a single node. Using the Terracotta platform, we added the ability to horizontally scale the dCache SRM service to run on multiple nodes in a cluster configuration, coupled with network load balancing. This gives site administrators the ability to increase the performance and reliability of SRM service to face the ever-increasing requirements of LHC data handling. In this paper we will describe the previous limitations of the architecture SRM server and how the Terracotta platform allowed us to readily convert single node service into a highly scalable clustered application.
A taxonomy of accountable care organizations for policy and practice.
Shortell, Stephen M; Wu, Frances M; Lewis, Valerie A; Colla, Carrie H; Fisher, Elliott S
2014-12-01
To develop an exploratory taxonomy of Accountable Care Organizations (ACOs) to describe and understand early ACO development and to provide a basis for technical assistance and future evaluation of performance. Data from the National Survey of Accountable Care Organizations, fielded between October 2012 and May 2013, of 173 Medicare, Medicaid, and commercial payer ACOs. Drawing on resource dependence and institutional theory, we develop measures of eight attributes of ACOs such as size, scope of services offered, and the use of performance accountability mechanisms. Data are analyzed using a two-step cluster analysis approach that accounts for both continuous and categorical data. We identified a reliable and internally valid three-cluster solution: larger, integrated systems that offer a broad scope of services and frequently include one or more postacute facilities; smaller, physician-led practices, centered in primary care, and that possess a relatively high degree of physician performance management; and moderately sized, joint hospital-physician and coalition-led groups that offer a moderately broad scope of services with some involvement of postacute facilities. ACOs can be characterized into three distinct clusters. The taxonomy provides a framework for assessing performance, for targeting technical assistance, and for diagnosing potential antitrust violations. © Health Research and Educational Trust.
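Two-step clustering is used here because the ACO attributes mix continuous and categorical data; the mixed-type handling can be illustrated with a Gower-style distance, in which continuous fields contribute range-scaled differences and categorical fields contribute mismatches. The attribute layout below is invented for illustration, not the survey's actual coding, and real two-step procedures typically also pre-cluster records before the final grouping.

```python
def gower_distance(a, b, ranges):
    """Mixed-type distance between two records in [0, 1].

    ranges[j] is the observed range of continuous attribute j,
    or None when attribute j is categorical.
    """
    parts = []
    for va, vb, rng in zip(a, b, ranges):
        if rng is None:                       # categorical: 0/1 mismatch
            parts.append(0.0 if va == vb else 1.0)
        else:                                 # continuous: range-scaled diff
            parts.append(abs(va - vb) / rng)
    return sum(parts) / len(parts)

# Hypothetical ACO records: (physician count, governance type)
d = gower_distance((40, "physician-led"), (140, "integrated-system"),
                   (200, None))
```

Any standard clustering algorithm can then run on the resulting distance matrix to recover groups like the three ACO clusters described.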
HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters
NASA Astrophysics Data System (ADS)
Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge
2015-12-01
In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase was focused on verifying the functionalities of Windows HPC, its performance, support of commercial tools and the integration with the users' work environment. We describe constraints imposed by the way the CERN Data Centre is operated, licensing for engineering tools, and the scalability and behaviour of the HPC engineering applications used at CERN. We present an initial set of requirements, which were created based on the above constraints and requests from the CERN engineering user community. We explain how we have configured Windows HPC clusters to provide the job scheduling functionalities required to support the CERN engineering user community, quality of service, user- and project-based priorities, and fair access to limited resources. Finally, we present several performance tests we carried out to verify Windows HPC performance and scalability.
Formation of black hole x-ray binaries in globular clusters
NASA Astrophysics Data System (ADS)
Kremer, Kyle; Chatterjee, Sourav; Rodriguez, Carl; Rasio, Frederic
2018-01-01
We explore the formation of mass-transferring binary systems containing black holes within globular clusters. We show that it is possible to form mass-transferring binaries with main sequence, giant, and white dwarf companions with a variety of orbital parameters in globular clusters spanning a large range in present-day properties. We show that the presence of mass-transferring black hole systems has little correlation with the total number of black holes within the cluster at any time. In addition to mass-transferring binaries retained within their host clusters at late times, we also examine the black hole and neutron star binaries that are ejected from their host clusters. These ejected systems may contribute to the low-mass x-ray binary population in the galactic field.
Cost/Performance Ratio Achieved by Using a Commodity-Based Cluster
NASA Technical Reports Server (NTRS)
Lopez, Isaac
2001-01-01
Researchers at the NASA Glenn Research Center acquired a commodity cluster based on Intel Corporation processors to compare its performance with a traditional UNIX cluster in the execution of aeropropulsion applications. Since the cost differential of the clusters was significant, a cost/performance ratio was calculated. After executing a propulsion application on both clusters, the researchers demonstrated a 9.4 cost/performance ratio in favor of the Intel-based cluster. These researchers utilize the Aeroshark cluster as one of the primary testbeds for developing NPSS parallel application codes and system software. The Aeroshark cluster provides 64 Intel Pentium II 400-MHz processors, housed in 32 nodes. Recently, APNASA, a code developed by a Government/industry team for the design and analysis of turbomachinery systems, was used for a simulation on Glenn's Aeroshark cluster.
ERIC Educational Resources Information Center
Child Care Bureau, 2004
2004-01-01
This publication was developed in conjunction with a special Tribal Cluster Training, "Collaboration and Accountability as Foundations for Success," held in Portland, Oregon on August 24-25, 2004. This Tribal Cluster Training is jointly sponsored by the Office of Family Assistance (OFA), which administers the Tribal Temporary Assistance…
ERIC Educational Resources Information Center
Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.
2006-01-01
Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…
Pathway of Contagion: The Identification of a Youth Suicide Cluster
ERIC Educational Resources Information Center
Zenere, Frank J.
2008-01-01
As a school psychologist and crisis management specialist for Miami-Dade County Public Schools, a member of the NASP National Emergency Assistance Team, and an independent consultant, the author has provided postvention services following the suicides of over 50 students, including several suicide clusters. Providing assistance in the aftermath of…
Consumer and Homemaking: Grade 7. Cluster I.
ERIC Educational Resources Information Center
Calhoun, Olivia H.
A curriculum guide for grade 7, the document is devoted to the occupational cluster "Consumer and Homemaking." It is divided into six units: buying, child care, nutrition, clothing, family relations, and housing and household management. Each unit is introduced by a statement of the topic, the unit's purpose, main ideas, quests, and a…
On the Right Track: Southern Maryland Schools Revamp Their Curriculum around Tech Prep.
ERIC Educational Resources Information Center
Leftwich, Kathy
1992-01-01
In St. Mary's County (Maryland) schools' revamped curriculum, tech prep encompasses four clusters: applied business/management, applied engineering/mechanics, applied health/human services, and college prep. Career counselors help eighth graders choose a cluster and monitor their satisfaction with their choice, allowing them to change until junior…
An Experiment in Computer Ethics: Clustering Composition with Computer Applications.
ERIC Educational Resources Information Center
Nydahl, Joel
Babson College (a school of business and management in Wellesley, Massachusetts) attempted to make a group of first-year students computer literate through "clustering." The same group of students were enrolled in two courses: a special section of "Composition" which stressed word processing as a composition aid and a regular…
Shapira, Aviad; Shoshany, Maxim; Nir-Goldenberg, Sigal
2013-07-01
Environmental management and planning are instrumental in resolving conflicts arising between societal needs for economic development on the one hand and for open green landscapes on the other. Allocating green corridors between fragmented core green areas may provide a partial solution to these conflicts. Decisions regarding green corridor development require the assessment of alternative allocations based on multiple criteria evaluations. The Analytical Hierarchy Process provides a methodology both for a structured and consistent extraction of such evaluations and for the search for consensus among experts regarding the weights assigned to the different criteria. Implementing this methodology with 15 Israeli experts (landscape architects, regional planners, and geographers) revealed inherent differences in expert opinions in this field beyond professional divisions. The use of Agglomerative Hierarchical Clustering allowed us to identify clusters representing common decisions regarding criterion weights. Aggregating the evaluations of these clusters revealed an important dichotomy between a pragmatist approach that emphasizes the weight of statutory criteria and an ecological approach that emphasizes the role of natural conditions in allocating green landscape corridors.
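The two-stage method (AHP weight extraction followed by agglomerative clustering of the experts' weight vectors) can be sketched as follows. This is an illustrative reconstruction with hypothetical 3x3 comparison matrices and the geometric-mean AHP approximation, not the study's actual data or software:

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority vector via the geometric-mean method."""
    g = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return g / g.sum()

def agglomerate(points, k):
    """Naive average-linkage agglomerative clustering down to k clusters."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.mean([np.linalg.norm(points[i] - points[j])
                             for i in clusters[a] for j in clusters[b]])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)
    return clusters

# Hypothetical comparison matrices over three criteria (statutory,
# ecological, scenic) for four experts -- illustrative values only.
experts = [
    np.array([[1, 5, 3], [1/5, 1, 1/2], [1/3, 2, 1]]),   # "pragmatist"
    np.array([[1, 4, 3], [1/4, 1, 1/2], [1/3, 2, 1]]),   # "pragmatist"
    np.array([[1, 1/5, 1/2], [5, 1, 3], [2, 1/3, 1]]),   # "ecological"
    np.array([[1, 1/4, 1/3], [4, 1, 2], [3, 1/2, 1]]),   # "ecological"
]
weights = np.array([ahp_weights(m) for m in experts])
print(agglomerate(weights, k=2))  # two opinion clusters
```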
Initial Analysis of and Predictive Model Development for Weather Reroute Advisory Use
NASA Technical Reports Server (NTRS)
Arneson, Heather M.
2016-01-01
In response to severe weather conditions, traffic management coordinators specify reroutes to route air traffic around affected regions of airspace. Providing analysis and recommendations of available reroute options would assist the traffic management coordinators in making more efficient rerouting decisions. These recommendations can be developed by examining historical data to determine which previous reroute options were used in similar weather and traffic conditions, essentially using past information to inform future decisions. This paper describes the initial steps and methodology used towards this goal. A method to extract relevant features from the large volume of weather data, quantifying the convective weather scenario during a particular time range, is presented. Similar routes are clustered. An algorithm to identify which cluster of reroute advisories was actually followed by pilots is described. Models built for fifteen of the top twenty most frequently used reroute clusters correctly predict the use of the cluster for over 60% of the test examples. Results are preliminary but indicate that the methodology is worth pursuing, with modifications based on insight gained from this analysis.
ERIC Educational Resources Information Center
Bouchet, Francois; Harley, Jason M.; Trevors, Gregory J.; Azevedo, Roger
2013-01-01
In this paper, we present the results obtained using a clustering algorithm (Expectation-Maximization) on data collected from 106 college students learning about the circulatory system with MetaTutor, an agent-based Intelligent Tutoring System (ITS) designed to foster self-regulated learning (SRL). The three extracted clusters were validated and…
Ammerlaan, Judy W; van Os-Medendorp, Harmieke; de Boer-Nijhof, Nienke; Maat, Bertha; Scholtus, Lieske; Kruize, Aike A; Bijlsma, Johannes W J; Geenen, Rinie
2017-03-01
The aim of this study was to investigate preferences and needs regarding the structure and content of a person-centered online self-management support intervention for patients with a rheumatic disease. A four-step procedure, consisting of online focus group interviews, consensus meetings with patient representatives, a card sorting task, and hierarchical cluster analysis, was used to identify the preferences and needs. Preferences concerning the structure involved 1) suitability to individual needs and questions, 2) fit to the life stage, 3) the opportunity to share experiences and be in contact with others, 4) having an expert patient as trainer, 5) allowing the training to be done at one's own pace, and 6) offering a brief intervention. Hierarchical cluster analysis of 55 content needs yielded eleven clusters: 1) treatment knowledge, 2) societal procedures, 3) physical activity, 4) psychological distress, 5) self-efficacy, 6) provider, 7) fluctuations, 8) dealing with rheumatic disease, 9) communication, 10) intimate relationship, and 11) having children. A comprehensive assessment of preferences and needs in patients with a rheumatic disease is expected to contribute to motivation, adherence to, and outcome of self-management support programs. The overview of preferences and needs can be used to build an online self-management intervention. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
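A card sorting task followed by hierarchical cluster analysis is commonly implemented by clustering a co-occurrence-based distance matrix: items that participants sort into the same pile often are treated as close. The sketch below uses hypothetical items and sorts (not the study's 55 needs) and assumes SciPy's hierarchical clustering routines:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical card-sort data (not the study's): each participant
# partitions the same items into piles of related needs.
items = ["meds", "side effects", "exercise", "fatigue", "partner", "children"]
sorts = [
    [{0, 1}, {2, 3}, {4, 5}],
    [{0, 1, 2}, {3, 4, 5}],
    [{0, 1}, {2, 3}, {4, 5}],
]

# Count how often each pair of items lands in the same pile.
n = len(items)
co = np.zeros((n, n))
for piles in sorts:
    for pile in piles:
        for i in pile:
            for j in pile:
                co[i, j] += 1

# Convert co-occurrence frequency to distance and cluster.
dist = 1.0 - co / len(sorts)
np.fill_diagonal(dist, 0.0)
labels = fcluster(linkage(squareform(dist), method="average"),
                  t=3, criterion="maxclust")
print(labels)
```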
Valdes-Donoso, P; Mardones, F O; Jarpa, M; Ulloa, M; Carpenter, T E; Perez, A M
2013-03-01
Infectious salmon anaemia virus (ISAV) caused a large epidemic in farmed Atlantic salmon in Chile in 2007-2009. Here, we assessed co-infection patterns of ISAV and sea lice (SL) based on surveillance data collected by the fish health authority. ISAV status and SL counts in all Atlantic salmon farms located in the 10th region of Chile were registered monthly from July 2007 through December 2009. Each farm was categorized monthly according to its ISAV and SL status. A multinomial time-space scan test using a circular window was applied to identify disease clusters, and a multivariate regression model was fitted to quantify the association between disease-clustering and farm-management factors. Most of the identified clusters (9/13) were associated with high SL burdens. There were significant associations (P < 0.05) between management factors and ISAV/SL status. Areas in which good management practices were associated with a reduced disease risk were identified. The findings of this study suggest that certain management practices can effectively reduce the risk of SL and ISAV in the face of an epidemic and will be helpful towards creating an effective disease control programme in Chile. © 2013 Blackwell Publishing Ltd.
The origin of and conditions for clustering in fluids with competing interactions
NASA Astrophysics Data System (ADS)
Jadrich, Ryan; Bollinger, Jonathan; Truskett, Thomas
2015-03-01
Fluids with competing short-range attractions and long-range repulsions exhibit a rich phase behavior characterized by intermediate range order (IRO), as quantified via the static structure factor. This phase behavior includes cluster formation depending upon density-controlled packing effects and the magnitude and range of the attractive and repulsive interactions. Such model systems mimic (to zeroth order) screened, charge-stabilized, aqueous colloidal dispersions of, e.g., proteins. We employ molecular dynamics simulations and integral equation theory to elucidate a more fundamental microscopic explanation for IRO-driven clustering. A simple criterion is identified that indicates when dynamic, amorphous clustering emerges in a polydisperse system, namely when the Ornstein-Zernike thermal correlation length in the system exceeds the range of the repulsive potential tail. Remarkably, this criterion also appears tightly correlated with crystalline cluster formation in a monodisperse system. Our new gauge is compared to another phenomenological condition for clustering, namely that the IRO peak magnitude exceeds ~2.7. Ramifications of crystalline versus amorphous clustering are discussed, and potential ways of using our new measure in experiments are put forward.
NASA Astrophysics Data System (ADS)
Sitek, M.; Szymański, M. K.; Udalski, A.; Skowron, D. M.; Kostrzewa-Rutkowska, Z.; Skowron, J.; Karczmarek, P.; Cieślar, M.; Wyrzykowski, Ł.; Kozłowski, S.; Pietrukowicz, P.; Soszyński, I.; Mróz, P.; Pawlak, M.; Poleski, R.; Ulaczyk, K.
2017-12-01
The Magellanic System (MS) encompasses the nearest neighbors of the Milky Way, the Large (LMC) and Small (SMC) Magellanic Clouds, and the Magellanic Bridge (MBR). This system contains a diverse sample of star clusters. Their parameters, such as the spatial distribution, chemical composition and age distribution yield important information about the formation scenario of the whole Magellanic System. Using deep photometric maps compiled in the fourth phase of the Optical Gravitational Lensing Experiment (OGLE-IV) we present the most complete catalog of star clusters in the Magellanic System ever constructed from homogeneous, long time-scale photometric data. In this second paper of the series, we show the collection of star clusters found in the area of about 360 square degrees in the MBR and in the outer regions of the SMC. Our sample contains 198 visually identified star cluster candidates, 75 of which were not listed in any of the previously published catalogs. The new discoveries are mainly young small open clusters or clusters similar to associations.
Lee, Yii-Ching; Huang, Shian-Chang; Huang, Chih-Hsuan; Wu, Hsin-Hung
2016-01-01
This study uses kernel k-means cluster analysis to identify medical staff with high burnout. The data, collected from October to November 2014, come from the emotional exhaustion dimension of the Chinese version of the Safety Attitudes Questionnaire in a regional teaching hospital in Taiwan. The 680 effective questionnaires cover the entire staff, including physicians, nurses, technicians, pharmacists, medical administrators, and respiratory therapists. The results show that 8 clusters are generated by the kernel k-means method. Employees in clusters 1, 4, and 5 are in relatively good condition, whereas employees in clusters 2, 3, 6, 7, and 8 need to be closely monitored from time to time because they have a relatively higher degree of burnout. When employees with a higher degree of burnout are identified, the hospital management can take actions to improve resilience, reduce potential medical errors, and, eventually, enhance patient safety. This study also suggests that the hospital management needs to keep track of medical staff's fatigue conditions and provide timely assistance for burnout recovery through employee assistance programs, mindfulness-based stress reduction programs, positivity currency buildup, and forming appreciative inquiry groups. © The Author(s) 2016.
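Kernel k-means assigns each point to the cluster whose centroid, in the implicit feature space, is nearest; the required squared distances can be computed from the Gram matrix alone. A minimal sketch (with synthetic scores and an RBF kernel, not the questionnaire data):

```python
import numpy as np

def kernel_kmeans(K, k, iters=50, seed=0):
    """Kernel k-means on a precomputed kernel (Gram) matrix K."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(k, size=n)
    for _ in range(iters):
        dist = np.empty((n, k))
        for c in range(k):
            mask = labels == c
            m = mask.sum()
            if m == 0:                 # re-seed an empty cluster
                labels[rng.integers(n)] = c
                mask = labels == c
                m = 1
            # ||phi(x) - mu_c||^2 expanded in kernel terms
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, mask].sum(axis=1) / m
                          + K[np.ix_(mask, mask)].sum() / m**2)
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Hypothetical burnout scores (not the survey data): two well-separated
# groups of respondents under an RBF kernel.
X = np.vstack([np.random.default_rng(1).normal(0.2, 0.05, (10, 3)),
               np.random.default_rng(2).normal(0.8, 0.05, (10, 3))])
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq / 0.1)                  # RBF kernel
print(kernel_kmeans(K, k=2))
```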
NASA Astrophysics Data System (ADS)
Tsaur, Woei-Jiunn; Pai, Haw-Tyng
2008-11-01
The applications of group computing and communication motivate the requirement to provide group access control in mobile ad hoc networks (MANETs). Group operations in MANETs are performed in a decentralized manner, with membership accommodated dynamically. Moreover, due to the lack of centralized control, MANET groups are inherently insecure and vulnerable to attacks from both within and outside the groups. Such features make access control more challenging in MANETs. Recently, several researchers have proposed group access control mechanisms in MANETs based on a variety of threshold signatures. However, these mechanisms cannot fully accommodate MANETs' dynamic environments, because threshold-based mechanisms fail when the number of members does not reach the threshold value. Hence, by combining an efficient elliptic curve cryptosystem, a self-certified public key cryptosystem, and a secure filter technique, we construct dynamic key management schemes based on hierarchical clustering for securing group access control in MANETs. Specifically, the proposed schemes can maintain secure group access control merely by renewing the secure filters of a few cluster heads when a cluster head joins or leaves a cross-cluster. In this way, the proposed group access control scheme can effectively secure practical applications in MANETs.
Towards the use of computationally inserted lesions for mammographic CAD assessment
NASA Astrophysics Data System (ADS)
Ghanian, Zahra; Pezeshk, Aria; Petrick, Nicholas; Sahiner, Berkman
2018-03-01
Computer-aided detection (CADe) devices used for breast cancer detection on mammograms are typically first developed and assessed for a specific "original" acquisition system, e.g., a specific image detector. When CADe developers are ready to apply their CADe device to a new mammographic acquisition system, they typically assess the CADe device with images acquired using the new system. Collecting large repositories of clinical images containing verified cancer locations and acquired by the new image acquisition system is costly and time consuming. Our goal is to develop a methodology to reduce the clinical data burden in the assessment of a CADe device for use with a different image acquisition system. We are developing an image blending technique that allows users to seamlessly insert lesions imaged using an original acquisition system into normal images or regions acquired with a new system. In this study, we investigated the insertion of microcalcification clusters imaged using an original acquisition system into normal images acquired with that same system utilizing our previously-developed image blending technique. We first performed a reader study to assess whether experienced observers could distinguish between computationally inserted and native clusters. For this purpose, we applied our insertion technique to clinical cases taken from the University of South Florida Digital Database for Screening Mammography (DDSM) and the Breast Cancer Digital Repository (BCDR). Regions of interest containing microcalcification clusters from one breast of a patient were inserted into the contralateral breast of the same patient. The reader study included 55 native clusters and their 55 inserted counterparts. Analysis of the reader ratings using receiver operating characteristic (ROC) methodology indicated that inserted clusters cannot be reliably distinguished from native clusters (area under the ROC curve, AUC=0.58±0.04). 
Furthermore, CADe sensitivity was evaluated on mammograms with native and inserted microcalcification clusters using a commercial CADe system. For this purpose, we used full field digital mammograms (FFDMs) from 68 clinical cases, acquired at the University of Michigan Health System. The average sensitivities for native and inserted clusters were equal, 85.3% (58/68). These results demonstrate the feasibility of using the inserted microcalcification clusters for assessing mammographic CAD devices.
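The reported AUC can be estimated nonparametrically from reader ratings via the Mann-Whitney statistic, and CADe sensitivity is simply detections over total lesions. A minimal sketch with hypothetical ratings (not the study's reader data):

```python
import numpy as np

def auc_mann_whitney(ratings_pos, ratings_neg):
    """Nonparametric AUC: P(rating_pos > rating_neg) + 0.5 * P(tie)."""
    pos = np.asarray(ratings_pos, float)[:, None]
    neg = np.asarray(ratings_neg, float)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

# Hypothetical 1-5 confidence ratings that "this cluster is native";
# heavily overlapping distributions give an AUC near chance (0.5),
# consistent with the study's conclusion that readers cannot reliably
# tell inserted clusters from native ones.
native   = [3, 4, 2, 3, 5, 3, 2, 4]
inserted = [3, 2, 4, 3, 2, 3, 4, 3]
print(round(auc_mann_whitney(native, inserted), 2))

# CADe sensitivity is detections over total lesions, e.g. 58 of 68:
print(round(58 / 68, 3))  # → 0.853
```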
Photometric binary stars in Praesepe and the search for globular cluster binaries
NASA Technical Reports Server (NTRS)
Bolte, Michael
1991-01-01
A radial velocity study of the stars which are located on a second sequence above the single-star zero-age main sequence at a given color in the color-magnitude diagram of the open cluster Praesepe (NGC 2632) shows that 10, and possibly 11, of 17 are binary systems. Of the binary systems, five have full amplitudes for their velocity variations that are greater than 50 km/s. To the extent that they can be applied to globular clusters, these results suggest that (1) observations of 'second-sequence' stars in globular clusters would be an efficient way of finding main-sequence binary systems in globulars, and (2) current instrumentation on large telescopes is sufficient for establishing unambiguously the existence of main-sequence binary systems in nearby globular clusters.
Toda Systems, Cluster Characters, and Spectral Networks
NASA Astrophysics Data System (ADS)
Williams, Harold
2016-11-01
We show that the Hamiltonians of the open relativistic Toda system are elements of the generic basis of a cluster algebra, and in particular are cluster characters of nonrigid representations of a quiver with potential. Using cluster coordinates defined via spectral networks, we identify the phase space of this system with the wild character variety related to the periodic nonrelativistic Toda system by the wild nonabelian Hodge correspondence. We show that this identification takes the relativistic Toda Hamiltonians to traces of holonomies around a simple closed curve. In particular, this provides nontrivial examples of cluster coordinates on SL n -character varieties for n > 2 where canonical functions associated to simple closed curves can be computed in terms of quivers with potential, extending known results in the SL 2 case.
Gas and galaxies in filaments between clusters of galaxies. The study of A399-A401
NASA Astrophysics Data System (ADS)
Bonjean, V.; Aghanim, N.; Salomé, P.; Douspis, M.; Beelen, A.
2018-01-01
We have performed a multi-wavelength analysis of two galaxy cluster systems selected with the thermal Sunyaev-Zel'dovich (tSZ) effect and composed of cluster pairs and an inter-cluster filament. We have focused on one pair of particular interest: A399-A401 at redshift z ≈ 0.073, separated by 3 Mpc. We have also performed the first analysis of one lower-significance newly associated pair: A21-PSZ2 G114.09-34.34 at z ≈ 0.094, separated by 4.2 Mpc. We have characterised the intra-cluster gas using the tSZ signal from Planck and, when possible, the galaxy optical and infrared (IR) properties based on two photometric redshift catalogues: 2MPZ and WISExSCOS. From the tSZ data, we measured the gas pressure in the clusters and in the inter-cluster filaments. In the case of A399-A401, the results are in excellent agreement with previous studies and, using the temperature measured from the X-rays, we further estimate the gas density in the filament and find n0 = (4.3 ± 0.7) × 10^-4 cm^-3. The optical and IR colour-colour and colour-magnitude analyses of the galaxies selected in the cluster system, together with their star formation rate, show no segregation between galaxy populations, either in the clusters or in the filament of A399-A401. The galaxies are all passive, early type, red and dead. The gas and galaxy properties of this system suggest that the whole system formed at the same time and corresponds to a pre-merger, with the cosmic filament gas heated by the collapse. For the other cluster system, the tSZ analysis was performed and the pressure in the clusters and in the inter-cluster filament was constrained; however, the limited or nonexistent optical and IR data prevent us from confirming the presence of an actual cosmic filament or from proposing a formation scenario.
NASA Astrophysics Data System (ADS)
Wang, Zhao; Yang, Shan; Wang, Shuguang; Shen, Yan
2017-10-01
The assessment of dynamic urban structure has long been hampered by a lack of timely and accurate spatial information, which has hindered measurements of structural continuity at the macroscale. Defense Meteorological Satellite Program Operational Linescan System (DMSP/OLS) nighttime light (NTL) data provide an ideal source for detecting urban information, with a long time span, short time interval, and wide coverage. In this study, we extracted the physical boundaries of urban clusters from corrected NTL images and quantitatively analyzed the structure of the urban cluster system based on rank-size distribution, spatial metrics, and the Mann-Kendall trend test. Two levels of urban cluster systems in the Yangtze River Delta region (YRDR) were examined. We found that (1) in the entire YRDR, the urban cluster system showed a periodic process, with a significant trend toward even distribution before 2007 but an unequal growth pattern after 2007, and (2) at the metropolitan level, vast disparities exist among the four metropolitan areas in the fluctuations of Pareto's exponent, the speed of cluster expansion, and the dominance of the core cluster. The results suggest that the urban cluster information extracted from NTL data effectively reflects the evolving nature of regional urbanization, which in turn can aid in the planning of cities and help achieve more sustainable regional development.
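The rank-size (Pareto) exponent is typically obtained from a log-log fit of rank against cluster size, and the Mann-Kendall statistic from the signs of all forward differences in a time series. A minimal sketch with illustrative numbers (not the study's NTL measurements):

```python
import numpy as np

def pareto_exponent(sizes):
    """Fit log(rank) = c - q*log(size); q is the Pareto (Zipf) exponent."""
    s = np.sort(np.asarray(sizes, float))[::-1]
    ranks = np.arange(1, len(s) + 1)
    q, _ = np.polyfit(np.log(s), np.log(ranks), 1)
    return -q

def mann_kendall_S(series):
    """Mann-Kendall S statistic: sum of signs of all forward differences.
    S > 0 suggests an increasing trend, S < 0 a decreasing one."""
    x = np.asarray(series, float)
    return int(sum(np.sign(x[j] - x[i])
                   for i in range(len(x)) for j in range(i + 1, len(x))))

# Hypothetical cluster sizes (lit pixels) and a yearly size series --
# illustrative values, not the study's data.
sizes = [1000, 500, 333, 250, 200, 167, 143, 125]   # ~ Zipf with q = 1
print(round(pareto_exponent(sizes), 2))
print(mann_kendall_S([3, 4, 4, 6, 7, 9]))           # positive trend
```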
NASA Astrophysics Data System (ADS)
Capuzzo-Dolcetta, Roberto
1993-10-01
Among the possible phenomena inducing evolution of the globular cluster system in an elliptical galaxy, dynamical friction due to field stars and tidal disruption caused by a central nucleus are of crucial importance. The aim of this paper is to study the evolution of the globular cluster system in a triaxial galaxy in the presence of these phenomena. In particular, we examine the possibility that some galactic nuclei have been formed by frictionally decayed globular clusters moving in a triaxial potential. We find that the initial rapid growth of the nucleus, due mainly to massive clusters on box orbits falling on a short time scale into the galactic center, is later slowed by tidal disruption induced by the nucleus itself on less massive clusters, in the way described by Ostriker, Binney, and Saha. The efficiency of dynamical friction is sufficient to carry to the center of the galaxy enough globular cluster mass to form a compact nucleus, but the actual modes and outcomes of cluster-cluster encounters in the central potential well are complicated phenomena that remain to be investigated. The mass of the resulting nucleus is determined by the mutual feedback of the described processes, together with the initial spatial, velocity, and mass distributions of the globular cluster family. The effect on the system mass function is studied, showing the development of a low- and high-mass turnover even with an initially flat mass function. Moreover, this paper discusses the possibility that the fall of globular clusters to the galactic center was a cause of primordial violent galactic activity. An application of the model to M31 is presented.
NASA Astrophysics Data System (ADS)
Barba Ferrer, Carme; Folch, Albert; Gaju, Núria; Martínez-Alonso, Maira; Carrasquilla, Marc; Grau-Martínez, Alba; Sanchez-Vila, Xavier
2016-04-01
Managed Artificial Recharge (MAR) represents a strategic tool for managing water resources, especially during periods of scarcity. On one hand, it can increase the water stored in aquifers, which can be extracted when weather conditions do not permit exclusive exploitation of surface resources. On the other, it improves water quality through the processes occurring in the soil as the water crosses the vadose zone. The Barcelona (Catalonia, Spain) conurbation is suffering significant quantitative and qualitative groundwater disturbances. For this reason, the Sant Vicenç MAR system, consisting of a sedimentation pond and an infiltration pond, was constructed in 2009 as a strategic water management infrastructure. Compared with other MAR facilities, this infiltration pond has a reactive bed formed by organic compost and local material. The objective is to promote different redox states, allowing more, and more diverse, degradation of chemical compounds than in regular MAR systems. Previous studies at the site, based on physical and hydrochemical parameters, demonstrated that different pollutants were indeed degraded. However, to go a step further in understanding the biogeochemical processes and the related degradation processes occurring in the system, we studied the existing microbial communities. Molecular techniques were applied to water and soil samples in two different scenarios: the first when the system was fully operating, and the second when the system had not been operating for some months. We specifically compared microbial diversity and richness indexes and the cluster dendrograms obtained from the DGGE analyses made in each sampling campaign.
Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds
NASA Astrophysics Data System (ADS)
Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni
2012-09-01
Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request requires, in particular, the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
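The paper's own allocation algorithms are not reproduced here; as a minimal illustration of the kind of per-cluster decision involved, the sketch below applies a simple first-fit heuristic to place a transceiver chain's computing demands onto a cluster's processors (all names and figures are hypothetical):

```python
def first_fit(chain, processors):
    """Assign each SDR block (MOPS demand) to the first processor with
    spare capacity; returns the placement or None if the chain doesn't fit."""
    free = list(processors)            # remaining MOPS per processor
    placement = []
    for demand in chain:
        for p, cap in enumerate(free):
            if cap >= demand:
                free[p] -= demand
                placement.append(p)
                break
        else:
            return None                # reject the session: no capacity
    return placement

# Hypothetical transceiver chain demands and per-processor capacities.
print(first_fit([300, 200, 500, 100], [600, 600]))  # → [0, 0, 1, 0]
```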
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Minagata, Atsushi; Suzuoki, Yasuo
This paper discusses the influence of the mass installation of home co-generation systems (H-CGS) using polymer electrolyte fuel cells (PEFC) on the voltage profile of a power distribution system in a residential area. The influence of H-CGS is compared with that of photovoltaic power generation systems (PV systems). The operation pattern of H-CGS is assumed based on the electricity and hot-water demand observed in 10 households over a year. The main results are as follows. With clustered H-CGS, the voltage of each bus is higher by about 1-3% compared with the conventional system without any distributed generators. Because H-CGS tends to increase its output during the early evening, it helps recover the voltage drop during that period, resulting in smaller voltage variation in the distribution system throughout the day. Because of the small rated power output of about 1 kW, the influence of clustered H-CGS on the voltage profile is smaller than that of clustered PV systems. The highest voltage during the daytime is not as high as in a distribution system with clustered PV systems, even if reverse power flow from the H-CGS is allowed.
Two- and three-cluster decays of light nuclei within a hyperspherical harmonics approach
NASA Astrophysics Data System (ADS)
Vasilevsky, V. S.; Lashko, Yu. A.; Filippov, G. F.
2018-06-01
We consider a set of three-cluster systems (4He, 7Li, 7Be, 8Be, 10Be) within a microscopic model which involves hyperspherical harmonics to represent intercluster motion. We selected three-cluster systems which have at least one binary channel. Our aim is to study whether hyperspherical harmonics are able, and under what conditions, to describe two-body channel(s) (nondemocratic motion) or if they are suitable for describing the three-cluster continuum only (democratic motion). It is demonstrated that a rather restricted number of hyperspherical harmonics allows us to describe bound states and scattering states in the two-body continuum for a three-cluster system.
Construction and application of Red5 cluster based on OpenStack
NASA Astrophysics Data System (ADS)
Wang, Jiaqing; Song, Jianxin
2017-08-01
With the application and development of cloud computing technology in various fields, the resource utilization rate of data centers has improved markedly, and systems built on cloud computing platforms have gained in scalability and stability. Deployed in the traditional way, Red5 clusters suffer from low resource utilization and poor system stability. This paper leverages the efficient resource allocation of cloud computing to build a Red5 server cluster based on OpenStack, to which multimedia applications can be published. The system achieves flexible provisioning of computing resources and also greatly improves the stability and service efficiency of the cluster.
Zhao, Yan; Shang, Jin-cheng; Chen, Chong; Wu, He-nan
2008-04-01
Reasonable structure, adaptive patterns, and effective regulation of the society, economy, and environment subsystems should be taken into account in order to achieve harmonious development of an urban eco-industrial system. In the present work, we simulated and evaluated a redesigned eco-industrial system in the Changchun Economic and Technological Development Zone (CCETDZ) using system dynamics and grey cluster methods. Four typical development strategies were simulated for 2005-2020 via standard system dynamics models. Furthermore, the analytic hierarchy process and grey clustering allowed for evaluation of the eco-industrial system and optimization of the scenarios. Our dynamic simulation and statistical analysis revealed that: (1) CCETDZ would follow different development scenarios under different strategies. The total population in scenario 2 grew most rapidly, reaching 3.28 × 10^5 in 2020 and exceeding the population expected in its long-term plan, and the GDP differences among the four scenarios would amount to 6.41 × 10^10 RMB. On the other hand, environmental pollution would become serious as the economy grows; as a restricting factor, the water resource would increase or decrease depending on the selected strategy. (2) The fourth strategy would be the most efficient, meaning that the most efficient development of CCETDZ requires taking scientific and technological progress, environmental protection, and economic growth into account simultaneously. (3) Proactive environmental protection measures, such as cleaner production, green manufacturing, product life-cycle management, and environmentally friendly industries, should be given the same importance as economic development in CCETDZ during 2005-2020.
Fifth Congress of Industrial Cell Technology 2014.
Rasch, Anja
2015-01-01
The highly specialized and informative Fifth Congress of Industrial Cell Technology took place in Luebeck, close to Hamburg, on 11-12 September 2014. It was organized by the Fraunhofer Institution for Marine Biotechnology (EMB), Luebeck and supported by the cluster agency Life Science Nord Management GmbH as well as the Luebeck Chamber of Industry and Commerce. The central aim of the congress was to promote the name-giving platform applications of industrial cell technologies, in other words, the development of complex cell culture systems, analyzing technologies, innovative instruments and materials, etc. This year's sessions were: smart cell culture, bioreactor systems and cell goods including 3D bioprinting. This article highlights selected presentations of the congress.
Mixing HTC and HPC Workloads with HTCondor and Slurm
NASA Astrophysics Data System (ADS)
Hollowell, C.; Barnett, J.; Caramarcu, C.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, A.
2017-10-01
Traditionally, the RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has only maintained High Throughput Computing (HTC) resources for our HEP/NP user community. We’ve been using HTCondor as our batch system for many years, as this software is particularly well suited for managing HTC processor farm resources. Recently, the RACF has also begun to design/administrate some High Performance Computing (HPC) systems for a multidisciplinary user community at BNL. In this paper, we’ll discuss our experiences using HTCondor and Slurm in an HPC context, and our facility’s attempts to allow our HTC and HPC processing farms/clusters to make opportunistic use of each other’s computing resources.
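The contrast between the two workload types can be illustrated with minimal job descriptions for each scheduler. Both fragments below are generic sketches, not taken from the RACF configuration; all file names and resource figures are hypothetical.

```shell
# sweep.sub -- HTCondor submit description (HTC side: many independent
# single-core tasks, the pattern HTCondor is designed for):
#
#   universe     = vanilla
#   executable   = analyze.sh
#   arguments    = $(Process)
#   request_cpus = 1
#   queue 1000            # 1000 independent tasks, scheduled one by one
#
# mpi_job.sh -- Slurm batch script (HPC side: one tightly coupled
# multi-node job that must be co-scheduled):
#
#   #SBATCH --nodes=4
#   #SBATCH --ntasks-per-node=32
#   #SBATCH --time=02:00:00
#   srun ./mpi_app        # a single 128-rank parallel application
```

Opportunistic sharing, as discussed in the paper, amounts to letting idle nodes of one pool accept jobs of the other shape, which is why the scheduling models must interoperate.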
Ratinaud, Pierre; Andersson, Gerhard
2018-01-01
Background: When people with health conditions begin to manage their health issues, an important question is what exactly they do with the information they have obtained through various sources (eg, news media, social media, health professionals, friends, and family). The information they gather helps form their opinions and, to some degree, influences their attitudes toward managing their condition. Objective: This study aimed to understand how tinnitus is represented in US newspaper media and in Facebook pages (ie, social media) using text pattern analysis. Methods: This was a cross-sectional study based on secondary analyses of publicly available data. The 2 datasets (ie, text corpuses) analyzed in this study were generated from US newspaper media during 1980-2017 (downloaded from the US Major Dailies database by ProQuest) and from Facebook pages during 2010-2016. The text corpuses were analyzed with the Iramuteq software using cluster analysis and chi-square tests. Results: The newspaper dataset had 432 articles. The cluster analysis resulted in 5 clusters, named as follows: (1) brain stimulation (26.2%), (2) symptoms (13.5%), (3) coping (19.8%), (4) social support (24.2%), and (5) treatment innovation (16.4%). A time series analysis of the clusters indicated a change in the pattern of information presented in newspaper media during 1980-2017 (eg, more emphasis on cluster 5, focusing on treatment innovations). The Facebook dataset had 1569 texts. The cluster analysis resulted in 7 clusters, named: (1) diagnosis (21.9%), (2) cause (4.1%), (3) research and development (13.6%), (4) social support (18.8%), (5) challenges (11.1%), (6) symptoms (21.4%), and (7) coping (9.2%). A time series analysis of the clusters indicated no change in the information presented in Facebook pages on tinnitus during 2011-2016.
Conclusions: The study highlights the specific aspects of tinnitus that US newspaper media and Facebook pages focus on, as well as how these aspects change over time. These findings can help health care providers better understand the presuppositions that tinnitus patients may have. More importantly, they can help public health and health communication experts tailor health information about tinnitus to promote self-management and assist in appropriate treatment choices for those living with tinnitus. PMID:29739734
High-performance scientific computing in the cloud
NASA Astrophysics Data System (ADS)
Jorissen, Kevin; Vila, Fernando; Rehr, John
2011-03-01
Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.
Dynamical properties of globular clusters: Primordial or evolutionary?
NASA Astrophysics Data System (ADS)
Surdin, V. G.
1995-04-01
Some observable relations between globular cluster parameters appear as a result of the dynamical evolution of the cluster system. These relations are therefore inapplicable to studies of the origin of globular clusters.
Discovering Massive z > 1 Galaxy Clusters with Spitzer and SPTpol
NASA Astrophysics Data System (ADS)
Bleem, Lindsey; Brodwin, Mark; Ashby, Matthew; Stalder, Brian; Klein, Matthias; Gladders, Michael; Stanford, Spencer; Canning, Rebecca
2018-05-01
We propose to obtain Spitzer/IRAC imaging of 50 high-redshift galaxy cluster candidates derived from two newly completed SZ cluster surveys by the South Pole Telescope. Clusters from the deep SPTpol 500-square-degree main survey will extend high-redshift SZ cluster science to lower masses (median M500 ≈ 2 × 10^14 M_sun), while systems drawn from the wider 2500-square-degree SPTpol Extended Cluster Survey are some of the rarest, most massive high-z clusters in the observable universe. The proposed small 10 h program will enable (1) confirmation of these candidates as high-redshift clusters, (2) measurements of the cluster redshifts (σ_z/(1+z) ≈ 0.03), and (3) estimates of the stellar masses of the brightest cluster members. These observations will yield exciting and timely targets for the James Webb Space Telescope and, combined with lower-z systems, will both extend cluster tests of dark energy to z > 1 and enable studies of galaxy evolution in the richest environments for a mass-limited cluster sample from 0
Exact combinatorial approach to finite coagulating systems
NASA Astrophysics Data System (ADS)
Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr
2018-02-01
This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete and the binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. Then this probability is used to calculate the time-dependent distribution for the number of clusters of a given size, the average number of such clusters, and that average's standard deviation. The correctness of our general expressions is proved based on the (analytical and numerical) results obtained for systems with the constant kernel. In addition, the results obtained are compared with the results arising from the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion on the extensibility to other systems of the approach presented herein, emphasizing the issue of arbitrary initial conditions.
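The constant-kernel case used above to validate the general expressions is easy to cross-check numerically. The sketch below is not the authors' combinatorial derivation, only an independent Monte Carlo under the same assumptions: discrete time, exactly one binary merger per step, monodisperse initial conditions, and (constant kernel) all cluster pairs equally likely to merge.

```python
import random
from collections import Counter

def simulate_constant_kernel(n_monomers, n_steps, rng=None):
    """One realization of discrete-time binary aggregation with a constant
    kernel: at every step one uniformly chosen pair of clusters merges."""
    rng = rng or random.Random(0)
    clusters = [1] * n_monomers            # monodisperse initial condition
    for _ in range(n_steps):
        i, j = rng.sample(range(len(clusters)), 2)
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters

def mean_size_distribution(n_monomers, n_steps, n_runs=200):
    """Monte Carlo estimate of the mean number of clusters of each size
    after n_steps mergers, averaged over n_runs realizations."""
    totals = Counter()
    for seed in range(n_runs):
        for size in simulate_constant_kernel(n_monomers, n_steps,
                                             random.Random(seed)):
            totals[size] += 1
    return {s: totals[s] / n_runs for s in sorted(totals)}
```

Averaging such realizations gives the time-dependent distribution for the number of clusters of a given size, the quantity the exact expressions describe; the standard deviation across runs can be accumulated the same way.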
Qu, Yi; Lu, Ming
2018-01-01
Rapid urbanization and agricultural development have resulted in the degradation of ecosystems, while also negatively impacting ecosystem services (ES) and urban sustainability. Identifying conservation priorities for ES and applying reasonable management strategies have been found to be effective methods for mitigating this phenomenon. The purpose of this study is to propose a comprehensive framework for identifying ES conservation priorities and associated management strategies for these planning areas. First, we incorporated 10 ES indicators within a systematic conservation planning (SCP) methodology in order to identify ES conservation priorities with high irreplaceability values based on conservation target goals associated with the potential distribution of ES indicators. Next, we assessed the efficiency of the ES conservation priorities for meeting the designated conservation target goals. Finally, ES conservation priorities were clustered into groups using a K-means clustering analysis in an effort to identify the dominant ES per location before formulating management strategies. We effectively identified 12 ES priorities to best represent conservation target goals for the ES indicators. These 12 priorities had a total areal coverage of 13,364 km², representing 25.16% of the study area. The 12 priorities were further clustered into five significantly different groups (p-values between groups < 0.05), which helped to refine management strategies formulated to best enhance ES across the study area. The proposed method allows conservation and management plans to easily adapt to a wide variety of quantitative ES target goals within urban and agricultural areas, thereby preventing urban and agricultural sprawl and guiding sustainable urban development.
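The K-means step described above can be sketched in a few lines. The code below is a generic Lloyd's-algorithm implementation with a deterministic farthest-point initialization, applied to a hypothetical matrix whose rows are planning units and whose columns are ES indicator values; it is not the study's actual pipeline, and reading the dominant ES off the largest centroid component is an assumption for illustration.

```python
import numpy as np

def _farthest_point_init(X, k):
    """Deterministic initialization: start from the first row, then
    repeatedly add the point farthest from all chosen centroids."""
    centroids = [X[0]]
    for _ in range(1, k):
        d = np.linalg.norm(X[:, None, :] - np.array(centroids)[None, :, :],
                           axis=2).min(axis=1)
        centroids.append(X[d.argmax()])
    return np.array(centroids)

def kmeans(X, k, n_iter=100):
    """Plain Lloyd's algorithm: returns (labels, centroids)."""
    centroids = _farthest_point_init(X, k)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each planning unit to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned units
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

Given `labels, centroids = kmeans(X, 5)` for a units-by-indicators matrix `X`, `centroids.argmax(axis=1)` would then index the dominant ES indicator of each group.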
Adding Data Management Services to Parallel File Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, Scott
2015-03-04
The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstraction and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in (unstructured) file systems can only optimize to the extent that file system interfaces permit, and the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format, via declarative queries and updates over views of the underlying files, while retaining the inherent performance of file system data storage.
Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file-based ecosystem; (3) common optimizations, e.g., indexing and caching, are readily supported across several file formats, avoiding duplication of effort; and (4) performance improves significantly, as data processing is integrated more tightly with data storage. Our key contributions are: SciHadoop, which revisits MapReduce assumptions by taking advantage of the semantics of structured data while preserving MapReduce’s failure and resource management; DataMods, which extends common abstractions of parallel file systems to make them programmable, so that they can natively support a variety of data models and be hooked into emerging distributed runtimes such as Stanford’s Legion; and Miso, which combines Hadoop and relational data warehousing to minimize time to insight, taking into account the overhead of ingesting data into the warehouse.
Tian, Maoyi; Ajay, Vamadevan S.; Dunzhu, Danzeng; Hameed, Safraj S.; Li, Xian; Liu, Zhong; Li, Cong; Chen, Hao; Cho, KaWing; Li, Ruilai; Zhao, Xingshan; Jindal, Devraj; Rawal, Ishita; Ali, Mohammed K.; Peterson, Eric D.; Ji, Jiachao; Amarchand, Ritvik; Krishnan, Anand; Tandon, Nikhil; Xu, Li-Qun; Wu, Yangfeng; Prabhakaran, Dorairaj; Yan, Lijing L.
2015-01-01
Background: In rural areas of China and India, the cardiovascular disease burden is high but economic and healthcare resources are limited. This study aims to develop and evaluate a simplified cardiovascular management program (SimCard) delivered by community health workers (CHWs) with the aid of a smartphone-based electronic decision support system. Methods and Results: The SimCard study was a yearlong cluster-randomized controlled trial conducted in 47 villages (27 in China and 20 in India). A total of 2,086 high-cardiovascular-risk individuals (aged 40 years or older with a self-reported history of coronary heart disease, stroke, diabetes, and/or measured systolic blood pressure ≥160 mmHg) were recruited. Participants in the intervention villages were managed by CHWs through an Android-powered “app” on a monthly basis, focusing on the use of two medications and two lifestyle modifications. Compared with the control group, the intervention group had a 25.5% (P<0.001) higher net increase in the primary outcome, the proportion of patient-reported anti-hypertensive medication use pre- and post-intervention. There were also significant differences in certain secondary outcomes: aspirin use (net difference 17.1%, P<0.001) and systolic blood pressure (−2.7 mmHg, P=0.04). However, no significant changes were observed in the lifestyle factors. The intervention was culturally tailored, and country-specific results revealed important differences between the regions. Conclusions: The results indicate that the simplified cardiovascular management program improved the quality of primary care and clinical outcomes in resource-poor settings in China and India. Larger trials in more settings are needed to ascertain potential impacts on mortality and morbidity. Clinical Trial Registration: clinicaltrials.gov, identifier NCT01503814. PMID:26187183
Nakanishi, Miharu; Endo, Kaori; Hirooka, Kayo; Granvik, Eva; Minthon, Lennart; Nägga, Katarina; Nishida, Atsushi
2018-03-01
Little is known about the effectiveness of psychosocial behaviour management programmes for home-dwelling people with dementia. We developed a Behaviour Analytics & Support Enhancement (BASE) programme for care managers and professional caregivers of home care services in Japan, and investigated its effects on the challenging behaviour of home-dwelling people with dementia. A cluster-randomized controlled trial was conducted with home care providers from 3 districts in Tokyo. Each provider recruited persons with dementia aged 65 years or older receiving home care into the BASE programme in August 2016. An online monitoring and assessment system was introduced to the intervention group for repeated measures of challenging behaviour using the total score of the Neuropsychiatric Inventory. Care professionals in both the intervention and control groups evaluated the challenging behaviour of persons with dementia at baseline (September 2016) and follow-up (February 2017). A majority of the persons with dementia had Alzheimer disease (59.3%). One hundred forty-one persons with dementia were included in the intervention group and 142 in the control group. Multilevel modelling revealed a significant reduction in challenging behaviour in the intervention group after 6 months (mean score, 18.3 to 11.2) compared with the control group (11.6 to 10.8; P < .05). Implementation of the BASE programme thus reduced the challenging behaviour of home-dwelling people with dementia. Future research should examine the long-term effects of behaviour management programmes on behaviour, nursing home placement, and hospital admission of home-dwelling people with dementia. Copyright © 2017 John Wiley & Sons, Ltd.
Preliminary Design of Industrial Symbiosis of Smes Using Material Flow Cost Accounting (MFCA) Method
NASA Astrophysics Data System (ADS)
Astuti, Rahayu Siwi Dwi; Astuti, Arieyanti Dwi; Hadiyanto
2018-02-01
Industrial symbiosis is a collaboration of several industries to share necessities such as materials, energy, technology, and waste management. As a part of industrial ecology, this system in principle attempts to emulate an ecosystem, in which the waste of one organism is used by another, so that there is no waste in nature. The system is thus an effort to optimize resources (material and energy) and minimize waste. Considerable material and energy flows occur among the industries in a symbiosis. Material and energy in an industry are known as cost carriers, so flow analysis in this system can be conducted from the perspectives of material, energy, and cost, an approach called material flow cost accounting (MFCA), which is an economic and ecological appraisal method. Previous research has shown that MFCA implementation can be used to evaluate an industry's environment-related efficiency and to support planning, business control, and decision making. Moreover, MFCA has been extended to assess the environmental performance of an SME cluster or of industrial symbiosis within an SME cluster, and even to produce a preliminary design of an industrial symbiosis based on a major industry. This paper describes the use of MFCA to assess the performance of SME industrial symbiosis and to improve that performance.
Kinematics of the Doped Quantum Vortices in Superfluid Helium Droplets
NASA Astrophysics Data System (ADS)
Bernando, Charles; Vilesov, Andrey F.
2018-05-01
Recent observation of quantum vortices in superfluid 4He droplets a few hundred nanometers in diameter involved decoration of the vortex cores by clusters containing large numbers of Xe atoms, which served as X-ray contrast agents. Here, we report on the kinematics of the combined vortex-cluster system in a cylinder and in a sphere. Equilibrium states, characterized by the total angular momentum, L, were found by minimizing the total energy, E, which is the sum of the kinetic energy of the liquid due to the vortex and to the orbiting Xe clusters, and the solvation energy of the cluster in the droplet. Calculations show that, at small cluster mass, the equilibrium displacement of the system from the rotation axis is close to that of the bare vortex. However, upon decrease of L beyond a certain critical value, which is larger for heavier clusters, the displacement bifurcates toward the surface region, where the motion of the system is governed by the clusters. In addition, at even smaller L, bare orbiting clusters become energetically favorable, opening the possibility for the vortex to detach from the cluster and annihilate at the droplet's surface.
Identification of piecewise affine systems based on fuzzy PCA-guided robust clustering technique
NASA Astrophysics Data System (ADS)
Khanmirza, Esmaeel; Nazarahari, Milad; Mousavi, Alireza
2016-12-01
Hybrid systems are a class of dynamical systems whose behavior arises from the interaction between discrete and continuous dynamics. Since a general method for the analysis of hybrid systems is not available, some researchers have focused on specific types of hybrid systems. Piecewise affine (PWA) systems are one such subset. The identification of PWA systems includes the estimation of the parameters of the affine subsystems and of the coefficients of the hyperplanes defining the partition of the state-input domain. In this paper, we propose a PWA identification approach based on a modified clustering technique. By using a fuzzy PCA-guided robust k-means clustering algorithm along with neighborhood outlier detection, the two main drawbacks of the well-known clustering algorithms, i.e., poor initialization and the presence of outliers, are eliminated. Furthermore, this modified clustering technique enables us to determine the number of subsystems without any prior knowledge of the system. In addition, exploiting the structure of the state-input domain, that is, considering the time sequence of input-output pairs, provides a more efficient clustering algorithm, which is the other novelty of this work. Finally, the proposed algorithm has been evaluated by parameter identification of an IGV servo actuator. Simulations together with experimental analysis have demonstrated the effectiveness of the proposed method.
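The core PWA identification loop, alternating between assigning samples to affine submodels and refitting each submodel, can be sketched as follows. This is a deliberately naive scheme, not the paper's fuzzy PCA-guided robust clustering: it uses a simple median split of the regressor domain for initialization, which works only on clean, well-separated data and illustrates exactly the poor-initialization problem the paper's method is designed to overcome. Function and variable names are illustrative.

```python
import numpy as np

def fit_pwa(x, y, n_modes=2, n_iter=50):
    """Toy 1-D PWA identification by alternating assignment and least
    squares.  Each submodel is affine, y ~ a*x + b; theta[m] = [a, b]."""
    Phi = np.column_stack([x, np.ones_like(x)])      # affine regressor [x, 1]
    # naive initialization: split the regressor domain at its median
    labels = (x > np.median(x)).astype(int) * (n_modes - 1)
    for _ in range(n_iter):
        # refit each affine submodel on its currently assigned samples
        theta = np.array([np.linalg.lstsq(Phi[labels == m], y[labels == m],
                                          rcond=None)[0]
                          if np.any(labels == m) else np.zeros(2)
                          for m in range(n_modes)])
        # reassign every sample to the submodel with the smallest residual
        resid = np.abs(Phi @ theta.T - y[:, None])
        new_labels = resid.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return theta, labels
```

On noiseless data with two well-separated slopes this recovers the affine parameters exactly; with outliers or an unlucky initialization it can collapse into a single mode, which motivates the robust, PCA-guided clustering of the paper.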
The Complexities of Implementing Cluster Supply Chain - Case Study of JCH
NASA Astrophysics Data System (ADS)
Xue, Xiao; Zhang, Jibiao; Wang, Yang
As a new type of management pattern, the "cluster supply chain" (CSC) can help SMEs face global challenges through various kinds of collaboration. However, a major challenge in implementing CSC is the gap between theory and practice in the field. In an effort to provide a better understanding of this emerging phenomenon, this paper presents the implementation process of CSC at JingCheng Mechanical & Electrical Holding Co., Ltd. (JCH) as a case study. The case study of JCH points to the key question in the practice of cluster supply chains: how do small firms actually use a cluster supply chain? Only after this question is clarified can the construction and operation of a cluster supply chain deliver the successful results it should.