Low cost, high performance processing of single particle cryo-electron microscopy data in the cloud.
Cianfrocco, Michael A; Leschziner, Andres E
2015-05-08
The advent of a new generation of electron microscopes and direct electron detectors has realized the potential of single particle cryo-electron microscopy (cryo-EM) as a technique to generate high-resolution structures. Calculating these structures requires high performance computing clusters, a resource that may be limiting to many likely cryo-EM users. To address this limitation and facilitate the spread of cryo-EM, we developed a publicly available 'off-the-shelf' computing environment on Amazon's elastic cloud computing infrastructure. This environment provides users with single particle cryo-EM software packages and the ability to create computing clusters with 16-480+ CPUs. We tested our computing environment using a publicly available 80S yeast ribosome dataset and estimate that laboratories could determine high-resolution cryo-EM structures for $50 to $1500 per structure within a timeframe comparable to local clusters. Our analysis shows that Amazon's cloud computing environment may offer a viable computing environment for cryo-EM. DOI: http://dx.doi.org/10.7554/eLife.06664.001 PMID:25955969
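As a rough, back-of-the-envelope illustration of how a per-structure figure in the $50 to $1500 range can arise, the sketch below multiplies node count, hourly instance price, and wall-clock hours. The instance price and the two runtimes are assumptions chosen only to bracket the quoted range; they are not values from the paper.

```python
# Hypothetical cost arithmetic: cost = nodes * price_per_node_hour * hours.
# The $1.68/hour price and both runtimes are assumed for illustration only.
def cluster_cost(nodes, price_per_node_hour, hours):
    return nodes * price_per_node_hour * hours

print(cluster_cost(nodes=2,  price_per_node_hour=1.68, hours=15))   # ~ $50  (small job)
print(cluster_cost(nodes=30, price_per_node_hour=1.68, hours=30))   # ~ $1500 (large job)
```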
Visualization of unsteady computational fluid dynamics
NASA Astrophysics Data System (ADS)
Haimes, Robert
1994-11-01
A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamic (CFD) results is presented. This environment requires a supercomputer; massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by concurrently working on the same task) also provide the required computational bandwidth for CFD calculations of transient problems. Clusters of reduced instruction set computer (RISC) workstations are a recent advent based on the low cost and high performance that workstation vendors provide. With the proper software, such a cluster can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user manuals for the parallel version of Visual3 (pV3), revision 1.00, make up the bulk of this report.
Grid Computing Environment using a Beowulf Cluster
NASA Astrophysics Data System (ADS)
Alanis, Fransisco; Mahmood, Akhtar
2003-10-01
Custom-made Beowulf clusters built from PCs are currently replacing expensive supercomputers for complex scientific computations. At the University of Texas - Pan American, we built an 8 Gflops Beowulf cluster for HEP research using RedHat Linux 7.3 and the LAM-MPI middleware. We will describe how we built and configured our cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes that were compiled in C on the cluster using the LAM-XMPI graphical user environment. We will demonstrate a "simple" prototype grid environment, in which we will submit and run parallel jobs remotely across multiple cluster nodes over the internet from the presentation room at Texas Tech University. The Sphinx Beowulf Cluster will be used for Monte Carlo grid test-bed studies for the LHC-ATLAS high energy physics experiment. Grid is a new IT concept for the next generation of the "Super Internet" for high-performance computing. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.
Nagaoka, Tomoaki; Watanabe, Soichi
2012-01-01
Electromagnetic simulation with an anatomically realistic computational human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and enable large-scale computing with the computational human model, we adapted three-dimensional FDTD code to a multi-GPU cluster environment with Compute Unified Device Architecture and Message Passing Interface. Our multi-GPU cluster system consists of three nodes, with seven GPU boards (NVIDIA Tesla C2070) mounted on each node. We examined the performance of the FDTD calculation in this multi-GPU cluster environment. We confirmed that the FDTD calculation on the multi-GPU cluster is faster than on a single multi-GPU workstation, and we also found that the GPU cluster system calculates faster than a vector supercomputer. In addition, our GPU cluster system allowed us to perform large-scale FDTD calculations because we were able to use more than 100 GB of GPU memory.
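The sketch below illustrates only the generic domain-decomposition and ghost-slab (halo) exchange pattern that distributed FDTD codes like the one above rely on, written with mpi4py and NumPy rather than the authors' CUDA/MPI code. The grid size and the stand-in stencil update are assumptions, and the per-GPU kernels are not shown.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx, ny, nz = 64, 64, 120                   # assumed global grid size
local_nz = nz // size                      # slab thickness per rank (assume it divides evenly)
field = np.zeros((local_nz + 2, nx, ny))   # +2 ghost slabs along the decomposed axis

for step in range(10):
    lower, upper = rank - 1, rank + 1
    # exchange ghost slabs with neighbours; Sendrecv keeps the pattern deadlock-free
    if upper < size:
        comm.Sendrecv(field[-2], dest=upper, recvbuf=field[-1], source=upper)
    if lower >= 0:
        comm.Sendrecv(field[1], dest=lower, recvbuf=field[0], source=lower)
    # stand-in stencil update on the interior slabs (a real code would apply the Yee
    # FDTD update here, typically inside a CUDA kernel on each GPU)
    field[1:-1] += 0.1 * (field[2:] + field[:-2] - 2.0 * field[1:-1])
```

Run with, for example, `mpiexec -n 4 python fdtd_halo_sketch.py` (the script name is illustrative).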
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment of Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup plateaus.
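To make the scaling discussion concrete, here is a small Amdahl's-law calculation (not from the paper) showing why near-linear speedup at 64 cores eventually flattens once the non-parallelizable and communication share of the runtime dominates. The parallel fraction f below is an assumed illustrative value, not a measurement from the CESM runs.

```python
# Amdahl's law: speedup on p cores for a code whose fraction f parallelizes perfectly.
def amdahl_speedup(p, f=0.99):
    return 1.0 / ((1.0 - f) + f / p)

for cores in (16, 32, 64, 128, 256):
    print(f"{cores:4d} cores -> speedup {amdahl_speedup(cores):5.1f}")
```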
The `TTIME' Package: Performance Evaluation in a Cluster Computing Environment
NASA Astrophysics Data System (ADS)
Howe, Marico; Berleant, Daniel; Everett, Albert
2011-06-01
The objective of translating developmental event time across mammalian species is to gain an understanding of the timing of human developmental events based on known time of those events in animals. The potential benefits include improvements to diagnostic and intervention capabilities. The CRAN `ttime' package provides the functionality to infer unknown event timings and investigate phylogenetic proximity utilizing hierarchical clustering of both known and predicted event timings. The original generic mammalian model included nine eutherian mammals: Felis domestica (cat), Mustela putorius furo (ferret), Mesocricetus auratus (hamster), Macaca mulatta (monkey), Homo sapiens (humans), Mus musculus (mouse), Oryctolagus cuniculus (rabbit), Rattus norvegicus (rat), and Acomys cahirinus (spiny mouse). However, the data for this model is expected to grow as more data about developmental events is identified and incorporated into the analysis. Performance evaluation of the `ttime' package across a cluster computing environment versus a comparative analysis in a serial computing environment provides an important computational performance assessment. A theoretical analysis is the first stage of a process in which the second stage, if justified by the theoretical analysis, is to investigate an actual implementation of the `ttime' package in a cluster computing environment and to understand the parallelization process that underlies implementation.
Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity
NASA Technical Reports Server (NTRS)
Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy
2005-01-01
Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capabilities for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision to provide self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management and its evolution to include reflex reactions via pulse monitoring.
A comparison of queueing, cluster and distributed computing systems
NASA Technical Reports Server (NTRS)
Kaplan, Joseph A.; Nelson, Michael L.
1993-01-01
Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster; cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing System (DQS), Condor, LoadLeveler, Load Balancer, Load Sharing Facility (LSF, formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.
Resource Provisioning in SLA-Based Cluster Computing
NASA Astrophysics Data System (ADS)
Xiong, Kaiqi; Suh, Sang
Cluster computing is excellent for parallel computation and has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of service (QoS) requirements and a fee agreed between a customer and an application service provider; it plays an important role in e-business applications. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of the cluster computing resources used by an application service provider for an e-business application, which often requires parallel computation for high service performance, availability, and reliability, while satisfying the QoS and fee negotiated between the customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.
GATE Monte Carlo simulation in a cloud computing environment
NASA Astrophysics Data System (ADS)
Rowedder, Blake Austin
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to drop in price and become more accessible, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
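The abstract describes runtime versus cluster size as following an inverse power model. Below is a minimal sketch of fitting T(n) = a * n**b by least squares in log-log space; only the 1-node (53 min) and 20-node (3.11 min) runtimes come from the abstract, and the intermediate points are assumed for illustration.

```python
import numpy as np

nodes   = np.array([1, 2, 5, 10, 20])               # cluster sizes
runtime = np.array([53.0, 27.5, 11.8, 6.2, 3.11])   # minutes; middle values are assumed
# fit log(T) = b*log(n) + log(a); b close to -1 indicates near-ideal scaling
b, log_a = np.polyfit(np.log(nodes), np.log(runtime), 1)
print(f"T(n) ~= {np.exp(log_a):.1f} * n**({b:.2f})")
```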
Balancing computation and communication power in power constrained clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piga, Leonardo; Paul, Indrani; Huang, Wei
Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
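A schematic sketch of the decision logic the abstract describes: gate nodes that predict a long wait, then hand the reclaimed power budget to the nodes still working. The threshold, the wattage figures, and the dictionary layout are all hypothetical; this is not the patented implementation.

```python
WAIT_THRESHOLD_S = 5.0       # hypothetical threshold for entering the low-power state
NODE_IDLE_POWER_W = 40.0     # hypothetical power reclaimed by gating one node

def rebalance(nodes, node_power_cap_w):
    """nodes: list of dicts like {'id': 0, 'predicted_wait_s': 7.2, 'gated': False}.
    Gate nodes whose predicted wait exceeds the threshold and spread the reclaimed
    power budget over the nodes that remain active."""
    freed = 0.0
    for n in nodes:
        if not n["gated"] and n["predicted_wait_s"] > WAIT_THRESHOLD_S:
            n["gated"] = True
            freed += NODE_IDLE_POWER_W
    active = [n for n in nodes if not n["gated"]]
    boost = freed / len(active) if active else 0.0
    return {n["id"]: node_power_cap_w + boost for n in active}

nodes = [{"id": 0, "predicted_wait_s": 0.5, "gated": False},
         {"id": 1, "predicted_wait_s": 9.0, "gated": False},
         {"id": 2, "predicted_wait_s": 0.1, "gated": False}]
print(rebalance(nodes, node_power_cap_w=120.0))   # node 1 gated; nodes 0 and 2 get +20 W each
```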
Performance Comparison of Mainframe, Workstations, Clusters, and Desktop Computers
NASA Technical Reports Server (NTRS)
Farley, Douglas L.
2005-01-01
A performance evaluation of a variety of computers frequently found in a scientific or engineering research environment was conducted using synthetic and application program benchmarks. From a performance perspective, emerging commodity processors have superior performance relative to legacy mainframe computers. In many cases, the PC clusters exhibited performance comparable to traditional mainframe hardware when 8-12 processors were used. The main advantage of the PC clusters was their cost. Regardless of whether the clusters were built from new computers or created from retired computers, their performance-to-cost ratio was superior to that of the legacy mainframe computers. Finally, the typical annual maintenance cost of legacy mainframe computers is several times the cost of new equipment such as multiprocessor PC workstations. The savings from eliminating the annual maintenance fee on legacy hardware can result in a yearly increase in total computational capability for an organization.
Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason
2010-01-01
Large-scale in-silico screening is a necessary part of drug discovery, and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environment characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables across a Grid environment composed of different clusters, with and without virtualization. The uniform computing environment provided by virtual machines eliminated the inconsistent DOCK VS results caused by heterogeneous clusters; however, the execution time for the DOCK VS increased. In our particular experiments, the overhead costs averaged 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.
Hybrid cloud and cluster computing paradigms for life science applications.
Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey
2010-12-21
Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel, data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system, Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce, and Twister in these different environments. PMID:21210982
Automating a Massive Online Course with Cluster Computing
ERIC Educational Resources Information Center
Haas, Timothy C.
2016-01-01
Before massive numbers of students can take online courses for college credit, the challenges of providing tutoring support, answers to student-posed questions, and the control of cheating will need to be addressed. These challenges are taken up here by developing an online course delivery system that runs in a cluster computing environment and is…
Parallel processing for scientific computations
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.
1995-01-01
The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.
Holmes, Sean T; Iuliucci, Robbie J; Mueller, Karl T; Dybowski, Cecil
2015-11-10
Calculations of the principal components of magnetic-shielding tensors in crystalline solids require the inclusion of the effects of lattice structure on the local electronic environment to obtain significant agreement with experimental NMR measurements. We assess periodic (GIPAW) and GIAO/symmetry-adapted cluster (SAC) models for computing magnetic-shielding tensors by calculations on a test set containing 72 insulating molecular solids, with a total of 393 principal components of chemical-shift tensors from 13C, 15N, 19F, and 31P sites. When clusters are carefully designed to represent the local solid-state environment and when periodic calculations include sufficient variability, both methods predict magnetic-shielding tensors that agree well with experimental chemical-shift values, demonstrating the correspondence of the two computational techniques. At the basis-set limit, we find that the small differences in the computed values have no statistical significance for three of the four nuclides considered. Subsequently, we explore the effects of additional DFT methods available only with the GIAO/cluster approach, particularly the use of hybrid-GGA functionals, meta-GGA functionals, and hybrid meta-GGA functionals that demonstrate improved agreement in calculations on symmetry-adapted clusters. We demonstrate that meta-GGA functionals improve computed NMR parameters over those obtained by GGA functionals in all cases, and that hybrid functionals improve computed results over the respective pure DFT functional for all nuclides except 15N.
Oh, Jeongsu; Choi, Chi-Hwan; Park, Min-Kyu; Kim, Byung Kwon; Hwang, Kyuin; Lee, Sang-Heon; Hong, Soon Gyu; Nasir, Arshan; Cho, Wan-Sup; Kim, Kyung Mo
2016-01-01
High-throughput sequencing can produce hundreds of thousands of 16S rRNA sequence reads corresponding to different organisms present in the environmental samples. Typically, analysis of microbial diversity in bioinformatics starts from pre-processing followed by clustering 16S rRNA reads into relatively fewer operational taxonomic units (OTUs). The OTUs are reliable indicators of microbial diversity and greatly accelerate the downstream analysis time. However, existing hierarchical clustering algorithms that are generally more accurate than greedy heuristic algorithms struggle with large sequence datasets. To keep pace with the rapid rise in sequencing data, we present CLUSTOM-CLOUD, which is the first distributed sequence clustering program based on In-Memory Data Grid (IMDG) technology, a distributed data structure that stores all data in the main memory of multiple computing nodes. The IMDG technology helps CLUSTOM-CLOUD to enhance both its capability of handling larger datasets and its computational scalability better than its ancestor, CLUSTOM, while maintaining high accuracy. The clustering speed of CLUSTOM-CLOUD was evaluated on published 16S rRNA human microbiome sequence datasets using a small laboratory cluster (10 nodes) and the Amazon EC2 cloud-computing environment. Under the laboratory environment, it required only ~3 hours to process a dataset of 200 K reads, regardless of the complexity of the human microbiome data. In turn, one million reads were processed in approximately 20, 14, and 11 hours when utilizing 20, 30, and 40 nodes on the Amazon EC2 cloud-computing environment. The running time evaluation indicates that CLUSTOM-CLOUD can handle much larger sequence datasets than CLUSTOM and is also a scalable distributed processing system. A comparative accuracy test using 16S rRNA pyrosequences of a mock community shows that CLUSTOM-CLOUD achieves higher accuracy than DOTUR, mothur, ESPRIT-Tree, UCLUST and Swarm. CLUSTOM-CLOUD is written in JAVA and is freely available at http://clustomcloud.kopri.re.kr. PMID:26954507
Development of small scale cluster computer for numerical analysis
NASA Astrophysics Data System (ADS)
Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.
2017-09-01
In this study, two personal computers were successfully networked together to form a small scale cluster. Each processor involved is a multicore processor with four cores, giving the cluster eight processors in total. The cluster runs the Ubuntu 14.04 LINUX environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, using a simple MPI Hello program written in C. Additionally, a performance test was done to show that the cluster's computing performance is much better than that of a single-CPU computer. In this performance test, four runs were done executing the same code on a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases; the calculation time is roughly halved when the number of processors is doubled. To conclude, we successfully developed a small scale cluster computer using common hardware that is capable of higher computing power than a single-CPU computer, which can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
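For concreteness, here is a Python (mpi4py) analogue of the two tests described above; the original work used C with MPICH2, so this is only an illustrative sketch of the same pattern: a "hello" message-passing check plus a timed parallel reduction whose runtime should shrink as ranks are added (the problem size is assumed).

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# communication test: every rank reports in
print(f"Hello from rank {rank} of {size}")

# performance test: each rank sums its share of a large array, then results are combined
n = 10_000_000                                   # assumed problem size
chunk = np.arange(rank, n, size, dtype=np.float64)
t0 = MPI.Wtime()
total = comm.reduce(chunk.sum(), op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum={total:.3e} computed in {MPI.Wtime() - t0:.4f} s on {size} ranks")
```

Launched with, for example, `mpiexec -n 8 python cluster_test.py`, each doubling of ranks should roughly halve the summation time, mirroring the behaviour reported above.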
NASA Astrophysics Data System (ADS)
Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing
2018-01-01
For fast and more effective implementation of tracking multiple targets in a cluttered environment, we propose a multiple targets tracking (MTT) algorithm called maximum entropy fuzzy c-means clustering joint probabilistic data association that combines fuzzy c-means clustering and the joint probabilistic data association (PDA) algorithm. The algorithm uses the membership value to express the probability of the target originating from measurement. The membership value is obtained through fuzzy c-means clustering objective function optimized by the maximum entropy principle. When considering the effect of the public measurement, we use a correction factor to adjust the association probability matrix to estimate the state of the target. As this algorithm avoids confirmation matrix splitting, it can solve the high computational load problem of the joint PDA algorithm. The results of simulations and analysis conducted for tracking neighbor parallel targets and cross targets in a different density cluttered environment show that the proposed algorithm can realize MTT quickly and efficiently in a cluttered environment. Further, the performance of the proposed algorithm remains constant with increasing process noise variance. The proposed algorithm has the advantages of efficiency and low computational load, which can ensure optimum performance when tracking multiple targets in a dense cluttered environment.
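For reference, the standard fuzzy c-means membership update that the proposed tracker builds on is sketched below; the maximum-entropy objective, the correction factor for shared measurements, and the JPDA coupling from the paper are not reproduced, and the sample points are arbitrary.

```python
import numpy as np

def fcm_memberships(x, centers, m=2.0, eps=1e-12):
    """Standard fuzzy c-means memberships u_{ij} for measurements x (N, d)
    and cluster centres (C, d); m > 1 is the fuzzifier."""
    # pairwise distances d_{ij} between centre i and measurement j
    d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2) + eps   # (C, N)
    power = 2.0 / (m - 1.0)
    ratio = (d[:, None, :] / d[None, :, :]) ** power                        # (C, C, N)
    return 1.0 / ratio.sum(axis=1)                                          # columns sum to 1

x = np.array([[0.0, 0.0], [1.0, 0.2], [5.0, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
print(fcm_memberships(x, centers))
```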
Issues in ATM Support of High-Performance, Geographically Distributed Computing
NASA Technical Reports Server (NTRS)
Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G
1995-01-01
This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
Dynamic VM Provisioning for TORQUE in a Cloud Environment
NASA Astrophysics Data System (ADS)
Zhang, S.; Boland, L.; Coddington, P.; Sevior, M.
2014-06-01
Cloud computing, also known as an Infrastructure-as-a-Service (IaaS), is attracting more interest from the commercial and educational sectors as a way to provide cost-effective computational infrastructure. It is an ideal platform for researchers who must share common resources but need to be able to scale up to massive computational requirements for specific periods of time. This paper presents the tools and techniques developed to allow the open source TORQUE distributed resource manager and Maui cluster scheduler to dynamically integrate OpenStack cloud resources into existing high throughput computing clusters.
Optimizing R with SparkR on a commodity cluster for biomedical research.
Sedlmayr, Martin; Würfl, Tobias; Maier, Christian; Häberle, Lothar; Fasching, Peter; Prokosch, Hans-Ulrich; Christoph, Jan
2016-12-01
Medical researchers are challenged today by the enormous amount of data collected in healthcare. Analysis methods such as genome-wide association studies (GWAS) are often computationally intensive and thus require enormous resources to be performed in a reasonable amount of time. While dedicated clusters and public clouds may deliver the desired performance, their use requires upfront financial efforts or anonymous data, which is often not possible for preliminary or occasional tasks. We explored the possibilities to build a private, flexible cluster for processing scripts in R based on commodity, non-dedicated hardware of our department. For this, a GWAS calculation in R on a single desktop computer, a Message Passing Interface (MPI) cluster, and a SparkR cluster were compared with regard to performance, scalability, quality, and simplicity. The original script had a projected runtime of three years on a single desktop computer. Optimizing the script in R already yielded a significant reduction in computing time (2 weeks). By using R-MPI and SparkR, we were able to parallelize the computation and reduce the time to less than three hours (2.6 h) on already available, standard office computers. While MPI is a proven approach in high-performance clusters, it requires rather static, dedicated nodes. SparkR and its Hadoop siblings allow for a dynamic, elastic environment with automated failure handling. SparkR also scales better with the number of nodes in the cluster than MPI due to optimized data communication. R is a popular environment for clinical data analysis. The new SparkR solution offers elastic resources and allows supporting big data analysis using R even on non-dedicated resources with minimal change to the original code. To unleash the full potential, additional efforts should be invested to customize and improve the algorithms, especially with regard to data distribution.
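The study parallelized an R GWAS script with SparkR; the sketch below is a hedged PySpark analogue of the same pattern (partition the markers, run an independent test per marker, collect the statistics). The marker list and the per-SNP function are placeholders, not the study's actual analysis.

```python
from pyspark.sql import SparkSession
import random

spark = SparkSession.builder.appName("gwas-sketch").getOrCreate()
sc = spark.sparkContext

snp_ids = [f"rs{i}" for i in range(100_000)]        # assumed marker list

def association_stat(snp_id):
    # placeholder per-SNP computation; a real analysis would fit e.g. a regression model
    random.seed(snp_id)
    return snp_id, random.random()

results = (sc.parallelize(snp_ids, numSlices=200)    # spread SNPs across executors
             .map(association_stat)
             .collect())
spark.stop()
```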
Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Liu, Shu-Guang; Nichols, Erin; Haga, Jim; Maddox, Brian; Bilderback, Chris; Feller, Mark; Homer, George
2001-01-01
The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting information science research into parallel computing systems and applications.
Dynamic Extension of a Virtualized Cluster by using Cloud Resources
NASA Astrophysics Data System (ADS)
Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter
2012-12-01
The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.
Experimental Program to Stimulate Competitive Research (EPSCoR)
NASA Technical Reports Server (NTRS)
Dingerson, Michael R.
1997-01-01
Report includes: (1) CLUSTER: "Studies in Macromolecular Behavior in Microgravity Environment": The Role of Protein Oligomers in Protein Crystallization; Phase Separation Phenomena in Microgravity; Traveling Front Polymerizations; Investigating Mechanisms Affecting Phase Transition Response and Changes in Thermal Transport Properties in ER-Fluids under Normal and Microgravity Conditions. (2) CLUSTER: "Computational/Parallel Processing Studies": Flows in Local Chemical Equilibrium; A Computational Method for Solving Very Large Problems; Modeling of Cavitating Flows.
NASA Astrophysics Data System (ADS)
Appel, Marius; Nüst, Daniel; Pebesma, Edzer
2017-04-01
Geoscientific analyses of Earth observation data typically involve a long path from data acquisition to scientific results and conclusions. Before starting the actual processing, scenes must be downloaded from the providers' platforms and the computing infrastructure needs to be prepared. The computing environment often requires specialized software, which in turn might have many dependencies. The software is often highly customized and provided without commercial support, which leads to rather ad-hoc systems and irreproducible results. To let other scientists reproduce the analyses, the full workspace including data, code, the computing environment, and documentation must be bundled and shared. Technologies such as virtualization or containerization allow for the creation of identical computing environments with relatively little effort. Challenges, however, arise when the volume of the data is too large, when computations are done in a cluster environment, or when complex software components such as databases are used. We discuss these challenges for the example of scalable land use change detection on Landsat imagery. We present a reproducible implementation that runs R and the scalable data management and analytical system SciDB within a Docker container. Thanks to an explicit container recipe (the Dockerfile), this enables all-in-one reproduction, including the installation of software components, the ingestion of the data, and the execution of the analysis in a well-defined environment. We furthermore discuss possibilities for transferring the implementation to multi-container environments in order to support reproducibility on large cluster environments.
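A hedged sketch of the containerized-execution idea above, using the Docker SDK for Python rather than the authors' Dockerfile (which is not reproduced here); the image name and the R expression are placeholders only.

```python
import docker

client = docker.from_env()
output = client.containers.run(
    image="r-base",                                    # placeholder; the paper bundles R + SciDB
    command=["Rscript", "-e", "print(summary(rnorm(1000)))"],
    remove=True,                                       # clean up the container afterwards
)
print(output.decode())
```

Because the container image pins the software stack, rerunning the same command on another machine reproduces the same computing environment.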
Huang, Chen; Muñoz-García, Ana Belén; Pavone, Michele
2016-12-28
Density-functional embedding theory provides a general way to perform multi-physics quantum mechanics simulations of large-scale materials by dividing the total system's electron density into a cluster's density and its environment's density. It is then possible to compute the accurate local electronic structures and energetics of the embedded cluster with high-level methods, meanwhile retaining a low-level description of the environment. The prerequisite step in the density-functional embedding theory is the cluster definition. In covalent systems, cutting across the covalent bonds that connect the cluster and its environment leads to dangling bonds (unpaired electrons). These represent a major obstacle for the application of density-functional embedding theory to study extended covalent systems. In this work, we developed a simple scheme to define the cluster in covalent systems. Instead of cutting covalent bonds, we directly split the boundary atoms for maintaining the valency of the cluster. With this new covalent embedding scheme, we compute the dehydrogenation energies of several different molecules, as well as the binding energy of a cobalt atom on graphene. Well localized cluster densities are observed, which can facilitate the use of localized basis sets in high-level calculations. The results are found to converge faster with the embedding method than the other multi-physics approach ONIOM. This work paves the way to perform the density-functional embedding simulations of heterogeneous systems in which different types of chemical bonds are present.
Integration of High-Performance Computing into Cloud Computing Services
NASA Astrophysics Data System (ADS)
Vouk, Mladen A.; Sills, Eric; Dreher, Patrick
High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations ranging from peta-flop supercomputers, high-end tera-flop facilities running a variety of operating systems and applications, to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging clusters. What they all have in common is that they operate as a stand-alone system rather than a scalable and shared user re-configurable resource. The advent of cloud computing has changed the traditional HPC implementation. In this article, we will discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data that show that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).
NASA Astrophysics Data System (ADS)
Shanmugam, Ramasamy; Thamaraichelvan, Arunachalam; Ganesan, Tharumeya Kuppusamy; Viswanathan, Balasubramanian
2017-02-01
Metal clusters at the sub-nanometer level have unique properties for the activation of small molecules, in contrast to bulk surfaces. In the present work, a singly exposed active site of a copper metal cluster at the sub-nanometer level was designed, and its energy-minimised configurations, binding energy, electrostatic potential map, frontier molecular orbitals, and partial density of states were obtained. Ab initio molecular dynamics was carried out to probe the catalytic nature of the cluster. Further, the stability of the metal cluster and its catalytic activity in the electrochemical reduction of CO2 to CO were evaluated by means of the computational hydrogen electrode via calculation of the free energy profile using the DFT/B3LYP level of theory in vacuum. The activity of the cluster is ascertained from the fact that the copper atom, present in a two-coordinate environment, performs a more selective conversion of CO2 to CO at an applied potential of -0.35 V, which is comparatively lower than that of higher-coordinate sites. The present study helps in designing sub-nano level metal catalysts for the electrochemical reduction of CO2 to various value-added chemicals.
WinHPC System | High-Performance Computing | NREL
NREL's WinHPC system is a computing cluster running the Microsoft Windows operating system. It allows users to run jobs requiring a Windows environment, such as ANSYS and MATLAB.
High-performance scientific computing in the cloud
NASA Astrophysics Data System (ADS)
Jorissen, Kevin; Vila, Fernando; Rehr, John
2011-03-01
Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.
Research on elastic resource management for multi-queue under cloud computing environment
NASA Astrophysics Data System (ADS)
Cheng, Zhenjing; Li, Haibo; Huang, Qiulan; Cheng, Yaodong; Chen, Gang
2017-10-01
As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for a cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the job queues in HTCondor, based on dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. Practical runs show virtual computing resources dynamically expanding or shrinking as computing requirements change. Additionally, the CPU utilization ratio of the computing resources was significantly increased compared with traditional resource management. The system also performs well when there are multiple Condor schedulers and multiple job queues.
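A schematic sketch of the dual-threshold policy described above: expand the virtual pool when too many jobs are waiting in a queue, shrink it when too many virtual machines sit idle, and never exceed the experiment's quota. The thresholds and counts are assumed values, and no HTCondor or OpenStack API is used here.

```python
UPPER_THRESHOLD = 50      # queued jobs that trigger expansion (assumed value)
LOWER_THRESHOLD = 10      # idle VMs that trigger shrinking (assumed value)

def plan_action(idle_jobs, idle_vms, current_vms, quota):
    """Return ('expand', n), ('shrink', n) or ('hold', 0) for one job queue."""
    if idle_jobs > UPPER_THRESHOLD and current_vms < quota:
        return "expand", min(idle_jobs, quota - current_vms)   # respect the experiment quota
    if idle_vms > LOWER_THRESHOLD:
        return "shrink", idle_vms - LOWER_THRESHOLD            # release surplus nodes
    return "hold", 0

print(plan_action(idle_jobs=120, idle_vms=0, current_vms=40, quota=100))  # ('expand', 60)
print(plan_action(idle_jobs=0, idle_vms=25, current_vms=60, quota=100))   # ('shrink', 15)
```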
Beating the tyranny of scale with a private cloud configured for Big Data
NASA Astrophysics Data System (ADS)
Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag
2015-04-01
The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks, and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which, by April 2015, will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, ranging from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment, ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the need to shuffle data excessively, even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. There are some limitations of the JASMIN environment: the high performance disk environment is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet fully available. There are load balancing and performance issues that need to be understood. We will conclude with projections for future usage and our plans to meet those requirements.
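As an illustration of the data-management side mentioned above, the sketch below checksums a large file collection in parallel with a worker pool; the archive path and worker count are assumptions, and the actual JASMIN tooling is not shown.

```python
import hashlib
from multiprocessing import Pool
from pathlib import Path

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB blocks
            h.update(block)
    return str(path), h.hexdigest()

if __name__ == "__main__":
    files = [p for p in Path("/archive/example").rglob("*") if p.is_file()]  # assumed root
    with Pool(processes=16) as pool:                                         # assumed core count
        for name, digest in pool.imap_unordered(sha256_of, files, chunksize=64):
            print(digest, name)
```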
GREEN SUPERCOMPUTING IN A DESKTOP BOX
DOE Office of Scientific and Technical Information (OSTI.GOV)
HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY
2007-01-17
The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
Personal Learning Network Clusters: A Comparison between Mathematics and Computer Science Students
ERIC Educational Resources Information Center
Harding, Ansie; Engelbrecht, Johann
2015-01-01
"Personal learning environments" (PLEs) and "personal learning networks" (PLNs) are well-known concepts. A personal learning network "cluster" is a small group of people who regularly interact academically and whose PLNs have a non-empty intersection that includes all the other members. At university level PLN…
Architectural Principles and Experimentation of Distributed High Performance Virtual Clusters
ERIC Educational Resources Information Center
Younge, Andrew J.
2016-01-01
With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for their scientific computing needs. This is due to the relative scalability, ease of use, advanced user environment customization abilities, and the many novel computing paradigms available for…
A Hybrid Cloud Computing Service for Earth Sciences
NASA Astrophysics Data System (ADS)
Yang, C. P.
2016-12-01
Cloud computing is becoming a norm for providing computing capabilities for advancing Earth sciences, including big Earth data management, processing, analytics, model simulations, and many other aspects. A hybrid spatiotemporal cloud computing service has been built at the George Mason NSF Spatiotemporal Innovation Center to meet these demands. This paper reports on the service, including several aspects: 1) the hardware includes 500 computing servers and close to 2 PB of storage, as well as connections to XSEDE Jetstream and the Caltech experimental cloud computing environment for sharing resources; 2) the cloud service is geographically distributed across the east coast, west coast, and central region; 3) the cloud includes private clouds managed using OpenStack and Eucalyptus, with DC2 used to bridge these and the public AWS cloud for interoperability and for sharing computing resources when demand surges; 4) the cloud service is used to support the NSF EarthCube program through the ECITE project, ESIP through the ESIP cloud computing cluster, the semantics testbed cluster, and other clusters; 5) the cloud service is also available to the Earth science communities to conduct geoscience research. A brief introduction about how to use the cloud service is included.
Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing
NASA Astrophysics Data System (ADS)
Tang, Jingyin; Matyas, Corene J.
2018-02-01
Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility between the market-dominating ArcGIS software stack and the Linux operating system. This manuscript details a cross-platform geospatial library, "arc4nix", that bridges this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on the remote server and other functions to run in the native Python environment. It uses functional programming and metaprogramming to dynamically construct Python code containing the actual geospatial calculations, send it to a server and retrieve the results. Arc4nix allows users to employ their arcpy-based scripts in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks using multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arcpy scales linearly in a distributed environment. Arc4nix is open-source software.
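The decoupled client-server pattern described above can be illustrated with nothing but Python's standard library. This is not arc4nix's actual API; the function name, host and port are hypothetical, and in a real deployment the server-side body would dispatch into a geospatial engine such as arcpy.

# server.py - runs where the geospatial engine is installed
from xmlrpc.server import SimpleXMLRPCServer

def buffer_polygon(wkt, distance):
    # Placeholder for a call into a geospatial library.
    return f"BUFFERED({wkt}, {distance})"

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(buffer_polygon)
server.serve_forever()

# client.py - runs in a plain Python environment with no GIS installed
import xmlrpc.client

remote = xmlrpc.client.ServerProxy("http://gis-server:8000")  # hypothetical host name
print(remote.buffer_polygon("POLYGON((0 0,1 0,1 1,0 0))", 10.0))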
Distributed Sensing and Processing Adaptive Collaboration Environment (D-SPACE)
2014-07-01
…to the query graph, or subgraph permutations with the same mismatch cost (often the case for homogeneous and/or symmetrical data/queries). … Decisions are generated in a bottom-up manner using the metric of entropy at the cluster level (Figure 9c). Using the definition of belief messages for a cluster and a set of data nodes in this cluster, the entropy of the forward and backward messages is computed as a Shannon entropy of the form H = −∑ p log p.
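For orientation, the entropy referred to above is the ordinary Shannon entropy of a (normalized) belief message. A minimal sketch, with a made-up message vector:

import math

def message_entropy(message):
    # Normalize the message to a distribution, then apply H = -sum(p log p).
    total = sum(message)
    probs = [m / total for m in message if m > 0]
    return -sum(p * math.log(p) for p in probs)

print(message_entropy([0.5, 0.3, 0.2]))  # forward and backward messages are scored the same way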
Large-scale parallel genome assembler over cloud computing environment.
Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong
2017-06-01
The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure compared to a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of a traditional HPC cluster.
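As a toy illustration of the de Bruijn graph construction at the heart of such assemblers (this is a single-process sketch, not GiGA's Hadoop/Giraph implementation; the reads and k are invented):

from collections import defaultdict

def de_bruijn_graph(reads, k):
    # Map each (k-1)-mer prefix to the (k-1)-mer suffixes that follow it.
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

reads = ["ACGTAC", "CGTACG", "GTACGT"]
for node, successors in de_bruijn_graph(reads, k=4).items():
    print(node, "->", successors)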
Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming
NASA Technical Reports Server (NTRS)
Dorband, John E.; Aburdene, Maurice F.
2002-01-01
Recently, networked and cluster computation have become very popular. This paper is an introduction to a new C-based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them with the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we will focus on some fundamental features of aCe C.
3D Viewer Platform of Cloud Clustering Management System: Google Map 3D
NASA Astrophysics Data System (ADS)
Choi, Sung-Ja; Lee, Gang-Soo
A new management framework is needed for cloud environments as the underlying computing platforms converge. Independent software vendors (ISVs) and small businesses find it difficult to adopt the platform management systems offered by large providers. This article proposes a clustering management system for cloud computing environments aimed at ISVs and enterprises following a small business model. It applies a 3D viewer adapted from Google Maps 3D and Google Earth, and is called 3DV_CCMS, an extension of the CCMS [1].
Wills, Lindsay A.; Qu, Xiaohui; Chang, I-Ya; Mustard, Thomas J. L.; Keszler, Douglas A.; Persson, Kristin A.; Cheong, Paul Ha-Yeon
2017-01-01
The characterization of water-based corrosion, geochemical, environmental and catalytic processes relies on the accurate depiction of stable phases in a water environment. The process is aided by Pourbaix diagrams, which map the equilibrium solid and solution phases under varying conditions of pH and electrochemical potential. Recently, metastable or possibly stable nanometric aqueous clusters have been proposed as intermediate species in non-classical nucleation processes. Herein, we describe a Group Additivity approach to obtain Pourbaix diagrams with full consideration of multimeric cluster speciation from computations. Comparisons with existing titration results from experiments yield excellent agreement. Applying this Group Additivity-Pourbaix approach to Group 13 elements, we arrive at a quantitative evaluation of cluster stability, as a function of pH and concentration, and present compelling support for not only metastable but also thermodynamically stable multimeric clusters in aqueous solutions. PMID:28643782
NASA Astrophysics Data System (ADS)
Wills, Lindsay A.; Qu, Xiaohui; Chang, I.-Ya; Mustard, Thomas J. L.; Keszler, Douglas A.; Persson, Kristin A.; Cheong, Paul Ha-Yeon
2017-06-01
The characterization of water-based corrosion, geochemical, environmental and catalytic processes relies on the accurate depiction of stable phases in a water environment. The process is aided by Pourbaix diagrams, which map the equilibrium solid and solution phases under varying conditions of pH and electrochemical potential. Recently, metastable or possibly stable nanometric aqueous clusters have been proposed as intermediate species in non-classical nucleation processes. Herein, we describe a Group Additivity approach to obtain Pourbaix diagrams with full consideration of multimeric cluster speciation from computations. Comparisons with existing titration results from experiments yield excellent agreement. Applying this Group Additivity-Pourbaix approach to Group 13 elements, we arrive at a quantitative evaluation of cluster stability, as a function of pH and concentration, and present compelling support for not only metastable but also thermodynamically stable multimeric clusters in aqueous solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priedhorsky, Reid; Randles, Tim
Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5. File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.
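The idea of gathering virtual-cluster parameters from the environment can be sketched in a few lines. The SLURM variable names below are standard, but the defaults and the returned layout are invented for illustration and are not Charliecloud's code:

import os

def virtual_cluster_size():
    # Derive a VM count and per-VM resources from the enclosing SLURM allocation, if any.
    n_vms = int(os.environ.get("SLURM_NNODES", "1"))
    cores_per_vm = int(os.environ.get("SLURM_CPUS_ON_NODE", "4"))
    ram_per_vm_mb = int(os.environ.get("SLURM_MEM_PER_NODE", "4096"))
    return {"vms": n_vms, "cores_per_vm": cores_per_vm, "ram_mb": ram_per_vm_mb}

print(virtual_cluster_size())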
A parallel-processing approach to computing for the geographic sciences
Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Haga, Jim; Maddox, Brian; Feller, Mark
2001-01-01
The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting research into various areas, such as advanced computer architecture, algorithms to meet the processing needs for real-time image and data processing, the creation of custom datasets from seamless source data, rapid turn-around of products for emergency response, and support for computationally intense spatial and temporal modeling.
Beam Dynamics Simulation Platform and Studies of Beam Breakup in Dielectric Wakefield Structures
NASA Astrophysics Data System (ADS)
Schoessow, P.; Kanareykin, A.; Jing, C.; Kustov, A.; Altmark, A.; Gai, W.
2010-11-01
A particle-Green's function beam dynamics code (BBU-3000) to study beam breakup effects is incorporated into a parallel computing framework based on the Boinc software environment, and supports both task farming on a heterogeneous cluster and local grid computing. User access to the platform is through a web browser.
Cloud Computing for Pharmacometrics: Using AWS, NONMEM, PsN, Grid Engine, and Sonic
Sanduja, S; Jewell, P; Aron, E; Pharai, N
2015-01-01
Cloud computing allows pharmacometricians to access advanced hardware, network, and security resources available to expedite analysis and reporting. Cloud-based computing environments can be set up in a fraction of the time and with a fraction of the effort required for traditional local datacenter-based solutions. This tutorial explains how to get started with building your own personal cloud computer cluster using Amazon Web Services (AWS), NONMEM, PsN, Grid Engine, and Sonic. PMID:26451333
Cloud Computing for Pharmacometrics: Using AWS, NONMEM, PsN, Grid Engine, and Sonic.
Sanduja, S; Jewell, P; Aron, E; Pharai, N
2015-09-01
Cloud computing allows pharmacometricians to access advanced hardware, network, and security resources available to expedite analysis and reporting. Cloud-based computing environments can be set up in a fraction of the time and with a fraction of the effort required for traditional local datacenter-based solutions. This tutorial explains how to get started with building your own personal cloud computer cluster using Amazon Web Services (AWS), NONMEM, PsN, Grid Engine, and Sonic.
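The very first step of such a setup, requesting a few compute instances, can be sketched with boto3 (this is not part of the tutorial itself; the AMI ID, instance type and key pair are placeholders, and AWS credentials are assumed to be configured):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI, e.g. with NONMEM/PsN preinstalled
    InstanceType="c5.xlarge",         # placeholder instance type
    KeyName="my-keypair",             # placeholder key pair
    MinCount=4,
    MaxCount=4,
)
print([i["InstanceId"] for i in response["Instances"]])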
birgHPC: creating instant computing clusters for bioinformatics and molecular dynamics.
Chew, Teong Han; Joyce-Tan, Kwee Hong; Akma, Farizuwana; Shamsir, Mohd Shahir
2011-05-01
birgHPC, a bootable Linux Live CD, has been developed to create high-performance clusters for bioinformatics and molecular dynamics studies using any Local Area Network (LAN)-networked computers. birgHPC features automated hardware and slot detection and provides a simple job submission interface. The latest versions of GROMACS, NAMD, mpiBLAST and ClustalW-MPI can be run in parallel by simply booting the birgHPC CD or flash drive from the head node, which immediately positions the rest of the PCs on the network as computing nodes. Thus, a temporary, affordable, scalable and high-performance computing environment can be built by non-computing-based researchers using low-cost commodity hardware. The birgHPC Live CD and relevant user guide are available for free at http://birg1.fbb.utm.my/birghpc.
GEANT4 distributed computing for compact clusters
NASA Astrophysics Data System (ADS)
Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.
2014-11-01
A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User-designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 for large discrete data sets such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions or simply speeding up the throughput of a single model.
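The 'work ticket' pattern, one parameter set per run farmed out to workers, can be mimicked with Python's standard library. This is only an illustration of the workflow, not the g4DistributedRunManager class, and the ticket contents are invented:

from multiprocessing import Pool

def run_simulation(ticket):
    # Stand-in for one GEANT4 run; in practice this would launch the simulation executable.
    return f"run at {ticket['angle_deg']} deg with {ticket['events']} events finished"

tickets = [{"angle_deg": a, "events": 100000} for a in range(0, 180, 15)]

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        for result in pool.map(run_simulation, tickets):
            print(result)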
Understanding and preventing computer vision syndrome.
Loh, Ky; Redd, Sc
2008-01-01
The invention of the computer and advances in information technology have revolutionized and benefited society, but at the same time have caused usage-related symptoms such as ocular strain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms, known as computer vision syndrome, is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms lead to computer vision syndrome: the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. The visual characteristics of the computer display, such as brightness, resolution, glare and quality, are all known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.
Job Management Requirements for NAS Parallel Systems and Clusters
NASA Technical Reports Server (NTRS)
Saphir, William; Tanner, Leigh Ann; Traversat, Bernard
1995-01-01
A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.
Hodor, Paul; Chawla, Amandeep; Clark, Andrew; Neal, Lauren
2016-01-15
One of the solutions proposed for addressing the challenge of the overwhelming abundance of genomic sequence and other biological data is the use of the Hadoop computing framework. Appropriate tools are needed to set up computational environments that facilitate research of novel bioinformatics methodology using Hadoop. Here, we present cl-dash, a complete starter kit for setting up such an environment. Configuring and deploying new Hadoop clusters can be done in minutes. Use of Amazon Web Services ensures no initial investment and minimal operation costs. Two sample bioinformatics applications help the researcher understand and learn the principles of implementing an algorithm using the MapReduce programming pattern. Source code is available at https://bitbucket.org/booz-allen-sci-comp-team/cl-dash.git. hodor_paul@bah.com. © The Author 2015. Published by Oxford University Press.
Hodor, Paul; Chawla, Amandeep; Clark, Andrew; Neal, Lauren
2016-01-01
Summary: One of the solutions proposed for addressing the challenge of the overwhelming abundance of genomic sequence and other biological data is the use of the Hadoop computing framework. Appropriate tools are needed to set up computational environments that facilitate research of novel bioinformatics methodology using Hadoop. Here, we present cl-dash, a complete starter kit for setting up such an environment. Configuring and deploying new Hadoop clusters can be done in minutes. Use of Amazon Web Services ensures no initial investment and minimal operation costs. Two sample bioinformatics applications help the researcher understand and learn the principles of implementing an algorithm using the MapReduce programming pattern. Availability and implementation: Source code is available at https://bitbucket.org/booz-allen-sci-comp-team/cl-dash.git. Contact: hodor_paul@bah.com PMID:26428290
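The MapReduce programming pattern that the sample applications teach can be summarized in plain Python (a single-process illustration only; Hadoop distributes the same map, shuffle and reduce phases across a cluster, and the k-mer counting task here is just an example):

from collections import defaultdict

def map_phase(read, k=4):
    # Emit (key, 1) for every k-mer in a read - a typical bioinformatics mapper.
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def reduce_phase(key, values):
    return key, sum(values)

reads = ["ACGTACGT", "CGTACGTA"]           # toy input
shuffled = defaultdict(list)
for read in reads:
    for key, value in map_phase(read):
        shuffled[key].append(value)        # the shuffle groups intermediate values by key

for key, values in shuffled.items():
    print(reduce_phase(key, values))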
Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay
2014-01-01
The rapid development of picture archiving and communication systems (PACSs) has thoroughly changed the way medical information is communicated and managed. However, as the scale of a hospital's operations increases, the large number of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.
Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi
2017-01-01
Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes as the signal-to-noise ratio (SNR) of the image data decreases, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization. PMID:28786986
Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong
2017-01-01
Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes as the signal-to-noise ratio (SNR) of the image data decreases, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.
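For reference, the K-means baseline mentioned above, clustering particle images and averaging each class, looks roughly like the sketch below (random data stands in for aligned particle images; GTM itself is not shown and this is not the authors' implementation):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_particles, side, n_classes = 500, 32, 10
images = rng.normal(size=(n_particles, side, side))  # stand-in for particle images

X = images.reshape(n_particles, -1)                  # flatten each image to a feature vector
labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(X)

# Reference-free "class averages": the mean image of each cluster.
class_averages = np.stack([images[labels == c].mean(axis=0) for c in range(n_classes)])
print(class_averages.shape)  # (10, 32, 32)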
Chang, Shu-Jun; Wu, Jay
2014-01-01
The rapid development of picture archiving and communication systems (PACSs) has thoroughly changed the way medical information is communicated and managed. However, as the scale of a hospital's operations increases, the large number of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment. PMID:24701580
NASA Astrophysics Data System (ADS)
Sultanova, Madina; Barkhouse, Wayne; Rude, Cody
2018-01-01
The classification of galaxies based on their morphology is a field in astrophysics that aims to understand galaxy formation and evolution based on their physical differences. Whether structural differences are due to internal factors or a result of local environment, the dominant mechanism that determines galaxy type needs to be robustly quantified in order to have a thorough grasp of the origin of the different types of galaxies. The main subject of my Ph.D. dissertation is to explore the use of computers to automatically classify and analyze large numbers of galaxies according to their morphology, and to analyze sub-samples of galaxies selected by type to understand galaxy formation in various environments. I have developed a computer code to classify galaxies by measuring five parameters from their images in FITS format. The code was trained and tested using visually classified SDSS galaxies from Galaxy Zoo and the EFIGI data set. I apply my morphology software to numerous galaxies from diverse data sets. Among the data analyzed are the 15 Abell galaxy clusters (0.03 < z < 0.184) from Rude et al. 2017 (in preparation), which were observed by the Canada-France-Hawaii Telescope. Additionally, I studied 57 galaxy clusters from Barkhouse et al. (2007), 77 clusters from the WINGS survey (Fasano et al. 2006), and the six Hubble Space Telescope (HST) Frontier Field galaxy clusters. The high resolution of HST allows me to compare distant clusters with those nearby to look for evolutionary changes in the galaxy cluster population. I use the results from the software to examine the properties (e.g. luminosity functions, radial dependencies, star formation rates) of selected galaxies. Due to the large amount of data that will be available from wide-area surveys in the future, the use of computer software to classify and analyze the morphology of galaxies will be extremely important in terms of efficiency. This research aims to contribute to the solution of this problem.
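The five parameters used in the dissertation are not listed in the abstract; as an illustration of the kind of measurement involved, the sketch below loads a FITS image with astropy and computes a standard CAS-style asymmetry index (the file name is a placeholder, and this is not the author's code):

import numpy as np
from astropy.io import fits

def asymmetry(image):
    # CAS-style asymmetry: normalized residual between the image and its 180-degree rotation.
    rotated = np.rot90(image, 2)
    return np.abs(image - rotated).sum() / (2.0 * np.abs(image).sum())

image = fits.getdata("galaxy.fits").astype(float)  # placeholder file name
print(f"asymmetry index: {asymmetry(image):.3f}")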
Clustering molecular dynamics trajectories for optimizing docking experiments.
De Paris, Renata; Quevedo, Christian V; Ruiz, Duncan D; Norberto de Souza, Osmar; Barros, Rodrigo C
2015-01-01
Molecular dynamics simulations of protein receptors have become an attractive tool for rational drug discovery. However, the high computational cost of employing molecular dynamics trajectories in virtual screening of large repositories threatens the feasibility of this task. Computational intelligence techniques have been applied in this context, with the ultimate goal of reducing the overall computational cost so the task can become feasible. Particularly, clustering algorithms have been widely used as a means to reduce the dimensionality of molecular dynamics trajectories. In this paper, we develop a novel methodology for clustering entire trajectories using structural features from the substrate-binding cavity of the receptor in order to optimize docking experiments on a cloud-based environment. The resulting partition was selected based on three clustering validity criteria, and it was further validated by analyzing the interactions between 20 ligands and a fully flexible receptor (FFR) model containing a 20 ns molecular dynamics simulation trajectory. Our proposed methodology shows that taking into account features of the substrate-binding cavity as input for the k-means algorithm is a promising technique for accurately selecting ensembles of representative structures tailored to a specific ligand.
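A minimal sketch of the core step, k-means over per-frame features of the binding cavity with one common validity criterion, is given below (the features are random placeholders; in practice they would be extracted from the trajectory, and the authors use three validity criteria rather than the single silhouette score shown here):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
frames = rng.normal(size=(2000, 12))  # 2000 frames x 12 cavity features (placeholder)

best = None
for k in range(2, 8):                 # choose k with the silhouette criterion
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(frames)
    score = silhouette_score(frames, labels)
    if best is None or score > best[1]:
        best = (k, score, labels)

k, score, labels = best
print(f"chosen k={k}, silhouette={score:.2f}")
# One representative frame per cluster would then serve as a receptor snapshot for docking.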
Thermodynamics and Charging of Interstellar Iron Nanoparticles
NASA Astrophysics Data System (ADS)
Hensley, Brandon S.; Draine, B. T.
2017-01-01
Interstellar iron in the form of metallic iron nanoparticles may constitute a component of the interstellar dust. We compute the stability of iron nanoparticles to sublimation in the interstellar radiation field, finding that iron clusters can persist down to a radius of ≃4.5 Å, and perhaps smaller. We employ laboratory data on small iron clusters to compute the photoelectric yields as a function of grain size and the resulting grain charge distribution in various interstellar environments, finding that iron nanoparticles can acquire negative charges, particularly in regions with high gas temperatures and ionization fractions. If ≳10% of the interstellar iron is in the form of ultrasmall iron clusters, the photoelectric heating rate from dust may be increased by up to tens of percent relative to dust models with only carbonaceous and silicate grains.
The ALICE Software Release Validation cluster
NASA Astrophysics Data System (ADS)
Berzano, D.; Krzewicki, M.
2015-12-01
One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample “golden” dataset, is also necessary for the quality sign off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, and with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation Cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future.
NASA Technical Reports Server (NTRS)
Kramer, Williams T. C.; Simon, Horst D.
1994-01-01
This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon Machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.
Naver: a PC-cluster-based VR system
NASA Astrophysics Data System (ADS)
Park, ChangHoon; Ko, HeeDong; Kim, TaiYun
2003-04-01
In this paper, we present a new framework, NAVER, for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. The goal of NAVER is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of 3D virtual space with external modules. External modules are various input or output devices and applications on remote hosts. From the system's point of view, the personal computers are divided into three servers according to their specific functions: the Render Server, the Device Server and the Control Server. While the Device Server contains external modules requiring event-based communication for the integration, the Control Server contains external modules requiring synchronous communication every frame. The Render Server consists of five managers: the Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and the integration with external modules on remote servers.
Computational Design of Clusters for Catalysis
NASA Astrophysics Data System (ADS)
Jimenez-Izal, Elisa; Alexandrova, Anastassia N.
2018-04-01
When small clusters are studied in chemical physics or physical chemistry, one perhaps thinks of the fundamental aspects of cluster electronic structure, or precision spectroscopy in ultracold molecular beams. However, small clusters are also of interest in catalysis, where the cold ground state or an isolated cluster may not even be the right starting point. Instead, the big question is: What happens to cluster-based catalysts under real conditions of catalysis, such as high temperature and coverage with reagents? Myriads of metastable cluster states become accessible, the entire system is dynamic, and catalysis may be driven by rare sites present only under those conditions. Activity, selectivity, and stability are highly dependent on size, composition, shape, support, and environment. To probe and master cluster catalysis, sophisticated tools are being developed for precision synthesis, operando measurements, and multiscale modeling. This review intends to tell the messy story of clusters in catalysis.
Interactive Parallel Data Analysis within Data-Centric Cluster Facilities using the IPython Notebook
NASA Astrophysics Data System (ADS)
Pascoe, S.; Lansdowne, J.; Iwi, A.; Stephens, A.; Kershaw, P.
2012-12-01
The data deluge is making traditional analysis workflows for many researchers obsolete. Support for parallelism within popular tools such as matlab, IDL and NCO is not well developed and rarely used. However, parallelism is necessary for processing modern data volumes on a timescale conducive to curiosity-driven analysis. Furthermore, for peta-scale datasets such as the CMIP5 archive, it is no longer practical to bring an entire dataset to a researcher's workstation for analysis, or even to their institutional cluster. Therefore, there is an increasing need to develop new analysis platforms which both enable processing at the point of data storage and provide parallelism. Such an environment should, where possible, maintain the convenience and familiarity of our current analysis environments to encourage curiosity-driven research. We describe how we are combining the interactive python shell (IPython) with our JASMIN data-cluster infrastructure. IPython has been specifically designed to bridge the gap between HPC-style parallel workflows and the opportunistic curiosity-driven analysis usually carried out using domain specific languages and scriptable tools. IPython offers a web-based interactive environment, the IPython notebook, and a cluster engine for parallelism, all underpinned by the well-respected Python/Scipy scientific programming stack. JASMIN is designed to support the data analysis requirements of the UK and European climate and earth system modeling community. JASMIN, together with its sister facility CEMS, which focuses on the earth observation community, has 4.5 PB of fast parallel disk storage alongside over 370 computing cores that provide local computation. Through the IPython interface to JASMIN, users can make efficient use of JASMIN's multi-core virtual machines to perform interactive analysis on all cores simultaneously or can configure IPython clusters across multiple VMs. Larger-scale clusters can be provisioned through JASMIN's batch scheduling system. Outputs can be summarised and visualised using the full power of Python's many scientific tools, including Scipy, Matplotlib, Pandas and CDAT. This rich user experience is delivered through the user's web browser, maintaining the interactive feel of a workstation-based environment with the parallel power of a remote data-centric processing facility.
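A minimal sketch of this interactive-parallel workflow using IPython's parallel machinery (now packaged as ipyparallel); it assumes an engine cluster has already been started, for example with 'ipcluster start -n 8', and the diagnostic function is a placeholder:

import ipyparallel as ipp

rc = ipp.Client()                 # connect to the running engines
view = rc.load_balanced_view()

def annual_mean(year):
    # Placeholder diagnostic: a real version would open one year of model output
    # (e.g. with netCDF4 or iris) and reduce it to a summary statistic.
    return year, 0.0

results = view.map_sync(annual_mean, range(1979, 2013))
print(results[:3])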
Climbing the Ladder of Star Formation Feedback
NASA Astrophysics Data System (ADS)
Frank, Adam
2012-10-01
While much is understood about isolated star formation, the opposite is true for star formation in clusters of both low and high mass. In particular, the mechanisms by which many coevally formed stars affect their parent cloud environment remain poorly characterized. Fundamental questions, such as the interplay between multiple outflows, ionization fronts and turbulence, are just beginning to be fully articulated. Distinguishing between the nature of feedback in clusters of different mass is also critical. In high mass clusters O stars are expected to dominate energetics, while in low mass clusters multiple collimated outflows may represent the dominant feedback mechanism. Thus the issue of feedback modalities in clusters of different masses represents one of the major challenges to the next generation of star formation studies. In this proposal we seek to carry forward a focused theoretical study of feedback in both low and high-mass cluster environments with direct connections to observations. Using a state-of-the-art Adaptive Mesh Refinement MHD multi-physics code {developed by our group} we propose two computational studies: {1} multiple, interacting outflows and their role in altering the properties of a parent low mass cluster; {2} poorly collimated outbursts/outflows from massive star{s} and their effect on high mass cluster star forming environments. In both cases we will use initial conditions derived from high-resolution AMR MHD simulations of cloud/cluster formation. Synthetic observations derived from the simulations {in a variety of emission lines from ions to atoms to molecules} will allow for direct contact with HST and other star formation databases.
Azad, Ariful; Ouzounis, Christos A; Kyrpides, Nikos C; Buluç, Aydin
2018-01-01
Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. Here, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ∼70 million nodes with ∼68 billion edges in ∼2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license. PMID:29315405
Azad, Ariful; Pavlopoulos, Georgios A.; Ouzounis, Christos A.; ...
2018-01-05
Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. In this paper, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ~70 million nodes with ~68 billion edges in ~2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. Finally, HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azad, Ariful; Pavlopoulos, Georgios A.; Ouzounis, Christos A.
Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. In this paper, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ~70 million nodes with ~68 billion edges in ~2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. Finally, HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license.
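For readers unfamiliar with the underlying algorithm, a toy dense-matrix version of Markov Clustering fits in a few lines. This is the serial textbook procedure, not HipMCL's distributed implementation, and the graph and parameters are illustrative:

import numpy as np

def mcl(adjacency, inflation=2.0, iterations=50):
    # Toy Markov Clustering on a small dense adjacency matrix.
    M = adjacency.astype(float) + np.eye(len(adjacency))  # add self-loops
    M /= M.sum(axis=0)                                    # column-normalize
    for _ in range(iterations):
        M = M @ M                                         # expansion
        M = M ** inflation                                # inflation ...
        M /= M.sum(axis=0)                                # ... followed by renormalization
    # Nodes listed in a nonzero row of an attractor (nonzero diagonal) form a cluster.
    clusters = {frozenset(np.nonzero(M[i] > 1e-6)[0].tolist())
                for i in range(len(M)) if M[i, i] > 1e-6}
    return [sorted(c) for c in clusters]

A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
print(mcl(A))  # expected: the two triangles [0, 1, 2] and [3, 4, 5]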
Toward an automated parallel computing environment for geosciences
NASA Astrophysics Data System (ADS)
Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping
2007-08-01
Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.
A versatile software package for inter-subject correlation based analyses of fMRI.
Kauppi, Jukka-Pekka; Pajula, Juha; Tohka, Jussi
2014-01-01
In the inter-subject correlation (ISC) based analysis of functional magnetic resonance imaging (fMRI) data, the extent of shared processing across subjects during the experiment is determined by calculating correlation coefficients between the fMRI time series of the subjects in the corresponding brain locations. This implies that ISC can be used to analyze fMRI data without explicitly modeling the stimulus, and thus ISC is a potential method to analyze fMRI data acquired under complex naturalistic stimuli. Despite the suitability of the ISC based approach to analyze complex fMRI data, no generic software tools have been made available for this purpose, limiting the widespread use of ISC based analysis techniques among the neuroimaging community. In this paper, we present a graphical user interface (GUI) based software package, ISC Toolbox, implemented in Matlab for computing various ISC based analyses. Many advanced computations such as comparison of ISCs between different stimuli, time window ISC, and inter-subject phase synchronization are supported by the toolbox. The analyses are coupled with re-sampling based statistical inference. The ISC based analyses are data and computation intensive, and the ISC toolbox is equipped with mechanisms to execute the parallel computations in a cluster environment automatically, with automatic detection of the cluster environment in use. Currently, SGE-based (Oracle Grid Engine, Son of a Grid Engine, or Open Grid Scheduler) and Slurm environments are supported. In this paper, we present a detailed account of the methods behind the ISC Toolbox and the implementation of the toolbox, and demonstrate the possible use of the toolbox by summarizing selected example applications. We also report computation time experiments using both a single desktop computer and two grid environments, demonstrating that parallelization effectively reduces the computing time. The ISC Toolbox is available at https://code.google.com/p/isc-toolbox/
A versatile software package for inter-subject correlation based analyses of fMRI
Kauppi, Jukka-Pekka; Pajula, Juha; Tohka, Jussi
2014-01-01
In the inter-subject correlation (ISC) based analysis of functional magnetic resonance imaging (fMRI) data, the extent of shared processing across subjects during the experiment is determined by calculating correlation coefficients between the fMRI time series of the subjects in the corresponding brain locations. This implies that ISC can be used to analyze fMRI data without explicitly modeling the stimulus, and thus ISC is a potential method to analyze fMRI data acquired under complex naturalistic stimuli. Despite the suitability of the ISC based approach to analyze complex fMRI data, no generic software tools have been made available for this purpose, limiting the widespread use of ISC based analysis techniques among the neuroimaging community. In this paper, we present a graphical user interface (GUI) based software package, ISC Toolbox, implemented in Matlab for computing various ISC based analyses. Many advanced computations such as comparison of ISCs between different stimuli, time window ISC, and inter-subject phase synchronization are supported by the toolbox. The analyses are coupled with re-sampling based statistical inference. The ISC based analyses are data and computation intensive, and the ISC toolbox is equipped with mechanisms to execute the parallel computations in a cluster environment automatically, with automatic detection of the cluster environment in use. Currently, SGE-based (Oracle Grid Engine, Son of a Grid Engine, or Open Grid Scheduler) and Slurm environments are supported. In this paper, we present a detailed account of the methods behind the ISC Toolbox and the implementation of the toolbox, and demonstrate the possible use of the toolbox by summarizing selected example applications. We also report computation time experiments using both a single desktop computer and two grid environments, demonstrating that parallelization effectively reduces the computing time. The ISC Toolbox is available at https://code.google.com/p/isc-toolbox/ PMID:24550818
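The core quantity, for a single brain location, is simply the average pairwise Pearson correlation across subjects' time series; a minimal sketch with synthetic data (not the toolbox's Matlab code):

import numpy as np
from itertools import combinations

def isc(time_series):
    # time_series: array of shape (n_subjects, n_timepoints) for one voxel.
    pairs = combinations(range(len(time_series)), 2)
    r = [np.corrcoef(time_series[i], time_series[j])[0, 1] for i, j in pairs]
    return float(np.mean(r))

rng = np.random.default_rng(0)
shared = rng.normal(size=200)                        # stimulus-driven component
subjects = shared + 0.5 * rng.normal(size=(5, 200))  # five subjects, 200 time points
print(f"ISC = {isc(subjects):.2f}")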
THERMODYNAMICS AND CHARGING OF INTERSTELLAR IRON NANOPARTICLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hensley, Brandon S.; Draine, B. T., E-mail: brandon.s.hensley@jpl.nasa.gov
Interstellar iron in the form of metallic iron nanoparticles may constitute a component of the interstellar dust. We compute the stability of iron nanoparticles to sublimation in the interstellar radiation field, finding that iron clusters can persist down to a radius of ≃4.5 Å, and perhaps smaller. We employ laboratory data on small iron clusters to compute the photoelectric yields as a function of grain size and the resulting grain charge distribution in various interstellar environments, finding that iron nanoparticles can acquire negative charges, particularly in regions with high gas temperatures and ionization fractions. If ≳10% of the interstellar iron is in the form of ultrasmall iron clusters, the photoelectric heating rate from dust may be increased by up to tens of percent relative to dust models with only carbonaceous and silicate grains.
Pereiro, M; Baldomir, D; Arias, J E
2011-02-28
Optical excitation spectra of Ag(n) and Ag(n)@He(60) (n = 2, 8) clusters are investigated in the framework of time-dependent density functional theory (TDDFT) within the linear response regime. We have performed the ab initio calculations for two different exact exchange functionals (GGA-exact and LDA-exact). The computed spectra of Ag(n)@He(60) clusters with the GGA-exact functional accounting for exchange-correlation effects are found to be generally in relatively good agreement with experiment. A strategy is proposed to obtain the ground-state structures of the Ag(n)@He(60) clusters, and in the initial process of the geometry optimization the He environment is simulated with buckyballs. A redshift of the silver cluster spectra is observed in the He environment with respect to those of the bare silver clusters. This observation is discussed and explained in terms of a contraction of the Ag-He bonding length and a consequent confinement of the s valence electrons in the silver clusters. Likewise, the Mie-Gans predictions combined with our TDDFT calculations also show that the dielectric effect produced by the He matrix is considerably less important in explaining the redshift observed in the optical spectra of Ag(n)@He(60) clusters.
Oak Ridge Institutional Cluster Autotune Test Drive Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jibonananda, Sanyal; New, Joshua Ryan
2014-02-01
The Oak Ridge Institutional Cluster (OIC) provides general purpose computational resources for ORNL staff to run computation-heavy jobs that are larger than desktop applications but do not quite require the scale and power of the Oak Ridge Leadership Computing Facility (OLCF). This report details the efforts made and conclusions derived in performing a short test drive of the cluster resources on Phase 5 of the OIC. EnergyPlus was used in the analysis as a candidate user program, and the overall software environment was evaluated against anticipated challenges experienced with resources such as the shared-memory Nautilus (JICS) and Titan (OLCF). The OIC performed within reason and was found to be acceptable in the context of running EnergyPlus simulations. The number of cores per node and the availability of scratch space per node allow non-traditional desktop-focused applications to leverage parallel ensemble execution. Although only individual runs of EnergyPlus were executed, the software environment on the OIC appeared suitable to run ensemble simulations with some modifications to the Autotune workflow. From a standpoint of general usability, the system supports common Linux libraries, compilers, standard job scheduling software (Torque/Moab), and the OpenMPI library (the only MPI library) for MPI communications. The file system is a Panasas file system, which the literature indicates to be an efficient file system.
Clustering Molecular Dynamics Trajectories for Optimizing Docking Experiments
De Paris, Renata; Quevedo, Christian V.; Ruiz, Duncan D.; Norberto de Souza, Osmar; Barros, Rodrigo C.
2015-01-01
Molecular dynamics simulations of protein receptors have become an attractive tool for rational drug discovery. However, the high computational cost of employing molecular dynamics trajectories in virtual screening of large repositories threatens the feasibility of this task. Computational intelligence techniques have been applied in this context, with the ultimate goal of reducing the overall computational cost so the task can become feasible. Particularly, clustering algorithms have been widely used as a means to reduce the dimensionality of molecular dynamics trajectories. In this paper, we develop a novel methodology for clustering entire trajectories using structural features from the substrate-binding cavity of the receptor in order to optimize docking experiments on a cloud-based environment. The resulting partition was selected based on three clustering validity criteria, and it was further validated by analyzing the interactions between 20 ligands and a fully flexible receptor (FFR) model containing a 20 ns molecular dynamics simulation trajectory. Our proposed methodology shows that taking into account features of the substrate-binding cavity as input for the k-means algorithm is a promising technique for accurately selecting ensembles of representative structures tailored to a specific ligand. PMID:25873944
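A minimal sketch of the kind of clustering step described above, assuming a hypothetical feature matrix of cavity descriptors (one row per trajectory snapshot); this uses scikit-learn's k-means and is an illustration of the technique, not the authors' pipeline:

```python
# Cluster MD snapshots by substrate-binding-cavity features with k-means and
# pick one representative frame per cluster. Feature names, array shapes, and
# the number of clusters are hypothetical placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def representative_frames(cavity_features: np.ndarray, n_clusters: int = 20):
    """cavity_features: (n_frames, n_features) matrix of cavity descriptors."""
    X = StandardScaler().fit_transform(cavity_features)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    reps = []
    for k in range(n_clusters):
        members = np.flatnonzero(km.labels_ == k)
        # the frame closest to the cluster centroid acts as the representative
        d = np.linalg.norm(X[members] - km.cluster_centers_[k], axis=1)
        reps.append(int(members[np.argmin(d)]))
    return km.labels_, sorted(reps)

# usage with synthetic data: 2000 frames x 12 cavity descriptors
labels, reps = representative_frames(np.random.rand(2000, 12))
```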
Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan
2017-01-01
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased, by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325
Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan
2017-08-04
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased, by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
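A simplified sketch of the kind of resource-constrained block/thread search the TLPOM abstract describes. The per-SM limits below are illustrative assumptions, not a vendor specification or the paper's actual model:

```python
# Enumerate candidate CUDA block sizes (multiples of the warp size) and pick
# the one that keeps the most threads resident per SM under assumed limits on
# registers, shared memory, threads, and blocks. Purely an illustration.
def best_block_size(regs_per_thread, smem_per_block,
                    sm_regs=65536, sm_smem=49152,
                    sm_max_threads=2048, sm_max_blocks=16):
    best = (0, None)  # (resident threads per SM, threads per block)
    for block in range(64, 1025, 32):
        by_regs = sm_regs // (regs_per_thread * block) if regs_per_thread else sm_max_blocks
        by_smem = sm_smem // smem_per_block if smem_per_block else sm_max_blocks
        by_threads = sm_max_threads // block
        blocks = min(by_regs, by_smem, by_threads, sm_max_blocks)
        occupancy = blocks * block
        if occupancy > best[0]:
            best = (occupancy, block)
    return best

print(best_block_size(regs_per_thread=32, smem_per_block=4096))
```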
Experiences using OpenMP based on Compiler Directed Software DSM on a PC Cluster
NASA Technical Reports Server (NTRS)
Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland
2003-01-01
In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.
Data Handling and Communication
NASA Astrophysics Data System (ADS)
Hemmer, Frédéric; Innocenti, Pier Giorgio
The following sections are included: * Introduction * Computing Clusters and Data Storage: The New Factory and Warehouse * Local Area Networks: Organizing Interconnection * High-Speed Worldwide Networking: Accelerating Protocols * Detector Simulation: Events Before the Event * Data Analysis and Programming Environment: Distilling Information * World Wide Web: Global Networking * References
TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling
NASA Astrophysics Data System (ADS)
Nelson, J.; Jones, N.; Ames, D. P.
2015-12-01
Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leverage these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, that have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
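A hedged sketch of the kind of job dispatch that such tools automate: write a plain HTCondor submit description and hand it to condor_submit. This is a generic illustration, not the CondorPy or TethysCluster API; the executable and input file names are hypothetical.

```python
# Generate an HTCondor submit description for an ensemble of model runs and
# submit it. Requires a working HTCondor installation with condor_submit on
# PATH; file names are placeholders.
import subprocess
from pathlib import Path

def submit_model_runs(exe="run_hydro_model.sh", infile="basin_42.in", n_jobs=4):
    submit = Path("model.sub")
    submit.write_text(
        f"executable = {exe}\n"
        f"arguments  = {infile} $(Process)\n"
        "output     = job_$(Process).out\n"
        "error      = job_$(Process).err\n"
        "log        = model.log\n"
        f"transfer_input_files = {infile}\n"
        "should_transfer_files = YES\n"
        f"queue {n_jobs}\n"
    )
    subprocess.run(["condor_submit", str(submit)], check=True)

if __name__ == "__main__":
    submit_model_runs()
```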
Computing and Applying Atomic Regulons to Understand Gene Expression and Regulation
Faria, José P.; Davis, James J.; Edirisinghe, Janaka N.; ...
2016-11-24
Understanding gene function and regulation is essential for the interpretation, prediction, and ultimate design of cell responses to changes in the environment. A multitude of technologies, abstractions, and interpretive frameworks have emerged to answer the challenges presented by genome function and regulatory network inference. Here, we propose a new approach for producing biologically meaningful clusters of coexpressed genes, called Atomic Regulons (ARs), based on expression data, gene context, and functional relationships. We demonstrate this new approach by computing ARs for Escherichia coli, which we compare with the coexpressed gene clusters predicted by two prevalent existing methods: hierarchical clustering and k-means clustering. We test the consistency of ARs predicted by all methods against expected interactions predicted by the Context Likelihood of Relatedness (CLR) mutual information based method, finding that the ARs produced by our approach show better agreement with CLR interactions. We then apply our method to compute ARs for four other genomes: Shewanella oneidensis, Pseudomonas aeruginosa, Thermus thermophilus, and Staphylococcus aureus. We compare the AR clusters from all genomes to study the similarity of coexpression among a phylogenetically diverse set of species, identifying subsystems that show remarkable similarity over wide phylogenetic distances. We also study the sensitivity of our method for computing ARs to the expression data used in the computation, showing that our new approach requires less data than competing approaches to converge to a near final configuration of ARs. We go on to use our sensitivity analysis to identify the specific experiments that lead most rapidly to the final set of ARs for E. coli. As a result, this analysis produces insights into improving the design of gene expression experiments.
Computing and Applying Atomic Regulons to Understand Gene Expression and Regulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faria, José P.; Davis, James J.; Edirisinghe, Janaka N.
Understanding gene function and regulation is essential for the interpretation, prediction, and ultimate design of cell responses to changes in the environment. A multitude of technologies, abstractions, and interpretive frameworks have emerged to answer the challenges presented by genome function and regulatory network inference. Here, we propose a new approach for producing biologically meaningful clusters of coexpressed genes, called Atomic Regulons (ARs), based on expression data, gene context, and functional relationships. We demonstrate this new approach by computing ARs for Escherichia coli, which we compare with the coexpressed gene clusters predicted by two prevalent existing methods: hierarchical clustering and k-means clustering. We test the consistency of ARs predicted by all methods against expected interactions predicted by the Context Likelihood of Relatedness (CLR) mutual information based method, finding that the ARs produced by our approach show better agreement with CLR interactions. We then apply our method to compute ARs for four other genomes: Shewanella oneidensis, Pseudomonas aeruginosa, Thermus thermophilus, and Staphylococcus aureus. We compare the AR clusters from all genomes to study the similarity of coexpression among a phylogenetically diverse set of species, identifying subsystems that show remarkable similarity over wide phylogenetic distances. We also study the sensitivity of our method for computing ARs to the expression data used in the computation, showing that our new approach requires less data than competing approaches to converge to a near final configuration of ARs. We go on to use our sensitivity analysis to identify the specific experiments that lead most rapidly to the final set of ARs for E. coli. As a result, this analysis produces insights into improving the design of gene expression experiments.
clubber: removing the bioinformatics bottleneck in big data analyses.
Miller, Maximilian; Zhu, Chengsheng; Bromberg, Yana
2017-06-13
With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these "big data" analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber's goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment.
clubber: removing the bioinformatics bottleneck in big data analyses
Miller, Maximilian; Zhu, Chengsheng; Bromberg, Yana
2018-01-01
With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these “big data” analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber’s goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment. PMID:28609295
NASA Astrophysics Data System (ADS)
Ji, Junzhong; Song, Xiangjing; Liu, Chunnian; Zhang, Xiuzhen
2013-08-01
Community structure detection in complex networks has been intensively investigated in recent years. In this paper, we propose an adaptive approach based on ant colony clustering to discover communities in a complex network. The focus of the method is the clustering process of an ant colony in a virtual grid, where each ant represents a node in the complex network. During the ant colony search, the method uses a new fitness function to perceive the local environment and employs a pheromone diffusion model as a global information feedback mechanism to realize information exchange among ants. A significant advantage of our method is that the locations in the grid environment and the connections of the complex network structure are simultaneously taken into account as the ants move. Experimental results on computer-generated and real-world networks show the capability of our method to successfully detect community structures.
Computational strategies for three-dimensional flow simulations on distributed computer systems
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Weed, Richard A.
1995-01-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
Computational strategies for three-dimensional flow simulations on distributed computer systems
NASA Astrophysics Data System (ADS)
Sankar, Lakshmi N.; Weed, Richard A.
1995-08-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
Experiences Using OpenMP Based on Compiler Directed Software DSM on a PC Cluster
NASA Technical Reports Server (NTRS)
Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland; Biegel, Bryan (Technical Monitor)
2002-01-01
In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs (personal computers) running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS (NASA Advanced Supercomputing) Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.
HRLSim: a high performance spiking neural network simulator for GPGPU clusters.
Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan
2014-02-01
Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.
Creating a Parallel Version of VisIt for Microsoft Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitlock, B J; Biagas, K S; Rawson, P L
2011-12-07
VisIt is a popular, free interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers, from modest desktops up to massively parallel clusters. VisIt is comprised of a set of cooperating programs. All programs can be run locally or in client/server mode, in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPUs has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases up to 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.
DeRobertis, Christopher V.; Lu, Yantian T.
2010-02-23
A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
Catalysis by clusters with precise numbers of atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyo, Eric C.; Vajda, Stefan
2015-07-03
Clusters that contain only a small number of atoms can exhibit unique and often unexpected properties. The clusters are of particular interest in catalysis because they can act as individual active sites, and minor changes in size and composition – such as the addition or removal of a single atom – can have a substantial influence on the activity and selectivity of a reaction. Here we review recent progress in the synthesis, characterization and catalysis of well-defined sub-nanometre clusters. We examine work on size-selected supported clusters in ultra-high vacuum environments and under realistic reaction conditions, and explore the use of computational methods to provide a mechanistic understanding of their catalytic properties. We also highlight the potential of size-selected clusters to provide insights into important catalytic processes and their use in the development of novel catalytic systems.
A heterogeneous computing environment for simulating astrophysical fluid flows
NASA Technical Reports Server (NTRS)
Cazes, J.
1994-01-01
In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the parallel virtual machine (PVM) language, at periodic intervals data is automatically transferred from the MP-1 to a cluster of workstations where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume rendering workstations.
Linking Associations of Rare Low-Abundance Species to Their Environments by Association Networks
Karpinets, Tatiana V.; Gopalakrishnan, Vancheswaran; Wargo, Jennifer; ...
2018-03-07
Studies of microbial communities by targeted sequencing of rRNA genes lead to the recovery of numerous rare low-abundance taxa with unknown biological roles. We propose to study associations of such rare organisms with their environments using a computational framework based on transformation of the data into qualitative variables. Namely, we analyze the sparse table of putative species or OTUs (operational taxonomic units) and samples generated in such studies, also known as an OTU table, by collecting statistics on co-occurrences of the species and on shared species richness across samples. Based on these statistics we built two association networks, of the rare putative species and of the samples respectively, using a known computational technique, Association networks (Anets), developed for analysis of qualitative data. Clusters of samples and clusters of OTUs are then integrated and combined with the metadata of the study to produce a map of associated putative species in their environments. We tested and validated the framework on two types of microbiomes, of human body sites and of the Populus tree root systems. We show that in both studies the associations of OTUs can separate samples according to environmental or physiological characteristics of the studied systems.
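A minimal sketch of the co-occurrence step described above: score pairs of OTUs by Jaccard co-occurrence across samples and keep edges above a threshold. This is a generic illustration of building a co-occurrence network from an OTU table, not the Anets algorithm itself; the threshold is an arbitrary assumption.

```python
# Build a presence/absence matrix from an OTU table, compute pairwise Jaccard
# co-occurrence of OTUs across samples, and connect pairs above a threshold.
import numpy as np
import networkx as nx

def cooccurrence_network(otu_table: np.ndarray, otu_ids, threshold=0.6):
    """otu_table: (n_otus, n_samples) counts, one row per rare OTU."""
    presence = otu_table > 0
    g = nx.Graph()
    g.add_nodes_from(otu_ids)
    n = presence.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            both = np.logical_and(presence[i], presence[j]).sum()
            either = np.logical_or(presence[i], presence[j]).sum()
            if either and both / either >= threshold:
                g.add_edge(otu_ids[i], otu_ids[j], weight=both / either)
    return g

# usage with a toy 5-OTU x 8-sample table of random counts
table = np.random.poisson(0.7, size=(5, 8))
net = cooccurrence_network(table, [f"OTU{i}" for i in range(5)])
```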
Linking Associations of Rare Low-Abundance Species to Their Environments by Association Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpinets, Tatiana V.; Gopalakrishnan, Vancheswaran; Wargo, Jennifer
Studies of microbial communities by targeted sequencing of rRNA genes lead to the recovery of numerous rare low-abundance taxa with unknown biological roles. We propose to study associations of such rare organisms with their environments using a computational framework based on transformation of the data into qualitative variables. Namely, we analyze the sparse table of putative species or OTUs (operational taxonomic units) and samples generated in such studies, also known as an OTU table, by collecting statistics on co-occurrences of the species and on shared species richness across samples. Based on these statistics we built two association networks, of the rare putative species and of the samples respectively, using a known computational technique, Association networks (Anets), developed for analysis of qualitative data. Clusters of samples and clusters of OTUs are then integrated and combined with the metadata of the study to produce a map of associated putative species in their environments. We tested and validated the framework on two types of microbiomes, of human body sites and of the Populus tree root systems. We show that in both studies the associations of OTUs can separate samples according to environmental or physiological characteristics of the studied systems.
Computational Science in Armenia (Invited Talk)
NASA Astrophysics Data System (ADS)
Marandjian, H.; Shoukourian, Yu.
This survey is devoted to the development of informatics and computer science in Armenia. The results in theoretical computer science (algebraic models, solutions to systems of general form recursive equations, the methods of coding theory, pattern recognition and image processing) constitute the theoretical basis for developing problem-solving-oriented environments. Examples include: a synthesizer of optimized distributed recursive programs, software tools for cluster-oriented implementations of two-dimensional cellular automata, and a grid-aware web interface with advanced service trading for linear algebra calculations. In the direction of solving scientific problems that require high-performance computing resources, examples of completed projects include the fields of physics (parallel computing of complex quantum systems), astrophysics (Armenian virtual laboratory), biology (molecular dynamics study of the human red blood cell membrane), and meteorology (implementing and evaluating the Weather Research and Forecast Model for the territory of Armenia). The overview also notes that the Institute for Informatics and Automation Problems of the National Academy of Sciences of Armenia has established a scientific and educational infrastructure uniting the computing clusters of scientific and educational institutions of the country and provides the scientific community with access to local and international computational resources, which is a strong support for computational science in Armenia.
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computation. In environments with computers of different architectures, operating systems, CPU speeds, memory sizes, loads, and network speeds, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computational environment and algorithms. These tools are dynamic in nature because of changes in the computing environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
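A minimal sketch of one ingredient of such an approach: a static longest-processing-time assignment of blocks to heterogeneous processors, weighting each processor by an assumed relative speed. The dynamic rebalancing and communication-cost modelling of the actual tools are not shown.

```python
# Greedy block-to-processor assignment: always give the next-largest block to
# the processor with the smallest current elapsed time (cost / speed).
import heapq

def assign_blocks(block_costs, proc_speeds):
    """block_costs: work units per block; proc_speeds: relative CPU speeds."""
    heap = [(0.0, p) for p in range(len(proc_speeds))]  # (elapsed time, proc)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(len(proc_speeds))}
    for b, cost in sorted(enumerate(block_costs), key=lambda x: -x[1]):
        t, p = heapq.heappop(heap)              # least-loaded processor so far
        assignment[p].append(b)
        heapq.heappush(heap, (t + cost / proc_speeds[p], p))
    return assignment

# 10 blocks of varying cost on 3 processors of different speeds
print(assign_blocks([4, 9, 3, 7, 5, 6, 2, 8, 1, 5], [1.0, 2.0, 1.5]))
```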
A multi-platform evaluation of the randomized CX low-rank matrix factorization in Spark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittens, Alex; Kottalam, Jey; Yang, Jiyan
We investigate the performance and scalability of the randomized CX low-rank matrix factorization and demonstrate its applicability through the analysis of a 1TB mass spectrometry imaging (MSI) dataset, using Apache Spark on an Amazon EC2 cluster, a Cray XC40 system, and an experimental Cray cluster. We implemented this factorization both as a parallelized C implementation with hand-tuned optimizations and in Scala using the Apache Spark high-level cluster computing framework. We obtained consistent performance across the three platforms: using Spark we were able to process the 1TB dataset in under 30 minutes with 960 cores on all systems, with the fastest times obtained on the experimental Cray cluster. In comparison, the C implementation was 21X faster on the Amazon EC2 system, due to careful cache optimizations, bandwidth-friendly access of matrices and vector computation using SIMD units. We report these results and their implications on the hardware and software issues arising in supporting data-centric workloads in parallel and distributed environments.
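A NumPy sketch of the randomized CX idea: approximate the top-k right singular subspace from a random sketch, derive column leverage scores, sample columns with those probabilities, and form A ≈ C X. This illustrates the algorithm only; it is not the benchmarked Spark or C implementations, and the rank and sample sizes are arbitrary.

```python
import numpy as np

def randomized_cx(A, k=5, c=20, rng=np.random.default_rng(0)):
    m, n = A.shape
    Y = A @ rng.standard_normal((n, k + 10))       # sketch of the range of A
    Q, _ = np.linalg.qr(Y)
    _, _, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    lev = np.sum(Vt[:k] ** 2, axis=0)              # approximate leverage scores
    p = lev / lev.sum()
    cols = rng.choice(n, size=min(c, n), replace=False, p=p)
    C = A[:, cols]
    X = np.linalg.pinv(C) @ A                      # A ~= C X
    return cols, C, X

A = np.random.default_rng(1).standard_normal((200, 80))
cols, C, X = randomized_cx(A)
print(np.linalg.norm(A - C @ X) / np.linalg.norm(A))   # relative error
```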
SCIMITAR: Scalable Stream-Processing for Sensor Information Brokering
2013-11-01
… (IaaS) cloud frameworks including Amazon Web Services and Eucalyptus. For load testing, we used The Grinder [9], a Java load testing framework that … internal Eucalyptus cluster, which we could not scale as large as the Amazon environment due to a lack of computation resources. We recreated our …
Efficient Agent-Based Cluster Ensembles
NASA Technical Reports Server (NTRS)
Agogino, Adrian; Tumer, Kagan
2006-01-01
Numerous domains ranging from distributed data acquisition to knowledge reuse need to solve the cluster ensemble problem of combining multiple clusterings into a single unified clustering. Unfortunately current non-agent-based cluster combining methods do not work in a distributed environment, are not robust to corrupted clusterings and require centralized access to all original clusterings. Overcoming these issues will allow cluster ensembles to be used in fundamentally distributed and failure-prone domains such as data acquisition from satellite constellations, in addition to domains demanding confidentiality such as combining clusterings of user profiles. This paper proposes an efficient, distributed, agent-based clustering ensemble method that addresses these issues. In this approach each agent is assigned a small subset of the data and votes on which final cluster its data points should belong to. The final clustering is then evaluated by a global utility, computed in a distributed way. This clustering is also evaluated using an agent-specific utility that is shown to be easier for the agents to maximize. Results show that agents using the agent-specific utility can achieve better performance than traditional non-agent based methods and are effective even when up to 50% of the agents fail.
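For contrast, a conventional centralized consensus-clustering sketch, the kind of baseline the abstract argues against for distributed settings: build a co-association matrix from several base clusterings and re-cluster it. This is deliberately not the distributed agent/voting scheme proposed in the paper.

```python
# Co-association (evidence accumulation) consensus: count how often each pair
# of points is grouped together across base clusterings, then cluster the
# resulting agreement matrix hierarchically.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus(labelings, n_clusters):
    """labelings: list of 1-D label arrays over the same n points."""
    n = len(labelings[0])
    co = np.zeros((n, n))
    for lab in labelings:
        lab = np.asarray(lab)
        co += (lab[:, None] == lab[None, :])
    co /= len(labelings)                      # fraction of agreements
    dist = 1.0 - co
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

base = [[0, 0, 1, 1, 2, 2], [0, 0, 0, 1, 1, 1], [1, 1, 2, 2, 0, 0]]
print(consensus(base, n_clusters=3))
```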
On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers
NASA Astrophysics Data System (ADS)
Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.
2017-10-01
This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from desktop clusters, through institute clusters, to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.
The coupling of fluids, dynamics, and controls on advanced architecture computers
NASA Technical Reports Server (NTRS)
Atwood, Christopher
1995-01-01
This grant provided for the demonstration of coupled controls, body dynamics, and fluids computations in a workstation cluster environment, and an investigation of the impact of peer-to-peer communication on flow solver performance and robustness. The findings of these investigations were documented in the conference articles. The attached publication, 'Towards Distributed Fluids/Controls Simulations', documents the solution and scaling of the coupled Navier-Stokes, Euler rigid-body dynamics, and state feedback control equations for a two-dimensional canard-wing. The poor scaling shown was due to serialized grid connectivity computation and Ethernet bandwidth limits. The scaling of a peer-to-peer communication flow code on an IBM SP-2 was also shown. The scaling of the code on the switched fabric-linked nodes was good, with a 2.4 percent loss due to communication of intergrid boundary point information. The code performance on 30 worker nodes was 1.7 μs/point/iteration, or a factor of three over a Cray C-90 head. The attached paper, 'Nonlinear Fluid Computations in a Distributed Environment', documents the effect of several computational rate enhancing methods on convergence. For the cases shown, the highest throughput was achieved using boundary updates at each step, with the manager process performing communication tasks only. Constrained domain decomposition of the implicit fluid equations did not degrade the convergence rate or final solution. The scaling of a coupled body/fluid dynamics problem on an Ethernet-linked cluster was also shown.
NASA Astrophysics Data System (ADS)
Walter, R. J.; Protack, S. P.; Harris, C. J.; Caruthers, C.; Kusterer, J. M.
2008-12-01
NASA's Atmospheric Science Data Center at the NASA Langley Research Center performs all of the science data processing for the Multi-angle Imaging SpectroRadiometer (MISR) instrument. MISR is one of the five remote sensing instruments flying aboard NASA's Terra spacecraft. From the time of Terra launch in December 1999 until February 2008, all MISR science data processing was performed on a Silicon Graphics, Inc. (SGI) platform. However, dramatic improvements in commodity computing technology coupled with steadily declining project budgets during that period eventually made transitioning MISR processing to a commodity computing environment both feasible and necessary. The Atmospheric Science Data Center has successfully ported the MISR science data processing environment from the SGI platform to a Linux cluster environment. There were a multitude of technical challenges associated with this transition. Even though the core architecture of the production system did not change, the manner in which it interacted with underlying hardware was fundamentally different. In addition, there are more potential throughput bottlenecks in a cluster environment than there are in a symmetric multiprocessor environment like the SGI platform and each of these had to be addressed. Once all the technical issues associated with the transition were resolved, the Atmospheric Science Data Center had a MISR science data processing system with significantly higher throughput than the SGI platform at a fraction of the cost. In addition to the commodity hardware, free and open source software such as S4PM, Sun Grid Engine, PostgreSQL and Ganglia play a significant role in the new system. Details of the technical challenges and resolutions, software systems, performance improvements, and cost savings associated with the transition will be discussed. The Atmospheric Science Data Center in Langley's Science Directorate leads NASA's program for the processing, archival and distribution of Earth science data in the areas of radiation budget, clouds, aerosols, and tropospheric chemistry. The Data Center was established in 1991 to support NASA's Earth Observing System and the U.S. Global Change Research Program. It is unique among NASA data centers in the size of its archive, cutting edge computing technology, and full range of data services. For more information regarding ASDC data holdings, documentation, tools and services, visit http://eosweb.larc.nasa.gov
Molgenis-impute: imputation pipeline in a box.
Kanterakis, Alexandros; Deelen, Patrick; van Dijk, Freerk; Byelas, Heorhiy; Dijkstra, Martijn; Swertz, Morris A
2015-08-19
Genotype imputation is an important procedure in current genomic analysis such as genome-wide association studies, meta-analyses and fine mapping. Although high quality tools are available that perform the steps of this process, considerable effort and expertise is required to set up and run a best practice imputation pipeline, particularly for larger genotype datasets, where imputation has to scale out in parallel on computer clusters. Here we present MOLGENIS-impute, an 'imputation in a box' solution that seamlessly and transparently automates the set up and running of all the steps of the imputation process. These steps include genome build liftover (liftovering), genotype phasing with SHAPEIT2, quality control, sample and chromosomal chunking/merging, and imputation with IMPUTE2. MOLGENIS-impute builds on MOLGENIS-compute, a simple pipeline management platform for submission and monitoring of bioinformatics tasks in High Performance Computing (HPC) environments like local/cloud servers, clusters and grids. All the required tools, data and scripts are downloaded and installed in a single step. Researchers with diverse backgrounds and expertise have tested MOLGENIS-impute on different locations and imputed over 30,000 samples so far using the 1,000 Genomes Project and new Genome of the Netherlands data as the imputation reference. The tests have been performed on PBS/SGE clusters, cloud VMs and in a grid HPC environment. MOLGENIS-impute gives priority to the ease of setting up, configuring and running an imputation. It has minimal dependencies and wraps the pipeline in a simple command line interface, without sacrificing flexibility to adapt or limiting the options of underlying imputation tools. It does not require knowledge of a workflow system or programming, and is targeted at researchers who just want to apply best practices in imputation via simple commands. It is built on the MOLGENIS compute workflow framework to enable customization with additional computational steps or it can be included in other bioinformatics pipelines. It is available as open source from: https://github.com/molgenis/molgenis-imputation.
A gossip based information fusion protocol for distributed frequent itemset mining
NASA Astrophysics Data System (ADS)
Sohrabi, Mohammad Karim
2018-07-01
The computational complexity, huge memory space requirement, and time-consuming nature of the frequent pattern mining process are the most important motivations for distributing and parallelizing this mining process. On the other hand, the emergence of distributed computational and operational environments, which causes the production and maintenance of data on different distributed data sources, makes the parallelization and distribution of the knowledge discovery process inevitable. In this paper, a gossip based distributed itemset mining (GDIM) algorithm is proposed to extract frequent itemsets, which are special types of frequent patterns, in a wireless sensor network environment. In this algorithm, local frequent itemsets of each sensor are extracted using a bit-wise horizontal approach (LHPM) from nodes that are clustered using a leach-based protocol. Heads of clusters exploit a gossip based protocol to communicate with each other and find the patterns whose global support is equal to or greater than the specified support threshold. Experimental results show that the proposed algorithm outperforms the best existing gossip based algorithm in terms of execution time.
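A sketch of the kind of primitive a gossip-based miner can build on: randomized gossip averaging of per-node counts, from which every node can estimate an itemset's global support. This is a simplified, synchronous illustration, not the GDIM protocol.

```python
# Each cluster head holds a local occurrence count and a local transaction
# count for one candidate itemset. Random pairs repeatedly average their
# state; at every node the ratio converges to sum(counts) / sum(sizes),
# i.e. the global support.
import random

def gossip_support(local_counts, local_sizes, rounds=200, seed=0):
    rng = random.Random(seed)
    n = len(local_counts)
    state = [[float(c), float(s)] for c, s in zip(local_counts, local_sizes)]
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)                    # random pair of heads
        avg = [(a + b) / 2.0 for a, b in zip(state[i], state[j])]
        state[i], state[j] = list(avg), list(avg)
    c, s = state[0]
    return c / s

counts = [40, 10, 25, 5]      # local occurrences of the itemset (illustrative)
sizes = [200, 100, 150, 50]   # local transaction counts (illustrative)
print(round(gossip_support(counts, sizes), 3))            # close to 80/500 = 0.16
```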
HPC on Competitive Cloud Resources
NASA Astrophysics Data System (ADS)
Bientinesi, Paolo; Iakymchuk, Roman; Napper, Jeff
Computing as a utility has reached the mainstream. Scientists can now easily rent time on large commercial clusters that can be expanded and reduced on demand in real time. However, current commercial cloud computing performance falls short of systems specifically designed for scientific applications. Scientific computing needs are quite different from those of the web applications that have been the focus of cloud computing vendors. In this chapter we demonstrate through empirical evaluation the computational efficiency of high-performance numerical applications in a commercial cloud environment when resources are shared under high contention. Using the Linpack benchmark as a case study, we show that cache utilization becomes highly unpredictable and similarly affects computation time. For some problems, not only is it more efficient to underutilize resources, but the solution can be reached sooner in real time (wall time). We also show that the smallest, cheapest (64-bit) instance in the studied environment offers the best price-to-performance ratio. In light of the high contention we witness, we believe that alternative definitions of efficiency for commercial cloud environments should be introduced where strong performance guarantees do not exist. Concepts like average and expected performance, expected execution time, expected cost to completion, and variance measures--traditionally ignored in the high-performance computing context--now should complement or even substitute the standard definitions of efficiency.
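A tiny worked example of the "expected cost to completion" idea under performance variance; all prices and timings below are hypothetical and purely illustrative.

```python
# Compare two hypothetical instance types on expected wall time and expected
# cost, given repeated runs of the same job under contention.
def expected_cost(price_per_hour, runtimes_hours):
    """runtimes_hours: observed wall times of the same job on this instance."""
    expected_runtime = sum(runtimes_hours) / len(runtimes_hours)
    return expected_runtime * price_per_hour, expected_runtime

# the small instance is slower and noisier, but its lower hourly price still
# wins on expected cost for this (made-up) workload
small = expected_cost(0.10, [3.1, 4.0, 3.6, 5.2])   # high variance
large = expected_cost(0.40, [1.1, 1.2, 1.1, 1.3])
print("small (cost, hours):", small, " large (cost, hours):", large)
```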
Development of a Web Based Simulating System for Earthquake Modeling on the Grid
NASA Astrophysics Data System (ADS)
Seber, D.; Youn, C.; Kaiser, T.
2007-12-01
Existing cyberinfrastructure-based information, data and computational networks now allow development of state-of-the-art, user-friendly simulation environments that democratize access to high-end computational environments and provide new research opportunities for many research and educational communities. Within the Geosciences cyberinfrastructure network, GEON, we have developed the SYNSEIS (SYNthetic SEISmogram) toolkit to enable efficient computations of 2D and 3D seismic waveforms for a variety of research purposes, especially for helping to analyze the EarthScope USArray seismic data in a speedy and efficient environment. The underlying simulation software in SYNSEIS is a finite difference code, E3D, developed by LLNL (S. Larsen). The code is embedded within the SYNSEIS portlet environment and is used by our toolkit to simulate seismic waveforms of earthquakes at regional distances (<1000 km). Architecturally, SYNSEIS uses both Web Service and Grid computing resources in a portal-based work environment and has a built-in access mechanism to connect to national supercomputer centers as well as to a dedicated, small-scale compute cluster for its runs. Even though Grid computing is well established in many computing communities, its use among domain scientists is still not trivial because of the multiple levels of complexity encountered. We grid-enabled E3D using our own XML input dialect, which includes geological models that are accessible through standard Web services within the GEON network. The XML inputs for this application contain structural geometries, source parameters, seismic velocities, densities, attenuation values, the number of time steps to compute, and the number of stations. By enabling portal-based access to such a computational environment, coupled with its dynamic user interface, we enable a large user community to take advantage of such high-end calculations in their research and educational activities. Our system can be used to promote an efficient and effective modeling environment to help scientists as well as educators in their daily activities and to speed up the scientific discovery process.
High Performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions
2016-08-30
Final Report: High-performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions. A dedicated high-performance computer cluster was … Sponsoring/monitoring agency: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211.
Radiative Feedback of Forming Star Clusters on Their GMC Environments: Theory and Simulation
NASA Astrophysics Data System (ADS)
Howard, C. S.; Pudritz, R. E.; Harris, W. E.
2013-07-01
Star clusters form from dense clumps within a molecular cloud. Radiation from these newly formed clusters feeds back on their natal molecular cloud through heating and ionization which ultimately stops gas accretion into the cluster. Recent studies suggest that radiative feedback effects from a single cluster may be sufficient to disrupt an entire cloud over a short timescale. Simulating cluster formation on a large scale, however, is computationally demanding due to the high number of stars involved. For this reason, we present a model for representing the radiative output of an entire cluster which involves randomly sampling an initial mass function (IMF) as the cluster accretes mass. We show that this model is able to reproduce the star formation histories of observed clusters. To examine the degree to which radiative feedback shapes the evolution of a molecular cloud, we use the FLASH adaptive-mesh refinement hydrodynamics code to simulate cluster formation in a turbulent cloud. Unlike previous studies, sink particles are used to represent a forming cluster rather than individual stars. Our cluster model is then coupled with a raytracing scheme to treat radiative transfer as the clusters grow in mass. This poster will outline the details of our model and present preliminary results from our 3D hydrodynamical simulations.
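A sketch of the general "sample an IMF as the cluster accretes mass" idea described above: draw stellar masses from a Salpeter power law until the accreted gas mass is exhausted. The slope (alpha = 2.35) and mass limits (0.1-100 Msun) are standard textbook assumptions, not necessarily those used in the paper.

```python
# Inverse-transform sampling of a Salpeter IMF, dN/dm ~ m^-alpha, and a loop
# that populates a cluster with stars until the accreted mass budget is spent.
import numpy as np

def sample_salpeter(m_lo=0.1, m_hi=100.0, alpha=2.35, rng=np.random.default_rng(0)):
    u = rng.random()
    a = 1.0 - alpha
    return (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)

def populate_cluster(accreted_mass, rng=np.random.default_rng(0)):
    stars, total = [], 0.0
    while True:
        m = sample_salpeter(rng=rng)
        if total + m > accreted_mass:        # stop before exceeding the budget
            break
        stars.append(m)
        total += m
    return np.array(stars)

stars = populate_cluster(1.0e3)              # 1000 Msun of accreted gas
print(len(stars), round(stars.sum(), 1))
```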
2014-02-01
… idle waiting for the wavefront to reach it. To overcome this, Reeve et al. (2001) developed a scheme in analogy to the red-black Gauss-Seidel iterative … understandable procedure calls. Parallelization of the SIMPLE iterative scheme with SIP used a red-black scheme similar to the red-black Gauss-Seidel … scheme, the SIMPLE method, for pressure-velocity coupling. The result is a slowing convergence of the outer iterations. The red-black scheme excites a 2 …
Singh, Dadabhai T; Trehan, Rahul; Schmidt, Bertil; Bretschneider, Timo
2008-01-01
Preparedness for a possible global pandemic caused by viruses such as the highly pathogenic influenza A subtype H5N1 has become a global priority. In particular, it is critical to monitor the appearance of any new emerging subtypes. Comparative phyloinformatics can be used to monitor, analyze, and possibly predict the evolution of viruses. However, in order to utilize the full functionality of available analysis packages for large-scale phyloinformatics studies, a team of computer scientists, biostatisticians and virologists is needed--a requirement which cannot be fulfilled in many cases. Furthermore, the time complexities of many of the algorithms involved lead to prohibitive runtimes on sequential computer platforms. This has so far hindered the use of comparative phyloinformatics as a commonly applied tool in this area. In this paper the graphical-oriented workflow design system called Quascade and its efficient usage for comparative phyloinformatics are presented. In particular, we focus on how this task can be effectively performed in a distributed computing environment. As a proof of concept, the designed workflows are used for the phylogenetic analysis of the neuraminidase of H5N1 isolates (micro level) and of influenza viruses (macro level). The results of this paper are hence twofold. Firstly, this paper demonstrates the usefulness of a graphical user interface system to design and execute complex distributed workflows for large-scale phyloinformatics studies of virus genes. Secondly, the analysis of neuraminidase at different levels of complexity provides valuable insights into this virus's tendency for geographically based clustering in the phylogenetic tree and also shows the importance of glycan sites in its molecular evolution. The current study demonstrates the efficiency and utility of workflow systems in providing a biologist-friendly approach to complex biological dataset analysis using high performance computing. In particular, the utility of the platform Quascade for deploying distributed and parallelized versions of a variety of computationally intensive phylogenetic algorithms has been shown. Secondly, the analysis of the utilized H5N1 neuraminidase datasets at macro and micro levels has clearly indicated a pattern of spatial clustering of the H5N1 viral isolates based on geographical distribution rather than temporal or host-range-based clustering.
Efficient and Flexible Climate Analysis with Python in a Cloud-Based Distributed Computing Framework
NASA Astrophysics Data System (ADS)
Gannon, C.
2017-12-01
As climate models become progressively more advanced, and spatial resolution further improved through various downscaling projects, climate projections at a local level are increasingly insightful and valuable. However, the raw size of climate datasets presents numerous hurdles for analysts wishing to develop customized climate risk metrics or perform site-specific statistical analysis. Four Twenty Seven, a climate risk consultancy, has implemented a Python-based distributed framework to analyze large climate datasets in the cloud. With the freedom afforded by efficiently processing these datasets, we are able to customize and continually develop new climate risk metrics using the most up-to-date data. Here we outline our process for using Python packages such as XArray and Dask to evaluate netCDF files in a distributed framework, StarCluster to operate in a cluster-computing environment, cloud computing services to access publicly hosted datasets, and how this setup is particularly valuable for generating climate change indicators and performing localized statistical analysis.
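A hedged sketch of the workflow described above: open a hypothetical collection of daily-maximum-temperature NetCDF files with dask-backed xarray, compute a simple indicator (days above 35 °C per year) for one location, and trigger the distributed computation. The file pattern, variable name, and coordinates are placeholders, and the cluster setup (e.g. via StarCluster) is assumed to be in place already.

```python
import xarray as xr

# lazily open many NetCDF files as one dask-backed dataset
ds = xr.open_mfdataset("tasmax_*.nc", combine="by_coords",
                       chunks={"time": 365})

# nearest grid cell to an (assumed) site of interest
site = ds["tasmax"].sel(lat=37.8, lon=-122.3, method="nearest")

# count days per year exceeding 35 C (data assumed to be in Kelvin)
hot_days = (site > 273.15 + 35.0).groupby("time.year").sum("time")

print(hot_days.compute())   # .compute() dispatches the dask task graph
```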
Random Walk Method for Potential Problems
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Raju, I. S.
2002-01-01
A local Random Walk Method (RWM) for potential problems governed by Laplace's and Poisson's equations is developed for two- and three-dimensional problems. The RWM is implemented and demonstrated in a multiprocessor parallel environment on a Beowulf cluster of computers. A speed gain of 16 is achieved as the number of processors is increased from 1 to 23.
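A minimal serial sketch of the lattice random-walk idea for Laplace's equation: the potential at an interior grid point is estimated as the average boundary value reached by many random walks started from that point. The domain, grid size, and boundary condition below are illustrative assumptions, not the paper's test cases; the parallel version would simply distribute walks across processors.

```python
import random

def rw_potential(start, n, boundary_value, walks=2000, seed=1):
    """Unit square discretized on an n x n grid; walk until the boundary."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        i, j = start
        while 0 < i < n - 1 and 0 < j < n - 1:
            di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary_value(i, j, n)
    return total / walks        # Monte Carlo estimate of u(start)

# boundary condition: u = 1 on the top edge, 0 on the other three edges
def bc(i, j, n):
    return 1.0 if j == n - 1 else 0.0

# centre of the square; the exact value there is 0.25 by symmetry
print(rw_potential((10, 10), n=21, boundary_value=bc))
```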
Parallel hyperbolic PDE simulation on clusters: Cell versus GPU
NASA Astrophysics Data System (ADS)
Rostrup, Scott; De Sterck, Hans
2010-12-01
Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications.
Program summary
Program title: SWsolver
Catalogue identifier: AEGY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v3
No. of lines in distributed program, including test data, etc.: 59 168
No. of bytes in distributed program, including test data, etc.: 453 409
Distribution format: tar.gz
Programming language: C, CUDA
Computer: parallel computing clusters; individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator
Operating system: Linux
Has the code been vectorised or parallelized?: Yes; tested on 1-128 x86 CPU cores, 1-32 Cell processors, and 1-32 NVIDIA GPUs
RAM: tested on problems requiring up to 4 GB per compute node
Classification: 12
External routines: MPI, CUDA, IBM Cell SDK
Nature of problem: MPI-parallel simulation of the shallow water equations using a high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell processor, and NVIDIA GPU using CUDA
Solution method: SWsolver provides 3 implementations of a high-resolution 2D shallow water equation solver on regular Cartesian grids, for CPU, Cell processor, and NVIDIA GPU; each implementation uses MPI to divide work across a parallel computing cluster
Additional comments: sub-program numdiff is used for the test run
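As a minimal, single-node illustration of the kind of explicit shallow-water update such a solver performs (not the distributed, high-resolution SWsolver code itself), a first-order Lax-Friedrichs step in NumPy might look like this; the dam-break setup and time step are illustrative assumptions:

```python
import numpy as np

g = 9.81

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def lax_friedrichs_step(h, hu, dx, dt):
    U = np.array([h, hu])
    F = flux(h, hu)
    Un = U.copy()
    # interior update: average neighbours and difference their fluxes
    Un[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) \
                  - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2])
    Un[:, 0], Un[:, -1] = Un[:, 1], Un[:, -2]   # simple outflow boundaries
    return Un[0], Un[1]

# dam-break initial condition on [0, 1]
x = np.linspace(0.0, 1.0, 401)
h = np.where(x < 0.5, 2.0, 1.0)
hu = np.zeros_like(x)
dx, dt = x[1] - x[0], 0.0004                    # CFL-safe for these depths
for _ in range(500):
    h, hu = lax_friedrichs_step(h, hu, dx, dt)
print(h.min(), h.max())
```

In the cluster codes, each MPI rank would own a strip of the grid and exchange one row of halo cells with its neighbours before every such update.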
FOSS GIS on the GFZ HPC cluster: Towards a service-oriented Scientific Geocomputation Environment
NASA Astrophysics Data System (ADS)
Loewe, P.; Klump, J.; Thaler, J.
2012-12-01
High performance compute clusters can be used as geocomputation workbenches. Their wealth of resources enables us to take on geocomputation tasks which exceed the limitations of smaller systems. These general capabilities can be harnessed via tools such as a Geographic Information System (GIS), provided they are able to utilize the available cluster configuration/architecture and provide a sufficient degree of user friendliness to allow for wide application. While server-level computing is clearly not sufficient for the growing number of data- or computation-intensive tasks undertaken, these tasks do not come close to the requirements for access to "top shelf" national cluster facilities. Until recently, this kind of geocomputation research was therefore effectively barred by a lack of access to adequate resources. In this paper we report on the experiences gained by providing GRASS GIS as a software service on an HPC compute cluster at the German Research Centre for Geosciences using Platform Computing's Load Sharing Facility (LSF). GRASS GIS is the oldest and largest Free Open Source (FOSS) GIS project. During ramp-up in 2011, multiple versions of GRASS GIS (v 6.4.2, 6.5 and 7.0) were installed on the HPC compute cluster, which currently consists of 234 nodes with 480 CPUs providing 3084 cores. Nineteen different processing queues with varying hardware capabilities and priorities are provided, allowing for fine-grained scheduling and load balancing. After successful initial testing, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008) and allow all 3084 cores to be used for GRASS-based geocomputation work, although in practice applications are limited to the resources assigned to their respective queue. Applications of the new GIS functionality so far comprise hydrological analysis, remote sensing, and the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). This has included the processing of complex problems requiring significant amounts of processing time, up to a full 20 CPU days. This GRASS GIS-based service is provided as a research utility in the sense of "Software as a Service" (SaaS) and is a first step towards a GFZ corporate cloud service.
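The following is a hypothetical sketch of how a scripted GRASS task might be dispatched to one of the LSF processing queues described above; the queue name, script path and location/mapset are invented for illustration, and the actual GFZ deployment mechanisms may differ.

    # Hypothetical sketch of dispatching a scripted GRASS 6 task to an LSF queue.
    # Queue name, script path and location/mapset are illustrative only.
    import os
    import subprocess

    env = os.environ.copy()
    env["GRASS_BATCH_JOB"] = "/home/user/jobs/flow_accumulation.sh"  # GRASS 6 batch-job hook

    cmd = [
        "bsub",                 # LSF job submission command
        "-q", "geocomp_long",   # assumed processing queue name
        "-n", "1",              # one core per scripted GRASS task
        "-o", "flowacc.%J.out", # capture stdout per LSF job ID
        "grass64", "-text", "/grassdata/tsunami_med/PERMANENT",
    ]
    subprocess.run(cmd, env=env, check=True)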
Specialized Computer Systems for Environment Visualization
NASA Astrophysics Data System (ADS)
Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.
2018-06-01
The need for real-time image generation of landscapes arises in various fields as part of the tasks solved by virtual and augmented reality systems, as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing and graphically visualizing geographic data. Algorithmic, hardware and software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path tracing algorithm with a two-level hierarchy of bounding volumes and intersection tests against axis-aligned bounding boxes. The proposed algorithm eliminates branching and hence is better suited to implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used to solve the problem of qualitative visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: clusters and Compute Unified Device Architecture (CUDA) networks. Results show that the implementation on MPI clusters is more efficient than on Graphics Processing Units/Graphics Processing Clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are proposed. After analyzing the feasibility of realizing each stage on a parallel GPU architecture, 3D pseudo-stereo synthesis is performed. An experimental prototype of a specialized hardware-software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient. It also accelerates the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without performing optimization procedures. The acceleration is on average 11 and 54 times for the test GPUs.
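The branchless ray/axis-aligned-bounding-box test mentioned above is commonly implemented with the slab method; a minimal numpy sketch (with illustrative names, not the authors' code) is:

    # Minimal numpy sketch of the branchless slab test for ray/AABB intersection.
    import numpy as np

    def ray_aabb_hit(origin, inv_dir, box_min, box_max):
        """Return True if the ray origin + t*dir hits the AABB for some t >= 0.
        inv_dir is 1/dir, precomputed so the test uses only min/max, no branches."""
        t1 = (box_min - origin) * inv_dir
        t2 = (box_max - origin) * inv_dir
        t_near = np.max(np.minimum(t1, t2))   # latest entry across the three slabs
        t_far  = np.min(np.maximum(t1, t2))   # earliest exit across the three slabs
        return (t_far >= t_near) and (t_far >= 0.0)

    # Example: a ray along +x starting left of the unit box.
    print(ray_aabb_hit(np.array([-2.0, 0.5, 0.5]),
                       1.0 / np.array([1.0, 1e-9, 1e-9]),   # avoid literal zeros in dir
                       np.zeros(3), np.ones(3)))            # -> True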
Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid
2016-01-01
A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scheduling scientific applications in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
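For reference, the makespan metric on which the reported improvements are measured is simply the finishing time of the most heavily loaded resource under a given task-to-VM assignment; a minimal sketch with made-up task lengths and VM speeds (not the GBLCA algorithm itself) is:

    # Minimal sketch of the makespan metric: finishing time of the most loaded VM.
    from collections import defaultdict

    def makespan(assignment, task_length, vm_mips):
        """assignment: task -> vm, task_length: millions of instructions,
        vm_mips: vm -> processing speed (MIPS)."""
        finish = defaultdict(float)
        for task, vm in assignment.items():
            finish[vm] += task_length[task] / vm_mips[vm]
        return max(finish.values())

    lengths = {"t1": 4000.0, "t2": 1000.0, "t3": 2500.0}   # illustrative values
    speeds  = {"vm1": 500.0, "vm2": 250.0}
    print(makespan({"t1": "vm1", "t2": "vm2", "t3": "vm2"}, lengths, speeds))  # 14.0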
Architecture Adaptive Computing Environment
NASA Technical Reports Server (NTRS)
Dorband, John E.
2006-01-01
Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.
NASA Astrophysics Data System (ADS)
Knosp, B.; Neely, S.; Zimdars, P.; Mills, B.; Vance, N.
2007-12-01
The Microwave Limb Sounder (MLS) Science Computing Facility (SCF) stores over 50 terabytes of data, has over 240 computer processing hosts, and 64 users from around the world. These resources are spread over three primary geographical locations - the Jet Propulsion Laboratory (JPL), Raytheon RIS, and New Mexico Institute of Mining and Technology (NMT). A need for a grid network system was identified and defined to solve the problem of users competing for finite, and increasingly scarce, MLS SCF computing resources. Using Sun's Grid Engine software, a grid network was successfully created in a development environment that connected the JPL and Raytheon sites, established master and slave hosts, and demonstrated that transfer queues for jobs can work among multiple clusters in the same grid network. This poster will first describe MLS SCF resources and the lessons that were learned in the design and development phase of this project. It will then go on to discuss the test environment and plans for deployment by highlighting benchmarks and user experiences.
Condor-COPASI: high-throughput computing for biochemical networks
2012-01-01
Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945
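As a rough sketch of the pattern Condor-COPASI automates (splitting a large scan into chunks and describing them to a Condor pool), the snippet below writes an HTCondor submit file; the executable and file names are placeholders, not Condor-COPASI internals.

    # Hedged sketch of the general pattern: split a parameter scan into chunks and
    # describe them to a Condor pool with a single submit file (placeholder names).
    N_RUNS, CHUNK = 10000, 250                 # total scan points, points per job
    n_jobs = (N_RUNS + CHUNK - 1) // CHUNK

    submit = f"""executable = run_copasi_chunk.sh
    arguments  = chunk_$(Process).cps
    output     = chunk_$(Process).out
    error      = chunk_$(Process).err
    log        = scan.log
    queue {n_jobs}
    """
    with open("scan.submit", "w") as fh:
        fh.write(submit)
    # The user would then run `condor_submit scan.submit`; Condor expands
    # $(Process) to 0..n_jobs-1, one job per chunk.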
Development of a small-scale computer cluster
NASA Astrophysics Data System (ADS)
Wilhelm, Jay; Smith, Justin T.; Smith, James E.
2008-04-01
An increase in demand for computing power in academia has created a need for high-performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers, with the proper software, can multiply the performance of a single computer. Cluster computing has therefore become a much sought-after technology. Typical desktop computers could be used for cluster computing, but they are not intended for constant full-speed operation and take up more space than rack-mount servers. Specialty computers that are designed to be used in clusters meet high-availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack-mount configuration, gaining the space savings of traditional rack-mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components that multiplies the performance of a single desktop machine, while minimizing occupied space and remaining cost effective.
Parallel computing in genomic research: advances and applications
Ocaña, Kary; de Oliveira, Daniel
2015-01-01
Today's genomic experiments have to process the so-called "biological big data", which is now reaching the scale of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments so as to benefit from parallelism techniques and HPC capabilities. PMID:26604801
Visualization of Unsteady Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Haimes, Robert
1997-01-01
The current compute environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. Massively Parallel Processors (MPP's), such as the 160-node IBM SP2 at NAS, and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array and the J90 cluster) provide the required computational bandwidth for CFD calculations of transient problems. If we follow the traditional computational analysis steps for CFD (and we wish to construct an interactive visualizer) we need to be aware of the following: (1) Disk space requirements. A single snap-shot must contain at least the values (primitive variables) stored at the appropriate locations within the mesh. For most simple 3D Euler solvers that means 5 floating point words. Navier-Stokes solutions with turbulence models may contain 7 state variables. (2) Disk speed vs. computational speed. The time required to read the complete solution of a saved time frame from disk is now longer than the compute time for a set number of iterations from an explicit solver. Depending on the hardware and solver, an iteration of an implicit code may also take less time than reading the solution from disk. If one examines the performance improvements of the last decade or two, it is easy to see that relying on disk performance (vs. CPU improvement) may not be the best method for enhancing interactivity. (3) Cluster and parallel machine I/O problems. Disk access time is much worse within current parallel machines and clusters of workstations that are acting in concert to solve a single problem. In this case we are not trying to read the volume of data; we are running the solver and the solver outputs the solution. These traditional network interfaces must be used for the file system. (4) Numerics of particle traces. Most visualization tools can work on a single snap-shot of the data, but some visualization tools for transient problems require dealing with time.
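A back-of-the-envelope calculation makes point (1) concrete; the mesh size, frame count and word size below are assumed for illustration only.

    # Back-of-the-envelope sketch of the disk footprint of saving every time frame.
    n_nodes   = 5_000_000          # mesh points (assumed)
    n_frames  = 2_000              # saved time steps (assumed)
    bytes_per = 4                  # single-precision floating point word

    for name, n_vars in [("Euler (5 state variables)", 5),
                         ("Navier-Stokes + turbulence (7)", 7)]:
        per_frame = n_nodes * n_vars * bytes_per
        total = per_frame * n_frames
        print(f"{name}: {per_frame/1e9:.2f} GB per frame, {total/1e12:.2f} TB total")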
Freud: a software suite for high-throughput simulation analysis
NASA Astrophysics Data System (ADS)
Harper, Eric; Spellings, Matthew; Anderson, Joshua; Glotzer, Sharon
Computer simulation is an indispensable tool for the study of a wide variety of systems. As simulations scale to fill petascale and exascale supercomputing clusters, so too does the size of the data produced, as well as the difficulty in analyzing these data. We present Freud, an analysis software suite for efficient analysis of simulation data. Freud makes no assumptions about the system being analyzed, allowing for general analysis methods to be applied to nearly any type of simulation. Freud includes standard analysis methods such as the radial distribution function, as well as new methods including the potential of mean force and torque and local crystal environment analysis. Freud combines a Python interface with fast, parallel C++ analysis routines to run efficiently on laptops, workstations, and supercomputing clusters. Data analysis on clusters reduces data transfer requirements, a prohibitive cost for petascale computing. Used in conjunction with simulation software, Freud allows for smart simulations that adapt to the current state of the system, enabling the study of phenomena such as nucleation and growth, intelligent investigation of phases and phase transitions, and determination of effective pair potentials.
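As an illustration of one of the standard analyses named above, the sketch below computes a radial distribution function for random points in a periodic box with plain numpy; it does not use the Freud API, and the system is made up.

    # Minimal numpy sketch of a radial distribution function for a periodic box.
    import numpy as np

    rng = np.random.default_rng(0)
    L, n, r_max, n_bins = 10.0, 500, 4.0, 40
    pos = rng.uniform(0.0, L, size=(n, 3))

    # Minimum-image pairwise distances in a cubic periodic box.
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    r = np.linalg.norm(d, axis=-1)[np.triu_indices(n, k=1)]

    hist, edges = np.histogram(r[r < r_max], bins=n_bins, range=(0.0, r_max))
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = shell_vol * (n / L**3) * n / 2.0      # expected pair counts for an ideal gas
    g_r = hist / ideal                            # g(r) ~ 1 everywhere for random points
    print(g_r.mean())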
Numerical taxonomy of heavy metal tolerant bacteria isolated from the estuarine environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, D.A.; Austin, B.; Mills, A.L.
1977-01-01
Metal tolerant bacteria, totalling 301 strains, were isolated from water and sediment samples collected from Chesapeake Bay. Growth in the presence of 100 ppm cadmium, chromium, cobalt, lead, mercury and molybdenum was tested. In addition, the strains were examined for 118 biochemical, cultural, morphological, nutritional and physiological characters and the data were analyzed by computer, using the simple matching and Jaccard coefficients. From sorted similarity matrices, 293 strains, 97% of the total, were removed in 12 clusters defined at the 80 to 85% similarity level. The clusters included Bacillus and Pseudomonas spp. and genera and species of Enterobacteriaceae. Three clusters, containing gram negative rods, were not identified. Several of the clusters were composed of strains exhibiting tolerance to a wide range of heavy metals, whereas three of the clusters contained bacteria that were capable of growth in the presence of only a few of the metals examined in this study. Antibiotic resistance of the metal resistant strains has also been examined.
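The two similarity coefficients named above are straightforward to compute for binary character data; a small sketch with invented strain profiles:

    # Sketch of the simple matching and Jaccard coefficients for binary characters
    # (1 = character positive, 0 = negative); the example strains are made up.
    def simple_matching(a, b):
        matches = sum(x == y for x, y in zip(a, b))
        return matches / len(a)

    def jaccard(a, b):
        both = sum(x == 1 and y == 1 for x, y in zip(a, b))
        either = sum(x == 1 or y == 1 for x, y in zip(a, b))
        return both / either if either else 0.0   # negative matches are ignored

    strain_a = [1, 1, 0, 0, 1, 0, 1, 1]
    strain_b = [1, 0, 0, 0, 1, 1, 1, 1]
    print(simple_matching(strain_a, strain_b), jaccard(strain_a, strain_b))
    # 0.75 and 4/6 ≈ 0.667; clusters would be cut from the sorted matrix at ~80-85%.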
NASA Astrophysics Data System (ADS)
Erberich, Stephan G.; Hoppe, Martin; Jansen, Christian; Schmidt, Thomas; Thron, Armin; Oberschelp, Walter
2001-08-01
In the last few years more and more university hospitals, as well as private hospitals, have changed to digital information systems for patient records, diagnostic files and digital images. Not only does patient management become easier; it is also remarkable how much clinical research can profit from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially image databases. Although images are available at the fingertip, difficulties arise when image data needs to be processed, e.g. segmented, classified or co-registered, which usually demands a lot of computational power. Today's clinical environment supports PACS very well, but real image processing is still under-developed. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components, and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique we present our clinical experience with the crucial but cost-intensive motion correction of clinical routine and research functional MRI (fMRI) data, as it is processed in our lab on a daily basis.
Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong
2015-08-07
Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing, using a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data while retaining a defined amount of variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
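A hedged sketch of the PCA step with an error bound follows; the clustering and cluster-head selection steps are not reproduced, and the data and bound are illustrative, not the paper's scheme.

    # Keep the fewest principal components whose residual variance fraction
    # stays under an assumed bound; data are synthetic correlated readings.
    import numpy as np

    def pca_compress(X, max_rel_error=0.05):
        """X: (samples, sensors) readings from one cluster. Returns the scores,
        components and mean needed to reconstruct X within the variance bound."""
        mean = X.mean(axis=0)
        Xc = X - mean
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        frac = np.cumsum(s**2) / np.sum(s**2)          # captured variance fraction
        k = int(np.argmax(frac >= 1.0 - max_rel_error)) + 1
        scores = U[:, :k] * s[:k]
        return scores, Vt[:k], mean

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 12))   # correlated sensor data
    scores, comps, mean = pca_compress(X)
    X_hat = scores @ comps + mean
    print(comps.shape[0], np.linalg.norm(X - X_hat) / np.linalg.norm(X))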
Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work
NASA Technical Reports Server (NTRS)
Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David
2005-01-01
Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.
Spin-orbit effects on the (119)Sn magnetic-shielding tensor in solids: a ZORA/DFT investigation.
Alkan, Fahri; Holmes, Sean T; Iuliucci, Robbie J; Mueller, Karl T; Dybowski, Cecil
2016-07-28
Periodic-boundary and cluster calculations of the magnetic-shielding tensors of (119)Sn sites in various co-ordination and stereochemical environments are reported. The results indicate a significant difference between the predicted NMR chemical shifts for tin(II) sites that exhibit stereochemically-active lone pairs and tin(IV) sites that do not have stereochemically-active lone pairs. The predicted magnetic shieldings determined either with the cluster model treated with the ZORA/Scalar Hamiltonian or with the GIPAW formalism are dependent on the oxidation state and the co-ordination geometry of the tin atom. The inclusion of relativistic effects at the spin-orbit level removes systematic differences in computed magnetic-shielding parameters between tin sites of differing stereochemistries, and brings computed NMR shielding parameters into significant agreement with experimentally-determined chemical-shift principal values. Slight improvement in agreement with experiment is noted in calculations using hybrid exchange-correlation functionals.
Cluster-state quantum computing enhanced by high-fidelity generalized measurements.
Biggerstaff, D N; Kaltenbaek, R; Hamel, D R; Weihs, G; Rudolph, T; Resch, K J
2009-12-11
We introduce and implement a technique to extend the quantum computational power of cluster states by replacing some projective measurements with generalized quantum measurements (POVMs). As an experimental demonstration we fully realize an arbitrary three-qubit cluster computation by implementing a tunable linear-optical POVM, as well as fast active feedforward, on a two-qubit photonic cluster state. Over 206 different computations, the average output fidelity is 0.9832+/-0.0002; furthermore, the error contribution from our POVM device and feedforward is only of order 10(-3), less than some recent thresholds for fault-tolerant cluster computing.
Hybrid Cloud Computing Environment for EarthCube and Geoscience Community
NASA Astrophysics Data System (ADS)
Yang, C. P.; Qin, H.
2016-12-01
The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments, using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronization and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience community can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g. images, port numbers, usable cloud capacity, etc.) of each project in advance, based on communications between ECITE and the participant projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents or data without having to deal with the heterogeneity in structure and operations among different cloud platforms.
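As a hedged illustration of the public-cloud side of such bursting, the snippet below launches a single worker VM on AWS EC2 with boto3; the AMI ID, instance type and key name are placeholders, and this is not the ECITE platform's code.

    # Placeholder example of launching one worker VM on EC2 with boto3.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # project-specific machine image (placeholder)
        InstanceType="m4.large",           # assumed instance type
        MinCount=1,
        MaxCount=1,
        KeyName="earthcube-project-key",   # assumed key pair
    )
    print(response["Instances"][0]["InstanceId"])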
Are Cloud Environments Ready for Scientific Applications?
NASA Astrophysics Data System (ADS)
Mehrotra, P.; Shackleford, K.
2011-12-01
Cloud computing environments are becoming widely available in both the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments, as evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows, particularly those of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to multiple cloud environments including NASA's Nebula environment, Amazon's EC2, Magellan at NERSC, and SGI's Cyclone system. We critically examined the performance of the applications on these systems. We also collected information on the usability of these cloud environments. In this talk we will present the results of our study, focusing on the efficacy of using clouds for NASA's scientific applications.
CE-ACCE: The Cloud Enabled Advanced sCience Compute Environment
NASA Astrophysics Data System (ADS)
Cinquini, L.; Freeborn, D. J.; Hardman, S. H.; Wong, C.
2017-12-01
Traditionally, Earth Science data from NASA remote sensing instruments has been processed by building custom data processing pipelines (often based on a common workflow engine or framework) which are typically deployed and run on an internal cluster of computing resources. This approach has some intrinsic limitations: it requires each mission to develop and deploy a custom software package on top of the adopted framework; it makes use of dedicated hardware, network and storage resources, which must be specifically purchased, maintained and re-purposed at mission completion; and computing services cannot be scaled on demand beyond the capability of the available servers. More recently, the rise of Cloud computing, coupled with other advances in containerization technology (most prominently, Docker) and micro-services architecture, has enabled a new paradigm, whereby space mission data can be processed through standard system architectures, which can be seamlessly deployed and scaled on demand on either on-premise clusters, or commercial Cloud providers. In this talk, we will present one such architecture named CE-ACCE ("Cloud Enabled Advanced sCience Compute Environment"), which we have been developing at the NASA Jet Propulsion Laboratory over the past year. CE-ACCE is based on the Apache OODT ("Object Oriented Data Technology") suite of services for full data lifecycle management, which are turned into a composable array of Docker images, and complemented by a plug-in model for mission-specific customization. We have applied this infrastructure to both flying and upcoming NASA missions, such as ECOSTRESS and SMAP, and demonstrated deployment on the Amazon Cloud, either using simple EC2 instances, or advanced AWS services such as Amazon Lambda and ECS (EC2 Container Services).
HPC enabled real-time remote processing of laparoscopic surgery
NASA Astrophysics Data System (ADS)
Ronaghi, Zahra; Sapra, Karan; Izard, Ryan; Duffy, Edward; Smith, Melissa C.; Wang, Kuang-Ching; Kwartowitz, David M.
2016-03-01
Laparoscopic surgery is a minimally invasive surgical technique. The benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures. One particular laparoscopic system is the daVinci-si robotic surgical system. The video streams generate approximately 360 megabytes of data per second. Processing this large stream of data in real time on a bedside PC, in a single- or dual-node setup, has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual NVIDIA K40 GPUs per node. Our computing framework will also enable reliability through replication of computation. We securely transfer the files to remote HPC clusters utilizing an OpenFlow-based network service, Steroid OpenFlow Service (SOS), which can increase the performance of large data transfers over long-distance, high-bandwidth networks. As a result, utilizing a high-speed OpenFlow-based network to access computing clusters with GPUs will improve surgical procedures by providing real-time processing of medical images and laparoscopic data.
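A quick consistency check of the quoted data rates (frame size and frame rate as stated above):

    # Consistency check of the quoted stream rate and per-frame time budget.
    frame_mb = 11.9          # MB per video frame (as stated)
    fps = 30
    print(frame_mb * fps)            # ~357 MB/s, roughly the quoted 360 MB/s stream
    print(1.0 / fps * 1000)          # ~33.3 ms budget to process and return each frame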
High-Performance Data Analysis Tools for Sun-Earth Connection Missions
NASA Technical Reports Server (NTRS)
Messmer, Peter
2011-01-01
The data analysis tool of choice for many Sun-Earth Connection missions is the Interactive Data Language (IDL) by ITT VIS. The increasing amount of data produced by these missions and the increasing complexity of image processing algorithms requires access to higher computing power. Parallel computing is a cost-effective way to increase the speed of computation, but algorithms oftentimes have to be modified to take advantage of parallel systems. Enhancing IDL to work on clusters gives scientists access to increased performance in a familiar programming environment. The goal of this project was to enable IDL applications to benefit from both computing clusters as well as graphics processing units (GPUs) for accelerating data analysis tasks. The tool suite developed in this project enables scientists now to solve demanding data analysis problems in IDL that previously required specialized software, and it allows them to be solved orders of magnitude faster than on conventional PCs. The tool suite consists of three components: (1) TaskDL, a software tool that simplifies the creation and management of task farms, collections of tasks that can be processed independently and require only small amounts of data communication; (2) mpiDL, a tool that allows IDL developers to use the Message Passing Interface (MPI) inside IDL for problems that require large amounts of data to be exchanged among multiple processors; and (3) GPULib, a tool that simplifies the use of GPUs as mathematical coprocessors from within IDL. mpiDL is unique in its support for the full MPI standard and its support of a broad range of MPI implementations. GPULib is unique in enabling users to take advantage of an inexpensive piece of hardware, possibly already installed in their computer, and achieve orders of magnitude faster execution time for numerically complex algorithms. TaskDL enables the simple setup and management of task farms on compute clusters. The products developed in this project have the potential to interact, so one can build a cluster of PCs, each equipped with a GPU, and use mpiDL to communicate between the nodes and GPULib to accelerate the computations on each node.
Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system
NASA Astrophysics Data System (ADS)
Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd
2016-10-01
Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable to other cluster operators. This contribution will report on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance, and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.
Volunteer Clouds and Citizen Cyberscience for LHC Physics
NASA Astrophysics Data System (ADS)
Aguado Sanchez, Carlos; Blomer, Jakob; Buncic, Predrag; Chen, Gang; Ellis, John; Garcia Quintas, David; Harutyunyan, Artem; Grey, Francois; Lombrana Gonzalez, Daniel; Marquina, Miguel; Mato, Pere; Rantala, Jarno; Schulz, Holger; Segal, Ben; Sharma, Archana; Skands, Peter; Weir, David; Wu, Jie; Wu, Wenjing; Yadav, Rohit
2011-12-01
Computing for the LHC, and for HEP more generally, is traditionally viewed as requiring specialized infrastructure and software environments, and therefore not compatible with the recent trend in "volunteer computing", where volunteers supply free processing time on ordinary PCs and laptops via standard Internet connections. In this paper, we demonstrate that with the use of virtual machine technology, at least some standard LHC computing tasks can be tackled with volunteer computing resources. Specifically, by presenting volunteer computing resources to HEP scientists as a "volunteer cloud", essentially identical to a Grid or dedicated cluster from a job submission perspective, LHC simulations can be processed effectively. This article outlines both the technical steps required for such a solution and the implications for LHC computing as well as for LHC public outreach and for participation by scientists from developing regions in LHC research.
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Putt, Charles W.
1997-01-01
The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
NASA Astrophysics Data System (ADS)
Decyk, Viktor K.; Dauger, Dean E.
We have constructed a parallel cluster consisting of Apple Macintosh G4 computers running both the Classic Mac OS and the Unix-based Mac OS X, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. Unlike other Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
Solving the scalability issue in quantum-based refinement: Q|R#1.
Zheng, Min; Moriarty, Nigel W; Xu, Yanting; Reimers, Jeffrey R; Afonine, Pavel V; Waller, Mark P
2017-12-01
Accurately refining biomacromolecules using a quantum-chemical method is challenging because the cost of a quantum-chemical calculation scales approximately as n^m, where n is the number of atoms and m (≥3) depends on the quantum method of choice. This fundamental problem means that quantum-chemical calculations become intractable when the size of the system requires more computational resources than are available. In the development of the software package called Q|R, this issue is referred to as Q|R#1. A divide-and-conquer approach has been developed that fragments the atomic model into small manageable pieces in order to solve Q|R#1. Firstly, the atomic model of a crystal structure is analyzed to detect noncovalent interactions between residues, and the results of the analysis are represented as an interaction graph. Secondly, a graph-clustering algorithm is used to partition the interaction graph into a set of clusters in such a way as to minimize disruption to the noncovalent interaction network. Thirdly, the environment surrounding each individual cluster is analyzed and any residue that is interacting with a particular cluster is assigned to the buffer region of that particular cluster. A fragment is defined as a cluster plus its buffer region. The gradients for all atoms from each of the fragments are computed, and only the gradients from each cluster are combined to create the total gradients. A quantum-based refinement is carried out using the total gradients as chemical restraints. In order to validate this interaction graph-based fragmentation approach in Q|R, the entire atomic model of an amyloid cross-β spine crystal structure (PDB entry 2oNA) was refined.
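A toy sketch of the cluster-plus-buffer idea follows (not the Q|R implementation), using networkx community detection as a stand-in for the paper's graph-clustering step; the residue interaction graph is made up.

    # Toy fragmentation sketch: cluster an interaction graph of residues, then grow
    # each cluster by its interacting neighbours to form its buffer region.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.Graph()
    edges = [("A1", "A2"), ("A2", "A3"), ("A3", "A1"),   # one tightly interacting group
             ("B1", "B2"), ("B2", "B3"), ("B3", "B1"),   # another tightly interacting group
             ("A3", "B1")]                                # weak inter-group contact
    G.add_edges_from(edges)

    clusters = [set(c) for c in greedy_modularity_communities(G)]
    fragments = []
    for cluster in clusters:
        buffer = {nbr for res in cluster for nbr in G.neighbors(res)} - cluster
        fragments.append((cluster, buffer))     # fragment = cluster + buffer region

    for cluster, buffer in fragments:
        print(sorted(cluster), "buffer:", sorted(buffer))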
Proposal for grid computing for nuclear applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.
2014-02-12
The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application could use resources within the grid to speed up the computing process.
The gputools package enables GPU computing in R.
Buckner, Joshua; Wilson, Justin; Seligman, Mark; Athey, Brian; Watson, Stanley; Meng, Fan
2010-01-01
By default, the R statistical environment does not make use of parallelism. Researchers may resort to expensive solutions such as cluster hardware for large analysis tasks. Graphics processing units (GPUs) provide an inexpensive and computationally powerful alternative. Using R and the CUDA toolkit from Nvidia, we have implemented several functions commonly used in microarray gene expression analysis for GPU-equipped computers. R users can take advantage of the better performance provided by an Nvidia GPU. The package is available from CRAN, the R project's repository of packages, at http://cran.r-project.org/web/packages/gputools. More information about our gputools R package is available at http://brainarray.mbni.med.umich.edu/brainarray/Rgpgpu
Environment-based pin-power reconstruction method for homogeneous core calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)
Evaluating the Efficacy of the Cloud for Cluster Computation
NASA Technical Reports Server (NTRS)
Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom
2012-01-01
Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Cloud Compute (EC2) that promises improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29-instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
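The arithmetic behind the quoted HPL figure, for reference:

    # 2 TFLOPS achieved at 70% efficiency on a 240-core cluster.
    achieved_tflops = 2.0
    efficiency = 0.70
    cores = 240
    peak = achieved_tflops / efficiency
    print(peak)                      # ~2.86 TFLOPS theoretical peak
    print(peak * 1000 / cores)       # ~11.9 GFLOPS theoretical peak per core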
NASA Astrophysics Data System (ADS)
Lele, Sanjiva K.
2002-08-01
Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48 processor high performance computing cluster. This report details the hardware which was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after this summary describes the computer cluster, its setup, and some cluster performance benchmark results. The second section explains ongoing research efforts which have benefited from the cluster hardware, and presents highlights of those efforts since installation of the cluster.
A Computational Cluster for Multiscale Simulations of Ionic Liquids
2008-09-16
Comprehensive final report (Gregory A. Voth, PI; Grant FA955007-1-0512) for DURIP: A Computational Cluster for Multiscale Simulations of Ionic Liquids. The focus of this project was to acquire and use computer cluster nodes...
Efficient Deployment of Key Nodes for Optimal Coverage of Industrial Mobile Wireless Networks
Li, Xiaomin; Li, Di; Dong, Zhijie; Hu, Yage; Liu, Chengliang
2018-01-01
In recent years, industrial wireless networks (IWNs) have been transformed by the introduction of mobile nodes, and they now offer increased extensibility, mobility, and flexibility. Nevertheless, mobile nodes pose efficiency and reliability challenges. Efficient node deployment and management of channel interference directly affect network system performance, particularly for key node placement in clustered wireless networks. This study analyzes this system model, considering both industrial properties of wireless networks and their mobility. Then, static and mobile node coverage problems are unified and simplified to target coverage problems. We propose a novel strategy for the deployment of clustered heads in grouped industrial mobile wireless networks (IMWNs) based on the improved maximal clique model and the iterative computation of new candidate cluster head positions. The maximal cliques are obtained via a double-layer Tabu search. Each cluster head updates its new position via an improved virtual force while moving with full coverage to find the minimal inter-cluster interference. Finally, we develop a simulation environment. The simulation results, based on a performance comparison, show the efficacy of the proposed strategies and their superiority over current approaches. PMID:29439439
Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr
2010-10-28
Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large-scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters. Also, no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina on bootable, non-dedicated computer clusters. MOLA automates several tasks including ligand preparation, parallel AutoDock4/Vina job distribution and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized live-CD GNU/Linux operating system, developed by us, that bypasses the operating system originally installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any available platform-independent computer can be added to the cluster, without ever using the computer's hard-disk drive and without interfering with the installed operating system. With a cluster of 10 processors, and a potential maximum speed-up of 10×, the parallel algorithm of MOLA achieved a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
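A minimal stand-in for the map/reduce split described above, using Python multiprocessing rather than Hadoop: "map" tasks simulate batches of photon histories and a "reduce" step sums the absorption tally; the toy interaction model is illustrative and is not the MC321 physics.

    # Toy map/reduce photon tally: each map task simulates a batch of histories,
    # the reduce step sums the per-batch absorption counts.
    import random
    from multiprocessing import Pool

    def map_photons(args):
        """One map task: simulate a batch of photon histories, return the number absorbed."""
        seed, n_photons, p_absorb, p_escape = args   # independent seed per task (simplified)
        rng = random.Random(seed)
        absorbed = 0
        for _ in range(n_photons):
            while True:                  # toy history: absorb, escape, or keep scattering
                u = rng.random()
                if u < p_absorb:
                    absorbed += 1
                    break
                if u < p_absorb + p_escape:
                    break
        return absorbed

    if __name__ == "__main__":
        tasks = [(seed, 100_000, 0.02, 0.05) for seed in range(8)]   # 8 map tasks
        with Pool() as pool:
            partials = pool.map(map_photons, tasks)                  # the "map" phase
        print(sum(partials), "of", 8 * 100_000, "photons absorbed")  # the "reduce" phase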
ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing
Rusakov, Dmitri A.; Savtchenko, Leonid P.
2017-01-01
Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programing skills. Here we put forward a newly developed simulation environment ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877
Enabling Computational Nanotechnology through JavaGenes in a Cycle Scavenging Environment
NASA Technical Reports Server (NTRS)
Globus, Al; Menon, Madhu; Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
A genetic algorithm procedure is developed and implemented for fitting parameters of many-body inter-atomic force field functions for simulating nanotechnology atomistic applications, using portable Java on cycle-scavenged heterogeneous workstations. Given a physics-based analytic functional form for the force field, correlated parameters in a multi-dimensional environment are typically chosen to fit properties given either by experiments and/or by higher-accuracy quantum mechanical simulations. The implementation automates this tedious procedure using an evolutionary computing algorithm operating on hundreds of cycle-scavenged computers. As a proof of concept, we demonstrate the procedure for evaluating the Stillinger-Weber (S-W) potential by (a) reproducing the published parameters for Si using S-W energies in the fitness function, and (b) evolving a "new" set of parameters using semi-empirical tight-binding energies in the fitness function. The "new" parameters are significantly better suited for Si cluster energies and forces as compared to even the published S-W potential.
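As a rough picture of the evolutionary fitting loop described above (not the JavaGenes code itself), the following sketch evolves a small parameter vector against a placeholder fitness. In the actual work the fitness scored candidate Stillinger-Weber parameter sets against reference (e.g. tight-binding) energies, and the population was evaluated on hundreds of cycle-scavenged machines.

    # Generic genetic-algorithm loop for force-field parameter fitting.
    # The fitness function is a stand-in (RMS error of a toy model against
    # reference (x, energy) pairs); the GA structure is the point of the sketch.
    import random

    N_PARAMS, POP, GENS = 7, 50, 200

    def fitness(params, reference):
        model = [sum(p * x ** i for i, p in enumerate(params)) for x, _ in reference]
        return (sum((m - e) ** 2 for m, (_, e) in zip(model, reference)) / len(reference)) ** 0.5

    def evolve(reference, rng=random.Random(0)):
        pop = [[rng.uniform(-2, 2) for _ in range(N_PARAMS)] for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=lambda p: fitness(p, reference))
            parents = pop[: POP // 2]                  # truncation selection
            children = []
            while len(parents) + len(children) < POP:
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, N_PARAMS)
                child = a[:cut] + b[cut:]              # one-point crossover
                if rng.random() < 0.2:                 # Gaussian mutation
                    child[rng.randrange(N_PARAMS)] += rng.gauss(0.0, 0.1)
                children.append(child)
            pop = parents + children
        return min(pop, key=lambda p: fitness(p, reference))

    # usage: best = evolve([(0.5, 0.25), (1.0, 1.0), (1.5, 2.25), (2.0, 4.0)])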
Deciu, Cosmin; Sun, Jun; Wall, Mark A
2007-09-01
We discuss several aspects related to load balancing of database search jobs in a distributed computing environment, such as Linux cluster. Load balancing is a technique for making the most of multiple computational resources, which is particularly relevant in environments in which the usage of such resources is very high. The particular case of the Sequest program is considered here, but the general methodology should apply to any similar database search program. We show how the runtimes for Sequest searches of tandem mass spectral data can be predicted from profiles of previous representative searches, and how this information can be used for better load balancing of novel data. A well-known heuristic load balancing method is shown to be applicable to this problem, and its performance is analyzed for a variety of search parameters.
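The abstract does not name the "well-known heuristic", so as an assumption the sketch below uses the longest-processing-time-first (LPT) greedy rule, a standard choice for this kind of static load balancing: predicted Sequest runtimes are sorted in decreasing order and each job goes to the currently least-loaded node.

    # LPT (longest processing time first) assignment of predicted runtimes to nodes.
    # Assumption: LPT is a standard heuristic for this problem, not necessarily
    # the exact method used in the paper.
    import heapq

    def lpt_schedule(predicted_runtimes, n_nodes):
        heap = [(0.0, node) for node in range(n_nodes)]   # (current load, node id)
        heapq.heapify(heap)
        assignment = [[] for _ in range(n_nodes)]
        for job, runtime in sorted(enumerate(predicted_runtimes),
                                   key=lambda kv: kv[1], reverse=True):
            load, node = heapq.heappop(heap)              # least-loaded node so far
            assignment[node].append(job)
            heapq.heappush(heap, (load + runtime, node))
        return assignment

    # usage: lpt_schedule([120, 45, 300, 60, 90], n_nodes=2) -> [[2], [0, 4, 3, 1]]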
2005-12-09
decision making logic that respond to the environment (concentration of operands - the state vector), and bias or "mood" as established by its history of...mentioned in the chart, there is no need for file management in a ABC Machine. Information is distributed, no history is maintained. The instruction set... Postgresql ) for collection of cluster samples/snapshots over intervals of time. An prototypical example of an XML file to configure and launch the ABC
Anandakrishnan, Ramu; Onufriev, Alexey
2008-03-01
In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between the error bound and the root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms for practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
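A toy version of the basic clustering approximation discussed above, for two-state sites such as protonatable residues: microstates are enumerated exactly within each cluster and inter-cluster interactions are simply dropped. The energies are invented for illustration; the paper's algorithms and error bounds are not reproduced here.

    # Exact Boltzmann average within one cluster of two-state sites; the
    # clustering approximation applies this per cluster and ignores all
    # interactions between clusters. Illustrative energies (kcal/mol).
    import itertools
    import math

    KT = 0.593   # kcal/mol at ~298 K

    def cluster_average_occupancy(site_energies, pair_energies):
        n = len(site_energies)
        z, occ = 0.0, [0.0] * n
        for state in itertools.product((0, 1), repeat=n):      # 2^n microstates
            e = sum(s * ei for s, ei in zip(state, site_energies))
            e += sum(pair_energies.get((i, j), 0.0) * state[i] * state[j]
                     for i in range(n) for j in range(i + 1, n))
            w = math.exp(-e / KT)
            z += w
            for i, s in enumerate(state):
                occ[i] += s * w
        return [o / z for o in occ]

    # Two 3-site clusters: 2 * 2^3 = 16 microstates instead of 2^6 = 64.
    print(cluster_average_occupancy([0.5, -0.2, 1.0], {(0, 1): -0.3}))
    print(cluster_average_occupancy([0.1, 0.4, -0.6], {(1, 2): 0.2}))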
How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing
NASA Astrophysics Data System (ADS)
Decyk, V. K.; Dauger, D. E.
We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran 77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
Scalable computing for evolutionary genomics.
Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert
2012-01-01
Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale-up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.
Cluster Computing for Embedded/Real-Time Systems
NASA Technical Reports Server (NTRS)
Katz, D.; Kepner, J.
1999-01-01
Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.
2016-10-27
AFRL-AFOSR-UK-TR-2016-0037 - Towards cluster-assembled materials of true monodispersity in size and chemical environment: Synthesis, Dynamics and Activity (final report; Ulrich Heiz).
Numerical Analysis of Base Flowfield for a Four-Engine Clustered Nozzle Configuration
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1995-01-01
Excessive base heating has been a problem for many launch vehicles. For certain designs such as the direct dump of turbine exhaust inside and at the lip of the nozzle, the potential burning of the turbine exhaust in the base region can be of great concern. Accurate prediction of the base environment at altitudes is therefore very important during the vehicle design phase. Otherwise, undesirable consequences may occur. In this study, the turbulent base flowfield of a cold flow experimental investigation for a four-engine clustered nozzle was numerically benchmarked using a pressure-based computational fluid dynamics (CFD) method. This is a necessary step before the benchmarking of hot flow and combustion flow tests can be considered. Since the medium was unheated air, reasonable prediction of the base pressure distribution at high altitude was the main goal. Several physical phenomena pertaining to the multiengine clustered nozzle base flow physics were deduced from the analysis.
Communication: Biological applications of coupled-cluster frozen-density embedding
NASA Astrophysics Data System (ADS)
Heuser, Johannes; Höfener, Sebastian
2018-04-01
We report the implementation of the Laplace-transform scaled opposite-spin (LT-SOS) resolution-of-the-identity second-order approximate coupled-cluster singles and doubles (RICC2) combined with frozen-density embedding for excitation energies and molecular properties. In the present work, we furthermore employ the Hartree-Fock density for the interaction energy leading to a simplified Lagrangian which is linear in the Lagrangian multipliers. This approximation has the key advantage of a decoupling of the coupled-cluster amplitude and multipliers, leading also to a significant reduction in computation time. Using the new simplified Lagrangian in combination with efficient wavefunction models such as RICC2 or LT-SOS-RICC2 and density-functional theory (DFT) for the environment molecules (CC2-in-DFT) enables the efficient study of biological applications such as the rhodopsin and visual cone pigments using ab initio methods as routine applications.
Electron attachment to molecules in a cluster environment: suppression and enhancement effects
NASA Astrophysics Data System (ADS)
Fabrikant, Ilya I.
2018-05-01
Cluster environments can strongly influence dissociative electron attachment (DEA) processes. These effects are important in many applications, particularly for surface chemistry, radiation damage, and atmospheric physics. We review several mechanisms for DEA suppression and enhancement due to cluster environments, particularly due to microhydration. Long-range electron-molecule and electron-cluster interactions often play a significant role in these effects and can be analysed using theoretical models. Nevertheless, many observations remain unexplained due to the complexity of the physics and chemistry of the interaction of DEA fragments with the cluster environment.
Desktop supercomputer: what can it do?
NASA Astrophysics Data System (ADS)
Bogdanov, A.; Degtyarev, A.; Korkhov, V.
2017-12-01
The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available to most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely available. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.
A highly efficient multi-core algorithm for clustering extremely large datasets
2010-01-01
Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy, compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
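The shared-memory strategy above can be approximated in NumPy by splitting the assignment step of one k-means iteration across worker processes and merging their partial sums. This is a pattern sketch, not the authors' Java/transactional-memory implementation.

    # One parallel k-means iteration: workers assign their chunk of rows to the
    # nearest center and return partial sums/counts; the update step merges them.
    import numpy as np
    from multiprocessing import Pool

    def assign_chunk(args):
        chunk, centers = args
        d = ((chunk[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        sums = np.zeros_like(centers)
        counts = np.zeros(len(centers))
        for k in range(len(centers)):
            mask = labels == k
            sums[k] = chunk[mask].sum(axis=0)
            counts[k] = mask.sum()
        return sums, counts

    def kmeans_step(data, centers, n_workers=4):
        chunks = np.array_split(data, n_workers)
        with Pool(n_workers) as pool:
            partials = pool.map(assign_chunk, [(c, centers) for c in chunks])
        sums = sum(p[0] for p in partials)
        counts = sum(p[1] for p in partials)
        return sums / np.maximum(counts, 1)[:, None]   # updated centers

    # usage: centers = kmeans_step(np.random.rand(100_000, 20), np.random.rand(5, 20))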
Environment overwhelms both nature and nurture in a model spin glass
NASA Astrophysics Data System (ADS)
Middleton, A. Alan; Yang, Jie
We are interested in exploring what information determines the particular history of the glassy long-term dynamics in a disordered material. We study the effect of initial configurations and the realization of stochastic dynamics on the long-time evolution of configurations in a two-dimensional Ising spin glass model. The evolution of nearest-neighbor correlations is computed using patchwork dynamics, a coarse-grained numerical heuristic for temporal evolution. The dependence of the nearest-neighbor spin correlations at long times on both initial spin configurations and noise histories is studied through cross-correlations of long-time configurations, and the spin correlations are found to be independent of both. We investigate how effectively rigid bond clusters coarsen. Scaling laws are used to study the convergence of configurations and the distribution of sizes of nearly rigid clusters. The implications of the computational results for simulations and phenomenological models of spin glasses are discussed. We acknowledge NSF support under DMR-1410937 (CMMT program).
Construction and application of Red5 cluster based on OpenStack
NASA Astrophysics Data System (ADS)
Wang, Jiaqing; Song, Jianxin
2017-08-01
With the application and development of cloud computing technology in various fields, the resource utilization rate of data centers has improved markedly, and systems based on cloud computing platforms have also gained expansibility and stability. In the traditional deployment, Red5 cluster resource utilization is low and system stability is poor. This paper exploits cloud computing's efficient allocation of computing resources and builds a Red5 server cluster based on OpenStack. Multimedia applications can be published to the Red5 cloud server cluster. The system not only achieves flexible provisioning of computing resources, but also greatly improves cluster stability and service efficiency.
Application of microarray analysis on computer cluster and cloud platforms.
Bernau, C; Boulesteix, A-L; Knaus, J
2013-01-01
Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
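The "easily parallelized" resampling structure mentioned above boils down to running independent permutation (or bootstrap) iterations on separate workers, whether those are cluster cores or cloud instances. A minimal sketch with synthetic data (not the paper's microarray procedures):

    # Embarrassingly parallel permutation test: each worker permutes group labels
    # and recomputes the statistic; iterations are independent, so they distribute
    # trivially over cluster nodes or cloud instances. Synthetic data only.
    import numpy as np
    from multiprocessing import Pool

    rng = np.random.default_rng(1)
    group_a = rng.normal(0.0, 1.0, 50)
    group_b = rng.normal(0.3, 1.0, 50)
    observed = group_a.mean() - group_b.mean()
    pooled = np.concatenate([group_a, group_b])

    def one_permutation(seed):
        r = np.random.default_rng(seed)
        perm = r.permutation(pooled)
        return perm[:50].mean() - perm[50:].mean()

    if __name__ == "__main__":
        with Pool() as pool:
            null = np.array(pool.map(one_permutation, range(10_000)))
        p_value = (np.abs(null) >= abs(observed)).mean()
        print("permutation p-value:", p_value)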
Optimized distributed computing environment for mask data preparation
NASA Astrophysics Data System (ADS)
Ahn, Byoung-Sup; Bang, Ju-Mi; Ji, Min-Kyu; Kang, Sun; Jang, Sung-Hoon; Choi, Yo-Han; Ki, Won-Tai; Choi, Seong-Woon; Han, Woo-Sung
2005-11-01
As the critical dimension (CD) becomes smaller, various resolution enhancement techniques (RET) are widely adopted. In developing sub-100nm devices, the complexity of optical proximity correction (OPC) is severely increased and applied OPC layers are expanded to non-critical layers. The transformation of designed pattern data by OPC operations increases data complexity, which causes runtime overheads in following steps such as mask data preparation (MDP), and collapses the existing design hierarchy. Therefore, many mask shops exploit distributed computing in order to reduce the runtime of mask data preparation rather than exploit the design hierarchy. Distributed computing uses a cluster of computers connected to a local network system. However, there are two factors that limit the benefit of distributed computing in MDP. First, a sequential MDP job that uses the maximum number of available CPUs is not efficient compared to parallel MDP job execution, owing to the characteristics of the input data. Second, the runtime improvement relative to the input cost is not sufficient, since the scalability of fracturing tools is limited. In this paper, we will discuss an optimum load balancing environment that is useful in increasing the uptime of the distributed computing system by assigning an appropriate number of CPUs to each input design data set. We will also describe the distributed processing (DP) parameter optimization used to obtain maximum throughput in MDP job processing.
Cloudgene: A graphical execution platform for MapReduce programs on private and public clouds
2012-01-01
Background The MapReduce framework enables a scalable processing and analyzing of large datasets by distributing the computational load on connected computer nodes, referred to as a cluster. In Bioinformatics, MapReduce has already been adopted to various case scenarios such as mapping next generation sequencing data to a reference genome, finding SNPs from short read data or matching strings in genotype files. Nevertheless, tasks like installing and maintaining MapReduce on a cluster system, importing data into its distributed file system or executing MapReduce programs require advanced knowledge in computer science and could thus prevent scientists from usage of currently available and useful software solutions. Results Here we present Cloudgene, a freely available platform to improve the usability of MapReduce programs in Bioinformatics by providing a graphical user interface for the execution, the import and export of data and the reproducibility of workflows on in-house (private clouds) and rented clusters (public clouds). The aim of Cloudgene is to build a standardized graphical execution environment for currently available and future MapReduce programs, which can all be integrated by using its plug-in interface. Since Cloudgene can be executed on private clusters, sensitive datasets can be kept in house at all time and data transfer times are therefore minimized. Conclusions Our results show that MapReduce programs can be integrated into Cloudgene with little effort and without adding any computational overhead to existing programs. This platform gives developers the opportunity to focus on the actual implementation task and provides scientists a platform with the aim to hide the complexity of MapReduce. In addition to MapReduce programs, Cloudgene can also be used to launch predefined systems (e.g. Cloud BioLinux, RStudio) in public clouds. Currently, five different bioinformatic programs using MapReduce and two systems are integrated and have been successfully deployed. Cloudgene is freely available at http://cloudgene.uibk.ac.at. PMID:22888776
Quantum Dynamics of Helium Clusters
1993-03-01
the structure of both these and the HeN clusters in the body-fixed frame by computing principal moments of inertia, thereby avoiding the ... of helium clusters, with the modification that we subtract 0.96 K from the computed values so that for sufficiently large clusters we recover the ... phonon spectrum of liquid He. To get a picture of these spectra one needs to compute the structure functions ... Monte Carlo random walk simulations
Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Chen, Yousu; Wu, Di
2015-12-09
Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using single-processor-based dynamic simulation solutions. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
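For the distributed-memory variant, the same decomposition idea can be expressed with mpi4py (a Python wrapper for MPI; it must be installed and the script launched with mpiexec). The fragment below broadcasts the state vector, lets each rank evaluate a block of derivatives, and gathers the blocks; it is a generic pattern sketch, not the paper's power-grid solver.

    # Generic MPI-style decomposition with mpi4py; run with e.g.
    #   mpiexec -n 4 python dynsim_sketch.py
    # Each rank evaluates a toy right-hand side for its block of variables.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    N = 1000
    x = np.ones(N) if rank == 0 else None
    x = comm.bcast(x, root=0)                  # every rank needs the full state

    lo, hi = rank * N // size, (rank + 1) * N // size
    local_dxdt = -0.5 * x[lo:hi]               # toy right-hand side for this block

    blocks = comm.gather(local_dxdt, root=0)
    if rank == 0:
        x_next = x + 0.01 * np.concatenate(blocks)   # one explicit Euler step

An OpenMP-style shared-memory version would instead parallelize the same block loop with threads over a single address space, avoiding the broadcast and gather entirely.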
Possibilistic clustering for shape recognition
NASA Technical Reports Server (NTRS)
Keller, James M.; Krishnapuram, Raghu
1993-01-01
Clustering methods have been used extensively in computer vision and pattern recognition. Fuzzy clustering has been shown to be advantageous over crisp (or traditional) clustering in that total commitment of a vector to a given class is not required at each iteration. Recently fuzzy clustering methods have shown spectacular ability to detect not only hypervolume clusters, but also clusters which are actually 'thin shells', i.e., curves and surfaces. Most analytic fuzzy clustering approaches are derived from Bezdek's Fuzzy C-Means (FCM) algorithm. The FCM uses the probabilistic constraint that the memberships of a data point across classes sum to one. This constraint was used to generate the membership update equations for an iterative algorithm. Unfortunately, the memberships resulting from FCM and its derivatives do not correspond to the intuitive concept of degree of belonging, and moreover, the algorithms have considerable trouble in noisy environments. Recently, the clustering problem was cast into the framework of possibility theory. Our approach was radically different from the existing clustering methods in that the resulting partition of the data can be interpreted as a possibilistic partition, and the membership values may be interpreted as degrees of possibility of the points belonging to the classes. An appropriate objective function whose minimum will characterize a good possibilistic partition of the data was constructed, and the membership and prototype update equations from necessary conditions for minimization of our criterion function were derived. The ability of this approach to detect linear and quartic curves in the presence of considerable noise is shown.
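For reference, the possibilistic c-means objective of Krishnapuram and Keller is usually quoted in the following form (standard notation; the original paper should be consulted for the exact formulation and the derivation of the update equations):

    J_m(U, V) \;=\; \sum_{i=1}^{C}\sum_{k=1}^{N} u_{ik}^{\,m}\, d_{ik}^{2}
                \;+\; \sum_{i=1}^{C} \eta_i \sum_{k=1}^{N} \left(1 - u_{ik}\right)^{m},
    \qquad
    u_{ik} \;=\; \frac{1}{1 + \left(d_{ik}^{2}/\eta_i\right)^{1/(m-1)}}

where d_{ik} is the distance of point k to prototype i, m > 1 controls fuzziness, and η_i sets the distance scale at which the possibility of belonging to cluster i drops to 0.5. Because the memberships of a point are not forced to sum to one across clusters, outliers receive low possibility in every cluster, which is the noise-robustness property emphasized above.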
Possibilistic clustering for shape recognition
NASA Technical Reports Server (NTRS)
Keller, James M.; Krishnapuram, Raghu
1992-01-01
Clustering methods have been used extensively in computer vision and pattern recognition. Fuzzy clustering has been shown to be advantageous over crisp (or traditional) clustering in that total commitment of a vector to a given class is not required at each iteration. Recently fuzzy clustering methods have shown spectacular ability to detect not only hypervolume clusters, but also clusters which are actually 'thin shells', i.e., curves and surfaces. Most analytic fuzzy clustering approaches are derived from Bezdek's Fuzzy C-Means (FCM) algorithm. The FCM uses the probabilistic constraint that the memberships of a data point across classes sum to one. This constraint was used to generate the membership update equations for an iterative algorithm. Unfortunately, the memberships resulting from FCM and its derivatives do not correspond to the intuitive concept of degree of belonging, and moreover, the algorithms have considerable trouble in noisy environments. Recently, we cast the clustering problem into the framework of possibility theory. Our approach was radically different from the existing clustering methods in that the resulting partition of the data can be interpreted as a possibilistic partition, and the membership values may be interpreted as degrees of possibility of the points belonging to the classes. We constructed an appropriate objective function whose minimum will characterize a good possibilistic partition of the data, and we derived the membership and prototype update equations from necessary conditions for minimization of our criterion function. In this paper, we show the ability of this approach to detect linear and quartic curves in the presence of considerable noise.
NASA Astrophysics Data System (ADS)
Clark, D. M.; Eikenberry, S. S.; Brandl, B. R.; Wilson, J. C.; Carson, J. C.; Henderson, C. P.; Hayward, T. L.; Barry, D. J.; Ptak, A. F.; Colbert, E. J. M.
2008-05-01
We use the previously identified 15 infrared star cluster counterparts to X-ray point sources in the interacting galaxies NGC 4038/4039 (the Antennae) to study the relationship between total cluster mass and X-ray binary number. This significant population of X-ray/IR associations allows us to perform, for the first time, a statistical study of X-ray point sources and their environments. We define a quantity, η, relating the fraction of X-ray sources per unit mass as a function of cluster mass in the Antennae. We compute cluster mass by fitting spectral evolutionary models to K_s luminosity. Considering that this method depends on cluster age, we use four different age distributions to explore the effects of cluster age on the value of η and find it varies by less than a factor of 4. We find a mean value of η for these different distributions of η = 1.7 × 10^-8 M_⊙^-1 with σ_η = 1.2 × 10^-8 M_⊙^-1. Performing a χ² test, we demonstrate η could exhibit a positive slope, but that it depends on the assumed distribution in cluster ages. While the estimated uncertainties in η are factors of a few, we believe this is the first estimate made of this quantity to "order of magnitude" accuracy. We also compare our findings to theoretical models of open and globular cluster evolution, incorporating the X-ray binary fraction per cluster.
Mock Data Challenge for the MPD/NICA Experiment on the HybriLIT Cluster
NASA Astrophysics Data System (ADS)
Gertsenberger, Konstantin; Rogachevsky, Oleg
2018-02-01
Simulation of data processing before receiving the first experimental data is an important issue in high-energy physics experiments. This article presents the current Event Data Model and the Mock Data Challenge for the MPD experiment at the NICA accelerator complex, which uses ongoing simulation studies to exercise and stress-test the distributed computing infrastructure and experiment software in the full production environment, from simulated data through to physics analysis.
NASA Astrophysics Data System (ADS)
Gleason, J. L.; Hillyer, T. N.; Wilkins, J.
2012-12-01
The CERES Science Team integrates data from 5 CERES instruments onboard the Terra, Aqua and NPP missions. The processing chain fuses CERES observations with data from 19 other unique sources. The addition of CERES Flight Model 5 (FM5) onboard NPP, coupled with ground processing system upgrades further emphasizes the need for an automated job-submission utility to manage multiple processing streams concurrently. The operator-driven, legacy-processing approach relied on manually staging data from magnetic tape to limited spinning disk attached to a shared memory architecture system. The migration of CERES production code to a distributed, cluster computing environment with approximately one petabyte of spinning disk containing all precursor input data products facilitates the development of a CERES-specific, automated workflow manager. In the cluster environment, I/O is the primary system resource in contention across jobs. Therefore, system load can be maximized with a throttling workload manager. This poster discusses a Java and Perl implementation of an automated job management tool tailored for CERES processing.
NASA Technical Reports Server (NTRS)
Dickinson, Mark
1993-01-01
In the course of a survey investigating the cluster environments of distant 3CR radio galaxies, I have identified a previously unknown 'giant luminous arc' gravitational lens. The lensing cluster is associated with the radio galaxy 3C 220.1 at z = 0.62 and is the most distant cluster now known to produce such arcs. I present imaging and spectroscopic observations of the cluster and the arc, and discuss the implications for the cluster mass. At z > 0.6 the cluster velocity dispersions implied by such giant arcs may provide an interesting constraint on theories of large scale structure formation. The parent investigation in which this arc was identified concerns galaxy clusters and radio galaxy environments at 0.35 < z < 0.8. At the present epoch, powerful FR 2 radio galaxies tend to be found in environments of poor or average galaxy density. In contrast, at the higher redshifts investigated here, richer group and cluster environments are common. I present additional data on other clusters from this survey, and discuss its extension to z > 1 through a program of near-infrared and optical imaging.
A scalable approach to solving dense linear algebra problems on hybrid CPU-GPU systems
Song, Fengguang; Dongarra, Jack
2014-10-01
Aiming to fully exploit the computing power of all CPUs and all graphics processing units (GPUs) on hybrid CPU-GPU systems to solve dense linear algebra problems, in this paper we design a class of heterogeneous tile algorithms to maximize the degree of parallelism, to minimize the communication volume, and to accommodate the heterogeneity between CPUs and GPUs. The new heterogeneous tile algorithms are executed upon our decentralized dynamic scheduling runtime system, which schedules a task graph dynamically and transfers data between compute nodes automatically. The runtime system uses a new distributed task assignment protocol to solve data dependencies between tasks without any coordination between processing units. By overlapping computation and communication through dynamic scheduling, we are able to attain scalable performance for the double-precision Cholesky factorization and QR factorization. Finally, our approach demonstrates a performance comparable to Intel MKL on shared-memory multicore systems and better performance than both vendor (e.g., Intel MKL) and open source libraries (e.g., StarPU) in the following three environments: heterogeneous clusters with GPUs, conventional clusters without GPUs, and shared-memory systems with multiple GPUs.
NASA Technical Reports Server (NTRS)
1974-01-01
The present work gathers together numerous papers describing the use of remote sensing technology for mapping, monitoring, and management of earth resources and man's environment. Studies using various types of sensing equipment are described, including multispectral scanners, radar imagery, spectrometers, lidar, and aerial photography, and both manual and computer-aided data processing techniques are described. Some of the topics covered include: estimation of population density in Tokyo districts from ERTS-1 data, a clustering algorithm for unsupervised crop classification, passive microwave sensing of moist soils, interactive computer processing for land use planning, the use of remote sensing to delineate floodplains, moisture detection from Skylab, scanning thermal plumes, electrically scanning microwave radiometers, oil slick detection by X-band synthetic aperture radar, and the use of space photos for search of oil and gas fields. Individual items are announced in this issue.
Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.
Meng, Bowen; Pratx, Guillem; Xing, Lei
2011-12-01
Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Speedup of reconstruction time is found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^-7. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment.
Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment
Meng, Bowen; Pratx, Guillem; Xing, Lei
2011-01-01
Purpose: Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. Methods: In this work, we accelerated the Feldkamp–Davis–Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Results: Speedup of reconstruction time is found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10−7. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. Conclusions: An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment. PMID:22149842
Software for Brain Network Simulations: A Comparative Study
Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.
2017-01-01
Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database, such as NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the lead simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability for high-performance computing reveals that NEST can almost transparently map an existing model on a cluster or multicore computer, while NEURON requires code modification if the model developed for a single computer has to be mapped on a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators have a bias in the computational performance toward specific types of the brain network models. PMID:28775687
Computing and Applying Atomic Regulons to Understand Gene Expression and Regulation
Faria, José P.; Davis, James J.; Edirisinghe, Janaka N.; Taylor, Ronald C.; Weisenhorn, Pamela; Olson, Robert D.; Stevens, Rick L.; Rocha, Miguel; Rocha, Isabel; Best, Aaron A.; DeJongh, Matthew; Tintle, Nathan L.; Parrello, Bruce; Overbeek, Ross; Henry, Christopher S.
2016-01-01
Understanding gene function and regulation is essential for the interpretation, prediction, and ultimate design of cell responses to changes in the environment. An important step toward meeting the challenge of understanding gene function and regulation is the identification of sets of genes that are always co-expressed. These gene sets, Atomic Regulons (ARs), represent fundamental units of function within a cell and could be used to associate genes of unknown function with cellular processes and to enable rational genetic engineering of cellular systems. Here, we describe an approach for inferring ARs that leverages large-scale expression data sets, gene context, and functional relationships among genes. We computed ARs for Escherichia coli based on 907 gene expression experiments and compared our results with gene clusters produced by two prevalent data-driven methods: Hierarchical clustering and k-means clustering. We compared ARs and purely data-driven gene clusters to the curated set of regulatory interactions for E. coli found in RegulonDB, showing that ARs are more consistent with gold standard regulons than are data-driven gene clusters. We further examined the consistency of ARs and data-driven gene clusters in the context of gene interactions predicted by Context Likelihood of Relatedness (CLR) analysis, finding that the ARs show better agreement with CLR predicted interactions. We determined the impact of increasing amounts of expression data on AR construction and find that while more data improve ARs, it is not necessary to use the full set of gene expression experiments available for E. coli to produce high quality ARs. In order to explore the conservation of co-regulated gene sets across different organisms, we computed ARs for Shewanella oneidensis, Pseudomonas aeruginosa, Thermus thermophilus, and Staphylococcus aureus, each of which represents increasing degrees of phylogenetic distance from E. coli. Comparison of the organism-specific ARs showed that the consistency of AR gene membership correlates with phylogenetic distance, but there is clear variability in the regulatory networks of closely related organisms. As large scale expression data sets become increasingly common for model and non-model organisms, comparative analyses of atomic regulons will provide valuable insights into fundamental regulatory modules used across the bacterial domain. PMID:27933038
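Agreement between inferred gene clusters and a curated gold standard (as done above against RegulonDB) is often summarized with an index such as the adjusted Rand index. A minimal scikit-learn sketch on synthetic labels follows (not the study's data or its atomic-regulon inference method):

    # Compare a data-driven clustering of genes against reference regulon labels.
    # Synthetic expression matrix and reference labels for illustration only;
    # requires scikit-learn and NumPy.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)
    expression = rng.normal(size=(300, 50))        # 300 genes x 50 experiments
    reference = rng.integers(0, 10, size=300)      # stand-in for curated regulon labels

    kmeans_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(expression)
    print("ARI vs reference:", adjusted_rand_score(reference, kmeans_labels))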
Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.
Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel
2017-01-01
OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).
Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System
Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C.; Parisot, Sarah; Rueckert, Daniel
2017-01-01
OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI). PMID:28381997
School food environments associated with adiposity in Canadian children.
Fitzpatrick, C; Datta, G D; Henderson, M; Gray-Donald, K; Kestens, Y; Barnett, T A
2017-07-01
Targeting obesogenic features of children's environment that are amenable to change represents a promising strategy for health promotion. The school food environment, defined as the services and policies regarding nutrition and the availability of food in the school and surrounding neighborhood, is particularly important given that students travel through the school neighborhood almost daily and that they consume a substantial proportion of their calories at school. As part of the Quebec Adipose and Lifestyle Investigation in Youth (QUALITY) cohort study, we assessed features of school indoor dietary environment and the surrounding school neighborhoods, when children were aged 8-10 years (2005-2008). School principals reported on food practices and policies within the schools. The density of convenience stores and fast-food outlets surrounding the school was computed using a Geographical Information System. Indicators of school neighborhood deprivation were derived from census data. Adiposity outcomes were measured in a clinical setting 2 years later, when participants were aged 10-12 years (2008-2011). We conducted cluster analyses to identify school food environment types. Associations between school types and adiposity were estimated in linear regression models. Cluster analysis identified three school types with distinct food environments. Schools were characterized as: overall healthful (45%); a healthful food environment in the surrounding neighborhood, but an unhealthful indoor food environment (22%); or overall unhealthful (33%). Less healthful schools were located in more deprived neighborhoods and were associated with greater child adiposity. Despite regulatory efforts to improve school food environments, there is substantial inequity in dietary environments across schools. Ensuring healthful indoor and outdoor food environments across schools should be included in comprehensive efforts to reduce obesity-related health disparities.
Using the GeoFEST Faulted Region Simulation System
NASA Technical Reports Server (NTRS)
Parker, Jay W.; Lyzenga, Gregory A.; Donnellan, Andrea; Judd, Michele A.; Norton, Charles D.; Baker, Teresa; Tisdale, Edwin R.; Li, Peggy
2004-01-01
GeoFEST (the Geophysical Finite Element Simulation Tool) simulates stress evolution, fault slip and plastic/elastic processes in realistic materials, and so is suitable for earthquake cycle studies in regions such as Southern California. Many new capabilities and means of access for GeoFEST are now supported. New abilities include MPI-based cluster parallel computing using automatic PYRAMID/Parmetis-based mesh partitioning, automatic mesh generation for layered media with rectangular faults, and results visualization that is integrated with remote sensing data. The parallel GeoFEST application has been successfully run on over a half-dozen computers, including Intel Xeon clusters, Itanium II and Altix machines, and the Apple G5 cluster. It is not separately optimized for different machines, but relies on good domain partitioning for load-balance and low communication, and careful writing of the parallel diagonally preconditioned conjugate gradient solver to keep communication overhead low. Demonstrated thousand-step solutions for over a million finite elements on 64 processors require under three hours, and scaling tests show high efficiency when using more than (order of) 4000 elements per processor. The source code and documentation for GeoFEST is available at no cost from Open Channel Foundation. In addition GeoFEST may be used through a browser-based portal environment available to approved users. That environment includes semi-automated geometry creation and mesh generation tools, GeoFEST, and RIVA-based visualization tools that include the ability to generate a flyover animation showing deformations and topography. Work is in progress to support simulation of a region with several faults using 16 million elements, using a strain energy metric to adapt the mesh to faithfully represent the solution in a region of widely varying strain.
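The "diagonally preconditioned conjugate gradient solver" mentioned above is, in serial form, the standard Jacobi-preconditioned CG. The sketch below shows that core in NumPy; the parallel version additionally needs distributed dot products and exchange of shared-node values between mesh partitions, which is not shown here.

    # Jacobi (diagonal) preconditioned conjugate gradient for a symmetric
    # positive-definite system A x = b; serial NumPy sketch of the solver core.
    import numpy as np

    def pcg(A, b, tol=1e-8, max_iter=1000):
        Minv = 1.0 / np.diag(A)                # Jacobi preconditioner
        x = np.zeros_like(b)
        r = b - A @ x
        z = Minv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = Minv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # usage: pcg(np.array([[4.0, 1.0], [1.0, 3.0]]), np.array([1.0, 2.0]))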
Rinkevicius, Zilvinas; Li, Xin; Sandberg, Jaime A R; Mikkelsen, Kurt V; Ågren, Hans
2014-03-11
We introduce a density functional theory/molecular mechanical approach for the computation of linear response properties of molecules in heterogeneous environments, such as metal surfaces or nanoparticles embedded in solvents. The heterogeneous embedding environment, consisting of metallic and nonmetallic parts, is described by combined force fields, where conventional force fields are used for the nonmetallic part and capacitance-polarization-based force fields are used for the metallic part. The presented approach enables studies of properties and spectra of systems embedded in or placed at arbitrarily shaped metallic surfaces, clusters, or nanoparticles. The capability and performance of the proposed approach are illustrated by sample calculations of optical absorption spectra of thymidine adsorbed on gold surfaces in an aqueous environment, where we study how different organizations of the gold surface and the combined, nonadditive effect of the two environments are reflected in the optical absorption spectrum.
The optical properties of galaxies in the Ophiuchus cluster
NASA Astrophysics Data System (ADS)
Durret, F.; Wakamatsu, K.; Adami, C.; Nagayama, T.; Omega Muleka Mwewa Mwaba, J. M.
2018-05-01
Context. Ophiuchus is one of the most massive clusters known, but due to its low Galactic latitude its optical properties remain poorly known. Aims: We investigate the optical properties of Ophiuchus to obtain clues on the formation epoch of this cluster, and compare them to those of the Coma cluster, which is comparable in mass to Ophiuchus but much more dynamically disturbed. Methods: Based on a deep image of the Ophiuchus cluster in the r' band obtained at the Canada France Hawaii Telescope with the MegaCam camera, we have applied an iterative process to subtract the contribution of the numerous stars that, due to the low Galactic latitude of the cluster, pollute the image, and have obtained a photometric catalogue of 2818 galaxies fully complete at r' = 20.5 mag and still 91% complete at r' = 21.5 mag. We use this catalogue to derive the cluster Galaxy Luminosity Function (GLF) for the overall image and for a region (hereafter the "rectangle" region) covering exactly the same physical size as the region in which the GLF of the Coma cluster was previously studied. We then compute density maps based on an adaptive kernel technique, for different magnitude limits, and define three circular regions covering 0.08, 0.08, and 0.06 deg2, respectively, centred on the cluster (C), on northwest (NW) of the cluster, and southeast (SE) of the cluster, in which we compute the GLFs. Results: The GLF fits are much better when a Gaussian is added to the usual Schechter function, to account for the excess of very bright galaxies. Compared to Coma, Ophiuchus shows a strong excess of bright galaxies. Conclusions: The properties of the two nearby very massive clusters Ophiuchus and Coma are quite comparable, though they seem embedded in different large-scale environments. Our interpretation is that Ophiuchus was built up long ago, as confirmed by its relaxed state (see paper I) while Coma is still in the process of forming. The photometric catalogue of Ophiuchus (full Table B.1) is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/613/A20
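A minimal sketch of the luminosity-function fit described above: a Schechter component plus a Gaussian accounting for the bright-end excess, fitted with scipy. The binned counts and starting values below are synthetic placeholders, not the Ophiuchus catalogue.

```python
import numpy as np
from scipy.optimize import curve_fit

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter function expressed in magnitudes."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

def glf_model(M, phi_star, M_star, alpha, A, mu, width):
    gauss = A * np.exp(-0.5 * ((M - mu) / width) ** 2)   # bright-end excess
    return schechter_mag(M, phi_star, M_star, alpha) + gauss

# Placeholder binned counts (magnitude bin centres and galaxy counts per bin).
mags = np.arange(14.0, 21.5, 0.5)
true = [80.0, 17.5, -1.1, 30.0, 15.0, 0.5]
counts = np.random.default_rng(1).poisson(glf_model(mags, *true)).astype(float)

popt, pcov = curve_fit(glf_model, mags, counts, p0=true, sigma=np.sqrt(counts + 1))
print(dict(zip(["phi*", "M*", "alpha", "A", "mu", "width"], np.round(popt, 3))))
```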
Genome-Wide Prediction of Metabolic Enzymes, Pathways, and Gene Clusters in Plants.
Schläpfer, Pascal; Zhang, Peifen; Wang, Chuan; Kim, Taehyong; Banf, Michael; Chae, Lee; Dreher, Kate; Chavali, Arvind K; Nilo-Poyanco, Ricardo; Bernard, Thomas; Kahn, Daniel; Rhee, Seung Y
2017-04-01
Plant metabolism underpins many traits of ecological and agronomic importance. Plants produce numerous compounds to cope with their environments but the biosynthetic pathways for most of these compounds have not yet been elucidated. To engineer and improve metabolic traits, we need comprehensive and accurate knowledge of the organization and regulation of plant metabolism at the genome scale. Here, we present a computational pipeline to identify metabolic enzymes, pathways, and gene clusters from a sequenced genome. Using this pipeline, we generated metabolic pathway databases for 22 species and identified metabolic gene clusters from 18 species. This unified resource can be used to conduct a wide array of comparative studies of plant metabolism. Using the resource, we discovered a widespread occurrence of metabolic gene clusters in plants: 11,969 clusters from 18 species. The prevalence of metabolic gene clusters offers an intriguing possibility of an untapped source for uncovering new metabolite biosynthesis pathways. For example, more than 1,700 clusters contain enzymes that could generate a specialized metabolite scaffold (signature enzymes) and enzymes that modify the scaffold (tailoring enzymes). In four species with sufficient gene expression data, we identified 43 highly coexpressed clusters that contain signature and tailoring enzymes, of which eight were characterized previously to be functional pathways. Finally, we identified patterns of genome organization that implicate local gene duplication and, to a lesser extent, single gene transposition as having played roles in the evolution of plant metabolic gene clusters. © 2017 American Society of Plant Biologists. All Rights Reserved.
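A minimal sketch of the coexpression screen mentioned in the preceding abstract: for each predicted gene cluster, compute the mean pairwise Pearson correlation of its genes' expression profiles and flag highly coexpressed clusters. The expression matrix, cluster membership, and the 0.7 threshold are illustrative assumptions, not the published pipeline.

```python
import numpy as np

def mean_pairwise_correlation(expr):
    """expr: genes x samples expression matrix for one cluster."""
    if expr.shape[0] < 2:
        return float("nan")
    corr = np.corrcoef(expr)                 # gene-by-gene correlation matrix
    iu = np.triu_indices_from(corr, k=1)     # upper triangle, no diagonal
    return corr[iu].mean()

rng = np.random.default_rng(0)
base = rng.standard_normal(20)
expression = {"gene%d" % i: rng.standard_normal(20) for i in range(60)}
# Make one cluster genuinely coexpressed for the demonstration.
for g in ("gene0", "gene1", "gene2"):
    expression[g] = base + 0.1 * rng.standard_normal(20)

clusters = {"cluster_a": ["gene0", "gene1", "gene2"],
            "cluster_b": ["gene10", "gene11", "gene12", "gene13"]}

for name, genes in clusters.items():
    expr = np.vstack([expression[g] for g in genes])
    score = mean_pairwise_correlation(expr)
    print(name, round(float(score), 3), "coexpressed" if score > 0.7 else "not flagged")
```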
Egocentric daily activity recognition via multitask clustering.
Yan, Yan; Ricci, Elisa; Liu, Gaowen; Sebe, Nicu
2015-10-01
Recognizing human activities from videos is a fundamental research problem in computer vision. Recently, there has been a growing interest in analyzing human behavior from data collected with wearable cameras. First-person cameras continuously record several hours of their wearers' life. To cope with this vast amount of unlabeled and heterogeneous data, novel algorithmic solutions are required. In this paper, we propose a multitask clustering framework for activity of daily living analysis from visual data gathered from wearable cameras. Our intuition is that, even if the data are not annotated, it is possible to exploit the fact that the tasks of recognizing everyday activities of multiple individuals are related, since typically people perform the same actions in similar environments (e.g., people working in an office often read and write documents). In our framework, rather than clustering data from different users separately, we propose to look for clustering partitions which are coherent among related tasks. In particular, two novel multitask clustering algorithms, derived from a common optimization problem, are introduced. Our experimental evaluation, conducted both on synthetic data and on publicly available first-person vision data sets, shows that the proposed approach outperforms several single-task and multitask learning methods.
Prospects of molybdenum and rhenium octahedral cluster complexes as X-ray contrast agents.
Krasilnikova, Anna A; Shestopalov, Michael A; Brylev, Konstantin A; Kirilova, Irina A; Khripko, Olga P; Zubareva, Kristina E; Khripko, Yuri I; Podorognaya, Valentina T; Shestopalova, Lidiya V; Fedorov, Vladimir E; Mironov, Yuri V
2015-03-01
The investigation of new X-ray contrast media for radiography has been an important field of science since the discovery of X-rays in 1895. Despite the wide diversity of available X-ray contrast media, toxicity, especially nephrotoxicity, is still a major problem to be solved. Octahedral metal-cluster complexes of the general formula [{M6Q8}L6] can be considered quite promising candidates for the role of new radiocontrast media due to their high local concentration of heavy elements, highly tunable ligand environment and low toxicity. To exemplify this, X-ray computed tomography experiments were carried out for the first time on several octahedral cluster complexes of molybdenum and rhenium. Based on the obtained data, it was proposed to investigate the toxicological properties of the cluster complex Na2H8[{Re6Se8}(P(CH2CH2CONH2)(CH2CH2COO)2)6]. The observed low cytotoxic and acute toxic effects, along with the rapid renal excretion of the cluster complex, indicate its promise as an X-ray contrast medium for radiography. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Smedes, H. W.; Linnerud, H. J.; Woolaver, L. B.; Su, M. Y.; Jayroe, R. R.
1972-01-01
Two clustering techniques were used for computer-based terrain mapping of test sites in Yellowstone National Park. One test was made with multispectral scanner data using a composite technique which consists of (1) a strictly sequential statistical clustering, which is a sequential variance analysis, and (2) a generalized K-means clustering. In this composite technique, the output of (1) is a first approximation of the cluster centers. This is the input to (2), which consists of steps to improve the determination of cluster centers by iterative procedures. Another test was made using the three emulsion layers of color-infrared aerial film as a three-band spectrometer. Relative film densities were analyzed using a simple clustering technique in three-color space. Important advantages of the clustering technique over conventional supervised computer programs are (1) human intervention, preparation time, and manipulation of data are reduced, (2) the computer map gives an unbiased indication of where best to select the reference ground control data, (3) use of inexpensive, easy-to-obtain film, and (4) the geometric distortions can be easily rectified by simple standard photogrammetric techniques.
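A minimal sketch of the composite technique described above, using scikit-learn as an analogy: a cheap first pass supplies approximate cluster centers, which then seed an iterative K-means refinement. MiniBatchKMeans stands in here for the sequential variance analysis of step (1); this is not the original 1972 code.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans, KMeans

rng = np.random.default_rng(0)
# Synthetic 4-band "pixels" drawn from three spectral classes.
pixels = np.vstack([rng.normal(loc, 0.3, size=(500, 4)) for loc in (0.0, 2.0, 4.0)])

# Step (1): quick pass over the data to get first-approximation centers.
first_pass = MiniBatchKMeans(n_clusters=3, n_init=3, random_state=0).fit(pixels)

# Step (2): generalized K-means, initialised with those centers and iterated.
refined = KMeans(n_clusters=3, init=first_pass.cluster_centers_, n_init=1).fit(pixels)
print(np.bincount(refined.labels_))
```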
Baun, Christian
2016-01-01
Clusters usually consist of servers, workstations or personal computers as nodes. But especially for academic purposes like student projects or scientific projects, the cost of purchase and operation can be a challenge. Single board computers cannot compete with the performance or energy-efficiency of higher-value systems, but they are an option for building inexpensive cluster systems. Because of their compact design and modest energy consumption, it is possible to build clusters of single board computers in a way that makes them mobile and easily transported by the users. This paper describes the construction of such a cluster, useful applications and the performance of the single nodes. Furthermore, the cluster's performance and energy-efficiency are analyzed by executing the High Performance Linpack benchmark with different numbers of nodes and different proportions of the system's total main memory utilized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korovin, Yu. A.; Maksimushkina, A. V., E-mail: AVMaksimushkina@mephi.ru; Frolova, T. A.
2016-12-15
The cross sections of nuclear reactions involving emission of clusters of light nuclei in proton collisions with a heavy-metal target are computed for incident-proton energies between 30 MeV and 2.6 GeV. The calculation relies on the ALICE/ASH and CASCADE/INPE computer codes. The parameters determining the pre-equilibrium cluster emission are varied in the computation.
NASA Astrophysics Data System (ADS)
Romanchuk, V. A.; Lukashenko, V. V.
2018-05-01
A technique for operating the control system of a computing cluster based on neurocomputers is proposed. Particular attention is paid to the method of choosing the structure of the computing cluster, because existing methods are not effective for the specialized hardware base: neurocomputers, which are highly parallel computing devices with an architecture different from the von Neumann architecture. An algorithm for choosing the computational structure of a cloud cluster is described, starting from the direction of data transfer in the program's flow control graph and its adjacency matrix.
Distributed GPU Computing in GIScience
NASA Astrophysics Data System (ADS)
Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.
2013-12-01
Geoscientists strive to discover principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges posed by the increasing volume of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU technology has matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to traditional microprocessors, the modern GPU is a compelling alternative that offers outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) on each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build a virtual supercomputer supporting CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as graphics-targeted devices, are used to greatly improve rendering efficiency in distributed geovisualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. References: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. IEEE Transactions on Visualization and Computer Graphics, 9(3), 378-394. 2. Li, J., Jiang, Y., Yang, C., Huang, Q., & Rice, M. (2013). Visualizing 3D/4D Environmental Data Using Many-core Graphics Processing Units (GPUs) and Multi-core Central Processing Units (CPUs). Computers & Geosciences, 59(9), 78-89. 3. Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E., & Phillips, J. C. (2008). GPU computing. Proceedings of the IEEE, 96(5), 879-899.
Gdula, Michal R.; Poterlowicz, Krzysztof; Mardaryev, Andrei N.; Sharov, Andrey A.; Peng, Y.; Fessing, Michael Y.; Botchkarev, Vladimir A.
2014-01-01
The nucleus of epidermal keratinocytes is a complex and highly compartmentalized organelle, whose structure is markedly changed during terminal differentiation and transition of the genome from a transcriptionally active state seen in the basal and spinous epidermal cells to a fully inactive state in the keratinized cells of the cornified layer. Here, using multi-color confocal microscopy, followed by computational image analysis and mathematical modelling, we demonstrate that in normal mouse foot-pad epidermis transition of keratinocytes from basal epidermal layer to the granular layer is accompanied by marked differences in nuclear architecture and micro-environment including: i) decrease of the nuclear volume, ii) decrease in expression of the markers of transcriptionally-active chromatin; iii) internalization and decrease in the number of nucleoli; iv) increase in the number of pericentromeric heterochromatic clusters; v) increase in the frequency of associations between pericentromeric clusters, chromosomal territory 3, and nucleoli. These data suggest a role for nucleoli and pericentromeric heterochromatin clusters as organizers of nuclear micro-environment required for proper execution of gene expression programs in differentiating keratinocytes and provide important background information for further analyses of alterations in the topological genome organization seen in pathological skin conditions including disorders of epidermal differentiation and epidermal tumors. PMID:23407401
Energy-efficient and fast data gathering protocols for indoor wireless sensor networks.
Tümer, Abdullah Erdal; Gündüz, Mesut
2010-01-01
Wireless Sensor Networks have become an important technology with numerous potential applications for the interaction of computers and the physical environment in civilian and military areas. In the routing protocols that are specifically designed for the applications used by sensor networks, the limited available power of the sensor nodes has been taken into consideration in order to extend the lifetime of the networks. In this paper, two protocols based on LEACH and called R-EERP and S-EERP with base and threshold values are presented. R-EERP and S-EERP are two efficient energy aware routing protocols that can be used for some critical applications such as detecting dangerous gases (methane, ammonium, carbon monoxide, etc.) in an indoor environment. In R-EERP, sensor nodes are deployed randomly in a field similar to LEACH. In S-EERP, nodes are deployed sequentially in the rooms of the flats of a multi-story building. In both protocols, nodes forming clusters do not change during a cluster change time, only the cluster heads change. Furthermore, an XOR operation is performed on the collected data in order to prevent the sending of the same data sensed by the nodes close to each other. Simulation results show that our proposed protocols are more energy-efficient than the conventional LEACH protocol.
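A minimal sketch, under the assumption that the redundancy suppression works roughly as follows, of using XOR at a cluster head to avoid forwarding identical readings from nearby nodes: if the XOR of two packed readings is zero they are identical, so only one copy is sent on. This is an illustration, not the authors' exact R-EERP/S-EERP logic.

```python
def pack(reading: int) -> bytes:
    # Pack a sensor reading (e.g. a gas concentration) into a fixed-width field.
    return reading.to_bytes(2, "big")

def xor_equal(a: bytes, b: bytes) -> bool:
    # Two packed readings are identical exactly when their bytewise XOR is all zeros.
    return all(x ^ y == 0 for x, y in zip(a, b))

def aggregate(readings):
    """At the cluster head, keep only readings whose packed form differs
    (by XOR) from those already kept."""
    kept = []
    for r in map(pack, readings):
        if not any(xor_equal(r, k) for k in kept):
            kept.append(r)
    return kept

# Three nearby nodes report the same value; one differs.
print(len(aggregate([412, 412, 412, 415])))   # -> 2 packets forwarded instead of 4
```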
Heidelberg, John F.; Tully, Benjamin J.
2017-01-01
Metagenomics has become an integral part of defining microbial diversity in various environments. Many ecosystems have characteristically low biomass and few cultured representatives. Linking potential metabolisms to phylogeny in environmental microorganisms is important for interpreting microbial community functions and the impacts these communities have on geochemical cycles. However, with metagenomic studies there is the computational hurdle of ‘binning’ contigs into phylogenetically related units or putative genomes. Binning methods have been implemented with varying approaches such as k-means clustering, Gaussian mixture models, hierarchical clustering, neural networks, and two-way clustering; however, many of these suffer from biases against low coverage/abundance organisms and closely related taxa/strains. We are introducing a new binning method, BinSanity, that utilizes the clustering algorithm affinity propagation (AP), to cluster assemblies using coverage with compositional based refinement (tetranucleotide frequency and percent GC content) to optimize bins containing multiple source organisms. This separation of composition and coverage based clustering reduces bias for closely related taxa. BinSanity was developed and tested on artificial metagenomes varying in size and complexity. Results indicate that BinSanity has a higher precision, recall, and Adjusted Rand Index compared to five commonly implemented methods. When tested on a previously published environmental metagenome, BinSanity generated high completion and low redundancy bins corresponding with the published metagenome-assembled genomes. PMID:28289564
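A minimal sketch of the two-stage idea described above (not BinSanity's actual code): cluster contigs by coverage with affinity propagation, then refine each bin using composition (tetranucleotide frequency plus GC content). The synthetic contigs and parameter choices are placeholders.

```python
import numpy as np
from itertools import product
from sklearn.cluster import AffinityPropagation

TETRAMERS = ["".join(p) for p in product("ACGT", repeat=4)]

def composition(seq):
    """Tetranucleotide frequencies plus GC content for one contig."""
    counts = dict.fromkeys(TETRAMERS, 0)
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in counts:
            counts[kmer] += 1
    tnf = np.array([counts[k] for k in TETRAMERS], dtype=float)
    tnf /= max(tnf.sum(), 1.0)
    gc = (seq.count("G") + seq.count("C")) / max(len(seq), 1)
    return np.append(tnf, gc)

def two_stage_bins(coverage, seqs, damping=0.9):
    """Stage 1: affinity propagation on (log) coverage.
       Stage 2: composition-based refinement within each coverage bin."""
    cov = np.log10(np.asarray(coverage, float) + 1.0).reshape(-1, 1)
    ap = AffinityPropagation(damping=damping, max_iter=1000, random_state=0)
    bins = ap.fit_predict(cov)
    refined = np.zeros((len(seqs), 2), dtype=int)
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        if len(idx) < 3:                       # too few contigs to split further
            sub = np.zeros(len(idx), dtype=int)
        else:
            comp = np.vstack([composition(seqs[i]) for i in idx])
            sub = AffinityPropagation(damping=damping, max_iter=1000,
                                      random_state=0).fit_predict(comp)
        refined[idx] = np.column_stack([np.full(len(idx), b), sub])
    return refined

rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list("ACGT"), size=2000)) for _ in range(12)]
coverage = np.concatenate([rng.normal(10, 1, 6), rng.normal(80, 5, 6)])
print(two_stage_bins(coverage, seqs))
```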
Building CHAOS: An Operating System for Livermore Linux Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garlick, J E; Dunlap, C M
2003-02-21
The Livermore Computing (LC) Linux Integration and Development Project (the Linux Project) produces and supports the Clustered High Availability Operating System (CHAOS), a cluster operating environment based on Red Hat Linux. Each CHAOS release begins with a set of requirements and ends with a formally tested, packaged, and documented release suitable for use on LC's production Linux clusters. One characteristic of CHAOS is that component software packages come from different sources under varying degrees of project control. Some are developed by the Linux Project, some are developed by other LC projects, some are external open source projects, and some are commercial software packages. A challenge to the Linux Project is to adhere to release schedules and testing disciplines in a diverse, highly decentralized development environment. Communication channels are maintained for externally developed packages in order to obtain support, influence development decisions, and coordinate/understand release schedules. The Linux Project embraces open source by releasing locally developed packages under open source license, by collaborating with open source projects where mutually beneficial, and by preferring open source over proprietary software. Project members generally use open source development tools. The Linux Project requires system administrators and developers to work together to resolve problems that arise in production. This tight coupling of production and development is a key strategy for making a product that directly addresses LC's production requirements. It is another challenge to balance support and development activities in such a way that one does not overwhelm the other.
Simulating The Dynamical Evolution Of Galaxies In Group And Cluster Environments
NASA Astrophysics Data System (ADS)
Vijayaraghavan, Rukmani
2015-07-01
Galaxy clusters are harsh environments for their constituent galaxies. A variety of physical processes effective in these dense environments transform gas-rich, spiral, star-forming galaxies to elliptical or spheroidal galaxies with very little gas and therefore minimal star formation. The consequences of these processes are well understood observationally. Galaxies in progressively denser environments have systematically declining star formation rates and gas content. However, a theoretical understanding of where, when, and how these processes act, and of the interplay between the various galaxy transformation mechanisms in clusters, remains elusive. In this dissertation, I use numerical simulations of cluster mergers as well as galaxies evolving in quiescent environments to develop a theoretical framework to understand some of the physics of galaxy transformation in cluster environments. Galaxies can be transformed in smaller groups before they are accreted by their eventual massive cluster environments, an effect termed `pre-processing'. Galaxy cluster mergers themselves can accelerate many galaxy transformation mechanisms, including tidal and ram pressure stripping of galaxies and galaxy-galaxy collisions and mergers that result in reassemblies of galaxies' stars and gas. Observationally, cluster mergers have distinct velocity and phase-space signatures depending on the observer's line of sight with respect to the merger direction. Using dark matter only as well as hydrodynamic simulations of cluster mergers with random ensembles of particles tagged with galaxy models, I quantify the effects of cluster mergers on galaxy evolution before, during, and after the mergers. Based on my theoretical predictions of the dynamical signatures of these mergers in combination with galaxy transformation signatures, one can observationally identify remnants of mergers and quantify the effect of the environment on galaxies in dense group and cluster environments. The presence of long-lived, hot X-ray emitting coronae observed in a large fraction of group and cluster galaxies is not well understood. These coronae are not fully stripped by ram pressure and tidal forces that are efficient in these environments. Theoretically, this is a fascinating and challenging problem that involves understanding and simulating the multitude of physical processes in these dense environments that can remove or replenish galaxies' hot coronae. To solve this problem, I have developed and implemented a robust simulation technique where I simulate the evolution of a realistic cluster environment with a population of galaxies and their gas. With this technique, it is possible to isolate and quantify the importance of the various cluster physical processes for coronal survival. To date, I have performed hydrodynamic simulations of galaxies being ram pressure stripped in quiescent group and cluster environments. Using these simulations, I have characterized the physics of ram pressure stripping and investigated the survival of these coronae in the presence of tidal and ram pressure stripping. I have also generated synthetic X-ray observations of these simulated systems to compare with observed coronae. I have also performed magnetohydrodynamic simulations of galaxies evolving in a magnetized intracluster medium plasma to isolate the effect of magnetic fields on coronal evolution, as well as the effect of orbiting galaxies in amplifying magnetic fields.
This work is an important step towards understanding the effect of cluster environments on galactic gas, and consequently, their long term evolution and impact on star formation rates.
TOSCA-based orchestration of complex clusters at the IaaS level
NASA Astrophysics Data System (ADS)
Caballer, M.; Donvito, G.; Moltó, G.; Rocha, R.; Velten, M.
2017-10-01
This paper describes the adoption and extension of the TOSCA standard by the INDIGO-DataCloud project for the definition and deployment of complex computing clusters, together with the required support in both OpenStack and OpenNebula, carried out in close collaboration with industry partners such as IBM. Two examples of these clusters are described in this paper: the definition of an elastic computing cluster to support the Galaxy bioinformatics application, where nodes are dynamically added and removed from the cluster to adapt to the workload, and the definition of a scalable Apache Mesos cluster for the execution of batch jobs and support for long-running services. The coupling of TOSCA with Ansible Roles to perform automated installation has resulted in the definition of high-level, deterministic templates to provision complex computing clusters across different Cloud sites.
Scientific Cluster Deployment and Recovery - Using puppet to simplify cluster management
NASA Astrophysics Data System (ADS)
Hendrix, Val; Benjamin, Doug; Yao, Yushu
2012-12-01
Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time-consuming task requiring the assistance of Linux system administrators, network engineers as well as domain experts. Universities and small institutions that have a part-time FTE with limited time for and knowledge of the administration of such clusters can be strained by such maintenance tasks. This work is the result of an effort to maintain a data analysis cluster (DAC) with minimal effort by a local system administrator. The realized benefit is that the scientist, who is also the local system administrator, is able to focus on the data analysis instead of the intricacies of managing a cluster. Our work provides a cluster deployment and recovery process (CDRP) based on the puppet configuration engine, allowing a part-time FTE to easily deploy and recover entire clusters with minimal effort. Puppet is a configuration management system (CMS) used widely in computing centers for the automatic management of resources. Domain experts use Puppet's declarative language to define reusable modules for service configuration and deployment. Our CDRP has three actors: domain experts, a cluster designer and a cluster manager. The domain experts first write the puppet modules for the cluster services. A cluster designer would then define a cluster. This includes the creation of cluster roles, mapping the services to those roles and determining the relationships between the services. Finally, a cluster manager would acquire the resources (machines, networking), enter the cluster input parameters (hostnames, IP addresses) and automatically generate the deployment scripts used by puppet to configure each machine to act in its designated role. In the event of a machine failure, the originally generated deployment scripts along with puppet can be used to easily reconfigure a new machine. The cluster definition produced in our CDRP is an integral part of automating cluster deployment in a cloud environment. Our future cloud efforts will further build on this work.
Shahriari, Mohammadali; Biglarbegian, Mohammad
2018-01-01
This paper presents a new conflict resolution methodology for multiple mobile robots while ensuring their motion-liveness, especially for cluttered and dynamic environments. Our method constructs a mathematical formulation in the form of an optimization problem that minimizes the overall travel times of the robots subject to resolving all the conflicts in their motion. This optimization problem can be easily solved through coordinating only the robots' speeds. To overcome the computational cost of executing the algorithm in very cluttered environments, we develop an innovative method that clusters the environment into independent subproblems that can be solved using parallel programming techniques. We demonstrate the scalability of our approach through extensive simulations. Simulation results showed that our proposed method is capable of resolving the conflicts of 100 robots in less than 1.23 s in a cluttered environment that has 4357 intersections in the paths of the robots. We also developed an experimental testbed and demonstrated that our approach can be implemented in real time. We finally compared our approach with other existing methods in the literature both quantitatively and qualitatively. This comparison shows that, while our approach is mathematically sound, it is also more computationally efficient, scalable to a very large number of robots, and guarantees the live and smooth motion of the robots.
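A minimal sketch of the decomposition idea described above, under the assumption that conflicts can be represented as pairs of robots whose paths intersect: connected components of the conflict graph are independent subproblems, each of which could be handed to its own worker; the speed-coordination optimization itself is left as a stub.

```python
def find(parent, i):
    # Path-halving union-find lookup.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def conflict_components(n_robots, conflicts):
    """Union-find over the conflict graph; each connected component is an
    independent subproblem that can be dispatched to its own worker."""
    parent = list(range(n_robots))
    for i, j in conflicts:
        ri, rj = find(parent, i), find(parent, j)
        if ri != rj:
            parent[ri] = rj
    groups = {}
    for i in range(n_robots):
        groups.setdefault(find(parent, i), []).append(i)
    return list(groups.values())

def solve_speeds(robots):
    # Placeholder for the speed-coordination optimization over one subproblem.
    return {r: 1.0 for r in robots}

conflicts = [(0, 1), (1, 2), (5, 6)]          # robot pairs whose paths intersect
groups = conflict_components(8, conflicts)
solutions = [solve_speeds(g) for g in groups] # each group could run on its own core
print(groups)
```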
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel; ...
2017-03-08
Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
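A minimal sketch of the kind of tensor contraction that dominates coupled-cluster calculations, written with numpy.einsum and also recast as the matrix-matrix multiply that a DGEMM-based library would perform; real frameworks such as Libtensor additionally exploit symmetry and block structure, which this ignores.

```python
import numpy as np

no, nv = 6, 12                       # illustrative numbers of occupied/virtual orbitals
rng = np.random.default_rng(0)
t2 = rng.standard_normal((no, no, nv, nv))      # T2 amplitudes t_{ij}^{ab}
v_vvvv = rng.standard_normal((nv, nv, nv, nv))  # two-electron integrals <ab||ef>

# A characteristic "particle-particle ladder" style contraction:
# R_{ij}^{ab} = sum_{ef} <ab||ef> t_{ij}^{ef}
r2 = np.einsum("abef,ijef->ijab", v_vvvv, t2)

# The same contraction cast as a single matrix-matrix multiply.
r2_gemm = (v_vvvv.reshape(nv * nv, nv * nv) @
           t2.transpose(2, 3, 0, 1).reshape(nv * nv, no * no)).reshape(nv, nv, no, no)
print(np.allclose(r2, r2_gemm.transpose(2, 3, 0, 1)))
```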
New computing systems and their impact on structural analysis and design
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1989-01-01
A review is given of the recent advances in computer technology that are likely to impact structural analysis and design. The computational needs for future structures technology are described. The characteristics of new and projected computing systems are summarized. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism. The strategy is designed for computers with a shared memory and a small number of powerful processors (or a small number of clusters of medium-range processors). It is based on approximating the response of the structure by a combination of symmetric and antisymmetric response vectors, each obtained using a fraction of the degrees of freedom of the original finite element model. The strategy was implemented on the CRAY X-MP/4 and the Alliant FX/8 computers. For nonlinear dynamic problems on the CRAY X-MP with four CPUs, it resulted in an order of magnitude reduction in total analysis time, compared with the direct analysis on a single-CPU CRAY X-MP machine.
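A minimal sketch of the symmetry-based splitting described above: given a permutation that mirrors the structure's degrees of freedom, any response vector is the sum of a symmetric and an antisymmetric part, each of which can be computed on roughly half the model. The mirror map below is a toy assumption, not the paper's partitioning code.

```python
import numpy as np

def split_response(u, mirror, sign):
    """mirror[i] is the DOF that i maps to under the reflection; sign[i] is +1
    or -1 depending on whether that DOF component flips under the reflection."""
    u_mirrored = sign * u[mirror]
    u_sym = 0.5 * (u + u_mirrored)
    u_anti = 0.5 * (u - u_mirrored)
    return u_sym, u_anti

rng = np.random.default_rng(0)
u = rng.standard_normal(8)
mirror = np.array([4, 5, 6, 7, 0, 1, 2, 3])   # DOFs paired across the symmetry plane
sign = np.ones(8)
u_s, u_a = split_response(u, mirror, sign)
print(np.allclose(u_s + u_a, u))              # the two parts reconstruct the response
```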
SCEAPI: A unified Restful Web API for High-Performance Computing
NASA Astrophysics Data System (ADS)
Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi
2017-10-01
The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer and job management for creating, submitting and monitoring jobs, and explain how to use SCEAPI in an easy way. Finally, we discuss how to quickly exploit more HPC resources for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.
A distributed version of the NASA Engine Performance Program
NASA Technical Reports Server (NTRS)
Cours, Jeffrey T.; Curlett, Brian P.
1993-01-01
Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computer environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public domain software package, called Parallel Virtual Machine, allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
Nonlinear Fluid Computations in a Distributed Environment
NASA Technical Reports Server (NTRS)
Atwood, Christopher A.; Smith, Merritt H.
1995-01-01
The performance of a loosely and tightly coupled workstation cluster is compared against a conventional vector supercomputer for the solution of the Reynolds-averaged Navier-Stokes equations. The application geometries include a transonic airfoil, a tiltrotor wing/fuselage, and a wing/body/empennage/nacelle transport. Decomposition is of the manager-worker type, with solution of one grid zone per worker process coupled using the PVM message passing library. Task allocation is determined by grid size and processor speed, subject to available memory penalties. Each fluid zone is computed using an implicit diagonal scheme in an overset mesh framework, while relative body motion is accomplished using an additional worker process to re-establish grid communication.
A Primer on High-Throughput Computing for Genomic Selection
Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel
2011-01-01
High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high-throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized genetic gain). Eventually, HTC may change our view of data analysis as well as decision-making in the post-genomic era of selection programs in animals and plants, or in the study of complex diseases in humans. PMID:22303303
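A minimal, generic sketch (not the authors' pipeline) of the batch-parallel pattern described above: each trait evaluation is an independent task, so traits can be dispatched concurrently instead of sequentially; a simple ridge regression stands in for the genomic prediction model.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def fit_trait(args):
    """Fit one trait; ridge regression as a stand-in for a genomic prediction model:
    beta = (X'X + lam*I)^-1 X'y."""
    markers, phenotypes, lam = args
    xtx = markers.T @ markers + lam * np.eye(markers.shape[1])
    return np.linalg.solve(xtx, markers.T @ phenotypes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(500, 200)).astype(float)   # 500 animals x 200 markers
    traits = [X @ rng.standard_normal(200) + rng.standard_normal(500) for _ in range(6)]

    # Dispatch all trait evaluations concurrently instead of one after another.
    with ProcessPoolExecutor() as pool:
        effects = list(pool.map(fit_trait, [(X, y, 10.0) for y in traits]))
    print(len(effects), effects[0].shape)
```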
STAR FORMATION AND SUPERCLUSTER ENVIRONMENT OF 107 NEARBY GALAXY CLUSTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, Seth A.; Hickox, Ryan C.; Wegner, Gary A.
We analyze the relationship between star formation (SF), substructure, and supercluster environment in a sample of 107 nearby galaxy clusters using data from the Sloan Digital Sky Survey. Previous works have investigated the relationships between SF and cluster substructure, and cluster substructure and supercluster environment, but definitive conclusions relating all three of these variables have remained elusive. We find an inverse relationship between cluster SF fraction (f_SF) and supercluster environment density, calculated using the galaxy luminosity density field at a smoothing length of 8 h⁻¹ Mpc (D8). The slope of f_SF versus D8 is −0.008 ± 0.002. The f_SF of clusters located in low-density large-scale environments, 0.244 ± 0.011, is higher than for clusters located in high-density supercluster cores, 0.202 ± 0.014. We also divide superclusters, according to their morphology, into filament- and spider-type systems. The inverse relationship between cluster f_SF and large-scale density is dominated by filament- rather than spider-type superclusters. In high-density cores of superclusters, we find a higher f_SF in spider-type superclusters, 0.229 ± 0.016, than in filament-type superclusters, 0.166 ± 0.019. Using principal component analysis, we confirm these results and the direct correlation between cluster substructure and SF. These results indicate that cluster SF is affected by the dynamical age of the cluster (younger systems exhibit higher amounts of SF), the large-scale density of the supercluster environment (high-density core regions exhibit lower amounts of SF), and supercluster morphology (spider-type superclusters exhibit higher amounts of SF at high densities).
An Improved Clustering Algorithm of Tunnel Monitoring Data for Cloud Computing
Zhong, Luo; Tang, KunHao; Li, Lin; Yang, Guang; Ye, JingJing
2014-01-01
With the rapid development of urban construction, the number of urban tunnels is increasing and the data they produce become more and more complex. As a result, traditional clustering algorithms cannot handle the massive amount of tunnel monitoring data. To solve this problem, an improved parallel clustering algorithm based on k-means is proposed. It is a clustering algorithm that processes the data using MapReduce within a cloud computing environment. It not only handles massive data but is also more efficient. Moreover, it is able to compute the average dissimilarity degree of each cluster in order to clean the abnormal data. PMID:24982971
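A minimal sketch of one k-means iteration expressed as map and reduce steps, mirroring the MapReduce formulation described above; the data chunks stand in for splits held by different workers, and this is illustrative rather than the paper's implementation.

```python
import numpy as np

def map_step(chunk, centers):
    """Assign each point in a data chunk to its nearest center and emit
    (center_id, partial_sum, count) triples."""
    d = np.linalg.norm(chunk[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    return [(k, chunk[labels == k].sum(axis=0), int((labels == k).sum()))
            for k in range(len(centers))]

def reduce_step(emitted, centers):
    """Combine partial sums from all mappers and update the centers."""
    sums = np.zeros_like(centers)
    counts = np.zeros(len(centers))
    for k, s, c in emitted:
        sums[k] += s
        counts[k] += c
    nonzero = counts > 0
    new_centers = centers.copy()
    new_centers[nonzero] = sums[nonzero] / counts[nonzero][:, None]
    return new_centers

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(m, 0.2, size=(300, 3)) for m in (0.0, 1.0, 2.0)])
chunks = np.array_split(data, 4)                  # pretend these live on 4 workers
centers = data[rng.choice(len(data), 3, replace=False)]
for _ in range(10):
    emitted = [t for chunk in chunks for t in map_step(chunk, centers)]
    centers = reduce_step(emitted, centers)
print(np.round(centers, 2))
```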
Cerón-Muñoz, M F; Tonhati, H; Costa, C N; Rojas-Sarmiento, D; Echeverri Echeverri, D M
2004-08-01
Descriptive herd variables (DVHE) were used to explain genotype by environment interactions (G x E) for milk yield (MY) in Brazilian and Colombian production environments and to develop a herd-cluster model to estimate covariance components and genetic parameters for each herd environment group. Data consisted of 180,522 lactation records of 94,558 Holstein cows from 937 Brazilian and 400 Colombian herds. Herds in both countries were jointly grouped in thirds according to 8 DVHE: production level, phenotypic variability, age at first calving, calving interval, percentage of imported semen, lactation length, and herd size. For each DVHE, REML bivariate animal model analyses were used to estimate genetic correlations for MY between upper and lower thirds of the data. Based on estimates of genetic correlations, weights were assigned to each DVHE to group herds in a cluster analysis using the FASTCLUS procedure in SAS. Three clusters were defined, and genetic and residual variance components were heterogeneous among herd clusters. Estimates of heritability in clusters 1 and 3 were 0.28 and 0.29, respectively, but the estimate was larger (0.39) in Cluster 2. The genetic correlations of MY from different clusters ranged from 0.89 to 0.97. The herd-cluster model based on DVHE properly takes into account G x E by grouping similar environments accordingly and seems to be an alternative to simply considering country borders to distinguish between environments.
Cognitive Model Exploration and Optimization: A New Challenge for Computational Science
2010-03-01
the generation and analysis of computational cognitive models to explain various aspects of cognition. Typically the behavior of these models...computational scale of a workstation, so we have turned to high performance computing (HPC) clusters and volunteer computing for large-scale...computational resources. The majority of applications on the Department of Defense HPC clusters focus on solving partial differential equations (Post
Formation of Cool Cores in Galaxy Clusters via Hierarchical Mergers
NASA Astrophysics Data System (ADS)
Motl, Patrick M.; Burns, Jack O.; Loken, Chris; Norman, Michael L.; Bryan, Greg
2004-05-01
We present a new scenario for the formation of cool cores in rich galaxy clusters, based on results from recent high spatial dynamic range, adaptive mesh Eulerian hydrodynamic simulations of large-scale structure formation. We find that cores of cool gas, material that would be identified as a classical cooling flow on the basis of its X-ray luminosity excess and temperature profile, are built from the accretion of discrete stable subclusters. Any ``cooling flow'' present is overwhelmed by the velocity field within the cluster; the bulk flow of gas through the cluster typically has speeds up to about 2000 km s-1, and significant rotation is frequently present in the cluster core. The inclusion of consistent initial cosmological conditions for the cluster within its surrounding supercluster environment is crucial when the evolution of cool cores in rich galaxy clusters is simulated. This new model for the hierarchical assembly of cool gas naturally explains the high frequency of cool cores in rich galaxy clusters, despite the fact that a majority of these clusters show evidence of substructure that is believed to arise from recent merger activity. Furthermore, our simulations generate complex cluster cores in concordance with recent X-ray observations of cool fronts, cool ``bullets,'' and filaments in a number of galaxy clusters. Our simulations were computed with a coupled N-body, Eulerian, adaptive mesh refinement, hydrodynamics cosmology code that properly treats the effects of shocks and radiative cooling by the gas. We employ up to seven levels of refinement to attain a peak resolution of 15.6 kpc within a volume 256 Mpc on a side and assume a standard ΛCDM cosmology.
CHARGING AND COAGULATION OF DUST IN PROTOPLANETARY PLASMA ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, L. S.; Land, V.; Hyde, T. W., E-mail: lorin_matthews@baylor.edu
2012-01-01
Combining a particle-particle, particle-cluster, and cluster-cluster agglomeration model with an aggregate charging model, the coagulation and charging of dust particles in plasma environments relevant for protoplanetary disks have been investigated, including the effect of electron depletion in high dust density environments. The results show that charged aggregates tend to grow by adding small particles and clusters to larger particles and clusters, and that cluster-cluster aggregation is significantly more effective than particle-cluster aggregation. Comparisons of the grain structure show that with increasing aggregate charge the compactness factor, φ_σ, decreases and has a narrower distribution, indicating a fluffier structure. Neutral aggregates are more compact, with larger φ_σ, and exhibit a larger variation in fluffiness. Overall, increased aggregate charge leads to larger, fluffier, and more massive aggregates.
Rapid insights from remote sensing in the geosciences
NASA Astrophysics Data System (ADS)
Plaza, Antonio
2015-03-01
The growing availability of capacity computing for atomistic materials modeling has encouraged the use of high-accuracy computationally intensive interatomic potentials, such as SNAP. These potentials also happen to scale well on petascale computing platforms. SNAP has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The computational cost per atom is much greater than that of simpler potentials such as Lennard-Jones or EAM, while the communication cost remains modest. We discuss a variety of strategies for implementing SNAP in the LAMMPS molecular dynamics package. We present scaling results obtained running SNAP on three different classes of machine: a conventional Intel Xeon CPU cluster; the Titan GPU-based system; and the combined Sequoia and Vulcan BlueGene/Q. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Dept. of Energy's National Nuclear Security Admin. under Contract DE-AC04-94AL85000.
Commodity Cluster Computing for Remote Sensing Applications using Red Hat LINUX
NASA Technical Reports Server (NTRS)
Dorband, John
2003-01-01
Since 1994, we have been doing research at Goddard Space Flight Center on implementing a wide variety of applications on commodity-based computing clusters. This talk is about these clusters and how they are used for these applications, including ones for remote sensing.
Users matter : multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.
Menicucci, Nicolas C
2014-03-28
A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.
Method of identifying clusters representing statistical dependencies in multivariate data
NASA Technical Reports Server (NTRS)
Borucki, W. J.; Card, D. H.; Lyle, G. C.
1975-01-01
The approach is first to cluster and then to compute spatial boundaries for the resulting clusters. The next step is to compute, from a set of Monte Carlo samples obtained from scrambled data, estimates of the probabilities of obtaining at least as many points within the boundaries as were actually observed in the original data.
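A minimal Python sketch of the scrambled-data significance estimate described above; the function name and the +1 finite-sample correction are illustrative assumptions, not the original implementation.

```python
import numpy as np

def cluster_count_pvalue(observed_count, scrambled_counts):
    """Estimate P(at least observed_count points fall inside the cluster boundary)
    from counts obtained on Monte Carlo scrambled data (illustrative)."""
    scrambled_counts = np.asarray(scrambled_counts)
    # the +1 terms are a common finite-sample correction, an assumption here
    return (1 + np.sum(scrambled_counts >= observed_count)) / (1 + scrambled_counts.size)

# example: 42 points observed inside a boundary vs. counts from 999 scrambles
print(cluster_count_pvalue(42, np.random.default_rng(0).poisson(30, size=999)))
```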
NASA Astrophysics Data System (ADS)
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of exploiting available high-performance computing (HPC) platforms to model fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a larger number of computational cores is used. Simulation results indicate that, when the number of cores used is not a multiple of the number of cores per cluster node, some allocation strategies provide more efficient calculations than others.
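To make the node-alignment point concrete, a small hedged Python helper that reports how a requested core count maps onto whole cluster nodes; the function and its output fields are illustrative assumptions, not part of the cited study.

```python
def allocation_summary(requested_cores, cores_per_node):
    """Report how a requested MPI core count maps onto whole cluster nodes
    (illustrative; the cited study compares such strategies empirically)."""
    nodes_used = -(-requested_cores // cores_per_node)   # ceiling division
    idle_cores = nodes_used * cores_per_node - requested_cores
    return {
        "nodes_used": nodes_used,
        "idle_cores": idle_cores,
        "node_aligned": idle_cores == 0,  # multiples of the node size leave no idle cores
    }

# example: 40 cores on 16-core nodes occupies 3 nodes and leaves 8 cores idle
print(allocation_summary(40, 16))
```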
Urban Typologies: Towards an ORNL Urban Information System (UrbIS)
NASA Astrophysics Data System (ADS)
KC, B.; King, A. W.; Sorokine, A.; Crow, M. C.; Devarakonda, R.; Hilbert, N. L.; Karthik, R.; Patlolla, D.; Surendran Nair, S.
2016-12-01
Urban environments differ in a large number of key attributes; these include infrastructure, morphology, demography, and economic and social variables, among others. These attributes determine many urban properties such as energy and water consumption, greenhouse gas emissions, air quality, public health, sustainability, and vulnerability and resilience to climate change. Characterization of urban environments by a single property such as population size does not sufficiently capture this complexity. In addressing this multivariate complexity one typically faces such problems as disparate and scattered data, challenges of big data management, spatial searching, insufficient computational capacity for data-driven analysis and modelling, and the lack of tools to quickly visualize the data and compare the analytical results across different cities and regions. We have begun the development of an Urban Information System (UrbIS) to address these issues, one that embraces the multivariate "big data" of urban areas and their environments across the United States utilizing the Big Data as a Service (BDaaS) concept. With technological roots in High-performance Computing (HPC), BDaaS is based on the idea of outsourcing computations to different computing paradigms, scalable to super-computers. UrbIS aims to incorporate federated metadata search, integrated modeling and analysis, and geovisualization into a single seamless workflow. The system includes web-based 2D/3D visualization with an iGlobe interface, fast cloud-based and server-side data processing and analysis, and a metadata search engine based on the Mercury data search system developed at Oak Ridge National Laboratory (ORNL). Results of analyses will be made available through web services. We are implementing UrbIS in ORNL's Compute and Data Environment for Science (CADES) and are leveraging ORNL experience in complex data and geospatial projects. The development of UrbIS is being guided by an investigation of urban heat islands (UHI) using high-dimensional clustering and statistics to define urban typologies (types of cities) in an investigation of how UHI vary with urban type across the United States.
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments
Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H.A.; Hlavacek, William S.; Posner, Richard G.
2016-01-01
Summary: Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. Availability and implementation: BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary information: Supplementary data are available at Bioinformatics online. Contact: bionetgen.help@gmail.com PMID:26556387
Spiral Arm Morphology in Cluster Environment
NASA Astrophysics Data System (ADS)
Choi, Isaac Yeoun-Gyu; Ann, Hong Bae
2011-10-01
We examine the dependence of the morphology of spiral galaxies on the environment using the KIAS Value Added Galaxy Catalog (VAGC) which is derived from the Sloan Digital Sky Survey (SDSS) DR7. Our goal is to understand whether the local environment or global conditions dominate in determining the morphology of spiral galaxies. For the analysis, we conduct a morphological classification of galaxies in 20 X-ray selected Abell clusters up to z˜0.06, using SDSS color images and the X-ray data from the Northern ROSAT All-Sky (NORAS) catalog. We analyze the distribution of arm classes along the clustercentric radius as well as that of Hubble types. To segregate the effect of local environment from the global environment, we compare the morphological distribution of galaxies in two X-ray luminosity groups, the low-Lx clusters (Lx < 0.15×10^44 erg/s) and high-Lx clusters (Lx > 1.8×10^44 erg/s). We find that the morphology-clustercentric relation prevails in the cluster environment although there is a break near the cluster virial radius. The grand design arms comprise about 40% of the cluster spiral galaxies with a weak morphology-clustercentric radius relation for the arm classes, in the sense that flocculent galaxies tend to increase outward, regardless of the X-ray luminosity. From the cumulative radial distribution of cluster galaxies, we find that the low-Lx clusters are fully virialized while the high-Lx clusters are not.
High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away
NASA Astrophysics Data System (ADS)
Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.
2012-09-01
By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data, and so the costs of running applications vary widely according to how they use resources. The cloud is well suited to processing CPU-bound (and memory bound) workflows such as the periodogram code, given the relatively low cost of processing in comparison with I/O operations. I/O-bound applications such as Montage perform best on high-performance clusters with fast networks and parallel file-systems. Science-driven Cyberinfrastructure: Montage has been widely used as a driver application to develop workflow management services, such as task scheduling in distributed environments, designing fault tolerance techniques for job schedulers, and developing workflow orchestration techniques. Running Parallel Applications Across Distributed Cloud Environments: Data processing will eventually take place in parallel distributed across cyber infrastructure environments having different architectures. We have used the Pegasus Work Management System (WMS) to successfully run applications across three very different environments: TeraGrid, OSG (Open Science Grid), and FutureGrid. Provisioning resources across different grids and clouds (also referred to as Sky Computing), involves establishing a distributed environment, where issues of, e.g, remote job submission, data management, and security need to be addressed. This environment also requires building virtual machine images that can run in different environments. Usually, each cloud provides basic images that can be customized with additional software and services. 
In most of our work, we provisioned compute resources using a custom application, called Wrangler. Pegasus WMS abstracts the architectures of the compute environments away from the end-user, and can be considered a first-generation tool suitable for scientists to run their applications on disparate environments.
The Czech National Grid Infrastructure
NASA Astrophysics Data System (ADS)
Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.
2017-10-01
The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a network fast enough to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest share of CPUs is still accessible via distributed Torque servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and a Hadoop cluster is also provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics will be given.
The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster
NASA Astrophysics Data System (ADS)
Löwe, P.; Klump, J.; Thaler, J.
2012-04-01
Compute clusters can be used as GIS workbenches; their wealth of resources allows us to take on geocomputation tasks which exceed the limitations of smaller systems. Harnessing these capabilities requires a Geographic Information System (GIS) able to utilize the available cluster configuration/architecture and offering a sufficient degree of user friendliness to allow for wide application. In this paper we report on the first successful porting of GRASS GIS, the oldest and largest Free Open Source (FOSS) GIS project, onto a compute cluster using Platform Computing's Load Sharing Facility (LSF). In 2008, GRASS6.3 was installed on the GFZ compute cluster, which at that time comprised 32 nodes. The interaction with the GIS was limited to the command line interface, which required further development to encapsulate the GRASS GIS business layer to facilitate its use by users not familiar with GRASS GIS. During the summer of 2011, multiple versions of GRASS GIS (v 6.4, 6.5 and 7.0) were installed on the upgraded GFZ compute cluster, now consisting of 234 nodes with 480 CPUs providing 3084 cores. The GFZ compute cluster currently offers 19 different processing queues with varying hardware capabilities and priorities, allowing for fine-grained scheduling and load balancing. After successful testing of core GIS functionalities, including the graphical user interface, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008). A first application of the new GIS functionality was the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). For this, up to 500 processing nodes were used in parallel. Further trials included the processing of geometrically complex problems, requiring significant amounts of processing time. The GIS cluster successfully completed all these tasks, with processing times lasting up to a full 20 CPU days. The deployment of GRASS GIS on a compute cluster allows our users to tackle GIS tasks previously out of reach of single workstations. In addition, this GRASS GIS cluster implementation will be made available to other users at GFZ in the course of 2012. It will thus become a research utility in the sense of "Software as a Service" (SaaS) and can be seen as our first step towards building a GFZ corporate cloud service.
NASA Astrophysics Data System (ADS)
Tsaur, Woei-Jiunn; Pai, Haw-Tyng
2008-11-01
The applications of group computing and communication motivate the requirement to provide group access control in mobile ad hoc networks (MANETs). Operations in MANET groups are performed in a decentralized manner and membership is accommodated dynamically. Moreover, due to the lack of centralized control, MANET groups are inherently insecure and vulnerable to attacks from both within and outside the groups. Such features make access control more challenging in MANETs. Recently, several researchers have proposed group access control mechanisms in MANETs based on a variety of threshold signatures. However, these mechanisms cannot adequately handle MANETs' dynamic environments, because threshold-based mechanisms cannot operate when the number of members falls below the threshold value. Hence, by combining the efficient elliptic curve cryptosystem, the self-certified public key cryptosystem and the secure filter technique, we construct dynamic key management schemes based on hierarchical clustering for securing group access control in MANETs. Specifically, the proposed schemes can consistently accomplish secure group access control by renewing only the secure filters of a few cluster heads when a cluster head joins or leaves a cross-cluster. In this way, the proposed group access control scheme can be very effective for securing practical applications in MANETs.
Hierarchical trie packet classification algorithm based on expectation-maximization clustering.
Bi, Xia-An; Zhao, Junxia
2017-01-01
With the growth of computer network bandwidth, packet classification algorithms which are able to deal with large-scale rule sets are urgently needed. Among the existing algorithms, packet classification algorithms based on the hierarchical trie have become an important research branch because of their wide practical use. Although the hierarchical trie saves considerable storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses a formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, this paper proposes a hierarchical trie based on the results of the expectation-maximization clustering. Finally, this paper conducts both simulation experiments and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the experimental results. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm.
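A hedged sketch of the clustering step only: rules are mapped to points in a two-dimensional space and grouped with an EM-fitted Gaussian mixture (scikit-learn's GaussianMixture as a stand-in); the toy rule coordinates and the choice of four clusters are assumptions, and building one trie per cluster is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# toy mapping of classification rules to 2-D points (e.g. start of the
# source and destination prefix ranges); real rule sets are far larger
rng = np.random.default_rng(0)
rules_2d = np.vstack([rng.normal(loc, 5.0, size=(50, 2)) for loc in (0, 40, 80, 120)])

# EM clustering of the rules by their aggregate spatial characteristics
gmm = GaussianMixture(n_components=4, random_state=0).fit(rules_2d)
cluster_of_rule = gmm.predict(rules_2d)

# one hierarchical trie would then be built per cluster (not shown here)
for c in range(4):
    print(f"cluster {c}: {np.sum(cluster_of_rule == c)} rules")
```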
Star formation and galaxy evolution in different environments, from the field to massive clusters
NASA Astrophysics Data System (ADS)
Tyler, Krystal
This thesis focuses on how a galaxy's environment affects its star formation, from the galactic environment of the most luminous IR galaxies in the universe to groups and massive clusters of galaxies. Initially, we studied a class of high-redshift galaxies with extremely red optical-to-mid-IR colors. We used Spitzer spectra and photometry to identify whether the IR outputs of these objects are dominated by AGNs or star formation. In accordance with the expectation that the AGN contribution should increase with IR luminosity, we find most of our very red IR-luminous galaxies to be dominated by an AGN, though a few appear to be star-formation dominated. We then observed how the density of the extragalactic environment plays a role in galaxy evolution. We begin with Spitzer and HST observations of intermediate-redshift groups. Although the environment has clearly changed some properties of its members, group galaxies at a given mass and morphology have comparable amounts of star formation to field galaxies. We conclude the main difference between the two environments is the higher fraction of massive early-type galaxies in groups. Clusters show even more distinct trends. Using three different star-formation indicators, we found the mass-SFR relation for cluster galaxies can look similar to the field (A2029) or have a population of low-star-forming galaxies in addition to the field-like galaxies (Coma). We attribute this to differing merger histories: recently-accreted galaxies would not have time for their star formation to be quenched by the cluster environment (A2029), while an accretion event in the past few Gyr would give galaxies enough time to have their star formation suppressed by the cluster environment. Since these two main quenching mechanisms depend on the density of the intracluster gas, we turn to a group of X-ray underluminous clusters to study how star-forming galaxies have been affected in clusters with lower than expected X-ray emission. We find the distribution of star-forming galaxies with respect to stellar mass varies from cluster to cluster, echoing what we found for Coma and A2029. In other words, while some preprocessing occurs in groups, the cluster environment still contributes to the quenching of star formation.
Tran, Van Tan; Nguyen, Minh Thao; Tran, Quoc Tri
2017-10-12
Density functional theory and the multiconfigurational CASSCF/CASPT2 method have been employed to study the low-lying states of VGe n -/0 (n = 1-4) clusters. For VGe -/0 and VGe 2 -/0 clusters, the relative energies and geometrical structures of the low-lying states are reported at the CASSCF/CASPT2 level. For the VGe 3 -/0 and VGe 4 -/0 clusters, the computational results show that due to the large contribution of the Hartree-Fock exact exchange, the hybrid B3LYP, B3PW91, and PBE0 functionals overestimate the energies of the high-spin states as compared to the pure GGA BP86 and PBE functionals and the CASPT2 method. On the basis of the pure GGA BP86 and PBE functionals and the CASSCF/CASPT2 results, the ground states of anionic and neutral clusters are defined, the relative energies of the excited states are computed, and the electron detachment energies of the anionic clusters are evaluated. The computational results are employed to give new assignments for all features in the photoelectron spectra of VGe 3 - and VGe 4 - clusters.
Large Data at Small Universities: Astronomical processing using a computer classroom
NASA Astrophysics Data System (ADS)
Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen
2016-06-01
The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the University, the resource impact on the investigator is generally low. By using open-source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an “embarrassingly parallel” manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to 1. photometry of large-format images and 2. statistical significance tests for X-ray lightcurve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
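As a hedged illustration of the "embarrassingly parallel" pattern on a single machine, using only Python's standard library; the photometry function is a hypothetical placeholder, and distributing work across lab machines would additionally need a network task queue, which is not shown.

```python
from concurrent.futures import ProcessPoolExecutor
from glob import glob

def measure_image(path):
    """Hypothetical placeholder for per-image photometry; returns a summary."""
    # a real routine would load the frame and measure source fluxes here
    return path, 0.0

if __name__ == "__main__":
    images = sorted(glob("frames/*.fits"))   # assumed directory of image frames
    # each image is independent, so the work parallelises trivially
    with ProcessPoolExecutor(max_workers=8) as pool:
        for path, flux in pool.map(measure_image, images):
            print(path, flux)
```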
Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI
Donato, David I.
2017-01-01
In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
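The reference implementation described above is in C with MPI; below is a hedged Python/mpi4py analogue of the worker-initiated ("pull from a bag of tasks") pattern, with hypothetical message tags and layout.

```python
from mpi4py import MPI

TASK_TAG, STOP_TAG = 1, 2   # hypothetical message tags

def run(tasks, do_work):
    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:                       # coordinator holds the bag of tasks
        status, results, next_task, stopped = MPI.Status(), [], 0, 0
        while stopped < comm.Get_size() - 1:
            msg = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
            if msg is not None:
                results.append(msg)                # a finished result accompanies the request
            if next_task < len(tasks):
                comm.send(tasks[next_task], dest=status.Get_source(), tag=TASK_TAG)
                next_task += 1
            else:
                comm.send(None, dest=status.Get_source(), tag=STOP_TAG)
                stopped += 1
        return results
    else:                                          # workers pull tasks on demand
        comm.send(None, dest=0)                    # initial request carries no result
        while True:
            status = MPI.Status()
            task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
            if status.Get_tag() == STOP_TAG:
                return None
            comm.send(do_work(task), dest=0)

if __name__ == "__main__":
    out = run([{"run_id": i} for i in range(100)], lambda t: t["run_id"] ** 2)
    if out is not None:
        print(len(out), "results collected")
```

Because faster nodes request new tasks sooner, the makespan stays short despite heterogeneity, which is the behaviour the simulation study above reports.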
NASA Astrophysics Data System (ADS)
Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj
2004-08-01
In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for a distributed control of a cluster of on-board health monitoring and software enabled control systems called SimBOX that will use some of the real-time infrastructure (RTI) functionality from the current military real-time simulation architecture. The uniqueness of the approach is to provide a "plug and play environment" for various system components that run at various data rates (Hz) and the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is possible by providing a communication bus called "Distributed Shared Data Bus" and a distributed computing environment used to scale the control needs by providing a self-contained computing, data logging and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.
Comparison of OPC job prioritization schemes to generate data for mask manufacturing
NASA Astrophysics Data System (ADS)
Lewis, Travis; Veeraraghavan, Vijay; Jantzen, Kenneth; Kim, Stephen; Park, Minyoung; Russell, Gordon; Simmons, Mark
2015-03-01
Delivering mask-ready OPC-corrected data to the mask shop on time is critical for a foundry to meet the cycle-time commitment for a new product. With current OPC compute resource sharing technology, different job scheduling algorithms are possible, such as priority-based resource allocation and fair-share resource allocation. In order to maximize computer cluster efficiency, minimize the cost of the data processing and deliver data on schedule, the trade-offs of each scheduling algorithm need to be understood. Using actual production jobs, each of the scheduling algorithms will be tested in a production tape-out environment. Each scheduling algorithm will be judged on its ability to deliver data on schedule, and the trade-offs associated with each method will be analyzed. It is now possible to introduce advanced scheduling algorithms to the OPC data processing environment to meet the goals of on-time delivery of mask-ready OPC data while maximizing efficiency and reducing cost.
Akuna: An Open Source User Environment for Managing Subsurface Simulation Workflows
NASA Astrophysics Data System (ADS)
Freedman, V. L.; Agarwal, D.; Bensema, K.; Finsterle, S.; Gable, C. W.; Keating, E. H.; Krishnan, H.; Lansing, C.; Moeglein, W.; Pau, G. S. H.; Porter, E.; Scheibe, T. D.
2014-12-01
The U.S. Department of Energy (DOE) is investing in the development of a numerical modeling toolset called ASCEM (Advanced Simulation Capability for Environmental Management) to support modeling analyses at legacy waste sites. ASCEM is an open source and modular computing framework that incorporates new advances and tools for predicting contaminant fate and transport in natural and engineered systems. The ASCEM toolset includes both a Platform with Integrated Toolsets (called Akuna) and a High-Performance Computing multi-process simulator (called Amanzi). The focus of this presentation is on Akuna, an open-source user environment that manages subsurface simulation workflows and associated data and metadata. In this presentation, key elements of Akuna are demonstrated, which include toolsets for model setup, database management, sensitivity analysis, parameter estimation, uncertainty quantification, and visualization of both model setup and simulation results. A key component of the workflow is the automated job launching and monitoring capability, which allows a user to submit and monitor simulation runs on high-performance, parallel computers. Visualization of large outputs can also be performed without moving data back to local resources. These capabilities make high-performance computing accessible to users who might not be familiar with batch queue systems and usage protocols on different supercomputers and clusters.
Construction and Utilization of a Beowulf Computing Cluster: A User's Perspective
NASA Technical Reports Server (NTRS)
Woods, Judy L.; West, Jeff S.; Sulyma, Peter R.
2000-01-01
Lockheed Martin Space Operations - Stennis Programs (LMSO) at the John C Stennis Space Center (NASA/SSC) has designed and built a Beowulf computer cluster which is owned by NASA/SSC and operated by LMSO. The design and construction of the cluster are detailed in this paper. The cluster is currently used for Computational Fluid Dynamics (CFD) simulations. The CFD codes in use and their applications are discussed. Examples of some of the work are also presented. Performance benchmark studies have been conducted for the CFD codes being run on the cluster. The results of two of the studies are presented and discussed. The cluster is not currently being utilized to its full potential; therefore, plans are underway to add more capabilities. These include the addition of structural, thermal, fluid, and acoustic Finite Element Analysis codes as well as real-time data acquisition and processing during test operations at NASA/SSC. These plans are discussed as well.
OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing
NASA Astrophysics Data System (ADS)
Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping
2017-02-01
The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.
Nguyen, Thanh; Khosravi, Abbas; Creighton, Douglas; Nahavandi, Saeid
2014-12-30
Understanding neural functions requires knowledge from analysing electrophysiological data. The process of assigning spikes of a multichannel signal into clusters, called spike sorting, is one of the important problems in such analysis. There have been various automated spike sorting techniques with both advantages and disadvantages regarding accuracy and computational costs. Therefore, developing spike sorting methods that are highly accurate and computationally inexpensive is always a challenge in biomedical engineering practice. An automatic unsupervised spike sorting method is proposed in this paper. The method uses features extracted by the locality preserving projection (LPP) algorithm. These features afterwards serve as inputs for the landmark-based spectral clustering (LSC) method. Gap statistics (GS) is employed to evaluate the number of clusters before the LSC can be performed. The proposed LPP-LSC is a highly accurate and computationally inexpensive spike sorting approach. LPP spike features are very discriminative, thereby boosting the performance of clustering methods. Furthermore, the LSC method exhibits its efficiency when integrated with the cluster evaluator GS. The proposed method's accuracy is approximately 13% superior to that of the benchmark combination of wavelet transformation and superparamagnetic clustering (WT-SPC). Additionally, LPP-LSC computing time is six times less than that of the WT-SPC. LPP-LSC thus offers a spike sorting solution that meets both accuracy and computational cost criteria. LPP and LSC are linear algorithms that help reduce the computational burden, and thus their combination can be applied to real-time spike analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
A Genetic Algorithm for the Bi-Level Topological Design of Local Area Networks
Camacho-Vallejo, José-Fernando; Mar-Ortiz, Julio; López-Ramos, Francisco; Rodríguez, Ricardo Pedraza
2015-01-01
Local access networks (LANs) are commonly used as communication infrastructures which meet the demand of a set of users in the local environment. Usually these networks consist of several LAN segments connected by bridges. The bi-level topological LAN design problem consists of assigning users to clusters and connecting the clusters with bridges in order to obtain a minimum-response-time network with minimum connection cost. Therefore, the decision of optimally assigning users to clusters is made by the leader, while the follower makes the decision of connecting all the clusters by forming a spanning tree. In this paper, we propose a genetic algorithm for solving the bi-level topological design of a Local Access Network. Our solution method considers the Stackelberg equilibrium to solve the bi-level problem. The Stackelberg-Genetic algorithm procedure deals with the fact that the follower's problem cannot be optimally solved in a straightforward manner. The computational results obtained from two different sets of instances show that the developed algorithm is efficient and that it is more suitable for solving the bi-level problem than a previous Nash-Genetic approach. PMID:26102502
Processing ARM VAP data on an AWS cluster
NASA Astrophysics Data System (ADS)
Martin, T.; Macduff, M.; Shippert, T.
2017-12-01
The Atmospheric Radiation Measurement (ARM) Data Management Facility (DMF) manages over 18,000 processes and 1.3 TB of data each day. This includes many Value Added Products (VAPs) that make use of multiple instruments to produce the derived products that are scientifically relevant. A thermodynamic and cloud profile VAP is being developed to provide input to the ARM Large-eddy simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project (https://www.arm.gov/capabilities/vaps/lasso-122). This algorithm is CPU-intensive, and its processing requirements exceeded the available DMF computing capacity. Amazon Web Services (AWS), along with CfnCluster, was investigated to see how it would perform. This cluster environment is cost effective and scales dynamically based on demand. We were able to take advantage of autoscaling, which allowed the cluster to grow and shrink based on the size of the processing queue. We were also able to take advantage of the Amazon Web Services spot market to further reduce the cost. Our test was very successful and showed that cloud resources can be used to efficiently and effectively process time series data. This poster will present the resources and methodology used to successfully run the algorithm.
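A hedged sketch of the autoscaling idea: the desired worker count follows the processing queue backlog, clamped to configured bounds. The function, its parameters, and the thresholds are illustrative assumptions and are not CfnCluster's actual policy.

```python
import math

def desired_workers(queue_length, tasks_per_worker, min_workers=0, max_workers=64):
    """Scale the cluster to the queue backlog (illustrative, not CfnCluster's policy)."""
    if queue_length <= 0:
        return min_workers                      # shrink when the queue drains
    want = math.ceil(queue_length / tasks_per_worker)
    return max(min_workers, min(max_workers, want))

# example: a backlog of 500 VAP jobs at ~10 jobs per worker asks for 50 workers
print(desired_workers(500, 10))
```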
Improved Ant Colony Clustering Algorithm and Its Performance Study
Gao, Wei
2016-01-01
Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533
Structure-sequence based analysis for identification of conserved regions in proteins
Zemla, Adam T; Zhou, Carol E; Lam, Marisa W; Smith, Jason R; Pardes, Elizabeth
2013-05-28
Disclosed are computational methods, and associated hardware and software products, for scoring conservation in a protein structure based on a computationally identified family or cluster of protein structures. A method of computationally identifying a family or cluster of protein structures is also disclosed herein.
Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds
NASA Astrophysics Data System (ADS)
Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.
In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.
GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.
Liu, Yangchuan; Tang, Yuguo; Gao, Xin
2017-12-01
The GATE Monte Carlo simulation platform has good application prospects in treatment planning and quality assurance. However, accurate dose calculation using GATE is time-consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server. The cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, time decreases of 41× and 32× were achieved compared to the single-worker-node case and the single-threaded case, respectively. The test of Hadoop's fault tolerance showed that the simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
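A hedged sketch of the splitting step only: the total photon count is divided into self-contained chunks, one per Map task, each with its own random seed. The function names are assumptions, and no actual GATE macro syntax is reproduced here.

```python
def split_simulation(total_photons, n_subtasks, base_seed=12345):
    """Divide a Monte Carlo workload into independent (seed, n_photons) chunks
    suitable for Map tasks (illustrative; GATE macro commands are not shown)."""
    base, rem = divmod(total_photons, n_subtasks)
    return [
        {"seed": base_seed + i, "n_photons": base + (1 if i < rem else 0)}
        for i in range(n_subtasks)
    ]

# example: 10 million photons over 64 Map tasks; Reduce would aggregate the dose maps
chunks = split_simulation(10_000_000, 64)
assert sum(c["n_photons"] for c in chunks) == 10_000_000
```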
Fast distributed large-pixel-count hologram computation using a GPU cluster.
Pan, Yuechao; Xu, Xuewu; Liang, Xinan
2013-09-10
Large-pixel-count holograms are an essential part of large-size holographic three-dimensional (3D) displays, but the generation of such holograms is computationally demanding. In order to address this issue, we have built a graphics processing unit (GPU) cluster with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques, such as shared memory on the GPU, GPU-level adaptive load balancing, and node-level load distribution. Using these speed improvement techniques on the GPU cluster, we have achieved a 71.4-times computation speed increase for 186M-pixel holograms. Furthermore, we have used the approaches of diffraction limits and subdivision of holograms to overcome the GPU memory limit in computing large-pixel-count holograms. 745M-pixel and 1.80G-pixel holograms were computed in 343 and 3326 s, respectively, for more than 2 million object points with RGB colors. Color 3D objects with 1.02M points were successfully reconstructed from a 186M-pixel hologram computed in 8.82 s with all the above three speed improvement techniques. It is shown that distributed hologram computation using a GPU cluster is a promising approach to increase the computation speed of large-pixel-count holograms for large-size holographic display.
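A hedged sketch of node-level load distribution: hologram rows are partitioned in proportion to each GPU's measured throughput, with the rounding remainder absorbed by the last device. This is an illustrative partition helper, not the authors' implementation.

```python
def partition_rows(total_rows, gpu_throughputs):
    """Split hologram rows across GPUs proportionally to measured throughput
    (illustrative load-balancing helper, not the paper's implementation)."""
    total = float(sum(gpu_throughputs))
    counts = [int(total_rows * t / total) for t in gpu_throughputs]
    counts[-1] += total_rows - sum(counts)      # absorb the rounding remainder
    start, ranges = 0, []
    for c in counts:
        ranges.append((start, start + c))       # half-open row range per GPU
        start += c
    return ranges

# example: 4 GPUs, one of them twice as fast as the others
print(partition_rows(36_000, [1.0, 1.0, 2.0, 1.0]))
```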
A New Soft Computing Method for K-Harmonic Means Clustering.
Yeh, Wei-Chang; Jiang, Yunzhi; Chen, Yee-Fen; Chen, Zhe
2016-01-01
The K-harmonic means clustering algorithm (KHM) is a new clustering method used to group data such that the sum of the harmonic averages of the distances between each entity and all cluster centroids is minimized. Because it is less sensitive to initialization than K-means (KM), many researchers have recently been attracted to studying KHM. In this study, the proposed iSSO-KHM is based on an improved simplified swarm optimization (iSSO) and integrates a variable neighborhood search (VNS) for KHM clustering. As evidence of the utility of the proposed iSSO-KHM, we present extensive computational results on eight benchmark problems. From the computational results, the comparison appears to support the superiority of the proposed iSSO-KHM over previously developed algorithms for all experiments in the literature.
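For concreteness, a hedged sketch of the K-harmonic means performance function that such algorithms minimize; the exponent p and the numerical floor are assumptions following common KHM formulations.

```python
import numpy as np

def khm_objective(X, centers, p=3.5):
    """K-harmonic means objective: sum_i K / sum_j ||x_i - c_j||^(-p)
    (illustrative; p = 3.5 is a commonly used default, an assumption here)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                 # guard against zero distances
    K = centers.shape[0]
    return float(np.sum(K / np.sum(d ** (-p), axis=1)))

# example with random data and three candidate centroids
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
centers = rng.normal(size=(3, 2))
print(khm_objective(X, centers))
```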
Dipnall, J F; Pasco, J A; Berk, M; Williams, L J; Dodd, S; Jacka, F N; Meyer, D
2017-01-01
Key lifestyle-environ risk factors are operative for depression, but it is unclear how risk factors cluster. Machine-learning (ML) algorithms exist that learn, extract, identify and map underlying patterns to identify groupings of depressed individuals without constraints. The aim of this research was to use a large epidemiological study to identify and characterise depression clusters through "Graphing lifestyle-environs using machine-learning methods" (GLUMM). Two ML algorithms were implemented: unsupervised self-organised mapping (SOM) to create GLUMM clusters and a supervised boosted regression algorithm to describe clusters. Ninety-six "lifestyle-environ" variables were used from the National Health and Nutrition Examination Survey (2009-2010). Multivariate logistic regression validated clusters and controlled for possible sociodemographic confounders. The SOM identified two GLUMM cluster solutions. These solutions contained one dominant depressed cluster (GLUMM5-1, GLUMM7-1). Equal proportions of members in each cluster rated as highly depressed (17%). Alcohol consumption and demographics validated clusters. Boosted regression identified GLUMM5-1 as more informative than GLUMM7-1. Members were more likely to: have problems sleeping; eat unhealthily; have lived ≤2 years in their home; live in an old home; perceive themselves as underweight; be exposed to work fumes; have experienced sex at ≤14 years; and not perform moderate recreational activities. Positive relationships of GLUMM5-1 (OR: 7.50, P<0.001) and GLUMM7-1 (OR: 7.88, P<0.001) with depression were found, with significant interactions for those married/living with a partner (P=0.001). Using ML-based GLUMM to form ordered depressive clusters from numerous lifestyle-environ variables enabled a deeper exploration of the heterogeneous data and a better understanding of the relationships between the complex mental health factors. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
NASA Technical Reports Server (NTRS)
Wright, Jeffrey; Thakur, Siddharth
2006-01-01
Loci-STREAM is an evolving computational fluid dynamics (CFD) software tool for simulating possibly chemically reacting, possibly unsteady flows in diverse settings, including rocket engines, turbomachines, oil refineries, etc. Loci-STREAM implements a pressure- based flow-solving algorithm that utilizes unstructured grids. (The benefit of low memory usage by pressure-based algorithms is well recognized by experts in the field.) The algorithm is robust for flows at all speeds from zero to hypersonic. The flexibility of arbitrary polyhedral grids enables accurate, efficient simulation of flows in complex geometries, including those of plume-impingement problems. The present version - Loci-STREAM version 0.9 - includes an interface with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library for access to enhanced linear-equation-solving programs therein that accelerate convergence toward a solution. The name "Loci" reflects the creation of this software within the Loci computational framework, which was developed at Mississippi State University for the primary purpose of simplifying the writing of complex multidisciplinary application programs to run in distributed-memory computing environments including clusters of personal computers. Loci has been designed to relieve application programmers of the details of programming for distributed-memory computers.
Jade: using on-demand cloud analysis to give scientists back their flow
NASA Astrophysics Data System (ADS)
Robinson, N.; Tomlinson, J.; Hilson, A. J.; Arribas, A.; Powell, T.
2017-12-01
The UK's Met Office generates 400 TB of weather and climate data every day by running physical models on its Top 20 supercomputer. As data volumes explode, there is a danger that analysis workflows become dominated by watching progress bars, and not thinking about science. We have been researching how we can use distributed computing to allow analysts to process these large volumes of high-velocity data in a way that's easy, effective and cheap. Our prototype analysis stack, Jade, tries to encapsulate this. Functionality includes: an under-the-hood Dask engine which parallelises and distributes computations without the need to retrain analysts; hybrid compute clusters (AWS, Alibaba, and local compute) comprising many thousands of cores; clusters which autoscale up/down in response to calculation load using Kubernetes and balance the cluster across providers based on the current price of compute; and lazy data access from cloud storage via containerised OpenDAP. This technology stack allows us to perform calculations many orders of magnitude faster than is possible on local workstations. It is also possible to outperform dedicated local compute clusters, as cloud compute can, in principle, scale much further. The use of ephemeral compute resources also makes this implementation cost-efficient.
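A minimal hedged sketch of the under-the-hood idea, using Dask's distributed scheduler on a local cluster as a stand-in; Jade's actual deployment on Kubernetes-managed, autoscaling hybrid clusters and its containerised OpenDAP data access are not reproduced here.

```python
from dask.distributed import Client, LocalCluster
import dask.array as da

if __name__ == "__main__":
    # a local stand-in for the autoscaling hybrid cluster described above
    cluster = LocalCluster(n_workers=4, threads_per_worker=2)
    client = Client(cluster)

    # a lazily evaluated, chunked array standing in for a large gridded field
    field = da.random.random((20_000, 5_000), chunks=(1_000, 5_000))
    zonal_mean = field.mean(axis=0)             # still lazy: builds a task graph

    print(zonal_mean.compute()[:5])             # the scheduler farms chunks out to workers
    client.close()
    cluster.close()
```

The point of the design is that analysts keep writing array-style code; the scheduler, not the analyst, decides where the chunks run.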
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
Hu, Jun; Mercer, Jay; Peyton, Liam; Kantarcioglu, Murat; Malin, Bradley; Buckeridge, David; Samet, Saeed; Earle, Craig
2011-01-01
Background: Providers have been reluctant to disclose patient data for public-health purposes. Even if patient privacy is ensured, the desire to protect provider confidentiality has been an important driver of this reluctance. Methods: Six requirements for a surveillance protocol were defined that satisfy the confidentiality needs of providers and ensure utility to public health. The authors developed a secure multi-party computation protocol using the Paillier cryptosystem to allow the disclosure of stratified case counts and denominators to meet these requirements. The authors evaluated the protocol in a simulated environment on its computation performance and ability to detect disease outbreak clusters. Results: Theoretical and empirical assessments demonstrate that all requirements are met by the protocol. A system implementing the protocol scales linearly in terms of computation time as the number of providers is increased. The absolute time to perform the computations was 12.5 s for data from 3000 practices. This is acceptable performance, given that the reporting would normally be done at 24 h intervals. The accuracy of detecting disease outbreak clusters was unchanged compared with a non-secure distributed surveillance protocol, with an F-score higher than 0.92 for outbreaks involving 500 or more cases. Conclusion: The protocol and associated software provide a practical method for providers to disclose patient data for sentinel, syndromic or other indicator-based surveillance while protecting patient privacy and the identity of individual providers. PMID:21486880
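To illustrate the additive-homomorphic aggregation at the heart of such a protocol, a hedged sketch using the third-party python-paillier (phe) package rather than the authors' implementation; the key size, number of providers, and case counts are assumptions.

```python
from phe import paillier

# the public-health authority generates the keypair; providers see only the public key
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# each provider encrypts its stratified case count locally (assumed toy values)
provider_counts = [3, 0, 7, 2, 5]
encrypted = [public_key.encrypt(c) for c in provider_counts]

# ciphertexts can be summed without decrypting any individual provider's count
encrypted_total = sum(encrypted[1:], encrypted[0])

# only the authority can decrypt, and it learns just the aggregate
print(private_key.decrypt(encrypted_total))    # -> 17
```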
ERIC Educational Resources Information Center
Hofmann, Richard J.
A very general model for the computation of independent cluster solutions in factor analysis is presented. The model is discussed as being either orthogonal or oblique. Furthermore, it is demonstrated that for every orthogonal independent cluster solution there is an oblique analog. Using three illustrative examples, certain generalities are made…
Energy Efficient In-network RFID Data Filtering Scheme in Wireless Sensor Networks
Bashir, Ali Kashif; Lim, Se-Jung; Hussain, Chauhdary Sajjad; Park, Myong-Soon
2011-01-01
RFID (Radio frequency identification) and wireless sensor networks are backbone technologies for pervasive environments. In the integration of RFID and WSN, RFID data uses WSN protocols for multi-hop communications. Energy is a critical issue in WSNs; however, RFID data contains a lot of duplication. These duplications can be eliminated at the base station, but unnecessary transmissions of duplicate data within the network still occur, consuming nodes’ energy and reducing network lifetime. In this paper, we propose an in-network RFID data filtering scheme that efficiently eliminates the duplicate data. For this we use a clustering mechanism where cluster heads eliminate duplicate data and forward filtered data towards the base station. Simulation results prove that our approach saves considerable amounts of energy in terms of communication and computational cost, compared to existing filtering schemes. PMID:22163999
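A hedged sketch of the in-network filtering idea: a cluster head remembers recently forwarded tag readings and suppresses duplicates inside a time window before relaying toward the base station. The class, window length, and message layout are illustrative assumptions.

```python
class ClusterHeadFilter:
    """Suppress duplicate RFID readings at a cluster head (illustrative)."""

    def __init__(self, window_s=5.0):
        self.window_s = window_s
        self.last_forwarded = {}                 # tag_id -> timestamp last relayed

    def should_forward(self, tag_id, timestamp):
        last = self.last_forwarded.get(tag_id)
        if last is not None and timestamp - last < self.window_s:
            return False                         # duplicate within the window: drop it
        self.last_forwarded[tag_id] = timestamp
        return True

# example: repeated readings of the same tag are relayed only once per window
head = ClusterHeadFilter(window_s=5.0)
print([head.should_forward("tag-42", t) for t in (0.0, 1.2, 2.5, 9.0)])  # [True, False, False, True]
```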
The [(Al2O3)2]- Anion Cluster: Electron Localization-Delocalization Isomerism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierka, Marek; Dobler, Jens; Sauer, Joachim
2009-10-05
Three-dimensional bulk alumina and its two-dimensional thin films show great structural diversity, posing considerable challenges to their experimental structural characterization and computational modeling. Recently, structural diversity has also been demonstrated for zero-dimensional gas phase aluminum oxide clusters. Mass-selected clusters not only make systematic studies of the structural and electronic properties as a function of size possible, but lately have also emerged as powerful molecular models of complex surfaces and solid catalysts. In particular, the [(Al2O3)3-5]+ clusters were the first example of polynuclear main-group metal oxide clusters that are able to thermally activate CH4. Over the past decades gas phase aluminum oxide clusters have been extensively studied both experimentally and computationally, but definitive structural assignments were made for only a handful of them: the planar [Al3O3]- and [Al5O4]- cluster anions, and the [(Al2O3)1-4(AlO)]+ cluster cations. For stoichiometric clusters only the atomic structures of [(Al2O3)4]+/0 have been unambiguously resolved. Here we report on the structures of the [(Al2O3)2]-/0 clusters combining photoelectron spectroscopy (PES) and quantum chemical calculations employing a genetic algorithm as a global optimization technique. The [(Al2O3)2]- cluster anion shows energetically close-lying but structurally distinct cage and sheet-like isomers which differ by delocalization/localization of the extra electron. The experimental results are crucial for benchmarking the different computational methods applied with respect to a proper description of electron localization and the relative energies of the isomers, which will be of considerable value for future computational studies of aluminum oxide and related systems.
NASA Astrophysics Data System (ADS)
Chen, Siyue; Leung, Henry; Dondo, Maxwell
2014-05-01
As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.
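The joint feature-selection-and-clustering idea above can be illustrated with a short sketch. The Python fragment below is a simplified alternating scheme, not the authors' exact joint likelihood: it fits a Gaussian mixture by EM on the currently selected alert features and then re-scores feature saliency from a between/within-cluster variance ratio. Function names, thresholds, and the synthetic data are hypothetical.

```python
# Simplified sketch of alternating feature selection and EM clustering of
# NIDS alerts. This is NOT the authors' exact joint-likelihood formulation:
# feature saliency is approximated by a between/within-cluster variance
# ratio, and the EM step is delegated to scikit-learn's GaussianMixture.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_alerts(X, n_clusters=5, n_rounds=5, min_saliency=0.2):
    """X: (n_alerts, n_features) numeric alert feature matrix."""
    n_features = X.shape[1]
    salient = np.ones(n_features, dtype=bool)          # start with all features
    labels = np.zeros(X.shape[0], dtype=int)
    for _ in range(n_rounds):
        gmm = GaussianMixture(n_components=n_clusters, covariance_type="diag",
                              random_state=0)
        labels = gmm.fit_predict(X[:, salient])
        # Re-score each feature: ratio of between- to within-cluster variance.
        overall_mean = X.mean(axis=0)
        within = np.zeros(n_features)
        between = np.zeros(n_features)
        for k in range(n_clusters):
            members = X[labels == k]
            if len(members) == 0:
                continue
            between += len(members) * (members.mean(axis=0) - overall_mean) ** 2
            within += ((members - members.mean(axis=0)) ** 2).sum(axis=0)
        scores = between / (within + 1e-12)
        scores /= scores.max() + 1e-12                  # normalise to [0, 1]
        salient = scores >= min_saliency
        if not salient.any():                           # keep at least one feature
            salient[np.argmax(scores)] = True
    return labels, salient

# Example with synthetic "alerts": two salient features plus one noise feature.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0, 0], 1, (100, 3)),
               rng.normal([5, 5, 0], 1, (100, 3))])
labels, salient = cluster_alerts(X, n_clusters=2)
print("salient features:", salient)
```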
Detecting Genomic Clustering of Risk Variants from Sequence Data: Cases vs. Controls
Schaid, Daniel J.; Sinnwell, Jason P.; McDonnell, Shannon K.; Thibodeau, Stephen N.
2013-01-01
As the ability to measure dense genetic markers approaches the limit of the DNA sequence itself, taking advantage of possible clustering of genetic variants in, and around, a gene would benefit genetic association analyses, and likely provide biological insights. The greatest benefit might be realized when multiple rare variants cluster in a functional region. Several statistical tests have been developed, one of which is based on the popular Kulldorff scan statistic for spatial clustering of disease. We extended another popular spatial clustering method – Tango’s statistic – to genomic sequence data. An advantage of Tango’s method is that it is rapid to compute, and when a single test statistic is computed, its distribution is well approximated by a scaled chi-square distribution, making computation of p-values very rapid. We compared the Type-I error rates and power of several clustering statistics, as well as the omnibus sequence kernel association test (SKAT). Although our version of Tango’s statistic, which we call the “Kernel Distance” statistic, took approximately half as long to compute as the Kulldorff scan statistic, it had slightly less power than the scan statistic. Our results showed that the Ionita-Laza version of Kulldorff’s scan statistic had the greatest power over a range of clustering scenarios. PMID:23842950
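As a rough illustration of a Tango-style kernel distance statistic on sequence data (not the authors' implementation), the sketch below compares case and control carrier frequencies weighted by a Gaussian kernel over genomic position, with a permutation p-value standing in for their scaled chi-square approximation; all function names and the kernel scale are assumptions.

```python
# Illustrative sketch of a Tango-style "kernel distance" statistic for
# clustering of rare risk variants: compare case vs. control carrier
# frequencies at each variant position, weighted by a Gaussian kernel over
# genomic distance. A permutation p-value replaces the scaled chi-square
# approximation described in the abstract.
import numpy as np

def kernel_distance_stat(geno_cases, geno_ctrls, positions, tau=5_000.0):
    """geno_*: (n_subjects, n_variants) 0/1 carrier matrices;
    positions: variant base-pair coordinates; tau: kernel scale (bp)."""
    p_case = geno_cases.mean(axis=0)
    p_ctrl = geno_ctrls.mean(axis=0)
    diff = p_case - p_ctrl
    d = positions[:, None] - positions[None, :]
    K = np.exp(-(d ** 2) / (2.0 * tau ** 2))    # Gaussian kernel over position
    return diff @ K @ diff

def permutation_pvalue(geno_cases, geno_ctrls, positions, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    obs = kernel_distance_stat(geno_cases, geno_ctrls, positions)
    pooled = np.vstack([geno_cases, geno_ctrls])
    n_case = geno_cases.shape[0]
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])
        stat = kernel_distance_stat(pooled[idx[:n_case]], pooled[idx[n_case:]],
                                    positions)
        hits += stat >= obs
    return (hits + 1) / (n_perm + 1)
```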
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments.
Thomas, Brandon R; Chylek, Lily A; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H A; Hlavacek, William S; Posner, Richard G
2016-03-01
Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Lynch, Amanda H.; Abramson, David; Görgen, Klaus; Beringer, Jason; Uotila, Petteri
2007-10-01
Fires in the Australian savanna have been hypothesized to affect monsoon evolution, but the hypothesis is controversial and the effects have not been quantified. A distributed computing approach allows the development of a challenging experimental design that permits simultaneous variation of all fire attributes. The climate model simulations are distributed across multiple independent computer clusters in six countries, an approach that has potential for a range of other large simulation applications in the earth sciences. The experiment clarifies that savanna burning can shape the monsoon through two mechanisms. Boundary-layer circulation and large-scale convergence are intensified monotonically with increasing fire intensity and area burned. However, thresholds of fire timing and area are evident in the consequent influence on monsoon rainfall. In the optimal band of late, high intensity fires with a somewhat limited extent, it is possible for the wet season to be significantly enhanced.
Perspective: Quantum mechanical methods in biochemistry and biophysics.
Cui, Qiang
2016-10-14
In this perspective article, I discuss several research topics relevant to quantum mechanical (QM) methods in biophysical and biochemical applications. Due to the immense complexity of biological problems, the key is to develop methods that are able to strike the proper balance of computational efficiency and accuracy for the problem of interest. Therefore, in addition to the development of novel ab initio and density functional theory based QM methods for the study of reactive events that involve complex motifs such as transition metal clusters in metalloenzymes, it is equally important to develop inexpensive QM methods and advanced classical or quantal force fields to describe different physicochemical properties of biomolecules and their behaviors in complex environments. Maintaining a solid connection of these more approximate methods with rigorous QM methods is essential to their transferability and robustness. Comparison to diverse experimental observables helps validate computational models and mechanistic hypotheses as well as driving further development of computational methodologies.
High order parallel numerical schemes for solving incompressible flows
NASA Technical Reports Server (NTRS)
Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.
1992-01-01
The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split. The primary parallel split was studied using a hypercube like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.
A Cloud-Based Simulation Architecture for Pandemic Influenza Simulation
Eriksson, Henrik; Raciti, Massimiliano; Basile, Maurizio; Cunsolo, Alessandro; Fröberg, Anders; Leifler, Ola; Ekberg, Joakim; Timpka, Toomas
2011-01-01
High-fidelity simulations of pandemic outbreaks are resource consuming. Cluster-based solutions have been suggested for executing such complex computations. We present a cloud-based simulation architecture that utilizes computing resources both locally available and dynamically rented online. The approach uses the Condor framework for job distribution and management of the Amazon Elastic Computing Cloud (EC2) as well as local resources. The architecture has a web-based user interface that allows users to monitor and control simulation execution. In a benchmark test, the best cost-adjusted performance was recorded for the EC2 H-CPU Medium instance, while a field trial showed that the job configuration had significant influence on the execution time and that the network capacity of the master node could become a bottleneck. We conclude that it is possible to develop a scalable simulation environment that uses cloud-based solutions, while providing an easy-to-use graphical user interface. PMID:22195089
Method and system for data clustering for very large databases
NASA Technical Reports Server (NTRS)
Livny, Miron (Inventor); Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor)
1998-01-01
Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
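The clustering feature described in the patent, the triple of point count, linear sum, and square sum, can be sketched in a few lines. The class below is a minimal illustration of merging clustering features and deriving a centroid and radius; the full feature-tree insertion and rebuilding logic is omitted.

```python
# Minimal sketch of the clustering feature (CF) described above: the triple
# (N, LS, SS). Merging two CFs and computing a centroid or radius need only
# these sums, which is what keeps the memory footprint small. The tree
# structure around the CFs is not reproduced here.
import numpy as np

class ClusteringFeature:
    def __init__(self, point):
        point = np.asarray(point, dtype=float)
        self.n = 1                      # number of data points in the cluster
        self.ls = point.copy()          # linear sum of the data points
        self.ss = point @ point         # square sum of the data points

    def merge(self, other):
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        # RMS distance of member points from the centroid.
        c = self.ls / self.n
        return np.sqrt(max(self.ss / self.n - c @ c, 0.0))

# Absorbing points one at a time, as a streaming clusterer would:
cf = ClusteringFeature([1.0, 2.0])
cf.merge(ClusteringFeature([1.5, 1.8]))
cf.merge(ClusteringFeature([0.9, 2.2]))
print(cf.n, cf.centroid(), cf.radius())
```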
Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments
Yim, Won Cheol; Cushman, John C.
2017-07-22
Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
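The query-distribution idea can be sketched as follows. This is not the DCBLAST code itself (which generates scheduler job scripts across nodes) but a minimal local analogue that splits a FASTA file into chunks and runs one BLAST+ process per chunk; the standard blastp command-line options are assumed to be available, and the database name is a placeholder.

```python
# Sketch of the query-distribution idea behind DCBLAST: split the input FASTA
# into chunks and run one BLAST process per chunk, then concatenate the
# tabular results. The real tool generates scheduler (e.g. SLURM) job scripts
# across nodes; this local multiprocessing version only illustrates the
# splitting strategy. "refseq_protein" is an example database name.
import subprocess
from multiprocessing import Pool
from pathlib import Path

def split_fasta(fasta, n_chunks, workdir):
    records, current = [], []
    for line in Path(fasta).read_text().splitlines():
        if line.startswith(">") and current:
            records.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        records.append("\n".join(current))
    chunks = []
    for i in range(n_chunks):
        part = records[i::n_chunks]              # round-robin distribution
        if not part:
            continue
        path = Path(workdir) / f"chunk_{i}.fasta"
        path.write_text("\n".join(part) + "\n")
        chunks.append(path)
    return chunks

def run_blast(chunk):
    out = chunk.with_suffix(".tsv")
    subprocess.run(["blastp", "-query", str(chunk), "-db", "refseq_protein",
                    "-outfmt", "6", "-out", str(out)], check=True)
    return out

if __name__ == "__main__":
    chunks = split_fasta("queries.fasta", n_chunks=8, workdir=".")
    with Pool(8) as pool:
        outputs = pool.map(run_blast, chunks)
    Path("all_hits.tsv").write_text("".join(p.read_text() for p in outputs))
```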
Cognitive Model Exploration and Optimization: A New Challenge for Computational Science
2010-01-01
Introduction: Research in cognitive science often involves the generation and analysis of computational cognitive models to explain various … (HPC) clusters and volunteer computing for large-scale computational resources. The majority of applications on the Department of Defense HPC … clusters focus on solving partial differential equations (Post, 2009). These tend to be lean, fast models with little noise. While we lack specific …
Hartree-Fock calculation of the differential photoionization cross sections of small Li clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galitskiy, S. A.; Artemyev, A. N.; Jänkälä, K.
2015-01-21
Cross sections and angular distribution parameters for the single-photon ionization of all electron orbitals of Li2−8 are systematically computed in a broad interval of the photoelectron kinetic energies for the energetically most stable geometry of each cluster. Calculations of the partial photoelectron continuum waves in clusters are carried out by the single center method within the Hartree-Fock approximation. We study photoionization cross sections per one electron and analyze in some detail general trends in the photoionization of inner and outer shells with respect to the size and geometry of a cluster. The present differential cross sections computed for Li2 are in good agreement with the available theoretical data, whereas those computed for Li3−8 clusters can be considered as theoretical predictions.
NASA Astrophysics Data System (ADS)
Lund, Matthew Lawrence
The space radiation environment is a significant challenge to future manned and unmanned space travel. Future missions will rely more on accurate simulations of radiation transport in space through spacecraft to predict astronaut dose and energy deposition within spacecraft electronics. The International Space Station provides long-term measurements of the radiation environment in Low Earth Orbit (LEO); however, only the Apollo missions provided dosimetry data beyond LEO. Thus dosimetry analysis for deep space missions is poorly supported with currently available data, and there is a need to develop dosimetry-predicting models for extended deep space missions. GEANT4, a Monte Carlo method, provides a powerful toolkit in C++ for simulation of radiation transport in arbitrary media, including spacecraft during space travel. The newest version of GEANT4 supports multithreading and MPI, resulting in faster distributed processing of simulations in high-performance computing clusters. This thesis introduces a new application based on GEANT4 that greatly reduces computational time, using the Kingspeak and Ember computational clusters at the Center for High Performance Computing (CHPC) to simulate radiation transport through full spacecraft geometry and reducing simulation time to hours instead of weeks without post-simulation processing. Additionally, this thesis introduces a new set of detectors besides the historically used International Commission on Radiation Units and Measurements (ICRU) spheres for calculating dose distribution, including a Thermoluminescent Detector (TLD), Tissue Equivalent Proportional Counter (TEPC), and human phantom combined with a series of new primitive scorers in GEANT4 to calculate dose equivalence based on the International Commission on Radiological Protection (ICRP) standards. The developed models in this thesis predict dose depositions in the International Space Station and during the Apollo missions, showing good agreement with experimental measurements. From these models, the greatest contributor to radiation dose for the Apollo missions was Galactic Cosmic Rays, owing to the short time spent within the radiation belts. The Apollo 14 dose measurements were an order of magnitude higher than those of the other Apollo missions. The GEANT4 model of the Apollo Command Module shows consistent doses due to Galactic Cosmic Rays and Radiation Belts for all missions, with a small variation in dose distribution across the capsule. The model also predicts well the dose depositions and equivalent dose values in various human organs for the International Space Station and Apollo Command Module.
Operating Dedicated Data Centers - Is It Cost-Effective?
NASA Astrophysics Data System (ADS)
Ernst, M.; Hogue, R.; Hollowell, C.; Strecker-Kellog, W.; Wong, A.; Zaytsev, A.
2014-06-01
The advent of cloud computing centres such as Amazon's EC2 and Google's Computing Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility) compute cluster at Brookhaven National Lab and compares them with the cost of cloud computing resources under various usage scenarios. An extrapolation of likely future cost effectiveness of dedicated computing resources is also presented.
A Real-Time Executive for Multiple-Computer Clusters.
1984-12-01
in a real-time environment is tantamount to speed and efficiency. By effectively co-locating real-time sensors and related processing modules, real… of which there are two kinds: multicast group address – virtually any number of node groups can be assigned a group address so they are all able…
pcircle - A Suite of Scalable Parallel File System Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
WANG, FEIYI
2015-10-01
Most of the software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of the benefits of a large-scale parallel file system. The "pcircle" software builds on top of ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.
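A minimal local analogue of the parallel checksumming idea is sketched below; it uses Python multiprocessing rather than pcircle's MPI work-stealing design, so it only illustrates distributing per-file checksums across workers.

```python
# Minimal local analogue of parallel data checksumming. pcircle itself uses
# MPI with work stealing across cluster nodes; this sketch only distributes
# per-file checksums across local worker processes to illustrate the idea.
import hashlib
from multiprocessing import Pool
from pathlib import Path

def file_checksum(path, block_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            h.update(block)
    return str(path), h.hexdigest()

def tree_checksums(root, workers=8):
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    with Pool(workers) as pool:
        return dict(pool.map(file_checksum, files))

if __name__ == "__main__":
    for path, digest in sorted(tree_checksums(".").items()):
        print(digest, path)
```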
ERIC Educational Resources Information Center
Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.
2006-01-01
Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…
An Experiment in Computer Ethics: Clustering Composition with Computer Applications.
ERIC Educational Resources Information Center
Nydahl, Joel
Babson College (a school of business and management in Wellesley, Massachusetts) attempted to make a group of first-year students computer literate through "clustering." The same group of students were enrolled in two courses: a special section of "Composition" which stressed word processing as a composition aid and a regular…
Web Program for Development of GUIs for Cluster Computers
NASA Technical Reports Server (NTRS)
Czikmantory, Akos; Cwik, Thomas; Klimeck, Gerhard; Hua, Hook; Oyafuso, Fabiano; Vinyard, Edward
2003-01-01
WIGLAF (a Web Interface Generator and Legacy Application Facade) is a computer program that provides a Web-based, distributed, graphical-user-interface (GUI) framework that can be adapted to any of a broad range of application programs, written in any programming language, that are executed remotely on any cluster computer system. WIGLAF enables the rapid development of a GUI for controlling and monitoring a specific application program running on the cluster and for transferring data to and from the application program. The only prerequisite for the execution of WIGLAF is a Web-browser program on a user's personal computer connected with the cluster via the Internet. WIGLAF has a client/server architecture: The server component is executed on the cluster system, where it controls the application program and serves data to the client component. The client component is an applet that runs in the Web browser. WIGLAF utilizes the Extensible Markup Language to hold all data associated with the application software, Java to enable platform-independent execution on the cluster system and the display of a GUI generator through the browser, and the Java Remote Method Invocation software package to provide simple, effective client/server networking.
Multi-hop routing mechanism for reliable sensor computing.
Chen, Jiann-Liang; Ma, Yi-Wei; Lai, Chia-Ping; Hu, Chia-Cheng; Huang, Yueh-Min
2009-01-01
Current research on routing in wireless sensor computing concentrates on increasing service lifetime, enabling scalability for a large number of sensors and supporting fault tolerance for battery exhaustion and broken nodes. A sensor node is naturally exposed to various sources of unreliable communication channels and node failures. Sensor nodes have many failure modes, and each failure degrades the network performance. This work develops a novel mechanism, called Reliable Routing Mechanism (RRM), based on a hybrid cluster-based routing protocol to specify the best reliable routing path for sensor computing. Table-driven intra-cluster routing and on-demand inter-cluster routing are combined by changing the relationship between clusters for sensor computing. Applying a reliable routing mechanism in sensor computing can improve routing reliability, maintain low packet loss, minimize management overhead and reduce energy consumption. Simulation results indicate that the reliability of the proposed RRM mechanism is around 25% higher than that of the Dynamic Source Routing (DSR) and ad hoc On-demand Distance Vector routing (AODV) mechanisms.
The enhancement of rapidly quenched galaxies in distant clusters at 0.5 < z < 1.0
NASA Astrophysics Data System (ADS)
Socolovsky, Miguel; Almaini, Omar; Hatch, Nina A.; Wild, Vivienne; Maltby, David T.; Hartley, William G.; Simpson, Chris
2018-05-01
We investigate the relationship between environment and galaxy evolution in the redshift range 0.5 < z < 1.0. Galaxy overdensities are selected using a friends-of-friends algorithm, applied to deep photometric data in the Ultra-Deep Survey field. A study of the resulting stellar mass functions reveals clear differences between cluster and field environments, with a strong excess of low-mass rapidly quenched galaxies in cluster environments compared to the field. Cluster environments also show a corresponding deficit of young, low-mass star-forming galaxies, which show a sharp radial decline towards cluster centres. By comparing mass functions and radial distributions, we conclude that young star-forming galaxies are rapidly quenched as they enter overdense environments, becoming post-starburst galaxies before joining the red sequence. Our results also point to the existence of two environmental quenching pathways in galaxy clusters that operate on different time-scales. Fast quenching acts on galaxies with high specific star formation rates, operating on time-scales shorter than the cluster dynamical time (<1 Gyr). In contrast, slow quenching affects galaxies with moderate specific star formation rates, regardless of their stellar mass, and acts on longer time-scales (≳ 1 Gyr). Of the cluster galaxies in the stellar mass range 9.0 < log (M/M⊙) < 10.5 quenched during this epoch, we find that 73 per cent were transformed through fast quenching, while the remaining 27 per cent followed the slow quenching route.
Reducing Earth Topography Resolution for SMAP Mission Ground Tracks Using K-Means Clustering
NASA Technical Reports Server (NTRS)
Rizvi, Farheen
2013-01-01
The K-means clustering algorithm is used to reduce Earth topography resolution for the SMAP mission ground tracks. As SMAP propagates in orbit, knowledge of the radar antenna footprints on Earth is required for the antenna misalignment calibration. Each antenna footprint contains a latitude and longitude location pair on the Earth surface. There are 400 pairs in one data set for the calibration model. It is computationally expensive to calculate corresponding Earth elevation for these data pairs. Thus, the antenna footprint resolution is reduced. Similar topographical data pairs are grouped together with the K-means clustering algorithm. The resolution is reduced to the mean of each topographical cluster called the cluster centroid. The corresponding Earth elevation for each cluster centroid is assigned to the entire group. Results show that 400 data points are reduced to 60 while still maintaining algorithm performance and computational efficiency. In this work, sensitivity analysis is also performed to show a trade-off between algorithm performance versus computational efficiency as the number of cluster centroids and algorithm iterations are increased.
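A sketch of this reduction step, assuming scikit-learn's KMeans and a stand-in elevation lookup in place of the real terrain model:

```python
# Sketch of the resolution-reduction step: cluster the 400 (lat, lon) antenna
# footprint points into 60 groups with k-means and evaluate the (expensive)
# elevation lookup only at the cluster centroids. The lookup function here is
# a stand-in for the real terrain model, and the footprints are synthetic.
import numpy as np
from sklearn.cluster import KMeans

def elevation_lookup(lat, lon):
    # Placeholder for the expensive Earth-elevation query.
    return 100.0 * np.sin(np.radians(lat)) * np.cos(np.radians(lon))

rng = np.random.default_rng(1)
footprints = np.column_stack([rng.uniform(-60, 60, 400),     # latitudes
                              rng.uniform(-180, 180, 400)])  # longitudes

km = KMeans(n_clusters=60, n_init=10, random_state=0).fit(footprints)
centroid_elev = np.array([elevation_lookup(lat, lon)
                          for lat, lon in km.cluster_centers_])

# Each footprint inherits the elevation of its cluster centroid.
footprint_elev = centroid_elev[km.labels_]
print(footprint_elev.shape)   # (400,) obtained from only 60 elevation queries
```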
HORN-6 special-purpose clustered computing system for electroholography.
Ichihashi, Yasuyuki; Nakayama, Hirotaka; Ito, Tomoyoshi; Masuda, Nobuyuki; Shimobaba, Tomoyoshi; Shiraki, Atsushi; Sugie, Takashige
2009-08-03
We developed the HORN-6 special-purpose computer for holography. We designed and constructed the HORN-6 board to handle an object image composed of one million points and constructed a cluster system composed of 16 HORN-6 boards. Using this HORN-6 cluster system, we succeeded in creating a computer-generated hologram of a three-dimensional image composed of 1,000,000 points at a rate of 1 frame per second, and a computer-generated hologram of an image composed of 100,000 points at a rate of 10 frames per second, which is near video rate, when the size of a computer-generated hologram is 1,920 x 1,080. The calculation speed is approximately 4,600 times faster than that of a personal computer with an Intel 3.4-GHz Pentium 4 CPU.
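The underlying computation, summing a fringe contribution from every object point at every hologram pixel, can be sketched in a few lines. The fragment below uses a small grid and a Fresnel-approximation phase purely as an illustration of the arithmetic that HORN-6 implements in hardware; the wavelength, pixel pitch, and point cloud are illustrative values only.

```python
# Sketch of the basic point-source superposition behind a computer-generated
# hologram: each object point contributes a Fresnel-zone-like fringe at every
# hologram pixel, the fringes are summed, and the result is binarised. Grid
# and object sizes are kept tiny here; HORN-6 evaluates this kind of sum in
# hardware for millions of points at 1,920 x 1,080.
import numpy as np

wavelength = 532e-9          # metres (illustrative green laser)
pitch = 8e-6                 # hologram pixel pitch in metres (illustrative)
nx, ny = 256, 256            # small demo grid, not 1,920 x 1,080

# A few object points: (x, y) in pixel units, z = distance from hologram plane.
rng = np.random.default_rng(0)
points = np.column_stack([rng.uniform(0, nx, 50),
                          rng.uniform(0, ny, 50),
                          rng.uniform(0.05, 0.15, 50)])   # z in metres

xa, ya = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
field = np.zeros((nx, ny))
for xj, yj, zj in points:
    # Fresnel approximation of the propagation phase from point j to pixel (xa, ya).
    r2 = ((xa - xj) ** 2 + (ya - yj) ** 2) * pitch ** 2
    field += np.cos(np.pi * r2 / (wavelength * zj))

hologram = (field > 0).astype(np.uint8)   # simple binarisation for display devices
print(hologram.shape, hologram.mean())
```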
Goh, Yong-Shian; Lee, Alice; Chan, Sally Wai-Chi; Chan, Moon Fai
2015-08-01
This study aimed to determine whether definable profiles existed in a cohort of nursing staff with regard to demographic characteristics, job satisfaction, acculturation, work environment, stress, cultural values and coping abilities. A survey was conducted in one hospital in Singapore from June to July 2012, and 814 full-time staff nurses completed a self-report questionnaire (89% response rate). Demographic characteristics, job satisfaction, acculturation, work environment, perceived stress, cultural values, ways of coping and intention to leave current workplace were assessed as outcomes. The two-step cluster analysis revealed three clusters. Nurses in cluster 1 (n = 222) had lower acculturation scores than nurses in cluster 3. Cluster 2 (n = 362) was a group of younger nurses who reported higher intention to leave (22.4%), stress level and job dissatisfaction than the other two clusters. Nurses in cluster 3 (n = 230) were mostly Singaporean and reported the lowest intention to leave (13.0%). Resources should be allocated to specifically address the needs of younger nurses and hopefully retain them in the profession. Management should focus their retention strategies on junior nurses and provide a work environment that helps to strengthen their intention to remain in nursing by increasing their job satisfaction. © 2014 Wiley Publishing Asia Pty Ltd.
On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.
Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei
2017-01-01
Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering place high demands on the computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, the General Purpose Graphics Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, such as evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity, while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experimental results verified the theoretical conclusions.
Galaxy Merger Candidates in High-redshift Cluster Environments
NASA Astrophysics Data System (ADS)
Delahaye, A. G.; Webb, T. M. A.; Nantais, J.; DeGroot, A.; Wilson, G.; Muzzin, A.; Yee, H. K. C.; Foltz, R.; Noble, A. G.; Demarco, R.; Tudorica, A.; Cooper, M. C.; Lidman, C.; Perlmutter, S.; Hayden, B.; Boone, K.; Surace, J.
2017-07-01
We compile a sample of spectroscopically and photometrically selected cluster galaxies from four high-redshift galaxy clusters (1.59 < z < 1.71) from the Spitzer Adaptation of the Red-Sequence Cluster Survey (SpARCS), and a comparison field sample selected from the UKIDSS Deep Survey. Using near-infrared imaging from the Hubble Space Telescope, we classify potential mergers involving massive (M* ≥ 3 × 10^10 M⊙) cluster members by eye, based on morphological properties such as tidal distortions, double nuclei, and projected near neighbors within 20 kpc. With a catalog of 23 spectroscopic and 32 photometric massive cluster members across the four clusters and 65 spectroscopic and 26 photometric comparable field galaxies, we find that after taking into account contamination from interlopers, 11.0^{+7.0}_{-5.6}% of the cluster members are involved in potential mergers, compared to 24.7^{+5.3}_{-4.6}% of the field galaxies. We see no evidence of merger enhancement in the central cluster environment with respect to the field, suggesting that galaxy-galaxy merging is not a stronger source of galaxy evolution in cluster environments compared to the field at these redshifts.
Tremblay, Marlène; Hess, Justin P; Christenson, Brock M; McIntyre, Kolby K; Smink, Ben; van der Kamp, Arjen J; de Jong, Lisanne G; Döpfer, Dörte
2016-07-01
Automatic milking systems (AMS) are implemented in a variety of situations and environments. Consequently, there is a need to characterize individual farming practices and regional challenges to streamline management advice and objectives for producers. Benchmarking is often used in the dairy industry to compare farms by computing percentile ranks of the production values of groups of farms. Grouping for conventional benchmarking is commonly limited to the use of a few factors such as farms' geographic region or breed of cattle. We hypothesized that herds' production data and management information could be clustered in a meaningful way using cluster analysis and that this clustering approach would yield better peer groups of farms than benchmarking methods based on criteria such as country, region, breed, or breed and region. By applying mixed latent-class model-based cluster analysis to 529 North American AMS dairy farms with respect to 18 significant risk factors, 6 clusters were identified. Each cluster (i.e., peer group) represented unique management styles, challenges, and production patterns. When compared with peer groups based on criteria similar to the conventional benchmarking standards, the 6 clusters better predicted milk produced (kilograms) per robot per day. Each cluster represented a unique management and production pattern that requires specialized advice. For example, cluster 1 farms were those that recently installed AMS robots, whereas cluster 3 farms (the most northern farms) fed high amounts of concentrates through the robot to compensate for low-energy feed in the bunk. In addition to general recommendations for farms within a cluster, individual farms can generate their own specific goals by comparing themselves to farms within their cluster. This is very comparable to benchmarking but adds the specific characteristics of the peer group, resulting in better farm management advice. The improvement that cluster analysis allows for is characterized by the multivariable approach and the fact that comparisons between production units can be accomplished within a cluster and between clusters as a choice. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Quantum Gibbs Samplers: The Commuting Case
NASA Astrophysics Data System (ADS)
Kastoryano, Michael J.; Brandão, Fernando G. S. L.
2016-06-01
We analyze the problem of preparing quantum Gibbs states of lattice spin Hamiltonians with local and commuting terms on a quantum computer and in nature. Our central result is an equivalence between the behavior of correlations in the Gibbs state and the mixing time of the semigroup which drives the system to thermal equilibrium (the Gibbs sampler). We introduce a framework for analyzing the correlation and mixing properties of quantum Gibbs states and quantum Gibbs samplers, which is rooted in the theory of non-commutative L_p spaces. We consider two distinct classes of Gibbs samplers, one of them being the well-studied Davies generator modelling the dynamics of a system due to weak coupling with a large Markovian environment. We show that their spectral gap is independent of system size if, and only if, a certain strong form of clustering of correlations holds in the Gibbs state. Therefore every Gibbs state of a commuting Hamiltonian that satisfies clustering of correlations in this strong sense can be prepared efficiently on a quantum computer. As concrete applications of our formalism, we show that for every one-dimensional lattice system, or for systems in lattices of any dimension at temperatures above a certain threshold, the Gibbs samplers of commuting Hamiltonians are always gapped, giving an efficient way of preparing the associated Gibbs states on a quantum computer.
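For reference, the standard definitions behind these statements (notation chosen here, not quoted from the paper) are the Gibbs state and an exponential clustering-of-correlations bound; the condition used in the paper is a strengthened form of the latter.

```latex
% Standard definitions underlying the statements above (notation mine, not
% taken verbatim from the paper).
% Gibbs state of a local, commuting Hamiltonian H at inverse temperature \beta:
\[
  \rho_\beta \;=\; \frac{e^{-\beta H}}{\operatorname{Tr}\, e^{-\beta H}} .
\]
% Exponential clustering of correlations: for observables A, B supported on
% regions separated by distance d(A,B),
\[
  \bigl|\operatorname{Tr}\!\left(\rho_\beta\, A\otimes B\right)
        - \operatorname{Tr}\!\left(\rho_\beta A\right)\operatorname{Tr}\!\left(\rho_\beta B\right)\bigr|
  \;\le\; c\,\|A\|\,\|B\|\, e^{-d(A,B)/\xi} .
\]
```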
ATLAS user analysis on private cloud resources at GoeGrid
NASA Astrophysics Data System (ADS)
Glaser, F.; Nadal Serrano, J.; Grabowski, J.; Quadt, A.
2015-12-01
User analysis job demands can exceed available computing resources, especially before major conferences. ATLAS physics results can potentially be slowed down due to the lack of resources. For these reasons, cloud research and development activities are now included in the skeleton of the ATLAS computing model, which has been extended by using resources from commercial and private cloud providers to satisfy the demands. However, most of these activities are focused on Monte-Carlo production jobs, extending the resources at Tier-2. To evaluate the suitability of the cloud-computing model for user analysis jobs, we developed a framework to launch an ATLAS user analysis cluster in a cloud infrastructure on demand and evaluated two solutions. The first solution is entirely integrated in the Grid infrastructure by using the same mechanism, which is already in use at Tier-2: A designated Panda-Queue is monitored and additional worker nodes are launched in a cloud environment and assigned to a corresponding HTCondor queue according to the demand. Thereby, the use of cloud resources is completely transparent to the user. However, using this approach, submitted user analysis jobs can still suffer from a certain delay introduced by waiting time in the queue and the deployed infrastructure lacks customizability. Therefore, our second solution offers the possibility to easily deploy a totally private, customizable analysis cluster on private cloud resources belonging to the university.
Globular Clusters: Absolute Proper Motions and Galactic Orbits
NASA Astrophysics Data System (ADS)
Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.
2018-04-01
We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1–2 mas yr⁻¹. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys with up to ten positions spanning an epoch difference of up to about 65 years, and reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters with a typical formal error of about 0.4 mas yr⁻¹ are computed by averaging the proper motions of selected members. The inferred absolute proper motions of clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of Galactic potential models based on a Miyamoto and Nagai disk, Hernquist spheroid, and modified isothermal dark-matter halo (axisymmetric model without a bar) and the same model plus a rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors in the computed transverse velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find the bar to affect substantially the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.
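The orbit-computation step can be illustrated with a hedged sketch: integrating a cluster's motion in a single Miyamoto-Nagai disc potential with scipy. The masses, scale lengths, and initial conditions below are illustrative, and the study's full potential additionally includes the spheroid, halo, and rotating bar components.

```python
# Hedged sketch of the orbit-integration step: integrate a cluster's motion in
# a simple axisymmetric Miyamoto-Nagai disc potential with scipy. The actual
# study also adds a Hernquist spheroid, a dark-matter halo and a rotating bar;
# all parameter values here are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

G = 4.30091e-6                 # kpc (km/s)^2 / Msun
M, a, b = 1.0e11, 3.0, 0.28    # disc mass (Msun) and scale lengths (kpc)

def acceleration(x, y, z):
    # Miyamoto-Nagai potential: Phi = -G M / sqrt(R^2 + (a + sqrt(z^2 + b^2))^2)
    zb = np.sqrt(z * z + b * b)
    denom = (x * x + y * y + (a + zb) ** 2) ** 1.5
    ax = -G * M * x / denom
    ay = -G * M * y / denom
    az = -G * M * z * (a + zb) / (zb * denom)
    return ax, ay, az

def rhs(t, w):
    x, y, z, vx, vy, vz = w
    ax, ay, az = acceleration(x, y, z)
    return [vx, vy, vz, ax, ay, az]

# Initial conditions (kpc, km/s) as would be derived from proper motions,
# radial velocity and distance; purely illustrative numbers here.
w0 = [8.0, 0.0, 2.0, 30.0, 180.0, 50.0]
# With kpc and km/s, one time unit is ~0.978 Gyr; integrate for ~5 Gyr.
sol = solve_ivp(rhs, (0.0, 5.0 / 0.978), w0, rtol=1e-9, max_step=0.01)
R = np.hypot(sol.y[0], sol.y[1])
print("peri/apo-centre (kpc):", R.min(), R.max())
```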
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heidelberg, S T; Fitzgerald, K J; Richmond, G H
2006-01-24
There has been substantial development of the Lustre parallel filesystem prior to the configuration described below for this milestone. The initial Lustre filesystems that were deployed were directly connected to the cluster interconnect, i.e. Quadrics Elan3. That is, the clients (OSSes) and Meta-data Servers (MDS) were all directly connected to the cluster's internal high speed interconnect. This configuration serves a single cluster very well, but does not provide sharing of the filesystem among clusters. LLNL funded the development of high-efficiency ''portals router'' code by CFS (the company that develops Lustre) to enable us to move the Lustre servers to a GigE-connected network configuration, thus making it possible to connect to the servers from several clusters. With portals routing available, here is what changes: (1) another storage-only cluster is deployed to front the Lustre storage devices (these become the Lustre OSSes and MDS), (2) this ''Lustre cluster'' is attached via GigE connections to a large GigE switch/router cloud, (3) a small number of compute-cluster nodes are designated as ''gateway'' or ''portal router'' nodes, and (4) the portals router nodes are GigE-connected to the switch/router cloud. The Lustre configuration is then changed to reflect the new network paths. A typical example of this is a compute cluster and a related visualization cluster: the compute cluster produces the data (writes it to the Lustre filesystem), and the visualization cluster consumes some of the data (reads it from the Lustre filesystem). This process can be expanded by aggregating several collections of Lustre backend storage resources into one or more ''centralized'' Lustre filesystems, and then arranging to have several ''client'' clusters mount these centralized filesystems. The ''client clusters'' can be any combination of compute, visualization, archiving, or other types of cluster. This milestone demonstrates the operation and performance of a scaled-down version of such a large, centralized, shared Lustre filesystem concept.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haas, Nicholas Q; Gillen, Robert E; Karnowski, Thomas P
MathWorks' MATLAB is widely used in academia and industry for prototyping, data analysis, data processing, etc. Many users compile their programs using the MATLAB Compiler to run on workstations/computing clusters via the free MATLAB Compiler Runtime (MCR). The MCR facilitates the execution of code calling Application Programming Interface (API) functions from both base MATLAB and MATLAB toolboxes. In a Linux environment, a sizable number of third-party runtime dependencies (i.e. shared libraries) are necessary. Unfortunately, to the MATLAB community's knowledge, these dependencies are not documented, leaving system administrators and/or end-users to find/install the necessary libraries either from runtime errors resulting from them missing or by inspecting the header information of Executable and Linkable Format (ELF) libraries of the MCR to determine which ones are missing from the system. To address various shortcomings, Docker images based on Community Enterprise Operating System (CentOS) 7, a derivative of Red Hat Enterprise Linux (RHEL) 7, containing recent (2015-2017) MCR releases and their dependencies were created. These images, along with a provided sample Docker Compose YAML script, can be used to create a simulated computing cluster where binaries created with the MATLAB Compiler can be executed using a sample Slurm Workload Manager script.
Diverse Power Iteration Embeddings and Its Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang H.; Yoo S.; Yu, D.
2014-12-14
Spectral embedding is one of the most effective dimension reduction algorithms in data mining. However, its computational complexity has to be mitigated in order to apply it to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings which are more efficient, but meanwhile far less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retains the efficiency of power iteration methods but also produces a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g. clustering, anomaly detection and feature selection) and evaluating their performance improvements. The experimental results show our proposed DPIE is more effective than popular spectral approximation methods, and obtains quality similar to classic spectral embeddings derived from eigen-decompositions. Moreover, it is extremely fast on big data applications. For example, in terms of clustering results, DPIE achieves as much as 95% of the quality of classic spectral clustering on complex datasets while being 4000+ times faster in a limited-memory environment.
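The basic power-iteration embedding idea (though not the authors' exact DPIE construction, which enforces diversity among the vectors) can be sketched as follows: iterate the row-normalised affinity matrix on a random vector, stop early, and use the iterates as embedding coordinates for k-means. All parameter values below are assumptions.

```python
# Sketch of a power-iteration embedding: iterate the row-normalised affinity
# matrix on a random vector and use the early-stopped iterate as a 1-D
# embedding; repeating from several random starts yields multiple embedding
# vectors. This illustrates the general idea only and is not the authors'
# exact DPIE construction, which explicitly enforces diversity.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def power_iteration_embedding(X, n_vectors=3, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    A = rbf_kernel(X, gamma=0.5)
    W = A / A.sum(axis=1, keepdims=True)        # row-normalised affinity
    vectors = []
    for _ in range(n_vectors):
        v = rng.random(X.shape[0])
        v /= np.abs(v).sum()
        for _ in range(n_iter):                 # early stopping keeps cluster structure
            v = W @ v
            v /= np.abs(v).sum()                # L1 normalisation each step
        vectors.append(v)
    return np.column_stack(vectors)

# Toy data: two Gaussian blobs, clustered in the embedded space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (100, 5)), rng.normal(2, 0.3, (100, 5))])
E = power_iteration_embedding(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(E)
print(np.bincount(labels))
```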
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceder, Gerbrand
Novel materials are often the enabler for new energy technologies. In ab-initio computational materials science, methods are developed to predict the behavior of materials starting from the laws of physics, so that properties can be predicted before compounds have to be synthesized and tested. As such, a virtual materials laboratory can be constructed, saving time and money. The objectives of this program were to develop first-principles theory to predict the structure and thermodynamic stability of materials. Since its inception the program focused on the development of the cluster expansion to deal with the increased complexity of complex oxides. This research led to the incorporation of vibrational degrees of freedom in ab-initio thermodynamics, developed methods for multi-component cluster expansions, included the explicit configurational degrees of freedom of localized electrons, developed the formalism for stability in aqueous environments, and culminated in the first ever approach to produce exact ground state predictions of the cluster expansion. Many of these methods have been disseminated to the larger theory community through the Materials Project, pymatgen software, or individual codes. We summarize three of the main accomplishments.
de Lima Batista, Ana P.; Zahariev, Federico; Slowing, Igor I.; ...
2015-12-15
The aldol reaction catalyzed by an amine-substituted mesoporous silica nanoparticle (amine-MSN) surface was investigated using a large molecular cluster model (Si392O958C6NH361) combined with the surface integrated molecular orbital/molecular mechanics (SIMOMM) and fragment molecular orbital (FMO) methods. Three distinct pathways for the carbinolamine formation, the first step of the amine-catalyzed aldol reaction, are proposed and investigated in order to elucidate the role of the silanol environment on the catalytic capability of the amine-MSN material. Here the computational study reveals that the most likely mechanism involves the silanol groups actively participating in the reaction, forming and breaking covalent bonds in the carbinolamine step. Furthermore, the active participation of MSN silanol groups in the reaction mechanism leads to a significant reduction in the overall energy barrier for the carbinolamine formation. In addition, a comparison between the findings using a minimal cluster model and the Si392O958C6NH361 cluster suggests that the use of larger models is important when heterogeneous catalysis problems are the target.
Hierarchical trie packet classification algorithm based on expectation-maximization clustering
Bi, Xia-an; Zhao, Junxia
2017-01-01
With the growth of computer network bandwidth, packet classification algorithms that can deal with large-scale rule sets are urgently needed. Among the existing algorithms, research on packet classification algorithms based on hierarchical tries has become an important branch of packet classification research because of their wide practical use. Although the hierarchical trie saves large amounts of storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses a formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, this paper proposes a hierarchical trie based on the results of the expectation-maximization clustering. Finally, this paper conducts simulation experiments and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm. PMID:28704476
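As background for the trie-based matching that the algorithm builds on, the sketch below shows a plain single-field binary prefix trie with longest-prefix lookup; the paper's hierarchical, EM-clustered, path-compressed structure is not reproduced, and the rule names are hypothetical.

```python
# Basic binary prefix trie for one rule field (e.g. destination IP prefixes),
# shown only to illustrate trie-based matching. The paper's structure is a
# hierarchical trie over EM-clustered rule groups with path compression;
# none of that is reproduced here.
class TrieNode:
    __slots__ = ("children", "rule")
    def __init__(self):
        self.children = [None, None]   # 0-branch and 1-branch
        self.rule = None               # rule id stored at this prefix, if any

class PrefixTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix_bits, rule_id):
        node = self.root
        for bit in prefix_bits:
            if node.children[bit] is None:
                node.children[bit] = TrieNode()
            node = node.children[bit]
        node.rule = rule_id

    def longest_prefix_match(self, address_bits):
        node, best = self.root, None
        for bit in address_bits:
            if node.rule is not None:
                best = node.rule
            node = node.children[bit]
            if node is None:
                return best
        return node.rule if node.rule is not None else best

def to_bits(value, width=8):
    return [(value >> (width - 1 - i)) & 1 for i in range(width)]

trie = PrefixTrie()
trie.insert(to_bits(0b10100000)[:3], "rule-A")   # prefix 101*
trie.insert(to_bits(0b10110000)[:4], "rule-B")   # prefix 1011*
print(trie.longest_prefix_match(to_bits(0b10111010)))  # -> rule-B
```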
Etherington, J.; Thomas, D.; Maraston, C.; ...
2016-01-04
Measurements of the galaxy stellar mass function are crucial to understand the formation of galaxies in the Universe. In a hierarchical clustering paradigm it is plausible that there is a connection between the properties of galaxies and their environments. Evidence for environmental trends has been established in the local Universe. The Dark Energy Survey (DES) provides large photometric datasets that enable further investigation of the assembly of mass. In this study we use ~3.2 million galaxies from the SPT-East field (South Pole Telescope East) in the DES science verification (SV) dataset. From grizY photometry we derive galaxy stellar masses and absolute magnitudes, and determine the errors on these properties using Monte-Carlo simulations using the full photometric redshift probability distributions. We compute galaxy environments using a fixed conical aperture for a range of scales. We construct galaxy environment probability distribution functions and investigate the dependence of the environment errors on the aperture parameters. We compute the environment components of the galaxy stellar mass function for the redshift range 0.15 < z < 1.05. For z < 0.75 we find that the fraction of massive galaxies is larger in high density environments than in low density environments. We show that the low density and high density components converge with increasing redshift up to z ~ 1.0, where the shapes of the mass function components are indistinguishable. As a result, our study shows how high density structures build up around massive galaxies through cosmic time.
Ji-Wook Jeong; Seung-Hoon Chae; Eun Young Chae; Hak Hee Kim; Young Wook Choi; Sooyeul Lee
2016-08-01
A computer-aided detection (CADe) algorithm for clustered microcalcifications (MCs) in reconstructed digital breast tomosynthesis (DBT) images is suggested. The MC-like objects were enhanced by a Hessian-based 3D calcification response function, and a signal-to-noise ratio (SNR) enhanced image was also generated to screen the MC clustering seed objects. A connected component segmentation method was used to detect the cluster seed objects, which were considered potential clustering centers of MCs. Bounding cubes for the accepted clustering seed candidates were generated and the overlapping cubes were combined and examined. After the MC clustering and false-positive (FP) reduction step, the average number of FPs was estimated to be 0.87 per DBT volume with a sensitivity of 90.5%.
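A generic version of the Hessian-based response can be sketched with Gaussian second-derivative filters and per-voxel eigenvalues; the specific response function used in the paper is not reproduced, and the scale, threshold, and toy volume below are illustrative.

```python
# Generic sketch of a Hessian-based 3D blob response: smooth the volume with
# Gaussian second-derivative filters, form the 3x3 Hessian per voxel, and
# respond where all eigenvalues are negative (bright, roughly spherical
# structures such as MC-like objects). This is a standard construction, not
# the paper's exact calcification response function.
import numpy as np
from scipy.ndimage import gaussian_filter

def blob_response(volume, sigma=1.5):
    axes = range(volume.ndim)
    H = np.empty(volume.shape + (3, 3))
    for i in axes:
        for j in axes:
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            H[..., i, j] = gaussian_filter(volume, sigma, order=order)
    eigvals = np.linalg.eigvalsh(H)          # ascending: l1 <= l2 <= l3
    l1, l2, l3 = eigvals[..., 0], eigvals[..., 1], eigvals[..., 2]
    bright_blob = l3 < 0                     # all eigenvalues negative
    # Geometric-mean magnitude of the eigenvalues where blob-like, else 0.
    return np.where(bright_blob, np.cbrt(np.abs(l1 * l2 * l3)), 0.0)

# Toy volume: a bright Gaussian blob in noise.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
vol = np.exp(-((x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2) / 8.0)
vol += 0.05 * np.random.default_rng(0).standard_normal(vol.shape)
resp = blob_response(vol)
print(np.unravel_index(np.argmax(resp), resp.shape))   # near (16, 16, 16)
```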
Exploiting volatile opportunistic computing resources with Lobster
NASA Astrophysics Data System (ADS)
Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2015-12-01
Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by the availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS-specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools has been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.
Integrating Xgrid into the HENP distributed computing model
NASA Astrophysics Data System (ADS)
Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.
2008-07-01
Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortless for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.
NASA Astrophysics Data System (ADS)
Martinet, Nicolas; Durret, Florence; Guennou, Loïc; Adami, Christophe; Biviano, Andrea; Ulmer, Melville P.; Clowe, Douglas; Halliday, Claire; Ilbert, Olivier; Márquez, Isabel; Schirmer, Mischa
2015-03-01
Context. There is some disagreement about the abundance of faint galaxies in high-redshift clusters, with contradictory results in the literature arising from studies of the optical galaxy luminosity function (GLF) for small cluster samples. Aims: We compute GLFs for one of the largest medium-to-high-redshift (0.4 ≤ z < 0.9) cluster samples to date in order to probe the abundance of faint galaxies in clusters. We also study how the GLF depends on cluster redshift, mass, and substructure and compare the GLFs of clusters with those of the field. We separately investigate the GLFs of blue and red-sequence (RS) galaxies to understand the evolution of different cluster populations. Methods: We calculated the GLFs for 31 clusters taken from the DAFT/FADA survey in the B,V,R, and I rest-frame bands. We used photometric redshifts computed from BVRIZJ images to constrain galaxy cluster membership. We carried out a detailed estimate of the completeness of our data. We distinguished the red-sequence and blue galaxies using a V - I versus I colour-magnitude diagram. We studied the evolution of these two populations with redshift. We fitted Schechter functions to our stacked GLFs to determine average cluster characteristics. Results: We find that the shapes of our GLFs are similar for the B,V,R, and I bands with a drop at the red GLF faint ends that is more pronounced at high redshift: αred ~ -0.5 at 0.40 ≤ z < 0.65 and αred > 0.1 at 0.65 ≤ z < 0.90. The blue GLFs have a steeper faint end (αblue ~ -1.6) than the red GLFs, which appears to be independent of redshift. For the full cluster sample, blue and red GLFs meet at MV = -20, MR = -20.5, and MI = -20.3. A study of how galaxy types evolve with redshift shows that late-type galaxies appear to become early types between z ~ 0.9 and today. Finally, the faint ends of the red GLFs of more massive clusters appear to be richer than less massive clusters, which is more typical of the lower redshift behaviour. Conclusions: Our results indicate that these clusters form at redshifts higher than z = 0.9 from galaxy structures that already have an established red sequence. Late-type galaxies then appear to evolve into early types, enriching the red sequence between this redshift and today. This effect is consistent with the evolution of the faint-end slope of the red sequence and the galaxy type evolution that we find. Finally, faint galaxies accreted from the field environment at all redshifts might have replaced the blue late-type galaxies that converted into early types, explaining the lack of evolution in the faint-end slopes of the blue GLFs. Appendix is available in electronic form at http://www.aanda.org
NASA Astrophysics Data System (ADS)
Jensen, Sigurd S.; Haugbølle, Troels
2018-02-01
Hertzsprung-Russell diagrams of star-forming regions show a large luminosity spread. This is incompatible with well-defined isochrones based on classic non-accreting protostellar evolution models. Protostars do not evolve in isolation from their environment, but grow through accretion of gas. In addition, while an age can be defined for a star-forming region, the ages of individual stars in the region will vary. We show how the combined effect of a protostellar age spread, a consequence of sustained star formation in the molecular cloud, and time-varying protostellar accretion for individual protostars can explain the observed luminosity spread. We use a global magnetohydrodynamic simulation including a sub-scale sink particle model of a star-forming region to follow the accretion process of each star. The accretion profiles are used to compute stellar evolution models for each star, incorporating a model of how the accretion energy is distributed to the disc, radiated away at the accretion shock, or incorporated into the outer layers of the protostar. Using a modelled cluster age of 5 Myr, we naturally reproduce the luminosity spread and find good agreement with observations of the Collinder 69 cluster and the Orion Nebula Cluster. It is shown how stars in binary and multiple systems can be externally forced, creating recurrent episodic accretion events. We find that in a realistic global molecular cloud model, massive stars build up mass over relatively long time-scales. This leads to an important conceptual change compared to the classic picture of non-accreting stellar evolution segmented into low-mass Hayashi tracks and high-mass Henyey tracks.
Biologically Active Secondary Metabolites from the Fungi.
Bills, Gerald F; Gloer, James B
2016-11-01
Many Fungi have a well-developed secondary metabolism. The diversity of fungal species and the diversification of biosynthetic gene clusters underscore a nearly limitless potential for metabolic variation and an untapped resource for drug discovery and synthetic biology. Much of the ecological success of the filamentous fungi in colonizing the planet is owed to their ability to deploy their secondary metabolites in concert with their penetrative and absorptive mode of life. Fungal secondary metabolites exhibit biological activities that have been developed into life-saving medicines and agrochemicals. Toxic metabolites, known as mycotoxins, contaminate human and livestock food and indoor environments. Secondary metabolites are determinants of fungal diseases of humans, animals, and plants. Secondary metabolites exhibit a staggering variation in chemical structures and biological activities, yet their biosynthetic pathways share a number of key characteristics. The genes encoding cooperative steps of a biosynthetic pathway tend to be located contiguously on the chromosome in coregulated gene clusters. Advances in genome sequencing, computational tools, and analytical chemistry are enabling the rapid connection of gene clusters with their metabolic products. At least three fungal drug precursors, penicillins K and V, mycophenolic acid, and pleuromutilin, have been produced by synthetic reconstruction and expression of the respective gene clusters in heterologous hosts. This review summarizes general aspects of fungal secondary metabolism and recent developments in our understanding of how and why fungi make secondary metabolites, how these molecules are produced, and how their biosynthetic genes are distributed across the Fungi. The breadth of fungal secondary metabolite diversity is highlighted by recent information on the biosynthesis of important fungus-derived metabolites that have contributed to human health and agriculture and that have negatively impacted crops, food distribution, and human environments.
Investigation of the cluster formation in lithium niobate crystals by computer modeling method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voskresenskii, V. M.; Starodub, O. R., E-mail: ol-star@mail.ru; Sidorov, N. V.
The processes occurring upon the formation of energetically equilibrated oxygen-octahedral clusters in the ferroelectric phase of a stoichiometric lithium niobate (LiNbO3) crystal have been investigated by the computer modeling method within the semiclassical atomistic model. An energetically favorable cluster size (at which a structure similar to that of a congruent crystal is organized) is shown to exist. A stoichiometric cluster cannot exist because of the electroneutrality loss. The most energetically favorable cluster is that with a Li/Nb ratio of about 0.945, a value close to the lithium-to-niobium ratio for a congruent crystal.
A cluster expansion model for predicting activation barrier of atomic processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rehman, Tafizur; Jaipal, M.; Chatterjee, Abhijit, E-mail: achatter@iitk.ac.in
2013-06-15
We introduce a procedure based on cluster expansion models for predicting the activation barrier of atomic processes encountered while studying the dynamics of a material system using the kinetic Monte Carlo (KMC) method. Starting with an interatomic potential description, a mathematical derivation is presented to show that the local environment dependence of the activation barrier can be captured using cluster interaction models. Next, we develop a systematic procedure for training the cluster interaction model on-the-fly, which involves: (i) obtaining activation barriers for a handful of local environments using nudged elastic band (NEB) calculations, (ii) identifying the local environment by analyzing the NEB results, and (iii) estimating the cluster interaction model parameters from the activation barrier data. Once a cluster expansion model has been trained, it is used to predict activation barriers without requiring any additional NEB calculations. Numerical studies are performed to validate the cluster expansion model by studying hop processes in Ag/Ag(100). We show that the use of the cluster expansion model with KMC enables efficient generation of an accurate process rate catalog.
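As a purely illustrative sketch of the idea (not the authors' implementation), the snippet below writes an activation barrier as a linear cluster-interaction model over the local occupation pattern and converts it into a KMC rate through an Arrhenius expression. The feature construction, coefficients, and attempt-frequency prefactor are assumptions made up for the example.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant, eV/K


def barrier_from_cluster_expansion(occupations, model):
    """Barrier = constant + sum of cluster terms, each term being a coefficient
    times the product of site occupations over the sites in that cluster."""
    e_act = model["constant"]
    for sites, coeff in model["clusters"]:
        e_act += coeff * np.prod([occupations[s] for s in sites])
    return e_act


def kmc_rate(e_act, temperature, prefactor=1.0e13):
    """Arrhenius rate (s^-1) for a hop with activation barrier e_act (eV)."""
    return prefactor * np.exp(-e_act / (K_B * temperature))


# Hypothetical local environment: 1 = occupied neighbour site, 0 = vacant.
occupations = {0: 1, 1: 0, 2: 1, 3: 1}
# Hypothetical fitted model: point, pair, and triplet cluster terms (eV).
model = {"constant": 0.45,
         "clusters": [((0,), 0.05), ((2, 3), -0.03), ((0, 2, 3), 0.02)]}

e_act = barrier_from_cluster_expansion(occupations, model)
print(f"barrier = {e_act:.3f} eV, rate at 300 K = {kmc_rate(e_act, 300.0):.3e} 1/s")
```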
Gholami, Mohammad; Brennan, Robert W
2016-01-06
In this paper, we investigate alternative distributed clustering techniques for wireless sensor node tracking in an industrial environment. The research builds on extant work on wireless sensor node clustering by reporting on: (1) the development of a novel distributed management approach for tracking mobile nodes in an industrial wireless sensor network; and (2) an objective comparison of alternative cluster management approaches for wireless sensor networks. To perform this comparison, we focus on two main clustering approaches proposed in the literature: pre-defined clusters and ad hoc clusters. These approaches are compared in the context of their reconfigurability: more specifically, we investigate the trade-off between the cost and the effectiveness of competing strategies aimed at adapting to changes in the sensing environment. To support this work, we introduce three new metrics: a cost/efficiency measure, a performance measure, and a resource consumption measure. The results of our experiments show that ad hoc clusters adapt more readily to changes in the sensing environment, but this higher level of adaptability is at the cost of overall efficiency.
Gholami, Mohammad; Brennan, Robert W.
2016-01-01
In this paper, we investigate alternative distributed clustering techniques for wireless sensor node tracking in an industrial environment. The research builds on extant work on wireless sensor node clustering by reporting on: (1) the development of a novel distributed management approach for tracking mobile nodes in an industrial wireless sensor network; and (2) an objective comparison of alternative cluster management approaches for wireless sensor networks. To perform this comparison, we focus on two main clustering approaches proposed in the literature: pre-defined clusters and ad hoc clusters. These approaches are compared in the context of their reconfigurability: more specifically, we investigate the trade-off between the cost and the effectiveness of competing strategies aimed at adapting to changes in the sensing environment. To support this work, we introduce three new metrics: a cost/efficiency measure, a performance measure, and a resource consumption measure. The results of our experiments show that ad hoc clusters adapt more readily to changes in the sensing environment, but this higher level of adaptability is at the cost of overall efficiency. PMID:26751447
m-BIRCH: an online clustering approach for computer vision applications
NASA Astrophysics Data System (ADS)
Madan, Siddharth K.; Dana, Kristin J.
2015-03-01
We adapt a classic online clustering algorithm called Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH), to incrementally cluster large datasets of features commonly used in multimedia and computer vision. We call the adapted version modified-BIRCH (m-BIRCH). The algorithm uses only a fraction of the dataset memory to perform clustering, and updates the clustering decisions when new data comes in. Modifications made in m-BIRCH enable data driven parameter selection and effectively handle varying density regions in the feature space. Data driven parameter selection automatically controls the level of coarseness of the data summarization. Effective handling of varying density regions is necessary to well represent the different density regions in data summarization. We use m-BIRCH to cluster 840K color SIFT descriptors, and 60K outlier corrupted grayscale patches. We use the algorithm to cluster datasets consisting of challenging non-convex clustering patterns. Our implementation of the algorithm provides a useful clustering tool and is made publicly available.
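m-BIRCH itself is not reproduced here; as a point of reference, the sketch below runs the base BIRCH algorithm incrementally through scikit-learn's Birch estimator, feeding feature batches with partial_fit so that only the CF-tree summary is held in memory. The threshold, branching factor, and synthetic 16-D "descriptors" are illustrative choices, and none of the m-BIRCH modifications (data-driven parameter selection, varying-density handling) are included.

```python
import numpy as np
from sklearn.cluster import Birch

rng = np.random.default_rng(0)

# BIRCH maintains a compact CF-tree summary, so batches of features can be
# streamed in without keeping the whole dataset in memory.
birch = Birch(threshold=0.5, branching_factor=50, n_clusters=5)

for _ in range(10):                      # simulate a stream of feature batches
    batch = rng.normal(size=(1000, 16))  # illustrative 16-D descriptors
    birch.partial_fit(batch)             # fold the new batch into the CF-tree

labels = birch.predict(rng.normal(size=(100, 16)))  # assign new points to clusters
print(np.bincount(labels))
```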
Core Collapse: The Race Between Stellar Evolution and Binary Heating
NASA Astrophysics Data System (ADS)
Converse, Joseph M.; Chandar, R.
2012-01-01
The dynamical formation of binary stars can dramatically affect the evolution of their host star clusters. In relatively small clusters (M < 6000 Msun) the most massive stars rapidly form binaries, heating the cluster and preventing any significant contraction of the core. The situation in much larger globular clusters (M ~ 10^5 Msun) is quite different, with many showing collapsed cores, implying that binary formation did not affect them as severely as lower mass clusters. More massive clusters, however, should take longer to form their binaries, allowing stellar evolution more time to prevent the heating by causing the larger stars to die off. Here, we simulate the evolution of clusters with masses between those of open and globular clusters in order to find at what size a star cluster is able to experience true core collapse. Our simulations make use of a new GPU-based computing cluster recently purchased at the University of Toledo. We also present some benchmarks of this new computational resource.
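The "race" here is between two-body relaxation, which drives mass segregation, binary formation, and core collapse, and the few-Myr lifetimes of the most massive stars. A standard order-of-magnitude yardstick, quoted for orientation rather than taken from this abstract, is the half-mass relaxation time

\[
t_{\mathrm{rh}} \;\simeq\; 0.138\,\frac{N}{\ln\Lambda}
\left(\frac{r_h^{3}}{G M}\right)^{1/2},
\]

with N the number of stars, M the cluster mass, r_h the half-mass radius, and ln Λ ≈ ln(0.4N) the Coulomb logarithm. When t_rh greatly exceeds the lifetimes of the most massive stars, those stars die before dynamics can concentrate them into the heating binaries, which is why sufficiently massive clusters can still reach true core collapse.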
Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Hamer, George
2003-01-01
Beowulf clusters can provide a cost-effective way to compute numerical models and process large amounts of remote sensing image data. Usually a Beowulf cluster is designed to accomplish a specific set of processing goals, and processing is very efficient when the problem remains inside the constraints of the original design. There are cases, however, when one might wish to compute a problem that is beyond the capacity of the local Beowulf system. In these cases, spreading the problem to multiple clusters or to other machines on the network may provide a cost-effective solution.
A new approach of data clustering using a flock of agents.
Picarougne, Fabien; Azzag, Hanene; Venturini, Gilles; Guinot, Christiane
2007-01-01
This paper presents a new bio-inspired algorithm (FClust) that dynamically creates and visualizes groups of data. This algorithm uses the concepts of a flock of agents that move together in a complex manner with simple local rules. Each agent represents one data item. The agents move together in a 2D environment with the aim of creating homogeneous groups of data. These groups are visualized in real time, and help the domain expert to understand the underlying structure of the data set, such as a realistic number of classes, clusters of similar data, or isolated data points. We also present several extensions of this algorithm, which reduce its computational cost, and make use of a 3D display. This algorithm is then tested on artificial and real-world data, and a heuristic algorithm is used to evaluate the relevance of the obtained partitioning.
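The exact movement rules of FClust are not reproduced here; the snippet below is a generic flock-style sketch under the assumption, common to this family of algorithms, that each agent carries one data item and is attracted toward display-space neighbours with similar data and pushed away from dissimilar ones. All parameters (neighbourhood radius, similarity threshold, step size) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents = 200
data = rng.normal(size=(n_agents, 5))            # one 5-D data item per agent
pos = rng.uniform(0.0, 1.0, size=(n_agents, 2))  # agent positions in the 2D display


def step(pos, data, radius=0.1, sim_thresh=1.0, speed=0.005):
    """One update: move toward similar neighbours, away from dissimilar ones."""
    new_pos = pos.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        neigh = (dist < radius) & (dist > 0.0)
        if not neigh.any():
            continue
        data_dist = np.linalg.norm(data[neigh] - data[i], axis=1)
        direction = pos[neigh] - pos[i]
        sign = np.where(data_dist < sim_thresh, 1.0, -1.0)[:, None]
        new_pos[i] += speed * (sign * direction).mean(axis=0)
    return np.clip(new_pos, 0.0, 1.0)


for _ in range(500):
    pos = step(pos, data)
print(pos[:3])  # agents carrying similar data end up spatially grouped in the display
```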
Crystal MD: The massively parallel molecular dynamics software for metal with BCC structure
NASA Astrophysics Data System (ADS)
Hu, Changjun; Bai, He; He, Xinfu; Zhang, Boyao; Nie, Ningming; Wang, Xianmeng; Ren, Yingwen
2017-02-01
Material irradiation effects are among the most important issues for the use of nuclear power. However, the lack of high-throughput irradiation facilities and of knowledge about the evolution process leads to a limited understanding of these issues. With the help of high-performance computing, we can gain a deeper understanding of materials at the micro level. In this paper, a new data structure is proposed for the massively parallel simulation of the evolution of metal materials under irradiation environments. Based on the proposed data structure, we developed new molecular dynamics software named Crystal MD. Simulations with Crystal MD achieved over 90% parallel efficiency in test cases, and the code takes more than 25% less memory on multi-core clusters than LAMMPS and IMD, two popular molecular dynamics simulation packages. Using Crystal MD, a two-trillion-particle simulation has been performed on the Tianhe-2 cluster.
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
During the last few years, several Grid computing centres chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning on and off virtual machines. Some of those private clouds use, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need: however, resource starvation occurs frequently as expansion has to compete with other virtual machines running long-living batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain them and shut them off, and making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental “Docked” Analysis Facility for ALICE, which leverages containers instead of virtual machines for providing performance and security isolation. We will present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature a more fine-grained sizing, down to single-job node containers: we will show how this approach will positively impact automatic cluster resizing by deploying lightweight pilot containers instead of replacing central queue polls.
Illinois Occupational Skill Standards: Information Technology Operate Cluster.
ERIC Educational Resources Information Center
Illinois Occupational Skill Standards and Credentialing Council, Carbondale.
This document contains Illinois Occupational Skill Standards for occupations in the Information Technology Operate Cluster (help desk support, computer maintenance and technical support technician, systems operator, application and computer support specialist, systems administrator, network administrator, and database administrator). The skill…
NASA Astrophysics Data System (ADS)
Shi, X.
2015-12-01
As NSF indicated, "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and geosciences. With the exponential growth of geodata, the challenge of scalable and high-performance computing for big data analytics becomes urgent, because many research activities are constrained by software or tools that cannot even complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and the operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions to employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in a massively parallel computing environment to achieve the capability for scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise to achieve scalability and high performance by exploiting task and data levels of parallelism that are not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as demonstrated by our prior work, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.
Universal clustering of dark matter in phase space
NASA Astrophysics Data System (ADS)
Zavala, Jesús; Afshordi, Niayesh
2016-03-01
We have recently introduced a novel statistical measure of dark matter clustering in phase space, the particle phase-space average density (P2SAD). In a two-paper series, we studied the structure of P2SAD in the Milky Way-size Aquarius haloes, constructed a physically motivated model to describe it, and illustrated its potential as a powerful tool to predict signals sensitive to the nanostructure of dark matter haloes. In this work, we report a remarkable universality of the clustering of dark matter in phase space as measured by P2SAD within the subhaloes of host haloes across different environments covering a range from dwarf-size to cluster-size haloes (1010-1015 M⊙). Simulations show that the universality of P2SAD holds for more than seven orders of magnitude, over a 2D phase space, covering over three orders of magnitude in distance/velocity, with a simple functional form that can be described by our model. Invoking the universality of P2SAD, we can accurately predict the non-linear power spectrum of dark matter at small scales all the way down to the decoupling mass limit of cold dark matter particles. As an application, we compute the subhalo boost to the annihilation of dark matter in a wide range of host halo masses.
OCCAM: a flexible, multi-purpose and extendable HPC cluster
NASA Astrophysics Data System (ADS)
Aldinucci, M.; Bagnasco, S.; Lusso, S.; Pasteris, P.; Rabellino, S.; Vallero, S.
2017-10-01
The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multipurpose flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Sezione di Torino of the Istituto Nazionale di Fisica Nucleare. It is aimed at providing a flexible, reconfigurable and extendable infrastructure to cater to a wide range of different scientific computing use cases, including ones from solid-state chemistry, high-energy physics, computer science, big data analytics, computational biology, genomics and many others. Furthermore, it will serve as a platform for R&D activities on computational technologies themselves, with topics ranging from GPU acceleration to Cloud Computing technologies. A heterogeneous and reconfigurable system like this poses a number of challenges related to the frequency at which heterogeneous hardware resources might change their availability and shareability status, which in turn affect methods and means to allocate, manage, optimize, bill, monitor VMs, containers, virtual farms, jobs, interactive bare-metal sessions, etc. This work describes some of the use cases that prompted the design and construction of the HPC cluster, its architecture and resource provisioning model, along with a first characterization of its performance by some synthetic benchmark tools and a few realistic use-case tests.
WEIGHING GALAXY CLUSTERS WITH GAS. I. ON THE METHODS OF COMPUTING HYDROSTATIC MASS BIAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lau, Erwin T.; Nagai, Daisuke; Nelson, Kaylea, E-mail: erwin.lau@yale.edu
2013-11-10
Mass estimates of galaxy clusters from X-ray and Sunyaev-Zel'dovich observations assume the intracluster gas is in hydrostatic equilibrium with their gravitational potential. However, since galaxy clusters are dynamically active objects whose dynamical states can deviate significantly from the equilibrium configuration, the departure from the hydrostatic equilibrium assumption is one of the largest sources of systematic uncertainties in cluster cosmology. In the literature there have been two methods for computing the hydrostatic mass bias based on the Euler and the modified Jeans equations, respectively, and there has been some confusion about the validity of these two methods. The word 'Jeans' was a misnomer, which incorrectly implies that the gas is collisionless. To avoid further confusion, we instead refer to these methods as 'summation' and 'averaging' methods respectively. In this work, we show that these two methods for computing the hydrostatic mass bias are equivalent by demonstrating that the equation used in the second method can be derived from taking spatial averages of the Euler equation. Specifically, we identify the correspondences of individual terms in these two methods mathematically and show that these correspondences are valid to within a few percent level using hydrodynamical simulations of galaxy cluster formation. In addition, we compute the mass bias associated with the acceleration of gas and show that its contribution is small in the virialized regions in the interior of galaxy clusters, but becomes non-negligible in the outskirts of massive galaxy clusters. We discuss future prospects of understanding and characterizing biases in the mass estimate of galaxy clusters using both hydrodynamical simulations and observations and their implications for cluster cosmology.
Weighing Galaxy Clusters with Gas. I. On the Methods of Computing Hydrostatic Mass Bias
NASA Astrophysics Data System (ADS)
Lau, Erwin T.; Nagai, Daisuke; Nelson, Kaylea
2013-11-01
Mass estimates of galaxy clusters from X-ray and Sunyaev-Zel'dovich observations assume the intracluster gas is in hydrostatic equilibrium with their gravitational potential. However, since galaxy clusters are dynamically active objects whose dynamical states can deviate significantly from the equilibrium configuration, the departure from the hydrostatic equilibrium assumption is one of the largest sources of systematic uncertainties in cluster cosmology. In the literature there have been two methods for computing the hydrostatic mass bias based on the Euler and the modified Jeans equations, respectively, and there has been some confusion about the validity of these two methods. The word "Jeans" was a misnomer, which incorrectly implies that the gas is collisionless. To avoid further confusion, we instead refer to these methods as "summation" and "averaging" methods respectively. In this work, we show that these two methods for computing the hydrostatic mass bias are equivalent by demonstrating that the equation used in the second method can be derived from taking spatial averages of the Euler equation. Specifically, we identify the correspondences of individual terms in these two methods mathematically and show that these correspondences are valid to within a few percent level using hydrodynamical simulations of galaxy cluster formation. In addition, we compute the mass bias associated with the acceleration of gas and show that its contribution is small in the virialized regions in the interior of galaxy clusters, but becomes non-negligible in the outskirts of massive galaxy clusters. We discuss future prospects of understanding and characterizing biases in the mass estimate of galaxy clusters using both hydrodynamical simulations and observations and their implications for cluster cosmology.
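For reference, the hydrostatic mass estimate whose bias is being quantified follows from the spherically averaged Euler equation with the gas acceleration and non-thermal pressure terms dropped; in its usual observational form for an ideal intracluster gas,

\[
M_{\mathrm{HSE}}(<r) \;=\; -\,\frac{k_B T(r)\,r}{G\,\mu m_p}
\left(\frac{\mathrm{d}\ln\rho_{\mathrm{gas}}}{\mathrm{d}\ln r}
+\frac{\mathrm{d}\ln T}{\mathrm{d}\ln r}\right),
\]

where μ m_p is the mean molecular mass of the gas. The "summation" and "averaging" methods discussed above are two ways of comparing this estimate with the true mass in simulations, thereby quantifying the terms that the hydrostatic assumption neglects.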
VEDA: a web-based virtual environment for dynamic atomic force microscopy.
Melcher, John; Hu, Shuiqing; Raman, Arvind
2008-06-01
We describe here the theory and applications of virtual environment dynamic atomic force microscopy (VEDA), a suite of state-of-the-art simulation tools deployed on nanoHUB (www.nanohub.org) for the accurate simulation of tip motion in dynamic atomic force microscopy (dAFM) over organic and inorganic samples. VEDA takes advantage of nanoHUB's cyberinfrastructure to run high-fidelity dAFM tip dynamics computations on local clusters and the TeraGrid. Consequently, these tools are freely accessible and the dAFM simulations are run using standard web-based browsers without requiring additional software. A wide range of issues in dAFM, ranging from optimal probe choice, probe stability, tip-sample interaction forces, and power dissipation to material property extraction and scanning dynamics over heterogeneous samples, can be addressed.
Invited Article: VEDA: A web-based virtual environment for dynamic atomic force microscopy
NASA Astrophysics Data System (ADS)
Melcher, John; Hu, Shuiqing; Raman, Arvind
2008-06-01
We describe here the theory and applications of virtual environment dynamic atomic force microscopy (VEDA), a suite of state-of-the-art simulation tools deployed on nanoHUB (www.nanohub.org) for the accurate simulation of tip motion in dynamic atomic force microscopy (dAFM) over organic and inorganic samples. VEDA takes advantage of nanoHUB's cyberinfrastructure to run high-fidelity dAFM tip dynamics computations on local clusters and the TeraGrid. Consequently, these tools are freely accessible and the dAFM simulations are run using standard web-based browsers without requiring additional software. A wide range of issues in dAFM, ranging from optimal probe choice, probe stability, tip-sample interaction forces, and power dissipation to material property extraction and scanning dynamics over heterogeneous samples, can be addressed.
On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms
He, Li; Zheng, Hao; Wang, Lei
2017-01-01
Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering place high demands on the computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, the General-Purpose Graphics Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when they are powered by a GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, such as evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experimental results verified the theoretical conclusions. PMID:29123546
Xu, Peng; Gordon, Mark S
2014-09-04
Anionic water clusters are generally considered to be extremely challenging to model using fragmentation approaches due to the diffuse nature of the excess electron distribution. The local correlation coupled cluster (CC) framework cluster-in-molecule (CIM) approach combined with the completely renormalized CR-CC(2,3) method [abbreviated CIM/CR-CC(2,3)] is shown to be a viable alternative for computing the vertical electron binding energies (VEBE). CIM/CR-CC(2,3) with the threshold parameter ζ set to 0.001, as a trade-off between accuracy and computational cost, demonstrates the reliability of predicting the VEBE, with an average percentage error of ∼15% compared to the full ab initio calculation at the same level of theory. The errors are predominantly from the electron correlation energy. The CIM/CR-CC(2,3) approach provides the ease of a black-box type calculation with few threshold parameters to manipulate. The cluster sizes that can be studied by high-level ab initio methods are significantly increased in comparison with full CC calculations. Therefore, the VEBE computed by the CIM/CR-CC(2,3) method can be used as benchmarks for testing model potential approaches in small-to-intermediate-sized water clusters.
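The benchmark quantity here has the standard definition (not specific to CIM or CR-CC(2,3)): the vertical electron binding energy is the energy difference between the neutral cluster and the anion, both evaluated at the anion geometry,

\[
\mathrm{VEBE} \;=\; E_{\mathrm{neutral}}\!\left(\mathbf{R}_{\mathrm{anion}}\right)
\;-\; E_{\mathrm{anion}}\!\left(\mathbf{R}_{\mathrm{anion}}\right),
\]

so a positive VEBE corresponds to a bound excess electron. As noted above, the ~15% error of the fragment-based CIM treatment enters mainly through the correlation-energy contributions to these two total energies.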
Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.
Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi
2015-01-01
Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we proposed a new clustering algorithm named localized ambient solidity separation (LASS), which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method to separate naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it.
Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation
Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi
2015-01-01
Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we proposed a new clustering algorithm named localized ambient solidity separation (LASS), which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method to separate naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133
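The implicit assumption referred to above can be stated explicitly: k-means minimizes the within-cluster sum of squared Euclidean distances,

\[
J \;=\; \sum_{k=1}^{K}\;\sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^{2},
\]

which is the hard-assignment limit of maximum likelihood for a mixture of spherical Gaussians sharing one covariance. This is why it handles adjacent, overlapping, or varying-density clusters poorly, and it is the gap that the centroid-distance isolation criterion of LASS is designed to address.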
NASA Astrophysics Data System (ADS)
You, Yu-Wei; Kong, Xiang-Shan; Wu, Xuebang; Liu, C. S.; Fang, Q. F.; Chen, J. L.; Luo, G.-N.
2017-08-01
The formation of transmutation solute-rich precipitates has been reported to seriously degrade the mechanical properties of tungsten in a fusion environment. However, the underlying mechanisms controlling the formation of the precipitates are still unknown. In this study, first-principles calculations are therefore performed to systematically determine the stable structures and binding energies of solute clusters in tungsten consisting of tantalum, rhenium, and osmium atoms as well as irradiation-induced vacancies. These clusters are known to act as precursors for the formation of precipitates. We find that osmium can easily segregate to form clusters even in defect-free tungsten alloys, whereas extremely high tantalum and rhenium concentrations are required for the formation of clusters. Vacancies greatly facilitate the clustering of rhenium and osmium, while tantalum is an exception. The binding energies of vacancy-osmium clusters are found to be much higher than those of vacancy-tantalum and vacancy-rhenium clusters. Osmium is observed to strongly promote the formation of vacancy-rhenium clusters, while tantalum can suppress the formation of vacancy-rhenium and vacancy-osmium clusters. The local strain and electronic structure are analyzed to reveal the underlying mechanisms governing the cluster formation. Employing the law of mass action, we predict the evolution of the relative concentration of vacancy-rhenium clusters. This work presents a microscopic picture describing the nucleation and growth of solute clusters in tungsten alloys in a fusion reactor environment, and thereby explains recent experimental phenomena.
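The binding energies and the mass-action estimate above follow standard first-principles conventions; written generically (the exact reference states used in the paper are not reproduced here), the binding energy of two defects A and B computed in otherwise identical supercells is

\[
E_b(A,B) \;=\; \big[E(A) + E(B)\big] \;-\; \big[E(A{+}B) + E_{\mathrm{bulk}}\big],
\]

with E(X) the total energy of a cell containing only defect X and E_bulk that of the defect-free cell, so that E_b > 0 means the defects attract. The law of mass action then gives, up to configurational prefactors, an equilibrium cluster fraction of order c_AB ≈ c_A c_B exp(E_b / k_B T), which is the kind of relation used above to follow the relative concentration of vacancy-rhenium clusters.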
Computation of neutron fluxes in clusters of fuel pins arranged in hexagonal assemblies (2D and 3D)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabha, H.; Marleau, G.
2012-07-01
For computations of fluxes, we have used Carlvik's method of collision probabilities. This method requires tracking algorithms. An algorithm to compute tracks (in 2D and 3D) has been developed for seven hexagonal geometries with clusters of fuel pins. This has been implemented in the NXT module of the code DRAGON. The flux distribution in clusters of pins has been computed by using this code. For testing, the results are compared when possible with the EXCELT module of the code DRAGON. Tracks are plotted in the NXT module by using MATLAB; these plots are also presented here. Results are presented with an increasing number of lines to show the convergence of these results. We have numerically computed volumes, surface areas, and the percentage errors in these computations. These results show that 2D results converge faster than 3D results. An accuracy of up to the second decimal in the computation of fluxes is achieved with fewer lines. (authors)
Gate sequence for continuous variable one-way quantum computation
Su, Xiaolong; Hao, Shuhong; Deng, Xiaowei; Ma, Lingyu; Wang, Meihong; Jia, Xiaojun; Xie, Changde; Peng, Kunchi
2013-01-01
Measurement-based one-way quantum computation using cluster states as resources provides an efficient model to perform computation and information processing of quantum codes. Arbitrary Gaussian quantum computation can be implemented by sufficiently long single-mode and two-mode gate sequences. However, continuous variable gate sequences have not been realized so far due to the absence of cluster states larger than four submodes. Here we present the first continuous variable gate sequence consisting of a single-mode squeezing gate and a two-mode controlled-phase gate based on a six-mode cluster state. The quantum property of this gate sequence is confirmed by the fidelities and the quantum entanglement of the two output modes, which depend on both the squeezing and controlled-phase gates. The experiment demonstrates the feasibility of implementing Gaussian quantum computation by means of accessible gate sequences.
A Genetic Algorithm That Exchanges Neighboring Centers for Fuzzy c-Means Clustering
ERIC Educational Resources Information Center
Chahine, Firas Safwan
2012-01-01
Clustering algorithms are widely used in pattern recognition and data mining applications. Due to their computational efficiency, partitional clustering algorithms are better suited for applications with large datasets than hierarchical clustering algorithms. K-means is among the most popular partitional clustering algorithms, but has a major…
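For context, fuzzy c-means, whose centers the genetic algorithm exchanges, minimizes the standard weighted objective

\[
J_m \;=\; \sum_{i=1}^{N}\sum_{j=1}^{C} u_{ij}^{\,m}\,\lVert x_i - c_j\rVert^{2},
\qquad \sum_{j=1}^{C} u_{ij} = 1,\quad m > 1,
\]

where u_ij is the membership of point x_i in cluster j, c_j is the cluster center, and m is the fuzzifier. Because the usual alternating updates of memberships and centers only reach local optima, population-based searches such as genetic algorithms are a natural way to escape poor center configurations.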
Parallel evolutionary computation in bioinformatics applications.
Pinho, Jorge; Sobral, João Luis; Rocha, Miguel
2013-05-01
A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational effort. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of its efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of the parallelism-related modules allows the user to easily configure the environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization.
NASA Astrophysics Data System (ADS)
Gutzwiller, David; Gontier, Mathieu; Demeulenaere, Alain
2014-11-01
Multi-block structured solvers hold many advantages over their unstructured counterparts, such as a smaller memory footprint and efficient serial performance. Historically, multi-block structured solvers have not been easily adapted for use in a High Performance Computing (HPC) environment, and the recent trend towards hybrid GPU/CPU architectures has further complicated the situation. This paper will elaborate on developments and innovations applied to the NUMECA FINE/Turbo solver that have allowed near-linear scalability with real-world problems on over 250 hybrid GPU/CPU cluster nodes. Discussion will focus on the implementation of virtual partitioning and load balancing algorithms using a novel meta-block concept. This implementation is transparent to the user, allowing all pre- and post-processing steps to be performed using a simple, unpartitioned grid topology. Additional discussion will elaborate on developments that have improved parallel performance, including fully parallel I/O with the ADIOS API and the GPU porting of the computationally heavy CPUBooster convergence acceleration module.
Genotyping in the cloud with Crossbow.
Gurtowski, James; Schatz, Michael C; Langmead, Ben
2012-09-01
Crossbow is a scalable, portable, and automatic cloud computing tool for identifying SNPs from high-coverage, short-read resequencing data. It is built on Apache Hadoop, an implementation of the MapReduce software framework. Hadoop allows Crossbow to distribute read alignment and SNP calling subtasks over a cluster of commodity computers. Two robust tools, Bowtie and SOAPsnp, implement the fundamental alignment and variant calling operations respectively, and have demonstrated capabilities within Crossbow of analyzing approximately one billion short reads per hour on a commodity Hadoop cluster with 320 cores. Through protocol examples, this unit will demonstrate the use of Crossbow for identifying variations in three different operating modes: on a Hadoop cluster, on a single computer, and on the Amazon Elastic MapReduce cloud computing service.
Jeong, Ji-Wook; Chae, Seung-Hoon; Chae, Eun Young; Kim, Hak Hee; Choi, Young-Wook; Lee, Sooyeul
2016-01-01
We propose a computer-aided detection (CADe) algorithm for microcalcification (MC) clusters in reconstructed digital breast tomosynthesis (DBT) images. The algorithm consists of prescreening, MC detection, clustering, and false-positive (FP) reduction steps. The DBT images containing the MC-like objects were enhanced by a multiscale Hessian-based three-dimensional (3D) objectness response function, and a connected-component segmentation method was applied to extract the cluster seed objects as potential clustering centers of MCs. Secondly, a signal-to-noise ratio (SNR) enhanced image was generated to detect the individual MC candidates and prescreen the MC-like objects. Each cluster seed candidate was prescreened by counting the neighboring individual MC candidates near the cluster seed object according to several microcalcification clustering criteria. As a second step, we introduced bounding boxes for the accepted seed candidates, clustered all the overlapping cubes, and examined them. After the FP reduction step, the average number of FPs was estimated to be 2.47 per DBT volume with a sensitivity of 83.3%.
Closed-cage tungsten oxide clusters in the gas phase.
Singh, D M David Jeba; Pradeep, T; Thirumoorthy, Krishnan; Balasubramanian, Krishnan
2010-05-06
During the course of a study on the clustering of W-Se and W-S mixtures in the gas phase using laser desorption ionization (LDI) mass spectrometry, we observed several anionic W-O clusters. Three distinct species, W(6)O(19)(-), W(13)O(29)(-), and W(14)O(32)(-), stand out as intense peaks in the regular mass spectral pattern of tungsten oxide clusters, suggesting unusual stabilities for them. Moreover, these clusters do not fragment in the postsource decay analysis. While trying to understand the precursor material that produced these clusters, we found nanoscale forms of tungsten oxide. The structure and thermodynamic parameters of the tungsten clusters have been explored using relativistic quantum chemical methods. Our computed atomization energies are consistent with the observed LDI mass spectra. The computational results suggest that the clusters observed have closed-cage structures. These distinct W(13) and W(14) clusters were observed for the first time in the gas phase.
Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan
2015-01-01
Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.
Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan
2015-01-01
Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450
Measurement-based quantum computation on two-body interacting qubits with adiabatic evolution.
Kyaw, Thi Ha; Li, Ying; Kwek, Leong-Chuan
2014-10-31
A cluster state cannot be a unique ground state of a two-body interacting Hamiltonian. Here, we propose the creation of a cluster state of logical qubits encoded in spin-1/2 particles by adiabatically weakening two-body interactions. The proposal is valid for cluster states of any spatial dimension. Errors induced by thermal fluctuations and by adiabatic evolution within a finite time can be eliminated, ensuring fault-tolerant quantum computing schemes.
NASA Astrophysics Data System (ADS)
Lenz, Annika; Ojamäe, Lars
2009-10-01
The size distribution of water clusters at equilibrium is studied using quantum-chemical calculations in combination with statistical thermodynamics. The necessary energetic data is obtained by quantum-chemical B3LYP computations and through extrapolations from the B3LYP results for the larger clusters. Clusters with up to 60 molecules are included in the equilibrium computations. Populations of different cluster sizes are calculated using both an ideal gas model with noninteracting clusters and a model where a correction for the interaction energy is included analogous to the van der Waals law. In standard vapor the majority of the water molecules are monomers. For the ideal gas model at 1 atm large clusters [56-mer (0-120 K) and 28-mer (100-260 K)] dominate at low temperatures and separate to smaller clusters [21-22-mer (170-280 K) and 4-6-mer (270-320 K) and to monomers (300-350 K)] when the temperature is increased. At lower pressure the transition from clusters to monomers lies at lower temperatures and fewer cluster sizes are formed. The computed size distribution exhibits enhanced peaks for the clusters consisting of 21 and 28 water molecules; these sizes are for protonated water clusters often referred to as magic numbers. If cluster-cluster interactions are included in the model, the transition from clusters to monomers is sharper (i.e., occurs over a smaller temperature interval) than when the ideal-gas model is used. Clusters with 20-22 molecules dominate in the liquid region. When a large icelike cluster is included, it will dominate for temperatures up to 325 K for the noninteracting clusters model. Thermodynamic properties (Cp, ΔH) were calculated, in general with good agreement with experimental values for the solid and gas phases. A formula for the number of H-bond topologies in a given cluster structure is derived. For the 20-mer it is shown that the number of topologies contributes to making the population of the dodecahedron-shaped cluster larger than that of a lower-energy fused prism cluster at high temperatures.
Lenz, Annika; Ojamäe, Lars
2009-10-07
The size distribution of water clusters at equilibrium is studied using quantum-chemical calculations in combination with statistical thermodynamics. The necessary energetic data is obtained by quantum-chemical B3LYP computations and through extrapolations from the B3LYP results for the larger clusters. Clusters with up to 60 molecules are included in the equilibrium computations. Populations of different cluster sizes are calculated using both an ideal gas model with noninteracting clusters and a model where a correction for the interaction energy is included analogous to the van der Waals law. In standard vapor the majority of the water molecules are monomers. For the ideal gas model at 1 atm large clusters [56-mer (0-120 K) and 28-mer (100-260 K)] dominate at low temperatures and separate to smaller clusters [21-22-mer (170-280 K) and 4-6-mer (270-320 K) and to monomers (300-350 K)] when the temperature is increased. At lower pressure the transition from clusters to monomers lies at lower temperatures and fewer cluster sizes are formed. The computed size distribution exhibits enhanced peaks for the clusters consisting of 21 and 28 water molecules; these sizes are for protonated water clusters often referred to as magic numbers. If cluster-cluster interactions are included in the model, the transition from clusters to monomers is sharper (i.e., occurs over a smaller temperature interval) than when the ideal-gas model is used. Clusters with 20-22 molecules dominate in the liquid region. When a large icelike cluster is included, it will dominate for temperatures up to 325 K for the noninteracting clusters model. Thermodynamic properties (Cp, ΔH) were calculated, in general with good agreement with experimental values for the solid and gas phases. A formula for the number of H-bond topologies in a given cluster structure is derived. For the 20-mer it is shown that the number of topologies contributes to making the population of the dodecahedron-shaped cluster larger than that of a lower-energy fused prism cluster at high temperatures.
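In the ideal (noninteracting-cluster) model described above, the equilibrium populations follow from the law of mass action for n H2O ⇌ (H2O)_n; with partial pressures referenced to a standard pressure p°,

\[
\frac{p_n}{p^{\circ}} \;=\;
\left(\frac{p_1}{p^{\circ}}\right)^{n}
\exp\!\left(-\frac{\Delta G^{\circ}_n}{RT}\right),
\]

where ΔG°_n is the standard Gibbs energy of forming the n-mer from n monomers (obtained here from the B3LYP energies together with statistical-thermodynamic corrections). The van der Waals-like correction mentioned above modifies this ideal relation by accounting for cluster-cluster interactions.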
Stereotactic multibeam radiation therapy system in a PACS environment
NASA Astrophysics Data System (ADS)
Fresne, Francoise; Le Gall, G.; Barillot, Christian; Gibaud, Bernard; Manens, Jean-Pierre; Toumoulin, Christine; Lemoine, Didier; Chenal, C.; Scarabin, Jean-Marie
1991-05-01
Multibeam radiation therapy is a non-invasive technique devoted to treating a lesion within the cerebral medium by focusing photon beams on the same target from a large number of entrance points. We present here a computer-assisted dosimetric planning procedure which includes: (1) an analysis module to define the target volume by using 2D and 3D displays, (2) a planning module to issue a treatment strategy including the dosimetric simulations, and (3) a treatment module setting up the parameters to drive the robotized treatment system (i.e. chair-framework, radiation unit machine). Another important feature of this system is its connection to the PACS system SIRENE installed in the University Hospital of Rennes, which makes possible the archiving and communication of the multimodal images (CT, MRI, Angiography) used by this application. The combined use of stereotactic methods and multimodality imagery ensures spatial coherence and makes the target definition and the understanding of the surrounding structures more accurate. The dosimetric planning, tied to the spatial reference (i.e. the stereotactic frame), guarantees an optimal distribution of the dose computed by an original 3D volumetric algorithm. The robotic approach to the treatment stage consisted in designing a computer-driven chair-framework cluster to position the target volume at the radiation unit isocenter.
NASA Astrophysics Data System (ADS)
Ehmann, Andreas F.; Downie, J. Stephen
2005-09-01
The objective of the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL) project is the creation of a large, secure corpus of audio and symbolic music data accessible to the music information retrieval (MIR) community for the testing and evaluation of various MIR techniques. As part of the IMIRSEL project, a cross-platform Java-based visual programming environment called Music to Knowledge (M2K) is being developed for a variety of music information retrieval related tasks. The primary objective of M2K is to supply the MIR community with a toolset that provides the ability to rapidly prototype algorithms, as well as foster the sharing of techniques within the MIR community through the use of a standardized set of tools. Due to the relatively large size of audio data and the computational costs associated with some digital signal processing and machine learning techniques, M2K is also designed to support distributed computing across computing clusters. In addition, facilities to allow the integration of non-Java-based (e.g., C/C++, MATLAB, etc.) algorithms and programs are provided within M2K. [Work supported by the Andrew W. Mellon Foundation and NSF Grants No. IIS-0340597 and No. IIS-0327371.]
Torgerson, Carinna M; Quinn, Catherine; Dinov, Ivo; Liu, Zhizhong; Petrosyan, Petros; Pelphrey, Kevin; Haselgrove, Christian; Kennedy, David N; Toga, Arthur W; Van Horn, John Darrell
2015-03-01
Under the umbrella of the National Database for Clinical Trials (NDCT) related to mental illnesses, the National Database for Autism Research (NDAR) seeks to gather, curate, and make openly available neuroimaging data from NIH-funded studies of autism spectrum disorder (ASD). NDAR has recently made its database accessible through the LONI Pipeline workflow design and execution environment to enable large-scale analyses of cortical architecture and function via local, cluster, or "cloud"-based computing resources. This presents a unique opportunity to overcome many of the customary limitations to fostering biomedical neuroimaging as a science of discovery. Providing open access to primary neuroimaging data, workflow methods, and high-performance computing will increase uniformity in data collection protocols, encourage greater reliability of published data, results replication, and broaden the range of researchers now able to perform larger studies than ever before. To illustrate the use of NDAR and LONI Pipeline for performing several commonly performed neuroimaging processing steps and analyses, this paper presents example workflows useful for ASD neuroimaging researchers seeking to begin using this valuable combination of online data and computational resources. We discuss the utility of such database and workflow processing interactivity as a motivation for the sharing of additional primary data in ASD research and elsewhere.
Galaxy CloudMan: delivering cloud compute clusters.
Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James
2010-12-21
Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.
Target Information Processing: A Joint Decision and Estimation Approach
2012-03-29
...ground targets (track-before-detect) using a computer cluster and graphics processing unit. Estimation and filtering theory is one of the most important...
A 3D-PIV System for Gas Turbine Applications
NASA Astrophysics Data System (ADS)
Acharya, Sumanta
2002-08-01
Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48-processor high performance computing cluster. This report details the hardware that was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section, following the summary, describes the computer cluster, its setup and hardware, and presents highlights of the research efforts carried out since installation of the cluster.
NASA Technical Reports Server (NTRS)
McGalliard, James
2008-01-01
This viewgraph presentation details the science and systems environments that the NASA High End Computing program serves. Included is a discussion of the workload involved in the processing for Global Climate Modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the Cubed Sphere system; results for these tests are also shown.
Switch configuration for migration to optical fiber network
NASA Technical Reports Server (NTRS)
Zobrist, George W.
1993-01-01
The purpose is to investigate the migration of an Ethernet LAN segment to fiber optics. At the present time it is proposed to support a Fiber Distributed Data Interface (FDDI) backbone and to upgrade the VAX cluster to a fiber optic interface. Possibly some workstations will have an FDDI interface. The remaining stations on the Ethernet LAN will be segmented. The rationale for migrating from the present Ethernet configuration to a fiber optic backbone is the increase in the number of workstations, the movement of applications to a windowing environment, extensive document transfers, and compute-intensive applications.
NASA Astrophysics Data System (ADS)
Marcus, Kelvin
2014-06-01
The U.S Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve their ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine if automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could potentially increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary in conducting their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load, available RAM and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy and manage multiple experimentation clusters to support their experimentation goals.
NASA Astrophysics Data System (ADS)
Kershaw, Philip; Lawrence, Bryan; Gomez-Dans, Jose; Holt, John
2015-04-01
We explore how the popular IPython Notebook computing system can be hosted on a cloud platform to provide a flexible virtual research hosting environment for Earth Observation data processing and analysis and how this approach can be expanded more broadly into a generic SaaS (Software as a Service) offering for the environmental sciences. OPTIRAD (OPTImisation environment for joint retrieval of multi-sensor RADiances) is a project funded by the European Space Agency to develop a collaborative research environment for Data Assimilation of Earth Observation products for land surface applications. Data Assimilation provides a powerful means to combine multiple sources of data and derive new products for this application domain. To be most effective, it requires close collaboration between specialists in this field, land surface modellers and end users of data generated. A goal of OPTIRAD then is to develop a collaborative research environment to engender shared working. Another significant challenge is that of data volume and complexity. Study of land surface requires high spatial and temporal resolutions, a relatively large number of variables and the application of algorithms which are computationally expensive. These problems can be addressed with the application of parallel processing techniques on specialist compute clusters. However, scientific users are often deterred by the time investment required to port their codes to these environments. Even when successfully achieved, it may be difficult to readily change or update. This runs counter to the scientific process of continuous experimentation, analysis and validation. The IPython Notebook provides users with a web-based interface to multiple interactive shells for the Python programming language. Code, documentation and graphical content can be saved and shared making it directly applicable to OPTIRAD's requirements for a shared working environment. Given the web interface it can be readily made into a hosted service with Wakari and Microsoft Azure being notable examples. Cloud-hosting of the Notebook allows the same familiar Python interface to be retained but backed by Cloud Computing attributes of scalability, elasticity and resource pooling. This combination makes it a powerful solution to address the needs of long-tail science users of Big Data: an intuitive interactive interface with which to access powerful compute resources. IPython Notebook can be hosted as a single user desktop environment but the recent development by the IPython community of JupyterHub enables it to be run as a multi-user hosting environment. In addition, IPython.parallel allows the exposition of parallel compute infrastructure through a Python interface. Applying these technologies in combination, a collaborative research environment has been developed for OPTIRAD on the UK JASMIN/CEMS facility's private cloud (http://jasmin.ac.uk). Based on this experience, a generic virtualised solution is under development suitable for use by the wider environmental science community - on both JASMIN and portable to third party cloud platforms.
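The abstract above describes exposing parallel compute infrastructure through a Python interface via IPython.parallel (now the ipyparallel package). The sketch below is not OPTIRAD code; it only illustrates the general pattern and assumes a cluster of engines has already been started (for example with `ipcluster start -n 8`); `process_tile` is a hypothetical placeholder task.

```python
# Minimal sketch (not OPTIRAD's implementation): distributing an
# embarrassingly parallel analysis over IPython/ipyparallel engines.
import ipyparallel as ipp

rc = ipp.Client()                  # connect to the running controller
view = rc.load_balanced_view()     # dynamic load balancing across engines

def process_tile(tile_id):
    """Hypothetical per-tile Earth-observation processing step."""
    # ... load data for the tile, run the retrieval, return a summary ...
    return tile_id, tile_id ** 2   # placeholder computation

results = view.map_sync(process_tile, range(100))
print(len(results), "tiles processed")
```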
About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture
NASA Astrophysics Data System (ADS)
Grauer, Manfred; Barth, Thomas
2004-06-01
Permanently increasing complexity of products and their manufacturing processes combined with a shorter "time-to-market" leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information&Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can either comprise software systems, hardware systems, or communication networks. An appropriate IT-infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating expert's knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD-system CATIA is used which is coupled with the FEM-simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.
Adaptive density trajectory cluster based on time and space distance
NASA Astrophysics Data System (ADS)
Liu, Fagui; Zhang, Zhijie
2017-10-01
Several open problems remain in trajectory clustering for discovering regularities in mobile behavior, such as computing the distance between sub-trajectories, setting the parameter values of the clustering algorithm, and handling the uncertainty/boundary problem of the data set. This paper therefore defines a method for calculating the distance between sub-trajectories based on time and space. The purpose of this distance calculation is to reveal clearly the differences between moving trajectories and to improve the accuracy of the clustering algorithm. In addition, a novel adaptive density trajectory clustering algorithm is proposed, in which the cluster radius is computed from the density of the data distribution. Cluster centers and their number are selected automatically by a defined strategy, and the uncertainty/boundary problem of the data set is addressed with a weighted rough c-means method. Experimental results demonstrate that the proposed algorithm performs fuzzy trajectory clustering effectively on the basis of the combined time and space distance, and adaptively obtains optimal cluster centers and rich clustering information for extracting features of mobile behavior in mobile and social networks.
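The paper's exact distance definition is not given in the abstract; the sketch below only illustrates one simple way a spatial term and a temporal term might be combined into a single sub-trajectory distance. The weighting and resampling choices here are assumptions, not the authors' formulation.

```python
# Illustrative sketch only: combining spatial and temporal separation between
# two sub-trajectories, each given as a list of (t, x, y) samples.
import numpy as np

def sub_trajectory_distance(a, b, w_space=1.0, w_time=1.0):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    # resample both to the shorter length for a point-wise comparison
    n = min(len(a), len(b))
    a = a[np.linspace(0, len(a) - 1, n).astype(int)]
    b = b[np.linspace(0, len(b) - 1, n).astype(int)]
    d_time = np.abs(a[:, 0] - b[:, 0]).mean()                     # mean time gap
    d_space = np.linalg.norm(a[:, 1:] - b[:, 1:], axis=1).mean()  # mean spatial gap
    return w_space * d_space + w_time * d_time

traj1 = [(0, 0.0, 0.0), (1, 1.0, 0.5), (2, 2.0, 1.0)]
traj2 = [(0, 0.2, 0.1), (1, 1.1, 0.7), (2, 2.2, 1.4), (3, 3.0, 2.0)]
print(sub_trajectory_distance(traj1, traj2))
```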
Applied anatomic site study of palatal anchorage implants using cone beam computed tomography.
Lai, Ren-fa; Zou, Hui; Kong, Wei-dong; Lin, Wei
2010-06-01
The purpose of this study was to conduct quantitative research on the bone height and bone mineral density of palatal implant sites, and to provide reference sites for safe and stable palatal implants. Three-dimensional reformatted images were reconstructed by cone beam computed tomography (CBCT) in 34 patients, aged 18 to 35 years, using EZ Implant software. Bone height was measured at 20 sites of interest on the palate. Bone mineral density was measured at the 10 sites with the highest implantation rate, which were then classified using K-means cluster analysis based on bone height and bone mineral density. According to the cluster analysis, the 10 sites were classified into three clusters. Significant differences in bone height and bone mineral density were detected between these three clusters (P<0.05). The greatest bone height was obtained in cluster 2, followed by cluster 1 and cluster 3. The highest bone mineral density was found in cluster 3, followed by cluster 1 and cluster 2. CBCT plays an important role in pre-surgical treatment planning and is helpful in identifying safe and stable implantation sites for palatal anchorage.
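For readers unfamiliar with the analysis step, the following sketch shows K-means clustering of sites into three groups over the two measured features. The numerical values are invented placeholders, not the study's measurements.

```python
# Sketch of the K-means step described above (3 clusters over bone height
# and bone mineral density). Data are invented; in practice the two
# features would typically be standardized before clustering.
import numpy as np
from sklearn.cluster import KMeans

# rows: candidate implant sites; columns: [bone height (mm), bone density (arbitrary units)]
sites = np.array([
    [6.5, 620], [7.0, 600], [8.5, 540], [9.0, 520],
    [4.5, 760], [4.0, 780], [5.0, 740], [10.0, 480],
    [3.5, 800], [6.0, 650],
])
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(sites)
for label in range(3):
    members = sites[km.labels_ == label]
    print(f"cluster {label}: mean height={members[:, 0].mean():.1f} mm, "
          f"mean density={members[:, 1].mean():.0f}")
```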
NASA Astrophysics Data System (ADS)
Yoo, Soohaeng; Shao, Nan; Zeng, X. C.
2009-10-01
We report improved results for the lowest-lying silicon clusters Si30–Si38. A large population of low-energy clusters is collected from previous searches by several research groups, and the binding energies of these clusters are computed using density-functional theory (DFT) methods. The best candidates (isomers with high binding energies) are identified from the screening calculations. An additional constrained search is then performed for the best candidates using the basin-hopping method combined with DFT geometry optimization. The obtained low-lying clusters are classified according to binding energies computed using either the Perdew-Burke-Ernzerhof (PBE) functional or the Becke exchange and Lee-Yang-Parr correlation (BLYP) functional. We propose to rank low-lying clusters according to the mean PBE/BLYP binding energies, in view of the fact that the PBE functional tends to give greater binding energies for more compact clusters whereas the BLYP functional tends to give greater binding energies for less compact clusters or clusters composed of small-sized magic-number clusters. Except for Si30, the new search confirms that medium-size silicon clusters Si31–Si38 constructed with proper fullerene cage motifs are the most promising candidates for the lowest-energy structures.
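The ranking criterion proposed above amounts to ordering isomers by the average of their PBE and BLYP binding energies; the toy snippet below illustrates that bookkeeping with invented energies, not the paper's data.

```python
# Toy illustration of the mean PBE/BLYP ranking criterion (values invented).
isomers = {
    "Si36_a": {"PBE": 3.702, "BLYP": 3.311},
    "Si36_b": {"PBE": 3.698, "BLYP": 3.320},
    "Si36_c": {"PBE": 3.690, "BLYP": 3.305},
}
ranked = sorted(isomers.items(),
                key=lambda kv: (kv[1]["PBE"] + kv[1]["BLYP"]) / 2.0,
                reverse=True)  # larger mean binding energy ranks higher
for name, e in ranked:
    print(name, (e["PBE"] + e["BLYP"]) / 2.0)
```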
Characteristics of airflow and particle deposition in COPD current smokers
NASA Astrophysics Data System (ADS)
Zou, Chunrui; Choi, Jiwoong; Haghighi, Babak; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long
2017-11-01
A recent imaging-based cluster analysis of computed tomography (CT) lung images in a chronic obstructive pulmonary disease (COPD) cohort identified four clusters, viz. disease sub-populations. Cluster 1 had relatively normal airway structures; Cluster 2 had wall thickening; Cluster 3 exhibited decreased wall thickness and luminal narrowing; Cluster 4 had a significant decrease of luminal diameter and a significant reduction of lung deformation, thus having relatively low pulmonary functions. To better understand the characteristics of airflow and particle deposition in these clusters, we performed computational fluid and particle dynamics analyses on representative cluster patients and healthy controls using CT-based airway models and subject-specific 3D-1D coupled boundary conditions. The results show that particle deposition in central airways of cluster 4 patients was noticeably increased especially with increasing particle size despite reduced vital capacity as compared to other clusters and healthy controls. This may be attributable in part to significant airway constriction in cluster 4. This study demonstrates the potential application of cluster-guided CFD analysis in disease populations. NIH Grants U01HL114494 and S10-RR022421, and FDA Grant U01FD005837.
Access and visualization using clusters and other parallel computers
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Bergou, Attila; Berriman, Bruce; Block, Gary; Collier, Jim; Curkendall, Dave; Good, John; Husman, Laura; Jacob, Joe; Laity, Anastasia;
2003-01-01
JPL's Parallel Applications Technologies Group has been exploring the issues of data access and visualization of very large data sets over the past 10 or so years. this work has used a number of types of parallel computers, and today includes the use of commodity clusters. This talk will highlight some of the applications and tools we have developed, including how they use parallel computing resources, and specifically how we are using modern clusters. Our applications focus on NASA's needs; thus our data sets are usually related to Earth and Space Science, including data delivered from instruments in space, and data produced by telescopes on the ground.
Formation and Assembly of Massive Star Clusters
NASA Astrophysics Data System (ADS)
McMillan, Stephen
The formation of stars and star clusters is a major unresolved problem in astrophysics. It is central to modeling stellar populations and understanding galaxy luminosity distributions in cosmological models. Young massive clusters are major components of starburst galaxies, while globular clusters are cornerstones of the cosmic distance scale and represent vital laboratories for studies of stellar dynamics and stellar evolution. Yet how these clusters form and how rapidly and efficiently they expel their natal gas remain unclear, as do the consequences of this gas expulsion for cluster structure and survival. Also unclear is how the properties of low-mass clusters, which form from small-scale instabilities in galactic disks and inform much of our understanding of cluster formation and star-formation efficiency, differ from those of more massive clusters, which probably formed in starburst events driven by fast accretion at high redshift, or colliding gas flows in merging galaxies. Modeling cluster formation requires simulating many simultaneous physical processes, placing stringent demands on both software and hardware. Simulations of galaxies evolving in cosmological contexts usually lack the numerical resolution to simulate star formation in detail. They do not include detailed treatments of important physical effects such as magnetic fields, radiation pressure, ionization, and supernova feedback. Simulations of smaller clusters include these effects, but fall far short of the mass of even single young globular clusters. With major advances in computing power and software, we can now directly address this problem. We propose to model the formation of massive star clusters by integrating the FLASH adaptive mesh refinement magnetohydrodynamics (MHD) code into the Astrophysical Multi-purpose Software Environment (AMUSE) framework, to work with existing stellar-dynamical and stellar evolution modules in AMUSE. All software will be freely distributed on-line, allowing open access to state-of- the-art simulation techniques within a modern, modular software environment. We will follow the gravitational collapse of 0.1-10 million-solar mass gas clouds through star formation and coalescence into a star cluster, modeling in detail the coupling of the gas and the newborn stars. We will study the effects of star formation by detecting accreting regions of gas in self-gravitating, turbulent, MHD, FLASH models that we will translate into collisional dynamical systems of stars modeled with an N-body code, coupled together in the AMUSE framework. Our FLASH models will include treatments of radiative transfer from the newly formed stars, including heating and radiative acceleration of the surrounding gas. Specific questions to be addressed are: (1) How efficiently does the gas in a star forming region form stars, how does this depend on mass, metallicity, and other parameters, and what terminates star formation? What observational predictions can be made to constrain our models? (2) How important are different mechanisms for driving turbulence and removing gas from a cluster: accretion, radiative feedback, and mechanical feedback? (3) How does the infant mortality rate of young clusters depend on the initial properties of the parent cloud? (4) What are the characteristic formation timescales of massive star clusters, and what observable imprints does the assembly process leave on their structure at an age of 10-20 Myr, when formation is essentially complete and many clusters can be observed? 
These studies are directly relevant to NASA missions at many electromagnetic wavelengths, including Chandra, GALEX, Hubble, and Spitzer. Each traces different aspects of cluster formation and evolution: X-rays trace supernovae, ultraviolet traces young stars, visible colors can distinguish between young blue stars and older red stars, and the infrared directly shows young embedded star clusters.
Coupled-cluster computations of atomic nuclei
NASA Astrophysics Data System (ADS)
Hagen, G.; Papenbrock, T.; Hjorth-Jensen, M.; Dean, D. J.
2014-09-01
In the past decade, coupled-cluster theory has seen a renaissance in nuclear physics, with computations of neutron-rich and medium-mass nuclei. The method is efficient for nuclei with product-state references, and it describes many aspects of weakly bound and unbound nuclei. This report reviews the technical and conceptual developments of this method in nuclear physics, and the results of coupled-cluster calculations for nucleonic matter, and for exotic isotopes of helium, oxygen, calcium, and some of their neighbors.
Clustering recommendations to compute agent reputation
NASA Astrophysics Data System (ADS)
Bedi, Punam; Kaur, Harmeet
2005-03-01
Traditional centralized approaches to security are difficult to apply to multi-agent systems which are used nowadays in e-commerce applications. Developing a notion of trust that is based on the reputation of an agent can provide a softer notion of security that is sufficient for many multi-agent applications. Our paper proposes a mechanism for computing reputation of the trustee agent for use by the trustier agent. The trustier agent computes the reputation based on its own experience as well as the experience the peer agents have with the trustee agents. The trustier agents intentionally interact with the peer agents to get their experience information in the form of recommendations. We have also considered the case of unintentional encounters between the referee agents and the trustee agent, which can be directly between them or indirectly through a set of interacting agents. The clustering is done to filter off the noise in the recommendations in the form of outliers. The trustier agent clusters the recommendations received from referee agents on the basis of the distances between recommendations using the hierarchical agglomerative method. The dendogram hence obtained is cut at the required similarity level which restricts the maximum distance between any two recommendations within a cluster. The cluster with maximum number of elements denotes the views of the majority of recommenders. The center of this cluster represents the reputation of the trustee agent which can be computed using c-means algorithm.
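The pipeline described above (hierarchical agglomerative clustering of recommendations, cutting the dendrogram at a similarity level, and taking the centre of the largest cluster as the reputation) can be sketched with standard SciPy tools. This is not the authors' code; the recommendation vectors are invented, and a simple mean stands in for the c-means centre computation.

```python
# Sketch of the reputation pipeline: cluster recommendations hierarchically,
# cut the dendrogram at a distance threshold, take the centre of the largest
# cluster as the trustee's reputation.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# each row: one referee's recommendation vector for the trustee (invented data)
recs = np.array([
    [0.80, 0.75], [0.82, 0.70], [0.78, 0.72],   # consistent majority view
    [0.20, 0.15],                               # likely outlier / noisy recommendation
    [0.79, 0.74],
])
Z = linkage(recs, method="average")
labels = fcluster(Z, t=0.3, criterion="distance")    # cut: max intra-cluster distance 0.3
largest = np.bincount(labels).argmax()               # cluster holding the majority view
reputation = recs[labels == largest].mean(axis=0)    # simple mean as the cluster centre
print("reputation vector:", reputation)
```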
Efficiency of parallel direct optimization
NASA Technical Reports Server (NTRS)
Janies, D. A.; Wheeler, W. C.
2001-01-01
Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running Linux. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.
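Speed-up and parallel efficiency figures of the kind reported above conventionally use S = T1/Tp and E = S/p; the short snippet below shows that bookkeeping with hypothetical timings (not values from the study).

```python
# Standard speed-up / efficiency definitions used in benchmarks like the one
# above (timings are invented placeholders).
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    return speedup(t_serial, t_parallel) / n_procs

t1, t16 = 3600.0, 260.0   # hypothetical wall-clock seconds on 1 and 16 slave processors
print(f"speed-up: {speedup(t1, t16):.1f}x, efficiency: {efficiency(t1, t16, 16):.2f}")
```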
NASA Technical Reports Server (NTRS)
Sanz, J.; Pischel, K.; Hubler, D.
1992-01-01
An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. Parallel Virtual Machine (PVM) is used as the message-passing layer for a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is handled entirely by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. A reasonable overhead is imposed for internode communication, allowing efficient utilization of the engaged processors. Perhaps one of the most interesting features of the system is its versatility, which permits the use of whichever available computational resources are experiencing the least demand at a given point in time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Z.; Bessa, M. A.; Liu, W.K.
A predictive computational theory is presented for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span across multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and a cutting-edge reduced order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, “Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials,” Comput. Methods Appl. Mech. Engrg. 306 (2016) 319–341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters. The interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated in the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model and an application to fatigue prediction for additively manufactured metals. For the CFRP problem, a speed up estimated to be about 43,000 is achieved by using the SCA method, as opposed to FE2, enabling the solution of an otherwise computationally intractable problem. The second example uses a crystal plasticity constitutive law and computes the fatigue potency of extrinsic microscale features such as voids. This shows that local stress and strain are captured sufficiently well by SCA. This model has been incorporated in a process-structure-properties prediction framework for process design in additive manufacturing.
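The core of the offline stage described above is a data-driven agglomeration of material points into clusters. The heavily reduced sketch below shows only that grouping step with a generic k-means on placeholder data; the interaction tensor computation and the online self-consistent Lippmann-Schwinger solve are not shown and the array contents are assumptions.

```python
# Very reduced sketch of the SCA "offline" clustering step only: group
# material points with similar (precomputed) mechanical response into k
# clusters. Data are random placeholders.
import numpy as np
from sklearn.cluster import KMeans

n_voxels, n_components = 10_000, 36          # e.g. flattened strain-concentration data per voxel
A = np.random.rand(n_voxels, n_components)   # placeholder for the offline-stage response data

k = 16                                        # number of clusters (reduced degrees of freedom)
km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(A)
cluster_of_voxel = km.labels_                 # offline clustering result
cluster_volume_fraction = np.bincount(cluster_of_voxel) / n_voxels
print(cluster_volume_fraction)
```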
DESPIC: Detecting Early Signatures of Persuasion in Information Cascades
2015-08-27
...over NoSQL Databases. Proceedings of the 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2014), Chicago, IL, USA, 26 May 2014. ...distributed NoSQL databases including HBase and Riak; we finalized the requirements of the optimal computational architecture to support our framework
SAIL: Summation-bAsed Incremental Learning for Information-Theoretic Text Clustering.
Cao, Jie; Wu, Zhiang; Wu, Junjie; Xiong, Hui
2013-04-01
Information-theoretic clustering aims to exploit information-theoretic measures as the clustering criteria. A common practice on this topic is the so-called Info-Kmeans, which performs K-means clustering with KL-divergence as the proximity function. While expert efforts on Info-Kmeans have shown promising results, a remaining challenge is to deal with high-dimensional sparse data such as text corpora. Indeed, it is possible that the centroids contain many zero-value features for high-dimensional text vectors, which leads to infinite KL-divergence values and creates a dilemma in assigning objects to centroids during the iteration process of Info-Kmeans. To meet this challenge, in this paper, we propose a Summation-bAsed Incremental Learning (SAIL) algorithm for Info-Kmeans clustering. Specifically, by using an equivalent objective function, SAIL replaces the computation of KL-divergence by the incremental computation of Shannon entropy. This can avoid the zero-feature dilemma caused by the use of KL-divergence. To improve the clustering quality, we further introduce the variable neighborhood search scheme and propose the V-SAIL algorithm, which is then accelerated by a multithreaded scheme in PV-SAIL. Our experimental results on various real-world text collections have shown that, with SAIL as a booster, the clustering performance of Info-Kmeans can be significantly improved. Also, V-SAIL and PV-SAIL indeed help improve the clustering quality at a lower cost of computation.
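The zero-feature dilemma and the entropy-based alternative described above can be illustrated briefly. The snippet below is not the SAIL implementation (the incremental updates and variable neighborhood search are omitted); it only shows why KL-divergence blows up on sparse centroids and how the equivalent objective, cluster size times centroid entropy (up to a constant given by the summed document entropies), stays finite.

```python
# Illustration of the zero-feature problem with KL-divergence versus an
# entropy-based cluster cost of the kind SAIL builds on.
import numpy as np

def kl(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))   # inf if q has zeros where p > 0

def entropy(p):
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz))

def cluster_cost(docs):
    """Size of cluster times entropy of its centroid (equivalent objective up to a constant)."""
    centroid = docs.mean(axis=0)
    return len(docs) * entropy(centroid)

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.6, 0.0, 0.4])              # zero-valued feature -> KL(p||q) is infinite
print(kl(p, q))                            # inf (with a divide-by-zero warning)
print(cluster_cost(np.array([p, q])))      # finite, entropy-based cost
```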
Folding Proteins at 500 ns/hour with Work Queue.
Abdul-Wahid, Badi'; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A
2012-10-01
Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour.
Folding Proteins at 500 ns/hour with Work Queue
Abdul-Wahid, Badi’; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A.
2014-01-01
Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour. PMID:25540799
Implementation of the force decomposition machine for molecular dynamics simulations.
Borštnik, Urban; Miller, Benjamin T; Brooks, Bernard R; Janežič, Dušanka
2012-09-01
We present the design and implementation of the force decomposition machine (FDM), a cluster of personal computers (PCs) that is tailored to running molecular dynamics (MD) simulations using the distributed diagonal force decomposition (DDFD) parallelization method. The cluster interconnect architecture is optimized for the communication pattern of the DDFD method. Our implementation of the FDM relies on standard commodity components even for networking. Although the cluster is meant for DDFD MD simulations, it remains general enough for other parallel computations. An analysis of several MD simulation runs on both the FDM and a standard PC cluster demonstrates that the FDM's interconnect architecture provides a greater performance compared to a more general cluster interconnect. Copyright © 2012 Elsevier Inc. All rights reserved.
Competency Index. [Business/Computer Technologies Cluster.
ERIC Educational Resources Information Center
Ohio State Univ., Columbus. Center on Education and Training for Employment.
This index allows the user to scan the competencies under each title for the 28 subjects appropriate for use in a competency list for the 12 occupations within the business/computer technologies cluster. Titles of the 28 units are as follows: employability skills; professionalism; teamwork; professional and ethical standards; economic and business…
The Ever-Present Demand for Public Computing Resources. CDS Spotlight
ERIC Educational Resources Information Center
Pirani, Judith A.
2014-01-01
This Core Data Service (CDS) Spotlight focuses on public computing resources, including lab/cluster workstations in buildings, virtual lab/cluster workstations, kiosks, laptop and tablet checkout programs, and workstation access in unscheduled classrooms. The findings are derived from 758 CDS 2012 participating institutions. A dataset of 529…
Bioinformatics and Astrophysics Cluster (BinAc)
NASA Astrophysics Data System (ADS)
Krüger, Jens; Lutz, Volker; Bartusch, Felix; Dilling, Werner; Gorska, Anna; Schäfer, Christoph; Walter, Thomas
2017-09-01
BinAC provides central high performance computing capacities for bioinformaticians and astrophysicists from the state of Baden-Württemberg. The bwForCluster BinAC is part of the implementation concept for scientific computing for the universities in Baden-Württemberg. Community specific support is offered through the bwHPC-C5 project.
Monitoring by Use of Clusters of Sensor-Data Vectors
NASA Technical Reports Server (NTRS)
Iverson, David L.
2007-01-01
The inductive monitoring system (IMS) is a system of computer hardware and software for automated monitoring of the performance, operational condition, physical integrity, and other aspects of the health of a complex engineering system (e.g., an industrial process line or a spacecraft). The input to the IMS consists of streams of digitized readings from sensors in the monitored system. The IMS determines the type and amount of any deviation of the monitored system from a nominal or normal ( healthy ) condition on the basis of a comparison between (1) vectors constructed from the incoming sensor data and (2) corresponding vectors in a database of nominal or normal behavior. The term inductive reflects the use of a process reminiscent of traditional mathematical induction to learn about normal operation and build the nominal-condition database. The IMS offers two major advantages over prior computational monitoring systems: The computational burden of the IMS is significantly smaller, and there is no need for abnormal-condition sensor data for training the IMS to recognize abnormal conditions. The figure schematically depicts the relationships among the computational processes effected by the IMS. Training sensor data are gathered during normal operation of the monitored system, detailed computational simulation of operation of the monitored system, or both. The training data are formed into vectors that are used to generate the database. The vectors in the database are clustered into regions that represent normal or nominal operation. Once the database has been generated, the IMS compares the vectors of incoming sensor data with vectors representative of the clusters. The monitored system is deemed to be operating normally or abnormally, depending on whether the vector of incoming sensor data is or is not, respectively, sufficiently close to one of the clusters. For this purpose, a distance between two vectors is calculated by a suitable metric (e.g., Euclidean distance) and "sufficiently close" signifies lying at a distance less than a specified threshold value. It must be emphasized that although the IMS is intended to detect off-nominal or abnormal performance or health, it is not necessarily capable of performing a thorough or detailed diagnosis. Limited diagnostic information may be available under some circumstances. For example, the distance of a vector of incoming sensor data from the nearest cluster could serve as an indication of the severity of a malfunction. The identity of the nearest cluster may be a clue as to the identity of the malfunctioning component or subsystem. It is possible to decrease the IMS computation time by use of a combination of cluster-indexing and -retrieval methods. For example, in one method, the distances between each cluster and two or more reference vectors can be used for the purpose of indexing and retrieval. The clusters are sorted into a list according to these distance values, typically in ascending order of distance. When a set of input data arrives and is to be tested, the data are first arranged as an ordered set (that is, a vector). The distances from the input vector to the reference points are computed. The search of clusters from the list can then be limited to those clusters lying within a certain distance range from the input vector; the computation time is reduced by not searching the clusters at a greater distance.
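The monitoring idea described above (cluster nominal sensor vectors, then flag incoming vectors whose distance to the nearest cluster exceeds a threshold) can be sketched as follows. This is not NASA's IMS code: the clustering method, threshold rule, and data are all stand-ins for illustration.

```python
# Sketch of the IMS-style monitoring idea: build clusters from nominal sensor
# vectors, then flag new vectors far from every nominal cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
nominal = rng.normal(loc=[10.0, 50.0, 0.5], scale=[0.2, 1.0, 0.02], size=(500, 3))

db = KMeans(n_clusters=8, n_init=10, random_state=0).fit(nominal)
# crude threshold choice based on typical nearest-cluster distances in training data
threshold = 3.0 * np.median(np.min(db.transform(nominal), axis=1))

def monitor(sensor_vector):
    d = np.min(db.transform([sensor_vector]), axis=1)[0]   # distance to nearest cluster centre
    return ("NOMINAL" if d < threshold else "OFF-NOMINAL", d)

print(monitor([10.1, 50.5, 0.51]))   # close to the training data
print(monitor([12.5, 45.0, 0.90]))   # far from every nominal cluster
```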
Galaxy CloudMan: delivering cloud compute clusters
2010-01-01
Background Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is “cloud computing”, which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate “as is” use by experimental biologists. Results We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon’s EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. Conclusions The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge. PMID:21210983
NASA Technical Reports Server (NTRS)
Wharton, S. W.
1980-01-01
An Interactive Cluster Analysis Procedure (ICAP) was developed to derive classifier training statistics from remotely sensed data. The algorithm interfaces the rapid numerical processing capacity of a computer with the human ability to integrate qualitative information. Control of the clustering process alternates between the algorithm, which creates new centroids and forms clusters, and the analyst, who evaluates the result and may elect to modify the cluster structure. Clusters can be deleted or lumped pairwise, or new centroids can be added. A summary of the cluster statistics can be requested to facilitate cluster manipulation. ICAP was implemented in APL (A Programming Language), an interactive computer language. The flexibility of the algorithm was evaluated using data from different LANDSAT scenes to simulate two situations: one in which the analyst is assumed to have no prior knowledge about the data and wishes to have the clusters formed more or less automatically, and another in which the analyst is assumed to have some knowledge about the data structure and wishes to use that information to closely supervise the clustering process. For comparison, an existing clustering method was also applied to the two data sets.
cisTEM, user-friendly software for single-particle image processing.
Grant, Timothy; Rohou, Alexis; Grigorieff, Nikolaus
2018-03-07
We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200k–300k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. © 2018, Grant et al.
cisTEM, user-friendly software for single-particle image processing
2018-01-01
We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200 k – 300 k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. PMID:29513216
NASA Astrophysics Data System (ADS)
Ogloblina, Daria; Schmidt, Steffen J.; Adams, Nikolaus A.
2018-06-01
Cavitation is a process in which a liquid evaporates due to a pressure drop and re-condenses violently. Noise, material erosion, and altered system dynamics are characteristic of such a process, for which shock waves, rarefaction waves and vapor generation are typical phenomena. The current paper presents novel results for collapsing vapor-bubble clusters in a liquid environment close to a wall, obtained by computational fluid dynamics (CFD) simulations. The driving pressure in the liquid is initially 10 MPa. Computations are carried out using a fully compressible single-fluid flow model in combination with a conservative finite volume method (FVM). The investigated bubble clusters (referred to as "clouds") differ in their initial vapor volume fractions, initial stand-off distances to the wall, and initial bubble radii. The effects of collapse focusing due to bubble-bubble interaction are analysed by investigating the intensities and positions of individual bubble collapses, as well as the resulting shock-induced pressure field at the wall. Stronger interaction of the bubbles leads to an intensification of the collapse strength for individual bubbles, collapse focusing towards the center of the cloud, and enhanced re-evaporation. The obtained results reveal collapse features which are common to all cases, as well as case-specific differences during collapse-rebound cycles. Simultaneous measurements of the maximum pressures at the wall and within the flow field, and of the vapor volume evolution, show that not only the primary collapse but also subsequent collapses are potentially relevant for erosion.
Compositional clustering in task structure learning
Frank, Michael J.
2018-01-01
Humans are remarkably adept at generalizing knowledge between experiences in a way that can be difficult for computers. Often, this entails generalizing constituent pieces of experiences that do not fully overlap with, but nonetheless share useful similarities with, previously acquired knowledge. However, it is often unclear how knowledge gained in one context should generalize to another. Previous computational models and data suggest that rather than learning about each individual context, humans build latent abstract structures and learn to link these structures to arbitrary contexts, facilitating generalization. In these models, task structures that are more popular across contexts are more likely to be revisited in new contexts. However, these models can only re-use policies as a whole and are unable to transfer knowledge about the transition structure of the environment even if only the goal has changed (or vice-versa). This contrasts with ecological settings, where some aspects of task structure, such as the transition function, will be shared between contexts separately from other aspects, such as the reward function. Here, we develop a novel non-parametric Bayesian agent that forms independent latent clusters for transition and reward functions, affording separable transfer of their constituent parts across contexts. We show that the relative performance of this agent compared to an agent that jointly clusters reward and transition functions depends on environmental task statistics: the mutual information between transition and reward functions and the stochasticity of the observations. We formalize our analysis through an information-theoretic account of the priors, and propose a meta-learning agent that dynamically arbitrates between strategies across task domains to optimize a statistical tradeoff. PMID:29672581
Exploratory Item Classification Via Spectral Graph Clustering
Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang
2017-01-01
Large-scale assessments are supported by a large item pool. An important task in test development is to assign items into scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as the hierarchical clustering, K-means method, and latent-class analysis, often induce a high computational overhead and have difficulty handling missing data, especially in the presence of high-dimensional responses. In this article, the authors propose a spectral clustering algorithm for exploratory item cluster analysis. The method is computationally efficient, effective for data with missing or incomplete responses, easy to implement, and often outperforms traditional clustering algorithms in the context of high dimensionality. The spectral clustering algorithm is based on graph theory, a branch of mathematics that studies the properties of graphs. The algorithm first constructs a graph of items, characterizing the similarity structure among items. It then extracts item clusters based on the graphical structure, grouping similar items together. The proposed method is evaluated through simulations and an application to the revised Eysenck Personality Questionnaire. PMID:29033476
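The steps described above (build an item similarity graph, form its Laplacian, embed the items with the leading eigenvectors, and group them with k-means) correspond to generic spectral clustering. The sketch below is not the authors' algorithm; in particular it uses a plain correlation similarity on complete synthetic responses rather than their missing-data treatment.

```python
# Generic spectral clustering of items: similarity graph -> normalized
# Laplacian -> eigenvector embedding -> k-means.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_item_clusters(responses, k):
    # responses: persons x items matrix
    W = np.abs(np.corrcoef(responses.T))                 # item-item similarity graph
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    L = np.eye(len(d)) - W / np.sqrt(np.outer(d, d))     # normalized Laplacian I - D^{-1/2} W D^{-1/2}
    _, vecs = eigh(L)
    U = vecs[:, :k]                                      # eigenvectors of the k smallest eigenvalues
    U = U / np.linalg.norm(U, axis=1, keepdims=True)     # row-normalize the embedding
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

rng = np.random.default_rng(1)
data = rng.integers(0, 5, size=(200, 12)).astype(float)  # synthetic item responses
print(spectral_item_clusters(data, k=3))
```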
NAS Requirements Checklist for Job Queuing/Scheduling Software
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The increasing reliability of parallel systems and clusters of computers has resulted in these systems becoming more attractive for true production workloads. Today, the primary obstacle to production use of clusters of computers is the lack of a functional and robust Job Management System for parallel applications. This document provides a checklist of NAS requirements for job queuing and scheduling in order to make most efficient use of parallel systems and clusters for parallel applications. Future requirements are also identified to assist software vendors with design planning.
Coupled multipolar interactions in small-particle metallic clusters.
Pustovit, Vitaly N; Sotelo, Juan A; Niklasson, Gunnar A
2002-03-01
We propose a new formalism for computing the optical properties of small clusters of particles. It is a generalization of the coupled dipole-dipole particle-interaction model and allows one in principle to take into account all multipolar interactions in the long-wavelength limit. The method is illustrated by computations of the optical properties of N = 6 particle clusters for different multipolar approximations. We examine the effect of separation between particles and compare the optical spectra with the discrete-dipole approximation and the generalized Mie theory.
A fuzzy clustering algorithm to detect planar and quadric shapes
NASA Technical Reports Server (NTRS)
Krishnapuram, Raghu; Frigui, Hichem; Nasraoui, Olfa
1992-01-01
In this paper, we introduce a new fuzzy clustering algorithm to detect an unknown number of planar and quadric shapes in noisy data. The proposed algorithm is computationally and implementationally simple, and it overcomes many of the drawbacks of the existing algorithms that have been proposed for similar tasks. Since the clustering is performed in the original image space, and since no features need to be computed, this approach is particularly suited for sparse data. The algorithm may also be used in pattern recognition applications.
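The algorithm above extends fuzzy clustering to planar and quadric prototypes; the sketch below shows only the underlying alternating membership/prototype updates for the point-prototype special case (standard fuzzy C-means), not the authors' shape-detecting formulation.

```python
# Minimal fuzzy C-means sketch with point prototypes only, illustrating the
# alternating membership / prototype updates that shape-detecting variants
# build on (data and parameters are invented).
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                        # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]       # prototype update
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))                   # membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (50, 2)),
               np.random.default_rng(2).normal(2.0, 0.1, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers)
```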
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Amjad Majid; Albert, Don; Andersson, Par
SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
Solvation effects on chemical shifts by embedded cluster integral equation theory.
Frach, Roland; Kast, Stefan M
2014-12-11
The accurate computational prediction of nuclear magnetic resonance (NMR) parameters like chemical shifts represents a challenge if the species studied is immersed in strongly polarizing environments such as water. Common approaches to treating a solvent in the form of, e.g., the polarizable continuum model (PCM) ignore strong directional interactions such as H-bonds to the solvent which can have substantial impact on magnetic shieldings. We here present a computational methodology that accounts for atomic-level solvent effects on NMR parameters by extending the embedded cluster reference interaction site model (EC-RISM) integral equation theory to the prediction of chemical shifts of N-methylacetamide (NMA) in aqueous solution. We examine the influence of various so-called closure approximations of the underlying three-dimensional RISM theory as well as the impact of basis set size and different treatment of electrostatic solute-solvent interactions. We find considerable and systematic improvement over reference PCM and gas phase calculations. A smaller basis set in combination with a simple point charge model already yields good performance which can be further improved by employing exact electrostatic quantum-mechanical solute-solvent interaction energies. A larger basis set benefits more significantly from exact over point charge electrostatics, which can be related to differences of the solvent's charge distribution.
Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.
Kim, Soohwan; Kim, Jonghyuk
2013-10-01
Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from a high computational complexity of O(n³) + O(n²m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with huge training data, which is common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we consider Gaussian processes as implicit functions, and thus extract iso-surfaces from the scalar fields (continuous occupancy maps) using marching cubes. In this way, we are able to build two types of map representations within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
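The partition-then-local-GP idea described above can be sketched with scikit-learn. This is not the authors' pipeline: the Kinect data handling, the coarse-to-fine partitioning, and the marching-cubes surface extraction are omitted, and the data below are synthetic 2-D labels.

```python
# Sketch of "partition the data, then fit a small Gaussian process classifier
# per local cluster" for occupancy estimation (synthetic data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(2000, 2))                 # 2-D point locations
y = (X[:, 0] + 0.5 * X[:, 1] > 8).astype(int)          # occupied / free labels (toy rule)

n_local = 8
part = KMeans(n_clusters=n_local, n_init=10, random_state=0).fit(X)
local_gps = []
for c in range(n_local):                               # one small GP per spatial cluster
    idx = part.labels_ == c
    gp = GaussianProcessClassifier().fit(X[idx], y[idx]) if len(set(y[idx])) > 1 else None
    local_gps.append(gp)

def occupancy(p):
    c = part.predict([p])[0]
    gp = local_gps[c]
    if gp is None:                                     # single-class cluster: return that class
        return float(y[part.labels_ == c][0])
    return gp.predict_proba([p])[0, 1]                 # probability the point is occupied

print(occupancy([9.0, 5.0]), occupancy([1.0, 1.0]))
```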
Assessing the Amazon Cloud Suitability for CLARREO's Computational Needs
NASA Technical Reports Server (NTRS)
Goldin, Daniel; Vakhnin, Andrei A.; Currey, Jon C.
2015-01-01
In this document we compare the performance of the Amazon Web Services (AWS), also known as Amazon Cloud, with the CLARREO (Climate Absolute Radiance and Refractivity Observatory) cluster and assess its suitability for the computational needs of the CLARREO mission. A benchmark executable to process one month and one year of PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) data was used. With the optimal AWS configuration, adequate data-processing times, comparable to the CLARREO cluster, were found. The assessment of alternatives to the CLARREO cluster continues and several options, such as a NASA-based cluster, are being considered.
Efficient architecture for spike sorting in reconfigurable hardware.
Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying
2013-11-01
This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting, and its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroids are merged into one single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field-programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design, attaining a high classification correct rate and high-speed computation.
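For reference, the fuzzy C-means iteration that the FCM hardware implements can be sketched in its usual software form (shown below without the merged membership/centroid update described above, which is specific to the FPGA design):

```python
# Standard (software) fuzzy C-means iteration, shown for reference only; the
# FPGA design above merges the membership and centroid updates into one pass.
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1))              # membership update ...
        U /= U.sum(axis=1, keepdims=True)      # ... normalised over clusters
    return centroids, U
```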
Continuous Variable Cluster State Generation over the Optical Spatial Mode Comb
Pooser, Raphael C.; Jing, Jietai
2014-10-20
One-way quantum computing uses single-qubit projective measurements performed on a cluster state (a highly entangled state of multiple qubits) in order to enact quantum gates. The model is promising due to its potential scalability; the cluster state may be produced at the beginning of the computation and operated on over time. Continuous variables (CV) offer another potential benefit in the form of deterministic entanglement generation. This determinism can lead to robust cluster states and scalable quantum computation. Recent demonstrations of CV cluster states have made great strides on the path to scalability utilizing either time or frequency multiplexing in optical parametric oscillators (OPO) both above and below threshold. The techniques relied on a combination of entangling operators and beam splitter transformations. Here we show that an analogous transformation exists for amplifiers with Gaussian input states operating on multiple spatial modes. By judicious selection of local oscillators (LOs), the spatial mode distribution is analogous to the optical frequency comb consisting of axial modes in an OPO cavity. We outline an experimental system that generates cluster states across the spatial frequency comb which can also scale the amount of quantum noise reduction to potentially larger than in other systems.
Overlapping Community Detection based on Network Decomposition
NASA Astrophysics Data System (ADS)
Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin
2016-04-01
Community detection in complex network has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering and relatively new proposed link clustering methods have inherent drawbacks to discover overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to the high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing node clustering technique. The network decomposition contributes to reducing the computation time and noise link elimination conduces to improving the quality of obtained communities. Besides, we employ node clustering technique rather than link similarity measure to discover link communities, thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms.
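A rough sketch of the decomposition loop described above, using networkx; greedy modularity clustering stands in for the paper's node-clustering step, so this illustrates the iterate-cluster-and-remove idea rather than NDOCD itself, and all parameter values are assumptions.

```python
# Illustrative decomposition loop (not the NDOCD algorithm itself): repeatedly
# cluster the current graph, record the communities, and strip their internal
# links so that overlapping membership can accumulate across rounds.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def decompose(G, min_size=3, max_rounds=10):
    G = G.copy()
    layers = []
    for _ in range(max_rounds):
        comms = [set(c) for c in greedy_modularity_communities(G) if len(c) >= min_size]
        if not comms:
            break
        layers.append(comms)
        for c in comms:                                   # remove intra-community links
            G.remove_edges_from(list(G.subgraph(c).edges()))
        G.remove_nodes_from(list(nx.isolates(G)))
        if G.number_of_edges() == 0:
            break
    return layers          # a node may appear in communities from several rounds

communities = decompose(nx.karate_club_graph())
```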
Zhang, Jiang; Liu, Qi; Chen, Huafu; Yuan, Zhen; Huang, Jin; Deng, Lihua; Lu, Fengmei; Zhang, Junpeng; Wang, Yuqing; Wang, Mingwen; Chen, Liangyin
2015-01-01
Clustering analysis methods have been widely applied to identifying the functional brain networks of a multitask paradigm. However, the previously used clustering analysis techniques are computationally expensive and thus impractical for clinical applications. In this study a novel method, called SOM-SAPC that combines self-organizing mapping (SOM) and supervised affinity propagation clustering (SAPC), is proposed and implemented to identify the motor execution (ME) and motor imagery (MI) networks. In SOM-SAPC, SOM was first performed to process fMRI data and SAPC is further utilized for clustering the patterns of functional networks. As a result, SOM-SAPC is able to significantly reduce the computational cost for brain network analysis. Simulation and clinical tests involving ME and MI were conducted based on SOM-SAPC, and the analysis results indicated that functional brain networks were clearly identified with different response patterns and reduced computational cost. In particular, three activation clusters were clearly revealed, which include parts of the visual, ME and MI functional networks. These findings validated that SOM-SAPC is an effective and robust method to analyze the fMRI data with multitasks.
An automated workflow for parallel processing of large multiview SPIM recordings
Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel
2016-01-01
Summary: Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, currently done interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585
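The "trivially parallel over time points" structure mentioned above can be illustrated with a plain Python sketch; the real pipeline drives the registration and fusion steps through snakemake on an HPC cluster, so the function and file names below are placeholders only.

```python
# Placeholder sketch of per-time-point parallelism; the actual workflow uses
# snakemake and a cluster scheduler rather than a local process pool.
from multiprocessing import Pool
from pathlib import Path

def process_timepoint(raw_file: Path) -> Path:
    """Stand-in for the registration/fusion of one SPIM time point."""
    out = raw_file.with_suffix(".fused.tif")
    # ... call the real processing tool here ...
    return out

if __name__ == "__main__":
    timepoints = sorted(Path("raw").glob("tp_*.czi"))   # assumed file layout
    with Pool() as pool:                                 # one worker per core
        results = pool.map(process_timepoint, timepoints)
    print(f"processed {len(results)} time points independently")
```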
An automated workflow for parallel processing of large multiview SPIM recordings.
Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel
2016-04-01
Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, currently done interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license (http://opensource.org/licenses/MIT). The source code can be downloaded from GitHub: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters
NASA Astrophysics Data System (ADS)
Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge
2015-12-01
In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase was focused on verifying the functionalities of Windows HPC, its performance, support of commercial tools and the integration with the users' work environment. We describe the constraints imposed by the way the CERN Data Centre is operated, licensing for engineering tools, and the scalability and behaviour of the HPC engineering applications used at CERN. We will present an initial set of requirements, which were created based on the above constraints and requests from the CERN engineering user community. We will explain how we have configured Windows HPC clusters to provide the job scheduling functionalities required to support the CERN engineering user community, quality of service, user- and project-based priorities, and fair access to limited resources. Finally, we will present several performance tests we carried out to verify Windows HPC performance and scalability.
You Can Touch This! Bringing HST images to life as 3-D models
NASA Astrophysics Data System (ADS)
Christian, Carol A.; Nota, A.; Grice, N. A.; Sabbi, E.; Shaheen, N.; Greenfield, P.; Hurst, A.; Kane, S.; Rao, R.; Dutterer, J.; de Mink, S. E.
2014-01-01
We present the very first results of an innovative process to transform Hubble images into tactile 3-D models of astronomical objects. We have created a very new, unique tool for understanding astronomical phenomena, especially designed to make astronomy accessible to visually impaired children and adults. From the multicolor images of stellar clusters, we construct 3-D computer models that are digitally sliced into layers, each featuring touchable patterning and Braille characters, and are printed on a 3-D printer. The slices are then fitted together, so that the user can explore the structure of the cluster environment with their fingertips, slice-by-slice, analogous to a visual fly-through. Students will be able to identify and spatially locate the different components of these complex astronomical objects, namely gas, dust and stars, and will learn about the formation and composition of stellar clusters. The primary audiences for the 3D models are middle school and high school blind students and, secondarily, blind adults. However, we believe that the final materials will address a broad range of individuals with varied and multi-sensory learning styles, and will be interesting and visually appealing to the public at large.
Parallel algorithm of VLBI software correlator under multiprocessor environment
NASA Astrophysics Data System (ADS)
Zheng, Weimin; Zhang, Dong
2007-11-01
The correlator is the key signal processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the massive amounts of data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is a data-intensive and computation-intensive task. This paper presents the algorithms of two parallel software correlators under multiprocessor environments. A near real-time correlator for spacecraft tracking adopts pipelining and thread-parallel technology, and runs on SMP (Symmetric Multiple Processor) servers. Another high speed prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm is realized on a small Beowulf cluster platform. Both correlators are characterized by a flexible structure, scalability, and the ability to correlate data from 10 stations.
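The core of an FX-style software correlator, Fourier transforming each station's stream and accumulating the cross-spectrum of one baseline, can be sketched in a few lines; the segment length is an assumption and no delay or fringe correction is applied, so this is purely illustrative.

```python
# Toy FX correlation of one baseline; real correlators add delay/fringe
# corrections and distribute data segments over threads or MPI ranks.
import numpy as np

def fx_correlate(x, y, nchan=1024):
    """Average cross-spectrum of two station voltage streams."""
    nseg = min(len(x), len(y)) // nchan
    acc = np.zeros(nchan, dtype=complex)
    for s in range(nseg):
        X = np.fft.fft(x[s * nchan:(s + 1) * nchan])
        Y = np.fft.fft(y[s * nchan:(s + 1) * nchan])
        acc += X * np.conj(Y)                 # visibility accumulation
    return acc / nseg

rng = np.random.default_rng(0)
sig = rng.standard_normal(1 << 16)
vis = fx_correlate(sig, np.roll(sig, 3))      # common signal with a 3-sample lag
```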
Panoramic Views of Cluster Evolution Since z = 3
NASA Astrophysics Data System (ADS)
Kodama, Tadayuki; Tanaka, M.; Tanaka, Ichi; Kajisawa, M.
2007-05-01
We have been conducting the PISCES project (Panoramic Imaging and Spectroscopy of Cluster Evolution with Subaru), making use of the wide-field imaging capability of Subaru. Our motivations are first to map out the large scale structure and the local environment of galaxies therein, and then to investigate the variation in galaxy properties as a function of environment and mass. We have completed multi-colour imaging of 8 distant clusters between 0.4
Models@Home: distributed computing in bioinformatics using a screensaver based approach.
Krieger, Elmar; Vriend, Gert
2002-02-01
Due to the steadily growing computational demands in bioinformatics and related scientific disciplines, one is forced to make optimal use of the available resources. A straightforward solution is to build a network of idle computers and let each of them work on a small piece of a scientific challenge, as done by Seti@Home (http://setiathome.berkeley.edu), the world's largest distributed computing project. We developed a generally applicable distributed computing solution that uses a screensaver system similar to Seti@Home. The software exploits the coarse-grained nature of typical bioinformatics projects. Three major considerations for the design were: (1) often, many different programs are needed, while the time is lacking to parallelize them. Models@Home can run any program in parallel without modifications to the source code; (2) in contrast to the Seti project, bioinformatics applications are normally more sensitive to lost jobs. Models@Home therefore includes stringent control over job scheduling; (3) to allow use in heterogeneous environments, Linux and Windows based workstations can be combined with dedicated PCs to build a homogeneous cluster. We present three practical applications of Models@Home, running the modeling programs WHAT IF and YASARA on 30 PCs: force field parameterization, molecular dynamics docking, and database maintenance.
Extrinsic Sources of Scatter in the Richness-mass Relation of Galaxy Clusters
NASA Astrophysics Data System (ADS)
Rozo, Eduardo; Rykoff, Eli; Koester, Benjamin; Nord, Brian; Wu, Hao-Yi; Evrard, August; Wechsler, Risa
2011-10-01
Maximizing the utility of upcoming photometric cluster surveys requires a thorough understanding of the richness-mass relation of galaxy clusters. We use Monte Carlo simulations to study the impact of various sources of observational scatter on this relation. Cluster ellipticity, photometric errors, photometric redshift errors, and cluster-to-cluster variations in the properties of red-sequence galaxies contribute negligible noise. Miscentering, however, can be important, and likely contributes to the scatter in the richness-mass relation of galaxy maxBCG clusters at the low-mass end, where centering is more difficult. We also investigate the impact of projection effects under several empirically motivated assumptions about cluster environments. Using Sloan Digital Sky Survey data and the maxBCG cluster catalog, we demonstrate that variations in cluster environments can rarely (≈1%-5% of the time) result in significant richness boosts. Due to the steepness of the mass/richness function, the corresponding fraction of optically selected clusters that suffer from these projection effects is ≈5%-15%. We expect these numbers to be generic in magnitude, but a precise determination requires detailed, survey-specific modeling.
Script identification from images using cluster-based templates
Hochberg, J.G.; Kelly, P.M.; Thomas, T.R.
1998-12-01
A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script. 17 figs.
Script identification from images using cluster-based templates
Hochberg, Judith G.; Kelly, Patrick M.; Thomas, Timothy R.
1998-01-01
A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script.
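A toy version of the template pipeline described above (per-script symbol clustering, cluster-centroid templates, nearest-template scoring) might look like the following; the feature extraction is deliberately omitted and all names and parameters are placeholders.

```python
# Toy cluster-based template matcher for script identification.
# Assumes feature vectors have already been extracted, one per textual symbol.
import numpy as np
from sklearn.cluster import KMeans

def build_templates(training_symbols, n_clusters=64):
    """training_symbols: dict script -> array of symbol feature vectors."""
    return {script: KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
                     .fit(feats).cluster_centers_
            for script, feats in training_symbols.items()}

def identify_script(doc_symbols, templates):
    """Pick the script whose templates best explain the document's symbols."""
    scores = {}
    for script, centers in templates.items():
        d = np.linalg.norm(doc_symbols[:, None, :] - centers[None, :, :], axis=2)
        scores[script] = d.min(axis=1).mean()   # mean distance to nearest template
    return min(scores, key=scores.get)          # smallest mean distance wins
```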
A clustered origin for isolated massive stars
NASA Astrophysics Data System (ADS)
Lucas, William E.; Rybak, Matus; Bonnell, Ian A.; Gieles, Mark
2018-03-01
High-mass stars are commonly found in stellar clusters promoting the idea that their formation occurs due to the physical processes linked with a young stellar cluster. It has recently been reported that isolated high-mass stars are present in the Large Magellanic Cloud. Due to their low velocities, it has been argued that these are high-mass stars which formed without a surrounding stellar cluster. In this paper, we present an alternative explanation for the origin of these stars in which they formed in a cluster environment but are subsequently dispersed into the field as their natal cluster is tidally disrupted in a merger with a higher mass cluster. They escape the merged cluster with relatively low velocities typical of the cluster interaction and thus of the larger scale velocity dispersion, similarly to the observed stars. N-body simulations of cluster mergers predict a sizeable population of low-velocity (≤20 km s-1), high-mass stars at distances of >20 pc from the cluster. High-mass clusters in which gas poor mergers are frequent would be expected to commonly have haloes of young stars, including high-mass stars, which were actually formed in a cluster environment.
MACS: The impact of environment on galaxy evolution at z>0.5
NASA Astrophysics Data System (ADS)
Ma, Cheng-Jiun
2010-08-01
In order to investigate galaxy evolution in environments of greatly varying density, we conduct an extensive spectroscopic survey of galaxies in eight X-ray luminous clusters at redshift higher than 0.5. Unlike most spectroscopic surveys of cluster galaxies, we sample the galaxy population beyond the virial radius of each cluster (out to ˜6 Mpc), thereby probing regions that differ by typically two orders of magnitude in galaxy density. Galaxies are classified by spectroscopic type into emission-line, absorption-line, post starburst (E+A), and starburst (e(a) and e(b)) galaxies, and the spatial distribution of each type is used as a diagnostic of the presence and efficiency of different physical mechanisms of galaxy evolution. Our analysis yields the perhaps strongest confirmation so far of the morphology-density relation for emission- and absorption-line galaxies. In addition, we find E+A galaxies to be exclusively located within the ram-pressure stripping radius of each cluster. Taking advantage of this largest sample of E+A galaxies in clusters compiled to date, the spatial profile of the distribution of E+A galaxies can be studied for the first time. We show that ram-pressure stripping is the dominant, and possibly only, physical mechanism to cause the post-starburst phase of cluster galaxies. In addition, two particular interesting clusters are studied individually. For MACS J0717.5+3745, a clear morphology-density correlation is observed for lenticular (S0) galaxies around this cluster, but becomes insignificant toward the center of cluster. We interpret this finding as evidence of the creation of S0s being triggered primarily in environments of low to intermediate density. In MACS J0025.4-1225, a cluster undergoing a major merger, all faint E+A galaxies are observed to lie near the peak of the X-ray surface brightness, strongly suggesting that starbursts are enhanced as well as terminated during cluster mergers. We conclude that ram-pressure stripping and/or tidal destruction are central to the evolution of galaxies clusters, and that wide-field spectroscopic surveys around clusters are essential to distinguish between competing physical effects driving galaxy evolution in different environments.
A data colocation grid framework for big data medical image processing: backend design
NASA Astrophysics Data System (ADS)
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-03-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated when considering the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results of three empirical experiments are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), which reduces wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design.
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A
2018-03-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated when considering the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results of three empirical experiments are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), which reduces wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design
Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-01-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated when considering the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results of three empirical experiments are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), which reduces wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available. PMID:29887668
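As a hedged illustration of the "table scheme for rapid NoSQL query" idea, the sketch below stores images in HBase under a composite row key (project/subject/session/scan) so that related scans are colocated and can be fetched with a prefix scan; the key layout, table and column names, host, and the use of the happybase client are all assumptions, not the paper's actual schema.

```python
# Sketch of a composite HBase row key for colocating related medical images.
# Uses the happybase client; key layout, table and column family are assumed.
import happybase

def row_key(project, subject, session, scan):
    return f"{project}|{subject}|{session}|{scan}".encode()

conn = happybase.Connection("hbase-master")           # assumed host name
table = conn.table("images")                          # assumed table name

# store one image volume under its composite key
with open("sub-001_T1w.nii.gz", "rb") as f:
    table.put(row_key("studyA", "sub-001", "ses-01", "T1w"),
              {b"data:nifti": f.read()})

# a prefix scan fetches every scan of one subject without touching other rows
for key, cols in table.scan(row_prefix=b"studyA|sub-001|"):
    print(key, len(cols[b"data:nifti"]))
```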
CLUSTER DYNAMICS LARGELY SHAPES PROTOPLANETARY DISK SIZES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vincke, Kirsten; Pfalzner, Susanne, E-mail: kvincke@mpifr-bonn.mpg.de
2016-09-01
To what degree the cluster environment influences the sizes of protoplanetary disks surrounding young stars is still an open question. This is particularly true for the short-lived clusters typical for the solar neighborhood, in which the stellar density and therefore the influence of the cluster environment change considerably over the first 10 Myr. In previous studies, the effect of the gas on the cluster dynamics has often been neglected; this is remedied here. Using the code NBody6++, we study the stellar dynamics in different developmental phases (embedded, expulsion, and expansion), including the gas, and quantify the effect of fly-bys on the disk size. We concentrate on massive clusters (M_cl ≥ 10^3–6 × 10^4 M_⊙), which are representative for clusters like the Orion Nebula Cluster (ONC) or NGC 6611. We find that not only the stellar density but also the duration of the embedded phase matters. The densest clusters react fastest to the gas expulsion and drop quickly in density, here 98% of relevant encounters happen before gas expulsion. By contrast, disks in sparser clusters are initially less affected, but because these clusters expand more slowly, 13% of disks are truncated after gas expulsion. For ONC-like clusters, we find that disks larger than 500 au are usually affected by the environment, which corresponds to the observation that 200 au-sized disks are common. For NGC 6611-like clusters, disk sizes are cut down on average to roughly 100 au. A testable hypothesis would be that the disks in the center of NGC 6611 should be on average ≈20 au and therefore considerably smaller than those in the ONC.
LBT/LUCIFER view of star-forming galaxies in the cluster 7C 1756+6520 at z ˜ 1.4
NASA Astrophysics Data System (ADS)
Magrini, Laura; Sommariva, Veronica; Cresci, Giovanni; Sani, Eleonora; Galametz, Audrey; Mannucci, Filippo; Petropoulou, Vasiliki; Fumana, Marco
2012-10-01
Galaxy clusters are key places to study the contribution of nature (i.e. mass and morphology) and nurture (i.e. environment) in the formation and evolution of galaxies. Recently, a number of clusters at z > 1, i.e. corresponding to the first epochs of the cluster formation, have been discovered and confirmed spectroscopically. We present new observations obtained with the LBT Near Infrared Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER) spectrograph at Large Binocular Telescope (LBT) of a sample of star-forming galaxies associated with a large-scale structure around the radio galaxy 7C 1756+6520 at z = 1.42. Combining our spectroscopic data and the literature photometric data, we derived some of the properties of these galaxies: star formation rate, metallicity and stellar mass. With the aim of analysing the effect of the cluster environment on galaxy evolution, we have located the galaxies in the plane of the so-called fundamental metallicity relation (FMR), which is known not to evolve with redshift up to z = 2.5 for field galaxies, but it is still unexplored in rich environments at low and high redshifts. We found that the properties of the galaxies in the cluster 7C 1756+6520 are compatible with the FMR which suggests that the effect of the environment on galaxy metallicity at this early epoch of cluster formation is marginal. As a side study, we also report the spectroscopic analysis of a bright active galactic nucleus, belonging to the cluster, which shows a significant outflow of gas.
Computational Role of Tunneling in a Programmable Quantum Annealer
NASA Technical Reports Server (NTRS)
Boixo, Sergio; Smelyanskiy, Vadim; Shabani, Alireza; Isakov, Sergei V.; Dykman, Mark; Amin, Mohammad; Mohseni, Masoud; Denchev, Vasil S.; Neven, Hartmut
2016-01-01
Quantum tunneling is a phenomenon in which a quantum state tunnels through energy barriers above the energy of the state itself. Tunneling has been hypothesized as an advantageous physical resource for optimization. Here we present the first experimental evidence of a computational role of multiqubit quantum tunneling in the evolution of a programmable quantum annealer. We developed a theoretical model based on a NIBA quantum master equation to describe the multiqubit dissipative cotunneling effects under the complex noise characteristics of such quantum devices. We start by considering a computational primitive, the simplest non-convex optimization problem consisting of just one global and one local minimum. The quantum evolutions enable tunneling to the global minimum while the corresponding classical paths are trapped in a false minimum. In our study the non-convex potentials are realized by frustrated networks of qubit clusters with strong intra-cluster coupling. We show that the collective effect of the quantum environment is suppressed in the critical phase during the evolution where quantum tunneling decides the right path to solution. In a later stage dissipation facilitates the multiqubit cotunneling leading to the solution state. The predictions of the model accurately describe the experimental data from the D-Wave II quantum annealer at NASA Ames. In our computational primitive the temperature dependence of the probability of success in the quantum model is opposite to that of the classical paths with thermal hopping. Specifically, we provide an analysis of an optimization problem with sixteen qubits, demonstrating eight-qubit cotunneling that increases success probabilities. Furthermore, we report results for larger problems with up to 200 qubits that contain the primitive as subproblems.
A program to compute the soft Robinson-Foulds distance between phylogenetic networks.
Lu, Bingxin; Zhang, Louxin; Leong, Hon Wai
2017-03-14
Over the past two decades, phylogenetic networks have been studied to model reticulate evolutionary events. The relationships among phylogenetic networks, phylogenetic trees and clusters serve as the basis for reconstruction and comparison of phylogenetic networks. To understand these relationships, two problems are raised: the tree containment problem, which asks whether a phylogenetic tree is displayed in a phylogenetic network, and the cluster containment problem, which asks whether a cluster is represented at a node in a phylogenetic network. Both problems are NP-complete. A fast exponential-time algorithm for the cluster containment problem on arbitrary networks is developed and implemented in C. The resulting program is further extended into a computer program for fast computation of the soft Robinson-Foulds distance between phylogenetic networks. Two computer programs are developed to facilitate reconstruction and validation of phylogenetic network models in evolutionary and comparative genomics. Our simulation tests indicated that they are fast enough for use in practice. Additionally, our simulation data suggest that the distribution of the soft Robinson-Foulds distance between phylogenetic networks is unlikely to be normal.
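For intuition, the Robinson-Foulds idea on trees compares the two sets of clusters (clades) and counts those present in exactly one tree; the soft variant for networks builds on the cluster containment test rather than exact cluster sets. A minimal tree-only sketch, with trees given as nested tuples of leaf labels:

```python
# Tree-only illustration of the Robinson-Foulds idea: symmetric difference of
# the clusters (clades) induced by two trees. The soft RF distance for networks
# generalises this via the cluster containment test described above.
def clusters_of(tree):
    """tree: nested tuples of leaf labels, e.g. (('a', 'b'), ('c', ('d', 'e')))."""
    found = set()
    def walk(node):
        if isinstance(node, tuple):
            leaves = frozenset().union(*(walk(child) for child in node))
            found.add(leaves)
            return leaves
        return frozenset([node])
    walk(tree)
    return found

def rf_distance(t1, t2):
    c1, c2 = clusters_of(t1), clusters_of(t2)
    return len(c1 ^ c2)        # clusters found in exactly one of the two trees

print(rf_distance((('a', 'b'), ('c', ('d', 'e'))),
                  (('a', ('b', 'c')), ('d', 'e'))))   # -> 4
```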
A machine learning approach to galaxy-LSS classification - I. Imprints on halo merger trees
NASA Astrophysics Data System (ADS)
Hui, Jianan; Aragon, Miguel; Cui, Xinping; Flegal, James M.
2018-04-01
The cosmic web plays a major role in the formation and evolution of galaxies and defines, to a large extent, their properties. However, the relation between galaxies and environment is still not well understood. Here, we present a machine learning approach to study imprints of environmental effects on the mass assembly of haloes. We present a galaxy-LSS machine learning classifier based on galaxy properties sensitive to the environment. We then use the classifier to assess the relevance of each property. Correlations between galaxy properties and their cosmic environment can be used to predict galaxy membership to void/wall or filament/cluster with an accuracy of 93 per cent. Our study unveils environmental information encoded in properties of haloes not normally considered directly dependent on the cosmic environment such as merger history and complexity. Understanding the physical mechanism by which the cosmic web is imprinted in a halo can lead to significant improvements in galaxy formation models. This is accomplished by extracting features from galaxy properties and merger trees, computing feature scores for each feature and then applying support vector machine (SVM) to different feature sets. To this end, we have discovered that the shape and depth of the merger tree, formation time, and density of the galaxy are strongly associated with the cosmic environment. We describe a significant improvement in the original classification algorithm by performing LU decomposition of the distance matrix computed by the feature vectors and then using the output of the decomposition as input vectors for SVM.
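A rough sketch of the classification step described above (pairwise distances between feature vectors, LU decomposition of the distance matrix, SVM on the resulting rows); the feature values, labels, and kernel choice below are random placeholders, so the printed score is meaningless and only the data flow is illustrative.

```python
# Sketch: SVM classification of haloes into LSS classes from inputs derived
# via LU decomposition of the pairwise distance matrix, as outlined above.
import numpy as np
from scipy.linalg import lu
from scipy.spatial.distance import pdist, squareform
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.standard_normal((200, 6))   # e.g. formation time, tree depth, density
labels = rng.integers(0, 2, 200)           # 0 = void/wall, 1 = filament/cluster

D = squareform(pdist(features))            # n x n distance matrix
P, L, U = lu(D)                            # LU decomposition of the distance matrix
X = U                                      # use decomposition output as SVM inputs

print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```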
Research on retailer data clustering algorithm based on Spark
NASA Astrophysics Data System (ADS)
Huang, Qiuman; Zhou, Feng
2017-03-01
Big data analysis is a hot topic in the IT field now. Spark is a high-reliability and high-performance distributed parallel computing framework for big data sets. K-means algorithm is one of the classical partition methods in clustering algorithm. In this paper, we study the k-means clustering algorithm on Spark. Firstly, the principle of the algorithm is analyzed, and then the clustering analysis is carried out on the supermarket customers through the experiment to find out the different shopping patterns. At the same time, this paper proposes the parallelization of k-means algorithm and the distributed computing framework of Spark, and gives the concrete design scheme and implementation scheme. This paper uses the two-year sales data of a supermarket to validate the proposed clustering algorithm and achieve the goal of subdividing customers, and then analyze the clustering results to help enterprises to take different marketing strategies for different customer groups to improve sales performance.
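A minimal PySpark version of the pipeline sketched above (load transaction-level features, assemble a vector column, fit k-means, inspect the segments); the CSV path, column names, and k = 4 are assumptions, not values from the paper.

```python
# Minimal Spark k-means sketch for customer segmentation; file path, feature
# columns, and the number of clusters are assumptions.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("retailer-clustering").getOrCreate()

df = spark.read.csv("sales.csv", header=True, inferSchema=True)
assembler = VectorAssembler(
    inputCols=["visits", "total_spend", "avg_basket_size"],   # assumed features
    outputCol="features")
data = assembler.transform(df)

model = KMeans(k=4, seed=1, featuresCol="features").fit(data)
segments = model.transform(data)              # adds a 'prediction' column per customer
segments.groupBy("prediction").count().show()
```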
NASA Astrophysics Data System (ADS)
Teuben, P. J.; Wolfire, M. G.; Pound, M. W.; Mundy, L. G.
We have assembled a cluster of Intel-Pentium based PCs running Linux to compute a large set of Photodissociation Region (PDR) and Dust Continuum models. For various reasons the cluster is heterogeneous, currently ranging from a single Pentium-II 333 MHz to dual Pentium-III 450 MHz CPU machines. Although this will be sufficient for our ``embarrassingly parallelizable problem'' it may present some challenges for as yet unplanned future use. In addition the cluster was used to construct a MIRIAD benchmark, and compared to equivalent Ultra-Sparc based workstations. Currently the cluster consists of 8 machines, 14 CPUs, 50GB of disk-space, and a total peak speed of 5.83 GHz, or about 1.5 Gflops. The total cost of this cluster has been about $12,000, including all cabling, networking equipment, rack, and a CD-R backup system. The URL for this project is http://dustem.astro.umd.edu.
Metal Sulfide Cluster Complexes and their Biogeochemical Importance in the Environment
NASA Astrophysics Data System (ADS)
Luther, George W.; Rickard, David T.
2005-10-01
Aqueous clusters of FeS, ZnS and CuS constitute a major fraction of the dissolved metal load in anoxic oceanic, sedimentary, freshwater and deep ocean vent environments. Their ubiquity explains how metals are transported in anoxic environmental systems. Thermodynamic and kinetic considerations show that they have high stability in oxic aqueous environments, and are also a significant fraction of the total metal load in oxic river waters. Molecular modeling indicates that the clusters are very similar to the basic structural elements of the first condensed phase forming from aqueous solutions in the Fe-S, Zn-S and Cu-S systems. The structure of the first condensed phase is determined by the structure of the cluster in solution. This provides an alternative explanation of Ostwald's Rule, where the most soluble, metastable phases form before the stable phases. For example, in the case of FeS, we showed that the first condensed phase is nanoparticulate, metastable mackinawite with a particle size of 2 nm consisting of about 150 FeS subunits, representing the end of a continuum between aqueous FeS clusters and condensed material. These metal sulfide clusters and nanoparticles are significant in biogeochemistry. Metal sulfide clusters reduce sulfide and metal toxicity and help drive ecology. FeS cluster formation drives vent ecology and AgS cluster formation detoxifies Ag in Daphnia magna neonates. We also note a new reaction between FeS and DNA and discuss the potential role of FeS clusters in denaturing DNA.
Spatial patterns in electoral wards with high lymphoma incidence in Yorkshire health region.
Barnes, N.; Cartwright, R. A.; O'Brien, C.; Roberts, B.; Richards, I. D.; Bird, C. C.
1987-01-01
The possibilities of clustering between those electoral wards which display higher than expected incidences of cases of the lymphomas occurring between 1978 and 1982 are examined. Clusters are defined as being those wards with cases in excess (at a probability of less than 10%) which are geographically adjacent to each other. A separate analysis extends the definition of cluster to include high incidence wards that are adjacent or separated by one other ward. The results indicate that many high incidence lymphoma wards do occur close together and when computer simulations are used to compute expected results, many of the observed results are shown to be highly improbable both in the overall number of clustering wards and in the largest number of wards comprising a 'cluster'. PMID:3663469
Galaxy clusters in local Universe simulations without density constraints: a long uphill struggle
NASA Astrophysics Data System (ADS)
Sorce, Jenny G.
2018-06-01
Galaxy clusters are excellent cosmological probes provided that their formation and evolution within the large scale environment are precisely understood. Therefore studies with simulated galaxy clusters have flourished. However detailed comparisons between simulated and observed clusters and their population - the galaxies - are complicated by the diversity of clusters and their surrounding environment. An original way initiated by Bertschinger as early as 1987, to legitimize the one-to-one comparison exercise down to the details, is to produce simulations constrained to resemble the cluster under study within its large scale environment. Subsequently several methods have emerged to produce simulations that look like the local Universe. This paper highlights one of these methods and its essential steps to get simulations that not only resemble the local Large Scale Structure but also that host the local clusters. It includes a new modeling of the radial peculiar velocity uncertainties to remove the observed correlation between the decreases of the simulated cluster masses and of the amount of data used as constraints with the distance from us. This method has the particularity to use solely radial peculiar velocities as constraints: no additional density constraints are required to get local cluster simulacra. The new resulting simulations host dark matter halos that match the most prominent local clusters such as Coma. Zoom-in simulations of the latter and of a volume larger than the 30h-1 Mpc radius inner sphere become now possible to study local clusters and their effects. Mapping the local Sunyaev-Zel'dovich and Sachs-Wolfe effects can follow.
Galaxies at the Extremes: Ultradiffuse Galaxies in the Virgo Cluster
NASA Astrophysics Data System (ADS)
Mihos, Chris
2017-08-01
The ultradiffuse galaxies (UDGs) recently discovered in massive galaxy clusters presents both challenges and opportunities for our understanding of galaxy evolution in dense clusters. Such large, low density galaxies should be most vulnerable to gravitational destruction within the cluster environment. Thus their presence in cluster cores argues either that they must be stabilized by massive dark halos or else be short-lived objects undergoing rapid transformation, perhaps leading to the formation of ultracompact dwarf galaxies (UCDs) if their destruction leaves only a compact nucleus behind. We propose deep imaging of four Virgo Cluster UDGs to probe their local environment within Virgo via accurate tip of the red giant branch (TRGB) distances. With a distance precision of 1 Mpc, we will accurately place the objects in the Virgo core, cluster outskirts, or intervening field. When coupled with our extant kinematic data, we can determine whether they are infalling objects or instead have already passed through the cluster core. We will also compare their compact nuclei to Virgo UCDs, and study their globular cluster (GC) populations in detail. Probing three magnitudes beyond the turnover in the GC luminosity function, we will construct larger and cleaner GC samples than possible with ground-based imaging, using the total mass and radial extent of the globular cluster systems to estimate the dark halo mass and tidal radius for each UDG. The new information provided by HST about the local environment and intrinsic properties of these Virgo UDGs will be used in conjunction with simulation data to study cluster-driven evolution and transformation of low density galaxies.
The Effects of Environmental Characteristics on the Structure of Hospital Clusters.
ERIC Educational Resources Information Center
Fennell, Mary L.
1980-01-01
The population ecology view that variation in sets or clusters of organizations should be isomorphic with variation in cluster environment was used to explain structural variation among hospital clusters. Cluster differentiation seems to be causally affected by the range of services, average hospital size, and the periodic closing of hospitals.…
Star clusters in evolving galaxies
NASA Astrophysics Data System (ADS)
Renaud, Florent
2018-04-01
Their ubiquity and extreme densities make star clusters probes of prime importance for galaxy evolution. Old globular clusters keep imprints of the physical conditions of their assembly in the early Universe, and younger stellar objects, observationally resolved, tell us about the mechanisms at play in their formation. Yet, we still do not understand the diversity involved: why is star cluster formation limited to 10^5 M⊙ objects in the Milky Way, while some dwarf galaxies like NGC 1705 are able to produce clusters 10 times more massive? Why do dwarfs generally host a higher specific frequency of clusters than larger galaxies? How can we connect the present-day, often resolved, stellar systems to the formation of globular clusters at high redshift? And how do these links depend on the galactic and cosmological environments of these clusters? In this review, I present recent advances on star cluster formation and evolution, in galactic and cosmological context. The emphasis is put on the theory, formation scenarios and the effects of the environment on the evolution of the global properties of clusters. A few open questions are identified.
2013-01-01
Background Screen-based media (SBM) occupy a considerable portion of young peoples’ discretionary leisure time. The aim of this paper was to investigate whether distinct clusters of SBM use exist, and if so, to examine the relationship of any identified clusters with other activity/sedentary behaviours and physical and mental health indicators. Methods The data for this study come from 643 adolescents, aged 14 years, who were participating in the longitudinal Western Australian Pregnancy Cohort (Raine) Study through May 2003 to June 2006. Time spent on SBM, phone use and reading was assessed using the Multimedia Activity Recall for Children and Adults. Height, weight, muscle strength were measured at a clinic visit and the adolescents also completed questionnaires on their physical activity and psychosocial health. Latent class analysis (LCA) was used to analyse groupings of SBM use. Results Three clusters of SBM use were found; C1 ‘instrumental computer users’ (high email use, general computer use), C2 ‘multi-modal e-gamers’ (both high console and computer game use) and C3 ‘computer e-gamers’ (high computer game use only). Television viewing was moderately high amongst all the clusters. C2 males took fewer steps than their male peers in C1 and C3 (-13,787/week, 95% CI: -4619 to -22957, p = 0.003 and -14,806, 95% CI: -5,306 to -24,305, p = 0.002) and recorded less MVPA than the C1 males (-3.5 h, 95% CI: -1.0 to -5.9, p = 0.005). There was no difference in activity levels between females in clusters C1 and C3. Conclusion SBM use by adolescents did cluster and these clusters related differently to activity/sedentary behaviours and both physical and psychosocial health indicators. It is clear that SBM use is not a single construct and future research needs to take consideration of this if it intends to understand the impact SBM has on health. PMID:24330626
A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2007-01-01
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
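As a concrete anchor for the criterion, the within-cluster sum of squares can be computed directly for any partition, and the k-means heuristic minimizes it (scikit-learn exposes the value it reaches as `inertia_`); a small sketch on synthetic data:

```python
# Within-cluster sum of squares (WCSS) for a given partition, plus the value
# reached by the k-means heuristic on the same data.
import numpy as np
from sklearn.cluster import KMeans

def wcss(X, labels):
    total = 0.0
    for c in np.unique(labels):
        pts = X[labels == c]
        total += ((pts - pts.mean(axis=0)) ** 2).sum()   # squared deviations from centroid
    return total

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(wcss(X, km.labels_), km.inertia_)      # the two values agree
```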
On evaluating clustering procedures for use in classification
NASA Technical Reports Server (NTRS)
Pore, M. D.; Moritz, T. E.; Register, D. T.; Yao, S. S.; Eppler, W. G. (Principal Investigator)
1979-01-01
The problem of evaluating clustering algorithms and their respective computer programs for use in a preprocessing step for classification is addressed. In clustering for classification the probability of correct classification is suggested as the ultimate measure of accuracy on training data. A means of implementing this criterion and a measure of cluster purity are discussed. Examples are given. A procedure for cluster labeling that is based on cluster purity and sample size is presented.
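One common way to implement a cluster-purity measure of the kind discussed above is to label each cluster by its majority class and compute the fraction of samples that match; the sketch below uses the standard purity definition, which is not necessarily the exact measure used in the report.

```python
# Cluster purity: assign each cluster its majority class, then measure the
# fraction of samples that agree. Standard definition, shown for illustration.
import numpy as np
from sklearn.metrics.cluster import contingency_matrix

def purity(true_labels, cluster_labels):
    C = contingency_matrix(true_labels, cluster_labels)   # classes x clusters
    return C.max(axis=0).sum() / C.sum()

y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2])
y_clus = np.array([0, 0, 1, 1, 1, 1, 2, 2])
print(purity(y_true, y_clus))   # 0.875
```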
Application of dynamic topic models to toxicogenomics data.
Lee, Mikyung; Liu, Zhichao; Huang, Ruili; Tong, Weida
2016-10-06
All biological processes are inherently dynamic. Biological systems evolve transiently or sustainably across sequential time points after perturbation by environmental insults, drugs and chemicals. Investigating the temporal behavior of molecular events has been an important subject for understanding the underlying mechanisms governing a biological system in response to perturbations such as drug treatment. The intrinsic complexity of time-series data requires appropriate computational algorithms for data interpretation. In this study, we propose, for the first time, the application of dynamic topic models (DTM) for analyzing time-series gene expression data. A large time-series toxicogenomics dataset was studied. It contains over 3144 microarrays of gene expression data corresponding to rat livers treated with 131 compounds (most are drugs) at two doses (control and high dose) in a repeated schedule containing four separate time points (4-, 8-, 15- and 29-day). We analyzed, with DTM, the topics (each consisting of a set of genes) and their biological interpretations over these four time points. We identified hidden patterns embedded in these time-series gene expression profiles. From the topic distribution for each compound-time condition, a number of drugs were successfully clustered by their shared mode of action, such as PPARɑ agonists and COX inhibitors. The biological meaning underlying each topic was interpreted using diverse sources of information such as functional analysis of the pathways and therapeutic uses of the drugs. Additionally, we found that sample clusters produced by DTM are much more coherent in terms of functional categories when compared to traditional clustering algorithms. We demonstrated that DTM, a text mining technique, can be a powerful computational approach for clustering time-series gene expression profiles with a probabilistic representation of their dynamic features along sequential time frames. The method offers an alternative way of uncovering hidden patterns embedded in time-series gene expression profiles to gain an enhanced understanding of the dynamic behavior of gene regulation in the biological system.
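Gensim ships an implementation of Blei and Lafferty's dynamic topic model (`LdaSeqModel`), which is one way to reproduce this kind of analysis; the toy gene-token documents, the slice sizes, and the topic count below are placeholders standing in for the 4-, 8-, 15- and 29-day groups of the real dataset.

```python
# Hedged sketch: fitting a dynamic topic model over time-ordered, expression-
# derived "documents" with gensim's LdaSeqModel. Documents here are lists of
# gene tokens; tokens, slice sizes, and num_topics are placeholders.
from gensim.corpora import Dictionary
from gensim.models import LdaSeqModel

docs = [
    ["cyp1a1", "cyp1a2", "ahr"], ["fabp1", "ppara", "acox1"],   # day 4
    ["cyp1a1", "ahr", "nqo1"],   ["ppara", "acox1", "cpt1a"],   # day 8
    ["gclc", "nqo1", "ahr"],     ["acox1", "cpt1a", "hmgcs2"],  # day 15
    ["gclc", "gclm", "nqo1"],    ["cpt1a", "hmgcs2", "fgf21"],  # day 29
]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# documents must be ordered by time; time_slice gives the count per time point
model = LdaSeqModel(corpus=corpus, id2word=dictionary,
                    time_slice=[2, 2, 2, 2], num_topics=2)
print(model.print_topics(time=0))   # topics as estimated at the first time point
```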
High Performance Computing of Meshless Time Domain Method on Multi-GPU Cluster
NASA Astrophysics Data System (ADS)
Ikuno, Soichiro; Nakata, Susumu; Hirokawa, Yuta; Itoh, Taku
2015-01-01
High performance computing of the Meshless Time Domain Method (MTDM) on multiple GPUs using the supercomputer HA-PACS (Highly Accelerated Parallel Advanced system for Computational Sciences) at the University of Tsukuba is investigated. Generally, the finite difference time domain (FDTD) method is adopted for the numerical simulation of electromagnetic wave propagation phenomena. However, the numerical domain must be divided into rectangular meshes, and it is difficult to adapt the method to problems with complex domains. On the other hand, MTDM can be easily adapted to such problems because MTDM does not require meshes. In the present study, we implement MTDM on a multi-GPU cluster to speed up the method, and numerically investigate its performance on the multi-GPU cluster. To reduce the computation time, the communication time between the decomposed domains is hidden behind the perfectly matched layer (PML) calculation procedure. The results of computation show that MTDM on 128 GPUs is 173 times faster than a single-CPU calculation.
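The overlap trick described above, starting the boundary exchange and doing independent work (here, the PML update) before waiting on it, can be sketched with non-blocking MPI calls; the field sizes, update rules, and ring topology below are placeholders, not the paper's implementation.

```python
# Sketch of hiding halo-exchange communication behind local computation,
# in the spirit of the PML-overlap strategy described above; uses non-blocking
# mpi4py calls. Field sizes and update functions are placeholders.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
field = np.random.rand(1000)
halo = np.empty(1000)

left = (rank - 1) % size
right = (rank + 1) % size

for step in range(10):
    # start the boundary exchange without waiting for it to finish
    reqs = [comm.Isend(field, dest=right, tag=0),
            comm.Irecv(halo, source=left, tag=0)]
    # do work that does not need the neighbour data (e.g. the PML update)
    field *= 0.99                                  # placeholder local update
    MPI.Request.Waitall(reqs)                      # communication is now hidden
    field += 0.01 * halo                           # placeholder boundary update
```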
The influence of environment on the properties of galaxies
NASA Astrophysics Data System (ADS)
Hashimoto, Yasuhiro
1999-11-01
I will present the result of the evaluation of the environmental influences on three important galactic properties: morphology, star formation rate, and interaction in the local universe. I have used a very large and homogeneous sample of 15749 galaxies drawn from the Las Campanas Redshift Survey (Shectman et al. 1996). This data set consists of galaxies inhabiting the entire range of galactic environments, from the sparsest field to the densest clusters, thus allowing me to study environmental variations without combining multiple data sets with inhomogeneous characteristics. Furthermore, I can also extend the research to a "general" environmental investigation by, for the first time, decoupling the very local environment, as characterized by local galaxy density, from the effects of larger-scale environments, such as membership in a cluster. The star formation rate is characterized by the strength of EW(OII), while the galactic morphology is characterized by the automatically-measured concentration index (e.g. Okamura, Kodaira, & Watanabe 1984), which is more closely related to the bulge-to-disk ratio of galaxies than Hubble type, and is therefore expected to behave more independently of star formation activity in a galaxy. On the other hand, the first systematic quantitative investigation of the environmental influence on the interaction of galaxies is made by using two automatically-determined objective measures: the asymmetry index and the existence of companions. The principal conclusions of this work are: (1) The concentration of the galactic light profile (characterized by the concentration index) is predominantly correlated with the relatively small-scale environment, which is characterized by the local galaxy density. (2) The star formation rate of galaxies (characterized by the EW(OII)) is correlated both with the small-scale environment (the local galaxy density) and the larger scale environment, which is characterized by the cluster membership. For weakly star forming galaxies, the star formation rate is correlated both with the local galaxy density and rich cluster membership. It also shows a correlation with poor cluster membership. For strongly star forming galaxies, the star formation rate is correlated with the local density and the poor cluster membership. (3) Interacting galaxies (characterized by the asymmetry index and/or the existence of apparent companions) show no correlation with rich cluster membership, but show a fair to strong correlation with poor cluster membership.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Gao, M
Purpose: Monte Carlo simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to a few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing for MC simulation of PBS beams. Methods: A GATE/GEANT4-based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirements, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and the worker nodes were created as spot instances. The hourly cost for the 40-node cluster was $0.63, and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot the PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulations. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
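The quoted cluster costs follow from mixing one on-demand master with spot-priced workers. A back-of-the-envelope sketch of that arithmetic is shown below; the per-hour prices are illustrative assumptions, not current AWS rates or values taken from the study.

```python
# Back-of-the-envelope cluster cost estimate for a master-on-demand / workers-on-spot
# layout like the one described above. Prices are illustrative placeholders only.
def cluster_cost_per_hour(n_nodes, on_demand_price, spot_price):
    """One on-demand master plus (n_nodes - 1) spot workers."""
    return on_demand_price + (n_nodes - 1) * spot_price

on_demand = 0.087   # assumed $/h for the master instance
spot      = 0.014   # assumed $/h for each spot worker

for nodes in (40, 100):
    print(f"{nodes:4d} nodes: ${cluster_cost_per_hour(nodes, on_demand, spot):.2f}/hour")
# With these assumed prices the 40- and 100-node clusters come out roughly in line
# with the $0.63 and $1.41 per hour quoted in the abstract.
```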
NASA Astrophysics Data System (ADS)
Bankura, Arindam; Klein, Michael L.; Carnevale, Vincenzo
2013-08-01
Ab initio molecular dynamics calculations have been used to compare and contrast the deprotonation reaction of a histidine residue in aqueous solution with the situation arising in a histidine-tryptophan cluster. The latter is used as a model of the proton storage unit present in the pore of the M2 proton conducting ion channel. We compute potentials of mean force for the dissociation of a proton from the Nδ and Nɛ positions of the imidazole group to estimate the pKas. Anticipating our results, we will see that the estimated pKa for the first protonation event of the M2 channel is in good agreement with experimental estimates. Surprisingly, despite the fact that the histidine is partially desolvated in the M2 channel, the affinity for protons is similar to that of a histidine in aqueous solution. Importantly, the electrostatic environment provided by the indoles is responsible for the stabilization of the charged imidazolium.
Wilming, Niklas; Kietzmann, Tim C; Jutras, Megan; Xue, Cheng; Treue, Stefan; Buffalo, Elizabeth A; König, Peter
2017-01-01
Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans. © The Author 2017. Published by Oxford University Press.
Wilming, Niklas; Kietzmann, Tim C.; Jutras, Megan; Xue, Cheng; Treue, Stefan; Buffalo, Elizabeth A.; König, Peter
2017-01-01
Abstract Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans. PMID:28077512
Automated Creation of Labeled Pointcloud Datasets in Support of Machine-Learning Based Perception
2017-12-01
computationally intensive 3D vector math and took more than ten seconds to segment a single LIDAR frame from the HDL-32e with the Dell XPS15 9650’s Intel...Core i7 CPU. Depth Clustering avoids the computationally intensive 3D vector math of Euclidean Clustering-based DON segmentation and, instead
Integrating IS Curriculum Knowledge through a Cluster-Computing Project--A Successful Experiment
ERIC Educational Resources Information Center
Kitchens, Fred L.; Sharma, Sushil K.; Harris, Thomas
2004-01-01
MIS curricula in business schools are challenged to provide MIS courses that give students a strong practical understanding of the basic technologies, while also providing enough hands-on experience to solve real life problems. As an experimental capstone MIS course, the authors developed a cluster-computing project to expose business students to…
Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M.
2009-09-09
SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
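As an illustration of these three roles, the sketch below drives a SLURM installation from Python: it writes a small batch script, submits it with sbatch, and inspects the pending-work queue with squeue. The script contents, node counts, and application name are hypothetical examples, not part of SLURM itself.

```python
# Minimal sketch of exercising SLURM's three roles (allocate, run, queue) from Python.
# `sbatch` and `squeue` are standard SLURM commands; the job script, resource sizes
# and application below are hypothetical examples.
import getpass
import subprocess
import textwrap

job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=demo
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=16
    #SBATCH --time=00:30:00
    srun ./my_parallel_app
""")

with open("demo.sbatch", "w") as f:
    f.write(job_script)

# Submit the job (resource allocation + execution framework)...
out = subprocess.run(["sbatch", "demo.sbatch"],
                     capture_output=True, text=True, check=True)
print(out.stdout.strip())          # e.g. "Submitted batch job 123456"

# ...and inspect the queue of pending/running work that SLURM arbitrates.
queue = subprocess.run(["squeue", "-u", getpass.getuser()],
                       capture_output=True, text=True)
print(queue.stdout)
```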
Computational and photoelectron spectroscopic study of the dipole-bound anions, indole(H2O)1,2 (-).
Buytendyk, A M; Buonaugurio, A M; Xu, S-J; Nilles, J M; Bowen, K H; Kirnosov, N; Adamowicz, L
2016-07-14
We report our joint computational and anion photoelectron spectroscopic study of indole-water cluster anions, indole(H2O)1,2 (-). The photoelectron spectra of both cluster anions show the characteristics of dipole-bound anions, and this is confirmed by our theoretical computations. The experimentally determined vertical electron detachment (VDE) energies for indole(H2O)1 (-) and indole(H2O)2 (-) are 144 meV and 251 meV, respectively. The corresponding theoretically determined VDE values for indole(H2O)1 (-) and indole(H2O)2 (-) are 124 meV and 255 meV, respectively. The vibrational features in the photoelectron spectra of these cluster anions are assigned as the vibrations of the water molecule.
Computational and photoelectron spectroscopic study of the dipole-bound anions, indole(H2O)1,2-
NASA Astrophysics Data System (ADS)
Buytendyk, A. M.; Buonaugurio, A. M.; Xu, S.-J.; Nilles, J. M.; Bowen, K. H.; Kirnosov, N.; Adamowicz, L.
2016-07-01
We report our joint computational and anion photoelectron spectroscopic study of indole-water cluster anions, indole(H2O)1,2-. The photoelectron spectra of both cluster anions show the characteristics of dipole-bound anions, and this is confirmed by our theoretical computations. The experimentally determined vertical electron detachment (VDE) energies for indole(H2O)1- and indole(H2O)2- are 144 meV and 251 meV, respectively. The corresponding theoretically determined VDE values for indole(H2O)1- and indole(H2O)2- are 124 meV and 255 meV, respectively. The vibrational features in the photoelectron spectra of these cluster anions are assigned as the vibrations of the water molecule.
Ab initio quantum chemistry: methodology and applications.
Friesner, Richard A
2005-05-10
This Perspective provides an overview of state-of-the-art ab initio quantum chemical methodology and applications. The methods that are discussed include coupled cluster theory, localized second-order Moller-Plesset perturbation theory, multireference perturbation approaches, and density functional theory. The accuracy of each approach for key chemical properties is summarized, and the computational performance is analyzed, emphasizing significant advances in algorithms and implementation over the past decade. Incorporation of a condensed-phase environment by means of mixed quantum mechanical/molecular mechanics or self-consistent reaction field techniques is presented. A wide range of illustrative applications, focusing on materials science and biology, is discussed briefly.
Massive and Distant Clusters of WISE Survey (MaDCoWS)
NASA Astrophysics Data System (ADS)
Brodwin, Mark; MaDCoWS Collaboration
2018-06-01
The Massive and Distant Clusters of WISE Survey (MaDCoWS) is a comprehensive program to detect and characterize the most massive galaxy clusters in the Universe at z ~ 1, and is the only all-sky survey sensitive to galaxy clusters at this epoch. The foundation for this program is data from the NASA Wide-field Infrared Survey Explorer (WISE). The primary goal is to study the evolution of massive galaxies in the most overdense environments at z > 1 when star formation and AGN activity may be peaking in these structures. Spitzer follow-up imaging of 2000 MaDCoWS clusters has allowed us to select the richest and/or most distant clusters for detailed study. To date we have spectroscopically confirmed over 35 MaDCoWS clusters, spanning a wide range of masses (2-11 x 10^14 Msun), out to z = 1.5. This includes the discovery of the most massive z > 1.15 cluster found to date, as well as a cluster at z = 1.23 that is lensing a z = 2.22 supernova Ia. Multiwavelength follow-up observations of these distant clusters, currently underway, will permit several novel studies of galaxy evolution in rich cluster environments at z > 1.
Origins of ultra-diffuse galaxies in the Coma cluster - I. Constraints from velocity phase-space
NASA Astrophysics Data System (ADS)
Alabi, Adebusola; Ferré-Mateu, Anna; Romanowsky, Aaron J.; Brodie, Jean; Forbes, Duncan A.; Wasserman, Asher; Bellstedt, Sabine; Martín-Navarro, Ignacio; Pandya, Viraj; Stone, Maria B.; Okabe, Nobuhiro
2018-06-01
We use Keck/DEIMOS spectroscopy to confirm the cluster membership of 16 ultra-diffuse galaxies (UDGs) in the Coma cluster, bringing the total number of spectroscopically confirmed UDGs from the Yagi et al. (Y16) catalog to 25. We also identify a new cluster background UDG, confirming that most (˜95 per cent) of the UDGs in the Y16 catalog belong to the Coma cluster. In this pilot study of Coma UDGs in velocity phase-space, we find evidence of a diverse origin for Coma cluster UDGs, similar to normal dwarf galaxies. Some UDGs in our sample are consistent with being late infalls into the cluster environment while some may have been in the cluster for ≥8 Gyr. The late infallen UDGs have higher absolute relative line-of-sight velocities, bluer optical colors, and within the projected cluster core, are smaller in size, compared to the early infalls. The early infall UDGs, which may also have formed in-situ, have been in the cluster environment for as long as the most luminous galaxies in the Coma cluster and they may be failed galaxies which experienced star formation quenching at earlier epochs.
Paradeisos: A perfect hashing algorithm for many-body eigenvalue problems
NASA Astrophysics Data System (ADS)
Jia, C. J.; Wang, Y.; Mendl, C. B.; Moritz, B.; Devereaux, T. P.
2018-03-01
We describe an essentially perfect hashing algorithm for calculating the position of an element in an ordered list, appropriate for the construction and manipulation of many-body Hamiltonian, sparse matrices. Each element of the list corresponds to an integer value whose binary representation reflects the occupation of single-particle basis states for each element in the many-body Hilbert space. The algorithm replaces conventional methods, such as binary search, for locating the elements of the ordered list, eliminating the need to store the integer representation for each element, without increasing the computational complexity. Combined with the "checkerboard" decomposition of the Hamiltonian matrix for distribution over parallel computing environments, this leads to a substantial savings in aggregate memory. While the algorithm can be applied broadly to many-body, correlated problems, we demonstrate its utility in reducing total memory consumption for a series of fermionic single-band Hubbard model calculations on small clusters with progressively larger Hilbert space dimension.
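The core idea, replacing a stored list or binary search by a direct combinatorial ranking of occupation bit patterns, can be illustrated with a short sketch. The code below shows the generic "combinadic" ranking technique for a fixed particle number; it is meant to illustrate the idea rather than reproduce the Paradeisos implementation.

```python
# Sketch of the core idea: map a basis state, stored as a bit pattern of occupied
# single-particle orbitals, directly to its position in the value-ordered basis list
# via combinatorial ("combinadic") ranking, with no lookup table or binary search.
from math import comb
from itertools import combinations

def state_index(state):
    """0-based position of `state` among all integers with the same popcount,
    listed in increasing numerical order."""
    idx, k, pos = 0, 0, 0
    while state:
        if state & 1:
            k += 1
            idx += comb(pos, k)
        state >>= 1
        pos += 1
    return idx

# check against an explicitly constructed basis: 2 fermions in 6 orbitals
basis = sorted(sum(1 << p for p in occ) for occ in combinations(range(6), 2))
assert all(state_index(s) == i for i, s in enumerate(basis))
print(state_index(0b0101))   # -> 1
```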
NASA Technical Reports Server (NTRS)
Fortenberry, Ryan C.; Crawford, T. Daniel; Lee, Timothy J.
2014-01-01
The spectroscopic constants and vibrational frequencies for the 1(sup 3)A' states of HNC, DNC, HOC+, and DOC+ are computed and discussed in this work. The reliable CcCR quartic force field based on high-level coupled cluster ab initio quantum chemical computations is exclusively utilized to provide the anharmonic potential. Then, second order vibrational perturbation theory and vibrational configuration interaction methods are employed to treat the nuclear Schroedinger equation. Second-order perturbation theory is also employed to provide spectroscopic data for all molecules examined. The relationship between these molecules and the corresponding 1(sup 3)A' HCN and HCO+ isomers is further developed here. These data are applicable to laboratory studies involving formation of HNC and HOC+ as well as astronomical observations of chemically active astrophysical environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukuda, Ryoichi, E-mail: fukuda@ims.ac.jp; Ehara, Masahiro; Elements Strategy Initiative for Catalysts and Batteries
2015-12-31
The effects from the solvent environment are specific to the electronic states; therefore, a computational scheme for solvent effects consistent with the electronic states is necessary to discuss electronic excitation of molecules in solution. The PCM (polarizable continuum model) SAC (symmetry-adapted cluster) and SAC-CI (configuration interaction) methods are developed for such purposes. The PCM SAC-CI adopts the state-specific (SS) solvation scheme, where solvent effects are self-consistently considered for every ground and excited state. For efficient computations of many excited states, we develop a perturbative approximation for the PCM SAC-CI method, called the corrected linear response (cLR) scheme. Our test calculations show that the cLR PCM SAC-CI is a very good approximation of the SS PCM SAC-CI method for polar and nonpolar solvents.
Distributed computing feasibility in a non-dedicated homogeneous distributed system
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Sun, Xian-He
1993-01-01
The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
Smith, Daniel G A; Burns, Lori A; Sirianni, Dominic A; Nascimento, Daniel R; Kumar, Ashutosh; James, Andrew M; Schriber, Jeffrey B; Zhang, Tianyuan; Zhang, Boyi; Abbott, Adam S; Berquist, Eric J; Lechner, Marvin H; Cunha, Leonardo A; Heide, Alexander G; Waldrop, Jonathan M; Takeshita, Tyler Y; Alenaizan, Asem; Neuhauser, Daniel; King, Rollin A; Simmonett, Andrew C; Turney, Justin M; Schaefer, Henry F; Evangelista, Francesco A; DePrince, A Eugene; Crawford, T Daniel; Patkowski, Konrad; Sherrill, C David
2018-06-11
Psi4NumPy demonstrates the use of efficient computational kernels from the open-source Psi4 program through the popular NumPy library for linear algebra in Python to facilitate the rapid development of clear, understandable Python computer code for new quantum chemical methods, while maintaining a relatively low execution time. Using these tools, reference implementations have been created for a number of methods, including self-consistent field (SCF), SCF response, many-body perturbation theory, coupled-cluster theory, configuration interaction, and symmetry-adapted perturbation theory. Furthermore, several reference codes have been integrated into Jupyter notebooks, allowing background, underlying theory, and formula information to be associated with the implementation. Psi4NumPy tools and associated reference implementations can lower the barrier for future development of quantum chemistry methods. These implementations also demonstrate the power of the hybrid C++/Python programming approach employed by the Psi4 program.
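As a flavor of what such NumPy-based reference implementations look like, the sketch below evaluates the closed-shell MP2 correlation energy from MO-basis two-electron integrals with a few einsum contractions. The integral and orbital-energy arrays are assumed inputs; this is an independent sketch in the same spirit, not code taken from Psi4NumPy or the Psi4 API.

```python
# Closed-shell MP2 correlation energy from MO-basis integrals, written as einsum
# contractions. `mo_eri_ovov` holds (ia|jb) in chemists' notation and the orbital
# energies are assumed to be provided by some upstream SCF step.
import numpy as np

def mp2_energy(mo_eri_ovov, eps_occ, eps_vir):
    """mo_eri_ovov[i, a, j, b] = (ia|jb); eps_* are canonical orbital energies."""
    # energy denominators e_i + e_j - e_a - e_b, shaped (i, a, j, b)
    denom = (eps_occ[:, None, None, None] - eps_vir[None, :, None, None]
             + eps_occ[None, None, :, None] - eps_vir[None, None, None, :])
    t2 = mo_eri_ovov / denom
    e_os = np.einsum("iajb,iajb->", t2, mo_eri_ovov)                      # opposite spin
    e_ss = np.einsum("iajb,iajb->", t2, mo_eri_ovov - mo_eri_ovov.swapaxes(1, 3))
    return e_os + e_ss

# tiny fake system just to exercise the function
no, nv = 2, 3
rng = np.random.default_rng(0)
eri = rng.random((no, nv, no, nv))
eri = 0.5 * (eri + eri.transpose(2, 3, 0, 1))                             # (ia|jb) = (jb|ia)
print(mp2_energy(eri, np.array([-1.5, -1.0]), np.array([0.3, 0.6, 0.9])))
```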
The relative importance of aerosol scattering and absorption in remote sensing
NASA Technical Reports Server (NTRS)
Fraser, R. S.; Kaufman, Y. J.
1983-01-01
The relative importance of aerosol optical thickness and absorption is illustrated through computing radiances for radiative transfer models. The radiance of sunlight reflected from models of the earth-atmosphere system is computed as a function of the aerosol optical thickness and its albedo of single scattering; it is noted that the albedo varies from 0.6 in an urban environment to nearly 1 in areas with low graphitic carbon content. The calculations are applied to the example of satellite measurements of biomass. It is found that when surface classifications are made by means of clustering techniques, the presence of gradients in the aerosol optical properties results in the dispersion of points in the plot correlating radiances viewed in two different directions. Finally, though such a remote sensing parameter as contrast is weakly affected by aerosol absorption, it is highly dependent on its optical thickness.
ALICE HLT Cluster operation during ALICE Run 2
NASA Astrophysics Data System (ADS)
Lehrbach, J.; Krzewicki, M.; Rohr, D.; Engel, H.; Gomez Ramirez, A.; Lindenstruth, V.; Berzano, D.;
2017-10-01
ALICE (A Large Ion Collider Experiment) is one of the four major detectors located at the LHC at CERN, focusing on the study of heavy-ion collisions. The ALICE High Level Trigger (HLT) is a compute cluster which reconstructs the events and compresses the data in real time. The data compression by the HLT is a vital part of data taking, especially during the heavy-ion runs, in order to be able to store the data; this implies that the reliability of the whole cluster is an important matter. To guarantee a consistent state among all compute nodes of the HLT cluster, we have automated the operation as much as possible. For automatic deployment of the nodes we use Foreman with locally mirrored repositories, and for configuration management of the nodes we use Puppet. Important parameters of the nodes, like temperatures, network traffic, and CPU load, are monitored with Zabbix. During periods without beam, the HLT cluster is used for tests and as one of the WLCG Grid sites to compute offline jobs in order to maximize the usage of our cluster. To prevent interference with normal HLT operations, we separate the virtual machines running the Grid jobs from the normal HLT operation via virtual networks (VLANs). In this paper we give an overview of the ALICE HLT operation in 2016.
Qudit quantum computation on matrix product states with global symmetry
NASA Astrophysics Data System (ADS)
Wang, Dongsheng; Stephen, David; Raussendorf, Robert
Resource states that contain nontrivial symmetry-protected topological order are identified for universal measurement-based quantum computation. Our resource states fall into two classes: one as the qudit generalizations of the qubit cluster state, and the other as the higher-symmetry generalizations of the spin-1 Affleck-Kennedy-Lieb-Tasaki (AKLT) state, namely, with unitary, orthogonal, or symplectic symmetry. The symmetry in cluster states protects information propagation (identity gate), while the higher symmetry in AKLT-type states enables nontrivial gate computation. This work demonstrates a close connection between measurement-based quantum computation and symmetry-protected topological order.
Qudit quantum computation on matrix product states with global symmetry
NASA Astrophysics Data System (ADS)
Wang, Dong-Sheng; Stephen, David T.; Raussendorf, Robert
2017-03-01
Resource states that contain nontrivial symmetry-protected topological order are identified for universal single-qudit measurement-based quantum computation. Our resource states fall into two classes: one as the qudit generalizations of the one-dimensional qubit cluster state, and the other as the higher-symmetry generalizations of the spin-1 Affleck-Kennedy-Lieb-Tasaki (AKLT) state, namely, with unitary, orthogonal, or symplectic symmetry. The symmetry in cluster states protects information propagation (identity gate), while the higher symmetry in AKLT-type states enables nontrivial gate computation. This work demonstrates a close connection between measurement-based quantum computation and symmetry-protected topological order.
A WISE Selection of MIR AGN in Different Environments
NASA Astrophysics Data System (ADS)
Cheeseboro, Belinda D.; Norman, Dara J.
2015-01-01
This study was undertaken to understand the role of large-scale environment in the evolution of MIR-selected AGN. In this study we examine AGN candidates in two types of environments: 7 clusters and 6 blank fields. Two types of clusters were studied in this project: 3 virialized and 4 non-virialized. The redshifts of the clusters ranged over 0.22≤z≤0.28. We used the mid-infrared WISE All-Sky database to identify AGN, applying various methods to refine our AGN candidate selection. To ascertain whether there is an excess or deficit of MIR AGN in galaxy clusters vs. blank fields, we compared the AGN candidate distributions in virialized vs. non-virialized clusters to the blank fields. After close examination and comparison of the results to X-ray selected AGN from the Gilmour et al. (2009) study, we concluded that we do not detect an excess or deficit of MIR AGN in our clusters, whether the cluster was virialized or non-virialized. This contrasts with the conclusion of the Gilmour et al. (2009) study, where there was an excess of X-ray selected AGN in clusters. We also note an interesting feature in our WISE color-color plots that might be used for further investigation. Cheeseboro was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program, which is funded by the National Science Foundation Research Experiences for Undergraduates Program (AST-1262829).
The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2
NASA Technical Reports Server (NTRS)
Kusmanoff, Antone; Martin, Nancy L.
1989-01-01
In recent years, advancements made in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift to distributed computing from centralized computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises as to how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstation. The concepts related to real time data processing are analyzed and systems are described which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or nonreal time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share the control. This research is concerned with generating general criteria and principles for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need the use of a shared resource (the mainframe) to perform their functions.
Accelerating epistasis analysis in human genetics with consumer graphics hardware.
Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H
2009-07-24
Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
Batch Computed Tomography Analysis of Projectiles
2016-05-01
error calculation. Projectiles are then grouped together according to the similarity of their components. Also discussed is graphical cluster analysis. Keywords: ballistic, armor, grouping, clustering. Fig. 10: Graphical structure of 15 clusters of the jacket/core radii profiles, with plots of the profiles contained within each cluster.
Accounting for One-Group Clustering in Effect-Size Estimation
ERIC Educational Resources Information Center
Citkowicz, Martyna; Hedges, Larry V.
2013-01-01
In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…
Fast Multipole Methods for Three-Dimensional N-body Problems
NASA Technical Reports Server (NTRS)
Koumoutsakos, P.
1995-01-01
We are developing computational tools for the simulation of three-dimensional flows past bodies undergoing arbitrary motions. High resolution viscous vortex methods have been developed that allow for extended simulations of two-dimensional configurations such as vortex generators. Our objective is to extend this methodology to three dimensions and develop a robust computational scheme for the simulation of such flows. A fundamental issue in the use of vortex methods is the ability to employ efficiently large numbers of computational elements to resolve the large range of scales that exist in complex flows. The traditional cost of the method scales as O(N²), as the N computational elements/particles induce velocities on each other, making the method unacceptable for simulations involving more than a few tens of thousands of particles. In the last decade fast methods have been developed that have operation counts of O(N log N) or O(N) (referred to as BH and GR, respectively), depending on the details of the algorithm. These methods are based on the observation that the effect of a cluster of particles at a certain distance may be approximated by a finite series expansion. In order to exploit this observation we need to decompose the element population spatially into clusters of particles and build a hierarchy of clusters (a tree data structure): smaller neighboring clusters combine to form a cluster of the next size up in the hierarchy, and so on. This hierarchy of clusters allows one to determine efficiently when the approximation is valid. This algorithm is an N-body solver that appears in many fields of engineering and science. Some examples of its diverse use are in astrophysics, molecular dynamics, micro-magnetics, boundary element simulations of electromagnetic problems, and computer animation. More recently these N-body solvers have been implemented and applied in simulations involving vortex methods. Koumoutsakos and Leonard (1995) implemented the GR scheme in two dimensions for vector computer architectures, allowing for simulations of bluff body flows using millions of particles. Winckelmans presented three-dimensional, viscous simulations of interacting vortex rings, using vortons and an implementation of a BH scheme for parallel computer architectures. Bhatt presented a vortex filament method to perform inviscid vortex ring interactions, with an alternative implementation of a BH scheme for a Connection Machine parallel computer architecture.
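The "finite series expansion" idea can be illustrated at its lowest order: when a cluster is small compared with its distance from the evaluation point (an opening-angle test), its particles are replaced by a single term at the cluster centroid; otherwise the direct sum is used. The sketch below shows this acceptance test for a hypothetical set of 2D clusters with 1/r kernels; real BH/GR codes build the clusters hierarchically in a tree.

```python
# Minimal illustration of the clustering idea behind tree codes: the influence of a
# distant group of particles is replaced by its lowest-order (monopole) term when the
# group is small compared with its distance from the target point.
import numpy as np

rng = np.random.default_rng(1)
theta = 0.5                                      # opening-angle acceptance parameter

def potential_at(target, clusters):
    phi = 0.0
    for pts, w in clusters:                      # (positions, strengths) per cluster
        centroid = np.average(pts, axis=0, weights=w)
        size = np.max(np.linalg.norm(pts - centroid, axis=1))
        dist = np.linalg.norm(target - centroid)
        if size / dist < theta:                  # far away: one centroid term
            phi += w.sum() / dist
        else:                                    # too close: sum over every particle
            phi += np.sum(w / np.linalg.norm(pts - target, axis=1))
    return phi

clusters = [(rng.random((50, 2)) + offset, rng.random(50))
            for offset in ((0.0, 0.0), (5.0, 0.0), (0.0, 8.0))]
target = np.array([10.0, 10.0])
exact = sum(np.sum(w / np.linalg.norm(p - target, axis=1)) for p, w in clusters)
print(potential_at(target, clusters), exact)     # approximation vs direct sum
```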
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The following are reported: theoretical calculations (configuration interaction, relativistic effective core potentials, polyatomics, CASSCF); proposed theoretical studies (clusters of Cu, Ag, Au, Ni, Pt, Pd, Rh, Ir, Os, Ru; transition metal cluster ions; transition metal carbide clusters; bimetallic mixed transition metal clusters); reactivity studies on transition metal clusters (reactivity with H2, C2H4, hydrocarbons; NO and CO chemisorption on surfaces). The computer facilities and codes to be used are described. 192 refs, 13 figs.
NoSOCS in SDSS - VI. The environmental dependence of AGN in clusters and field in the local Universe
NASA Astrophysics Data System (ADS)
Lopes, P. A. A.; Ribeiro, A. L. B.; Rembold, S. B.
2017-11-01
We investigated the variation in the fraction of optical active galactic nuclei (AGNs) hosts with stellar mass, as well as their local and global environments. Our sample is composed of cluster members and field galaxies at z ≤ 0.1 and we consider only strong AGN. We find a strong variation in the AGN fraction (FAGN) with stellar mass. The field population comprises a higher AGN fraction compared to the global cluster population, especially for objects with log M* > 10.6. Hence, we restricted our analysis to more massive objects. We detected a smooth variation in the FAGN with local stellar mass density for cluster objects, reaching a plateau in the field environment. As a function of cluster-centric distance we verify that FAGN is roughly constant for R > R200, but show a steep decline inwards. We have also verified the dependence of the AGN population on cluster velocity dispersion, finding a constant behaviour for low mass systems (σP ≲ 650-700 km s-1). However, there is a strong decline in FAGN for higher mass clusters (>700 km s-1). When comparing the FAGN in clusters with or without substructure, we only find different results for objects at large radii (R > R200), in the sense that clusters with substructure present some excess in the AGN fraction. Finally, we have found that the phase-space distribution of AGN cluster members is significantly different than other populations. Due to the environmental dependence of FAGN and their phase-space distribution, we interpret AGN to be the result of galaxy interactions, favoured in environments where the relative velocities are low, typical of the field, low mass groups or cluster outskirts.
A hybrid computational strategy to address WGS variant analysis in >5000 samples.
Huang, Zhuoyi; Rustagi, Navin; Veeraraghavan, Narayanan; Carroll, Andrew; Gibbs, Richard; Boerwinkle, Eric; Venkata, Manjunath Gorentla; Yu, Fuli
2016-09-10
The decreasing costs of sequencing are driving the need for cost-effective and real-time variant calling of whole genome sequencing data. The scale of these projects is far beyond the capacity of typical computing resources available to most research labs. Other infrastructures, like the cloud AWS environment and supercomputers, also have limitations, due to which large-scale joint variant calling becomes infeasible, and infrastructure-specific variant calling strategies either fail to scale up to large datasets or abandon joint calling strategies. We present a high-throughput framework including multiple variant callers for single nucleotide variant (SNV) calling, which leverages a hybrid computing infrastructure consisting of cloud AWS, supercomputers and local high performance computing infrastructures. We present a novel binning approach for large-scale joint variant calling and imputation which can scale up to over 10,000 samples while producing SNV callsets with high sensitivity and specificity. As a proof of principle, we present results of analysis on the Cohorts for Heart And Aging Research in Genomic Epidemiology (CHARGE) WGS freeze 3 dataset, in which joint calling, imputation and phasing of over 5300 whole genome samples was produced in under 6 weeks using four state-of-the-art callers. The callers used were SNPTools, GATK-HaplotypeCaller, GATK-UnifiedGenotyper and GotCloud. We used Amazon AWS, a 4000-core in-house cluster at Baylor College of Medicine, IBM power PC Blue BioU at Rice and Rhea at Oak Ridge National Laboratory (ORNL) for the computation. AWS was used for joint calling of 180 TB of BAM files, and the ORNL and Rice supercomputers were used for the imputation and phasing step. All other steps were carried out on the local compute cluster. The entire operation used 5.2 million core hours and transferred only a total of 6 TB of data across the platforms. Even with increasing sizes of whole genome datasets, ensemble joint calling of SNVs for low coverage data can be accomplished in a scalable, cost effective and fast manner by using heterogeneous computing platforms without compromising on the quality of variants.
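The binning idea, cutting the genome into windows that can be called independently and merged afterwards, can be sketched in a few lines. The window size, overlap, and chromosome lengths below are illustrative stand-ins, not the pipeline's actual parameters.

```python
# Sketch of the kind of region binning used to spread joint variant calling over many
# workers: chromosomes are cut into fixed-size, slightly overlapping windows that can
# be processed independently and merged afterwards.
def make_bins(chrom_lengths, bin_size=5_000_000, overlap=10_000):
    """Yield (chrom, start, end) tuples covering every chromosome."""
    for chrom, length in chrom_lengths.items():
        start = 0
        while start < length:
            end = min(start + bin_size, length)
            yield chrom, max(0, start - overlap), end
            start = end

chrom_lengths = {"chr20": 63_025_520, "chr21": 48_129_895}     # GRCh37-like sizes
bins = list(make_bins(chrom_lengths))
print(len(bins), bins[:3])
# Each bin becomes one independent job (e.g. one caller invocation), so joint calling
# of thousands of samples can be fanned out across cloud and supercomputer nodes.
```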
NASA Astrophysics Data System (ADS)
Georgiadis, A.; Berg, S.; Makurat, A.; Maitland, G.; Ott, H.
2013-09-01
We investigated the cluster-size distribution of the residual nonwetting phase in a sintered glass-bead porous medium at two-phase flow conditions, by means of micro-computed-tomography (μCT) imaging with pore-scale resolution. Cluster-size distribution functions and cluster volumes were obtained by image analysis for a range of injected pore volumes under both imbibition and drainage conditions; the field of view was larger than the porosity-based representative elementary volume (REV). We did not attempt to make a definition for a two-phase REV but used the nonwetting-phase cluster-size distribution as an indicator. Most of the nonwetting-phase total volume was found to be contained in clusters that were one to two orders of magnitude larger than the porosity-based REV. The largest observed clusters in fact ranged in volume from 65% to 99% of the entire nonwetting phase in the field of view. As a consequence, the largest clusters observed were statistically not represented and were found to be smaller than the estimated maximum cluster length. The results indicate that the two-phase REV is larger than the field of view attainable by μCT scanning, at a resolution which allows for the accurate determination of cluster connectivity.
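A cluster-size distribution of this kind is typically obtained by labeling connected voxel clusters in the segmented volume and histogramming their sizes. The sketch below shows that step on a synthetic binary volume with scipy.ndimage; the assumed voxel size and the random mask are placeholders, not the experimental data.

```python
# Sketch of extracting a nonwetting-phase cluster-size distribution from a segmented
# 3-d volume: label connected voxel clusters and count voxels per cluster.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
phase = rng.random((64, 64, 64)) < 0.2            # stand-in nonwetting-phase mask

labels, n_clusters = ndimage.label(phase)         # connectivity set via `structure`
sizes = np.bincount(labels.ravel())[1:]           # voxels per cluster (skip background)

voxel_volume = (5e-6) ** 3                        # assumed 5 micron voxels, in m^3
volumes = np.sort(sizes)[::-1] * voxel_volume
print(f"{n_clusters} clusters; largest holds "
      f"{100 * volumes[0] / volumes.sum():.1f}% of the nonwetting phase")
```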
Clustering for the Development of Engineering Students' Use of Enterprise 3.0
ERIC Educational Resources Information Center
Bassus, Olaf; Ahrens, Andreas; Zascerinska, Jelena
2011-01-01
Enterprise 3.0 which penetrates our society more thoroughly with the availability of broadband services has already been widely integrated into the contemporary processes and work environments. The synergy between Enterprise 3.0 and clustering advances innovation-stimulating environments in engineering education. The present research proposes…
Construction and Environment: Grade 7. Cluster IV.
ERIC Educational Resources Information Center
Calhoun, Olivia H.
A curriculum guide for grade 7, the document is devoted to the occupational cluster "Construction and Environment." It is divided into four units: urban renewal and development, urban and suburban construction and planning, megalopolis, and demography. Each unit is introduced by a statement of the topic, the unit's purpose, main ideas,…
Distributed computations in a dynamic, heterogeneous Grid environment
NASA Astrophysics Data System (ADS)
Dramlitsch, Thomas
2003-06-01
In order to face the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks has made a new kind of distributed computing possible: metacomputing or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase of processor speed: processor speeds double on average every 18 months, whereas network bandwidths double every 9 months. Due to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing. This type of distributed computing, however, differs from traditional parallel computing in many ways, since it has to deal with many problems not occurring in classical parallel computing, for example heterogeneity, authentication and slow networks, to mention only a few. Some of those problems, e.g. the allocation of distributed resources along with the provision of information about these resources to the application, have already been addressed by the Globus software. Unfortunately, as far as we know, hardly any application or middleware software takes advantage of this information, since most parallelizing algorithms for finite differencing codes are implicitly designed for single-supercomputer or cluster execution. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor. In this work we close this gap. In our thesis, we (i) show that an execution of classical parallel codes in Grid environments is possible but very slow; (ii) analyze this situation of bad performance, nail down bottlenecks in communication, and remove unnecessary overhead and other reasons for low performance; (iii) develop new and advanced algorithms for parallelization that are aware of a Grid environment, in order to generalize the traditional parallelization schemes; (iv) implement and test these new methods, replacing and comparing with the classical ones; and (v) introduce dynamic strategies that automatically adapt the running code to the nature of the underlying Grid environment. The higher the performance one can achieve for a single application by manual tuning for a Grid environment, the lower the chance that those changes are widely applicable to other programs. In our analysis as well as in our implementation we tried to keep the balance between high performance and generality. None of our changes directly affect code on the application level, which makes our algorithms applicable to a whole class of real-world applications. The implementation of our work is done within the Cactus framework using the Globus toolkit, since we think that these are the most reliable and advanced programming frameworks for supporting computations in Grid environments. On the other hand, however, we tried to be as general as possible, i.e. all methods and algorithms discussed in this thesis are independent of Cactus or Globus. The ever denser and faster networking of computers and computing centers over high-speed networks enables a new kind of distributed scientific computing, in which geographically distant computing capacities can be combined into a single whole.
The resulting virtual supercomputer, itself composed of several large machines, can be used to compute problems for which the individual machines are too small. The problems that cannot be solved numerically with today's computing capacities span all areas of modern science, ranging from astrophysics, molecular physics, bioinformatics and meteorology to number theory and fluid dynamics, to name only a few fields. Depending on the nature of the problem and the solution method, such "meta-computations" are more or less difficult. In general, such computations become harder and less efficient the more communication there is between the individual processes (or processors). The reason is that the bandwidths and latencies between two processors on the same supercomputer or cluster are, respectively, two to four orders of magnitude higher and lower than between processors that are hundreds of kilometers apart. Nevertheless, a time is now beginning in which it is possible to run even communication-intensive programs on such virtual supercomputers. One large class of communication- and computation-intensive programs is the one concerned with the solution of differential equations by means of finite differences. It is precisely this class of programs, and its operation on a virtual supercomputer, that is treated in this dissertation. Methods for carrying out such distributed computations more efficiently are developed, analyzed, and implemented. The focus is on analyzing existing, classical parallelization algorithms and extending them so that they use available information about machines and networks (e.g., provided by the Globus Toolkit) for more efficient parallelization. As far as we know, such additional information is hardly used in relevant programs, since the majority of parallelization algorithms were implicitly designed for execution on single supercomputers or clusters.
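One simple example of a Grid-aware adjustment to a classical parallelization scheme is to divide the finite-difference domain in proportion to the measured speed of each participating machine instead of splitting it evenly. The sketch below shows that proportional decomposition; the host names and speed figures are made-up, and a full scheme would also weigh network bandwidth and latency.

```python
# Toy sketch of a heterogeneity-aware decomposition: grid rows are divided among the
# participating machines in proportion to their (measured or reported) speeds.
def decompose(total_rows, speeds):
    """Return the number of rows assigned to each host, proportional to its speed."""
    shares = [int(total_rows * s / sum(speeds)) for s in speeds]
    shares[-1] += total_rows - sum(shares)     # give any rounding remainder to the last host
    return shares

speeds = {"cluster-a": 4.0, "cluster-b": 2.5, "workstation-pool": 1.0}
print(dict(zip(speeds, decompose(1200, list(speeds.values())))))
# -> {'cluster-a': 640, 'cluster-b': 400, 'workstation-pool': 160}
```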
Fluid-Structure Interaction Modeling of Modified-Porosity Parachutes and Parachute Clusters
NASA Astrophysics Data System (ADS)
Boben, Joseph J.
To increase aerodynamic performance, the geometric porosity of a ringsail spacecraft parachute canopy is sometimes increased, beyond the "rings" and "sails" with hundreds of "ring gaps" and "sail slits." This creates extra computational challenges for fluid-structure interaction (FSI) modeling of clusters of such parachutes, beyond those created by the lightness of the canopy structure, geometric complexities of hundreds of gaps and slits, and the contact between the parachutes of the cluster. In FSI computation of parachutes with such "modified geometric porosity," the flow through the "windows" created by the removal of the panels and the wider gaps created by the removal of the sails cannot be accurately modeled with the Homogenized Modeling of Geometric Porosity (HMGP), which was introduced to deal with the hundreds of gaps and slits. The flow needs to be actually resolved. All these computational challenges need to be addressed simultaneously in FSI modeling of clusters of spacecraft parachutes with modified geometric porosity. The core numerical technology is the Stabilized Space-Time FSI (SSTFSI) technique, and the contact between the parachutes is handled with the Surface-Edge-Node Contact Tracking (SENCT) technique. In the computations reported here, in addition to the SSTFSI and SENCT techniques and HMGP, we use the special techniques we have developed for removing the numerical spinning component of the parachute motion and for restoring the mesh integrity without a remesh. We present results for 2- and 3-parachute clusters with two different payload models. We also present the FSI computations we carried out for a single, subscale modified-porosity parachute.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.
A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies, like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and, specifically, for large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
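The data-parallel core of such a k-means-based analysis is the same on any distributed platform: every rank assigns its local points, accumulates per-cluster partial sums, and a global reduction yields the new centroids. The mpi4py sketch below shows this pattern in plain NumPy; it mirrors the general technique only, whereas the MSTC implementation offloads the local step to GPUs via CUDA/OpenACC.

```python
# Data-parallel k-means step: local assignment and partial sums on each rank,
# followed by a global reduction to form the new centroids.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
k, dim = 8, 4
rng = np.random.default_rng(comm.Get_rank())
local_points = rng.random((10_000, dim))               # this rank's share of the data
centroids = comm.bcast(rng.random((k, dim)) if comm.Get_rank() == 0 else None, root=0)

for _ in range(10):                                     # a few Lloyd iterations
    d = np.linalg.norm(local_points[:, None, :] - centroids[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    np.add.at(sums, assign, local_points)
    np.add.at(counts, assign, 1.0)
    comm.Allreduce(MPI.IN_PLACE, sums,   op=MPI.SUM)    # combine partial sums
    comm.Allreduce(MPI.IN_PLACE, counts, op=MPI.SUM)
    centroids = sums / np.maximum(counts, 1.0)[:, None]

if comm.Get_rank() == 0:
    print(centroids)
```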
NASA Astrophysics Data System (ADS)
Kitagawa, Yuya; Akinaga, Yoshinobu; Kawashima, Yukio; Jung, Jaewoon; Ten-no, Seiichiro
2012-06-01
A QM/MM (quantum-mechanical/molecular-mechanical) molecular-dynamics approach based on the generalized hybrid-orbital (GHO) method, in conjunction with second-order perturbation (MP2) theory and the second-order approximate coupled-cluster (CC2) model, is employed to calculate electronic properties accounting for the protein environment. Circular dichroism (CD) spectra originating from chiral disulfide bridges of oxytocin and insulin at room temperature are computed. It is shown that the sampling of thermal fluctuations of molecular geometries facilitated by the GHO-MD method plays an important role in the obtained spectra. It is demonstrated that, while the protein environment in an oxytocin molecule has a significant electrostatic influence on its chiral center, this influence is compensated by solvent-induced charges. This gives a reasonable explanation of the experimental observations. GHO-MD simulations starting from different experimental structures of insulin indicate that the existence of disulfide bridges with negative dihedral angles is crucial.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byrd, Jason N., E-mail: byrd.jason@ensco.com; ENSCO, Inc., 4849 North Wickham Road, Melbourne, Florida 32940; Lutz, Jesse J., E-mail: jesse.lutz.ctr@afit.edu
The accurate determination of the preferred Si12C12 isomer is important to guide experimental efforts directed towards synthesizing SiC nano-wires and related polymer structures, which are anticipated to be highly efficient exciton materials for opto-electronic devices. In order to definitively identify preferred isomeric structures for silicon carbon nano-clusters, highly accurate geometries, energies, and harmonic zero point energies have been computed using coupled-cluster theory with systematic extrapolation to the complete basis limit for a set of silicon carbon clusters ranging in size from SiC3 to Si12C12. It is found that post-MBPT(2) correlation energy plays a significant role in obtaining converged relative isomer energies, suggesting that predictions using low-rung density functional methods will not have adequate accuracy. Utilizing the best composite coupled-cluster energy that is still computationally feasible, entailing a 3-4 SCF and coupled-cluster singles and doubles extrapolation with triple-ζ (T) correlation, the closo Si12C12 isomer is identified to be the preferred isomer, in support of previous calculations [X. F. Duan and L. W. Burggraf, J. Chem. Phys. 142, 034303 (2015)]. Additionally, we have investigated more pragmatic approaches to obtaining accurate silicon carbide isomer energies, including the use of frozen natural orbital coupled-cluster theory and several rungs of standard and double-hybrid density functional theory. Frozen natural orbitals as a way to compute post-MBPT(2) correlation energy are found to be an excellent balance between efficiency and accuracy.
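Systematic extrapolation to the complete basis limit is commonly done with the two-point inverse-cubic form E_corr(X) = E_CBS + A X^-3 applied to correlation energies at consecutive cardinal numbers. The sketch below implements that standard formula; the energies are placeholders, and the paper's composite scheme is not necessarily this exact recipe.

```python
# Standard two-point inverse-cubic extrapolation of the correlation energy,
# E_corr(X) = E_CBS + A * X**-3, of the kind used in composite coupled-cluster
# schemes. Cardinal numbers and energies below are placeholders, not paper values.
def cbs_two_point(e_x, e_y, x, y):
    """Extrapolate correlation energies at cardinal numbers x < y to the basis limit."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

e_tz, e_qz = -1.2345, -1.2501     # hypothetical triple- and quadruple-zeta energies
print(f"E_corr(CBS) ~ {cbs_two_point(e_tz, e_qz, 3, 4):.4f} hartree")
```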
NASA Astrophysics Data System (ADS)
Fukunaga, Naoto; Konishi, Katsuaki
2015-12-01
Poly(ethylene glycol) (PEG) has been widely used for the surface protection of inorganic nanoobjects because of its virtually `inert' nature, but little attention has been paid to its inherent electronic impacts on inorganic cores. Herein, we definitively show, through studies on optical properties of a series of PEG-modified Cd10Se4(SR)10 clusters, that the surrounding PEG environments can electronically affect the properties of the inorganic core. For the clusters with PEG units directly attached to an inorganic core (R = (CH2CH2O)nOCH3, 1-PEGn, n = 3, ~7, ~17, ~46), the absorption bands, associated with the low-energy transitions, continuously blue-shifted with the increasing PEG chain length. The chain length dependencies were also observed in the photoluminescence properties, particularly in the excitation spectral profiles. By combining the spectral features of several PEG17-modified clusters (2-Cm-PEG17 and 3) whose PEG and core units are separated by various alkyl chain-based spacers, it was demonstrated that sufficiently long PEG units, including PEG17 and PEG46, cause electronic perturbations in the cluster properties when they are arranged near the inorganic core. These unique effects of the long-PEG environments could be correlated with their large dipole moments, suggesting that the polarity of the proximal chemical environment is critical when affecting the electronic properties of the inorganic cluster core. Electronic supplementary information (ESI) available: Details of synthetic procedures and characterisation data of the PEGylated thiols and clusters and additional absorption, photoluminescence emission and excitation spectral data. See DOI: 10.1039/c5nr06307h
Kee, Kerk F; Sparks, Lisa; Struppa, Daniele C; Mannucci, Mirco A; Damiano, Alberto
2016-01-01
By integrating the simplicial model of social aggregation with existing research on opinion leadership and diffusion networks, this article introduces the constructs of simplicial diffusers (mathematically defined as nodes embedded in simplexes; a simplex is a socially bonded cluster) and simplicial diffusing sets (mathematically defined as minimal covers of a simplicial complex; a simplicial complex is a social aggregation in which socially bonded clusters are embedded) to propose a strategic approach for information diffusion of cancer screenings as a health intervention on Facebook for community cancer prevention and control. This approach is novel in its incorporation of interpersonally bonded clusters, culturally distinct subgroups, and different united social entities that coexist within a larger community into a computational simulation to select sets of simplicial diffusers with the highest degree of information diffusion for health intervention dissemination. The unique contributions of the article also include seven propositions and five algorithmic steps for computationally modeling the simplicial model with Facebook data.
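The notion of a simplicial diffusing set, a small collection of socially bonded clusters whose members jointly reach every node, can be approximated computationally with a greedy set-cover heuristic, as sketched below. The friendship clusters are made-up data, and the greedy rule stands in for, rather than reproduces, the article's exact minimal-cover construction.

```python
# Illustrative sketch of picking a "diffusing set": a small collection of simplexes
# (socially bonded clusters) whose members jointly cover every node of the complex.
def greedy_cover(simplexes):
    """Return indices of simplexes whose union covers all nodes (greedy heuristic)."""
    universe = set().union(*simplexes)
    covered, chosen = set(), []
    while covered != universe:
        best = max(range(len(simplexes)), key=lambda i: len(simplexes[i] - covered))
        chosen.append(best)
        covered |= simplexes[best]
    return chosen

clusters = [{"ann", "bob", "cho"}, {"cho", "dee"}, {"dee", "eli", "fay"}, {"bob", "fay"}]
picks = greedy_cover(clusters)
print(picks, [clusters[i] for i in picks])
# Members of the chosen clusters would be the "simplicial diffusers" seeded with
# the cancer-screening messages.
```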
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes— neural connectivity maps of the brain—using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems—reads to parallel disk arrays and writes to solid-state storage—to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992
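The "partitioning a spatial index" step can be sketched with a generic space-filling-curve scheme. The snippet below is not the openconnecto.me implementation; the Morton (Z-order) key, bit width, and node count are assumptions chosen only to show how nearby image cuboids tend to map to the same database node.

    # Minimal sketch of spatial partitioning by a Morton (Z-order) key, in the
    # spirit of distributing image cuboids across database cluster nodes.
    def morton3d(x, y, z, bits=10):
        """Interleave the low `bits` bits of x, y, z into one Z-order key."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (3 * i)
            key |= ((y >> i) & 1) << (3 * i + 1)
            key |= ((z >> i) & 1) << (3 * i + 2)
        return key

    def node_for_cuboid(x, y, z, n_nodes=8):
        """Map a cuboid's corner coordinates to a cluster node index."""
        return morton3d(x, y, z) % n_nodes

    print(node_for_cuboid(512, 1024, 64))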
Transition properties from the Hermitian formulation of the coupled cluster polarization propagator
NASA Astrophysics Data System (ADS)
Tucholska, Aleksandra M.; Modrzejewski, Marcin; Moszynski, Robert
2014-09-01
Theory of one-electron transition density matrices has been formulated within the time-independent coupled cluster method for the polarization propagator [R. Moszynski, P. S. Żuchowski, and B. Jeziorski, Coll. Czech. Chem. Commun. 70, 1109 (2005)]. Working expressions have been obtained and implemented with the coupled cluster method limited to single, double, and linear triple excitations (CC3). Selected dipole and quadrupole transition probabilities of the alkaline earth atoms, computed with the new transition density matrices, are compared to the experimental data. Good agreement between theory and experiment is found. The results obtained with the new approach are of the same quality as the results obtained with the linear response coupled cluster theory. The one-electron density matrices for the ground state in the CC3 approximation have also been implemented. The dipole moments for a few representative diatomic molecules have been computed with several variants of the new approach, and the results are discussed to choose the approximation with the best balance between accuracy and computational efficiency.
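For context, the generic relations below (not the paper's CC3 working equations) show how a one-electron transition density matrix determines a dipole transition moment and, from it, an oscillator strength in atomic units; the symbols are standard textbook notation and are assumed here rather than quoted from the article.

\begin{align}
  \langle \Psi_f | \hat{\mu} | \Psi_i \rangle
    &= \sum_{pq} \mu_{pq}\, \gamma^{fi}_{qp}, \\
  f_{i \to f} &= \tfrac{2}{3}\, \omega_{fi}\,
    \bigl| \langle \Psi_f | \hat{\mu} | \Psi_i \rangle \bigr|^{2},
\end{align}

where $\gamma^{fi}$ is the one-electron transition density matrix, $\mu_{pq}$ are dipole integrals in the orbital basis, and $\omega_{fi}$ is the transition energy.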
Novel schemes for measurement-based quantum computation.
Gross, D; Eisert, J
2007-06-01
We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics, based on finitely correlated or projected entangled pair states, to go beyond the cluster-state based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In the computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
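As a rough illustration of the containerization idea (isolating one processing "spider" from the host software stack), the sketch below launches a pipeline inside a container from Python. The image name, mount paths, and command are hypothetical placeholders, not the actual XNAT/DAX deployment, and a site might equally use Singularity instead of Docker.

    # Hedged illustration: run one image-processing "spider" inside a container
    # so it carries its own dependencies, independent of the host environment.
    import subprocess

    def run_spider(image, session_dir, command):
        # Mount the session data inside the container and run the pipeline
        # command; --rm discards the container afterwards.
        subprocess.run(
            ["docker", "run", "--rm",
             "-v", f"{session_dir}:/data",
             image] + command,
            check=True,
        )

    run_spider("example/spider:latest", "/scratch/session001", ["process.py", "/data"])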
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.
2013-12-01
Dust storms have serious negative impacts on the environment, human health, and assets. The continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The developments and applications of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation for a single dust storm event may take several hours or days to run. This seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is a key factor that may determine the feasibility of the parallelization. The allocation algorithm needs to carefully balance the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically, 1) In order to get optimized solutions, a quadratic programming based modeling method is proposed. This algorithm performs well with a small number of computing tasks. However, its efficiency decreases significantly as the subdomain number and computing node number increase. 2) To compensate for the performance decrease on large-scale tasks, a K-Means clustering based algorithm is introduced. Instead of seeking optimal solutions, this method obtains relatively good feasible solutions within an acceptable time. However, it may introduce imbalanced communication among nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
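The K-Means allocation idea can be sketched in a few lines: cluster subdomain centroids by geographic location so that spatially adjacent subdomains tend to land on the same computing node, which reduces cross-node exchanges. The grid size, node count, and equal-work assumption below are illustrative only, not the presenters' cost model.

    import numpy as np
    from sklearn.cluster import KMeans

    # Subdomain centres on a toy 12 x 8 lat/lon grid.
    xx, yy = np.meshgrid(np.arange(12), np.arange(8))
    centers = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)

    n_nodes = 6
    labels = KMeans(n_clusters=n_nodes, n_init=10, random_state=0).fit_predict(centers)

    # labels[i] is the computing node assigned to subdomain i; plain K-means
    # optimizes spatial compactness, not per-node task load, which is exactly
    # the weakness noted in the abstract.
    for node in range(n_nodes):
        print(node, int((labels == node).sum()), "subdomains")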
Message Passing vs. Shared Address Space on a Cluster of SMPs
NASA Technical Reports Server (NTRS)
Shan, Hongzhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswas, Rupak
2000-01-01
The convergence of scalable computer architectures using clusters of PCs (or PC-SMPs) with commodity networking has become an attractive platform for high-end scientific computing. Currently, message-passing and shared address space (SAS) are the two leading programming paradigms for these systems. Message-passing has been standardized with MPI, and is the most common and mature programming approach. However, message-passing code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of, and programming effort required for, six applications under both programming models on a 32-CPU PC-SMP cluster. Our application suite consists of codes that typically do not exhibit high efficiency under shared memory programming due to their high communication-to-computation ratios and complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications; however, on certain classes of problems SAS performance is competitive with MPI. We also present new algorithms for improving the PC cluster performance of MPI collective operations.
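For readers unfamiliar with the message-passing style referred to here, a minimal mpi4py example of a collective operation is shown below. The partial-sum workload is a placeholder, not one of the paper's six benchmark applications; run with, for example, `mpiexec -n 4 python reduce_demo.py`.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    local = sum(range(rank * 1000, (rank + 1) * 1000))   # each rank's partial work
    total = comm.allreduce(local, op=MPI.SUM)            # collective combine

    if rank == 0:
        print("global sum:", total)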
Potential Solution of a Hardware-Software System V-Cluster for Big Data Analysis
NASA Astrophysics Data System (ADS)
Morra, G.; Tufo, H.; Yuen, D. A.; Brown, J.; Zihao, S.
2017-12-01
Today it cannot be denied that the Big Data revolution is taking place and is replacing HPC and numerical simulation as the main driver in society. Outside the immediate scientific arena, the Big Data market encompasses much more than the AGU. There are many sectors in society that Big Data can ably serve, such as government finances, hospitals, tourism, and, last but not least, scientific and engineering problems. In many countries, education has not kept pace with the demands from students outside computer science to get into Big Data science. Ultimate Vision (UV) in Beijing attempts to address this need in China by focusing part of our energy on education and training outside the immediate university environment. UV plans a strategy to maximize profits from the beginning. Therefore, we will focus on growing markets such as provincial governments, medical sectors, mass media, and education. We will not address areas such as performance for scientific collaboration, for example seismic networks, where the market share and profits are small by comparison. We have developed a software-hardware system, called V-Cluster, built with the latest NVIDIA GPUs and Intel CPUs with ample amounts of RAM (over a couple of terabytes) and local storage. We have installed an internal network with high bandwidth (over 100 Gbits/sec), and each node of V-Cluster can run at around 40 Tflops. Our system can scale linearly with the number of nodes. Our main strength in data analytics is the use of the graph-computing paradigm for optimizing the transfer rate in collaborative efforts. We focus on training and education with our clients in order to gain experience in learning about new applications. We will present the philosophy of this second generation of our Data Analytic system, whose costs fall far below those offered elsewhere.
A Multiple Sphere T-Matrix Fortran Code for Use on Parallel Computer Clusters
NASA Technical Reports Server (NTRS)
Mackowski, D. W.; Mishchenko, M. I.
2011-01-01
A general-purpose Fortran-90 code for calculation of the electromagnetic scattering and absorption properties of multiple sphere clusters is described. The code can calculate the efficiency factors and scattering matrix elements of the cluster for either fixed or random orientation with respect to the incident beam and for plane wave or localized-approximation Gaussian incident fields. In addition, the code can calculate maps of the electric field both interior and exterior to the spheres. The code is written with message passing interface instructions to enable the use on distributed memory compute clusters, and for such platforms the code can make feasible the calculation of absorption, scattering, and general EM characteristics of systems containing several thousand spheres.
Simulating the Birth of Massive Star Clusters: Is Destruction Inevitable?
NASA Astrophysics Data System (ADS)
Rosen, Anna
2013-10-01
Very early in its operation, the Hubble Space Telescope (HST) opened an entirely new frontier: study of the demographics and properties of star clusters far beyond the Milky Way. However, interpretation of HST's observations has proven difficult, and has led to the development of two conflicting models. One view is that most massive star clusters are disrupted during their infancy by feedback from newly formed stars (i.e., "infant mortality"), independent of cluster mass or environment. The other model is that most star clusters survive their infancy and are disrupted later by mass-dependent dynamical processes. Since observations at present have failed to discriminate between these views, we propose a theoretical investigation to provide new insight. We will perform radiation-hydrodynamic simulations of the formation of massive star clusters, including for the first time a realistic treatment of the most important stellar feedback processes. These simulations will elucidate the physics of stellar feedback, and allow us to determine whether cluster disruption is mass-dependent or -independent. We will also use our simulations to search for observational diagnostics that can distinguish bound from unbound clusters, and to predict how cluster disruption affects the cluster luminosity function in a variety of galactic environments.
NASA Astrophysics Data System (ADS)
Perez, Adrianna; Moreno, Jorge; Naiman, Jill; Ramirez-Ruiz, Enrico; Hopkins, Philip F.
2017-01-01
In this work, we analyze the environments surrounding star clusters of simulated merging galaxies. Our framework employs the Feedback In Realistic Environments (FIRE) model (Hopkins et al., 2014). The FIRE project is a high resolution cosmological simulation that resolves star forming regions and incorporates stellar feedback in a physically realistic way. Our project focuses on analyzing the properties of the star clusters formed in merging galaxies. The locations of these star clusters are identified with astrodendro.py, a publicly available dendrogram algorithm. Once star cluster properties are extracted, they will be used to create a sub-grid (smaller than the resolution scale of FIRE) of gas confinement in these clusters. Then, we can examine how the star clusters interact with these available gas reservoirs (either by accreting this mass or blowing it out via feedback), which will determine many properties of the cluster (star formation history, compact object accretion, etc.). These simulations will further our understanding of star formation within stellar clusters during galaxy evolution. In the future, we aim to enhance sub-grid prescriptions for feedback specific to processes within star clusters, such as interaction with stellar winds and gas accretion onto black holes and neutron stars.
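A minimal sketch of the dendrogram step mentioned above is given below. The synthetic density cube and the thresholds are placeholders, and the exact astrodendro calls reflect my reading of that package's public API rather than the authors' actual pipeline, so they may differ between versions.

    import numpy as np
    from astrodendro import Dendrogram

    # Placeholder density cube standing in for a FIRE gas snapshot.
    density = np.random.lognormal(mean=0.0, sigma=1.0, size=(64, 64, 64))

    d = Dendrogram.compute(density,
                           min_value=2.0,   # ignore voxels below this density
                           min_delta=1.0,   # required contrast for a new structure
                           min_npix=20)     # minimum voxels per structure

    # Leaves of the dendrogram are the smallest coherent structures, used here
    # as stand-ins for candidate star-cluster-forming regions.
    print("candidate structures:", len(d.leaves))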
Effect Sizes in Cluster-Randomized Designs
ERIC Educational Resources Information Center
Hedges, Larry V.
2007-01-01
Multisite research designs involving cluster randomization are becoming increasingly important in educational and behavioral research. Researchers would like to compute effect size indexes based on the standardized mean difference to compare the results of cluster-randomized studies (and corresponding quasi-experiments) with other studies and to…
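The relations below give the standard two-level variance decomposition that this literature builds on; they are generic textbook formulas rather than expressions quoted from the article, and the notation is assumed. The point they make is that the choice of standardizer (total versus within-cluster standard deviation) changes the effect size, with the intraclass correlation linking the two definitions.

\begin{align}
  \sigma_T^2 &= \sigma_B^2 + \sigma_W^2, \qquad
  \rho = \frac{\sigma_B^2}{\sigma_B^2 + \sigma_W^2}, \\
  \delta_T &= \frac{\mu^{\mathrm{T}} - \mu^{\mathrm{C}}}{\sigma_T}, \qquad
  \delta_W = \frac{\mu^{\mathrm{T}} - \mu^{\mathrm{C}}}{\sigma_W}
           = \frac{\delta_T}{\sqrt{1 - \rho}}.
\end{align}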
Physical-depth architectural requirements for generating universal photonic cluster states
NASA Astrophysics Data System (ADS)
Morley-Short, Sam; Bartolucci, Sara; Gimeno-Segovia, Mercedes; Shadbolt, Pete; Cable, Hugo; Rudolph, Terry
2018-01-01
Most leading proposals for linear-optical quantum computing (LOQC) use cluster states, which act as a universal resource for measurement-based (one-way) quantum computation. In ballistic approaches to LOQC, cluster states are generated passively from small entangled resource states using so-called fusion operations. Results from percolation theory have previously been used to argue that universal cluster states can be generated in the ballistic approach using schemes which exceed the critical threshold for percolation, but these results consider cluster states with unbounded size. Here we consider how successful percolation can be maintained using a physical architecture with fixed physical depth, assuming that the cluster state is continuously generated and measured, and therefore that only a finite portion of it is visible at any one point in time. We show that universal LOQC can be implemented using a constant-size device with modest physical depth, and that percolation can be exploited using simple pathfinding strategies without the need for high-complexity algorithms.
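The percolation argument can be made concrete with a toy bond-percolation test: fuse each bond of a square lattice with success probability p and ask whether a connected path crosses the lattice. The lattice, probability, and crossing criterion below are generic placeholders, not the photonic architecture analysed in the paper.

    import random
    from collections import deque

    def crosses(L=50, p=0.6, seed=1):
        """Return True if successful bonds connect the left edge to the right edge."""
        rng = random.Random(seed)
        # right[x][y] / up[x][y]: did the fusion on that bond succeed?
        right = [[rng.random() < p for _ in range(L)] for _ in range(L - 1)]
        up = [[rng.random() < p for _ in range(L - 1)] for _ in range(L)]
        seen = {(0, y) for y in range(L)}          # start from the whole left edge
        queue = deque(seen)
        while queue:
            x, y = queue.popleft()
            if x == L - 1:
                return True                        # reached the right edge
            nbrs = []
            if x + 1 < L and right[x][y]:
                nbrs.append((x + 1, y))
            if x > 0 and right[x - 1][y]:
                nbrs.append((x - 1, y))
            if y + 1 < L and up[x][y]:
                nbrs.append((x, y + 1))
            if y > 0 and up[x][y - 1]:
                nbrs.append((x, y - 1))
            for n in nbrs:
                if n not in seen:
                    seen.add(n)
                    queue.append(n)
        return False

    print(crosses())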
2017-01-01
We report a computational fluid dynamics–discrete element method (CFD-DEM) simulation study on the interplay between mass transfer and a heterogeneous catalyzed chemical reaction in cocurrent gas-particle flows as encountered in risers. Slip velocity, axial gas dispersion, gas bypassing, and particle mixing phenomena have been evaluated under riser flow conditions to study the complex system behavior in detail. The most important factors are found to be directly related to particle cluster formation. Low air-to-solids flux ratios lead to more heterogeneous systems, where the cluster formation is more pronounced and mass transfer more influenced. Falling clusters can be partially circumvented by the gas phase, which therefore does not fully interact with the cluster particles, leading to poor gas–solid contact efficiencies. Cluster gas–solid contact efficiencies are quantified at several gas superficial velocities, reaction rates, and dilution factors in order to gain more insight regarding the influence of clustering phenomena on the performance of riser reactors. PMID:28553011
Carlos Varas, Álvaro E; Peters, E A J F; Kuipers, J A M
2017-05-17
We report a computational fluid dynamics-discrete element method (CFD-DEM) simulation study on the interplay between mass transfer and a heterogeneous catalyzed chemical reaction in cocurrent gas-particle flows as encountered in risers. Slip velocity, axial gas dispersion, gas bypassing, and particle mixing phenomena have been evaluated under riser flow conditions to study the complex system behavior in detail. The most important factors are found to be directly related to particle cluster formation. Low air-to-solids flux ratios lead to more heterogeneous systems, where the cluster formation is more pronounced and mass transfer more influenced. Falling clusters can be partially circumvented by the gas phase, which therefore does not fully interact with the cluster particles, leading to poor gas-solid contact efficiencies. Cluster gas-solid contact efficiencies are quantified at several gas superficial velocities, reaction rates, and dilution factors in order to gain more insight regarding the influence of clustering phenomena on the performance of riser reactors.
Tidal disruption of open clusters in their parent molecular clouds
NASA Technical Reports Server (NTRS)
Long, Kevin
1989-01-01
A simple model of tidal encounters has been applied to the problem of an open cluster in a clumpy molecular cloud. The parameters of the clumps are taken from the Blitz, Stark, and Long (1988) catalog of clumps in the Rosette molecular cloud. Encounters are modeled as impulsive, rectilinear collisions between Plummer spheres, but the tidal approximation is not invoked. Mass and binding energy changes during an encounter are computed by considering the velocity impulses given to individual stars in a random realization of a Plummer sphere. Mean rates of mass and binding energy loss are then computed by integrating over many encounters. Self-similar evolutionary calculations using these rates indicate that the disruption process is most sensitive to the cluster radius and relatively insensitive to cluster mass. The calculations indicate that clusters which are born in a cloud similar to the Rosette with a cluster radius greater than about 2.5 pc will not survive long enough to leave the cloud. The majority of clusters, however, have smaller radii and will survive the passage through their parent cloud.
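A toy version of the impulsive-encounter calculation is sketched below. It uses the standard straight-line, point-mass impulse formula, dv = 2GM_p/(Vb) directed along the impact-parameter vector, and estimates heating as the spread of per-star impulses about the impulse on the cluster centre; the paper's actual model uses Plummer-sphere perturbers with clump parameters from the Rosette catalog, and all numbers below are placeholders.

    import numpy as np

    G = 4.30e-3  # gravitational constant in pc (km/s)^2 / Msun

    def impulse(b_vec, M_p, V):
        """Perpendicular velocity kick (km/s) for a straight-line point-mass passage."""
        b = np.linalg.norm(b_vec, axis=-1, keepdims=True)
        return 2.0 * G * M_p / (V * b**2) * b_vec

    rng = np.random.default_rng(0)
    # Star positions (pc) in the plane perpendicular to the clump's path,
    # relative to the cluster centre.
    stars = rng.normal(scale=1.0, size=(1000, 2))
    M_p, V, b_cluster = 3.0e3, 10.0, 5.0   # clump mass (Msun), speed (km/s), impact parameter (pc)

    dv_stars = impulse(stars + np.array([b_cluster, 0.0]), M_p, V)
    dv_centre = impulse(np.array([[b_cluster, 0.0]]), M_p, V)
    dE_per_mass = 0.5 * np.mean(np.sum((dv_stars - dv_centre) ** 2, axis=1))
    print(f"mean tidal heating per unit mass: {dE_per_mass:.2e} (km/s)^2")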
Combinations of SNP genotypes from the Wellcome Trust Case Control Study of bipolar patients.
Mellerup, Erling; Jørgensen, Martin Balslev; Dam, Henrik; Møller, Gert Lykke
2018-04-01
Combinations of genetic variants are the basis for polygenic disorders. We examined combinations of SNP genotypes taken from the 446 729 SNPs in The Wellcome Trust Case Control Study of bipolar patients. Parallel computing by graphics processing units, cloud computing, and data mining tools were used to scan The Wellcome Trust data set for combinations. Two clusters of combinations were significantly associated with bipolar disorder. One cluster contained 68 combinations, each of which included five SNP genotypes. Of the 1998 patients, 305 had combinations from this cluster in their genome, but none of the 1500 controls had any of these combinations in their genome. The other cluster contained six combinations, each of which included five SNP genotypes. Of the 1998 patients, 515 had combinations from the cluster in their genome, but none of the 1500 controls had any of these combinations in their genome. Clusters of combinations of genetic variants can be considered general risk factors for polygenic disorders, whereas accumulation of combinations from the clusters in the genome of a patient can be considered a personal risk factor.
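A brute-force toy version of the combination scan is shown below: it tests 5-SNP genotype combinations on a tiny synthetic matrix and keeps those seen in at least one case but in no control. The real study scanned 446 729 SNPs with GPUs, cloud computing, and data mining tools; the data, sizes, and coding here are invented solely to illustrate the counting logic.

    from itertools import combinations
    import numpy as np

    rng = np.random.default_rng(0)
    n_cases, n_controls, n_snps = 60, 60, 12
    cases = rng.integers(0, 3, size=(n_cases, n_snps))      # genotypes coded 0/1/2
    controls = rng.integers(0, 3, size=(n_controls, n_snps))

    hits = []
    for snps in combinations(range(n_snps), 5):
        cols = list(snps)
        case_patterns = {tuple(row) for row in cases[:, cols]}
        control_patterns = {tuple(row) for row in controls[:, cols]}
        # keep genotype combinations present in cases but absent from all controls
        for pattern in case_patterns - control_patterns:
            hits.append((snps, pattern))

    print(len(hits), "case-only 5-SNP genotype combinations in this toy data")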
Moving Object Localization Based on UHF RFID Phase and Laser Clustering
Fu, Yulu; Wang, Changlong; Liang, Gaoli; Zhang, Hua; Ur Rehman, Shafiq
2018-01-01
RFID (Radio Frequency Identification) offers a way to identify objects without any contact. However, positioning accuracy is limited since RFID provides neither distance nor bearing information about the tag. This paper proposes a new and innovative approach for the localization of a moving object using a particle filter that incorporates RFID phase and laser-based clustering from 2D laser range data. First, we calculate the phase-based velocity of the moving object from the RFID phase difference. Meanwhile, we separate laser range data into different clusters, and compute the distance-based velocity and moving direction of these clusters. We then compute and analyze the similarity between the two velocities, and select the K clusters with the best similarity scores. We predict the particles according to the velocity and moving direction of the laser clusters. Finally, we update the weights of the particles based on the K clusters and achieve the localization of the moving object. The feasibility of this approach is validated on a Scitos G5 service robot and the results show that we achieve a localization accuracy of up to 0.25 m. PMID:29522458
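The phase-based velocity step can be sketched directly from the backscatter geometry: the measured phase advances by 4πd/λ over the round trip, so the change in unwrapped phase between consecutive reads yields a radial velocity estimate. The reader frequency and sample readings below are made up for illustration, not taken from the paper.

    import numpy as np

    C = 3e8                      # speed of light, m/s
    FREQ = 920.625e6             # one UHF reader channel, Hz (illustrative)
    WAVELEN = C / FREQ

    def phase_velocity(phases, times):
        """Radial velocity from consecutive RFID phase reports (radians, seconds)."""
        unwrapped = np.unwrap(np.asarray(phases))
        dphi = np.diff(unwrapped)
        dt = np.diff(np.asarray(times))
        # d = phi * lambda / (4*pi)  =>  v = dphi/dt * lambda / (4*pi)
        return dphi / dt * WAVELEN / (4.0 * np.pi)

    print(phase_velocity([0.10, 0.55, 1.02, 1.48], [0.00, 0.05, 0.10, 0.15]))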
Subspace Clustering via Learning an Adaptive Low-Rank Graph.
Yin, Ming; Xie, Shengli; Wu, Zongze; Zhang, Yun; Gao, Junbin
2018-08-01
By using a sparse representation or low-rank representation of data, graph-based subspace clustering has recently attracted considerable attention in computer vision, given its capability and efficiency in clustering data. However, the graph weights built from the representation coefficients are not exact in the sense of the traditional, deterministically defined weights. The two steps of representation and clustering are conducted independently, so an overall optimal result cannot be guaranteed. Furthermore, it is unclear how the clustering performance will be affected by using this graph. For example, the graph parameters, i.e., the weights on edges, have to be artificially pre-specified, while it is very difficult to choose the optimum. To this end, in this paper, a novel subspace clustering method that learns an adaptive low-rank graph affinity matrix is proposed, where the affinity matrix and the representation coefficients are learned in a unified framework. As such, the pre-computed graph regularizer is effectively obviated and better performance can be achieved. Experimental results on several famous databases demonstrate that the proposed method outperforms state-of-the-art approaches in clustering.
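For contrast, the classic two-step pipeline that the paper argues against can be sketched as follows: build a self-representation with ridge-regularised least squares, symmetrise its magnitudes into an affinity matrix, and hand it to spectral clustering. This is a generic baseline with synthetic data and arbitrary parameters, not the paper's adaptive low-rank graph learning.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(0)
    # Two toy 3-dimensional subspaces inside R^30, 40 samples each (columns).
    bases = [rng.normal(size=(30, 3)) for _ in range(2)]
    X = np.hstack([B @ rng.normal(size=(3, 40)) for B in bases])

    lam = 0.1
    n = X.shape[1]
    # C = argmin ||X - XC||_F^2 + lam * ||C||_F^2  (closed form), zero the diagonal.
    C = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ X)
    np.fill_diagonal(C, 0.0)
    W = np.abs(C) + np.abs(C.T)                     # symmetric affinity matrix

    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(W)
    print(labels[:40].sum(), labels[40:].sum())     # the two halves should separate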
Weak lensing study of 16 DAFT/FADA clusters: Substructures and filaments
NASA Astrophysics Data System (ADS)
Martinet, Nicolas; Clowe, Douglas; Durret, Florence; Adami, Christophe; Acebrón, Ana; Hernandez-García, Lorena; Márquez, Isabel; Guennou, Loic; Sarron, Florian; Ulmer, Mel
2016-05-01
While our current cosmological model places galaxy clusters at the nodes of a filament network (the cosmic web), we still struggle to detect these filaments at high redshifts. We perform a weak lensing study for a sample of 16 massive, medium-high redshift (0.4
NASA Astrophysics Data System (ADS)
Hafizi, Roohollah; Hashemifar, S. Javad; Alaei, Mojtaba; Jangrouei, MohammadReza; Akbarzadeh, Hadi
2016-12-01
In this paper, we employ an evolutionary algorithm along with full-potential density functional theory (DFT) computations to perform a comprehensive search for the stable structures of stoichiometric (WS2)n nano-clusters (n = 1-9), within three different exchange-correlation functionals. Our results suggest that n = 5 and 8 are possible candidates for the low-temperature magic sizes of WS2 nano-clusters, while at temperatures above 500 K, n = 7 exhibits a relative stability comparable with that of n = 8. The electronic properties and energy gap of the lowest energy isomers were computed within several schemes, including the semilocal Perdew-Burke-Ernzerhof and Becke-Lee-Yang-Parr functionals, the hybrid B3LYP functional, the many-body DFT+GW approach, the ΔSCF method, and time-dependent DFT calculations. Vibrational spectra of the lowest lying isomers, computed by the force constant method, are used to address the IR spectra and thermal free energies of the clusters. Time-dependent density functional calculations in the real-time domain are applied to determine the full absorption spectra and optical gaps of the lowest energy isomers of the WS2 nano-clusters.
Innovative HPC architectures for the study of planetary plasma environments
NASA Astrophysics Data System (ADS)
Amaya, Jorge; Wolf, Anna; Lembège, Bertrand; Zitz, Anke; Alvarez, Damian; Lapenta, Giovanni
2016-04-01
DEEP-ER is a European Commission funded project that develops a new type of High Performance Computing architecture. The system is currently used by KU Leuven to study the effects of the solar wind on the global environments of the Earth and Mercury. The new architecture combines the versatility of Intel Xeon computing nodes with the power of the upcoming Intel Xeon Phi accelerators. Contrary to classical heterogeneous HPC architectures, where it is customary to find CPUs and accelerators in the same computing nodes, in the DEEP-ER system CPU nodes are grouped together (the Cluster) independently from the accelerator nodes (the Booster). The system is equipped with a state-of-the-art interconnection network, highly scalable and fast I/O, and a failure-recovery resiliency system. The final objective of the project is to introduce a scalable system that can be used to create the next generation of exascale supercomputers. The code iPic3D from KU Leuven is being adapted to this new architecture. This particle-in-cell code can now compute the electromagnetic fields on the Cluster while the particles are moved on the Booster side. Using fast and scalable Xeon Phi accelerators in the Booster, we can introduce many more particles per cell in the simulation than is possible in the current generation of HPC systems, allowing us to calculate fully kinetic plasmas with very low interpolation noise. The system will be used to perform fully kinetic, low-noise, 3D simulations of the interaction of the solar wind with the magnetospheres of the Earth and Mercury. Preliminary simulations have been performed at other HPC centers in order to compare the results across different systems. In this presentation we show the complexity of the plasma flow around the planets, including the development of hydrodynamic instabilities at the flanks, the presence of the collisionless shock, the magnetosheath, the magnetopause, reconnection zones, the formation of the plasma sheet and the magnetotail, and the variation of ion/electron plasma flows when crossing these frontiers. The simulations also give access to detailed information about the particle dynamics and velocity distributions at locations that can be used for comparison with satellite data.
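The Cluster/Booster split of field solve and particle push can be illustrated with an MPI communicator split. The 50/50 rank partition and the message contents below are placeholders, not the iPic3D/DEEP-ER implementation; run with an even number of ranks, e.g. `mpiexec -n 4`.

    from mpi4py import MPI

    world = MPI.COMM_WORLD
    rank, size = world.Get_rank(), world.Get_size()

    FIELDS, PARTICLES = 0, 1
    role = FIELDS if rank < size // 2 else PARTICLES   # toy partition of ranks
    group = world.Split(color=role, key=rank)          # "Cluster" vs. "Booster" group

    if role == FIELDS:
        # Field-solver ranks would update E and B here, then ship them across.
        peer = rank + size // 2
        world.send({"E": 0.0, "B": 0.0}, dest=peer, tag=0)
    else:
        peer = rank - size // 2
        fields = world.recv(source=peer, tag=0)
        # Particle-pusher ranks would move particles using the received fields.

    group.Barrier()   # synchronise within each side before the next step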
NASA Astrophysics Data System (ADS)
Wang, Ten-See
1993-07-01
Excessive base heating has been a problem for many launch vehicles. For certain designs such as the direct dump of turbine exhaust in the nozzle section and at the nozzle lip of the Space Transportation Systems Engine (STME), the potential burning of the turbine exhaust in the base region has caused tremendous concern. Two conventional approaches have been considered for predicting the base environment: (1) the empirical approach, and (2) the experimental approach. The empirical approach uses a combination of data correlations and semi-theoretical calculations. It works best for linear problems, simple physics, and simple geometry. However, it is highly suspect when complex geometry and flow physics are involved, especially when the subject is outside the historical database. The experimental approach is often used to establish databases for engineering analysis. However, it is qualitative at best for base flow problems. Other criticisms include the inability to simulate the forebody boundary layer correctly, the interference effect from tunnel walls, and the inability to scale all pertinent parameters. Furthermore, there is a contention that the information extrapolated from subscale tests with combustion is not conservative. One potential alternative to the conventional methods is computational fluid dynamics (CFD), which has none of the above restrictions and is becoming more feasible due to maturing algorithms and advancing computer technology. It provides more details of the flowfield and is only limited by computer resources. However, it has its share of criticisms as a predictive tool for the base environment. One major concern is that CFD has not been extensively tested for base flow problems. It is therefore imperative that CFD be assessed and benchmarked satisfactorily for base flows. In this study, the turbulent base flowfield from an experimental investigation of a four-engine clustered nozzle is numerically benchmarked using a pressure-based CFD method. Since cold air was the medium, accurate prediction of the base pressure distributions at high altitudes is the primary goal. Other factors which may influence the numerical results, such as the effects of grid density, turbulence model, differencing scheme, and boundary conditions, are also being addressed.
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1993-01-01
Excessive base heating has been a problem for many launch vehicles. For certain designs such as the direct dump of turbine exhaust in the nozzle section and at the nozzle lip of the Space Transportation Systems Engine (STME), the potential burning of the turbine exhaust in the base region has caused tremendous concern. Two conventional approaches have been considered for predicting the base environment: (1) the empirical approach, and (2) the experimental approach. The empirical approach uses a combination of data correlations and semi-theoretical calculations. It works best for linear problems, simple physics, and simple geometry. However, it is highly suspect when complex geometry and flow physics are involved, especially when the subject is outside the historical database. The experimental approach is often used to establish databases for engineering analysis. However, it is qualitative at best for base flow problems. Other criticisms include the inability to simulate the forebody boundary layer correctly, the interference effect from tunnel walls, and the inability to scale all pertinent parameters. Furthermore, there is a contention that the information extrapolated from subscale tests with combustion is not conservative. One potential alternative to the conventional methods is computational fluid dynamics (CFD), which has none of the above restrictions and is becoming more feasible due to maturing algorithms and advancing computer technology. It provides more details of the flowfield and is only limited by computer resources. However, it has its share of criticisms as a predictive tool for the base environment. One major concern is that CFD has not been extensively tested for base flow problems. It is therefore imperative that CFD be assessed and benchmarked satisfactorily for base flows. In this study, the turbulent base flowfield from an experimental investigation of a four-engine clustered nozzle is numerically benchmarked using a pressure-based CFD method. Since cold air was the medium, accurate prediction of the base pressure distributions at high altitudes is the primary goal. Other factors which may influence the numerical results, such as the effects of grid density, turbulence model, differencing scheme, and boundary conditions, are also being addressed. Preliminary results of the computed base pressure agreed reasonably well with the measurements. Basic base flow features such as the reverse jet, wall jet, recompression shock, and static pressure field in the plane of impingement have been captured.