DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-09-25
The Megatux platform enables the emulation of large-scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes, or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware but run actual software, enabling large scale without sacrificing fidelity.
Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan
While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr
2010-10-28
Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large-scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters. Also, no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina in bootable non-dedicated computer clusters. MOLA automates several tasks including: ligand preparation, parallel AutoDock4/Vina job distribution, and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the original operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via Ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any platform-independent computer available can be added to the cluster, without ever using the computer's hard-disk drive and without interfering with the installed operating system.
With a cluster of 10 processors, and a potential maximum speed-up of 10×, the parallel algorithm of MOLA performed with a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
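The speed-ups reported above translate directly into parallel efficiency. A minimal sketch of the arithmetic; the helper names and the example timings are ours, and only the 10-processor cluster size and the 8.64×/8.60× figures come from the abstract:

```python
# Parallel speed-up and efficiency, as used to characterize MOLA's scaling.

def speedup(serial_time, parallel_time):
    """Ratio of single-processor to cluster wall-clock time."""
    return serial_time / parallel_time

def efficiency(sp, n_procs):
    """Fraction of the ideal n-fold speed-up actually achieved."""
    return sp / n_procs

# MOLA's reported results on a 10-processor cluster:
eff_autodock = efficiency(8.64, 10)   # 0.864, i.e. 86.4% parallel efficiency
eff_vina = efficiency(8.60, 10)       # 0.86

print(f"AutoDock4 efficiency: {eff_autodock:.1%}")
print(f"Vina efficiency: {eff_vina:.1%}")
```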
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Junghyun; Gangwon, Jo; Jaehoon, Jung
Applications written solely in OpenCL or CUDA cannot execute on a cluster as a whole. Most previous approaches that extend these programming models to clusters are based on a common idea: designating a centralized host node and coordinating the other nodes with the host for computation. However, the centralized host node is a serious performance bottleneck when the number of nodes is large. In this paper, we propose a scalable and distributed OpenCL framework called SnuCL-D for large-scale clusters. SnuCL-D's remote device virtualization provides an OpenCL application with an illusion that all compute devices in a cluster are confined in a single node. To reduce the amount of control-message and data communication between nodes, SnuCL-D replicates the OpenCL host program execution and data in each node. We also propose a new OpenCL host API function and a queueing optimization technique that significantly reduce the overhead incurred by the previous centralized approaches. To show the effectiveness of SnuCL-D, we evaluate SnuCL-D with a microbenchmark and eleven benchmark applications on a large-scale CPU cluster and a medium-scale GPU cluster.
DOVIS: an implementation for high-throughput virtual screening using AutoDock.
Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques
2008-02-27
Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to screen millions of compounds in silico in a reasonable time. To meet this challenge, it is necessary to use high performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS, the docking session is fully integrated and automated in that the inputs are specified via a graphical user interface, the calculations are fully integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users to run this software on a Linux cluster platform.
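The job-distribution idea behind a DOVIS-style pipeline can be sketched as batching a ligand library across cluster processors plus a throughput estimate. This is a conceptual stand-in, not DOVIS code; the round-robin layout and the 750 compounds/processor/day midpoint are our assumptions based on the reported 500 to 1,000 range:

```python
# Splitting a ligand library into per-processor batches for a cluster queue,
# plus a wall-clock estimate from the reported per-processor throughput.

def make_batches(ligand_ids, n_workers):
    """Split the library into n_workers nearly equal batches (round-robin)."""
    batches = [[] for _ in range(n_workers)]
    for i, lig in enumerate(ligand_ids):
        batches[i % n_workers].append(lig)
    return batches

def estimated_days(n_ligands, n_processors, per_proc_per_day=750):
    """Wall-clock estimate; 750/processor/day is an assumed midpoint of the
    500-1,000 compounds/processor/day range reported for DOVIS."""
    return n_ligands / (n_processors * per_proc_per_day)

library = [f"lig{i:07d}" for i in range(1_000_000)]   # hypothetical IDs
batches = make_batches(library, 256)
print(len(batches[0]))                               # ligands per processor
print(round(estimated_days(1_000_000, 256), 1))      # estimated days
```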
Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason
2010-01-01
Large-scale in-silico screening is a necessary part of drug discovery and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environments characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables, across a Grid environment composed of different clusters, with and without virtualization. The uniform computer environment provided by virtual machines eliminated inconsistent DOCK VS results caused by heterogeneous clusters; however, the execution time for the DOCK VS increased.
In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.
Large-scale virtual screening on public cloud resources with Apache Spark.
Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola
2017-01-01
Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the message passing interface, relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, docking a publicly available target receptor against ≈2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then scaling to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
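The map/reduce decomposition that Spark-VS exploits can be sketched in plain Python: docking is a pure map over compounds, and "keep the best-scoring hits" is an associative reduce, which is what lets a MapReduce engine parallelize it with fault tolerance. Everything below (the stand-in scoring function, the top-k merge) is illustrative, not Spark-VS code:

```python
import heapq
from functools import reduce

def dock_score(compound):
    # Stand-in for invoking a real docking program on one compound; in a
    # Spark job, partitions of the library would be mapped in parallel.
    return -len(compound) % 7  # arbitrary deterministic "score"; lower = better

def top_k_merge(a, b, k=3):
    """Associative reduce step: merge two partial hit lists, keep best k."""
    return heapq.nsmallest(k, a + b, key=lambda hit: hit[1])

library = ["CCO", "c1ccccc1", "CC(=O)O", "CCN", "CCCC"]  # toy SMILES library
# map phase (trivially parallel per partition)
scored = [(c, dock_score(c)) for c in library]
# reduce phase (merge partial top-k lists into the final hit list)
hits = reduce(lambda acc, item: top_k_merge(acc, [item]), scored, [])
print(hits)
```

Because `top_k_merge` is associative, partial hit lists from different workers can be combined in any order, which is the property MapReduce frameworks rely on.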
Sleep Enhances a Spatially Mediated Generalization of Learned Values
ERIC Educational Resources Information Center
Javadi, Amir-Homayoun; Tolat, Anisha; Spiers, Hugo J.
2015-01-01
Sleep is thought to play an important role in memory consolidation. Here we tested whether sleep alters the subjective value associated with objects located in spatial clusters that were navigated to in a large-scale virtual town. We found that sleep enhances a generalization of the value of high-value objects to the value of locally clustered…
Halligan, Brian D.; Geiger, Joey F.; Vallejos, Andrew K.; Greene, Andrew S.; Twigger, Simon N.
2009-01-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of currently available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center website (http://proteomics.mcw.edu/vipdac). PMID:19358578
1001 Ways to run AutoDock Vina for virtual screening
NASA Astrophysics Data System (ADS)
Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.
2016-03-01
Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
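Point (3) above suggests a simple load-balancing trick: if per-ligand Vina run time grows with active torsions, heavy atoms and exhaustiveness, then scheduling the predicted-longest ligands first evens out worker finish times. The linear cost model and its coefficients below are hypothetical illustrations, not fitted values from the paper:

```python
def predicted_cost(n_torsions, n_heavy_atoms, exhaustiveness=8):
    # Hypothetical linear model: run time rises with each factor.
    return exhaustiveness * (1.0 + 0.5 * n_torsions + 0.05 * n_heavy_atoms)

def lpt_schedule(ligands, n_workers):
    """Longest-processing-time-first: assign each ligand to the currently
    least-loaded worker, in decreasing order of predicted cost."""
    loads = [0.0] * n_workers
    plan = [[] for _ in range(n_workers)]
    for name, cost in sorted(ligands, key=lambda x: -x[1]):
        w = loads.index(min(loads))
        loads[w] += cost
        plan[w].append(name)
    return plan, loads

# Hypothetical ligands: (name, predicted cost from torsions/heavy atoms)
ligands = [("ligA", predicted_cost(12, 40)), ("ligB", predicted_cost(2, 20)),
           ("ligC", predicted_cost(8, 30)), ("ligD", predicted_cost(1, 15))]
plan, loads = lpt_schedule(ligands, 2)
print(plan, [round(l, 1) for l in loads])
```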
A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks.
Gui, Jinsong; Zhou, Kai; Xiong, Naixue
2016-09-25
Multi-Input Multi-Output (MIMO) can improve wireless network performance. Sensors are usually single-antenna devices due to high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently exploit MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve network scalability, making it an effective topology control approach. The existing virtual MIMO-based clustering schemes neither fully explore the benefits of MIMO nor adaptively determine the clustering ranges. The clustering mechanism also needs to be further improved to extend the life of the cluster structure. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which can adaptively determine not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of cluster-head functions and the optimization of the cluster-head selection criteria and information exchange process, the ICV-MIMO scheme effectively reduces network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity remain in the same order of magnitude.
Interactive exploration of coastal restoration modeling in virtual environments
NASA Astrophysics Data System (ADS)
Gerndt, Andreas; Miller, Robert; Su, Simon; Meselhe, Ehab; Cruz-Neira, Carolina
2009-02-01
Over the last decades, Louisiana has lost a substantial part of its coastal region to the Gulf of Mexico. The goal of the project depicted in this paper is to investigate the complex ecological and geophysical system, not only to find solutions to reverse this development but also to protect the southern landscape of Louisiana from the disastrous impacts of natural hazards like hurricanes. This paper focuses on the interactive data handling of the Chenier Plain, which is only one scenario of the overall project. The challenge addressed is the interactive exploration of large-scale, time-dependent 2D simulation results and of terrain data with the high resolution that is available for this region. Besides data preparation, efficient visualization approaches optimized for use in virtual environments are presented. These are embedded in a complex framework for scientific visualization of time-dependent large-scale datasets. To provide a straightforward interface for rapid application development, a software layer called VRFlowVis has been developed. Several architectural aspects that encapsulate complex virtual reality concerns, such as multi-pipe vs. cluster-based rendering, are discussed. Moreover, the distributed post-processing architecture is investigated to prove its efficiency for the geophysical domain. Runtime measurements conclude this paper.
Sleep enhances a spatially mediated generalization of learned values
Tolat, Anisha; Spiers, Hugo J.
2015-01-01
Sleep is thought to play an important role in memory consolidation. Here we tested whether sleep alters the subjective value associated with objects located in spatial clusters that were navigated to in a large-scale virtual town. We found that sleep enhances a generalization of the value of high-value objects to the value of locally clustered objects, resulting in an impaired memory for the value of high-valued objects. Our results are consistent with (a) spatial context helping to bind items together in long-term memory and serve as a basis for generalizing across memories and (b) sleep mediating memory effects on salient/reward-related items. PMID:26373834
Gaussian mixture clustering and imputation of microarray data.
Ouyang, Ming; Welsh, William J; Georgopoulos, Panos
2004-04-12
In microarray experiments, missing entries arise from blemishes on the chips. In large-scale studies, virtually every chip contains some missing entries and more than 90% of the genes are affected. Many analysis methods require a full set of data. Either those genes with missing entries are excluded, or the missing entries are filled with estimates prior to the analyses. This study compares methods of missing value estimation. Two evaluation metrics of imputation accuracy are employed. First, the root mean squared error measures the difference between the true values and the imputed values. Second, the number of mis-clustered genes measures the difference between clustering with true values and that with imputed values; it examines the bias introduced by imputation to clustering. The Gaussian mixture clustering with model averaging imputation is superior to all other imputation methods, according to both evaluation metrics, on both time-series (correlated) and non-time series (uncorrelated) data sets.
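The first evaluation metric above, the root mean squared error between true and imputed entries, is easy to make concrete. The sketch below uses a naive row-mean imputer as a baseline stand-in for the paper's methods, on a toy expression matrix where `None` marks a missing entry (a blemish on the chip); all values are illustrative:

```python
import math

def row_mean_impute(matrix):
    """Fill each missing entry with the mean of its gene's observed values."""
    filled = []
    for row in matrix:
        observed = [v for v in row if v is not None]
        mean = sum(observed) / len(observed)
        filled.append([mean if v is None else v for v in row])
    return filled

def rmse(true_vals, imputed_vals):
    """Root mean squared error over the originally missing positions."""
    sq = [(t - i) ** 2 for t, i in zip(true_vals, imputed_vals)]
    return math.sqrt(sum(sq) / len(sq))

data = [[1.0, None, 3.0], [4.0, 4.0, None]]
imputed = row_mean_impute(data)
# Positions (0,1) and (1,2) were missing; suppose the true (pre-blemish)
# values were 2.5 and 3.0.
err = rmse([2.5, 3.0], [imputed[0][1], imputed[1][2]])
print(round(err, 3))
```

The paper's second metric, the number of mis-clustered genes, would compare a clustering computed on the true matrix against one computed on `imputed`.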
Scalable Visual Analytics of Massive Textual Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.
2007-04-01
This paper describes the first scalable implementation of a text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.
ERIC Educational Resources Information Center
Kraemer, David J. M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.
2017-01-01
Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to…
Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles
2016-01-01
Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource intense method is guaranteed to find the optimal ensemble but scales as O(2N). A recursive approximation to the optimal solution scales as O(N2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
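The linearly scaling O(N) variant described above can be sketched as a greedy pass: grow the ensemble one receptor conformation at a time, keeping an addition only if it raises the objective (here, ROC AUC of the ensemble score, taken as a compound's best, i.e. lowest, docking score across members). The docking scores and the exact greedy rule below are illustrative assumptions, not the paper's code:

```python
def auc(actives, decoys):
    """ROC AUC via the Mann-Whitney statistic; lower score = better dock."""
    wins = sum((a < d) + 0.5 * (a == d) for a in actives for d in decoys)
    return wins / (len(actives) * len(decoys))

def ensemble_scores(score_matrix, members):
    """Per-compound best (minimum) score over the chosen conformations."""
    return [min(row[m] for m in members) for row in score_matrix]

def greedy_select(score_matrix, is_active):
    n_conf = len(score_matrix[0])
    members, best = [], 0.0
    for c in range(n_conf):          # one pass: O(N) in conformations
        trial = members + [c]
        s = ensemble_scores(score_matrix, trial)
        a = auc([s[i] for i in range(len(s)) if is_active[i]],
                [s[i] for i in range(len(s)) if not is_active[i]])
        if a > best:                 # keep conformation c only if AUC improves
            members, best = trial, a
    return members, best

# rows = compounds, columns = receptor conformations (hypothetical scores)
scores = [[-9.0, -7.0], [-8.5, -6.0], [-5.0, -8.8], [-4.0, -4.5]]
active = [True, True, False, False]
members, best_auc = greedy_select(scores, active)
print(members, best_auc)
```

The exhaustive O(2^N) method would instead evaluate every subset of conformations with the same objective, guaranteeing the optimum.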
Large-Scale Networked Virtual Environments: Architecture and Applications
ERIC Educational Resources Information Center
Lamotte, Wim; Quax, Peter; Flerackers, Eddy
2008-01-01
Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…
NASA Astrophysics Data System (ADS)
Sawangwit, U.; Shanks, T.; Abdalla, F. B.; Cannon, R. D.; Croom, S. M.; Edge, A. C.; Ross, Nicholas P.; Wake, D. A.
2011-10-01
We present the angular correlation function measured from photometric samples comprising 1,562,800 luminous red galaxies (LRGs). Three LRG samples were extracted from the Sloan Digital Sky Survey (SDSS) imaging data, based on colour-cut selections at redshifts z ≈ 0.35, 0.55 and 0.7, as calibrated by the spectroscopic surveys SDSS-LRG, 2dF-SDSS LRG and QSO (quasi-stellar object) (2SLAQ) and the AAΩ-LRG survey. The galaxy samples cover ≈7600 deg^2 of sky, probing a total cosmic volume of ≈5.5 h^-3 Gpc^3. The small- and intermediate-scale correlation functions generally show significant deviations from a single power-law fit, with a well-detected break at ≈1 h^-1 Mpc, consistent with the transition scale between the one- and two-halo terms in halo occupation models. For galaxy separations of 1-20 h^-1 Mpc and at fixed luminosity, we see virtually no evolution of the clustering with redshift, and the data are consistent with a simple high-peaks biasing model where the comoving LRG space density is constant with z. At fixed z, the LRG clustering amplitude increases with luminosity in accordance with the simple high-peaks model, with a typical LRG dark matter halo mass of 10^13-10^14 h^-1 M⊙. For r < 1 h^-1 Mpc, the evolution is slightly faster and the clustering decreases towards high redshift, consistent with a virialized clustering model. However, assuming the halo occupation distribution (HOD) and Λ cold dark matter (ΛCDM) halo merger frameworks, ~2-3 per cent/Gyr of the LRGs are required to merge in order to explain the small-scale clustering evolution, consistent with previous results. At large scales, our result shows good agreement with the SDSS-LRG result of Eisenstein et al., but we find an apparent excess clustering signal beyond the baryon acoustic oscillation (BAO) scale. Angular power spectrum analyses of similar LRG samples also detect a similar apparent large-scale clustering excess, but more data are required to check for this feature in independent galaxy data sets.
Certainly, if the ΛCDM model were correct then we would have to conclude that this excess was caused by systematics at the level of Δw≈ 0.001-0.0015 in the photometric AAΩ-LRG sample.
Large-Angular-Scale Clustering as a Clue to the Source of UHECRs
NASA Astrophysics Data System (ADS)
Berlind, Andreas A.; Farrar, Glennys R.
We explore what can be learned about the sources of UHECRs from their large-angular-scale clustering (referred to as their "bias" by the cosmology community). Exploiting the clustering on large scales has the advantage over small-scale correlations of being insensitive to uncertainties in source direction from magnetic smearing or measurement error. In a Cold Dark Matter cosmology, the amplitude of large-scale clustering depends on the mass of the system, with more massive systems such as galaxy clusters clustering more strongly than less massive systems such as ordinary galaxies or AGN. Therefore, studying the large-scale clustering of UHECRs can help determine a mass scale for their sources, given the assumption that their redshift depth is as expected from the GZK cutoff. We investigate the constraining power of a given UHECR sample as a function of its cutoff energy and number of events. We show that current and future samples should be able to distinguish between the cases of their sources being galaxy clusters, ordinary galaxies, or sources that are uncorrelated with the large-scale structure of the universe.
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach: carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment of Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent on the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation scales efficiently with the number of CPU cores in the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, communication latency starts to outweigh the benefit of distributed computing and the parallel speedup levels off.
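The scaling behaviour described above can be quantified with the standard speedup and parallel-efficiency definitions. A minimal sketch (the wall-clock times below are illustrative placeholders, not the study's measured values):

```python
def speedup(t_base, t_n, n_base, n):
    """Speedup S = T(n_base)/T(n) and parallel efficiency
    E = S / (n / n_base), relative to a baseline core count."""
    s = t_base / t_n
    e = s / (n / n_base)
    return s, e

# Illustrative wall-clock hours for one simulated model year
times = {16: 10.0, 32: 5.4, 64: 2.9, 128: 2.8}

s64, e64 = speedup(times[16], times[64], 16, 64)      # near-linear region
s128, e128 = speedup(times[16], times[128], 16, 128)  # latency-bound plateau
```

With these toy numbers, efficiency stays high up to 64 cores but collapses at 128, mirroring the plateau the abstract reports.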
The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation
NASA Astrophysics Data System (ADS)
Noh, Yookyung
The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large deep surveys in multi-wavelength bands become possible. Observational analyses of large-scale structure, guided by large-volume numerical simulations, are beginning to offer complementary information and cross-checks of cosmological parameters estimated from the anisotropies in the Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution, and even galaxy formation history, is also being aided by observations of different redshift snapshots of the Universe using various tracers of large-scale structure. This dissertation covers aspects of large-scale structure from the baryon acoustic oscillation scale to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological-volume N-body simulations. Then I consider galaxy clusters and the large-scale filaments surrounding them in a high-resolution N-body simulation. I investigate the geometrical properties of galaxy cluster neighborhoods, focusing on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2016-01-01
During inactive phases of the Madden-Julian Oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of the MJO, an increase in the occurrence of large, deep cloud clusters results from the amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze cloud object data from Aqua CERES observations for the tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index. A cloud object is a contiguous region of the earth with a single dominant cloud-system type. The size distributions, defined as the footprint numbers as a function of cloud object diameter, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters, due to the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter of the combined distribution is determined and used to partition all cloud objects of a particular phase into "small" and "large" groups. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for CS. The range of the variation between two extreme phases (typically, the most active and most depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%. In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large group for both DC and CS types. These results suggest that the intraseasonal variation signals reside in the large cloud clusters, while the small cloud clusters represent background noise resulting from various types of tropical waves with different wavenumbers and propagation directions/speeds.
Large-scale motions in the universe: Using clusters of galaxies as tracers
NASA Technical Reports Server (NTRS)
Gramann, Mirt; Bahcall, Neta A.; Cen, Renyue; Gott, J. Richard
1995-01-01
Can clusters of galaxies be used to trace the large-scale peculiar velocity field of the universe? We answer this question by using large-scale cosmological simulations to compare the motions of rich clusters of galaxies with the motion of the underlying matter distribution. Three models are investigated: Ω = 1 and Ω = 0.3 cold dark matter (CDM), and Ω = 0.3 primeval baryonic isocurvature (PBI) models, all normalized to the Cosmic Background Explorer (COBE) background fluctuations. We compare the cluster and mass distributions of peculiar velocities, bulk motions, velocity dispersions, and Mach numbers as a function of scale for R ≥ 50 h⁻¹ Mpc. We also present the large-scale velocity and potential maps of clusters and of the matter. We find that clusters of galaxies trace the large-scale velocity field well and can serve as an efficient tool to constrain cosmological models. The recently reported bulk motion of clusters, 689 ± 178 km/s on a ~150 h⁻¹ Mpc scale (Lauer & Postman 1994), is larger than expected in any of the models studied (≤ 190 ± 78 km/s).
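The statistics compared in this record (bulk motion, velocity dispersion, Mach number) have simple definitions that can be sketched directly. A hedged toy example with random peculiar velocities standing in for simulated cluster velocities (the numbers are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 3-D peculiar velocities (km/s) for a sample of 200 "clusters"
v = rng.normal(0.0, 300.0, size=(200, 3))

bulk = v.mean(axis=0)                   # bulk (streaming) motion vector
bulk_speed = np.linalg.norm(bulk)       # |bulk motion| in km/s

# 3-D velocity dispersion about the bulk flow
sigma = np.sqrt(((v - bulk) ** 2).sum(axis=1).mean())

# Cosmic Mach number: ordered motion relative to random motion
mach = bulk_speed / sigma
```

For an uncorrelated sample like this the Mach number is small; a measured bulk flow as large as 689 km/s on 150 h⁻¹ Mpc scales would imply far more coherent motion than such a field produces.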
NASA Astrophysics Data System (ADS)
Drwal, Malgorzata N.; Agama, Keli; Pommier, Yves; Griffith, Renate
2013-12-01
Purely structure-based pharmacophores (SBPs) are an alternative method to ligand-based approaches and have the advantage of describing the entire interaction capability of a binding pocket. Here, we present the development of SBPs for topoisomerase I, an anticancer target with an unusual ligand binding pocket consisting of protein and DNA atoms. Different approaches to cluster and select pharmacophore features are investigated, including hierarchical clustering and energy calculations. In addition, the performance of SBPs is evaluated retrospectively and compared to the performance of ligand- and complex-based pharmacophores. SBPs emerge as a valid method in virtual screening and a complementary approach to ligand-focussed methods. The study further reveals that the choice of pharmacophore feature clustering and selection methods has a large impact on the virtual screening hit lists. A prospective application of the SBPs in virtual screening reveals that they can be used successfully to identify novel topoisomerase inhibitors.
Exploring root symbiotic programs in the model legume Medicago truncatula using EST analysis.
Journet, Etienne-Pascal; van Tuinen, Diederik; Gouzy, Jérome; Crespeau, Hervé; Carreau, Véronique; Farmer, Mary-Jo; Niebel, Andreas; Schiex, Thomas; Jaillon, Olivier; Chatagnier, Odile; Godiard, Laurence; Micheli, Fabienne; Kahn, Daniel; Gianinazzi-Pearson, Vivienne; Gamas, Pascal
2002-12-15
We report on a large-scale expressed sequence tag (EST) sequencing and analysis program aimed at characterizing the sets of genes expressed in roots of the model legume Medicago truncatula during interactions with either of two microsymbionts, the nitrogen-fixing bacterium Sinorhizobium meliloti or the arbuscular mycorrhizal fungus Glomus intraradices. We have designed specific tools for in silico analysis of EST data, in relation to chimeric cDNA detection, EST clustering, encoded protein prediction, and detection of differential expression. Our 21 473 5'- and 3'-ESTs could be grouped into 6359 EST clusters, corresponding to distinct virtual genes, along with 52 498 other M. truncatula ESTs available in the dbEST (NCBI) database that were recruited in the process. These clusters were manually annotated using a specifically developed annotation interface. Analysis of EST cluster distribution in various M. truncatula cDNA libraries, supported by a refined R test to evaluate statistical significance and by 'electronic northern' representation, enabled us to identify a large number of novel genes predicted to be up- or down-regulated during either symbiotic root interaction. These in silico analyses provide a first global view of the genetic programs for root symbioses in M. truncatula. A searchable database has been built and can be accessed through a public interface.
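The R test used for the 'electronic northern' comparisons is commonly attributed to Stekel, Git & Falciani (2000); the abstract notes the authors use a refined variant, so treat the following as an illustrative sketch of the base statistic only. For one EST cluster with counts x_i across m libraries of sizes N_i, R = Σ x_i ln(x_i / (N_i f)) with f = Σx / ΣN; larger R indicates stronger library-specific (differential) expression:

```python
import math

def r_statistic(counts, library_sizes):
    """R statistic (Stekel, Git & Falciani 2000) for differential
    expression of one EST cluster across cDNA libraries.
    Zero counts contribute nothing to the sum (x ln x -> 0)."""
    f = sum(counts) / sum(library_sizes)
    r = 0.0
    for x, n in zip(counts, library_sizes):
        if x > 0:
            r += x * math.log(x / (n * f))
    return r
```

A cluster seen equally in two equal-sized libraries gives R = 0, while one confined to a single library gives R = x ln 2.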
On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers
NASA Astrophysics Data System (ADS)
Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.
2017-10-01
This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from desktop clusters and institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large-scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.
Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.
Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A
2014-01-01
Multiple software programs are available for designing and running large-scale systems-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. It is therefore desirable to have a single platform for setting up and running large-scale simulations of models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specification is preserved by exposing all model parameters as input parameters of the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database stores and manages all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Fritz, John Floren
2013-08-27
Minimega is a simple emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. Minimega attempts to allow experiments to be brought up quickly with nearly no configuration. Minimega also includes tools for simple cluster management, as well as tools for creating Linux based virtual machine images.
Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z
2009-05-01
Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability to predict compounds of diverse structures and complex structure-activity relationships without requiring knowledge of the target 3D structure. This article reviews current progress in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility of improving the performance of machine learning methods in screening large libraries is discussed.
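The workflow the review describes, train on known actives/inactives described by structure-derived descriptors, then rank a library by predicted activity, can be sketched with a deliberately simple stand-in for the reviewed methods (nearest-centroid scoring on synthetic descriptor vectors; real applications would use molecular fingerprints and stronger learners):

```python
import numpy as np

rng = np.random.default_rng(42)
n_features = 8  # stand-ins for structural/physicochemical descriptors

# Toy training set: actives shifted in descriptor space vs. inactives
actives = rng.normal(1.0, 1.0, size=(100, n_features))
inactives = rng.normal(0.0, 1.0, size=(300, n_features))
c_act, c_inact = actives.mean(axis=0), inactives.mean(axis=0)

def score(library):
    """Nearest-centroid score: distance to the inactive centroid minus
    distance to the active centroid (higher = more active-like)."""
    d_act = np.linalg.norm(library - c_act, axis=1)
    d_inact = np.linalg.norm(library - c_inact, axis=1)
    return d_inact - d_act

# "Screen" a toy library: 5 active-like then 5 inactive-like compounds
library = np.vstack([rng.normal(1.0, 1.0, size=(5, n_features)),
                     rng.normal(0.0, 1.0, size=(5, n_features))])
s = score(library)
ranked = np.argsort(s)[::-1]   # best candidates first
```

The active-like compounds receive higher scores on average and float to the top of the ranked list, which is the essential behaviour any ligand-based screening model must deliver.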
[Parallel virtual reality visualization of extremely large medical datasets].
Tang, Min
2010-04-01
On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with hospital intranets and commonly configured hospital computers. Several kernel techniques are introduced, including the hardware structure, software framework, load balancing and virtual reality visualization. The Maximum Intensity Projection algorithm is parallelized on a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through a control panel built with the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results, playing the role of a good assistant in clinical diagnosis.
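Maximum Intensity Projection (MIP) parallelizes naturally: each cluster node projects its slab of the volume, and the partial projections combine with an element-wise maximum. A minimal NumPy sketch of that decomposition (toy volume; the paper's implementation details are not reproduced here):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum Intensity Projection: keep the brightest voxel
    along each ray through the volume."""
    return volume.max(axis=axis)

def mip_from_slabs(slabs, axis=0):
    """Parallel-style MIP: project each slab independently (as the
    cluster nodes would), then combine with an element-wise max."""
    return np.maximum.reduce([mip(s, axis=axis) for s in slabs])

# Toy 4x3x3 volume with one bright voxel
vol = np.zeros((4, 3, 3))
vol[2, 1, 1] = 7.0

proj = mip(vol)                                            # single-node result
combined = mip_from_slabs(np.array_split(vol, 2, axis=0))  # two-"node" result
```

Because max is associative, the slab-wise result is identical to the single-pass projection, which is what makes the algorithm embarrassingly parallel.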
Neighbourhood typology based on virtual audit of environmental obesogenic characteristics.
Feuillet, T; Charreire, H; Roda, C; Ben Rebah, M; Mackenbach, J D; Compernolle, S; Glonti, K; Bárdos, H; Rutter, H; De Bourdeaudhuij, I; McKee, M; Brug, J; Lakerveld, J; Oppert, J-M
2016-01-01
Virtual audit (using tools such as Google Street View) can help assess multiple characteristics of the physical environment. This exposure assessment can then be associated with health outcomes such as obesity. Strengths of virtual audit include the collection of large amounts of data, from various geographical contexts, following standard protocols. Using data from a virtual audit of obesity-related features carried out in five urban European regions, the current study aimed to (i) describe this international virtual audit dataset and (ii) identify neighbourhood patterns that can synthesize the complexity of such data and compare patterns across regions. Data were obtained from 4,486 street segments across urban regions in Belgium, France, Hungary, the Netherlands and the UK. We used multiple factor analysis and hierarchical clustering on principal components to build a typology of neighbourhoods and to identify similar/dissimilar neighbourhoods, regardless of region. Four neighbourhood clusters emerged, which differed in terms of food environment, recreational facilities and active mobility features, i.e. the three indicators derived from the factor analysis. Clusters were unequally distributed across urban regions. Neighbourhoods characterized mostly by a high level of outdoor recreational facilities were predominantly located in Greater London, whereas neighbourhoods characterized by high urban density and large numbers of food outlets were mostly located in Paris. Neighbourhoods in the Randstad conurbation, Ghent and Budapest appeared to be very similar, characterized by relatively lower residential densities, greener areas and a very low percentage of streets offering food and recreational facility items. These results provide multidimensional constructs of obesogenic characteristics that may help target at-risk neighbourhoods more effectively than isolated features. © 2016 World Obesity.
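"Hierarchical clustering on principal components" can be sketched in two steps: project the audit indicators onto their leading principal components, then apply Ward agglomerative clustering to the scores. A hedged toy version (random two-group data stands in for the real street-segment indicators; this is not the study's multiple factor analysis pipeline):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Toy audit scores: rows = street segments, columns = indicators
# (e.g. food environment, recreation, active mobility)
X = np.vstack([rng.normal(0.0, 1.0, size=(30, 3)),
               rng.normal(4.0, 1.0, size=(30, 3))])

# PCA by hand: center, then project onto the leading eigenvectors
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ vt[:2].T                      # first two principal components

# Hierarchical (Ward) clustering on the component scores
labels = fcluster(linkage(pcs, method="ward"), t=2, criterion="maxclust")
```

With two well-separated toy groups the dendrogram cut recovers them exactly; on real audit data the number of clusters is chosen from the dendrogram structure (four, in the study).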
Sex differences in virtual navigation influenced by scale and navigation experience.
Padilla, Lace M; Creem-Regehr, Sarah H; Stefanucci, Jeanine K; Cashdan, Elizabeth A
2017-04-01
The Morris water maze is a spatial abilities test adapted from the animal spatial cognition literature and has been studied in the context of sex differences in humans. This is because its standard design, which manipulates proximal (close) and distal (far) cues, applies to human navigation. However, virtual Morris water mazes test navigation skills on a scale that is vastly smaller than natural human navigation. Many researchers have argued that navigating in large and small scales is fundamentally different, and small-scale navigation might not simulate natural human navigation. Other work has suggested that navigation experience could influence spatial skills. To address the question of how individual differences influence navigational abilities in differently scaled environments, we employed both a large- (146.4 m in diameter) and a traditional- (36.6 m in diameter) scaled virtual Morris water maze along with a novel measure of navigation experience (lifetime mobility). We found sex differences on the small maze in the distal cue condition only, but in both cue-conditions on the large maze. Also, individual differences in navigation experience modulated navigation performance on the virtual water maze, showing that higher mobility was related to better performance with proximal cues for only females on the small maze, but for both males and females on the large maze.
NASA Astrophysics Data System (ADS)
Frank, Marius S.; Hättig, Christof
2018-04-01
We present a pair natural orbital (PNO)-based implementation of coupled cluster singles and doubles (CCSD) excitation energies that builds upon the previously proposed state-specific PNO approach to the excited-state eigenvalue problem. We construct the excited-state PNOs for each state separately in a truncated orbital-specific virtual basis and use a local density-fitting approximation to achieve an at most quadratic scaling of the computational cost of the PNO construction. The earlier reported excited-state PNO construction is generalized such that a smooth convergence of the results for charge-transfer states is ensured for general coupled cluster methods. We investigate the accuracy of our implementation by applying it to a large and diverse test set comprising 153 singlet excitations in organic molecules. Already moderate PNO thresholds yield mean absolute errors below 0.01 eV. The performance of the implementation is investigated through calculations on alkene chains, which reveal an at most cubic cost scaling of the CCSD iterations with system size.
NASA Technical Reports Server (NTRS)
Carey, Sean J.; Shipman, R. F.; Clark, F. O.
1996-01-01
We present large-scale images of the infrared emission of the region around the Pleiades using the ISSA data product from the IRAS mission. A residual zodiacal background and a discontinuity in the image due to the scanning strategy of the satellite necessitated special background subtraction methods. The 60/100 color image clearly shows the heating of the ambient interstellar medium by the cluster. The 12/100 and 25/100 images peak on the cluster, as expected for exposure of small dust grains to an enhanced UV radiation field; however, the 25/100 color declines to below the average interstellar value at the periphery of the cluster. Potential causes of the color deficit are discussed. A new method of identifying dense molecular material through infrared emission properties is presented. The difference between the 100 micron flux density and the 60 micron flux density scaled by the average interstellar 60/100 color ratio, ΔI₁₀₀, is a sensitive diagnostic of material with embedded heating sources (ΔI₁₀₀ < 0) and cold, dense cores (ΔI₁₀₀ > 0). The dense cores of the Taurus cloud complex as well as Lynds 1457 are clearly identified by this method, while the IR-bright but diffuse Pleiades molecular cloud is virtually indistinguishable from the nearby infrared cirrus.
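The ΔI₁₀₀ diagnostic defined above is a one-line formula: subtract from the 100 μm flux density the 60 μm flux density divided by the mean interstellar 60/100 ratio. A hedged sketch (the ratio value 0.21 is an assumed placeholder for the average cirrus color, not the paper's calibrated number):

```python
def delta_i100(i100, i60, mean_ratio=0.21):
    """Delta I_100 = I_100 - I_60 / <I_60/I_100>.
    mean_ratio is an assumed average interstellar 60/100 color ratio.
    Negative values flag embedded heating sources (excess 60 um);
    positive values flag cold, dense cores (excess 100 um)."""
    return i100 - i60 / mean_ratio
```

For example, a warm source with strong 60 μm emission gives ΔI₁₀₀ < 0, while a cold core with a 100 μm excess gives ΔI₁₀₀ > 0; cirrus at exactly the mean color gives zero.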
Cloud-enabled large-scale land surface model simulations with the NASA Land Information System
NASA Astrophysics Data System (ADS)
Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.
2017-12-01
Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA, LIS will enable meaningful simulations containing a large multi-model ensemble and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to its large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple runtime environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all of its dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes.
This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and describe the potential deployment of this information technology with other NASA applications.
Zhang, Baofeng; D'Erasmo, Michael P; Murelli, Ryan P; Gallicchio, Emilio
2016-09-30
We report the results of a binding free energy-based virtual screening campaign of a library of 77 α-hydroxytropolone derivatives against the challenging RNase H active site of the reverse transcriptase (RT) enzyme of human immunodeficiency virus-1. Multiple protonation states, rotamer states, and binding modalities of each compound were individually evaluated. The work involved more than 300 individual absolute alchemical binding free energy parallel molecular dynamics calculations and over 1 million CPU hours on national computing clusters and a local campus computational grid. The thermodynamic and structural measures obtained in this work rationalize a series of characteristics of this system useful for guiding future synthetic and biochemical efforts. The free energy model identified key ligand-dependent entropic and conformational reorganization processes difficult to capture using standard docking and scoring approaches. Binding free energy-based optimization of the lead compounds emerging from the virtual screen has yielded four compounds with very favorable binding properties, which will be the subject of further experimental investigations. This work is one of the few reported applications of advanced binding free energy models to large-scale virtual screening and optimization projects. It further demonstrates that, with suitable algorithms and automation, advanced binding free energy models can have a useful role in early-stage drug-discovery programs.
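The quantity such campaigns estimate, the absolute binding free energy ΔG°, maps directly to an experimentally meaningful dissociation constant via ΔG° = RT ln(Kd/c°) with c° = 1 M. A small sketch of that conversion (standard thermodynamics, not code from the study):

```python
import math

R = 1.987204e-3  # gas constant in kcal/(mol*K)

def kd_from_dg(dg_kcal, temp=298.15):
    """Dissociation constant (in M) implied by a standard binding
    free energy: dG = RT ln(Kd / c0) with c0 = 1 M, so
    Kd = exp(dG / RT). More negative dG -> tighter binding."""
    return math.exp(dg_kcal / (R * temp))

kd_9 = kd_from_dg(-9.0)    # ~hundreds of nM at room temperature
kd_12 = kd_from_dg(-12.0)  # tighter binder, smaller Kd
```

This is why differences of only a few kcal/mol between screened compounds, well within the resolution the free energy model targets, translate into order-of-magnitude differences in affinity.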
González-González, Ana Isabel; Orrego, Carola; Perestelo-Perez, Lilisbeth; Bermejo-Caja, Carlos Jesús; Mora, Nuria; Koatz, Débora; Ballester, Marta; Del Pino, Tasmania; Pérez-Ramos, Jeannet; Toledo-Chavarri, Ana; Robles, Noemí; Pérez-Rivas, Francisco Javier; Ramírez-Puerta, Ana Belén; Canellas-Criado, Yolanda; Del Rey-Granado, Yolanda; Muñoz-Balsa, Marcos José; Becerril-Rojas, Beatriz; Rodríguez-Morales, David; Sánchez-Perruca, Luis; Vázquez, José Ramón; Aguirre, Armando
2017-10-30
Communities of practice are based on the idea that learning involves a group of people exchanging experiences and knowledge. The e-MPODERA project aims to assess the effectiveness of a virtual community of practice at improving primary healthcare professionals' attitudes to the empowerment of patients with chronic diseases. This paper describes the protocol for a cluster randomized controlled trial. We will randomly assign 18 primary-care practices per participating region of Spain (Catalonia, Madrid and the Canary Islands) to a virtual community of practice or to usual training. The primary-care practice will be the randomization unit and the primary healthcare professional will be the unit of analysis. We will need a sample of 270 primary healthcare professionals (general practitioners and nurses) and 1382 patients. We will perform randomization after professionals and patients are selected. We will ask the intervention group to participate for 12 months in a virtual community of practice based on a web 2.0 platform. We will measure the primary outcome using the Patient-Provider Orientation Scale questionnaire administered at baseline and after 12 months. Secondary outcomes will be the sociodemographic characteristics of health professionals, sociodemographic and clinical characteristics of patients, the Patient Activation Measure questionnaire for patient activation, and outcomes regarding use of the virtual community of practice. We will fit a linear mixed-effects regression to estimate the effect of participating in the virtual community of practice. This cluster randomized controlled trial will show whether a virtual intervention for primary healthcare professionals improves attitudes to the empowerment of patients with chronic diseases. ClinicalTrials.gov, NCT02757781. Registered on 25 April 2016. Protocol version PI15.01, 22 January 2016.
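Sample sizes for cluster randomized trials like this one are inflated relative to individual randomization by the design effect DE = 1 + (m − 1) × ICC, where m is the cluster size and ICC the intracluster correlation. A hedged sketch (the ICC of 0.05 and the mapping of 5 professionals per practice to this trial are illustrative assumptions, not figures from the protocol):

```python
import math

def inflated_sample_size(n_individual, cluster_size, icc):
    """Inflate an individually-randomized sample size for cluster
    randomization using the design effect DE = 1 + (m - 1) * ICC."""
    de = 1.0 + (cluster_size - 1) * icc
    return math.ceil(n_individual * de)

# e.g. 225 professionals under individual randomization, clusters of 5,
# assumed ICC = 0.05  ->  design effect 1.2  ->  270 professionals
n = inflated_sample_size(225, 5, 0.05)
```

With a cluster size of 1 the design effect collapses to 1 and no inflation is needed, which is a useful sanity check on the formula.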
Vinson, Cynthia A
2014-04-24
NASA Astrophysics Data System (ADS)
Sieradzan, Adam K.; Makowski, Mariusz; Augustynowicz, Antoni; Liwo, Adam
2017-03-01
A general and systematic method for the derivation of the functional expressions for the effective energy terms in coarse-grained force fields of polymer chains is proposed. The method is based on the expansion of the potential of mean force of the system studied in the cluster-cumulant series and expanding the all-atom energy in the Taylor series in the squares of interatomic distances about the squares of the distances between coarse-grained centers, to obtain approximate analytical expressions for the cluster cumulants. The primary degrees of freedom to average over are the angles for collective rotation of the atoms contained in the coarse-grained interaction sites about the respective virtual-bond axes. The approach has been applied to the revision of the virtual-bond-angle, virtual-bond-torsional, and backbone-local-and-electrostatic correlation potentials for the UNited RESidue (UNRES) model of polypeptide chains, demonstrating the strong dependence of the torsional and correlation potentials on virtual-bond angles, not considered in the current UNRES. The theoretical considerations are illustrated with the potentials calculated from the ab initio potential-energy surface of terminally blocked alanine by numerical integration and with the statistical potentials derived from known protein structures. The revised torsional potentials correctly indicate that virtual-bond angles close to 90° result in the preference for the turn and helical structures, while large virtual-bond angles result in the preference for polyproline II and extended backbone geometry. The revised correlation potentials correctly reproduce the preference for the formation of β-sheet structures for large values of virtual-bond angles and for the formation of α-helical structures for virtual-bond angles close to 90°.
Large-scale exact diagonalizations reveal low-momentum scales of nuclei
NASA Astrophysics Data System (ADS)
Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.
2018-03-01
Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for ^6Li in model spaces up to N_max = 22 and to reveal the ^4He + d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that is not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
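The IR extrapolation described above can be sketched numerically. A minimal illustration, assuming the standard exponential form E(L) = E_inf + a0·exp(−2 k_inf L); the energies, length scales, and the grid-scan fitting routine below are illustrative, not the authors' code:

```python
import numpy as np

def ir_extrapolate(L, E, k_grid):
    """Fit E(L) = E_inf + a0 * exp(-2*k*L): scan k on a grid and solve
    the remaining linear least-squares problem for (E_inf, a0)."""
    best = None
    for k in k_grid:
        X = np.column_stack([np.ones_like(L), np.exp(-2.0 * k * L)])
        coef, *_ = np.linalg.lstsq(X, E, rcond=None)
        resid = float(np.sum((X @ coef - E) ** 2))
        if best is None or resid < best[0]:
            best = (resid, coef[0], coef[1], k)
    _, E_inf, a0, k_inf = best
    return E_inf, a0, k_inf

# hypothetical NCSM energies (MeV) at increasing IR lengths L_eff (fm),
# generated with a known asymptote so the fit can be checked
L_eff = np.array([8.0, 9.0, 10.0, 11.0, 12.0])
E_calc = -32.0 + 5.0 * np.exp(-2 * 0.35 * L_eff)

E_inf, a0, k_inf = ir_extrapolate(L_eff, E_calc, np.linspace(0.1, 1.0, 181))
```

With exact synthetic data the scan recovers the asymptote E_inf = −32.0 and the momentum scale k_inf = 0.35, mirroring how the extrapolated energy and the small-momentum scale are extracted from finite-basis results.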
SmallTool - a toolkit for realizing shared virtual environments on the Internet
NASA Astrophysics Data System (ADS)
Broll, Wolfgang
1998-09-01
With increasing graphics capabilities of computers and higher network communication speed, networked virtual environments have become available to a large number of people. While the virtual reality modelling language (VRML) provides users with the ability to exchange 3D data, there is still a lack of appropriate support to realize large-scale multi-user applications on the Internet. In this paper we will present SmallTool, a toolkit to support shared virtual environments on the Internet. The toolkit consists of a VRML-based parsing and rendering library, a device library, and a network library. This paper will focus on the networking architecture, provided by the network library - the distributed worlds transfer and communication protocol (DWTP). DWTP provides an application-independent network architecture to support large-scale multi-user environments on the Internet.
DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0.
Jiang, Xiaohui; Kumar, Kamal; Hu, Xin; Wallqvist, Anders; Reifman, Jaques
2008-09-08
Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.
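The master-worker load balancing that DOVIS automates can be illustrated with a toy work queue. In this sketch, `dock` and its scoring are stand-ins (a real DOVIS worker would invoke AutoDock); only the distribute-and-collect pattern is the point:

```python
from concurrent.futures import ThreadPoolExecutor

def dock(ligand):
    """Stand-in for one docking run; returns (ligand, score).
    The score here is a toy function of the name, not AutoDock's."""
    return ligand, -int(ligand[3:])

ligands = [f"lig{i:04d}" for i in range(8)]

# distribute ligands across workers and gather scores as they finish
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(dock, ligands))

best = min(results, key=results.get)   # most negative "energy" wins
```

On a Linux cluster the same pattern runs across nodes rather than threads, which is where automatic load balancing and reduced file traffic pay off.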
Dynamic Extension of a Virtualized Cluster by using Cloud Resources
NASA Astrophysics Data System (ADS)
Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter
2012-12-01
The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.
Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelmann, Christian; Lauer, Frank
This work focuses on tools for investigating algorithm performance at extreme scale with millions of concurrent threads and for evaluating the impact of future architecture choices to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper is able to provide this capability using a parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale, and its performance properties can be evaluated. The results of an initial prototype are encouraging as a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.
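The core idea — virtual MPI ranks advancing through a discrete event queue rather than running on real hardware — can be sketched with a sequential event loop. A real PDES partitions this loop across physical nodes; the ring topology and unit latency below are illustrative only:

```python
import heapq

def simulate_ring(n_ranks, hops):
    """Toy discrete-event simulation of virtual MPI ranks passing a
    message around a ring with unit link latency. Each handled event
    schedules the next hop; a PDES would distribute this loop."""
    queue = [(0.0, 0)]          # (event time, receiving rank)
    trace = []
    while queue:
        t, rank = heapq.heappop(queue)
        trace.append((t, rank))
        if len(trace) < hops:   # schedule delivery to the next rank
            heapq.heappush(queue, (t + 1.0, (rank + 1) % n_ranks))
    return trace

trace = simulate_ring(n_ranks=4, hops=6)
```

Because ranks exist only as events, the number of virtual processes can vastly exceed the physical core count, which is what lets a four-node cluster host over a million virtual MPI processes.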
Send, Robert; Kaila, Ville R. I.; Sundholm, Dage
2011-01-01
We investigate how the reduction of the virtual space affects coupled-cluster excitation energies at the approximate singles and doubles coupled-cluster level (CC2). In this reduced-virtual-space (RVS) approach, all virtual orbitals above a certain energy threshold are omitted in the correlation calculation. The effects of the RVS approach are assessed by calculations on the two lowest excitation energies of 11 biochromophores using different sizes of the virtual space. Our set of biochromophores consists of common model systems for the chromophores of the photoactive yellow protein, the green fluorescent protein, and rhodopsin. The RVS calculations show that most of the high-lying virtual orbitals can be neglected without significantly affecting the accuracy of the obtained excitation energies. Omitting all virtual orbitals above 50 eV in the correlation calculation introduces errors in the excitation energies that are smaller than 0.1 eV. By using a RVS energy threshold of 50 eV, the CC2 calculations using triple-ζ basis sets (TZVP) on protonated Schiff base retinal are accelerated by a factor of 6. We demonstrate the applicability of the RVS approach by performing CC2/TZVP calculations on the lowest singlet excitation energy of a rhodopsin model consisting of 165 atoms using RVS thresholds between 20 eV and 120 eV. The calculations on the rhodopsin model show that the RVS errors determined in the gas-phase are a very good approximation to the RVS errors in the protein environment. The RVS approach thus renders purely quantum mechanical treatments of chromophores in protein environments feasible and offers an ab initio alternative to quantum mechanics/molecular mechanics separation schemes. PMID:21663351
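The RVS truncation itself is simple to state in code: keep only virtual orbitals below the energy cutoff. A sketch with hypothetical orbital energies (the 50 eV threshold follows the abstract; the function name and numbers are illustrative):

```python
def rvs_active_virtuals(orbital_energies_ev, n_occ, threshold_ev=50.0):
    """Indices of virtual orbitals kept in the correlation treatment:
    every virtual whose energy lies below the RVS cutoff."""
    return [i for i, e in enumerate(orbital_energies_ev)
            if i >= n_occ and e < threshold_ev]

# hypothetical orbital energies in eV: 3 occupied, 5 virtual
eps = [-30.0, -15.0, -8.0, 2.0, 10.0, 45.0, 60.0, 120.0]
kept = rvs_active_virtuals(eps, n_occ=3)
```

Since CC2 cost scales steeply with the number of virtual orbitals, discarding the high-lying ones (here indices 6 and 7) is what yields the reported speedup with little loss of accuracy.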
Characterising large-scale structure with the REFLEX II cluster survey
NASA Astrophysics Data System (ADS)
Chon, Gayoung
2016-10-01
We study the large-scale structure with superclusters from the REFLEX X-ray cluster survey together with cosmological N-body simulations. It is important to construct superclusters with criteria such that they are homogeneous in their properties. We lay out our theoretical concept considering future evolution of superclusters in their definition, and show that the X-ray luminosity and halo mass functions of clusters in superclusters are found to be top-heavy, different from those of clusters in the field. We also show a promising aspect of using superclusters to study the local cluster bias and mass scaling relation with simulations.
Parallel Clustering Algorithm for Large-Scale Biological Data Sets
Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang
2014-01-01
Background Recent explosion of biological data brings a great challenge for the traditional clustering algorithms. With increasing scale of data sets, much larger memory and longer runtime are required for the cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied into the biological researches. However, the time and space complexity become a great bottleneck when handling the large-scale data sets. Moreover, the similarity matrix, whose constructing procedure takes long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix constructing procedure and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate way of data partition and reduction is designed in our method, in order to minimize the global communication cost among processes. Results A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies. PMID:24705246
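The shared-memory stage — building the similarity matrix in parallel blocks of rows — can be sketched as follows, using the negative squared Euclidean distance commonly fed to affinity propagation. This is a sketch of the partitioning idea, not the paper's implementation:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def sim_rows(X, rows):
    """One block of the affinity-propagation similarity matrix:
    negative squared Euclidean distance from `rows` to all points."""
    diff = X[rows][:, None, :] - X[None, :, :]
    return rows, -np.sum(diff ** 2, axis=-1)

def similarity_matrix(X, n_workers=2):
    n = len(X)
    S = np.empty((n, n))
    chunks = np.array_split(np.arange(n), n_workers)  # row partition
    with ThreadPoolExecutor(n_workers) as pool:
        for rows, block in pool.map(lambda r: sim_rows(X, r), chunks):
            S[rows] = block                           # write own block only
    return S

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
S = similarity_matrix(X)
```

Each worker writes only its own row block, so no synchronization is needed beyond the final join, which is the property that makes the shared-memory stage scale.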
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2015-01-01
During inactive phases of Madden-Julian Oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of MJO, a manifestation of an increase in the occurrence of large and deep cloud clusters results from an amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze the cloud object data from Aqua CERES (Clouds and the Earth's Radiant Energy System) observations between July 2006 and June 2010 for tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index, which assigns the tropics to one of the eight MJO phases each day. The cloud object is a contiguous region of the earth with a single dominant cloud-system type. The criteria for defining these cloud types are overcast footprints and cloud top pressures less than 400 hPa, but DC has higher cloud optical depths (≥ 10) than those of CS (< 10). The size distributions, defined as the footprint numbers as a function of cloud object diameters, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters due to the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter corresponding to the combined distribution is determined and used to partition all cloud objects into "small" and "large" groups of a particular phase. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for cirrostratus. The range of the variation between two extreme phases (typically, the most active and depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%. In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large groups for both DC and CS types. These results suggest that the intraseasonal variation signals reside in the large cloud clusters while the small cloud clusters represent the background noise resulting from various types of the tropical waves with different wavenumbers and propagation speeds/directions.
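The partitioning step can be sketched directly: find the diameter at which the cumulative footprint count reaches half the total, then split objects into "small" and "large" groups. The diameters and footprint counts below are hypothetical, not CERES data:

```python
def footprint_weighted_median(diameters, footprints):
    """Smallest diameter at which the cumulative footprint count
    reaches half of the total, so the two resulting groups carry
    nearly equal footprint numbers."""
    half = sum(footprints) / 2.0
    running = 0.0
    for d, f in sorted(zip(diameters, footprints)):
        running += f
        if running >= half:
            return d

# hypothetical cloud objects: diameter (km) and footprint count
diam = [100, 200, 300, 500, 800]
foot = [10, 20, 30, 25, 15]

d_med = footprint_weighted_median(diam, foot)
small = [d for d in diam if d <= d_med]
large = [d for d in diam if d > d_med]
```

Weighting by footprints rather than object counts is what makes the split balanced in area coverage, matching the "nearly equal numbers of footprints" criterion in the abstract.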
Dynamically allocated virtual clustering management system
NASA Astrophysics Data System (ADS)
Marcus, Kelvin; Cannata, Jess
2013-05-01
The U.S. Army Research Laboratory (ARL) has built a "Wireless Emulation Lab" to support research in wireless mobile networks. In our current experimentation environment, our researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks where each node could contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user's experiment due to undesirable network crosstalk, thus only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering management system (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to efficiently allocate virtual machines to idle systems and networks. The system deploys stateless nodes via network booting. The system uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex, private networks eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of the experiment. Users also control when to shut down their clusters.
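The VLAN-per-experiment isolation can be illustrated with a toy allocator: each experiment gets a private 802.1Q VLAN ID for its lifetime, so traffic cannot cross-talk. The ID range and class interface are assumptions for illustration, not the DAVC code:

```python
class VlanAllocator:
    """Toy 802.1Q VLAN allocator: one private VLAN per experiment.
    The ID range 100-199 is an arbitrary illustrative choice."""

    def __init__(self, first=100, last=199):
        self.free = list(range(first, last + 1))
        self.by_experiment = {}

    def start(self, experiment):
        vlan = self.free.pop(0)              # lowest free VLAN ID
        self.by_experiment[experiment] = vlan
        return vlan

    def shutdown(self, experiment):
        # return the experiment's VLAN to the back of the free list
        self.free.append(self.by_experiment.pop(experiment))

pool = VlanAllocator()
v_a = pool.start("expA")   # 100
v_b = pool.start("expB")   # 101
pool.shutdown("expA")      # 100 goes back to the pool
v_c = pool.start("expC")   # 102: FIFO reuse delays ID recycling
```

Appending released IDs to the back of the free list delays reuse, a common trick to avoid a new experiment seeing stale traffic on a just-freed VLAN.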
NASA Technical Reports Server (NTRS)
Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C. S.
1985-01-01
The cluster correlation function ξ_c(r) is compared with the particle correlation function ξ(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, ξ_c and ξ are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of ξ_c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), ξ_c is steeper than ξ, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of ξ_c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales.
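The two-point correlation functions compared above can be estimated from pair counts. A minimal sketch using the natural estimator ξ(r) = (DD/RR)·norm − 1 with uniform randoms in a unit box; the point counts, bins, and brute-force pair counting are illustrative, not the simulation analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_counts(points, edges):
    """Histogram of all unique pair separations."""
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    seps = d[np.triu_indices(len(points), k=1)]
    return np.histogram(seps, bins=edges)[0]

def xi_natural(data, edges, n_random=1500):
    """Natural estimator xi(r) = (DD/RR) * nR(nR-1)/(nD(nD-1)) - 1,
    with uniform randoms filling the same unit box as the data."""
    rand = rng.uniform(0.0, 1.0, size=(n_random, data.shape[1]))
    dd = pair_counts(data, edges)
    rr = pair_counts(rand, edges)
    nd, nr = len(data), n_random
    return dd / rr * (nr * (nr - 1)) / (nd * (nd - 1)) - 1.0

edges = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
data = rng.uniform(0.0, 1.0, size=(500, 3))  # unclustered test field
xi = xi_natural(data, edges)                 # expect xi ~ 0 in every bin
```

For an unclustered field ξ(r) scatters around zero; applying the same estimator to clusters versus particles in a simulation is how the amplitude ratio of ξ_c to ξ is measured.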
Formation of large-scale structure from cosmic-string loops and cold dark matter
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Scherrer, Robert J.
1987-01-01
Some results from a numerical simulation of the formation of large-scale structure from cosmic-string loops are presented. It is found that even though Gμ is required to be lower than 2 × 10⁻⁶ (where μ is the mass per unit length of the string) to give a low enough autocorrelation amplitude, there is excessive power on smaller scales, so that galaxies would be more dense than observed. The large-scale structure does not include a filamentary or connected appearance and shares with more conventional models based on Gaussian perturbations the lack of cluster-cluster correlation at the mean cluster separation scale as well as excessively small bulk velocities at these scales.
Juvrud, Joshua; Gredebäck, Gustaf; Åhs, Fredrik; Lerin, Nils; Nyström, Pär; Kastrati, Granit; Rosén, Jörgen
2018-01-01
There is a need for large-scale remote data collection in a controlled environment, and the in-home availability of virtual reality (VR) and the commercial availability of eye tracking for VR present unique and exciting opportunities for researchers. We propose and provide a proof-of-concept assessment of a robust system for large-scale in-home testing using consumer products that combines psychophysiological measures and VR, here referred to as a Virtual Lab. For the first time, this method is validated by correlating autonomic responses, skin conductance response (SCR), and pupillary dilation, in response to a spider, a beetle, and a ball using commercially available VR. Participants demonstrated greater SCR and pupillary responses to the spider, and the effect was dependent on the proximity of the stimuli to the participant, with a stronger response when the spider was close to the virtual self. We replicated these effects across two experiments and in separate physical room contexts to mimic variability in home environment. Together, these findings demonstrate the utility of pupil dilation as a marker of autonomic arousal and the feasibility to assess this in commercially available VR hardware and support a robust Virtual Lab tool for massive remote testing.
Large Scale Structure Studies: Final Results from a Rich Cluster Redshift Survey
NASA Astrophysics Data System (ADS)
Slinglend, K.; Batuski, D.; Haase, S.; Hill, J.
1995-12-01
The results from the COBE satellite show the existence of structure on scales on the order of 10% or more of the horizon scale of the universe. Rich clusters of galaxies from the Abell-ACO catalogs show evidence of structure on scales of 100 Mpc and hold the promise of confirming structure on the scale of the COBE result. Unfortunately, until now, redshift information has been unavailable for a large percentage of these clusters, so present knowledge of their three dimensional distribution has quite large uncertainties. Our approach in this effort has been to use the MX multifiber spectrometer on the Steward 2.3m to measure redshifts of at least ten galaxies in each of 88 Abell cluster fields with richness class R ≥ 1 and m_10 ≤ 16.8 (estimated z ≤ 0.12) and zero or one measured redshifts. This work has resulted in a deeper, 95% complete and more reliable sample of 3-D positions of rich clusters. The primary intent of this survey has been to constrain theoretical models for the formation of the structure we see in the universe today through 2-pt. spatial correlation function and other analyses of the large scale structures traced by these clusters. In addition, we have obtained enough redshifts per cluster to greatly improve the quality and size of the sample of reliable cluster velocity dispersions available for use in other studies of cluster properties. This new data has also allowed the construction of an updated and more reliable supercluster candidate catalog. Our efforts have resulted in effectively doubling the volume traced by these clusters. Presented here is the resulting 2-pt. spatial correlation function, as well as density plots and several other figures quantifying the large scale structure from this much deeper and complete sample. Also, with 10 or more redshifts in most of our cluster fields, we have investigated the extent of projection effects within the Abell catalog in an effort to quantify and understand how this may affect the Abell sample.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priedhorsky, Reid; Randles, Tim
Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5. File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.
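One of the sandbox checks — probing which services are reachable from inside the virtual cluster — reduces to a connectivity test like the sketch below. As the abstract notes, the hosts and ports to probe are supplied by the user at run time; nothing here encodes any real topology:

```python
import socket

def port_open(host, port, timeout=0.5):
    """Probe a single TCP port; True if something accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host, ports):
    """Tiny scan of the kind a sandbox test script might run from
    inside (or outside) a virtual cluster node."""
    return {p: port_open(host, p) for p in ports}
```

Running such a scan both inside and outside the virtual cluster and comparing the results is how one verifies that the privilege granted inside the cluster does not expose services on the physical hosts.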
Seman, Ali; Sapawi, Azizian Mohd; Salleh, Mohd Zaki
2015-06-01
Y-chromosome short tandem repeats (Y-STRs) are genetic markers with practical applications in human identification. However, where mass identification is required (e.g., in the aftermath of disasters with significant fatalities), the efficiency of the process could be improved with new statistical approaches. Clustering applications are relatively new tools for large-scale comparative genotyping, and the k-Approximate Modal Haplotype (k-AMH), an efficient algorithm for clustering large-scale Y-STR data, represents a promising method for developing these tools. In this study we improved the k-AMH and produced three new algorithms: the Nk-AMH I (including a new initial cluster center selection), the Nk-AMH II (including a new dominant weighting value), and the Nk-AMH III (combining I and II). The Nk-AMH III was the superior algorithm, with mean clustering accuracy that increased in four out of six datasets and remained at 100% in the other two. Additionally, the Nk-AMH III achieved a 2% higher overall mean clustering accuracy score than the k-AMH, as well as optimal accuracy for all datasets (0.84-1.00). With inclusion of the two new methods, the Nk-AMH III produced an optimal solution for clustering Y-STR data; thus, the algorithm has potential for further development towards fully automatic clustering of any large-scale genotypic data.
IR wireless cluster synapses of HYDRA very large neural networks
NASA Astrophysics Data System (ADS)
Jannson, Tomasz; Forrester, Thomas
2008-04-01
RF/IR wireless (virtual) synapses are critical components of HYDRA (Hyper-Distributed Robotic Autonomy) neural networks, already discussed in two earlier papers. The HYDRA network has the potential to be very large, up to 10^11 neurons and 10^18 synapses, based on already established technologies (cellular RF telephony and IR-wireless LANs). It is organized into almost fully connected IR-wireless clusters. The HYDRA neurons and synapses are very flexible, simple, and low-cost. They can be modified into a broad variety of biologically-inspired brain-like computing capabilities. In this third paper, we focus on neural hardware in general, and on IR-wireless synapses in particular. Such synapses, based on LED/LD-connections, dominate the HYDRA neural cluster.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crussell, Jonathan; Erickson, Jeremy; Fritz, David
minimega is an emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. minimega allows experiments to be brought up quickly with almost no configuration. minimega also includes tools for simple cluster management, as well as tools for creating Linux-based virtual machines. This release of minimega includes new emulated sensors for Android devices to improve the fidelity of testbeds that include mobile devices. Emulated sensors include GPS and
Cluster Based Location-Aided Routing Protocol for Large Scale Mobile Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Wang, Yi; Dong, Liang; Liang, Taotao; Yang, Xinyu; Zhang, Deyun
Routing algorithms with low overhead, stable links, and independence of the total number of nodes in the network are essential for the design and operation of large-scale wireless mobile ad hoc networks (MANETs). In this paper, we develop and analyze the Cluster Based Location-Aided Routing Protocol for MANET (C-LAR), a scalable and effective routing algorithm for MANETs. C-LAR runs on top of an adaptive cluster cover of the MANET, which can be created and maintained using, for instance, a weight-based distributed algorithm. This algorithm takes into consideration the node degree, mobility, relative distance, battery power, and link stability of mobile nodes. The hierarchical structure stabilizes the end-to-end communication paths and improves the network's scalability such that the routing overhead does not become tremendous in large-scale MANETs. The clusterheads form a connected virtual backbone in the network, determine the network's topology and stability, and provide an efficient approach to minimizing the flooding traffic during route discovery and speeding up this process as well. Furthermore, it is important to investigate how to control the total number of nodes participating in a route establishment process so as to improve the network-layer performance of a MANET. C-LAR uses geographical location information provided by the Global Positioning System to assist routing. The location information of the destination node is used to predict a smaller rectangular, isosceles-triangular, or circular request zone, selected according to the relative locations of the source and the destination, that covers the estimated region in which the destination may be located. Thus, instead of searching for the route in the entire network blindly, C-LAR confines the route-searching space to a much smaller estimated range.
Simulation results have shown that C-LAR outperforms other protocols significantly in route set up time, routing overhead, mean delay and packet collision, and simultaneously maintains low average end-to-end delay, high success delivery ratio, low control overhead, as well as low route discovery frequency.
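The rectangular request zone can be illustrated in a few lines of Python (the margin parameter and the function names are assumptions for illustration; C-LAR's actual construction also covers the isosceles-triangle and circle cases):

```python
def request_zone(src, dst, margin):
    # axis-aligned rectangle covering the source and the estimated
    # destination position, expanded by `margin` to allow for movement
    (x1, y1), (x2, y2) = src, dst
    return (min(x1, x2) - margin, min(y1, y2) - margin,
            max(x1, x2) + margin, max(y1, y2) + margin)

def in_zone(node, zone):
    # only nodes inside the zone forward the route request
    x, y = node
    xmin, ymin, xmax, ymax = zone
    return xmin <= x <= xmax and ymin <= y <= ymax

zone = request_zone((0, 0), (10, 4), margin=2)
print(zone)                   # (-2, -2, 12, 6)
print(in_zone((5, 3), zone))  # True: node participates in route discovery
print(in_zone((5, 9), zone))  # False: node drops the request
```

Confining flooding to nodes for which `in_zone` is true is what shrinks the route-search space relative to network-wide flooding.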
Virtual Computing Laboratories: A Case Study with Comparisons to Physical Computing Laboratories
ERIC Educational Resources Information Center
Burd, Stephen D.; Seazzu, Alessandro F.; Conway, Christopher
2009-01-01
Current technology enables schools to provide remote or virtual computing labs that can be implemented in multiple ways ranging from remote access to banks of dedicated workstations to sophisticated access to large-scale servers hosting virtualized workstations. This paper reports on the implementation of a specific lab using remote access to…
Virtual Environments Supporting Learning and Communication in Special Needs Education
ERIC Educational Resources Information Center
Cobb, Sue V. G.
2007-01-01
Virtual reality (VR) describes a set of technologies that allow users to explore and experience 3-dimensional computer-generated "worlds" or "environments." These virtual environments can contain representations of real or imaginary objects on a small or large scale (from modeling of molecular structures to buildings, streets, and scenery of a…
NASA Astrophysics Data System (ADS)
Cheon, M.; Chang, I.
1999-04-01
The scaling behavior for a binary fragmentation of critical percolation clusters is investigated by a large-cell Monte Carlo real-space renormalization group method in two and three dimensions. We obtain accurate values of the critical exponents λ and φ describing the scaling of the fragmentation rate and the distribution of fragments' masses produced by a binary fragmentation. Our results for λ and φ show that the fragmentation rate is proportional to the size of the mother cluster, and the scaling relation σ = 1 + λ - φ, conjectured by Edwards et al. to be valid for all dimensions, is satisfied in two and three dimensions, where σ is the crossover exponent of the average cluster number in percolation theory; this excludes the other scaling relations.
Virtualization for the LHCb Online system
NASA Astrophysics Data System (ADS)
Bonaccorsi, Enrico; Brarda, Loic; Moine, Gary; Neufeld, Niko
2011-12-01
Virtualization has long been advertised by the IT industry as a way to cut costs, optimise resource usage, and manage the complexity of large data-centres. The great number and huge heterogeneity of hardware, both industrial and custom-made, has up to now led to reluctance in the adoption of virtualization in the IT infrastructure of large experiment installations. Our experience in the LHCb experiment has shown that virtualization improves the availability and the manageability of the whole system. We have evaluated the available hypervisors / virtualization solutions and find that the Microsoft HV technology provides a high level of maturity and flexibility for our purpose. We present the results of these comparison tests, describing in detail the architecture of our virtualization infrastructure, with a special emphasis on security for services visible to the outside world. Security is achieved by a sophisticated combination of VLANs, firewalls, and virtual routing; the costs and benefits of this solution are analysed. We have adapted our cluster management tools, notably Quattor, to the needs of virtual machines, and this allows us to smoothly migrate services on physical machines to the virtualized infrastructure. The procedures for migration are also described. In the final part of the document we describe our recent R&D activities aimed at replacing the SAN backend for virtualization with a cheaper iSCSI solution; this will allow us to move all servers and related services to the virtualized infrastructure, except those doing hardware control via non-commodity PCI plug-in cards.
ALICE HLT Cluster operation during ALICE Run 2
NASA Astrophysics Data System (ADS)
Lehrbach, J.; Krzewicki, M.; Rohr, D.; Engel, H.; Gomez Ramirez, A.; Lindenstruth, V.; Berzano, D.;
2017-10-01
ALICE (A Large Ion Collider Experiment) is one of the four major detectors located at the LHC at CERN, focusing on the study of heavy-ion collisions. The ALICE High Level Trigger (HLT) is a compute cluster which reconstructs events and compresses the data in real time. The data compression by the HLT is a vital part of data taking, especially during the heavy-ion runs, in order to be able to store the data; this means that the reliability of the whole cluster is an important matter. To guarantee a consistent state among all compute nodes of the HLT cluster, we have automated operations as much as possible. For automatic deployment of the nodes we use Foreman with locally mirrored repositories, and for configuration management of the nodes we use Puppet. Important parameters of the nodes, such as temperatures, network traffic, and CPU load, are monitored with Zabbix. During periods without beam, the HLT cluster is used for tests and as one of the WLCG Grid sites to compute offline jobs in order to maximize the usage of our cluster. To prevent interference with normal HLT operations, we separate the virtual machines running the Grid jobs from normal HLT operation via virtual networks (VLANs). In this paper we give an overview of the ALICE HLT operation in 2016.
ACStor: Optimizing Access Performance of Virtual Disk Images in Clouds
Wu, Song; Wang, Yihong; Luo, Wei; ...
2017-03-02
In virtualized data centers, virtual disk images (VDIs) serve as the containers in the virtual environment, so their access performance is critical for overall system performance. Some distributed VDI chunk storage systems have been proposed to alleviate the I/O bottleneck in VM management. As the system scales up to a large number of running VMs, however, the overall network traffic inevitably becomes unbalanced, with hot spots on some VMs, leading to I/O performance degradation when accessing the VMs. Here, we propose an adaptive and collaborative VDI storage system (ACStor) to resolve this performance issue. In comparison with existing research, our solution is able to dynamically balance the traffic workloads in accessing VDI chunks, based on the run-time network state. Specifically, compute nodes with lightly loaded traffic are adaptively assigned more chunk access requests from remote VMs and vice versa, which effectively eliminates the above problem and thus improves the I/O performance of VMs. We also implement a prototype based on our ACStor design, and evaluate it with various benchmarks on a real cluster with 32 nodes and a simulated platform with 256 nodes. Experiments show that under different network traffic patterns of data centers, our solution achieves up to a 2-8x performance gain in VM booting time and VMs' I/O throughput, in comparison with other state-of-the-art approaches.
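The core load-balancing idea, routing each chunk request to the currently least-loaded compute node, can be sketched as a greedy toy model (unit cost per chunk and all names are assumptions; ACStor's actual policy is driven by measured run-time network state):

```python
import heapq

def assign_chunks(requests, load):
    """Greedy sketch: send each chunk access request to the compute
    node with the lowest current load, then bump that node's load."""
    heap = [(l, n) for n, l in load.items()]
    heapq.heapify(heap)
    placement = {}
    for req in requests:
        l, n = heapq.heappop(heap)   # least-loaded node
        placement[req] = n
        heapq.heappush(heap, (l + 1, n))  # assume unit cost per chunk
    return placement

p = assign_chunks(["c1", "c2", "c3"], {"nodeA": 0, "nodeB": 1})
print(p)  # {'c1': 'nodeA', 'c2': 'nodeA', 'c3': 'nodeB'}
```

Replacing the unit cost with an observed traffic metric turns this static greedy scheme into the kind of adaptive assignment the paper describes.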
Approaches for scalable modeling and emulation of cyber systems : LDRD final report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.
2009-09-01
The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.
Observations of a nearby filament of galaxy clusters with the Sardinia Radio Telescope
NASA Astrophysics Data System (ADS)
Vacca, Valentina; Murgia, M.; Loi, F.; Govoni, F.; Vazza, F.; Finoguenov, A.; Carretti, E.; Feretti, L.; Giovannini, G.; Concu, R.; Melis, A.; Gheller, C.; Paladino, R.; Poppi, S.; Valente, G.; Bernardi, G.; Boschin, W.; Brienza, M.; Clarke, T. E.; Colafrancesco, S.; Enßlin, T.; Ferrari, C.; de Gasperin, F.; Gastaldello, F.; Girardi, M.; Gregorini, L.; Johnston-Hollitt, M.; Junklewitz, H.; Orrù, E.; Parma, P.; Perley, R.; Taylor, G. B.
2018-05-01
We report the detection of diffuse radio emission which might be connected to a large-scale filament of the cosmic web covering an 8° × 8° area of the sky, likely associated with a z ≈ 0.1 over-density traced by nine massive galaxy clusters. In this work, we present radio observations of this region taken with the Sardinia Radio Telescope. Two of the clusters in the field host a powerful radio halo sustained by violent ongoing mergers and provide direct proof of intra-cluster magnetic fields. In order to investigate the presence of large-scale diffuse radio synchrotron emission in and beyond the galaxy clusters in this complex system, we combined the data taken at 1.4 GHz with the Sardinia Radio Telescope with higher-resolution data from the NRAO VLA Sky Survey. We found 28 new candidate sources that, for a given radio power, are larger in size and fainter in X-ray emission than known diffuse large-scale synchrotron cluster sources. This new population is potentially the tip of the iceberg of a class of diffuse large-scale synchrotron sources associated with the filaments of the cosmic web. In addition, we found in the field a new candidate giant radio galaxy.
NGScloud: RNA-seq analysis of non-model species using cloud computing.
Mora-Márquez, Fernando; Vázquez-Poletti, José Luis; López de Heredia, Unai
2018-05-03
RNA-seq analysis usually requires large computing infrastructures. NGScloud is a bioinformatic system developed to analyze RNA-seq data using Amazon's cloud computing services, which permit access to ad hoc computing infrastructure scaled to the complexity of the experiment, so that its costs and run times can be optimized. The application provides a user-friendly front-end to operate Amazon's hardware resources and to control a workflow of RNA-seq analysis oriented to non-model species, incorporating the cluster concept, which allows parallel runs of common RNA-seq analysis programs in several virtual machines for faster analysis. NGScloud is freely available at https://github.com/GGFHF/NGScloud/. A manual detailing installation and how-to-use instructions is available with the distribution. unai.lopezdeheredia@upm.es.
Implementing the Liquid Curriculum: The Impact of Virtual World Learning on Higher Education
ERIC Educational Resources Information Center
Steils, Nicole; Tombs, Gemma; Mawer, Matt; Savin-Baden, Maggi; Wimpenny, Katherine
2015-01-01
This paper presents findings from a large-scale study which explored the socio-political impact of teaching and learning in virtual worlds on UK higher education. Three key themes emerged with regard to constructing curricula for virtual world teaching and learning, namely designing courses, framing practice and locating specific student needs.…
Lorenz, D; Armbruster, W; Vogelgesang, C; Hoffmann, H; Pattar, A; Schmidt, D; Volk, T; Kubulus, D
2016-09-01
Chief emergency physicians are regarded as an important element in the care of the injured and sick following mass casualty incidents. Their education is very theoretical; practical content, in contrast, often falls short. The usual limitations are the very high costs of realistic (large-scale) exercises, poor reproducibility of the scenarios, and correspondingly poor results. Given the complexity of mass casualty incidents, modified training concepts are required that teach not only the theoretical but above all the practical skills considerably more intensively than at present, in order to substantially improve the level of education. Modern training concepts should make it possible for the learner to realistically simulate decision processes. This article examines how interactive virtual environments are applicable to the education of emergency personnel and how they could be designed. Virtual simulation and training environments offer the possibility of simulating complex situations in an adequately realistic manner. The so-called virtual reality (VR) used in this context is an interface technology that enables free interaction in addition to a stereoscopic and spatial representation of large-scale emergencies in a virtual environment. Variables in scenarios, such as the weather, the number of wounded, and the availability of resources, can be changed at any time. The trainees are able to practice the procedures in many virtual accident scenes and act them out repeatedly, thereby testing the different variants. With the aid of the "InSitu" project, it is possible to train in a virtual reality with realistically reproduced accident situations. These integrated, interactive training environments can depict very complex situations on a scale of 1:1. Because of the highly developed interactivity, the trainees can feel as if they are a direct part of the accident scene and therefore identify much more with the virtual world than is possible with desktop systems.
Interactive, identifiable, and realistic training environments based on projector systems could in future enable repeated exercises with variations within a decision tree, with reproducibility, and across different occupational groups. With one hardware and software environment, numerous accident situations can be depicted and practiced. The main expense is the creation of the virtual accident scenes. As the appropriate city models and other three-dimensional geographical data are already available, this expenditure is very low compared with the planning costs of a large-scale exercise.
NASA Astrophysics Data System (ADS)
Zhang, Jilin; Sha, Chaoqun; Wu, Yusen; Wan, Jian; Zhou, Li; Ren, Yongjian; Si, Huayou; Yin, Yuyu; Jing, Ya
2017-02-01
GPUs are not only used in the field of graphics technology but have also been widely used in areas needing a large number of numerical calculations. In the energy industry, because of its low carbon emissions, high energy density, high duration, and other characteristics, nuclear energy cannot easily be replaced by other energy sources. Management of core fuel is one of the major areas of concern in a nuclear power plant, and it is directly related to the economic benefits and cost of nuclear power. The diffusion equation for a large-scale reactor core is large and complicated, so its calculation is crucial in the core fuel management process. In this paper, we use CUDA programming technology on a GPU cluster to run LU-SGS parallel iterative calculations of the reactor diffusion equation. We divide the one-dimensional and two-dimensional meshes into a plurality of domains, with each domain evenly distributed over the GPU blocks. A parallel collision scheme is put forward that defines the virtual boundaries over which the grids exchange information and transmit data by non-stop collisions. Compared with the serial program, the experiments show that the GPU greatly improves the efficiency of program execution and verify that GPUs are playing a much more important role in the field of numerical calculation.
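The underlying numerical task, iterating a discretized diffusion equation to a fixed point, can be shown in a minimal serial sketch. This uses plain Python and Jacobi sweeps rather than the paper's LU-SGS CUDA kernels, and the grid size and names are assumptions:

```python
def jacobi_step(u, f, h):
    # one Jacobi sweep for -u'' = f on a 1-D grid with Dirichlet
    # boundaries u[0] = u[-1] = 0 (3-point Laplacian stencil)
    return ([u[0]] +
            [0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
             for i in range(1, len(u) - 1)] +
            [u[-1]])

n = 32
h = 1.0 / (n - 1)
u, f = [0.0] * n, [1.0] * n
for _ in range(2000):
    u = jacobi_step(u, f, h)

# exact solution of -u'' = 1, u(0) = u(1) = 0 is x(1-x)/2, peaking near 1/8
print(round(max(u), 3))  # 0.125
```

In the paper's setting, the grid would be split into domains assigned to GPU blocks, with the "virtual boundary" values exchanged between neighboring domains after each sweep.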
An Analysis of Rich Cluster Redshift Survey Data for Large Scale Structure Studies
NASA Astrophysics Data System (ADS)
Slinglend, K.; Batuski, D.; Haase, S.; Hill, J.
1994-12-01
The results from the COBE satellite show the existence of structure on scales on the order of 10% or more of the horizon scale of the universe. Rich clusters of galaxies from Abell's catalog show evidence of structure on scales of 100 Mpc and may hold the promise of confirming structure on the scale of the COBE result. However, many Abell clusters have zero or only one measured redshift, so present knowledge of their three dimensional distribution has quite large uncertainties. The shortage of measured redshifts for these clusters may also mask a problem of projection effects corrupting the membership counts for the clusters. Our approach in this effort has been to use the MX multifiber spectrometer on the Steward 2.3m to measure redshifts of at least ten galaxies in each of 80 Abell cluster fields with richness class R>= 1 and mag10 <= 16.8 (estimated z<= 0.12) and zero or one measured redshifts. This work will result in a deeper, more complete (and reliable) sample of positions of rich clusters. Our primary intent for the sample is for two-point correlation and other studies of the large scale structure traced by these clusters in an effort to constrain theoretical models for structure formation. We are also obtaining enough redshifts per cluster so that a much better sample of reliable cluster velocity dispersions will be available for other studies of cluster properties. To date, we have collected such data for 64 clusters, and for most of them, we have seven or more cluster members with redshifts, allowing for reliable velocity dispersion calculations. Velocity histograms and stripe density plots for several interesting cluster fields are presented, along with summary tables of cluster redshift results. 
Also, with 10 or more redshifts in most of our cluster fields (30′ square, just about an 'Abell diameter' at z ~ 0.1), we have investigated the extent of projection effects within the Abell catalog in an effort to quantify and understand how this may affect the Abell sample.
WeaVR: a self-contained and wearable immersive virtual environment simulation system.
Hodgson, Eric; Bachmann, Eric R; Vincent, David; Zmuda, Michael; Waller, David; Calusdian, James
2015-03-01
We describe WeaVR, a computer simulation system that takes virtual reality technology beyond specialized laboratories and research sites and makes it available in any open space, such as a gymnasium or a public park. Novel hardware and software systems enable HMD-based immersive virtual reality simulations to be conducted in any arbitrary location, with no external infrastructure and little-to-no setup or site preparation. The ability of the WeaVR system to provide realistic motion-tracked navigation for users, to improve the study of large-scale navigation, and to generate usable behavioral data is shown in three demonstrations. First, participants navigated through a full-scale virtual grocery store while physically situated in an open grass field. Trajectory data are presented for both normal tracking and for tracking during the use of redirected walking that constrained users to a predefined area. Second, users followed a straight path within a virtual world for distances of up to 2 km while walking naturally and being redirected to stay within the field, demonstrating the ability of the system to study large-scale navigation by simulating virtual worlds that are potentially unlimited in extent. Finally, the portability and pedagogical implications of this system were demonstrated by taking it to a regional high school for live use by a computer science class on their own school campus.
The Universe at Moderate Redshift
NASA Technical Reports Server (NTRS)
Cen, Renyue; Ostriker, Jeremiah P.
1997-01-01
The report covers the work done in the past year over a wide range of fields, including: properties of clusters of galaxies; topological properties of galaxy distributions in terms of galaxy types; patterns of the gravitational nonlinear clustering process; development of a ray-tracing algorithm to study the gravitational lensing phenomenon by galaxies, clusters, and large-scale structure, one application of which is the effect of weak gravitational lensing by large-scale structure on the determination of q(0); the origin of magnetic fields on the galactic and cluster scales; the topological properties of Ly(alpha) clouds; the Ly(alpha) optical depth distribution; clustering properties of Ly(alpha) clouds; and a determination (lower bound) of Omega(b) based on the observed Ly(alpha) forest flux distribution. In the coming year, we plan to continue the investigation of Ly(alpha) clouds using larger dynamic range (about a factor of two) and better simulations (with more input physics included) than what we have now. We will study the properties of galaxies on 1 - 100h(sup -1) Mpc scales using our state-of-the-art large-scale galaxy formation simulations of various cosmological models, which will have a resolution about a factor of 5 (in each dimension) better than our current best simulations. We also plan to study the properties of X-ray clusters using unprecedented, very high dynamic range (20,000) simulations, which will enable us to resolve the cores of clusters while keeping the simulation volume sufficiently large to ensure a statistically fair sample of the objects of interest. The details of the past year's work are now described.
NASA Astrophysics Data System (ADS)
Bjorklund, E.
1994-12-01
In the 1970s, when computers were memory limited, operating system designers created the concept of "virtual memory", which gave users the ability to address more memory than physically existed. In the 1990s, many large control systems have the potential of becoming data limited. We propose that many of the principles behind virtual memory systems (working sets, locality, caching and clustering) can also be applied to data-limited systems, creating, in effect, "virtual data systems". At the Los Alamos National Laboratory's Clinton P. Anderson Meson Physics Facility (LAMPF), we have applied these principles to a moderately sized (10 000 data points) data acquisition and control system. To test the principles, we measured the system's performance during tune-up, production, and maintenance periods. In this paper, we present a general discussion of the principles of a virtual data system along with some discussion of our own implementation and the results of our performance measurements.
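The caching principle borrowed from virtual memory can be sketched as a small LRU cache over control-system data points, keeping a "working set" of recently used channels in fast storage. The names and the fetch interface are assumptions for illustration, not the LAMPF implementation:

```python
from collections import OrderedDict

class ChannelCache:
    """Minimal LRU cache sketch for control-system channel reads."""
    def __init__(self, capacity, fetch):
        self.capacity, self.fetch = capacity, fetch
        self.data = OrderedDict()   # channel -> cached value, in LRU order
        self.misses = 0

    def read(self, channel):
        if channel in self.data:
            self.data.move_to_end(channel)   # mark as most recently used
            return self.data[channel]
        self.misses += 1
        value = self.fetch(channel)          # slow path: query the hardware
        self.data[channel] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)    # evict least recently used
        return value

cache = ChannelCache(2, fetch=lambda ch: 0.0)
for ch in ["beam_current", "magnet_1", "beam_current", "magnet_2", "magnet_1"]:
    cache.read(ch)
print(cache.misses)  # 4
```

As with virtual memory, the payoff depends on locality: if reads cluster on a small working set of channels (as during tune-up or production monitoring), most reads hit the cache and avoid the slow path.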
Cluster galaxy dynamics and the effects of large-scale environment
NASA Astrophysics Data System (ADS)
White, Martin; Cohn, J. D.; Smit, Renske
2010-11-01
Advances in observational capabilities have ushered in a new era of multi-wavelength, multi-physics probes of galaxy clusters and ambitious surveys are compiling large samples of cluster candidates selected in different ways. We use a high-resolution N-body simulation to study how the influence of large-scale structure in and around clusters causes correlated signals in different physical probes and discuss some implications this has for multi-physics probes of clusters (e.g. richness, lensing, Compton distortion and velocity dispersion). We pay particular attention to velocity dispersions, matching galaxies to subhaloes which are explicitly tracked in the simulation. We find that not only do haloes persist as subhaloes when they fall into a larger host, but groups of subhaloes retain their identity for long periods within larger host haloes. The highly anisotropic nature of infall into massive clusters, and their triaxiality, translates into an anisotropic velocity ellipsoid: line-of-sight galaxy velocity dispersions for any individual halo show large variance depending on viewing angle. The orientation of the velocity ellipsoid is correlated with the large-scale structure, and thus velocity outliers correlate with outliers caused by projection in other probes. We quantify this orientation uncertainty and give illustrative examples. Such a large variance suggests that velocity dispersion estimators will work better in an ensemble sense than for any individual cluster, which may inform strategies for obtaining redshifts of cluster members. We similarly find that the ability of substructure indicators to find kinematic substructures is highly viewing angle dependent. While groups of subhaloes which merge with a larger host halo can retain their identity for many Gyr, they are only sporadically picked up by substructure indicators. 
We discuss the effects of correlated scatter on scaling relations estimated through stacking, both analytically and in the simulations, showing that the strong correlation of measures with mass and the large scatter in mass at fixed observable mitigate line-of-sight projections.
Classifying proteins into functional groups based on all-versus-all BLAST of 10 million proteins.
Kolker, Natali; Higdon, Roger; Broomall, William; Stanberry, Larissa; Welch, Dean; Lu, Wei; Haynes, Winston; Barga, Roger; Kolker, Eugene
2011-01-01
To address the monumental challenge of assigning function to millions of sequenced proteins, we completed first-of-a-kind all-versus-all sequence alignments using BLAST for 9.9 million proteins in the UniRef100 database. Microsoft Windows Azure produced over 3 billion filtered records in 6 days using 475 eight-core virtual machines. Protein classification into functional groups was then performed using Hive and custom jars implemented on top of Apache Hadoop utilizing the MapReduce paradigm. First, using the Clusters of Orthologous Genes (COG) database, a length-normalized bit score (LNBS) was determined to be the best similarity measure for classification of proteins. LNBS achieved sensitivity and specificity of 98% each. Second, out of 5.1 million bacterial proteins, about two-thirds were assigned to significantly extended COG groups, encompassing 30 times more assigned proteins. Third, the remaining proteins were classified into protein functional groups using an innovative implementation of a single-linkage algorithm on an in-house Hadoop compute cluster. This implementation significantly reduces the run time for nonindexed queries and optimizes efficient clustering on a large scale. The performance was also verified on Amazon Elastic MapReduce. This clustering assigned nearly 2 million proteins to approximately half a million different functional groups. A similar approach was applied to classify 2.8 million eukaryotic sequences, resulting in over 1 million proteins being assigned to existing KOG groups and the remainder clustered into 100,000 functional groups.
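Single-linkage grouping of this kind reduces to finding connected components over pairs whose alignment score passes a similarity threshold, which a union-find sketch makes concrete. This is a serial toy with assumed names, not the paper's Hadoop/MapReduce implementation:

```python
def single_linkage(n, similar_pairs):
    """Union-find sketch of single-linkage grouping: any pair of proteins
    whose alignment passed the threshold merges their two groups."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving for near-flat trees
            x = parent[x]
        return x

    for a, b in similar_pairs:
        parent[find(a)] = find(b)          # union the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# proteins 0-1-2 chain together via pairwise hits; 3 and 4 pair; 5 is alone
print(single_linkage(6, [(0, 1), (1, 2), (3, 4)]))  # [[0, 1, 2], [3, 4], [5]]
```

Note the single-linkage chaining effect visible here: 0 and 2 land in one group without ever being directly similar, because 1 links them.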
GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.
Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik
2008-03-01
Molecular simulation is an extremely useful, but computationally very expensive, tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors, from algorithmic optimizations and hand-coded routines, and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations, also in parallel. To improve the scaling properties of the common particle-mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct- and reciprocal-space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems, but it also provides this performance on quite modest numbers of standard cluster nodes.
Size dependent fragmentation of argon clusters in the soft x-ray ionization regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gisselbrecht, Mathieu; Lindgren, Andreas; Burmeister, Florian
Photofragmentation of argon clusters of average size ranging from 10 up to 1000 atoms is studied using soft x-ray radiation below the 2p threshold and a multicoincidence mass spectroscopy technique. For small clusters (
Preliminary Evidence for a Virial Shock around the Coma Galaxy Cluster
NASA Astrophysics Data System (ADS)
Keshet, Uri; Kushnir, Doron; Loeb, Abraham; Waxman, Eli
2017-08-01
Galaxy clusters, the largest gravitationally bound objects in the universe, are thought to grow by accreting mass from their surroundings through large-scale virial shocks. Due to electron acceleration in such a shock, it should appear as a γ-ray, hard X-ray, and radio ring, elongated toward the large-scale filaments feeding the cluster, coincident with a cutoff in the thermal Sunyaev-Zel’dovich (SZ) signal. However, no such signature was found until now, and the very existence of cluster virial shocks has remained a theory. We find preliminary evidence for a large γ-ray ring of ~5 Mpc minor axis around the Coma cluster, elongated toward the large-scale filament connecting Coma and Abell 1367, detected at the nominal 2.7σ confidence level (5.1σ using control signal simulations). The γ-ray ring correlates both with a synchrotron signal and with the SZ cutoff, but not with Galactic tracers. The γ-ray and radio signatures agree with analytic and numerical predictions if the shock deposits ~1% of the thermal energy in relativistic electrons over a Hubble time and ~1% in magnetic fields. The implied inverse Compton and synchrotron cumulative emission from similar shocks can contribute significantly to the diffuse extragalactic γ-ray and low-frequency radio backgrounds. Our results, if confirmed, reveal the prolate structure of the hot gas in Coma, the feeding pattern of the cluster, and properties of the surrounding large-scale voids and filaments. The anticipated detection of such shocks around other clusters would provide a powerful new cosmological probe.
Mosaic construction, processing, and review of very large electron micrograph composites
NASA Astrophysics Data System (ADS)
Vogt, Robert C., III; Trenkle, John M.; Harmon, Laurel A.
1996-11-01
A system of programs is described for acquisition, mosaicking, cueing and interactive review of large-scale transmission electron micrograph composite images. This work was carried out as part of a final-phase clinical analysis study of a drug for the treatment of diabetic peripheral neuropathy. More than 500 nerve biopsy samples were prepared, digitally imaged, processed, and reviewed. For a given sample, typically 1000 or more 1.5-megabyte frames were acquired, for a total of between 1 and 2 gigabytes of data per sample. These frames were then automatically registered and mosaicked together into a single virtual image composite, which was subsequently used to perform automatic cueing of axons and axon clusters, as well as review and marking by qualified neuroanatomists. Statistics derived from the review process were used to evaluate the efficacy of the drug in promoting regeneration of myelinated nerve fibers. This effort demonstrates a new, entirely digital capability for doing large-scale electron micrograph studies, in which all of the relevant specimen data can be included at high magnification, as opposed to simply taking a random sample of discrete locations. It opens up the possibility of a new era in electron microscopy--one which broadens the scope of questions that this imaging modality can be used to answer.
The GalICS Project: Virtual Galaxies from Cosmological N-body Simulations
NASA Astrophysics Data System (ADS)
Guiderdoni, B.
The GalICS project develops extensive semi-analytic post-processing of large cosmological simulations to describe hierarchical galaxy formation. The multiwavelength statistical properties of high-redshift and local galaxies are predicted within the large-scale structures. The fake catalogs and mock images that are generated from the outputs are used for the analysis and preparation of deep surveys. The whole set of results is now available in an on-line database that can be easily queried. The GalICS project represents a first step towards a 'Virtual Observatory of virtual galaxies'.
The Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey
NASA Astrophysics Data System (ADS)
Squires, Gordon K.; Lubin, L. M.; Gal, R. R.
2007-05-01
We present the motivation, design, and latest results from the Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey, a systematic search for structure on scales greater than 10 Mpc around 20 known galaxy clusters at z > 0.6. When complete, the survey will cover nearly 5 square degrees, all targeted at high-density regions, making it complementary and comparable to field surveys such as DEEP2, GOODS, and COSMOS. For the survey, we are using the Large Format Camera on the Palomar 5-m and SuPRIME-Cam on the Subaru 8-m to obtain optical/near-infrared imaging of an approximately 30 arcmin region around previously studied high-redshift clusters. Colors are used to identify likely member galaxies which are targeted for follow-up spectroscopy with the DEep Imaging Multi-Object Spectrograph on the Keck 10-m. This technique has been used successfully to identify the Cl 1604 supercluster at z = 0.9, a large scale structure containing at least eight clusters (Gal & Lubin 2004; Gal, Lubin & Squires 2005). We present the most recent structures to be photometrically and spectroscopically confirmed through this program, discuss the properties of the member galaxies as a function of environment, and describe our planned multi-wavelength (radio, mid-IR, and X-ray) observations of these systems. The goal of this survey is to identify and examine a statistical sample of large scale structures during an active period in the assembly history of the most massive clusters. With such a sample, we can begin to constrain large scale cluster dynamics and determine the effect of the larger environment on galaxy evolution.
A Survey on Virtualization of Wireless Sensor Networks
Islam, Md. Motaharul; Hassan, Mohammad Mehedi; Lee, Ga-Won; Huh, Eui-Nam
2012-01-01
Wireless Sensor Networks (WSNs) are gaining tremendous importance thanks to their broad range of commercial applications such as in smart home automation, health-care and industrial automation. In these applications multi-vendor and heterogeneous sensor nodes are deployed. Due to strict administrative control over the specific WSN domains, communication barriers, conflicting goals and the economic interests of different WSN sensor node vendors, it is difficult to introduce a large scale federated WSN. By allowing heterogeneous sensor nodes in WSNs to coexist on a shared physical sensor substrate, virtualization in sensor networks may provide flexibility, cost effective solutions, promote diversity, ensure security and increase manageability. This paper surveys the novel approach of using the large scale federated WSN resources in a sensor virtualization environment. Our focus in this paper is to introduce a few design goals, the challenges and opportunities of research in the field of sensor network virtualization as well as to illustrate a current status of research in this field. This paper also presents a wide array of state-of-the-art projects related to sensor network virtualization. PMID:22438759
NASA Astrophysics Data System (ADS)
Konno, Yohko; Suzuki, Keiji
This paper describes an approach to developing a general-purpose solution algorithm for large scale problems using “Local Clustering Organization (LCO)” as a new solution method for the Job-shop scheduling problem (JSP). Building on earlier LCO studies that achieved effective large scale scheduling, we examine whether solving JSP with LCO can stably yield better solutions. To improve solution quality for JSP, the optimization process of LCO is examined, and the scheduling solution structure is extended to a new structure based on machine division. A solving method that introduces effective local clustering into this solution structure is proposed as an extended LCO. The extended LCO improves the scheduling evaluation efficiently by clustering with a parallel search that extends over multiple machines. Results of applying the extended LCO to problems of various scales show that it minimizes makespan and improves performance stability.
Large-scale structure in a texture-seeded cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Park, Changbom; Spergel, David N.; Turok, Nail
1991-01-01
This paper studies the formation of large-scale structure by global texture in a flat universe dominated by cold dark matter. A code for evolution of the texture fields was combined with an N-body code for evolving the dark matter. The results indicate some promising aspects: with only one free parameter, the observed galaxy-galaxy correlation function is reproduced, clusters of galaxies are found to be significantly clustered on a scale of 20-50/h Mpc, and coherent structures of over 50/h Mpc in the galaxy distribution were found. The predicted large-scale streaming motions are in good agreement with the observations: the average magnitude of the velocity field smoothed over 30/h Mpc is 430 km/sec. Global texture produces a cosmic Mach number that is compatible with observation. Also, significant evolution of clusters at low redshift was seen. Possible problems for the theory include too high velocity dispersions in clusters, and voids which are not as empty as those observed.
An efficient and near linear scaling pair natural orbital based local coupled cluster method.
Riplinger, Christoph; Neese, Frank
2013-01-21
In previous publications, it was shown that an efficient local coupled cluster method with single- and double excitations can be based on the concept of pair natural orbitals (PNOs) [F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009)]. The resulting local pair natural orbital-coupled-cluster single double (LPNO-CCSD) method has since been proven to be highly reliable and efficient. For large molecules, the number of amplitudes to be determined is reduced by a factor of 10^5-10^6 relative to a canonical CCSD calculation on the same system with the same basis set. In the original method, the PNOs were expanded in the set of canonical virtual orbitals and single excitations were not truncated. This led to a number of fifth order scaling steps that eventually rendered the method computationally expensive for large molecules (e.g., >100 atoms). In the present work, these limitations are overcome by a complete redesign of the LPNO-CCSD method. The new method is based on the combination of the concepts of PNOs and projected atomic orbitals (PAOs). Thus, each PNO is expanded in a set of PAOs that in turn belong to a given electron pair specific domain. In this way, it is possible to fully exploit locality while maintaining the extremely high compactness of the original LPNO-CCSD wavefunction. No terms are dropped from the CCSD equations and domains are chosen conservatively. The correlation energy loss due to the domains remains below 0.05%, which implies typically 15-20 but occasionally up to 30 atoms per domain on average. The new method has been given the acronym DLPNO-CCSD ("domain based LPNO-CCSD"). The method is nearly linear scaling with respect to system size. The original LPNO-CCSD method had three adjustable truncation thresholds that were chosen conservatively and do not need to be changed for actual applications. In the present treatment, no additional truncation parameters have been introduced. Any additional truncation is performed on the basis of the three original thresholds. There are no real-space cutoffs. Single excitations are truncated using singles-specific natural orbitals. Pairs are prescreened according to a multipole expansion of a pair correlation energy estimate based on local orbital specific virtual orbitals (LOSVs). Like its LPNO-CCSD predecessor, the method is completely of black box character and does not require any user adjustments. It is shown here that DLPNO-CCSD is as accurate as LPNO-CCSD while leading to computational savings exceeding one order of magnitude for larger systems. The largest calculations reported here featured >8800 basis functions and >450 atoms. In all larger test calculations done so far, the LPNO-CCSD step took less time than the preceding Hartree-Fock calculation, provided no approximations have been introduced in the latter. Thus, based on the present development, reliable CCSD calculations on large molecules with unprecedented efficiency and accuracy are realized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Song; Wang, Yihong; Luo, Wei
In virtualized data centers, virtual disk images (VDIs) serve as the containers in the virtual environment, so their access performance is critical for the overall system performance. Some distributed VDI chunk storage systems have been proposed in order to alleviate the I/O bottleneck for VM management. As the system scales up to a large number of running VMs, however, the overall network traffic would inevitably become unbalanced, with hot spots on some VMs, leading to I/O performance degradation when accessing the VMs. Here, we propose an adaptive and collaborative VDI storage system (ACStor) to resolve the above performance issue. In comparison with the existing research, our solution is able to dynamically balance the traffic workloads in accessing VDI chunks, based on the run-time network state. Specifically, compute nodes with lightly loaded traffic will be adaptively assigned more chunk access requests from remote VMs and vice versa, which can effectively eliminate the above problem and thus improves the I/O performance of VMs. We also implement a prototype based on our ACStor design, and evaluate it by various benchmarks on a real cluster with 32 nodes and a simulated platform with 256 nodes. Experiments show that under different network traffic patterns of data centers, our solution achieves up to a 2-8× performance gain on VM booting time and VM’s I/O throughput, in comparison with the other state-of-the-art approaches.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
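The competitive learning loop that KCL builds on is a standard winner-take-all update. A minimal non-kernel sketch is given below; the kernel variant replaces Euclidean distances with kernel-induced distances, and AKCL approximates those in a sampled subspace, which this toy version does not attempt. All parameter names are ours:

```python
import numpy as np

def competitive_learning(X, k, lr=0.1, epochs=10, seed=0):
    """Plain winner-take-all competitive learning: for each sample, the
    nearest prototype ("winner") is pulled toward the sample. This is the
    non-kernel core that KCL generalizes."""
    rng = np.random.default_rng(seed)
    # initialize prototypes from the data itself
    prototypes = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
            prototypes[winner] += lr * (x - prototypes[winner])  # move winner toward x
    labels = np.array([np.argmin(np.linalg.norm(prototypes - x, axis=1)) for x in X])
    return prototypes, labels
```

The scalability problems the abstract describes arise because the kernelized version of this loop needs the full n×n kernel matrix and is inherently sequential; AKCL and PAKCL attack exactly those two costs.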
Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks
Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.
2010-01-01
Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on the k-means algorithm.
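The paper's own analytical formula is not reproduced in the abstract. As an illustration of why an optimal cluster-head count exists at all, a first-order radio energy model of the kind commonly used in WSN clustering analyses can be scanned for its minimum; every constant below is a hypothetical placeholder, not a value from the paper:

```python
import math

def energy_per_round(n, k, area_m, d_to_sink,
                     e_elec=50e-9, e_amp=100e-12, bits=4000):
    """Total energy for one data-gathering round with n uniformly
    distributed nodes and k cluster heads (first-order radio model).
    Members transmit to their head; heads aggregate and transmit to
    the sink. Fewer heads -> longer intra-cluster hops; more heads ->
    more expensive long-range transmissions, hence an interior optimum."""
    if not 1 <= k < n:
        raise ValueError("need 1 <= k < n")
    # expected squared member-to-head distance under uniform distribution
    d2_intra = area_m ** 2 / (2 * math.pi * k)
    e_member = bits * (e_elec + e_amp * d2_intra)
    e_head = bits * (e_elec * (n / k) + e_amp * d_to_sink ** 2)
    return k * e_head + (n - k) * e_member

def optimal_cluster_heads(n, area_m, d_to_sink):
    # brute-force scan; the paper instead derives the optimum analytically
    return min(range(1, n), key=lambda k: energy_per_round(n, k, area_m, d_to_sink))
```

With realistic constants the minimum lands at a small fraction of the node count, which matches the intuition that only a few heads should pay the long-range transmission cost.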
Large scale structure in universes dominated by cold dark matter
NASA Technical Reports Server (NTRS)
Bond, J. Richard
1986-01-01
The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.
Gas Dynamics in the Fornax Cluster: Viscosity, turbulence, and sloshing
NASA Astrophysics Data System (ADS)
Kraft, Ralph; Su, Yuanyuan; Sheardown, Alexander; Roediger, Elke; Nulsen, Paul; Forman, William; Jones, Christine; Churazov, Eugene
2018-01-01
We present results from deep Chandra and XMM-Newton observations of the ICM in the Fornax cluster, and combine these data with specifically-tailored hydrodynamic simulations for an unprecedented view of the gas dynamics in this nearby cluster. We report the detection of four sloshing fronts (Su+2017). Based on our simulations, all four of these fronts can plausibly be attributed to the infall of the early-type galaxy NGC 1404 into the cluster potential. We argue that the presence of these sloshing cold fronts, the lack of its own extended gas halo, and the approximately transonic infall velocity indicate that this must be at least the second core passage for NGC 1404. Additionally, there is virtually no stripped tail of cool gas behind NGC 1404, conclusively demonstrating that the stripped gas is efficiently mixed with the cluster ICM. This mixing most likely occurs via small-scale Kelvin-Helmholtz instabilities formed in the high Reynolds number flow.
Scaling of cluster growth for coagulating active particles
NASA Astrophysics Data System (ADS)
Cremer, Peet; Löwen, Hartmut
2014-02-01
Cluster growth in a coagulating system of active particles (such as microswimmers in a solvent) is studied by theory and simulation. In contrast to passive systems, the net velocity of a cluster can have various scalings dependent on the propulsion mechanism and alignment of individual particles. Additionally, the persistence length of the cluster trajectory typically increases with size. As a consequence, a growing cluster collects neighboring particles in a very efficient way and thus amplifies its growth further. This results in unusual large growth exponents for the scaling of the cluster size with time and, for certain conditions, even leads to "explosive" cluster growth where the cluster becomes macroscopic in a finite amount of time.
ERIC Educational Resources Information Center
Garrett Dikkers, Amy
2015-01-01
This mixed-method study reports perspectives of virtual school teachers on the impact of online teaching on their face-to-face practice. Data from a large-scale survey of teachers in the North Carolina Virtual Public School (n = 214), focus groups (n = 7), and interviews (n = 5) demonstrate multiple intersections between online and face-to-face…
The performance of low-cost commercial cloud computing as an alternative in computational chemistry.
Thackston, Russell; Fortenberry, Ryan C
2015-05-05
The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
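The cost argument here reduces to a break-even calculation: cloud time is pay-per-use, while an in-house machine must be bought outright regardless of utilization. A toy sketch of that comparison, with entirely hypothetical prices (not figures from the paper):

```python
def cloud_cost(hours, rate_per_hour):
    """Pay-as-you-go cost of commercial cloud computing (CCC) time."""
    return hours * rate_per_hour

def breakeven_hours(machine_price, cloud_rate_per_hour):
    """Usage level at which buying an in-house machine outright becomes
    cheaper than renting equivalent cloud capacity. Deliberately ignores
    power, administration, steal time, and residual hardware value."""
    return machine_price / cloud_rate_per_hour
```

For example, at a hypothetical $0.40/hour for a cloud instance versus an $8000 workstation, the break-even point is 20,000 compute hours: below it, smaller-scale research is cheaper in the cloud, consistent with the abstract's conclusion.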
A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.
Shankaranarayanan, Avinas; Amaldas, Christine
2010-11-01
With the proliferation of Quad/Multi-core micro-processors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, could deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high performance grid services. Spurious and exponential increases in the size of the datasets are constant concerns in medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with running the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve upon scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work) all interactions were made local within a team transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze performance and scalability of BLAST in a multiple virtual node setup and present our findings. This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.
Azad, Ariful; Ouzounis, Christos A; Kyrpides, Nikos C; Buluç, Aydin
2018-01-01
Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. Here, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ∼70 million nodes with ∼68 billion edges in ∼2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license. PMID:29315405
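The iteration that HipMCL parallelizes is the classic MCL loop: expansion (powering a column-stochastic matrix, spreading random-walk flow) alternating with inflation (elementwise powering and renormalization, strengthening strong flows). A dense toy sketch follows; the cluster-extraction heuristic at the end is our own simplification, and HipMCL of course uses sparse distributed matrices instead:

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iters=50, tol=1e-6):
    """Minimal dense Markov Clustering (MCL) sketch for small graphs."""
    A = np.asarray(adjacency, dtype=float)
    A = A + np.eye(len(A))              # self-loops stabilize the iteration
    M = A / A.sum(axis=0)               # column-stochastic transition matrix
    for _ in range(iters):
        last = M.copy()
        M = np.linalg.matrix_power(M, expansion)  # expansion: spread flow
        M = M ** inflation                        # inflation: sharpen flow
        M = M / M.sum(axis=0)                     # renormalize columns
        if np.abs(M - last).max() < tol:
            break
    # attractor rows with nonzero mass define the clusters
    clusters = {}
    for row in M:
        members = tuple(np.nonzero(row > 1e-8)[0])
        if members:
            clusters.setdefault(members, set()).update(members)
    return [sorted(c) for c in clusters.values()]
```

On a graph of two triangles joined by a single bridge edge, this recovers the two triangles as separate clusters, which is the standard MCL demonstration.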
Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks
Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian
2014-01-01
In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analysis of the detection process, the parameter relationship of the CNN is mapped to an optimization problem, which is solved by an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence has indicated that this new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing the VM migration detection to be performed better. PMID:24959631
Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system
NASA Astrophysics Data System (ADS)
Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd
2016-10-01
Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine.
This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable for other cluster operators. This contribution will report on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ben-Naim, Eli; Krapivsky, Paul
Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
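The merge rule described above is simple to simulate. Below is a Monte Carlo sketch of one interpretation of the process (not the authors' code; the initial cluster count and number of merge steps are arbitrary toy values).

```python
import random

def aggregate_with_choice(n=1000, steps=600, seed=1):
    """Aggregation with choice: at each step a target cluster and two
    candidate clusters are drawn at random, and the target merges with
    the LARGER of the two candidates. Every merge conserves total mass
    and reduces the cluster count by one."""
    rng = random.Random(seed)
    clusters = [1] * n  # start from n monomers
    for _ in range(steps):
        if len(clusters) < 3:
            break
        # Three distinct clusters: one target, two candidates.
        i, j, k = rng.sample(range(len(clusters)), 3)
        chosen = j if clusters[j] >= clusters[k] else k
        clusters[i] += clusters[chosen]
        clusters.pop(chosen)
    return clusters

sizes = aggregate_with_choice()
```

Swapping the comparison to pick the smaller candidate reproduces the complementary case studied in the abstract.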
Handheld Micromanipulation with Vision-Based Virtual Fixtures
Becker, Brian C.; MacLachlan, Robert A.; Hager, Gregory D.; Riviere, Cameron N.
2011-01-01
Precise movement during micromanipulation becomes difficult in submillimeter workspaces, largely due to the destabilizing influence of tremor. Robotic aid combined with filtering techniques that suppress tremor frequency bands increases performance; however, if knowledge of the operator's goals is available, virtual fixtures have been shown to greatly improve micromanipulator precision. In this paper, we derive a control law for position-based virtual fixtures within the framework of an active handheld micromanipulator, where the fixtures are generated in real-time from microscope video. Additionally, we develop motion scaling behavior centered on virtual fixtures as a simple and direct extension to our formulation. We demonstrate that hard and soft (motion-scaled) virtual fixtures outperform state-of-the-art tremor cancellation performance on a set of artificial but medically relevant tasks: holding, move-and-hold, curve tracing, and volume restriction. PMID:23275860
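The hard and soft (motion-scaled) fixtures described above can be illustrated geometrically by decomposing a commanded tip motion into components along and orthogonal to a line fixture. This is a simplified sketch of the idea, not the control law derived in the paper.

```python
import math

def apply_virtual_fixture(motion, fixture_dir, scale=0.0):
    """Constrain a commanded tip motion to a line fixture.

    motion, fixture_dir: 3-vectors (tuples); fixture_dir need not be unit.
    scale: gain on the off-fixture component. scale=0.0 gives a hard
    fixture (all off-axis motion removed); 0 < scale < 1 gives a soft,
    motion-scaled fixture (off-axis motion attenuated).
    """
    norm = math.sqrt(sum(d * d for d in fixture_dir))
    u = tuple(d / norm for d in fixture_dir)           # unit fixture direction
    along = sum(m * ui for m, ui in zip(motion, u))    # component on the fixture
    parallel = tuple(along * ui for ui in u)
    ortho = tuple(m - p for m, p in zip(motion, parallel))
    return tuple(p + scale * o for p, o in zip(parallel, ortho))

# Hard fixture removes all off-axis motion; soft fixture attenuates it.
hard = apply_virtual_fixture((1.0, 1.0, 0.0), (1.0, 0.0, 0.0), scale=0.0)
soft = apply_virtual_fixture((1.0, 1.0, 0.0), (1.0, 0.0, 0.0), scale=0.25)
```

In the paper the fixture geometry is extracted from microscope video in real time; here it is simply given as a direction vector.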
Geiger, M F; Herder, F; Monaghan, M T; Almada, V; Barbieri, R; Bariche, M; Berrebi, P; Bohlen, J; Casal-Lopez, M; Delmastro, G B; Denys, G P J; Dettai, A; Doadrio, I; Kalogianni, E; Kärst, H; Kottelat, M; Kovačić, M; Laporte, M; Lorenzoni, M; Marčić, Z; Özuluğ, M; Perdices, A; Perea, S; Persat, H; Porcelotti, S; Puzzi, C; Robalo, J; Šanda, R; Schneider, M; Šlechtová, V; Stoumboudi, M; Walter, S; Freyhof, J
2014-11-01
Incomplete knowledge of biodiversity remains a stumbling block for conservation planning and even occurs within globally important Biodiversity Hotspots (BH). Although technical advances have boosted the power of molecular biodiversity assessments, the link between DNA sequences and species and the analytics to discriminate entities remain crucial. Here, we present an analysis of the first DNA barcode library for the freshwater fish fauna of the Mediterranean BH (526 spp.), with virtually complete species coverage (498 spp., 98% extant species). In order to build an identification system supporting conservation, we compared species determination by taxonomists to multiple clustering analyses of DNA barcodes for 3165 specimens. The congruence of barcode clusters with morphological determination was strongly dependent on the method of cluster delineation, but was highest with the general mixed Yule-coalescent (GMYC) model-based approach (83% of all species recovered as GMYC entity). Overall, genetic morphological discontinuities suggest the existence of up to 64 previously unrecognized candidate species. We found reduced identification accuracy when using the entire DNA-barcode database, compared with analyses on databases for individual river catchments. This scale effect has important implications for barcoding assessments and suggests that fairly simple identification pipelines provide sufficient resolution in local applications. We calculated Evolutionarily Distinct and Globally Endangered scores in order to identify candidate species for conservation priority and argue that the evolutionary content of barcode data can be used to detect priority species for future IUCN assessments. We show that large-scale barcoding inventories of complex biotas are feasible and contribute directly to the evaluation of conservation priorities. © 2014 John Wiley & Sons Ltd.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
Performance/price estimates for cortex-scale hardware: a design space exploration.
Zaveri, Mazad S; Hammerstrom, Dan
2011-04-01
In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Riechers, Dominik A.; Bolatto, Alberto D.; Carilli, Chris; Casey, Caitlin M.; Decarli, Roberto; Murphy, Eric Joseph; Narayanan, Desika; Walter, Fabian; ngVLA Galaxy Assembly through Cosmic Time Science Working Group, ngVLA Galaxy Ecosystems Science Working Group
2018-01-01
The Next Generation Very Large Array (ngVLA) will fundamentally advance our understanding of the formation processes that lead to the assembly of galaxies throughout cosmic history. The combination of large bandwidth with unprecedented sensitivity to the critical low-level CO lines over virtually the entire redshift range will open up the opportunity to conduct large-scale, deep cold molecular gas surveys, mapping the fuel for star formation in galaxies over substantial cosmic volumes. Imaging of the sub-kiloparsec scale distribution and kinematic structure of molecular gas in both normal main-sequence galaxies and large starbursts back to early cosmic epochs will reveal the physical processes responsible for star formation and black hole growth in galaxies over a broad range in redshifts. In the nearby universe, the ngVLA has the capability to survey the structure of the cold, star-forming interstellar medium at parsec-resolution out to the Virgo cluster. A range of molecular tracers will be accessible to map the motion, distribution, and physical and chemical state of the gas as it flows in from the outer disk, assembles into clouds, and experiences feedback due to star formation or accretion into central super-massive black holes. These investigations will crucially complement studies of the star formation and stellar mass histories with the Large UV/Optical/Infrared Surveyor and the Origins Space Telescope, providing the means to obtain a comprehensive picture of galaxy evolution through cosmic times.
The Origin of Clusters and Large-Scale Structures: Panoramic View of the High-z Universe
NASA Astrophysics Data System (ADS)
Ouchi, Masami
We will report results of our on-going survey for proto-clusters and large-scale structures at z=3-6. We carried out very wide and deep optical imaging down to i=27 for a 1 deg^2 field of the Subaru/XMM Deep Field with the 8.2m Subaru Telescope. We obtain maps of the Universe traced by ~1,000 Ly-α galaxies at z=3, 4, and 6 and by ~10,000 Lyman break galaxies at z=3-6. These cosmic maps have a transverse dimension of ~150 Mpc x 150 Mpc in comoving units at these redshifts, and provide us, for the first time, a panoramic view of the high-z Universe from the scales of galaxies and clusters to large-scale structures. Major results and implications will be presented in our talk. (Part of this work is subject to press embargo.)
Users matter : multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
Mapping Dark Matter in Simulated Galaxy Clusters
NASA Astrophysics Data System (ADS)
Bowyer, Rachel
2018-01-01
Galaxy clusters are the most massive bound objects in the Universe with most of their mass being dark matter. Cosmological simulations of structure formation show that clusters are embedded in a cosmic web of dark matter filaments and large scale structure. It is thought that these filaments are found preferentially close to the long axes of clusters. We extract galaxy clusters from the simulations "cosmo-OWLS" in order to study their properties directly and also to infer their properties from weak gravitational lensing signatures. We investigate various stacking procedures to enhance the signal of the filaments and large scale structure surrounding the clusters to better understand how the filaments of the cosmic web connect with galaxy clusters. This project was supported in part by the NSF REU grant AST-1358980 and by the Nantucket Maria Mitchell Association.
Self-similar hierarchical energetics in the ICM of massive galaxy clusters
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Beresnyak, Andrey
Massive galaxy clusters (GC) are filled with a hot, turbulent and magnetised intra-cluster medium (ICM). They are still forming under the action of gravitational instability, which drives supersonic mass accretion flows. These partially dissipate into heat through a complex network of large scale shocks, and partly excite giant turbulent eddies and cascade. Turbulence dissipation not only contributes to heating of the ICM but also amplifies magnetic energy by way of dynamo action. The pattern of gravitational energy turning into kinetic, thermal, turbulent and magnetic is a fundamental feature of GC hydrodynamics, but quantitative modelling has remained a challenge. In this contribution we present results from a recent high resolution, fully cosmological numerical simulation of a massive Coma-like galaxy cluster in which the time dependent turbulent motions of the ICM are resolved (Miniati 2014) and their statistical properties are quantified for the first time (Miniati 2015, Beresnyak & Miniati 2015). We combine these results with independent state-of-the-art numerical simulations of MHD turbulence (Beresnyak 2012), which show that in the nonlinear regime of turbulent dynamo (for magnetic Prandtl numbers ≳ 1) the growth rate of the magnetic energy corresponds to a fraction C_E ≈ 4-5 × 10^-2 of the turbulent dissipation rate. We thus determine without adjustable parameters the thermal, turbulent and magnetic history of giant GC (Miniati & Beresnyak 2015). We find that the energy components of the ICM are ordered according to a permanent hierarchy, in which the sonic Mach number at the turbulent injection scale is of order unity, the beta of the plasma of order forty, and the ratio of turbulent injection scale to Alfvén scale of order one hundred.
These dimensionless numbers remain virtually unaltered throughout the cluster's history, despite evolution of each individual component and the drive towards equipartition of the turbulent dynamo, thus revealing a new type of self-similarity in cosmology. Their specific values, while consistent with current data, indicate that thermal energy dominates the ICM energetics and the turbulent dynamo is always far from saturation, unlike the condition in other familiar astrophysical fluids (stars, interstellar medium of galaxies, compact objects, etc.). In addition, they have important physical meaning, as their specific values encode information about the efficiency of turbulent heating (the fraction of ICM thermal energy produced by turbulent dissipation) and the efficiency of dynamo action in the ICM (C_E).
Using XMM-OM UV Data to Study Cluster Galaxy Evolution
NASA Astrophysics Data System (ADS)
Miller, Neal A.; O'Steen, R.
2010-01-01
The XMM-Newton satellite includes an Optical Monitor (XMM-OM) for the simultaneous observation of its X-ray targets at UV and optical wavelengths. On account of XMM's excellent characteristics for the observation of the hot intracluster medium, a large number of galaxy clusters have been observed by XMM and there is consequently a large and virtually unused database of XMM-OM UV data for galaxies in the cores of these clusters. We have begun a program to capitalize on such data, and describe here our efforts on a subsample of ten nearby clusters having XMM-OM, GALEX, and SDSS data. We present our methods for photometry and calibration of the XMM-OM UV data, and briefly present some applications including galaxy color magnitude diagrams (and identification of the red sequence, blue cloud, and green valley) and SED fitting (and galaxy stellar masses and star formation histories). Support for this work is provided by NASA Award Number NNX09AC76G.
An incremental anomaly detection model for virtual machines.
Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu
2017-01-01
The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied to anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which leaves the algorithm with low accuracy and poor scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate its effectiveness, experiments have been performed on the common benchmark KDD Cup dataset and on a real dataset. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.
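The Weighted Euclidean Distance step described above amounts to a best-matching-unit search in which informative features dominate the match. The sketch below uses toy map coordinates, feature vectors and weights; the paper's actual weighting scheme is not reproduced here.

```python
import math

def weighted_bmu(sample, weights_map, feature_weights):
    """Find the best-matching unit (BMU) of a SOM under a Weighted
    Euclidean Distance. feature_weights lets informative metrics
    (e.g. CPU or I/O counters of a VM) dominate the match."""
    def wed(a, b):
        return math.sqrt(sum(w * (x - y) ** 2
                             for w, x, y in zip(feature_weights, a, b)))
    return min(weights_map, key=lambda unit: wed(sample, weights_map[unit]))

# Toy 2x2 map over 2-d feature vectors. The second feature is weighted
# heavily, so the unit that matches it best wins even though the first
# feature favors a different unit.
som = {(0, 0): (0.0, 0.0), (0, 1): (1.0, 0.0),
       (1, 0): (0.0, 1.0), (1, 1): (1.0, 1.0)}
bmu = weighted_bmu((0.9, 0.1), som, feature_weights=(0.1, 10.0))
```

With uniform weights the same call would pick the unit at (1, 1) as the nearest prototype to (0.9, 0.1) in plain Euclidean terms is not guaranteed; the weighted metric makes the trade-off explicit.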
Multilevel Hierarchical Kernel Spectral Clustering for Real-Life Large Scale Complex Networks
Mall, Raghvendra; Langone, Rocco; Suykens, Johan A. K.
2014-01-01
Kernel spectral clustering corresponds to a weighted kernel principal component analysis problem in a constrained optimization framework. The primal formulation leads to an eigen-decomposition of a centered Laplacian matrix at the dual level. The dual formulation allows a model to be built on a representative subgraph of the large-scale network in the training phase, with the model parameters estimated in the validation stage. The KSC model has a powerful out-of-sample extension property which allows cluster affiliation for the unseen nodes of the big data network. In this paper we exploit the structure of the projections in the eigenspace during the validation stage to automatically determine a set of increasing distance thresholds. We use these distance thresholds in the test phase to obtain multiple levels of hierarchy for the large-scale network. The hierarchical structure in the network is determined in a bottom-up fashion. We empirically showcase that real-world networks have a multilevel hierarchical organization which cannot be detected efficiently by several state-of-the-art large-scale hierarchical community detection techniques such as the Louvain, OSLOM and Infomap methods. We show that a major advantage of our proposed approach is the ability to locate good quality clusters at both the finer and coarser levels of hierarchy using internal cluster quality metrics on 7 real-life networks. PMID:24949877
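The train-on-a-representative-subset / out-of-sample pattern described above can be sketched with an unweighted kernel-PCA stand-in for KSC. The true primal-dual formulation is richer; the RBF kernel, the toy data, and the nearest-centroid assignment rule below are all illustrative assumptions.

```python
import numpy as np

def ksc_sketch(X_train, gamma=1.0):
    """Training phase: eigendecompose the centered RBF kernel of a
    representative subset, split into two clusters on the sign of the
    leading eigenvector, and record cluster centroids in the projected
    space. Unweighted kernel-PCA stand-in for the KSC dual problem."""
    sq = ((X_train[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = len(X_train)
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    _, vecs = np.linalg.eigh(H @ K @ H)
    a = vecs[:, -1]                                # leading eigenvector
    scores = K @ a                                 # training-node projections
    labels = (a > 0).astype(int)
    centroids = np.array([scores[labels == c].mean() for c in (0, 1)])
    return X_train, a, centroids, gamma

def assign_out_of_sample(x, model):
    """Out-of-sample extension: project an unseen node through its kernel
    similarities to the training subset, then take the nearest centroid."""
    X_train, a, centroids, gamma = model
    score = np.exp(-gamma * ((X_train - x) ** 2).sum(-1)) @ a
    return int(np.argmin(np.abs(centroids - score)))

# Two well-separated toy groups; unseen points near each group should
# land in different clusters without retraining.
X = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2], [0.1, 0.1],
              [5.0, 5.0], [5.2, 5.0], [5.0, 5.2], [5.1, 5.1]])
model = ksc_sketch(X)
label_a = assign_out_of_sample(np.array([0.05, 0.05]), model)
label_b = assign_out_of_sample(np.array([5.05, 5.05]), model)
```

The key property illustrated is that the model is built once on the subset and new nodes are assigned purely from kernel evaluations against it, which is what makes the extension cheap on big networks.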
NASA Astrophysics Data System (ADS)
Cox, S. J.; Wyborn, L. A.; Fraser, R.; Rankine, T.; Woodcock, R.; Vote, J.; Evans, B.
2012-12-01
The Virtual Geophysics Laboratory (VGL) is a web portal that provides geoscientists with an integrated online environment that seamlessly accesses geophysical and geoscience data services from the AuScope national geoscience information infrastructure, loosely couples these data to a variety of geoscience software tools, and provides large-scale processing facilities via cloud computing. VGL is a collaboration between CSIRO, Geoscience Australia, National Computational Infrastructure, Monash University, Australian National University and the University of Queensland. The VGL provides a distributed system whereby a user can enter an online virtual laboratory to seamlessly connect to OGC web services for geoscience data. The data are supplied in open formats using international standards such as GeoSciML. A VGL user employs a web mapping interface to discover and filter the data sources, applying spatial and attribute filters to define a subset. Once the data are selected, the user is not required to download them: VGL collates the service query information for later in the processing workflow, where it is staged directly to the computing facilities. The combination of deferred data download and access to cloud computing enables VGL users to access their data at higher resolutions and to undertake larger scale inversions and more complex models and simulations than their own local computing facilities might allow. Inside the Virtual Geophysics Laboratory, the user has access to a library of existing models, complete with exemplar workflows for specific scientific problems based on those models. For example, the user can load a geological model published by Geoscience Australia, apply a basic deformation workflow provided by a CSIRO scientist, and have it run in a scientific code from Monash. Finally, the user can publish these results to share with a colleague or cite in a paper.
This opens new opportunities for access and collaboration, as all the resources (models, code, data, processing) are shared in the one virtual laboratory. VGL provides end users with an intuitive, user-centered interface that leverages cloud storage and cloud and cluster processing from both the research communities and commercial suppliers (e.g., Amazon). As the underlying data and information services are agnostic of the scientific domain, they can support many other data types. This fundamental characteristic results in a highly reusable virtual laboratory infrastructure that could also be used for, for example, natural hazards, satellite processing, soil geochemistry, climate modeling, and agricultural crop modeling.
The stable clustering ansatz, consistency relations and gravity dual of large-scale structure
NASA Astrophysics Data System (ADS)
Munshi, Dipak
2018-02-01
Gravitational clustering in the nonlinear regime remains poorly understood. A gravity dual of gravitational clustering has recently been proposed as a means to study the nonlinear regime. The stable clustering ansatz remains a key ingredient to our understanding of gravitational clustering in the highly nonlinear regime. We study certain aspects of the violation of the stable clustering ansatz in the gravity dual of Large Scale Structure (LSS). We extend the recent studies of gravitational clustering using an AdS gravity dual to take into account possible departures from the stable clustering ansatz, and to arbitrary dimensions. Next, we extend the recently introduced consistency relations to arbitrary dimensions. We use the consistency relations to test the commonly used models of gravitational clustering, including the halo models and hierarchical ansätze. In particular, we establish a tower of consistency relations for the hierarchical amplitudes Q, R_a, R_b, S_a, S_b, S_c, etc. as functions of the scaled peculiar velocity h. We also study the variants of popular halo models in this context. In contrast to recent claims, none of these models, in their simplest incarnation, seems to satisfy the consistency relations in the soft limit.
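For reference, the stable clustering ansatz is commonly stated as the requirement that mean physical pair velocities vanish on small scales, which also fixes the slope of the nonlinear correlation function. This is the standard textbook form, not a statement taken from this paper:

```latex
% Stable clustering: the mean relative peculiar velocity of pairs exactly
% cancels the Hubble flow on strongly nonlinear scales:
h(x) \;\equiv\; -\,\frac{\langle v_{12}(x)\rangle}{\dot{a}\,x}
\;\longrightarrow\; 1 \qquad (\xi \gg 1),
% which, for an initial power spectrum P(k) \propto k^{n}, fixes the slope
% of the nonlinear correlation function (Davis & Peebles 1977):
\xi(x) \;\propto\; x^{-\gamma}, \qquad \gamma = \frac{3(3+n)}{5+n}.
```

The scaled peculiar velocity h appearing in the tower of consistency relations above is exactly this quantity; stable clustering corresponds to h = 1, and the paper studies departures from it.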
Evaluating Mixture Modeling for Clustering: Recommendations and Cautions
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2011-01-01
This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…
Tracing Large Scale Structure with a Redshift Survey of Rich Clusters of Galaxies
NASA Astrophysics Data System (ADS)
Batuski, D.; Slinglend, K.; Haase, S.; Hill, J. M.
1993-12-01
Rich clusters of galaxies from Abell's catalog show evidence of structure on scales of 100 Mpc and hold promise of confirming the existence of structure in the more immediate universe on scales corresponding to COBE results (i.e., on the order of 10% or more of the horizon size of the universe). However, most Abell clusters do not as yet have measured redshifts (or, in the case of most low redshift clusters, have only one or two galaxies measured), so present knowledge of their three dimensional distribution has quite large uncertainties. The shortage of measured redshifts for these clusters may also mask a problem of projection effects corrupting the membership counts for the clusters, perhaps even to the point of spurious identifications of some of the clusters themselves. Our approach in this effort has been to use the MX multifiber spectrometer to measure redshifts of at least ten galaxies in each of about 80 Abell cluster fields with richness class R ≥ 1 and m_10 ≤ 16.8. This work will result in a somewhat deeper, much more complete (and reliable) sample of positions of rich clusters. Our primary use for the sample is for two-point correlation and other studies of the large scale structure traced by these clusters. We are also obtaining enough redshifts per cluster so that a much better sample of reliable cluster velocity dispersions will be available for other studies of cluster properties. To date, we have collected such data for 40 clusters, and for most of them, we have seven or more cluster members with redshifts, allowing for reliable velocity dispersion calculations. Velocity histograms for several interesting cluster fields are presented, along with summary tables of cluster redshift results.
Also, with 10 or more redshifts in most of our cluster fields (30' square, just about an 'Abell diameter' at z ~ 0.1), we have investigated the extent of projection effects within the Abell catalog in an effort to quantify and understand how this may affect the Abell sample.
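The two-point correlation studies mentioned above conventionally fit a power law to the cluster-cluster correlation function, estimated from the data catalogue against a random catalogue. The numerical values in the comments are indicative literature figures for rich Abell clusters, not results of this survey:

```latex
% Power-law form commonly fitted to the cluster-cluster correlation
% function, with indicative values r_0 \sim 20\,h^{-1}\,\mathrm{Mpc}
% and \gamma \sim 1.8 for rich Abell clusters:
\xi_{cc}(r) \;=\; \left(\frac{r_0}{r}\right)^{\gamma},
% estimated from pair counts in the data (D) and random (R) catalogues,
% e.g. with the Landy-Szalay estimator:
\hat{\xi}(r) \;=\; \frac{DD(r) - 2\,DR(r) + RR(r)}{RR(r)}.
```

A deeper, more complete redshift sample tightens both the pair counts entering the estimator and the reliability of the fitted correlation length.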
The Virtual Observatory Powered PhD Thesis
NASA Astrophysics Data System (ADS)
Zolotukhin, I. Yu.
2010-12-01
The Virtual Observatory has reached sufficient maturity for its routine scientific exploitation by astronomers. To prove this statement, here I present a brief description of the complete VO-powered PhD thesis entitled “Galactic and extragalactic research with modern surveys and the Virtual Observatory”, comprising 4 science cases covering various aspects of astrophysical research: (1) homogeneous search and measurement of main physical parameters of Galactic open star clusters in huge multi-band photometric surveys; (2) study of optical-to-NIR galaxy colors using a large homogeneous dataset including spectroscopy and photometry from SDSS and UKIDSS; (3) study of the faint low-mass X-ray binary population in modern observational archives; (4) search for optical counterparts of unidentified X-ray objects with large positional uncertainties in the Galactic Plane. All these projects make heavy use of VO technologies and tools and would not have been achievable without them. The refereed papers published in the framework of this thesis can therefore be added to the growing list of VO-based research works.
Galaxy clusters in local Universe simulations without density constraints: a long uphill struggle
NASA Astrophysics Data System (ADS)
Sorce, Jenny G.
2018-06-01
Galaxy clusters are excellent cosmological probes provided that their formation and evolution within the large-scale environment are precisely understood. Studies with simulated galaxy clusters have therefore flourished. However, detailed comparisons between simulated and observed clusters and their population (the galaxies) are complicated by the diversity of clusters and their surrounding environment. An original way to legitimize the one-to-one comparison exercise down to the details, initiated by Bertschinger as early as 1987, is to produce simulations constrained to resemble the cluster under study within its large-scale environment. Several methods have subsequently emerged to produce simulations that look like the local Universe. This paper highlights one of these methods and the steps essential to obtaining simulations that not only resemble the local Large Scale Structure but also host the local clusters. It includes a new modeling of the radial peculiar velocity uncertainties that removes the observed tendency for simulated cluster masses to decrease with distance from us as the amount of constraining data decreases. A distinctive feature of this method is that it uses solely radial peculiar velocities as constraints: no additional density constraints are required to obtain local cluster simulacra. The new resulting simulations host dark matter halos that match the most prominent local clusters, such as Coma. Zoom-in simulations of the latter, and of a volume larger than the inner sphere of 30 h^-1 Mpc radius, now become possible for studying local clusters and their effects. Mapping the local Sunyaev-Zel'dovich and Sachs-Wolfe effects can follow.
Cross-correlating the γ-ray Sky with Catalogs of Galaxy Clusters
NASA Astrophysics Data System (ADS)
Branchini, Enzo; Camera, Stefano; Cuoco, Alessandro; Fornengo, Nicolao; Regis, Marco; Viel, Matteo; Xia, Jun-Qing
2017-01-01
We report the detection of a cross-correlation signal between Fermi Large Area Telescope diffuse γ-ray maps and catalogs of clusters. In our analysis, we considered three different catalogs: WHL12, redMaPPer, and PlanckSZ. They all show a positive correlation with different amplitudes, related to the average mass of the objects in each catalog, which also sets the catalog bias. The signal detection is confirmed by the results of a stacking analysis. The cross-correlation signal extends to rather large angular scales, around 1°, that correspond, at the typical redshift of the clusters in these catalogs, to a few to tens of megaparsecs, i.e., the typical scale-length of the large-scale structures in the universe. Most likely this signal is contributed by the cumulative emission from active galactic nuclei (AGNs) associated with the filamentary structures that converge toward the high peaks of the matter density field in which galaxy clusters reside. In addition, our analysis reveals the presence of a second component, more compact in size and compatible with a point-like emission from within individual clusters. At present, we cannot distinguish between the two most likely interpretations for such a signal, i.e., whether it is produced by AGNs inside clusters or if it is a diffuse γ-ray emission from the intracluster medium. We argue that this latter, intriguing, hypothesis might be tested by applying this technique to a low-redshift large-mass cluster sample.
Merging Clusters, Cluster Outskirts, and Large Scale Filaments
NASA Astrophysics Data System (ADS)
Randall, Scott; Alvarez, Gabriella; Bulbul, Esra; Jones, Christine; Forman, William; Su, Yuanyuan; Miller, Eric D.; Bourdin, Herve
2018-01-01
Recent X-ray observations of the outskirts of clusters show that entropy profiles of the intracluster medium (ICM) generally flatten and lie below what is expected from purely gravitational structure formation near the cluster's virial radius. Possible explanations include electron/ion non-equilibrium, accretion shocks that weaken during cluster formation, and the presence of unresolved cool gas clumps. Some of these mechanisms are expected to correlate with large scale structure (LSS), such that the entropy is lower in regions where the ICM interfaces with LSS filaments and, presumably, the warm-hot intergalactic medium (WHIM). Major, binary cluster mergers are expected to take place at the intersection of LSS filaments, with the merger axis initially oriented along a filament. We present results from deep X-ray observations of the virialization regions of binary, early-stage merging clusters, including a possible detection of the dense end of the WHIM along a LSS filament.
Hieu, Nguyen Trong; Brochier, Timothée; Tri, Nguyen-Huu; Auger, Pierre; Brehmer, Patrice
2014-09-01
We consider a fishery model with two sites: (1) a marine protected area (MPA) where fishing is prohibited and (2) an area where the fish population is harvested. We assume that fish can migrate from the MPA to the fishing area at a very fast time scale, and that the fish spatial organisation can change from small to large clusters of schools at a fast time scale. The growth of the fish population and the catch are assumed to occur at a slow time scale. The complete model is a system of five ordinary differential equations with three time scales. We take advantage of the time scales, using aggregation of variables methods, to derive a reduced model governing the total fish density and fishing effort at the slow time scale. We analyze this aggregated model and show that under some conditions there exists an equilibrium corresponding to a sustainable fishery. Our results suggest that in small pelagic fisheries the yield is maximum when the fish population is distributed among both small and large clusters of schools.
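As a loose, hedged illustration of the kind of slow-time-scale aggregated model described above, here is a generic Gordon-Schaefer-style sketch. It is NOT the authors' reduced system; the profit-driven effort equation and all parameter values are invented for illustration:

```python
def simulate_fishery(r=1.0, K=1.0, q=0.5, c=0.2, p=1.0, e=0.1,
                     n0=0.5, E0=0.1, dt=1e-3, steps=400_000):
    """Forward-Euler integration of a generic fish/effort model
    (illustrative stand-in for the aggregated slow system):
      dn/dt = r*n*(1 - n/K) - q*n*E   (logistic growth minus catch)
      dE/dt = e*(p*q*n - c)*E         (effort follows profit)"""
    n, E = n0, E0
    for _ in range(steps):
        dn = r * n * (1.0 - n / K) - q * n * E
        dE = e * (p * q * n - c) * E
        n += dt * dn
        E += dt * dE
    return n, E

# interior equilibrium of this toy model: n* = c/(p*q) = 0.4,
# E* = r*(1 - n*/K)/q = 1.2 -- a sustainable-fishery steady state
n_eq, E_eq = simulate_fishery()
```

The long integration settles onto the coexistence equilibrium, mirroring the abstract's conclusion that a sustainable-fishery equilibrium exists under suitable conditions.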
NASA Astrophysics Data System (ADS)
Rushforth, R.; Ruddell, B. L.
2014-12-01
Water footprints have been proposed as potential sustainability indicators, but these analyses have thus far focused on the country or regional scale. However, for many countries, especially the United States, the most relevant level of water decision-making is the city. For water footprinting to inform urban sustainability, the boundaries for analysis must match the relevant boundaries for decision-making and economic development. Initial studies into city-level water footprints have provided insight into how large cities across the globe (Delhi, Lagos, Berlin, Beijing, York) create virtual water trade linkages with distant hinterlands. This study hypothesizes that for large cities the most direct and manageable virtual water flows exist at the metropolitan area scale and thus should provide the most policy-relevant information. This study represents an initial attempt at quantifying intra-metropolitan area virtual water flows. A modified commodity-by-industry input-output model was used to determine virtual water flows destined to, occurring within, and emanating from the Phoenix metropolitan area (PMA). Virtual water flows to and from the PMA were calculated for each PMA city using water consumption data as well as economic and industry statistics. Intra-PMA virtual water trade was determined using county-level traffic flow data, water consumption data, and economic and industry statistics. The findings show that there are archetypal cities within metropolitan areas and that each type of city has a distinct water footprint profile that is related to the value-added economic processes occurring within its boundaries. These findings can be used to inform local water managers about the resilience of outsourced water supplies.
Computer Science Techniques Applied to Parallel Atomistic Simulation
NASA Astrophysics Data System (ADS)
Nakano, Aiichiro
1998-03-01
Recent developments in parallel processing technology and multiresolution numerical algorithms have established large-scale molecular dynamics (MD) simulations as a new research mode for studying materials phenomena such as fracture. However, this requires large system sizes and long simulated times. We have developed: i) Space-time multiresolution schemes; ii) fuzzy-clustering approach to hierarchical dynamics; iii) wavelet-based adaptive curvilinear-coordinate load balancing; iv) multilevel preconditioned conjugate gradient method; and v) spacefilling-curve-based data compression for parallel I/O. Using these techniques, million-atom parallel MD simulations are performed for the oxidation dynamics of nanocrystalline Al. The simulations take into account the effect of dynamic charge transfer between Al and O using the electronegativity equalization scheme. The resulting long-range Coulomb interaction is calculated efficiently with the fast multipole method. Results for temperature and charge distributions, residual stresses, bond lengths and bond angles, and diffusivities of Al and O will be presented. The oxidation of nanocrystalline Al is elucidated through immersive visualization in virtual environments. A unique dual-degree education program at Louisiana State University will also be discussed in which students can obtain a Ph.D. in Physics & Astronomy and a M.S. from the Department of Computer Science in five years. This program fosters interdisciplinary research activities for interfacing High Performance Computing and Communications with large-scale atomistic simulations of advanced materials. This work was supported by NSF (CAREER Program), ARO, PRF, and Louisiana LEQSF.
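One listed ingredient, the spacefilling-curve-based data layout, is easy to illustrate. The sketch below computes 2-D Morton (Z-order) keys, a generic example of the idea of keeping spatially nearby atoms nearby in the 1-D ordering used for parallel I/O; it is not the authors' implementation:

```python
def morton2d(x, y, bits=16):
    """Interleave the low `bits` bits of integer cell coordinates
    (x, y) into a Z-order (Morton) key, so points that are close in
    2-D tend to be close in the resulting 1-D ordering."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x bits -> even positions
        key |= ((y >> i) & 1) << (2 * i + 1)    # y bits -> odd positions
    return key

# atoms sorted by Morton key end up in a cache/IO-friendly order
cells = sorted([(3, 3), (0, 0), (1, 0), (0, 1)], key=lambda c: morton2d(*c))
```

In a real MD code the same interleaving is applied in 3-D to binned atom coordinates before writing blocks to disk.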
Turbulence and fossil turbulence lead to life in the universe
NASA Astrophysics Data System (ADS)
Gibson, Carl H.
2013-07-01
Turbulence is defined as an eddy-like state of fluid motion where the inertial-vortex forces of the eddies are larger than all the other forces that tend to damp the eddies out. Fossil turbulence is a perturbation produced by turbulence that persists after the fluid ceases to be turbulent at the scale of the perturbation. Because vorticity is produced at small scales, turbulence must cascade from small scales to large, providing a consistent physical basis for Kolmogorovian universal similarity laws. Oceanic and astrophysical mixing and diffusion are dominated by fossil turbulence and fossil turbulent waves. Observations from space telescopes show turbulence and vorticity existed in the beginning of the universe and that their fossils persist. Fossils of big bang turbulence include spin and the dark matter of galaxies: clumps of ∼10^12 frozen hydrogen planets that make globular star clusters as seen by infrared and microwave space telescopes. When the planets were hot gas, they hosted the formation of life in a cosmic soup of hot-water oceans as they merged to form the first stars and chemicals. Because spontaneous life formation according to the standard cosmological model is virtually impossible, the existence of life falsifies the standard cosmological model.
Turbulence and Fossil Turbulence lead to Life in the Universe
NASA Astrophysics Data System (ADS)
Gibson, Carl H.
2012-03-01
Turbulence is defined as an eddy-like state of fluid motion where the inertial-vortex forces of the eddies are larger than all the other forces that tend to damp the eddies out. Fossil turbulence is a perturbation produced by turbulence that persists after the fluid ceases to be turbulent at the scale of the perturbation. Because vorticity is produced at small scales, turbulence must cascade from small scales to large, providing a consistent physical basis for Kolmogorovian universal similarity laws. Oceanic and astrophysical mixing and diffusion are dominated by fossil turbulence and fossil turbulent waves. Observations from space telescopes show turbulence and vorticity existed in the beginning of the universe and that their fossils persist. Fossils of big bang turbulence include spin and the dark matter of galaxies: clumps of ~10^12 frozen hydrogen planets that make globular star clusters as seen by infrared and microwave space telescopes. When the planets were hot gas, they hosted the formation of life in a cosmic soup of hot-water oceans as they merged to form the first stars and chemicals. Because spontaneous life formation according to the standard cosmological model is virtually impossible, the existence of life falsifies the standard cosmological model.
Gallicchio, Emilio; Deng, Nanjie; He, Peng; Wickstrom, Lauren; Perryman, Alexander L.; Santiago, Daniel N.; Forli, Stefano; Olson, Arthur J.; Levy, Ronald M.
2014-01-01
As part of the SAMPL4 blind challenge, filtered AutoDock Vina ligand docking predictions and binding free energy calculations with the large-scale binding energy distribution analysis method have been applied to the virtual screening of a focused library of candidate binders to the LEDGF site of the HIV integrase protein. The computational protocol leveraged docking and high-level atomistic models to improve enrichment. The enrichment factor of our blind predictions ranked best among all of the computational submissions, and second best overall. This work represents, to our knowledge, the first example of the application of an all-atom physics-based binding free energy model to large-scale virtual screening. A total of 285 parallel Hamiltonian replica exchange molecular dynamics absolute protein-ligand binding free energy simulations were conducted starting from docked poses. The setup of the simulations was fully automated, and the calculations were distributed on multiple computing resources and completed in a six-week period. The accuracy of the docked poses and the inclusion of intramolecular strain and entropic losses in the binding free energy estimates were the major factors behind the success of the method. Lack of sufficient time and computing resources to investigate additional protonation states of the ligands was a major cause of mispredictions. The experiment demonstrated the applicability of binding free energy modeling to improve hit rates in challenging virtual screening of focused ligand libraries during lead optimization. PMID:24504704
Interpreting Observations of Large-Scale Traveling Ionospheric Disturbances by Ionospheric Sounders
NASA Astrophysics Data System (ADS)
Pederick, L. H.; Cervera, M. A.; Harris, T. J.
2017-12-01
From July to October 2015, the Australian Defence Science and Technology Group conducted an experiment during which a vertical incidence sounder (VIS) was set up at Alice Springs Airport. During September 2015 this VIS observed the passage of many large-scale traveling ionospheric disturbances (TIDs). By plotting the measured virtual heights across multiple frequencies as a function of time, the passage of the TID can be clearly displayed. Using this plotting method, we show that all the TIDs observed during the campaign by the VIS at Alice Springs exhibit an apparent downward phase progression of the crests and troughs. The passage of the TID can be more clearly interpreted by plotting the true height of iso-ionic contours across multiple plasma frequencies; the true heights can be obtained by inverting each ionogram to obtain an electron density profile. These plots can be used to measure the vertical phase speed of a TID and also reveal a time lag between events seen in true height compared to virtual height. To the best of our knowledge, this style of analysis has not previously been applied to other swept-frequency sounder observations. We develop a simple model to investigate the effect of the passage of a large-scale TID on a VIS. The model confirms that for a TID with a downward vertical phase progression, the crests and troughs will appear earlier in virtual height than in true height and will have a smaller apparent speed in true height than in virtual height.
ERIC Educational Resources Information Center
Andrews, Dee H.; Dineen, Toni; Bell, Herbert H.
1999-01-01
Discusses the use of constructive modeling and virtual simulation in team training; describes a military application of constructive modeling, including technology issues and communication protocols; considers possible improvements; and discusses applications in team-learning environments other than military, including industry and education. (LRW)
Galaxy clusters and cold dark matter - A low-density unbiased universe?
NASA Technical Reports Server (NTRS)
Bahcall, Neta A.; Cen, Renyue
1992-01-01
Large-scale simulations of a universe dominated by cold dark matter (CDM) are tested against two fundamental properties of clusters of galaxies: the cluster mass function and the cluster correlation function. We find that standard biased CDM models are inconsistent with these observations for any bias parameter b. A low-density, low-bias CDM-type model, with or without a cosmological constant, appears to be consistent with both the cluster mass function and the cluster correlations. The low-density model agrees well with the observed correlation function of the Abell, Automatic Plate Measuring Facility (APM), and Edinburgh-Durham cluster catalogs. The model is in excellent agreement with the observed dependence of the correlation strength on cluster mean separation, reproducing the measured universal dimensionless cluster correlation. The low-density model is also consistent with other large-scale structure observations, including the APM angular galaxy-correlations, and for lambda = 1-Omega with the COBE results of the microwave background radiation fluctuations.
Kinetics of Aggregation with Choice
Ben-Naim, Eli; Krapivsky, Paul
2016-12-01
Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected, and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
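The merge rule described in the abstract is straightforward to simulate. Below is a minimal Monte Carlo sketch of the larger-candidate variant; function and parameter names are ours:

```python
import random

def aggregate_with_choice(sizes, steps, seed=0):
    """Aggregation with choice: at each step draw a target cluster and
    two distinct candidate clusters at random; the target merges with
    the LARGER of the two candidates. Returns the final size list."""
    rng = random.Random(seed)
    sizes = list(sizes)
    for _ in range(steps):
        if len(sizes) < 3:
            break
        i, j, k = rng.sample(range(len(sizes)), 3)  # target, candidates
        winner = j if sizes[j] >= sizes[k] else k   # larger candidate
        sizes[i] += sizes[winner]                   # merge into target
        sizes.pop(winner)                           # winner is consumed
    return sizes

# start from 1000 monomers, perform 500 merge events
final = aggregate_with_choice([1] * 1000, 500)
```

Each merge event removes exactly one cluster and conserves total mass, so a histogram of `final` can be compared against the scaling form discussed in the abstract.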
Small-Scale Drop-Size Variability: Empirical Models for Drop-Size-Dependent Clustering in Clouds
NASA Technical Reports Server (NTRS)
Marshak, Alexander; Knyazikhin, Yuri; Larsen, Michael L.; Wiscombe, Warren J.
2005-01-01
By analyzing aircraft measurements of individual drop sizes in clouds, it has been shown in a companion paper that the probability of finding a drop of radius r at a linear scale l decreases as l^D(r), where 0 <= D(r) <= 1. This paper shows striking examples of the spatial distribution of large cloud drops using models that simulate the observed power laws. In contrast to currently used models that assume homogeneity and a Poisson distribution of cloud drops, these models illustrate strong drop clustering, especially with larger drops. The degree of clustering is determined by the observed exponents D(r). The strong clustering of large drops arises naturally from the observed power-law statistics. This clustering has vital consequences for rain physics, including how fast rain can form. For radiative transfer theory, clustering of large drops enhances their impact on the cloud optical path. The clustering phenomenon also helps explain why remotely sensed cloud drop size is generally larger than that measured in situ.
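The power-law statistic above (occupancy probability falling off as l^D with 0 <= D <= 1) can be demonstrated on a synthetic point set whose exponent is known exactly. The sketch below box-counts a middle-thirds Cantor set, for which the occupied-cell fraction scales as l^(1 - log 2 / log 3) ≈ l^0.37; it illustrates the scaling statistic only and is not the companion paper's analysis:

```python
import math

def cantor_points(depth):
    """Integer ternary codes (digits 0 or 2 only) of the left endpoints
    of the middle-thirds Cantor construction after `depth` levels;
    the integer n encodes the point n / 3**depth. Integer arithmetic
    avoids float rounding at cell boundaries."""
    pts = [0]
    for _ in range(depth):
        pts = [3 * p for p in pts] + [3 * p + 2 for p in pts]
    return pts

def scaling_exponent(pts, depth, kmax):
    """Least-squares slope of log(occupied-cell fraction) versus
    log(cell size l): the exponent D in fraction ~ l**D."""
    xs, ys = [], []
    for k in range(1, kmax + 1):
        n_cells = 3 ** k
        occupied = len({p // 3 ** (depth - k) for p in pts})
        xs.append(math.log(1.0 / n_cells))        # log l
        ys.append(math.log(occupied / n_cells))   # log occupied fraction
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

D = scaling_exponent(cantor_points(8), depth=8, kmax=6)
```

For an unclustered (space-filling) set the same procedure would yield D = 0, matching the boundary of the observed range.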
Dynamically Allocated Virtual Clustering Management System Users Guide
2016-11-01
This report provides usage instructions for the Dynamically Allocated Virtual Clustering (DAVC) version 2.0 web application.
The dynamics and evolution of clusters of galaxies
NASA Technical Reports Server (NTRS)
Geller, Margaret; Huchra, John P.
1987-01-01
Research was undertaken to produce a coherent picture of the formation and evolution of large-scale structures in the universe. The program is divided into projects which examine four areas: the relationship between individual galaxies and their environment; the structure and evolution of individual rich clusters of galaxies; the nature of superclusters; and the large-scale distribution of individual galaxies. A brief review of results in each area is provided.
Visual based laser speckle pattern recognition method for structural health monitoring
NASA Astrophysics Data System (ADS)
Park, Kyeongtaek; Torbol, Marco
2017-04-01
This study performed the system identification of a target structure by analyzing the laser speckle pattern recorded by a camera. The laser speckle pattern is generated by the diffuse reflection of the laser beam on a rough surface of the target structure. The camera, equipped with a red filter, records the scattered speckle particles of the laser light in real time, and the raw speckle image pixel data are fed to the graphics processing unit (GPU) in the system. The algorithm for laser speckle contrast analysis (LASCA) computes the laser speckle contrast images and the laser speckle flow images. The k-means clustering algorithm is used to classify the pixels in each frame, and the clusters' centroids, which function as virtual sensors, track the displacement between different frames in the time domain. The fast Fourier transform (FFT) and the frequency domain decomposition (FDD) compute the modal properties of the structure: natural frequencies and damping ratios. This study takes advantage of the large-scale computational capability of the GPU. The algorithm is written in Compute Unified Device Architecture (CUDA C), which allows the processing of speckle images in real time.
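The clustering step can be sketched with a plain Lloyd's-algorithm k-means over pixel coordinates. This is a generic CPU stand-in, not the paper's CUDA/LASCA pipeline; the synthetic "speckle blobs" and the deterministic initialization are our assumptions:

```python
import random

def kmeans(points, k, iters=20):
    """Minimal Lloyd's k-means on 2-D points. The returned centroids
    play the role of the 'virtual sensors'. Deterministic init:
    k points spread evenly through the input list."""
    step = max(1, (len(points) - 1) // max(1, k - 1))
    cents = [list(points[min(i * step, len(points) - 1)]) for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared distance
        for idx, (x, y) in enumerate(points):
            labels[idx] = min(range(k),
                              key=lambda c: (x - cents[c][0]) ** 2 +
                                            (y - cents[c][1]) ** 2)
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                cents[c] = [sum(m[0] for m in members) / len(members),
                            sum(m[1] for m in members) / len(members)]
    return cents, labels

# two well-separated synthetic 'speckle blobs' of pixel coordinates
rng = random.Random(0)
pts = [(rng.gauss(0.0, 0.1), rng.gauss(0.0, 0.1)) for _ in range(50)] + \
      [(rng.gauss(5.0, 0.1), rng.gauss(5.0, 0.1)) for _ in range(50)]
cents, labels = kmeans(pts, 2)
```

Tracking each centroid across successive frames then yields the displacement time series that the FFT/FDD stage would consume.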
Visualizing vascular structures in virtual environments
NASA Astrophysics Data System (ADS)
Wischgoll, Thomas
2013-01-01
In order to learn more about the cause of coronary heart diseases and develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. This can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.
Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.
Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P
2010-12-22
Comparative genomics resources, such as ortholog detection tools and repositories, are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource, Roundup, using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool Roundup as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
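The job-ordering idea, predicting runtimes and submitting jobs so that all nodes finish at similar times, can be sketched with a longest-processing-time (LPT) heuristic. This is an illustrative scheduling sketch, not Roundup's actual runtime model or Elastic MapReduce deployment:

```python
import heapq

def schedule_lpt(predicted_runtimes, n_nodes):
    """Longest-processing-time-first: submit the longest predicted
    jobs first, always to the currently least-loaded node, so node
    finish times stay balanced and billed idle time is minimized.
    Returns the makespan (wall-clock, i.e. billed, time)."""
    loads = [0.0] * n_nodes
    heapq.heapify(loads)
    for t in sorted(predicted_runtimes, reverse=True):
        lightest = heapq.heappop(loads)   # least-loaded node
        heapq.heappush(loads, lightest + t)
    return max(loads)

# hypothetical predicted runtimes (hours) for 7 genome comparisons
makespan = schedule_lpt([5, 3, 3, 2, 2, 2, 1], n_nodes=3)
```

In a cloud setting the same balancing logic applies with runtimes predicted from genome size and complexity; a random submission order typically yields a longer, and therefore more expensive, makespan.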
NASA Astrophysics Data System (ADS)
Zhu, Hongyu; Alam, Shadab; Croft, Rupert A. C.; Ho, Shirley; Giusarma, Elena
2017-10-01
Large redshift surveys of galaxies and clusters are providing the first opportunities to search for distortions in the observed pattern of large-scale structure due to such effects as gravitational redshift. We focus on non-linear scales and apply a quasi-Newtonian approach using N-body simulations to predict the small asymmetries in the cross-correlation function of two different galaxy populations. Following recent work by Bonvin et al., Zhao and Peacock and Kaiser on galaxy clusters, we include effects which enter at the same order as gravitational redshift: the transverse Doppler effect, light-cone effects, relativistic beaming, luminosity distance perturbation and wide-angle effects. We find that all these effects cause asymmetries in the cross-correlation functions. Quantifying these asymmetries, we find that the total effect is dominated by the gravitational redshift and luminosity distance perturbation at small and large scales, respectively. By adding additional subresolution modelling of galaxy structure to the large-scale structure information, we find that the signal is significantly increased, indicating that structure on the smallest scales is important and should be included. We report on comparison of our simulation results with measurements from the SDSS/BOSS galaxy redshift survey in a companion paper.
A link between nonlinear self-organization and dissipation in drift-wave turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manz, P.; Birkenmeier, G.; Stroth, U.
Structure formation and self-organization in two-dimensional drift-wave turbulence show up in many different guises. Fluctuation data from a magnetized plasma are analyzed and three mechanisms transferring kinetic energy to large-scale structures are identified. Besides the common vortex merger, clustering of vortices constituting a large-scale strain field and vortex thinning, where due to the interactions of vortices of different scales larger vortices are amplified by the smaller ones, are observed. The vortex thinning mechanism appears to be the most efficient one for generating large-scale structures in drift-wave turbulence. Vortex merging as well as vortex clustering are accompanied by strong energy transfer to small-scale noncoherent fluctuations (dissipation), balancing the negative entropy generation due to the self-organization process.
Gönci, Balázs; Németh, Valéria; Balogh, Emeric; Szabó, Bálint; Dénes, Ádám; Környei, Zsuzsanna; Vicsek, Tamás
2010-12-20
Because of its relevance to everyday life, the spreading of viral infections has been of central interest in a variety of scientific communities involved in fighting, preventing and theoretically interpreting epidemic processes. Recent large scale observations have resulted in major discoveries concerning the overall features of the spreading process in systems with highly mobile susceptible units, but virtually no data are available about observations of infection spreading for a very large number of immobile units. Here we present the first detailed quantitative documentation of percolation-type viral epidemics in a highly reproducible in vitro system consisting of tens of thousands of virtually motionless cells. We use a confluent astroglial monolayer in a Petri dish and induce productive infection in a limited number of cells with a genetically modified herpesvirus strain. This approach allows extreme high resolution tracking of the spatio-temporal development of the epidemic. We show that a simple model is capable of reproducing the basic features of our observations, i.e., the observed behaviour is likely to be applicable to many different kinds of systems. Statistical physics inspired approaches to our data, such as fractal dimension of the infected clusters as well as their size distribution, seem to fit into a percolation theory based interpretation. We suggest that our observations may be used to model epidemics in more complex systems, which are difficult to study in isolation.
Gönci, Balázs; Németh, Valéria; Balogh, Emeric; Szabó, Bálint; Dénes, Ádám; Környei, Zsuzsanna; Vicsek, Tamás
2010-01-01
Because of its relevance to everyday life, the spreading of viral infections has been of central interest in a variety of scientific communities involved in fighting, preventing and theoretically interpreting epidemic processes. Recent large scale observations have resulted in major discoveries concerning the overall features of the spreading process in systems with highly mobile susceptible units, but virtually no data are available about observations of infection spreading for a very large number of immobile units. Here we present the first detailed quantitative documentation of percolation-type viral epidemics in a highly reproducible in vitro system consisting of tens of thousands of virtually motionless cells. We use a confluent astroglial monolayer in a Petri dish and induce productive infection in a limited number of cells with a genetically modified herpesvirus strain. This approach allows extreme high resolution tracking of the spatio-temporal development of the epidemic. We show that a simple model is capable of reproducing the basic features of our observations, i.e., the observed behaviour is likely to be applicable to many different kinds of systems. Statistical physics inspired approaches to our data, such as fractal dimension of the infected clusters as well as their size distribution, seem to fit into a percolation theory based interpretation. We suggest that our observations may be used to model epidemics in more complex systems, which are difficult to study in isolation. PMID:21187920
Radovich, Milan; Solzak, Jeffrey P; Hancock, Bradley A; Conces, Madison L; Atale, Rutuja; Porter, Ryan F; Zhu, Jin; Glasscock, Jarret; Kesler, Kenneth A; Badve, Sunil S; Schneider, Bryan P; Loehrer, Patrick J
2016-02-16
Thymomas are one of the most rarely diagnosed malignancies. To better understand their biology and to identify therapeutic targets, we performed next-generation RNA sequencing. The RNA was sequenced from 13 thymic malignancies and 3 normal thymus glands. Validation of microRNA expression was performed on a separate set of 35 thymic malignancies. For cell-based studies, a thymoma cell line was used. Hierarchical clustering revealed 100% concordance between gene expression clusters and WHO subtype. A substantial differentiator was a large microRNA cluster on chr19q13.42 that was significantly overexpressed in all A and AB tumours and whose expression was virtually absent in the other thymomas and normal tissues. Overexpression of this microRNA cluster activates the PI3K/AKT/mTOR pathway. Treatment of a thymoma AB cell line with a panel of PI3K/AKT/mTOR inhibitors resulted in a marked reduction of cell viability. A large microRNA cluster on chr19q13.42 is a transcriptional hallmark of type A and AB thymomas. Furthermore, this cluster activates the PI3K pathway, suggesting the possible exploration of PI3K inhibitors in patients with these subtypes of tumour. This work has led to the initiation of a phase II clinical trial of PI3K inhibition in relapsed or refractory thymomas (http://clinicaltrials.gov/ct2/show/NCT02220855).
Radovich, Milan; Solzak, Jeffrey P; Hancock, Bradley A; Conces, Madison L; Atale, Rutuja; Porter, Ryan F; Zhu, Jin; Glasscock, Jarret; Kesler, Kenneth A; Badve, Sunil S; Schneider, Bryan P; Loehrer, Patrick J
2016-01-01
Background: Thymomas are one of the most rarely diagnosed malignancies. To better understand their biology and to identify therapeutic targets, we performed next-generation RNA sequencing. Methods: The RNA was sequenced from 13 thymic malignancies and 3 normal thymus glands. Validation of microRNA expression was performed on a separate set of 35 thymic malignancies. For cell-based studies, a thymoma cell line was used. Results: Hierarchical clustering revealed 100% concordance between gene expression clusters and WHO subtype. A substantial differentiator was a large microRNA cluster on chr19q13.42 that was significantly overexpressed in all A and AB tumours and whose expression was virtually absent in the other thymomas and normal tissues. Overexpression of this microRNA cluster activates the PI3K/AKT/mTOR pathway. Treatment of a thymoma AB cell line with a panel of PI3K/AKT/mTOR inhibitors resulted in a marked reduction of cell viability. Conclusions: A large microRNA cluster on chr19q13.42 is a transcriptional hallmark of type A and AB thymomas. Furthermore, this cluster activates the PI3K pathway, suggesting the possible exploration of PI3K inhibitors in patients with these subtypes of tumour. This work has led to the initiation of a phase II clinical trial of PI3K inhibition in relapsed or refractory thymomas (http://clinicaltrials.gov/ct2/show/NCT02220855). PMID:26766736
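The hierarchical clustering step reported above can be sketched as follows (toy data only: two synthetic sample groups stand in for WHO subtypes, and SciPy's standard linkage routines replace whatever pipeline the study used):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Toy "expression" matrix: 16 samples x 50 genes, two synthetic subtypes
group_a = rng.normal(0.0, 1.0, size=(8, 50))
group_b = rng.normal(3.0, 1.0, size=(8, 50))
expr = np.vstack([group_a, group_b])

# Ward-linkage hierarchical clustering of samples (Euclidean distance),
# then cut the dendrogram into two flat clusters
Z = linkage(expr, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

With well-separated groups like these, the flat cluster labels reproduce the two simulated subtypes, analogous to the 100% cluster/subtype concordance the abstract reports on real data.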
Managing a Statewide Virtual Reference Service: How Q and A NJ Works.
ERIC Educational Resources Information Center
Bromberg, Peter
2003-01-01
Describes the live virtual reference service, Q and A NJ (Question and Answer New Jersey), strategies used to meet the challenges of day-to-day management, scaled growth and quality control. Describes how it began; how long it took; how to manage a large project (constant communication; training and practice; transcript analysis and privacy;…
NASA Technical Reports Server (NTRS)
Mjolsness, Eric; Castano, Rebecca; Mann, Tobias; Wold, Barbara
2000-01-01
We provide preliminary evidence that existing algorithms for inferring small-scale gene regulation networks from gene expression data can be adapted to large-scale gene expression data coming from hybridization microarrays. The essential steps are (1) clustering many genes by their expression time-course data into a minimal set of clusters of co-expressed genes, (2) theoretically modeling the various conditions under which the time-courses are measured using a continuous-time analog recurrent neural network for the cluster mean time-courses, (3) fitting such a regulatory model to the cluster mean time courses by simulated annealing with weight decay, and (4) analysing several such fits for commonalities in the circuit parameter sets including the connection matrices. This procedure can be used to assess the adequacy of existing and future gene expression time-course data sets for determining transcriptional regulatory relationships such as coregulation.
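Step (1) of the procedure, clustering many genes by their expression time-courses, might be sketched like this (synthetic time-courses; the cluster count, waveforms, and noise level are illustrative assumptions, and the later steps of neural-network modeling and simulated annealing are not shown):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 12)            # 12 time points
# Toy time-courses: 30 "genes" following one of two waveforms plus noise
rising = np.sin(np.pi * t)
falling = np.cos(np.pi * t)
data = np.vstack([rising + 0.1 * rng.normal(size=(15, 12)),
                  falling + 0.1 * rng.normal(size=(15, 12))])

# Step (1): reduce many genes to a few clusters of co-expressed genes;
# the cluster mean time-courses would then feed the regulatory model
centroids, labels = kmeans2(data, k=2, seed=2, minit="++")
cluster_means = [data[labels == i].mean(axis=0) for i in range(2)]
print(labels)
```

The two recovered cluster means approximate the underlying waveforms; in the paper's pipeline these mean time-courses, not the individual genes, are what the recurrent-network model is fit to.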
STAR FORMATION AND SUPERCLUSTER ENVIRONMENT OF 107 NEARBY GALAXY CLUSTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, Seth A.; Hickox, Ryan C.; Wegner, Gary A.
We analyze the relationship between star formation (SF), substructure, and supercluster environment in a sample of 107 nearby galaxy clusters using data from the Sloan Digital Sky Survey. Previous works have investigated the relationships between SF and cluster substructure, and between cluster substructure and supercluster environment, but definitive conclusions relating all three of these variables have remained elusive. We find an inverse relationship between cluster SF fraction (f_SF) and supercluster environment density, calculated using the galaxy luminosity density field at a smoothing length of 8 h^-1 Mpc (D8). The slope of f_SF versus D8 is −0.008 ± 0.002. The f_SF of clusters located in low-density large-scale environments, 0.244 ± 0.011, is higher than for clusters located in high-density supercluster cores, 0.202 ± 0.014. We also divide superclusters, according to their morphology, into filament- and spider-type systems. The inverse relationship between cluster f_SF and large-scale density is dominated by filament- rather than spider-type superclusters. In high-density cores of superclusters, we find a higher f_SF in spider-type superclusters, 0.229 ± 0.016, than in filament-type superclusters, 0.166 ± 0.019. Using principal component analysis, we confirm these results and the direct correlation between cluster substructure and SF. These results indicate that cluster SF is affected by the dynamical age of the cluster (younger systems exhibit higher amounts of SF), the large-scale density of the supercluster environment (high-density core regions exhibit lower amounts of SF), and supercluster morphology (spider-type superclusters exhibit higher amounts of SF at high densities).
Spatial-temporal clustering of tornadoes
NASA Astrophysics Data System (ADS)
Malamud, Bruce D.; Turcotte, Donald L.; Brooks, Harold E.
2016-12-01
The standard measure of the intensity of a tornado is the Enhanced Fujita scale, which is based qualitatively on the damage caused by a tornado. An alternative measure of tornado intensity is the tornado path length, L. Here we examine the spatial-temporal clustering of severe tornadoes, which we define as having path lengths L ≥ 10 km. Of particular concern are tornado outbreaks, when a large number of severe tornadoes occur in a day in a restricted region. We apply a spatial-temporal clustering analysis developed for earthquakes. We take all pairs of severe tornadoes in observed and modelled outbreaks, and for each pair plot the spatial lag (distance between touchdown points) against the temporal lag (time between touchdown points). We apply our spatial-temporal lag methodology to the intense tornado outbreaks in the central United States on 26 and 27 April 2011, which resulted in over 300 fatalities and produced 109 severe (L ≥ 10 km) tornadoes. The patterns of spatial-temporal lag correlations that we obtain for the 2 days are strikingly different. On 26 April 2011, there were 45 severe tornadoes and our clustering analysis is dominated by a complex sequence of linear features. We associate the linear patterns with the tornadoes generated in either a single cell thunderstorm or a closely spaced cluster of single cell thunderstorms moving at a near-constant velocity. Our study of a derecho tornado outbreak of six severe tornadoes on 4 April 2011 along with modelled outbreak scenarios confirms this association. On 27 April 2011, there were 64 severe tornadoes and our clustering analysis is predominantly random with virtually no embedded linear patterns. We associate this pattern with a large number of interacting supercell thunderstorms generating tornadoes randomly in space and time. In order to better understand these associations, we also applied our approach to the Great Plains tornado outbreak of 3 May 1999. 
Careful studies by others have associated individual tornadoes with specified supercell thunderstorms. Our analysis of the 3 May 1999 tornado outbreak directly associated linear features in the largely random spatial-temporal analysis with several supercell thunderstorms, which we then confirmed using model scenarios of synthetic tornado outbreaks. We suggest that it may be possible to develop a semi-automated modelling of tornado touchdowns to match the type of observations made on the 3 May 1999 outbreak.
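The pairwise spatial-temporal lag analysis described above reduces to a simple computation; a sketch with hypothetical touchdown data (planar coordinates rather than the geographic coordinates used in the study) follows:

```python
from itertools import combinations
from math import hypot

def lag_pairs(events):
    """All pairwise (spatial_lag_km, temporal_lag_h) for tornado touchdowns.

    events: list of (x_km, y_km, t_hours) touchdown points on a flat map
    (a planar approximation of the geographic analysis).
    """
    pairs = []
    for (x1, y1, t1), (x2, y2, t2) in combinations(events, 2):
        pairs.append((hypot(x2 - x1, y2 - y1), abs(t2 - t1)))
    return pairs

# A synthetic "single-cell storm" moving at constant velocity produces
# a linear feature: spatial lag proportional to temporal lag
storm = [(30.0 * t, 0.0, t) for t in range(5)]   # 30 km/h along x
print(lag_pairs(storm)[:3])   # → [(30.0, 1), (60.0, 2), (90.0, 3)]
```

Touchdowns scattered randomly in space and time instead fill the lag plane without such linear features, which is the distinction the authors draw between the 26 and 27 April 2011 outbreaks.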
Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis
Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun
2015-01-01
In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it. PMID:26241496
Multi scales based sparse matrix spectral clustering image segmentation
NASA Astrophysics Data System (ADS)
Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin
2018-04-01
In image segmentation, spectral clustering algorithms must choose an appropriate scaling parameter to calculate the similarity matrix between pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract image features at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm achieves better accuracy and robustness.
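A sketch of the sparse-similarity spectral clustering idea, using a k-nearest-neighbour graph so the similarity matrix stays sparse rather than dense n x n (toy 2D points stand in for multi-scale pixel features, and scikit-learn replaces the authors' implementation):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Toy "pixels": two well-separated blobs in feature space
pts = np.vstack([rng.normal(0, 0.3, size=(40, 2)),
                 rng.normal(3, 0.3, size=(40, 2))])

# Sparse similarity: connect each point only to its k nearest neighbours,
# so memory scales with k*n instead of n^2
knn = kneighbors_graph(pts, n_neighbors=8, mode="connectivity")
affinity = 0.5 * (knn + knn.T)        # symmetrise the graph

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(np.bincount(labels))            # roughly 40 points per cluster
```

The k-NN graph also sidesteps the scaling-parameter sensitivity the abstract mentions, since connectivity depends on neighbour rank rather than on a Gaussian kernel width.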
CROSS-CORRELATING THE γ-RAY SKY WITH CATALOGS OF GALAXY CLUSTERS
Branchini, Enzo; Camera, Stefano; Cuoco, Alessandro; ...
2017-01-18
In this article, we report the detection of a cross-correlation signal between Fermi Large Area Telescope diffuse γ-ray maps and catalogs of clusters. In our analysis, we considered three different catalogs: WHL12, redMaPPer, and PlanckSZ. They all show a positive correlation with different amplitudes, related to the average mass of the objects in each catalog, which also sets the catalog bias. The signal detection is confirmed by the results of a stacking analysis. The cross-correlation signal extends to rather large angular scales, around 1°, that correspond, at the typical redshift of the clusters in these catalogs, to a few to tens of megaparsecs, i.e., the typical scale-length of the large-scale structures in the universe. Most likely this signal is contributed by the cumulative emission from active galactic nuclei (AGNs) associated with the filamentary structures that converge toward the high peaks of the matter density field in which galaxy clusters reside. In addition, our analysis reveals the presence of a second component, more compact in size and compatible with a point-like emission from within individual clusters. At present, we cannot distinguish between the two most likely interpretations for such a signal, i.e., whether it is produced by AGNs inside clusters or if it is a diffuse γ-ray emission from the intracluster medium. Lastly, we argue that this latter, intriguing, hypothesis might be tested by applying this technique to a low-redshift large-mass cluster sample.
NASA Astrophysics Data System (ADS)
Miller, Christopher J.
2012-03-01
There are many examples of clustering in astronomy. Stars in our own galaxy are often seen as being gravitationally bound into tight globular or open clusters. The Solar System's Trojan asteroids cluster at the gravitational Lagrangian points along Jupiter's orbit. On the largest of scales, we find gravitationally bound clusters of galaxies, the Virgo cluster (in the constellation of Virgo at a distance of ~50 million light years) being a prime nearby example. The Virgo cluster subtends an angle of nearly 8° on the sky and is known to contain over a thousand member galaxies. Galaxy clusters play an important role in our understanding of the Universe. Clusters exist at peaks in the three-dimensional large-scale matter density field. Their sky (2D) locations are easy to detect in astronomical imaging data, and their mean galaxy redshifts (redshift is related to the third spatial dimension: distance) are often determined better (spectroscopically) and more cheaply (photometrically) than for the entire galaxy population in large sky surveys. Photometric redshift (z) [photometric techniques use the broad-band filter magnitudes of a galaxy to estimate the redshift; spectroscopic techniques use the galaxy spectra and emission/absorption line features to measure the redshift] determinations of galaxies within clusters are accurate to better than delta_z = 0.05 [7], and when studied as a cluster population, the central galaxies form a line in color-magnitude space (called the E/S0 ridgeline and visible in Figure 16.3) that contains galaxies with similar stellar populations [15]. The shape of this E/S0 ridgeline enables astronomers to measure the cluster redshift to within delta_z = 0.01 [23]. The most accurate cluster redshift determinations come from spectroscopy of the member galaxies, where only a fraction of the members need to be spectroscopically observed [25,42] to get an accurate redshift for the whole system. 
If light traces mass in the Universe, then the locations of galaxy clusters will be at the locations of the peaks in the true underlying (mostly) dark matter density field. Kaiser (1984) [19] called this the high-peak model, which we demonstrate in Figure 16.1. We show a two-dimensional representation of a density field created by summing plane-waves with a predetermined power and with random wave-vector directions. In the left panel, we plot only the largest modes, where we see the density peaks (black) and valleys (white) in the combined field. In the right panel, we allow for smaller modes. You can see that the highest density peaks in the left panel contain smaller-scale, but still high-density peaks. These are the locations of future galaxy clusters. The bottom panel shows just these cluster-scale peaks. As you can see, the peaks themselves are clustered, and instead of just one large high-density peak in the original density field (see the left panel), the smaller modes show that six peaks are "born" within the broader, underlying large-scale density modes. This exemplifies the "bias" or amplified structure that is traced by galaxy clusters [19]. Clusters are rare, easy to find, and their member galaxies provide good distance estimates. In combination with their amplified clustering signal described above, galaxy clusters are considered an efficient and precise tracer of the large-scale matter density field in the Universe. Galaxy clusters can also be used to measure the baryon content of the Universe [43]. They can be used to identify gravitational lenses [38] and map the distribution of matter in clusters. The number and spatial distribution of galaxy clusters can be used to constrain cosmological parameters, like the fraction of the energy density in the Universe due to matter (Omega_matter) or the variation in the density field on fixed physical scales (sigma_8) [26,33]. 
The individual clusters act as “Island Universes” and as such are laboratories where we can study the evolution of the properties of the cluster, such as the hot, gaseous intra-cluster medium, or the shapes, colors, and star-formation histories of the member galaxies [17].
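The plane-wave construction of the high-peak model described above can be sketched numerically (mode counts, wavenumber ranges, and the peak threshold are arbitrary choices for illustration, not the values behind Figure 16.1):

```python
import numpy as np

rng = np.random.default_rng(42)

def plane_wave_field(n=128, n_modes=40, kmin=2, kmax=6):
    """Sum random-direction plane waves to build a toy 2D density field."""
    y, x = np.mgrid[0:n, 0:n] / n
    field = np.zeros((n, n))
    for _ in range(n_modes):
        k = rng.uniform(kmin, kmax)          # wavenumber magnitude
        theta = rng.uniform(0, 2 * np.pi)    # random wave-vector direction
        phase = rng.uniform(0, 2 * np.pi)
        field += np.cos(2 * np.pi * k * (x * np.cos(theta)
                                         + y * np.sin(theta)) + phase)
    return field / np.sqrt(n_modes)

large = plane_wave_field(kmax=4)                  # long-wavelength modes only
full = large + plane_wave_field(kmin=6, kmax=20)  # add small-scale modes

# "High peaks": cells above a fixed threshold of the full field; these
# sit preferentially inside the broad large-scale overdensities
peaks = full > 2.0
print(peaks.sum(), (large[peaks] > 0).mean())
```

The second printed number, the fraction of high peaks that lie in large-scale overdense regions, comes out well above one half, which is the amplified ("biased") clustering of peaks that the text describes.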
The properties of the disk system of globular clusters
NASA Technical Reports Server (NTRS)
Armandroff, Taft E.
1989-01-01
A large refined data sample is used to study the properties and origin of the disk system of globular clusters. A scale height for the disk cluster system of 800-1500 pc is found, which is consistent with scale-height determinations for samples of field stars identified with the Galactic thick disk. A rotational velocity of 193 ± 29 km/s and a line-of-sight velocity dispersion of 59 ± 14 km/s have been found for the metal-rich clusters.
NASA Astrophysics Data System (ADS)
Okumura, Teppei; Takada, Masahiro; More, Surhud; Masaki, Shogo
2017-07-01
The peculiar velocity field measured by redshift-space distortions (RSD) in galaxy surveys provides a unique probe of the growth of large-scale structure. However, systematic effects arise when including satellite galaxies in the clustering analysis. Since satellite galaxies tend to reside in massive haloes with a greater halo bias, the inclusion boosts the clustering power. In addition, virial motions of the satellite galaxies cause a significant suppression of the clustering power due to non-linear RSD effects. We develop a novel method to recover the redshift-space power spectrum of haloes from the observed galaxy distribution by minimizing the contamination of satellite galaxies. The cylinder-grouping method (CGM) we study effectively excludes satellite galaxies from a galaxy sample. However, we find that this technique produces apparent anisotropies in the reconstructed halo distribution over all the scales which mimic RSD. On small scales, the apparent anisotropic clustering is caused by exclusion of haloes within the anisotropic cylinder used by the CGM. On large scales, the misidentification of different haloes in the large-scale structures, aligned along the line of sight, into the same CGM group causes the apparent anisotropic clustering via their cross-correlation with the CGM haloes. We construct an empirical model for the CGM halo power spectrum, which includes correction terms derived using the CGM window function at small scales as well as the linear matter power spectrum multiplied by a simple anisotropic function at large scales. We apply this model to a mock galaxy catalogue at z = 0.5, designed to resemble Sloan Digital Sky Survey-III Baryon Oscillation Spectroscopic Survey (BOSS) CMASS galaxies, and find that our model can predict both the monopole and quadrupole power spectra of the host haloes up to k < 0.5 h Mpc^{-1} to within 5 per cent.
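The cylinder-grouping idea, removing likely satellites that fall inside a line-of-sight cylinder around a brighter galaxy, can be sketched as follows (a deliberately simplified toy version; the actual CGM, its cylinder dimensions, and its grouping rules differ):

```python
import numpy as np

def cylinder_group(x, y, z_los, lum, r_perp=1.0, r_par=10.0):
    """Mark galaxies as satellites if a more luminous galaxy lies within a
    cylinder of transverse radius r_perp and half-length r_par along the
    line of sight. Returns a boolean mask of surviving 'central' galaxies.
    """
    n = len(x)
    central = np.ones(n, dtype=bool)
    order = np.argsort(-np.asarray(lum))      # brightest first
    for i in order:
        if not central[i]:
            continue
        d_perp = np.hypot(x - x[i], y - y[i])
        d_par = np.abs(z_los - z_los[i])
        inside = (d_perp < r_perp) & (d_par < r_par)
        inside[i] = False                     # never remove the galaxy itself
        central[inside & (lum < lum[i])] = False
    return central

# Toy catalogue: one bright central with a nearby fainter satellite,
# plus an isolated galaxy far away
x = np.array([0.0, 0.2, 5.0]); y = np.array([0.0, 0.1, 5.0])
z = np.array([0.0, 3.0, 0.0]); lum = np.array([10.0, 1.0, 2.0])
mask = cylinder_group(x, y, z, lum)
print(mask)   # → [ True False  True]
```

Because the cylinder is much longer along the line of sight than it is wide, the exclusion region is itself anisotropic, which is the origin of the apparent small-scale anisotropy the abstract analyzes.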
NASA Technical Reports Server (NTRS)
Yanai, M.; Esbensen, S.; Chu, J.
1972-01-01
The bulk properties of tropical cloud clusters, such as the vertical mass flux, the excess temperature and moisture, and the liquid water content of the clouds, are determined from a combination of the observed large-scale heat and moisture budgets over an area covering the cloud cluster, and a model of a cumulus ensemble which exchanges mass, heat, vapor and liquid water with the environment through entrainment and detrainment. The method also provides an understanding of how the environmental air is heated and moistened by the cumulus convection. An estimate of the average cloud cluster properties and the heat and moisture balance of the environment, obtained from 1956 Marshall Islands data, is presented.
Final Report: Enabling Exascale Hardware and Software Design through Scalable System Virtualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Patrick G.
2015-02-01
In this grant, we enhanced the Palacios virtual machine monitor to increase its scalability and suitability for addressing exascale system software design issues. This included a wide range of research on core Palacios features, large-scale system emulation, fault injection, performance monitoring, and VMM extensibility. This research resulted in a large number of high-impact publications in well-known venues, the support of a number of students, and the graduation of two Ph.D. students and one M.S. student. In addition, our enhanced version of the Palacios virtual machine monitor has been adopted as a core element of the Hobbes operating system under active DOE-funded research and development.
On the linearity of tracer bias around voids
NASA Astrophysics Data System (ADS)
Pollina, Giorgia; Hamaus, Nico; Dolag, Klaus; Weller, Jochen; Baldi, Marco; Moscardini, Lauro
2017-07-01
The large-scale structure of the Universe can be observed only via luminous tracers of the dark matter. However, the clustering statistics of tracers are biased and depend on various properties, such as their host-halo mass and assembly history. On very large scales, this tracer bias results in a constant offset in the clustering amplitude, known as linear bias. Towards smaller non-linear scales, this is no longer the case and tracer bias becomes a complicated function of scale and time. We focus on tracer bias centred on cosmic voids, i.e. depressions of the density field that spatially dominate the Universe. We consider three types of tracers: galaxies, galaxy clusters and active galactic nuclei, extracted from the hydrodynamical simulation Magneticum Pathfinder. In contrast to common clustering statistics that focus on auto-correlations of tracers, we find that void-tracer cross-correlations are successfully described by a linear bias relation. The tracer-density profile of voids can thus be related to their matter-density profile by a single number. We show that it coincides with the linear tracer bias extracted from the large-scale auto-correlation function and expectations from theory, if sufficiently large voids are considered. For smaller voids we observe a shift towards higher values. This has important consequences for cosmological parameter inference, as the problem of unknown tracer bias is alleviated up to a constant number. The smallest scales in existing data sets become accessible to simpler models, providing numerous modes of the density field that have been disregarded so far, but may help to further reduce statistical errors in constraining cosmology.
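The linear bias relation reported for void-tracer cross-correlations, delta_tracer(r) ≈ b · delta_matter(r) with a single constant b, can be illustrated with a least-squares fit on toy stacked profiles (the profile shape and the value b = 2 are invented for the example):

```python
import numpy as np

def fit_linear_bias(delta_tracer, delta_matter):
    """Best-fit constant b in delta_tracer ≈ b * delta_matter (least squares)."""
    dm = np.asarray(delta_matter)
    dt = np.asarray(delta_tracer)
    return float(dm @ dt / (dm @ dm))

# Toy stacked void profiles: tracer contrast is twice the matter contrast
r = np.linspace(0.2, 3.0, 15)                       # radius in units of R_void
delta_m = -np.exp(-r**2) + 0.05 * np.sin(3 * r)     # toy matter profile
delta_t = 2.0 * delta_m                             # linear bias b = 2
print(round(fit_linear_bias(delta_t, delta_m), 3))  # → 2.0
```

A single number relating the two profiles is exactly what the abstract means by the void tracer-density profile being tied to the matter-density profile by the linear bias.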
OpenWebGlobe 2: Visualization of Complex 3D-Geodata in the (Mobile) Web Browser
NASA Astrophysics Data System (ADS)
Christen, M.
2016-06-01
Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive data-processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with extremely high polygon counts and a vast number of textures, at interactive frame rates is still very challenging, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and for serving 2D and 3D map data to a large number of (mobile) web clients. This paper shows the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2", which displays 3D geodata on nearly every device.
Peer-to-peer Cooperative Scheduling Architecture for National Grid Infrastructure
NASA Astrophysics Data System (ADS)
Matyska, Ludek; Ruda, Miroslav; Toth, Simon
For some ten years, the Czech National Grid Infrastructure MetaCentrum has used a single central PBSPro installation to schedule jobs across the country. This centralized approach keeps full track of all the clusters, providing support for jobs spanning several sites, an implementation of the fair-share policy, and better overall control of the grid environment. Despite steady progress in stability and resilience to intermittent very short network failures, the growing number of sites and processors makes this architecture, with its single point of failure and scalability limits, obsolete. As a result, a new scheduling architecture is proposed that relies on higher autonomy of clusters. It is based on a peer-to-peer network of semi-independent schedulers for each site or even cluster. Each scheduler accepts jobs for the whole infrastructure, cooperating with other schedulers on the implementation of global policies like central job accounting, fair-share, or submission of jobs across several sites. The scheduling system is integrated with the Magrathea system to support scheduling of virtual clusters, including the setup of their internal network, again eventually spanning several sites. On the other hand, each scheduler is local to one of several clusters and is able to directly control and submit jobs to them even if the connection to other scheduling peers is lost. In parallel with the change of the overall architecture, the scheduling system itself is being replaced. Instead of PBSPro, chosen originally for its declared support of large-scale distributed environments, the new scheduling architecture is based on the open-source Torque system. The implementation and support for the most desired properties in PBSPro and Torque are discussed, and the necessary modifications to Torque to support the MetaCentrum scheduling architecture are presented as well.
Infrastructures for Distributed Computing: the case of BESIII
NASA Astrophysics Data System (ADS)
Pellegrino, J.
2018-05-01
BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating Tau-Charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based on DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types such as cluster, grid, cloud, and volunteer computing. About 15 sites of the BESIII Collaboration from all over the world have joined this distributed computing infrastructure, giving a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources of commercial clouds, computing capacity can scale accordingly to deal with burst demands. General computing models are addressed herewith, with particular focus on the BESIII infrastructure; new computing tools and upcoming infrastructures are also discussed.
Weak lensing calibration of mass bias in the REFLEX+BCS X-ray galaxy cluster catalogue
NASA Astrophysics Data System (ADS)
Simet, Melanie; Battaglia, Nicholas; Mandelbaum, Rachel; Seljak, Uroš
2017-04-01
The use of large, X-ray-selected galaxy cluster catalogues for cosmological analyses requires a thorough understanding of the X-ray mass estimates. Weak gravitational lensing is an ideal method to shed light on such issues, due to its insensitivity to the cluster dynamical state. We perform a weak lensing calibration of 166 galaxy clusters from the REFLEX and BCS cluster catalogue and compare our results to the X-ray masses based on scaled luminosities from that catalogue. To interpret the weak lensing signal in terms of cluster masses, we compare the lensing signal to simple theoretical Navarro-Frenk-White models and to simulated cluster lensing profiles, including complications such as cluster substructure, projected large-scale structure and Eddington bias. We find evidence of underestimation in the X-ray masses, as expected, with
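The simple theoretical Navarro-Frenk-White (NFW) model referenced above has a closed form that is easy to sketch. Below is the standard NFW density profile and its enclosed mass; the parameter values are arbitrary illustrative choices, not fits to the REFLEX+BCS clusters.

```python
import math

def nfw_density(r, rho_s, r_s):
    """NFW density profile: rho(r) = rho_s / ((r/r_s) * (1 + r/r_s)^2)."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def nfw_mass(r, rho_s, r_s):
    """Mass enclosed within r: M(<r) = 4*pi*rho_s*r_s^3 * (ln(1+x) - x/(1+x))."""
    x = r / r_s
    return 4.0 * math.pi * rho_s * r_s ** 3 * (math.log(1.0 + x) - x / (1.0 + x))

# Illustrative (hypothetical) parameters in arbitrary units:
rho0, rs = 1.0, 0.3
print(nfw_density(rs, rho0, rs))  # at r = r_s the density is rho_s / 4
```

At the scale radius the profile evaluates to exactly rho_s/4, a handy sanity check when fitting such models to lensing signals.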
Exploratory Item Classification Via Spectral Graph Clustering
Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang
2017-01-01
Large-scale assessments are supported by a large item pool. An important task in test development is to assign items into scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as the hierarchical clustering, K-means method, and latent-class analysis, often induce a high computational overhead and have difficulty handling missing data, especially in the presence of high-dimensional responses. In this article, the authors propose a spectral clustering algorithm for exploratory item cluster analysis. The method is computationally efficient, effective for data with missing or incomplete responses, easy to implement, and often outperforms traditional clustering algorithms in the context of high dimensionality. The spectral clustering algorithm is based on graph theory, a branch of mathematics that studies the properties of graphs. The algorithm first constructs a graph of items, characterizing the similarity structure among items. It then extracts item clusters based on the graphical structure, grouping similar items together. The proposed method is evaluated through simulations and an application to the revised Eysenck Personality Questionnaire. PMID:29033476
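The algorithm described above (build a similarity graph over items, then extract clusters from the graph structure) can be illustrated with a minimal spectral bipartition. The toy response matrix, the pairwise-complete agreement similarity, and the two-cluster sign split are illustrative assumptions, not the authors' exact method.

```python
import math

# Toy responses: rows = persons, columns = items; None marks a missing response.
R = [
    [1, 1, 0, 0],
    [1, 1, 0, None],
    [0, 0, 1, 1],
    [None, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]
n = 4  # number of items

def similarity(a, b):
    """Agreement rate over pairwise-complete observations (handles missing data)."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    return sum(1 for x, y in pairs if x == y) / len(pairs)

cols = [[row[j] for row in R] for j in range(n)]
W = [[similarity(cols[i], cols[j]) if i != j else 0.0 for j in range(n)]
     for i in range(n)]

# Unnormalized graph Laplacian L = D - W.
deg = [sum(row) for row in W]
L = [[(deg[i] if i == j else 0.0) - W[i][j] for j in range(n)] for i in range(n)]

# Approximate the Fiedler vector by power iteration on (c*I - L),
# repeatedly projecting out the trivial constant eigenvector.
c = 2.0 * max(deg) + 1.0
v = [float(i + 1) for i in range(n)]
for _ in range(200):
    v = [sum((c * (i == j) - L[i][j]) * v[j] for j in range(n)) for i in range(n)]
    m = sum(v) / n
    v = [x - m for x in v]
    s = math.sqrt(sum(x * x for x in v))
    v = [x / s for x in v]

clusters = [int(x > 0) for x in v]  # bipartition items by sign
print(clusters)
```

On this toy data, items 0-1 and items 2-3 land in different clusters, mirroring how the method groups similar items from the graph structure despite the missing responses.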
Can standard cosmological models explain the observed Abell cluster bulk flow?
NASA Technical Reports Server (NTRS)
Strauss, Michael A.; Cen, Renyue; Ostriker, Jeremiah P.; Lauer, Tod R.; Postman, Marc
1995-01-01
Lauer and Postman (LP) observed that all Abell clusters with redshifts less than 15,000 km/s appear to be participating in a bulk flow of 689 km/s with respect to the cosmic microwave background. We find this result difficult to reconcile with all popular models for large-scale structure formation that assume Gaussian initial conditions. This conclusion is based on Monte Carlo realizations of the LP data, drawn from large particle-mesh N-body simulations for six different models of the initial power spectrum (standard, tilted, and Omega(sub 0) = 0.3 cold dark matter, and two variants of the primordial baryon isocurvature model). We have taken special care to treat properly the longest-wavelength components of the power spectra. The simulations are sampled, 'observed,' and analyzed as identically as possible to the LP cluster sample. Large-scale bulk flows as measured from clusters in the simulations are in excellent agreement with those measured from the grid: the clusters do not exhibit any strong velocity bias on large scales. Bulk flows with amplitude as large as that reported by LP are not uncommon in the Monte Carlo data sets; the distribution of measured bulk flows before error bias subtraction is roughly Maxwellian, with a peak around 400 km/s. However, the chi-squared of the observed bulk flow, taking into account the anisotropy of the error ellipsoid, is much more difficult to match in the simulations. The models examined are ruled out at confidence levels between 94% and 98%.
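The Maxwellian shape of the Monte Carlo bulk-flow distribution follows from treating each Cartesian velocity component as an independent Gaussian. A minimal sketch, assuming a per-component sigma chosen so the Maxwellian peaks near the quoted 400 km/s (an illustrative back-of-envelope choice, not the paper's simulation pipeline):

```python
import math
import random

random.seed(42)

# If each component is Gaussian with standard deviation sigma, the flow
# amplitude |v| is Maxwellian with mode sigma*sqrt(2); a peak near 400 km/s
# therefore implies sigma ~ 400/sqrt(2) per component (illustrative assumption).
sigma = 400.0 / math.sqrt(2.0)

amplitudes = []
for _ in range(100_000):
    vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
    amplitudes.append(math.sqrt(vx * vx + vy * vy + vz * vz))

frac_exceeding = sum(1 for a in amplitudes if a >= 689.0) / len(amplitudes)
print(f"fraction of realizations with |v| >= 689 km/s: {frac_exceeding:.3f}")
```

Under this toy assumption a non-negligible fraction of realizations exceeds the LP amplitude, consistent with the paper's point that the amplitude alone is not uncommon; it is the chi-squared against the anisotropic error ellipsoid that constrains the models.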
Nagaoka, Tomoaki; Watanabe, Soichi
2012-01-01
Electromagnetic simulation with an anatomically realistic computational human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the computational human model, we adapted three-dimensional FDTD code to a multi-GPU cluster environment with Compute Unified Device Architecture (CUDA) and the Message Passing Interface (MPI). Our multi-GPU cluster system consists of three nodes, with seven GPU boards (NVIDIA Tesla C2070) mounted on each node. We examined the performance of the FDTD calculation in this multi-GPU cluster environment. We confirmed that the FDTD calculation on the multi-GPU cluster is faster than on a single multi-GPU workstation, and we also found that the GPU cluster system calculates faster than a vector supercomputer. In addition, our GPU cluster system allowed us to perform large-scale FDTD calculations because we were able to use over 100 GB of GPU memory.
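The leapfrog E/H update that such FDTD codes parallelize across GPUs can be sketched in one dimension. The grid size, Courant number, and soft Gaussian source below are arbitrary illustrative choices; the paper's code is three-dimensional CUDA/MPI, not this sketch.

```python
import math

# Minimal 1-D FDTD (Yee scheme) in normalized units, Courant number S = 0.5.
nx, nt = 200, 300
ez = [0.0] * nx  # electric field
hy = [0.0] * nx  # magnetic field

for t in range(nt):
    # Update H from the spatial difference (curl) of E.
    for i in range(nx - 1):
        hy[i] += 0.5 * (ez[i + 1] - ez[i])
    # Update E from the spatial difference (curl) of H.
    for i in range(1, nx):
        ez[i] += 0.5 * (hy[i] - hy[i - 1])
    # Soft Gaussian pulse source at the middle of the grid.
    ez[nx // 2] += math.exp(-((t - 30) ** 2) / 100.0)

peak = max(abs(e) for e in ez)
print(f"peak |Ez| after {nt} steps: {peak:.3f}")
```

In 3-D, each time step updates six field components over the whole volume, which is why the work decomposes naturally into per-GPU subdomains exchanging only boundary slices via MPI.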
Costanzo, Michelle E; Leaman, Suzanne; Jovanovic, Tanja; Norrholm, Seth D; Rizzo, Albert A; Taylor, Patricia; Roy, Michael J
2014-01-01
Subthreshold posttraumatic stress disorder (PTSD) has garnered recent attention because of the significant distress and functional impairment associated with the symptoms as well as the increased risk of progression to full PTSD. However, the clinical presentation of subthreshold PTSD can vary widely and therefore is not clearly defined, nor is there an evidence-based treatment approach. Thus, we aim to further the understanding of subthreshold PTSD symptoms by reporting the use of a virtual combat environment in eliciting distinctive psychophysiological responses associated with PTSD symptoms in a sample of subthreshold recently deployed US service members. Heart rate, skin conductance, electromyography (startle), respiratory rate, and blood pressure were monitored during three unique combat-related virtual reality scenarios as a novel procedure to assess subthreshold symptoms in a sample of 78 service members. The Clinician-Administered PTSD Scale was administered, and linear regression analyses were used to investigate the relationship between symptom clusters and physiological variables. Among the range of psychophysiological measures that were studied, regression analysis revealed heart rate as most strongly associated with Clinician-Administered PTSD Scale-based measures of hyperarousal (R = 0.11, p = .035), reexperiencing (R = 0.24, p = .001), and global PTSD symptoms (R = 0.17, p = .003). Our findings support the use of a virtual reality environment in eliciting physiological responses associated with subthreshold PTSD symptoms.
Using Mosix for Wide-Area Computational Resources
Maddox, Brian G.
2004-01-01
One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.
NASA Astrophysics Data System (ADS)
Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha
2018-06-01
Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a previous stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
Large-Scale Cooperative Task Distribution on Peer-to-Peer Networks
2012-01-01
Large-scale cooperative task distribution on peer-to-peer networks. …of agents, and each agent attempts to form a coalition with its most profitable partner. The second algorithm builds upon the Shapley formula [37]…clusters at the second layer. These Category Layer clusters each represent a single resource, and agents join one or more clusters based on their
2013-01-01
Background There is a rising public and political demand for prospective cancer cluster monitoring. But there is little empirical evidence on the performance of established cluster detection tests under conditions of small and heterogeneous sample sizes and varying spatial scales, such as are the case for most existing population-based cancer registries. Therefore, this simulation study aims to evaluate different cluster detection methods, implemented in the open source environment R, in their ability to identify clusters of lung cancer using real-life data from an epidemiological cancer registry in Germany. Methods Risk surfaces were constructed with two different spatial cluster types, representing a relative risk of RR = 2.0 or of RR = 4.0, in relation to the overall background incidence of lung cancer, separately for men and women. Lung cancer cases were sampled from this risk surface as geocodes using an inhomogeneous Poisson process. The realisations of the cancer cases were analysed within small spatial (census tracts, N = 1983) and within aggregated large spatial scales (communities, N = 78). Subsequently, they were submitted to the cluster detection methods. The test accuracy for cluster location was determined in terms of detection rates (DR), false-positive (FP) rates and positive predictive values. The Bayesian smoothing models were evaluated using ROC curves. Results With moderate risk increase (RR = 2.0), local cluster tests showed better DR (for both spatial aggregation scales > 0.90) and lower FP rates (both < 0.05) than the Bayesian smoothing methods. When the cluster RR was raised four-fold, the local cluster tests showed better DR with lower FPs only for the small spatial scale. At a large spatial scale, the Bayesian smoothing methods, especially those implementing a spatial neighbourhood, showed a substantially lower FP rate than the cluster tests. However, the risk increases at this scale were mostly diluted by data aggregation.
Conclusion High-resolution spatial scales seem more appropriate as the data basis for cancer cluster testing and monitoring than the commonly used aggregated scales. We suggest the development of a two-stage approach that combines methods with high detection rates as a first-line screening with methods of higher predictive ability at the second stage. PMID:24314148
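The accuracy measures used above (detection rate, false-positive rate, positive predictive value) reduce to simple ratios of confusion counts. A minimal sketch with made-up counts for illustration:

```python
def cluster_test_metrics(tp, fp, fn, tn):
    """Score a cluster detection test from confusion counts."""
    dr = tp / (tp + fn)    # detection rate (sensitivity)
    fpr = fp / (fp + tn)   # false-positive rate
    ppv = tp / (tp + fp)   # positive predictive value
    return dr, fpr, ppv

# Hypothetical counts: 18 true clusters found, 2 missed,
# 3 false alarms, 77 correctly negative locations.
dr, fpr, ppv = cluster_test_metrics(tp=18, fp=3, fn=2, tn=77)
print(dr, fpr, ppv)
```

The two-stage approach suggested above corresponds to maximizing DR at stage one (tolerating a higher FP rate) and then filtering the candidates with a method chosen for high PPV.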
Muñoz-Ramírez, Zilia Y.; Mendez-Tenorio, Alfonso; Kato, Ikuko; Bravo, Maria M.; Rizzato, Cosmeri; Thorell, Kaisa; Torres, Roberto; Aviles-Jimenez, Francisco; Camorlinga, Margarita; Canzian, Federico; Torres, Javier
2017-01-01
Helicobacter pylori (HP) genetics may determine its clinical outcomes. Despite the high prevalence of HP infection in Latin America (LA), there have been no phylogenetic studies in the region. We aimed to understand the structure of HP populations in LA mestizo individuals, where gastric cancer incidence remains high. The genomes of 107 HP strains from Mexico, Nicaragua and Colombia were analyzed together with 59 publicly available worldwide genomes. To study bacterial relationships at the whole-genome level, we propose a virtual hybridization technique using thousands of high-entropy 13 bp DNA probes to generate fingerprints. Phylogenetic virtual genome fingerprinting (VGF) was compared with Multi Locus Sequence Analysis (MLST) and with phylogenetic analyses of cagPAI virulence island sequences. With MLST, some Nicaraguan and Mexican strains clustered close to African isolates, whereas European isolates were spread without clustering and intermingled with LA isolates. VGF analysis resulted in increased resolution of populations, separating European from LA strains. Furthermore, clusters with exclusively Colombian, Mexican, or Nicaraguan strains were observed, where the Colombian cluster separated from Europe, Asia, and Africa, while Nicaraguan and Mexican clades grouped close to Africa. In addition, a mixed large LA cluster including Mexican, Colombian, Nicaraguan, Peruvian, and Salvadorian strains was observed; all LA clusters separated from the Amerind clade. With cagPAI sequence analyses, LA clades clearly separated from Europe, Asia and Amerind, and Colombian strains formed a single cluster. A NeighborNet analysis suggested frequent and recent recombination events, particularly among LA strains. The results suggest that in the New World, H. pylori has evolved to fit mestizo LA populations within just 500 years of the Spanish colonization. This co-adaptation may account for regional variability in gastric cancer risk. PMID:28293542
Robustness of serial clustering of extra-tropical cyclones to the choice of tracking method
NASA Astrophysics Data System (ADS)
Pinto, Joaquim G.; Ulbrich, Sven; Karremann, Melanie K.; Stephenson, David B.; Economou, Theodoros; Shaffrey, Len C.
2016-04-01
Cyclone families are a frequent synoptic weather feature in the Euro-Atlantic area in winter. Given appropriate large-scale conditions, the occurrence of such series (clusters) of storms may lead to large socio-economic impacts and cumulative losses. Recent studies analyzing Reanalysis data using single cyclone tracking methods have shown that serial clustering of cyclones occurs on both flanks and downstream regions of the North Atlantic storm track. This study explores the sensitivity of serial clustering to the choice of tracking method. With this aim, the IMILAST cyclone track database based on ERA-interim data is analysed. Clustering is estimated by the dispersion (ratio of variance to mean) of winter (DJF) cyclone passages near each grid point over the Euro-Atlantic area. Results indicate that while the general pattern of clustering is identified for all methods, there are considerable differences in detail. This can primarily be attributed to the differences in the variance of cyclone counts between the methods, which range up to one order of magnitude. Nevertheless, clustering over the Eastern North Atlantic and Western Europe can be identified for all methods and can thus be generally considered as a robust feature. The statistical links between large-scale patterns like the NAO and clustering are obtained for all methods, though with different magnitudes. We conclude that the occurrence of cyclone clustering over the Eastern North Atlantic and Western Europe is largely independent of the choice of tracking method and hence of the definition of a cyclone.
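The dispersion statistic described above, the ratio of the variance to the mean of seasonal cyclone counts at a grid point, is straightforward to compute; values above 1 indicate overdispersion (serial clustering) relative to a Poisson process. The counts below are invented for illustration:

```python
def dispersion(counts):
    """Ratio of variance to mean of a sequence of counts (psi > 1 => clustered)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

# Hypothetical DJF cyclone passages at one grid point over eight winters.
winter_counts = [2, 7, 1, 9, 0, 8, 1, 6]
psi = dispersion(winter_counts)
print(f"dispersion = {psi:.2f}")
```

Because the statistic is normalized by the mean, it separates genuine clustering from mere differences in how many cyclones each tracking method detects, which is exactly why the paper uses it to compare methods whose raw counts differ by up to an order of magnitude.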
bigSCale: an analytical framework for big-scale single-cell data.
Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger
2018-06-01
Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin (Reln)-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets.
Distributions of Gas and Galaxies from Galaxy Clusters to Larger Scales
NASA Astrophysics Data System (ADS)
Patej, Anna
2017-01-01
We address the distributions of gas and galaxies on three scales: the outskirts of galaxy clusters, the clustering of galaxies on large scales, and the extremes of the galaxy distribution. In the outskirts of galaxy clusters, long-standing analytical models of structure formation and recent simulations predict the existence of density jumps in the gas and dark matter profiles. We use these features to derive models for the gas density profile, obtaining a simple fiducial model that is in agreement with both observations of cluster interiors and simulations of the outskirts. We next consider the galaxy density profiles of clusters; under the assumption that the galaxies in cluster outskirts follow similar collisionless dynamics as the dark matter, their distribution should show a steep jump as well. We examine the profiles of a low-redshift sample of clusters and groups, finding evidence for the jump in some of these clusters. Moving to larger scales where massive galaxies of different types are expected to trace the same large-scale structure, we present a test of this prediction by measuring the clustering of red and blue galaxies at z ≈ 0.6, finding low stochasticity between the two populations. These results address a key source of systematic uncertainty - understanding how target populations of galaxies trace large-scale structure - in galaxy redshift surveys. Such surveys use baryon acoustic oscillations (BAO) as a cosmological probe, but are limited by the expense of obtaining sufficiently dense spectroscopy. With the intention of leveraging upcoming deep imaging data, we develop a new method of detecting the BAO in sparse spectroscopic samples via cross-correlation with a dense photometric catalog. This method will permit the extension of BAO measurements to higher redshifts than possible with the existing spectroscopy alone. Lastly, we connect galaxies near and far: the Local Group dwarfs and the high redshift galaxies observed by Hubble and Spitzer.
We examine how the local dwarfs may have appeared in the past and compare their properties to the detection limits of the upcoming James Webb Space Telescope (JWST), finding that JWST should be able to detect galaxies similar to the progenitors of a few of the brightest of the local galaxies, revealing a hitherto unobserved population of galaxies at high redshifts.
HRLSim: a high performance spiking neural network simulator for GPGPU clusters.
Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan
2014-02-01
Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.
Unpacking Frames of Reference to Inform the Design of Virtual World Learning in Higher Education
ERIC Educational Resources Information Center
Wimpenny, Katherine; Savin-Baden, Maggi; Mawer, Matt; Steils, Nicole; Tombs, Gemma
2012-01-01
In the changing context of globalised higher education, a series of pedagogical shifts have occurred, and with them, a number of interactive learning approaches have emerged. This article reports on findings taken from a large-scale study that explored the socio-political impact of virtual world learning on higher education in the UK, specifically…
Siemerkus, Jakob; Irle, Eva; Schmidt-Samoa, Carsten; Dechent, Peter; Weniger, Godehard
2012-01-01
Psychotic symptoms in schizophrenia are related to disturbed self-recognition and to disturbed experience of agency. Possibly, these impairments contribute to first-person large-scale egocentric learning deficits. Sixteen inpatients with schizophrenia and 16 matched healthy comparison subjects underwent functional magnetic resonance imaging (fMRI) while finding their way in a virtual maze. The virtual maze presented a first-person view, lacked any topographical landmarks and afforded egocentric navigation strategies. The participants with schizophrenia showed impaired performance in the virtual maze when compared with controls, and showed a similar but weaker pattern of activity changes during egocentric learning when compared with controls. Especially the activity of task-relevant brain regions (precuneus and posterior cingulate and retrosplenial cortex) differed from that of controls across all trials of the task. Activity increase within the right-sided precuneus was related to worse virtual maze performance and to stronger positive symptoms in participants with schizophrenia. We suggest that psychotic symptoms in schizophrenia are related to aberrant neural activity within the precuneus. Possibly, first-person large-scale egocentric navigation and learning designs may be a feasible tool for the assessment and treatment of cognitive deficits related to self-recognition in patients with schizophrenia. PMID:24179748
Evaluating tests of virialization and substructure using galaxy clusters in the ORELSE survey
NASA Astrophysics Data System (ADS)
Rumbaugh, N.; Lemaux, B. C.; Tomczak, A. R.; Shen, L.; Pelliccia, D.; Lubin, L. M.; Kocevski, D. D.; Wu, P.-F.; Gal, R. R.; Mei, S.; Fassnacht, C. D.; Squires, G. K.
2018-07-01
We evaluated the effectiveness of different indicators of cluster virialization using 12 large-scale structures in the Observations of Redshift Evolution in Large-Scale Environments survey spanning from 0.7
Weak lensing by galaxy troughs in DES Science Verification data
Gruen, D.; Friedrich, O.; Amara, A.; ...
2015-11-29
In this study, we measure the weak lensing shear around galaxy troughs, i.e. the radial alignment of background galaxies relative to underdensities in projections of the foreground galaxy field over a wide range of redshift in Science Verification data from the Dark Energy Survey. Our detection of the shear signal is highly significant (10σ–15σ for the smallest angular scales) for troughs with the redshift range z ∈ [0.2, 0.5] of the projected galaxy field and angular diameters of 10 arcmin to 1°. These measurements probe the connection between the galaxy, matter density, and convergence fields. By assuming galaxies are biased tracers of the matter density with Poissonian noise, we find agreement of our measurements with predictions in a fiducial Λ cold dark matter model. The prediction for the lensing signal on large trough scales is virtually independent of the details of the underlying model for the connection of galaxies and matter. Our comparison of the shear around troughs with that around cylinders with large galaxy counts is consistent with a symmetry between galaxy and matter over- and underdensities. In addition, we measure the two-point angular correlation of troughs with galaxies which, in contrast to the lensing signal, is sensitive to galaxy bias on all scales. The lensing signal of troughs and their clustering with galaxies is therefore a promising probe of the statistical properties of matter underdensities and their connection to the galaxy field.
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent Base Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models in this paper use computational algorithms or procedures implemented in Matlab to simulate agent-based behavior; the simulations are run on computing clusters, which provide the high-performance parallel capacity needed to execute the programs. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
Scalable clustering algorithms for continuous environmental flow cytometry.
Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill
2016-02-01
Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach to cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how available algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code is available at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop.
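The core of the Gaussian-mixture classification can be sketched with a one-dimensional, two-component EM loop. The paper scales this idea to massive multivariate data with Hadoop; the synthetic data and initialization below are illustrative assumptions, not the SeaFlow pipeline.

```python
import math

def norm_pdf(x, mu, var):
    """Gaussian density N(x; mu, var)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

data = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.8, 5.1]  # two obvious populations
mu, var, w = [0.0, 6.0], [1.0, 1.0], [0.5, 0.5]  # initial guesses

for _ in range(50):
    # E-step: responsibility of each component for each point.
    resp = []
    for x in data:
        p = [w[k] * norm_pdf(x, mu[k], var[k]) for k in range(2)]
        s = sum(p)
        resp.append([pk / s for pk in p])
    # M-step: re-estimate weights, means, and variances.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
        var[k] = max(var[k], 1e-6)  # guard against variance collapse

# Classify each point by its most responsible component.
labels = [max(range(2), key=lambda k: r[k]) for r in resp]
print(sorted(mu), labels)
```

Each E/M pass touches every observation independently, which is what makes the computation embarrassingly parallel and a natural fit for a MapReduce framework like Hadoop.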
Impact of SZ cluster residuals in CMB maps and CMB-LSS cross-correlations
NASA Astrophysics Data System (ADS)
Chen, T.; Remazeilles, M.; Dickinson, C.
2018-06-01
Residual foreground contamination in cosmic microwave background (CMB) maps, such as the residual contamination from thermal Sunyaev-Zeldovich (SZ) effect in the direction of galaxy clusters, can bias the cross-correlation measurements between CMB and large-scale structure optical surveys. It is thus essential to quantify those residuals and, if possible, to null out SZ cluster residuals in CMB maps. We quantify for the first time the amount of SZ cluster contamination in the released Planck 2015 CMB maps through (i) the stacking of CMB maps in the direction of the clusters, and (ii) the computation of cross-correlation power spectra between CMB maps and the SDSS-IV large-scale structure data. Our cross-power spectrum analysis yields a 30σ detection at the cluster scale (ℓ = 1500-2500) and a 39σ detection on larger scales (ℓ = 500-1500) due to clustering of SZ clusters, giving an overall 54σ detection of SZ cluster residuals in the Planck CMB maps. The Planck 2015 NILC CMB map is shown to have 44 ± 4% of thermal SZ foreground emission left in it. Using the 'Constrained ILC' component separation technique, we construct an alternative Planck CMB map, the 2D-ILC map, which is shown to have negligible SZ contamination, at the cost of being slightly more contaminated by Galactic foregrounds and noise. We also discuss the impact of the SZ residuals in CMB maps on the measurement of the ISW effect, which is shown to be negligible based on our analysis.
Spatial Configuration of Drought Disturbance and Forest Gap Creation across Environmental Gradients
Andrew, Margaret E.; Ruthrof, Katinka X.; Matusick, George; Hardy, Giles E. St. J.
2016-01-01
Climate change is increasing the risk of drought to forested ecosystems. Although drought impacts are often anecdotally noted to occur in discrete patches of high canopy mortality, the landscape effects of drought disturbances have received virtually no study. This study characterized the landscape configuration of drought impact patches and investigated the relationships between patch characteristics, as indicators of drought impact intensity, and environmental gradients related to water availability to determine factors influencing drought vulnerability. Drought impact patches were delineated from aerial surveys following an extreme drought in 2011 in southwestern Australia, which led to patchy canopy dieback of the Northern Jarrah Forest, a Mediterranean forest ecosystem. On average, forest gaps produced by drought-induced dieback were moderate in size (6.6 ± 9.7 ha, max = 85.7 ha), compact in shape, and relatively isolated from each other at the scale of several kilometers. However, there was considerable spatial variation in the size, shape, and clustering of forest gaps. Drought impact patches were larger and more densely clustered in xeric areas, with significant relationships observed with topographic wetness index, meteorological variables, and stand height. Drought impact patch clustering was more strongly associated with the environmental factors assessed (R2 = 0.32) than was patch size (R2 = 0.21); variation in patch shape remained largely unexplained (R2 = 0.02). There is evidence that the xeric areas with more intense drought impacts are ‘chronic disturbance patches’ susceptible to recurrent drought disturbance. The spatial configuration of drought disturbances is likely to influence ecological processes including forest recovery and interacting disturbances such as fire. Regime shifts to an alternate, non-forested ecosystem may occur preferentially in areas with large or clustered drought impact patches. 
Improved understanding of drought impacts and their patterning in space and time will expand our knowledge of forest ecosystems and landscape processes, informing management of these dynamic systems in an uncertain future. PMID:27275744
Discovery of a large-scale clumpy structure of the Lynx supercluster at z ˜ 1.27
NASA Astrophysics Data System (ADS)
Nakata, Fumiaki; Kodama, Tadayuki; Shimasaku, Kazuhiro; Doi, Mamoru; Furusawa, Hisanori; Hamabe, Masaru; Kimura, Masahiko; Komiyama, Yutaka; Miyazaki, Satoshi; Okamura, Sadanori; Ouchi, Masami; Sekiguchi, Maki; Yagi, Masafumi; Yasuda, Naoki
2004-07-01
We report the discovery of a probable large-scale structure composed of many galaxy clumps around the known twin clusters at z=1.26 and z=1.27 in the Lynx region. Our analysis is based on deep, panoramic, and multi-colour imaging with the Suprime-Cam on the 8.2 m Subaru telescope. We apply a photometric redshift technique to extract plausible cluster members at z˜1.27 down to ˜M*+2.5. From the 2-D distribution of these photometrically selected galaxies, we identify seven new candidate galaxy groups or clusters where the surface density of red galaxies is significantly high (>5σ), in addition to the two known clusters, together comprising the largest and most distant supercluster identified to date.
NASA Astrophysics Data System (ADS)
Eftekharzadeh, S.; Myers, A. D.; Hennawi, J. F.; Djorgovski, S. G.; Richards, G. T.; Mahabal, A. A.; Graham, M. J.
2017-06-01
We present the most precise estimate to date of the clustering of quasars on very small scales, based on a sample of 47 binary quasars with magnitudes of g < 20.85 and proper transverse separations of ˜25 h⁻¹ kpc. Our sample of binary quasars, which is about six times larger than any previous spectroscopically confirmed sample on these scales, is targeted using a kernel density estimation (KDE) technique applied to Sloan Digital Sky Survey (SDSS) imaging over most of the SDSS area. Our sample is 'complete' in that all of the KDE target pairs with 17.0 ≲ R ≲ 36.2 h⁻¹ kpc in our area of interest have been spectroscopically confirmed from a combination of previous surveys and our own long-slit observational campaign. We catalogue 230 candidate quasar pairs with angular separations of <8 arcsec, from which our binary quasars were identified. We determine the projected correlation function of quasars (W̄_p) in four bins of proper transverse scale over the range 17.0 ≲ R ≲ 36.2 h⁻¹ kpc. The implied small-scale quasar clustering amplitude from the projected correlation function, integrated across our entire redshift range, is A = 24.1 ± 3.6 at ˜26.6 h⁻¹ kpc. Our sample is the first spectroscopically confirmed sample of quasar pairs that is sufficiently large to study how quasar clustering evolves with redshift at ˜25 h⁻¹ kpc. We find that empirical descriptions of how quasar clustering evolves with redshift at ˜25 h⁻¹ Mpc also adequately describe the evolution of quasar clustering at ˜25 h⁻¹ kpc.
Spatial organization and drivers of the virtual water trade: a community-structure analysis
NASA Astrophysics Data System (ADS)
D'Odorico, Paolo; Carr, Joel; Laio, Francesco; Ridolfi, Luca
2012-09-01
The trade of agricultural commodities can be associated with a virtual transfer of the local freshwater resources used for the production of these goods. Thus, trade of food products virtually transfers large amounts of water from areas of food production to distant regions of consumption, a process termed the 'globalization of water'. We consider the (time-varying) community structure of the virtual water network for the years 1986-2008. The communities are groups of countries with dense internal connections, while the connections are sparser among different communities. Between 1986 and 2008, the ratio between virtual water flows within communities and the total global trade of virtual water continuously increased, indicating the existence of well-defined clusters of virtual water transfers. In some cases (e.g. Central and North America and Europe in recent years) the virtual water communities correspond to geographically coherent regions, suggesting an ongoing process of regionalization of water resources. However, most communities also include countries located on different 'sides' of the world. As such, geographic proximity only partly explains the community structure of virtual water trade. Similarly, the global distribution of people and wealth, whose effect on the virtual water trade is expressed through simple 'gravity models', is unable to explain the strength of virtual water communities observed in the past few decades. A gravity model based on the availability of and demand for virtual water in different countries has higher explanatory power, but the drivers of the virtual water fluxes are yet to be adequately identified.
Garyfallidis, Eleftherios; Côté, Marc-Alexandre; Rheault, Francois; Sidhu, Jasmeen; Hau, Janice; Petit, Laurent; Fortin, David; Cunanne, Stephen; Descoteaux, Maxime
2018-04-15
Virtual dissection of diffusion MRI tractograms is cumbersome and requires extensive knowledge of white matter anatomy. It often depends on several inclusion and exclusion regions-of-interest, which makes the process very hard to reproduce across experts. Automated tools that can extract white matter bundles for tract-based studies of large numbers of people are therefore of great interest for neuroscience and neurosurgical planning. The purpose of our proposed method, named RecoBundles, is to segment white matter bundles and make virtual dissection easier to perform. This can help explore large tractograms from multiple persons directly in their native space. RecoBundles leverages the latest state-of-the-art streamline-based registration and clustering to recognize and extract bundles using prior bundle models, which serve as shape priors for detecting similar streamlines and bundles in tractograms. RecoBundles is 100% streamline-based, scales efficiently to millions of streamlines and, most importantly, is robust and adaptive to incomplete data and bundles with missing components. It is also robust to pathological brains with tumors and deformations. We evaluated our results using multiple bundles and showed that RecoBundles is in good agreement with neuroanatomical experts and generally produced denser bundles. Across all the different experiments reported in this paper, RecoBundles was able to identify the core parts of the bundles, independently of tractography type (deterministic or probabilistic) or size. Thus, RecoBundles can be a valuable method for exploring tractograms and facilitating tractometry studies.
Temporal Clustering of Regional-Scale Extreme Precipitation Events in Southern Switzerland
NASA Astrophysics Data System (ADS)
Barton, Yannick; Giannakaki, Paraskevi; Von Waldow, Harald; Chevalier, Clément; Pfhal, Stephan; Martius, Olivia
2017-04-01
Temporal clustering of extreme precipitation events on subseasonal time scales is a form of compound extremes and is of crucial importance for the formation of large-scale flood events. Here, the temporal clustering of regional-scale extreme precipitation events in southern Switzerland is studied. These precipitation events are relevant for the flooding of lakes in southern Switzerland and northern Italy. This research determines whether temporal clustering is present and then identifies the dynamics that are responsible for the clustering. An observation-based gridded precipitation dataset of Swiss daily rainfall sums and ECMWF reanalysis datasets are used. To analyze the clustering in the precipitation time series, a modified version of Ripley's K function is used. It determines the average number of extreme events in a time period, characterizing temporal clustering on subseasonal time scales, and is used to assess the statistical significance of the clustering. Significant clustering of regional-scale precipitation extremes is found on subseasonal time scales during the fall season. Four high-impact clustering episodes are then selected and the dynamics responsible for the clustering are examined. During the four clustering episodes, all heavy precipitation events were associated with an upper-level breaking Rossby wave over western Europe, and in most cases strong diabatic processes upstream over the Atlantic played a role in the amplification of these breaking waves. Atmospheric blocking downstream over eastern Europe supported this wave breaking during two of the clustering episodes. During one of the clustering periods, several extratropical transitions of tropical cyclones in the Atlantic contributed to the formation of high-amplitude ridges over the Atlantic basin and downstream wave breaking. During another event, blocking over Alaska assisted the phase locking of the Rossby waves downstream over the Atlantic.
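A one-dimensional Ripley's K statistic of the kind described above can be sketched in a few lines. This is a toy illustration, not the paper's modified estimator: it simply averages, over all events, the number of other events falling within a given time window, so that clustered event times score higher than uniformly scattered ones.

```python
import numpy as np

def ripley_k_1d(event_days, window):
    """Average number of additional extreme events within +/- `window`
    days of each event (a simple 1-D analogue of Ripley's K function)."""
    events = np.asarray(event_days, dtype=float)
    counts = [np.sum(np.abs(events - t) <= window) - 1 for t in events]
    return float(np.mean(counts))

# Under a homogeneous Poisson process the expected count is about
# rate * 2 * window; an excess over that indicates temporal clustering.
rng = np.random.default_rng(1)
clustered = np.concatenate([rng.uniform(0, 10, 5),      # one burst
                            rng.uniform(300, 310, 5)])  # a second burst
uniform = rng.uniform(0, 365, 10)                       # no clustering
```

Significance testing, as in the abstract, would compare the observed statistic against the distribution obtained from many such randomly scattered (Poisson) surrogate series.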
MaRaCluster: A Fragment Rarity Metric for Clustering Fragment Spectra in Shotgun Proteomics.
The, Matthew; Käll, Lukas
2016-03-04
Shotgun proteomics experiments generate large amounts of fragment spectra as primary data, normally with high redundancy between and within experiments. Here, we have devised a clustering technique to identify fragment spectra stemming from the same species of peptide. This is a powerful alternative method to traditional search engines for analyzing spectra, specifically useful for larger scale mass spectrometry studies. As an aid in this process, we propose a distance calculation relying on the rarity of experimental fragment peaks, following the intuition that peaks shared by only a few spectra offer more evidence than peaks shared by a large number of spectra. We used this distance calculation and a complete-linkage scheme to cluster data from a recent large-scale mass spectrometry-based study. The clusterings produced by our method have up to 40% more identified peptides for their consensus spectra compared to those produced by the previous state-of-the-art method. We see that our method would advance the construction of spectral libraries as well as serve as a tool for mining large sets of fragment spectra. The source code and Ubuntu binary packages are available at https://github.com/statisticalbiotechnology/maracluster (under an Apache 2.0 license).
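The rarity intuition above can be sketched concretely. This is a toy illustration, not MaRaCluster's actual scoring: spectra are reduced to sets of binned peaks, shared peaks contribute evidence weighted by -log of their frequency across the dataset, and SciPy's complete-linkage clustering stands in for the paper's clustering scheme.

```python
import numpy as np
from collections import Counter
from scipy.cluster.hierarchy import linkage, fcluster

# Toy spectra represented as sets of integer-binned fragment peaks.
spectra = [
    {100, 200, 555},   # peaks 100 and 200 occur in every spectrum,
    {100, 200, 555},   # so they carry no evidence; 555 and 300 are
    {100, 200, 300},   # rare, each shared by only two spectra
    {100, 200, 300},
]

# Peak rarity: peaks shared by few spectra offer more evidence.
peak_counts = Counter(p for s in spectra for p in s)
n = len(spectra)

def distance(a, b):
    """Smaller when two spectra share rare peaks (hypothetical scoring)."""
    evidence = sum(-np.log(peak_counts[p] / n) for p in a & b)
    return 1.0 / (1.0 + evidence)

# Condensed distance matrix, then complete-linkage clustering.
cond = [distance(spectra[i], spectra[j])
        for i in range(n) for j in range(i + 1, n)]
Z = linkage(cond, method='complete')
labels = fcluster(Z, t=2, criterion='maxclust')
```

Spectra sharing the rare peaks end up in the same cluster even though all four spectra share the common peaks 100 and 200.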
Adaptive Scaling of Cluster Boundaries for Large-Scale Social Media Data Clustering.
Meng, Lei; Tan, Ah-Hwee; Wunsch, Donald C
2016-12-01
The large scale and complex nature of social media data raises the need to scale clustering techniques to big data and make them capable of automatically identifying data clusters with few empirical settings. In this paper, we present our investigation and three algorithms based on the fuzzy adaptive resonance theory (Fuzzy ART) that have linear computational complexity, use a single parameter, i.e., the vigilance parameter to identify data clusters, and are robust to modest parameter settings. The contribution of this paper lies in two aspects. First, we theoretically demonstrate how complement coding, commonly known as a normalization method, changes the clustering mechanism of Fuzzy ART, and discover the vigilance region (VR) that essentially determines how a cluster in the Fuzzy ART system recognizes similar patterns in the feature space. The VR gives an intrinsic interpretation of the clustering mechanism and limitations of Fuzzy ART. Second, we introduce the idea of allowing different clusters in the Fuzzy ART system to have different vigilance levels in order to meet the diverse nature of the pattern distribution of social media data. To this end, we propose three vigilance adaptation methods, namely, the activation maximization (AM) rule, the confliction minimization (CM) rule, and the hybrid integration (HI) rule. With an initial vigilance value, the resulting clustering algorithms, namely, the AM-ART, CM-ART, and HI-ART, can automatically adapt the vigilance values of all clusters during the learning epochs in order to produce better cluster boundaries. Experiments on four social media data sets show that AM-ART, CM-ART, and HI-ART are more robust than Fuzzy ART to the initial vigilance value, and they usually achieve better or comparable performance and much faster speed than the state-of-the-art clustering algorithms that also do not require a predefined number of clusters.
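The mechanism the abstract builds on can be sketched as a minimal baseline Fuzzy ART with complement coding and a single global vigilance parameter ρ. This is an illustrative sketch, not the paper's AM/CM/HI variants (which would adapt ρ per cluster); the parameter values are assumptions.

```python
import numpy as np

def fuzzy_art(points, rho=0.7, alpha=0.001, beta=1.0):
    """Minimal Fuzzy ART sketch. `rho` is the single vigilance
    parameter; the AM/CM/HI rules would adapt it per cluster."""
    weights, labels = [], []
    for x in points:
        x = np.concatenate([x, 1.0 - x])          # complement coding
        # Category choice: try categories in order of activation.
        order = sorted(range(len(weights)),
                       key=lambda j: -np.minimum(x, weights[j]).sum()
                                     / (alpha + weights[j].sum()))
        for j in order:
            match = np.minimum(x, weights[j]).sum() / x.sum()
            if match >= rho:                      # vigilance test
                weights[j] = (beta * np.minimum(x, weights[j])
                              + (1 - beta) * weights[j])
                labels.append(j)
                break
        else:                                     # no category matched:
            weights.append(x.copy())              # create a new cluster
            labels.append(len(weights) - 1)
    return labels

pts = np.array([[0.1, 0.1], [0.12, 0.1], [0.9, 0.9], [0.88, 0.9]])
labels = fuzzy_art(pts)
```

With these inputs the two tight groups each fall inside one category's vigilance region, yielding two clusters without a predefined cluster count.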
Grid-Enabled Quantitative Analysis of Breast Cancer
2009-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer ... pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image ... analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast
GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.
Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A
2016-01-01
In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet-lab experiments need to be performed. Virtual screening of a ligand data model requires large-scale computation, making it a highly time-consuming task. This process can be sped up by implementing parallelized algorithms on a Graphical Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in virtual screening. A ligand-based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems is proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay datasets. The quality of results produced by our tool on the GPU is the same as that in a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in the training and prediction phases.
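The serial baseline that a tool like GPURFSCREEN parallelizes can be sketched with scikit-learn's CPU random forest. This is a stand-in only (not the paper's GPU implementation), and the "fingerprint" data are synthetic: actives and inactives are given different preferred bit ranges to mimic a separable bioassay.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for binary molecular fingerprints: actives tend
# to set the first 20 bits, inactives the last 20 (toy bioassay data).
n, bits = 1000, 64
actives = (rng.random((n // 2, bits)) < 0.2).astype(int)
actives[:, :20] |= (rng.random((n // 2, 20)) < 0.8).astype(int)
inactives = (rng.random((n // 2, bits)) < 0.2).astype(int)
inactives[:, -20:] |= (rng.random((n // 2, 20)) < 0.8).astype(int)
X = np.vstack([actives, inactives])
y = np.array([1] * (n // 2) + [0] * (n // 2))

# Train on labeled assay results, then score held-out ligands;
# a GPU implementation parallelizes tree construction and prediction.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

In a screening campaign, `clf.predict_proba` over a large unlabeled library would rank candidates for wet-lab follow-up.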
Large-scale dynamics associated with clustering of extratropical cyclones affecting Western Europe
NASA Astrophysics Data System (ADS)
Pinto, Joaquim G.; Gómara, Iñigo; Masato, Giacomo; Dacre, Helen F.; Woollings, Tim; Caballero, Rodrigo
2015-04-01
Some recent winters in Western Europe have been characterized by the occurrence of multiple extratropical cyclones following a similar path. The occurrence of such cyclone clusters leads to large socio-economic impacts due to damaging winds, storm surges, and floods. Recent studies have statistically characterized the clustering of extratropical cyclones over the North Atlantic and Europe and hypothesized potential physical mechanisms responsible for their formation. Here we analyze 4 months characterized by multiple cyclones over Western Europe (February 1990, January 1993, December 1999, and January 2007). The evolution of the eddy driven jet stream, Rossby wave-breaking, and upstream/downstream cyclone development are investigated to infer the role of the large-scale flow and to determine if clustered cyclones are related to each other. Results suggest that optimal conditions for the occurrence of cyclone clusters are provided by a recurrent extension of an intensified eddy driven jet toward Western Europe lasting at least 1 week. Multiple Rossby wave-breaking occurrences on both the poleward and equatorward flanks of the jet contribute to the development of these anomalous large-scale conditions. The analysis of the daily weather charts reveals that upstream cyclone development (secondary cyclogenesis, where new cyclones are generated on the trailing fronts of mature cyclones) is strongly related to cyclone clustering, with multiple cyclones developing on a single jet streak. The present analysis permits a deeper understanding of the physical reasons leading to the occurrence of cyclone families over the North Atlantic, enabling a better estimation of the associated cumulative risk over Europe.
Scalable computing for evolutionary genomics.
Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert
2012-01-01
Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages of interest to evolutionary biology are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software.
Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.
Staghorn: An Automated Large-Scale Distributed System Analysis Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gabert, Kasimir; Burns, Ian; Elliott, Steven
2016-09-01
Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drouhard, Margaret MEG G; Steed, Chad A; Hahn, Steven E
In this paper, we propose strategies and objectives for immersive data visualization with applications in materials science using the Oculus Rift virtual reality headset. We provide background on currently available analysis tools for neutron scattering data and other large-scale materials science projects. In the context of the current challenges facing scientists, we discuss immersive virtual reality visualization as a potentially powerful solution. We introduce a prototype immersive visualization system, developed in conjunction with materials scientists at the Spallation Neutron Source, which we have used to explore large crystal structures and neutron scattering data. Finally, we offer our perspective on the greatest challenges that must be addressed to build effective and intuitive virtual reality analysis tools that will be useful for scientists in a wide range of fields.
A comparison of regional flood frequency analysis approaches in a simulation framework
NASA Astrophysics Data System (ADS)
Ganora, D.; Laio, F.
2016-07-01
Regional frequency analysis (RFA) is a well-established methodology to provide an estimate of the flood frequency curve at ungauged (or scarcely gauged) sites. Different RFA approaches exist, depending on the way the information is transferred to the site of interest, but it is not clear from the literature whether a specific method systematically outperforms the others. The aim of this study is to provide a framework within which to carry out the intercomparison, by building a virtual environment based on synthetically generated data. The considered regional approaches include: (i) a unique regional curve for the whole region; (ii) a multiple-region model where homogeneous subregions are determined through cluster analysis; (iii) a Region-of-Influence model which defines a homogeneous subregion for each site; (iv) a spatially smooth estimation procedure where the parameters of the regional model vary continuously in space. Virtual environments are generated considering different patterns of heterogeneity, including step change and smooth variations. If the region is heterogeneous, with the parent distribution changing continuously within the region, the spatially smooth regional approach outperforms the others, with overall errors 10-50% lower than the other methods. In the case of a step change, the spatially smooth and clustering procedures perform similarly if the heterogeneity is moderate, while clustering procedures work better when the step change is severe. To extend our findings, an extensive sensitivity analysis has been performed to investigate the effect of sample length, number of virtual stations, return period of the predicted quantile, variability of the scale parameter of the parent distribution, number of predictor variables and different parent distributions.
Overall, the spatially smooth approach appears as the most robust approach as its performances are more stable across different patterns of heterogeneity, especially when short records are considered.
NASA Astrophysics Data System (ADS)
Mackey, A. D.; Gilmore, G. F.
2003-01-01
We have compiled a pseudo-snapshot data set of two-colour observations from the Hubble Space Telescope archive for a sample of 53 rich LMC clusters with ages of 10⁶-10¹⁰ yr. We present surface brightness profiles for the entire sample, and derive structural parameters for each cluster, including core radii, and luminosity and mass estimates. Because we expect the results presented here to form the basis for several further projects, we describe in detail the data reduction and surface brightness profile construction processes, and compare our results with those of previous ground-based studies. The surface brightness profiles show a large amount of detail, including irregularities in the profiles of young clusters (such as bumps, dips and sharp shoulders), and evidence for both double clusters and post-core-collapse (PCC) clusters. In particular, we find power-law profiles in the inner regions of several candidate PCC clusters, with slopes of approximately -0.7, but showing considerable variation. We estimate that 20 ± 7 per cent of the old cluster population of the Large Magellanic Cloud (LMC) has entered PCC evolution, a similar fraction to that for the Galactic globular cluster system. In addition, we examine the profile of R136 in detail and show that it is probably not a PCC cluster. We also observe a trend in core radius with age that has been discovered and discussed in several previous publications by different authors. Our diagram has better resolution, however, and appears to show a bifurcation at several hundred Myr. We argue that this observed relationship reflects true physical evolution in LMC clusters, with some experiencing small-scale core expansion owing to mass loss, and others large-scale expansion owing to some unidentified characteristic or physical process.
The Relationship Between Galaxies and the Large-Scale Structure of the Universe
NASA Astrophysics Data System (ADS)
Coil, Alison L.
2018-06-01
I will describe our current understanding of the relationship between galaxies and the large-scale structure of the Universe, often called the galaxy-halo connection. Galaxies are thought to form and evolve in the centers of dark matter halos, which grow along with the galaxies they host. Large galaxy redshift surveys have revealed clear observational signatures of connections between galaxy properties and their clustering properties on large scales. For example, older, quiescent galaxies are known to cluster more strongly than younger, star-forming galaxies, which are more likely to be found in galactic voids and filaments rather than the centers of galaxy clusters. I will show how cosmological numerical simulations have aided our understanding of this galaxy-halo connection and what is known from a statistical point of view about how galaxies populate dark matter halos. This knowledge both helps us learn about galaxy evolution and is fundamental to our ability to use galaxy surveys to reveal cosmological information. I will talk briefly about some of the current open questions in the field, including galactic conformity and assembly bias.
Mpc-scale diffuse radio emission in two massive cool-core clusters of galaxies
NASA Astrophysics Data System (ADS)
Sommer, Martin W.; Basu, Kaustuv; Intema, Huib; Pacaud, Florian; Bonafede, Annalisa; Babul, Arif; Bertoldi, Frank
2017-04-01
Radio haloes are diffuse synchrotron sources on scales of ˜1 Mpc that are found in merging clusters of galaxies, and are believed to be powered by electrons re-accelerated by merger-driven turbulence. We present measurements of extended radio emission on similarly large scales in two clusters of galaxies hosting cool cores: Abell 2390 and Abell 2261. The analysis is based on interferometric imaging with the Karl G. Jansky Very Large Array, Very Large Array and Giant Metrewave Radio Telescope. We present detailed radio images of the targets, subtract the compact emission components and measure the spectral indices for the diffuse components. The radio emission in A2390 extends beyond a known sloshing-like brightness discontinuity, and has a very steep in-band spectral slope at 1.5 GHz that is similar to some known ultrasteep spectrum radio haloes. The diffuse signal in A2261 is more extended than in A2390 but has lower luminosity. X-ray morphological indicators, derived from XMM-Newton X-ray data, place these clusters in the category of relaxed or regular systems, although some asymmetric features that can indicate past minor mergers are seen in the X-ray brightness images. If these two Mpc-scale radio sources are categorized as giant radio haloes, they question the common assumption of radio haloes occurring exclusively in clusters undergoing violent merging activity, in addition to commonly used criteria for distinguishing between radio haloes and minihaloes.
Relative dispersion of clustered drifters in a small micro-tidal estuary
NASA Astrophysics Data System (ADS)
Suara, Kabir; Chanson, Hubert; Borgas, Michael; Brown, Richard J.
2017-07-01
Small tide-dominated estuaries are affected by large-scale flow structures which combine with the underlying bed-generated smaller-scale turbulence to significantly increase the magnitude of horizontal diffusivity. Field estimates of horizontal diffusivity and its associated scales are however rare due to limitations in instrumentation. Data from multiple deployments of low- and high-resolution clusters of GPS drifters are used to examine the dynamics of a surface flow in a small micro-tidal estuary through relative dispersion analyses. During the field study, cluster diffusivity, which combines both large- and small-scale processes, ranged between 0.01 and 3.01 m²/s for spreading clusters and between -0.06 and -4.2 m²/s for contracting clusters. Pair-particle dispersion, D_p², was scale dependent and grew as D_p² ∼ t^1.83 in the streamwise and D_p² ∼ t^0.8 in the cross-stream direction. At small separation scales (d < 0.5 m), pair-particle relative diffusivity followed Richardson's 4/3 power law and became weaker as the separation scale increased. Pair-particle diffusivity was described as K_p ∼ d^1.01 and K_p ∼ d^0.85 in the streamwise and cross-stream directions, respectively, for separation scales ranging from 0.1 to 10 m. Two methods were used to identify the mechanism responsible for dispersion within the channel. The results clearly revealed the importance of strain fields (stretching and shearing) in the spreading of particles within a small micro-tidal channel. The work provides input for modelling dispersion of passive particles in shallow micro-tidal estuaries, where these processes were not previously studied experimentally.
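The pair-particle dispersion diagnostic used above can be sketched numerically. This is an illustrative computation on synthetic Brownian drifter pairs (not the estuary data): for purely diffusive relative motion the mean squared separation grows roughly as t^1, whereas the streamwise estuary result (t^1.83) indicates faster-than-diffusive spreading.

```python
import numpy as np

def pair_dispersion(tracks_a, tracks_b):
    """Mean squared pair separation D_p^2(t) over an ensemble of
    drifter pairs; tracks have shape (n_pairs, n_times, 2)."""
    sep = tracks_a - tracks_b
    return (sep ** 2).sum(axis=-1).mean(axis=0)

# Synthetic Brownian drifter pairs (random-walk positions).
rng = np.random.default_rng(2)
n_pairs, n_times = 500, 200
steps = rng.normal(0, 0.1, (2, n_pairs, n_times, 2))
tracks = steps.cumsum(axis=2)           # integrate steps into positions
d2 = pair_dispersion(tracks[0], tracks[1])

# Power-law exponent from a log-log fit of D_p^2 against time.
t = np.arange(1, n_times)
slope = np.polyfit(np.log(t), np.log(d2[1:]), 1)[0]
```

Applying the same fit to GPS-drifter separations would recover scale-dependent exponents like those reported in the abstract.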
Data Integration: Charting a Path Forward to 2035
2011-02-14
New York, NY: Gotham Books, 2004. Seligman, Len. Mitre Corporation, e-mail interview, 6 Dec 2010. Singer, P.W. Wired for War: The Robotics... articles.aspx (accessed 4 Dec 2010). Ultra-Large-Scale Systems: The Software Challenge of the Future. Study lead Linda Northrup. Pittsburgh, PA: Carnegie Mellon Software...
Does lower Omega allow a resolution of the large-scale structure problem?
NASA Technical Reports Server (NTRS)
Silk, Joseph; Vittorio, Nicola
1987-01-01
The intermediate angular scale anisotropy of the cosmic microwave background, peculiar velocities, density correlations, and mass fluctuations for both neutrino and baryon-dominated universes with Omega less than one are evaluated. The large coherence length associated with a low-Omega, hot dark matter-dominated universe provides substantial density fluctuations on scales up to 100 Mpc: there is a range of acceptable models that are capable of producing large voids and superclusters of galaxies and the clustering of galaxy clusters, with Omega roughly 0.3, without violating any observational constraint. Low-Omega, cold dark matter-dominated cosmologies are also examined. All of these models may be reconciled with the inflationary requirement of a flat universe by introducing a cosmological constant 1-Omega.
Virtual Network Configuration Management System for Data Center Operations and Management
NASA Astrophysics Data System (ADS)
Okita, Hideki; Yoshizawa, Masahiro; Uehara, Keitaro; Mizuno, Kazuhiko; Tarui, Toshiaki; Naono, Ken
Virtualization technologies are widely deployed in data centers to improve system utilization. However, they increase the workload for operators, who have to manage the structure of virtual networks in data centers. A virtual-network management system is provided which automates the integration of virtual-network configurations. The proposed system collects the configurations from server virtualization platforms and VLAN-supported switches, and integrates them according to a newly developed XML-based management information model for virtual-network configurations. Preliminary evaluations show that the proposed system reduces by about 40 percent the time operators need to acquire configurations from devices and to correct inconsistencies in the configuration management database. They also show that the proposed system has excellent scalability: it takes less than 20 minutes to acquire the virtual-network configurations from a large-scale network that includes 300 virtual machines. These results imply that the proposed system is effective for improving the configuration management process for virtual networks in data centers.
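The integration step described above, merging per-device views into one XML document keyed by VLAN, can be sketched with the standard library. The schema below is purely illustrative; the paper's actual management information model is not public:

```python
import xml.etree.ElementTree as ET

def integrate_configs(vm_configs, switch_configs):
    """Merge VM-side and switch-side views into one virtual-network XML tree.

    vm_configs:     dicts like {"vm": "vm01", "host": "srv1", "vlan": 100}
    switch_configs: dicts like {"switch": "sw1", "port": "1/0/2", "vlan": 100}
    Both views are grouped under a <network> element per VLAN id
    (an illustrative schema, not the paper's information model).
    """
    root = ET.Element("virtual-networks")
    vlans = sorted({c["vlan"] for c in vm_configs} |
                   {c["vlan"] for c in switch_configs})
    for vid in vlans:
        net = ET.SubElement(root, "network", vlan=str(vid))
        for c in vm_configs:
            if c["vlan"] == vid:
                ET.SubElement(net, "vm", name=c["vm"], host=c["host"])
        for c in switch_configs:
            if c["vlan"] == vid:
                ET.SubElement(net, "port", switch=c["switch"], id=c["port"])
    return root
```

Grouping by VLAN makes inconsistencies visible at a glance: a `<network>` with VMs but no switch port (or vice versa) points to a stale entry in one of the source views.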
Open star clusters and Galactic structure
NASA Astrophysics Data System (ADS)
Joshi, Yogesh C.
2018-04-01
In order to understand the Galactic structure, we perform a statistical analysis of the distribution of various cluster parameters based on the most nearly complete sample of Galactic open clusters available to date. The geometrical and physical characteristics of a large number of open clusters given in the MWSC catalogue are used to study the spatial distribution of clusters in the Galaxy and to determine the scale height, solar offset, local mass density and distribution of reddening material in the solar neighbourhood. We also explore the mass-radius and mass-age relations in Galactic open star clusters. We find that the estimated parameters of the Galactic disk are strongly influenced by the choice of cluster sample.
NASA Astrophysics Data System (ADS)
Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.
2012-12-01
Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.
First results from the IllustrisTNG simulations: matter and galaxy clustering
NASA Astrophysics Data System (ADS)
Springel, Volker; Pakmor, Rüdiger; Pillepich, Annalisa; Weinberger, Rainer; Nelson, Dylan; Hernquist, Lars; Vogelsberger, Mark; Genel, Shy; Torrey, Paul; Marinacci, Federico; Naiman, Jill
2018-03-01
Hydrodynamical simulations of galaxy formation have now reached sufficient volume to make precision predictions for clustering on cosmologically relevant scales. Here, we use our new IllustrisTNG simulations to study the non-linear correlation functions and power spectra of baryons, dark matter, galaxies, and haloes over an exceptionally large range of scales. We find that baryonic effects increase the clustering of dark matter on small scales and damp the total matter power spectrum on scales up to k ∼ 10 h Mpc⁻¹ by 20 per cent. The non-linear two-point correlation function of the stellar mass is close to a power law over a wide range of scales and approximately invariant in time from very high redshift to the present. The two-point correlation function of the simulated galaxies agrees well with the Sloan Digital Sky Survey at its mean redshift z ≃ 0.1, both as a function of stellar mass and when split according to galaxy colour, apart from a mild excess in the clustering of red galaxies in the stellar mass range of 10⁹-10¹⁰ h⁻² M⊙. Given this agreement, the TNG simulations can make valuable theoretical predictions for the clustering bias of different galaxy samples. We find that the clustering length of the galaxy autocorrelation function depends strongly on stellar mass and redshift. Its power-law slope γ is nearly invariant with stellar mass, but declines from γ ∼ 1.8 at redshift z = 0 to γ ∼ 1.6 at redshift z ∼ 1, beyond which the slope steepens again. We detect significant scale dependences in the bias of different observational tracers of large-scale structure, extending well into the range of the baryonic acoustic oscillations and causing nominal (yet fortunately correctable) shifts of the acoustic peaks of around 5 per cent.
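The power-law slope γ discussed above is conventionally obtained by fitting ξ(r) = (r₀/r)^γ in log-log space. A minimal NumPy sketch of such a fit (the function name and the pure power-law assumption are illustrative, not the simulation analysis pipeline):

```python
import numpy as np

def fit_correlation_powerlaw(r, xi):
    """Fit xi(r) = (r0 / r)**gamma by linear regression in log space.

    log xi = gamma*log(r0) - gamma*log(r), so the slope gives -gamma
    and the intercept gives gamma*log(r0).  Returns (r0, gamma).
    """
    slope, intercept = np.polyfit(np.log(r), np.log(xi), 1)
    gamma = -slope
    r0 = np.exp(intercept / gamma)
    return r0, gamma
```

In practice one restricts the fit to the range of scales where ξ(r) is actually close to a power law, since the slope flattens on very large scales.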
What drives the formation of massive stars and clusters?
NASA Astrophysics Data System (ADS)
Ochsendorf, Bram; Meixner, Margaret; Roman-Duval, Julia; Evans, Neal J., II; Rahman, Mubdi; Zinnecker, Hans; Nayak, Omnarayani; Bally, John; Jones, Olivia C.; Indebetouw, Remy
2018-01-01
Galaxy-wide surveys allow us to study star formation in unprecedented ways. In this talk, I will discuss our analysis of the Large Magellanic Cloud (LMC) and the Milky Way, and illustrate how studying both the large- and small-scale structure of galaxies is critical in addressing the question: what drives the formation of massive stars and clusters? I will show that 'turbulence-regulated' star formation models do not reproduce the massive star formation properties of GMCs in the LMC and Milky Way: this suggests that theory currently does not capture the full complexity of star formation on small scales. I will also report on the discovery of a massive star-forming complex in the LMC, which in many ways manifests itself as an embedded twin of 30 Doradus: this may shed light on the formation of R136 and 'Super Star Clusters' in general. Finally, I will highlight what we can expect in the next years in the field of star formation with large-scale sky surveys, ALMA, and our JWST-GTO program.
Virtual interface environment workstations
NASA Technical Reports Server (NTRS)
Fisher, S. S.; Wenzel, E. M.; Coler, C.; Mcgreevy, M. W.
1988-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.
NASA Astrophysics Data System (ADS)
Moran, Sean; Smith, G.; Haines, C.; Egami, E.; Hardegree-Ullman, E.; Heckman, T.
2010-01-01
We present results from LoCuSS, the Local Cluster Substructure Survey, on the distribution and abundance of cluster galaxies showing signatures of recently quenched star formation, within a sample of 15 z ∼ 0.2 clusters. Combining LoCuSS' wide-field UV through NIR photometry with weak-lensing derived mass maps for these clusters, we identify passive galaxies that have undergone recent quenching via both rapid (∼100 Myr) and slow (∼1 Gyr) mechanisms. By studying their abundance in a statistically significant sample of z ∼ 0.2 clusters, we explore how the effectiveness of environmental quenching of star formation varies as a function of the level of cluster substructure, in addition to global cluster characteristics such as mass or X-ray luminosity and temperature, with the aim of understanding the role that pre-processing of galaxies within groups and filaments plays in the overall buildup of the morphology-density and SFR-density relations. We find that clusters with large levels of substructure, indicative of recent assembly or cluster-cluster mergers, host a higher fraction of galaxies with signs of recent ram-pressure stripping by the hot intra-cluster gas. In addition, we find that the fraction of post-starburst galaxies increases with cluster mass (M500), but the fractions of optically selected AGN and GALEX-defined "Green Valley" galaxies show the opposite trend, being most abundant in rather low-mass clusters. These trends suggest a picture in which quenching of star formation occurs most vigorously in actively assembling structures, with comparatively little activity in the most massive structures, where most of the nearby large-scale structure has already been accreted and virialized into the main cluster body.
A numerical study of Coulomb interaction effects on 2D hopping transport.
Kinkhabwala, Yusuf A; Sverdlov, Viktor A; Likharev, Konstantin K
2006-02-15
We have extended our supercomputer-enabled Monte Carlo simulations of hopping transport in completely disordered 2D conductors to the case of substantial electron-electron Coulomb interaction. Such interaction may not only suppress the average value of hopping current, but also affect its fluctuations rather substantially. In particular, the spectral density S_I(f) of current fluctuations exhibits, at sufficiently low frequencies, a 1/f-like increase which approximately follows the Hooge scaling, even at vanishing temperature. At higher f, there is a crossover to a broad range of frequencies in which S_I(f) is nearly constant, hence allowing characterization of the current noise by the effective Fano factor [Formula: see text]. For sufficiently large conductor samples and low temperatures, the Fano factor is suppressed below the Schottky value (F = 1), scaling with the length L of the conductor as F = (L_c/L)^α. The exponent α is significantly affected by the Coulomb interaction effects, changing from α = 0.76 ± 0.08 when such effects are negligible to virtually unity when they are substantial. The scaling parameter L_c, interpreted as the average percolation cluster length along the electric field direction, scales as [Formula: see text] when Coulomb interaction effects are negligible and [Formula: see text] when such effects are substantial, in good agreement with estimates based on the theory of directed percolation.
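The Fano factor F compared against the Schottky value above is, in one common equivalent formulation, a variance-to-mean ratio of transmitted charge counts per observation window. A minimal sketch of that estimator (my own simplification; the paper defines F via the plateau of the spectral density S_I(f), not via this counting statistic):

```python
import numpy as np

def fano_factor(counts):
    """Effective Fano factor F = Var(N) / <N> for counts per window.

    F = 1 corresponds to uncorrelated Poissonian (full shot) noise;
    F < 1 indicates suppression, as for the hopping currents above.
    """
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean()
```

Applied to Poisson-distributed counts this estimator returns F ≈ 1; perfectly regular (noiseless) transfer gives F = 0.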
Platinum clusters with precise numbers of atoms for preparative-scale catalysis.
Imaoka, Takane; Akanuma, Yuki; Haruta, Naoki; Tsuchiya, Shogo; Ishihara, Kentaro; Okayasu, Takeshi; Chun, Wang-Jae; Takahashi, Masaki; Yamamoto, Kimihisa
2017-09-25
Subnanometer noble metal clusters have enormous potential, mainly for catalytic applications. Because a difference of only one atom may cause significant changes in their reactivity, a preparation method with atomic-level precision is essential. Although such a precision with enough scalability has been achieved by gas-phase synthesis, large-scale preparation is still at the frontier, hampering practical applications. We now show the atom-precise and fully scalable synthesis of platinum clusters on a milligram scale from tiara-like platinum complexes with various ring numbers (n = 5-13). Low-temperature calcination of the complexes on a carbon support under hydrogen stream affords monodispersed platinum clusters, whose atomicity is equivalent to that of the precursor complex. One of the clusters (Pt10) exhibits high catalytic activity in the hydrogenation of styrene compared to that of the other clusters. This method opens an avenue for the application of these clusters to preparative-scale catalysis. The catalytic activity of a noble metal nanocluster is tied to its atomicity. Here, the authors report an atom-precise, fully scalable synthesis of platinum clusters from molecular ring precursors, and show that a variation of only one atom can dramatically change a cluster's reactivity.
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A fast learning method for multi-class SVM (Support Vector Machine) classification, based on a binary tree, is presented to address the low learning efficiency of SVMs on large-scale multi-class samples. A bottom-up procedure builds the binary-tree hierarchy, and the sub-classifier at each node learns from the corresponding samples. During learning, a first clustering of the training samples generates several class clusters. Central points are extracted directly from clusters that contain only one type of sample. For clusters containing two types of samples, the cluster numbers for the positive and negative samples are set according to their degree of mixture, a secondary clustering is performed, and central points are then extracted from the resulting sub-class clusters. Sub-classifiers are obtained by learning from the reduced sample set formed by integrating the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, maintains high classification accuracy while greatly reducing the number of samples and effectively improving learning efficiency.
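The central-point reduction step, replacing each class's samples by a few cluster centers before SVM training, can be sketched as follows. This is plain NumPy with a minimal k-means standing in for the paper's clustering; all names, the fixed iteration count, and the single-level reduction (no mixture-degree handling) are illustrative simplifications:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means; returns the k cluster centroids of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):               # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def reduce_training_set(X, y, k):
    """Replace each class by its k cluster centroids.

    The sub-classifiers of the binary tree would then be trained on
    this much smaller (centroids, labels) set instead of the raw data.
    """
    Xs, ys = [], []
    for cls in np.unique(y):
        c = kmeans(X[y == cls], k)
        Xs.append(c)
        ys.append(np.full(len(c), cls))
    return np.vstack(Xs), np.concatenate(ys)
```

Training on centroids trades a small loss in boundary detail for a large drop in the number of support-vector candidates, which is the source of the claimed speed-up.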
The impact of Lyman-α radiative transfer on large-scale clustering in the Illustris simulation
NASA Astrophysics Data System (ADS)
Behrens, C.; Byrohl, C.; Saito, S.; Niemeyer, J. C.
2018-06-01
Context. Lyman-α emitters (LAEs) are a promising probe of the large-scale structure at high redshift, z ≳ 2. In particular, the Hobby-Eberly Telescope Dark Energy Experiment aims at observing LAEs at 1.9 < z < 3.5 to measure the baryon acoustic oscillation (BAO) scale and the redshift-space distortion (RSD). However, it has been pointed out that the complicated radiative transfer (RT) of the resonant Lyman-α emission line generates an anisotropic selection bias in the LAE clustering on large scales, s ≳ 10 Mpc. This effect could potentially induce a systematic error in the BAO and RSD measurements. There is also a recent, albeit statistically insignificant, claim of observational evidence for the effect in the Lyman-α intensity map. Aims: We aim at quantifying the impact of the Lyman-α RT on the large-scale galaxy clustering in detail. For this purpose, we study the correlations between the large-scale environment and the ratio of an apparent Lyman-α luminosity to an intrinsic one, which we call the "observed fraction", at 2 < z < 6. Methods: We apply our Lyman-α RT code by post-processing the full Illustris simulations. We simply assume that the intrinsic luminosity of the Lyman-α emission is proportional to the star formation rate of galaxies in Illustris, yielding a sufficiently large sample of LAEs to measure the anisotropic selection bias. Results: We find little correlation between large-scale environment and the observed fraction induced by the RT, and hence a smaller anisotropic selection bias than has previously been claimed. We argue that the anisotropy was overestimated in previous work due to insufficient spatial resolution; it is important to keep the resolution such that it resolves the high-density region down to the scale of the interstellar medium, that is, 1 physical kpc. We also find that the correlation can be further enhanced by assumptions in modeling intrinsic Lyman-α emission.
Suppressed star formation by a merging cluster system
Mansheim, A. S.; Lemaux, B. C.; Tomczak, A. R.; ...
2017-03-24
We examine the effects of an impending cluster merger on galaxies in the large scale structure (LSS) RX J0910 at z = 1.105. Using multi-wavelength data, including 102 spectral members drawn from the Observations of Redshift Evolution in Large Scale Environments (ORELSE) survey and precise photometric redshifts, we calculate star formation rates and map the specific star formation rate density of the LSS galaxies. These analyses, along with an investigation of the color-magnitude properties of LSS galaxies, indicate lower levels of star formation activity in the region between the merging clusters relative to the outskirts of the system. We suggest that gravitational tidal forces due to the potential of the merging halos may be the physical mechanism responsible for the observed suppression of star formation in galaxies caught between the merging clusters.
Humbeck, Lina; Weigang, Sebastian; Schäfer, Till; Mutzel, Petra; Koch, Oliver
2018-03-20
A common issue during drug design and development is the discovery of novel scaffolds for protein targets. On the one hand, the chemical space of purchasable compounds is rather limited; on the other hand, artificially generated molecules suffer from a grave lack of accessibility in practice. Therefore, we generated a novel virtual library of small molecules which are synthesizable from purchasable educts, called CHIPMUNK (CHemically feasible In silico Public Molecular UNiverse Knowledge base). Altogether, CHIPMUNK covers over 95 million compounds and encompasses regions of the chemical space that are not covered by existing databases. The coverage of CHIPMUNK exceeds the chemical space spanned by the Lipinski rule of five, fostering the exploration of novel and difficult target classes. Analysis of the generated property space reveals that CHIPMUNK is well suited for the design of protein-protein interaction inhibitors (PPIIs). Furthermore, a recently developed structural clustering algorithm for big data (StruClus) was used to partition the sub-libraries into meaningful subsets and to assist scientists in processing the large amount of data. These clustered subsets also contain the target space based on ChEMBL data, which was included during clustering. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sea snakes rarely venture far from home
Lukoschek, Vimoksalehi; Shine, Richard
2012-01-01
The extent to which populations are connected by dispersal influences all aspects of their biology and informs the spatial scale of optimal conservation strategies. Obtaining direct estimates of dispersal is challenging, particularly in marine systems, with studies typically relying on indirect approaches to evaluate connectivity. To overcome this challenge, we combine information from an eight-year mark-recapture study with high-resolution genetic data to demonstrate extremely low dispersal and restricted gene flow at small spatial scales for a large, potentially mobile marine vertebrate, the turtleheaded sea snake (Emydocephalus annulatus). Our mark-recapture study indicated that adjacent bays in New Caledonia (<1.15 km apart) contain virtually separate sea snake populations. Sea snakes could easily swim between bays but rarely do so. Of 817 recaptures of marked snakes, only two snakes had moved between bays. We genotyped 136 snakes for 11 polymorphic microsatellite loci and found statistically significant genetic divergence between the two bays (F_ST = 0.008, P < 0.01). Bayesian clustering analyses detected low mixed ancestry within bays and genetic relatedness coefficients were higher, on average, within than between bays. Our results indicate that turtleheaded sea snakes rarely venture far from home, which has strong implications for their ecology, evolution, and conservation. PMID:22833788
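The F_ST statistic quoted above measures how much of the total genetic variance lies between rather than within populations. As a toy illustration, here is the classic single-locus biallelic form F_ST = (H_T − H_S)/H_T; the study itself estimated F_ST from 11 multi-allelic microsatellite loci with standard population-genetics estimators, so this sketch only conveys the statistic's meaning:

```python
def fst_biallelic(p1, p2):
    """Wright's F_ST for one biallelic locus, two equal-size subpopulations.

    p1, p2: frequency of one allele in each subpopulation.
    H_S is the mean within-population heterozygosity, H_T the
    heterozygosity of the pooled population; F_ST = (H_T - H_S) / H_T.
    """
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)
    return (h_t - h_s) / h_t
```

Identical allele frequencies give F_ST = 0 (no differentiation); values as small as the 0.008 reported can still be statistically significant with enough sampled individuals and loci.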
Parameter studies on the energy balance closure problem using large-eddy simulation
NASA Astrophysics Data System (ADS)
De Roo, Frederik; Banerjee, Tirtha; Mauder, Matthias
2017-04-01
The imbalance of the surface energy budget in eddy-covariance measurements is still a pending problem. A possible cause is the presence of land surface heterogeneity. Heterogeneities of the boundary-layer scale or larger are most effective in influencing the boundary-layer turbulence, and large-eddy simulations have shown that secondary circulations within the boundary layer can affect the surface energy budget. However, the precise influence of the surface characteristics on the energy imbalance and its partitioning is still unknown. To investigate the influence of surface variables on all the components of the flux budget under convective conditions, we set up a systematic parameter study by means of large-eddy simulation. For the study we use a virtual control volume approach, and we focus on idealized heterogeneity by considering spatially variable surface fluxes. The surface fluxes vary locally in intensity, and the patches have different length scales; the main focus lies on heterogeneities at the kilometer scale and one decade smaller. For each simulation, virtual measurement towers are positioned at functionally different positions. We discriminate between the locally homogeneous towers, located within land-use patches, and the more heterogeneous towers, and find, among other results, that the flux divergence and the advection are strongly linearly related within each class. Furthermore, we seek correlators for the energy balance ratio and the energy residual in the simulations. Besides the expected correlation with measurable atmospheric quantities such as the friction velocity, boundary-layer depth and temperature and moisture gradients, we have also found an unexpected correlation with the difference between sonic temperature and surface temperature. In additional simulations with a large number of virtual towers, we investigate higher-order correlations, which can be linked to secondary circulations.
In a companion presentation (EGU2017-2130) these correlations are investigated and confirmed with the help of micrometeorological measurements from the TERENO sites where the effects of landscape scale surface heterogeneities are deemed to be important.
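The energy balance ratio and residual examined above are conventionally defined from the sensible heat flux H, latent heat flux LE, net radiation Rn and ground heat flux G. A minimal sketch of these standard definitions (the function name is my own; the authors' virtual-control-volume budget includes additional storage and advection terms):

```python
import numpy as np

def energy_balance(H, LE, Rn, G):
    """Energy-balance ratio and residual for eddy-covariance intervals.

    EBR = sum(H + LE) / sum(Rn - G);  Res = Rn - G - H - LE per interval.
    All inputs in W m^-2; EBR = 1 means perfect closure, EBR < 1 is the
    typically observed imbalance.
    """
    H, LE, Rn, G = (np.asarray(x, dtype=float) for x in (H, LE, Rn, G))
    ebr = (H + LE).sum() / (Rn - G).sum()
    res = Rn - G - H - LE
    return ebr, res
```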
The impact of baryonic matter on gravitational lensing by galaxy clusters
NASA Astrophysics Data System (ADS)
Lee, Brandyn E.; King, Lindsay; Applegate, Douglas; McCarthy, Ian
2017-01-01
Since the bulk of the matter comprising galaxy clusters exists in the form of dark matter, gravitational N-body simulations have historically been an effective way to investigate large scale structure formation and the astrophysics of galaxy clusters. However, upcoming telescopes such as the Large Synoptic Survey Telescope are expected to have lower systematic errors than older generations, reducing measurement uncertainties and requiring that astrophysicists better quantify the impact of baryonic matter on the cluster lensing signal. Here we outline the effects of baryonic processes on cluster density profiles and on weak lensing mass and concentration estimates. Our analysis is done using clusters grown in the suite of cosmological hydrodynamical simulations known as cosmo-OWLS.
Witnessing the growth of the nearest galaxy cluster: thermodynamics of the Virgo Cluster outskirts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simionescu, A.; Werner, N.; Mantz, A.
Here, we present results from Suzaku Key Project observations of the Virgo Cluster, the nearest galaxy cluster to us, mapping its X-ray properties along four long ‘arms’ extending beyond the virial radius. The entropy profiles along all four azimuths increase with radius, then level out beyond ~0.5 r200, while the average pressure at large radii exceeds Planck Sunyaev–Zel'dovich measurements. These results can be explained by enhanced gas density fluctuations (clumping) in the cluster's outskirts. Using a standard Navarro, Frenk and White model, we estimate a virial mass, radius and concentration parameter of M200 = 1.05 ± 0.02 × 10¹⁴ M⊙, r200 = 974.1 ± 5.7 kpc and c = 8.8 ± 0.2, respectively. The inferred cumulative baryon fraction exceeds the cosmic mean at r ~ r200 along the major axis, suggesting enhanced gas clumping possibly sourced by a candidate large-scale structure filament along the north–south direction. The Suzaku data reveal a large-scale sloshing pattern, with two new cold fronts detected at radii of 233 and 280 kpc along the western and southern arms, respectively. Two high-temperature regions are also identified 1 Mpc towards the south and 605 kpc towards the west of M87, likely representing shocks associated with the ongoing cluster growth. Although systematic uncertainties in measuring the metallicity for low-temperature plasma remain, the data at large radii appear consistent with a uniform metal distribution on scales of ~90 × 180 kpc and larger, providing additional support for the early chemical enrichment scenario driven by galactic winds at redshifts of 2–3.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slepian, Zachary; Slosar, Anze; Eisenstein, Daniel J.
We search for a galaxy clustering bias due to a modulation of galaxy number with the baryon-dark matter relative velocity resulting from recombination-era physics. We find no detected signal and place the constraint bv < 0.01 on the relative velocity bias for the CMASS galaxies. This bias is an important potential systematic of Baryon Acoustic Oscillation (BAO) method measurements of the cosmic distance scale using the 2-point clustering. Our limit on the relative velocity bias indicates a systematic shift of no more than 0.3% rms in the distance scale inferred from the BAO feature in the BOSS 2-point clustering, well below the 1% statistical error of this measurement. In conclusion, this constraint is the most stringent currently available and has important implications for the ability of upcoming large-scale structure surveys such as DESI to self-protect against the relative velocity as a possible systematic.
The void spectrum in two-dimensional numerical simulations of gravitational clustering
NASA Technical Reports Server (NTRS)
Kauffmann, Guinevere; Melott, Adrian L.
1992-01-01
An algorithm for deriving a spectrum of void sizes from two-dimensional high-resolution numerical simulations of gravitational clustering is tested, and it is verified that it produces the correct results where those results can be anticipated. The method is used to study the growth of voids as clustering proceeds. It is found that the most stable indicator of the characteristic void 'size' in the simulations is the mean fractional area covered by voids of diameter d, in a density field smoothed at its correlation length. Very accurate scaling behavior is found in power-law numerical models as they evolve. Eventually, this scaling breaks down as the nonlinearity reaches larger scales. It is shown that this breakdown is a manifestation of the undesirable effect of boundary conditions on simulations, even with the very large dynamic range possible here. A simple criterion is suggested for deciding when simulations with modest large-scale power may systematically underestimate the frequency of larger voids.
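The core of any such void-spectrum algorithm is identifying connected under-dense regions in the (smoothed) 2D density field and tallying their sizes. A minimal sketch of that region-finding step follows; the paper's actual statistic (mean fractional area covered by voids of diameter d, with smoothing at the correlation length) is built on top of something like this, and the threshold and connectivity choices here are illustrative:

```python
import numpy as np

def void_areas(density, threshold):
    """Areas (cell counts) of connected under-dense regions in a 2D field.

    A void is a maximal 4-connected region with density < threshold;
    regions are found by an iterative flood fill. Returns areas sorted
    largest first.
    """
    mask = density < threshold
    seen = np.zeros_like(mask, dtype=bool)
    areas = []
    nx, ny = mask.shape
    for i in range(nx):
        for j in range(ny):
            if mask[i, j] and not seen[i, j]:
                stack, area = [(i, j)], 0
                seen[i, j] = True
                while stack:
                    x, y = stack.pop()
                    area += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if 0 <= u < nx and 0 <= v < ny and mask[u, v] and not seen[u, v]:
                            seen[u, v] = True
                            stack.append((u, v))
                areas.append(area)
    return sorted(areas, reverse=True)
```

A histogram of equivalent diameters d = 2*sqrt(area/π) over these regions then yields a void-size spectrum; periodic boundary conditions, as used in simulations, would require wrapping the neighbour indices.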
Slepian, Zachary; Slosar, Anze; Eisenstein, Daniel J.; ...
2017-10-24
We search for a galaxy clustering bias due to a modulation of galaxy number with the baryon-dark matter relative velocity resulting from recombination-era physics. We find no detected signal and place the constraint bv < 0.01 on the relative velocity bias for the CMASS galaxies. This bias is an important potential systematic of Baryon Acoustic Oscillation (BAO) method measurements of the cosmic distance scale using the 2-point clustering. Our limit on the relative velocity bias indicates a systematic shift of no more than 0.3% rms in the distance scale inferred from the BAO feature in the BOSS 2-point clustering, well below the 1% statistical error of this measurement. In conclusion, this constraint is the most stringent currently available and has important implications for the ability of upcoming large-scale structure surveys such as DESI to self-protect against the relative velocity as a possible systematic.
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.; Blazek, Jonathan A.; Brownstein, Joel R.; Chuang, Chia-Hsun; Gil-Marín, Héctor; Ho, Shirley; Kitaura, Francisco-Shu; McEwen, Joseph E.; Percival, Will J.; Ross, Ashley J.; Rossi, Graziano; Seo, Hee-Jong; Slosar, Anže; Vargas-Magaña, Mariana
2018-02-01
We search for a galaxy clustering bias due to a modulation of galaxy number with the baryon-dark matter relative velocity resulting from recombination-era physics. We find no detected signal and place the constraint bv < 0.01 on the relative velocity bias for the CMASS galaxies. This bias is an important potential systematic of baryon acoustic oscillation (BAO) method measurements of the cosmic distance scale using the two-point clustering. Our limit on the relative velocity bias indicates a systematic shift of no more than 0.3 per cent rms in the distance scale inferred from the BAO feature in the BOSS two-point clustering, well below the 1 per cent statistical error of this measurement. This constraint is the most stringent currently available and has important implications for the ability of upcoming large-scale structure surveys such as the Dark Energy Spectroscopic Instrument (DESI) to self-protect against the relative velocity as a possible systematic.
NASA Astrophysics Data System (ADS)
Xu, K.; Sühring, M.; Metzger, S.; Desai, A. R.
2017-12-01
Most eddy covariance (EC) flux towers suffer from footprint bias. This footprint not only varies rapidly in time, but is smaller than the resolution of most Earth system models, leading to a systematic scale mismatch in model-data comparison. Previous studies have suggested this problem can be mitigated (1) with multiple towers, (2) by building a taller tower with a large flux footprint, and (3) by applying advanced scaling methods. Here we ask: (1) How many flux towers are needed to sufficiently sample the flux mean and variation across an Earth system model domain? (2) How tall is tall enough for a single tower to represent the Earth system model domain? (3) Can we reduce the requirements derived from the first two questions with advanced scaling methods? We test these questions with output from large eddy simulations (LES) and application of the environmental response function (ERF) upscaling method. PALM LES (Maronga et al. 2015) was set up over a domain of 12 km x 16 km x 1.8 km at 7 m spatial resolution and produced 5 hours of output at a time step of 0.3 s. The surface Bowen ratio alternated between 0.2 and 1 among a series of 3 km wide stripe-like surface patches, with horizontal wind perpendicular to the surface heterogeneity. A total of 384 virtual towers were arranged on a regular grid across the LES domain, recording EC observations at 18 vertical levels. We use increasing height of a virtual flux tower and increasing numbers of virtual flux towers in the domain to compute energy fluxes. Initial results show a large (>25) number of towers is needed to sufficiently sample the mean domain energy flux. When the ERF upscaling method was applied to the virtual towers in the LES environment, we were able to map fluxes over the domain to within 20% precision with a significantly smaller number of towers. This was achieved by relating sub-hourly turbulent fluxes to meteorological forcings and surface properties. These results demonstrate how advanced scaling techniques can decrease the number of towers, and thus experimental expense, required for domain-scaling over heterogeneous surfaces.
Large-scale P2P network based distributed virtual geographic environment (DVGE)
NASA Astrophysics Data System (ADS)
Tan, Xicheng; Yu, Liang; Bian, Fuling
2007-06-01
The Virtual Geographic Environment (VGE) has attracted wide attention as a class of software information system that helps us understand and analyze the real geographic environment, and it has been extended to application services in distributed settings as the distributed virtual geographic environment system (DVGE), with some notable achievements. However, constrained by the massive data volumes of VGE, limited network bandwidth, heavy request loads, and economic factors, DVGE still faces challenges that prevent it from providing the public with high-quality service under the current network mode. The rapid development of peer-to-peer (P2P) network technology offers new solutions to these challenges: P2P networks can effectively publish and search network resources and thereby realize efficient sharing of information. Accordingly, this paper proposes a large-scale P2P network extension of DVGE and presents an in-depth study of its network framework, routing mechanism, and DVGE data management on the P2P network.
The Design and Evaluation of a Large-Scale Real-Walking Locomotion Interface
Peck, Tabitha C.; Fuchs, Henry; Whitton, Mary C.
2014-01-01
Redirected Free Exploration with Distractors (RFED) is a large-scale real-walking locomotion interface developed to enable people to walk freely in virtual environments that are larger than the tracked space in their facility. This paper describes the RFED system in detail and reports on a user study that evaluated RFED by comparing it to walking-in-place and joystick interfaces. The RFED system is composed of two major components, redirection and distractors. This paper discusses design challenges, implementation details, and lessons learned during the development of two working RFED systems. The evaluation study examined the effect of the locomotion interface on users’ cognitive performance on navigation and wayfinding measures. The results suggest that participants using RFED were significantly better at navigating and wayfinding through virtual mazes than participants using walking-in-place and joystick interfaces. Participants traveled shorter distances, made fewer wrong turns, pointed to hidden targets more accurately and more quickly, and were able to place and label targets on maps more accurately, and more accurately estimate the virtual environment size. PMID:22184262
Blocked inverted indices for exact clustering of large chemical spaces.
Thiel, Philipp; Sach-Peltason, Lisa; Ottmann, Christian; Kohlbacher, Oliver
2014-09-22
The calculation of pairwise compound similarities based on fingerprints is one of the fundamental tasks in chemoinformatics. Methods for efficient calculation of compound similarities are of the utmost importance for various applications like similarity searching or library clustering. With the increasing size of public compound databases, exact clustering of these databases is desirable, but often computationally prohibitively expensive. We present an optimized inverted index algorithm for the calculation of all pairwise similarities on 2D fingerprints of a given data set. In contrast to other algorithms, it neither requires GPU computing nor yields a stochastic approximation of the clustering. The algorithm has been designed to work well with multicore architectures and shows excellent parallel speedup. As an application example of this algorithm, we implemented a deterministic clustering application, which has been designed to decompose virtual libraries comprising tens of millions of compounds in a short time on current hardware. Our results show that our implementation achieves more than 400 million Tanimoto similarity calculations per second on a common desktop CPU. Deterministic clustering of the available chemical space thus can be done on modern multicore machines within a few days.
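The core inverted-index idea (visiting a pair of compounds only through the fingerprint bits they share) can be sketched in a few lines. This is an illustrative toy, not the authors' optimized multicore implementation; the function name and threshold parameter are ours.

```python
# Toy inverted-index sketch for pairwise Tanimoto similarity over sparse
# binary fingerprints (illustrative only; not the blocked algorithm of
# the paper). Fingerprints are given as sets of "on" bit positions.

def tanimoto_pairs(fingerprints, threshold=0.5):
    """Return {(i, j): similarity} for all pairs at or above threshold."""
    # Inverted index: bit position -> list of molecule ids that set it.
    index = {}
    for mol_id, bits in enumerate(fingerprints):
        for bit in bits:
            index.setdefault(bit, []).append(mol_id)

    # Intersection sizes, accumulated by walking the index: a pair is
    # only ever touched via bits both molecules set.
    common = {}
    for members in index.values():
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                pair = (members[i], members[j])
                common[pair] = common.get(pair, 0) + 1

    # Tanimoto(a, b) = |a ∩ b| / |a ∪ b|, with |a ∪ b| = |a| + |b| - |a ∩ b|.
    results = {}
    for (a, b), c in common.items():
        sim = c / (len(fingerprints[a]) + len(fingerprints[b]) - c)
        if sim >= threshold:
            results[(a, b)] = sim
    return results
```

Molecules sharing no bits never generate work, which is why sparse fingerprints make this approach fast.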
The X-ray luminosity functions of Abell clusters from the Einstein Cluster Survey
NASA Technical Reports Server (NTRS)
Burg, R.; Giacconi, R.; Forman, W.; Jones, C.
1994-01-01
We have derived the present epoch X-ray luminosity function of northern Abell clusters using luminosities from the Einstein Cluster Survey. The sample is sufficiently large that we can determine the luminosity function for each richness class separately with sufficient precision to study and compare the different luminosity functions. We find that, within each richness class, the range of X-ray luminosity is quite large and spans nearly a factor of 25. Characterizing the luminosity function for each richness class with a Schechter function, we find that the characteristic X-ray luminosity L* scales with richness class as L* ∝ N*^γ, where N* is the corrected, mean number of galaxies in a richness class, and the best-fitting exponent is γ = 1.3 ± 0.4. Finally, our analysis suggests that there is a lower limit to the X-ray luminosity of clusters which is determined by the integrated emission of the cluster member galaxies, and this also scales with richness class. The present sample forms a baseline for testing cosmological evolution of Abell-like clusters when an appropriate high-redshift cluster sample becomes available.
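As a hedged numeric aside: a scaling of the form L* ∝ N*^γ is linear in log-log space, so the exponent can be recovered by a simple least-squares fit. The values below are synthetic, not the survey data.

```python
import numpy as np

def fit_powerlaw_exponent(N, L):
    # log L = gamma * log N + const, so the slope of a log-log fit is gamma.
    gamma, _ = np.polyfit(np.log(N), np.log(L), 1)
    return gamma
```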
EMBEDDED CLUSTERS IN THE LARGE MAGELLANIC CLOUD USING THE VISTA MAGELLANIC CLOUDS SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romita, Krista; Lada, Elizabeth; Cioni, Maria-Rosa, E-mail: k.a.romita@ufl.edu, E-mail: elada@ufl.edu, E-mail: mcioni@aip.de
We present initial results of the first large-scale survey of embedded star clusters in molecular clouds in the Large Magellanic Cloud (LMC) using near-infrared imaging from the Visible and Infrared Survey Telescope for Astronomy Magellanic Clouds Survey. We explored a ∼1.65 deg² area of the LMC, which contains the well-known star-forming region 30 Doradus as well as ∼14% of the galaxy’s CO clouds, and identified 67 embedded cluster candidates, 45 of which are newly discovered as clusters. We have determined the sizes, luminosities, and masses for these embedded clusters, examined the star formation rates (SFRs) of their corresponding molecular clouds, and made a comparison between the LMC and the Milky Way. Our preliminary results indicate that embedded clusters in the LMC are generally larger, more luminous, and more massive than those in the local Milky Way. We also find that the surface densities of both embedded clusters and molecular clouds are ∼3 times higher than in our local environment, the embedded cluster mass surface density is ∼40 times higher, the SFR is ∼20 times higher, and the star formation efficiency is ∼10 times higher. Despite these differences, the SFRs of the LMC molecular clouds are consistent with the SFR scaling law presented in Lada et al. This consistency indicates that while the conditions of embedded cluster formation may vary between environments, the overall process within molecular clouds may be universal.
NASA Astrophysics Data System (ADS)
Dörner, Ralf; Lok, Benjamin; Broll, Wolfgang
Backed by a large consumer market, entertainment and education applications have spurred developments in the fields of real-time rendering and interactive computer graphics. Relying on Computer Graphics methodologies, Virtual Reality and Augmented Reality benefited indirectly from this; however, there is no large scale demand for VR and AR in gaming and learning. What are the shortcomings of current VR/AR technology that prevent a widespread use in these application areas? What advances in VR/AR will be necessary? And what might future “VR-enhanced” gaming and learning look like? Which role can and will Virtual Humans play? Concerning these questions, this article analyzes the current situation and provides an outlook on future developments. The focus is on social gaming and learning.
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1990-01-01
The large-scale distribution of groups of galaxies selected from complete slices of the CfA redshift survey extension is examined. The survey is used to reexamine the contribution of group members to the galaxy correlation function. The relationship between the correlation function for groups and those calculated for rich clusters is discussed, and the results for groups are examined as an extension of the relation between correlation function amplitude and richness. The group correlation function indicates that groups and individual galaxies are equivalent tracers of the large-scale matter distribution. The distribution of group centers is equivalent to random sampling of the galaxy distribution. The amplitude of the correlation function for groups is consistent with an extrapolation of the amplitude-richness relation for clusters. The amplitude scaled by the mean intersystem separation is also consistent with results for richer clusters.
Obscuring and Feeding Supermassive Black Holes with Evolving Nuclear Star Clusters
NASA Astrophysics Data System (ADS)
Schartmann, M.; Burkert, A.; Krause, M.; Camenzind, M.; Meisenheimer, K.; Davies, R. I.
2010-05-01
Recently, high-resolution observations made with the help of the near-infrared adaptive optics integral field spectrograph SINFONI at the VLT proved the existence of massive and young nuclear star clusters in the centers of a sample of Seyfert galaxies. With the help of high-resolution hydrodynamical simulations with the pluto code, we follow the evolution of such clusters, especially focusing on mass and energy feedback from young stars. This leads to a filamentary inflow of gas on large scales (tens of parsecs), whereas a turbulent and very dense disk builds up on the parsec scale. Here we concentrate on the long-term evolution of the nuclear disk in NGC 1068 with the help of an effective viscous disk model, using the mass input from the large-scale simulations and accounting for star formation in the disk. This two-stage modeling enables us to connect the tens-of-parsecs scale region (observable with SINFONI) with the parsec-scale environment (MIDI observations). At the current age of the nuclear star cluster, our simulations predict disk sizes of order 0.8 to 0.9 pc, gas masses of order 10^6 M⊙, and mass transfer rates through the inner boundary of order 0.025 M⊙ yr^-1, in good agreement with values derived from observations.
Building to Scale: An Analysis of Web-Based Services in CIC (Big Ten) Libraries.
ERIC Educational Resources Information Center
Dewey, Barbara I.
Advancing library services in large universities requires creative approaches for "building to scale." This is the case for CIC, Committee on Institutional Cooperation (Big Ten), libraries whose home institutions serve thousands of students, faculty, staff, and others. Developing virtual Web-based services is an increasingly viable…
Semihard processes with BLM renormalization scale setting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caporale, Francesco; Ivanov, Dmitry Yu.; Murdaca, Beatrice
We apply the BLM scale setting procedure directly to amplitudes (cross sections) of several semihard processes. It is shown that, due to the presence of β₀-terms in the NLA results for the impact factors, the obtained optimal renormalization scale is not universal, but depends both on the energy and on the process in question. We illustrate this general conclusion considering the following semihard processes: (i) inclusive production of two forward high-p_T jets separated by a large interval in rapidity (Mueller-Navelet jets); (ii) high-energy behavior of the total cross section for highly virtual photons; (iii) forward amplitude of the production of two light vector mesons in the collision of two virtual photons.
NASA Astrophysics Data System (ADS)
Datta, Dipayan; Kossmann, Simone; Neese, Frank
2016-09-01
The domain-based local pair-natural orbital coupled-cluster (DLPNO-CC) theory has recently emerged as an efficient and powerful quantum-chemical method for the calculation of energies of molecules comprised of several hundred atoms. It has been demonstrated that the DLPNO-CC approach attains the accuracy of a standard canonical coupled-cluster calculation to about 99.9% of the basis set correlation energy while realizing linear scaling of the computational cost with respect to system size. This is achieved by combining (a) localized occupied orbitals, (b) large virtual orbital correlation domains spanned by the projected atomic orbitals (PAOs), and (c) compaction of the virtual space through a truncated pair natural orbital (PNO) basis. In this paper, we report on the implementation of an analytic scheme for the calculation of the first derivatives of the DLPNO-CC energy for basis set independent perturbations within the singles and doubles approximation (DLPNO-CCSD) for closed-shell molecules. Perturbation-independent one-particle density matrices have been implemented in order to account for the response of the CC wave function to the external perturbation. Orbital-relaxation effects due to external perturbation are not taken into account in the current implementation. We investigate in detail the dependence of the computed first-order electrical properties (e.g., dipole moment) on the three major truncation parameters used in a DLPNO-CC calculation, namely, the natural orbital occupation number cutoff used for the construction of the PNOs, the weak electron-pair cutoff, and the domain size cutoff. No additional truncation parameter has been introduced for property calculation. We present benchmark calculations on dipole moments for a set of 10 molecules consisting of 20-40 atoms. We demonstrate that 98%-99% accuracy relative to the canonical CCSD results can be consistently achieved in these calculations. 
However, this comes at the price of tightening the threshold for the natural orbital occupation number cutoff by an order of magnitude compared to the DLPNO-CCSD energy calculations.
Cluster Tails for Critical Power-Law Inhomogeneous Random Graphs
NASA Astrophysics Data System (ADS)
van der Hofstad, Remco; Kliem, Sandra; van Leeuwaarden, Johan S. H.
2018-04-01
Recently, the scaling limit of cluster sizes for critical inhomogeneous random graphs of rank-1 type having finite variance but infinite third moment degrees was obtained in Bhamidi et al. (Ann Probab 40:2299-2361, 2012). It was proved that when the degrees obey a power law with exponent τ ∈ (3,4), the sequence of clusters ordered in decreasing size and multiplied through by n^{-(τ-2)/(τ-1)} converges as n → ∞ to a sequence of decreasing non-degenerate random variables. Here, we study the tails of the limit of the rescaled largest cluster, i.e., the probability that the scaling limit of the largest cluster takes a large value u, as a function of u. This extends a related result of Pittel (J Combin Theory Ser B 82(2):237-269, 2001) for the Erdős-Rényi random graph to the setting of rank-1 inhomogeneous random graphs with infinite third moment degrees. We make use of delicate large deviations and weak convergence arguments.
Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.
Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong
2018-01-01
Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). Features of directional antennas and the visual data make WVSNs more complex than the conventional Wireless Sensor Network (WSN). The virtual backbone is a technique capable of constructing clusters; the version associated with the aggregation operation is also referred to as the virtual backbone tree. Most of the existing literature focuses on the efficiency brought by the construction of clusters and generally neglects local-balance problems. To fill this gap, a Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measurement called the energy consumption density is proposed for evaluating the adequacy of results in cluster-based construction problems. Moreover, the directional virtual backbone construction scheme is proposed by considering the local-balanced factor. Furthermore, the associated network coding mechanism is utilized to construct DVBDAS. Finally, both the theoretical analysis of the proposed DVBDAS and simulations are given for evaluating the performance. The experimental results show that the proposed DVBDAS achieves higher performance in terms of both energy preservation and network lifetime extension than the existing methods.
Weak gravitational lensing due to large-scale structure of the universe
NASA Technical Reports Server (NTRS)
Jaroszynski, Michal; Park, Changbom; Paczynski, Bohdan; Gott, J. Richard, III
1990-01-01
The effect of the large-scale structure of the universe on the propagation of light rays is studied. The development of the large-scale density fluctuations in the omega = 1 universe is calculated within the cold dark matter scenario using a smooth particle approximation. The propagation of about 10 to the 6th random light rays between the redshift z = 5 and the observer was followed. It is found that the effect of shear is negligible, and the amplification of single images is dominated by the matter in the beam. The spread of amplifications is very small. Therefore, the filled-beam approximation is very good for studies of strong lensing by galaxies or clusters of galaxies. In the simulation, the column density was averaged over a comoving area of approximately (1/h Mpc)-squared. No case of a strong gravitational lensing was found, i.e., no 'over-focused' image that would suggest that a few images might be present. Therefore, the large-scale structure of the universe as it is presently known does not produce multiple images with gravitational lensing on a scale larger than clusters of galaxies.
Ho, Shirley; Agarwal, Nishant; Myers, Adam D.; ...
2015-05-22
Here, the Sloan Digital Sky Survey has surveyed 14,555 square degrees of the sky, and delivered over a trillion pixels of imaging data. We present the large-scale clustering of 1.6 million quasars between z = 0.5 and z = 2.5 that have been classified from this imaging, representing the highest density of quasars ever studied for clustering measurements. This data set spans ~11,000 square degrees and probes a volume of 80 h⁻³ Gpc³. In principle, such a large volume and medium density of tracers should facilitate high-precision cosmological constraints. We measure the angular clustering of photometrically classified quasars using an optimal quadratic estimator in four redshift slices with an accuracy of ~25% over a bin width of δℓ ~ 10-15 on scales corresponding to matter-radiation equality and larger (ℓ ~ 2-3).
Analysis of large-scale gene expression data.
Sherlock, G
2000-04-01
The advent of cDNA and oligonucleotide microarray technologies has led to a paradigm shift in biological investigation, such that the bottleneck in research is shifting from data generation to data analysis. Hierarchical clustering, divisive clustering, self-organizing maps and k-means clustering have all been recently used to make sense of this mass of data.
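Of the clustering methods listed, k-means is the simplest to sketch. The following minimal NumPy version is illustrative only, not the tools used in the cited expression studies; the function name and initialization choices are ours.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's-algorithm k-means sketch (illustrative only)."""
    rng = np.random.default_rng(seed)
    # Initialize centers on k distinct samples.
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each sample (e.g. one gene's expression profile) to the
        # nearest center by Euclidean distance.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

In expression analysis each row of X would be one gene (or one sample) and k the number of hypothesized co-expression groups.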
Clustering biomolecular complexes by residue contacts similarity.
Rodrigues, João P G L M; Trellet, Mikaël; Schmitz, Christophe; Kastritis, Panagiotis; Karaca, Ezgi; Melquiond, Adrien S J; Bonvin, Alexandre M J J
2012-07-01
Inaccuracies in computational molecular modeling methods are often counterweighed by brute-force generation of a plethora of putative solutions. These are then typically sieved via structural clustering based on similarity measures such as the root mean square deviation (RMSD) of atomic positions. Albeit widely used, these measures suffer from several theoretical and technical limitations (e.g., choice of regions for fitting) that impair their application in multicomponent systems (N > 2), large-scale studies (e.g., interactomes), and other time-critical scenarios. We present here a simple similarity measure for structural clustering based on atomic contacts--the fraction of common contacts--and compare it with the most used similarity measure of the protein docking community--interface backbone RMSD. We show that this method produces very compact clusters in remarkably short time when applied to a collection of binary and multicomponent protein-protein and protein-DNA complexes. Furthermore, it allows easy clustering of similar conformations of multicomponent symmetrical assemblies in which chain permutations can occur. Simple contact-based metrics should be applicable to other structural biology clustering problems, in particular for time-critical or large-scale endeavors. Copyright © 2012 Wiley Periodicals, Inc.
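The fraction-of-common-contacts measure reduces each model to a set of residue-residue contacts and compares two models by set overlap. A minimal sketch follows; the function name is ours, and we assume the directional form (fraction of A's contacts also present in B), so the measure is not symmetric.

```python
# Sketch of a contacts-overlap similarity: each model is a set of
# (residue_i, residue_j) contact pairs; similarity is the fraction of
# model A's contacts that also appear in model B (illustrative only).

def fcc(contacts_a, contacts_b):
    """Fraction of contacts of model A that are also present in model B."""
    if not contacts_a:
        return 0.0
    return len(contacts_a & contacts_b) / len(contacts_a)
```

Because contacts are unordered pairs of residues rather than fitted coordinates, this measure needs no superposition step, which is what makes it fast for multicomponent and permuted-chain assemblies.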
NASA Astrophysics Data System (ADS)
Brasseur, James G.; Juneja, Anurag
1996-11-01
Previous DNS studies indicate that small-scale structure can be directly altered through "distant" dynamical interactions by energetic forcing of the large scales. To remove the possibility of stimulating energy transfer between the large- and small-scale motions in these long-range interactions, we here perturb the large-scale structure without altering its energy content by suddenly altering only the phases of large-scale Fourier modes. Scale-dependent changes in turbulence structure appear as a nonzero difference field between two simulations from identical initial conditions of isotropic decaying turbulence, one perturbed and one unperturbed. We find that the large-scale phase perturbations leave the evolution of the energy spectrum virtually unchanged relative to the unperturbed turbulence. The difference field, on the other hand, is strongly affected by the perturbation. Most importantly, the time scale τ characterizing the change in turbulence structure at spatial scale r shortly after initiating a change in large-scale structure decreases with decreasing turbulence scale r. Thus, structural information is transferred directly from the large- to the smallest-scale motions in the absence of direct energy transfer, a long-range effect which cannot be explained by a linear mechanism such as rapid distortion theory. * Supported by ARO grant DAAL03-92-G-0117
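The phase-only perturbation described above can be illustrated in one dimension: randomize the phases of the low-wavenumber Fourier modes while leaving every mode's amplitude, and hence the energy spectrum, untouched. This is a 1-D stand-in for the paper's 3-D DNS, with names and the k_max cutoff chosen by us.

```python
import numpy as np

def perturb_large_scale_phases(u, k_max=4, seed=0):
    """Randomize phases of modes 1..k_max of a real 1-D field u,
    preserving every mode's amplitude (and thus the energy spectrum)."""
    rng = np.random.default_rng(seed)
    U = np.fft.rfft(u)
    k = np.arange(U.size)
    # Select only the "large-scale" (low-wavenumber) modes; leave the
    # mean (k = 0) and all smaller scales untouched.
    sel = (k >= 1) & (k <= k_max)
    U[sel] = np.abs(U[sel]) * np.exp(1j * rng.uniform(0.0, 2 * np.pi, sel.sum()))
    return np.fft.irfft(U, n=u.size)
```

Checking that the perturbed field has the identical mode amplitudes (and total energy, by Parseval's theorem) mirrors the paper's requirement that only structure, not energy content, is altered.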
Nonlocal and collective relaxation in stellar systems
NASA Technical Reports Server (NTRS)
Weinberg, Martin D.
1993-01-01
The modal response of stellar systems to fluctuations at large scales is presently investigated by means of analytic theory and n-body simulation; the stochastic excitation of these modes is shown to increase the relaxation rate even for a system which is moderately far from instability. The n-body simulations, when designed to suppress relaxation at small scales, clearly show the effects of large-scale fluctuations. It is predicted that large-scale fluctuations will be largest for such marginally bound systems as forming star clusters and associations.
Dark matter, long-range forces, and large-scale structure
NASA Technical Reports Server (NTRS)
Gradwohl, Ben-Ami; Frieman, Joshua A.
1992-01-01
If the dark matter in galaxies and clusters is nonbaryonic, it can interact with additional long-range fields that are invisible to experimental tests of the equivalence principle. We discuss the astrophysical and cosmological implications of a long-range force coupled only to the dark matter and find rather tight constraints on its strength. If the force is repulsive (attractive), the masses of galaxy groups and clusters (and the mean density of the universe inferred from them) have been systematically underestimated (overestimated). We explore the consequent effects on the two-point correlation function, large-scale velocity flows, and microwave background anisotropies, for models with initial scale-invariant adiabatic perturbations and cold dark matter.
Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin
2015-12-01
One major goal of large-scale cancer omics studies is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of high-dimensional cancer multi-omics data. In this study, we proposed a novel low-rank approximation based integrative probabilistic model to quickly find the shared principal subspace across multiple data types: the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On testing datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster with better clustering performance than the existing method. We then applied LRAcluster to large-scale cancer multi-omics data from TCGA. The pan-cancer analysis results show that the cancers of different tissue origins are generally grouped as independent clusters, except squamous-like carcinomas, while the single cancer type analysis suggests that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available via http://bioinfo.au.tsinghua.edu.cn/software/lracluster/ .
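The two-step strategy the abstract describes (reduce to a low-dimensional subspace, then cluster samples there) can be sketched generically. The sketch below substitutes a truncated SVD for LRAcluster's low-rank probabilistic model, and a deterministic k-means for its clustering step; it illustrates the pipeline shape only, not the published method.

```python
import numpy as np

def lowrank_cluster(X, rank, k, iters=50):
    """Illustrative 'reduce then cluster' pipeline (not LRAcluster itself).
    X: samples x features matrix; rank: subspace dimension; k: clusters."""
    # Step 1: low-dimensional subspace via truncated SVD, standing in for
    # the paper's low-rank regularized probabilistic model.
    Xc = X - X.mean(axis=0)
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    Z = U[:, :rank] * s[:rank]  # sample coordinates in the reduced subspace
    # Step 2: k-means in the subspace, seeded deterministically with
    # farthest-point initialization so the sketch is reproducible.
    centers = [Z[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(Z - c, axis=1) for c in centers], axis=0)
        centers.append(Z[d.argmax()])
    centers = np.array(centers)
    labels = np.zeros(len(Z), dtype=int)
    for _ in range(iters):
        labels = np.linalg.norm(Z[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Z[labels == j].mean(axis=0)
    return labels
```

In the multi-omics setting, X would be the concatenated (or jointly modeled) data types per sample, which is exactly where the integrative model matters and this plain SVD sketch falls short.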
Dispersion and Cluster Scales in the Ocean
NASA Astrophysics Data System (ADS)
Kirwan, A. D., Jr.; Chang, H.; Huntley, H.; Carlson, D. F.; Mensa, J. A.; Poje, A. C.; Fox-Kemper, B.
2017-12-01
Ocean flow space scales range from centimeters to thousands of kilometers. Because of their large Reynolds number these flows are considered turbulent. However, because of rotation and stratification constraints they do not conform to classical turbulence scaling theory. Mesoscale and large-scale motions are well described by geostrophic or "2D turbulence" theory; however, extending this theory to submesoscales has proved problematic. One obvious reason is the difficulty in obtaining reliable data over many orders of magnitude of spatial scales in an ocean environment. The goal of this presentation is to provide a preliminary synopsis of two recent experiments that overcame these obstacles. The first experiment, the Grand LAgrangian Deployment (GLAD), was conducted during July 2012 in the eastern half of the Gulf of Mexico. Here approximately 300 GPS-tracked drifters were deployed with the primary goal of determining whether the relative dispersion of an initially densely clustered array was driven by processes acting at local pair-separation scales or by straining imposed by mesoscale motions. The second experiment was a component of the LAgrangian Submesoscale Experiment (LASER) conducted during the winter of 2016. Here thousands of bamboo plates were tracked optically from an aerostat. Together these two deployments provided an unprecedented data set on dispersion and clustering processes from 1 to 10^6 m scales. Calculations of statistics such as two-point separations, structure functions, and scale-dependent relative diffusivities showed an inverse energy cascade as expected for scales above 10 km, and a forward energy cascade at scales below 10 km with a possible energy input at Langmuir circulation scales. We also find evidence from structure function calculations for surface flow convergence at scales less than 10 km that accounts for material clustering at the ocean surface.
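The basic two-point statistic behind such drifter analyses, relative dispersion, is the mean squared separation of drifter pairs as a function of time. A toy version on synthetic trajectories (names and setup ours, not the GLAD/LASER data) looks like:

```python
import numpy as np

def relative_dispersion(tracks):
    """tracks: array (n_drifters, n_times, 2) of positions in meters.
    Returns D2(t) = mean over all pairs of |x_i(t) - x_j(t)|^2."""
    n = tracks.shape[0]
    pair_seps = []
    for i in range(n):
        for j in range(i + 1, n):
            # Squared separation of pair (i, j) at every time step.
            pair_seps.append(np.sum((tracks[i] - tracks[j]) ** 2, axis=1))
    return np.mean(pair_seps, axis=0)
```

Whether D2(t) grows locally (driven by eddies at the current separation scale) or nonlocally (strained by much larger motions) is exactly the question the GLAD deployment was designed to answer.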
NASA Astrophysics Data System (ADS)
Marcus, Kelvin
2014-06-01
The U.S Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve their ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine if automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could potentially increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary in conducting their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load, available RAM and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy and manage multiple experimentation clusters to support their experimentation goals.
The ALICE Software Release Validation cluster
NASA Astrophysics Data System (ADS)
Berzano, D.; Krzewicki, M.
2015-12-01
One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample "golden" dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation Cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any past snapshot of the operating system: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future.
Anomalies in the GRBs' distribution
NASA Astrophysics Data System (ADS)
Bagoly, Zsolt; Horvath, Istvan; Hakkila, Jon; Toth, Viktor
2015-08-01
Gamma-ray bursts (GRBs) are the most luminous objects known: they outshine their host galaxies, making them ideal candidates for probing large-scale structure. Earlier, the angular distribution of the different GRB groups (long, intermediate and short) was studied in detail with different methods, and it was found that the short and intermediate groups deviate from full randomness at different significance levels (e.g. Vavrek, R., et al. 2008). However, these results were based only on angular measurements from the BATSE experiment, without any spatial distance indicator. Currently we have more than 361 GRBs with precisely measured positions, optical afterglows and redshifts, mainly due to observations by the Swift mission. This sample is now large enough that the homogeneity and isotropy of its distribution at large scales can be tested. We have recently (Horvath, I. et al., 2014) identified a large clustering of gamma-ray bursts at redshift z ~ 2 in the general direction of the constellations of Hercules and Corona Borealis. This angular excess cannot be entirely attributed to known selection biases, making its existence due to chance unlikely. The scale on which the clustering occurs is disturbingly large, about 2-3 Gpc: the underlying distribution of matter suggested by this cluster is big enough to question standard assumptions about Universal homogeneity and isotropy.
Smith, Jennifer L; Sivasubramaniam, Selvaraj; Rabiu, Mansur M; Kyari, Fatima; Solomon, Anthony W; Gilbert, Clare
2015-01-01
The distribution of trachoma in Nigeria is spatially heterogeneous, with large-scale trends observed across the country and more local variation within areas. Relative contributions of individual and cluster-level risk factors to the geographic distribution of disease remain largely unknown. The primary aim of this analysis is to assess the relationship between climatic factors and trachomatous trichiasis (TT) and/or corneal opacity (CO) due to trachoma in Nigeria, while accounting for the effects of individual risk factors and spatial correlation. In addition, we explore the relative importance of variation in the risk of trichiasis and/or corneal opacity (TT/CO) at different levels. Data from the 2007 National Blindness and Visual Impairment Survey were used for this analysis, which included a nationally representative sample of adults aged 40 years and above. Complete data were available from 304 clusters selected using a multi-stage stratified cluster-random sampling strategy. All participants (13,543 individuals) were interviewed and examined by an ophthalmologist for the presence or absence of TT and CO. In addition to field-collected data, remotely sensed climatic data were extracted for each cluster and used to fit Bayesian hierarchical logistic models to disease outcome. The risk of TT/CO was associated with factors at both the individual and cluster levels, with approximately 14% of the total variation attributed to the cluster level. Beyond established individual risk factors (age, gender and occupation), there was strong evidence that environmental/climatic factors at the cluster-level (lower precipitation, higher land surface temperature, higher mean annual temperature and rural classification) were also associated with a greater risk of TT/CO. 
This study establishes the importance of large-scale risk factors in the geographical distribution of TT/CO in Nigeria, supporting anecdotal evidence that environmental conditions are associated with increased risk in this context and highlighting their potential use in improving estimates of disease burden at large scales.
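For a random-intercept logistic model of the kind fitted here, the share of variation attributed to the cluster level is conventionally reported as the latent-scale intraclass correlation, ICC = sigma_u^2 / (sigma_u^2 + pi^2/3). A minimal sketch (the cluster-level variance below is a hypothetical value chosen to reproduce the ~14% figure, not a number taken from the study):

```python
import math

def latent_icc(cluster_var):
    """Intraclass correlation for a random-intercept logistic model,
    using the standard logistic residual variance pi^2/3 on the latent scale."""
    return cluster_var / (cluster_var + math.pi ** 2 / 3)

# a hypothetical cluster-level variance of ~0.53 reproduces the ~14% figure
print(round(latent_icc(0.53), 3))  # -> 0.139
```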
Dynamic structural disorder in supported nanoscale catalysts
NASA Astrophysics Data System (ADS)
Rehr, J. J.; Vila, F. D.
2014-04-01
We investigate the origin and physical effects of "dynamic structural disorder" (DSD) in supported nano-scale catalysts. DSD refers to the intrinsic fluctuating, inhomogeneous structure of such nano-scale systems. In contrast to bulk materials, nano-scale systems exhibit substantial fluctuations in structure, charge, temperature, and other quantities, as well as large surface effects. The DSD is driven largely by the stochastic librational motion of the center of mass and fluxional bonding at the nanoparticle surface due to thermal coupling with the substrate. Our approach for calculating and understanding DSD is based on a combination of real-time density functional theory/molecular dynamics simulations, transient coupled-oscillator models, and statistical mechanics. This approach treats thermal and dynamic effects over multiple time-scales, and includes bond-stretching and -bending vibrations, and transient tethering to the substrate at longer ps time-scales. Potential effects on the catalytic properties of these clusters are briefly explored. Model calculations of molecule-cluster interactions and molecular dissociation reaction paths are presented in which the reactant molecules are adsorbed on the surface of dynamically sampled clusters. This model suggests that DSD can affect both the prefactors and distribution of energy barriers in reaction rates, and thus can significantly affect catalytic activity at the nano-scale.
Ahnood, Dana; Souriti, Ahmad; Williams, Gwyn Samuel
2018-06-01
To explore the views of patients with diabetic retinopathy and maculopathy on their acceptance of virtual clinic review in place of face-to-face clinic appointments. A postal survey was mailed to all 813 patients under the care of the diabetic eye clinic at Singleton Hospital with 7 questions, explanatory information, and a stamped, addressed envelope for returning completed questionnaires. Four hundred and ninety-eight questionnaires were returned, indicating that 86.1% of respondents were supportive of the idea of virtual clinics, although only 56.9% were prepared for every visit to be virtual. Of respondents, 6.6% were not happy to attend any virtual clinic. This is by far the largest survey of patients' attitudes regarding attending virtual clinics and confirms that the vast majority are supportive of this mode of health care delivery. Copyright © 2018 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
Laboratory Study of Air Turbulence-Particle Coupling
NASA Astrophysics Data System (ADS)
Petersen, A.; Baker, L.; Coletti, F.
2017-12-01
Inertial particles suspended in a turbulent flow are unable to follow the fluid's rapid velocity fluctuations, leading to high concentrations in regions where fluid strain dominates vorticity. This phenomenon is known as preferential concentration or clustering and is thought to affect natural processes ranging from the collisional growth of raindrops to the formation of planetesimals in proto-planetary nebulas. In the present study, we use a large jet-stirred chamber to generate homogeneous air turbulence into which we drop particles with an aerodynamic response time comparable to the flow time scales. Using laser imaging we find that turbulence can lead to a multi-fold increase of settling velocity compared to still-air conditions. We then employ Voronoi tessellation to examine the particle spatial distribution, finding strong evidence of turbulence-driven particle clustering over a wide range of experimental conditions. We observe individual clusters of a larger size range than seen previously, sometimes beyond the integral length scale of the turbulence. We also investigate cluster topology and find that they (i) exhibit a fractal structure, (ii) have a nearly constant particle concentration over their entire size range, and (iii) are most often vertically oriented. Furthermore, clustered particles tend to fall faster than those outside clusters, and larger clusters fall faster on average than smaller ones. Finally, by simultaneous measurement of particle and air velocity fields, we provide the first experimental evidence of preferential sweeping, a mechanism previously proposed to explain the increase in particle settling velocity found in numerical simulations, and find it especially effective for clustered particles. These results are significant for the micro-scale physics of atmospheric clouds. The large cluster size range has implications for how droplets will influence the local environment through condensation, evaporation, drag and latent heat effects. 
Our results also suggest that large collections of droplets will interact due to differential settling, possibly enhancing raindrop formation.
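The Voronoi-tessellation diagnostic used above can be sketched as follows: when particles cluster, the spread of normalized Voronoi cell areas exceeds that of a uniformly random (Poisson) point set. A minimal 2D illustration with synthetic point sets (scipy assumed available; open boundary cells are simply skipped, a common simplification):

```python
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

def voronoi_area_std(points):
    """Std of Voronoi cell areas, normalized by their mean.
    Unbounded cells on the domain boundary are skipped."""
    vor = Voronoi(points)
    areas = []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if -1 in region or len(region) < 3:
            continue                      # unbounded or degenerate cell
        # Voronoi cells are convex, so the hull area equals the cell area
        areas.append(ConvexHull(vor.vertices[region]).volume)  # 2D: area
    areas = np.asarray(areas)
    return areas.std() / areas.mean()

rng = np.random.default_rng(1)
uniform = rng.uniform(0.0, 1.0, (2000, 2))
centers = rng.uniform(0.0, 1.0, (20, 2))
clustered = centers[rng.integers(0, 20, 2000)] + rng.normal(0.0, 0.01, (2000, 2))

# clustered points show a much broader spread of cell areas than uniform ones
assert voronoi_area_std(clustered) > voronoi_area_std(uniform)
```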
Spatial correlations, clustering and percolation-like transitions in homicide crimes
NASA Astrophysics Data System (ADS)
Alves, L. G. A.; Lenzi, E. K.; Mendes, R. S.; Ribeiro, H. V.
2015-07-01
The spatial dynamics of criminal activities has recently been studied through statistical physics methods; however, models and results have focused on local scales (the city level) and much less is known about these patterns at larger scales, e.g. at a country level. Here we report a characterization of the spatial dynamics of homicide crimes across the Brazilian territory using data from all cities (˜5000) over a period of more than thirty years. Our results show that the spatial correlation function of per capita homicides decays exponentially with the distance between cities and that the characteristic correlation length displays a marked increasing trend in recent years. We also investigate the formation of spatial clusters of cities via a percolation-like analysis, where clustering of cities and a phase-transition-like behavior describing the size of the largest cluster as a function of a homicide threshold are observed. This transition-like behavior presents evolutive features characterized by an increase in the homicide threshold (where the transitions occur) and by a decrease in the transition magnitudes (the length of the jumps in cluster size). We believe that our work sheds new light on the spatial patterns of criminal activities at large scales, which may contribute to better policy decisions and resource allocation, and opens new possibilities for modeling criminal activities by establishing fundamental empirical patterns at large scales.
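The exponential decay reported above suggests estimating the correlation length xi from a linear fit of log C(r) against r. A minimal sketch on synthetic data (the xi = 180 km value and the noise level are invented for illustration):

```python
import numpy as np

def correlation_length(r, c):
    """Estimate xi assuming C(r) = C0 * exp(-r / xi), via a linear
    fit of log C against r (valid where C > 0)."""
    slope, _ = np.polyfit(r, np.log(c), 1)
    return -1.0 / slope

# synthetic correlation function with xi = 180 km plus mild noise
rng = np.random.default_rng(2)
r = np.linspace(10, 500, 50)                      # distance between cities, km
c = np.exp(-r / 180.0) * np.exp(rng.normal(0, 0.02, r.size))
xi = correlation_length(r, c)                     # recovered close to 180 km
```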
NASA Astrophysics Data System (ADS)
Spurzem, R.; Berczik, P.; Zhong, S.; Nitadori, K.; Hamada, T.; Berentzen, I.; Veles, A.
2012-07-01
Astrophysical computer simulations of dense star clusters in galactic nuclei with supermassive black holes are presented, using new cost-efficient supercomputers in China accelerated by graphics processing units (GPUs). We use large high-accuracy direct N-body simulations with a Hermite scheme and block time steps, parallelised across a large number of nodes at the large scale and across many GPU thread processors on each node at the small scale. A sustained performance of more than 350 Tflop/s is reached for a science run using 1600 Fermi C2050 GPUs simultaneously; a detailed performance model is presented, together with studies for the largest GPU clusters in China, with up to Petaflop/s performance on 7000 Fermi GPU cards. In our case study we look at two supermassive black holes with equal and unequal masses embedded in a dense stellar cluster in a galactic nucleus. The hardening processes due to interactions between black holes and stars, effects of rotation in the stellar system and relativistic forces between the black holes are simultaneously taken into account. The simulation stops at the complete relativistic merger of the black holes.
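A minimal shared-timestep sketch of the 4th-order Hermite predictor-corrector scheme named above (the production code uses individual block time steps, relativistic corrections and massive GPU parallelism, all omitted here), checked on an equal-mass circular binary:

```python
import numpy as np

def acc_jerk(x, v, m):
    """Newtonian accelerations and their time derivatives (jerks), G = 1."""
    a, j = np.zeros_like(x), np.zeros_like(x)
    for i in range(len(m)):
        for k in range(len(m)):
            if i == k:
                continue
            r, u = x[k] - x[i], v[k] - v[i]
            r2 = r @ r
            r3 = r2 ** 1.5
            a[i] += m[k] * r / r3
            j[i] += m[k] * (u / r3 - 3.0 * (r @ u) * r / (r2 * r3))
    return a, j

def hermite_step(x, v, m, dt):
    """One shared-timestep 4th-order Hermite predictor-corrector step."""
    a0, j0 = acc_jerk(x, v, m)
    xp = x + v * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6      # predict
    vp = v + a0 * dt + j0 * dt**2 / 2
    a1, j1 = acc_jerk(xp, vp, m)                           # re-evaluate
    v1 = v + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12   # correct
    x1 = x + (v + v1) * dt / 2 + (a0 - a1) * dt**2 / 12
    return x1, v1

def energy(x, v, m):
    ke = 0.5 * (m * (v ** 2).sum(axis=1)).sum()
    pe = sum(-m[i] * m[k] / np.linalg.norm(x[i] - x[k])
             for i in range(len(m)) for k in range(i + 1, len(m)))
    return ke + pe

# equal-mass binary on a circular orbit: separation 1, period 2*pi
m = np.array([0.5, 0.5])
x = np.array([[0.5, 0.0], [-0.5, 0.0]])
v = np.array([[0.0, 0.5], [0.0, -0.5]])
e0 = energy(x, v, m)
for _ in range(1000):
    x, v = hermite_step(x, v, m, 0.01)
```

Over ~1.6 orbits the total energy drifts only at the level of the scheme's truncation error, which is what makes the Hermite scheme attractive for long cluster integrations.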
A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.
Shen, Lili; Guo, Jiming; Wang, Lei
2018-06-06
The network real-time kinematic (RTK) technique can provide centimeter-level real-time positioning solutions and plays a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in supporting large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and supported only a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we propose a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution across different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) from the data set, whereas in the grid approaches these are all predefined. The side effects of the clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
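The published SOSC algorithm is not reproduced here; the sketch below illustrates only the underlying idea of radius-constrained clustering with a 10 km threshold, using a simple greedy "leader" scheme on planar coordinates (a hypothetical simplification of the geodetic problem):

```python
import numpy as np

def leader_cluster(positions, radius_km=10.0):
    """Greedy radius-constrained clustering: each user joins the first
    existing center within radius_km, otherwise founds a new cluster.

    positions: (n, 2) planar coordinates in km.
    Returns the founding centers and a label per user.
    """
    centers, labels = [], []
    for p in positions:
        for k, c in enumerate(centers):
            if np.linalg.norm(p - c) <= radius_km:
                labels.append(k)
                break
        else:
            centers.append(p.copy())       # this user founds a new cluster
            labels.append(len(centers) - 1)
    return np.array(centers), np.array(labels)

rng = np.random.default_rng(3)
users = rng.uniform(0, 100, (500, 2))      # users in a 100 x 100 km region
centers, labels = leader_cluster(users)
# every user lies within 10 km of its cluster's founding point
dists = np.linalg.norm(users - centers[labels], axis=1)
```

Unlike a predefined grid, the number of clusters here adapts to where users actually are, which is the elasticity the abstract attributes to SOSC.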
Evolution of the Contact Area with Normal Load for Rough Surfaces: from Atomic to Macroscopic Scales
NASA Astrophysics Data System (ADS)
Huang, Shiping
2017-11-01
The evolution of the contact area with normal load for rough surfaces has great fundamental and practical importance, ranging from earthquake dynamics to machine wear. This work bridges the gap between the atomic scale and the macroscopic scale for normal contact behavior. The real contact area, which is formed by a large ensemble of discrete contacts (clusters), is shown to be much smaller than the apparent surface area. The distribution of the discrete contact clusters and the interaction between them are key to revealing the mechanism of the contacting solids. To this end, Green's function molecular dynamics (GFMD) is used to study both how the contact cluster evolves from the atomic scale to the macroscopic scale and the interaction between clusters. It is found that the interaction between clusters has a strong effect on their formation. The formation and distribution of the contact clusters is far more complicated than that predicted by the asperity model. Ignoring the interaction between clusters leads to overestimating the contact force. In real contact, the contacting clusters are smaller and more discrete because of the interaction between asperities. Understanding exactly how the contact area evolves with normal load is essential for subsequent research on friction.
Cognitive Model Exploration and Optimization: A New Challenge for Computational Science
2010-03-01
the generation and analysis of computational cognitive models to explain various aspects of cognition. Typically the behavior of these models...computational scale of a workstation, so we have turned to high performance computing (HPC) clusters and volunteer computing for large-scale...computational resources. The majority of applications on the Department of Defense HPC clusters focus on solving partial differential equations (Post
Supra-galactic colour patterns in globular cluster systems
NASA Astrophysics Data System (ADS)
Forte, Juan C.
2017-07-01
An analysis of globular cluster systems associated with galaxies included in the Virgo and Fornax Hubble Space Telescope-Advanced Camera Surveys reveals distinct (g - z) colour modulation patterns. These features appear on composite samples of globular clusters and, most evidently, in galaxies with absolute magnitudes Mg in the range from -20.2 to -19.2. These colour modulations are also detectable in some samples of globular clusters in the central galaxies NGC 1399 and NGC 4486 (and confirmed on data sets obtained with different instruments and photometric systems), as well as in other bright galaxies in these clusters. After discarding field contamination, photometric errors and statistical effects, we conclude that these supra-galactic colour patterns are real and reflect some previously unknown characteristic. These features suggest that the globular cluster formation process was not entirely stochastic but included a fraction of clusters that formed in a rather synchronized fashion over large spatial scales, within a tentative time span of about 1.5 Gyr at redshifts z between 2 and 4. We speculate that the putative mechanism leading to that synchronism may be associated with large-scale feedback effects connected with violent star-forming events and/or with supermassive black holes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasenkamp, Daren; Sim, Alexander; Wehner, Michael
Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of cloud computing systems and has revealed a number of weaknesses in current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure: as long as a single VM is running, the job can make progress, whereas as soon as one MPI node fails the whole analysis job fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.
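The resilience property described above (one failed worker does not abort the rest) can be illustrated with a simple work-queue sketch. This is not the authors' coordination service, just a minimal stand-in using Python's thread pool, with an invented `analyze` task:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(storm_track):
    """Stand-in for one VM running a sequential cyclone-trend analysis."""
    if not storm_track:
        raise ValueError("empty input")   # simulate one failing worker
    return max(storm_track)

tracks = [[1, 5, 3], [], [2, 8], [4]]
results, failures = {}, {}
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(analyze, t): i for i, t in enumerate(tracks)}
    for fut, i in futures.items():
        try:
            results[i] = fut.result()
        except ValueError as exc:
            failures[i] = str(exc)   # a failed task does not abort the rest
```

In an MPI job, by contrast, the failure of one rank typically aborts the whole communicator, which is the contrast the abstract draws.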
Alexander, Nathan; Woetzel, Nils; Meiler, Jens
2011-02-01
Clustering algorithms are used as data analysis tools in a wide variety of applications in Biology. Clustering has become especially important in protein structure prediction and virtual high throughput screening methods. In protein structure prediction, clustering is used to structure the conformational space of thousands of protein models. In virtual high throughput screening, databases with millions of drug-like molecules are organized by structural similarity, e.g. common scaffolds. The tree-like dendrogram structure obtained from hierarchical clustering can provide a qualitative overview of the results, which is important for focusing detailed analysis. However, in practice it is difficult to relate specific components of the dendrogram directly back to the objects of which it is comprised and to display all desired information within the two dimensions of the dendrogram. The current work presents a hierarchical agglomerative clustering method termed bcl::Cluster. bcl::Cluster utilizes the Pymol Molecular Graphics System to graphically depict dendrograms in three dimensions. This allows simultaneous display of relevant biological molecules as well as additional information about the clusters and the members comprising them.
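bcl::Cluster itself renders dendrograms in three dimensions through PyMOL; as a rough stand-in for the agglomerative step it performs, the same computation can be sketched with scipy's hierarchical clustering (the "model features" below are synthetic, invented for illustration):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# hypothetical model features: three tight groups standing in for
# three families of protein models
rng = np.random.default_rng(4)
models = np.vstack([rng.normal(loc, 0.1, (10, 5)) for loc in (0.0, 5.0, 10.0)])

Z = linkage(models, method="average")            # agglomerative clustering
labels = fcluster(Z, t=3, criterion="maxclust")  # cut dendrogram at 3 clusters
```

The linkage matrix `Z` is exactly the tree-like dendrogram structure the abstract describes; cutting it at different heights yields coarser or finer groupings.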
A Phenomenology of Learning Large: The Tutorial Sphere of xMOOC Video Lectures
ERIC Educational Resources Information Center
Adams, Catherine; Yin, Yin; Vargas Madriz, Luis Francisco; Mullen, C. Scott
2014-01-01
The current discourse surrounding Massive Open Online Courses (MOOCs) is powerful. Despite their rapid and widespread deployment, research has yet to confirm or refute some of the bold claims rationalizing the popularity and efficacy of these large-scale virtual learning environments. Also, MOOCs' reputed disruptive, game-changing potential…
Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach
Kudisthalert, Wasu
2018-01-01
Machine learning techniques are becoming popular in virtual screening tasks. One of the powerful machine learning algorithms is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single-layer feed-forward neural network in conjunction with 16 different similarity coefficients as activation functions in the hidden layer. It is known that the performance of the conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machines, random forests, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6. PMID:29652912
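The exact WS-ELM weighting is not reproduced here; the sketch below shows only the core idea of an ELM whose hidden layer computes similarity coefficients (Tanimoto, one of the 16 mentioned) against reference fingerprints, with closed-form least-squares output weights (the toy fingerprints are invented for illustration):

```python
import numpy as np

def tanimoto(A, B):
    """Pairwise Tanimoto similarity between binary fingerprint matrices."""
    inter = A @ B.T
    return inter / (A.sum(1)[:, None] + B.sum(1)[None, :] - inter)

rng = np.random.default_rng(5)
X = np.zeros((60, 32))
y = np.r_[np.ones(30), -np.ones(30)]          # actives vs. decoys
X[:30, :10] = rng.random((30, 10)) < 0.8      # actives share bits 0-9
X[30:, 10:20] = rng.random((30, 10)) < 0.8    # decoys share bits 10-19

anchors = X[::6]                  # hidden "neurons" are reference molecules
H = tanimoto(X, anchors)          # similarity coefficients as activations
beta = np.linalg.pinv(H) @ y      # ELM: closed-form least-squares weights
pred = np.sign(H @ beta)
```

In the paper's CWS-ELM, the reference points would come from k-means or support vector clustering rather than the naive slice used here.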
Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup
Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.
2010-01-01
Background Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon’s Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon’s computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. 
Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651
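The paper's runtime model itself is not reproduced here, but the scheduling idea, ordering jobs by predicted runtime so that long genome comparisons are placed first, can be sketched with the classic longest-processing-time (LPT) heuristic (the runtimes below are invented):

```python
import heapq

def lpt_schedule(runtimes, n_workers):
    """Longest-processing-time-first: sort jobs by predicted runtime
    (descending) and always give the next job to the least-loaded worker.
    Returns the makespan (max worker load) and the job assignment."""
    loads = [(0.0, w) for w in range(n_workers)]
    heapq.heapify(loads)
    assignment = {w: [] for w in range(n_workers)}
    for job, t in sorted(enumerate(runtimes), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(loads)     # least-loaded worker
        assignment[w].append(job)
        heapq.heappush(loads, (load + t, w))
    return max(l for l, _ in loads), assignment

runtimes = [9, 7, 6, 5, 5, 4, 3, 2]   # hypothetical genome-pair runtimes (h)
makespan, plan = lpt_schedule(runtimes, 3)   # makespan 14 on 3 workers
```

Billed cloud time tracks the makespan, so placing long jobs first keeps workers evenly loaded and minimizes idle (but still billed) capacity, the same waste the authors' model targets.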
Non-Gaussian shape discrimination with spectroscopic galaxy surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byun, Joyce; Bean, Rachel, E-mail: byun@astro.cornell.edu, E-mail: rbean@astro.cornell.edu
2015-03-01
We consider how galaxy clustering data, from Mpc to Gpc scales, from upcoming large scale structure surveys such as Euclid and DESI can provide discriminating information about the bispectrum shape arising from a variety of inflationary scenarios. Through exploring in detail the weighting of shape properties in the calculation of the halo bias and halo mass function, we show how they probe a broad range of configurations, beyond those in the squeezed limit, that can help distinguish between shapes with similar large-scale bias behaviors. We assess the impact, on constraints for a diverse set of non-Gaussian shapes, of galaxy clustering information in the mildly non-linear regime, and of surveys that span multiple redshifts and employ different galactic tracers of the dark matter distribution. Fisher forecasts are presented for a Euclid-like spectroscopic survey of Hα-selected emission line galaxies (ELGs), and a DESI-like survey of luminous red galaxies (LRGs) and [O-II] doublet-selected ELGs, in combination with Planck-like CMB temperature and polarization data. While ELG samples provide better probes of shapes that are divergent in the squeezed limit, LRG constraints, centered below z < 1, yield stronger constraints on shapes with scale-independent large-scale halo biases, such as the equilateral template. The ELG and LRG samples provide complementary degeneracy directions for distinguishing between different shapes. For Hα-selected galaxies, we note that recent revisions of the expected Hα luminosity function reduce the halo bias constraints on the local shape, relative to the CMB. For galaxy clustering constraints to be comparable to those from the CMB, additional information about the Gaussian galaxy bias is needed, such as can be determined from the galaxy clustering bispectrum or by probing the halo power spectrum directly through weak lensing.
If the Gaussian galaxy bias is constrained to better than the percent level, then the LSS and CMB data could provide complementary constraints that will enable differentiation of bispectra with distinct theoretical origins but similar large-scale, squeezed-limit properties.
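The complementarity of degeneracy directions described above can be illustrated with toy Fisher matrices: independent surveys combine by summing their matrices, and marginalized 1-sigma errors come from the diagonal of the inverse (the matrices below are invented two-parameter examples, not the paper's forecasts):

```python
import numpy as np

# hypothetical 2-parameter Fisher matrices for two tracer samples whose
# error ellipses are elongated along different degeneracy directions
F_elg = np.array([[4.0, 3.0], [3.0, 2.5]])
F_lrg = np.array([[2.5, -2.0], [-2.0, 4.0]])

def marginalized_sigma(F):
    """Marginalized 1-sigma errors: sqrt of the diagonal of F^-1."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

combined = marginalized_sigma(F_elg + F_lrg)
# combining complementary degeneracies tightens both parameters
assert np.all(combined < marginalized_sigma(F_elg))
assert np.all(combined < marginalized_sigma(F_lrg))
```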
NASA Astrophysics Data System (ADS)
Morikawa, Y.; Murata, K. T.; Watari, S.; Kato, H.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Shimojo, S.
2010-12-01
The main methodologies of Solar-Terrestrial Physics (STP) so far have been theoretical, experimental and observational, and computer-simulation approaches. Recently, "informatics" has been anticipated as a new (fourth) approach to STP studies. Informatics is a methodology for analyzing large-scale data (observational data and computer-simulation data) to obtain new findings using a variety of data-processing techniques. At NICT (National Institute of Information and Communications Technology, Japan) we are now developing a new research environment named "OneSpaceNet". OneSpaceNet is a cloud-computing environment specialized for scientific work, which connects many researchers via a high-speed network (JGN: Japan Gigabit Network). The JGN is a wide-area backbone network operated by NICT; it provides a 10G network and many access points (APs) across Japan. OneSpaceNet also provides rich computing resources for research, such as supercomputers, large-scale data storage, licensed applications, visualization devices (like a tiled display wall: TDW), databases/DBMSs, cluster computers (4-8 nodes) for data processing, and communication devices. A remarkable aspect of the science cloud is that a user simply prepares a terminal (a low-cost PC). Once the PC is connected to JGN2plus, the user can make full use of the rich resources of the science cloud. Using communication devices such as video-conference systems, streaming and reflector servers, and media players, users on OneSpaceNet can carry out research communications as if they belonged to a single laboratory: they are members of a virtual laboratory. The specification of the computing resources on OneSpaceNet is as follows: the data storage we have developed so far totals almost 1 PB, and the number of data files managed on the cloud storage is growing and now exceeds 40,000,000.
What is notable is that the disks forming the large-scale storage are distributed across 5 data centers over Japan, yet the storage system performs as one disk. Three supercomputers are allocated on the cloud: one in Tokyo, one in Osaka, and one in Nagoya. Simulation job data from any of the supercomputers are saved to the cloud data storage (the same directory); it is a kind of virtual computing environment. The tiled display wall has 36 panels acting as one display, with a resolution as large as 18000x4300 pixels. This size is sufficient to preview or analyze large-scale computer-simulation data. It also allows many researchers together to view multiple images (e.g., 100 pictures) on one screen. In our talk we also present a brief report of initial results using OneSpaceNet for global MHD simulations as an example of successful use of our science cloud: (i) ultra-high time-resolution visualization of global MHD simulations on the large-scale storage and parallel-processing system on the cloud, (ii) a database of real-time global MHD simulations and statistical analyses of the data, and (iii) a 3D web service for global MHD simulations.
REVIEWS OF TOPICAL PROBLEMS: Large-scale star formation in galaxies
NASA Astrophysics Data System (ADS)
Efremov, Yurii N.; Chernin, Artur D.
2003-01-01
A brief review is given of the history of modern ideas on the ongoing star-formation process in the gaseous disks of galaxies. Recent studies demonstrate the key role of the interplay between the gas's self-gravity and its turbulent motions. Large-scale supersonic gas flows create structures of enhanced density, which then give rise to the gravitational condensation of gas into stars and star clusters. The formation of star clusters, associations, and complexes is considered, as well as the possibility of isolated star formation. Special emphasis is placed on star formation under the action of ram pressure.
DPM — efficient storage in diverse environments
NASA Astrophysics Data System (ADS)
Hellmich, Martin; Furano, Fabrizio; Smith, David; Brito da Rocha, Ricardo; Álvarez Ayllón, Alejandro; Manzi, Andrea; Keeble, Oliver; Calvet, Ivan; Regala, Miguel Antonio
2014-06-01
Recent developments, including low-power devices, cluster file systems, and cloud storage, represent an explosion in the possibilities for deploying and managing grid storage. In this paper we present how different technologies can be leveraged to build a storage service with differing cost, power, performance, scalability, and reliability profiles, using the popular storage solution Disk Pool Manager (DPM/dmlite) as the enabling technology. The storage manager DPM is designed for these new environments, allowing users to scale up and down as needed, and optimizing their computing centers' energy efficiency and costs. DPM runs on high-performance machines, profiting from multi-core and multi-CPU setups. It supports separating the database from the metadata server (the head node), greatly reducing the head node's hard-disk requirements. Since version 1.8.6, DPM has been released in EPEL and Fedora, simplifying distribution and maintenance, and it supports the ARM architecture besides i386 and x86_64, allowing it to run on the smallest low-power machines such as the Raspberry Pi or the CuBox. This usage is facilitated by the ability to scale horizontally using a main database and a distributed memcached-powered namespace cache. Additionally, DPM supports a variety of storage pools in the backend, most importantly HDFS, S3-enabled storage, and cluster file systems, allowing users to fit their DPM installation exactly to their needs. In this paper, we investigate the power efficiency and total cost of ownership of various DPM configurations. We develop metrics to evaluate the expected performance of a setup both in terms of namespace and disk access, considering the overall cost including equipment, power consumption, and data/storage fees. The setups tested range from the lowest scale, using Raspberry Pis with a single 700 MHz core and a 100 Mbps network connection, through conventional multi-core servers, to typical virtual machine instances in cloud settings.
We evaluate combinations of different name-server setups, for example load-balanced clusters, with different storage setups, ranging from a classic local configuration to private and public clouds.
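The total-cost-of-ownership comparison described in this record can be sketched as a simple cost model. All figures and the cost function below are illustrative assumptions for the kind of metric described, not values from the paper:

```python
# Illustrative total-cost-of-ownership model (all figures assumed):
# equipment purchase + energy over the service period + data/storage fees.
def tco(equipment_eur, watts, hours, eur_per_kwh=0.25, fees_eur=0.0):
    """Equipment plus energy plus service fees over the given period."""
    energy_eur = watts / 1000.0 * hours * eur_per_kwh
    return equipment_eur + energy_eur + fees_eur

three_years = 3 * 365 * 24   # service period in hours

# Hypothetical hardware profiles: a low-power node vs a conventional server.
raspberry_pi = tco(equipment_eur=50.0, watts=3.5, hours=three_years)
server = tco(equipment_eur=3000.0, watts=250.0, hours=three_years)
```

Dividing a benchmark throughput (namespace operations or disk bandwidth) by such a cost gives a performance-per-euro figure on which heterogeneous setups can be compared.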
Impact of spatial variability and sampling design on model performance
NASA Astrophysics Data System (ADS)
Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes
2017-04-01
Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. When measurements are very costly in labour time or money, a choice has to be made between a high sampling resolution at small scales with low spatial coverage of the study area, and a lower small-scale sampling resolution, which results in local data uncertainties but better spatial coverage of the whole area. This dilemma is often faced in the design of field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling purposes, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. To this end, we built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements in a spatially nested sampling design, to estimate the nugget, range, and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: models based on single virtual sampling points obtain, on average, an explained deviance of 0.20 and a correlation coefficient of 0.64.
With increasing numbers of sampling points per field, we averaged the measured abundances within each field to obtain a more representative value of the field average. Doubling the samples per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73). With 50 sampling points per field, the performance criteria were 0.91 and 0.97 for explained deviance and correlation coefficient, respectively. The relationship between the number of samples and the performance criteria can be described by a saturation curve: beyond five samples per field, the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance, and the implications for sampling design, the assessment of model results, and ecological inferences.
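The saturation effect described above can be reproduced in a toy simulation; this is not the authors' model. The field count comes from the abstract, while the abundance distribution and noise level are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

n_fields = 65   # number of fields in the virtual catchment (from the abstract)
true_abundance = rng.lognormal(mean=2.0, sigma=0.8, size=n_fields)
noise_sd = true_abundance * 1.5   # assumed strong small-scale variability

def field_estimates(n_samples):
    """Average n_samples noisy point measurements per field."""
    samples = true_abundance[:, None] + rng.normal(
        0.0, noise_sd[:, None], size=(n_fields, n_samples))
    return samples.mean(axis=1)

def correlation(n_samples):
    """Correlation between the true field values and the averaged estimates."""
    est = field_estimates(n_samples)
    return np.corrcoef(true_abundance, est)[0, 1]

r1, r50 = correlation(1), correlation(50)   # r50 approaches 1 as noise averages out
```

Because the noise standard deviation of the field mean shrinks as 1/sqrt(n), the gain from each additional sample diminishes, producing the saturation curve the abstract describes.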
Large-scale model quality assessment for improving protein tertiary structure prediction.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-06-15
Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models or rank a large number of models well. Here, we develop a novel large-scale model QA method, used in conjunction with model clustering, to rank and select protein structural models. It is unprecedented in applying 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e., averaging). Our experiments demonstrate that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains, and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach, together with model clustering, is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
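Consensus ranking by rank averaging can be sketched as follows. This is a minimal illustration in the spirit of, but not identical to, the MULTICOM pipeline; the QA scores and the number of methods and models are invented:

```python
import numpy as np

# Hypothetical QA scores: rows = QA methods, columns = candidate models.
# Higher score = better model; the values are illustrative only.
scores = np.array([
    [0.61, 0.74, 0.55, 0.70],   # QA method 1
    [0.58, 0.69, 0.60, 0.72],   # QA method 2
    [0.65, 0.71, 0.50, 0.68],   # QA method 3
])

# Rank models within each QA method (0 = best), then average the per-method
# ranks so no single scoring scale dominates the consensus.
ranks = np.argsort(np.argsort(-scores, axis=1), axis=1)
consensus_rank = ranks.mean(axis=0)
best_model = int(np.argmin(consensus_rank))   # model with lowest average rank
```

Averaging ranks rather than raw scores makes the consensus robust to QA methods whose scores live on different scales, which is one reason a large ensemble of weak rankers can outperform any single one.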
Light domain walls, massive neutrinos and the large scale structure of the Universe
NASA Technical Reports Server (NTRS)
Massarotti, Alessandro
1991-01-01
Domain walls generated through a cosmological phase transition are considered, which interact nongravitationally with light neutrinos. At redshift z ≥ 10^4, the network grows rapidly and is virtually decoupled from the matter. As the friction with the matter becomes dominant, a comoving network scale close to the comoving horizon scale at z ≈ 10^4 gets frozen. During the later phases, the walls produce matter wakes of thickness d ≈ 10 h^-1 Mpc that may become seeds for the formation of the large-scale structure observed in the Universe.
Analysis of Helium Segregation on Surfaces of Plasma-Exposed Tungsten
NASA Astrophysics Data System (ADS)
Maroudas, Dimitrios; Hu, Lin; Hammond, Karl; Wirth, Brian
2015-11-01
We report a systematic theoretical and atomic-scale computational study of implanted-helium segregation on surfaces of tungsten, which is under consideration as a plasma-facing component in nuclear fusion reactors. We employ a hierarchy of atomic-scale simulations, including molecular statics to understand the origin of helium surface segregation, targeted molecular-dynamics (MD) simulations of near-surface cluster reactions, and large-scale MD simulations of implanted-helium evolution in plasma-exposed tungsten. We find that small, mobile helium clusters (of 1-7 He atoms) in the near-surface region are attracted to the surface by an elastic interaction force. This thermodynamic driving force induces drift fluxes of these mobile clusters toward the surface, facilitating helium segregation. Moreover, the clusters' drift toward the surface enables cluster reactions, most importantly trap mutation, at rates much higher than in the bulk material. These cluster dynamics have significant effects on the surface morphology, near-surface defect structures, and the amount of helium retained in the material upon plasma exposure.
Clustering fossils in solid inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhshik, Mohammad, E-mail: m.akhshik@ipm.ir
In solid inflation the single-field non-Gaussianity consistency condition is violated. As a result, a long tensor perturbation induces observable clustering fossils in the form of quadrupole anisotropy in the large-scale structure power spectrum. In this work we revisit the bispectrum analysis for the scalar-scalar-scalar and tensor-scalar-scalar bispectra for the general parameter space of solid inflation. We consider the parameter space of the model in which the level of non-Gaussianity generated is consistent with the Planck constraints. Specializing to this allowed range of model parameters, we calculate the quadrupole anisotropy induced by long tensor perturbations on the power spectrum of the scalar perturbations. We argue that the imprints of clustering fossils from primordial gravitational waves on large-scale structure can be detected in future galaxy surveys.
Clustering in the SDSS Redshift Survey
NASA Astrophysics Data System (ADS)
Zehavi, I.; Blanton, M. R.; Frieman, J. A.; Weinberg, D. H.; SDSS Collaboration
2002-05-01
We present measurements of clustering in the Sloan Digital Sky Survey (SDSS) galaxy redshift survey. Our current sample consists of roughly 80,000 galaxies with redshifts in the range 0.02 < z < 0.2, covering about 1200 square degrees. We measure the clustering in redshift space and in real space. The two-dimensional correlation function ξ (rp,π ) shows clear signatures of redshift distortions, both the small-scale ``fingers-of-God'' effect and the large-scale compression. The inferred real-space correlation function is well described by a power law. The SDSS is especially suitable for investigating the dependence of clustering on galaxy properties, due to the wealth of information in the photometric survey. We focus on the dependence of clustering on color and on luminosity.
NASA Astrophysics Data System (ADS)
Jauzac, Mathilde; Harvey, David; Massey, Richard
2018-07-01
We assess how much unused strong lensing information is available in the deep Hubble Space Telescope imaging and Very Large Telescope/Multi Unit Spectroscopic Explorer spectroscopy of the Frontier Field clusters. As a pilot study, we analyse galaxy cluster MACS J0416.1-2403 (z = 0.397, M(R < 200 kpc) = 1.6 × 10^14 M⊙), which has 141 multiple images with spectroscopic redshifts. We find that many additional parameters in a cluster mass model can be constrained, and that adding even small amounts of extra freedom to a model can dramatically improve its figures of merit. We use this information to constrain the distribution of dark matter around cluster member galaxies, simultaneously with the cluster's large-scale mass distribution. We find tentative evidence that some galaxies' dark matter has surprisingly similar ellipticity to their stars (unlike in the field, where it is more spherical), but that its orientation is often misaligned. When non-coincident dark matter and stellar haloes are allowed, the model improves by 35 per cent. This technique may provide a new way to investigate the processes and time-scales on which dark matter is stripped from galaxies as they fall into a massive cluster. Our preliminary conclusions will be made more robust by analysing the remaining five Frontier Field clusters.
NASA Technical Reports Server (NTRS)
Fisher, Scott S.
1986-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use as a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.
The Scale Sizes of Globular Clusters: Tidal Limits, Evolution, and the Outer Halo
NASA Astrophysics Data System (ADS)
Harris, William
2011-10-01
The physical factors that determine the linear sizes of massive star clusters are not well understood. Their scale sizes were long thought to be governed by the tidal field of the parent galaxy, but major questions are now emerging. Globular clusters, for example, have mean sizes nearly independent of location in the halo. Paradoxically, the recently discovered "anomalous extended clusters" in M31 and elsewhere have scale sizes that fit much better with tidal theory, but they are puzzlingly rare. Lastly, the persistent size difference between metal-poor and metal-rich clusters still lacks a quantitative explanation. Many aspects of these observations call for better modelling of dynamical evolution in the outskirts of clusters, and also their conditions of formation including the early rapid mass loss phase of protoclusters. A new set of accurate measurements of scale sizes and structural parameters, for a large and homogeneous set of globular clusters, would represent a major advance in this subject. We propose to carry out a {WFC3+ACS} imaging survey of the globular clusters in the supergiant Virgo elliptical M87 to cover the complete run of the halo. M87 is an optimum target system because of its huge numbers of clusters and HST's ability to resolve the cluster profiles accurately. We will derive cluster effective radii, central concentrations, luminosities, and colors for more than 4000 clusters using PSF-convolved King-model profile fitting. In parallel, we are developing theoretical tools to model the expected distribution of cluster sizes versus galactocentric distance as functions of cluster mass, concentration, and orbital anisotropy.
Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.
Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S
2017-01-05
The magnitude and complexity of the structural and functional data available on nanomaterials requires data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data, and significantly reduce the workload to simulate experimentally relevant virtual samples.
Collective translational and rotational Monte Carlo cluster move for general pairwise interaction
NASA Astrophysics Data System (ADS)
Růžička, Štěpán; Allen, Michael P.
2014-09-01
Virtual-move Monte Carlo is a cluster algorithm originally developed for strongly attractive colloidal, molecular, or atomistic systems, in order both to approximate the collective dynamics and to avoid sampling unphysical kinetic traps. In this paper, we present the algorithm in a form that selects the moving cluster through a wider class of virtual states and that is applicable to general pairwise interactions, including hard-core repulsion. The newly proposed way of selecting the cluster increases the acceptance probability by up to several orders of magnitude, especially for rotational moves. The results have applications in simulations of systems interacting via anisotropic potentials, both to enhance the sampling of phase space and to approximate the dynamics.
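The basic ingredient of any collective move, a rigid cluster displacement accepted by a Metropolis criterion, can be sketched as below. This is a heavily simplified illustration, not the virtual-move algorithm itself (which selects the moving cluster probabilistically through virtual states); the pair potential is an assumed toy form with a hard core and a short-range attraction:

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_energy(r):
    """Toy pairwise interaction: hard-core repulsion plus attractive tail."""
    if r < 1.0:
        return np.inf              # hard core: overlap forbidden
    return -np.exp(-(r - 1.0))     # assumed short-range attraction

def total_energy(pos):
    """Sum pair energies over all particle pairs (O(N^2), fine for a sketch)."""
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            e += pair_energy(np.linalg.norm(pos[i] - pos[j]))
    return e

def cluster_translate(pos, cluster, delta, beta=1.0):
    """Rigidly translate `cluster` (index list) by `delta`; Metropolis accept."""
    trial = pos.copy()
    trial[cluster] += delta        # internal distances are preserved
    dE = total_energy(trial) - total_energy(pos)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        return trial, True
    return pos, False
```

Because the whole cluster moves rigidly, only its interaction with the rest of the system contributes to dE; a hard-core overlap gives dE = inf and the move is always rejected, which is why this style of move remains valid for potentials with infinite repulsion.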
Large-scale structure from cosmic-string loops in a baryon-dominated universe
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Scherrer, Robert J.
1988-01-01
The results are presented of a numerical simulation of the formation of large-scale structure in a universe with Ω₀ = 0.2 and h = 0.5, dominated by baryons, in which cosmic strings provide the initial density perturbations. The numerical model yields a power spectrum. Nonlinear evolution confirms that the model can account for 700 km/s bulk flows and a strong cluster-cluster correlation, but it does rather poorly on smaller scales: there is no visual 'filamentary' structure, and the two-point correlation function has too steep a logarithmic slope. The value Gμ = 4 × 10^-6 is significantly lower than previous estimates for the value of Gμ in baryon-dominated cosmic-string models.
What drives the evolution of Luminous Compact Blue Galaxies in Clusters vs. the Field?
NASA Astrophysics Data System (ADS)
Wirth, Gregory
2017-08-01
Present-day galaxy clusters consist chiefly of low-mass dwarf elliptical galaxies, but the progenitors of this dominant population remain unclear. A prime candidate is the class of objects known as Luminous Compact Blue Galaxies (LCBGs), common in intermediate-redshift clusters but virtually extinct today. Recent cosmological simulations suggest that the present-day dwarf galaxies begin as irregular field galaxies, undergo an environmentally driven starburst phase as they enter the cluster, and stop forming stars earlier than their counterparts in the field. This model predicts that cluster dwarfs should have lower stellar mass per unit dynamical mass than their counterparts in the field. We propose a two-pronged archival research program to test this key prediction using the combination of precision photometry from space and high-quality spectroscopy. First, we will combine optical HST/ACS imaging of five z=0.55 clusters (including two HST Frontier Fields) with Spitzer IR imaging and publicly released Keck/DEIMOS spectroscopy to measure stellar-to-dynamical-mass ratios for a large sample of cluster LCBGs. Second, we will exploit a new catalog of LCBGs in the COSMOS field to gather corresponding data for a significant sample of field LCBGs. By comparing mass ratios from these datasets, we will test theoretical predictions and determine the primary physical driver of cluster dwarf-galaxy evolution.
NASA Astrophysics Data System (ADS)
Lyakh, Dmitry I.
2018-03-01
A novel reduced-scaling, general-order coupled-cluster approach is formulated by exploiting hierarchical representations of many-body tensors, combined with the recently suggested formalism of scale-adaptive tensor algebra. Inspired by the hierarchical techniques from the renormalisation group approach, H/H²-matrix algebra and the fast multipole method, the computational scaling reduction in our formalism is achieved via coarsening of quantum many-body interactions at larger interaction scales, thus imposing a hierarchical structure on the many-body tensors of coupled-cluster theory. In our approach, the interaction scale can be defined on any appropriate Euclidean domain (spatial domain, momentum-space domain, energy domain, etc.). We show that the hierarchically resolved many-body tensors can reduce the storage requirements to O(N), where N is the number of simulated quantum particles. Subsequently, we prove that any connected many-body diagram consisting of a finite number of arbitrary-order tensors, e.g. an arbitrary coupled-cluster diagram, can be evaluated in O(N log N) floating-point operations. On top of that, we suggest an additional approximation to further reduce the computational complexity of higher-order coupled-cluster equations, i.e. equations involving higher than double excitations, which would otherwise introduce a large prefactor into the formal O(N log N) scaling.
Analysis of model output and science data in the Virtual Model Repository (VMR).
NASA Astrophysics Data System (ADS)
De Zeeuw, D.; Ridley, A. J.
2014-12-01
Big scientific data includes not only large repositories of data from scientific platforms such as satellites and ground observatories, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through associated metadata, but larger collections of runs can now also be studied, with statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to examine overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. The methodology for this analysis, as well as case studies, will be presented.
Udsen, Flemming Witt; Lilholt, Pernille Heyckendorff; Hejlesen, Ole; Ehlers, Lars Holger
2014-05-21
Several feasibility studies show promising results of telehealthcare on health outcomes and health-related quality of life for patients suffering from chronic obstructive pulmonary disease, and some of these studies show that telehealthcare may even lower healthcare costs. However, the only large-scale trial so far, the Whole System Demonstrator Project in England, has raised doubts about these results, since it concluded that telehealthcare as a supplement to usual care is not likely to be cost-effective compared with usual care alone. The present study, known as 'TeleCare North' in Denmark, seeks to address these doubts by implementing a large-scale, pragmatic, cluster-randomized trial with a nested economic evaluation. The purpose of the study is to assess the effectiveness and cost-effectiveness of a telehealth solution for patients suffering from chronic obstructive pulmonary disease compared to usual practice. General practitioners will be responsible for recruiting eligible participants (1,200 are expected) for the trial in the geographical area of the North Denmark Region. Twenty-six municipality districts in the region define the randomization clusters. The primary outcomes are changes in health-related quality of life and the incremental cost-effectiveness ratio, measured from baseline to follow-up at 12 months. Secondary outcomes are changes in mortality and physiological indicators (diastolic and systolic blood pressure, pulse, oxygen saturation, and weight). There has been a call for large-scale clinical trials with rigorous cost-effectiveness assessments in telehealthcare research. This study is meant to improve the international evidence base for the effectiveness and cost-effectiveness of telehealthcare for patients suffering from chronic obstructive pulmonary disease by implementing a large-scale pragmatic cluster-randomized clinical trial. Clinicaltrials.gov, NCT01984840, November 14, 2013.
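The incremental cost-effectiveness ratio (ICER) that the trial will estimate can be illustrated with a simple calculation. The figures below are invented for illustration and are not trial results:

```python
# Illustrative numbers only (not trial data): costs in euros, effects in QALYs.
cost_telehealth, cost_usual = 12_000.0, 10_500.0
qaly_telehealth, qaly_usual = 0.71, 0.66

# ICER = (difference in mean cost) / (difference in mean effect).
icer = (cost_telehealth - cost_usual) / (qaly_telehealth - qaly_usual)

# Decision rule against an assumed willingness-to-pay threshold per QALY.
threshold = 35_000.0
cost_effective = icer <= threshold
```

In the trial itself the cost and effect differences come from cluster-level means, so uncertainty in the ICER is typically characterised by bootstrapping over clusters rather than a single point estimate.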
pycola: N-body COLA method code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias
2015-09-01
pycola is a multithreaded Python/Cython N-body code implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small scales for computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating the large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using Lagrangian Perturbation Theory (LPT), while letting the N-body code solve for the small scales without requiring it to capture exactly the internal dynamics of halos.
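The LPT building block that COLA relies on can be illustrated with a one-dimensional Zel'dovich approximation, where particles move along precomputed displacement fields scaled by the linear growth factor. The grid size, box size, and perturbation amplitude below are assumed for illustration, and this is not pycola's API:

```python
import numpy as np

n = 64
L = 100.0                                    # box size (Mpc/h, assumed)
q = np.linspace(0.0, L, n, endpoint=False)   # Lagrangian (initial) grid

# Single-mode initial density perturbation delta(q) = A cos(k q).
k = 2 * np.pi / L
A = 0.01
delta = A * np.cos(k * q)

# 1D displacement field from d(psi)/dq = -delta(q), so for this mode
# psi(q) = -(A / k) sin(k q).
psi = -(A / k) * np.sin(k * q)

def positions(D):
    """Eulerian positions for linear growth factor D (Zel'dovich step)."""
    return (q + D * psi) % L
```

COLA evolves the residual displacement relative to this LPT trajectory with an N-body integrator, so few, large timesteps suffice for accurate large-scale clustering.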
A large scale virtual screen of DprE1.
Wilsey, Claire; Gurka, Jessica; Toth, David; Franco, Jimmy
2013-12-01
Tuberculosis continues to plague the world, with the World Health Organization estimating that about one third of the world's population is infected. Due to the emergence of MDR and XDR strains of TB, the need for novel therapeutics has become increasingly urgent. Herein we report the results of a virtual screen of 4.1 million compounds against a promising drug target, DprE1. The virtual compounds were obtained from the ZINC docking database and screened using the molecular docking program AutoDock Vina. The computational hits have led to the identification of several promising lead compounds. Copyright © 2013 Elsevier Ltd. All rights reserved.
Cluster richness-mass calibration with cosmic microwave background lensing
NASA Astrophysics Data System (ADS)
Geach, James E.; Peacock, John A.
2017-11-01
Identifying galaxy clusters through overdensities of galaxies in photometric surveys is the oldest [1,2] and arguably the most economical and mass-sensitive detection method [3,4], compared with X-ray [5-7] and Sunyaev-Zel'dovich effect [8] surveys that detect the hot intracluster medium. However, a perennial problem has been the mapping of optical 'richness' measurements onto total cluster mass [3,9-12]. Emitted at a conformal distance of 14 gigaparsecs, the cosmic microwave background acts as a backlight to all intervening mass in the Universe, and therefore has been gravitationally lensed [13-15]. Experiments such as the Atacama Cosmology Telescope [16], the South Pole Telescope [17-19] and the Planck satellite [20] have now detected gravitational lensing of the cosmic microwave background and produced large-area maps of the foreground deflecting structures. Here we present a calibration of cluster optical richness at the 10% level by measuring the average cosmic microwave background lensing measured by Planck towards the positions of large numbers of optically selected clusters, detecting the deflection of photons by structures of total mass of order 10^14 M⊙. Although mainly aimed at the study of larger-scale structures, the Planck estimate of the cosmic microwave background lensing field can be used to recover a nearly unbiased lensing signal for stacked clusters on arcminute scales [15,21]. This approach offers a clean measure of total cluster masses over most of cosmic history, largely independent of baryon physics.
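The stacking step that recovers a per-richness-bin lensing signal can be sketched as averaging noisy map cutouts centred on known cluster positions; the map values, noise level, and toy signal below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

half = 8
signal = np.zeros((2 * half, 2 * half))
signal[half, half] = 1.0          # toy "lensing" peak at the cutout centre

n_clusters = 200
noise_sd = 5.0                    # per-pixel noise far above the signal
stack = np.zeros_like(signal)
for _ in range(n_clusters):
    # Each cutout carries the same faint signal plus independent noise.
    stack += signal + rng.normal(0.0, noise_sd, size=signal.shape)
stack /= n_clusters
```

Averaging N cutouts suppresses the uncorrelated noise by a factor of sqrt(N) while the common signal is preserved, which is how a signal invisible in any single cluster's cutout becomes measurable for the ensemble.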
A unified framework for building high performance DVEs
NASA Astrophysics Data System (ADS)
Lei, Kaibin; Ma, Zhixia; Xiong, Hua
2011-10-01
A unified framework for integrating PC-cluster-based parallel rendering with distributed virtual environments (DVEs) is presented in this paper. While various scene graphs have been proposed for DVEs, it is difficult to enable collaboration between different scene graphs. This paper proposes a technique that gives non-distributed scene graphs the capability of object and event distribution. With the increase in graphics data, DVEs require more rendering power, but general scene graphs are inefficient at parallel rendering. The paper therefore also proposes a technique to connect a DVE with a PC-cluster-based parallel rendering environment. A distributed multi-player video game is developed to show the interaction of different scene graphs and the parallel rendering performance on a large tiled display wall.
Clusters of Galaxies at High Redshift
NASA Astrophysics Data System (ADS)
Fort, Bernard
For a long time, the small number of clusters at z > 0.3 in the Abell survey catalogue and simulations of the standard CDM formation of large scale structures provided a paradigm where clusters were considered as young merging structures. At earlier times, loose concentrations of galaxy clumps were mostly anticipated. Recent observations broke the taboo. Progressively we became convinced that compact and massive clusters at z = 1 or possibly beyond exist and should be searched for.
NASA Astrophysics Data System (ADS)
Fischer, P.
1997-12-01
Weak distortions of background galaxies are rapidly emerging as a powerful tool for the measurement of galaxy cluster mass distributions. Lensing-based studies have the advantage of being direct measurements of mass and are not model-dependent as are other techniques (X-ray, radial velocities). To date, studies have been limited by CCD field size, meaning that full coverage of clusters out to their virial radii and beyond has not been possible. Probing this large-radius region is essential for testing models of large scale structure formation. New wide field CCD mosaics, for the first time, allow mass measurements out to very large radius. We have obtained images for a sample of clusters with the ``Big Throughput Camera'' (BTC) on the CTIO 4m. This camera comprises four thinned SITe 2048 × 2048 CCDs, each 15 arcmin on a side, for a total area of one quarter of a square degree. We have developed an automated reduction pipeline which: 1) corrects for spatial distortions, 2) corrects for PSF anisotropy, 3) determines relative scaling and background levels, and 4) combines multiple exposures. In this poster we will present some preliminary results of our cluster lensing study. This will include radial mass and light profiles and 2-d mass and galaxy density maps.
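Steps 3 and 4 of such a pipeline (relative scaling, background levels, and exposure combination) can be sketched as follows. This is a minimal illustration, not the BTC pipeline's actual code: the median-ratio scale estimate and median stacking are assumed stand-ins for whatever robust estimators the real reduction uses.

```python
import numpy as np

def relative_scale(ref, img):
    # Step 3 (sketch): estimate the multiplicative scaling between two
    # background-subtracted exposures from the median pixel ratio.
    return np.median(ref / img)

def combine_exposures(exposures, backgrounds):
    # Steps 3-4 (sketch): subtract each exposure's background level,
    # scale to the first exposure, then median-combine the stack
    # (the median rejects cosmic rays and other outliers).
    ref = exposures[0] - backgrounds[0]
    stack = [ref]
    for img, bkg in zip(exposures[1:], backgrounds[1:]):
        sub = img - bkg
        stack.append(sub * relative_scale(ref, sub))
    return np.median(stack, axis=0)

# Toy demo: three exposures of the same field with different gains
# and sky backgrounds recover the underlying image.
rng = np.random.default_rng(0)
truth = rng.uniform(100, 200, size=(8, 8))
gains_and_skies = [(1.0, 10.0), (0.5, 25.0), (2.0, 5.0)]
exposures = [truth * g + sky for g, sky in gains_and_skies]
combined = combine_exposures(exposures, [sky for _, sky in gains_and_skies])
```

In a real pipeline the distortion and PSF-anisotropy corrections (steps 1 and 2) would run on each frame before this stacking stage.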
THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS
Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel
2010-01-01
Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618
Zibrek, Katja; Kokkinara, Elena; Mcdonnell, Rachel
2018-04-01
Virtual characters that appear almost photo-realistic have been shown to induce negative responses from viewers in traditional media, such as film and video games. This effect, described as the uncanny valley, is the reason why realism is often avoided when the aim is to create an appealing virtual character. In Virtual Reality, there have been few attempts to investigate this phenomenon and the implications of rendering virtual characters with high levels of realism on user enjoyment. In this paper, we conducted a large-scale experiment on over one thousand members of the public in order to gather information on how virtual characters are perceived in interactive virtual reality games. We were particularly interested in whether different render styles (realistic, cartoon, etc.) would directly influence appeal, or if a character's personality was the most important indicator of appeal. We used a number of perceptual metrics such as subjective ratings, proximity, and attribution bias in order to test our hypothesis. Our main result shows that affinity towards virtual characters is a complex interaction between the character's appearance and personality, and that realism is in fact a positive choice for virtual characters in virtual reality.
Quantitative properties of clustering within modern microscopic nuclear models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volya, A.; Tchuvil’sky, Yu. M., E-mail: tchuvl@nucl-th.sinp.msu.ru
2016-09-15
A method for studying cluster spectroscopic properties of nuclear fragmentation, such as spectroscopic amplitudes, cluster form factors, and spectroscopic factors, is developed on the basis of modern precision nuclear models that take into account the mixing of large-scale shell-model configurations. Alpha-cluster channels are considered as an example. A mathematical proof of the need for taking into account the channel-wave-function renormalization generated by exchange terms of the antisymmetrization operator (Fliessbach effect) is given. Examples where this effect is confirmed by a high quality of the description of experimental data are presented. By and large, the method in question extends substantially the possibilities for studying clustering phenomena in nuclei and for improving the quality of their description.
Virtual Exploration of Earth's Evolution
NASA Astrophysics Data System (ADS)
Anbar, A. D.; Bruce, G.; Semken, S. C.; Summons, R. E.; Buxner, S.; Horodyskyj, L.; Kotrc, B.; Swann, J.; Klug Boonstra, S. L.; Oliver, C.
2014-12-01
Traditional introductory STEM courses often reinforce misconceptions because the large scale of many classes forces a structured, lecture-centric model of teaching that emphasizes delivery of facts rather than exploration, inquiry, and scientific reasoning. This problem is especially acute in teaching about the co-evolution of Earth and life, where classroom learning and textbook teaching are far removed from the immersive and affective aspects of field-based science, and where the challenges of taking large numbers of students into the field make it difficult to expose them to the complex context of the geologic record. We are exploring the potential of digital technologies and online delivery to address this challenge, using immersive and engaging virtual environments that are more like games than like lectures, grounded in active learning, and deliverable at scale via the internet. The goal is to invert the traditional lecture-centric paradigm by placing lectures at the periphery and inquiry-driven, integrative virtual investigations at the center, and to do so at scale. To this end, we are applying a technology platform we devised, supported by NASA and the NSF, that integrates a variety of digital media in a format that we call an immersive virtual field trip (iVFT). In iVFTs, students engage directly with virtual representations of real field sites, with which they interact non-linearly at a variety of scales via game-like exploration while guided by an adaptive tutoring system. This platform has already been used to develop pilot iVFTs useful in teaching anthropology, archeology, ecology, and geoscience. With support from the Howard Hughes Medical Institute, we are now developing and evaluating a coherent suite of ~ 12 iVFTs that span the sweep of life's history on Earth, from the 3.8 Ga metasediments of West Greenland to ancient hominid sites in East Africa.
These iVFTs will teach fundamental principles of geology and practices of scientific inquiry, and expose students to the evidence from which evolutionary and paleoenvironmental inferences are derived. In addition to making these iVFTs available to the geoscience community for EPO, we will evaluate the comparative effectiveness of iVFTs and traditional lecture and lab approaches to achieving geoscience learning objectives.
The Impact of Non-Thermal Processes in the Intracluster Medium on Cosmological Cluster Observables
NASA Astrophysics Data System (ADS)
Battaglia, Nicholas Ambrose
In this thesis we describe the generation and analysis of hydrodynamical simulations of galaxy clusters and their intracluster medium (ICM), using large cosmological boxes to generate large samples, in conjunction with individual cluster computations. The main focus is the exploration of the non-thermal processes in the ICM and the effect they have on the interpretation of observations used for cosmological constraints. We provide an introduction to the cosmological structure formation framework for our computations and an overview of the numerical simulations and observations of galaxy clusters. We explore the cluster magnetic field observables through radio relics, extended entities in the ICM characterized by their diffuse radio emission. We show that statistical quantities such as radio relic luminosity functions and rotation measure power spectra are sensitive to magnetic field models. The spectral index of the radio relic emission provides information on structure formation shocks, e.g., on their Mach number. We develop a coarse-grained stochastic model of active galactic nucleus (AGN) feedback in clusters and show the impact of such inhomogeneous feedback on the thermal pressure profile. We explore variations in the pressure profile as a function of cluster mass, redshift, and radius and provide a constrained fitting function for this profile. We measure the degree of the non-thermal pressure in the gas from internal cluster bulk motions and show it has an impact on the slope and scatter of the Sunyaev-Zel'dovich (SZ) scaling relation. We also find that the gross shape of the ICM, as characterized by scaled moment of inertia tensors, affects the SZ scaling relation. We demonstrate that the shape and the amplitude of the SZ angular power spectrum is sensitive to AGN feedback, and this affects the cosmological parameters determined from high resolution ACT and SPT cosmic microwave background data.
We compare analytic, semi-analytic, and simulation-based methods for calculating the SZ power spectrum, and characterize their differences. All the methods must rely, one way or another, on high resolution large-scale hydrodynamical simulations with varying assumptions for modelling the gas of the sort presented here. We show how our results can be used to interpret the latest ACT and SPT power spectrum results. We provide an outlook for the future, describing follow-up work we are undertaking to further advance the theory of cluster science.
Boehm, Alexandria B
2002-10-15
In this study, we extend the established scaling theory for cluster size distributions generated during unsteady coagulation to number-flux distributions that arise during steady-state coagulation and settling in an unmixed water mass. The scaling theory predicts self-similar number-flux distributions and power-law decay of total number flux with depth. The shape of the number-flux distributions and the power-law exponent describing the decay of the total number flux are shown to depend on the homogeneity and small i/j limit of the coagulation kernel and the exponent kappa, which describes the variation in settling velocity with cluster volume. Particle field measurements from Lake Zurich, collected by U. Weilenmann and co-workers (Limnol. Oceanogr.34, 1 (1989)), are used to illustrate how the scaling predictions can be applied to a natural system. This effort indicates that within the mid-depth region of Lake Zurich, clusters of the same size preferentially interact and large clusters react with one another more quickly than small ones, indicative of clusters coagulating in a reaction-limited regime.
NASA Astrophysics Data System (ADS)
Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank
2016-01-01
Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. 
The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention paid to networked control, system decomposition and distributed models show significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by the affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. When the system decomposition is finished, the online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity verified.
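The variable-partitioning step can be illustrated with a minimal NumPy implementation of affinity propagation (the Frey-Dueck message-passing updates). This is a sketch, not the authors' code: the damping factor, iteration count, and median preference are conventional defaults, and the canonical-correlation input-selection step is omitted.

```python
import numpy as np

def affinity_propagation(S, damping=0.7, iters=200):
    # Minimal affinity-propagation sketch. S is a similarity matrix
    # whose diagonal S[k, k] holds the exemplar preferences.
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    for _ in range(iters):
        # r(i,k) <- s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) <- min(0, r(k,k) + sum_{i' != i,k} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = np.diag(Anew).copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew
    exemplars = np.flatnonzero(np.diag(A + R) > 0)
    if exemplars.size == 0:
        exemplars = np.array([int(np.argmax(np.diag(A + R)))])
    labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
    labels[exemplars] = exemplars  # exemplars label themselves
    return labels

# Toy demo: two well-separated groups of "controlled variables",
# described here by 2-D feature vectors for illustration only.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (5, 2)), rng.normal(8.0, 0.5, (5, 2))])
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
med = np.median(S[~np.eye(len(X), dtype=bool)])
np.fill_diagonal(S, med)  # median preference, the usual default
labels = affinity_propagation(S)
```

A notable property, relevant for process decomposition, is that the number of subsystems is not fixed in advance; it emerges from the preferences on the diagonal of S.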
Virtual workstation - A multimodal, stereoscopic display environment
NASA Astrophysics Data System (ADS)
Fisher, S. S.; McGreevy, M.; Humphries, J.; Robinett, W.
1987-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use in a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Nicholas; Graham, Alister W.
2013-02-15
We investigate whether or not nuclear star clusters and supermassive black holes (SMBHs) follow a common set of mass scaling relations with their host galaxy's properties, and hence can be considered to form a single class of central massive object (CMO). We have compiled a large sample of galaxies with measured nuclear star cluster masses and host galaxy properties from the literature and fit log-linear scaling relations. We find that nuclear star cluster mass, M_NC, correlates most tightly with the host galaxy's velocity dispersion: log M_NC = (2.11 ± 0.31) log(σ/54) + (6.63 ± 0.09), but has a slope dramatically shallower than the relation defined by SMBHs. We find that the nuclear star cluster mass relations involving host galaxy (and spheroid) luminosity and stellar and dynamical mass intercept with, but are in general shallower than, the corresponding black hole scaling relations. In particular, M_NC ∝ M_Gal,dyn^(0.55 ± 0.15); the nuclear cluster mass is not a constant fraction of its host galaxy or spheroid mass. We conclude that nuclear stellar clusters and SMBHs do not form a single family of CMOs.
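The quoted relation can be evaluated directly. The helper below is a minimal sketch using only the central fitted values (the quoted uncertainties are ignored) and assumes σ is given in km/s, as in the relation's 54 km/s pivot.

```python
import math

def log_nc_mass(sigma_kms):
    # Central values of the fitted relation:
    # log10(M_NC / Msun) = 2.11 * log10(sigma / 54 km/s) + 6.63
    return 2.11 * math.log10(sigma_kms / 54.0) + 6.63

# At the pivot dispersion of 54 km/s the relation gives
# log10(M_NC) = 6.63, i.e. a nuclear cluster of a few million Msun.
m_at_pivot = 10 ** log_nc_mass(54.0)
```

For comparison, the SMBH M-σ relation has a slope of roughly 4-5, which is the sense in which the nuclear-cluster slope of 2.11 is "dramatically shallower".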
Quantitative analysis of voids in percolating structures in two-dimensional N-body simulations
NASA Technical Reports Server (NTRS)
Harrington, Patrick M.; Melott, Adrian L.; Shandarin, Sergei F.
1993-01-01
We present in this paper a quantitative method for defining void size in large-scale structure based on percolation threshold density. Beginning with two-dimensional gravitational clustering simulations smoothed to the threshold of nonlinearity, we perform percolation analysis to determine the large scale structure. The resulting objective definition of voids has a natural scaling property, is topologically interesting, and can be applied immediately to redshift surveys.
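The connected-region bookkeeping behind such a percolation analysis can be sketched with a simple flood fill over a density grid. The grid, threshold, and 4-connectivity below are illustrative assumptions, not the paper's actual smoothing or connectivity choices.

```python
import numpy as np
from collections import deque

def largest_underdense_region(density, threshold):
    # Label 4-connected regions of cells below the density threshold
    # via breadth-first flood fill; return the largest region's size.
    # Voids are identified with underdense regions at the percolation
    # threshold, where one region first spans the volume.
    below = density < threshold
    seen = np.zeros_like(below, dtype=bool)
    best = 0
    n, m = below.shape
    for i in range(n):
        for j in range(m):
            if below[i, j] and not seen[i, j]:
                size, queue = 0, deque([(i, j)])
                seen[i, j] = True
                while queue:
                    x, y = queue.popleft()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if 0 <= u < n and 0 <= v < m and below[u, v] and not seen[u, v]:
                            seen[u, v] = True
                            queue.append((u, v))
                best = max(best, size)
    return best

# Toy density field: the zeros form one connected underdense region.
density = np.array([[1, 1, 1, 1],
                    [0, 0, 0, 1],
                    [1, 1, 0, 1],
                    [1, 1, 0, 1]], dtype=float)
void_size = largest_underdense_region(density, 0.5)
```

Sweeping the threshold and watching when the largest underdense region first percolates across the box is what gives the void definition its objective, scale-free character.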
Venkatesan, Santhosh K.; Dubey, Vikash Kumar
2012-01-01
Structure-based virtual screening of NCI Diversity Set II compounds was performed to identify novel inhibitor scaffolds of trypanothione reductase (TR) from Leishmania infantum. The top 50 ranked hits were clustered using the AuPoSOM tool. The majority of the top-ranked compounds were tricyclic. Clustering of hits yielded four major clusters, each comprising a varying number of subclusters differing in their mode of binding and orientation in the active site. Moreover, for the first time, we report selected alkaloids and dibenzothiazepines as inhibitors of Leishmania infantum TR. The mode of binding observed among the clusters also suggests probable in vitro inhibition kinetics and aids in defining key interactions which might contribute to the inhibition of enzymatic reduction of T[S]2. The method provides scope for automation and integration into the virtual screening process employing docking software, for clustering small molecule inhibitors based upon protein-ligand interactions. PMID:22550471
Sala, Esther; Guasch, Laura; Iwaszkiewicz, Justyna; Mulero, Miquel; Salvadó, Maria-Josepa; Pinent, Montserrat; Zoete, Vincent; Grosdidier, Aurélien; Garcia-Vallvé, Santiago; Michielin, Olivier; Pujadas, Gerard
2011-01-01
Background: Their large scaffold diversity and properties, such as structural complexity and drug similarity, form the basis of claims that natural products are ideal starting points for drug design and development. Consequently, there has been great interest in determining whether such molecules show biological activity toward protein targets of pharmacological relevance. One target of particular interest is hIKK-2, a serine-threonine protein kinase belonging to the IKK complex that is the primary component responsible for activating NF-κB in response to various inflammatory stimuli. Indeed, this has led to the development of synthetic ATP-competitive inhibitors for hIKK-2. Therefore, the main goals of this study were (a) to use virtual screening to identify potential hIKK-2 inhibitors of natural origin that compete with ATP and (b) to evaluate the reliability of our virtual-screening protocol by experimentally testing the in vitro activity of selected natural-product hits. Methodology/Principal Findings: We thus predicted that 1,061 out of the 89,425 natural products present in the studied database would inhibit hIKK-2 with good ADMET properties. Notably, when these 1,061 molecules were merged with the 98 synthetic hIKK-2 inhibitors used in this study and the resulting set was classified into ten clusters according to chemical similarity, there were three clusters that contained only natural products. Five molecules from these three clusters (for which no anti-inflammatory activity has been previously described) were then selected for in vitro activity testing, in which three out of the five molecules were shown to inhibit hIKK-2. Conclusions/Significance: We demonstrated that our virtual-screening protocol was successful in identifying lead compounds for developing new inhibitors for hIKK-2, a target of great interest in medicinal chemistry.
Additionally, all the tools developed during the current study (i.e., the homology model for the hIKK-2 kinase domain and the pharmacophore) will be made available to interested readers upon request. PMID:21390216
ERIC Educational Resources Information Center
Ghosh, Jaideep; Kshitij, Avinash
2017-01-01
This article introduces a number of methods that can be useful for examining the emergence of large-scale structures in collaboration networks. The study contributes to sociological research by investigating how clusters of research collaborators evolve and sometimes percolate in a collaboration network. Typically, we find that in our networks,…
Nonlinear modulation of the HI power spectrum on ultra-large scales. I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umeh, Obinna; Maartens, Roy; Santos, Mario, E-mail: umeobinna@gmail.com, E-mail: roy.maartens@gmail.com, E-mail: mgrsantos@uwc.ac.za
2016-03-01
Intensity mapping of the neutral hydrogen brightness temperature promises to provide a three-dimensional view of the universe on very large scales. Nonlinear effects are typically thought to alter only the small-scale power, but we show how they may bias the extraction of cosmological information contained in the power spectrum on ultra-large scales. For linear perturbations to remain valid on large scales, we need to renormalize perturbations at higher order. In the case of intensity mapping, the second-order contribution to clustering from weak lensing dominates the nonlinear contribution at high redshift. Renormalization modifies the mean brightness temperature and therefore the evolution bias. It also introduces a term that mimics white noise. These effects may influence forecasting analysis on ultra-large scales.
Rosa, Pedro J; Morais, Diogo; Gamito, Pedro; Oliveira, Jorge; Saraiva, Tomaz
2016-03-01
Immersive virtual reality is thought to be advantageous by leading to higher levels of presence. However, and despite users getting actively involved in immersive three-dimensional virtual environments that incorporate sound and motion, there are individual factors, such as age, video game knowledge, and the predisposition to immersion, that may be associated with the quality of virtual reality experience. Moreover, one particular concern for users engaged in immersive virtual reality environments (VREs) is the possibility of side effects, such as cybersickness. The literature suggests that at least 60% of virtual reality users report having felt symptoms of cybersickness, which reduces the quality of the virtual reality experience. The aim of this study was thus to profile the right user to be involved in a VRE through head-mounted display. To examine which user characteristics are associated with the most effective virtual reality experience (lower cybersickness), a multiple correspondence analysis combined with cluster analysis technique was performed. Results revealed three distinct profiles, showing that the PC gamer profile is more associated with higher levels of virtual reality effectiveness, that is, higher predisposition to be immersed and reduced cybersickness symptoms in the VRE than console gamer and nongamer. These findings can be a useful orientation in clinical practice and future research as they help identify which users are more predisposed to benefit from immersive VREs.
The MUSIC of galaxy clusters - II. X-ray global properties and scaling relations
NASA Astrophysics Data System (ADS)
Biffi, V.; Sembolini, F.; De Petris, M.; Valdarnini, R.; Yepes, G.; Gottlöber, S.
2014-03-01
We present the X-ray properties and scaling relations of a large sample of clusters extracted from the Marenostrum MUltidark SImulations of galaxy Clusters (MUSIC) data set. We focus on a sub-sample of 179 clusters at redshift z ˜ 0.11, with 3.2 × 10^14 h^-1 M⊙ < Mvir < 2 × 10^15 h^-1 M⊙, complete in mass. We employed the X-ray photon simulator PHOX to obtain synthetic Chandra observations and derive observable-like global properties of the intracluster medium (ICM), such as X-ray temperature (TX) and luminosity (LX). TX is found to slightly underestimate the true mass-weighted temperature, although tracing fairly well the cluster total mass. We also study the effects of TX on scaling relations with cluster intrinsic properties: total (M500) and gas (Mg,500) masses; integrated Compton parameter (YSZ) of the Sunyaev-Zel'dovich (SZ) thermal effect; and YX = Mg,500 TX. We confirm that YX is a very good mass proxy, with a scatter on M500-YX and YSZ-YX lower than 5 per cent. The study of scaling relations among X-ray, intrinsic and SZ properties indicates that simulated MUSIC clusters reasonably resemble the self-similar prediction, especially for correlations involving TX. The observational approach also allows for a more direct comparison with real clusters, from which we find deviations mainly due to the physical description of the ICM, affecting TX and, particularly, LX.
HIGH-ENERGY NEUTRINOS FROM SOURCES IN CLUSTERS OF GALAXIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Ke; Olinto, Angela V.
2016-09-01
High-energy cosmic rays can be accelerated in clusters of galaxies, by mega-parsec scale shocks induced by the accretion of gas during the formation of large-scale structures, or by powerful sources harbored in clusters. Once accelerated, the highest energy particles leave the cluster via almost rectilinear trajectories, while lower energy ones can be confined by the cluster magnetic field up to cosmological time and interact with the intracluster gas. Using a realistic model of the baryon distribution and the turbulent magnetic field in clusters, we studied the propagation and hadronic interaction of high-energy protons in the intracluster medium. We report the cumulative cosmic-ray and neutrino spectra generated by galaxy clusters, including embedded sources, and demonstrate that clusters can contribute a significant fraction of the observed IceCube neutrinos above 30 TeV while remaining undetected in high-energy cosmic rays and γ rays for reasonable choices of parameters and source scenarios.
Free Factories: Unified Infrastructure for Data Intensive Web Services
Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.
2010-01-01
We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356
A quasi-static approach to structure formation in black hole universes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durk, Jessie; Clifton, Timothy, E-mail: j.durk@qmul.ac.uk, E-mail: t.clifton@qmul.ac.uk
Motivated by the existence of hierarchies of structure in the Universe, we present four new families of exact initial data for inhomogeneous cosmological models at their maximum of expansion. These data generalise existing black hole lattice models to situations that contain clusters of masses, and hence allow the consequences of cosmological structures to be considered in a well-defined and non-perturbative fashion. The degree of clustering is controlled by a parameter λ, in such a way that for λ ∼ 0 or 1 we have very tightly clustered masses, whilst for λ ∼ 0.5 all masses are separated by cosmological distance scales. We study the consequences of structure formation on the total net mass in each of our clusters, as well as calculating the cosmological consequences of the interaction energies both within and between clusters. The locations of the shared horizons that appear around groups of black holes, when they are brought sufficiently close together, are also identified and studied. We find that clustering can have surprisingly large effects on the scale of the cosmology, with models that contain thousands of black holes sometimes being as little as 30% of the size of comparable Friedmann models with the same total proper mass. This deficit is comparable to what might be expected to occur from neglecting gravitational interaction energies in Friedmann cosmology, and suggests that these quantities may have a significant influence on the properties of the large-scale cosmology.
A roadmap for natural product discovery based on large-scale genomics and metabolomics
USDA-ARS?s Scientific Manuscript database
Actinobacteria encode a wealth of natural product biosynthetic gene clusters, whose systematic study is complicated by numerous repetitive motifs. By combining several metrics we developed a method for global classification of these gene clusters into families (GCFs) and analyzed the biosynthetic ca...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.
A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
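At its core, the MSTC technique iterates the standard k-means (Lloyd's) updates; the parallel implementation distributes the distance computations across MPI ranks and GPUs. The serial sketch below shows only that core loop, with a deliberately simple deterministic initialization (evenly spaced samples) standing in for the seeding strategy, which the abstract does not specify.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Plain Lloyd's algorithm. In MSTC-style codes, the distance matrix
    # below is the part distributed over MPI ranks and offloaded to GPUs
    # via CUDA/OpenACC; the update structure is unchanged.
    # Init from evenly spaced samples (a simplification; production
    # codes typically use k-means++ style seeding).
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assignment step: squared distance from every point to every center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        # Update step: move each center to the mean of its members.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

# Toy demo: two well-separated "ecoregions" in a 3-variable space.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.3, (20, 3)),
               rng.normal(5.0, 0.3, (20, 3))])
labels, centers = kmeans(X, 2)
```

The expensive step is the n × k distance matrix, which is embarrassingly parallel over points; that is what makes the hybrid MPI + accelerator decomposition effective.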
NASA Astrophysics Data System (ADS)
Huchtmeier, W. K.; Richter, O. G.; Materne, J.
1981-09-01
The large-scale structure of the universe is dominated by clustering. Most galaxies seem to be members of pairs, groups, clusters, and superclusters. To that degree we are able to recognize a hierarchical structure of the universe. Our local group of galaxies (LG) is centred on two large spiral galaxies: the Andromeda nebula and our own galaxy. Three smaller galaxies - like M 33 - and at least 23 dwarf galaxies (Kraan-Korteweg and Tammann, 1979, Astronomische Nachrichten, 300, 181) can be found in the environment of these two large galaxies. Neighbouring groups have comparable sizes (about 1 Mpc in extent) and comparable numbers of bright members. Small dwarf galaxies cannot at present be observed at great distances.
Descriptor Fingerprints and Their Application to White Wine Clustering and Discrimination.
NASA Astrophysics Data System (ADS)
Bangov, I. P.; Moskovkina, M.; Stojanov, B. P.
2018-03-01
This study continues the attempt to apply statistical processing to large-scale analytical data. A group of 3898 white wines, each with 11 analytical laboratory benchmarks, was analyzed by a fingerprint similarity search in order to group them into separate clusters. The quality of the wines in each individual cluster was then characterized according to the individual laboratory parameters.
NASA Astrophysics Data System (ADS)
Ebeling, H.; Barrett, E.; Donovan, D.
2004-07-01
We report the detection of a 4 h_{70}^{-1} Mpc long large-scale filament leading into the massive galaxy cluster MACS J0717.5+3745. The extent of this object well beyond the cluster's nominal virial radius (~2.3 Mpc) rules out prior interaction between its constituent galaxies and the cluster and makes it a prime candidate for a genuine filament as opposed to a merger remnant or a double cluster. The structure was discovered as a pronounced overdensity of galaxies selected to have V-R colors close to the cluster red sequence. Extensive spectroscopic follow-up of over 300 of these galaxies in a region covering the filament and the cluster confirms that the entire structure is located at the cluster redshift of z=0.545. Featuring galaxy surface densities of typically 15 Mpc^{-2} down to luminosities of 0.13 L*_V, the most diffuse parts of the filament are comparable in density to the clumps of red galaxies found around A851 in the only similar study carried out to date (Kodama et al.). Our direct detection of an extended large-scale filament funneling matter onto a massive distant cluster provides a superb target for in-depth studies of the evolution of galaxies in environments of greatly varying density and supports the predictions from theoretical models and numerical simulations of structure formation in a hierarchical picture. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The observatory was made possible by the generous financial support of the W. M. Keck Foundation.
Based partly on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (US), the Particle Physics and Astronomy Research Council (UK), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), CNP (Brazil), and CONICET (Argentina).
Large-Scale Structure Studies with the REFLEX Cluster Survey
NASA Astrophysics Data System (ADS)
Schuecker, P.; Bohringer, H.; Guzzo, L.; Collins, C.; Neumann, D. M.; Schindler, S.; Voges, W.
1998-12-01
First preliminary results of the ROSAT ESO Flux-Limited X-Ray (REFLEX) Cluster Survey are described. The survey covers 13,924 square degrees of the southern hemisphere. The present sample consists of about 470 rich clusters (one third non-Abell/ACO clusters) with X-ray fluxes S ≥ 3.0 × 10^{-12} erg s^{-1} cm^{-2} (0.1-2.4 keV) and redshifts z ≤ 0.3. In contrast to other low-redshift surveys, the cumulative flux-number counts have an almost Euclidean slope. Comoving cluster number densities are found to be almost redshift-independent throughout the total survey volume. The X-ray luminosity function is well described by a Schechter function. The power spectrum of the number density fluctuations could be measured on scales up to 400 h^{-1} Mpc. A deeper survey with about 800 galaxy clusters in the same area is in progress.
Collisions in Compact Star Clusters.
NASA Astrophysics Data System (ADS)
Portegies Zwart, S. F.
The high stellar densities in young compact star clusters, such as the star cluster R136 in the 30 Doradus region, may lead to a large number of stellar collisions. Such collisions were recently found to be much more frequent than previous estimates. The number of collisions scales with the number of stars for clusters with the same initial relaxation time. These collisions take place in a few million years. The collision products may finally collapse into massive black holes. The fraction of the total mass in the star cluster which ends up in a single massive object scales with the total mass of the cluster and its relaxation time. This mass fraction is rather constant, within a factor of two or so. Wild extrapolation from the relatively small masses of the studied systems to the cores of galactic nuclei may indicate that the massive black holes in these systems have formed in a similar way.
Neutrino masses, scale-dependent growth, and redshift-space distortions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernández, Oscar F., E-mail: oscarh@physics.mcgill.ca
2017-06-01
Massive neutrinos leave a unique signature in the large scale clustering of matter. We investigate the wavenumber dependence of the growth factor arising from neutrino masses and use a Fisher analysis to determine the aspects of a galaxy survey needed to measure this scale dependence.
Atomic-scale structure and electronic properties of GaN/GaAs superlattices
NASA Astrophysics Data System (ADS)
Goldman, R. S.; Feenstra, R. M.; Briner, B. G.; O'Steen, M. L.; Hauenstein, R. J.
1996-12-01
We have investigated the atomic-scale structure and electronic properties of GaN/GaAs superlattices produced by nitridation of a molecular beam epitaxially grown GaAs surface. Using cross-sectional scanning tunneling microscopy (STM) and spectroscopy, we show that the nitrided layers are laterally inhomogeneous, consisting of groups of atomic-scale defects and larger clusters. Analysis of x-ray diffraction data in terms of the fractional area of clusters (determined by STM) reveals a cluster lattice constant similar to bulk GaN. In addition, tunneling spectroscopy on the defects indicates a conduction band state associated with an acceptor level of N_As in GaAs. Therefore, we identify the clusters and defects as nearly pure GaN and N_As, respectively. Together, the results reveal phase segregation in these arsenide/nitride structures, in agreement with the large miscibility gap predicted for GaAsN.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Yi-Kuan; Gebhardt, Karl; Overzier, Roderik
2014-02-10
To demonstrate the feasibility of studying the epoch of massive galaxy cluster formation in a more systematic manner using current and future galaxy surveys, we report the discovery of a large sample of protocluster candidates in the 1.62 deg^2 COSMOS/UltraVISTA field traced by optical/infrared selected galaxies using photometric redshifts. By comparing properly smoothed three-dimensional galaxy density maps of the observations and a set of matched simulations incorporating the dominant observational effects (galaxy selection and photometric redshift uncertainties), we first confirm that the observed ∼15 comoving Mpc-scale galaxy clustering is consistent with ΛCDM models. Using further the relation between high-z overdensity and the present-day cluster mass calibrated in these matched simulations, we found 36 candidate structures at 1.6 < z < 3.1, showing overdensities consistent with the progenitors of M_{z=0} ∼ 10^{15} M_☉ clusters. Taking into account the significant upward scattering of lower mass structures, the probabilities for the candidates to have at least M_{z=0} ∼ 10^{14} M_☉ are ∼70%. For each structure, about 15%-40% of photometric galaxy candidates are expected to be true protocluster members that will merge into a cluster-scale halo by z = 0. With solely photometric redshifts, we successfully rediscover two spectroscopically confirmed structures in this field, suggesting that our algorithm is robust. This work generates a large sample of uniformly selected protocluster candidates, providing rich targets for spectroscopic follow-up and subsequent studies of cluster formation. Meanwhile, it demonstrates the potential for probing early cluster formation with upcoming redshift surveys such as the Hobby-Eberly Telescope Dark Energy Experiment and the Subaru Prime Focus Spectrograph survey.
Estimation of detection thresholds for redirected walking techniques.
Steinicke, Frank; Bruder, Gerd; Jerald, Jason; Frenz, Harald; Lappe, Markus
2010-01-01
In immersive virtual environments (IVEs), users can control their virtual viewpoint by moving their tracked head and walking through the real world. Usually, movements in the real world are mapped one-to-one to virtual camera motions. With redirection techniques, the virtual camera is manipulated by applying gains to user motion so that the virtual world moves differently than the real world. Thus, users can walk through large-scale IVEs while physically remaining in a reasonably small workspace. In psychophysical experiments with a two-alternative forced-choice task, we have quantified how much humans can unknowingly be redirected on physical paths that are different from the visually perceived paths. We tested 12 subjects in three different experiments: (E1) discrimination between virtual and physical rotations, (E2) discrimination between virtual and physical straightforward movements, and (E3) discrimination of path curvature. In experiment E1, subjects performed rotations with different gains, and then had to choose whether the visually perceived rotation was smaller or greater than the physical rotation. In experiment E2, subjects chose whether the physical walk was shorter or longer than the visually perceived scaled travel distance. In experiment E3, subjects estimated the path curvature when walking a curved path in the real world while the visual display showed a straight path in the virtual world. Our results show that users can be turned physically about 49 percent more or 20 percent less than the perceived virtual rotation, distances can be downscaled by 14 percent and upscaled by 26 percent, and users can be redirected on a circular arc with a radius greater than 22 m while they believe that they are walking straight.
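The detection thresholds reported above translate into intervals of redirection gains that go unnoticed. A small sketch follows; mapping the reported percentages onto multiplicative gain factors is our interpretation, not the authors' code:

```python
# Detection-threshold intervals from the abstract (experiments E1-E3).
# Treating "percent more/less" as multiplicative gains is our reading.
ROTATION_GAIN = (0.80, 1.49)   # turned 20% less .. 49% more than perceived
DISTANCE_GAIN = (0.86, 1.26)   # distances downscaled 14% .. upscaled 26%
MIN_CURVE_RADIUS_M = 22.0      # an arc of radius > 22 m feels straight

def undetectable(gain, bounds):
    """True if a redirection gain lies inside the detection thresholds."""
    lo, hi = bounds
    return lo <= gain <= hi

def virtual_rotation(physical_deg, gain):
    """Displayed virtual rotation for a given physical head rotation."""
    return physical_deg * gain
```

For example, `virtual_rotation(90.0, 1.3)` shows roughly a 117-degree virtual turn for a 90-degree physical one, a gain that the reported thresholds suggest goes unnoticed.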
Matsushima, Kyoji; Sonobe, Noriaki
2018-01-01
Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
Mapping the Heavens: Probing Cosmology with Large Surveys
Frieman, Joshua [Fermilab
2017-12-09
This talk will provide an overview of recent and on-going sky surveys, focusing on their implications for cosmology. I will place particular emphasis on the Sloan Digital Sky Survey, the most ambitious mapping of the Universe yet undertaken, showing a virtual fly-through of the survey that reveals the large-scale structure of the galaxy distribution. Recent measurements of this large-scale structure, in combination with observations of the cosmic microwave background, have provided independent evidence for a Universe dominated by dark matter and dark energy as well as insights into how galaxies and larger-scale structures formed. Future planned surveys will build on these foundations to probe the history of the cosmic expansion--and thereby the dark energy--with greater precision.
Naver: a PC-cluster-based VR system
NASA Astrophysics Data System (ADS)
Park, ChangHoon; Ko, HeeDong; Kim, TaiYun
2003-04-01
In this paper, we present a new framework, NAVER, for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. The goal of NAVER is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of 3D virtual space and external modules. External modules are various input or output devices and applications on remote hosts. From the system's point of view, the personal computers are divided into three servers according to their specific functions: Render Server, Device Server and Control Server. While the Device Server contains external modules requiring event-based communication for the integration, the Control Server contains external modules requiring synchronous communication every frame. The Render Server consists of 5 managers: Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and the integration with external modules on remote servers.
Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis
2015-01-01
The graph-based clustering algorithms we propose improve time efficiency significantly for large-scale data sets, with applications including plume detection in hyper-spectral video data. In the last chapter, we also propose an incremental reseeding...
Exploring Learning through Audience Interaction in Virtual Reality Dome Theaters
NASA Astrophysics Data System (ADS)
Apostolellis, Panagiotis; Daradoumis, Thanasis
Informal learning in public spaces like museums, science centers and planetariums has become increasingly popular in recent years. Recent advancements in large-scale displays have allowed contemporary technology-enhanced museums to be equipped with digital domes, some with real-time capabilities like Virtual Reality systems. Through extensive literature review we have come to the conclusion that little to no research has been carried out on the learning outcomes that the combination of VR and audience interaction can provide in the immersive environments of dome theaters. Thus, we propose that audience collaboration in immersive virtual reality environments presents a promising approach to support effective learning in groups of school-aged children.
SEEDisCs: How Clusters Form and Galaxies Transform in the Cosmic Web
NASA Astrophysics Data System (ADS)
Jablonka, P.
2017-08-01
This presentation introduces a new survey, the Spatial Extended EDisCS Survey (SEEDisCS), which aims at understanding how clusters assemble and the level at which galaxies are preprocessed before falling onto the cluster cores. I focus on the changes in galaxy properties in the large-scale environments of clusters, and how we can obtain constraints on the timescale of star formation quenching. I also discuss new ALMA CO observations, which trace the fate of the cold gas content of galaxies along their infalling paths towards the cluster cores.
Galaxy clusters in simulations of the local Universe: a matter of constraints
NASA Astrophysics Data System (ADS)
Sorce, Jenny G.; Tempel, Elmo
2018-06-01
To study the full formation and evolution history of galaxy clusters and their population, high-resolution simulations of the latter are flourishing. However, comparing observed clusters to simulated ones on a one-to-one basis to refine the models and theories down to the details is non-trivial. The large variety of clusters limits comparisons between observed and numerical clusters. Simulations resembling the local Universe down to the cluster scale make it possible to push this limit. Simulated and observed clusters can be matched on a one-to-one basis for direct comparisons, provided that clusters are well reproduced besides being in the proper large-scale environment. Comparing random and local Universe-like simulations obtained with differently grouped observational catalogues of peculiar velocities, this paper shows that the grouping scheme used to remove non-linear motions in the catalogues that constrain the simulations affects the quality of the numerical clusters. With a less aggressive grouping scheme - galaxies still falling on to clusters are preserved - combined with a bias minimization scheme, the mass of the dark matter haloes, simulacra for five local clusters - Virgo, Centaurus, Coma, Hydra, and Perseus - is increased by 39 per cent, closing the gap with observational mass estimates. Simulacra are found on average in 89 per cent of the simulations, an increase of 5 per cent with respect to the previous grouping scheme. The only exception is Perseus. Since the Perseus-Pisces region is not well covered by the peculiar velocity catalogue used here, its latest release lets us foresee a better simulacrum for Perseus in the near future.
Kraemer, David J.M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.
2016-01-01
Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In two experiments, participants watched videos of routes through four virtual cities and were subsequently tested on their memory for observed landmarks and on their ability to make judgments regarding the relative directions of the different landmarks along the route. In the first experiment, self-report questionnaires measuring visual and verbal cognitive styles were administered to examine correlations between cognitive styles, landmark recognition, and judgments of relative direction. Results demonstrate a tradeoff in which the verbal cognitive style is more beneficial for recognizing individual landmarks than for judging relative directions between them, whereas the visual cognitive style is more beneficial for judging relative directions than for landmark recognition. In a second experiment, we manipulated the use of verbal and visual strategies by varying task instructions given to separate groups of participants. Results confirm that a verbal strategy benefits landmark memory, whereas a visual strategy benefits judgments of relative direction. The manipulation of strategy by altering task instructions appears to trump individual differences in cognitive style. Taken together, we find that processing different details during route encoding, whether due to individual proclivities (Experiment 1) or task instructions (Experiment 2), results in benefits for different components of navigation relevant information. These findings also highlight the value of considering multiple sources of individual differences as part of spatial cognition investigations. PMID:27668486
Ecological Consistency of SSU rRNA-Based Operational Taxonomic Units at a Global Scale
Schmidt, Thomas S. B.; Matias Rodrigues, João F.; von Mering, Christian
2014-01-01
Operational Taxonomic Units (OTUs), usually defined as clusters of similar 16S/18S rRNA sequences, are the most widely used basic diversity units in large-scale characterizations of microbial communities. However, it remains unclear how well the various proposed OTU clustering algorithms approximate ‘true’ microbial taxa. Here, we explore the ecological consistency of OTUs – based on the assumption that, like true microbial taxa, they should show measurable habitat preferences (niche conservatism). In a global and comprehensive survey of available microbial sequence data, we systematically parse sequence annotations to obtain broad ecological descriptions of sampling sites. Based on these, we observe that sequence-based microbial OTUs generally show high levels of ecological consistency. However, different OTU clustering methods result in marked differences in the strength of this signal. Assuming that ecological consistency can serve as an objective external benchmark for cluster quality, we conclude that hierarchical complete linkage clustering, which provided the most ecologically consistent partitions, should be the default choice for OTU clustering. To our knowledge, this is the first approach to assess cluster quality using an external, biologically meaningful parameter as a benchmark, on a global scale. PMID:24763141
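Hierarchical complete-linkage clustering, the method the study recommends as the default for OTU construction, can be sketched on a toy distance matrix. The sequences, distances, and 0.03 cutoff below (mimicking the common 97%-identity threshold) are illustrative assumptions, not data from the study:

```python
def complete_linkage(dist, threshold):
    """Agglomerative clustering with complete linkage: repeatedly merge
    the two closest clusters, where inter-cluster distance is the
    MAXIMUM pairwise member distance, until it exceeds the threshold."""
    clusters = [{i} for i in range(len(dist))]

    def d(a, b):  # complete-linkage distance between two clusters
        return max(dist[i][j] for i in a for j in b)

    while len(clusters) > 1:
        best, x, y = min((d(a, b), i, j)
                         for i, a in enumerate(clusters)
                         for j, b in enumerate(clusters) if i < j)
        if best > threshold:
            break
        clusters[x] |= clusters[y]
        del clusters[y]
    return clusters

# Toy pairwise distances for 4 sequences (1 - fractional identity);
# sequences 0/1 and 2/3 are near-identical pairs.
D = [[0.00, 0.01, 0.20, 0.21],
     [0.01, 0.00, 0.22, 0.20],
     [0.20, 0.22, 0.00, 0.02],
     [0.21, 0.20, 0.02, 0.00]]
otus = complete_linkage(D, threshold=0.03)
```

Using the maximum rather than the minimum member distance is what makes complete linkage conservative: every pair inside an OTU is guaranteed to be within the threshold, avoiding the chaining that single linkage permits.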
Galaxy Evolution Viewed as Functions of Environment and Mass
NASA Astrophysics Data System (ADS)
Kodama, Tadayuki; Tanaka, Masayuki; Tanaka, Ichi; Kajisawa, Masaru
We present two large surveys of distant clusters currently being carried out with Subaru, making use of its great capability for wide-field study both in the optical and in the near-infrared. The optical surveys, called PISCES, have mapped out large scale structures in and around 8 distant clusters at 0.4 < z < 1.3, composed of multiple filaments and clumps extended over 15-30 Mpc scales. From the photometric and spectroscopic properties of galaxies over a wide range in environment, we find that the truncation of galaxies is seen in the outskirts of clusters rather than in the cluster cores. We also see a clear environmental dependence of the down-sizing (progressively later quenching of star formation in smaller galaxies). The near-infrared surveys are being conducted with a new wide-field instrument targeting proto-clusters around high-z radio-loud galaxies up to z ~ 4. Most of these fields are known to show a large number of Lyα and/or Hα emitters at the same redshifts as the radio galaxies. We see a clear excess of near-infrared selected galaxies (JHKs-selected galaxies as well as DRGs) in these fields, and they are indeed proto-clusters with not only young emitters but also evolved populations. The spatial distribution of such NIR selected galaxies is filamentary and tracks similar structures traced by the emitters. There is a hint that the bright end of the red sequence first appeared between z = 3 and 2.
State estimation and prediction using clustered particle filters.
Lee, Yoonsang; Majda, Andrew J
2016-12-20
Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.
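The 40-dimensional Lorenz 96 model used as the test bed above has a standard definition, dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F with cyclic indices. A minimal sketch of the model and a Runge-Kutta integrator follows; the forcing F = 8 is a common choice for the chaotic regime, not a value stated in the abstract:

```python
def lorenz96_rhs(x, forcing=8.0):
    """Right-hand side of the Lorenz 96 model:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, indices cyclic."""
    n = len(x)
    return [(x[(i + 1) % n] - x[i - 2]) * x[i - 1] - x[i] + forcing
            for i in range(n)]

def rk4_step(x, dt, forcing=8.0):
    """One classical fourth-order Runge-Kutta step."""
    f = lambda y: lorenz96_rhs(y, forcing)
    k1 = f(x)
    k2 = f([xi + 0.5 * dt * k for xi, k in zip(x, k1)])
    k3 = f([xi + 0.5 * dt * k for xi, k in zip(x, k2)])
    k4 = f([xi + dt * k for xi, k in zip(x, k3)])
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# 40-dimensional state as in the abstract, starting from the x_i = F
# equilibrium with a small perturbation that triggers chaotic growth.
state = [8.0] * 40
state[0] += 0.01
for _ in range(500):        # integrate 5 time units at dt = 0.01
    state = rk4_step(state, 0.01)
```

The constant state x_i = F is an exact equilibrium, which is why the perturbation is needed; filter experiments on this model typically assimilate noisy observations of such a chaotic trajectory.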
Simplified Virtualization in a HEP/NP Environment with Condor
NASA Astrophysics Data System (ADS)
Strecker-Kellogg, W.; Caramarcu, C.; Hollowell, C.; Wong, T.
2012-12-01
In this work we will address the development of a simple prototype virtualized worker node cluster, using Scientific Linux 6.x as a base OS, KVM and the libvirt API for virtualization, and the Condor batch software to manage virtual machines. The discussion in this paper provides details on our experience with building, configuring, and deploying the various components from bare metal, including the base OS, creation and distribution of the virtualized OS images and the integration of batch services with the virtual machines. Our focus was on simplicity and interoperability with our existing architecture.
GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing
Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal
2016-01-01
Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
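The Metropolis Monte Carlo search that GeauxDock builds upon can be sketched generically. The 1-D toy scoring function below stands in for a docking score over ligand pose parameters; it is illustrative only and not GeauxDock's actual energy model or code:

```python
import math
import random

def metropolis_search(energy, x0, step=0.1, beta=5.0, iters=2000, seed=42):
    """Generic Metropolis Monte Carlo minimization: perturb the current
    pose, always accept downhill moves, accept uphill moves with
    probability exp(-beta * dE), and keep the best pose ever visited."""
    rng = random.Random(seed)
    x = x0
    e = energy(x)
    best_x, best_e = x, e
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        e_cand = energy(cand)
        if e_cand <= e or rng.random() < math.exp(-beta * (e_cand - e)):
            x, e = cand, e_cand
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Toy 1-D "scoring function" with its minimum at x = 2.0.
score = lambda x: (x - 2.0) ** 2
pose, value = metropolis_search(score, x0=0.0)
```

The inner loop is what heterogeneous docking codes optimize: each energy evaluation is independent given a pose, so many poses (or many ligands) can be scored in parallel on CPU, Xeon Phi, or GPU.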
The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth
ERIC Educational Resources Information Center
Steyvers, Mark; Tenenbaum, Joshua B.
2005-01-01
We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…
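The two statistics that characterize the small-world structure described above, the local clustering coefficient and the average shortest path length, can be computed directly. The word-association graph below is a made-up example, not data from the study:

```python
from collections import deque

def clustering_coefficient(adj, v):
    """Fraction of pairs of v's neighbours that are themselves linked."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def average_path_length(adj):
    """Mean shortest-path length over all connected ordered pairs (BFS)."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(d for v, d in dist.items() if v != src)
        pairs += len(dist) - 1
    return total / pairs

# Toy word-association graph: a triangle ("cat", "dog", "pet")
# plus a pendant node ("store").
G = {"cat": {"dog", "pet"},
     "dog": {"cat", "pet"},
     "pet": {"cat", "dog", "store"},
     "store": {"pet"}}
```

Small-world structure pairs high local clustering with short average paths; on this toy graph the clustering coefficient of "pet" is 1/3 and the mean path length is 4/3.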
NASA Astrophysics Data System (ADS)
Mummery, Benjamin O.; McCarthy, Ian G.; Bird, Simeon; Schaye, Joop
2017-10-01
We use the cosmo-OWLS and bahamas suites of cosmological hydrodynamical simulations to explore the separate and combined effects of baryon physics (particularly feedback from active galactic nuclei, AGN) and free streaming of massive neutrinos on large-scale structure. We focus on five diagnostics: (i) the halo mass function, (ii) halo mass density profiles, (iii) the halo mass-concentration relation, (iv) the clustering of haloes and (v) the clustering of matter, and we explore the extent to which the effects of baryon physics and neutrino free streaming can be treated independently. Consistent with previous studies, we find that both AGN feedback and neutrino free streaming suppress the total matter power spectrum, although their scale and redshift dependences differ significantly. The inclusion of AGN feedback can significantly reduce the masses of groups and clusters, and increase their scale radii. These effects lead to a decrease in the amplitude of the mass-concentration relation and an increase in the halo autocorrelation function at fixed mass. Neutrinos also lower the masses of groups and clusters while having no significant effect on the shape of their density profiles (thus also affecting the mass-concentration relation and halo clustering in a qualitatively similar way to feedback). We show that, with only a small number of exceptions, the combined effects of baryon physics and neutrino free streaming on all five diagnostics can be estimated to typically better than a few per cent accuracy by treating these processes independently (i.e. by multiplying their separate effects).
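The paper's headline approximation, treating baryon physics and neutrino free streaming independently by multiplying their separate effects, reduces to a one-line product of suppression ratios. A sketch with made-up ratios (not values from the paper):

```python
def combined_suppression(ratio_agn, ratio_nu):
    """Estimate the combined power-spectrum ratio by treating AGN
    feedback and neutrino free streaming as independent:
    P_combined / P_ref ~ (P_agn / P_ref) * (P_nu / P_ref),
    evaluated per scale bin."""
    return [a * n for a, n in zip(ratio_agn, ratio_nu)]

# Hypothetical per-scale suppression ratios (illustrative only).
agn = [1.00, 0.95, 0.80]
nu  = [0.90, 0.92, 0.95]
```

The paper's test is then whether this product matches the ratio measured in a simulation that includes both processes at once.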
NASA Astrophysics Data System (ADS)
Allen, Rob
2016-09-01
Structures within molecules and nuclei have relationships to astronomical patterns. The COBE cosmic-scale plots and large-scale surveys of galaxy clusters show patterns that also repeat, and are well known, at atomic scales. The Induction, Strong Force, and Nuclear Binding Energy Periods within the Big Bang are revealed to have played roles in the formation of these large-scale distributions. Equations related to the enormous patterns also model chemical bonds and likely nucleus and nucleon substructures. Ratios of the forces, including gravity, are accurately calculated from the distributions and shapes. In addition, particle masses and a great many physical constants can be derived with precision and accuracy from astrophysical shapes. A few very basic numbers suffice for modelling from nucleon internals to molecules to supernovae, and up to the Visible Universe. Equations are also provided, along with possible structural configurations, for some Cold Dark Matter and Dark Energy.
Detecting communities in large networks
NASA Astrophysics Data System (ADS)
Capocci, A.; Servedio, V. D. P.; Caldarelli, G.; Colaiori, F.
2005-07-01
We develop an algorithm to detect community structure in complex networks. The algorithm is based on spectral methods and takes into account weights and link orientation. Since the method efficiently detects clustered nodes in large networks even when these are not sharply partitioned, it turns out to be especially suitable for the analysis of social and information networks. We test the algorithm on a large-scale data set from a psychological experiment on word association. In this case, it proves to be successful both in clustering words and in uncovering mental association patterns.
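A common spectral route to community detection, in the spirit of (though not identical to) the algorithm described, is to split nodes by the sign of an approximate Fiedler vector of the graph Laplacian. A self-contained sketch using power iteration (toy unweighted, undirected graph, unlike the weighted, oriented case the paper handles):

```python
def fiedler_partition(adj, iters=500):
    """Approximate spectral bisection: power-iterate the shifted
    Laplacian (c*I - L), projecting out the constant eigenvector,
    to estimate the Fiedler vector, then split nodes by sign."""
    nodes = sorted(adj)
    n = len(nodes)
    idx = {u: i for i, u in enumerate(nodes)}
    deg = {u: len(adj[u]) for u in nodes}
    c = 2 * max(deg.values())          # shift keeps the spectrum non-negative
    v = [(-1.0) ** i for i in range(n)]  # arbitrary start vector
    for _ in range(iters):
        # w = (c*I - L) v, where (L v)_u = deg_u * v_u - sum over neighbours
        w = [(c - deg[u]) * v[idx[u]] + sum(v[idx[x]] for x in adj[u])
             for u in nodes]
        mean = sum(w) / n              # deflate the constant mode
        w = [x - mean for x in w]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return ({u for u in nodes if v[idx[u]] >= 0},
            {u for u in nodes if v[idx[u]] < 0})

# Two triangles bridged by a single edge: a clear two-community graph.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
     3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
```

The sign pattern of the Fiedler vector separates the two dense triangles across the bridge edge.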
Formation of large-scale structure from cosmic strings and massive neutrinos
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.; Melott, Adrian L.; Bertschinger, Edmund
1989-01-01
Numerical simulations of large-scale structure formation from cosmic strings and massive neutrinos are described. The linear power spectrum in this model resembles the cold-dark-matter power spectrum. Galaxy formation begins early, and the final distribution consists of isolated density peaks embedded in a smooth background, leading to a natural bias in the distribution of luminous matter. The distribution of clustered matter has a filamentary appearance with large voids.
A pair natural orbital implementation of the coupled cluster model CC2 for excitation energies.
Helmich, Benjamin; Hättig, Christof
2013-08-28
We demonstrate how to extend the pair natural orbital (PNO) methodology for excited states, presented in a previous work for the perturbative doubles correction to configuration interaction singles (CIS(D)), to iterative coupled cluster methods such as the approximate singles and doubles model CC2. The original O(N^5) scaling of the PNO construction is reduced by using orbital-specific virtuals (OSVs) as an intermediate step without spoiling the initial accuracy of the PNO method. Furthermore, a slower error convergence for charge-transfer states is analyzed and resolved by a numerical Laplace transformation during the PNO construction, so that an equally accurate treatment of local and charge-transfer excitations is achieved. With state-specific truncated PNO expansions, the eigenvalue problem is solved by combining the Davidson algorithm with deflation to project out roots that have already been determined and an automated refresh with a generation of new PNOs to achieve self-consistency of the PNO space. For a large test set, we found that truncation errors for PNO-CC2 excitation energies are only slightly larger than for PNO-CIS(D). The computational efficiency of PNO-CC2 is demonstrated for a large organic dye, where a reduction of the doubles space by a factor of more than 1000 is obtained compared to the canonical calculation. A compression of the doubles space by a factor of 30 is achieved by a unified OSV space alone. Moreover, calculations with the still preliminary PNO-CC2 implementation on a series of glycine oligomers revealed an early break-even point with a canonical RI-CC2 implementation between 100 and 300 basis functions.
What drives the evolution of Luminous Compact Blue Galaxies in Clusters vs. the Field?
NASA Astrophysics Data System (ADS)
Wirth, Gregory D.; Bershady, Matthew A.; Crawford, Steven M.; Hunt, Lucas; Pisano, Daniel J.; Randriamampandry, Solohery M.
2018-06-01
Low-mass dwarf ellipticals are the most numerous members of present-day galaxy clusters, but the progenitors of this dominant population remain unclear. A prime candidate is the class of objects known as Luminous Compact Blue Galaxies (LCBGs), common in intermediate-redshift clusters but virtually extinct today. Recent cosmological simulations suggest that present-day dwarf galaxies begin as irregular field galaxies, undergo an environmentally-driven starburst phase as they enter the cluster, and stop forming stars earlier than their counterparts in the field. This model predicts that cluster dwarfs should have lower stellar mass per unit dynamical mass than their counterparts in the field. We are undertaking a two-pronged archival research program to test this key prediction using the combination of precision photometry from space and high-quality spectroscopy. First, we are combining optical HST/ACS imaging of five z=0.55 clusters (including two HST Frontier Fields) with Spitzer IR imaging and publicly-released Keck/DEIMOS spectroscopy to measure stellar-to-dynamical-mass ratios for a large sample of cluster LCBGs. Second, we are exploiting a new catalog of LCBGs in the COSMOS field to gather corresponding data for a significant sample of field LCBGs. By comparing mass ratios from these datasets, we aim to test theoretical predictions and determine the primary physical driver of cluster dwarf-galaxy evolution.
Clustering molecular dynamics trajectories for optimizing docking experiments.
De Paris, Renata; Quevedo, Christian V; Ruiz, Duncan D; Norberto de Souza, Osmar; Barros, Rodrigo C
2015-01-01
Molecular dynamics simulations of protein receptors have become an attractive tool for rational drug discovery. However, the high computational cost of employing molecular dynamics trajectories in virtual screening of large repositories threatens the feasibility of this task. Computational intelligence techniques have been applied in this context, with the ultimate goal of reducing the overall computational cost so the task can become feasible. Particularly, clustering algorithms have been widely used as a means to reduce the dimensionality of molecular dynamics trajectories. In this paper, we develop a novel methodology for clustering entire trajectories using structural features from the substrate-binding cavity of the receptor in order to optimize docking experiments on a cloud-based environment. The resulting partition was selected based on three clustering validity criteria, and it was further validated by analyzing the interactions between 20 ligands and a fully flexible receptor (FFR) model containing a 20 ns molecular dynamics simulation trajectory. Our proposed methodology shows that taking into account features of the substrate-binding cavity as input for the k-means algorithm is a promising technique for accurately selecting ensembles of representative structures tailored to a specific ligand.
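The k-means step the authors apply to cavity features can be sketched with plain Lloyd iteration. The "cavity descriptors" below are invented two-dimensional stand-ins for per-snapshot features, and centroids are seeded deterministically from the first k points for reproducibility:

```python
def kmeans(points, k, iters=100):
    """Plain Lloyd's k-means on feature vectors; deterministic here
    because centroids are seeded with the first k points."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # Assign every point to its nearest centroid (squared Euclidean).
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: sum((a - b) ** 2
                    for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Recompute centroids as per-cluster means.
        new = [[sum(col) / len(cl) for col in zip(*cl)] if cl
               else centroids[j] for j, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters

# Hypothetical 2-D "cavity descriptors" for six MD snapshots.
snaps = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9),
         (5.0, 5.2), (5.1, 4.9), (4.9, 5.1)]
```

Each resulting cluster would then contribute one representative receptor conformation to the docking ensemble.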
Submorphotypes of the maxillary first molar and their effects on alignment and rotation.
Kim, Hong-Kyun; Kwon, Ho Beom; Hyun, Hong-Keun; Jung, Min-Ho; Han, Seong Ho; Park, Young-Seok
2014-09-01
The aim of this study was to explore the shape differences in maxillary first molars with orthographic measurements using 3-dimensional virtual models to assess whether there is variability in morphology that could affect the alignment results when treated by straight-wire appliance systems. A total of 175 maxillary first molars with 4 cusps were selected for classification. With 3-dimensional laser scanning and reconstruction software, virtual casts were constructed. After performing several linear and angular measurements on the virtual occlusal plane, the teeth were clustered into 2 groups by the method of partitioning around medoids. To visualize the 2 groups, occlusal polygons were constructed using the average data of these groups. The resultant 2 clusters showed statistically significant differences in the measurements describing the cusp locations and the buccal and lingual outlines. The rotation along the centers made the 2 cluster polygons look similar, but there was a difference in the direction of the midsagittal lines. There was considerable variability in morphology according to 2 clusters in the population of this study. The occlusal polygons showed that the outlines of the 2 clusters were similar, but the midsagittal line directions and inner geometries were different. The difference between the morphologies of the 2 clusters could result in occlusal contact differences, which might be considered for better alignment of the maxillary posterior segment. Copyright © 2014 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
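Partitioning around medoids (PAM), the clustering method used here, picks actual data points as cluster centers, which makes the resulting "average tooth" a real specimen. For tiny data sets the search can even be exhaustive; the tooth measurements below are hypothetical:

```python
from itertools import combinations

def pam(points, k, dist):
    """Exhaustive Partitioning Around Medoids for tiny data sets:
    choose the k data points minimising total distance from every
    point to its nearest medoid."""
    def cost(medoids):
        return sum(min(dist(p, m) for m in medoids) for p in points)
    best = min(combinations(points, k), key=cost)
    labels = [min(range(k), key=lambda j: dist(p, best[j]))
              for p in points]
    return list(best), labels

manhattan = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))

# Hypothetical occlusal measurements (2 features per tooth).
teeth = [(10.0, 7.0), (10.2, 7.1), (9.9, 6.8),
         (11.5, 8.4), (11.6, 8.6)]
```

Real PAM implementations use iterative swap heuristics instead of brute force, but the objective is the same.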
DISPLACEMENT CASCADE SIMULATION IN TUNGSTEN UP TO 200 KEV OF DAMAGE ENERGY AT 300, 1025, AND 2050 K
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setyawan, Wahyu; Nandipati, Giridhar; Roche, Kenneth J.
2015-09-22
We generated a molecular dynamics database of primary defects that adequately covers the range of tungsten recoil energy imparted by 14-MeV neutrons. During this semiannual period, cascades at 150 and 200 keV at 300 and 1025 K were simulated. Overall, we included damage energy up to 200 keV at 300 and 1025 K, and up to 100 keV at 2050 K. We report the number of surviving Frenkel pairs (NF) and the size distribution of defect clusters. The slope of the NF curve versus cascade damage energy (EMD), on a log-log scale, changes at a transition energy (μ). For EMD > μ, the cascade forms interconnected damage regions that facilitate the formation of large clusters of defects. At 300 K and EMD = 200 keV, the largest size of interstitial cluster and vacancy cluster is 266 and 335, respectively. Similarly, at 1025 K and EMD = 200 keV, the largest size of interstitial cluster and vacancy cluster is 296 and 338, respectively. At 2050 K, large interstitial clusters also routinely form, but practically no large vacancy clusters do.
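The reported change of log-log slope at the transition energy μ can be modeled as a continuous piecewise power law. All constants below are illustrative placeholders, not fitted values from the report:

```python
def frenkel_pairs(e_md, mu=0.25, a=0.75, b=1.2, n_mu=100.0):
    """Piecewise power law for surviving Frenkel pairs NF versus damage
    energy EMD (in MeV here): log-log slope a below the transition
    energy mu, slope b above it, continuous at mu where NF = n_mu.
    All parameter values are illustrative, not fitted."""
    if e_md <= mu:
        return n_mu * (e_md / mu) ** a
    return n_mu * (e_md / mu) ** b
```

On a log-log plot this draws two straight segments meeting at (μ, n_mu), which is the qualitative shape the cascade data are described as following.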
Demonstration of Three Gorges archaeological relics based on 3D-visualization technology
NASA Astrophysics Data System (ADS)
Xu, Wenli
2015-12-01
This paper mainly focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D-visualization technology, which includes large-scale landscape reconstruction, virtual studio, and virtual panoramic roaming, etc., is proposed to create a digitized interactive demonstration system. The method contains three stages: pre-processing, 3D modeling and integration. Firstly, abundant archaeological information is classified according to its historical and geographical information. Secondly, a 3D-model library is built up with digital image processing and 3D modeling technology. Thirdly, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.
Prokaryotic Gene Clusters: A Rich Toolbox for Synthetic Biology
Fischbach, Michael; Voigt, Christopher A.
2014-01-01
Bacteria construct elaborate nanostructures, obtain nutrients and energy from diverse sources, synthesize complex molecules, and implement signal processing to react to their environment. These complex phenotypes require the coordinated action of multiple genes, which are often encoded in a contiguous region of the genome, referred to as a gene cluster. Gene clusters sometimes contain all of the genes necessary and sufficient for a particular function. As an evolutionary mechanism, gene clusters facilitate the horizontal transfer of the complete function between species. Here, we review recent work on a number of clusters whose functions are relevant to biotechnology. Engineering these clusters has been hindered by their regulatory complexity, the need to balance the expression of many genes, and a lack of tools to design and manipulate DNA at this scale. Advances in synthetic biology will enable the large-scale bottom-up engineering of the clusters to optimize their functions, wake up cryptic clusters, or transfer them between organisms. Understanding and manipulating gene clusters will move towards an era of genome engineering, where multiple functions can be “mixed-and-matched” to create a designer organism. PMID:21154668
A Game Theory Algorithm for Intra-Cluster Data Aggregation in a Vehicular Ad Hoc Network
Chen, Yuzhong; Weng, Shining; Guo, Wenzhong; Xiong, Naixue
2016-01-01
Vehicular ad hoc networks (VANETs) have an important role in urban management and planning. The effective integration of vehicle information in VANETs is critical to traffic analysis, large-scale vehicle route planning and intelligent transportation scheduling. However, given the limitations in the precision of the output information of a single sensor and the difficulty of information sharing among various sensors in a highly dynamic VANET, effectively performing data aggregation in VANETs remains a challenge. Moreover, current studies have mainly focused on data aggregation in large-scale environments but have rarely discussed the issue of intra-cluster data aggregation in VANETs. In this study, we propose a multi-player game theory algorithm for intra-cluster data aggregation in VANETs by analyzing the competitive and cooperative relationships among sensor nodes. Several sensor-centric metrics are proposed to measure the data redundancy and stability of a cluster. We then study the utility function to achieve efficient intra-cluster data aggregation by considering both data redundancy and cluster stability. In particular, we prove the existence of a unique Nash equilibrium in the game model, and conduct extensive experiments to validate the proposed algorithm. Results demonstrate that the proposed algorithm has advantages over typical data aggregation algorithms in both accuracy and efficiency. PMID:26907272
Using the morphology and magnetic fields of tailed radio galaxies as environmental probes
NASA Astrophysics Data System (ADS)
Johnston-Hollitt, M.; Dehghan, S.; Pratley, L.
2015-03-01
Bent-tailed (BT) radio sources have long been known to trace overdensities in the Universe up to z ~ 1 and there is increasing evidence this association persists out to redshifts of 2. The morphology of the jets in BT galaxies is primarily a function of the environment that they have resided in, and so BTs provide invaluable clues as to their local conditions. Thus, not only can samples of BT galaxies be used as signposts of large-scale structure, but they are also valuable for obtaining a statistical measurement of properties of the intra-cluster medium, including the presence of cluster accretion shocks & winds, and as historical anemometers, preserving the dynamical history of their surroundings in their jets. We discuss the use of BTs to unveil large-scale structure and provide an example in which a BT was used to unlock the dynamical history of its host cluster. In addition to their use as density and dynamical indicators, BTs are useful probes of the magnetic field of their environment on scales which are inaccessible to other methods. Here we discuss a novel way in which a particular sub-class of BTs, the so-called `corkscrew' galaxies, might further elucidate the coherence lengths of the magnetic fields in their vicinity. Given that BTs are estimated to make up a large population in next-generation surveys, we posit that the use of jets in this way could provide a unique source of environmental information for clusters and groups up to z = 2.
Helium segregation on surfaces of plasma-exposed tungsten
Maroudas, Dimitrios; Blondel, Sophie; Hu, Lin; ...
2016-01-21
Here we report a hierarchical multi-scale modeling study of implanted helium segregation on surfaces of tungsten, considered as a plasma facing component in nuclear fusion reactors. We employ a hierarchy of atomic-scale simulations based on a reliable interatomic interaction potential, including molecular-statics simulations to understand the origin of helium surface segregation, targeted molecular-dynamics (MD) simulations of near-surface cluster reactions, and large-scale MD simulations of implanted helium evolution in plasma-exposed tungsten. We find that small, mobile He-n (1 <= n <= 7) clusters in the near-surface region are attracted to the surface due to an elastic interaction force that provides the thermodynamic driving force for surface segregation. This elastic interaction force induces drift fluxes of these mobile He-n clusters, which increase substantially as the migrating clusters approach the surface, facilitating helium segregation on the surface. Moreover, the clusters' drift toward the surface enables cluster reactions, most importantly trap mutation, in the near-surface region at rates much higher than in the bulk material. Furthermore, these near-surface cluster dynamics have significant effects on the surface morphology, near-surface defect structures, and the amount of helium retained in the material upon plasma exposure. We integrate the findings of such atomic-scale simulations into a properly parameterized and validated spatially dependent, continuum-scale reaction-diffusion cluster dynamics model, capable of predicting implanted helium evolution, surface segregation, and its near-surface effects in tungsten. This cluster-dynamics model sets the stage for development of fully atomistically informed coarse-grained models for computationally efficient simulation predictions of helium surface segregation, as well as helium retention and surface morphological evolution, toward optimal design of plasma facing components.
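The continuum-scale picture, diffusing clusters with a surface-directed drift flux and an absorbing surface, can be sketched as an explicit 1-D drift-diffusion update. Parameters and grid are arbitrary illustrative values, not the paper's parameterization:

```python
def drift_diffusion_step(c, D, v, dx, dt):
    """One explicit finite-difference update of dc/dt = D*c'' + v*dc/dx,
    i.e. diffusion plus drift at speed v toward the surface at x = 0
    (upwind-differenced), with an absorbing boundary at the surface."""
    new = c[:]
    for i in range(1, len(c) - 1):
        diff = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
        adv = v * (c[i + 1] - c[i]) / dx   # upwind for leftward drift
        new[i] = c[i] + dt * (diff + adv)
    new[0] = 0.0   # absorbing surface: segregated helium leaves the bulk
    return new     # right boundary held fixed as a far-field supply

def evolve(c, D=1.0, v=2.0, dx=1.0, dt=0.1, steps=50):
    """Advance the profile; dt is chosen to satisfy the explicit
    stability limits (D*dt/dx**2 = 0.1, v*dt/dx = 0.2)."""
    for _ in range(steps):
        c = drift_diffusion_step(c, D, v, dx, dt)
    return c

# Uniform initial helium concentration in the near-surface region.
profile = evolve([0.0] + [1.0] * 9)
```

The drift term is what distinguishes this from plain diffusion: it depletes the near-surface region faster and feeds segregation at x = 0, qualitatively as described above.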
Modulated Modularity Clustering as an Exploratory Tool for Functional Genomic Inference
Stone, Eric A.; Ayroles, Julien F.
2009-01-01
In recent years, the advent of high-throughput assays, coupled with their diminishing cost, has facilitated a systems approach to biology. As a consequence, massive amounts of data are currently being generated, requiring efficient methodology aimed at the reduction of scale. Whole-genome transcriptional profiling is a standard component of systems-level analyses, and to reduce scale and improve inference, clustering genes is common. Since clustering is often the first step toward generating hypotheses, cluster quality is critical. Conversely, because the validation of cluster-driven hypotheses is indirect, it is critical that quality clusters not be obtained by subjective means. In this paper, we present a new objective-based clustering method and demonstrate that it yields high-quality results. Our method, modulated modularity clustering (MMC), seeks community structure in graphical data. MMC modulates the connection strengths of edges in a weighted graph to maximize an objective function (called modularity) that quantifies community structure. The result of this maximization is a clustering through which tightly-connected groups of vertices emerge. Our application is to systems genetics, and we quantitatively compare MMC both to the hierarchical clustering method most commonly employed and to three popular spectral clustering approaches. We further validate MMC through analyses of human and Drosophila melanogaster expression data, demonstrating that the clusters we obtain are biologically meaningful. We show MMC to be effective and suitable for applications of large scale. In light of these features, we advocate MMC as a standard tool for exploration and hypothesis generation. PMID:19424432
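The modularity objective that MMC maximizes has a simple closed form for a hard partition of an unweighted graph; MMC itself works with modulated edge weights, which this sketch omits:

```python
def modularity(adj, communities):
    """Newman modularity Q = sum over communities of
    (e_c / m) - (d_c / (2m))**2, where e_c counts intra-community
    edges, d_c is the total degree in the community, and m is the
    total number of edges."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2
    q = 0.0
    for comm in communities:
        e_c = sum(1 for u in comm for v in adj[u] if v in comm) / 2
        d_c = sum(len(adj[u]) for u in comm)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Two triangles joined by one edge: strong community structure.
net = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
```

Higher Q means more intra-community edges than expected by chance; putting the whole graph in one community gives Q = 0 by construction.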
Modeling and Testing Dark Energy and Gravity with Galaxy Cluster Data
NASA Astrophysics Data System (ADS)
Rapetti, David; Cataneo, Matteo; Heneka, Caroline; Mantz, Adam; Allen, Steven W.; Von Der Linden, Anja; Schmidt, Fabian; Lombriser, Lucas; Li, Baojiu; Applegate, Douglas; Kelly, Patrick; Morris, Glenn
2018-06-01
The abundance of galaxy clusters is a powerful probe to constrain the properties of dark energy and gravity at large scales. We employed a self-consistent analysis that includes survey, observable-mass scaling relations and weak gravitational lensing data to obtain constraints on f(R) gravity, which are an order of magnitude tighter than the best previously achieved, as well as on cold dark energy of negligible sound speed. The latter implies clustering of the dark energy fluid at all scales, allowing us to measure the effects of dark energy perturbations at cluster scales. For this study, we recalibrated the halo mass function using the following non-linear characteristic quantities: the spherical collapse threshold, the virial overdensity and an additional mass contribution for cold dark energy. We also presented a new modeling of the f(R) gravity halo mass function that incorporates novel corrections to capture key non-linear effects of the chameleon screening mechanism, as found in high-resolution N-body simulations. All of these results allow us to predict, as I will exemplify, and eventually to obtain the next generation of cluster constraints on such models, and they provide us with frameworks that can also be applied to other proposed dark energy and modified gravity models using cluster abundance observations.
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III; Weinberg, David H.; Melott, Adrian L.
1987-01-01
A quantitative measure of the topology of large-scale structure: the genus of density contours in a smoothed density distribution, is described and applied. For random phase (Gaussian) density fields, the mean genus per unit volume exhibits a universal dependence on threshold density, with a normalizing factor that can be calculated from the power spectrum. If large-scale structure formed from the gravitational instability of small-amplitude density fluctuations, the topology observed today on suitable scales should follow the topology in the initial conditions. The technique is illustrated by applying it to simulations of galaxy clustering in a flat universe dominated by cold dark matter. The technique is also applied to a volume-limited sample of the CfA redshift survey and to a model in which galaxies reside on the surfaces of polyhedral 'bubbles'. The topology of the evolved mass distribution and 'biased' galaxy distribution in the cold dark matter models closely matches the topology of the density fluctuations in the initial conditions. The topology of the observational sample is consistent with the random phase, cold dark matter model.
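For the random-phase (Gaussian) case discussed above, the mean genus per unit volume follows a universal dependence on the threshold density ν (in units of the standard deviation), g(ν) ∝ (1 − ν²) exp(−ν²/2), with a normalization set by the power spectrum. A sketch, leaving the normalization free:

```python
import math

def genus_density(nu, amplitude=1.0):
    """Mean genus per unit volume of a Gaussian random field at
    threshold nu: g(nu) = A * (1 - nu**2) * exp(-nu**2 / 2).
    The amplitude A depends on the smoothed power spectrum and is
    left here as a free normalization."""
    return amplitude * (1.0 - nu ** 2) * math.exp(-nu ** 2 / 2.0)
```

The curve is symmetric in ν, positive near the median threshold (sponge-like topology) and negative at high |ν| (isolated clusters or voids), which is the signature the genus statistic tests against observed galaxy distributions.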
Thermodynamics of the Coma Cluster Outskirts
NASA Astrophysics Data System (ADS)
Simionescu, A.; Werner, N.; Urban, O.; Allen, S. W.; Fabian, A. C.; Mantz, A.; Matsushita, K.; Nulsen, P. E. J.; Sanders, J. S.; Sasaki, T.; Sato, T.; Takei, Y.; Walker, S. A.
2013-09-01
We present results from a large mosaic of Suzaku observations of the Coma Cluster, the nearest and X-ray brightest hot (~8 keV), dynamically active, non-cool core system, focusing on the thermodynamic properties of the intracluster medium on large scales. For azimuths not aligned with an infalling subcluster toward the southwest, our measured temperature and X-ray brightness profiles exhibit broadly consistent radial trends, with the temperature decreasing from about 8.5 keV at the cluster center to about 2 keV at a radius of 2 Mpc, which is the edge of our detection limit. The southwest merger significantly boosts the surface brightness, allowing us to detect X-ray emission out to ~2.2 Mpc along this direction. Apart from the southwestern infalling subcluster, the surface brightness profiles show multiple edges around radii of 30-40 arcmin. The azimuthally averaged temperature profile, as well as the deprojected density and pressure profiles, all show a sharp drop consistent with an outwardly-propagating shock front located at 40 arcmin, corresponding to the outermost edge of the giant radio halo observed at 352 MHz with the Westerbork Synthesis Radio Telescope. The shock front may be powering this radio emission. A clear entropy excess inside of r500 reflects the violent merging events linked with these morphological features. Beyond r500, the entropy profiles of the Coma Cluster along the relatively relaxed directions are consistent with the power-law behavior expected from simple models of gravitational large-scale structure formation. The pressure is also in agreement at these radii with the expected values measured from Sunyaev-Zel'dovich data from the Planck satellite. However, due to the large uncertainties associated with the Coma Cluster measurements, we cannot yet exclude an entropy flattening in this system consistent with that seen in more relaxed cool core clusters.
Virtual reality adaptive stimulation of limbic networks in the mental readiness training.
Cosić, Kresimir; Popović, Sinisa; Kostović, Ivica; Judas, Milos
2010-01-01
A significant proportion of severe psychological problems in recent large-scale peacekeeping operations underscores the importance of effective methods for strengthening the stress resilience. Virtual reality (VR) adaptive stimulation, based on the estimation of the participant's emotional state from physiological signals, may enhance the mental readiness training (MRT). Understanding neurobiological mechanisms by which the MRT based on VR adaptive stimulation can affect the resilience to stress is important for practical application in the stress resilience management. After the delivery of a traumatic audio-visual stimulus in the VR, the cascade of events occurs in the brain, which evokes various physiological manifestations. In addition to the "limbic" emotional and visceral brain circuitry, other large-scale sensory, cognitive, and memory brain networks participate with less known impact in this physiological response. The MRT based on VR adaptive stimulation may strengthen the stress resilience through targeted brain-body interactions. Integrated interdisciplinary efforts, which would integrate the brain imaging and the proposed approach, may contribute to clarifying the neurobiological foundation of the resilience to stress.
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
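The single multimodal representation supported above is commonly formalized as minimum-variance cue combination: visual and interoceptive estimates are averaged with inverse-variance weights. A sketch with arbitrary example numbers (not values from the study):

```python
def fuse_cues(est_visual, var_visual, est_intero, var_intero):
    """Minimum-variance linear fusion of two independent cues into one
    multimodal estimate: weights are inverse variances (the standard
    Bayesian cue-combination result for independent Gaussian noise)."""
    w_v = 1.0 / var_visual
    w_i = 1.0 / var_intero
    fused = (w_v * est_visual + w_i * est_intero) / (w_v + w_i)
    fused_var = 1.0 / (w_v + w_i)
    return fused, fused_var
```

Two signatures follow: the fused estimate is pulled toward the more reliable cue, and its variance is lower than either cue alone, which is why a combined representation outperforms interoception-only path integration.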
Experience in using commercial clouds in CMS
NASA Astrophysics Data System (ADS)
Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration
2017-10-01
Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and the cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large-scale resources scheduled at peak times.
Experience in using commercial clouds in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauerdick, L.; Bockelman, B.; Dykstra, D.
Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing, and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is growing interest amongst cloud providers in demonstrating the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in running the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. We will also discuss the economic issues, comparing cost and operational efficiency with our dedicated resources. Finally, we will consider how the working model of HEP computing changes when large-scale resources can be scheduled at peak times.
Cloud computing for detecting high-order genome-wide epistatic interaction via dynamic clustering.
Guo, Xuan; Meng, Yu; Yu, Ning; Pan, Yi
2014-04-10
Taking advantage of high-throughput single nucleotide polymorphism (SNP) genotyping technology, large genome-wide association studies (GWASs) have been considered to hold promise for unravelling complex relationships between genotype and phenotype. At present, traditional single-locus-based methods are insufficient to detect multi-locus interactions, which broadly exist in complex traits. In addition, statistical tests for high-order epistatic interactions involving more than 2 SNPs pose computational and analytical challenges, because the computation increases exponentially as the number of SNPs in a combination grows. In this paper, we provide a simple, fast and powerful method using dynamic clustering and cloud computing to detect genome-wide multi-locus epistatic interactions. We have constructed systematic experiments to compare power performance against some recently proposed algorithms, including TEAM, SNPRuler, EDCF and BOOST. Furthermore, we have applied our method to two real GWAS datasets, the age-related macular degeneration (AMD) and rheumatoid arthritis (RA) datasets, where we find some novel potential disease-related genetic factors that do not show up in detections of 2-locus epistatic interactions. Experimental results on simulated data demonstrate that our method is more powerful than some recently proposed methods on both two- and three-locus disease models. Our method has discovered many novel high-order associations that are significantly enriched in cases from the two real GWAS datasets. Moreover, the running times of the cloud implementation of our method for detecting two-locus interactions on the AMD and RA datasets are roughly 2 hours and 50 hours, respectively, on a cluster with forty small virtual machines. Therefore, we believe that our method is suitable and effective for the full-scale analysis of multi-locus epistatic interactions in GWAS.
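The combinatorial explosion described above starts already at two loci: every SNP pair requires a contingency-table test. As a hedged illustration (a generic exhaustive-scan statistic, not the paper's clustering-accelerated method; the function name and 0/1/2 genotype encoding are my assumptions), a two-locus association can be scored with a chi-square statistic over the 3x3 genotype table split by case/control status:

```python
def two_locus_chi2(genotypes, phenotype, snp_a, snp_b):
    """Chi-square association statistic for one SNP pair.

    genotypes: dict mapping SNP name -> list of 0/1/2 codes per
    individual; phenotype: list of 0 (control) / 1 (case).
    """
    # 9 two-locus genotype combinations x 2 phenotype classes
    obs = [[0.0] * 2 for _ in range(9)]
    for g_a, g_b, p in zip(genotypes[snp_a], genotypes[snp_b], phenotype):
        obs[3 * g_a + g_b][p] += 1
    n = sum(sum(row) for row in obs)
    col = [sum(obs[i][j] for i in range(9)) for j in range(2)]
    chi2 = 0.0
    for i in range(9):
        row = sum(obs[i])
        for j in range(2):
            exp = row * col[j] / n  # expected count under independence
            if exp > 0:
                chi2 += (obs[i][j] - exp) ** 2 / exp
    return chi2
```

Evaluating this for all pairs is O(m^2) in the number of SNPs m, and O(m^k) for k-locus interactions, which is why pruning strategies such as dynamic clustering matter.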
Cloud computing for detecting high-order genome-wide epistatic interaction via dynamic clustering
2014-01-01
Background: Taking advantage of high-throughput single nucleotide polymorphism (SNP) genotyping technology, large genome-wide association studies (GWASs) have been considered to hold promise for unravelling complex relationships between genotype and phenotype. At present, traditional single-locus-based methods are insufficient to detect multi-locus interactions, which broadly exist in complex traits. In addition, statistical tests for high-order epistatic interactions involving more than 2 SNPs pose computational and analytical challenges, because the computation increases exponentially as the number of SNPs in a combination grows. Results: In this paper, we provide a simple, fast and powerful method using dynamic clustering and cloud computing to detect genome-wide multi-locus epistatic interactions. We have constructed systematic experiments to compare power performance against some recently proposed algorithms, including TEAM, SNPRuler, EDCF and BOOST. Furthermore, we have applied our method to two real GWAS datasets, the age-related macular degeneration (AMD) and rheumatoid arthritis (RA) datasets, where we find some novel potential disease-related genetic factors that do not show up in detections of 2-locus epistatic interactions. Conclusions: Experimental results on simulated data demonstrate that our method is more powerful than some recently proposed methods on both two- and three-locus disease models. Our method has discovered many novel high-order associations that are significantly enriched in cases from the two real GWAS datasets. Moreover, the running times of the cloud implementation of our method for detecting two-locus interactions on the AMD and RA datasets are roughly 2 hours and 50 hours, respectively, on a cluster with forty small virtual machines. Therefore, we believe that our method is suitable and effective for the full-scale analysis of multi-locus epistatic interactions in GWAS. PMID:24717145
NASA Technical Reports Server (NTRS)
Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.
1987-01-01
Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
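The "universal dependence on threshold density" for random-phase fields mentioned above has the well-known Gaussian-field form (I state it here as standard background, not as a result of this paper). The genus per unit volume as a function of the threshold ν, the density contrast in units of its standard deviation, is

    g(ν) = A (1 − ν²) exp(−ν²/2),

where the amplitude A depends only on the power spectrum of the fluctuations. The curve is positive near ν = 0 (a sponge-like, multiply connected topology) and turns negative for |ν| > 1, where isolated clusters or voids dominate; deviations from this shape in the observed galaxy distribution signal non-Gaussian initial conditions or smoothing on too small a scale.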
Atomistic modeling of dropwise condensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikarwar, B. S., E-mail: bssikarwar@amity.edu; Singh, P. L.; Muralidhar, K.
The basic aim of the atomistic modeling of condensation of water is to determine the size of the stable cluster and connect phenomena occurring at the atomic scale to the macroscale. In this paper, a population balance model is described in terms of rate equations to obtain the number density distribution of the resulting clusters. The residence time is taken to be large enough that all the adatoms existing in the vapor phase have sufficient time to lose their latent heat and condense. The simulation assumes clusters of a given size to be formed from clusters of smaller sizes, but not by the disintegration of larger clusters. The largest stable cluster size in the number density distribution is taken to be representative of the minimum drop radius formed in a dropwise condensation process. A numerical confirmation of this result against predictions based on a thermodynamic model has been obtained. Results show that the number density distribution is sensitive to the surface diffusion coefficient and the rate of vapor flux impinging on the substrate. The minimum drop radius increases with the diffusion coefficient and the impinging vapor flux; however, the dependence is weak. The minimum drop radius predicted from thermodynamic considerations matches the prediction of the cluster model, though the former does not take into account the effect of surface properties on the nucleation phenomena. For a chemically passive surface, the diffusion coefficient and the residence time depend on the surface texture via the coefficient of friction. Thus, physical texturing provides a means of changing, within limits, the minimum drop radius. The study reveals that surface texturing at the scale of the minimum drop radius does not provide controllability of the macro-scale dropwise condensation at large timescales when a dynamic steady state is reached.
Circadian Rhythms in Socializing Propensity.
Zhang, Cheng; Phang, Chee Wei; Zeng, Xiaohua; Wang, Ximeng; Xu, Yunjie; Huang, Yun; Contractor, Noshir
2015-01-01
Using large-scale interaction data from a virtual world, we show that people's propensity to socialize (forming new social connections) varies by hour of the day. We arrive at our results by longitudinally tracking people's friend-adding activities in a virtual world. Specifically, we find that people are most likely to socialize during the evening, at approximately 8 p.m. and 12 a.m., and are least likely to do so in the morning, at approximately 8 a.m. Such patterns prevail on weekdays and weekends and are robust to variations in individual characteristics and geographical conditions.
Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems
NASA Astrophysics Data System (ADS)
Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard
Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It makes it possible to define the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
Stable clustering and the resolution of dissipationless cosmological N-body simulations
NASA Astrophysics Data System (ADS)
Benhaiem, David; Joyce, Michael; Sylos Labini, Francesco
2017-10-01
The determination of the resolution of cosmological N-body simulations, i.e. the range of scales over which quantities measured in them accurately represent the continuum limit, is an important open question. We address it here using scale-free models, for which self-similarity provides a powerful tool to control resolution. Such models also provide a robust testing ground for the so-called stable clustering approximation, which gives simple predictions for them. Studying large N-body simulations of such models with different force smoothing, we find that these two issues are in fact very closely related: our conclusion is that the accuracy of two-point statistics in the non-linear regime starts to degrade strongly around the scale at which their behaviour deviates from that predicted by the stable clustering hypothesis. Physically, the association of the two scales is in fact simple to understand: stable clustering fails to be a good approximation when there are strong interactions of structures (in particular merging), and it is precisely such non-linear processes which are sensitive to fluctuations at the smaller scales affected by discretization. Resolution may be further degraded if the short-distance gravitational smoothing scale is larger than the scale to which stable clustering can propagate. We examine in detail the very different conclusions of studies by Smith et al. and Widrow et al. and find that the strong deviations from stable clustering reported by these works are the results of over-optimistic assumptions about the scales resolved accurately by the measured power spectra, and the reliance on Fourier-space analysis. We emphasize the much poorer resolution obtained with the power spectrum compared to the two-point correlation function.
An Empirical Comparison of Variable Standardization Methods in Cluster Analysis.
ERIC Educational Resources Information Center
Schaffer, Catherine M.; Green, Paul E.
1996-01-01
The common marketing research practice of standardizing the columns of a persons-by-variables data matrix prior to clustering the entities corresponding to the rows was evaluated with 10 large-scale data sets. Results indicate that the column standardization practice may be problematic for some kinds of data that marketing researchers used for…
Testing Gravity and Cosmic Acceleration with Galaxy Clustering
NASA Astrophysics Data System (ADS)
Kazin, Eyal; Tinker, J.; Sanchez, A. G.; Blanton, M.
2012-01-01
The large-scale structure contains vast amounts of cosmological information that can help us understand the accelerating nature of the Universe and test gravity on large scales. Ongoing and future sky surveys are designed to probe these questions using various techniques applied to clustering measurements of galaxies. We present redshift distortion measurements of the Sloan Digital Sky Survey II Luminous Red Galaxy sample. We find that when the normalized quadrupole Q is combined with the projected correlation function wp(rp) and with cluster counts (Rapetti et al. 2010), the results are consistent with General Relativity. The advantage of combining Q and wp is the addition of bias information when using the Halo Occupation Distribution framework. We also present improvements to the standard technique of measuring Hubble expansion rates H(z) and angular diameter distances DA(z) when using the baryonic acoustic feature as a standard ruler. We introduce clustering wedges as an alternative basis to the multipole expansion and show that it yields similar constraints. This alternative basis serves as a useful technique to test for systematics, and ultimately to improve measurements of the cosmic acceleration.
Why build a virtual brain? Large-scale neural simulations as jump start for cognitive computing
NASA Astrophysics Data System (ADS)
Colombo, Matteo
2017-03-01
Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, it is controversial what the pay-offs are of pursuing this project. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates this claim and argues that the main challenge this era is facing is not the lack of biological realism. The challenge lies in identifying general neurocomputational principles for the design of artificial systems, which could display the robust flexibility characteristic of biological intelligence.
NASA Astrophysics Data System (ADS)
Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei
2017-07-01
Virtual globes play an important role in representing three-dimensional models of the Earth. To extend the functioning of a virtual globe beyond that of a "geobrowser", the accuracy of the geospatial data in the processing and representation should be of special concern for the scientific analysis and evaluation. In this study, we propose a method for the processing of large-scale terrain data for virtual globe visualization and analysis. The proposed method aims to construct a morphologically preserved multi-resolution triangulated irregular network (TIN) pyramid for virtual globes to accurately represent the landscape surface and simultaneously satisfy the demands of applications at different scales. By introducing cartographic principles, the TIN model in each layer is controlled with a data quality standard to formulize its level of detail generation. A point-additive algorithm is used to iteratively construct the multi-resolution TIN pyramid. The extracted landscape features are also incorporated to constrain the TIN structure, thus preserving the basic morphological shapes of the terrain surface at different levels. During the iterative construction process, the TIN in each layer is seamlessly partitioned based on a virtual node structure, and tiled with a global quadtree structure. Finally, an adaptive tessellation approach is adopted to eliminate terrain cracks in the real-time out-of-core spherical terrain rendering. The experiments undertaken in this study confirmed that the proposed method performs well in multi-resolution terrain representation, and produces high-quality underlying data that satisfy the demands of scientific analysis and evaluation.
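The global quadtree tiling mentioned above can be illustrated with a minimal tile-addressing sketch (a generic scheme over an equirectangular extent; the paper's exact tiling and virtual node structure are not specified in the abstract, so the function and quadrant labelling below are assumptions):

```python
def tile_key(lon, lat, level):
    """Global quadtree key for a point in [-180,180] x [-90,90].

    At each level the current tile is split into four children,
    labelled 0-3 by the horizontal (+1) and vertical (+2) half the
    point falls into; the concatenated digits address one tile.
    """
    x0, x1, y0, y1 = -180.0, 180.0, -90.0, 90.0
    key = ""
    for _ in range(level):
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        q = 0
        if lon >= xm:
            q += 1
            x0 = xm
        else:
            x1 = xm
        if lat >= ym:
            q += 2
            y0 = ym
        else:
            y1 = ym
        key += str(q)
    return key
```

Each additional digit halves the tile in both directions, so the key length doubles as a level-of-detail index, which is what lets a renderer request only the TIN tiles whose resolution matches the current view.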
NASA Astrophysics Data System (ADS)
Lamb, Derek A.
2016-10-01
While sunspots follow a well-defined pattern of emergence in space and time, small-scale flux emergence is assumed to occur randomly at all times in the quiet Sun. HMI's full-disk coverage, high cadence, spatial resolution, and duty cycle allow us to probe that basic assumption. Some case studies of emergence suggest that temporal clustering on spatial scales of 50-150 Mm may occur. If clustering is present, it could serve as a diagnostic of large-scale subsurface magnetic field structures. We present the results of a manual survey of small-scale flux emergence events over a short time period, and a statistical analysis addressing the question of whether these events show spatio-temporal behavior that is anything other than random.
Integration of Openstack cloud resources in BES III computing cluster
NASA Astrophysics Data System (ADS)
Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan
2017-10-01
Cloud computing provides a new technical means for data processing in high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and resource usage is static. In order to make it simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.
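Scheduling virtual machines "according to the job queue" can be sketched as a simple elastic-provisioning policy. This is a hypothetical illustration only (vpmanager's actual algorithm, parameters, and interfaces are not described in the abstract):

```python
import math

def plan_vm_actions(queued_jobs, running_vms, jobs_per_vm=4, max_vms=100):
    """Decide how many VMs to start or stop for the current queue depth.

    Policy sketch: target enough VMs to cover the queue at
    jobs_per_vm jobs each, capped by the cloud quota max_vms.
    Returns ("start"|"stop"|"hold", count).
    """
    target = min(max_vms, math.ceil(queued_jobs / jobs_per_vm))
    delta = target - running_vms
    if delta > 0:
        return ("start", delta)
    if delta < 0:
        return ("stop", -delta)
    return ("hold", 0)
```

A controller would evaluate this periodically against the batch system's queue, starting VMs when jobs back up and reclaiming them when queues drain, which is what converts statically partitioned queues into elastically shared resources.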
"Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation
ERIC Educational Resources Information Center
Sangpetch, Akkarit
2013-01-01
Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…
Estimating Ω from Galaxy Redshifts: Linear Flow Distortions and Nonlinear Clustering
NASA Astrophysics Data System (ADS)
Bromley, B. C.; Warren, M. S.; Zurek, W. H.
1997-02-01
We propose a method to determine the cosmic mass density Ω from redshift-space distortions induced by large-scale flows in the presence of nonlinear clustering. Nonlinear structures in redshift space, such as fingers of God, can contaminate distortions from linear flows on scales as large as several times the small-scale pairwise velocity dispersion σ_v. Following Peacock & Dodds, we work in the Fourier domain and propose a model to describe the anisotropy in the redshift-space power spectrum; tests with high-resolution numerical data demonstrate that the model is robust for both mass and biased galaxy halos on translinear scales and above. On the basis of this model, we propose an estimator of the linear growth parameter β = Ω^0.6/b, where b measures bias, derived from sampling functions that are tuned to eliminate distortions from nonlinear clustering. The measure is tested on the numerical data and found to recover the true value of β to within ~10%. An analysis of IRAS 1.2 Jy galaxies yields β = 0.8 (+0.4/-0.3) at a scale of 1000 km s^-1, which is close to optimal given the shot noise and finite size of the survey. This measurement is consistent with dynamical estimates of β derived from both real-space and redshift-space information. The importance of the method presented here is that nonlinear clustering effects are removed to enable linear correlation anisotropy measurements on scales approaching the translinear regime. We discuss implications for analyses of forthcoming optical redshift surveys in which the dispersion is more than a factor of 2 greater than in the IRAS data.
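The growth parameter relation β = Ω^0.6/b can be inverted directly to see what a measured β implies for the mass density. A one-line sketch (the function name is mine; the exponent 0.6 is the standard linear-theory approximation used in the abstract):

```python
def omega_from_beta(beta, bias=1.0):
    """Invert the linear growth relation beta = Omega**0.6 / b,
    giving Omega = (beta * b)**(1/0.6)."""
    return (beta * bias) ** (1.0 / 0.6)
```

For the quoted best-fit β = 0.8 with b = 1 this gives Ω ≈ 0.69, and the wide (+0.4/-0.3) interval on β translates into a correspondingly wide range on Ω, which is why the abstract emphasizes consistency with other dynamical estimates rather than a tight density constraint.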
NASA Astrophysics Data System (ADS)
Durkalec, A.; Le Fèvre, O.; Pollo, A.; de la Torre, S.; Cassata, P.; Garilli, B.; Le Brun, V.; Lemaux, B. C.; Maccagni, D.; Pentericci, L.; Tasca, L. A. M.; Thomas, R.; Vanzella, E.; Zamorani, G.; Zucca, E.; Amorín, R.; Bardelli, S.; Cassarà, L. P.; Castellano, M.; Cimatti, A.; Cucciati, O.; Fontana, A.; Giavalisco, M.; Grazian, A.; Hathi, N. P.; Ilbert, O.; Paltani, S.; Ribeiro, B.; Schaerer, D.; Scodeggio, M.; Sommariva, V.; Talia, M.; Tresse, L.; Vergani, D.; Capak, P.; Charlot, S.; Contini, T.; Cuby, J. G.; Dunlop, J.; Fotopoulou, S.; Koekemoer, A.; López-Sanjuan, C.; Mellier, Y.; Pforr, J.; Salvato, M.; Scoville, N.; Taniguchi, Y.; Wang, P. W.
2015-11-01
We investigate the evolution of galaxy clustering for galaxies in the redshift range 2.0
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance
Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering the data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. Besides, our algorithms use DTW to measure the distance between pairs of time series, which encourages higher clustering accuracy because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and are better than all the competitors in terms of clustering accuracy. PMID:29795600
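The DTW distance at the core of both abstracts is a standard dynamic program; the minimal sketch below shows the classic recurrence (this is the textbook algorithm, not the papers' optimized implementation, and the function name is mine):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.

    cost[i][j] holds the minimal cumulative cost of aligning a[:i]
    with b[:j]; each step may advance either sequence or both, which
    is what allows segments to be stretched or compressed.
    Runs in O(len(a) * len(b)) time.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # advance a
                                 cost[i][j - 1],      # advance b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]
```

Unlike Euclidean distance, DTW is defined for sequences of unequal length and assigns zero distance to a sequence and its time-stretched copy, which is exactly the invariance the clustering algorithms exploit.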
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance.
Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering the data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. Besides, our algorithms use DTW to measure the distance between pairs of time series, which encourages higher clustering accuracy because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and are better than all the competitors in terms of clustering accuracy.
Emergence of good conduct, scaling and Zipf laws in human behavioral sequences in an online world.
Thurner, Stefan; Szell, Michael; Sinatra, Roberta
2012-01-01
We study behavioral action sequences of players in a massive multiplayer online game. In their virtual life, players use eight basic actions which allow them to interact with each other. These actions are communication, trade, establishing or breaking friendships and enmities, attack, and punishment. We measure the probabilities for these actions conditional on previously taken and received actions, and find a dramatic increase of negative behavior immediately after receiving negative actions. Similarly, positive behavior is intensified by receiving positive actions. We observe a tendency towards antipersistence in communication sequences. Classifying actions as positive (good) and negative (bad) allows us to define binary 'world lines' of the lives of individuals. Positive and negative actions are persistent and occur in clusters, indicated by large scaling exponents α ~ 0.87 of the mean square displacement of the world lines. For all eight action types we find strong signs of high levels of repetitiveness, especially for negative actions. We partition behavioral sequences into segments of length n (behavioral 'words' and 'motifs') and study their statistical properties. We find two approximate power laws in the word ranking distribution, one with an exponent of κ ~ -1 for ranks up to 100, and another with a lower exponent for higher ranks. The Shannon n-tuple redundancy yields large values and increases with word length, further underscoring the non-trivial statistical properties of behavioral sequences. On the collective, societal level, the time series of particular actions per day can be understood by a simple mean-reverting log-normal model.
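A mean-reverting log-normal model of the kind mentioned in the last sentence can be simulated with an autoregressive update on the log of the daily count. This is a generic sketch; the paper's fitted parameters are not reproduced, so mu_log, theta, and sigma below are placeholder values:

```python
import math
import random

def simulate_daily_actions(n_days, mu_log=5.0, theta=0.2, sigma=0.1, seed=1):
    """Mean-reverting log-normal model for daily counts of one action.

    The log-count x relaxes toward mu_log at rate theta with Gaussian
    noise of scale sigma (a discrete Ornstein-Uhlenbeck update); the
    reported count is exp(x), so counts stay positive and fluctuate
    log-normally around exp(mu_log).
    """
    rng = random.Random(seed)
    x = mu_log
    series = []
    for _ in range(n_days):
        x += theta * (mu_log - x) + sigma * rng.gauss(0.0, 1.0)
        series.append(math.exp(x))
    return series
```

Mean reversion keeps the simulated counts hovering around a societal baseline rather than drifting, which is the qualitative feature the model is meant to capture at the collective level.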
NASA Astrophysics Data System (ADS)
Iqbal, Asif; Kale, Ruta; Majumdar, Subhabrata; Nath, Biman B.; Pandge, Mahadev; Sharma, Prateek; Malik, Manzoor A.; Raychaudhury, Somak
2017-12-01
Active Galactic Nuclei (AGN) feedback is regarded as an important non-gravitational process in galaxy clusters, providing useful constraints on large-scale structure formation. It modifies the structure and energetics of the intra-cluster medium (ICM) and hence its understanding is crucially needed in order to use clusters as high precision cosmological probes. In this context, particularly keeping in mind the upcoming high quality radio data expected from radio surveys like Square Kilometre Array (SKA) with its higher sensitivity, high spatial and spectral resolutions, we review our current understanding of AGN feedback, its cosmological implications and the impact that SKA can have in revolutionizing our understanding of AGN feedback in large-scale structures. Recent developments regarding the AGN outbursts and its possible contribution to excess entropy in the hot atmospheres of groups and clusters, its correlation with the feedback energy in ICM, quenching of cooling flows and the possible connection between cool core clusters and radio mini-halos, are discussed. We describe current major issues regarding modeling of AGN feedback and its impact on the surrounding medium. With regard to the future of AGN feedback studies, we examine the possible breakthroughs that can be expected from SKA observations. In the context of cluster cosmology, for example, we point out the importance of SKA observations for cluster mass calibration by noting that most of z>1 clusters discovered by eROSITA X-ray mission can be expected to be followed up through a 1000 hour SKA1-mid programme. Moreover, approximately 1000 radio mini halos and ˜ 2500 radio halos at z<0.6 can be potentially detected by SKA1 and SKA2 and used as tracers of galaxy clusters and determination of cluster selection function.
System-Level Virtualization for High Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian
2008-01-01
System-level virtualization has been a research topic since the 1970s but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g. Intel-VT, AMD-V). However, the majority of system-level virtualization projects are guided by the server consolidation market. As a result, current virtualization solutions appear not to be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance workloads, use migration techniques to relocate applications from failing machines, and isolate faulty systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.
NASA Astrophysics Data System (ADS)
Swenson, D. R.; Wu, A. T.; Degenkolb, E.; Insepov, Z.
2007-08-01
Sub-micron-scale surface roughness and contamination cause field emission that can lead to high-voltage breakdown of electrodes, and these are limiting factors in the development of high gradient RF technology. We are studying various Gas Cluster Ion Beam (GCIB) treatments to smooth, clean, etch and/or chemically alter electrode surfaces to allow higher fields and accelerating gradients, and to reduce the time and cost of conditioning high-voltage electrodes. For this paper, we have processed Nb, stainless steel and Ti electrode materials using beams of Ar, O2, or NF3 + O2 clusters with accelerating potentials up to 35 kV. Using a scanning field emission microscope (SFEM), we have repeatedly seen a dramatic reduction in the number of field emission sites on Nb coupons treated with GCIB. Smoothing effects on stainless steel and Ti substrates, evaluated using SEM and AFM imaging, show that 200-nm-wide polishing scratch marks are greatly attenuated. A 150-mm diameter GCIB-treated stainless steel electrode has shown virtually no DC field emission current at gradients over 20 MV/m.
Shah, Sohil Atul
2017-01-01
Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank. PMID:28851838
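The "smooth continuous objective based on robust statistics" described above can be illustrated with a small sketch: each data point gets a continuous representative, a quadratic data term keeps representatives near their points, and a bounded robust penalty pulls representatives of linked points together without merging distant clusters. This is an illustrative 1-D toy, not the authors' algorithm; the Geman-McClure penalty, the nearest-neighbour edge set, and plain gradient descent are all assumptions here.

```python
import numpy as np

def rcc_1d(x, edges, lam=10.0, mu=1.0, lr=0.02, steps=2000):
    """Minimize sum_i (x_i - u_i)^2 + lam * sum_(p,q) rho(u_p - u_q)
    over representatives u, where rho is the bounded Geman-McClure
    penalty rho(y) = mu*y^2 / (mu + y^2). Linked representatives are
    pulled together; the bounded penalty stops distant clusters merging."""
    u = x.astype(float).copy()
    for _ in range(steps):
        grad = 2.0 * (u - x)                        # data term
        for p, q in edges:
            d = u[p] - u[q]
            g = lam * 2.0 * mu**2 * d / (mu + d**2) ** 2  # rho'(d)
            grad[p] += g
            grad[q] -= g
        u -= lr * grad
    return u

# Two well-separated 1-D clusters; edges connect nearest neighbours only.
x = np.array([0.0, 0.1, 0.2, 10.0, 10.1, 10.2])
edges = [(0, 1), (1, 2), (3, 4), (4, 5)]
u = rcc_1d(x, edges)
```

After optimization the representatives within each cluster contract toward a common value while the two clusters stay far apart, which is the untangling behavior the abstract refers to.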
National randomized controlled trial of virtual house calls for Parkinson disease.
Beck, Christopher A; Beran, Denise B; Biglan, Kevin M; Boyd, Cynthia M; Dorsey, E Ray; Schmidt, Peter N; Simone, Richard; Willis, Allison W; Galifianakis, Nicholas B; Katz, Maya; Tanner, Caroline M; Dodenhoff, Kristen; Aldred, Jason; Carter, Julie; Fraser, Andrew; Jimenez-Shahed, Joohi; Hunter, Christine; Spindler, Meredith; Reichwein, Suzanne; Mari, Zoltan; Dunlop, Becky; Morgan, John C; McLane, Dedi; Hickey, Patrick; Gauger, Lisa; Richard, Irene Hegeman; Mejia, Nicte I; Bwala, Grace; Nance, Martha; Shih, Ludy C; Singer, Carlos; Vargas-Parra, Silvia; Zadikoff, Cindy; Okon, Natalia; Feigin, Andrew; Ayan, Jean; Vaughan, Christina; Pahwa, Rajesh; Dhall, Rohit; Hassan, Anhar; DeMello, Steven; Riggare, Sara S; Wicks, Paul; Achey, Meredith A; Elson, Molly J; Goldenthal, Steven; Keenan, H Tait; Korn, Ryan; Schwarz, Heidi; Sharma, Saloni; Stevenson, E Anna; Zhu, William
2017-09-12
To determine whether providing remote neurologic care into the homes of people with Parkinson disease (PD) is feasible, beneficial, and valuable. In a 1-year randomized controlled trial, we compared usual care to usual care supplemented by 4 virtual visits via video conferencing from a remote specialist into patients' homes. Primary outcome measures were feasibility, as measured by the proportion who completed at least one virtual visit and the proportion of virtual visits completed on time; and efficacy, as measured by the change in the Parkinson's Disease Questionnaire-39, a quality of life scale. Secondary outcomes included quality of care, caregiver burden, and time and travel savings. A total of 927 individuals indicated interest, 210 were enrolled, and 195 were randomized. Participants had recently seen a specialist (73%) and were largely college-educated (73%) and white (96%). Ninety-five (98% of the intervention group) completed at least one virtual visit, and 91% of 388 virtual visits were completed. Quality of life did not improve in those receiving virtual house calls (0.3 points worse on a 100-point scale; 95% confidence interval [CI] -2.0 to 2.7 points; p = 0.78) nor did quality of care or caregiver burden. Each virtual house call saved patients a median of 88 minutes (95% CI 70-120; p < 0.0001) and 38 miles per visit (95% CI 36-56; p < 0.0001). Providing remote neurologic care directly into the homes of people with PD was feasible and was neither more nor less efficacious than usual in-person care. Virtual house calls generated great interest and provided substantial convenience. NCT02038959. This study provides Class III evidence that for patients with PD, virtual house calls from a neurologist are feasible and do not significantly change quality of life compared to in-person visits. The study is rated Class III because it was not possible to mask patients to visit type. © 2017 American Academy of Neurology.
NASA Astrophysics Data System (ADS)
West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram
2014-02-01
Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternatively disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dynamics systems.
The challenge of turbulent acceleration of relativistic particles in the intra-cluster medium
NASA Astrophysics Data System (ADS)
Brunetti, Gianfranco
2016-01-01
Acceleration of cosmic-ray electrons (CRe) in the intra-cluster medium (ICM) is probed by radio observations that detect diffuse, megaparsec-scale synchrotron sources in a fraction of galaxy clusters. Giant radio halos are the most spectacular manifestations of non-thermal activity in the ICM and are currently explained by assuming that turbulence, driven during massive cluster-cluster mergers, reaccelerates CRe to energies of several giga-electron volts. This scenario implies a hierarchy of complex mechanisms in the ICM that drain energy from large scales into electromagnetic fluctuations in the plasma and collisionless mechanisms of particle acceleration at much smaller scales. In this paper we focus on the physics of acceleration by compressible turbulence. The spectrum and damping mechanisms of the electromagnetic fluctuations, and the mean free path (mfp) of CRe, are the most relevant ingredients that determine the efficiency of acceleration. These ingredients in the ICM are, however, poorly known, and we show that calculations of turbulent acceleration are sensitive to these uncertainties. On the other hand, this fact implies that the non-thermal properties of galaxy clusters probe the complex microphysics and the weakly collisional nature of the ICM.
iRODS: A Distributed Data Management Cyberinfrastructure for Observatories
NASA Astrophysics Data System (ADS)
Rajasekar, A.; Moore, R.; Vernon, F.
2007-12-01
Large-scale and long-term preservation of both observational and synthesized data requires a system that virtualizes data management concepts. A methodology is needed that can work across long distances in space (distribution) and long periods in time (preservation). The system needs to manage data stored on multiple types of storage systems, including new systems that become available in the future. This concept is called infrastructure independence, and is typically implemented through virtualization mechanisms. Data grids are built upon concepts of data and trust virtualization. These concepts enable the management of collections of data that are distributed across multiple institutions, stored on multiple types of storage systems, and accessed by multiple types of clients. Data virtualization ensures that the name spaces used to identify files, users, and storage systems are persistent, even when files are migrated onto future technology. This is required to preserve authenticity, the link between the record and its descriptive and provenance metadata. Trust virtualization ensures that access controls remain invariant as files are moved within the data grid. This is required to track the chain of custody of records over time. The Storage Resource Broker (http://www.sdsc.edu/srb) is one such data grid used in a wide variety of applications in earth and space sciences such as ROADNet (roadnet.ucsd.edu), SEEK (seek.ecoinformatics.org), GEON (www.geongrid.org) and NOAO (www.noao.edu). Recent extensions to data grids provide one more level of virtualization: policy or management virtualization. Management virtualization ensures that execution of management policies can be automated, and that rules can be created that verify assertions about the shared collections of data. When dealing with distributed large-scale data over long periods of time, the policies used to manage the data and provide assurances about the authenticity of the data become paramount.
The integrated Rule-Oriented Data System (iRODS) (http://irods.sdsc.edu) provides the mechanisms needed not only to describe management policies, but also to track how the policies are applied and their execution results. The iRODS data grid maps management policies to rules that control the execution of remote micro-services. As an example, a rule can be created that automatically creates a replica whenever a file is added to a specific collection, or that extracts the file's metadata and registers it in a searchable catalog. For the replication operation, the persistent state information consists of the replica location, the creation date, the owner, the replica size, etc. The mechanism used by iRODS for providing policy virtualization is based on well-defined functions, called micro-services, which are chained into alternative workflows using rules. A rule engine, based on the event-condition-action paradigm, executes the rule-based workflows after an event. Rules can be deferred to a pre-determined time or executed on a periodic basis. As the data management policies evolve, the iRODS system can implement new rules, new micro-services, and new state information (metadata content) needed to manage the new policies. Each sub-collection can be managed using a different set of policies. The discussion of the concepts in rule-based policy virtualization and its application to long-term and large-scale data management for observatories such as ORION and NEON will be the basis of the paper.
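The event-condition-action pattern described above can be sketched in a few lines. This is illustrative Python, not the actual iRODS rule language or micro-service API; the event name, condition, and micro-services (`replicate`, `register_metadata`) are all hypothetical stand-ins for the replication-and-catalog example in the abstract.

```python
class RuleEngine:
    """Toy event-condition-action engine: each rule fires a chain of
    micro-services when its event occurs and its condition holds."""

    def __init__(self):
        self.rules = []                     # (event, condition, actions)

    def add_rule(self, event, condition, actions):
        self.rules.append((event, condition, actions))

    def fire(self, event, ctx):
        for ev, cond, actions in self.rules:
            if ev == event and cond(ctx):
                for action in actions:      # chained micro-services
                    action(ctx)

state = {"replicas": {}, "catalog": {}}

def replicate(ctx):
    # Micro-service: record a second replica for the new file.
    state["replicas"][ctx["path"]] = ["storeA", "storeB"]

def register_metadata(ctx):
    # Micro-service: register searchable metadata for the file.
    state["catalog"][ctx["path"]] = {"owner": ctx["owner"], "size": ctx["size"]}

engine = RuleEngine()
engine.add_rule("put",
                lambda ctx: ctx["path"].startswith("/archive/"),
                [replicate, register_metadata])

# A put into the managed collection triggers the policy; one outside does not.
engine.fire("put", {"path": "/archive/obs001.dat", "owner": "alice", "size": 4096})
engine.fire("put", {"path": "/scratch/tmp.dat", "owner": "alice", "size": 10})
```

Swapping policies then amounts to registering different rules per collection, which mirrors the per-sub-collection policy management the abstract describes.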
NASA Astrophysics Data System (ADS)
Fukunaga, Naoto; Konishi, Katsuaki
2015-12-01
Poly(ethylene glycol) (PEG) has been widely used for the surface protection of inorganic nanoobjects because of its virtually `inert' nature, but little attention has been paid to its inherent electronic impacts on inorganic cores. Herein, we definitively show, through studies on optical properties of a series of PEG-modified Cd10Se4(SR)10 clusters, that the surrounding PEG environments can electronically affect the properties of the inorganic core. For the clusters with PEG units directly attached to an inorganic core (R = (CH2CH2O)nOCH3, 1-PEGn, n = 3, ~7, ~17, ~46), the absorption bands, associated with the low-energy transitions, continuously blue-shifted with the increasing PEG chain length. The chain length dependencies were also observed in the photoluminescence properties, particularly in the excitation spectral profiles. By combining the spectral features of several PEG17-modified clusters (2-Cm-PEG17 and 3) whose PEG and core units are separated by various alkyl chain-based spacers, it was demonstrated that sufficiently long PEG units, including PEG17 and PEG46, cause electronic perturbations in the cluster properties when they are arranged near the inorganic core. These unique effects of the long-PEG environments could be correlated with their large dipole moments, suggesting that the polarity of the proximal chemical environment is critical when affecting the electronic properties of the inorganic cluster core.
Electronic supplementary information (ESI) available: Details of synthetic procedures and characterisation data of the PEGylated thiols and clusters and additional absorption, photoluminescence emission and excitation spectral data. See DOI: 10.1039/c5nr06307h
The case for electron re-acceleration at galaxy cluster shocks
NASA Astrophysics Data System (ADS)
van Weeren, Reinout J.; Andrade-Santos, Felipe; Dawson, William A.; Golovich, Nathan; Lal, Dharam V.; Kang, Hyesung; Ryu, Dongsu; Brüggen, Marcus; Ogrean, Georgiana A.; Forman, William R.; Jones, Christine; Placco, Vinicius M.; Santucci, Rafael M.; Wittman, David; Jee, M. James; Kraft, Ralph P.; Sobral, David; Stroe, Andra; Fogarty, Kevin
2017-01-01
On the largest scales, the Universe consists of voids and filaments making up the cosmic web. Galaxy clusters are located at the knots in this web, at the intersection of filaments. Clusters grow through accretion from these large-scale filaments and by mergers with other clusters and groups. In a growing number of galaxy clusters, elongated Mpc-sized radio sources have been found [1, 2]. Also known as radio relics, these regions of diffuse radio emission are thought to trace relativistic electrons in the intracluster plasma accelerated by low-Mach-number shocks generated by cluster-cluster merger events [3]. A long-standing problem is how low-Mach-number shocks can accelerate electrons so efficiently to explain the observed radio relics. Here, we report the discovery of a direct connection between a radio relic and a radio galaxy in the merging galaxy cluster Abell 3411-3412 by combining radio, X-ray and optical observations. This discovery indicates that fossil relativistic electrons from active galactic nuclei are re-accelerated at cluster shocks. It also implies that radio galaxies play an important role in governing the non-thermal component of the intracluster medium in merging clusters.
Slow-Down in Diffusion in Crowded Protein Solutions Correlates with Transient Cluster Formation.
Nawrocki, Grzegorz; Wang, Po-Hung; Yu, Isseki; Sugita, Yuji; Feig, Michael
2017-12-14
For a long time, the effect of a crowded cellular environment on protein dynamics has been largely ignored. Recent experiments indicate that proteins diffuse more slowly in a living cell than in a diluted solution, and further studies suggest that the diffusion depends on the local surroundings. Here, detailed insight into how diffusion depends on protein-protein contacts is presented based on extensive all-atom molecular dynamics simulations of concentrated villin headpiece solutions. After force field adjustments in the form of increased protein-water interactions to reproduce experimental data, translational and rotational diffusion was analyzed in detail. Although internal protein dynamics remained largely unaltered, rotational diffusion was found to slow down more significantly than translational diffusion as the protein concentration increased. The decrease in diffusion is interpreted in terms of a transient formation of protein clusters. These clusters persist on sub-microsecond time scales and follow distributions that increasingly shift toward larger cluster size with increasing protein concentrations. Weighting diffusion coefficients estimated for different clusters extracted from the simulations with the distribution of clusters largely reproduces the overall observed diffusion rates, suggesting that transient cluster formation is a primary cause for a slow-down in diffusion upon crowding with other proteins.
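The last sentence's bookkeeping, weighting per-cluster diffusion coefficients by the cluster-size distribution to recover the overall rate, amounts to a weighted average. A minimal sketch with made-up numbers (the D ~ 1/size relation and the fractions are illustrative assumptions, not the paper's data):

```python
import numpy as np

def weighted_diffusion(cluster_fraction, d_cluster):
    """Overall diffusion coefficient as the cluster-size-weighted average:
    cluster_fraction[n] = fraction of proteins in clusters of size n+1,
    d_cluster[n] = diffusion coefficient estimated for that cluster size."""
    w = np.asarray(cluster_fraction, dtype=float)
    d = np.asarray(d_cluster, dtype=float)
    return float(np.sum(w * d) / np.sum(w))

# Hypothetical per-cluster coefficients: larger transient clusters diffuse
# more slowly (roughly D ~ 1/size here, purely for illustration).
sizes = np.arange(1, 6)
d_n = 1.0 / sizes

dilute = weighted_diffusion([1, 0, 0, 0, 0], d_n)          # monomers only
crowded = weighted_diffusion([0.4, 0.3, 0.15, 0.1, 0.05], d_n)
```

Shifting weight toward larger clusters lowers the average, reproducing the qualitative slow-down upon crowding that the abstract attributes to transient cluster formation.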
Performance Studies on Distributed Virtual Screening
Krüger, Jens; de la Garza, Luis; Kohlbacher, Oliver; Nagel, Wolfgang E.
2014-01-01
Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for those structures that may bind to a drug target. Virtual screening is typically performed by docking code, which often runs sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data, because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting that maximizes the speedup while considering overhead and available cores on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself, but also for structure preparation. Performance studies were conducted via the workflow-enabled science gateway MoSGrid (Molecular Simulation Grid). As input we used benchmark datasets for protein kinases. Our performance studies show that docking workflows can be made to scale almost linearly up to 500 concurrent processes distributed even over large DCIs, thus accelerating vHTS campaigns significantly. PMID:25032219
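The splitting trade-off (per-chunk overhead versus parallelism over a fixed pool of cores) can be explored with a back-of-the-envelope makespan model. This is a sketch under assumed constant per-ligand docking time and per-chunk staging overhead, not MoSGrid's actual scheduler; all timing numbers are illustrative.

```python
import math

def makespan(n_ligands, n_chunks, cores, t_dock=1.0, t_overhead=30.0):
    """Estimated wall time: chunks run in waves over the available cores,
    and each chunk pays a fixed preparation/staging overhead."""
    per_chunk = t_overhead + math.ceil(n_ligands / n_chunks) * t_dock
    waves = math.ceil(n_chunks / cores)
    return waves * per_chunk

def best_chunking(n_ligands, cores, **kw):
    """Brute-force search for the chunk count minimizing the makespan."""
    return min(range(1, n_ligands + 1),
               key=lambda c: makespan(n_ligands, c, cores, **kw))

serial = makespan(10000, 1, 1)         # one chunk on one core
c = best_chunking(10000, 500)          # 500 concurrent processes available
parallel = makespan(10000, c, 500)
speedup = serial / parallel
```

Under this model the optimum matches the core count (one wave, overhead paid once per core), consistent with the near-linear scaling up to 500 concurrent processes reported above; too many chunks adds extra waves of overhead, too few leaves cores idle.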
Highland, Krista B; Costanzo, Michelle E; Jovanovic, Tanja; Norrholm, Seth D; Ndiongue, Rochelle B; Reinhardt, Brian J; Rothbaum, Barbara; Rizzo, Albert A; Roy, Michael J
2015-01-01
Posttraumatic stress disorder (PTSD) symptoms can result in functional impairment among service members (SMs), even in those without a clinical diagnosis. The variability in outcomes may be related to underlying catecholamine mechanisms. Individuals with PTSD tend to have elevated basal catecholamine levels, though less is known regarding catecholamine responses to trauma-related stimuli. We assessed whether catecholamine responses to a virtual combat environment impact the relationship between PTSD symptom clusters and elements of functioning. Eighty-seven clinically healthy SMs, within 2 months after deployment to Iraq or Afghanistan, completed self-report measures, viewed virtual-reality (VR) combat sequences, and had sequential blood draws. Norepinephrine responses to VR combat exposure moderated the relationship between avoidance symptoms and scales of functioning including physical functioning, physical-role functioning, and vitality. Among those with high levels of avoidance, norepinephrine change was inversely associated with functional status, whereas a positive correlation was observed for those with low levels of avoidance. Our findings represent a novel use of a virtual environment to display combat-related stimuli to returning SMs to elucidate mind-body connections inherent in their responses. The insight gained improves our understanding of post-deployment symptoms and quality of life in SMs and may facilitate enhancements in treatment. Further research is needed to validate these findings in other populations and to define the implications for treatment effectiveness.
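Statistically, the moderation result described above corresponds to a regression with an interaction term: functioning regressed on avoidance, norepinephrine change, and their product. A sketch on synthetic data; every number, coefficient, and variable here is made up for illustration and nothing is drawn from the study itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
avoid = rng.normal(size=n)     # avoidance symptom score (z-scored, synthetic)
ne = rng.normal(size=n)        # norepinephrine change (z-scored, synthetic)

# Generating model: NE change is inversely related to functioning only
# when avoidance is high, i.e. a pure moderation (interaction) effect.
y = 50.0 - 1.0 * avoid - 2.0 * avoid * ne + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an interaction column.
X = np.column_stack([np.ones(n), avoid, ne, avoid * ne])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates the moderation effect; a significant nonzero value is
# what "norepinephrine response moderated the relationship" means formally.
```

With the moderation present, the simple slope of NE change on functioning is negative for high-avoidance individuals and positive for low-avoidance ones, matching the pattern reported above.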
Virtually Naked: Virtual Environment Reveals Sex-Dependent Nature of Skin Disclosure
Lomanowska, Anna M.; Guitton, Matthieu J.
2012-01-01
The human tendency to reveal or cover naked skin reflects a competition between the individual propensity for social interactions related to sexual appeal and interpersonal touch versus climatic, environmental, physical, and cultural constraints. However, due to the ubiquitous nature of these constraints, isolating on a large scale the spontaneous human tendency to reveal naked skin has remained impossible. Using the online 3-dimensional virtual world of Second Life, we examined spontaneous human skin-covering behavior unhindered by real-world climatic, environmental, and physical variables. Analysis of hundreds of avatars revealed that virtual females disclose substantially more naked skin than virtual males. This phenomenon was not related to avatar hypersexualization as evaluated by measurement of sexually dimorphic body proportions. Furthermore, analysis of skin-covering behavior of a population of culturally homogeneous avatars indicated that the propensity of female avatars to reveal naked skin persisted despite explicit cultural norms promoting less revealing attire. These findings have implications for further understanding how sex-specific aspects of skin disclosure influence human social interactions in both virtual and real settings. PMID:23300580
NASA Astrophysics Data System (ADS)
Terzopoulos, Demetri; Qureshi, Faisal Z.
Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
Integration of virtualized worker nodes in standard batch systems
NASA Astrophysics Data System (ADS)
Büge, Volker; Hessling, Hermann; Kemp, Yves; Kunze, Marcel; Oberst, Oliver; Quast, Günter; Scheurer, Armin; Synge, Owen
2010-04-01
Current experiments in HEP use only a limited number of operating system flavours. Their software might only be validated on one single OS platform. Resource providers might have other operating systems of choice for the installation of the batch infrastructure. This is especially the case if a cluster is shared with other communities, or with communities that have stricter security requirements. One solution would be to statically divide the cluster into separate sub-clusters. In such a scenario, no opportunistic distribution of the load can be achieved, resulting in poor overall utilization efficiency. Another approach is to make the batch system aware of virtualization and to provide each community with its favoured operating system in a virtual machine. Here, the scheduler has full flexibility, resulting in a better overall efficiency of the resources. In our contribution, we present a lightweight concept for the integration of virtual worker nodes into standard batch systems. The virtual machines are started on the worker nodes just before jobs are executed there. No meta-scheduling is introduced. We demonstrate two prototype implementations, one based on the Sun Grid Engine (SGE), the other using Maui/Torque as a batch system. Both solutions support local as well as Grid job submission. The hypervisors currently used are Xen and KVM; a port to another hypervisor is easily envisageable. To better handle different virtual machines on the physical host, the management solution VmImageManager was developed. We will present first experience from running the two prototype implementations. In a last part, we will show the potential future use of this lightweight concept when integrated into high-level (i.e. Grid) workflows.
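The "VM started just before the job runs" pattern above can be sketched as a prologue/job/epilogue flow: the prologue boots the image a community's software is validated on, the job runs inside, and the epilogue tears the VM down. The hypervisor is stubbed here, and the community-to-image mapping is hypothetical; a real setup would shell out to Xen/KVM tooling (e.g. via libvirt) instead.

```python
IMAGE_FOR_COMMUNITY = {            # illustrative mapping, not a real config
    "cms": "sl5-cms.img",
    "theory": "debian-theory.img",
}

class StubHypervisor:
    """Stand-in for Xen/KVM management; tracks which VMs are running."""
    def __init__(self):
        self.running = []

    def boot(self, image):
        self.running.append(image)
        return image

    def shutdown(self, vm):
        self.running.remove(vm)

def run_job(hv, community, job):
    """Prologue -> job -> epilogue, all on the physical worker node.
    No meta-scheduler is involved: the batch system placed the job here,
    and the VM exists only for the job's lifetime."""
    vm = hv.boot(IMAGE_FOR_COMMUNITY[community])   # prologue
    try:
        return job(vm)                             # job runs inside the VM
    finally:
        hv.shutdown(vm)                            # epilogue, even on failure

hv = StubHypervisor()
result = run_job(hv, "cms", lambda vm: f"ran on {vm}")
```

Because the VM lifecycle is tied to the job's prologue and epilogue, the scheduler keeps full flexibility over node placement, which is the efficiency argument the abstract makes against static sub-clusters.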
The astrophysics of crowded places.
Davies, Melvyn
2002-12-15
Today the Sun is in a relatively uncrowded place. The distance between it and the nearest other star is relatively large (about 200,000 times the Earth-Sun distance!). This is beneficial to life on Earth; a close encounter with another star is extremely unlikely. Such encounters would either remove the Earth from its orbit around the Sun or leave it on an eccentric orbit similar to a comet's. But the Sun was not formed in isolation. It was born within a more-crowded cluster of perhaps a few hundred stars. As the surrounding gas evaporated away, the cluster itself evaporated too, dispersing its stars into the Galaxy. Virtually all stars in the Galaxy share this history, and here I will describe the role of 'clusterness' in a star's life. Stars are often formed in larger stellar clusters (known as open and globular clusters), some of which are still around today. I will focus on stars in globular clusters and describe how the interactions between stars in these clusters may explain the zoo of stellar exotica which have recently been observed with instruments such as the Hubble Space Telescope and the X-ray telescopes XMM-Newton and Chandra. In recent years, myriad planets orbiting stars other than the Sun--the so-called 'extrasolar' planets--have been discovered. I will describe how a crowded environment will affect such planetary systems and may in fact explain some of their mysterious properties.
Helium segregation on surfaces of plasma-exposed tungsten
NASA Astrophysics Data System (ADS)
Maroudas, Dimitrios; Blondel, Sophie; Hu, Lin; Hammond, Karl D.; Wirth, Brian D.
2016-02-01
We report a hierarchical multi-scale modeling study of implanted helium segregation on surfaces of tungsten, considered as a plasma-facing component in nuclear fusion reactors. We employ a hierarchy of atomic-scale simulations based on a reliable interatomic interaction potential, including molecular-statics simulations to understand the origin of helium surface segregation, targeted molecular-dynamics (MD) simulations of near-surface cluster reactions, and large-scale MD simulations of implanted helium evolution in plasma-exposed tungsten. We find that small, mobile He_n (1 ≤ n ≤ 7) clusters in the near-surface region are attracted to the surface due to an elastic interaction force that provides the thermodynamic driving force for surface segregation. This elastic interaction force induces drift fluxes of these mobile He_n clusters, which increase substantially as the migrating clusters approach the surface, facilitating helium segregation on the surface. Moreover, the clusters' drift toward the surface enables cluster reactions, most importantly trap mutation, in the near-surface region at rates much higher than in the bulk material. These near-surface cluster dynamics have significant effects on the surface morphology, near-surface defect structures, and the amount of helium retained in the material upon plasma exposure. We integrate the findings of such atomic-scale simulations into a properly parameterized and validated spatially dependent, continuum-scale reaction-diffusion cluster dynamics model, capable of predicting implanted helium evolution, surface segregation, and its near-surface effects in tungsten. This cluster-dynamics model sets the stage for development of fully atomistically informed coarse-grained models for computationally efficient simulation predictions of helium surface segregation, as well as helium retention and surface morphological evolution, toward optimal design of plasma-facing components.
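A toy 1-D finite-volume sketch of the continuum picture above: mobile clusters diffuse while a drift term, growing stronger near the surface, transports them toward an absorbing boundary at x = 0 (segregation). The 1/x² drift form, all parameter values, and the single-species treatment are illustrative assumptions, not the paper's parameterization, and cluster reactions are omitted.

```python
import numpy as np

def evolve(n_cells=100, steps=4000, D=0.5, dt=0.1, dx=1.0, drift=True):
    """Explicit drift-diffusion with an absorbing surface at x = 0.
    Returns the amount of helium segregated on the surface."""
    c = np.ones(n_cells)                    # uniform implanted-He profile
    x = np.arange(n_cells) * dx + dx        # cell centers, surface at x = 0
    # Surface-directed drift (negative = toward surface), strongest nearby.
    v = -2.0 / x**2 if drift else np.zeros(n_cells)
    absorbed = 0.0
    for _ in range(steps):
        flux = np.zeros(n_cells + 1)        # flux[i] across face i-1/2
        flux[1:-1] = -D * (c[1:] - c[:-1]) / dx      # diffusion
        flux[1:-1] += v[1:] * c[1:]                  # upwind advection (v<0)
        flux[0] = -D * (c[0] - 0.0) / dx + v[0] * c[0]  # absorbing surface
        c += -dt * (flux[1:] - flux[:-1]) / dx       # conservative update
        absorbed += -dt * flux[0]           # He deposited on the surface
    return absorbed

with_drift = evolve(drift=True)
no_drift = evolve(drift=False)
```

Even in this crude model, adding the surface-directed drift increases the segregated amount relative to pure diffusion, which is the qualitative mechanism the atomistic simulations above identify.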
The Large Local Hole in the Galaxy Distribution: The 2MASS Galaxy Angular Power Spectrum
NASA Astrophysics Data System (ADS)
Frith, W. J.; Outram, P. J.; Shanks, T.
2005-06-01
We present new evidence for a large deficiency in the local galaxy distribution situated in the ~4000 deg² APM survey area. We use models guided by the 2dF Galaxy Redshift Survey (2dFGRS) n(z) as a probe of the underlying large-scale structure. We first check the usefulness of this technique by comparing the 2dFGRS n(z) model prediction with the K-band and B-band number counts extracted from the 2MASS and 2dFGRS parent catalogues over the 2dFGRS Northern and Southern declination strips, before turning to a comparison with the APM counts. We find that the APM counts in both the B and K bands indicate a deficiency in the local galaxy distribution of ~30% to z ≈ 0.1 over the entire APM survey area. We examine the implied significance of such a large local hole, considering several possible forms for the real-space correlation function. We find that such a deficiency in the APM survey area indicates an excess of power at large scales over what is expected from the observed 2dFGRS correlation function or predicted from ΛCDM Hubble Volume mock catalogues. In order to check further the clustering at large scales in the 2MASS data, we have calculated the angular power spectrum for 2MASS galaxies. Although in the linear regime (l < 30) ΛCDM models can give a good fit to the 2MASS angular power spectrum, over a wider range (l < 100) the power spectrum from Hubble Volume mock catalogues suggests that scale-dependent bias may be needed for ΛCDM to fit. However, the modest increase in large-scale power observed in the 2MASS angular power spectrum is still not enough to explain the local hole. If the APM survey area really is 25% deficient in galaxies out to z ≈ 0.1, explanations for the disagreement with observed galaxy clustering statistics include the possibilities that the galaxy clustering is non-Gaussian on large scales or that the 2MASS volume is still too small to represent a 'fair sample' of the Universe.
Extending the 2dFGRS redshift survey over the whole APM area would resolve many of the remaining questions about the existence and interpretation of this local hole.
Statistical Analysis of Large Scale Structure by the Discrete Wavelet Transform
NASA Astrophysics Data System (ADS)
Pando, Jesus
1997-10-01
The discrete wavelet transform (DWT) is developed as a general statistical tool for the study of large scale structures (LSS) in astrophysics. The DWT is used in all aspects of structure identification including cluster analysis, spectrum and two-point correlation studies, scale-scale correlation analysis and to measure deviations from Gaussian behavior. The techniques developed are demonstrated on 'academic' signals, on simulated models of the Lyman-α (Ly-α) forests, and on observational data of the Ly-α forests. This technique can detect clustering in the Ly-α clouds where traditional techniques such as the two-point correlation function have failed. The position and strength of these clusters in both real and simulated data are determined and it is shown that clusters exist on scales as large as at least 20 h^-1 Mpc at significance levels of 2-4 σ. Furthermore, it is found that the strength distribution of the clusters can be used to distinguish between real data and simulated samples even where other traditional methods have failed to detect differences. Second, a method for measuring the power spectrum of a density field using the DWT is developed. All common features determined by the usual Fourier power spectrum can be calculated by the DWT. These features, such as the index of a power law or typical scales, can be detected even when the samples are geometrically complex, the samples are incomplete, or the mean density on larger scales is not known (the infrared uncertainty). Using this method the spectra of Ly-α forests in both simulated and real samples are calculated. Third, a method for measuring hierarchical clustering is introduced. Because hierarchical evolution is characterized by a set of rules of how larger dark matter halos are formed by the merging of smaller halos, scale-scale correlations of the density field should be one of the most sensitive quantities in determining the merging history.
We show that these correlations can be completely determined by the correlations between discrete wavelet coefficients on adjacent scales and at nearly the same spatial position, C_{j,j+1}^{2·2}. Scale-scale correlations on two samples of the QSO Ly-α forest absorption spectra are computed. Lastly, higher order statistics are developed to detect deviations from Gaussian behavior. These higher order statistics are necessary to fully characterize the Ly-α forests because the usual 2nd order statistics, such as the two-point correlation function or power spectrum, give inconclusive results. It is shown how this technique takes advantage of the locality of the DWT to circumvent the central limit theorem. A non-Gaussian spectrum is defined and this spectrum reveals not only the magnitude, but the scales of non-Gaussianity. When applied to simulated and observational samples of the Ly-α clouds, it is found that different popular models of structure formation have different spectra while two independent observational data sets have the same spectra. Moreover, the non-Gaussian spectra of real data sets are significantly different from the spectra of various possible random samples. (Abstract shortened by UMI.)
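As a concrete illustration of the per-scale statistics the thesis describes, the sketch below (our own toy, not the author's code) computes a Haar DWT of a 1-D field and a simple "wavelet power spectrum": the mean squared wavelet coefficient at each scale j.

```python
# Toy Haar DWT and per-scale power spectrum (illustrative sketch only).
import math

def haar_dwt(signal):
    """Return Haar detail coefficients grouped by scale, finest scale first."""
    n = len(signal)
    assert n & (n - 1) == 0, "length must be a power of two"
    coeffs_by_scale = []
    approx = list(signal)
    while len(approx) > 1:
        # Detail coefficients pick out structure at the current scale.
        detail = [(approx[2 * i] - approx[2 * i + 1]) / math.sqrt(2)
                  for i in range(len(approx) // 2)]
        # Approximation coefficients carry the smoothed field to the next scale.
        approx = [(approx[2 * i] + approx[2 * i + 1]) / math.sqrt(2)
                  for i in range(len(approx) // 2)]
        coeffs_by_scale.append(detail)
    return coeffs_by_scale

def wavelet_power(signal):
    """Per-scale power: mean squared wavelet coefficient at each scale."""
    return [sum(c * c for c in detail) / len(detail)
            for detail in haar_dwt(signal)]
```

Applied to a pixelized Ly-α flux field, power concentrated at one j flags clustering at that scale; because each coefficient is local, incomplete or geometrically complex samples only remove coefficients rather than invalidating the estimator.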
Behavioral self-organization underlies the resilience of a coastal ecosystem.
de Paoli, Hélène; van der Heide, Tjisse; van den Berg, Aniek; Silliman, Brian R; Herman, Peter M J; van de Koppel, Johan
2017-07-25
Self-organized spatial patterns occur in many terrestrial, aquatic, and marine ecosystems. Theoretical models and observational studies suggest self-organization, the formation of patterns due to ecological interactions, is critical for enhanced ecosystem resilience. However, experimental tests of this cross-ecosystem theory are lacking. In this study, we experimentally test the hypothesis that self-organized pattern formation improves the persistence of mussel beds (Mytilus edulis) on intertidal flats. In natural beds, mussels generate self-organized patterns at two different spatial scales: regularly spaced clusters of mussels at centimeter scale driven by behavioral aggregation and large-scale, regularly spaced bands at meter scale driven by ecological feedback mechanisms. To test for the relative importance of these two spatial scales of self-organization on mussel bed persistence, we conducted field manipulations in which we factorially constructed small-scale and/or large-scale patterns. Our results revealed that both forms of self-organization enhanced the persistence of the constructed mussel beds in comparison to nonorganized beds. Small-scale, behaviorally driven cluster patterns were found to be crucial for persistence, and thus resistance to wave disturbance, whereas large-scale, self-organized patterns facilitated reformation of small-scale patterns if mussels were dislodged. This study provides experimental evidence that self-organization can be paramount to enhancing ecosystem persistence. We conclude that ecosystems with self-organized spatial patterns are likely to benefit greatly from conservation and restoration actions that use the emergent effects of self-organization to increase ecosystem resistance to disturbance.
Behavioral self-organization underlies the resilience of a coastal ecosystem
de Paoli, Hélène; van der Heide, Tjisse; van den Berg, Aniek; Silliman, Brian R.; Herman, Peter M. J.
2017-01-01
Self-organized spatial patterns occur in many terrestrial, aquatic, and marine ecosystems. Theoretical models and observational studies suggest self-organization, the formation of patterns due to ecological interactions, is critical for enhanced ecosystem resilience. However, experimental tests of this cross-ecosystem theory are lacking. In this study, we experimentally test the hypothesis that self-organized pattern formation improves the persistence of mussel beds (Mytilus edulis) on intertidal flats. In natural beds, mussels generate self-organized patterns at two different spatial scales: regularly spaced clusters of mussels at centimeter scale driven by behavioral aggregation and large-scale, regularly spaced bands at meter scale driven by ecological feedback mechanisms. To test for the relative importance of these two spatial scales of self-organization on mussel bed persistence, we conducted field manipulations in which we factorially constructed small-scale and/or large-scale patterns. Our results revealed that both forms of self-organization enhanced the persistence of the constructed mussel beds in comparison to nonorganized beds. Small-scale, behaviorally driven cluster patterns were found to be crucial for persistence, and thus resistance to wave disturbance, whereas large-scale, self-organized patterns facilitated reformation of small-scale patterns if mussels were dislodged. This study provides experimental evidence that self-organization can be paramount to enhancing ecosystem persistence. We conclude that ecosystems with self-organized spatial patterns are likely to benefit greatly from conservation and restoration actions that use the emergent effects of self-organization to increase ecosystem resistance to disturbance. PMID:28696313
X-ray morphological study of the ESZ sample
NASA Astrophysics Data System (ADS)
Lovisari, L.; Forman, W.; Jones, C.; Andrade-Santos, F.; Democles, J.; Pratt, G.; Ettori, S.; Arnaud, M.; Randall, S.; Kraft, R.
2017-10-01
An accurate knowledge of the scaling relations between X-ray observables and cluster mass is a crucial step for studies that aim to constrain cosmological parameters using galaxy clusters. The measure of the dynamical state of the systems offers important information to obtain precise scaling relations and understand their scatter. Unfortunately, characterizing the dynamical state of a galaxy cluster requires access to a large set of multi-wavelength information that is available for only a few individual systems. An alternative is to compute well-defined morphological parameters from the relatively cheap X-ray images and profiles. Because of projection effects, no single method works well in all cases, and a combination of parameters is more effective at quantifying the level of substructure. I will present the cluster morphologies that we derived for the ESZ sample. I will show their dependence on different cluster properties such as total mass, redshift, and luminosity, and how they differ from those obtained for X-ray selected clusters.
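One commonly used image-based morphological parameter is the surface-brightness concentration, c = S(r < 0.1R) / S(r < R). The sketch below is our own illustration of that definition on a pixelated count image (it is not the authors' pipeline, and the aperture convention is an assumption):

```python
# Surface-brightness concentration on a 2-D count image (illustrative sketch).
def concentration(image, cx, cy, radius):
    """image: 2-D list of counts; (cx, cy): centre pixel; radius: outer aperture (pixels)."""
    inner = outer = 0.0
    for y, row in enumerate(image):
        for x, counts in enumerate(row):
            r2 = (x - cx) ** 2 + (y - cy) ** 2
            if r2 <= radius ** 2:
                outer += counts            # flux inside the full aperture
                if r2 <= (0.1 * radius) ** 2:
                    inner += counts        # flux inside the central 10% aperture
    return inner / outer if outer > 0 else 0.0
```

Relaxed, cool-core clusters yield high c, while merging systems with flat cores yield low c; combining c with, e.g., a centroid-shift parameter mitigates the projection effects noted above.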
NASA Astrophysics Data System (ADS)
Vavryčuk, Václav
2018-07-01
A cosmological model, in which the cosmic microwave background (CMB) is a thermal radiation of intergalactic dust instead of a relic radiation of the big bang, is revived and revisited. The model suggests that a virtually transparent local Universe becomes considerably opaque at redshifts z > 2-3. Such opacity is difficult to detect in the Type Ia supernova data, but is confirmed using quasar data. The opacity steeply increases with redshift because of a high proper density of intergalactic dust in the previous epochs. The temperature of intergalactic dust increases as (1 + z) and exactly compensates the change of wavelengths due to redshift, so that the dust radiation appears as blackbody radiation with a single temperature. The predicted dust temperature is TD = 2.776 K, which differs from the CMB temperature by only 1.9 per cent, and the predicted ratio between the total CMB and extragalactic background light (EBL) intensities is 13.4, which is close to 12.5 obtained from observations. The CMB temperature fluctuations are caused by EBL fluctuations produced by galaxy clusters and voids in the Universe. The polarization anomalies of the CMB correlated with temperature anisotropies are caused by the polarized thermal emission of needle-shaped conducting dust grains aligned by large-scale magnetic fields around clusters and voids. A strong decline of the luminosity density for z > 4 is interpreted as the result of high opacity of the Universe rather than of a decline of the global stellar mass density at high redshifts.
NASA Astrophysics Data System (ADS)
Vavryčuk, Václav
2018-04-01
A cosmological model, in which the cosmic microwave background (CMB) is a thermal radiation of intergalactic dust instead of a relic radiation of the Big Bang, is revived and revisited. The model suggests that a virtually transparent local Universe becomes considerably opaque at redshifts z > 2 - 3. Such opacity is difficult to detect in the Type Ia supernova data, but is confirmed using quasar data. The opacity steeply increases with redshift because of a high proper density of intergalactic dust in the previous epochs. The temperature of intergalactic dust increases as (1 + z) and exactly compensates the change of wavelengths due to redshift, so that the dust radiation appears as blackbody radiation with a single temperature. The predicted dust temperature is TD = 2.776 K, which differs from the CMB temperature by only 1.9%, and the predicted ratio between the total CMB and EBL intensities is 13.4, which is close to 12.5 obtained from observations. The CMB temperature fluctuations are caused by EBL fluctuations produced by galaxy clusters and voids in the Universe. The polarization anomalies of the CMB correlated with temperature anisotropies are caused by the polarized thermal emission of needle-shaped conducting dust grains aligned by large-scale magnetic fields around clusters and voids. A strong decline of the luminosity density for z > 4 is interpreted as the result of high opacity of the Universe rather than of a decline of the global stellar mass density at high redshifts.
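The claimed compensation follows from standard radiative transfer: a blackbody at redshift z is observed as I_obs(ν) = B((1+z)ν, T) / (1+z)³, so dust at T_D = T₀(1+z) looks exactly like a local blackbody at T₀. A numerical check of that identity (our own illustration, using the abstract's T_D value):

```python
# Check: blackbody at T0*(1+z) seen from redshift z equals local blackbody at T0.
import math

H = 6.626e-34   # Planck constant [J s]
K = 1.381e-23   # Boltzmann constant [J/K]
C = 2.998e8     # speed of light [m/s]

def planck(nu, temp):
    """Blackbody specific intensity B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2 * H * nu**3 / C**2 / (math.exp(H * nu / (K * temp)) - 1)

def observed(nu, z, temp_emit):
    """Observed intensity at frequency nu from a blackbody emitting at redshift z."""
    return planck((1 + z) * nu, temp_emit) / (1 + z) ** 3

T0 = 2.776      # predicted dust temperature from the abstract [K]
z = 2.5
nu = 160e9      # near the CMB peak [Hz]
ratio = observed(nu, z, T0 * (1 + z)) / planck(nu, T0)  # -> 1 to machine precision
```

Algebraically, the (1+z)³ from ν³ cancels the dimming factor and the exponent hν(1+z)/kT₀(1+z) is redshift-independent, which is why the summed emission from all epochs still looks like a single-temperature blackbody.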
COLAcode: COmoving Lagrangian Acceleration code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin V.
2016-02-01
COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.
Multi-Parent Clustering Algorithms from Stochastic Grammar Data Models
NASA Technical Reports Server (NTRS)
Mjolsness, Eric; Castano, Rebecca; Gray, Alexander
1999-01-01
We introduce a statistical data model and an associated optimization-based clustering algorithm which allows data vectors to belong to zero, one or several "parent" clusters. For each data vector the algorithm makes a discrete decision among these alternatives. Thus, a recursive version of this algorithm would place data clusters in a Directed Acyclic Graph rather than a tree. We test the algorithm with synthetic data generated according to the statistical data model. We also illustrate the algorithm using real data from large-scale gene expression assays.
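The discrete decision described above can be sketched as a small subset-selection problem. The code below is our simplified stand-in for the paper's model (the cost function, the mean-of-parents reconstruction, and the fixed outlier cost are all assumptions of this sketch): each vector is assigned the subset of "parent" centres, possibly empty or with several members, that minimizes reconstruction error plus a per-parent penalty.

```python
# Multi-parent assignment by exhaustive subset search (illustrative sketch).
from itertools import combinations

def assign_parents(vector, centres, lam=0.5):
    """Return the tuple of centre indices selected as parents for this vector."""
    dims = range(len(vector))

    def cost(subset):
        if not subset:
            # Zero-parent (outlier) option: fixed cost, an assumption of this sketch.
            return 4 * lam
        # Reconstruct the vector as the mean of its chosen parents.
        mean = [sum(centres[i][d] for i in subset) / len(subset) for d in dims]
        err = sum((vector[d] - mean[d]) ** 2 for d in dims)
        return err + lam * len(subset)   # penalty per parent

    return min((cost(s), s)
               for k in range(len(centres) + 1)
               for s in combinations(range(len(centres)), k))[1]
```

A point near one centre picks that single parent, a point midway between two centres picks both (placing clusters in a DAG rather than a tree when applied recursively), and a remote point picks none.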
NASA Astrophysics Data System (ADS)
Hullo, J.-F.; Thibault, G.; Boucheny, C.
2015-02-01
In a context of increased maintenance operations and workers generational renewal, a nuclear owner and operator like Electricité de France (EDF) is interested in the scaling up of tools and methods of "as-built virtual reality" for larger buildings and wider audiences. However, acquisition and sharing of as-built data on a large scale (large and complex multi-floored buildings) challenge current scientific and technical capacities. In this paper, we first present a state of the art of scanning tools and methods for industrial plants with very complex architecture. Then, we introduce the inner characteristics of the multi-sensor scanning and visualization of the interior of the most complex building of a power plant: a nuclear reactor building. We introduce several developments that made possible a first complete survey of such a large building, spanning acquisition, processing and fusion of multiple data sources (3D laser scans, total-station survey, RGB panoramic, 2D floor plans, 3D CAD as-built models). In addition, we present the concepts of a smart application developed for the painless exploration of the whole dataset. The goal of this application is to help professionals, unfamiliar with the manipulation of such datasets, to take into account spatial constraints induced by the building complexity while preparing maintenance operations. Finally, we discuss the main feedback from this large experiment, the remaining issues for the generalization of such large-scale surveys and the future technical and scientific challenges in the field of industrial "virtual reality".
DE-FG02-04ER25606 Identity Federation and Policy Management Guide: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphrey, Marty A.
The goal of this 3-year project was to facilitate a more productive dynamic matching between resource providers and resource consumers in Grid environments by explicitly specifying policies. There were broadly two problems being addressed by this project. First, there was a lack of an Open Grid Services Architecture (OGSA)-compliant mechanism for expressing, storing and retrieving user policies and Virtual Organization (VO) policies. Second, there was a lack of tools to resolve and enforce policies in the Open Grid Services Architecture. To address these problems, our overall approach in this project was to make all policies explicit (e.g., virtual organization policies, resource provider policies, resource consumer policies), thereby facilitating policy matching and policy negotiation. Policies defined on a per-user basis were created, held, and updated in MyPolMan, thereby enabling a Grid user to centralize (where appropriate) and manage his/her policies. Organizationally, the corresponding service was VOPolMan, in which the policies of the Virtual Organization are expressed, managed, and dynamically consulted. Overall, we successfully defined, prototyped, and evaluated policy-based resource management and access control for OGSA-based Grids. This DOE project partially supported 17 peer-reviewed publications on a number of different topics: General security for Grids, credential management, Web services/OGSA/OGSI, policy-based grid authorization (for remote execution and for access to information), policy-directed Grid data movement/placement, policies for large-scale virtual organizations, and large-scale policy-aware grid architectures. In addition to supporting the PI, this project partially supported the training of 5 PhD students.
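The core idea of explicit policy matching can be sketched in a few lines. The code below is a hypothetical illustration only (it is not the project's MyPolMan/VOPolMan services, and the dictionary-based policy format is our assumption): a request is admitted only when the user's, the VO's, and the provider's policies all agree.

```python
# Explicit policy matching across all parties (illustrative sketch).
def satisfies(request, policy):
    """True if every constraint in `policy` admits the request's value.

    A constraint is either a set of allowed values or a predicate callable.
    """
    for key, allowed in policy.items():
        value = request.get(key)
        if callable(allowed):
            if not allowed(value):
                return False
        elif value not in allowed:
            return False
    return True

def match(request, *policies):
    """Making all policies explicit lets matching be a simple conjunction."""
    return all(satisfies(request, p) for p in policies)
```

Usage: `match(req, user_policy, vo_policy, provider_policy)` evaluates the same request against each party's explicit policy, which is the negotiation-friendly property the project aimed for.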
Modeling and visualizing borehole information on virtual globes using KML
NASA Astrophysics Data System (ADS)
Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing
2014-01-01
Advances in virtual globes and Keyhole Markup Language (KML) are providing Earth scientists with universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts and tube models representing strata. Subsequently, the level-of-detail based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, is developed to automatically convert borehole data into KML documents. A case study of using Borehole2KML to create borehole models in Shanghai shows that the modeling method is applicable to visualize, integrate and disseminate borehole information on the Internet. The method we have developed has potential use in delivering geological information as a public service.
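The simplest of the LODs described above, point placemarks at drilling locations, can be generated with a few lines of standard-library code. This is our own minimal sketch of the Borehole2KML idea, not the authors' tool; the record format `(name, lon, lat)` is an assumption.

```python
# Convert borehole records into a KML document of Placemarks (illustrative sketch).
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def boreholes_to_kml(boreholes):
    """boreholes: iterable of (name, lon, lat) tuples; returns a KML string."""
    ET.register_namespace("", KML_NS)          # serialize with default KML namespace
    kml = ET.Element("{%s}kml" % KML_NS)
    doc = ET.SubElement(kml, "{%s}Document" % KML_NS)
    for name, lon, lat in boreholes:
        pm = ET.SubElement(doc, "{%s}Placemark" % KML_NS)
        ET.SubElement(pm, "{%s}name" % KML_NS).text = name
        point = ET.SubElement(pm, "{%s}Point" % KML_NS)
        coords = ET.SubElement(point, "{%s}coordinates" % KML_NS)
        coords.text = "%f,%f,0" % (lon, lat)   # KML coordinate order: lon,lat,alt
    return ET.tostring(kml, encoding="unicode")
```

The resulting document loads directly in a virtual globe client; the tube and contact LODs would extend this with KML geometry elements following the same pattern.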
Automatic three-dimensional measurement of large-scale structure based on vision metrology.
Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng
2014-01-01
All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structure are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for matching of noncoded targets, the concept of matching path is proposed, and matches for each noncoded target are found by determination of the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on the fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.
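The blob-detection step that starts the target-recognition pipeline can be illustrated with connected-component labelling of a thresholded image. This is our simplified sketch (4-connectivity, flood fill), not the paper's algorithm; it returns each blob's centroid, the raw input for recognizing the circular retroreflective discs.

```python
# Connected-component blob detection on a binary image (illustrative sketch).
def find_blobs(binary):
    """binary: 2-D list of 0/1 pixels; returns a list of (cy, cx) blob centroids."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:                      # flood fill one component
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # Centroid of the component approximates the disc centre.
                blobs.append((sum(p[0] for p in pixels) / len(pixels),
                              sum(p[1] for p in pixels) / len(pixels)))
    return blobs
```

Clustering the detected centroids by mutual distance then groups the discs of one coded target, the step that precedes code recognition and network orientation.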
Dark energy and modified gravity in the Effective Field Theory of Large-Scale Structure
NASA Astrophysics Data System (ADS)
Cusin, Giulia; Lewandowski, Matthew; Vernizzi, Filippo
2018-04-01
We develop an approach to compute observables beyond the linear regime of dark matter perturbations for general dark energy and modified gravity models. We do so by combining the Effective Field Theory of Dark Energy and Effective Field Theory of Large-Scale Structure approaches. In particular, we parametrize the linear and nonlinear effects of dark energy on dark matter clustering in terms of the Lagrangian terms introduced in a companion paper [1], focusing on Horndeski theories and assuming the quasi-static approximation. The Euler equation for dark matter is sourced, via the Newtonian potential, by new nonlinear vertices due to modified gravity and, as in the pure dark matter case, by the effects of short-scale physics in the form of the divergence of an effective stress tensor. The effective fluid introduces a counterterm in the solution to the matter continuity and Euler equations, which allows a controlled expansion of clustering statistics on mildly nonlinear scales. We use this setup to compute the one-loop dark-matter power spectrum.
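For readers unfamiliar with the counterterm structure mentioned above, the standard pure-dark-matter version (a textbook-level sketch, not this paper's modified-gravity generalization) reads:

```latex
% Euler equation sourced by the effective stress tensor \tau^{ij} of short-scale physics:
\partial_\tau v^i + \mathcal{H}\, v^i + v^j \partial_j v^i
  = -\partial^i \Phi - \frac{1}{\rho}\, \partial_j \tau^{ij},
% whose leading effect is a k^2-suppressed counterterm in the one-loop power spectrum:
P_{\rm EFT}(k) \;=\; P_{\rm lin}(k) + P_{\rm 1\text{-}loop}(k)
  \;-\; 2\, c_{\rm s}^2\, \frac{k^2}{k_{\rm NL}^2}\, P_{\rm lin}(k).
```

In the present work the Euler equation additionally acquires the modified-gravity vertices through the Newtonian potential, but the counterterm plays the same role: absorbing the unknown short-scale physics so the expansion stays controlled on mildly nonlinear scales.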
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
During the last years, several Grid computing centres chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning on and off virtual machines. Some of those private clouds use, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need: however, resource starvation occurs frequently as expansion has to compete with other virtual machines running long-lived batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain them and shut off, and making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental “Docked” Analysis Facility for ALICE, which leverages containers instead of virtual machines for providing performance and security isolation. We will present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature a more fine-grained sizing, down to single-job node containers: we will show how this approach will positively impact automatic cluster resizing by deploying lightweight pilot containers instead of replacing central queue polls.
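The fine-grained sizing idea can be made concrete with a small sketch. This is our own illustration, not the ALICE deployment: the image name, resource limits, and scaling rule are all hypothetical; only the `docker run` flags used are standard.

```python
# Build the launch command for a single-job pilot container, and a toy
# scale-up rule for the elastic cluster (illustrative sketch).
def pilot_container_cmd(job_id, image="alice-pilot:latest", cpus=1, mem_gb=2):
    """Return a docker-run argument list for one single-job pilot container."""
    return [
        "docker", "run",
        "--rm",                          # container is discarded when the job ends
        "--name", "pilot-%d" % job_id,
        "--cpus", str(cpus),             # CPU limit for performance isolation
        "--memory", "%dg" % mem_gb,      # memory limit
        image,
    ]

def scale_decision(queued_jobs, running_pilots, max_pilots=100):
    """How many new one-job pilots to start for the current queue length."""
    return max(0, min(queued_jobs - running_pilots, max_pilots - running_pilots))
```

Because each container carries exactly one job, draining is instantaneous: the `--rm` container disappears when its job exits, which is what makes resizing cheaper than with multi-job batch virtual machines.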
The merger remnant NGC 3610 and its globular cluster system: a large-scale study
NASA Astrophysics Data System (ADS)
Bassino, Lilia P.; Caso, Juan P.
2017-04-01
We present a photometric study of the prototype merger remnant NGC 3610 and its globular cluster (GC) system, based on new Gemini/GMOS and Advanced Camera for Surveys/Hubble Space Telescope archival images. Thanks to the large field of view of our GMOS data, larger than in previous studies, we are able to detect a 'classical' bimodal GC colour distribution, corresponding to metal-poor and metal-rich GCs, at intermediate radii and a small subsample of likely young clusters of intermediate colours, mainly located in the outskirts. The extent of the whole GC system is determined to be about 40 kpc. The GC population is quite poor, about 500 ± 110 members, corresponding to a low total specific frequency SN ˜ 0.8. The effective radii of a cluster sample are determined, including those of two spectroscopically confirmed young and metal-rich clusters, which lie at the boundary between GC and UCD sizes and brightnesses. The large-scale galaxy surface-brightness profile can be decomposed into an inner embedded disc and an outer spheroid, with larger extents for both than found in earlier research (10 and 30 kpc, respectively). We detect boxy isophotes, expected in merger remnants, and show a wealth of fine-structure in the surface-brightness distribution with unprecedented detail, coincident with the outer spheroid. The lack of symmetry in the galaxy colour map adds a new piece of evidence to the recent merger scenario of NGC 3610.
Measurement of kT splitting scales in W→ℓν events at √s = 7 TeV with the ATLAS detector.
Aad, G; Abajyan, T; Abbott, B; Abdallah, J; Abdel Khalek, S; Abdelalim, A A; Abdinov, O; Aben, R; Abi, B; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Acharya, B S; Adamczyk, L; Adams, D L; Addy, T N; Adelman, J; Adomeit, S; Adragna, P; Adye, T; Aefsky, S; Aguilar-Saavedra, J A; Agustoni, M; Ahlen, S P; Ahles, F; Ahmad, A; Ahsan, M; Aielli, G; Åkesson, T P A; Akimoto, G; Akimov, A V; Alam, M A; Albert, J; Albrand, S; Aleksa, M; Aleksandrov, I N; Alessandria, F; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Aliev, M; Alimonti, G; Alison, J; Allbrooke, B M M; Allison, L J; Allport, P P; Allwood-Spiers, S E; Almond, J; Aloisio, A; Alon, R; Alonso, A; Alonso, F; Altheimer, A; Alvarez Gonzalez, B; Alviggi, M G; Amako, K; Amelung, C; Ammosov, V V; Amor Dos Santos, S P; Amorim, A; Amoroso, S; Amram, N; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anderson, K J; Andreazza, A; Andrei, V; Anduaga, X S; Angelidakis, S; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A; Anjos, N; Annovi, A; Antonaki, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aperio Bella, L; Apolle, R; Arabidze, G; Aracena, I; Arai, Y; Arce, A T H; Arfaoui, S; Arguin, J-F; Argyropoulos, S; Arik, E; Arik, M; Armbruster, A J; Arnaez, O; Arnal, V; Artamonov, A; Artoni, G; Arutinov, D; Asai, S; Ask, S; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Astbury, A; Atkinson, M; Auerbach, B; Auge, E; Augsten, K; Aurousseau, M; Avolio, G; Axen, D; Azuelos, G; Azuma, Y; Baak, M A; Baccaglioni, G; Bacci, C; Bach, A M; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Backus Mayes, J; Badescu, E; Bagnaia, P; Bai, Y; Bailey, D C; Bain, T; Baines, J T; Baker, O K; Baker, S; Balek, P; Balli, F; Banas, E; Banerjee, P; Banerjee, Sw; Banfi, D; Bangert, A; Bansal, V; Bansil, H S; Barak, L; Baranov, S P; Barber, T; Barberio, E L; Barberis, D; Barbero, M; Bardin, D Y; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnett, B M; Barnett, R M; Baroncelli, A; 
Barone, G; Barr, A J; Barreiro, F; Barreiro Guimarães da Costa, J; Bartoldus, R; Barton, A E; Bartsch, V; Basye, A; Bates, R L; Batkova, L; Batley, J R; Battaglia, A; Battistin, M; Bauer, F; Bawa, H S; Beale, S; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, K; Becker, S; Beckingham, M; Becks, K H; Beddall, A J; Beddall, A; Bedikian, S; Bednyakov, V A; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behar Harpaz, S; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellomo, M; Belloni, A; Beloborodova, O; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Benoit, M; Bensinger, J R; Benslama, K; Bentvelsen, S; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Berglund, E; Beringer, J; Bernat, P; Bernhard, R; Bernius, C; Bernlochner, F U; Berry, T; Bertella, C; Bertin, A; Bertolucci, F; Besana, M I; Besjes, G J; Besson, N; Bethke, S; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Bieniek, S P; Bierwagen, K; Biesiada, J; Biglietti, M; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Biscarat, C; Bittner, B; Black, C W; Black, J E; Black, K M; Blair, R E; Blanchard, J-B; Blazek, T; Bloch, I; Blocker, C; Blocki, J; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Boddy, C R; Boehler, M; Boek, J; Boek, T T; Boelaert, N; Bogaerts, J A; Bogdanchikov, A; Bogouch, A; Bohm, C; Bohm, J; Boisvert, V; Bold, T; Boldea, V; Bolnet, N M; Bomben, M; Bona, M; Boonekamp, M; Bordoni, S; Borer, C; Borisov, A; Borissov, G; Borjanovic, I; Borri, M; Borroni, S; Bortfeldt, J; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boterenbrood, H; Bouchami, J; Boudreau, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boutouil, S; Boveia, A; Boyd, J; Boyko, I R; Bozovic-Jelisavcic, I; Bracinik, J; Branchini, P; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; 
Braun, H M; Brazzale, S F; Brelier, B; Bremer, J; Brendlinger, K; Brenner, R; Bressler, S; Bristow, T M; Britton, D; Brochu, F M; Brock, I; Brock, R; Broggi, F; Bromberg, C; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brown, G; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; Bruni, A; Bruni, G; Bruschi, M; Bryngemark, L; Buanes, T; Buat, Q; Bucci, F; Buchanan, J; Buchholz, P; Buckingham, R M; Buckley, A G; Buda, S I; Budagov, I A; Budick, B; Bugge, L; Bulekov, O; Bundock, A C; Bunse, M; Buran, T; Burckhart, H; Burdin, S; Burgess, T; Burke, S; Busato, E; Büscher, V; Bussey, P; Buszello, C P; Butler, B; Butler, J M; Buttar, C M; Butterworth, J M; Buttinger, W; Byszewski, M; Cabrera Urbán, S; Caforio, D; Cakir, O; Calafiura, P; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Caloi, R; Calvet, D; Calvet, S; Camacho Toro, R; Camarri, P; Cameron, D; Caminada, L M; Caminal Armadans, R; Campana, S; Campanelli, M; Canale, V; Canelli, F; Canepa, A; Cantero, J; Cantrill, R; Cao, T; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capriotti, D; Capua, M; Caputo, R; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, A A; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Cascella, M; Caso, C; Castaneda-Miranda, E; Castillo Gimenez, V; Castro, N F; Cataldi, G; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Cattani, G; Caughron, S; Cavaliere, V; Cavalleri, P; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chan, K; Chang, P; Chapleau, B; Chapman, J D; Chapman, J W; Charlton, D G; Chavda, V; Chavez Barajas, C A; Cheatham, S; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, S; Chen, X; Chen, Y; Cheng, Y; Cheplakov, A; Cherkaoui El Moursli, R; Chernyatin, V; Cheu, E; Cheung, S L; Chevalier, L; Chiefari, G; Chikovani, L; Childers, J T; Chilingarov, A; 
Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Choudalakis, G; Chouridou, S; Chow, B K B; Christidi, I A; Christov, A; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Ciapetti, G; Ciftci, A K; Ciftci, R; Cinca, D; Cindro, V; Ciocio, A; Cirilli, M; Cirkovic, P; Citron, Z H; Citterio, M; Ciubancan, M; Clark, A; Clark, P J; Clarke, R N; Cleland, W; Clemens, J C; Clement, B; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Cogan, J G; Coggeshall, J; Colas, J; Cole, S; Colijn, A P; Collins, N J; Collins-Tooth, C; Collot, J; Colombo, T; Colon, G; Compostella, G; Conde Muiño, P; Coniavitis, E; Conidi, M C; Consonni, S M; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cooper-Smith, N J; Copic, K; Cornelissen, T; Corradi, M; Corriveau, F; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Cottin, G; Courneyea, L; Cowan, G; Cox, B E; Cranmer, K; Crépé-Renaudin, S; Crescioli, F; Cristinziani, M; Crosetti, G; Cuciuc, C-M; Cuenca Almenar, C; Cuhadar Donszelmann, T; Cummings, J; Curatolo, M; Curtis, C J; Cuthbert, C; Cwetanski, P; Czirr, H; Czodrowski, P; Czyczula, Z; D'Auria, S; D'Onofrio, M; D'Orazio, A; Da Cunha Sargedas De Sousa, M J; Da Via, C; Dabrowski, W; Dafinca, A; Dai, T; Dallaire, F; Dallapiccola, C; Dam, M; Damiani, D S; Danielsson, H O; Dao, V; Darbo, G; Darlea, G L; Darmora, S; Dassoulas, J A; Davey, W; Davidek, T; Davidson, N; Davidson, R; Davies, E; Davies, M; Davignon, O; Davison, A R; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Cecco, S; de Graat, J; De Groot, N; de Jong, P; De La Taille, C; De la Torre, H; De Lorenzi, F; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; De Zorzi, G; Dearnaley, W J; Debbe, R; Debenedetti, C; Dechenaux, B; Dedovich, D V; Degenhardt, J; Del Peso, J; Del Prete, T; Delemontex, T; Deliyergiyev, M; Dell'Acqua, A; 
Dell'Asta, L; Della Pietra, M; Della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; Demers, S; Demichev, M; Demirkoz, B; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deviveiros, P O; Dewhurst, A; DeWilde, B; Dhaliwal, S; Dhullipudi, R; Di Ciaccio, A; Di Ciaccio, L; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Luise, S; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Diaz, M A; Diehl, E B; Dietrich, J; Dietzsch, T A; Diglio, S; Dindar Yagci, K; Dingfelder, J; Dinut, F; Dionisi, C; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; do Vale, M A B; Do Valle Wemans, A; Doan, T K O; Dobbs, M; Dobos, D; Dobson, E; Dodd, J; Doglioni, C; Doherty, T; Dohmae, T; Doi, Y; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donini, J; Dopke, J; Doria, A; Dos Anjos, A; Dotti, A; Dova, M T; Doyle, A T; Dressnandt, N; Dris, M; Dubbert, J; Dube, S; Dubreuil, E; Duchovni, E; Duckeck, G; Duda, D; Dudarev, A; Dudziak, F; Duerdoth, I P; Duflot, L; Dufour, M-A; Duguid, L; Dührssen, M; Dunford, M; Duran Yildiz, H; Düren, M; Duxfield, R; Dwuznik, M; Ebenstein, W L; Ebke, J; Eckweiler, S; Edson, W; Edwards, C A; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Eisenhandler, E; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Ellis, K; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Engelmann, R; Engl, A; Epp, B; Erdmann, J; Ereditato, A; Eriksson, D; Ernst, J; Ernst, M; Ernwein, J; Errede, D; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Espinal Curull, X; Esposito, B; Etienne, F; Etienvre, A I; Etzion, E; Evangelakou, D; Evans, H; Fabbri, L; Fabre, C; Facini, G; Fakhrutdinov, R M; Falciano, S; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farley, J; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Fatholahzadeh, B; Favareto, A; Fayard, L; Federic, P; Fedin, O L; Fedorko, W; Fehling-Kaschek, M; Feligioni, L; Feng, C; Feng, E J; Fenyuk, A B; 
Ferencei, J; Fernando, W; Ferrag, S; Ferrando, J; Ferrara, V; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; Ferretti, C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filthaut, F; Fincke-Keeler, M; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, J; Fisher, M J; Fitzgerald, E A; Flechl, M; Fleck, I; Fleischmann, P; Fleischmann, S; Fletcher, G T; Fletcher, G; Flick, T; Floderus, A; Flores Castillo, L R; Florez Bustos, A C; Flowerdew, M J; Fonseca Martin, T; Formica, A; Forti, A; Fortin, D; Fournier, D; Fowler, A J; Fox, H; Francavilla, P; Franchini, M; Franchino, S; Francis, D; Frank, T; Franklin, M; Franz, S; Fraternali, M; Fratina, S; French, S T; Friedrich, C; Friedrich, F; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fuster, J; Gabaldon, C; Gabizon, O; Gadatsch, S; Gadfort, T; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallo, V; Gallop, B J; Gallus, P; Gan, K K; Gandrajula, R P; Gao, Y S; Gaponenko, A; Garay Walls, F M; Garberson, F; García, C; García Navarro, J E; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Gatti, C; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Gellerstedt, K; Gemme, C; Gemmell, A; Genest, M H; Gentile, S; George, M; George, S; Gerbaudo, D; Gerlach, P; Gershon, A; Geweniger, C; Ghazlane, H; Ghodbane, N; Giacobbe, B; Giagu, S; Giangiobbe, V; Gianotti, F; Gibbard, B; Gibson, A; Gibson, S M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gillman, A R; Gingrich, D M; Ginzburg, J; Giokaris, N; Giordani, M P; Giordano, R; Giorgi, F M; Giovannini, P; Giraud, P F; Giugni, D; Giunta, M; Gjelsten, B K; Gladilin, L K; Glasman, C; Glatzer, J; Glazov, A; Glonti, G L; Goddard, J R; Godfrey, J; Godlewski, J; Goebel, M; Goeringer, C; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gomez Fajardo, L S; Gonçalo, R; Goncalves Pinto 
Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez Silva, M L; Gonzalez-Sevilla, S; Goodson, J J; Goossens, L; Göpfert, T; Gorbounov, P A; Gordon, H A; Gorelov, I; Gorfine, G; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Gough Eschrich, I; Gouighri, M; Goujdami, D; Goulette, M P; Goussiou, A G; Goy, C; Gozpinar, S; Graber, L; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Gramstad, E; Grancagnolo, F; Grancagnolo, S; Grassi, V; Gratchev, V; Gray, H M; Gray, J A; Graziani, E; Grebenyuk, O G; Greenshaw, T; Greenwood, Z D; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; Grigalashvili, N; Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grishkevich, Y V; Grivaz, J-F; Grohs, J P; Grohsjean, A; Gross, E; Grosse-Knetter, J; Groth-Jensen, J; Grybel, K; Guest, D; Gueta, O; Guicheney, C; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gunther, J; Guo, B; Guo, J; Gutierrez, P; Guttman, N; Gutzwiller, O; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haas, S; Haber, C; Hadavand, H K; Hadley, D R; Haefner, P; Hajduk, Z; Hakobyan, H; Hall, D; Halladjian, G; Hamacher, K; Hamal, P; Hamano, K; Hamer, M; Hamilton, A; Hamilton, S; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Handel, C; Hanke, P; Hansen, J R; Hansen, J B; Hansen, J D; Hansen, P H; Hansson, P; Hara, K; Harenberg, T; Harkusha, S; Harper, D; Harrington, R D; Harris, O M; Hartert, J; Hartjes, F; Haruyama, T; Harvey, A; Hasegawa, S; Hasegawa, Y; Hassani, S; Haug, S; Hauschild, M; Hauser, R; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayakawa, T; Hayashi, T; Hayden, D; Hays, C P; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heinemann, B; Heisterkamp, S; Helary, L; Heller, C; Heller, M; Hellman, S; Hellmich, D; Helsens, C; Henderson, R C W; Henke, M; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Hensel, C; Hernandez, C M; Hernández Jiménez, Y; Herrberg, R; Herten, G; Hertenberger, R; Hervas, L; 
Hesketh, G G; Hessey, N P; Hickling, R; Higón-Rodriguez, E; Hill, J C; Hiller, K H; Hillert, S; Hillier, S J; Hinchliffe, I; Hines, E; Hirose, M; Hirsch, F; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoffman, J; Hoffmann, D; Hohlfeld, M; Holmgren, S O; Holy, T; Holzbauer, J L; Hong, T M; Hooft van Huysduynen, L; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hsu, P J; Hsu, S-C; Hu, D; Hubacek, Z; Hubaut, F; Huegging, F; Huettmann, A; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hülsing, T A; Hurwitz, M; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibbotson, M; Ibragimov, I; Iconomidou-Fayard, L; Idarraga, J; Iengo, P; Igonkina, O; Ikegami, Y; Ikematsu, K; Ikeno, M; Iliadis, D; Ilic, N; Ince, T; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Ivashin, A V; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jackson, B; Jackson, J N; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansen, H; Janssen, J; Jantsch, A; Janus, M; Jared, R C; Jarlskog, G; Jeanty, L; Jeng, G-Y; Jen-La Plante, I; Jennens, D; Jenni, P; Jeske, C; Jež, P; Jézéquel, S; Jha, M K; Ji, H; Ji, W; Jia, J; Jiang, Y; Jimenez Belenguer, M; Jin, S; Jinnouchi, O; Joergensen, M D; Joffe, D; Johansen, M; Johansson, K E; Johansson, P; Johnert, S; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Joram, C; Jorge, P M; Joshi, K D; Jovicevic, J; Jovin, T; Ju, X; Jung, C A; Jungst, R M; Juranek, V; Jussel, P; Juste Rozas, A; Kabana, S; Kaci, M; Kaczmarska, A; Kadlecik, P; Kado, M; Kagan, H; Kagan, M; Kajomovitz, E; Kalinin, S; Kama, S; Kanaya, N; Kaneda, M; Kaneti, S; Kanno, T; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kar, D; Karagounis, M; Karakostas, K; Karnevskiy, M; Kartvelishvili, V; Karyukhin, A N; 
Kashif, L; Kasieczka, G; Kass, R D; Kastanas, A; Kataoka, Y; Katzy, J; Kaushik, V; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Kazarinov, M Y; Keeler, R; Keener, P T; Kehoe, R; Keil, M; Keller, J S; Kenyon, M; Keoshkerian, H; Kepka, O; Kerschen, N; Kerševan, B P; Kersten, S; Kessoku, K; Keung, J; Khalil-Zada, F; Khandanyan, H; Khanov, A; Kharchenko, D; Khodinov, A; Khomich, A; Khoo, T J; Khoriauli, G; Khoroshilov, A; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H; Kim, S H; Kimura, N; Kind, O; King, B T; King, M; King, R S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kitamura, T; Kittelmann, T; Kiuchi, K; Kladiva, E; Klein, M; Klein, U; Kleinknecht, K; Klemetti, M; Klier, A; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klinkby, E B; Klioutchnikova, T; Klok, P F; Klous, S; Kluge, E-E; Kluge, T; Kluit, P; Kluth, S; Kneringer, E; Knoops, E B F G; Knue, A; Ko, B R; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koenig, S; Koetsveld, F; Koevesarki, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohn, F; Kohout, Z; Kohriki, T; Koi, T; Kolanoski, H; Kolesnikov, V; Koletsou, I; Koll, J; Komar, A A; Komori, Y; Kondo, T; Köneke, K; König, A C; Kono, T; Kononov, A I; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A; Korolkov, I; Korolkova, E V; Korotkov, V A; Kortner, O; Kortner, S; Kostyukhin, V V; Kotov, S; Kotov, V M; Kotwal, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kral, V; Kramarenko, V A; Kramberger, G; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kreiss, S; Krejci, F; Kretzschmar, J; Kreutzfeldt, K; Krieger, N; Krieger, P; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Kruker, T; Krumnack, N; Krumshteyn, Z V; Kruse, M K; Kubota, T; Kuday, S; Kuehn, S; Kugel, A; Kuhl, T; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunkle, J; Kupco, A; 
Kurashige, H; Kurata, M; Kurochkin, Y A; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; Kwee, R; La Rosa, A; La Rotonda, L; Labarga, L; Lablak, S; Lacasta, C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Laisne, E; Lambourne, L; Lampen, C L; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lange, C; Lankford, A J; Lanni, F; Lantzsch, K; Lanza, A; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Larner, A; Lassnig, M; Laurelli, P; Lavorini, V; Lavrijsen, W; Laycock, P; Le Dortz, O; Le Guirriec, E; Le Menedeu, E; LeCompte, T; Ledroit-Guillon, F; Lee, H; Lee, J S H; Lee, S C; Lee, L; Lefebvre, M; Legendre, M; Legger, F; Leggett, C; Lehmacher, M; Lehmann Miotto, G; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Lendermann, V; Leney, K J C; Lenz, T; Lenzen, G; Lenzi, B; Leonhardt, K; Leontsinis, S; Lepold, F; Leroy, C; Lessard, J-R; Lester, C G; Lester, C M; Levêque, J; Levin, D; Levinson, L J; Lewis, A; Lewis, G H; Leyko, A M; Leyton, M; Li, B; Li, B; Li, H; Li, H L; Li, S; Li, X; Liang, Z; Liao, H; Liberti, B; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Limper, M; Lin, S C; Linde, F; Linnemann, J T; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, D; Liu, J B; Liu, L; Liu, M; Liu, Y; Livan, M; Livermore, S S A; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E; Loch, P; Lockman, W S; Loddenkoetter, T; Loebinger, F K; Loevschall-Jensen, A E; Loginov, A; Loh, C W; Lohse, T; Lohwasser, K; Lokajicek, M; Lombardo, V P; Long, R E; Lopes, L; Lopez Mateos, D; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Losty, M J; Lou, X; Lounis, A; Loureiro, K F; Love, J; Love, P A; Lowe, A J; Lu, F; Lubatti, H J; Luci, C; Lucotte, A; Ludwig, D; Ludwig, I; Ludwig, J; Luehring, F; Lukas, W; Luminari, L; Lund, E; Lundberg, B; Lundberg, J; Lundberg, O; Lund-Jensen, B; Lundquist, J; Lungwitz, M; Lynn, D; Lysak, 
R; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Maček, B; Machado Miguens, J; Macina, D; Mackeprang, R; Madar, R; Madaras, R J; Maddocks, H J; Mader, W F; Madsen, A; Maeno, M; Maeno, T; Magnoni, L; Magradze, E; Mahboubi, K; Mahlstedt, J; Mahmoud, S; Mahout, G; Maiani, C; Maidantchik, C; Maio, A; Majewski, S; Makida, Y; Makovec, N; Mal, P; Malaescu, B; Malecki, Pa; Malecki, P; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V; Malyukov, S; Mamuzic, J; Manabe, A; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manfredini, A; Manhaes de Andrade Filho, L; Manjarres Ramos, J A; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Mantifel, R; Mapelli, A; Mapelli, L; March, L; Marchand, J F; Marchese, F; Marchiori, G; Marcisovsky, M; Marino, C P; Marroquim, F; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, B; Martin, J P; Martin, T A; Martin, V J; Martin Dit Latour, B; Martinez, H; Martinez, M; Martinez Outschoorn, V; Martin-Haugh, S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massol, N; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Matsunaga, H; Matsushita, T; Mättig, P; Mättig, S; Mattravers, C; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazur, M; Mazzaferro, L; Mazzanti, M; Mc Donald, J; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; Mclaughlan, T; McMahon, S J; McPherson, R A; Meade, A; Mechnich, J; Mechtel, M; Medinnis, M; Meehan, S; Meera-Lebbai, R; Meguro, T; Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Melachrinos, C; Mellado Garcia, B R; Meloni, F; Mendoza Navas, L; Meng, Z; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Meric, N; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Merritt, H; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Meyer, J; Michal, S; Micu, L; Middleton, R P; Migas, S; 
Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Miller, D W; Miller, R J; Mills, W J; Mills, C; Milov, A; Milstead, D A; Milstein, D; Minaenko, A A; Miñano Moya, M; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mirabelli, G; Mitrevski, J; Mitsou, V A; Mitsui, S; Miyagawa, P S; Mjörnmark, J U; Moa, T; Moeller, V; Mohapatra, S; Mohr, W; Moles-Valls, R; Molfetas, A; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Mora Herrera, C; Moraes, A; Morange, N; Morel, J; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Morley, A K; Mornacchi, G; Morris, J D; Morvaj, L; Möser, N; Moser, H G; Mosidze, M; Moss, J; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Mueller, F; Mueller, J; Mueller, K; Mueller, T; Muenstermann, D; Müller, T A; Munwes, Y; Murray, W J; Mussche, I; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagel, M; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Nanava, G; Napier, A; Narayan, R; Nash, M; Nattermann, T; Naumann, T; Navarro, G; Neal, H A; Nechaeva, P Yu; Neep, T J; Negri, A; Negri, G; Negrini, M; Nektarijevic, S; Nelson, A; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neusiedl, A; Neves, R M; Nevski, P; Newcomer, F M; Newman, P R; Nguyen, D H; Nguyen Thi Hong, V; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Niedercorn, F; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolics, K; Nikolopoulos, K; Nilsen, H; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Norberg, S; Nordberg, M; Novakova, J; Nozaki, M; Nozka, L; Nuncio-Quiroz, A-E; Nunes Hanninger, G; Nunnemann, T; Nurse, E; O'Brien, B J; O'Neil, D C; O'Shea, V; Oakes, L B; Oakham, F G; Oberlack, H; Ocariz, J; Ochi, A; Ochoa, M I; Oda, S; Odaka, S; Odier, J; Ogren, H; 
Oh, A; Oh, S H; Ohm, C C; Ohshima, T; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olchevski, A G; Olivares Pino, S A; Oliveira, M; Oliveira Damazio, D; Oliver Garcia, E; Olivito, D; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Osuna, C; Otero Y Garzon, G; Ottersbach, J P; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Ouyang, Q; Ovcharova, A; Owen, M; Owen, S; Ozcan, V E; Ozturk, N; Pacheco Pages, A; Padilla Aranda, C; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Paleari, C P; Palestini, S; Pallin, D; Palma, A; Palmer, J D; Pan, Y B; Panagiotopoulou, E; Panduro Vazquez, J G; Pani, P; Panikashvili, N; Panitkin, S; Pantea, D; Papadelis, A; Papadopoulou, Th D; Paramonov, A; Paredes Hernandez, D; Park, W; Parker, M A; Parodi, F; Parsons, J A; Parzefall, U; Pashapour, S; Pasqualucci, E; Passaggio, S; Passeri, A; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Patricelli, S; Pauly, T; Pearce, J; Pedersen, M; Pedraza Lopez, S; Pedraza Morales, M I; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penson, A; Penwell, J; Perez Cavalcanti, T; Perez Codina, E; Pérez García-Estañ, M T; Perez Reale, V; Perini, L; Pernegger, H; Perrino, R; Perrodo, P; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, J; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Petschull, D; Petteni, M; Pezoa, R; Phan, A; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Piec, S M; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinder, A; Pinfold, J L; Pingel, A; Pinto, B; Pizio, C; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Poblaguev, A; Poddar, S; Podlyski, F; Poettgen, R; Poggioli, L; Pohl, D; Pohl, M; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Poll, J; Polychronakos, V; 
Pomeroy, D; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Portell Bueso, X; Pospelov, G E; Pospisil, S; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Prabhu, R; Pralavorio, P; Pranko, A; Prasad, S; Pravahan, R; Prell, S; Pretzl, K; Price, D; Price, J; Price, L E; Prieur, D; Primavera, M; Proissl, M; Prokofiev, K; Prokoshin, F; Protopapadaki, E; Protopopescu, S; Proudfoot, J; Prudent, X; Przybycien, M; Przysiezniak, H; Psoroulas, S; Ptacek, E; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Pylypchenko, Y; Qian, J; Quadt, A; Quarrie, D R; Quayle, W B; Quilty, D; Raas, M; Radeka, V; Radescu, V; Radloff, P; Ragusa, F; Rahal, G; Rahimi, A M; Rajagopalan, S; Rammensee, M; Rammes, M; Randle-Conde, A S; Randrianarivony, K; Rangel-Smith, C; Rao, K; Rauscher, F; Rave, T C; Ravenscroft, T; Raymond, M; Read, A L; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Reinsch, A; Reisinger, I; Relich, M; Rembser, C; Ren, Z L; Renaud, A; Rescigno, M; Resconi, S; Resende, B; Reznicek, P; Rezvani, R; Richter, R; Richter-Was, E; Ridel, M; Rieck, P; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Rios, R R; Ritsch, E; Riu, I; Rivoltella, G; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Rocha de Lima, J G; Roda, C; Roda Dos Santos, D; Roe, A; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Romeo, G; Romero Adam, E; Rompotis, N; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, A; Rose, M; Rosenbaum, G A; Rosendahl, P L; Rosenthal, O; Rosselet, L; Rossetti, V; Rossi, E; Rossi, L P; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Ruckstuhl, N; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rumyantsev, L; Rurikova, Z; Rusakovich, N A; Ruschke, A; Rutherfoord, J P; Ruthmann, N; Ruzicka, P; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sadeh, I; Sadrozinski, H F-W; 
Sadykov, R; Safai Tehrani, F; Sakamoto, H; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Salihagic, D; Salnikov, A; Salt, J; Salvachua Ferrando, B M; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, T; Sandoval, C; Sandstroem, R; Sankey, D P C; Sansoni, A; Santamarina Rios, C; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, K; Saraiva, J G; Sarangi, T; Sarkisyan-Grinbaum, E; Sarrazin, B; Sarri, F; Sartisohn, G; Sasaki, O; Sasaki, Y; Sasao, N; Satsounkevitch, I; Sauvage, G; Sauvan, E; Sauvan, J B; Savard, P; Savinov, V; Savu, D O; Sawyer, L; Saxon, D H; Saxon, J; Sbarra, C; Sbrizzi, A; Scannicchio, D A; Scarcella, M; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaelicke, A; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, C; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schram, M; Schroeder, C; Schroer, N; Schultens, M J; Schultes, J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwartzman, A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scott, W G; Searcy, J; Sedov, G; Sedykh, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Sellden, B; Sellers, G; Seman, M; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Seuster, R; Severini, H; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shank, J T; Shao, Q T; Shapiro, M; Shatalov, P B; Shaw, K; Sherwood, P; Shimizu, S; Shimojima, M; Shin, T; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, 
D; Shrestha, S; Shulga, E; Shupe, M A; Sicho, P; Sidoti, A; Siegert, F; Sijacki, Dj; Silbert, O; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simoniello, R; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skinnari, L A; Skottowe, H P; Skovpen, K; Skubic, P; Slater, M; Slavicek, T; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, B C; Smith, K M; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snow, S W; Snow, J; Snyder, S; Sobie, R; Sodomka, J; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solfaroli Camillocci, E; Solodkov, A A; Solovyanov, O V; Solovyev, V; Soni, N; Sood, A; Sopko, V; Sopko, B; Sosebee, M; Soualah, R; Soueid, P; Soukharev, A; South, D; Spagnolo, S; Spanò, F; Spighi, R; Spigo, G; Spiwoks, R; Spousta, M; Spreitzer, T; Spurlock, B; St Denis, R D; Stahlman, J; Stamen, R; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Staude, A; Stavina, P; Steele, G; Steinbach, P; Steinberg, P; Stekl, I; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoerig, K; Stoicea, G; Stonjek, S; Strachota, P; Stradling, A R; Straessner, A; Strandberg, J; Strandberg, S; Strandlie, A; Strang, M; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Strong, J A; Stroynowski, R; Stugu, B; Stumer, I; Stupak, J; Sturm, P; Styles, N A; Su, D; Subramania, Hs; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, Y; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Tackmann, K; Taffard, A; Tafirout, R; 
Taiblum, N; Takahashi, Y; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A; Tam, J Y C; Tamsett, M C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanasijczuk, A J; Tani, K; Tannoury, N; Tapprogge, S; Tardif, D; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tassi, E; Tayalati, Y; Taylor, C; Taylor, F E; Taylor, G N; Taylor, W; Teinturier, M; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Terada, S; Terashi, K; Terron, J; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thoma, S; Thomas, J P; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thong, W M; Thun, R P; Tian, F; Tibbetts, M J; Tic, T; Tikhomirov, V O; Tikhonov, Y A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Tonoyan, A; Topfel, C; Topilin, N D; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Tran, H L; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Triplett, N; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiakiris, M; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsung, J-W; Tsuno, S; Tsybychev, D; Tua, A; Tudorache, A; Tudorache, V; Tuggle, J M; Turala, M; Turecek, D; Turk Cakir, I; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Tzanakos, G; Uchida, K; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Urbaniec, D; Urquijo, P; Usai, G; Vacavant, L; Vacek, V; Vachon, B; Vahsen, S; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; 
Valls Ferrer, J A; Van Berg, R; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van der Poel, E; van der Ster, D; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; Vanadia, M; Vandelli, W; Vaniachine, A; Vankov, P; Vannucci, F; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vassilakopoulos, V I; Vazeille, F; Vazquez Schroeder, T; Veloso, F; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinek, E; Vinogradov, V B; Virzi, J; Vitells, O; Viti, M; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, A; Vokac, P; Volpi, G; Volpi, M; Volpini, G; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorwerk, V; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, W; Wagner, P; Wahlen, H; Wahrmund, S; Wakabayashi, J; Walch, S; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Walsh, B; Wang, C; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watanabe, I; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, A T; Waugh, B M; Weber, M S; Webster, J S; Weidberg, A R; Weigell, P; Weingarten, J; Weiser, C; Wells, P S; Wenaus, T; Wendland, D; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Werth, M; Wessels, M; Wetter, J; Weydert, C; Whalen, K; White, A; White, M J; White, S; Whitehead, S R; Whiteson, D; Whittington, D; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilhelm, I; Wilkens, H G; Will, J Z; Williams, E; 
Williams, H H; Williams, S; Willis, W; Willocq, S; Wilson, J A; Wilson, M G; Wilson, A; Wingerter-Seez, I; Winkelmann, S; Winklmeier, F; Wittgen, M; Wittig, T; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wong, W C; Wooden, G; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wraight, K; Wright, M; Wrona, B; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wynne, B M; Xella, S; Xiao, M; Xie, S; Xu, C; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yamada, M; Yamaguchi, H; Yamaguchi, Y; Yamamoto, A; Yamamoto, K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamauchi, K; Yamazaki, T; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, U K; Yang, Y; Yang, Z; Yanush, S; Yao, L; Yasu, Y; Yatsenko, E; Ye, J; Ye, S; Yen, A L; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D; Yu, D R; Yu, J; Yu, J; Yuan, L; Yurkewicz, A; Zabinski, B; Zaidan, R; Zaitsev, A M; Zambito, S; Zanello, L; Zanzi, D; Zaytsev, A; Zeitnitz, C; Zeman, M; Zemla, A; Zenin, O; Ženiš, T; Zerwas, D; Zevi Della Porta, G; Zhang, D; Zhang, H; Zhang, J; Zhang, L; Zhang, X; Zhang, Z; Zhao, L; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, N; Zhou, Y; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhuravlov, V; Zibell, A; Zieminska, D; Zimin, N I; Zimmermann, R; Zimmermann, S; Zimmermann, S; Zinonos, Z; Ziolkowski, M; Zitoun, R; Živković, L; Zmouchko, V V; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zutshi, V; Zwalinski, L
A measurement of splitting scales, as defined by the kT clustering algorithm, is presented for final states containing a W boson produced in proton-proton collisions at a centre-of-mass energy of 7 TeV. The measurement is based on the full 2010 data sample, corresponding to an integrated luminosity of 36 pb^-1, which was collected using the ATLAS detector at the CERN Large Hadron Collider. Cluster splitting scales are measured in events containing W bosons decaying to electrons or muons. The measurement comprises the four hardest splitting scales in a kT cluster sequence of the hadronic activity accompanying the W boson, and ratios of these splitting scales. Backgrounds such as multi-jet and top-quark-pair production are subtracted and the results are corrected for detector effects. Predictions from various Monte Carlo event generators at particle level are compared to the data. Overall, reasonable agreement is found with all generators, but larger deviations between the predictions and the data are evident in the soft regions of the splitting scales.
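The splitting scales in question are the kT distances at which the exclusive kT algorithm performs its last clustering steps. As a rough illustration only (the function name, the massless-input assumption, and the simple four-momentum recombination are ours, not the analysis code), a toy clustering over (pt, y, phi) inputs can record the scale sqrt(d) of each step:

```python
import math

def kt_splitting_scales(particles, R=0.6):
    """Toy exclusive kT clustering; `particles` is a list of (pt, y, phi)
    for massless inputs. Returns sqrt(d) of every clustering step, sorted
    in descending order (illustrative sketch, not the ATLAS analysis code)."""
    # Build massless four-vectors (px, py, pz, E).
    p4 = [(pt*math.cos(phi), pt*math.sin(phi), pt*math.sinh(y), pt*math.cosh(y))
          for pt, y, phi in particles]

    def kin(v):
        px, py, pz, E = v
        pt = math.hypot(px, py)
        return pt, 0.5*math.log((E+pz)/(E-pz)), math.atan2(py, px)

    scales = []
    while p4:
        n = len(p4)
        best, pair = None, None
        for i in range(n):
            pti, yi, phii = kin(p4[i])
            diB = pti*pti                      # distance to the beam
            if best is None or diB < best:
                best, pair = diB, (i, None)
            for j in range(i+1, n):
                ptj, yj, phij = kin(p4[j])
                dphi = math.pi - abs(abs(phii - phij) - math.pi)
                dij = min(pti*pti, ptj*ptj)*((yi-yj)**2 + dphi**2)/R**2
                if dij < best:
                    best, pair = dij, (i, j)
        scales.append(math.sqrt(best))
        i, j = pair
        if j is None:
            p4.pop(i)                          # cluster with the beam
        else:
            p4[i] = tuple(a + b for a, b in zip(p4[i], p4[j]))
            p4.pop(j)
    return sorted(scales, reverse=True)
```

The four largest returned values would correspond to the "four hardest splitting scales" the abstract refers to.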
Matsuura, Tomoaki; Tanimura, Naoki; Hosoda, Kazufumi; Yomo, Tetsuya; Shimizu, Yoshihiro
2017-01-01
To elucidate the dynamic features of a biologically relevant large-scale reaction network, we constructed a computational model of minimal protein synthesis consisting of 241 components and 968 reactions that synthesize the Met-Gly-Gly (MGG) peptide, based on an Escherichia coli-based reconstituted in vitro protein synthesis system. We performed a simulation using parameters collected primarily from the literature and found that the rate of MGG peptide synthesis becomes nearly constant within minutes, thus achieving a steady state similar to experimental observations. In addition, the concentrations of 70% of the components, including intermediates, reached a plateau in a few minutes. However, the concentration change of each component exhibits several temporal plateaus, or a quasi-stationary state (QSS), before reaching the final plateau. To understand these complex dynamics, we focused on whether the components reached a QSS, mapped the arrangement of components in a QSS in the entire reaction network structure, and investigated time-dependent changes. We found that components in a QSS form clusters that grow over time, but not in a linear fashion, and that this process involves the collapse and regrowth of clusters before the formation of a final large single cluster. These observations might commonly occur in other large-scale biological reaction networks. The analysis developed here might be useful for understanding large-scale biological reactions by visualizing complex dynamics, thereby extracting the characteristics of the reaction network, including phase transitions. PMID:28167777
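The QSS bookkeeping described above boils down to asking, per component, when a concentration trace is momentarily flat. A minimal sketch of such a plateau detector (the flatness criterion below is our own illustration, not the paper's definition):

```python
import numpy as np

def qss_intervals(t, c, rel_tol=0.01):
    """Flag sample times where a concentration trace c(t) is quasi-stationary.

    Criterion (illustrative): the change the current slope would produce
    over the full time span is below rel_tol of the trace's peak value.
    Returns a boolean mask over the samples.
    """
    dcdt = np.gradient(c, t)                  # numerical derivative dc/dt
    scale = max(np.max(np.abs(c)), 1e-12)     # guard against all-zero traces
    return np.abs(dcdt) * (t[-1] - t[0]) < rel_tol * scale
```

Applying such a mask to every component at every time step, and clustering the flagged components on the reaction graph, would reproduce the kind of growing-and-collapsing QSS clusters the abstract describes.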
NASA Astrophysics Data System (ADS)
Nguyen-Luong, Q.; Anderson, L. D.; Motte, F.; Kim, Kee-Tae; Schilke, P.; Carlhoff, P.; Beuther, H.; Schneider, N.; Didelon, P.; Kramer, C.; Louvet, F.; Nony, T.; Bihr, S.; Rugel, M.; Soler, J.; Wang, Y.; Bronfman, L.; Simon, R.; Menten, K. M.; Wyrowski, F.; Walmsley, C. M.
2017-08-01
We report the first map of large-scale (10 pc in length) emission of millimeter-wavelength hydrogen recombination lines (mm-RRLs) toward the giant H II region around the W43-Main young massive star cluster (YMC). Our mm-RRL data come from the IRAM 30 m telescope and are analyzed together with radio continuum and cm-RRL data from the Karl G. Jansky Very Large Array and HCO+ 1-0 line emission data from the IRAM 30 m. The mm-RRLs reveal an expanding wind-blown ionized gas shell with an electron density of ~70-1500 cm⁻³ driven by the WR/OB cluster, which produces a total Lyα photon flux of 1.5 × 10^50 s⁻¹. This shell is interacting with the dense neutral molecular gas in the W43-Main dense cloud. Combining the high spectral and angular resolution mm-RRL and cm-RRL cubes, we derive the two-dimensional relative distributions of dynamical and pressure broadening of the ionized gas emission and find that the RRL line shapes are dominated by pressure broadening (4-55 km s⁻¹) near the YMC and by dynamical broadening (8-36 km s⁻¹) near the shell's edge. Ionized gas clumps hosting ultra-compact H II regions found at the edge of the shell suggest that large-scale ionized gas motion triggers the formation of a new generation of stars near the periphery of the shell.
Grid-Enabled High Energy Physics Research using a Beowulf Cluster
NASA Astrophysics Data System (ADS)
Mahmood, Akhtar
2005-04-01
At Edinboro University of Pennsylvania, we have built an 8-node, 25 Gflops Beowulf cluster with 2.5 TB of disk storage to carry out grid-enabled, data-intensive high energy physics research for the ATLAS experiment via Grid3. We describe how we built and configured our cluster, which we have named the Sphinx Beowulf Cluster, and present the results of our cluster benchmark studies and run-time plots of several parallel application codes. Once fully functional, the cluster will be part of Grid3 [www.ivdgl.org/grid3]. The current ATLAS simulation grid application models the entire physical process, from the proton-proton collisions and the detector's response to the collision debris, through the complete reconstruction of the event from analyses of these responses. The end result is a detailed set of data that simulates a real physical collision event inside a particle detector. The Grid is the new IT infrastructure for 21st-century science: a new computing paradigm that is poised to transform the practice of large-scale data-intensive research in science and engineering. The Grid will allow scientists worldwide to view and analyze the huge amounts of data flowing from large-scale experiments in high energy physics, bringing together geographically and organizationally dispersed computational resources such as CPUs, storage systems, communication systems, and data sources.
RRW: repeated random walks on genome-scale protein networks for local cluster discovery
Macropol, Kathy; Can, Tolga; Singh, Ambuj K
2009-01-01
Background: We propose an efficient and biologically sensitive algorithm based on repeated random walks (RRW) for discovering functional modules, e.g., complexes and pathways, within large-scale protein networks. Compared to existing cluster identification techniques, RRW implicitly makes use of network topology, edge weights, and long range interactions between proteins. Results: We apply the proposed technique on a functional network of yeast genes and accurately identify statistically significant clusters of proteins. We validate the biological significance of the results using known complexes in the MIPS complex catalogue database and well-characterized biological processes. We find that 90% of the created clusters have the majority of their catalogued proteins belonging to the same MIPS complex, and about 80% have the majority of their proteins involved in the same biological process. We compare our method to various other clustering techniques, such as the Markov Clustering Algorithm (MCL), and find a significant improvement in the RRW clusters' precision and accuracy values. Conclusion: RRW, which is a technique that exploits the topology of the network, is more precise and robust in finding local clusters. In addition, it has the added flexibility of being able to find multi-functional proteins by allowing overlapping clusters. PMID:19740439
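The repeated-random-walk scoring that RRW builds on can be illustrated with a minimal random-walk-with-restart sketch. This is not the authors' implementation: the graph, the restart probability of 0.3, and the membership threshold below are illustrative assumptions only.

```python
def random_walk_with_restart(adj, seed, restart=0.3, iters=100):
    """Power-iterate p <- (1 - restart) * W p + restart * e_seed.

    adj: dict mapping node -> dict of neighbor -> edge weight.
    Returns the stationary visit probability of every node.
    """
    nodes = list(adj)
    p = {n: 0.0 for n in nodes}
    p[seed] = 1.0
    for _ in range(iters):
        nxt = {n: 0.0 for n in nodes}
        for u in nodes:
            out = sum(adj[u].values())  # total outgoing weight of u
            for v, w in adj[u].items():
                nxt[v] += (1.0 - restart) * p[u] * (w / out)
        nxt[seed] += restart  # restarting probability mass returns to the seed
        p = nxt
    return p


def local_cluster(adj, seed, threshold=0.1, **kw):
    """Local cluster around `seed`: nodes visited with probability >= threshold."""
    p = random_walk_with_restart(adj, seed, **kw)
    return {n for n, prob in p.items() if prob >= threshold}
```

Running this from each protein in turn and thresholding the visit probabilities yields overlapping local clusters, which is what lets multi-functional proteins appear in more than one module.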
The case for electron re-acceleration at galaxy cluster shocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Weeren, Reinout J.; Andrade-Santos, Felipe; Dawson, William A.; ...
2017-01-04
On the largest scales, the Universe consists of voids and filaments making up the cosmic web. Galaxy clusters are located at the knots in this web, at the intersection of filaments. Clusters grow through accretion from these large-scale filaments and by mergers with other clusters and groups. In a growing number of galaxy clusters, elongated Mpc-sized radio sources have been found. Also known as radio relics, these regions of diffuse radio emission are thought to trace relativistic electrons in the intracluster plasma accelerated by low-Mach-number shocks generated by cluster–cluster merger events. A long-standing problem is how low-Mach-number shocks can accelerate electrons efficiently enough to explain the observed radio relics. Here, we report the discovery of a direct connection between a radio relic and a radio galaxy in the merging galaxy cluster Abell 3411–3412 by combining radio, X-ray and optical observations. This discovery indicates that fossil relativistic electrons from active galactic nuclei are re-accelerated at cluster shocks. It also implies that radio galaxies play an important role in governing the non-thermal component of the intracluster medium in merging clusters.
Linkenauger, Sally A.; Leyrer, Markus; Bülthoff, Heinrich H.; Mohler, Betty J.
2013-01-01
The notion of body-based scaling suggests that our body and its action capabilities are used to scale the spatial layout of the environment. Here we present four studies supporting this perspective by showing that the hand acts as a metric which individuals use to scale the apparent sizes of objects in the environment. However, to test this, one must be able to manipulate the size and/or dimensions of the perceiver's hand, which is difficult in the real world because real hand dimensions cannot be altered. To overcome this limitation, we used virtual reality to manipulate the dimensions of participants' fully tracked virtual hands and investigate their influence on the perceived size and shape of virtual objects. In a series of experiments, using several measures, we show that individuals' estimates of the sizes of virtual objects differ depending on the size of their virtual hand, in the direction consistent with the body-based scaling hypothesis. Additionally, we found that these effects were specific to participants' virtual hands rather than another avatar's hands or a salient familiar-sized object. While these studies provide support for a body-based approach to the scaling of spatial layout, they also demonstrate the influence of virtual bodies on the perception of virtual environments. PMID:23874681
Scale-free Graphs for General Aviation Flight Schedules
NASA Technical Reports Server (NTRS)
Alexandov, Natalia M. (Technical Monitor); Kincaid, Rex K.
2003-01-01
In the late 1990s a number of researchers noticed that networks in biology, sociology, and telecommunications exhibited similar characteristics unlike standard random networks. In particular, they found that the cumulative degree distributions of these graphs followed a power law rather than a binomial distribution, and that their clustering coefficients tended to a nonzero constant as the number of nodes, n, became large, rather than O(1/n). Moreover, these networks shared an important property with traditional random graphs: as n becomes large, the average shortest path length scales with log n. This latter property has been coined the small-world property. Taken together, these three properties (small-world, power law, and constant clustering coefficient) describe what are now most commonly referred to as scale-free networks. Since 1997 at least six books and over 400 articles have been written about scale-free networks. This manuscript provides an overview of the salient characteristics of scale-free networks. Computational experience is provided for two mechanisms that grow (dynamic) scale-free graphs, and additional computational experience is given for constructing (static) scale-free graphs via a tabu search optimization approach. Finally, a discussion of potential applications to general aviation networks is given.
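One way to grow a (dynamic) scale-free graph, in the spirit the manuscript describes, is preferential attachment: each new node links to existing nodes with probability proportional to their degree, which produces the power-law degree distribution. This is a generic sketch of that mechanism; the parameters (m = 2 edges per new node, a fixed random seed) are illustrative and not those of the manuscript's experiments.

```python
import random

def preferential_attachment(n, m=2, seed=42):
    """Grow an n-node graph; each new node attaches m edges to existing
    nodes chosen with probability proportional to current degree."""
    rng = random.Random(seed)
    # start from a small clique of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # list of edge endpoints: a uniform pick from it is a degree-weighted pick
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:          # m distinct degree-weighted targets
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]         # update degree weights
    return edges

def degrees(edges):
    """Degree of every node appearing in the edge list."""
    d = {}
    for u, v in edges:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d
```

The resulting degree sequence is heavy-tailed: a few early nodes become hubs with degree far above the mean of roughly 2m.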
The split in the ancient cold front in the Perseus cluster
NASA Astrophysics Data System (ADS)
Walker, Stephen A.; ZuHone, John; Fabian, Andy; Sanders, Jeremy
2018-04-01
Sloshing cold fronts in clusters, produced as the dense cluster core moves around in the cluster potential in response to in-falling subgroups, provide a powerful probe of the physics of the intracluster medium and the magnetic fields permeating it [1,2]. These sharp discontinuities in density and temperature rise gradually outwards with age in a characteristic spiral pattern, embedding into the intracluster medium a record of the minor merging activity of clusters: the further from the cluster centre a cold front is, the older it is. Recently, it was discovered that these cold fronts can survive out to extremely large radii in the Perseus cluster [3]. Here, we report on high-spatial-resolution Chandra observations of the large-scale cold front in Perseus. We find that rather than broadening through diffusion, the cold front remains extremely sharp (consistent with abrupt jumps in density) and instead is split into two sharp edges. These results show that magnetic draping can suppress diffusion for vast periods of time (around 5 Gyr), even as the cold front expands out to nearly half the cluster virial radius.
Efficiency of parallel direct optimization
NASA Technical Reports Server (NTRS)
Janies, D. A.; Wheeler, W. C.
2001-01-01
Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running Linux. These algorithms were tested on several data sets composed of DNA and morphology, ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.
Improving the distinguishable cluster results: spin-component scaling
NASA Astrophysics Data System (ADS)
Kats, Daniel
2018-06-01
The spin-component scaling is employed in the energy evaluation to improve the distinguishable cluster approach. SCS-DCSD reaction energies reproduce reference values with a root-mean-squared deviation well below 1 kcal/mol, the interaction energies are three to five times more accurate than DCSD, and molecular systems with a large amount of static electron correlation are still described reasonably well. SCS-DCSD represents a pragmatic approach to achieve chemical accuracy with a simple method without triples, which can also be applied to multi-configurational molecular systems.
Worse than imagined: Unidentified virtual water flows in China.
Cai, Beiming; Wang, Chencheng; Zhang, Bing
2017-07-01
The impact of virtual water flows on regional water scarcity in China has been discussed in depth in previous research. However, these studies focused only on water quantity; the impact of virtual water flows on water quality has been largely neglected. In this study, we incorporate the blue water footprint (related to water quantity) and the grey water footprint (related to water quality) into a virtual water flow analysis based on a multiregional input-output model for 2007. We find that interprovincial virtual flows account for 23.4% of China's water footprint. The virtual grey water flows are 8.65 times greater than the virtual blue water flows: the virtual blue water and grey water flows are 91.8 and 794.6 Gm³/yr, respectively. Using indicators related only to water quantity to represent virtual water flows, as in previous studies, underestimates their impact on water resources. In addition, the virtual water flows derive mainly from agriculture, the chemical industry, and petroleum processing and coking, which account for 66.8%, 7.1% and 6.2% of the total virtual water flows, respectively. Virtual water flows have intensified both quantity- and quality-induced water scarcity in exporting regions, where low-value-added but water-intensive and high-pollution goods are produced. Our study of virtual water flows can inform effective water use policy for both water resources and water pollution in China, and the methodology can be applied at the global scale or to other countries if data are available.
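The input-output accounting behind virtual water flows reduces, in the single-region toy case, to the Leontief model: total sectoral output x satisfies x = Ax + y for technical coefficients A and final demand y, and embodied (virtual) water is the direct water intensity applied to x. The two-sector coefficients and intensities below are invented for illustration and bear no relation to the paper's 2007 Chinese MRIO table.

```python
def total_output(A, y, iters=200):
    """Solve the Leontief balance x = A x + y by fixed-point iteration.

    Converges when the spectral radius of A is below 1, as it is for
    any economically meaningful coefficient matrix.
    """
    n = len(y)
    x = y[:]
    for _ in range(iters):
        x = [y[i] + sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

def virtual_water(A, y, water_intensity):
    """Embodied water per sector: direct intensity times total output."""
    x = total_output(A, y)
    return [water_intensity[i] * x[i] for i in range(len(y))]
```

In the multiregional case A and y are block-structured over region-sector pairs, but the same Leontief inverse drives the interprovincial flow calculation.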
Covalent Binding with Neutrons on the Femto-scale
NASA Astrophysics Data System (ADS)
von Oertzen, W.; Kanada-En'yo, Y.; Kimura, M.
2017-06-01
In light nuclei we have well defined clusters, nuclei with closed shells, which serve as centers for binary molecules with covalent binding by valence neutrons. Single-neutron orbitals in light neutron-excess nuclei have well defined shell-model quantum numbers. With the combination of two clusters and their neutron valence states, molecular two-center orbitals are defined; in the two-center shell model we can place valence neutrons in a large variety of molecular two-center states, and the formation of dimers becomes possible. The corresponding rotational bands point, with their large moments of inertia and the Coriolis decoupling effect (for K = 1/2 bands), to the internal molecular orbital structure of these states. On this basis, the neutron-rich isotopes allow the formation of a large variety of molecular structures on the nuclear scale, and an extended Ikeda diagram can be drawn for these cases. Molecular bands in Be and Ne isotopes are discussed as textbook examples.
Norris, E; Dunsmuir, S; Duke-Williams, O; Stamatakis, E; Shelton, N
2016-01-01
Introduction Physical activity (PA) has been shown to be an important factor for health and educational outcomes in children. However, a large proportion of children's school day is spent in sedentary lesson-time. There is emerging evidence about the effectiveness of physically active lessons: integrating physical movements and educational content in the classroom. ‘Virtual Traveller’ is a novel 6-week intervention of 10-min sessions performed 3 days per week, using classroom interactive whiteboards to integrate movement into primary-school Maths and English teaching. The primary aim of this project is to evaluate the effect of the Virtual Traveller intervention on children's PA, on-task behaviour and student engagement. Methods and analysis This study will be a cluster-randomised controlled trial with a waiting-list control group. Ten year 4 (aged 8–9 years) classes across 10 primary schools will be randomised by class to either the 6-week Virtual Traveller intervention or the waiting-list control group. Data will be collected 5 times: at baseline, at weeks 2 and 4 of the intervention, and 1 week and 3 months postintervention. At baseline, anthropometric measures, 4-day objective PA monitoring (including 2 weekend days; Actigraph accelerometer), PA and on-task behaviour observations and student engagement questionnaires will be performed. All but anthropometric measures will be repeated at all other data collection points. Changes in overall PA levels and levels during different time-periods (eg, lesson-time) will be examined. Changes in on-task behaviour and student engagement between intervention groups will also be examined. Multilevel regression modelling will be used to analyse the data. Process evaluation will be carried out during the intervention period. Ethics and dissemination The results of this study will be disseminated through peer-review publications and conference presentations. 
Ethical approval was obtained through the University College London Research Ethics Committee (reference number: 3500-004). PMID:27354084
Strong collective attraction in colloidal clusters on a liquid-air interface.
Pergamenshchik, V M
2009-01-01
It is shown that in a cluster of many colloids trapped at a liquid-air interface, the well-known vertical-force-induced pairwise logarithmic attraction changes to a strongly enhanced power-law attraction. In large two-dimensional clusters, the attraction energy scales as the inverse square of the distance between colloids. The enhancement is given by the ratio η = (square of the capillary length)/(interface surface area per colloid) and can be as large as 10^5. This explains why a very small vertical force on colloids, too weak to bring two of them together, can stabilize many-body structures on a liquid-air interface. The profile of a cluster is shown to consist of a large slow collective envelope modulated by a fast low-amplitude perturbation due to individual colloids. A closed equation for the slow envelope, which incorporates an arbitrary power-law repulsion between colloids, is derived. As an example, this equation is solved for a large circular cluster with hard-core colloid repulsion. It is suggested that the predicted effect is responsible for the mysterious stabilization of colloidal structures observed in experiments on the surfaces of isotropic liquids and nematic liquid crystals.
Not all stars form in clusters - measuring the kinematics of OB associations with Gaia
NASA Astrophysics Data System (ADS)
Ward, Jacob L.; Kruijssen, J. M. Diederik
2018-04-01
It is often stated that star clusters are the fundamental units of star formation and that most (if not all) stars form in dense stellar clusters. In this monolithic formation scenario, low-density OB associations are formed from the expansion of gravitationally bound clusters following gas expulsion due to stellar feedback. N-body simulations of this process show that OB associations formed this way retain signs of expansion and elevated radial anisotropy over tens of Myr. However, recent theoretical and observational studies suggest that star formation is a hierarchical process, following the fractal nature of natal molecular clouds and allowing the formation of large-scale associations in situ. We distinguish between these two scenarios by characterizing the kinematics of OB associations using the Tycho-Gaia Astrometric Solution catalogue. To this end, we quantify four key kinematic diagnostics: the number ratio of stars with positive radial velocities to those with negative radial velocities, the median radial velocity, the median radial velocity normalized by the tangential velocity, and the radial anisotropy parameter. Each quantity presents a useful diagnostic of whether the association was more compact in the past. We compare these diagnostics to models representing random motion and the expanding products of monolithic cluster formation. None of these diagnostics show evidence of expansion, either from a single cluster or multiple clusters, and the observed kinematics are better represented by a random velocity distribution. This result favours the hierarchical star formation model in which a minority of stars forms in bound clusters and large-scale, hierarchically structured associations are formed in situ.
Measures of large-scale structure in the CfA redshift survey slices
NASA Technical Reports Server (NTRS)
De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.
1991-01-01
Variations of the counts-in-cells with cell size are used here to define two statistical measures of large-scale clustering in three 6 deg slices of the CfA redshift survey. A percolation criterion is used to estimate the filling factor f, which measures the fraction of the total volume in the survey occupied by the large-scale structures. For the full 18 deg slice of the CfA redshift survey, f ≈ 0.25 ± 0.05. After removing groups with more than five members from two of the slices, variations of the counts in occupied cells with cell size have a power-law behavior with a slope β ≈ 2.2 on scales from 1-10 h⁻¹ Mpc. Application of both this statistic and the percolation analysis to simulations suggests that a network of two-dimensional structures is a better description of the geometry of the clustering in the CfA slices than a network of one-dimensional structures. Counts-in-cells are also used to estimate the average galaxy surface density in sheets like the Great Wall, at about 0.3 galaxies h²/Mpc.
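The operational core of the counts-in-cells filling-factor statistic is simple: bin positions into cells of side L and measure the fraction of occupied cells as L varies. This 2-D sketch uses made-up coordinates and a hypothetical box size purely for illustration; the survey's actual analysis works in redshift-space slices with a percolation criterion.

```python
from collections import Counter

def occupied_fraction(points, box, cell):
    """Fraction of square cells (side `cell`) in a `box` x `box` region
    that contain at least one point: a simple filling-factor measure."""
    counts = Counter((int(x // cell), int(y // cell)) for x, y in points)
    ncells = int(box // cell) ** 2
    return len(counts) / ncells
```

Tracking how this fraction grows with cell size distinguishes clustered distributions (fraction stays low until cells are large) from uniform ones.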
Large-scale clustering as a probe of the origin and the host environment of fast radio bursts
NASA Astrophysics Data System (ADS)
Shirasaki, Masato; Kashiyama, Kazumi; Yoshida, Naoki
2017-04-01
We propose to use degree-scale angular clustering of fast radio bursts (FRBs) to identify their origin and the host galaxy population. We study the information content in the autocorrelation of the angular positions and dispersion measures (DMs) and in the cross-correlation with galaxies. We show that the cross-correlation with Sloan Digital Sky Survey (SDSS) galaxies will place stringent constraints on the mean physical quantities associated with FRBs. If ~10,000 FRBs are detected with ≲1° resolution in the SDSS field, the clustering analysis with an intrinsic DM scatter of 100 pc cm⁻³ can constrain the global abundance of free electrons at z ≲ 1 and the large-scale bias of FRB host galaxies (the statistical relation between the distribution of host galaxies and the cosmic matter density field) with fractional errors (at a 68% confidence level) of ~10% and ~20%, respectively. The mean near-source dispersion measure and the delay-time distribution of FRB rates relative to the global star formation rate can also be determined by combining the clustering and the probability distribution function of the DM. Our approach will be complementary to high-resolution (≪1°) event localization using, e.g., the VLA and VLBI for identifying the origin of FRBs and the source environment. We strongly encourage future observational programs such as CHIME, UTMOST, and HIRAX to survey FRBs in the SDSS field.
NASA Astrophysics Data System (ADS)
Lele, Sanjiva K.
2002-08-01
Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48 processor high performance computing cluster. This report details the hardware which was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after this summary describes the computer cluster, its setup, and some cluster performance benchmark results. The second section explains ongoing research efforts which have benefited from the cluster hardware, and presents highlights of those efforts since installation of the cluster.
“Local” Dark Energy Outflows Around Galaxy Groups and Rich Clusters
NASA Astrophysics Data System (ADS)
Byrd, Gene G.; Chernin, A. D.; Teerikorpi, P.; Dolgachev, V. P.; Kanter, A. A.; Domozhilova, L. M.; Valtonen, M.
2013-01-01
First detected at large Gpc distances, dark energy is a vacuum energy formulated as Einstein's cosmological constant, Λ. We have found its effects on “small” 1-3 Mpc scales in our Local Group. We have now found these effects in other nearby groups using member Doppler shifts and 3D distances from group centers (Cen A-M83; M81-M82; CV I). For the larger 20-30 Mpc Virgo and Fornax clusters, we have now found similar effects. Observationally, for both groups and clusters, gravity dominates a bound central system. The system's gravitation and dark energy create a “zero-gravity” radius (R_{ZG}) from the center where the two balance. Smaller members bound inside R_{ZG} may be pulled out along with the less bound members, which recede farther. A linear increase of recession velocity with distance results, which approaches a linear global Hubble law. These outflows are seen around groups in cosmological simulations which include galaxies as small as ~10^{-4} of the group mass. Scaled plots of asymptotic recessional velocity, V/(H(R_{ZG})), versus distance/R_{ZG} for the outer galaxies are very similar for both the small groups and the large clusters. This similarity on 1-30 Mpc scales suggests that a quasi-stationary bound central component plus an expanding outflow applies to a wide range of groups and clusters, due to the small-scale action of dark energy. Our new textbook, Byrd, G., Chernin, A., Teerikorpi, P. and Valtonen, M. 2012, "Paths to Dark Energy: Theory and Observation," de Gruyter, Berlin/Boston, contains background and cosmological simulation plots. Group data and scaled plots are in our new article: A. D. Chernin, P. Teerikorpi, V. P. Dolgachev, A. A. Kanter, L. M. Domozhilova, M. J. Valtonen, and G. G. Byrd, 2012, Astronomy Reports, Vol. 56, pp. 653-669.
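The balance defining the zero-gravity radius can be made explicit. Assuming the standard formulation used by Chernin and collaborators (the abstract itself does not spell it out), equating the Newtonian attraction of a bound mass M with the antigravity of the Λ vacuum of density ρ_Λ gives:

```latex
% Effective radial force per unit mass at distance R from a mass M
% embedded in a vacuum of density \rho_\Lambda:
F(R) = -\frac{GM}{R^{2}} + \frac{8\pi G}{3}\,\rho_\Lambda R,
\qquad
F(R_{\mathrm{ZG}}) = 0
\;\Rightarrow\;
R_{\mathrm{ZG}} = \left(\frac{3M}{8\pi \rho_\Lambda}\right)^{1/3}.
```

Inside R_ZG gravity wins and the system stays bound; outside, the vacuum term dominates and members join the accelerating outflow the abstract describes.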
A Giant Warm Baryonic Halo for the Coma Cluster
NASA Technical Reports Server (NTRS)
Bonamente, Max; Lieu, Richard; Joy, Marshall K.; Six, N. Frank (Technical Monitor)
2002-01-01
Several deep PSPC observations of the Coma cluster unveil a very large-scale halo of soft X-ray emission, substantially in excess of the well-known radiation from the hot intra-cluster medium. The excess emission, previously reported in the central cluster regions through lower-sensitivity EUVE and ROSAT data, is now evident out to a radius of 2.5 Mpc, demonstrating that the soft excess radiation from clusters is a phenomenon of cosmological significance. The spectrum at these large radii cannot be modeled non-thermally, but is consistent with the original scenario of thermal emission at warm temperatures. The mass of this plasma is at least on par with that of the hot X-ray emitting plasma, and significantly larger if the plasma resides in low-density filamentary structures. Thus the data lend vital support to current theories of cosmic evolution, which predict that more than 50 percent of today's baryons by mass reside in warm-hot filaments converging at clusters of galaxies.
Research on distributed virtual reality system in electronic commerce
NASA Astrophysics Data System (ADS)
Xue, Qiang; Wang, Jiening; Sun, Jizhou
2004-03-01
In this paper, Distributed Virtual Reality (DVR) technology applied in Electronic Commerce (EC) is discussed. DVR provides a new means for humans to recognize, analyze and resolve large-scale, complex problems, which has driven its rapid development in EC. The technologies of CSCW (Computer Supported Cooperative Work) and middleware are introduced into the development of the EC-DVR system to provide the necessary cooperation and communication services and to avoid re-implementing basic modules repeatedly. Finally, the paper gives a platform structure for the EC-DVR system.
Design and Implement of Astronomical Cloud Computing Environment In China-VO
NASA Astrophysics Data System (ADS)
Li, Changhua; Cui, Chenzhou; Mi, Linying; He, Boliang; Fan, Dongwei; Li, Shanshan; Yang, Sisi; Xu, Yunfei; Han, Jun; Chen, Junyi; Zhang, Hailong; Yu, Ce; Xiao, Jian; Wang, Chuanjun; Cao, Zihuang; Fan, Yufeng; Liu, Liang; Chen, Xiao; Song, Wenming; Du, Kangyu
2017-06-01
The astronomy cloud computing environment is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the astronomy cloud computing environment was designed and implemented by the China-VO team. It consists of five distributed nodes across the mainland of China. Astronomers can obtain computing and storage resources in this cloud computing environment, and through it can easily search and analyze astronomical data collected by different telescopes and data centers, avoiding large-scale dataset transfers.
Circadian Rhythms in Socializing Propensity
Zhang, Cheng; Phang, Chee Wei; Zeng, Xiaohua; Wang, Ximeng; Xu, Yunjie; Huang, Yun; Contractor, Noshir
2015-01-01
Using large-scale interaction data from a virtual world, we show that people’s propensity to socialize (forming new social connections) varies by hour of the day. We arrive at our results by longitudinally tracking people’s friend-adding activities in a virtual world. Specifically, we find that people are most likely to socialize during the evening, at approximately 8 p.m. and 12 a.m., and are least likely to do so in the morning, at approximately 8 a.m. Such patterns prevail on weekdays and weekends and are robust to variations in individual characteristics and geographical conditions. PMID:26353080
Research on elastic resource management for multi-queue under cloud computing environment
NASA Astrophysics Data System (ADS)
CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang
2017-10-01
As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for a cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the HTCondor job queues, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. Practical runs show virtual computing resources dynamically expanding or shrinking as computing requirements change. Additionally, the CPU utilization ratio of computing resources was significantly increased compared with traditional resource management. The system also performs well when there are multiple Condor schedulers and multiple job queues.
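The dual-resource-threshold decision at the heart of such a system can be sketched as a simple controller: expand the pool when utilization is high and jobs are queued, shrink it when utilization is low and the queue is empty, and never exceed the quota. The function name, thresholds, step size and quota below are hypothetical stand-ins, not the actual IHEPCloud/HTCondor interface.

```python
def adjust_pool(active_nodes, idle_jobs, running_jobs,
                low=0.2, high=0.8, quota=100, step=5):
    """Dual-threshold elastic scaling decision (one job slot per node).

    Returns the new node count, clamped to [1, quota].
    """
    if active_nodes == 0:
        # cold start: bring up a first batch within the quota
        return min(step, quota)
    utilization = running_jobs / active_nodes
    if idle_jobs > 0 and utilization >= high:
        active_nodes = min(active_nodes + step, quota)   # expand
    elif idle_jobs == 0 and utilization <= low:
        active_nodes = max(active_nodes - step, 1)       # shrink
    return active_nodes
```

In a real deployment this decision would be driven by HTCondor queue statistics per experiment and enforced through the quota service, with the two-stage pool supplying pre-provisioned VMs to make the expand step fast.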
Privett, Austin J.; Teixeira, Erico S.; Stopera, Christopher; Morales, Jorge A.
2017-01-01
To elucidate microscopic details of proton cancer therapy (PCT), we apply the simplest-level electron nuclear dynamics (SLEND) method to H+ + (H2O)1-6 at ELab = 100 keV. These systems are computationally tractable prototypes to simulate water radiolysis reactions, i.e. the PCT processes that generate the DNA-damaging species against cancerous cells. To capture incipient bulk-water effects, ten (H2O)1-6 isomers are considered, ranging from quasi-planar/multiplanar (H2O)1-6 to "smallest-drop" prism and cage (H2O)6 structures. SLEND is a time-dependent, variational, non-adiabatic and direct method that adopts a nuclear classical-mechanics description and an electronic single-determinantal wavefunction in the Thouless representation. Short-time SLEND/6-31G* (n = 1–6) and /6-31G** (n = 1–5) simulations render cluster-to-projectile 1-electron-transfer (1-ET) total integral cross sections (ICSs) and 1-ET probabilities. In absolute quantitative terms, the SLEND/6-31G* 1-ET ICS compares satisfactorily with alternative experimental and theoretical results only available for n = 1 and exhibits almost the same accuracy as the best alternative theoretical result. SLEND/6-31G** overestimates the 1-ET ICS for n = 1, but a comparable overestimation is also observed with another theoretical method. An investigation of H+ + H indicates that electron direct ionization (DI) becomes significant with the large virtual-space quasi-continuum in large basis sets; thus, the SLEND/6-31G** 1-ET ICS is overestimated by DI contributions. The solution to this problem is discussed. In relative quantitative terms, both the SLEND/6-31G* and /6-31G** 1-ET ICSs precisely fit into physically justified scaling formulae as a function of the cluster size; this indicates SLEND's suitability for predicting properties of water clusters with varying size. Long-time SLEND/6-31G* (n = 1–4) simulations predict the formation of the DNA-damaging radicals H, OH, O and H3O.
While “smallest-drop” isomers are included, no early manifestations of bulk water PCT properties are observed and simulations with larger water clusters will be needed to capture those effects. This study is the largest SLEND investigation on water radiolysis to date. PMID:28376128
Atmospheric considerations regarding the impact of heat dissipation from a nuclear energy center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rotty, R.M.; Bauman, H.; Bennett, L.L.
1976-05-01
Potential changes in climate resulting from a large nuclear energy center are discussed. On a global scale, no noticeable changes are likely, but on both a regional and a local scale, changes can be expected. Depending on the cooling system employed, the amount of fog may increase, the amount and distribution of precipitation will change, and the frequency or location of severe storms may change. Very large heat releases over small surface areas can result in greater atmospheric instability; a large number of closely spaced natural-draft cooling towers have this disadvantage. On the other hand, employment of natural-draft towers makes an increase in the occurrence of ground fog unlikely. The analysis suggests that the cooling towers for a large nuclear energy center should be located in clusters of four with at least 2.5-mile spacing between the clusters. This is equivalent to the requirement of one acre of land surface per each two megawatts of heat being rejected.
NASA Astrophysics Data System (ADS)
Lima, Carlos H. R.; AghaKouchak, Amir; Lall, Upmanu
2017-12-01
Floods are the main natural disaster in Brazil, causing substantial economic damage and loss of life. Studies suggest that some extreme floods result from a causal climate chain. Exceptional rain and floods are determined by large-scale anomalies and persistent patterns in the atmospheric and oceanic circulations, which influence the magnitude, extent, and duration of these extremes. Moreover, floods can result from different generating mechanisms. These factors contradict the assumptions of homogeneity, and often stationarity, in flood frequency analysis. Here we outline a methodological framework based on clustering using self-organizing maps (SOMs) that allows the linkage of large-scale processes to local-scale observations. The methodology is applied to flood data from several sites in the flood-prone Upper Paraná River basin (UPRB) in southern Brazil. The SOM clustering approach is employed to classify the 6-day rainfall field over the UPRB into four categories, which are then used to classify floods into four types based on the spatiotemporal dynamics of the rainfall field prior to the observed flood events. An analysis of the vertically integrated moisture fluxes, vorticity, and high-level atmospheric circulation revealed that these four clusters are related to known tropical and extratropical processes, including the South American low-level jet (SALLJ); extratropical cyclones; and the South Atlantic Convergence Zone (SACZ). Persistent anomalies in the sea surface temperature fields in the Pacific and Atlantic oceans are also found to be associated with these processes. Floods associated with each cluster present different patterns in terms of frequency, magnitude, spatial variability, scaling, and synchronization of events across the sites and subbasins. These insights suggest new directions for flood risk assessment, forecasting, and management.
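A minimal numerical sketch of the SOM-classification idea used in the abstract above, with synthetic two-dimensional stand-ins for the 6-day rainfall fields. Everything here, from the regime centers to the 2x2 map size and decay schedules, is an illustrative assumption, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for gridded 6-day rainfall fields: four synthetic
# regimes with well-separated centers (arbitrary illustration values).
centers = np.array([[0., 0.], [5., 0.], [0., 5.], [5., 5.]])
data = np.vstack([c + rng.normal(0, 0.3, (50, 2)) for c in centers])

# Minimal 2x2 self-organizing map. Grid coordinates of the 4 units:
grid = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
weights = data[[0, 50, 100, 150]].copy()  # sample-based initialization

n_epochs = 20
for epoch in range(n_epochs):
    eta = 0.5 * (1 - epoch / n_epochs)           # learning-rate decay
    sigma = 0.8 * (1 - epoch / n_epochs) + 0.1   # neighborhood decay
    for x in data[rng.permutation(len(data))]:
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)         # distance on the map grid
        h = np.exp(-d2 / (2 * sigma ** 2))                 # Gaussian neighborhood
        weights += eta * h[:, None] * (x - weights)

# Classify each sample by its best-matching map unit (its "category").
labels = np.argmin(((data[:, None, :] - weights[None, :, :]) ** 2).sum(-1), axis=1)
print(len(np.unique(labels)))
```

In the study, each occupied map unit would correspond to one rainfall category, which is then used to type the floods that follow.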
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
Resurrecting hot dark matter - Large-scale structure from cosmic strings and massive neutrinos
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1988-01-01
These are the results of a numerical simulation of the formation of large-scale structure from cosmic-string loops in a universe dominated by massive neutrinos (hot dark matter). This model has several desirable features. The final matter distribution contains isolated density peaks embedded in a smooth background, producing a natural bias in the distribution of luminous matter. Because baryons can accrete onto the cosmic strings before the neutrinos, the galaxies will have baryon cores and dark neutrino halos. Galaxy formation in this model begins much earlier than in random-phase models. On large scales the distribution of clustered matter visually resembles the CfA survey, with large voids and filaments.
GUASOM Analysis of the ALHAMBRA Survey
NASA Astrophysics Data System (ADS)
Garabato, Daniel; Manteiga, Minia; Dafonte, Carlos; Álvarez, Marco A.
2017-10-01
GUASOM is a data mining tool designed for knowledge discovery in large astronomical spectrophotometric archives, developed in the framework of Gaia DPAC (Data Processing and Analysis Consortium). Our tool is based on a type of unsupervised-learning artificial neural network known as the self-organizing map (SOM). SOMs permit the grouping and visualization of large amounts of data for which there is no a priori knowledge, and hence they are very useful for analyzing the huge amount of information present in modern spectrophotometric surveys. SOMs are used to organize the information in clusters of objects, as homogeneously as possible according to their spectral energy distributions, and to project them onto a 2D grid where the data structure can be visualized. Each cluster has a representative, called a prototype, which is a virtual pattern that best represents or resembles the set of input patterns belonging to that cluster. Prototypes simplify the task of determining the physical nature and properties of the objects populating each cluster. Our algorithm has been tested on the ALHAMBRA survey spectrophotometric observations; here we present our results concerning the survey segmentation, visualization of the data structure, separation between types of objects (stars and galaxies), data homogeneity of neurons, cluster prototypes, redshift distribution and crossmatch with other databases (Simbad).
Virtual reality: a reality for future military pilotage?
NASA Astrophysics Data System (ADS)
McIntire, John P.; Martinsen, Gary L.; Marasco, Peter L.; Havig, Paul R.
2009-05-01
Virtual reality (VR) systems provide exciting new ways to interact with information and with the world. The visual VR environment can be synthetic (computer generated) or be an indirect view of the real world using sensors and displays. With the potential opportunities of a VR system, the question arises about what benefits or detriments a military pilot might incur by operating in such an environment. Immersive and compelling VR displays could be accomplished with an HMD (e.g., imagery on the visor), large area collimated displays, or by putting the imagery on an opaque canopy. But what issues arise when, instead of viewing the world directly, a pilot views a "virtual" image of the world? Is 20/20 visual acuity in a VR system good enough? To deliver this acuity over the entire visual field would require over 43 megapixels (MP) of display surface for an HMD or about 150 MP for an immersive CAVE system, either of which presents a serious challenge with current technology. Additionally, the same number of sensor pixels would be required to drive the displays to this resolution (and formidable network architectures required to relay this information), or massive computer clusters are necessary to create an entirely computer-generated virtual reality with this resolution. Can we presently implement such a system? What other visual requirements or engineering issues should be considered? With the evolving technology, there are many technological issues and human factors considerations that need to be addressed before a pilot is placed within a virtual cockpit.
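The ~43 MP figure in the abstract above can be reproduced with a back-of-envelope calculation: 20/20 acuity resolves roughly 1 arcminute, so covering the field of view at 1 pixel per arcminute gives the required pixel count. The 160 x 75 degree field assumed below is our illustrative choice; the abstract does not state the exact field the authors used:

```python
# 20/20 acuity ~ 1 arcminute per resolvable feature, so sample the
# visual field at 1 pixel per arcminute. Field-of-view values are an
# assumption for illustration (binocular field, roughly 160 x 75 deg).
ARCMIN_PER_DEG = 60
h_deg, v_deg = 160, 75
pixels = (h_deg * ARCMIN_PER_DEG) * (v_deg * ARCMIN_PER_DEG)
print(round(pixels / 1e6, 1))  # megapixels; ~43 MP for these assumptions
```

A Nyquist-style factor of 2 per axis, or a wider assumed field, multiplies this figure accordingly, which is one way the ~150 MP CAVE estimate can arise.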
"A Richness Study of 14 Distant X-Ray Clusters from the 160 Square Degree Survey"
NASA Technical Reports Server (NTRS)
Jones, Christine; West, Donald (Technical Monitor)
2001-01-01
We have measured the surface density of galaxies toward 14 X-ray-selected cluster candidates at redshifts z_i ≳ 0.46, and we show that they are associated with rich galaxy concentrations. These clusters, having X-ray luminosities of L_X(0.5-2 keV) ≈ (0.5-2.6) x 10^44 erg/s, are among the most distant and luminous in our 160 deg^2 ROSAT Position Sensitive Proportional Counter cluster survey. We find that the clusters range between Abell richness classes 0 and 2 and have a most probable richness class of 1. We compare the richness distribution of our distant clusters to those for three samples of nearby clusters with similar X-ray luminosities. We find that the nearby and distant samples have similar richness distributions, which shows that clusters have apparently not evolved substantially in richness since redshift z = 0.5. There is, however, a marginal tendency for the distant clusters to be slightly poorer than nearby clusters, although deeper multicolor data for a larger sample would be required to confirm this trend. We compare the distribution of distant X-ray clusters in the L_X-richness plane to the distribution of optically selected clusters from the Palomar Distant Cluster Survey. The optically selected clusters appear overly rich for their X-ray luminosities when compared to X-ray-selected clusters. Apparently, X-ray and optical surveys do not necessarily sample identical mass concentrations at large redshifts. This may indicate the existence of a population of optically rich clusters with anomalously low X-ray emission. More likely, however, it reflects the tendency for optical surveys to select unvirialized mass concentrations, as might be expected when peering along large-scale filaments.
The Mass Function of Abell Clusters
NASA Astrophysics Data System (ADS)
Chen, J.; Huchra, J. P.; McNamara, B. R.; Mader, J.
1998-12-01
The velocity dispersion and mass functions for rich clusters of galaxies provide important constraints on models of the formation of Large-Scale Structure (e.g., Frenk et al. 1990). However, prior estimates of the velocity dispersion or mass function for galaxy clusters have been based on either very small samples of clusters (Bahcall and Cen 1993; Zabludoff et al. 1994) or large but incomplete samples (e.g., the Girardi et al. (1998) determination from a sample of clusters with more than 30 measured galaxy redshifts). In contrast, we approach the problem by constructing a volume-limited sample of Abell clusters. We collected individual galaxy redshifts for our sample from two major galaxy velocity databases, the NASA Extragalactic Database, NED, maintained at IPAC, and ZCAT, maintained at SAO. We assembled a database with velocity information for possible cluster members and then selected cluster members based on both spatial and velocity data. Cluster velocity dispersions and masses were calculated following the procedures of Danese, De Zotti, and di Tullio (1980) and Heisler, Tremaine, and Bahcall (1985), respectively. The final velocity dispersion and mass functions were analyzed in order to constrain cosmological parameters by comparison to the results of N-body simulations. Our data for the cluster sample as a whole and for the individual clusters (spatial maps and velocity histograms) in our sample are available on-line at http://cfa-www.harvard.edu/huchra/clusters. This website will be updated as more data become available in the master redshift compilations, and will be expanded to include more clusters and large groups of galaxies.
Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets
Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L
2014-01-01
Background As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze them. Methods Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, which is a portal, and associated middleware that provides a single entry point and a single sign on for the various Bionimbus resources; and Yates, which automates the installation, configuration, and maintenance of the software infrastructure required. Results Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB for each sample. Conclusions Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. PMID:24464852
Bock, Lars V.; Blau, Christian; Vaiana, Andrea C.; Grubmüller, Helmut
2015-01-01
During ribosomal translation, the two ribosomal subunits remain associated through intersubunit bridges, despite rapid large-scale intersubunit rotation. The absence of large barriers hindering rotation is a prerequisite for rapid rotation. Here, we investigate how such a flat free-energy landscape is achieved, in particular considering the large shifts the bridges undergo at the periphery. The dynamics and energetics of the intersubunit contact network are studied using molecular dynamics simulations of the prokaryotic ribosome in intermediate states of spontaneous translocation. Based on observed occupancies of intersubunit contacts, residues were grouped into clusters. In addition to the central contact clusters, peripheral clusters were found to maintain strong steady interactions by changing contacts in the course of rotation. The peripheral B1 bridges are stabilized by a changing contact pattern of charged residues that adapts to the rotational state. In contrast, steady strong interactions of the B4 bridge are ensured by the flexible helix H34 following the movement of protein S15. The tRNAs which span the subunits contribute to the intersubunit binding enthalpy to an almost constant degree, despite their different positions in the ribosome. These mechanisms keep the intersubunit interaction strong and steady during rotation, thereby preventing dissociation and enabling rapid rotation. PMID:26109353
Melozzi, Francesca; Woodman, Marmaduke M; Jirsa, Viktor K; Bernard, Christophe
2017-01-01
Connectome-based modeling of large-scale brain network dynamics enables causal in silico interrogation of the brain's structure-function relationship, necessitating the close integration of diverse neuroinformatics fields. Here we extend the open-source simulation software The Virtual Brain (TVB) to whole mouse brain network modeling based on individual diffusion magnetic resonance imaging (dMRI)-based or tracer-based detailed mouse connectomes. We provide practical examples on how to use The Virtual Mouse Brain (TVMB) to simulate brain activity, such as seizure propagation and the switching behavior of the resting state dynamics in health and disease. TVMB enables theoretically driven experimental planning and ways to test predictions in the numerous strains of mice available to study brain function in normal and pathological conditions.
The Morphologies and Alignments of Gas, Mass, and the Central Galaxies of CLASH Clusters of Galaxies
NASA Astrophysics Data System (ADS)
Donahue, Megan; Ettori, Stefano; Rasia, Elena; Sayers, Jack; Zitrin, Adi; Meneghetti, Massimo; Voit, G. Mark; Golwala, Sunil; Czakon, Nicole; Yepes, Gustavo; Baldi, Alessandro; Koekemoer, Anton; Postman, Marc
2016-03-01
Morphology is often used to infer the state of relaxation of galaxy clusters. The regularity, symmetry, and degree to which a cluster is centrally concentrated inform quantitative measures of cluster morphology. The Cluster Lensing and Supernova survey with Hubble Space Telescope (CLASH) used weak and strong lensing to measure the distribution of matter within a sample of 25 clusters, 20 of which were deemed to be "relaxed" based on their X-ray morphology and alignment of the X-ray emission with the Brightest Cluster Galaxy. Toward a quantitative characterization of this important sample of clusters, we present uniformly estimated X-ray morphological statistics for all 25 CLASH clusters. We compare X-ray morphologies of CLASH clusters with those identically measured for a large sample of simulated clusters from the MUSIC-2 simulations, selected by mass. We confirm a threshold in X-ray surface brightness concentration of C ≳ 0.4 for cool-core clusters, where C is the ratio of X-ray emission inside 100 h_70^-1 kpc compared to inside 500 h_70^-1 kpc. We report and compare morphologies of these clusters inferred from Sunyaev-Zeldovich Effect (SZE) maps of the hot gas and from projected mass maps based on strong and weak lensing. We find a strong agreement in alignments of the orientation of major axes for the lensing, X-ray, and SZE maps of nearly all of the CLASH clusters at radii of 500 kpc (approximately 1/2 R_500 for these clusters). We also find a striking alignment of cluster shapes at the 500 kpc scale, as measured with X-ray, SZE, and lensing, with that of the near-infrared stellar light at 10 kpc scales for the 20 "relaxed" clusters. This strong alignment indicates a powerful coupling between cluster-scale and galaxy-scale galaxy formation processes.
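The concentration statistic defined in the abstract above, C = S_X(< 100 kpc) / S_X(< 500 kpc), is straightforward to evaluate on a surface-brightness image. The synthetic beta-model profile, pixel scale, and core radius below are illustrative choices, not CLASH measurement details:

```python
import numpy as np

# Synthetic X-ray surface-brightness image on a 101x101 grid,
# 10 kpc per pixel (illustrative values, not a real cluster).
kpc_per_pix = 10.0
n = 101
y, x = np.mgrid[:n, :n]
r = np.hypot(x - n // 2, y - n // 2) * kpc_per_pix  # radius in kpc

# Isothermal beta-model brightness, S ~ (1 + (r/r_c)^2)^(-3*beta + 0.5),
# with beta = 2/3 and core radius r_c = 150 kpc (assumed).
beta, r_c = 2 / 3, 150.0
img = (1 + (r / r_c) ** 2) ** (-3 * beta + 0.5)

# Concentration: flux inside 100 kpc over flux inside 500 kpc.
C = img[r < 100].sum() / img[r < 500].sum()
print(round(C, 2))
```

For this rather extended synthetic profile C comes out well below the ≳ 0.4 cool-core threshold quoted in the abstract; a more centrally peaked profile would push C above it.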
2015-01-01
Background Cellular processes are known to be modular and are realized by groups of proteins implicated in common biological functions. Such groups of proteins are called functional modules, and many community detection methods have been devised for their discovery from protein interaction network (PIN) data. In current agglomerative clustering approaches, vertices with just a very few neighbors are often classified as separate clusters, which does not make sense biologically. Also, a major limitation of agglomerative techniques is that their computational efficiency does not scale well to large PINs. Finally, PIN data obtained from large-scale experiments generally contain many false positives, and this makes it hard for agglomerative clustering methods to find the correct clusters, since they are known to be sensitive to noisy data. Results We propose a local similarity premetric, the relative vertex clustering value, as a new criterion for deciding when a node can be added to a given node's cluster, which addresses the above three issues. Based on this criterion, we introduce a novel and very fast agglomerative clustering technique, FAC-PIN, for discovering functional modules and protein complexes from PIN data. Conclusions Our proposed FAC-PIN algorithm is applied to nine PIN datasets from eight different species including the yeast PIN, and the identified functional modules are validated using Gene Ontology (GO) annotations from DAVID Bioinformatics Resources. Identified protein complexes are also validated using experimentally verified complexes. Computational results show that FAC-PIN can discover functional modules or protein complexes from PINs more accurately and more efficiently than HC-PIN and CNM, the current state-of-the-art approaches for clustering PINs in an agglomerative manner. PMID:25734691
Gross, Alexander; Murthy, Dhiraj
2014-10-01
This paper explores a variety of methods for applying the Latent Dirichlet Allocation (LDA) automated topic modeling algorithm to the modeling of the structure and behavior of virtual organizations found within modern social media and social networking environments. As the field of Big Data reveals, an increase in the scale of social data available presents new challenges which are not tackled by merely scaling up hardware and software. Rather, they necessitate new methods and, indeed, new areas of expertise. Natural language processing provides one such method. This paper applies LDA to the study of scientific virtual organizations whose members employ social technologies. Because of the vast data footprint in these virtual platforms, we found that natural language processing was needed to 'unlock' and render visible latent, previously unseen conversational connections across large textual corpora (spanning profiles, discussion threads, forums, and other social media incarnations). We introduce variants of LDA and ultimately argue that natural language processing is a critical interdisciplinary methodology for making better sense of social 'Big Data'; using LDA, we were able to successfully model nested discussion topics from forums and blog posts. Importantly, we found that LDA can move us beyond the state of the art in conventional Social Network Analysis techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
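A tiny LDA example illustrating the basic workflow behind the abstract above, using scikit-learn. The four-document corpus and the two-topic setting are toy illustrations, not the paper's forum/blog data or its LDA variants:

```python
# Minimal LDA topic-modeling sketch (toy corpus, illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "telescope survey galaxy cluster redshift",
    "galaxy redshift survey telescope imaging",
    "virtual machine cloud cluster scheduler",
    "cloud scheduler virtual machine queue",
]

X = CountVectorizer().fit_transform(docs)            # document-term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)                     # each row sums to 1

print(doc_topic.shape)           # one topic distribution per document
print(doc_topic.argmax(axis=1))  # dominant topic per document
```

On a real corpus of profiles, threads and posts, the dominant-topic assignments and topic-word distributions are what make the latent conversational structure visible.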
Clustering Molecular Dynamics Trajectories for Optimizing Docking Experiments
De Paris, Renata; Quevedo, Christian V.; Ruiz, Duncan D.; Norberto de Souza, Osmar; Barros, Rodrigo C.
2015-01-01
Molecular dynamics simulations of protein receptors have become an attractive tool for rational drug discovery. However, the high computational cost of employing molecular dynamics trajectories in virtual screening of large repositories threatens the feasibility of this task. Computational intelligence techniques have been applied in this context, with the ultimate goal of reducing the overall computational cost so the task can become feasible. Particularly, clustering algorithms have been widely used as a means to reduce the dimensionality of molecular dynamics trajectories. In this paper, we develop a novel methodology for clustering entire trajectories using structural features from the substrate-binding cavity of the receptor in order to optimize docking experiments on a cloud-based environment. The resulting partition was selected based on three clustering validity criteria, and it was further validated by analyzing the interactions between 20 ligands and a fully flexible receptor (FFR) model containing a 20 ns molecular dynamics simulation trajectory. Our proposed methodology shows that taking into account features of the substrate-binding cavity as input for the k-means algorithm is a promising technique for accurately selecting ensembles of representative structures tailored to a specific ligand. PMID:25873944
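The core step described above, k-means on per-snapshot cavity features followed by picking one representative structure per cluster, can be sketched as follows. The three synthetic conformational regimes and the feature names stand in for the real 20 ns FFR trajectory, which we do not have:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Stand-in for per-snapshot substrate-binding-cavity features (cavity
# volume in A^3 and two opening widths in A); values are invented.
regimes = np.array([[900., 12., 8.], [1200., 9., 11.], [700., 15., 6.]])
noise = rng.normal(0, [20., 0.5, 0.5], (300, 3))
frames = np.repeat(regimes, 100, axis=0) + noise   # 300 "snapshots"

# Standardize so the large-valued volume feature does not dominate.
z = (frames - frames.mean(0)) / frames.std(0)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(z)

# One representative snapshot per cluster: the frame nearest its centroid.
reps = []
for k in range(3):
    members = np.flatnonzero(km.labels_ == k)
    d2 = ((z[members] - km.cluster_centers_[k]) ** 2).sum(axis=1)
    reps.append(int(members[np.argmin(d2)]))
print(reps)  # indices of the representative frames
```

Docking each ligand only against the representative frames, instead of every snapshot, is what makes screening against a flexible receptor tractable.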
ΛGR Centennial: Cosmic Web in Dark Energy Background
NASA Astrophysics Data System (ADS)
Chernin, A. D.
The basic building blocks of the Cosmic Web are groups and clusters of galaxies, superclusters (pancakes) and filaments embedded in the universal dark energy background. The background produces antigravity, and the antigravity effect is strong in groups, clusters and superclusters. Antigravity is very weak in filaments, where matter (dark matter and baryons) produces the gravity that dominates the filaments' internal dynamics. The gravity-antigravity interplay on large scales is a grandiose phenomenon predicted by ΛGR theory and seen in modern observations of the Cosmic Web.
The influence of idealized surface heterogeneity on virtual turbulent flux measurements
NASA Astrophysics Data System (ADS)
De Roo, Frederik; Mauder, Matthias
2018-04-01
The imbalance of the surface energy budget in eddy-covariance measurements is still an unsolved problem. A possible cause is the presence of land surface heterogeneity, which affects the boundary-layer turbulence. To investigate the impact of surface variables on the partitioning of the energy budget of flux measurements in the surface layer under convective conditions, we set up a systematic parameter study by means of large-eddy simulation. For the study we use a virtual control volume approach, which allows the determination of advection by the mean flow, flux-divergence and storage terms of the energy budget at the virtual measurement site, in addition to the standard turbulent flux. We focus on the heterogeneity of the surface fluxes and keep the topography flat. The surface fluxes vary locally in intensity and these patches have different length scales. Intensity and length scales can vary for the two horizontal dimensions but follow an idealized chessboard pattern. Our main focus lies on surface heterogeneity of the kilometer scale, and one order of magnitude smaller. For these two length scales, we investigate the average response of the fluxes at a number of virtual towers, when varying the heterogeneity length within the length scale and when varying the contrast between the different patches. For each simulation, virtual measurement towers were positioned at functionally different positions (e.g., downdraft region, updraft region, at border between domains, etc.). As the storage term is always small, the non-closure is given by the sum of the advection by the mean flow and the flux-divergence. Remarkably, the missing flux can be described by either the advection by the mean flow or the flux-divergence separately, because the latter two have a high correlation with each other. 
For kilometer-scale heterogeneity, we notice a clear dependence of the updrafts and downdrafts on the surface heterogeneity, and likewise we see a dependence of the energy partitioning on the tower location. For the hectometer scale, we do not notice such a clear dependence. Finally, we seek correlators for the energy balance ratio in the simulations. The correlation with the friction velocity is less pronounced than previously found, but this is likely due to our focus on strongly to freely convective conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schramm, D.N.
1992-03-01
The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the Ω = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between "cold" and "hot" non-baryonic candidates is shown to depend on the assumed "seeds" that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.
NASA Astrophysics Data System (ADS)
Schramm, David N.
1992-07-01
NASA Astrophysics Data System (ADS)
Schramm, D. N.
1992-03-01
A web portal for hydrodynamical, cosmological simulations
NASA Astrophysics Data System (ADS)
Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.
2017-07-01
This article describes a data centre hosting a web portal for accessing and sharing the output of large, cosmological, hydro-dynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing elaborate compound queries, or graphically by plotting arbitrary combinations of properties. The user can then run analysis tools on a chosen object; these services operate directly on the raw simulation data. The job control layer is responsible for handling and performing the analysis jobs, which are executed on a computing cluster. The innermost layer is an HPC storage system which hosts the large, raw simulation data. The following services are available for the users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original, hydro-dynamical quantities; (III) SMAC creates idealized 2D maps of various, physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.
Discovery of a large-scale clumpy structure around the Lynx supercluster at z~ 1.27
NASA Astrophysics Data System (ADS)
Nakata, Fumiaki; Kodama, Tadayuki; Shimasaku, Kazuhiro; Doi, Mamoru; Furusawa, Hisanori; Hamabe, Masaru; Kimura, Masahiko; Komiyama, Yutaka; Miyazaki, Satoshi; Okamura, Sadanori; Ouchi, Masami; Sekiguchi, Maki; Ueda, Yoshihiro; Yagi, Masafumi; Yasuda, Naoki
2005-03-01
We report the discovery of a probable large-scale structure composed of many galaxy clumps around the known twin clusters at z = 1.26 and 1.27 in the Lynx region. Our analysis is based on deep, panoramic, and multicolour imaging, 26.4 × 24.1 arcmin² in VRi'z' bands with the Suprime-Cam on the 8.2-m Subaru telescope. This unique, deep and wide-field imaging data set allows us for the first time to map out the galaxy distribution in the highest-redshift supercluster known. We apply a photometric redshift technique to extract plausible cluster members at z ~ 1.27 down to i' = 26.15 (5σ), corresponding to ~M* + 2.5 at this redshift. From the two-dimensional distribution of these photometrically selected galaxies, we newly identify seven candidates of galaxy groups or clusters where the surface density of red galaxies is significantly high (>5σ), in addition to the two known clusters. These candidates show clear red colour-magnitude sequences consistent with a passive evolution model, which suggests the existence of additional high-density regions around the Lynx superclusters.
Evaluative Appraisals of Environmental Mystery and Surprise
ERIC Educational Resources Information Center
Nasar, Jack L.; Cubukcu, Ebru
2011-01-01
This study used a desktop virtual environment (VE) of 15 large-scale residential streets to test the effects of environmental mystery and surprise on response. In theory, mystery and surprise should increase interest and visual appeal. For each VE, participants walked through an approach street and turned right onto a post-turn street. We designed…
Ask Here PA: Large-Scale Synchronous Virtual Reference for Pennsylvania
ERIC Educational Resources Information Center
Mariner, Vince
2008-01-01
Ask Here PA is Pennsylvania's new statewide live chat reference and information service. This article discusses the key strategies utilized by Ask Here PA administrators to recruit participating libraries to contribute staff time to the service, the importance of centralized staff training, the main aspects of staff training, and activating the…
Educational Games and Virtual Reality as Disruptive Technologies
ERIC Educational Resources Information Center
Psotka, Joseph
2013-01-01
New technologies often have the potential to disrupt existing established practices, but nowhere is this so pertinent as in education and training today. And yet, education has been glacially slow to adopt these changes in a large-scale way, and innovations seem to be driven mainly by students and their changing social lifestyles rather than…
ERIC Educational Resources Information Center
Wallace, Mike
This paper explores how characteristics of complex educational change may virtually dictate the leadership strategies adopted by those charged with bringing about change. The change in question here is the large-scale reorganization of local education authorities (LEAs) across England. The article focuses on how across-the-board initiatives to…
One Spatial Map or Many? Spatial Coding of Connected Environments
ERIC Educational Resources Information Center
Han, Xue; Becker, Suzanna
2014-01-01
We investigated how humans encode large-scale spatial environments using a virtual taxi game. We hypothesized that if 2 connected neighborhoods are explored jointly, people will form a single integrated spatial representation of the town. However, if the neighborhoods are first learned separately and later observed to be connected, people will…
NASA Astrophysics Data System (ADS)
Gunawardhana, M. L. P.; Norberg, P.; Zehavi, I.; Farrow, D. J.; Loveday, J.; Hopkins, A. M.; Davies, L. J. M.; Wang, L.; Alpaslan, M.; Bland-Hawthorn, J.; Brough, S.; Holwerda, B. W.; Owers, M. S.; Wright, A. H.
2018-06-01
Statistical studies of galaxy-galaxy interactions often utilise the net change in physical properties of progenitors as a function of the separation between their nuclei to trace both the strength and the observable timescale of their interaction. In this study, we use two-point auto, cross and mark correlation functions to investigate the extent to which small-scale clustering properties of star forming galaxies can be used to gain physical insight into galaxy-galaxy interactions between galaxies of similar optical brightness and stellar mass. The Hα star formers, drawn from the highly spatially complete Galaxy And Mass Assembly (GAMA) survey, show an increase in clustering at small separations. Moreover, the clustering strength shows a strong dependence on optical brightness and stellar mass, where (1) the clustering amplitude of optically brighter galaxies at a given separation is larger than that of optically fainter systems, and (2) the small-scale clustering properties of star forming galaxies (e.g. the strength, and the scale at which the signal plateaus relative to the fiducial power law) appear to differ as a function of increasing optical brightness. According to the cross and mark correlation analyses, the former result is largely driven by the increased dust content in optically bright star forming galaxies. The latter could be interpreted as evidence of a correlation between interaction scale and the optical brightness of galaxies, where physical evidence of interactions between optically bright star formers, likely hosted within relatively massive halos, persists over larger separations than those between optically faint star formers.
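The clustering statistics above rest on pair counting. A minimal sketch of the natural (Peebles-Hauser) estimator for the two-point correlation function, using a 2-D toy geometry and illustrative data rather than the GAMA survey machinery:

```python
import math
import random

def pair_counts(points, bins):
    """Histogram of pairwise separations into radial bins."""
    counts = [0] * (len(bins) - 1)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            for k in range(len(bins) - 1):
                if bins[k] <= d < bins[k + 1]:
                    counts[k] += 1
                    break
    return counts

def xi_natural(data, randoms, bins):
    """Natural (Peebles-Hauser) estimator: xi = norm * DD / RR - 1,
    where norm corrects for differing numbers of data and random points."""
    dd = pair_counts(data, bins)
    rr = pair_counts(randoms, bins)
    nd, nr = len(data), len(randoms)
    norm = (nr * (nr - 1)) / (nd * (nd - 1))
    return [norm * d / r - 1 if r > 0 else float("nan")
            for d, r in zip(dd, rr)]
```

For a sample in which every galaxy has a close companion, xi comes out strongly positive in the smallest separation bin, the signature of small-scale clustering the abstract describes; for a purely random sample it fluctuates around zero.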
Estimating planktonic diversity through spatial dominance patterns in a model ocean.
Soccodato, Alice; d'Ovidio, Francesco; Lévy, Marina; Jahn, Oliver; Follows, Michael J; De Monte, Silvia
2016-10-01
In the open ocean, the observation and quantification of biodiversity patterns is challenging. Marine ecosystems are indeed largely composed of microbial planktonic communities whose niches are affected by highly dynamical physico-chemical conditions, and whose observation requires advanced methods for morphological and molecular classification. Optical remote sensing offers an appealing complement to these in-situ techniques. Global-scale coverage at high spatiotemporal resolution is however achieved at the cost of limited information about the local assemblage. Here, we use a coupled physical and ecological model ocean simulation to explore one possible metric for comparing measurements performed on such different scales. We show that a large part of the local diversity of the virtual plankton ecosystem - corresponding to what is accessible by genomic methods - can be inferred from crude, but spatially extended, information - as conveyed by remote sensing. Shannon diversity of the local community is indeed highly correlated with a 'seascape' index, which quantifies the surrounding spatial heterogeneity of the most abundant functional group. The error implied in drastically reducing the resolution of the plankton community is shown to be smaller in frontal regions as well as in regions of intermediate turbulent energy. On spatial scales of hundreds of km, patterns of virtual plankton diversity are thus largely sustained by mixing of communities that occupy adjacent niches. We provide a proof of principle that in the open ocean information on the spatial variability of communities can compensate for limited local knowledge, suggesting the possibility of integrating in-situ and satellite observations to monitor biodiversity distribution at the global scale.
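The two quantities being correlated can be sketched simply. The Shannon index below is standard; the heterogeneity proxy (fraction of neighbouring grid cells whose dominant functional group differs from the centre cell's) is a hypothetical stand-in for the paper's 'seascape' index, whose exact definition is not given in this abstract:

```python
import math

def shannon_diversity(abundances):
    """Shannon index H = -sum(p_i * ln p_i) over non-zero abundances."""
    total = sum(abundances)
    return -sum((a / total) * math.log(a / total)
                for a in abundances if a > 0)

def seascape_heterogeneity(grid, i, j):
    """Hypothetical heterogeneity proxy: fraction of the 4-connected
    neighbours of cell (i, j) whose dominant functional group label
    differs from the dominant group at (i, j)."""
    rows, cols = len(grid), len(grid[0])
    neighbours = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    valid = [(r, c) for r, c in neighbours if 0 <= r < rows and 0 <= c < cols]
    return sum(grid[r][c] != grid[i][j] for r, c in valid) / len(valid)
```

H is maximal (ln of the number of groups) for an even community and zero when a single group dominates completely, which is why a crude map of dominant groups can still carry diversity information.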
Galaxy clusters in the cosmic web
NASA Astrophysics Data System (ADS)
Acebrón, A.; Durret, F.; Martinet, N.; Adami, C.; Guennou, L.
2014-12-01
Simulations of large scale structure formation in the universe predict that matter is essentially distributed along filaments at the intersection of which lie galaxy clusters. We have analysed 9 clusters in the redshift range 0.4
NASA Astrophysics Data System (ADS)
Guo, Yang; Sivalingam, Kantharuban; Valeev, Edward F.; Neese, Frank
2016-03-01
Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. 
A detailed comparison between the partial and strong contraction schemes is made, with conclusions that discourage the strong contraction scheme as a basis for local correlation methods due to its non-invariance with respect to rotations in the inactive and external subspaces. A minimal set of conservatively chosen truncation thresholds controls the accuracy of the method. With the default thresholds, about 99.9% of the canonical partially contracted NEVPT2 correlation energy is recovered while the crossover of the computational cost with the already very efficient canonical method occurs reasonably early; in linear chain type compounds at a chain length of around 80 atoms. Calculations are reported for systems with more than 300 atoms and 5400 basis functions.
Vajda, Szilárd; Rangoni, Yves; Cecotti, Hubert
2015-01-01
For training supervised classifiers to recognize different patterns, large data collections with accurate labels are necessary. In this paper, we propose a generic, semi-automatic labeling technique for large handwritten character collections. In order to speed up the creation of a large scale ground truth, the method combines unsupervised clustering and minimal expert knowledge. To exploit the potential discriminant complementarities across features, each character is projected into five different feature spaces. After clustering the images in each feature space, the human expert labels the cluster centers. Each data point inherits the label of its cluster's center. A majority (or unanimity) vote decides the label of each character image. The amount of human involvement (labeling) is strictly controlled by the number of clusters produced by the chosen clustering approach. To test the efficiency of the proposed approach, we have compared and evaluated three state-of-the-art clustering methods (k-means, self-organizing maps, and growing neural gas) on the MNIST digit data set and a Lampung Indonesian character data set. Considering a k-nn classifier, we show that manually labeling only 1.3% (MNIST) and 3.2% (Lampung) of the training data provides the same range of performance as a completely labeled data set would. PMID:25870463
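The label-propagation step above (each sample inherits its cluster centre's expert label in every feature space, then a majority vote across spaces decides) can be sketched as follows. The 1-D nearest-centre assignment is a toy stand-in for the trained k-means/SOM/GNG clusterings, and all names are illustrative:

```python
def nearest_centre(x, centres):
    """Index of the closest cluster centre (1-D toy features)."""
    return min(range(len(centres)), key=lambda k: abs(x - centres[k]))

def majority_vote_labels(views, centres_per_view, labels_per_view):
    """views[v][i] is sample i's feature in space v.  Each sample inherits
    the expert label of its nearest centre in every feature space, and a
    majority vote across spaces decides its final label."""
    n = len(views[0])
    votes = [[] for _ in range(n)]
    for feats, centres, centre_labels in zip(views, centres_per_view,
                                             labels_per_view):
        for i, x in enumerate(feats):
            votes[i].append(centre_labels[nearest_centre(x, centres)])
    # most frequent label wins; the expert labelled only the centres
    return [max(set(v), key=v.count) for v in votes]
```

With three feature spaces and an odd number of views, a sample that is mis-clustered in one space is still rescued by the other two, which is the complementarity the paper exploits.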
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzi, Silvio; Hereld, Mark; Insley, Joseph
In this work we perform in-situ visualization of molecular dynamics simulations, which can help scientists to visualize simulation output on-the-fly, without incurring storage overheads. We present a case study to couple LAMMPS, the large-scale molecular dynamics simulation code, with vl3, our parallel framework for large-scale visualization and analysis. Our motivation is to identify effective approaches for covisualization and exploration of large-scale atomistic simulations at interactive frame rates. We propose a system of coupled libraries and describe its architecture, with an implementation that runs on GPU-based clusters. We present the results of strong and weak scalability experiments, as well as future research avenues based on our results.
On the large scale structure of X-ray background sources
NASA Technical Reports Server (NTRS)
Bi, H. G.; Meszaros, A.; Meszaros, P.
1991-01-01
The large scale clustering of the sources responsible for the X-ray background is discussed, under the assumption of a discrete origin. The formalism necessary for calculating the X-ray spatial fluctuations in the most general case, where the source density contrast in structures varies with redshift, is developed. A comparison of this with observational limits is useful for obtaining information concerning various galaxy formation scenarios. The calculations presented show that a varying density contrast has a small impact on the expected X-ray fluctuations. This strengthens and extends previous conclusions concerning the size and comoving density of large scale structures at redshifts between 0.5 and 4.0.
The large-scale distribution of galaxies
NASA Technical Reports Server (NTRS)
Geller, Margaret J.
1989-01-01
The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.
Spatial Distribution of Large Cloud Drops
NASA Technical Reports Server (NTRS)
Marshak, A.; Knyazikhin, Y.; Larsen, M.; Wiscombe, W.
2004-01-01
By analyzing aircraft measurements of individual drop sizes in clouds, we have shown in a companion paper (Knyazikhin et al., 2004) that the probability of finding a drop of radius r at a linear scale l decreases as l^D(r), where 0 ≤ D(r) ≤ 1. This paper shows striking examples of the spatial distribution of large cloud drops using models that simulate the observed power laws. In contrast to currently used models that assume homogeneity and therefore a Poisson distribution of cloud drops, these models show strong drop clustering, the more so the larger the drops. The degree of clustering is determined by the observed exponents D(r). The strong clustering of large drops arises naturally from the observed power-law statistics. This clustering has vital consequences for rain physics, explaining how rain can form so fast. It also helps explain why remotely sensed cloud drop size is generally biased and why clouds absorb more sunlight than conventional radiative transfer models predict.
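A closely related scaling statistic, easier to sketch than the paper's D(r), is the box-counting dimension: count how many boxes of size l contain at least one drop and fit the log-log slope. The example below uses a triadic Cantor set, a strongly clustered point set with dimension ln 2 / ln 3 ≈ 0.63 (versus 1 for a space-filling set); it is illustrative only, since the paper's exponent is defined differently in detail:

```python
import math

def box_counting_dimension(points, span, box_sizes):
    """Slope of log N(l) versus log(span / l), where N(l) is the number
    of boxes of size l containing at least one point (integer coords,
    so box assignment is exact)."""
    xs = [math.log(span / b) for b in box_sizes]
    ys = [math.log(len({p // b for p in points})) for b in box_sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def cantor_points(levels):
    """Triadic Cantor construction as integers in [0, 3**levels):
    each level keeps the first and last third of every interval,
    producing the strong clustering a Poisson model lacks."""
    pts = [0]
    for _ in range(levels):
        pts = [3 * p for p in pts] + [3 * p + 2 for p in pts]
    return pts
```

At box sizes 3^5, 3^4, 3^3 over a span of 3^6, the occupied-box counts are exactly 2, 4, 8, so the fitted slope recovers ln 2 / ln 3.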
Paini, Dean R.; Bianchi, Felix J. J. A.; Northfield, Tobin D.; De Barro, Paul J.
2011-01-01
Predicting future species invasions presents significant challenges to researchers and government agencies. Simply considering the vast number of potential species that could invade an area can be insurmountable. One recently suggested method that can analyse large datasets of invasive species simultaneously is the self-organising map (SOM), a form of artificial neural network which can rank species by establishment likelihood. We used this method to analyse the worldwide distribution of 486 fungal pathogens and then validated the method by creating a virtual world of invasive species in which to test the SOM. This novel validation method allowed us to test the SOM's ability to rank those species that can establish above those that can't. Overall, we found the SOM highly effective, having on average a 96–98% success rate (depending on the virtual world parameters). We also found that regions with fewer species present (i.e. 1–10 species) were more difficult for the SOM to generate an accurately ranked list for, with success rates varying from 100% correct down to 0% correct. However, we were able to combine the numbers of species present in a region with clustering patterns in the SOM to further refine confidence in lists generated from these sparsely populated regions. We then used the results from the virtual world to determine confidences for lists generated from the fungal pathogen dataset. Specifically, for lists generated for Australia and its states and territories, the reliability scores were between 84% and 98%. We conclude that a SOM analysis is a reliable method for analysing a large dataset of potential invasive species and could be used by biosecurity agencies around the world, resulting in a better overall assessment of invasion risk. PMID:22016773
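A self-organising map in miniature: the sketch below shows only the core training rule (the best-matching unit and its immediate neighbours are pulled towards each input), not the full species-ranking pipeline of the paper, and all parameter choices are illustrative:

```python
import math
import random

def best_matching_unit(weights, x):
    """Index of the unit whose weight vector is closest to x."""
    return min(range(len(weights)), key=lambda j: math.dist(weights[j], x))

def train_som(data, n_units, epochs=100, lr=0.5, seed=0):
    """Minimal 1-D self-organising map.  For each input, the BMU and its
    immediate neighbours move towards the input; the learning rate decays
    to zero over the epochs so the map settles down."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        alpha = lr * (1 - epoch / epochs)       # decaying learning rate
        for x in data:
            bmu = best_matching_unit(weights, x)
            for j in range(max(0, bmu - 1), min(n_units, bmu + 2)):
                h = 1.0 if j == bmu else 0.5    # neighbourhood strength
                weights[j] = [w + alpha * h * (xi - w)
                              for w, xi in zip(weights[j], x)]
    return weights
```

In the paper's setting the inputs would be regional species presence/absence profiles, and the trained weight components serve as establishment-likelihood scores; here the update rule is the whole point of the sketch.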
Automatic building identification under bomb damage conditions
NASA Astrophysics Data System (ADS)
Woodley, Robert; Noll, Warren; Barker, Joseph; Wunsch, Donald C., II
2009-05-01
Given the vast amount of image intelligence utilized in support of planning and executing military operations, a passive automated image processing capability for target identification is urgently required. Furthermore, transmitting large image streams from remote locations would quickly use the available bandwidth (BW), precipitating the need for processing to occur at the sensor location. This paper addresses the problem of automatic target recognition for battle damage assessment (BDA). We utilize an Adaptive Resonance Theory approach to cluster templates of target buildings. The results show that the network successfully classifies targets from non-targets in a virtual test bed environment.
On hierarchical solutions to the BBGKY hierarchy
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1988-01-01
It is thought that the gravitational clustering of galaxies in the universe may approach a scale-invariant, hierarchical form in the small separation, large-clustering regime. Past attempts to solve the Born-Bogoliubov-Green-Kirkwood-Yvon (BBGKY) hierarchy in this regime have assumed a certain separable hierarchical form for the higher order correlation functions of galaxies in phase space. It is shown here that such separable solutions to the BBGKY equations must satisfy the condition that the clustered component of the solution has cluster-cluster correlations equal to galaxy-galaxy correlations to all orders. The solutions also admit the presence of an arbitrary unclustered component, which plays no dynamical role in the large-clustering regime. These results are a particular property of the specific separable model assumed for the correlation functions in phase space, not an intrinsic property of spatially hierarchical solutions to the BBGKY hierarchy. The observed distribution of galaxies does not satisfy the required conditions. The disagreement between theory and observation may be traced, at least in part, to initial conditions which, if Gaussian, already have cluster correlations greater than galaxy correlations.