2015-08-01
Shifted Periodic Boundary Conditions in the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) Software. N Scott Weingarten, Weapons and Materials Research Directorate, ARL; James P Larentzos, Engility.
Collisions in Compact Star Clusters.
NASA Astrophysics Data System (ADS)
Portegies Zwart, S. F.
The high stellar densities in young compact star clusters, such as the star cluster R136 in the 30 Doradus region, may lead to a large number of stellar collisions. Such collisions were recently found to be much more frequent than previously estimated. The number of collisions scales with the number of stars for clusters with the same initial relaxation time. These collisions take place within a few million years. The collision products may eventually collapse into massive black holes. The fraction of the total mass in the star cluster which ends up in a single massive object scales with the total mass of the cluster and its relaxation time. This mass fraction is rather constant, within a factor of two or so. Wild extrapolation from the relatively small masses of the studied systems to the cores of galactic nuclei may indicate that the massive black holes in these systems have formed in a similar way.
Uplink Downlink Rate Balancing and Throughput Scaling in FDD Massive MIMO Systems
NASA Astrophysics Data System (ADS)
Bergel, Itsik; Perets, Yona; Shamai, Shlomo
2016-05-01
In this work we extend the concept of uplink-downlink rate balancing to frequency division duplex (FDD) massive MIMO systems. We consider a base station with a large number of antennas serving many single-antenna users. We first show that any unused capacity in the uplink can be traded off for higher throughput in the downlink in a system that uses either dirty paper (DP) coding or linear zero-forcing (ZF) precoding. We then also study the scaling of the system throughput with the number of antennas in the cases of linear beamforming (BF) precoding, ZF precoding, and DP coding. We show that the downlink throughput is proportional to the logarithm of the number of antennas. While this logarithmic scaling is lower than the linear scaling of the rate in the uplink, it can still bring significant throughput gains. For example, we demonstrate through analysis and simulation that increasing the number of antennas from 4 to 128 will increase the throughput by more than a factor of 5. We also show that a logarithmic scaling of downlink throughput as a function of the number of receive antennas can be achieved even when the number of transmit antennas only increases logarithmically with the number of receive antennas.
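To make the zero-forcing setup above concrete, here is a minimal numerical sketch of linear ZF precoding for a many-antenna downlink. The i.i.d. Rayleigh channel, dimensions, and power normalization are illustrative assumptions, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 64, 8  # base-station antennas, single-antenna users (illustrative sizes)

# i.i.d. Rayleigh downlink channel: rows are users, columns are BS antennas
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder: right pseudo-inverse of H, so H @ W is diagonal
# (no inter-user interference by construction)
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W)  # normalize total transmit power to 1

P, noise = 1.0, 0.1
eff_gain = np.abs(np.diag(H @ W)) ** 2
rates = np.log2(1 + P * eff_gain / noise)  # per-user rates in bits/s/Hz
print(f"ZF sum rate with M={M} antennas: {rates.sum():.2f} bits/s/Hz")
```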
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Arka; Dalal, Neal, E-mail: abanerj6@illinois.edu, E-mail: dalaln@illinois.edu
We present a new method for simulating cosmologies that contain massive particles with thermal free streaming motion, such as massive neutrinos or warm/hot dark matter. This method combines particle and fluid descriptions of the thermal species to eliminate the shot noise known to plague conventional N-body simulations. We describe this method in detail, along with results for a number of test cases to validate our method, and check its range of applicability. Using this method, we demonstrate that massive neutrinos can produce a significant scale-dependence in the large-scale biasing of deep voids in the matter field. We show that this scale-dependence may be quantitatively understood using an extremely simple spherical expansion model which reproduces the behavior of the void bias for different neutrino parameters.
Wen, X.; Datta, A.; Traverso, L. M.; Pan, L.; Xu, X.; Moon, E. E.
2015-01-01
Optical lithography, the enabling process for defining features, has been widely used in the semiconductor industry and many other nanotechnology applications. Advances in nanotechnology require the development of high-throughput optical lithography capabilities to overcome the optical diffraction limit and meet ever-decreasing device dimensions. We report our recent experimental advancements to scale up diffraction-unlimited optical lithography at a massive scale using the near-field nanolithography capabilities of bowtie apertures. A record number of near-field optical elements, an array of 1,024 bowtie antenna apertures, are simultaneously employed to generate a large number of patterns by carefully controlling their working distances over the entire array using an optical gap metrology system. Our experimental results reiterate the ability of massively parallel near-field devices to achieve high-throughput optical nanolithography, which can be promising for many important nanotechnology applications such as computation, data storage, communication, and energy. PMID:26525906
Parameters affecting the resilience of scale-free networks to random failures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Link, Hamilton E.; LaViolette, Randall A.; Lane, Terran
2005-09-01
It is commonly believed that scale-free networks are robust to massive numbers of random node deletions. For example, Cohen et al. in (1) study scale-free networks including some which approximate the measured degree distribution of the Internet. Their results suggest that if each node in this network failed independently with probability 0.99, most of the remaining nodes would still be connected in a giant component. In this paper, we show that a large and important subclass of scale-free networks are not robust to massive numbers of random node deletions. In particular, we study scale-free networks which have minimum node degree of 1 and a power-law degree distribution beginning with nodes of degree 1 (power-law networks). We show that, in a power-law network approximating the Internet's reported distribution, when the probability of deletion of each node is 0.5 only about 25% of the surviving nodes in the network remain connected in a giant component, and the giant component does not persist beyond a critical failure rate of 0.9. The new result is partially due to improved analytical accommodation of the large number of degree-0 nodes that result after node deletions. Our results apply to power-law networks with a wide range of power-law exponents, including Internet-like networks. We give both analytical and empirical evidence that such networks are not generally robust to massive random node deletions.
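The experiment the authors describe can be sketched in a few lines: build a power-law network with minimum degree 1, delete each node independently with probability p, and measure the giant component among the survivors. The network size, exponent, and degree cutoff below are arbitrary illustrative choices, so the percentages will only roughly track the paper's figures.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n, gamma = 20000, 2.3  # network size and power-law exponent (illustrative)

# Degree sequence with P(k) ~ k^-gamma starting at k = 1, as in "power-law networks"
ks = np.arange(1, 200)
pk = ks.astype(float) ** -gamma
pk /= pk.sum()
seq = rng.choice(ks, size=n, p=pk)
if seq.sum() % 2:
    seq[0] += 1  # configuration model needs an even degree sum

G = nx.Graph(nx.configuration_model(seq.tolist(), seed=2))  # collapse parallel edges
G.remove_edges_from(nx.selfloop_edges(G))

for p_fail in (0.5, 0.9):
    survivors = [v for v in G if rng.random() > p_fail]
    sub = G.subgraph(survivors)
    giant = max(nx.connected_components(sub), key=len)
    print(f"p = {p_fail}: giant component holds {len(giant) / len(sub):.0%} of survivors")
```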
Cosmic string loops as the seeds of super-massive black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bramberger, Sebastian F.; Brandenberger, Robert H.; Jreidini, Paul
2015-06-01
Recent discoveries of super-massive black holes at high redshifts indicate a possible tension with the standard ΛCDM paradigm of early universe cosmology which has difficulties in explaining the origin of the required nonlinear compact seeds which trigger the formation of these super-massive black holes. Here we show that cosmic string loops which result from a scaling solution of strings formed during a phase transition in the very early universe lead to an additional source of compact seeds. The number density of string-induced seeds dominates at high redshifts and can help trigger the formation of the observed super-massive black holes.
Super Massive Black Hole in Galactic Nuclei with Tidal Disruption of Stars
NASA Astrophysics Data System (ADS)
Zhong, Shiyan; Berczik, Peter; Spurzem, Rainer
2014-09-01
Tidal disruption of stars by super massive central black holes from dense star clusters is modeled by high-accuracy direct N-body simulation. The time evolution of the stellar tidal disruption rate, the effect of tidal disruption on the stellar density profile, and, for the first time, the detailed origin of tidally disrupted stars are carefully examined and compared with classic papers in the field. Up to 128k particles are used in the simulations to model the star cluster around a super massive black hole, and we use the particle number and the tidal radius of the black hole as free parameters for a scaling analysis. The transition from the full to the empty loss-cone regime is analyzed in our data, and the tidal disruption rate scales with the particle number, N, in the expected way for both cases. For the first time in numerical simulations (under certain conditions) we can support the concept of a critical radius of Frank & Rees, which claims that most stars are tidally accreted on highly eccentric orbits originating from regions far outside the tidal radius. Due to the consumption of stars moving on radial orbits, a velocity anisotropy is found inside the cluster. Finally, we make estimates for the real Galactic Center based on our simulation results and the scaling analysis.
Gravity or turbulence? - III. Evidence of pure thermal Jeans fragmentation at ˜0.1 pc scale
NASA Astrophysics Data System (ADS)
Palau, Aina; Ballesteros-Paredes, Javier; Vázquez-Semadeni, Enrique; Sánchez-Monge, Álvaro; Estalella, Robert; Fall, S. Michael; Zapata, Luis A.; Camacho, Vianey; Gómez, Laura; Naranjo-Romero, Raúl; Busquet, Gemma; Fontani, Francesco
2015-11-01
We combine previously published interferometric and single-dish data of relatively nearby massive dense cores that are actively forming stars to test whether their `fragmentation level' is controlled by turbulent or thermal support. We find no clear correlation between the fragmentation level and velocity dispersion, nor between the observed number of fragments and the number of fragments expected when the gravitationally unstable mass is calculated including various prescriptions for `turbulent support'. On the other hand, the best correlation is found for the case of pure thermal Jeans fragmentation, for which we infer a core formation efficiency around 13 per cent, consistent with previous works. We conclude that the dominant factor determining the fragmentation level of star-forming massive dense cores at 0.1 pc scale seems to be thermal Jeans fragmentation.
Large-Angular-Scale Clustering as a Clue to the Source of UHECRs
NASA Astrophysics Data System (ADS)
Berlind, Andreas A.; Farrar, Glennys R.
We explore what can be learned about the sources of UHECRs from their large-angular-scale clustering (referred to as their "bias" by the cosmology community). Exploiting the clustering on large scales has the advantage over small-scale correlations of being insensitive to uncertainties in source direction from magnetic smearing or measurement error. In a Cold Dark Matter cosmology, the amplitude of large-scale clustering depends on the mass of the system, with more massive systems such as galaxy clusters clustering more strongly than less massive systems such as ordinary galaxies or AGN. Therefore, studying the large-scale clustering of UHECRs can help determine a mass scale for their sources, given the assumption that their redshift depth is as expected from the GZK cutoff. We investigate the constraining power of a given UHECR sample as a function of its cutoff energy and number of events. We show that current and future samples should be able to distinguish between the cases of their sources being galaxy clusters, ordinary galaxies, or sources that are uncorrelated with the large-scale structure of the universe.
Two new confirmed massive relic galaxies: red nuggets in the present-day Universe
NASA Astrophysics Data System (ADS)
Ferré-Mateu, Anna; Trujillo, Ignacio; Martín-Navarro, Ignacio; Vazdekis, Alexandre; Mezcua, Mar; Balcells, Marc; Domínguez, Lilian
2017-05-01
We confirm two new local massive relic galaxies, i.e. untouched survivors of the early Universe massive population: Mrk 1216 and PGC 032873. Both show early and peaked formation events within very short time-scales (<1 Gyr) and thus old mean mass-weighted ages (˜13 Gyr). Their star formation histories remain virtually unchanged out to several effective radii, even when considering the steeper initial-mass-function values inferred out to ˜3 effective radii. Their morphologies, kinematics and density profiles are like those found in the z > 2 massive population, setting them apart from the typical z ˜ 0 massive early-type galaxies. We find that there seems to exist a degree of relic that is related to how far along the path towards becoming one of these typical z ˜ 0 massive galaxies the compact relic has moved. This path is partly dictated by the environment the galaxy lives in. For galaxies in rich environments, such as the previously reported relic galaxy NGC 1277, the most extreme properties (e.g. sizes, short formation time-scales, larger supermassive black holes) are expected, while lower density environments will host galaxies with delayed and/or extended star formation, slightly larger sizes and less extreme black hole masses. The confirmation of three relic galaxies out to a distance of 106 Mpc implies a lower limit on the number density of these red nuggets in the local Universe of 6 × 10⁻⁷ Mpc⁻³, which is within the theoretical expectations.
Biomimetic Models for An Ecological Approach to Massively-Deployed Sensor Networks
NASA Technical Reports Server (NTRS)
Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng
2005-01-01
Promises of ubiquitous control of the physical environment by massively-deployed wireless sensor networks open avenues for new applications that will redefine the way we live and work. Due to the small size and low cost of sensor devices, visionaries promise systems enabled by the deployment of massive numbers of sensors ubiquitous throughout our environment, working in concert. Recent research has concentrated on developing techniques for performing relatively simple tasks with minimal energy expense, assuming some form of centralized control. Unfortunately, centralized control is not conducive to parallel activities and does not scale to networks of massive size. Execution of simple tasks in sparse networks will not lead to the sophisticated applications predicted. We propose a new way of looking at massively-deployed sensor networks, motivated by lessons learned from the way biological ecosystems are organized. We demonstrate that in such a model, fully distributed data aggregation can be performed in a scalable fashion in massively deployed sensor networks, where motes operate on local information, making local decisions that are aggregated across the network to achieve globally-meaningful effects. We show that such architectures may be used to facilitate communication and synchronization in a fault-tolerant manner, while balancing workload and required energy expenditure throughout the network.
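As one concrete illustration of the fully distributed, local-information aggregation argued for above, here is a minimal gossip-averaging sketch: each mote repeatedly averages its value with a random neighbor and converges to the global mean with no central controller. The ring topology, update rule, and round count are our assumptions, not the paper's biomimetic model.

```python
import random

random.seed(3)
n = 100
readings = [random.gauss(20.0, 5.0) for _ in range(n)]         # e.g. temperatures
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring topology (assumed)

values = readings[:]
for _ in range(200_000):  # asynchronous pairwise gossip rounds
    i = random.randrange(n)
    j = random.choice(neighbors[i])
    values[i] = values[j] = (values[i] + values[j]) / 2  # sum-preserving local update

true_mean = sum(readings) / n
print(f"true mean {true_mean:.3f}, gossip estimate at mote 0: {values[0]:.3f}")
```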
Large-eddy simulations of compressible convection on massively parallel computers. [stellar physics
NASA Technical Reports Server (NTRS)
Xie, Xin; Toomre, Juri
1993-01-01
We report preliminary implementation of the large-eddy simulation (LES) technique in 2D simulations of compressible convection carried out on the CM-2 massively parallel computer. The convective flow fields in our simulations possess structures similar to those found in a number of direct simulations, with roll-like flows coherent across the entire depth of the layer that spans several density scale heights. Our detailed assessment of the effects of various subgrid scale (SGS) terms reveals that they may affect the gross character of convection. Yet, somewhat surprisingly, we find that our LES solutions, and another in which the SGS terms are turned off, only show modest differences. The resulting 2D flows realized here are rather laminar in character, and achieving substantial turbulence may require stronger forcing and less dissipation.
Fingerprints of heavy scales in electroweak effective Lagrangians
NASA Astrophysics Data System (ADS)
Pich, Antonio; Rosell, Ignasi; Santos, Joaquín; Sanz-Cillero, Juan José
2017-04-01
The couplings of the electroweak effective theory contain information on the heavy-mass scales which are no longer present in the low-energy Lagrangian. We build a general effective Lagrangian, implementing the electroweak chiral symmetry breaking SU(2)_L ⊗ SU(2)_R → SU(2)_{L+R}, which couples the known particle fields to heavier states with bosonic quantum numbers J^P = 0^± and 1^±. We consider colour-singlet heavy fields that are in singlet or triplet representations of the electroweak group. Integrating out these heavy scales, we analyze the pattern of low-energy couplings among the light fields which are generated by the massive states. We adopt a generic non-linear realization of the electroweak symmetry breaking with a singlet Higgs, without making any assumption about its possible doublet structure. Special attention is given to the different possible descriptions of massive spin-1 fields and the differences arising from naive implementations of these formalisms, showing their full equivalence once a proper short-distance behaviour is required.
Extended DBI massive gravity with generalized fiducial metric
NASA Astrophysics Data System (ADS)
Chullaphan, Tossaporn; Tannukij, Lunchakorn; Wongjun, Pitayuth
2015-06-01
We consider an extended model of DBI massive gravity by generalizing the fiducial metric to be an induced metric on the brane corresponding to a domain wall moving in five-dimensional Schwarzschild-Anti-de Sitter spacetime. The model admits all FLRW solutions, including flat, closed and open geometries, while the original one does not. The background solutions can be divided into two branches, namely a self-accelerating branch and a normal branch. For the self-accelerating branch, the graviton mass plays the role of a cosmological constant to drive the late-time acceleration of the universe. It is found that the number of degrees of freedom in the gravitational sector is not correct, as in the original DBI massive gravity: there are only two propagating degrees of freedom, from the tensor modes. For the normal branch, we restrict our attention to a particular class of solutions which provides an accelerated expansion of the universe. It is found that the number of degrees of freedom in the model is correct. However, at least one of them is a ghost degree of freedom, which is always present at small scales, implying that the theory is not stable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, Edmond
Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations, and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the desired factorization.
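A minimal dense-array sketch of this idea, in the spirit of fine-grained parallel incomplete factorizations: every nonzero of L and U is an unknown in a bilinear equation (LU)[i,j] = A[i,j] on the sparsity pattern, and all entries can be updated concurrently in Jacobi-style sweeps. The initial guess, sweep count, and test matrix are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def ilu_fixed_point(A, sweeps=5):
    """Jacobi-style fixed-point iteration for an ILU(0)-type factorization.

    Each nonzero of L and U satisfies the bilinear equation (L @ U)[i, j] = A[i, j]
    on the sparsity pattern of A; every entry can be updated independently,
    which is what makes this formulation attractive for massive parallelism.
    """
    n = A.shape[0]
    pattern = list(zip(*np.nonzero(A)))
    U = np.triu(A).astype(float)
    L = np.eye(n) + np.tril(A, -1) / np.diag(A)  # crude initial guess
    for _ in range(sweeps):
        newL, newU = L.copy(), U.copy()
        for i, j in pattern:              # independent updates: parallel in spirit
            m = min(i, j)
            s = L[i, :m] @ U[:m, j]
            if i > j:
                newL[i, j] = (A[i, j] - s) / U[j, j]
            else:
                newU[i, j] = A[i, j] - s
        L, U = newL, newU
    return L, U

A = np.array([[4.0, 1, 0], [1, 4, 1], [0, 1, 4]])
L, U = ilu_fixed_point(A)
print(np.round(L @ U - A, 6))  # residual on the pattern shrinks with each sweep
```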
The massive fermion phase for the U(N) Chern-Simons gauge theory in D=3 at large N
Bardeen, William A.
2014-10-07
We explore the phase structure of fermions in the U(N) Chern-Simons gauge theory in three dimensions using the large N limit, where N is the number of colors and the fermions are taken to be in the fundamental representation of the U(N) gauge group. In the large N limit, the theory retains its classical conformal behavior and considerable attention has been paid to possible AdS/CFT dualities of the theory in the conformal phase. In this paper we present a solution for the massive phase of the fermion theory that is exact to the leading order of 't Hooft's large N expansion. We present evidence for the spontaneous breaking of the exact scale symmetry and analyze the properties of the dilaton that appears as the Goldstone boson of scale symmetry breaking.
NASA Astrophysics Data System (ADS)
Khanal, U.
2006-07-01
Maxwell and Dirac fields in Friedmann-Robertson-Walker (FRW) spacetime are investigated using the Newman-Penrose method. The variables are all separable, with the angular dependence given by spin-weighted spherical harmonics. All the radial parts reduce to the barrier penetration problem, with mostly repulsive potentials representing the centrifugal energies. Both helicity states of the photon field see the same potential, but those of the Dirac field see different ones; one component even sees an attractive potential in the open universe. The massless fields have the usual exponential time dependences; that of the massive Dirac field is coupled to the evolution of the cosmic scale factor a. The case of the radiation-filled flat universe is solved in terms of the Whittaker function. A formal series solution, valid in any FRW universe, is also presented. The energy density of the Maxwell field is explicitly shown to scale as a⁻⁴. The co-moving particle number density of the massless Dirac field is found to be conserved, but that of the massive one is not. Particles flow out of certain regions, and into others, creating regions that are depleted of certain linear and angular momenta states, and others with excess. Such a current of charged particles would constitute an electric current that could generate a cosmic magnetic field. In contrast, the energy density of these massive particles still scales as a⁻⁴.
2010-01-01
high-speed flows is problematic due to their low forcing frequency (for mechanical actuators) and low forcing amplitude (for piezo actuators...very low fraction of DC power is coupled to the actuators (5-10%), with the rest of the power dissipated in massive ballast resistors acting as heat...resistors. The use of high-power resistors also significantly increases the weight and size of the plasma generator and makes scaling to a large number of
NASA Astrophysics Data System (ADS)
Ota, Kazuaki; Venemans, Bram P.; Taniguchi, Yoshiaki; Kashikawa, Nobunari; Nakata, Fumiaki; Harikane, Yuichi; Bañados, Eduardo; Overzier, Roderik; Riechers, Dominik A.; Walter, Fabian; Toshikawa, Jun; Shibuya, Takatoshi; Jiang, Linhua
2018-04-01
Quasars (QSOs) hosting supermassive black holes are believed to reside in massive halos harboring galaxy overdensities. However, many observations revealed average or low galaxy densities around z ≳ 6 QSOs. This could be partly because they measured galaxy densities in only tens of arcmin² around QSOs and might have overlooked potential larger-scale galaxy overdensities. Some previous studies also observed only Lyman break galaxies (LBGs; massive older galaxies) and missed low-mass young galaxies, like Lyα emitters (LAEs), around QSOs. Here we present observations of LAE and LBG candidates in ∼700 arcmin² around a z = 6.61 luminous QSO using the Subaru Telescope Suprime-Cam with narrowband and broadband filters. We compare their sky distributions, number densities, and angular correlation functions with those of LAEs/LBGs detected in the same manner and with comparable data quality in our control blank field. In the QSO field, LAEs and LBGs are clustered on 4–20 comoving Mpc angular scales, but LAEs show mostly underdensity over the field, while LBGs form a 30 × 60 comoving Mpc² large-scale structure containing 3σ–7σ high-density clumps. The highest-density clump includes a bright (23.78 mag in the narrowband) extended (≳16 kpc) Lyα blob candidate, indicative of a dense environment. The QSO could be part of the structure but is not located exactly at any of the high-density peaks. Near the QSO, LAEs show underdensity while LBGs show on average 4σ excess densities compared to the control field. If these environments reflect halo mass, the QSO may not be in the most massive halo but still in a moderately massive one. Based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
NASA Astrophysics Data System (ADS)
Wright, Bill S.; Winther, Hans A.; Koyama, Kazuya
2017-10-01
The effect of massive neutrinos on the growth of cold dark matter perturbations acts as a scale-dependent Newton's constant and leads to scale-dependent growth factors, just as we often find in models of gravity beyond General Relativity. We show how to compute growth factors for ΛCDM and general modified gravity cosmologies combined with massive neutrinos in Lagrangian perturbation theory for use in COLA and extensions thereof. We implement this together with the grid-based massive neutrino method of Brandbyge and Hannestad in MG-PICOLA and compare COLA simulations to full N-body simulations of ΛCDM and f(R) gravity with massive neutrinos. Our implementation is computationally cheap if the underlying cosmology already has scale-dependent growth factors, and it is shown to be able to produce results that match N-body to percent-level accuracy for both the total and CDM matter power spectra up to k ≲ 1 h/Mpc.
Mineral deposit densities for estimating mineral resources
Singer, Donald A.
2008-01-01
Estimates of numbers of mineral deposits are fundamental to assessing undiscovered mineral resources. Just as frequencies of grades and tonnages of well-explored deposits can be used to represent the grades and tonnages of undiscovered deposits, the density of deposits (deposits/area) in well-explored control areas can serve to represent the number of deposits. Empirical evidence presented here indicates that the processes affecting the number and quantity of resources in geological settings are very general across many types of mineral deposits. For podiform chromite, porphyry copper, and volcanogenic massive sulfide deposit types, the size of tract that geologically could contain the deposits is an excellent predictor of the total number of deposits. The number of mineral deposits is also proportional to the deposit type's median size. The total amount of mineralized rock is likewise proportional to the size of the permissive area and to the median size of the deposit type. Regressions using these variables provide a means to estimate the density of deposits and the total amount of mineralization. These powerful estimators are based on analysis of ten different types of mineral deposits (Climax Mo, Cuban Mn, Cyprus massive sulfide, Franciscan Mn, kuroko massive sulfide, low-sulfide quartz-Au vein, placer Au, podiform Cr, porphyry Cu, and W vein) from 108 permissive control tracts around the world, therefore generalizing across deposit types. Despite the diverse and complex geological settings of the deposit types studied here, the relationships observed indicate universal controls on the accumulation and preservation of mineral resources that operate across all scales. The strength of the relationships (R² = 0.91 for density and 0.95 for mineralized rock) argues for their broad use. Deposit densities can now be used to provide a guideline for expert judgment or used directly for estimating the number of most kinds of mineral deposits.
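A sketch of the kind of regression described above, fitting deposit counts against permissive tract area in log-log space. The areas and counts below are synthetic, for illustration only; the paper's regressions are built on 108 real control tracts and also use the deposit type's median size as a predictor.

```python
import numpy as np

# Synthetic tracts: permissive area (km^2) and number of known deposits
area = np.array([120, 450, 900, 2300, 5100, 12000, 34000, 90000], dtype=float)
n_deposits = np.array([6, 14, 21, 38, 60, 95, 170, 300], dtype=float)

# Fit log10(count) against log10(area); density then follows as count/area
slope, intercept = np.polyfit(np.log10(area), np.log10(n_deposits), 1)

new_area = 8000.0  # hypothetical permissive tract to assess
predicted = 10 ** (intercept + slope * np.log10(new_area))
print(f"log-log slope {slope:.2f}; predicted deposits for {new_area:.0f} km^2: {predicted:.1f}")
```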
USA: Economics, Politics, Ideology, Number 12, December 1977
1978-01-19
which will guarantee the pioneer firm the necessary profit level. The structure of market prices, however, represents a poor reflection, as we...the timely and rapid rearrangement of structural proportions. The economic mechanism of state-monopolistic capitalism, however, was incapable of...ensuring the necessary dynamism in the large-scale economy. The development of massive structural changes in the American economy is a complex and
Ultralow-power all-optical processing of high-speed data signals in deposited silicon waveguides.
Wang, Ke-Yao; Petrillo, Keith G; Foster, Mark A; Foster, Amy C
2012-10-22
Utilizing a 6-mm-long hydrogenated amorphous silicon nanowaveguide, we demonstrate error-free (BER < 10⁻⁹) 160-to-10 Gb/s OTDM demultiplexing using ultralow switching peak powers of 50 mW. This material is deposited at low temperatures, enabling a path toward multilayer integration and therefore massive scaling of the number of devices in a single photonic chip.
Self-similar hierarchical energetics in the ICM of massive galaxy clusters
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Beresnyak, Andrey
Massive galaxy clusters (GC) are filled with a hot, turbulent and magnetised intra-cluster medium (ICM). They are still forming under the action of gravitational instability, which drives supersonic mass accretion flows. These partially dissipate into heat through a complex network of large-scale shocks, and partly excite giant turbulent eddies and a cascade. Turbulence dissipation not only contributes to heating of the ICM but also amplifies magnetic energy by way of dynamo action. The pattern of gravitational energy turning into kinetic, thermal, turbulent and magnetic forms is a fundamental feature of GC hydrodynamics, but quantitative modelling has remained a challenge. In this contribution we present results from a recent high-resolution, fully cosmological numerical simulation of a massive Coma-like galaxy cluster in which the time-dependent turbulent motions of the ICM are resolved (Miniati 2014) and their statistical properties are quantified for the first time (Miniati 2015, Beresnyak & Miniati 2015). We combine these results with independent state-of-the-art numerical simulations of MHD turbulence (Beresnyak 2012), which show that in the nonlinear regime of turbulent dynamo (for magnetic Prandtl numbers ≳ 1) the growth rate of the magnetic energy corresponds to a fraction C_E ≈ 4–5 × 10⁻² of the turbulent dissipation rate. We thus determine without adjustable parameters the thermal, turbulent and magnetic history of giant GC (Miniati & Beresnyak 2015). We find that the energy components of the ICM are ordered according to a permanent hierarchy, in which the sonic Mach number at the turbulent injection scale is of order unity, the beta of the plasma of order forty, and the ratio of turbulent injection scale to Alfvén scale of order one hundred. These dimensionless numbers remain virtually unaltered throughout the cluster's history, despite evolution of each individual component and the drive towards equipartition of the turbulent dynamo, thus revealing a new type of self-similarity in cosmology. Their specific values, while consistent with current data, indicate that thermal energy dominates the ICM energetics and the turbulent dynamo is always far from saturation, unlike the conditions in other familiar astrophysical fluids (stars, interstellar medium of galaxies, compact objects, etc.). In addition, they have important physical meaning, as their specific values encode information about the efficiency of turbulent heating (the fraction of ICM thermal energy produced by turbulent dissipation) and the efficiency of dynamo action in the ICM (C_E).
An S_N Algorithm for Modern Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Randal Scott
2016-08-29
LANL discrete ordinates transport packages are required to perform large, computationally intensive time-dependent calculations on massively parallel architectures, where even a single such calculation may need many months to complete. While KBA methods scale out well to very large numbers of compute nodes, we are limited by practical constraints on the number of such nodes we can actually apply to any given calculation. Instead, we describe a modified KBA algorithm that allows realization of the reductions in solution time offered by both the current, and future, architectural changes within a compute node.
Statistical Issues in Galaxy Cluster Cosmology
NASA Technical Reports Server (NTRS)
Mantz, Adam
2013-01-01
The number and growth of massive galaxy clusters are sensitive probes of cosmological structure formation. Surveys at various wavelengths can detect clusters to high redshift, but the fact that cluster mass is not directly observable complicates matters, requiring us to simultaneously constrain scaling relations of observable signals with mass. The problem can be cast as one of regression, in which the data set is truncated, the (cosmology-dependent) underlying population must be modeled, and strong, complex correlations between measurements often exist. Simulations of cosmological structure formation provide a robust prediction for the number of clusters in the Universe as a function of mass and redshift (the mass function), but they cannot reliably predict the observables used to detect clusters in sky surveys (e.g. X-ray luminosity). Consequently, observers must constrain observable-mass scaling relations using additional data, and use the scaling relation model in conjunction with the mass function to predict the number of clusters as a function of redshift and luminosity.
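Schematically, the forward model sketched above multiplies a mass function by a selection probability derived from the observable-mass scaling relation and its scatter. The toy functional forms and every number below are invented purely to show the structure of the calculation, not fitted values.

```python
import numpy as np
from scipy.stats import norm

lnM = np.linspace(np.log(1e13), np.log(1e16), 400)  # mass grid in Msun (toy range)

def mass_function(lnM):
    """Schematic dn/dlnM with an exponential high-mass cutoff (toy form)."""
    return 1e-5 * np.exp(-(np.exp(lnM) / 2e14) ** 0.6)

def detection_prob(lnM, lnL_cut=np.log(3e43), scatter=0.4):
    """P(detect | M): scaling relation <lnL> = a + b lnM with log-normal scatter."""
    a, b = np.log(1e44) - 1.3 * np.log(1e14), 1.3  # toy normalization and slope
    mean_lnL = a + b * lnM
    return 1.0 - norm.cdf(lnL_cut, loc=mean_lnL, scale=scatter)

# Expected detectable number density: integrate mass function times selection
dlnM = lnM[1] - lnM[0]
counts = np.sum(mass_function(lnM) * detection_prob(lnM)) * dlnM
print(f"predicted detectable clusters per unit volume: {counts:.3e}")
```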
Black-hole-regulated star formation in massive galaxies.
Martín-Navarro, Ignacio; Brodie, Jean P; Romanowsky, Aaron J; Ruiz-Lara, Tomás; van de Ven, Glenn
2018-01-18
Supermassive black holes, with masses more than a million times that of the Sun, seem to inhabit the centres of all massive galaxies. Cosmologically motivated theories of galaxy formation require feedback from these supermassive black holes to regulate star formation. In the absence of such feedback, state-of-the-art numerical simulations fail to reproduce the number density and properties of massive galaxies in the local Universe. There is, however, no observational evidence of this strongly coupled coevolution between supermassive black holes and star formation, impeding our understanding of baryonic processes within galaxies. Here we report that the star formation histories of nearby massive galaxies, as measured from their integrated optical spectra, depend on the mass of the central supermassive black hole. Our results indicate that the black-hole mass scales with the gas cooling rate in the early Universe. The subsequent quenching of star formation takes place earlier and more efficiently in galaxies that host higher-mass central black holes. The observed relation between black-hole mass and star formation efficiency applies to all generations of stars formed throughout the life of a galaxy, revealing a continuous interplay between black-hole activity and baryon cooling.
Electric-dipole-induced universality for Dirac fermions in graphene.
De Martino, Alessandro; Klöpfer, Denis; Matrasulov, Davron; Egger, Reinhold
2014-05-09
We study electric dipole effects for massive Dirac fermions in graphene and related materials. The dipole potential accommodates towers of infinitely many bound states exhibiting a universal Efimov-like scaling hierarchy. The dipole moment determines the number of towers, but there is always at least one tower. The corresponding eigenstates show a characteristic angular asymmetry, observable in tunnel spectroscopy. However, charge transport properties inferred from scattering states are highly isotropic.
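For reference, an Efimov-like hierarchy means the tower of bound-state energies is geometric. Schematically, with a scaling exponent s set by the dipole strength (conventions vary between papers; this is the generic Efimov pattern rather than the paper's exact expression):

```latex
\frac{E_{n+1}}{E_n} \simeq e^{-2\pi/s}
\qquad\Longrightarrow\qquad
E_n \simeq E_0\, e^{-2\pi n/s} .
```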
Solving Navier-Stokes equations on a massively parallel processor; The 1 GFLOP performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saati, A.; Biringen, S.; Farhat, C.
This paper reports on experience in solving large-scale fluid dynamics problems on the Connection Machine model CM-2. The authors have implemented a parallel version of the MacCormack scheme for the solution of the Navier-Stokes equations. By using triad floating point operations and reducing the number of interprocessor communications, they have achieved a sustained performance rate of 1.42 GFLOPS.
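For readers unfamiliar with the scheme, here is a one-dimensional serial sketch of a MacCormack predictor-corrector step, applied to linear advection rather than the Navier-Stokes equations; the grid, CFL number, and initial condition are arbitrary choices.

```python
import numpy as np

def maccormack_step(u, c, dt, dx):
    """One MacCormack step for u_t + c u_x = 0 on a periodic grid."""
    f = c * u
    u_star = u - dt / dx * (np.roll(f, -1) - f)        # predictor: forward difference
    f_star = c * u_star
    return 0.5 * (u + u_star - dt / dx * (f_star - np.roll(f_star, 1)))  # corrector

x = np.linspace(0.0, 1.0, 200, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.3) ** 2)  # Gaussian pulse on a periodic domain
for _ in range(100):
    u = maccormack_step(u, c=1.0, dt=0.4 * dx, dx=dx)
print(f"pulse peak after 100 steps: {u.max():.3f}")
```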
Compact holographic optical neural network system for real-time pattern recognition
NASA Astrophysics Data System (ADS)
Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.
1996-08-01
One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced in the commercial market. The number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high-resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attaché case has been developed. Rotation-, shift- and scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.
High performance computing applications in neurobiological research
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.
1994-01-01
The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large-scale, biologically relevant computer simulations. We use high-performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.
NASA Astrophysics Data System (ADS)
Hunter, Deidre A.; Adamo, Angela; Elmegreen, Bruce G.; Gallardo, Samavarti; Lee, Janice C.; Cook, David O.; Thilker, David; Kayitesi, Bridget; Kim, Hwihyun; Kahre, Lauren; Ubeda, Leonardo; Bright, Stacey N.; Ryon, Jenna E.; Calzetti, Daniela; Tosi, Monica; Grasha, Kathryn; Messa, Matteo; Fumagalli, Michele; Dale, Daniel A.; Sabbi, Elena; Cignoni, Michele; Smith, Linda J.; Gouliermis, Dimitrios M.; Grebel, Eva K.; Aloisi, Alessandra; Whitmore, Bradley C.; Chandar, Rupali; Johnson, Kelsey E.
2018-07-01
We have explored the role environmental factors play in determining characteristics of young stellar objects in nearby dwarf irregular and blue compact dwarf galaxies. Star clusters are characterized by concentrations, masses, and formation rates; OB associations by mass and mass surface density; O stars by their numbers and near-ultraviolet absolute magnitudes; and H II regions by Hα surface brightnesses. These characteristics are compared to surrounding galactic pressure, stellar mass density, H I surface density, and star formation rate (SFR) surface density. We find no trend of cluster characteristics with environmental properties, implying that larger-scale effects are more important in determining cluster characteristics or that rapid dynamical evolution erases any memory of the initial conditions. On the other hand, the most massive OB associations are found at higher pressure and H I surface density, and there is a trend of higher H II region Hα surface brightness with higher pressure, suggesting that a higher concentration of massive stars and gas is found preferentially in regions of higher pressure. At low pressures we find massive stars but not bound clusters and OB associations. We do not find evidence for an increase of cluster formation efficiency as a function of SFR density. However, there is an increase in the ratio of the number of clusters to the number of O stars with increasing pressure, perhaps reflecting an increase in clustering properties with SFR.
The Number Density of Quiescent Compact Galaxies at Intermediate Redshift
NASA Astrophysics Data System (ADS)
Damjanov, Ivana; Hwang, Ho Seong; Geller, Margaret J.; Chilingarian, Igor
2014-09-01
Massive compact systems at 0.2 < z < 0.6 are the missing link between the predominantly compact population of massive quiescent galaxies at high redshift and their analogs and relics in the local volume. The evolution in number density of these extreme objects over cosmic time is the crucial constraining factor for models of massive galaxy assembly. We select a large sample of ~200 intermediate-redshift massive compacts from the Baryon Oscillation Spectroscopic Survey (BOSS) spectroscopy by identifying point-like Sloan Digital Sky Survey photometric sources with spectroscopic signatures of evolved redshifted galaxies. A subset of our targets have publicly available high-resolution ground-based images that we use to augment the dynamical and stellar population properties of these systems with their structural parameters. We confirm that all BOSS compact candidates are as compact as their high-redshift massive counterparts and less than half the size of similarly massive systems at z ~ 0. We use the completeness-corrected numbers of BOSS compacts to compute lower limits on their number densities in narrow redshift bins spanning the range of our sample. The abundance of extremely dense quiescent galaxies at 0.2 < z < 0.6 is in excellent agreement with the number densities of these systems at high redshift. Our lower limits support models of massive galaxy assembly through a series of minor mergers over the redshift range 0 < z < 2.
A Collection of Economic and Social Data from Glitch, a Massively Multiplayer Online Game
2013-03-05
A Collection of Economic and Social Data from Glitch, a Massively Multiplayer Online Game. Peter M. Landwehr. March 5, 2013. CMU-ISR-13-...massively multiplayer online games (MMOG) - social and cultural model embedding technologies. Additional support was provided by CASOS — the center for...
Massively parallel quantum computer simulator
NASA Astrophysics Data System (ADS)
De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.
2007-01-01
We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
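The core of such a simulator fits in a few lines: an n-qubit register is a vector of 2^n complex amplitudes, and a single-qubit gate is a contraction over one tensor axis (which is also why 36 qubits in double-precision complex already demand on the order of 1 TB). This toy serial version is our own sketch; the paper's software additionally distributes the state vector over thousands of processors.

```python
import numpy as np

def apply_gate(state, gate, t, n):
    """Apply a 2x2 gate to qubit t of an n-qubit state vector."""
    psi = state.reshape([2] * n)                      # view as a rank-n tensor
    psi = np.tensordot(gate, psi, axes=([1], [t]))    # contract the target axis
    return np.moveaxis(psi, 0, t).reshape(-1)         # restore the axis order

n = 10
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                        # |00...0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate
for t in range(n):
    state = apply_gate(state, H, t, n)                # uniform superposition
print(f"basis-state amplitude: {abs(state[0]):.4f} (expect {2 ** (-n / 2):.4f})")
```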
Clusters of Galaxies at High Redshift
NASA Astrophysics Data System (ADS)
Fort, Bernard
For a long time, the small number of clusters at z > 0.3 in the Abell survey catalogue and simulations of the standard CDM formation of large-scale structures provided a paradigm in which clusters were considered young merging structures. At earlier times, loose concentrations of galaxy clumps were mostly anticipated. Recent observations broke the taboo. Progressively we became convinced that compact and massive clusters at z = 1 or possibly beyond exist and should be searched for.
Modeling Large Scale Circuits Using Massively Parallel Discrete-Event Simulation
2013-06-01
exascale levels of performance, the smallest elements of a single processor can greatly affect the entire computer system (e.g. its power consumption)...Warp Speed 10.0. 2.0 INTRODUCTION As supercomputer systems approach exascale, the core count will exceed 1024 and the number of transistors used in
Preparation of Entangled Polymer Melts of Various Architecture for Coarse-Grained Models
2011-09-01
Simulator (LAMMPS). This report presents a theory overview and a manual on how to use the method. Subject terms: ammunition, coarse-grained model...polymer builder, LAMMPS...scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). Gel is an in-house written C program for building coarse-grained polymers, and LAMMPS is
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ching, Tao-Chung; Lai, Shih-Ping; Zhang, Qizhou
We present Submillimeter Array 880 μm dust polarization observations of six massive dense cores in the DR21 filament. The dust polarization shows complex magnetic field structures in the massive dense cores with sizes of 0.1 pc, in contrast to the ordered magnetic fields of the parsec-scale filament. The major axes of the massive dense cores appear to be aligned either parallel or perpendicular to the magnetic fields of the filament, indicating that the parsec-scale magnetic fields play an important role in the formation of the massive dense cores. However, the correlation between the major axes of the cores and the magnetic fields of the cores is less significant, suggesting that during the core formation, the magnetic fields below 0.1 pc scales become less important than the magnetic fields above 0.1 pc scales in supporting a core against gravity. Our analysis of the angular dispersion functions of the observed polarization segments yields a plane-of-sky magnetic field strength of 0.4–1.7 mG for the massive dense cores. We estimate the kinematic, magnetic, and gravitational virial parameters of the filament and the cores. The virial parameters show that the gravitational energy in the filament dominates magnetic and kinematic energies, while the kinematic energy dominates in the cores. Our work suggests that although magnetic fields may play an important role in a collapsing filament, the kinematics arising from gravitational collapse must become more important than magnetic fields during the evolution from filaments to massive dense cores.
Analysis of Massively Separated Flows of Aircraft Using Detached Eddy Simulation
NASA Astrophysics Data System (ADS)
Morton, Scott
2002-08-01
An important class of turbulent flows of aerodynamic interest are those characterized by massive separation, e.g., the flow around an aircraft at high angle of attack. Numerical simulation is an important tool for analysis, though traditional models used in the solution of the Reynolds-averaged Navier-Stokes (RANS) equations appear unable to accurately account for the time-dependent and three-dimensional motions governing flows with massive separation. Large-eddy simulation (LES) is able to resolve these unsteady three-dimensional motions, yet is cost-prohibitive for high-Reynolds-number wall-bounded flows due to the need to resolve the small-scale motions in the boundary layer. Spalart et al. proposed a hybrid technique, Detached-Eddy Simulation (DES), which takes advantage of the often adequate performance of RANS turbulence models in the "thin," typically attached regions of the flow. In the separated regions of the flow the technique becomes a large-eddy simulation, directly resolving the time-dependent and unsteady features that dominate regions of massive separation. The current work applies DES to a 70-degree-sweep delta wing at 27 degrees angle of attack, a geometrically simple yet challenging flowfield that exhibits the unsteady three-dimensional massively separated phenomenon of vortex breakdown. After detailed examination of this basic flowfield, the method is demonstrated on three full aircraft of interest characterized by massive separation: the F-16 at 45 degrees angle of attack, the F-15 at 65 degrees angle of attack (with comparison to flight test), and the C-130 in a parachute drop condition at near stall speed with cargo doors open.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gihring, Thomas; Green, Stefan; Schadt, Christopher Warren
2011-01-01
Technologies for massively parallel sequencing are revolutionizing microbial ecology and are vastly increasing the scale of ribosomal RNA (rRNA) gene studies. Although pyrosequencing has increased the breadth and depth of possible rRNA gene sampling, one drawback is that the number of reads obtained per sample is difficult to control. Pyrosequencing libraries typically vary widely in the number of sequences per sample, even within individual studies, and there is a need to revisit the behaviour of richness estimators and diversity indices with variable gene sequence library sizes. Multiple reports and review papers have demonstrated the bias in non-parametric richness estimators (e.g. Chao1 and ACE) and diversity indices when using clone libraries. However, we found that biased community comparisons are accumulating in the literature. Here we demonstrate the effects of sample size on Chao1, ACE, CatchAll, Shannon, Chao-Shen and Simpson's estimations specifically using pyrosequencing libraries. The need to equalize the number of reads being compared across libraries is reiterated, and investigators are directed towards available tools for making unbiased diversity comparisons.
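To illustrate the library-size effect at issue, here is a sketch comparing Chao1 on two libraries of very different depth drawn from the same synthetic community, before and after subsampling to equal depth. The community model, depths, and the bias-corrected Chao1 form used here are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def chao1(counts):
    """Bias-corrected Chao1 richness estimate from a vector of OTU counts."""
    s_obs = np.count_nonzero(counts)
    f1 = np.sum(counts == 1)  # singletons
    f2 = np.sum(counts == 2)  # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def subsample(counts, depth):
    """Rarefy a library to a fixed number of reads for a fair comparison."""
    reads = np.repeat(np.arange(counts.size), counts)
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=counts.size)

# Synthetic community of 500 OTUs with skewed relative abundances
p = rng.geometric(0.02, size=500).astype(float)
p /= p.sum()
big = rng.multinomial(100_000, p)   # deep library
small = rng.multinomial(5_000, p)   # shallow library from the same community

print(f"Chao1 at unequal depths: {chao1(big):.0f} vs {chao1(small):.0f}")
print(f"Chao1 at equal depth:    {chao1(subsample(big, small.sum())):.0f} vs {chao1(small):.0f}")
```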
Emerging Roles: Key Insights from Librarians in a Massive Open Online Course
ERIC Educational Resources Information Center
Stephens, Michael; Jones, Kyle M. L.
2015-01-01
From the cutting edge of innovations in online education comes the MOOC (Massive Open Online Course), a potentially disruptive and transformational mechanism for large-scale learning. What's the role of librarians in a MOOC? What can librarians learn from participating in a large-scale professional development opportunity delivered in an open…
NASA Astrophysics Data System (ADS)
Sybilska, A.; Lisker, T.; Kuntschner, H.; Vazdekis, A.; van de Ven, G.; Peletier, R.; Falcón-Barroso, J.; Vijayaraghavan, R.; Janz, J.
2017-09-01
We present the first in a series of papers in the hELENa (The role of Environment in shaping Low-mass Early-type Nearby galaxies) project. In this paper, we combine our sample of 20 low-mass early types (dEs) with 258 massive early types (ETGs) from the ATLAS3D survey - all observed with the SAURON integral field unit - to investigate early-type galaxies' stellar population scaling relations and the dependence of the population properties on local environment, extended to the low-σ regime of dEs. The ages in our sample show more scatter at lower σ values, indicative of less massive galaxies being affected by the environment to a higher degree. The shape of the age-σ relations for cluster versus non-cluster galaxies suggests that cluster environment speeds up the placing of galaxies on the red sequence. While the scaling relations are tighter for cluster than for the field/group objects, we find no evidence for a difference in average population characteristics of the two samples. We investigate the properties of our sample in the Virgo cluster as a function of number density (rather than simple clustrocentric distance) and find that dE ages correlate with the local density such that galaxies in regions of lower density are younger, likely because they are later arrivals to the cluster or have experienced less pre-processing in groups, and consequently used up their gas reservoir more recently. Overall, dE properties correlate more strongly with density than those of massive ETGs, which was expected as less massive galaxies are more susceptible to external influences.
Graviton mass or cosmological constant?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gabadadze, Gregory; Gruzinov, Andrei
2005-12-15
To describe a massive graviton in 4D Minkowski space-time one introduces a quadratic term in the Lagrangian. This term, however, can lead to a readjustment or instability of the background instead of describing a massive graviton on flat space. We show that for all local 4D Lorentz-invariant mass terms Minkowski space is unstable. The instability can develop in a time scale that is many orders of magnitude shorter than the inverse graviton mass. We start with the Pauli-Fierz (PF) term, which is the only local mass term with no ghosts in the linearized approximation. We show that nonlinear completions of the PF Lagrangian give rise to instability of Minkowski space. We continue with the mass terms that are not of a PF type. Although these models are known to have ghosts in the linearized approximation, nonlinear interactions can lead to a background change in which the ghosts are eliminated. In the latter case, however, the graviton perturbations on the new background are not massive. We argue that a consistent theory of a massive graviton on flat space can be formulated in theories with extra dimensions. They require an infinite number of fields or a nonlocal description from a 4D point of view.
Accumulation of a swarm of small planetesimals
NASA Technical Reports Server (NTRS)
Wetherill, G. W.; Stewart, Glen R.
1989-01-01
The present gasdynamic study of the planetesimal-accumulation stage in which 10-km bodies in the neighborhood of 1 AU grow to 10^25-10^27 g mass, or 'planetary embryo' size, attempts to identify the circumstances under which runaway growth forms a small number of massive embryos in the terrestrial-planet region on a 0.1-1.0 million year time-scale. No runaways are found, however, unless more plausible physical processes are invoked; in that case, runaways in the terrestrial planet region are probable on a 0.1 million-year time-scale, and the final stage of planetary accumulation may involve the growth of these embryos into the present planets on a 10-100 million-year time-scale.
Cosmology and unified gauge theory
NASA Astrophysics Data System (ADS)
Oraifeartaigh, L.
1981-09-01
Theoretical points in common between cosmology and unified gauge theory (UGT) are reviewed, with attention given to areas of one which have proven useful for the other. The underlying principles for both theoretical frameworks are described, noting the differences in scale, i.e., 10^25 cm in cosmology and 10^-15 cm for UGT. Cosmology has produced bounds on the number of existing neutrino species, and also on the mass of neutrinos, two factors of interest in particle physics. UGT predicts that electrons, protons (each composed of three quarks), and neutrinos, sharing a common origin in the same massive parent particles, should be present in equal numbers in the Universe, in line with the necessities of cosmology. Grand unified theories also suggest specific time scales for proton decay, thus accounting for the observed baryon asymmetry.
MASGOMAS PROJECT, New automatic-tool for cluster search on IR photometric surveys
NASA Astrophysics Data System (ADS)
Rübke, K.; Herrero, A.; Borissova, J.; Ramirez-Alegria, S.; García, M.; Marin-Franch, A.
2015-05-01
The Milky Way is expected to contain a large number of young massive (few x 1000 solar masses) stellar clusters, born in dense cores of gas and dust. Yet, their known number remains small. We have started a programme to search for such clusters, MASGOMAS (MAssive Stars in Galactic Obscured MAssive clusterS). Initially, we selected promising candidates by means of visual inspection of infrared images. In a second phase of the project we presented a semi-automatic method to search for obscured massive clusters that resulted in the identification of new massive clusters, like MASGOMAS-1 (with more than 10,000 solar masses) and MASGOMAS-4 (a double-cored association of about 3,000 solar masses). We have now developed a new automatic tool for MASGOMAS that allows the identification of a large number of massive cluster candidates from the 2MASS and VVV catalogues. Cluster candidates fulfilling criteria appropriate for massive OB stars are thus selected in an efficient and objective way. We present the results from this tool and the observations of the first selected cluster, and discuss the implications for the Milky Way structure.
Mechanism for thermal relic dark matter of strongly interacting massive particles.
Hochberg, Yonit; Kuflik, Eric; Volansky, Tomer; Wacker, Jay G
2014-10-24
We present a new paradigm for achieving thermal relic dark matter. The mechanism arises when a nearly secluded dark sector is thermalized with the standard model after reheating. The freeze-out process is a number-changing 3→2 annihilation of strongly interacting massive particles (SIMPs) in the dark sector, and points to sub-GeV dark matter. The couplings to the visible sector, necessary for maintaining thermal equilibrium with the standard model, imply measurable signals that will allow coverage of a significant part of the parameter space with future indirect- and direct-detection experiments and via direct production of dark matter at colliders. Moreover, 3→2 annihilations typically predict sizable 2→2 self-interactions which naturally address the "core versus cusp" and "too-big-to-fail" small-scale structure formation problems.
What drives the formation of massive stars and clusters?
NASA Astrophysics Data System (ADS)
Ochsendorf, Bram; Meixner, Margaret; Roman-Duval, Julia; Evans, Neal J., II; Rahman, Mubdi; Zinnecker, Hans; Nayak, Omnarayani; Bally, John; Jones, Olivia C.; Indebetouw, Remy
2018-01-01
Galaxy-wide surveys allow us to study star formation in unprecedented ways. In this talk, I will discuss our analysis of the Large Magellanic Cloud (LMC) and the Milky Way, and illustrate how studying both the large- and small-scale structure of galaxies is critical in addressing the question: what drives the formation of massive stars and clusters? I will show that ‘turbulence-regulated’ star formation models do not reproduce the massive star formation properties of GMCs in the LMC and Milky Way: this suggests that theory currently does not capture the full complexity of star formation on small scales. I will also report on the discovery of a massive star forming complex in the LMC, which in many ways manifests itself as an embedded twin of 30 Doradus: this may shed light on the formation of R136 and 'Super Star Clusters' in general. Finally, I will highlight what we can expect in the next years in the field of star formation with large-scale sky surveys, ALMA, and our JWST-GTO program.
Neutron star dynamos and the origins of pulsar magnetism
NASA Technical Reports Server (NTRS)
Thompson, Christopher; Duncan, Robert C.
1993-01-01
Neutron star convection is a transient phenomenon and has an extremely high magnetic Reynolds number. In this sense, a neutron star dynamo is the quintessential fast dynamo. The convective motions are only mildly turbulent on scales larger than the approximately 100 cm neutrino mean free path, but the turbulence is well developed on smaller scales. Several fundamental issues in the theory of fast dynamos are raised in the study of a neutron star dynamo, in particular the possibility of dynamo action in mirror-symmetric turbulence. It is argued that in any high magnetic Reynolds number dynamo, most of the magnetic energy becomes concentrated in thin flux ropes when the field pressure exceeds the turbulent pressure at the smallest scale of turbulence. In addition, the possibilities for dynamo action during the various (pre-collapse) stages of convective motion that occur in the evolution of a massive star are examined, and the properties of white dwarf and neutron star progenitors are contrasted.
Galaxy formation in an intergalactic medium dominated by explosions
NASA Technical Reports Server (NTRS)
Ostriker, J. P.; Cowie, L. L.
1981-01-01
The evolution of galaxies in an intergalactic medium dominated by explosions of star systems is considered analogously to star formation by nonlinearly interacting processes in the interstellar medium. Conditions for the existence of a hydrodynamic instability by which galaxy formation leads to more galaxy formation due to the propagation of the energy released at the death of massive stars are examined, and it is shown that such an explosive amplification is possible at redshifts less than about 5 and stellar system masses between 10^8 and 10^12 solar masses. Explosions before a redshift of about 5 are found to lead primarily to the formation of massive stars rather than galaxies, while those at a redshift close to 5 will result in objects of normal galactic scale. The model also predicts a dusty interstellar medium preventing the detection of objects of redshift greater than 3, numbers and luminosities of protogalaxies comparable to present observations, unvirialized groups of galaxies lying on two-dimensional surfaces, and a significant number of black holes in the mass range 1000-10,000 solar masses.
DEMNUni: massive neutrinos and the bispectrum of large scale structures
NASA Astrophysics Data System (ADS)
Ruggeri, Rossana; Castorina, Emanuele; Carbone, Carmelita; Sefusatti, Emiliano
2018-03-01
The main effect of massive neutrinos on the large-scale structure consists in a few percent suppression of matter perturbations on all scales below their free-streaming scale. Such an effect is of particular importance as it allows us to constrain the value of the sum of neutrino masses from measurements of the galaxy power spectrum. In this work, we present the first measurements of the next higher-order correlation function, the bispectrum, from N-body simulations that include massive neutrinos as particles. This is the simplest statistic characterising the non-Gaussian properties of the matter and dark matter halo distributions. We investigate, in the first place, the suppression due to massive neutrinos on the matter bispectrum, comparing our measurements with the simplest perturbation theory predictions, finding the approximation of neutrinos contributing at quadratic order in perturbation theory to provide a good fit to the measurements in the simulations. On the other hand, as expected, a linear approximation for neutrino perturbations would lead to O(f_ν) errors on the total matter bispectrum at large scales. We then attempt an extension of previous results on the universality of linear halo bias in neutrino cosmologies to non-linear and non-local corrections, finding results consistent with the power spectrum analysis.
NASA Technical Reports Server (NTRS)
Stefanon, Mauro; Marchesini, Danilo; Rudnick, Gregory H.; Brammer, Gabriel B.; Tease, Katherine Whitaker
2013-01-01
Using public data from the NEWFIRM Medium-Band Survey (NMBS) and the Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS), we investigate the population of massive galaxies at z > 3. The main aim of this work is to identify the potential progenitors of z ∼ 2 compact, massive, quiescent galaxies (CMQGs), furthering our understanding of the onset and evolution of massive galaxies. Our work is enabled by high-resolution images from CANDELS data and accurate photometric redshifts, stellar masses, and star formation rates (SFRs) from 37-band NMBS photometry. The total number of massive galaxies at z > 3 is consistent with the number of massive, quiescent galaxies (MQGs) at z ∼ 2, implying that the SFRs for all of these galaxies must be much lower by z ∼ 2. We discover four CMQGs at z > 3, pushing back the time for which such galaxies have been observed. However, the volume density for these galaxies is significantly less than that of galaxies at z < 2 with similar masses, SFRs, and sizes, implying that additional CMQGs must be created in the intervening 1 Gyr between z = 3 and z = 2. We find five star-forming galaxies at z ∼ 3 that are compact (Re < 1.4 kpc) and have stellar mass M* > 10^10.6 M⊙; these galaxies are likely to become members of the massive, quiescent, compact galaxy population at z ∼ 2. We evolve the stellar masses and SFRs of each individual z > 3 galaxy adopting five different star formation histories (SFHs) and studying the resulting population of massive galaxies at z = 2.3. We find that declining or truncated SFHs are necessary to match the observed number density of MQGs at z ∼ 2, whereas a constant delayed-exponential SFH would result in a number density significantly smaller than observed. All of our assumed SFHs imply number densities of CMQGs at z ∼ 2 that are consistent with the observed number density. Better agreement with the observed number density of CMQGs at z ∼ 2 is obtained if merging is included in the analysis, and better still if star formation quenching is assumed to shortly follow the merging event, as implied by recent models of the formation of MQGs.
ERIC Educational Resources Information Center
Xiong, Yao; Suen, Hoi K.
2018-01-01
The development of massive open online courses (MOOCs) has launched an era of large-scale interactive participation in education. While massive open enrolment and the advances of learning technology are creating exciting potentials for lifelong learning in formal and informal ways, the implementation of efficient and effective assessment is still…
A Systematic Review of the Socio-Ethical Aspects of Massive Online Open Courses
ERIC Educational Resources Information Center
Rolfe, Vivien
2015-01-01
Massive open online courses (MOOCs) offer learners across the globe unprecedented access to education. Through sophisticated e-learning technologies and web approaches, MOOCs attract massive scale participation and global interest. Some commercial ventures place social equality at the heart of their missions, claiming to empower communities by…
Massive star winds interacting with magnetic fields on various scales
NASA Astrophysics Data System (ADS)
David-Uraz, A.; Petit, V.; Erba, C.; Fullerton, A.; Walborn, N.; MacInnis, R.
2018-01-01
One of the defining processes which govern massive star evolution is their continuous mass loss via dense, supersonic line-driven winds. In the case of those OB stars which also host a surface magnetic field, the interaction between that field and the ionized outflow leads to complex circumstellar structures known as magnetospheres. In this contribution, we review recent developments in the field of massive star magnetospheres, including current efforts to characterize the largest magnetosphere surrounding an O star: that of NGC 1624-2. We also discuss the potential of the "analytic dynamical magnetosphere" (ADM) model to interpret multi-wavelength observations. Finally, we examine the possible effects of — heretofore undetected — small-scale magnetic fields on massive star winds and compare their hypothetical consequences to existing, unexplained observations.
2013-08-01
[Report fragment; only portions are recoverable.] This work models dislocations in the energetic molecular crystal RDX using the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) molecular dynamics code, with the SB potential for HMX/RDX (including dispersion and electrostatic interactions; constants given in table 1 of the source).
NASA Technical Reports Server (NTRS)
Leung, K. C.
1989-01-01
Reverse Algols, binary systems with a semidetached configuration in which the more massive component is in contact with the critical equipotential surface, are examined. Observational evidence for reverse Algols is presented and the parameters of seven reverse Algols are listed. The evolution of Algols and reverse Algols is discussed. It is suggested that, because reverse Algols represent the pre-mass-reversal semidetached phase of close binary evolution, the ratio of the evolutionary time scales of the regular and reverse phases can be estimated from the ratio of the numbers of confirmed systems of the two Algol types.
50 GFlops molecular dynamics on the Connection Machine 5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lomdahl, P.S.; Tamayo, P.; Groenbech-Jensen, N.
1993-12-31
The authors present timings and performance numbers for a new short range three dimensional (3D) molecular dynamics (MD) code, SPaSM, on the Connection Machine-5 (CM-5). They demonstrate that runs with more than 10^8 particles are now possible on massively parallel MIMD computers. To the best of their knowledge this is at least an order of magnitude more particles than what has previously been reported. Typical production runs show sustained performance (including communication) in the range of 47-50 GFlops on a 1024 node CM-5 with vector units (VUs). The speed of the code scales linearly with the number of processors and with the number of particles and shows 95% parallel efficiency in the speedup.
Lee, Byung Moo
2017-12-29
Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices using its excessive amount of transmitter (TX) antennas. However, one of the big obstacles for the realization of the massive MIMO system is the overhead of reference signal (RS), because the number of RS is proportional to the number of TX antennas and/or related user equipments (UEs). It has been already reported that antenna group-based RS overhead reduction can be very effective to the efficient operation of massive MIMO, but the method of deciding the number of antennas needed in each group is at question. In this paper, we propose a simplified determination scheme of the number of antennas needed in each group for RS overhead reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices is a framework to configure wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) by using zero-forcing (ZF) and matched filtering (MF) precoding for the RS overhead reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the method of the channel estimation. Second, based on the closed-form approximation, we present an efficient algorithm determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of SE. It is verified with theoretical analysis and simulation that the proposed algorithm works well, and thus can be used as an important tool for massive MIMO systems to support many distributed IoT devices.
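The record above turns on inverting closed-form spectral-efficiency (SE) approximations to size antenna groups. A minimal sketch of that idea follows; the standard ZF approximation used below and its perfect-CSI, equal-power assumptions are mine, not the paper's formulas, which additionally carry a channel-estimation-error factor.

```python
import numpy as np

def se_zf(M, K, snr):
    """Approximate sum SE [bit/s/Hz] for ZF precoding with M antennas and
    K single-antenna users (equal power split, perfect CSI):
    SE ~= K * log2(1 + snr * (M - K + 1) / K).
    This standard approximation is an assumption of this sketch."""
    return K * np.log2(1 + snr * (M - K + 1) / K)

def antennas_for_target_se(target_se, K, snr):
    """Invert the closed form to get the antennas needed in a group,
    mirroring the paper's use of exact inverse functions of SE."""
    M = K - 1 + K * (2.0 ** (target_se / K) - 1) / snr
    return int(np.ceil(M))

# Example: antennas needed for 40 bit/s/Hz with 8 users at 0 dB SNR
M = antennas_for_target_se(target_se=40.0, K=8, snr=1.0)
print(M, se_zf(M, K=8, snr=1.0))   # -> 255 antennas, SE >= 40
```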
Searching for massive clusters in weak lensing surveys
NASA Astrophysics Data System (ADS)
Hamana, Takashi; Takada, Masahiro; Yoshida, Naoki
2004-05-01
We explore the ability of weak lensing surveys to locate massive clusters. We use both analytic models of dark matter haloes and mock weak lensing surveys generated from a large cosmological N-body simulation. The analytic models describe the average properties of weak lensing haloes and predict the number counts, enabling us to compute an effective survey selection function. We argue that the detectability of massive haloes depends not only on the halo mass but also strongly on the redshift where the halo is located. We test the model prediction for the peak number counts in weak lensing mass maps against mock numerical data, and find that the noise resulting from intrinsic galaxy ellipticities causes a systematic effect which increases the peak counts. We develop a correction scheme for the systematic effect in an empirical manner, and show that, after correction, the model prediction agrees well with the mock data. The mock data are also used to examine the completeness and efficiency of the weak lensing halo search by fully taking into account the noise and the projection effect by large-scale structures. We show that a detection threshold of S/N = 4-5 gives an optimal balance between completeness and efficiency. Our results suggest that, for a weak lensing survey with a galaxy number density of n_g = 30 arcmin^-2 with a mean redshift of z = 1, the mean number of haloes which are expected to cause lensing signals above S/N = 4 is N_halo(S/N > 4) = 37 per 10 deg^2, whereas 23 of the haloes are actually detected with S/N > 4, giving an effective completeness as good as 63 per cent. Alternatively, the mean number of peaks in the same area is N_peak = 62 for a detection threshold of S/N = 4. Among the 62 peaks, 23 are caused by haloes with the expected peak height S/N > 4, 13 result from haloes with 3 < S/N < 4 and the remaining 26 peaks are either false peaks caused by the noise or haloes with a lower expected peak height. Therefore the contamination rate is 44 per cent (this could be an overestimation). Weak lensing surveys thus provide a reasonably efficient way to search for massive clusters.
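The noise bias on peak counts described above can be reproduced with a toy noise-only Monte Carlo. The sketch below is illustrative rather than the authors' pipeline; the pixel scale, smoothing length, σ_ε = 0.3, and n_g = 30 arcmin^-2 are assumed values, and the per-pixel noise variance uses one common convention.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(1)
pix = 0.2                                  # pixel scale [arcmin] (assumed)
n_g, sigma_eps, theta_g = 30.0, 0.3, 1.0   # gal/arcmin^2, ellipticity rms, smoothing [arcmin]
npix = 512

# Shape noise per pixel (one common convention), then Gaussian smoothing
sigma_pix = sigma_eps / np.sqrt(2.0 * n_g * pix**2)
noise = gaussian_filter(rng.normal(0.0, sigma_pix, (npix, npix)), theta_g / pix)

# S/N map and peak counting: local maxima above threshold
snr = noise / noise.std()
peaks = (snr == maximum_filter(snr, size=5)) & (snr > 4.0)
print("noise-only peaks with S/N > 4:", peaks.sum())
```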
Map scale effects on estimating the number of undiscovered mineral deposits
Singer, D.A.; Menzie, W.D.
2008-01-01
Estimates of numbers of undiscovered mineral deposits, fundamental to assessing mineral resources, are affected by map scale. Where consistently defined deposits of a particular type are estimated, spatial and frequency distributions of deposits are linked in that some frequency distributions can be generated by processes random in space whereas others are generated by processes suggesting clustering in space. Possible spatial distributions of mineral deposits and their related frequency distributions are affected by map scale and associated inclusions of non-permissive or covered geological settings. More generalized map scales are more likely to cause inclusion of geologic settings that are not really permissive for the deposit type, or that include unreported cover over permissive areas, resulting in the appearance of deposit clustering. Thus, overly generalized map scales can cause deposits to appear clustered. We propose a model that captures the effects of map scale and the related inclusion of non-permissive geologic settings on estimates of numbers of deposits: the zero-inflated Poisson distribution. Effects of map scale as represented by the zero-inflated Poisson distribution suggest that the appearance of deposit clustering should diminish as mapping becomes more detailed because the number of inflated zeros would decrease with more detailed maps. Based on observed worldwide relationships between map scale and areas permissive for deposit types, mapping at a scale with twice the detail should cut the permissive area of a porphyry copper tract to 29% and a volcanic-hosted massive sulfide tract to 50% of their original sizes. Thus some direct benefits of mapping an area at a more detailed scale are indicated by significant reductions in areas permissive for deposit types, increased deposit density and, as a consequence, reduced uncertainty in the estimate of the number of undiscovered deposits. Exploration enterprises benefit from reduced areas requiring detailed and expensive exploration, and land-use planners benefit from reduced areas of concern. © 2008 International Association for Mathematical Geology.
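As a worked illustration of how zero inflation mimics clustering, the sketch below samples the zero-inflated Poisson model named above; the parameter values are invented for the example. Since a ZIP with structural-zero probability π and rate λ has variance-to-mean ratio 1 + πλ, a larger π (a more generalized map hiding non-permissive tracts) looks more "clustered".

```python
import numpy as np

rng = np.random.default_rng(0)

def zip_sample(pi_zero, lam, size):
    """Zero-inflated Poisson: with probability pi_zero a tract yields a
    structural zero (non-permissive geology hidden by a generalized map),
    otherwise counts are Poisson(lam)."""
    structural_zero = rng.random(size) < pi_zero
    counts = rng.poisson(lam, size)
    return np.where(structural_zero, 0, counts)

# Generalized map (many hidden non-permissive tracts) vs detailed map
coarse = zip_sample(pi_zero=0.6, lam=2.0, size=100_000)
detail = zip_sample(pi_zero=0.1, lam=2.0, size=100_000)

# Variance-to-mean ratio > 1 mimics apparent clustering of deposits
print("coarse map VMR:", coarse.var() / coarse.mean())   # ~ 1 + 0.6*2 = 2.2
print("detailed map VMR:", detail.var() / detail.mean()) # ~ 1 + 0.1*2 = 1.2
```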
DISCRN: A Distributed Storytelling Framework for Intelligence Analysis.
Shukla, Manu; Dos Santos, Raimundo; Chen, Feng; Lu, Chang-Tien
2017-09-01
Storytelling connects entities (people, organizations) using their observed relationships to establish meaningful storylines. This can be extended to spatiotemporal storytelling that incorporates locations, time, and graph computations to enhance coherence and meaning. But when performed sequentially these computations become a bottleneck because the massive number of entities makes space and time complexity untenable. This article presents DISCRN, or distributed spatiotemporal ConceptSearch-based storytelling, a distributed framework for performing spatiotemporal storytelling. The framework extracts entities from microblogs and event data, and links these entities using a novel ConceptSearch to derive storylines in a distributed fashion utilizing the key-value pair paradigm. Performing these operations at scale allows deeper and broader analysis of storylines. The novel parallelization techniques speed up the generation and filtering of storylines on massive datasets. Experiments with microblog posts such as Twitter data and Global Database of Events, Language, and Tone events show the efficiency of the techniques in DISCRN.
Gole, Jeff; Gore, Athurva; Richards, Andrew; Chiu, Yu-Jui; Fung, Ho-Lim; Bushman, Diane; Chiang, Hsin-I; Chun, Jerold; Lo, Yu-Hwa; Zhang, Kun
2013-01-01
Genome sequencing of single cells has a variety of applications, including characterizing difficult-to-culture microorganisms and identifying somatic mutations in single cells from mammalian tissues. A major hurdle in this process is the bias in amplifying the genetic material from a single cell, a procedure known as polymerase cloning. Here we describe the microwell displacement amplification system (MIDAS), a massively parallel polymerase cloning method in which single cells are randomly distributed into hundreds to thousands of nanoliter wells and simultaneously amplified for shotgun sequencing. MIDAS reduces amplification bias because polymerase cloning occurs in physically separated nanoliter-scale reactors, facilitating the de novo assembly of near-complete microbial genomes from single E. coli cells. In addition, MIDAS allowed us to detect single-copy number changes in primary human adult neurons at 1–2 Mb resolution. MIDAS will further the characterization of genomic diversity in many heterogeneous cell populations. PMID:24213699
Contribution of Massive Stars to the Production of Neutron Capture Elements
NASA Astrophysics Data System (ADS)
Federman, Steven
2010-09-01
Elements beyond the Fe-peak must be synthesized through neutron-capture processes. With the aim of understanding the contribution of massive stars to the synthesis of neutron-capture elements during the current epoch, we propose an archival survey of interstellar arsenic, cadmium, tin, and lead. Nucleosynthesis via the weak slow process and the rapid process are the routes involving massive stars, while the main slow process arises from the evolution of low-mass stars. Ultraviolet lines for the dominant ions for each element will be used to extract interstellar abundances. The survey involves about forty sight lines, many of which are associated with regions of massive star formation shaped by core-collapse supernovae (SNe II). The sample will increase the number of published determinations by factors of 2 to 5. HST spectra are the only means for determining the elemental abundances for this set of species in diffuse interstellar clouds. The survey contains directions that are both molecule poor and molecule rich, thereby enabling us to examine the overall level of depletion onto grains as a function of gas density. Complementary laboratory determinations of oscillator strengths will place the interstellar measurements on an absolute scale. The results from the proposed study will be combined with published interstellar abundances for other neutron capture elements and the suite of measurements will be compared to results from stars throughout the history of the Galaxy.
The Dynamics of Massive Starless Cores with ALMA
NASA Astrophysics Data System (ADS)
Tan, Jonathan C.; Kong, Shuo; Butler, Michael J.; Caselli, Paola; Fontani, Francesco
2013-12-01
How do stars that are more massive than the Sun form, and thus how is the stellar initial mass function (IMF) established? Such intermediate- and high-mass stars may be born from relatively massive pre-stellar gas cores, which are more massive than the thermal Jeans mass. The turbulent core accretion model invokes such cores as being in approximate virial equilibrium and in approximate pressure equilibrium with their surrounding clump medium. Their internal pressure is provided by a combination of turbulence and magnetic fields. Alternatively, the competitive accretion model requires strongly sub-virial initial conditions that then lead to extensive fragmentation to the thermal Jeans scale, with intermediate- and high-mass stars later forming by competitive Bondi-Hoyle accretion. To test these models, we have identified four prime examples of massive (~100 M⊙) clumps from mid-infrared extinction mapping of infrared dark clouds. Fontani et al. found high deuteration fractions of N2H+ in these objects, which are consistent with them being starless. Here we present ALMA observations of these four clumps that probe the N2D+ (3-2) line at 2.3'' resolution. We find six N2D+ cores and determine their dynamical state. Their observed velocity dispersions and sizes are broadly consistent with the predictions of the turbulent core model of self-gravitating, magnetized (with Alfvén Mach number m_A ~ 1) and virialized cores that are bounded by the high pressures of their surrounding clumps. However, in the most massive cores, with masses up to ~60 M⊙, our results suggest that moderately enhanced magnetic fields (so that m_A ≈ 0.3) may be needed for the structures to be in virial and pressure equilibrium. Magnetically regulated core formation may thus be important in controlling the formation of massive cores, inhibiting their fragmentation, and thus helping to establish the stellar IMF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Brandon C.; Becker, Andrew C.; Sobolewska, Malgosia
2014-06-10
We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.
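The sum-of-Lorentzians statement follows from the PSD being a ratio of polynomials in frequency, which partial-fractions over the AR roots. A minimal sketch of evaluating such a PSD, assuming the common convention P(f) = σ²|Σ_j β_j (2πif)^j|² / |Σ_k α_k (2πif)^k|²; the coefficient ordering and normalization here are assumptions, not necessarily the paper's.

```python
import numpy as np

def carma_psd(freqs, alpha, beta, sigma=1.0):
    """PSD of a CARMA(p, q) process, with alpha = [alpha_0, ..., alpha_p]
    and beta = [beta_0, ..., beta_q] given in ascending powers of (2*pi*i*f).
    Partial-fraction expansion of this ratio over the AR roots is what
    yields the sum-of-Lorentzians form quoted in the abstract."""
    s = 2j * np.pi * np.asarray(freqs)
    num = np.abs(np.polyval(beta[::-1], s)) ** 2    # |MA polynomial|^2
    den = np.abs(np.polyval(alpha[::-1], s)) ** 2   # |AR polynomial|^2
    return sigma**2 * num / den

# Example: a CARMA(2,1) spectrum evaluated on a log-frequency grid
f = np.logspace(-3, 1, 200)
psd = carma_psd(f, alpha=[2.0, 1.5, 1.0], beta=[1.0, 0.5])
```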
Massive Scale Cyber Traffic Analysis: A Driver for Graph Database Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Choudhury, S.; Haglin, David J.
2013-06-19
We describe the significance and prominence of network traffic analysis (TA) as a graph- and network-theoretical domain for advancing research in graph database systems. TA involves observing and analyzing the connections between clients, servers, hosts, and actors within IP networks, both at particular times and as extended over time. Towards that end, NetFlow (or more generically, IPFLOW) data are available from routers and servers which summarize coherent groups of IP packets flowing through the network. IPFLOW databases are routinely interrogated statistically and visualized for suspicious patterns. But the ability to cast IPFLOW data as a massive graph and query it interactively, in order to e.g. identify connectivity patterns, is less well advanced, due to a number of factors including scaling, and their hybrid nature combining graph connectivity and quantitative attributes. In this paper, we outline requirements and opportunities for graph-structured IPFLOW analytics based on our experience with real IPFLOW databases. Specifically, we describe real use cases from the security domain, cast them as graph patterns, show how to express them in two graph-oriented query languages SPARQL and Datalog, and use these examples to motivate a new class of "hybrid" graph-relational systems.
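To make the "graph pattern" idea concrete outside SPARQL and Datalog, here is a hedged Python sketch of one security-style query, a fan-out (scanning) pattern; the flow schema, field names, and addresses are invented for illustration and are not the paper's data model.

```python
from collections import defaultdict

# Toy IPFLOW records: (src, dst, dst_port, bytes). Field names are
# illustrative assumptions, not the schema used in the paper.
flows = [
    ("10.0.0.5", "10.0.1.20", 445, 1200),
    ("10.0.0.5", "10.0.1.21", 445, 900),
    ("10.0.0.5", "10.0.1.22", 445, 1100),
    ("10.0.0.7", "10.0.1.20", 80, 50000),
]

def fan_out_pattern(flows, port, min_targets):
    """Graph-style query: sources contacting many distinct hosts on one
    port -- a simple scanning/worm pattern analogous to the connectivity
    patterns the paper casts as graph queries."""
    targets = defaultdict(set)
    for src, dst, dst_port, _ in flows:
        if dst_port == port:
            targets[src].add(dst)
    return {s: t for s, t in targets.items() if len(t) >= min_targets}

print(fan_out_pattern(flows, port=445, min_targets=3))
# -> {'10.0.0.5': {'10.0.1.20', '10.0.1.21', '10.0.1.22'}}
```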
ORBITAL STABILITY OF MULTI-PLANET SYSTEMS: BEHAVIOR AT HIGH MASSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrison, Sarah J.; Kratter, Kaitlin M., E-mail: morrison@lpl.arizona.edu, E-mail: kkratter@email.arizona.edu
2016-06-01
In the coming years, high-contrast imaging surveys are expected to reveal the characteristics of the population of wide-orbit, massive exoplanets. To date, a handful of wide planetary mass companions are known, but only one such multi-planet system has been discovered: HR 8799. For low mass planetary systems, multi-planet interactions play an important role in setting system architecture. In this paper, we explore the stability of these high mass, multi-planet systems. While empirical relationships exist that predict how system stability scales with planet spacing at low masses, we show that extrapolating to super-Jupiter masses can lead to up to an order of magnitude overestimate of stability for massive, tightly packed systems. We show that at both low and high planet masses, overlapping mean-motion resonances trigger chaotic orbital evolution, which leads to system instability. We attribute some of the difference in behavior as a function of mass to the increasing importance of second order resonances at high planet-star mass ratios. We use our tailored high mass planet results to estimate the maximum number of planets that might reside in double component debris disk systems, whose gaps may indicate the presence of massive bodies.
Multiple rings around Wolf-Rayet evolution
NASA Technical Reports Server (NTRS)
Marston, A. P.
1995-01-01
We present optical narrow-band imaging of multiple rings existing around galactic Wolf-Rayet (WR) stars. The existence of multiple rings of material around Wolf-Rayet stars clearly illustrates the various phases of evolution that massive stars go through. The objects presented here show evidence of a three stage evolution. O stars produce an outer ring with the cavity being partially filled by ejecta from a red supergiant or luminous blue variable phase. A wind from the Wolf-Rayet star then passes into the ejecta materials. A simple model is presented for this three stage evolution. Using observations of the size and dynamics of the rings allows estimates of time scales for each stage of the massive star evolution. These are consistent with recent theoretical evolutionary models. Mass estimates for the ejecta, from the model presented, are consistent with previous ring nebula mass estimates from IRAS data, showing a number of ring nebulae to have large masses, most of which must be in the form of neutral material. Finally, we illustrate how further observations will allow the determination of many of the parameters of the evolution of massive stars such as total mass loss, average mass loss rates, stellar abundances, and total time spent in each evolutionary phase.
2012-10-01
[Report fragment; only portions are recoverable.] Molecular dynamics simulations were performed using the open-source Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS, http://lammps.sandia.gov); the commercial force-field parameters are proprietary and cannot be ported to the LAMMPS simulation code. Abbreviations: IBI, iterative Boltzmann inversion; MAPS, Materials Processes and Simulations.
Taniguchi, Noboru; D'Lima, Darryl D; Suenaga, Naoki; Chosa, Etsuo
2018-02-01
Failure rates after rotator cuff repair remain high in patients with massive tears. Although superior translation of the humeral head has been used to assess the severity of rotator cuff tears, the relevance of anterior migration of the humeral head to clinical outcomes has not been established. The purpose of this study was to investigate the potential role of the T-scale, a measure of the anterolateral translation of the humeral head, as a prognostic factor for rotator cuff repair. One hundred twenty consecutive patients with full-thickness rotator cuff tears underwent primary rotator cuff repair. The T-scale and acromiohumeral interval (AHI) were measured preoperatively on axial computed tomography scans and radiographs, respectively. The correlations of the T-scale and AHI with previously published scores and active forward elevation (FE) were investigated. The outcome of rotator cuff repairs was compared between patients with positive and patients with negative preoperative T-scale values. The preoperative T-scale but not AHI correlated significantly with postoperative FE and clinical scores in patients with large to massive tears but not in those with small to medium tears. Postoperative FE and clinical scores were significantly higher in patients with positive T-scale values than in those with negative T-scale values. The relative risk of retear was 2.0 to 7.9 times greater in patients with negative T-scale values. Patients with large to massive tears and negative T-scale values had poorer clinical outcomes and higher retear rates. A negative T-scale value represents a useful prognostic factor for considering reverse shoulder arthroplasty in patients at greater risk of retear after rotator cuff repair. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Massive Symbolic Mathematical Computations and Their Applications
1988-08-16
[Report form residue; only portions are recoverable.] Quarterly DARPA R&D status report under AFOSR contract F49620-87-C-0113, "Massive Symbolic Mathematical Computations and Their Applications."
DGDFT: A massively parallel method for large scale density functional theory calculations.
Hu, Wei; Lin, Lin; Yang, Chao
2015-09-28
We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.
The EMCC / DARPA Massively Parallel Electromagnetic Scattering Project
NASA Technical Reports Server (NTRS)
Woo, Alex C.; Hill, Kueichien C.
1996-01-01
The Electromagnetic Code Consortium (EMCC) was sponsored by the Advanced Research Projects Agency (ARPA) to demonstrate the effectiveness of massively parallel computing in large scale radar signature predictions. The EMCC/ARPA project consisted of three parts.
NASA Technical Reports Server (NTRS)
Wright, E. L.; Meyer, S. S.; Bennett, C. L.; Boggess, N. W.; Cheng, E. S.; Hauser, M. G.; Kogut, A.; Lineweaver, C.; Mather, J. C.; Smoot, G. F.
1992-01-01
The large-scale cosmic background anisotropy detected by the COBE Differential Microwave Radiometer (DMR) instrument is compared to the sensitive previous measurements on various angular scales, and to the predictions of a wide variety of models of structure formation driven by gravitational instability. The observed anisotropy is consistent with all previously measured upper limits and with a number of dynamical models of structure formation. For example, the data agree with an unbiased cold dark matter (CDM) model with H0 = 50 km/s/Mpc and Delta-M/M = 1 in a 16 Mpc radius sphere. Other models, such as CDM plus massive neutrinos (hot dark matter (HDM)), or CDM with a nonzero cosmological constant, are also consistent with the COBE detection and can provide the extra power seen on 5-10,000 km/s scales.
Gravitational Collapse of Charged Matter in Einstein-DeSitter Universe
NASA Astrophysics Data System (ADS)
Avinash, K.; Krishnan, V.
1997-11-01
Gravitational collapse of charged matter in an expanding universe is studied. We consider a quasi-neutral electron-ion-massive-grain plasma in which all three species are expanding at the same rate, i.e., n_i ∝ 1/R^3, where n_i is the number density of the i-th species and R is the scale factor. In the Einstein-DeSitter universe the scale factor goes as R ~ t^(2/3). The electrons and ions follow Boltzmann's relation. The stability of this equilibrium is studied on the Jeans time scale. Depending on the ratio a = q_d^2/(G m_d^2), the growth of the gravitational collapse is further moderated from the t^(2/3) growth. For a = 1, the instability is completely quenched. In the curvature- and radiation-dominated universe, there is no additional effect due to the finite charge of the matter.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1998-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimization (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behavior by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1999-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
NASA Technical Reports Server (NTRS)
Banks, Daniel W.; Laflin, Brenda E. Gile; Kemmerly, Guy T.; Campbell, Bryan A.
1999-01-01
Formation of large-scale structure from cosmic strings and massive neutrinos
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.; Melott, Adrian L.; Bertschinger, Edmund
1989-01-01
Numerical simulations of large-scale structure formation from cosmic strings and massive neutrinos are described. The linear power spectrum in this model resembles the cold-dark-matter power spectrum. Galaxy formation begins early, and the final distribution consists of isolated density peaks embedded in a smooth background, leading to a natural bias in the distribution of luminous matter. The distribution of clustered matter has a filamentary appearance with large voids.
W49A: A Massive Molecular Cloud Forming a Massive Star Cluster in the Galactic Disk
NASA Astrophysics Data System (ADS)
Galvan-Madrid, Roberto; Liu, Hauyu Baobab; Pineda, Jaime E.; Zhang, Zhi-Yu; Ginsburg, Adam; Roman-Zuñiga, Carlos; Peters, Thomas
2015-08-01
I summarize our current results of the MUSCLE survey of W49A, the most luminous star formation region in the Milky Way. Our approach emphasizes multi-scale, multi-resolution imaging in dust, ionized, and molecular gas, to trace the multiple gas components from <0.1 pc (core scale) all the way up to the scale of the entire giant molecular cloud (GMC), ~100 pc. The 10^6 M⊙ GMC is structured in a radial network of filaments that converges toward the central 'hub' with ~2×10^5 M⊙, which contains within a few pc a deeply embedded young massive cluster (YMC) of stellar mass ~5×10^4 M⊙. We also discuss the dynamics of the filamentary network, the role of turbulence in the formation of this YMC, and how objects like W49A can link Milky Way and extragalactic star formation relations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Hongliang; Wang, Yi, E-mail: hjiangag@connect.ust.hk, E-mail: phyw@ust.hk
During inflation, massive fields can contribute to the power spectrum of curvature perturbations via a dimension-5 operator. This contribution can be considered as a bias for the program of using n_s and r to select inflation models. Even if the dimension-5 operator is suppressed by Λ = M_p, there is still a significant shift on the n_s-r diagram if the massive fields have m ∼ H. On the other hand, if the heavy degree of freedom appears only at the same energy scale as the suppression scale of the dimension-5 operator, then a significant shift on the n_s-r diagram takes place at m = Λ ∼ 70 H, which is around the inflationary time-translation symmetry-breaking scale. Hence, the systematics from massive fields pose a greater challenge for future high-precision experiments for inflationary model selection. This result can be thought of as the impact of UV sensitivity on inflationary observables.
NASA Astrophysics Data System (ADS)
Buaria, Dhawal; Yeung, P. K.; Sawford, B. L.
2016-11-01
An efficient massively parallel algorithm has allowed us to obtain the trajectories of 300 million fluid particles in an 8192^3 simulation of isotropic turbulence at Taylor-scale Reynolds number 1300. Conditional single-particle statistics are used to investigate the effect of extreme events in dissipation and enstrophy on turbulent dispersion. The statistics of pairs and tetrads, both forward and backward in time, are obtained via post-processing of single-particle trajectories. For tetrads, since memory of shape is known to be short, we focus, for convenience, on samples which are initially regular, with all sides of comparable length. The statistics of tetrad size show similar behavior as the two-particle relative dispersion, i.e., stronger backward dispersion at intermediate times with a larger backward Richardson constant. In contrast, the statistics of tetrad shape show more robust inertial range scaling, in both forward and backward frames. However, the distortion of shape is stronger for backward dispersion. Our results suggest that the Reynolds number reached in this work is sufficient to settle some long-standing questions concerning Lagrangian scale similarity. Supported by NSF Grants CBET-1235906 and ACI-1036170.
PoMiN: A Post-Minkowskian N-Body Solver
NASA Astrophysics Data System (ADS)
Feng, Justin; Baumann, Mark; Hall, Bryton; Doss, Joel; Spencer, Lucas; Matzner, Richard
2018-05-01
PoMiN is a lightweight N-body code based on the Post-Minkowskian N-body Hamiltonian of Ledvinka, Schafer, and Bicak, which includes General Relativistic effects up to first order in Newton's constant G, and all orders in the speed of light c. PoMiN is a single file written in C and uses a fourth-order Runge-Kutta integration scheme. PoMiN has also been written to handle an arbitrary number of particles (both massive and massless) with a computational complexity that scales as O(N^2).
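As a structural sketch of what each step of such a code involves, the example below shows the O(N²) pairwise force sums plus a fourth-order Runge-Kutta update, in the Newtonian limit only; PoMiN's actual Post-Minkowskian Hamiltonian terms are not reproduced here.

```python
import numpy as np

def accelerations(pos, masses, G=1.0):
    """Newtonian pairwise accelerations: the O(N^2) double loop that
    dominates the cost, as in PoMiN's pairwise Hamiltonian sums."""
    n = len(masses)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def rk4_step(pos, vel, masses, dt):
    """One fourth-order Runge-Kutta step for the coupled (pos, vel) system."""
    k1v = accelerations(pos, masses);                k1x = vel
    k2v = accelerations(pos + 0.5 * dt * k1x, masses); k2x = vel + 0.5 * dt * k1v
    k3v = accelerations(pos + 0.5 * dt * k2x, masses); k3x = vel + 0.5 * dt * k2v
    k4v = accelerations(pos + dt * k3x, masses);       k4x = vel + dt * k3v
    pos_new = pos + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
    vel_new = vel + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return pos_new, vel_new
```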
Designing Strategies for an Efficient Language MOOC
ERIC Educational Resources Information Center
Perifanou, Maria
2016-01-01
The advent of Massive Open Online Courses (MOOCs) has dramatically changed the way people learn a language. But how can we design an efficient language learning environment for a massive number of learners? Are there any good practices that showcase successful Massive Open Online Language Course (MOOLC) design strategies? According to recent…
Binary interaction dominates the evolution of massive stars.
Sana, H; de Mink, S E; de Koter, A; Langer, N; Evans, C J; Gieles, M; Gosset, E; Izzard, R G; Le Bouquin, J-B; Schneider, F R N
2012-07-27
The presence of a nearby companion alters the evolution of massive stars in binary systems, leading to phenomena such as stellar mergers, x-ray binaries, and gamma-ray bursts. Unambiguous constraints on the fraction of massive stars affected by binary interaction were lacking. We simultaneously measured all relevant binary characteristics in a sample of Galactic massive O stars and quantified the frequency and nature of binary interactions. More than 70% of all massive stars will exchange mass with a companion, leading to a binary merger in one-third of the cases. These numbers greatly exceed previous estimates and imply that binary interaction dominates the evolution of massive stars, with implications for populations of massive stars and their supernovae.
Parallel group independent component analysis for massive fMRI data sets.
Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S
2017-01-01
Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.
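For orientation, a common baseline that PGICA-style methods improve upon is group ICA by temporal concatenation. The sketch below shows that baseline only; it is not the paper's two-stage likelihood algorithm, and the toy data are random.

```python
import numpy as np
from sklearn.decomposition import FastICA

def group_ica_concat(subject_data, n_components):
    """Baseline group ICA via temporal concatenation (a common reference
    point, not PGICA itself): stack all subjects' (time x voxel) matrices
    along time, then run spatial ICA treating voxels as samples, so the
    components are spatial maps shared across subjects."""
    X = np.vstack(subject_data)                        # (total_time, n_voxels)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    spatial_maps = ica.fit_transform(X.T)              # (n_voxels, n_components)
    time_courses = ica.mixing_                         # (total_time, n_components)
    return spatial_maps, time_courses

# Two toy "subjects", each 100 timepoints x 500 voxels
rng = np.random.default_rng(0)
data = [rng.normal(size=(100, 500)) for _ in range(2)]
maps, tcs = group_ica_concat(data, n_components=5)
print(maps.shape, tcs.shape)   # (500, 5) (200, 5)
```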
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-07-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
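The "one number per parameter" step can be illustrated with score/MOPED-style compression under a Gaussian-likelihood assumption; that assumption, and all names below, belong to this sketch, the abstract describing its compression only as massive and asymptotically optimal.

```python
import numpy as np

def score_compress(data, mu, dmu, cov):
    """Compress a data vector to one summary per parameter:
    t = dmu @ C^{-1} @ (d - mu), the score at a fiducial model, which is
    asymptotically optimal for a Gaussian likelihood with fixed covariance
    (an assumption of this sketch). dmu has shape (n_params, n_data)."""
    cinv = np.linalg.inv(cov)
    return dmu @ cinv @ (data - mu)

# Toy usage: a 1000-point data vector compressed to 2 summaries
rng = np.random.default_rng(0)
n = 1000
mu = np.zeros(n)                       # fiducial model prediction
dmu = rng.normal(size=(2, n))          # model derivatives w.r.t. 2 parameters
cov = np.eye(n)
data = mu + rng.normal(size=n)
print(score_compress(data, mu, dmu, cov).shape)   # -> (2,)
```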
ERIC Educational Resources Information Center
Clarà, Marc; Kelly, Nick; Mauri, Teresa; Danaher, P. A.
2017-01-01
This paper explores the possibility that virtual communities of teachers with large numbers of members (referred to as "massive communities of teachers") can offer support to novice teachers by means of collaborative reflection. The paper examines and conceptualises some problems found in professional massive communities and proposes…
Collaborative Calibrated Peer Assessment in Massive Open Online Courses
ERIC Educational Resources Information Center
Boudria, Asma; Lafifi, Yacine; Bordjiba, Yamina
2018-01-01
The free nature and open access courses in the Massive Open Online Courses (MOOC) allow the facilities of disseminating information for a large number of participants. However, the "massive" propriety can generate many pedagogical problems, such as the assessment of learners, which is considered as the major difficulty facing in the…
The factorization of large composite numbers on the MPP
NASA Technical Reports Server (NTRS)
Mckurdy, Kathy J.; Wunderlich, Marvin C.
1987-01-01
The continued fraction method for factoring large integers (CFRAC) was an ideal algorithm to implement on a massively parallel computer such as the Massively Parallel Processor (MPP). After much effort, the first 60-digit number was factored on the MPP using about 6 1/2 hours of array time. Although this result added about 10 digits to the size of number that could be factored using CFRAC on a serial machine, it was already badly beaten by the implementation of Davis and Holdridge on the CRAY-1 using the quadratic sieve, an algorithm which is clearly superior to CFRAC for large numbers. An algorithm ideally suited to the single instruction multiple data (SIMD) massively parallel architecture is illustrated, and some of the modifications needed to make the parallel implementation effective and efficient are described.
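For context on the arithmetic CFRAC parallelizes: the convergents p_i of the continued-fraction expansion of sqrt(N) satisfy p_{i-1}^2 ≡ (-1)^i Q_i (mod N) with small Q_i. The following minimal, serial Python sketch implements those recurrences, testing only the special case where an even-indexed Q_i is a perfect square; the full method instead collects many smooth Q_i and combines them by linear algebra over GF(2).

```python
import math

def cfrac_factor(N, max_iter=100000):
    """Toy CFRAC-style factor search: walk the continued-fraction
    expansion of sqrt(N); when an even-indexed Q_i is a perfect square,
    p^2 ≡ r^2 (mod N) yields a gcd factor candidate."""
    a0 = math.isqrt(N)
    if a0 * a0 == N:
        return a0
    m, d, a = 0, 1, a0           # continued-fraction state
    p_prev, p = 1, a0 % N        # convergent numerators mod N
    for i in range(1, max_iter):
        m = d * a - m
        d = (N - m * m) // d     # d is Q_i (division is exact)
        a = (a0 + m) // d
        if i % 2 == 0:           # identity: p_{i-1}^2 ≡ (-1)^i Q_i (mod N)
            r = math.isqrt(d)
            if r * r == d:
                g = math.gcd(p - r, N)
                if 1 < g < N:
                    return g
        p_prev, p = p, (a * p + p_prev) % N
    return None                  # no factor found within the budget
```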
Probing Massive Black Hole Populations and Their Environments with LISA
NASA Astrophysics Data System (ADS)
Katz, Michael; Larson, Shane
2018-01-01
With the adoption of the LISA Mission Proposal by the European Space Agency in response to its call for L3 mission concepts, gravitational wave measurements from space are on the horizon. With data from the Illustris large-scale cosmological simulation, we provide analysis of LISA detection rates accompanied by characterization of the merging Massive Black Holes (MBH) and their host galaxies. MBHs of total mass $\sim10^6-10^9 M_\odot$ are the main focus of this study. Using a precise treatment of the dynamical friction evolutionary process prior to gravitational wave emission, we evolve MBH simulation particle mergers from $\sim$kpc scales until coalescence to achieve a merger distribution. Using the statistical basis of the Illustris output, we Monte Carlo synthesize many realizations of the merging massive black hole population across space and time. We use those realizations to build mock LISA detection catalogs to understand the impact of LISA mission configurations on our ability to probe massive black hole merger populations and their environments throughout the visible Universe.
Emergence, evolution and scaling of online social networks.
Wang, Le-Zhi; Huang, Zi-Gang; Rong, Zhi-Hai; Wang, Xiao-Fan; Lai, Ying-Cheng
2014-01-01
Online social networks have become increasingly ubiquitous and understanding their structural, dynamical, and scaling properties not only is of fundamental interest but also has a broad range of applications. Such networks can be extremely dynamic, generated almost instantaneously by, for example, breaking-news items. We investigate a common class of online social networks, the user-user retweeting networks, by analyzing empirical data collected from Sina Weibo (a massive Twitter-like microblogging social network in China) with respect to the topic of the 2011 Japan earthquake. We uncover a number of algebraic scaling relations governing the growth and structure of the network and develop a probabilistic model that captures the basic dynamical features of the system. The model is capable of reproducing all the empirical results. Our analysis not only reveals the basic mechanisms underlying the dynamics of the retweeting networks, but also provides general insights into the control of information spreading on such networks.
NASA Astrophysics Data System (ADS)
Hati, Chandan; Patra, Sudhanwa; Pritimita, Prativa; Sarkar, Utpal
2018-03-01
In this review, we present several variants of left-right symmetric models in the context of neutrino masses and leptogenesis. In particular, we discuss various low-scale seesaw mechanisms like the linear seesaw, inverse seesaw, and extended seesaw, and their implications for lepton-number-violating processes such as neutrinoless double beta decay. We also visit an alternative framework of left-right models with the inclusion of vector-like fermions to analyze the aspects of universal seesaw. The symmetry breaking of the left-right symmetric model around the few-TeV scale predicts the existence of massive right-handed gauge bosons W_R and Z_R, which might be detected at the LHC in the near future. If such signals are detected at the LHC, this would have severe implications for leptogenesis, a mechanism to explain the observed baryon asymmetry of the Universe. We review the implications of TeV-scale left-right symmetry breaking for leptogenesis.
Efficient collective influence maximization in cascading processes with first-order transitions
Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.
2017-01-01
In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading processes. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches. PMID:28349988
Asymmetric author-topic model for knowledge discovering of big data in toxicogenomics.
Chung, Ming-Hua; Wang, Yuping; Tang, Hailin; Zou, Wen; Basinger, John; Xu, Xiaowei; Tong, Weida
2015-01-01
The advancement of high-throughput screening technologies facilitates the generation of massive amounts of biological data, a big data phenomenon in biomedical science. Yet researchers still rely heavily on keyword search and/or literature review to navigate the databases, and analyses are often done at rather small scale. As a result, the rich information of a database has not been fully utilized, particularly the information embedded in the interactions between data points, which is largely ignored and buried. For the past 10 years, probabilistic topic modeling has been recognized as an effective machine learning algorithm for annotating the hidden thematic structure of massive collections of documents. The analogy between a text corpus and large-scale genomic data enables the application of text mining tools, like probabilistic topic models, to explore hidden patterns in genomic data and, by extension, altered biological functions. In this paper, we developed a generalized probabilistic topic model to analyze a toxicogenomics dataset consisting of a large number of gene expression profiles from rat livers treated with drugs at multiple doses and time points. We discovered hidden patterns in gene expression associated with the effects of dose and time point of treatment. Finally, we illustrated the ability of our model to identify evidence supporting a potential reduction in animal use.
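The text-corpus analogy can be made concrete with an off-the-shelf topic model. Below is a minimal sketch using scikit-learn's LatentDirichletAllocation on a hypothetical treatments-by-genes count matrix; the authors' model is an asymmetric author-topic model, a generalization of the plain LDA shown here, and the random counts merely stand in for binned expression changes.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical discretized expression matrix: rows are drug/dose/time
# treatments ("documents"), columns are genes ("words"); entry (i, j)
# counts how strongly gene j is altered under treatment i.
rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(200, 500))

lda = LatentDirichletAllocation(n_components=10, random_state=0)
doc_topics = lda.fit_transform(counts)   # treatment -> topic loadings
topic_genes = lda.components_            # topic -> gene weights
```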
NASA Technical Reports Server (NTRS)
Kramer, Williams T. C.; Simon, Horst D.
1994-01-01
This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and direction in the rapidly expanding field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we utilize the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.
Demise of faint satellites around isolated early-type galaxies
NASA Astrophysics Data System (ADS)
Park, Changbom; Hwang, Ho Seong; Park, Hyunbae; Lee, Jong Chul
2018-02-01
The hierarchical galaxy formation scenario in the Cold Dark Matter cosmology with a non-vanishing cosmological constant Λ and geometrically flat space (ΛCDM) has been very successful in explaining the large-scale distribution of galaxies. However, there have been claims that ΛCDM over-predicts the number of satellite galaxies associated with massive galaxies compared with observations—the missing satellite galaxy problem [1-3]. Isolated groups of galaxies hosted by passively evolving massive early-type galaxies are ideal laboratories for identifying the missing physics in the current theory [4-11]. Here, we report—based on a deep spectroscopic survey—that isolated massive and passive early-type galaxies without any signs of recent wet mergers or accretion episodes have almost no satellite galaxies fainter than the r-band absolute magnitude of about Mr = -14. If only early-type satellites are used, the cutoff is at the somewhat brighter magnitude of about Mr = -15. Such a cutoff has not been found in other nearby satellite galaxy systems hosted by late-type galaxies or those with merger features. Various physical properties of satellites depend strongly on the host-centric distance. Our observations indicate that the satellite galaxy luminosity function is largely determined by the interaction of satellites with the environment provided by their host.
Studying Student Motivations in an Astronomy Massive Open Online Class
NASA Astrophysics Data System (ADS)
Wenger, Matthew; Impey, Chris David; Buxner, Sanlyn; Formanek, Martin
2017-01-01
Massive Open Online Courses (MOOCs) are large-scale, free classes open to anyone around the world and are part of an educational industry that includes a growing number of universities. Although they resemble formal classes, MOOCs are of interest to instructors and educational researchers because they are unique learning environments where various people--particularly adult learners--learn science. This research project examined learners in an astronomy MOOC in order to better understand the motivations of MOOC learners. Using a well-tested instrument that examines student motivations for learning, we compared the motivations of MOOC learners to previous results from undergraduate classrooms. Our results show that our MOOC learners scored high in intrinsic motivation, self-efficacy, and self-determination. They differed from learners in traditional formal educational environments by having lower grade- and career-related motivations. These results suggest that MOOC learners have characteristics of learners in so-called “free-choice” learning environments, similar to other life-long learners.
Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations
NASA Astrophysics Data System (ADS)
Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe; Gagliardi, Laura; de Jong, Wibe A.
2017-11-01
A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. The chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted up to date.
Improving Performance and Predictability of Storage Arrays
ERIC Educational Resources Information Center
Altiparmak, Nihat
2013-01-01
Massive amounts of data are generated every day through sensors, Internet transactions, social networks, video, and all other available digital sources. Many organizations store this data to enable breakthrough discoveries and innovation in science, engineering, medicine, and commerce. Such a massive scale of data poses new research problems called big…
A parsec-scale optical jet from a massive young star in the Large Magellanic Cloud
NASA Astrophysics Data System (ADS)
McLeod, Anna F.; Reiter, Megan; Kuiper, Rolf; Klaassen, Pamela D.; Evans, Christopher J.
2018-02-01
Highly collimated parsec-scale jets, which are generally linked to the presence of an accretion disk, are commonly observed in low-mass young stellar objects. In the past two decades, a few of these jets have been directly (or indirectly) observed from higher-mass (larger than eight solar masses) young stellar objects, adding to the growing evidence that disk-mediated accretion also occurs in high-mass stars, the formation mechanism of which is still poorly understood. Of the observed jets from massive young stars, none is in the optical regime (massive young stars are typically highly obscured by their natal material), and none is found outside of the Milky Way. Here we report observations of HH 1177, an optical ionized jet that originates from a massive young stellar object located in the Large Magellanic Cloud. The jet is highly collimated over its entire measured length of at least ten parsecs and has a bipolar geometry. The presence of a jet indicates ongoing, disk-mediated accretion and, together with the high degree of collimation, implies that this system is probably formed through a scaled-up version of the formation mechanism of low-mass stars. We conclude that the physics that governs jet launching and collimation is independent of stellar mass.
Building micro-soccer-balls with evaporating colloidal fakir drops
NASA Astrophysics Data System (ADS)
Gelderblom, Hanneke; Marín, Álvaro G.; Susarrey-Arce, Arturo; van Housselt, Arie; Lefferts, Leon; Gardeniers, Han; Lohse, Detlef; Snoeijer, Jacco H.
2013-11-01
Drop evaporation can be used to self-assemble particles into three-dimensional microstructures on a scale where direct manipulation is impossible. We present a unique method to create highly ordered colloidal microstructures in which we can control the number of particles and their packing fraction. To this end, we evaporate colloidal dispersion drops from a special type of superhydrophobic microstructured surface, on which the drop remains in the Cassie-Baxter state during the entire evaporative process. The remainder of the drop consists of a massive spherical cluster of microspheres, with diameters ranging from a few tens up to several hundreds of microns. We present scaling arguments to show how the final particle packing fraction of these balls depends on the drop evaporation dynamics, particle size, and number of particles in the system.
BINARY ASTROMETRIC MICROLENSING WITH GAIA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sajadian, Sedighe, E-mail: sajadian@ipm.ir; Department of Physics, Sharif University of Technology, P.O. Box 11155-9161, Tehran
2015-04-15
We investigate whether or not Gaia can specify the binary fractions of massive stellar populations in the Galactic disk through astrometric microlensing. Furthermore, we study whether or not some information about their mass distributions can be inferred via this method. In this regard, we simulate the binary astrometric microlensing events due to massive stellar populations according to the Gaia observing strategy by considering (i) stellar-mass black holes, (ii) neutron stars, (iii) white dwarfs, and (iv) main-sequence stars as microlenses. The Gaia efficiency for detecting the binary signatures in binary astrometric microlensing events is ∼10%–20%. By calculating the optical depth due to the mentioned stellar populations, the numbers of the binary astrometric microlensing events being observed with Gaia with detectable binary signatures, for a binary fraction of about 0.1, are estimated to be 6, 11, 77, and 1316, respectively. Consequently, Gaia can potentially specify the binary fractions of these massive stellar populations. However, the binary fraction of black holes measured with this method has a large uncertainty owing to the low number of estimated events. Knowing the binary fractions in massive stellar populations helps with studying gravitational waves. Moreover, we investigate the number of massive microlenses for which Gaia specifies masses through astrometric microlensing of single lenses toward the Galactic bulge. The resulting efficiencies of measuring the masses of the mentioned populations are 9.8%, 2.9%, 1.2%, and 0.8%, respectively. The numbers of their astrometric microlensing events being observed in the Gaia era in which the lens mass can be inferred with a relative error of less than 0.5 toward the Galactic bulge are estimated as 45, 34, 76, and 786, respectively. Hence, Gaia potentially gives us some information about the mass distribution of these massive stellar populations.
Astronomy for Astronomical Numbers: A Worldwide Massive Open Online Class
ERIC Educational Resources Information Center
Impey, Chris D.; Wenger, Matthew C.; Austin, Carmen L.
2015-01-01
Astronomy: State of the Art is a massive, open, online class (MOOC) offered through Udemy by an instructional team at the University of Arizona. With nearly 24,000 enrolled as of early 2015, it is the largest astronomy MOOC available. The astronomical numbers enrolled do not translate into a similar level of engagement. The content consists of 14…
First Responder Weapons of Mass Destruction Training Using Massively Multiplayer On-Line Gaming
2004-06-01
Richardson, Thomas J.; Naval Postgraduate School, Monterey, CA
Five Principles for MOOC Design: With a Case Study
ERIC Educational Resources Information Center
Drake, John R.; O'Hara, Margaret; Seeman, Elaine
2015-01-01
New web technologies have enabled online education to take on a massive scale, prompting many universities to create massively open online courses (MOOCs) that take advantage of these technologies in a seemingly effortless manner. Designing a MOOC, however, is anything but trivial. It involves developing content, learning activities, and…
Performance of the Heavy Flavor Tracker (HFT) detector in star experiment at RHIC
NASA Astrophysics Data System (ADS)
Alruwaili, Manal
As technology grows, the number of processors is becoming massive; current supercomputer processing will be available on desktops in the next decade. For mass-scale application software development on the massively parallel computing available on desktops, existing popular languages with large libraries have to be augmented with new constructs and paradigms that exploit massively parallel computing and distributed-memory models while retaining user-friendliness. Currently available object-oriented languages for massively parallel computing, such as Chapel, X10, and UPC++, exploit distributed computing, data-parallel computing, and thread-parallelism at the process level in the PGAS (Partitioned Global Address Space) memory model. However, they lack: 1) extensions for object distribution that exploit the PGAS model; 2) the flexibility to migrate or clone an object between places for load balancing; and 3) the programming paradigms that result from integrating data- and thread-level parallelism with object distribution. In the proposed thesis, I compare different PGAS-model languages; propose new constructs that extend C++ with object distribution, object cloning, and object migration; and integrate PGAS-based process constructs with these extensions on distributed objects. A new paradigm, MIDD (Multiple Invocation Distributed Data), is also presented, in which different instances of the same class are invoked and work concurrently on different elements of distributed data using remote method invocations. I present the new constructs, their grammar, and their behavior. The new constructs are explained using simple programs that utilize them.
Temperature structure and kinematics of the IRDC G035.39-00.33
NASA Astrophysics Data System (ADS)
Sokolov, Vlas; Wang, Ke; Pineda, Jaime E.; Caselli, Paola; Henshaw, Jonathan D.; Tan, Jonathan C.; Fontani, Francesco; Jiménez-Serra, Izaskun; Lim, Wanggi
2017-10-01
Aims: Infrared dark clouds represent the earliest stages of high-mass star formation. Detailed observations of their physical conditions on all physical scales are required to improve our understanding of their role in fueling star formation. Methods: We investigate the large-scale structure of the IRDC G035.39-00.33, probing the dense gas with the classical ammonia thermometer. This allows us to put reliable constraints on the temperature of the extended, pc-scale dense gas reservoir and to probe the magnitude of its non-thermal motions. Available far-infrared observations can be used in tandem with the observed ammonia emission to estimate the total gas mass contained in G035.39-00.33. Results: We identify a main velocity component as a prominent filament, manifested as an ammonia emission intensity ridge spanning more than 6 pc, consistent with the previous studies on the Northern part of the cloud. A number of additional line-of-sight components are found, and a large-scale linear velocity gradient of 0.2km s-1 pc-1 is found along the ridge of the IRDC. In contrast to the dust temperature map, an ammonia-derived kinetic temperature map, presented for the entirety of the cloud, reveals local temperature enhancements towards the massive protostellar cores. We show that without properly accounting for the line of sight contamination, the dust temperature is 2-3 K larger than the gas temperature measured with NH3. Conclusions: While both the large-scale kinematics and temperature structure are consistent with that of starless dark filaments, the kinetic gas temperature profile on smaller scales is suggestive of tracing the heating mechanism coincident with the locations of massive protostellar cores. The reduced spectral cubes (FITS format) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A133
The Resolved Stellar Populations in the LEGUS Galaxies
NASA Astrophysics Data System (ADS)
Sabbi, E.; Calzetti, D.; Ubeda, L.; Adamo, A.; Cignoni, M.; Thilker, D.; Aloisi, A.; Elmegreen, B. G.; Elmegreen, D. M.; Gouliermis, D. A.; Grebel, E. K.; Messa, M.; Smith, L. J.; Tosi, M.; Dolphin, A.; Andrews, J. E.; Ashworth, G.; Bright, S. N.; Brown, T. M.; Chandar, R.; Christian, C.; Clayton, G. C.; Cook, D. O.; Dale, D. A.; de Mink, S. E.; Dobbs, C.; Evans, A. S.; Fumagalli, M.; Gallagher, J. S., III; Grasha, K.; Herrero, A.; Hunter, D. A.; Johnson, K. E.; Kahre, L.; Kennicutt, R. C.; Kim, H.; Krumholz, M. R.; Lee, J. C.; Lennon, D.; Martin, C.; Nair, P.; Nota, A.; Östlin, G.; Pellerin, A.; Prieto, J.; Regan, M. W.; Ryon, J. E.; Sacchi, E.; Schaerer, D.; Schiminovich, D.; Shabani, F.; Van Dyk, S. D.; Walterbos, R.; Whitmore, B. C.; Wofford, A.
2018-03-01
The Legacy ExtraGalactic UV Survey (LEGUS) is a multiwavelength Cycle 21 Treasury program on the Hubble Space Telescope. It studied 50 nearby star-forming galaxies in 5 bands from the near-UV to the I-band, combining new Wide Field Camera 3 observations with archival Advanced Camera for Surveys data. LEGUS was designed to investigate how star formation occurs and develops on both small and large scales, and how it relates to the galactic environments. In this paper we present the photometric catalogs for all the apparently single stars identified in the 50 LEGUS galaxies. Photometric catalogs and mosaicked images for all filters are available for download. We present optical and near-UV color–magnitude diagrams for all the galaxies. For each galaxy we derived the distance from the tip of the red giant branch. We then used the NUV color–magnitude diagrams to identify stars more massive than 14 M⊙, and compared their number with the number of massive stars expected from the GALEX FUV luminosity. Our analysis shows that the fraction of massive stars forming in star clusters and stellar associations is about constant with the star formation rate. This lack of a relation suggests that the timescale for evaporation of unbound structures is comparable to or longer than 10 Myr. At low star formation rates this translates to an excess of mass in clustered environments as compared to model predictions of cluster evolution, suggesting that a significant fraction of stars form in unbound systems. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA Inc., under NASA contract NAS 5-26555.
Strong Gravitational Lensing as a Probe of Gravity, Dark-Matter and Super-Massive Black Holes
NASA Astrophysics Data System (ADS)
Koopmans, L.V.E.; Barnabe, M.; Bolton, A.; Bradac, M.; Ciotti, L.; Congdon, A.; Czoske, O.; Dye, S.; Dutton, A.; Elliasdottir, A.; Evans, E.; Fassnacht, C.D.; Jackson, N.; Keeton, C.; Lasio, J.; Moustakas, L.; Meneghetti, M.; Myers, S.; Nipoti, C.; Suyu, S.; van de Ven, G.; Vegetti, S.; Wucknitz, O.; Zhao, H.-S.
Whereas considerable effort has been afforded in understanding the properties of galaxies, a full physical picture, connecting their baryonic and dark-matter content, super-massive black holes, and (metric) theories of gravity, is still ill-defined. Strong gravitational lensing furnishes a powerful method to probe gravity in the central regions of galaxies. It can (1) provide a unique detection-channel of dark-matter substructure beyond the local galaxy group, (2) constrain dark-matter physics, complementary to direct-detection experiments, as well as metric theories of gravity, (3) probe central super-massive black holes, and (4) provide crucial insight into galaxy formation processes from the dark matter point of view, independently of the nature and state of dark matter. To seriously address the above questions, a considerable increase in the number of strong gravitational-lens systems is required. In the timeframe 2010-2020, a staged approach with radio (e.g. EVLA, e-MERLIN, LOFAR, SKA phase-I) and optical (e.g. LSST and JDEM) instruments can provide 10^(2-4) new lenses, and up to 10^(4-6) new lens systems from SKA/LSST/JDEM all-sky surveys around ~2020. Follow-up imaging of (radio) lenses is necessary with moderate ground/space-based optical-IR telescopes and with 30-50m telescopes for spectroscopy (e.g. TMT, GMT, ELT). To answer these fundamental questions through strong gravitational lensing, a strong investment in large radio and optical-IR facilities is therefore critical in the coming decade. In particular, only large-scale radio lens surveys (e.g. with SKA) provide the large numbers of high-resolution and high-fidelity images of lenses needed for SMBH and flux-ratio anomaly studies.
Population Explosions of Tiger Moth Lead to Lepidopterism Mimicking Infectious Fever Outbreaks
Wills, Pallara Janardhanan; Anjana, Mohan; Nitin, Mohan; Varun, Raghuveeran; Sachidanandan, Parayil; Jacob, Tharaniyil Mani; Lilly, Madhavan; Thampan, Raghava Varman; Karthikeya Varma, Koyikkal
2016-01-01
Lepidopterism is a disease caused by the urticating scales and toxic fluids of adult moths, butterflies, or their caterpillars. The resulting cutaneous eruptions and systemic problems progress to clinical complications sometimes leading to death. A high incidence of fever epidemics was associated with massive outbreaks of adult populations of the tiger moth Asota caricae during the monsoon in Kerala, India. A significant number of monsoon-related fevers characteristic of lepidopterism were erroneously treated as infectious fevers due to lookalike symptoms. To diagnose tiger moth lepidopterism, we conducted immunoblots for tiger-moth-specific IgE in fever patients’ sera. We selected a cohort of patients (n = 155) with hallmark symptoms of infectious fevers who nevertheless tested negative for infectious fevers. In these cases, total IgE was elevated, and 78.6% tested positive for tiger-moth-specific IgE allergens. Chemical characterization of caterpillar and adult moth fluids was performed by HPLC and GC-MS analysis, and structural identification of moth scales was performed by SEM analysis. The body fluids and chitinous scales were found to be highly toxic and inflammatory in nature. To replicate the disease in an experimental model, Wistar rats were exposed to live tiger moths in a dose-dependent manner, and we observed clinico-pathological complications similar to those reported during the fever epidemics. Further, to link larval abundance and fever epidemics, we conducted a cointegration test for the period 2009 to 2012, and the physical presence of the tiger moths was found to be cointegrated with the fever epidemics. In conclusion, our experiments demonstrate that inhalation of aerosols containing tiger moth fluids, scales, and hairs causes systemic reactions that can be fatal to humans. All this evidence points to tiger moth disease as a likely major cause of the massive and fatal fever epidemics observed in Kerala. PMID:27073878
Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi
2017-10-10
We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of arbitrary matrix powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well suited for large-scale calculations. The approach is particularly adapted to setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores, even for relatively small matrix sizes.
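The Chebyshev machinery underlying this kind of solver can be sketched compactly: map the matrix spectrum to [-1, 1], expand the target function in Chebyshev polynomials, and evaluate the expansion via the three-term recurrence using only sparse products. A minimal Python sketch under those assumptions follows; a production code such as CheSS additionally truncates small entries after each product to preserve sparsity, which this toy version omits.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh
from numpy.polynomial.chebyshev import chebinterpolate

def cheb_matrix_function(H, f, deg=60):
    """Approximate f(H) for a symmetric sparse matrix H using a
    degree-`deg` Chebyshev expansion; only sparse products are used."""
    n = H.shape[0]
    # estimate the spectral bounds, then map the spectrum to [-1, 1]
    emin = eigsh(H, k=1, which='SA', return_eigenvectors=False)[0]
    emax = eigsh(H, k=1, which='LA', return_eigenvectors=False)[0]
    half_width, center = (emax - emin) / 2.0, (emax + emin) / 2.0
    Hs = (H - center * sparse.identity(n)) / half_width
    # Chebyshev coefficients of f on the mapped interval
    c = chebinterpolate(lambda x: f(half_width * x + center), deg)
    T_prev = sparse.identity(n, format='csr')   # T_0(Hs)
    T_curr = sparse.csr_matrix(Hs)              # T_1(Hs)
    F = c[0] * T_prev + c[1] * T_curr
    for k in range(2, deg + 1):                 # three-term recurrence
        T_prev, T_curr = T_curr, 2 * Hs @ T_curr - T_prev
        F = F + c[k] * T_curr
    return F
```

For the density matrix, f would be a smoothed Fermi function, e.g. lambda E: 1.0 / (1.0 + np.exp((E - mu) / kT)) for an assumed chemical potential mu and smearing kT.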
Resurrecting hot dark matter - Large-scale structure from cosmic strings and massive neutrinos
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1988-01-01
These are the results of a numerical simulation of the formation of large-scale structure from cosmic-string loops in a universe dominated by massive neutrinos (hot dark matter). This model has several desirable features. The final matter distribution contains isolated density peaks embedded in a smooth background, producing a natural bias in the distribution of luminous matter. Because baryons can accrete onto the cosmic strings before the neutrinos, the galaxies will have baryon cores and dark neutrino halos. Galaxy formation in this model begins much earlier than in random-phase models. On large scales the distribution of clustered matter visually resembles the CfA survey, with large voids and filaments.
Analysis of crossover between local and massive separation on airfoils
NASA Technical Reports Server (NTRS)
Barnett, Mark
1987-01-01
The occurrence of massive separation on airfoils operating at high Reynolds number poses an important problem to the aerodynamicist. In the present study, the phenomenon of crossover, induced by airfoil thickness, between local separation and massive separation is investigated for low speed (incompressible), symmetric flow past realistic airfoil geometries. This problem is studied both for the infinite Reynolds number asymptotic limit using triple-deck theory and for finite Reynolds number using interacting boundary-layer theory. Numerical results are presented which illustrate how the flow evolves from local to massive separation as the airfoil thickness is increased. The results of the triple-deck and the interacting boundary-layer analyses are found to be in qualitative agreement for the NACA four digit series and an uncambered supercritical airfoil. The effect of turbulence on the evolution of the flow is also considered. Solutions are presented for turbulent flows past a NACA 0014 airfoil and a circular cylinder. For the latter case, the calculated surface pressure distribution is found to agree well with experimental data if the proper eddy pressure level is specified.
Comparing Learner Community Behavior in Multiple Presentations of a Massive Open Online Course
ERIC Educational Resources Information Center
Gallagher, Silvia Elena; Savage, Timothy
2015-01-01
Massive Online Open Courses (MOOCs) can create large scale communities of learners who collaborate, interact and discuss learning materials and activities. MOOCs are often delivered multiple times with similar content to different cohorts of learners. However, research into the differences of learner communication, behavior and expectation between…
Massive neutrinos and the pancake theory of galaxy formation
NASA Technical Reports Server (NTRS)
Schaeffer, R.; Silk, J.
1984-01-01
Three problems encountered by the pancake theory of galaxy formation in a massive neutrino-dominated universe are discussed. A nonlinear model for pancakes is shown to reconcile the data with the predicted coherence length and velocity field, and minimal predictions are given of the contribution from the large-scale matter distribution.
Relativistic N-body simulations with massive neutrinos
NASA Astrophysics Data System (ADS)
Adamek, Julian; Durrer, Ruth; Kunz, Martin
2017-11-01
Some of the dark matter in the Universe is made up of massive neutrinos. Their impact on the formation of large scale structure can be used to determine their absolute mass scale from cosmology, but to this end accurate numerical simulations have to be developed. Due to their relativistic nature, neutrinos pose additional challenges when one tries to include them in N-body simulations that are traditionally based on Newtonian physics. Here we present the first numerical study of massive neutrinos that uses a fully relativistic approach. Our N-body code, gevolution, is based on a weak-field formulation of general relativity that naturally provides a self-consistent framework for relativistic particle species. This allows us to model neutrinos from first principles, without invoking any ad-hoc recipes. Our simulation suite comprises some of the largest neutrino simulations performed to date. We study the effect of massive neutrinos on the nonlinear power spectra and the halo mass function, focusing on the interesting mass range between 0.06 eV and 0.3 eV and including a case for an inverted mass hierarchy.
ERIC Educational Resources Information Center
Buhl, Mie; Andreasen, Lars Birch; Pushpanadham, Karanam
2018-01-01
The proliferation and expansion of massive open online courses (MOOCs) prompts a need to revisit classical pedagogical questions. In what ways will MOOCs facilitate and promote new e-learning pedagogies? Is current learning design adequate for the "massiveness" and "openness" of MOOCs? This article discusses the ways in which…
Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise
NASA Astrophysics Data System (ADS)
Kocheemoolayil, Joseph; Lele, Sanjiva
2014-11-01
Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.
Metal enrichment of the intracluster medium: SN-driven galactic winds
NASA Astrophysics Data System (ADS)
Baumgartner, V.; Breitschwerdt, D.
2009-12-01
We investigate the role of supernova (SN)-driven galactic winds in the chemical enrichment of the intracluster medium (ICM). Such outflows on galactic scales have their origin in huge star-forming regions and expel metal-enriched material out of the galaxies into their surroundings, as observed, for example, in the nearby starburst galaxy NGC 253. As massive stars in OB associations explode sequentially, shock waves are driven into the interstellar medium (ISM) of a galaxy and merge, forming a superbubble (SB). These SBs expand in a direction perpendicular to the disk plane, following the density gradient of the ISM. We use the 2D analytical approximation by Kompaneets (1960) to model the expansion of SBs in an exponentially stratified ISM. This is modified in order to describe the sequence of SN explosions as a time-dependent process, taking into account the main-sequence lifetime of the SN progenitors and using an initial mass function to get the number of massive stars per mass interval. The evolution of the bubble in space and time is calculated analytically, from which the onset of Rayleigh-Taylor instabilities in the shell can be determined. In its further evolution, the shell will break up and high-metallicity gas will be ejected into the halo of the galaxy and even into the ICM. We derive the number of stars needed for blow-out depending on the scale height and density of the ambient medium, as well as the fraction of alpha and iron-peak elements contained in the hot gas. Finally, the amount of metals injected by Milky Way-type galaxies into the ICM is calculated, confirming the importance of this enrichment process.
A distance-limited sample of massive molecular outflows
NASA Astrophysics Data System (ADS)
Maud, L. T.; Moore, T. J. T.; Lumsden, S. L.; Mottram, J. C.; Urquhart, J. S.; Hoare, M. G.
2015-10-01
We have observed 99 mid-infrared-bright, massive young stellar objects and compact H II regions drawn from the Red MSX source survey in the J = 3-2 transition of 12CO and 13CO, using the James Clerk Maxwell Telescope. 89 targets are within 6 kpc of the Sun, covering a representative range of luminosities and core masses. These constitute a relatively unbiased sample of bipolar molecular outflows associated with massive star formation. Of these, 59, 17 and 13 sources (66, 19 and 15 per cent) are found to have outflows, show some evidence of outflow, and have no evidence of outflow, respectively. The time-dependent parameters of the high-velocity molecular flows are calculated using a spatially variable dynamic time-scale. The canonical correlations between the outflow parameters and source luminosity are recovered and shown to scale with those of low-mass sources. For coeval star formation, we find the scaling is consistent with all the protostars in an embedded cluster providing the outflow force, with massive stars up to ˜30 M⊙ generating outflows. Taken at face value, the results support the model of a scaled-up version of the accretion-related outflow-generation mechanism associated with discs and jets in low-mass objects with time-averaged accretion rates of ˜10-3 M⊙ yr-1 on to the cores. However, we also suggest an alternative model, in which the molecular outflow dynamics are dominated by the entrained mass and are unrelated to the details of the acceleration mechanism. We find no evidence that outflows contribute significantly to the turbulent kinetic energy of the surrounding dense cores.
NASA Astrophysics Data System (ADS)
De Lucia, Gabriella; Fontanot, Fabio; Hirschmann, Michaela
2017-03-01
We take advantage of our recently published model for GAlaxy Evolution and Assembly (GAEA) to study the origin of the observed correlation between [α/Fe] and galaxy stellar mass. In particular, we analyse the role of radio-mode active galactic nuclei (AGN) feedback, which recent work has identified as a crucial ingredient to reproduce observations. In GAEA, this process introduces the observed trend of star formation histories extending over shorter time-scales for more massive galaxies, but does not provide a sufficient condition to reproduce the observed α enhancements of massive galaxies. In the framework of our model, this is possible only by assuming that any residual star formation is truncated for galaxies more massive than 10^10.5 M⊙. This results, however, in even shorter star formation time-scales for the most massive galaxies, which translate into total stellar metallicities significantly lower than observed. Our results demonstrate that (I) trends of [α/Fe] ratios cannot be simply converted into relative time-scale indicators; and (II) AGN feedback alone cannot explain the positive correlation between [α/Fe] and galaxy mass/velocity dispersion. Simultaneously reproducing the mass-metallicity relation and the observed α enhancements poses a challenge for hierarchical models, unless more exotic solutions are adopted, such as metal-rich winds or a variable initial mass function.
NASA Astrophysics Data System (ADS)
Lomax, Jamie R.; Peters, Matthew; Wisniewski, John; Dalcanton, Julianne; Williams, Benjamin; Lutz, Julie; Choi, Yumi; Sigut, Aaron
2017-11-01
Massive stars are intrinsically rare and therefore present a challenge to understand from a statistical perspective, especially within the Milky Way. We recently conducted follow-up observations to the Panchromatic Hubble Andromeda Treasury (PHAT) survey that were designed to detect more than 10,000 emission line stars, including WRs, by targeting regions in M31 previously known to host large numbers of young, massive clusters and very young stellar populations. Because of the existing PHAT data, we are able to derive an effective temperature, bolometric luminosity, and extinction for each of our detected stars. We report preliminary results on the massive star population in our dataset and discuss how our results compare to previous studies of massive stars in M31.
NASA Astrophysics Data System (ADS)
Huang, Liang; Ni, Xuan; Ditto, William L.; Spano, Mark; Carney, Paul R.; Lai, Ying-Cheng
2017-01-01
We develop a framework to uncover and analyse dynamical anomalies from massive, nonlinear and non-stationary time series data. The framework consists of three steps: preprocessing of massive datasets to eliminate erroneous data segments, application of the empirical mode decomposition and Hilbert transform paradigm to obtain the fundamental components embedded in the time series at distinct time scales, and statistical/scaling analysis of the components. As a case study, we apply our framework to detecting and characterizing high-frequency oscillations (HFOs) from a big database of rat electroencephalogram recordings. We find a striking phenomenon: HFOs exhibit on-off intermittency that can be quantified by algebraic scaling laws. Our framework can be generalized to big data-related problems in other fields such as large-scale sensor data and seismic data analysis.
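The second step of the framework (empirical mode decomposition followed by a Hilbert transform) is straightforward to prototype. Here is a sketch assuming the third-party PyEMD package (PyPI name EMD-signal) and SciPy; the HFO-specific preprocessing of step one and the scaling analysis of step three are not shown.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # assumed third-party package (pip: EMD-signal)

def hilbert_huang(x, fs):
    """EMD + Hilbert step: split x into intrinsic mode functions (IMFs),
    then return each IMF's instantaneous amplitude and frequency."""
    imfs = EMD().emd(x)
    out = []
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)
        phase = np.unwrap(np.angle(analytic))
        freq = np.diff(phase) * fs / (2.0 * np.pi)   # Hz, length len(x)-1
        out.append((amp, freq))
    return out

# toy usage: a fast ripple riding on a slow oscillation
fs = 2000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = 0.2 * np.sin(2 * np.pi * 250 * t) + np.sin(2 * np.pi * 4 * t)
components = hilbert_huang(x, fs)
```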
ALMA REVEALS POTENTIAL LOCALIZED DUST ENRICHMENT FROM MASSIVE STAR CLUSTERS IN II Zw 40
DOE Office of Scientific and Technical Information (OSTI.GOV)
Consiglio, S. Michelle; Turner, Jean L.; Beck, Sara
2016-12-10
We present subarcsecond images of submillimeter CO and continuum emission from a local galaxy forming massive star clusters: the blue compact dwarf galaxy II Zw 40. At ∼0.″4 resolution (20 pc), the CO(3-2), CO(1-0), 3 mm, and 870 μm continuum maps illustrate star formation on the scales of individual molecular clouds. Dust contributes about one-third of the 870 μm continuum emission, with free–free emission accounting for the rest. On these scales, there is not a good correspondence between gas, dust, and free–free emission. Dust continuum is enhanced toward the star-forming region as compared to the CO emission. We suggest that an unexpectedly low and spatially variable gas-to-dust ratio is the result of rapid and localized dust enrichment of clouds by the massive clusters of the starburst.
NASA Astrophysics Data System (ADS)
Shibata, Masaru; Kiuchi, Kenta
2017-06-01
Employing a simplified version of the Israel-Stewart formalism of general-relativistic shear-viscous hydrodynamics, we explore the evolution of a remnant massive neutron star of a binary neutron star merger and pay special attention to the resulting gravitational waveforms. We find that for plausible values of the so-called viscous alpha parameter of order 10^-2, the degree of differential rotation in the remnant massive neutron star is significantly reduced on the viscous time scale, ≲5 ms. Associated with this, the degree of nonaxisymmetric deformation is also reduced quickly, and as a consequence, the amplitude of the quasiperiodic gravitational waves emitted also decays on the viscous time scale. Our results indicate that for modeling the evolution of the merger remnants of binary neutron stars we would have to take into account magnetohydrodynamic effects, which in nature could provide the viscosity.
Hyper-scaling relations in the conformal window from dynamic AdS/QCD
NASA Astrophysics Data System (ADS)
Evans, Nick; Scott, Marc
2014-09-01
Dynamic AdS/QCD is a holographic model of strongly coupled gauge theories with the dynamics included through the running anomalous dimension of the quark bilinear, γ. We apply it to describe the physics of massive quarks in the conformal window of SU(Nc) gauge theories with Nf fundamental flavors, assuming the perturbative two-loop running for γ. We show that to find regular, holographic renormalization group flows in the infrared, the decoupling of the quark flavors at the scale of the mass is important, and enact it through suitable boundary conditions when the flavors become on shell. We can then compute the quark condensate and the mesonic spectrum (Mρ,Mπ,Mσ) and decay constants. We compute their scaling dependence on the quark mass for a number of examples. The model matches perturbative expectations for large quark mass and naïve dimensional analysis (including the anomalous dimensions) for small quark mass. The model allows study of the intermediate regime where there is an additional scale from the running of the coupling, and we present results for the deviation of scalings from assuming only the single scale of the mass.
Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance, these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
The tidal disruption of a star by a massive black hole
NASA Technical Reports Server (NTRS)
Evans, Charles R.; Kochanek, Christopher S.
1989-01-01
Results are reported from a three-dimensional numerical calculation of the tidal disruption of a low-mass main-sequence star on a parabolic orbit around a massive black hole (Mh = 10^6 stellar masses). The post-disruption evolution is followed until hydrodynamic forces become negligible and the liberated gas becomes ballistic. Also given is the rate at which bound mass returns to pericenter after orbiting the hole once. The processes that determine the time scale to circularize the debris orbits and allow an accretion torus to form are discussed. This time scale and the time scales for radiative cooling and accretion inflow determine the onset and duration of the subsequent flare in the AGN luminosity.
Spectral X-Ray Diffraction using a 6 Megapixel Photon Counting Array Detector.
Muir, Ryan D; Pogranichniy, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J
2015-03-12
Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to separate dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.
NASA Astrophysics Data System (ADS)
Xu, Kui; Sun, Xiaoli; Zhang, Dongmei
2016-10-01
This paper investigates the spectral and energy efficiencies of a multi-pair two-way amplify-and-forward (AF) relay system over Ricean fading channels, in which multiple user pairs exchange information within each pair through a relay equipped with a very large number of antennas, while each user has a single antenna. First, the beamforming matrices for zero-forcing reception/zero-forcing transmission (ZFR/ZFT) with imperfect channel state information (CSI) at the relay are given. Then, unified asymptotic signal-to-interference-plus-noise ratio (SINR) expressions under imperfect CSI are obtained analytically. Finally, two power scaling schemes are proposed, and the asymptotic spectral and energy efficiencies under these schemes are derived and verified by Monte-Carlo simulations. Theoretical analyses and simulation results show that with imperfect CSI, as the number of relay antennas grows asymptotically large, the transmit power of each user and of the relay must be cut down in different proportions when the Ricean K-factor is non-zero versus zero (Rayleigh fading) in order to maintain a desirable rate.
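The flavor of such power-scaling results can be illustrated with the simplest possible analogue: a single-user uplink with maximum-ratio combining and perfect CSI, where cutting the transmit power as 1/M keeps the rate asymptotically constant. A toy Monte-Carlo sketch (all values illustrative; this is not the paper's two-way relay model with imperfect CSI):

```python
import numpy as np

rng = np.random.default_rng(0)


def avg_uplink_rate(M, p, trials=2000):
    """Average rate of a single-antenna user served by an M-antenna
    receiver with maximum-ratio combining over Rayleigh fading."""
    h = (rng.standard_normal((trials, M))
         + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    snr = p * np.sum(np.abs(h) ** 2, axis=1)  # MRC output SNR, unit noise power
    return np.mean(np.log2(1 + snr))


E = 10.0  # fixed "power budget" in the scaling law
for M in [4, 16, 64, 256]:
    # Cut the transmit power as 1/M; the achieved rate approaches
    # log2(1 + E), i.e. it stays asymptotically constant.
    print(M, round(avg_uplink_rate(M, E / M), 3))
```

With imperfect CSI or Ricean fading, as in the paper, the admissible scaling exponent changes, which is exactly the distinction the authors analyze.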
Final Project Report. Scalable fault tolerance runtime technology for petascale computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamoorthy, Sriram; Sadayappan, P
With the massive number of components comprising the forthcoming petascale computer systems, hardware failures will be routinely encountered during execution of large-scale applications. Due to the multidisciplinary, multiresolution, and multiscale nature of the scientific problems that drive the demand for high-end systems, applications place increasingly differing demands on the system resources: disk, network, memory, and CPU. In addition to MPI, future applications are expected to use advanced programming models such as those developed under the DARPA HPCS program, as well as existing global address space programming models such as Global Arrays, UPC, and Co-Array Fortran. While there has been a considerable amount of work in fault-tolerant MPI, with a number of strategies and extensions for fault tolerance proposed, virtually none of the advanced models proposed for emerging petascale systems is currently fault aware. To achieve fault tolerance, underlying runtime and OS technologies able to scale to the petascale level must be developed. This project has evaluated a range of runtime techniques for fault tolerance for advanced programming models.
Siretskiy, Alexey; Sundqvist, Tore; Voznesenskiy, Mikhail; Spjuth, Ola
2015-01-01
New high-throughput technologies, such as massively parallel sequencing, have transformed the life sciences into a data-intensive field. The most common e-infrastructure for analyzing this data consists of batch systems that are based on high-performance computing resources; however, the bioinformatics software that is built on this platform does not scale well in the general case. Recently, the Hadoop platform has emerged as an interesting option to address the challenges of increasingly large datasets with distributed storage, distributed processing, built-in data locality, fault tolerance, and an appealing programming methodology. In this work we introduce metrics and report on a quantitative comparison between Hadoop and a single node of conventional high-performance computing resources for the tasks of short read mapping and variant calling. We calculate efficiency as a function of data size and observe that the Hadoop platform is more efficient for biologically relevant data sizes in terms of computing hours for both split and un-split data files. We also quantify the advantages of the data locality provided by Hadoop for NGS problems, and show that a classical architecture with network-attached storage will not scale when computing resources increase in number. Measurements were performed using ten datasets of different sizes, up to 100 gigabases, using the pipeline implemented in Crossbow. To make a fair comparison, we implemented an improved preprocessor for Hadoop with better performance for splittable data files. For improved usability, we implemented a graphical user interface for Crossbow in a private cloud environment using the CloudGene platform. All of the code and data in this study are freely available as open source in public repositories. From our experiments we conclude that the improved Hadoop pipeline scales better than the same pipeline on high-performance computing resources, and that Hadoop is an economically viable option for the common data sizes currently used in massively parallel sequencing. Given that datasets are expected to grow over time, Hadoop is a framework that we envision will have an increasingly important role in future biological data analysis.
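As a sketch of the efficiency metric idea (computing hours per unit of data, tracked as a function of data size), with entirely hypothetical numbers standing in for the measured benchmarks:

```python
# Hypothetical benchmark records: (dataset size in gigabases, core-hours used).
hadoop_runs = [(1, 12.0), (10, 70.0), (100, 520.0)]
hpc_runs = [(1, 8.0), (10, 95.0), (100, 1400.0)]


def efficiency(runs):
    """Core-hours per gigabase for each dataset size (lower is better)."""
    return {size: hours / size for size, hours in runs}


print("hadoop:", efficiency(hadoop_runs))
print("hpc   :", efficiency(hpc_runs))
# A platform "scales better" if this ratio stays flat (or falls) as size grows.
```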
MassiveNuS: cosmological massive neutrino simulations
NASA Astrophysics Data System (ADS)
Liu, Jia; Bird, Simeon; Zorrilla Matilla, José Manuel; Hill, J. Colin; Haiman, Zoltán; Madhavacheril, Mathew S.; Petri, Andrea; Spergel, David N.
2018-03-01
The non-zero mass of neutrinos suppresses the growth of cosmic structure on small scales. Since the level of suppression depends on the sum of the masses of the three active neutrino species, the evolution of large-scale structure is a promising tool to constrain the total mass of neutrinos and possibly shed light on the mass hierarchy. In this work, we investigate these effects via a large suite of N-body simulations that include massive neutrinos using an analytic linear-response approximation: the Cosmological Massive Neutrino Simulations (MassiveNuS). The simulations include the effects of radiation on the background expansion, as well as the clustering of neutrinos in response to the nonlinear dark matter evolution. We allow three cosmological parameters to vary: the neutrino mass sum Mν in the range of 0–0.6 eV, the total matter density Ωm, and the primordial power spectrum amplitude As. The rms density fluctuation in spheres of 8 comoving Mpc/h (σ8) is a derived parameter as a result. Our data products include N-body snapshots, halo catalogues, merger trees, ray-traced galaxy lensing convergence maps for four source redshift planes between zs=1–2.5, and ray-traced cosmic microwave background lensing convergence maps. We describe the simulation procedures and code validation in this paper. The data are publicly available at http://columbialensing.org.
The Parallel System for Integrating Impact Models and Sectors (pSIMS)
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian
2014-01-01
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
NASA Astrophysics Data System (ADS)
Aoki, Katsuki; Maeda, Kei-ichi; Misonoh, Yosuke; Okawa, Hirotada
2018-02-01
We find vacuum solutions such that massive gravitons are confined in a local spacetime region by their gravitational energy in asymptotically flat spacetimes in the context of the bigravity theory. We call such self-gravitating objects massive graviton geons. In the Newtonian limit, the basic equations can be reduced to the Schrödinger-Poisson equations with a tensor "wave function". In this system we obtain a nonspherically symmetric solution with j = 2, ℓ = 0, as well as a spherically symmetric solution with j = 0, ℓ = 2, where j is the total angular momentum quantum number and ℓ is the orbital angular momentum quantum number. The energy eigenvalue of the Schrödinger equation in the nonspherical solution is smaller than that in the spherical solution. We then study the perturbative stability of the spherical solution and find that there is an unstable mode in the quadrupole perturbations, which may be interpreted as the transition mode to the nonspherical solution. The results suggest that the nonspherically symmetric solution is the ground state of the massive graviton geon. Massive graviton geons may decay in time through the emission of gravitational waves, but this timescale can be quite long when the massive gravitons are nonrelativistic, so the geons can be long-lived. We also discuss possible prospects for massive graviton geons: applications to the ultralight dark matter scenario, nonlinear (in)stability of the Minkowski spacetime, and a quantum transition of the spacetime.
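For reference, the scalar version of the Schrödinger-Poisson system reached in such Newtonian limits reads as follows; in the paper the wave function is promoted to a tensor, so this is only the schematic backbone, not the paper's exact equations:

```latex
% Self-gravitating wave function \psi of a particle of mass m:
i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\nabla^2\psi + m\,\Phi\,\psi,
\qquad
\nabla^2 \Phi = 4\pi G\, m\,|\psi|^2 .
```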
Ramler, Paul I; van den Akker, Thomas; Henriquez, Dacia D C A; Zwart, Joost J; van Roosmalen, Jos
2017-06-19
Postpartum hemorrhage remains the leading cause of maternal morbidity and mortality worldwide. Few population-based studies have examined the epidemiology of massive transfusion for postpartum hemorrhage. The aim of this study was to determine the incidence, management, and outcomes of women with postpartum hemorrhage who required massive transfusion in the Netherlands between 2004 and 2006. Data for all women from a gestational age of 20 weeks onwards who had postpartum hemorrhage requiring eight or more red blood cell concentrates were obtained from a nationwide population-based cohort study including all 98 hospitals with a maternity unit in the Netherlands. Three hundred twenty-seven women who had postpartum hemorrhage requiring massive transfusion were identified (massive transfusion rate 91 per 100,000 deliveries; 95% confidence interval: 81-101). The median blood loss was 4500 mL (interquartile range 3250-6000 mL) and the median number of red blood cell concentrates transfused was 11 units (interquartile range 9-16 units). Among women receiving massive transfusion, the most common cause of hemorrhage was uterine atony. Eighty-three women (25%) underwent hysterectomy, 227 (69%) were admitted to an intensive care unit, and three women died (case fatality rate 0.9%). The number of women in the Netherlands who had postpartum hemorrhage treated with massive transfusion was relatively high compared to other similar settings. Evidence-based uniform management guidelines are necessary.
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
Extracting Databases from Dark Data with DeepDive.
Zhang, Ce; Shin, Jaeho; Ré, Christopher; Cafarella, Michael; Niu, Feng
2016-01-01
DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data - scientific papers, Web classified ads, customer service notes, and so on - were instead in a relational database, it would give analysts a massive and valuable new set of "big data." DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that meets that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference.
GPU-accelerated Tersoff potentials for massively parallel Molecular Dynamics simulations
NASA Astrophysics Data System (ADS)
Nguyen, Trung Dac
2017-03-01
The Tersoff potential is one of the empirical many-body potentials that has been widely used in simulation studies at atomic scales. Unlike pair-wise potentials, the Tersoff potential involves three-body terms, which require much more arithmetic operations and data dependency. In this contribution, we have implemented the GPU-accelerated version of several variants of the Tersoff potential for LAMMPS, an open-source massively parallel Molecular Dynamics code. Compared to the existing MPI implementation in LAMMPS, the GPU implementation exhibits a better scalability and offers a speedup of 2.2X when run on 1000 compute nodes on the Titan supercomputer. On a single node, the speedup ranges from 2.0 to 8.0 times, depending on the number of atoms per GPU and hardware configurations. The most notable features of our GPU-accelerated version include its design for MPI/accelerator heterogeneous parallelism, its compatibility with other functionalities in LAMMPS, its ability to give deterministic results and to support both NVIDIA CUDA- and OpenCL-enabled accelerators. Our implementation is now part of the GPU package in LAMMPS and accessible for public use.
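For concreteness, a GPU-accelerated Tersoff run of this kind is typically launched by activating the GPU package styles. A minimal sketch using the LAMMPS Python interface, assuming a GPU-enabled LAMMPS build and the stock Si.tersoff parameter file; the silicon test system is illustrative, not taken from the paper:

```python
from lammps import lammps

# Ask LAMMPS to substitute GPU-accelerated styles (-sf gpu) and use one GPU.
lmp = lammps(cmdargs=["-sf", "gpu", "-pk", "gpu", "1"])

for cmd in [
    "units metal",
    "boundary p p p",
    "lattice diamond 5.431",          # silicon lattice constant (Angstrom)
    "region box block 0 8 0 8 0 8",
    "create_box 1 box",
    "create_atoms 1 box",
    "mass 1 28.0855",
    "pair_style tersoff",
    "pair_coeff * * Si.tersoff Si",   # assumes the distributed Si.tersoff file
    "velocity all create 300.0 12345",
    "fix 1 all nve",
    "timestep 0.001",
    "run 100",
]:
    lmp.command(cmd)
```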
Research on the Orbital Period of Massive Binaries
NASA Astrophysics Data System (ADS)
Zhao, E.; Qain, S.
2011-12-01
A massive binary is a binary whose spectral type is earlier than B5. Research on massive binaries plays an important role in understanding the mass and angular momentum transfer or loss between the components, and the evolution of binaries. Several massive binaries are observed and analyzed, including the O-type binary LY Aur, the B-type contact binary RZ Pyx, and the B-type semi-detached binary AI Cru. It is found that all of their periods show a long-term increase, which indicates that each system is undergoing a Case A slow mass transfer stage on the nuclear time-scale of the secondary. Moreover, the analyses show a cyclic change of the orbital period, which can be explained by the light-travel time effect of a third body.
Shocked and Scorched - Free-Floating Evaporating Gas Globules and Star Formation
NASA Astrophysics Data System (ADS)
Sahai, Raghvendra; Morris, Mark R.; Claussen, Mark J.
2014-07-01
Massive stars have a strong feedback effect on their environment, via their winds, UV radiation, and ultimately, supernova blast waves, all of which can alter the likelihood for the formation of stars in nearby clouds and limit the accretion process of nearby protostars. Free-floating Evaporating Gaseous Globules, or frEGGs, are a newly recognized class of stellar nurseries embedded within the giant HII regions found in massive star-formation regions (MSFRs). We recently discovered the prototype frEGG in the Cygnus MSFR with HST. Further investigation using the Spitzer and Herschel archives has revealed a much larger number (>50) in Cygnus and other MSFRs. Our molecular-line observations show dense clouds with total masses of cool molecular gas exceeding 0.5 to a few Msun associated with these objects, thereby disproving the initial hypothesis, based on their morphology, that they have an origin similar to the proplyds (cometary-shaped photoevaporating protoplanetary disks) found in Orion. We report the results of our molecular-line studies and detailed high-resolution optical (with HST) or near-IR (with AO at the Keck Observatory) imaging of a few frEGGs in Cygnus, Carina and the W5 MSFRs. The images show the presence of young stars with associated outflow cavities and/or jets in the heads of the tadpole-shaped frEGGs. These results support our hypothesis that frEGGs are density concentrations originating in giant molecular clouds that, when subjected to compression by the strong winds and ionization from massive stars in these MSFRs, become active star-forming cores. In summary, by virtue of their distinct, isolated morphologies, frEGGs offer us a clean probe of triggered star formation on small scales in the vicinity of massive stars.
Neutrino masses, scale-dependent growth, and redshift-space distortions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernández, Oscar F., E-mail: oscarh@physics.mcgill.ca
2017-06-01
Massive neutrinos leave a unique signature in the large scale clustering of matter. We investigate the wavenumber dependence of the growth factor arising from neutrino masses and use a Fisher analysis to determine the aspects of a galaxy survey needed to measure this scale dependence.
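A toy Fisher forecast of this type can be set up in a few lines; the fiducial spectrum, derivative models, and error bars below are placeholder assumptions, meant only to show the mechanics rather than reproduce the paper's analysis:

```python
import numpy as np

k = np.logspace(-2, 0, 40)                 # wavenumbers in h/Mpc (illustrative)
P_fid = 1e4 * k / (1 + (k / 0.1) ** 2.5)   # toy fiducial power spectrum P(k)
sigma_P = 0.05 * P_fid                     # assumed 5% measurement error per bin


# Toy parameters: neutrino mass sum M_nu (eV) and an overall amplitude A.
def dP_dMnu(k):
    # Mimics neutrino-induced suppression that switches on at small scales.
    return -0.08 * P_fid * (k > 0.05)


def dP_dA(k):
    return P_fid                           # amplitude rescales P(k) uniformly


derivs = [dP_dMnu(k), dP_dA(k)]
F = np.array([[np.sum(di * dj / sigma_P**2) for dj in derivs] for di in derivs])
cov = np.linalg.inv(F)
print("marginalized sigma(M_nu) ~ %.3f eV" % np.sqrt(cov[0, 0]))
```

The scale dependence is what breaks the degeneracy: without the small-scale-only derivative, the neutrino mass would be indistinguishable from a pure amplitude shift.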
Intellectual interchanges in the history of the massive online open-editing encyclopedia, Wikipedia
NASA Astrophysics Data System (ADS)
Yun, Jinhyuk; Lee, Sang Hoon; Jeong, Hawoong
2016-01-01
Wikipedia is a free Internet encyclopedia with an enormous amount of content. This encyclopedia is written by volunteers with various backgrounds in a collective fashion; anyone can access and edit most of the articles. This open-editing nature may invite the prejudice that Wikipedia is an unstable and unreliable source; yet many studies suggest that Wikipedia is even more accurate and self-consistent than traditional encyclopedias. Scholars have attempted to understand such extraordinary credibility, but have usually used the number of edits as the unit of time, without consideration of real time. In this work, we probe the formation of such collective intelligence through a systematic analysis using the entire history of 34 534 110 English Wikipedia articles, between 2001 and 2014. From this massive data set, we observe the universality of both timewise and lengthwise editing scales, which suggests that it is essential to consider the real-time dynamics. By considering real time, we find the existence of distinct growth patterns that are unobserved by utilizing the number of edits as the unit of time. To account for these results, we present a mechanistic model that adopts the article editing dynamics based on both editor-editor and editor-article interactions. The model successfully generates the key properties of real Wikipedia articles such as distinct types of articles for the editing patterns characterized by the interrelationship between the numbers of edits and editors, and the article size. In addition, the model indicates that infrequently referred articles tend to grow faster than frequently referred ones, and articles attracting a high motivation to edit counterintuitively reduce the number of participants. We suggest that this decay of participants eventually brings inequality among the editors, which will become more severe with time.
ERIC Educational Resources Information Center
Clarke, Thomas
2013-01-01
Purpose: The purpose of this paper is to analyse the rapid development of the massive open online courses (MOOCs) and the implications for business education, to critically examine the educational and business models of the MOOCs, to assess their present scale and scalability, and to explore the responses of the universities to this challenge.…
Maps and the Geospatial Revolution: Teaching a Massive Open Online Course (MOOC) in Geography
ERIC Educational Resources Information Center
Robinson, Anthony C.; Kerski, Joseph; Long, Erin C.; Luo, Heng; DiBiase, David; Lee, Angela
2015-01-01
The massive open online course (MOOC) is a new approach for teaching online. MOOCs stand apart from traditional online classes in that they support thousands of learners through content and assessment mechanisms that can scale. A reason for their size is that MOOCs are free for anyone to take. Here we describe the design, development, and teaching…
The Concept of Openness behind c- and x-MOOCs (Massive Open Online Courses)
ERIC Educational Resources Information Center
Rodriguez, Osvaldo
2013-01-01
The last five years have witnessed a hype about MOOCs (Massive Open Online Courses) presaging a revolution in higher education. Although all MOOCs have in common their scale and free access, they have already bifurcated in two very distinct types of courses when compared in terms of their underpinning theory, format and structure, known as c-MOOCs…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kehagias, Alex; Riotto, Antonio, E-mail: kehagias@central.ntua.gr, E-mail: Antonio.Riotto@unige.ch
Cosmological perturbations of massive higher-spin fields are generated during inflation, but they decay on scales larger than the Hubble radius as a consequence of the Higuchi bound. By introducing suitable couplings to the inflaton field, we show that one can obtain statistical correlators of massive higher-spin fields which remain constant or decay very slowly outside the Hubble radius. This opens up the possibility of new observational signatures from inflation.
On The Evidence For Large-Scale Galactic Conformity In The Local Universe
NASA Astrophysics Data System (ADS)
Sin, Larry P. T.; Lilly, Simon J.; Henriques, Bruno M. B.
2017-10-01
We re-examine the observational evidence for large-scale (4 Mpc) galactic conformity in the local Universe, as presented in Kauffmann et al. We show that a number of methodological features of their analysis act to produce a misleadingly high amplitude of the conformity signal. These include a weighting in favour of central galaxies in very high density regions, the likely misclassification of satellite galaxies as centrals in the same high-density regions and the use of medians to characterize bimodal distributions. We show that the large-scale conformity signal in Kauffmann et al. clearly originates from a very small number of central galaxies in the vicinity of just a few very massive clusters, whose effect is strongly amplified by the methodological issues that we have identified. Some of these 'centrals' are likely misclassified satellites, but some may be genuine centrals showing a real conformity effect. Regardless, this analysis suggests that conformity on 4 Mpc scales is best viewed as a relatively short-range effect (at the virial radius) associated with these very large neighbouring haloes, rather than a very long-range effect (at tens of virial radii) associated with the relatively low-mass haloes that host the nominal central galaxies in the analysis. A mock catalogue constructed from a recent semi-analytic model shows very similar conformity effects to the data when analysed in the same way, suggesting that there is no need to introduce new physical processes to explain galactic conformity on 4 Mpc scales.
Symmetry breaking in holographic theories with Lifshitz scaling
NASA Astrophysics Data System (ADS)
Argurio, Riccardo; Hartong, Jelle; Marzolla, Andrea; Naegels, Daniel
2018-02-01
We use holography to study Lifshitz-scaling theories with broken symmetries. In order to do this, we set up a bulk action with a complex scalar and a massless vector on a background which consists of a Lifshitz metric and a massive vector. We first study separately the complex scalar and the massless vector, finding a similar pattern in the two-point functions that we can compute analytically. By coupling the probe complex scalar to the background massive vector we can construct probe actions that are more general than the usual Klein-Gordon action. Some of these actions have Galilean boost symmetry. Finally, in the presence of a symmetry-breaking scalar profile in the bulk, we reproduce the expected Ward identities of a Lifshitz-scaling theory with a broken global continuous symmetry. In the spontaneous case, the latter imply the presence of a gapless mode, the Goldstone boson, whose dispersion relations are dictated by the Lifshitz scaling.
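For orientation, the Lifshitz background underlying such constructions is usually taken to be of the standard form below (conventions and normalizations vary; this is a sketch, not necessarily the paper's exact ansatz):

```latex
% Lifshitz metric with dynamical exponent z and curvature scale L:
ds^2 = L^2\left(-r^{2z}\,dt^2 + \frac{dr^2}{r^2} + r^2\,d\vec{x}^2\right),
```

which realizes the anisotropic scaling symmetry t → λ^z t, x → λx, r → r/λ; the Goldstone dispersion relations mentioned in the abstract inherit this anisotropy.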
Small-scale hero: Massive-star enrichment in the Hercules dwarf spheroidal
NASA Astrophysics Data System (ADS)
Koch, Andreas; Matteucci, Francesca; Feltzing, Sofia
2012-09-01
Dwarf spheroidal galaxies are often conjectured to be the sites of the first stars. The best current contenders for finding the chemical imprints of enrichment by those massive objects are the "ultrafaint dwarfs" (UFDs). Here we present evidence for remarkably low heavy-element abundances in the metal-poor Hercules UFD. Combined with other peculiar abundance patterns, this indicates that Hercules was likely influenced by only a few massive explosive events - thus bearing the traces of an early, localized chemical enrichment with very little contribution from other sources at later times.
Subspace Methods for Massive and Messy Data
2017-07-12
Final report for U.S. Army Research Office award W911NF-14-1-0634, University of Michigan - Ann Arbor. The views, opinions and/or findings contained in this report are those of the author(s).
Collisions in primordial star clusters. Formation pathway for intermediate mass black holes
NASA Astrophysics Data System (ADS)
Reinoso, B.; Schleicher, D. R. G.; Fellhauer, M.; Klessen, R. S.; Boekholt, T. C. N.
2018-06-01
Collisions were suggested to potentially play a role in the formation of massive stars in present day clusters, and have likely been relevant during the formation of massive stars and intermediate mass black holes within the first star clusters. In the early Universe, the first stellar clusters were particularly dense, as fragmentation typically only occurred at densities above 10⁹ cm⁻³, and the radii of the protostars were enhanced as a result of larger accretion rates, suggesting a potentially more relevant role of stellar collisions. We present here a detailed parameter study to assess how the number of collisions and the mass growth of the most massive object depend on the properties of the cluster. We also characterize the time evolution with three effective parameters: the time when most collisions occur, the duration of the collisions period, and the normalization required to obtain the total number of collisions. We apply our results to typical Population III (Pop. III) clusters of about 1000 M⊙, finding that a moderate enhancement of the mass of the most massive star by a factor of a few can be expected. For more massive Pop. III clusters as expected in the first atomic cooling halos, we expect a more significant enhancement by a factor of 15-32. We therefore conclude that collisions in massive Pop. III clusters were likely relevant to form the first intermediate mass black holes.
NASA Astrophysics Data System (ADS)
Avetissian, A. K.
2017-07-01
New cosmic scales, completely different from the Planck scales, have been disclosed in the framework of the so-called “Non-Inflationary Cosmology” (NIC) developed by the author during the last decade. The proposed ideas shed light on some hidden inaccuracies in the use of Planck's scales in modern cosmology, so the new scales have been named “NAIRI (New Alternative Ideas Regenerating Irregularities) Cosmic Scales” (NCS). The NCS are believed to be realistic owing to qualitative and quantitative correspondence with observational and experimental data. The basic concept of the NCS rests on two hypotheses, concerning the cosmological time evolution of Planck's constant and multi-photon processes. Together with the hypothesis that Bose statistics dominated in the early Universe, allowing a large-scale Bose condensate, these predictions have been developed into phenomena on which the foundations of an alternative theory of cosmology have been built. The “Cosmic Small (Local) Bang” (CSB) phenomenon predicted by the author has been investigated in a galaxy model, and as a consequence of the CSB the possibility of a Super-Strong Shock Wave (SSW) has been postulated. Based on the CSB and SSW phenomena, NIC provides a non-accretion mechanism for the generation of galaxies and the super-massive black holes in their cores, as well as for the creation of supernovae and massive stars (including super-massive stars exceeding 100 M⊙). The possibility of gravitational radiation (GR) by the central black hole of a galaxy, or even by the disk (or the whole galaxy), has also been investigated.
Zones, spots, and planetary-scale waves beating in brown dwarf atmospheres.
Apai, D; Karalidi, T; Marley, M S; Yang, H; Flateau, D; Metchev, S; Cowan, N B; Buenzli, E; Burgasser, A J; Radigan, J; Artigau, E; Lowrance, P
2017-08-18
Brown dwarfs are massive analogs of extrasolar giant planets and may host types of atmospheric circulation not seen in the solar system. We analyzed a long-term Spitzer Space Telescope infrared monitoring campaign of brown dwarfs to constrain cloud cover variations over a total of 192 rotations. The infrared brightness evolution is dominated by beat patterns caused by planetary-scale wave pairs and by a small number of bright spots. The beating waves have similar amplitudes but slightly different apparent periods because of differing velocities or directions. The power spectrum of intermediate-temperature brown dwarfs resembles that of Neptune, indicating the presence of zonal temperature and wind speed variations. Our findings explain three previously puzzling behaviors seen in brown dwarf brightness variations.
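The beat phenomenon invoked here is just the superposition of two waves with slightly different periods; a toy illustration (the periods and amplitudes are made-up numbers, not the measured brown dwarf values):

```python
import numpy as np

t = np.linspace(0.0, 200.0, 4001)   # time in hours (illustrative)
P1, P2 = 9.0, 9.5                   # two nearby wave periods (hours)
flux = np.sin(2 * np.pi * t / P1) + np.sin(2 * np.pi * t / P2)

# The combined signal's envelope repeats on the much longer beat period:
P_beat = 1.0 / abs(1.0 / P1 - 1.0 / P2)
print("beat period = %.1f hours" % P_beat)          # 171.0 hours here
print("max combined amplitude = %.2f" % np.abs(flux).max())
```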
Large-scale recording of neuronal ensembles.
Buzsáki, György
2004-05-01
How does the brain orchestrate perceptions, thoughts and actions from the spiking activity of its neurons? Early single-neuron recording research treated spike pattern variability as noise that needed to be averaged out to reveal the brain's representation of invariant input. Another view is that variability of spikes is centrally coordinated and that this brain-generated ensemble pattern in cortical structures is itself a potential source of cognition. Large-scale recordings from neuronal ensembles now offer the opportunity to test these competing theoretical frameworks. Currently, wire and micro-machined silicon electrode arrays can record from large numbers of neurons and monitor local neural circuits at work. Achieving the full potential of massively parallel neuronal recordings, however, will require further development of the neuron-electrode interface, automated and efficient spike-sorting algorithms for effective isolation and identification of single neurons, and new mathematical insights for the analysis of network properties.
Understanding the X-ray Flaring from Eta Carinae
NASA Technical Reports Server (NTRS)
Moffat, A.F.J.; Corcoran, Michael F.
2009-01-01
We quantify the rapid variations in X-ray brightness ("flares") from the extremely massive colliding-wind binary Eta Carinae seen by RXTE during the past three orbital cycles. The observed flares tend to be shorter in duration and more frequent as periastron is approached, although the largest ones tend to be roughly constant in strength at all phases. Plausible scenarios include (1) the largest of multi-scale stochastic wind clumps from the LBV component entering and compressing the hard X-ray emitting wind-wind collision (WWC) zone, (2) large-scale corotating interacting regions in the LBV wind sweeping across the WWC zone, or (3) instabilities intrinsic to the WWC zone. The first appears to be most consistent with the observations, requiring clumps that expand homologously as they propagate outward in the LBV wind and a turbulence-like power-law distribution of clumps, decreasing in number towards larger sizes, as seen in Wolf-Rayet winds.
Biomorphic architectures for autonomous Nanosat designs
NASA Technical Reports Server (NTRS)
Hasslacher, Brosl; Tilden, Mark W.
1995-01-01
Modern space tool design is the science of making a machine at once massively complex and extremely robust and dependable. We propose a novel nonlinear control technique that produces capable, self-organizing, micron-scale space machines at low cost and in large numbers by parallel silicon assembly. Experiments using biomorphic architectures (with ideal space attributes) have produced a wide spectrum of survival-oriented machines that are reliably domesticated for work applications in specific environments. In particular, several one-chip satellite prototypes show interesting control properties that can be turned into numerous application-specific machines for autonomous, disposable space tasks. We believe that the real power of these architectures lies in their potential to self-assemble into larger, robust, loosely coupled structures. Assembly takes place at hierarchical space scales, with different attendant properties, allowing for inexpensive solutions to many daunting work tasks. The nature of biomorphic control, design, engineering options, and applications is discussed.
Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.; ...
2016-09-18
This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domènech, Guillem; Hiramatsu, Takashi; Lin, Chunshan
We consider a cosmological model in which the tensor mode becomes massive during inflation, and study the Cosmic Microwave Background (CMB) temperature and polarization bispectra arising from the mixing between the scalar mode and the massive tensor mode during inflation. The model assumes the existence of a preferred spatial frame during inflation. Local Lorentz invariance is already broken in cosmology due to the existence of a preferred rest frame. The existence of a preferred spatial frame further breaks the remaining local SO(3) invariance and in particular gives rise to a mass for the tensor mode. At the linear perturbation level, we minimize our model so that the vector mode remains non-dynamical, while the scalar mode is the same as in single-field slow-roll inflation. At the non-linear perturbation level, this inflationary massive graviton phase leads to a sizeable scalar-scalar-tensor coupling, much greater than the scalar-scalar-scalar one, as opposed to the conventional case. This scalar-scalar-tensor interaction imprints a scale-dependent feature in the CMB temperature and polarization bispectra. Very intriguingly, we find a surprising similarity between the predicted scale dependence and the scale-dependent non-Gaussianities at low multipoles hinted at in the WMAP and Planck results.
NASA Astrophysics Data System (ADS)
Fillingham, Sean P.; Cooper, Michael C.; Wheeler, Coral; Garrison-Kimmel, Shea; Boylan-Kolchin, Michael; Bullock, James S.
2015-12-01
The vast majority of dwarf satellites orbiting the Milky Way and M31 are quenched, while comparable galaxies in the field are gas rich and star forming. Assuming that this dichotomy is driven by environmental quenching, we use the Exploring the Local Volume in Simulations (ELVIS) suite of N-body simulations to constrain the characteristic time-scale upon which satellites must quench following infall into the virial volumes of their hosts. The high satellite quenched fraction observed in the Local Group demands an extremely short quenching time-scale (~2 Gyr) for dwarf satellites in the mass range M⋆ ~ 10⁶-10⁸ M⊙. This quenching time-scale is significantly shorter than that required to explain the quenched fraction of more massive satellites (~8 Gyr), both in the Local Group and in more massive host haloes, suggesting a dramatic change in the dominant satellite quenching mechanism at M⋆ ≲ 10⁸ M⊙. Combining our work with the results of complementary analyses in the literature, we conclude that the suppression of star formation in massive satellites (M⋆ ~ 10⁸-10¹¹ M⊙) is broadly consistent with being driven by starvation, such that the satellite quenching time-scale corresponds to the cold gas depletion time. Below a critical stellar mass scale of ~10⁸ M⊙, however, the required quenching times are much shorter than the expected cold gas depletion times. Instead, quenching must act on a time-scale comparable to the dynamical time of the host halo. We posit that ram-pressure stripping can naturally explain this behaviour, with the critical mass (of M⋆ ~ 10⁸ M⊙) corresponding to haloes with gravitational restoring forces that are too weak to overcome the drag force encountered when moving through an extended, hot circumgalactic medium.
NASA Astrophysics Data System (ADS)
Bai, Rui; Tiejian, Li; Huang, Yuefei; Jiaye, Li; Wang, Guangqian; Yin, Dongqin
2015-12-01
The increasing resolution of Digital Elevation Models (DEMs) and the development of drainage network extraction algorithms make it possible to develop high-resolution drainage networks for large river basins. These vector networks contain massive numbers of river reaches with associated geographical features, including topological connections and topographical parameters. These features create challenges for efficient map display and data management. Of particular interest are the requirements of data management for multi-scale hydrological simulations using multi-resolution river networks. In this paper, a hierarchical pyramid method is proposed, which iteratively generates coarsened vector drainage networks from the original. The method is based on the Horton-Strahler (H-S) order scheme. At each coarsening step, the river reaches with the lowest H-S order are pruned, and their related sub-basins are merged. At the same time, the topological connections and topographical parameters of each coarsened drainage network are inherited from the previous level using formulas presented in this study. The method was applied to the original drainage network of a watershed in the Huangfuchuan River basin, extracted from a 1-m-resolution airborne LiDAR DEM, and to the full Yangtze River basin in China, extracted from a 30-m-resolution ASTER GDEM. In addition, a map-display and parameter-query web service was published for the Mississippi River basin, with data extracted from the 30-m-resolution ASTER GDEM. The results presented in this study indicate that the developed method can effectively manage and display massive amounts of drainage network data and can facilitate multi-scale hydrological simulations.
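A minimal sketch of the pruning step at the heart of such a pyramid: compute Horton-Strahler orders on a reach tree and drop the lowest-order reaches at each coarsening level. The tree encoding and the toy network below are illustrative assumptions, not the paper's data structures:

```python
def strahler(children, node):
    """Horton-Strahler order of `node` in a tree {node: [upstream children]}."""
    kids = children.get(node, [])
    if not kids:
        return 1
    orders = sorted((strahler(children, k) for k in kids), reverse=True)
    if len(orders) > 1 and orders[0] == orders[1]:
        return orders[0] + 1   # two equal-order tributaries raise the order
    return orders[0]


def all_nodes(children, outlet):
    stack, seen = [outlet], set()
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(children.get(n, []))
    return seen


def prune_lowest_order(children, outlet):
    """One coarsening step: remove all reaches of Strahler order 1."""
    keep = {n for n in all_nodes(children, outlet) if strahler(children, n) > 1}
    return {n: [k for k in children.get(n, []) if k in keep] for n in keep}


# Tiny example network draining to node "out":
net = {"out": ["a", "b"], "a": ["a1", "a2"], "b": []}
print(strahler(net, "out"))            # 2
print(prune_lowest_order(net, "out"))  # only nodes of order >= 2 survive
```

Repeating the pruning step level by level yields exactly the kind of pyramid of progressively coarser networks described above, with attributes aggregated upstream at each merge.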
Is there a cluster in the massive star forming region IRAS 20126+4104?
NASA Astrophysics Data System (ADS)
Montes, V. A.; Hofner, Peter; Anderson, C.; Rosero, V.
2017-03-01
A Chandra X-ray Observatory ACIS-I observation and a 6 cm continuum radio observation with the Karl G. Jansky Very Large Array (VLA), together with a multiwavelength study in the infrared (2MASS and Spitzer) and optical (USNO-B1.0), show an increasing surface density of X-ray sources toward the massive protostar. There are at least 43 YSOs within a 1.2 pc distance of the massive protostar. This number is consistent with typical clusters around B-type stars (Lada & Lada 2003).
Learner Groups in Massive Open Online Courses
ERIC Educational Resources Information Center
Arora, Skand; Goel, Manav; Sabitha, A. Sai; Mehrotra, Deepti
2017-01-01
The open nature of Massive Open Online Courses (MOOCs) attracts a large number of learners with different backgrounds, skills, motivations, and goals. This has brought a need to understand such heterogeneity in populations of MOOC learners. Categorizing these learners based upon their interaction with the course can help address this need and…
MOOCocracy: The Learning Culture of Massive Open Online Courses
ERIC Educational Resources Information Center
Loizzo, Jamie; Ertmer, Peggy A.
2016-01-01
Massive open online courses (MOOCs) are often examined and evaluated in terms of institutional cost, instructor prestige, number of students enrolled, and completion rates. MOOCs, which are connecting thousands of adult learners from diverse backgrounds, have yet to be viewed from a learning culture perspective. This research used virtual…
Dipolar dark matter with massive bigravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchet, Luc; Heisenberg, Lavinia; Department of Physics & The Oskar Klein Centre, AlbaNova University Centre,Roslagstullsbacken 21, 10691 Stockholm
2015-12-14
Massive gravity theories have been developed as viable IR modifications of gravity motivated by dark energy and the problem of the cosmological constant. On the other hand, modified gravity and modified dark matter theories were developed with the aim of solving the problems of standard cold dark matter at galactic scales. Here we propose to adapt the framework of ghost-free massive bigravity theories to reformulate the problem of dark matter at galactic scales. We investigate a promising alternative to dark matter called dipolar dark matter (DDM) in which two different species of dark matter are separately coupled to the two metrics of bigravity and are linked together by an internal vector field. We show that this model successfully reproduces the phenomenology of dark matter at galactic scales (i.e. MOND) as a result of a mechanism of gravitational polarisation. The model is safe in the gravitational sector, but because of the particular couplings of the matter fields and vector field to the metrics, a ghost is present in the dark matter sector in the decoupling limit. However, it might be possible to push the mass of the ghost beyond the strong coupling scale by an appropriate choice of the parameters of the model. Crucial questions to address in future work are the exact mass of the ghost, and the cosmological implications of the model.
Biology-Inspired Distributed Consensus in Massively-Deployed Sensor Networks
NASA Technical Reports Server (NTRS)
Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng
2005-01-01
Promises of ubiquitous control of the physical environment by large-scale wireless sensor networks open avenues for new applications that are expected to redefine the way we live and work. Most recent research has concentrated on developing techniques for performing relatively simple tasks in small-scale sensor networks, assuming some form of centralized control. The main contribution of this work is to propose a new way of looking at large-scale sensor networks, motivated by lessons learned from the way biological ecosystems are organized. Indeed, we believe that techniques used in small-scale sensor networks are not likely to scale to large networks; such large-scale networks must be viewed as an ecosystem in which the sensors/effectors are organisms whose autonomous actions, based on local information, combine in a communal way to produce global results. As an example of a useful function, we demonstrate that fully distributed consensus can be attained in a scalable fashion in massively deployed sensor networks where individual motes operate on local information, making local decisions that are aggregated across the network to achieve globally meaningful effects.
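A minimal sketch of fully distributed consensus in this spirit is pairwise gossip averaging, where each mote repeatedly averages its value with a random neighbor; the topology, readings, and iteration count are illustrative assumptions, not the paper's protocol:

```python
import random

# Motes know only their neighbors; consensus emerges from local averaging.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a line of four motes
reading = {0: 10.0, 1: 14.0, 2: 22.0, 3: 30.0}       # local sensor readings

random.seed(1)
for _ in range(200):
    i = random.choice(list(reading))   # a random mote wakes up...
    j = random.choice(neighbors[i])    # ...and gossips with one neighbor
    avg = (reading[i] + reading[j]) / 2.0
    reading[i] = reading[j] = avg      # both adopt the pairwise average

print(reading)  # all values converge toward the global mean (19.0)
```

No mote ever sees the whole network, yet every local value drifts to the global average, which is the hallmark of the ecosystem-style organization argued for above.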
Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip
2014-02-28
In this paper, we present a scalable and efficient implementation of point-dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performance and overall very competitive timings in the energy-force computation needed to perform an MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package, which is the first implementation of a polarizable model to make large-scale experiments with massively parallel PBC point-dipole models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME, and a noticeable improvement of the memory management, giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data over the cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations, giving new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations.
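The polarization solve that these iterative strategies accelerate amounts to a linear system for the induced dipoles, schematically (α⁻¹ − T)μ = E. A dense toy version with a diagonally preconditioned conjugate gradient is sketched below; the random stand-in for the dipole interaction matrix and all parameter values are illustrative assumptions, not the Tinker-HP implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30          # number of polarizable sites (3 Cartesian components each)
alpha = 1.0     # isotropic polarizability (illustrative units)

# Stand-in for the dipole-dipole interaction matrix T: a small random
# symmetric perturbation keeps A = alpha^-1 * I - T positive definite.
T = rng.normal(scale=0.02, size=(3 * n, 3 * n))
T = 0.5 * (T + T.T)
A = np.eye(3 * n) / alpha - T
E = rng.normal(size=3 * n)   # permanent electric field at the sites


def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for A x = b (M_inv ~ diag(A)^-1)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r            # diagonal preconditioner applied elementwise
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        a = (r @ z) / (p @ Ap)
        x += a * p
        r_new = r - a * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x


mu = pcg(A, E, M_inv=alpha * np.ones(3 * n))  # precondition with alpha * I
print("residual:", np.linalg.norm(A @ mu - E))
```

In a production code the matrix-vector product A @ p is never formed explicitly but evaluated with SPME, which is where the parallel scalability discussed above comes from.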
Resolving the problem of galaxy clustering on small scales: any new physics needed?
NASA Astrophysics Data System (ADS)
Kang, X.
2014-02-01
Galaxy clustering sets strong constraints on the physics governing galaxy formation and evolution. However, most current models fail to reproduce the clustering of low-mass galaxies on small scales (r < 1 Mpc h^-1). In this paper, we study the galaxy clustering predicted by a few semi-analytical models. We first compare two Munich versions, Guo et al. and De Lucia & Blaizot. The Guo11 model reproduces the galaxy stellar mass function well, but overpredicts the clustering of low-mass galaxies on small scales. The DLB07 model provides a better fit to the clustering on small scales, but overpredicts the stellar mass function. This seems puzzling. The clustering on small scales is dominated by galaxies in the same dark matter halo, and a slightly higher fraction of satellite galaxies resides in massive haloes in the Guo11 model, which is the dominant contribution to the clustering discrepancy between the two models. However, both models still overpredict the clustering at 0.1 < r < 10 Mpc h^-1 for low-mass galaxies. This is because both models overpredict the number of satellites in massive haloes by 30 per cent relative to the data. We show that the Guo11 model can be slightly modified to simultaneously fit the stellar mass function and the clustering, but that this cannot be easily achieved in the DLB07 model. The better agreement of the DLB07 model with the data actually comes as a coincidence, as it predicts too many low-mass central galaxies, which are less clustered and thus bring down the total clustering. Finally, we show the predictions from the semi-analytical model of Kang et al. We find that this model can simultaneously fit the stellar mass function and galaxy clustering if the supernova feedback in satellite galaxies is stronger. We conclude that semi-analytical models are now able to solve the small-scale clustering problem without invoking any new physics or changing the dark matter properties, such as the recently favoured warm dark matter.
NASA Astrophysics Data System (ADS)
Stellmach, Stephan; Hansen, Ulrich
2008-05-01
Numerical simulations of the process of convection and magnetic field generation in planetary cores still fail to reach geophysically realistic control parameter values. Future progress in this field depends crucially on efficient numerical algorithms which are able to take advantage of the newest generation of parallel computers. Desirable features of simulation algorithms include (1) spectral accuracy, (2) an operation count per time step that is small and roughly proportional to the number of grid points, (3) memory requirements that scale linearly with resolution, (4) an implicit treatment of all linear terms including the Coriolis force, (5) the ability to treat all kinds of common boundary conditions, and (6) reasonable efficiency on massively parallel machines with tens of thousands of processors. So far, algorithms for fully self-consistent dynamo simulations in spherical shells do not achieve all these criteria simultaneously, resulting in strong restrictions on the possible resolutions. In this paper, we demonstrate that local dynamo models, in which the process of convection and magnetic field generation is simulated for only a small part of a planetary core in Cartesian geometry, can achieve the above goal. We propose an algorithm that fulfills the first five of the above criteria and demonstrate that a model implementation of our method on an IBM Blue Gene/L system scales impressively well for up to O(10^4) processors. This allows for numerical simulations at rather extreme parameter values.
Supercomputer simulations of structure formation in the Universe
NASA Astrophysics Data System (ADS)
Ishiyama, Tomoaki
2017-06-01
We describe the implementation and performance results of our massively parallel MPI/OpenMP hybrid TreePM code for large-scale cosmological N-body simulations. For domain decomposition, a recursive multi-section algorithm is used, and the sizes of the domains are automatically set so that the total calculation time is the same for all processes. We developed a highly tuned gravity kernel for short-range forces and a novel communication algorithm for long-range forces. For a benchmark simulation with two trillion particles, the average performance on the full system of the K computer (82,944 nodes; 663,552 cores in total) is 5.8 Pflops, which corresponds to 55% of the peak speed.
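As a rough illustration of the recursive multi-section idea, the sketch below splits particle positions into domains with balanced particle counts, cycling the split axis at each level. This is a hypothetical minimal version: the production code described above balances measured calculation time per process rather than raw particle counts, and can cut into more than two sections per level.

```python
import numpy as np

def multisection(points, n_domains, axis=0):
    """Recursively split particle positions into n_domains slabs with
    approximately equal particle counts, cycling the split axis.
    Returns a list of index arrays, one per domain."""
    def split(idx, n, ax):
        if n == 1:
            return [idx]
        n_left = n // 2
        order = idx[np.argsort(points[idx, ax])]
        cut = len(order) * n_left // n      # balance by particle count
        nxt = (ax + 1) % points.shape[1]
        return split(order[:cut], n_left, nxt) + split(order[cut:], n - n_left, nxt)
    return split(np.arange(len(points)), n_domains, axis)

# example: decompose 100,000 random particles over 8 domains
pts = np.random.rand(100_000, 3)
domains = multisection(pts, 8)
print([len(d) for d in domains])   # near-equal counts per domain
```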
Statistical properties of online avatar numbers in a massive multiplayer online role-playing game
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Ren, Fei; Gu, Gao-Feng; Tan, Qun-Zhao; Zhou, Wei-Xing
2010-02-01
Massive multiplayer online role-playing games (MMORPGs) have been very popular in the past few years. The profit of an MMORPG company is proportional to the number of registered users, and the instantaneous number of online avatars is a key factor in assessing how popular an MMORPG is. We use the online-offline logs of an MMORPG server to reconstruct the instantaneous number of online avatars per second and investigate its statistical properties. We find that the online avatar number exhibits one-day periodic behavior and a clear intraday pattern; that the fluctuation distribution of the online avatar numbers has a leptokurtic non-Gaussian shape with power-law tails; and that the increments of online avatar numbers after removing the intraday pattern are uncorrelated, while their absolute values exhibit long-term correlation. In addition, both time series exhibit a multifractal nature.
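A minimal sketch of the kind of preprocessing such an analysis requires: removing the average intraday pattern from a regularly sampled count series and checking the autocorrelation of the residual increments. The function names and the additive detrending are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def remove_intraday_pattern(counts, period):
    """Subtract the average intraday pattern from a count series
    sampled at fixed intervals.

    counts : 1-D array whose length is a multiple of `period`
    period : number of samples per day
    """
    days = counts.reshape(-1, period)
    pattern = days.mean(axis=0)            # average daily profile
    adjusted = (days - pattern).ravel()    # de-seasonalized series
    return adjusted, pattern

def autocorr(x, max_lag):
    """Sample autocorrelation at lags 1..max_lag."""
    x = (x - x.mean()) / x.std()
    n = len(x)
    return np.array([np.mean(x[:n - k] * x[k:]) for k in range(1, max_lag + 1)])

# usage: one week of per-minute counts, 1440 samples per day
counts = np.random.poisson(1000, 7 * 1440).astype(float)
adjusted, pattern = remove_intraday_pattern(counts, 1440)
increments = np.diff(adjusted)
print(autocorr(increments, 5))            # near zero if uncorrelated
print(autocorr(np.abs(increments), 5))    # slow decay signals long memory
```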
NASA Technical Reports Server (NTRS)
Michelassi, V.; Durbin, P. A.; Mansour, N. N.
1996-01-01
A four-equation model of turbulence is applied to the numerical simulation of flows with massive separation induced by a sudden expansion. The model constants are functions of the flow parameters, and two different formulations of these functions are tested. The results are compared with experimental data for a high Reynolds-number case and with experimental and DNS data for a low Reynolds-number case. The computations show that the recovery region downstream of the massive separation is properly modeled only for the high-Re case. The problems in the low-Re case stem from the gradient diffusion hypothesis, which underestimates the turbulent diffusion.
Can a supersonically expanding Bose-Einstein condensate be used to study cosmological inflation?
NASA Astrophysics Data System (ADS)
Banik, Swarnav; Eckel, Stephen; Kumar, Avinash; Jacobson, Ted; Spielman, Ian; Campbell, Gretchen
2017-04-01
The massive scale of the universe makes the experimental study of cosmological inflation difficult. This has led to interest in developing analogous systems using tabletop experiments. Here, we present the basic features of an expanding universe by drawing parallels with an expanding toroidal Bose-Einstein condensate (BEC) of ^23Na atoms. The toroidal BEC serves as the background vacuum, and phonons are the analogue of photons in the expanding universe. We study the dynamics of phonons in both non-expanding and expanding condensates and measure dissipation using the structure factor. We demonstrate redshifting of phonons and quasi-particle production similar to preheating after cosmic inflation. At the end of expansion, we also observe spontaneous non-zero winding numbers in the ring. Using Monte Carlo simulations, we predict the widths of the resulting winding number distribution, which agree well with our experimental findings.
Accurate stratospheric particle size distributions from a flat plate collection surface
NASA Technical Reports Server (NTRS)
Zolensky, M. E.; Mackinnon, I. D. R.
1985-01-01
Flat plate particle collections have revealed the presence of a remarkable variety of both terrestrial and extraterrestrial material in the stratosphere. It is found that the ratio of terrestrial to extraterrestrial material and the nature of the material collected may vary significantly over short time scales. These fluctuations may be related to massive injections of volcanic ash, emissions from solid-fuel rockets, or variations in the micrometeoroid flux. The variations in particle number density can be of great importance to the earth's atmospheric radiation balance and, therefore, its climate. With the objective of assessing the number density of solid particles in the stratosphere, an examination has been conducted of all particles exceeding 1 micron in average diameter for a representative suite of particles obtained from a single flat plate collection surface. Attention is given to solid particle size distributions in the stratosphere, and the origin of important stratospheric particle types.
THE LOCATION, CLUSTERING, AND PROPAGATION OF MASSIVE STAR FORMATION IN GIANT MOLECULAR CLOUDS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ochsendorf, Bram B.; Meixner, Margaret; Chastenet, Jérémy
Massive stars are key players in the evolution of galaxies, yet their formation pathway remains unclear. In this work, we use data from several galaxy-wide surveys to build an unbiased data set of ∼600 massive young stellar objects, ∼200 giant molecular clouds (GMCs), and ∼100 young (<10 Myr) optical stellar clusters (SCs) in the Large Magellanic Cloud. We employ these data to quantitatively study the location and clustering of massive star formation and its relation to the internal structure of GMCs. We reveal that massive stars do not typically form at the highest column densities nor at the centers of their parent GMCs at the ∼6 pc resolution of our observations. Massive star formation clusters over multiple generations and on size scales much smaller than the size of the parent GMC. We find that massive star formation is significantly boosted in clouds near SCs. However, whether a cloud is associated with an SC does not depend on either the cloud's mass or global surface density. These results reveal a connection between different generations of massive stars on timescales up to 10 Myr. We compare our work with Galactic studies and discuss our findings in terms of GMC collapse, triggered star formation, and a potential dichotomy between low- and high-mass star formation.
NASA Astrophysics Data System (ADS)
Vazza, F.; Brunetti, G.; Gheller, C.; Brunino, R.
2010-11-01
We present a sample of 20 massive galaxy clusters with total virial masses in the range 6 × 10^14 M⊙ ≤ Mvir ≤ 2 × 10^15 M⊙, re-simulated with a customized version of the ENZO 1.5 code employing adaptive mesh refinement. This technique allowed us to obtain unprecedentedly high spatial resolution (≈25 kpc/h) out to a distance of ~3 virial radii from the cluster centres, and it makes it possible to focus with the same level of detail on the physical properties of the innermost and the outermost cluster regions, providing new clues on the role of shock waves and turbulent motions in the ICM across a wide range of scales. In this paper, a first exploratory study of this data set is presented. We report on the thermal properties of galaxy clusters at z = 0. Integrated and morphological properties of the gas density, gas temperature, gas entropy and baryon fraction distributions are discussed and compared with existing results from both the observational and the numerical literature. Our cluster sample shows overall good consistency with results obtained using other numerical techniques (e.g. Smoothed Particle Hydrodynamics), yet it provides a more accurate representation of the accretion patterns far outside the cluster cores. We also reconstruct the properties of shock waves within the sample by means of a velocity-based approach, and we study Mach numbers and energy distributions for the various dynamical states of clusters, giving estimates for the injection of cosmic-ray particles at shocks. The present sample is rather unique in the panorama of cosmological simulations of massive galaxy clusters, due to its dynamical range, statistics of objects and number of time outputs. For this reason, we provide a public repository of the available data, accessible via a web portal at http://data.cineca.it.
Tattini, Lorenzo; Olmi, Simona; Torcini, Alessandro
2012-06-01
In this article, we investigate the role of connectivity in promoting coherent activity in excitatory neural networks. In particular, we would like to understand if the onset of collective oscillations can be related to a minimal average connectivity and how this critical connectivity depends on the number of neurons in the networks. For these purposes, we consider an excitatory random network of leaky integrate-and-fire pulse-coupled neurons. The neurons are connected as in a directed Erdős–Rényi graph with average connectivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Patrick
2014-01-31
The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.
Peer Assessment for Massive Open Online Courses (MOOCs)
ERIC Educational Resources Information Center
Suen, Hoi K.
2014-01-01
The teach-learn-assess cycle in education is broken in a typical massive open online course (MOOC). Without formative assessment and feedback, MOOCs amount to information dump or broadcasting shows, not educational experiences. A number of remedies have been attempted to bring formative assessment back into MOOCs, each with its own limits and…
Barriers to Taking Massive Open Online Courses (MOOCs)
ERIC Educational Resources Information Center
Semenova, Tatiana Vadimovna; Rudakova, Lyudmila Mikhailovna
2016-01-01
Researchers of the traditional higher education system identify a number of factors affecting admission to a university (barriers to entry) and factors of its successful completion (barriers to exit). Massive open online courses (MOOCs), available to any Internet user, remove barriers to entry because anyone can study there. But do all students…
Automating a Massive Online Course with Cluster Computing
ERIC Educational Resources Information Center
Haas, Timothy C.
2016-01-01
Before massive numbers of students can take online courses for college credit, the challenges of providing tutoring support, answers to student-posed questions, and the control of cheating will need to be addressed. These challenges are taken up here by developing an online course delivery system that runs in a cluster computing environment and is…
Massive Open Online Courses (MOOCs): Current Applications and Future Potential
ERIC Educational Resources Information Center
Milheim, William D.
2013-01-01
Massive Open Online Courses (or MOOCs) are the subject of numerous recent articles in "The Chronicle of Higher Education," "The New York Times," and other publications related to their increasing use by a variety of universities to reach large numbers of online students. This article describes the current state of these online…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popolo, A. Del; Delliou, M. Le, E-mail: adelpopolo@oact.inaf.it, E-mail: delliou@ift.unesp.br
2014-12-01
We continue the study of the impact of baryon physics on the small-scale problems of the ΛCDM model, based on a semi-analytical model (Del Popolo, 2009). With such a model, we show how the cusp/core, missing satellite (MSP) and Too Big to Fail (TBTF) problems and the angular momentum catastrophe can be reconciled with observations by adding parent-satellite interaction. Such interaction between dark matter (DM) and baryons through dynamical friction (DF) can sufficiently flatten the inner cusp of the density profiles to solve the cusp/core problem. Combining, in our model, a Zolotov et al. (2012)-like correction, similar to Brooks et al. (2013), with the effects of UV heating and tidal stripping, the number of massive, luminous satellites, as seen in the Via Lactea 2 (VL2) subhaloes, is in agreement with the numbers observed in the MW, thus resolving the MSP and TBTF problems. The model also produces a distribution of the angular spin parameter and angular momentum in agreement with observations of the dwarfs studied by van den Bosch, Burkert, and Swaters (2001).
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1994-01-01
A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
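A minimal sketch of multi-level residual VQ in the spirit of the approach described above: each level quantizes the residual left by the previous level, and the final residual is handed to a lossless stage. Here scipy's kmeans2 stands in for the paper's full-search VQ codebook training; all names and parameters are illustrative (the seed keyword needs scipy >= 1.7).

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def progressive_vq(image_blocks, levels, codebook_size):
    """Multi-level residual VQ: each level quantizes the residual left
    by the previous level; the final residual can be stored losslessly."""
    residual = image_blocks.astype(float)
    codebooks, indices = [], []
    for _ in range(levels):
        cb, idx = kmeans2(residual, codebook_size, minit='++', seed=0)
        residual = residual - cb[idx]       # pass residual to next level
        codebooks.append(cb)
        indices.append(idx)
    return codebooks, indices, residual     # residual -> lossless stage

# usage: quantize 1000 hypothetical 4x4 image blocks in 3 stages
blocks = np.random.rand(1000, 16)
cbs, idxs, resid = progressive_vq(blocks, levels=3, codebook_size=64)
recon = sum(cb[i] for cb, i in zip(cbs, idxs))
print(np.abs(blocks - recon - resid).max())   # ~0: stages sum to input
```

Decoding the first level alone gives a coarse image; adding each further level's codewords refines it progressively, which is what makes the scheme suitable for browsing large remote-sensing archives.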
Massive superclusters as a probe of the nature and amplitude of primordial density fluctuations
NASA Technical Reports Server (NTRS)
Kaiser, N.; Davis, M.
1985-01-01
It is pointed out that correlation studies of galaxy positions have been widely used in the search for information about the large-scale matter distribution. The study of rare condensations on large scales provides an approach to extend the existing knowledge of large-scale structure into the weakly clustered regime. Shane (1975) provides a description of several apparent massive condensations within the Shane-Wirtanen catalog, taking into account the Serpens-Virgo cloud and the Corona cloud. In the present study, a description is given of a model for estimating the frequency of condensations which evolve from initially Gaussian fluctuations. This model is applied to the Corona cloud to estimate its 'rareness' and thereby estimate the rms density contrast on this mass scale. An attempt is made to find a conflict between the density fluctuations derived from the Corona cloud and independent constraints. A comparison is conducted of the estimate and the density fluctuations predicted to arise in a universe dominated by cold dark matter.
NASA Astrophysics Data System (ADS)
Jia, T.; Yu, X.
2018-04-01
With the availability of massive trajectory data, it is highly valuable to reveal the activity information they contain for many domains, such as understanding the functionality of urban regions. This article utilizes the scaling patterns of human activities to enhance the functional description of natural urban places. Specifically, we propose a temporal city clustering algorithm to aggregate stopping locations into natural urban places, which are reported to follow remarkable power-law distributions of sizes and to obey a universal law of economy of scale in human interactions with urban infrastructure. In addition, we propose a novel Bayesian inference model with a damping factor to estimate the most likely POI type associated with a stopping location. Our results suggest that hot natural urban places can be effectively identified from their scaling patterns and that their functionality can be substantially enhanced. For instance, natural urban places containing an airport or railway station stand out strongly once the many types of human activities are accumulated.
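The paper's exact damping form is not given here; the following sketch shows one plausible way a damping factor can temper a Bayesian update of POI-type probabilities, so that a single noisy observation cannot dominate the prior. All names and the mixing form are assumptions.

```python
import numpy as np

def poi_type_posterior(prior, likelihoods, damping=0.85):
    """Damped Bayesian update for the POI type of a stopping location.

    prior       : (k,) prior probabilities over POI types
    likelihoods : (k,) P(observed activity pattern | POI type)
    damping     : mixes the Bayesian posterior with the prior
                  (hypothetical form, not the authors' exact model)
    """
    posterior = prior * likelihoods
    posterior /= posterior.sum()
    return damping * posterior + (1 - damping) * prior

# usage: three hypothetical POI types (office, retail, transit hub)
prior = np.array([0.5, 0.3, 0.2])
lik = np.array([0.1, 0.7, 0.2])     # evening-heavy activity observed
print(poi_type_posterior(prior, lik))
```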
Biology Inspired Approach for Communal Behavior in Sensor Networks
NASA Technical Reports Server (NTRS)
Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng
2006-01-01
Research in wireless sensor network technology has exploded in the last decade. Promises of complex and ubiquitous control of the physical environment by these networks open avenues for new kinds of science and business. Due to the small size and low cost of sensor devices, visionaries promise systems enabled by the deployment of massive numbers of sensors working in concert. Although the reduction in size has been phenomenal, it results in severe limitations on the computing, communication, and power capabilities of these devices. Under these constraints, research efforts have concentrated on developing techniques for performing relatively simple tasks with minimal energy expense, assuming some form of centralized control. Unfortunately, centralized control does not scale to massive networks, and the execution of simple tasks in sparsely populated networks will not lead to the sophisticated applications predicted. These must be enabled by new techniques that depend on local and autonomous cooperation between sensors to effect global functions. As a step in that direction, we detail a technique whereby a large population of sensors can attain a global goal using only local information and making only local decisions, without any form of centralized control.
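As an illustration of global behavior emerging from purely local decisions (an illustrative consensus protocol, not the authors' specific technique), the sketch below implements simple neighborhood gossip averaging: every sensor repeatedly mixes its value with its neighbors' values and the whole network converges to the global mean reading without any central controller.

```python
import numpy as np

def gossip_average(readings, neighbors, rounds=50, step=0.5):
    """Local averaging consensus.

    readings  : (n,) initial sensor measurements
    neighbors : dict node -> list of neighbor node ids
    Each round, every node moves part-way toward the mean of its
    neighbors; on a connected graph all values converge to the
    global average using only local communication.
    """
    x = readings.astype(float).copy()
    for _ in range(rounds):
        new_x = x.copy()
        for i, nbrs in neighbors.items():
            if nbrs:
                new_x[i] = (1 - step) * x[i] + step * np.mean([x[j] for j in nbrs])
        x = new_x
    return x

# usage: 10 sensors on a ring, each knowing only its two neighbors
rng = np.random.default_rng(1)
readings = rng.normal(20.0, 2.0, size=10)   # e.g. temperatures
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
print(gossip_average(readings, ring)[:3], readings.mean())
```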
Kinematic evidence for feedback-driven star formation in NGC 1893
NASA Astrophysics Data System (ADS)
Lim, Beomdu; Sung, Hwankyung; Bessell, Michael S.; Lee, Sangwoo; Lee, Jae Joon; Oh, Heeyoung; Hwang, Narae; Park, Byeong-Gon; Hur, Hyeonoh; Hong, Kyeongsoo; Park, Sunkyung
2018-06-01
OB associations are the prevailing star-forming sites in the Galaxy. Up to now, the process by which OB associations form has remained a mystery. A possible process is self-regulating star formation driven by feedback from massive stars. However, although a number of observational studies have uncovered various signposts of feedback-driven star formation, the effectiveness of such feedback has been questioned. Stellar and gas kinematics is a promising tool for capturing the relative motion of newborn stars and gas away from ionizing sources. We present high-resolution spectroscopy of stars and gas in the young open cluster NGC 1893. Our findings show that newborn stars and the tadpole nebula Sim 130 are moving away from the central cluster containing two O-type stars, and that the time-scale of sequential star formation is about 1 Myr within a 9 pc distance. The newborn stars formed by feedback from massive stars account for at least 18 per cent of the total stellar population in the cluster, suggesting that this process can play an important role in the formation of OB associations. These results support the self-regulating star formation model.
Extracting Databases from Dark Data with DeepDive
Zhang, Ce; Shin, Jaeho; Ré, Christopher; Cafarella, Michael; Niu, Feng
2016-01-01
DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data — scientific papers, Web classified ads, customer service notes, and so on — were instead in a relational database, it would give analysts a massive and valuable new set of "big data." DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that matches that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represent a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference. PMID:28316365
GEMINI: a computationally-efficient search engine for large gene expression datasets.
DeFreitas, Timothy; Saddiki, Hachem; Flaherty, Patrick
2016-02-24
Low-cost DNA sequencing allows organizations to accumulate massive amounts of genomic data and use those data to answer a diverse range of research questions. Presently, users must search for relevant genomic data using a keyword, accession number, or meta-data tag. However, in this search paradigm the form of the query - a text-based string - is mismatched with the form of the target - a genomic profile. To improve access to massive genomic data resources, we have developed a fast search engine, GEMINI, that uses a genomic profile as a query to search for similar genomic profiles. GEMINI implements a nearest-neighbor search algorithm using a vantage-point tree to store a database of n profiles and, in certain circumstances, achieves an O(log n) expected query time in the limit. We tested GEMINI on breast and ovarian cancer gene expression data from The Cancer Genome Atlas project and show that it achieves a query time that scales as the logarithm of the number of records in practice on genomic data. In a database with 10^5 samples, GEMINI identifies the nearest neighbor in 0.05 sec compared to a brute-force search time of 0.6 sec. GEMINI is a fast search engine that uses a query genomic profile to search for similar profiles in a very large genomic database. It enables users to identify similar profiles independent of sample label, data origin or other meta-data information.
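A minimal vantage-point tree sketch showing how such a nearest-neighbor index is organized: each node stores a vantage point and the median distance to the remaining points, and a query descends the side that contains it while pruning the other side whenever it cannot hold a closer neighbor. This is illustrative only; GEMINI's actual implementation and distance measure may differ.

```python
import numpy as np

class VPTree:
    """Vantage-point tree for nearest-neighbor search over profiles
    under Euclidean distance (illustrative sketch)."""

    def __init__(self, points):
        self.points = points
        self.root = self._build(list(range(len(points))))

    def _build(self, idx):
        if not idx:
            return None
        vp, rest = idx[0], idx[1:]
        if not rest:
            return (vp, 0.0, None, None)
        d = [np.linalg.norm(self.points[i] - self.points[vp]) for i in rest]
        mu = float(np.median(d))
        inner = [i for i, di in zip(rest, d) if di < mu]
        outer = [i for i, di in zip(rest, d) if di >= mu]
        return (vp, mu, self._build(inner), self._build(outer))

    def nearest(self, q):
        best = [None, np.inf]
        def search(node):
            if node is None:
                return
            vp, mu, inner, outer = node
            d = np.linalg.norm(q - self.points[vp])
            if d < best[1]:
                best[:] = [vp, d]
            # search the side containing q first, prune the other
            first, second = (inner, outer) if d < mu else (outer, inner)
            search(first)
            if abs(d - mu) < best[1]:
                search(second)
        search(self.root)
        return tuple(best)

# usage: index 1000 hypothetical 50-gene expression profiles
profiles = np.random.rand(1000, 50)
tree = VPTree(profiles)
idx, dist = tree.nearest(np.random.rand(50))
print(idx, dist)
```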
NASA Astrophysics Data System (ADS)
Wilber, A.; Brüggen, M.; Bonafede, A.; Rafferty, D.; Savini, F.; Shimwell, T.; van Weeren, R. J.; Botteon, A.; Cassano, R.; Brunetti, G.; De Gasperin, F.; Wittor, D.; Hoeft, M.; Birzan, L.
2018-05-01
Merging galaxy clusters produce low-Mach-number shocks in the intracluster medium. These shocks can accelerate electrons to relativistic energies that are detectable at radio frequencies. MACS J0744.9+3927 is a massive [M500 = (11.8 ± 2.8) × 10^14 M⊙], high-redshift (z = 0.6976) cluster where a Bullet-type merger is presumed to have taken place. Sunyaev-Zel'dovich maps from MUSTANG indicate that a shock, with Mach number M = 1.0-2.9 and an extension of ˜200 kpc, sits near the centre of the cluster. The shock is also detected as a brightness and temperature discontinuity in X-ray observations. To search for diffuse radio emission associated with the merger, we have imaged the cluster with the LOw Frequency ARray (LOFAR) at 120-165 MHz. Our LOFAR radio images reveal previously undetected AGN emission, but do not show clear cluster-scale diffuse emission in the form of either a radio relic or a radio halo. The region of the shock is on the western edge of AGN lobe emission from the brightest cluster galaxy. Correlating the flux of known shock-induced radio relics versus their size, we find that the radio emission overlapping the shocked region in MACS J0744.9+3927 is likely of AGN origin. We argue against the presence of a relic caused by diffusive shock acceleration and suggest that the shock is too weak to accelerate electrons from the intracluster medium.
Cosmological perturbation and matter power spectrum in bimetric massive gravity
NASA Astrophysics Data System (ADS)
Geng, Chao-Qiang; Lee, Chung-Chi; Zhang, Kaituo
2018-04-01
We discuss the linear perturbation equations with the synchronous gauge in a minimal scenario of the bimetric massive gravity theory. We find that the matter density perturbation and matter power spectrum are suppressed. We also examine the ghost and stability problems and show that the allowed deviation of this gravitational theory from the cosmological constant is constrained to be smaller than O(10^-2) by the large scale structure observational data.
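For reference, the synchronous gauge used above constrains the perturbed FRW line element to the standard form below (this is the textbook convention, recalled here as an assumption rather than quoted from the paper; the spatial perturbation h_ij carries the scalar modes whose growth sets the matter power spectrum):

```latex
ds^{2} = a^{2}(\tau)\left[-\,d\tau^{2} + \left(\delta_{ij} + h_{ij}\right)dx^{i}\,dx^{j}\right],
\qquad \delta g_{00} = \delta g_{0i} = 0 .
```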
Assefa, Tsion; Haile Mariam, Damen; Mekonnen, Wubegzier; Derbew, Miliard
2017-12-28
A rapid transition from a severe physician workforce shortage to massive production to meet physician workforce demand poses a variety of challenges to the Ethiopian health care system. This study therefore explored how different stakeholders viewed the health system's response to the physician workforce shortage, the so-called flooding strategy. The study adopted the grounded theory research approach to explore the causes, contexts, and consequences (at present and in the short and long term) of massive medical student admission to the medical schools for patient care, the medical education workforce, and medical students. Forty-three purposively selected individuals participated in semi-structured interviews, drawn from different settings: academia, the government health care system, and non-governmental organizations (NGOs). Data coding, classification, and categorization were assisted by the ATLAS.ti qualitative data analysis software. In relation to the health system response, eight main categories emerged: (1) reasons for rapid medical education expansion; (2) preparation for medical education expansion; (3) the consequences of rapid medical education expansion; (4) massive production/flooding as a human resources for health (HRH) development strategy; (5) cooperation on HRH development; (6) HRH strategies and planning; (7) capacity of the system for HRH development; and (8) institutional continuity for HRH development. The demand for a physician workforce and gaining political acceptance were cited as the main reasons that motivated the government to scale up medical education rapidly. However, the rapid expansion was beyond the capacity of the medical schools' human resources, patient flow, and the size of teaching hospitals. As a result, there were potential adverse consequences for clinical service delivery and the teaching-learning process at present: "the number should consider the available resources such as number of classrooms, patient flows, medical teachers, library…". In the future, it was anticipated to end in a surplus of physicians, unemployment, inefficiency, and pressure on the system: "…flooding may seem a good strategy superficially but it is a dangerous strategy. It may put the country into crisis, even if good physicians are being produced; they may not get a place where to go…". Massive physician workforce production that is not closely aligned with the training capacity of the medical schools and the absorption of graduates into the health system will end in unanticipated adverse consequences.
A universal minimal mass scale for present-day central black holes
NASA Astrophysics Data System (ADS)
Alexander, Tal; Bar-Or, Ben
2017-08-01
The early stages of massive black hole growth are poorly understood. High-luminosity active galactic nuclei at very high redshift further imply rapid growth soon after the Big Bang. Suggested formation mechanisms typically rely on the extreme conditions found in the early Universe (very low metallicity, very high gas or star density). It is therefore plausible that these black hole seeds were formed in dense environments, at least a Hubble time ago (z > 1.8 for a look-back time of tH = 10 Gyr). Intermediate-mass black holes (IMBHs) of mass M• ≈ 10^2-10^5 solar masses, M⊙, are the long-sought missing link between stellar black holes, born of supernovae, and massive black holes, tied to galaxy evolution by empirical scaling relations. The relation between black hole mass, M•, and stellar velocity dispersion, σ★, that is observed in the local Universe over more than about three decades in massive black hole mass, correlates M• and σ★ on scales that are well outside the massive black hole's radius of dynamical influence, rh ≈ GM•/σ★^2. We show that low-mass black hole seeds that accrete stars from locally dense environments in galaxies following a universal M•/σ★ relation grow over the age of the Universe to be above M0 ≈ 3×10^5 M⊙ (5% lower limit), independent of the unknown seed masses and formation processes. The mass M0 depends weakly on the uncertain formation redshift, and sets a universal minimal mass scale for present-day black holes. This can explain why no IMBHs have yet been found, and it implies that present-day galaxies with σ★ < S0 ≈ 40 km s^-1 lack a central black hole, or formed it only recently. A dearth of IMBHs at low redshifts has observable implications for tidal disruptions and gravitational wave mergers.
Spontaneous breaking of scale invariance in a D = 3 U(N) model with Chern-Simons gauge fields
Bardeen, William A.; Moshe, Moshe
2014-06-18
We study spontaneous breaking of scale invariance in the large-N limit of three-dimensional U(N)_κ Chern-Simons theories coupled to a scalar field in the fundamental representation. When a λ6(φ†·φ)³ self-interaction term is added to the action, we find a massive phase at a certain critical value of a combination of the λ6 and 't Hooft λ = N/κ couplings. This model has attracted recent attention since at finite κ it contains a singlet sector which is conjectured to be dual to Vasiliev's higher-spin gravity on AdS4. Our paper concentrates on the massive phase of the 3d boundary theory. We discuss the advantage of introducing masses in the boundary theory through spontaneous breaking of scale invariance.
NASA Astrophysics Data System (ADS)
Pletikapić, Galja; Ivošević DeNardis, Nadica
2017-01-01
Surface analytical methods are applied to examine the environmental status of seawaters. The present overview emphasizes the advantages of combining surface analytical methods, applied to hazardous situations in the Adriatic Sea, such as monitoring the first aggregation phases of dissolved organic matter in order to potentially predict massive mucilage formation, and testing oil spill cleanup. Such an approach, based on fast and direct characterization of organic matter and its high-resolution visualization, provides a continuous-scale description of organic matter from micro- to nanometre scales. The electrochemical method of chronoamperometry at the dropping mercury electrode meets the requirements for monitoring purposes due to the simple and fast analysis of a large number of natural seawater samples, enabling simultaneous differentiation of organic constituents. In contrast, atomic force microscopy allows direct visualization of biotic and abiotic particles and provides insight into the structural organization of marine organic matter at micro- and nanometre scales. In the future, merging data at different spatial scales, taking into account experimental input on the micrometre scale, observations on the metre scale and modelling on the kilometre scale, will be important for developing sophisticated technological platforms for knowledge transfer, reports and maps applicable to marine environmental protection and management of coastal areas, especially for tourism, fisheries and cruise-ship traffic.
Hu, Wei; Lin, Lin; Yang, Chao
2015-12-21
With the help of our recently developed massively parallel DGDFT (Discontinuous Galerkin Density Functional Theory) methodology, we perform large-scale Kohn-Sham density functional theory calculations on phosphorene nanoribbons with armchair edges (ACPNRs) containing a few thousand to ten thousand atoms. The use of DGDFT allows us to systematically achieve a conventional plane-wave basis set type of accuracy, but with a much smaller number (about 15) of adaptive local basis (ALB) functions per atom for this system. The relatively small number of degrees of freedom required to represent the Kohn-Sham Hamiltonian, together with the use of the pole expansion and selected inversion (PEXSI) technique that circumvents the need to diagonalize the Hamiltonian, results in a highly efficient and scalable computational scheme for analyzing the electronic structures of ACPNRs as well as their dynamics. The total wall clock time for calculating the electronic structures of large-scale ACPNRs containing 1080-10,800 atoms is only 10-25 s per self-consistent field (SCF) iteration, with accuracy fully comparable to that obtained from conventional plane-wave DFT calculations. For the ACPNR system, we observe that the DGDFT methodology can scale to 5000-50,000 processors. We use DGDFT-based ab initio molecular dynamics (AIMD) calculations to study the thermodynamic stability of ACPNRs. Our calculations reveal that a 2 × 1 edge reconstruction appears in ACPNRs at room temperature.
An analysis of the crossover between local and massive separation on airfoils
NASA Technical Reports Server (NTRS)
Barnett, M.; Carter, J. E.
1987-01-01
Massive separation on airfoils operating at high Reynolds number is an important problem for the aerodynamicist, since its onset generally determines the limiting performance of an airfoil, and it can lead to serious problems related to aircraft control as well as turbomachinery operation. The phenomenon of crossover between local separation and massive separation induced by airfoil thickness is investigated for realistic airfoil geometries in low-speed (incompressible) flow. The problem is studied both in the asymptotic limit of infinite Reynolds number using triple-deck theory, and at finite Reynolds number using interacting boundary-layer theory. Numerical results are presented which follow the evolution of the flow as it develops from a mildly separated state to one dominated by the massively separated flow structure as the thickness of the airfoil geometry is systematically increased. The effect of turbulence upon the evolution of the flow is considered, and its impact is significant, the principal effect being the suppression of the onset of separation. Finally, the effect of surface suction and injection for boundary-layer control is considered. The approach developed here provides a valuable tool for the analysis of boundary-layer separation up to and beyond stall. Another important conclusion is that interacting boundary-layer theory provides an efficient tool for the analysis of the effect of turbulence and boundary-layer control upon separated viscous flow.
NASA Astrophysics Data System (ADS)
Dessauges-Zavadsky, Miroslava; Cava, Antonio; Richard, Johan; Schaerer, Daniel; Egami, Eiichi
2015-08-01
Deep and high-resolution imaging has revealed clumpy, rest-frame UV morphologies among z=1-3 galaxies. The majority of these galaxies have been shown to be dominated by ordered disk rotation, which led to the conclusion that the observed giant clumps, resolved on kpc scales, are generated by disk fragmentation due to gravitational instability. State-of-the-art numerical simulations show that they may play a relevant role in galaxy evolution, contributing to galactic bulge formation. Despite the high resolution attained by the most advanced ground- and space-based facilities, as well as in numerical simulations, the intrinsic typical masses and sizes of these star-forming clumps remain unconstrained, since they are barely resolved at z=1-3. Thanks to the amplification and stretching power provided by strong gravitational lensing, we can reach the spatial resolving power needed to unveil the physics of these star-forming regions. We report on the study of clumpy star formation observed in the Cosmic Snake, a strongly lensed galaxy at z=1, representative of the typical star-forming population close to the peak of the Universe's star-formation activity. About 20 clumps are identified in the HST images. Benefiting from extreme amplification factors of up to 100, they are resolved down to an intrinsic scale of 100 pc, never reached before at z=1. The HST multi-wavelength analysis of these individual star clusters allows us to determine their intrinsic physical properties, showing stellar masses (Ms) from 10^6 to 10^8.3 Msun, sizes from 100 to 400 pc, and ages from 10^6 to 10^8.5 yr. The masses we find are in line with new, very high resolution numerical simulations, which also suggest that the massive giant clumps previously observed at high redshift with Ms as high as 10^9-10^10 Msun may suffer from low-resolution effects, being unresolved conglomerates of less massive star clusters. We also compare our results with those of massive young clusters in nearby galaxies. Our approved ALMA observations will reach the same 100 pc scale, which is essential for the study of associated giant molecular clouds in this galaxy.
A Mechanical Model of Brownian Motion for One Massive Particle Including Slow Light Particles
NASA Astrophysics Data System (ADS)
Liang, Song
2018-01-01
We provide a connection between Brownian motion and a classical mechanical system. Precisely, we consider a system of one massive particle interacting with an ideal gas, evolved according to non-random mechanical principles via interaction potentials, without any assumption requiring that the initial velocities of the environmental particles be "fast enough". We prove the convergence of the (position, velocity)-process of the massive particle under a certain scaling limit, in which the mass of the environmental particles converges to 0 while their density and velocities go to infinity, and we give the precise expression of the limiting process, a diffusion process.
NASA Astrophysics Data System (ADS)
Shirakata, Hikari; Kawaguchi, Toshihiro; Okamoto, Takashi; Ishiyama, Tomoaki
2017-09-01
We present the galactic stellar age - velocity dispersion relation obtained from a semi-analytic model of galaxy formation. We divide galaxies into two populations: those hosting over-massive and under-massive black holes (BHs) relative to the best-fitting BH mass - velocity dispersion relation. We find that galaxies with larger velocity dispersion have older stellar ages. We also find that galaxies with over-massive BHs have older stellar ages. These results are consistent with the observational results of Martin-Navarro et al. (2016). We also test the model with weak AGN feedback and find that galaxies with larger velocity dispersion have younger stellar ages.
N-body simulations with a cosmic vector for dark energy
NASA Astrophysics Data System (ADS)
Carlesi, Edoardo; Knebe, Alexander; Yepes, Gustavo; Gottlöber, Stefan; Jiménez, Jose Beltrán.; Maroto, Antonio L.
2012-07-01
We present the results of a series of cosmological N-body simulations of a vector dark energy (VDE) model, performed using a suitably modified version of the publicly available GADGET-2 code. The set-ups of our simulations were calibrated with a twofold aim: (1) to analyse the large-scale distribution of massive objects and (2) to determine the properties of halo structure in this different framework. We observe that structure formation is enhanced in VDE, since the mass function at high redshift is boosted by up to a factor of 10 with respect to Λ cold dark matter (ΛCDM), possibly alleviating tensions with observations of massive clusters at high redshift and an early reionization epoch. Significant differences can also be found in the value of the growth factor, which in VDE shows a completely different behaviour, and in the distribution of voids, which in this cosmology are on average smaller and less abundant. We further studied the structure of dark matter haloes more massive than 5 × 10^13 h^-1 M⊙, finding that no substantial difference emerges when comparing the spin parameter, shape, triaxiality and profiles of structures evolved under the different cosmological pictures. Nevertheless, minor differences can be found in the concentration-mass relation and the two-point correlation function, both showing different amplitudes and steeper slopes. Using an additional series of simulations of a ΛCDM scenario with the same Ωm and σ8 used in the VDE cosmology, we have been able to establish whether the modifications induced in the new cosmological picture were due to the particular nature of the dynamical dark energy or a straightforward consequence of the cosmological parameters. On large scales, the dynamical effects of the cosmic vector field can be seen in the peculiar evolution of the cluster number density function with redshift, in the shape of the mass function, in the distribution of voids and in the characteristic form of the growth index γ(z). On smaller scales, the internal properties of haloes are almost unaffected by the change of cosmology, since no statistical difference can be observed in the characteristics of halo profiles, spin parameters, shapes and triaxialities. Only halo masses and concentrations show a substantial increase, which can, however, be attributed to the change in the cosmological parameters.
NASA Astrophysics Data System (ADS)
Keszthelyi, Zsolt; Wade, Gregg A.; Petit, Veronique
2017-11-01
Large-scale dipolar surface magnetic fields have been detected in a fraction of OB stars; however, only a few stellar evolution models of massive stars have considered the impact of these fossil fields. We are performing 1D hydrodynamical model calculations that take into account the evolutionary consequences of magnetospheric-wind interactions in a simplified parametric way. Two effects are considered: i) the global mass-loss rates are reduced due to mass-loss quenching, and ii) the surface angular momentum loss is enhanced due to magnetic braking. As a result of the magnetic mass-loss quenching, the masses of magnetic massive stars remain close to their initial masses. Thus magnetic massive stars - even at Galactic metallicity - have the potential to be progenitors of "heavy" stellar-mass black holes. Similarly, at Galactic metallicity, the formation of pair-instability supernovae is plausible with a magnetic progenitor.
Perceptions of Authority in a Massive Open Online Course: An Intercultural Study
ERIC Educational Resources Information Center
Andersen, Bjarke Lindsø; Na-songkhla, Jaitip; Hasse, Cathrine; Nordin, Norazah; Norman, Helmi
2018-01-01
Spurred on by rapid advances of technology, massive open online courses (MOOCs) have proliferated over the past decade. They pride themselves on making (higher) education available to more people at reduced (or no) cost compared to traditional university schemes and on being inclusive in terms of admitting vast numbers of students from all over…
Massive Open Online Courses (MOOCS): Emerging Trends in Assessment and Accreditation
ERIC Educational Resources Information Center
Chauhan, Amit
2014-01-01
In 2014, Massive Open Online Courses (MOOCs) are expected to witness a phenomenal growth in student registration compared to the previous years (Lee, Stewart, & Claugar-Pop, 2014). As MOOCs continue to grow in number, there has been an increasing focus on assessment and evaluation. Because of the huge enrollments in a MOOC, it is impossible…
COMPACT E+A GALAXIES AS A PROGENITOR OF MASSIVE COMPACT QUIESCENT GALAXIES AT 0.2 < z < 0.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zahid, H. Jabran; Hochmuth, Nicholas Baeza; Geller, Margaret J.
We search the Sloan Digital Sky Survey and the Baryon Oscillation Spectroscopic Survey to identify ∼5500 massive compact quiescent galaxy candidates at 0.2 < z < 0.8. We robustly classify a subsample of 438 E+A galaxies based on their spectral properties and make this catalog publicly available. We examine the sizes, stellar population ages, and kinematics of galaxies in the sample and show that the physical properties of compact E+A galaxies suggest that they are a progenitor of massive compact quiescent galaxies. Thus, two classes of objects—compact E+A and compact quiescent galaxies—may be linked by a common formation scenario. The typical stellar population age of compact E+A galaxies is <1 Gyr. The existence of compact E+A galaxies with young stellar populations at 0.2 < z < 0.8 means that some compact quiescent galaxies first appear at intermediate redshifts. We derive a lower limit for the number density of compact E+A galaxies. Assuming passive evolution, we convert this number density into an appearance rate of new compact quiescent galaxies at 0.2 < z < 0.8. The lower-limit number density of compact quiescent galaxies that may appear at z < 0.8 is comparable to the lower limit of the total number density of compact quiescent galaxies at these intermediate redshifts. Thus, a substantial fraction of the z < 0.8 massive compact quiescent galaxy population may descend from compact E+A galaxies at intermediate redshifts.
NASA Astrophysics Data System (ADS)
Choi, Junil; Love, David J.; Bidigare, Patrick
2014-10-01
The concept of deploying a large number of antennas at the base station, often called massive multiple-input multiple-output (MIMO), has drawn considerable interest because of its potential ability to revolutionize current wireless communication systems. Most literature on massive MIMO systems assumes time division duplexing (TDD), although frequency division duplexing (FDD) dominates current cellular systems. Due to the large number of transmit antennas at the base station, currently standardized approaches would require a large percentage of the precious downlink and uplink resources in FDD massive MIMO to be used for training signal transmission and channel state information (CSI) feedback. To reduce the overhead of the downlink training phase, we propose practical open-loop and closed-loop training frameworks in this paper. We assume the base station and the user share a common set of training signals in advance. In open-loop training, the base station transmits training signals in a round-robin manner, and the user successively estimates the current channel using long-term channel statistics such as temporal and spatial correlations and previous channel estimates. In closed-loop training, the user feeds back the best training signal to be sent in the future based on channel prediction and the previously received training signals. With a small amount of feedback from the user to the base station, closed-loop training offers better performance in the data communication phase, especially when the signal-to-noise ratio is low, the number of transmit antennas is large, or prior channel estimates are not accurate at the beginning of the communication setup, all of which would be mostly beneficial for massive MIMO systems.
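A toy version of the open-loop framework under a first-order Gauss-Markov channel model: the base station cycles round-robin through an orthonormal pilot set while the user tracks the channel with a Kalman (successive MMSE) estimator. Everything here (the names, the i.i.d. spatial statistics, the scalar-observation model) is an illustrative assumption, not the paper's exact formulation, which also exploits spatial correlation.

```python
import numpy as np

def open_loop_training(num_tx, num_steps, eta=0.95, snr_db=10, seed=0):
    """Round-robin pilot training with successive MMSE (Kalman)
    channel estimation for one single-antenna user.

    Channel model: h[t] = eta*h[t-1] + sqrt(1-eta^2)*w[t] (unit variance).
    Each step the BS sends one pilot s_t from an orthonormal set and
    the user observes y_t = s_t^H h[t] + noise. Returns per-step MSE.
    """
    rng = np.random.default_rng(seed)
    sigma2 = 10 ** (-snr_db / 10)
    pilots = np.eye(num_tx, dtype=complex)       # round-robin basis
    cgauss = lambda n: (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    h = cgauss(num_tx)
    h_hat = np.zeros(num_tx, dtype=complex)
    P = np.eye(num_tx, dtype=complex)            # error covariance
    mse = []
    for t in range(num_steps):
        h = eta * h + np.sqrt(1 - eta**2) * cgauss(num_tx)   # channel ages
        # Kalman predict
        h_hat = eta * h_hat
        P = (eta**2) * P + (1 - eta**2) * np.eye(num_tx)
        # observe one pilot direction (scalar measurement)
        s = pilots[t % num_tx]
        y = s.conj() @ h + np.sqrt(sigma2) * cgauss(1)[0]
        # Kalman update
        k = P @ s / (s.conj() @ P @ s + sigma2)
        h_hat = h_hat + k * (y - s.conj() @ h_hat)
        P = P - np.outer(k, s.conj() @ P)
        mse.append(float(np.mean(np.abs(h - h_hat) ** 2)))
    return np.array(mse)

print(open_loop_training(64, 200)[-1])   # residual MSE after training
```

The closed-loop variant would replace the round-robin pilot choice with the user feeding back which pilot direction is currently most informative given its prediction error covariance P.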
Homogeneous, anisotropic three-manifolds of topologically massive gravity
NASA Astrophysics Data System (ADS)
Nutku, Y.; Baekler, P.
1989-10-01
We present a new class of exact solutions of Deser, Jackiw, and Templeton's theory (DJT) of topologically massive gravity which consists of homogeneous, anisotropic manifolds. In these solutions the coframe is given by the left-invariant 1-forms of 3-dimensional Lie algebras up to constant scale factors. These factors are fixed in terms of the DJT coupling constant μ which is the constant of proportionality between the Einstein and Cotton tensors in 3-dimensions. Differences between the scale factors result in anisotropy which is a common feature of topologically massive 3-manifolds. We have found that only Bianchi Types VI, VIII, and IX lead to nontrivial solutions. Among these, a Bianchi Type IX, squashed 3-sphere solution of the Euclideanized DJT theory has finite action. Bianchi Type VIII, IX solutions can variously be embedded in the de Sitter/anti-de Sitter space. That is, some DJT 3-manifolds that we shall present here can be regarded as the basic constituent of anti-de Sitter space which is the ground state solution in higher dimensional generalization of Einstein's general relativity.
Analytical Cost Metrics : Days of Future Past
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov
As we move towards the exascale era, new architectures must be capable of running massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, and image/signal processing to computational science and bioinformatics. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore, the major challenge we face in computing systems research is: "how to solve massive-scale computational problems in the most time/power/energy-efficient manner?"
On the uniqueness of the non-minimal matter coupling in massive gravity and bigravity
Huang, Qing-Guo; Ribeiro, Raquel H.; Xing, Yu-Hang; ...
2015-07-03
In de Rham–Gabadadze–Tolley (dRGT) massive gravity and bi-gravity, a non-minimal matter coupling involving both metrics generically reintroduces the Boulware–Deser (BD) ghost. A non-minimal matter coupling via a simple, yet specific composite metric has been proposed, which eliminates the BD ghost below the strong coupling scale. Working explicitly in the metric formulation and for arbitrary spacetime dimensions, we show that this composite metric is the unique consistent non-minimal matter coupling below the strong coupling scale, which emerges out of two diagnostics, namely, the absence of Ostrogradski ghosts in the decoupling limit and the absence of the BD ghost from matter quantum loop corrections.
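For reference, the simple composite metric referred to above is usually written in the dRGT matter-coupling literature in the form below (recalled as an assumption, not quoted from this paper), where α and β are constants and the matrix square root satisfies (√(g⁻¹f))·(√(g⁻¹f)) = g⁻¹f:

```latex
g^{\mathrm{eff}}_{\mu\nu} \;=\; \alpha^{2}\, g_{\mu\nu}
\;+\; 2\alpha\beta\, g_{\mu\rho}\left(\sqrt{g^{-1}f}\,\right)^{\rho}{}_{\nu}
\;+\; \beta^{2}\, f_{\mu\nu}.
```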
Observational constraints on varying neutrino-mass cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Chao-Qiang; Lee, Chung-Chi; Myrzakulov, R.
We consider generic models of quintessence and investigate the influence of massive neutrino matter with field-dependent masses on the matter power spectrum. In the case of minimally coupled neutrino matter, we examine the effect in tracker models with inverse power-law and double exponential potentials. We present detailed investigations for the scaling field with a steep exponential potential, non-minimally coupled to massive neutrino matter, and we derive constraints on field-dependent neutrino masses from the observational data.
A HIERARCHICAL MODELING FRAMEWORK FOR GEOLOGICAL STORAGE OF CARBON DIOXIDE
Carbon Capture and Storage, or CCS, is likely to be an important technology in a carbonconstrained world. CCS will involve subsurface injection of massive amounts of captured CO2, on a scale that has not previously been approached. The unprecedented scale of t...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonnenfeld, Alessandro; Treu, Tommaso; Marshall, Philip J.
2015-02-20
We investigate the cosmic evolution of the internal structure of massive early-type galaxies over half of the age of the universe. We perform a joint lensing and stellar dynamics analysis of a sample of 81 strong lenses from the Strong Lensing Legacy Survey and Sloan ACS Lens Survey and combine the results with a hierarchical Bayesian inference method to measure the distribution of dark matter mass and stellar initial mass function (IMF) across the population of massive early-type galaxies. Lensing selection effects are taken into account. We find that the dark matter mass projected within the inner 5 kpc increases for increasing redshift, decreases for increasing stellar mass density, but is roughly constant along the evolutionary tracks of early-type galaxies. The average dark matter slope is consistent with that of a Navarro-Frenk-White profile, but is not well constrained. The stellar IMF normalization is close to a Salpeter IMF at log M* = 11.5 and scales strongly with increasing stellar mass. No dependence of the IMF on redshift or stellar mass density is detected. The anti-correlation between dark matter mass and stellar mass density supports the idea of mergers being more frequent in more massive dark matter halos.
Sonnenfeld, Alessandro; Treu, Tommaso; Marshall, Philip J.; ...
2015-02-17
Here, we investigate the cosmic evolution of the internal structure of massive early-type galaxies over half of the age of the universe. We also perform a joint lensing and stellar dynamics analysis of a sample of 81 strong lenses from the Strong Lensing Legacy Survey and Sloan ACS Lens Survey and combine the results with a hierarchical Bayesian inference method to measure the distribution of dark matter mass and stellar initial mass function (IMF) across the population of massive early-type galaxies. Lensing selection effects are taken into account. Furthermore, we find that the dark matter mass projected within the inner 5 kpc increases for increasing redshift, decreases for increasing stellar mass density, but is roughly constant along the evolutionary tracks of early-type galaxies. The average dark matter slope is consistent with that of a Navarro-Frenk-White profile, but is not well constrained. The stellar IMF normalization is close to a Salpeter IMF at log M* = 11.5 and scales strongly with increasing stellar mass. No dependence of the IMF on redshift or stellar mass density is detected. The anti-correlation between dark matter mass and stellar mass density supports the idea of mergers being more frequent in more massive dark matter halos.
NASA Astrophysics Data System (ADS)
Kang, Dong Hun; Yun, Tae Sup
2018-02-01
We propose a new outflow boundary condition to minimize the capillary end effect in pore-scale CO2 displacement simulations. The Rothman-Keller lattice Boltzmann method with multi-relaxation-time collisions is implemented to handle a nonflat wall and inflow-outflow boundaries with physically acceptable fluid properties in a 2-D microfluidic chip domain. Introducing a mean capillary pressure, acting at the CO2-water interface, to the nonwetting fluid at the outlet effectively prevents the CO2 injection pressure from dropping suddenly upon CO2 breakthrough, so that continuous CO2 invasion and an increase of CO2 saturation are allowed. This phenomenon is most pronounced at a capillary number of logCa = -5.5, while capillary fingering and massive displacement of CO2 prevail at low and high capillary numbers, respectively. Simulations with different domain lengths in homogeneous and heterogeneous domains reveal that the capillary pressure and CO2 saturation near the inlet are reproducible under the proposed boundary condition. The residual CO2 saturation follows an increasing tendency with increasing capillary number, corroborated by experimental evidence. The determination of the mean capillary pressure and its sensitivity are also discussed. The proposed boundary condition is applicable to other pore-scale simulations to accurately capture the spatial distribution of the nonwetting fluid and the corresponding displacement ratio.
Let them fall where they may: congruence analysis in massive phylogenetically messy data sets.
Leigh, Jessica W; Schliep, Klaus; Lopez, Philippe; Bapteste, Eric
2011-10-01
Interest in congruence in phylogenetic data has largely focused on issues affecting multicellular organisms, and animals in particular, in which the level of incongruence is expected to be relatively low. In addition, assessment methods developed in the past have been designed for reasonably small numbers of loci and scale poorly for larger data sets. However, there are currently over a thousand complete genome sequences available and of interest to evolutionary biologists, and these sequences are predominantly from microbial organisms, whose molecular evolution is much less frequently tree-like than that of multicellular life forms. As such, the level of incongruence in these data is expected to be high. We present a congruence method that accommodates both very large numbers of genes and high degrees of incongruence. Our method uses clustering algorithms to identify subsets of genes based on similarity of phylogenetic signal. It involves only a single phylogenetic analysis per gene, and therefore, computation time scales nearly linearly with the number of genes in the data set. We show that our method performs very well with sets of sequence alignments simulated under a wide variety of conditions. In addition, we present an analysis of core genes of prokaryotes, often assumed to have been largely vertically inherited, in which we identify two highly incongruent classes of genes. This result is consistent with the complexity hypothesis.
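The clustering idea can be illustrated compactly. Below is a minimal sketch, not the authors' implementation: each gene tree is reduced to its set of bipartitions, pairwise distances are Robinson-Foulds-style symmetric differences, and SciPy's hierarchical clustering groups genes by similarity of phylogenetic signal. The toy trees and the choice of average linkage are assumptions of this sketch.

```python
# A minimal sketch of phylogenetic-signal clustering: gene trees are
# summarized by their sets of bipartitions (splits), compared with a
# Robinson-Foulds-style symmetric-difference distance, and clustered.
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Hypothetical gene trees, each encoded as a set of splits (frozensets of taxa).
gene_splits = {
    "geneA": {frozenset("ab"), frozenset("abc")},
    "geneB": {frozenset("ab"), frozenset("abd")},
    "geneC": {frozenset("cd"), frozenset("bcd")},
}

names = sorted(gene_splits)
n = len(names)
dist = np.zeros((n, n))
for i, j in combinations(range(n), 2):
    si, sj = gene_splits[names[i]], gene_splits[names[j]]
    dist[i, j] = dist[j, i] = len(si ^ sj)  # symmetric-difference distance

# One phylogenetic analysis per gene has already produced the trees;
# clustering then operates only on the pairwise distance matrix.
Z = linkage(squareform(dist), method="average")
clusters = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(names, clusters)))  # geneA and geneB group together
```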
The progenitors of the first red sequence galaxies at z ~ 2
NASA Astrophysics Data System (ADS)
Barro, G.; Faber, S.; Perez-Gonzalez, P.; Koo, D.; Williams, C.; Kocevski, D.; Trump, J.; Mozena, M.
2013-07-01
Nearby galaxies come in two flavors: red quiescent galaxies (QGs) with old stellar populations, and blue young star-forming galaxies (SFGs). This color bimodality seems to be already in place at z = 2-3, and it also presents strong correlations with size and morphology. Surprisingly, massive QGs at higher redshifts are ~5 times smaller than local, equal-mass analogs. In contrast, most of the massive SFGs at these redshifts are still relatively large disks. The strong bimodality in both SFR and size indicates that some SFGs must experience strong structural transformations, accompanied by a rapid truncation of star formation, to match the observed properties of QGs. Using high-resolution HST/WFC3 F160W imaging from the CANDELS survey in GOODS-S and UDS, along with multi-wavelength ancillary data, we analyze stellar masses, SFRs, and sizes of a sample of massive (M* > 10^10 M⊙) galaxies at z = 1.4-3.0 to identify a population of compact SFGs with structural properties similar to those of compact QGs at z ~ 2. We also find that the number density of QGs increases rapidly since z = 3. Among these, the number of compact QGs builds up first, and only at z < 1.8 do we start finding a sizable number of extended QGs. This suggests that the bulk of these galaxies are assembled at late times by both continuous migration (quenching) of non-compact SFGs and size growth of cQGs. As a result of this growth, the population of cQGs disappears by z ~ 1. Simultaneously, we identify a population of compact SFGs (cSFGs) whose number density decreases steadily with time since z = 3.0, being almost completely absent at z < 1.4. The cSFGs make up less than 20% of all massive SFGs, but they present number densities similar to those of cQGs down to z ~ 2, suggesting an evolutionary link between the two populations.
PoMiN: A Post-Minkowskian N-body Solver
NASA Astrophysics Data System (ADS)
Feng, Justin; Baumann, Mark; Hall, Bryton; Doss, Joel; Spencer, Lucas; Matzner, Richard
2018-06-01
In this paper, we introduce PoMiN, a lightweight N-body code based on the post-Minkowskian N-body Hamiltonian of Ledvinka et al., which includes general relativistic effects up to first order in Newton's constant G and all orders in the speed of light c. PoMiN is written in C and uses a fourth-order Runge–Kutta integration scheme. PoMiN handles an arbitrary number of particles (both massive and massless), with a computational complexity that scales as O(N^2). We describe the methods we used to simplify and organize the Hamiltonian, and the tests we performed (convergence, conservation, and analytical comparison tests) to validate the code.
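As a rough illustration of the code's structure, here is a minimal sketch pairing a classical fourth-order Runge-Kutta integrator with an O(N^2) pairwise interaction. The force law below is plain Newtonian gravity in geometrized units, an assumption of this sketch; the actual PoMiN Hamiltonian adds the first-order-in-G post-Minkowskian terms and supports massless particles.

```python
# A minimal RK4 N-body sketch with O(N^2) pairwise Newtonian forces.
import numpy as np

G = 1.0  # geometrized units, an assumption of this sketch

def accelerations(pos, mass):
    """O(N^2) pairwise Newtonian accelerations."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dr = pos[j] - pos[i]
            acc[i] += G * mass[j] * dr / np.linalg.norm(dr) ** 3
    return acc

def rk4_step(pos, vel, mass, dt):
    """One classical fourth-order Runge-Kutta step for (pos, vel)."""
    k1v = accelerations(pos, mass);              k1x = vel
    k2v = accelerations(pos + 0.5*dt*k1x, mass); k2x = vel + 0.5*dt*k1v
    k3v = accelerations(pos + 0.5*dt*k2x, mass); k3x = vel + 0.5*dt*k2v
    k4v = accelerations(pos + dt*k3x, mass);     k4x = vel + dt*k3v
    pos = pos + dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
    vel = vel + dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return pos, vel

# Two equal masses on a circular orbit as a quick sanity check.
pos = np.array([[0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]])
vel = np.array([[0.0, 0.5, 0.0], [0.0, -0.5, 0.0]])
mass = np.array([0.5, 0.5])
for _ in range(1000):
    pos, vel = rk4_step(pos, vel, mass, dt=0.01)
```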
Software Engineering for Scientific Computer Simulations
NASA Astrophysics Data System (ADS)
Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.
2004-11-01
Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small-processor-count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large-scale scientific computing projects within DOE, academia, and the DoD. The major lessons learned include attention to sound project management, including setting reasonable and achievable requirements; building a good code team; enforcing customer focus; carrying out verification and validation; and selecting optimal computational mathematics approaches.
Gamma Rays from the Galactic Bulge and Large Extra Dimensions
NASA Astrophysics Data System (ADS)
Cassé, Michel; Paul, Jacques; Bertone, Gianfranco; Sigl, Günter
2004-03-01
An intriguing feature of extra dimensions is the possible production of Kaluza-Klein gravitons by nucleon-nucleon bremsstrahlung in the course of the core collapse of massive stars, with the gravitons then being trapped around the newly born neutron stars and decaying into two gamma rays, making neutron stars gamma-ray sources. We strengthen the limits on the compactification radius of extra dimensions for a small number n of them, or alternatively on the fundamental scale of quantum gravity, by considering the gamma-ray emission of the whole population of neutron stars sitting in the Galactic bulge, instead of the closest member of this category. For n = 1 the constraint on the compactification radius is R < 400 μm.
Lee, JeSuk; Lee, Weon-Young; Hwang, Jang-Sun; Stack, Steven John
2014-08-01
This study investigated the nature of media coverage of a national entertainer's suicide and its impact on subsequent suicides. After the celebrity suicide, the number of suicide-related articles surged to roughly 80 times that of the week prior. Many articles (37.1%) violated several critical items of the World Health Organization suicide reporting guidelines, such as describing the suicide method in detail. Most gender and age subgroups were at significantly higher risk of suicide during the 4 weeks after the celebrity suicide. The results imply that massive and noncompliant media coverage of a celebrity suicide can cause a large-scale copycat effect. © 2014 The American Association of Suicidology.
NASA Astrophysics Data System (ADS)
Rodríguez-Tzompantzi, Omar; Escalante, Alberto
2018-05-01
By applying the Faddeev-Jackiw symplectic approach, we show that both the local gauge symmetry and the constraint structure of topologically massive gravity with a cosmological constant Λ can be systematically identified, both being elegantly encoded in the zero-modes of the symplectic matrix. Thereafter, via a suitable partial gauge-fixing procedure (the time gauge), we calculate the quantization bracket structure (generalized Faddeev-Jackiw brackets) for the dynamical variables and confirm that the number of physical degrees of freedom is one. This approach provides an alternative way to explore the dynamical content of massive gravity models.
Integrands for QCD rational terms and N = 4 SYM from massive CSW rules
NASA Astrophysics Data System (ADS)
Elvang, Henriette; Freedman, Daniel Z.; Kiermaier, Michael
2012-06-01
We use massive CSW rules to derive explicit compact expressions for integrands of rational terms in QCD with any number of external legs. Specifically, we present all-n integrands for the one-loop all-plus and one-minus gluon amplitudes in QCD. We extract the finite part of spurious external-bubble contributions systematically; this is crucial for the application of integrand-level CSW rules in theories without supersymmetry. Our approach yields integrands that are independent of the choice of CSW reference spinor even before integration. Furthermore, we present a recursive derivation of the recently proposed massive CSW-style vertex expansion for massive tree amplitudes and loop integrands on the Coulomb branch of N = 4 SYM. The derivation requires a careful study of boundary terms in all-line shift recursion relations, and provides a rigorous (albeit indirect) proof of the recently proposed construction of massive amplitudes from soft limits of massless on-shell amplitudes. We show that the massive vertex expansion manifestly preserves all holomorphic and half of the anti-holomorphic supercharges, diagram by diagram, even off-shell.
Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.
Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel
2013-08-01
Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, the development of high resolution imaging technologies, and contributions from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS in query response and its high scalability on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with a parallel spatial DBMS (SDBMS) and exceeds it for compute-intensive queries. Hadoop-GIS is available as a library for processing spatial queries and as an integrated software package in Hive.
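The partition-then-join pattern at the heart of such systems can be sketched in a single process. The following toy code, with hypothetical data and tile size, routes each bounding box to every grid tile it overlaps, joins locally per tile, and de-duplicates pairs so that objects straddling tile boundaries are not reported twice, the same boundary-object issue the abstract mentions; the real system distributes the per-tile work over MapReduce.

```python
# A minimal single-process sketch of grid-partitioned spatial join with
# boundary-object de-duplication.
from collections import defaultdict
from itertools import product

TILE = 10.0  # tile edge length, an arbitrary choice for this sketch

def tiles_for(box):
    """All grid tiles a bounding box (xmin, ymin, xmax, ymax) overlaps."""
    xmin, ymin, xmax, ymax = box
    xs = range(int(xmin // TILE), int(xmax // TILE) + 1)
    ys = range(int(ymin // TILE), int(ymax // TILE) + 1)
    return product(xs, ys)

def intersects(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def spatial_join(layer_a, layer_b):
    buckets = defaultdict(lambda: ([], []))
    for oid, box in layer_a:
        for t in tiles_for(box):
            buckets[t][0].append((oid, box))
    for oid, box in layer_b:
        for t in tiles_for(box):
            buckets[t][1].append((oid, box))
    out = set()  # the set de-duplicates pairs found in several tiles
    for side_a, side_b in buckets.values():
        for (ida, ba), (idb, bb) in product(side_a, side_b):
            if intersects(ba, bb):
                out.add((ida, idb))
    return out

a = [("a1", (0, 0, 12, 4)), ("a2", (30, 30, 35, 35))]
b = [("b1", (9, 2, 15, 8)), ("b2", (50, 50, 55, 55))]
print(spatial_join(a, b))  # {('a1', 'b1')}, found once despite two shared tiles
```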
A modeling analysis program for the JPL table mountain Io sodium cloud data
NASA Technical Reports Server (NTRS)
Smyth, William H.; Goldberg, Bruce A.
1988-01-01
Research in the third and final year of this project is divided into three main areas: (1) completion of data processing and calibration for 34 of the 1981 Region B/C images, selected from the massive JPL sodium cloud data set; (2) identification and examination of the basic features and observed changes in the morphological characteristics of the sodium cloud images; and (3) successful physical interpretation of these basic features and observed changes using the highly developed numerical sodium cloud model at AER. The modeling analysis has led to a number of definite conclusions regarding the local structure of Io's atmosphere, the gas escape mechanism at Io, and the presence of an east-west electric field and a System III longitudinal asymmetry in the plasma torus. Large-scale stability, as well as some smaller-scale time variability, of both the sodium cloud and the structure of the plasma torus over a several-year period is also discussed.
The halo model in a massive neutrino cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massara, Elena; Villaescusa-Navarro, Francisco; Viel, Matteo, E-mail: emassara@sissa.it, E-mail: villaescusa@oats.inaf.it, E-mail: viel@oats.inaf.it
2014-12-01
We provide a quantitative analysis of the halo model in the context of massive neutrino cosmologies. We discuss all the ingredients necessary to model the non-linear matter and cold dark matter power spectra and compare with the results of N-body simulations that incorporate massive neutrinos. Our neutrino halo model is able to capture the non-linear behavior of matter clustering with ~20% accuracy up to the very non-linear scale of k = 10 h/Mpc (which would be affected by baryon physics). The largest discrepancies arise in the range k = 0.5-1 h/Mpc, where the 1-halo and 2-halo terms are comparable, and are present also in a massless neutrino cosmology. However, at scales k < 0.2 h/Mpc our neutrino halo model agrees with the results of N-body simulations at the level of 8% for total neutrino masses of < 0.3 eV. We also model the neutrino non-linear density field as a sum of a linear and a clustered component and predict the neutrino power spectrum and the cold dark matter-neutrino cross-power spectrum up to k = 1 h/Mpc with ~30% accuracy. For masses below 0.15 eV the neutrino halo model captures the neutrino-induced suppression, cast in terms of matter power ratios between massive and massless scenarios, with 2% agreement with the results of N-body/neutrino simulations. Finally, we provide a simple application of the halo model: the computation of the clustering of galaxies, in massless and massive neutrino cosmologies, using a simple halo occupation distribution scheme and our halo model extension.
Neumann, Julie A; Zgonis, Miltiadis H; Rickert, Kathleen D; Bradley, Kendall E; Kremen, Thomas J; Boggess, Blake R; Toth, Alison P
2017-05-01
Management of massive rotator cuff tears in shoulders without glenohumeral arthritis remains problematic for surgeons. Repairs of massive rotator cuff tears have failure rates of 20% to 94% at 1 to 2 years postoperatively, as demonstrated with arthrography, ultrasound, and magnetic resonance imaging. Additionally, inconsistent outcomes have been reported with debridement alone of massive rotator cuff tears, and limitations have been seen with other current methods of operative intervention, including arthroplasty and tendon transfers. We hypothesized that the use of an interposition porcine acellular dermal matrix xenograft in patients with massive rotator cuff tears would result in improved subjective outcomes, postoperative pain, function, range of motion, and strength. Case series; Level of evidence, 4. Sixty patients (61 shoulders) were prospectively observed for a mean of 50.3 months (range, 24-63 months) after repair of massive rotator cuff tears with porcine acellular dermal matrix xenograft as an interposition graft. Subjective outcome data were obtained with a visual analog scale for pain (0-10, 0 = no pain) and the Modified American Shoulder and Elbow Surgeons (MASES) score. Active range of motion in flexion, external rotation, and internal rotation was recorded. Strength in the supraspinatus and infraspinatus muscles was assessed manually on a 10-point scale and by handheld dynamometer. Ultrasound was used to assess the integrity of the repair at latest follow-up. Mean visual analog scale pain score decreased from 4.0 preoperatively to 1.0 postoperatively (P < .001). Mean active forward flexion improved from 140.7° to 160.4° (P < .001), external rotation at 0° of abduction from 55.6° to 70.1° (P = .001), and internal rotation at 90° of abduction from 52.0° to 76.2° (P < .001). Supraspinatus manual strength increased from 7.7 to 8.8 (P < .001) and infraspinatus manual strength from 7.7 to 9.3 (P < .001). Mean dynamometric strength in forward flexion was 77.7 N in nonoperative shoulders (shoulders that did not undergo surgery) and 67.8 N (P < .001) in operative shoulders (shoulders that underwent rotator cuff repair with interposition porcine dermal matrix xenograft). Mean dynamometric strength in external rotation was 54.5 N in nonoperative shoulders and 50.1 N in operative shoulders (P = .04). The average postoperative MASES score was 87.8. Musculoskeletal ultrasound showed that 91.8% (56 of 61) of repairs were fully intact; 3.3% (2 of 61), partially intact; and 4.9% (3 of 61), not intact. Patients who underwent repair of massive rotator cuff tears with an interposition porcine acellular dermal matrix graft have good subjective function as assessed by the MASES score. Patients had significant improvement in pain, range of motion, and manual muscle strength. Postoperative ultrasound demonstrated that the repair was completely intact in 91.8% of patients, a vast improvement compared with results previously reported for primary repairs of massive rotator cuff tears.
Abelian Higgs cosmic strings: Small-scale structure and loops
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hindmarsh, Mark; Stuckey, Stephanie; Bevis, Neil
2009-06-15
Classical lattice simulations of the Abelian Higgs model are used to investigate small-scale structure and loop distributions in cosmic string networks. Use of the field theory ensures that the small-scale physics is captured correctly. The results confirm the analytic predictions of Polchinski and Rocha [29] for the two-point correlation function of the string tangent vector, with a power law from length scales of order the string core width up to the horizon scale. An analysis of the size distribution of string loops gives a very low number density, of order 1 per horizon volume, in contrast with Nambu-Goto simulations. Further, our loop distribution function does not support the detailed analytic predictions for loop production derived by Dubath et al. [30]. Better agreement with our data is found with a model based on loop fragmentation [32], coupled with a constant rate of energy loss into massive radiation. Our results show a strong energy-loss mechanism, which allows the string network to scale without gravitational radiation, but which is not due to the production of string-width loops. From the evidence of small-scale structure we argue for a partial explanation of the scale-separation problem: how energy in the very low frequency modes of the string network is transformed into the very high frequency modes of gauge and Higgs radiation. We propose a picture of string network evolution which reconciles the apparent differences between Nambu-Goto and field theory simulations.
Proxy-equation paradigm: A strategy for massively parallel asynchronous computations
NASA Astrophysics Data System (ADS)
Mittal, Ankita; Girimaji, Sharath
2017-09-01
Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order, and thus (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.
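For concreteness, here is a minimal synchronous baseline for the one-dimensional advection-diffusion equation used in the proof-of-concept above. In the proxy-equation strategy, the values crossing PE boundaries would be allowed to be stale and the equation's coefficients modified a priori to offset the resulting error; that correction is not implemented here, and the scheme (upwind advection, central diffusion), grid, and parameters are assumptions of this sketch.

```python
# A minimal synchronous solver for u_t + c u_x = nu u_xx on a periodic
# grid, using first-order upwind advection and central diffusion.
import numpy as np

c, nu = 1.0, 0.01                    # advection speed and diffusivity (assumed)
nx, dt = 200, 1e-4
dx = 1.0 / nx

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)  # Gaussian pulse initial condition

for _ in range(2000):
    um = np.roll(u, 1)   # u[i-1]; on a parallel machine the first and
    up = np.roll(u, -1)  # last entries would cross PE boundaries
    u = u - dt * c * (u - um) / dx + dt * nu * (up - 2 * u + um) / dx**2

print(float(u.max()))    # pulse advects and diffuses; max amplitude decays
```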
A large-scale dynamo and magnetoturbulence in rapidly rotating core-collapse supernovae.
Mösta, Philipp; Ott, Christian D; Radice, David; Roberts, Luke F; Schnetter, Erik; Haas, Roland
2015-12-17
Magnetohydrodynamic turbulence is important in many high-energy astrophysical systems, where instabilities can amplify the local magnetic field over very short timescales. Specifically, the magnetorotational instability and dynamo action have been suggested as a mechanism for the growth of magnetar-strength magnetic fields (of 10^15 gauss and above) and for powering the explosion of a rotating massive star. Such stars are candidate progenitors of type Ic-bl hypernovae, which make up all supernovae that are connected to long γ-ray bursts. The magnetorotational instability has been studied with local high-resolution shearing-box simulations in three dimensions, and with global two-dimensional simulations, but it is not known whether turbulence driven by this instability can result in the creation of a large-scale, ordered and dynamically relevant field. Here we report results from global, three-dimensional, general-relativistic magnetohydrodynamic turbulence simulations. We show that hydromagnetic turbulence in rapidly rotating protoneutron stars produces an inverse cascade of energy. We find a large-scale, ordered toroidal field that is consistent with the formation of bipolar magnetorotationally driven outflows. Our results demonstrate that rapidly rotating massive stars are plausible progenitors for both type Ic-bl supernovae and long γ-ray bursts, and provide a viable mechanism for the formation of magnetars. Moreover, our findings suggest that rapidly rotating massive stars might lie behind potentially magnetar-powered superluminous supernovae.
Tachyon field non-minimally coupled to massive neutrino matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, Safia; Myrzakulov, Nurgissa; Myrzakulov, R., E-mail: safia@ctp-jamia.res.in, E-mail: nmyrzakulov@gmail.com, E-mail: rmyrzakulov@gmail.com
2016-07-01
In this paper, we consider a rolling tachyon with steep runaway potentials, non-minimally coupled to massive neutrino matter. The coupling dynamically builds up at late times as the neutrino matter turns non-relativistic. In the case of scaling and string-inspired potentials, we show that the non-minimal coupling leads to a minimum in the field potential. Given a suitable choice of model parameters, this gives rise to late-time acceleration with the desired equation of state.
2015-11-03
High throughput optical lithography by scanning a massive array of bowtie aperture antennas at near-field
Wen, X.; Datta, A.; Traverso, L. M.; Pan, L.; Xu, X.; Moon, E. E.
Scientific Reports 5:16192, DOI: 10.1038/srep16192
...a large-scale optical projection system powered by spatial light modulators, such as a digital micro-mirror device (DMD), enabling parallel lithography...
Flexible sampling large-scale social networks by self-adjustable random walk
NASA Astrophysics Data System (ADS)
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity, and frequent unavailability, of OSN population data; sampling is perhaps the only feasible solution. How to draw samples that represent the underlying OSNs has remained a formidable task for a number of conceptual and methodological reasons. In particular, most empirically driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real, complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a much-needed sampling tool, for the methodological development of large-scale network sampling through comparative evaluation of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge or assumptions and large-scale real OSN data.
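One of the baseline samplers named above, the Metropolis-Hastings random walk (MHRW), is compact enough to sketch. It reweights a plain random walk so the stationary distribution over nodes is uniform rather than degree-biased; the SARW method itself additionally mixes induced-edge and external-edge information, which is not shown here. The toy graph is a placeholder.

```python
# A minimal MHRW sketch: accept a proposed move v -> w with probability
# min(1, deg(v)/deg(w)), which makes the stationary node distribution uniform.
import random

# Hypothetical toy graph as an adjacency dict.
graph = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3],
}

def mhrw_sample(graph, start, steps, seed=0):
    rng = random.Random(seed)
    v, out = start, []
    for _ in range(steps):
        w = rng.choice(graph[v])
        # Low-degree nodes are under-visited by a plain walk, so moves
        # toward them are always accepted; moves to hubs are damped.
        if rng.random() < min(1.0, len(graph[v]) / len(graph[w])):
            v = w
        out.append(v)
    return out

sample = mhrw_sample(graph, start=0, steps=10000)
freq = {v: sample.count(v) / len(sample) for v in graph}
print(freq)  # roughly uniform across nodes despite unequal degrees
```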
NASA Astrophysics Data System (ADS)
Boozer, Allen H.
2017-05-01
The potential for damage, the magnitude of the extrapolation, and the importance of the atypical—incidents that occur once in a thousand shots—make theory and simulation essential for ensuring that relativistic runaway electrons will not prevent ITER from achieving its mission. Most of the theoretical literature on electron runaway assumes magnetic surfaces exist. ITER planning for the avoidance of halo and runaway currents is focused on massive-gas or shattered-pellet injection of impurities. In simulations of experiments, such injections lead to a rapid large-scale magnetic-surface breakup. Surface breakup, which is a magnetic reconnection, can occur on a quasi-ideal Alfvénic time scale when the resistance is sufficiently small. Nevertheless, the removal of the bulk of the poloidal flux, as in halo-current mitigation, is on a resistive time scale. The acceleration of electrons to relativistic energies requires the confinement of some tubes of magnetic flux within the plasma and a resistive time scale. The interpretation of experiments on existing tokamaks and their extrapolation to ITER should carefully distinguish confined versus unconfined magnetic field lines and quasi-ideal versus resistive evolution. The separation of quasi-ideal from resistive evolution is extremely challenging numerically, but is greatly simplified by constraints of Maxwell’s equations, and in particular those associated with magnetic helicity. The physics of electron runaway along confined magnetic field lines is clarified by relations among the poloidal flux change required for an e-fold in the number of electrons, the energy distribution of the relativistic electrons, and the number of relativistic electron strikes that can be expected in a single disruption event.
NASA Technical Reports Server (NTRS)
Globus, Al; Biegel, Bryan A.; Traugott, Steve
2004-01-01
AsterAnts is a concept calling for a fleet of solar-sail-powered spacecraft to retrieve large numbers of small (1/2-1 meter diameter) Near Earth Objects (NEOs) for orbital processing. AsterAnts could use the International Space Station (ISS) for NEO processing, solar sail construction, and testing of NEO capture hardware. Solar sails constructed on orbit are expected to have substantially better performance than their ground-built counterparts [Wright 1992]. Furthermore, solar sails may be used to hold geosynchronous communication satellites out-of-plane [Forward 1981], increasing the total number of slots by at least a factor of three and potentially generating $2 billion worth of orbital real estate over North America alone. NEOs are believed to contain large quantities of water, carbon, other life-support materials, and metals. Thus, with proper processing, NEO materials could in principle be used to resupply the ISS, produce rocket propellant, manufacture tools, and build additional ISS working space. Unlike proposals that require massive facilities, such as lunar bases, before returning any extraterrestrial material, AsterAnts requires nothing larger than a typical interplanetary mission. Furthermore, AsterAnts could be scaled up to deliver large amounts of material by building many copies of the same spacecraft, thereby achieving manufacturing economies of scale. Because AsterAnts would capture NEOs whole, NEO composition details, which are generally poorly characterized, are relatively unimportant, and no complex extraction equipment is necessary. In combination with a materials processing facility at the ISS, AsterAnts might inaugurate an era of large-scale orbital construction using extraterrestrial materials.
A GRAND VIEW OF THE BIRTH OF 'HEFTY' STARS - 30 DORADUS NEBULA DETAILS
NASA Technical Reports Server (NTRS)
2002-01-01
These are two views of a highly active region of star birth located northeast of the central cluster, R136, in 30 Doradus. The orientation and scale are identical for both views. The top panel is a composite of images in two colors taken with the Hubble Space Telescope's visible-light camera, the Wide Field and Planetary Camera 2 (WFPC2). The bottom panel is a composite of pictures taken through three infrared filters with Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS). In both cases the colors of the displays were chosen to correlate with the nebula's and stars' true colors. Seven very young objects are identified with numbered arrows in the infrared image. Number 1 is a newborn, compact cluster dominated by a triple system of 'hefty' stars. It has formed within the head of a massive dust pillar pointing toward R136. The energetic outflows from R136 have shaped the pillar and triggered the collapse of clouds within its summit to form the new stars. The radiation and outflows from these new stars have in turn blown off the top of the pillar, so they can be seen in the visible-light as well as the infrared image. Numbers 2 and 3 also pinpoint newborn stars or stellar systems inside an adjacent, bright-rimmed pillar, likewise oriented toward R136. These objects are still immersed within their natal dust and can be seen only as very faint, red points in the visible-light image. They are, however, among the brightest objects in the infrared image, since dust does not block infrared light as much as visible light. Thus, numbers 2 and 3 and number 1 correspond respectively to two successive stages in the birth of massive stars. Number 4 is a very red star that has just formed within one of several very compact dust clouds nearby. Number 5 is another very young triple-star system with a surrounding cluster of fainter stars. They also can be seen in the visible-light picture. Most remarkable are the glowing patches numbered 6 and 7, which astronomers have interpreted as 'impact points' produced by twin jets of material slamming into surrounding dust clouds. These 'impact points' are perfectly aligned on opposite sides of number 5 (the triple-star system), and each is separated from the star system by about 5 light-years. The jets probably originate from a circumstellar disk around one of the young stars in number 5. They may be rotating counterclockwise, thus producing moving, luminous patches on the surrounding dust, like a searchlight creating spots on clouds. These infrared patches produced by jets from a massive, young star are a new astronomical phenomenon. Credits for NICMOS image: NASA/Nolan Walborn (Space Telescope Science Institute, Baltimore, Md.) and Rodolfo Barba' (La Plata Observatory, La Plata, Argentina) Credits for WFPC2 image: NASA/John Trauger (Jet Propulsion Laboratory, Pasadena, Calif.) and James Westphal (California Institute of Technology, Pasadena, Calif.)
Effective Web Videoconferencing for Proctoring Online Oral Exams: A Case Study at Scale in Brazil
ERIC Educational Resources Information Center
Okada, Alexandra; Scott, Peter; Mendonça, Murilo
2015-01-01
Assessing formal and informal online learning at scale poses various challenges. Many universities that now promote "Massive Open Online Courses" (MOOCs), for instance, focus on relatively informal assessment of participant competence, which is not highly "quality assured". This paper reports best…
Identity-Based Authentication for Cloud Computing
NASA Astrophysics Data System (ADS)
Li, Hongwei; Dai, Yuanshun; Tian, Ling; Yang, Haomiao
Cloud computing is a recently developed technology for complex systems with massive-scale services shared among numerous users. Authentication of both users and services is therefore a significant issue for the trust and security of cloud computing. The SSL Authentication Protocol (SAP), once applied to cloud computing, becomes so complicated that users face a heavy load in both computation and communication. Based on the identity-based hierarchical model for cloud computing (IBHMCC) and its corresponding encryption and signature schemes, this paper presents a new identity-based authentication protocol for cloud computing and services. Simulation testing shows that the authentication protocol is more lightweight and efficient than SAP, especially on the user side. This merit, together with great scalability, makes the model well suited to the massive-scale cloud.
The Early Growth of the First Black Holes
NASA Astrophysics Data System (ADS)
Johnson, Jarrett L.; Haardt, Francesco
2016-03-01
With detections of quasars powered by increasingly massive black holes at increasingly early times in cosmic history over the past decade, there has been correspondingly rapid progress made on the theory of early black hole formation and growth. Here, we review the emerging picture of how the first massive black holes formed from the primordial gas and then grew to supermassive scales. We discuss the initial conditions for the formation of the progenitors of these seed black holes, the factors dictating the initial masses with which they form, and their initial stages of growth via accretion, which may occur at super-Eddington rates. Finally, we briefly discuss how these results connect to large-scale simulations of the growth of supermassive black holes in the first billion years after the Big Bang.
NASA Astrophysics Data System (ADS)
Zinnecker, Hans
We review the multiplicity of massive stars by compiling the abstracts of the most relevant papers in the field. We start by discussing the massive stars in the Orion Trapezium Cluster and in other Galactic young clusters and OB associations, and end with the R136 cluster in the LMC. The multiplicity of field O-stars and runaway OB stars is also reviewed. The results of both visual and spectroscopic surveys are presented, as well as data for eclipsing systems. Among the latter, we find the most massive known binary system, WR20a, with two ~80 M_⊙ components in a 3-day orbit. Some 80% of the wide visual binaries in stellar associations are in fact hierarchical triple systems, where typically the more massive of the binary components is itself a spectroscopic or even eclipsing binary pair. The multiplicity (number of companions) of massive star primaries is significantly higher than for low-mass solar-type primaries or for young low-mass T Tauri stars. There is also a striking preponderance of very close, nearly equal mass binary systems (the origin of which has recently been explained in an accretion scenario). Finally, we offer a new idea as to the origin of massive Trapezium systems, frequently found in the centers of dense young clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trujillo, Angelina Michelle
Strategy, planning, and acquisition: very large scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by three years; procurement can take another year or more. Integration: after acquisition, machines must be integrated into the computing environments at LANL, including connection to scalable storage via large-scale storage networking, while assuring correct and secure operations. Management and utilization: ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.
Massive star evolution and SN 1987A
NASA Technical Reports Server (NTRS)
Arnett, David
1991-01-01
The evolution of massive stars through hydrogen and helium burning is addressed. A set of stellar evolutionary sequences for M/M_⊙ = 15, 20, and 25 and metallicities of 0.002, 0.005, 0.007, 0.010, and 0.20 is presented; semiconvection is restricted to operating slower than the local thermal time scale. Using these sequences, simple models of the massive star content of the LMC are found to agree moderately well with the new observational data of Fitzpatrick and Garmany (1990). LMC supergiants were detected only in their post-main-sequence phases, implying that 5-10 times more massive stars are present but not identified as such. It is argued that SN 1987A exhibits the normal evolution of a single star of about 20 solar masses having LMC abundances. Despite the variety of envelope behavior, the structure of the core at collapse is rather similar for stars of a given mass. Variations due to different rates of mass loss are likely to be larger than those due to composition.
The VLT-FLAMES Tarantula Survey
NASA Astrophysics Data System (ADS)
Vink, Jorick S.; Evans, C. J.; Bestenlehner, J.; McEvoy, C.; Ramírez-Agudelo, O.; Sana, H.; Schneider, F.; VFTS Collaboration
2017-11-01
We present a number of notable results from the VLT-FLAMES Tarantula Survey (VFTS), an ESO Large Program during which we obtained multi-epoch medium-resolution optical spectroscopy of a very large sample of over 800 massive stars in the 30 Doradus region of the Large Magellanic Cloud (LMC). This unprecedented data-set has enabled us to address some key questions regarding atmospheres and winds, as well as the evolution of (very) massive stars. Here we focus on O-type runaways, the width of the main sequence, and the mass-loss rates for (very) massive stars. We also provide indications for the presence of a top-heavy initial mass function (IMF) in 30 Dor.
Photon emission from massive projectile impacts on solids.
Fernandez-Lima, F A; Pinnick, V T; Della-Negra, S; Schweikert, E A
2011-01-01
First evidence of photon emission from individual impacts of massive gold projectiles on solids for a number of projectile-target combinations is reported. Photon emission from individual impacts of massive Au_n^+q (1 ≤ n ≤ 400; q = 1-4) projectiles with impact energies in the range of 28-136 keV occurs in less than 10 ns after the projectile impact. Experimental observations show an increase in the photon yield from individual impacts with the projectile size and velocity. Concurrently with the photon emission, electron emission from the impact area has been observed below the kinetic emission threshold and under unlikely conditions for potential electron emission. We interpret the puzzling electron emission and correlated luminescence observation as evidence of the electronic excitation resulting from the high-energy density deposited by massive cluster projectiles during the impact.
Hybrid LES–RANS technique based on a one-equation near-wall model
NASA Astrophysics Data System (ADS)
Breuer, M.; Jaffrézic, B.; Arora, K.
2008-05-01
In order to reduce the high computational effort of wall-resolved large-eddy simulations (LES), the present paper suggests a hybrid LES–RANS approach which splits the simulation into a near-wall RANS part and an outer LES part. Generally, RANS is adequate for attached boundary layers, requiring reasonable CPU time and memory, where LES can also be applied but demands extremely large resources. Conversely, RANS often fails in flows with massive separation or large-scale vortical structures; here, LES is without a doubt the best choice. The basic concept of hybrid methods is to combine the advantages of both approaches, yielding a prediction method which, on the one hand, assures reliable results for complex turbulent flows, including large-scale flow phenomena and massive separation, but, on the other hand, consumes much fewer resources than LES, especially for the high Reynolds number flows encountered in technical applications. In the present study, a non-zonal hybrid technique is considered (in the sense the authors attach to the terms zonal and non-zonal), which leads to an approach where the suitable simulation technique is chosen more or less automatically. For this purpose the proposed hybrid approach relies on a unified modeling concept. In the LES mode, a subgrid-scale model based on a one-equation model for the subgrid-scale turbulent kinetic energy is applied, where the length scale is defined by the filter width. For the viscosity-affected near-wall RANS mode, the one-equation model proposed by Rodi et al. (J Fluids Eng 115:196-205, 1993) is used, which is based on the wall-normal velocity fluctuations as the velocity scale and algebraic relations for the length scales. Although the idea of combined LES–RANS methods is not new, a variety of open questions still have to be answered. These include, in particular, the demand for appropriate coupling techniques between LES and RANS, adaptive control mechanisms, and proper subgrid-scale and RANS models. Here, in addition to studying the behavior of the suggested hybrid LES–RANS approach, special emphasis is put on the investigation of suitable interface criteria and the adjustment of the RANS model. To investigate these issues, two different test cases are considered. Besides the standard plane channel flow, the flow over a periodic arrangement of hills is studied in detail; this test case includes pressure-induced flow separation and subsequent reattachment. In comparison with a wall-resolved LES prediction, encouraging results are achieved.
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E
2016-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies, and we also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
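The core construction can be sketched compactly. Assuming an exponential covariance and a simple coordinate-based ordering (both placeholder choices, not the article's recommendations), each location conditions on its m nearest preceding neighbors, yielding the sparse factors that make the joint density cheap to evaluate.

```python
# A minimal NNGP-style sketch: per-location kriging against m nearest
# preceding neighbors gives weights B and conditional variances F, so the
# joint density factorizes with a sparse structure.
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 10
coords = rng.uniform(0, 1, size=(n, 2))
coords = coords[np.argsort(coords[:, 0])]  # simple coordinate ordering

def cov(a, b, sigma2=1.0, phi=10.0):
    """Exponential covariance between two sets of 2-D locations."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)

# The factorized density reads: w_i | w_N(i) ~ N(B_i w_N(i), F_i).
B_rows, F = [(0, np.array([], dtype=int), np.array([]))], np.empty(n)
F[0] = cov(coords[:1], coords[:1])[0, 0]
for i in range(1, n):
    d = np.linalg.norm(coords[:i] - coords[i], axis=1)
    nbr = np.argsort(d)[: min(m, i)]      # m nearest preceding neighbors
    C_nn = cov(coords[nbr], coords[nbr])
    C_in = cov(coords[i : i + 1], coords[nbr])[0]
    b = np.linalg.solve(C_nn, C_in)       # kriging weights
    B_rows.append((i, nbr, b))
    F[i] = cov(coords[i : i + 1], coords[i : i + 1])[0, 0] - C_in @ b

# Evaluating the joint log-density now costs O(n m^3) instead of O(n^3).
print(F[:5])
```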
Neural Parallel Engine: A toolbox for massively parallel neural signal processing.
Tam, Wing-Kin; Yang, Zhi
2018-05-01
Large-scale neural recordings provide detailed information on neuronal activities and can help elucidate the underlying neural mechanisms of the brain. However, the computational burden of processing the huge data stream generated by such recordings is formidable. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphics processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing, such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox offers a 5× to 110× speedup over its CPU counterparts, depending on the algorithm. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing have focused on a few rudimentary algorithms, are not well optimized, and often do not provide a user-friendly programming interface that fits into existing workflows; there is a strong need for a comprehensive toolbox for massively parallel neural signal processing. The new toolbox can offer significant speedups in processing signals from large-scale recordings of up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
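The kind of kernel such a toolbox parallelizes is easy to illustrate on the CPU. The sketch below runs per-channel threshold spike detection vectorized across all channels at once; the thresholding rule, data, and parameters are assumptions of this sketch, and the toolbox's advanced detectors (e.g. EC-PC) are not reproduced.

```python
# A minimal CPU stand-in for massively parallel spike detection: a robust
# per-channel threshold, applied to all channels simultaneously.
import numpy as np

rng = np.random.default_rng(1)
channels, samples = 1024, 30000
data = rng.normal(0.0, 1.0, size=(channels, samples)).astype(np.float32)

# Robust per-channel threshold: ~4.5 sigma estimated from the median
# absolute deviation, a common convention in spike sorting.
mad = np.median(np.abs(data), axis=1, keepdims=True) / 0.6745
thresh = 4.5 * mad

above = data < -thresh                  # negative-going spikes
# A crossing is a sample above threshold whose predecessor is not.
crossings = above[:, 1:] & ~above[:, :-1]
ch, t = np.nonzero(crossings)
print(f"{len(t)} threshold crossings across {channels} channels")
```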
The Correspondence between Convergence Peaks from Weak Lensing and Massive Dark Matter Haloes
NASA Astrophysics Data System (ADS)
Wei, Chengliang; Li, Guoliang; Kang, Xi; Liu, Xiangkun; Fan, Zuhui; Yuan, Shuo; Pan, Chuzhong
2018-05-01
The convergence peaks constructed from galaxy shape measurements in weak lensing are a powerful probe of cosmology, as the peaks can be connected with the underlying dark matter haloes. However, the capability of convergence peak statistics is affected by the noise in galaxy shape measurement, the signal-to-noise ratio (SNR), and the contribution of the projected mass distribution of the large-scale structures along the line of sight (LOS). In this paper we use a ray-tracing simulation on a curved sky to investigate the correspondence between convergence peaks and the dark matter haloes along the LOS. We find that, in the absence of noise and for source galaxies at zs = 1, more than 65% of peaks with SNR ≥ 3 are related to more than one massive halo with mass larger than 10^13 M⊙. Such massive haloes contribute 87.2% of high peaks (SNR ≥ 5), with the remaining contribution coming from the large-scale structures. On the other hand, the peak distribution is skewed by the noise in galaxy shape measurement, especially for lower-SNR peaks. In a noisy field where the shape noise is modelled as a Gaussian distribution, about 60% of high peaks (SNR ≥ 5) are true peaks, and the fraction decreases to 20% for lower peaks (3 ≤ SNR < 5). Furthermore, we find that high peaks (SNR ≥ 5) are dominated by very massive haloes larger than 10^14 M⊙.
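Peak identification itself is a simple operation and can be sketched directly. The toy map, smoothing scale, and thresholds below are assumptions of this sketch, not the paper's analysis pipeline.

```python
# A minimal convergence-peak finder: smooth a noisy map, estimate the
# noise level, and keep local maxima whose SNR exceeds a threshold.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
kappa = rng.normal(0.0, 0.02, size=(512, 512))   # noise-dominated toy map
kappa[250:260, 250:260] += 0.15                  # one injected "halo"

smoothed = ndimage.gaussian_filter(kappa, sigma=3.0)
sigma_noise = np.std(smoothed)                   # crude noise estimate
snr = smoothed / sigma_noise

# A peak is a pixel equal to the maximum of its 5x5 neighborhood.
is_max = snr == ndimage.maximum_filter(snr, size=5)
peaks = np.argwhere(is_max & (snr >= 5.0))
print(len(peaks), "peaks with SNR >= 5")
```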
NASA Astrophysics Data System (ADS)
Abdurro'uf; Akiyama, Masayuki
2017-08-01
We investigate the relation between star formation rate (SFR) and stellar mass (M*) on sub-galactic scales (~1 kpc) in 93 local (0.01 < z < 0.02) massive (M* > 10^10.5 M⊙) spiral galaxies. To derive spatially resolved SFRs and stellar masses, we perform so-called pixel-to-pixel spectral energy distribution (SED) fitting, which fits an observed spatially resolved multiband SED with a library of model SEDs using Bayesian statistics. We use two ultraviolet bands (far-ultraviolet, FUV, and near-ultraviolet, NUV) from the Galaxy Evolution Explorer (GALEX) and five optical bands (u, g, r, i, and z) from the Sloan Digital Sky Survey (SDSS). We find a tight, nearly linear relation between the local surface densities of SFR (ΣSFR) and stellar mass (Σ*), which flattens at high Σ*. The nearly linear relation between Σ* and ΣSFR suggests a constant specific SFR (sSFR) throughout the galaxies, and the scatter of the relation is directly related to that of the sSFR. We therefore analyse the variation of the sSFR on various scales. More massive galaxies have, on average, lower sSFRs throughout than less massive galaxies. We also find that barred galaxies have a lower sSFR in the core region than non-barred galaxies. However, in the outer region, the sSFRs of barred and non-barred galaxies are similar and lead to a similar total sSFR.
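The per-pixel fitting step can be sketched in reduced form. Below, the paper's Bayesian treatment is simplified to a chi-square lookup: every pixel's seven-band fluxes are compared to a library of model SEDs and assigned the best-fitting model's stellar mass and SFR surface densities. The library, fluxes, and errors are randomly generated placeholders.

```python
# A minimal pixel-to-pixel SED fitting sketch via chi-square minimization
# over a model library (a maximum-likelihood stand-in for the Bayesian fit).
import numpy as np

rng = np.random.default_rng(3)
nbands, nmodels, npix = 7, 1000, 2500        # FUV, NUV, u, g, r, i, z

library = rng.lognormal(0.0, 0.5, size=(nmodels, nbands))  # model fluxes
model_mstar = rng.uniform(6.0, 9.0, size=nmodels)          # log Sigma_* per model
model_sfr = rng.uniform(-4.0, -1.0, size=nmodels)          # log Sigma_SFR per model

pixels = rng.lognormal(0.0, 0.5, size=(npix, nbands))      # observed fluxes
errors = 0.1 * pixels                                      # assumed 10% errors

# chi^2 between every pixel and every model: shape (npix, nmodels).
chi2 = np.sum(
    ((pixels[:, None, :] - library[None, :, :]) / errors[:, None, :]) ** 2,
    axis=-1,
)
best = np.argmin(chi2, axis=1)
sigma_star, sigma_sfr = model_mstar[best], model_sfr[best]
print(sigma_star[:5], sigma_sfr[:5])  # per-pixel estimates
```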
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Pitman, Michael C; Rice, John J
2011-06-01
We present an orthogonal recursive bisection algorithm that hierarchically segments the anatomical model structure into subvolumes that are distributed to cores. The anatomy is derived from the Visible Human Project, with electrophysiology based on the FitzHugh-Nagumo (FHN) and ten Tusscher (TT04) models with monodomain diffusion. Benchmark simulations with up to 16,384 and 32,768 cores on IBM Blue Gene/P and L supercomputers for both the FHN and TT04 models show good load balancing, with almost perfect speedup factors that are close to linear in the number of cores. Hence, strong scaling is demonstrated. With 32,768 cores, a 1000 ms simulation of a full heart beat requires about 6.5 min of wall clock time for the FHN model. For the largest machine partitions, the simulations execute at a rate of 0.548 s (BG/P) and 0.394 s (BG/L) of wall clock time per 1 ms of simulation time. To our knowledge, these simulations show strong scaling to substantially higher numbers of cores than reported previously for organ-level simulation of the heart, thus significantly reducing run times. The ability to reduce runtimes could play a critical role in enabling wider use of cardiac models in research and clinical applications.
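The partitioning idea itself is simple to sketch. The code below is a greedy variant of orthogonal recursive bisection on a point cloud, splitting at the median of the widest coordinate so each subvolume holds a near-equal share of points; the anatomical mesh and diffusion coupling are not modeled, and the point data are placeholders.

```python
# A minimal orthogonal-recursive-bisection sketch: median splits along the
# widest axis until one near-equal part per core remains.
import numpy as np

def orb(points, n_parts):
    """Return a list of n_parts index arrays partitioning `points`."""
    parts = [np.arange(len(points))]
    while len(parts) < n_parts:
        parts.sort(key=len)                 # always bisect the largest part
        part = parts.pop()
        axis = np.argmax(np.ptp(points[part], axis=0))  # widest extent
        order = part[np.argsort(points[part, axis])]
        half = len(order) // 2              # median split -> load balance
        parts += [order[:half], order[half:]]
    return parts

pts = np.random.default_rng(4).uniform(0, 1, size=(100000, 3))
parts = orb(pts, 64)
sizes = sorted(len(p) for p in parts)
print(sizes[0], sizes[-1])  # smallest and largest parts differ by at most 1-2
```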
Neutrino in standard model and beyond
NASA Astrophysics Data System (ADS)
Bilenky, S. M.
2015-07-01
After the discovery of the Higgs boson at CERN, the Standard Model acquired the status of the theory of the elementary particles in the electroweak range (up to about 300 GeV). What general conclusions can be inferred from the Standard Model? It appears that the Standard Model teaches us that, in the framework of such general principles as local gauge symmetry, unification of weak and electromagnetic interactions, and Brout-Englert-Higgs spontaneous breaking of the electroweak symmetry, nature chooses the simplest possibilities. Two-component left-handed massless neutrino fields play a crucial role in determining the charged current structure of the Standard Model. The absence of right-handed neutrino fields in the Standard Model is the simplest, most economical possibility. In such a scenario the Majorana mass term is the only possibility for neutrinos to be massive and mixed. Such a mass term is generated by the lepton-number-violating Weinberg effective Lagrangian. In this approach the three Majorana neutrino masses are suppressed with respect to the masses of the other fundamental fermions by the ratio of the electroweak scale to the scale of lepton-number-violating physics. The discovery of neutrinoless double β-decay and the absence of transitions of flavor neutrinos into sterile states would be evidence in favor of the minimal scenario we advocate here.
Regional-scale calculation of the LS factor using parallel processing
NASA Astrophysics Data System (ADS)
Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong
2015-05-01
With increasing data resolution and the growing application of USLE over large areas, existing serial implementations of algorithms for computing the LS factor have become a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for calculating the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, the drainage network, slope, slope length and the LS factor. According to the presence or absence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the characteristics of each algorithm, including a decomposition method that maintains the integrity of the results, an optimized workflow that avoids exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
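As a sketch of how such a "local" algorithm parallelizes, the fragment below computes slope on row strips of a DEM with a one-row halo exchange via mpi4py; the toy DEM, the strip decomposition, and the slope formula are illustrative assumptions, not the paper's implementation.

```python
# Row-strip decomposition with a one-row halo exchange for a local algorithm
# (slope). Run with e.g.: mpiexec -n 4 python slope_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

dem = np.random.rand(64, 256) if rank == 0 else None          # toy DEM on root
strip = comm.scatter(np.array_split(dem, size) if rank == 0 else None, root=0)

up, down = rank - 1, rank + 1                                  # neighbor ranks
top = comm.sendrecv(strip[0], dest=up, source=up) if up >= 0 else strip[0]
bot = comm.sendrecv(strip[-1], dest=down, source=down) if down < size else strip[-1]

padded = np.vstack([top, strip, bot])         # strip plus one-row halos
gy, gx = np.gradient(padded)                  # finite differences
slope = np.degrees(np.arctan(np.hypot(gx, gy)))[1:-1]   # drop halo rows
print(rank, slope.shape)
```

Global algorithms such as flow accumulation need more than a fixed halo, which is why the paper treats the two classes with different parallel strategies.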
Parallel Index and Query for Large Scale Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Jerry; Wu, Kesheng; Ruebel, Oliver
2011-07-18
Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in designing a system for processing general scientific datasets. The system needs to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
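The core idea behind FastBit-style indexing can be illustrated in a few lines; this toy bitmap index is a conceptual sketch only and does not use the actual FastBit/FastQuery API.

```python
# A toy bitmap index in the spirit of FastBit; values and bin edges are
# illustrative. Index build happens once; queries reuse the bitmaps.
import numpy as np

values = np.random.uniform(0, 100, size=1_000_000)   # e.g. particle energies
edges = np.linspace(0, 100, 21)                      # 20 equal-width bins
bins = np.digitize(values, edges) - 1

# Index build: one boolean bitmap per bin.
bitmaps = [bins == b for b in range(20)]

# Query "energy > 90": OR the bitmaps of overlapping bins instead of scanning
# all records; a partially overlapping boundary bin would be refined against
# the raw values.
hits = bitmaps[18] | bitmaps[19]
print(hits.sum(), "candidate records")
```

Because queries touch only a handful of compressed bitmaps rather than every record, searches that would take hours as full scans complete in seconds, which is the effect reported above.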
Zhang, Hong; Zapol, Peter; Dixon, David A.; ...
2015-11-17
The shift-and-invert parallel spectral transformations (SIPs) approach, a computational method for solving sparse eigenvalue problems, is developed for massively parallel architectures with exceptional parallel scalability and robustness. The capabilities of SIPs are demonstrated by diagonalization of density-functional based tight-binding (DFTB) Hamiltonian and overlap matrices for single-wall metallic carbon nanotubes, diamond nanowires, and bulk diamond crystals. The largest (smallest) example studied is a 128,000 (2,000) atom nanotube, for which ~330,000 (~5,600) eigenvalues and eigenfunctions are obtained in ~190 (~5) seconds when parallelized over 262,144 (16,384) Blue Gene/Q cores. Weak scaling and strong scaling of SIPs are analyzed, and the performance of SIPs is compared with other novel methods. Different matrix ordering methods are investigated to reduce the cost of the factorization step, which dominates the time-to-solution at the strong-scaling limit. Finally, a parallel implementation of assembling the density matrix from the distributed eigenvectors is demonstrated.
NASA Astrophysics Data System (ADS)
Rubio-Díez, M. M.; Najarro, F.; García, M.; Sundqvist, J. O.
2017-11-01
Recent studies of WNh stars at the cores of young massive clusters have challenged the previously accepted upper stellar mass limit (~150 M⊙), suggesting some of these objects may have initial masses as high as 300 M⊙. We investigated the possible existence of observed stars above ~150 M⊙ by i) examining the nature and stellar properties of VFTS 682, a recently identified WN5h very massive star, and ii) studying the uncertainties in the luminosity estimates of R136's core stars due to crowding. Our spectroscopic analysis reveals that the most massive members of R136 and VFTS 682 are very similar, and our K-band photometric study of R136's core stars shows that the measurements display higher uncertainties than previous studies suggested; moreover, for the most massive stars in the cluster, R136a1 and a2, we found that previous magnitudes were underestimated by at least 0.4 mag. As such, the luminosities and masses of these stars have to be significantly scaled down, which in turn lowers the hitherto observed upper mass limit of stars.
Effects of coupled dark energy on the Milky Way and its satellites
NASA Astrophysics Data System (ADS)
Penzo, Camilla; Macciò, Andrea V.; Baldi, Marco; Casarini, Luciano; Oñorbe, Jose; Dutton, Aaron A.
2016-09-01
We present the first numerical simulations in coupled dark energy cosmologies with high enough resolution to investigate the effects of the coupling on galactic and subgalactic scales. We choose two constant couplings and a time-varying coupling function and we run simulations of three Milky Way-sized haloes (~10^12 M⊙), a lower mass halo (6 × 10^11 M⊙) and a dwarf galaxy halo (5 × 10^9 M⊙). We resolve each halo with several million dark matter particles. On all scales, the coupling causes lower halo concentrations and a reduced number of substructures with respect to Λ cold dark matter (ΛCDM). We show that the reduced concentrations are not due to different formation times. We ascribe them to the extra terms that appear in the equations describing the gravitational dynamics. On the scale of the Milky Way satellites, we show that the lower concentrations can help in reconciling observed and simulated rotation curves, but the coupling values necessary to have a significant difference from ΛCDM are outside the current observational constraints. On the other hand, if other modifications to the standard model allowing a higher coupling (e.g. massive neutrinos) are considered, coupled dark energy can become an interesting scenario to alleviate the small-scale issues of the ΛCDM model.
Anatomy of an online misinformation network.
Shao, Chengcheng; Hui, Pik-Mai; Wang, Lei; Jiang, Xinwen; Flammini, Alessandro; Menczer, Filippo; Ciampaglia, Giovanni Luca
2018-01-01
Massive amounts of fake news and conspiratorial content have spread over social media before and after the 2016 US Presidential Elections despite intense fact-checking efforts. How do the spread of misinformation and fact-checking compete? What are the structural and dynamic characteristics of the core of the misinformation diffusion network, and who are its main purveyors? How to reduce the overall amount of misinformation? To explore these questions we built Hoaxy, an open platform that enables large-scale, systematic studies of how misinformation and fact-checking spread and compete on Twitter. Hoaxy captures public tweets that include links to articles from low-credibility and fact-checking sources. We perform k-core decomposition on a diffusion network obtained from two million retweets produced by several hundred thousand accounts over the six months before the election. As we move from the periphery to the core of the network, fact-checking nearly disappears, while social bots proliferate. The number of users in the main core reaches equilibrium around the time of the election, with limited churn and increasingly dense connections. We conclude by quantifying how effectively the network can be disrupted by penalizing the most central nodes. These findings provide a first look at the anatomy of a massive online misinformation diffusion network.
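For readers unfamiliar with the peeling procedure, the k-core analysis used here is straightforward with networkx; the synthetic scale-free graph is an illustrative stand-in for the retweet network, which is the only assumption in this sketch.

```python
# Minimal k-core peeling example: core_number() assigns each node its k-core
# index, and k_core() extracts the innermost, densest subgraph.
import networkx as nx

G = nx.barabasi_albert_graph(10_000, 3)       # toy "retweet" graph
core_of = nx.core_number(G)                   # k-core index of every node
k_max = max(core_of.values())
main_core = nx.k_core(G, k=k_max)             # innermost subgraph
print(f"main core: k={k_max}, {main_core.number_of_nodes()} nodes")
```

Moving from low to high k in this decomposition is exactly the periphery-to-core traversal along which the study observes fact-checking vanish and bots proliferate.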
ALEGRA -- A massively parallel h-adaptive code for solid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Wong, M.K.; Boucheron, E.A.
1997-12-31
ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high-precision simulation capabilities for ALEGRA, without the computational cost of a globally highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
Billieux, Joël; Chanal, Julien; Khazaal, Yasser; Rochat, Lucien; Gay, Philippe; Zullino, Daniele; Van der Linden, Martial
2011-01-01
Massively Multiplayer Online Role-Playing Games (MMORPGs) are video games in which a large number of players interact with one another in a persistent virtual world. MMORPG play can become problematic and result in negative outcomes in daily living (e.g. loss of control over gaming behavior, compromised social and individual quality of life). The aim of the present study is to investigate psychological predictors of problematic involvement in MMORPGs. Fifty-four males who played MMORPGs regularly were recruited in cybercafés and screened using the UPPS Impulsive Behavior Scale (which assesses four facets of impulsivity) and the Motivation to Play Online Questionnaire (which assesses personal motives to play online). Negative consequences due to excessive time spent on the Internet were assessed with the Internet Addiction Test. Multiple regression analysis showed that problematic use of MMORPGs is significantly predicted by: (1) high urgency (b = 0.45), and (2) a motivation to play for immersion (b = 0.35). This study showed that, for certain individuals (those characterized by a proneness to act rashly in emotional contexts and motivated to play in order to be immersed in a virtual world), involvement in MMORPGs can become problematic and engender tangible negative consequences in daily life. Copyright © 2011 S. Karger AG, Basel.
Testing the hierarchical assembly of massive galaxies using accurate merger rates out to z ˜ 1.5
NASA Astrophysics Data System (ADS)
Rodrigues, Myriam; Puech, M.; Flores, H.; Hammer, F.; Pirzkal, N.
2018-04-01
We established an accurate comparison between observationally and theoretically estimated major merger rates over a large range of mass (log M_bar/M⊙ = 9.9-11.4) and redshift (z = 0.7-1.6). For this, we combined a new estimate of the merger rate from an exhaustive count of pairs within the virial radius of massive galaxies at z ~ 1.265, cross-validated with their morphology, with estimates from the morpho-kinematic analysis of two other samples. Theoretical predictions were estimated using semi-empirical models with inputs matching the properties of the observed samples, while specific visibility time-scales scaled to the observed samples were used. Theory and observations are found to agree within 30 per cent of the observed value, which provides strong support for the hierarchical assembly of galaxies over the probed ranges of mass and redshift. We find that ~60 per cent of the population of local massive (M_stellar = 10^10.3-11.6 M⊙) galaxies would have undergone a wet major merger since z = 1.5, consistent with previous studies. Such recent mergers are expected to result in the (re-)formation of a significant fraction of local disc galaxies.
The Eta Carinae Homunculus in Full 3D with X-Shooter and Shape
NASA Technical Reports Server (NTRS)
Steffen, Wolfgang; Teodoro, Mairan; Madura, Thomas I.; Groh, Jose H.; Gull, Theodore R.; Mehner, Andrea; Corcoran, Michael F.; Damineli, Augusto; Hamaguchi, Kenji
2014-01-01
Massive stars like Eta Carinae are extremely rare in comparison to stars such as the Sun, and currently we know of only a handful of stars with masses of more than 100 solar masses in the Milky Way. Such massive stars were much more frequent in the early history of the Universe and had a huge impact on its evolution. Even among this elite club, Eta Car is outstanding, in particular because of its giant eruption around 1840 that produced the beautiful bipolar nebula now known as the Homunculus. In this study, we used detailed spatio-kinematic information obtained from X-shooter spectra to reconstruct the 3D structure of the Homunculus. The small-scale features suggest that the central massive binary played a significant role in shaping the Homunculus.
The Ecological Impacts of Large-Scale Agrofuel Monoculture Production Systems in the Americas
ERIC Educational Resources Information Center
Altieri, Miguel A.
2009-01-01
This article examines the expansion of agrofuels in the Americas and the ecological impacts associated with the technologies used in the production of large-scale monocultures of corn and soybeans. In addition to deforestation and displacement of lands devoted to food crops due to expansion of agrofuels, the massive use of transgenic crops and…
ERIC Educational Resources Information Center
Rodriguez, C. Osvaldo
2012-01-01
Open online courses (OOC) with a massive number of students have represented an important development for online education in recent years. A course on artificial intelligence (CS221) at Stanford University was offered free and online in the fall of 2011, attracting 160,000 registered students. It was one of three offered as an…
Selected Readings in the History of Soviet Operational Art
1990-05-01
beginning of the twentieth century (the Russo-Japanese War); now massive armies, numbering millions and supplied with massive equipment, operate on...light, according to the experience of the wars of the twentieth century, a picture of political preparation and maintenance of war. The exposition...history of the most important wars of the twentieth century, the interrelationships of war and politics in the epoch and on the grounds of imperialism
Micro injector sample delivery system for charged molecules
Davidson, James C.; Balch, Joseph W.
1999-11-09
A micro injector sample delivery system for charged molecules. The injector is used for collecting and delivering controlled amounts of charged-molecule samples for subsequent analysis. The injector delivery system can be scaled to large numbers (>96) for sample delivery to massively parallel, high-throughput analysis systems. The essence of the injector system is an electric-field-controllable loading tip including a section of porous material. By applying the appropriate polarity bias potential to the injector tip, charged molecules migrate into the porous material, and by reversing the polarity bias potential the molecules are ejected or forced away from the tip. The invention has application for uptake of charged biological molecules (e.g. proteins, nucleic acids, polymers, etc.) for delivery to analytical systems, and can be used in automated sample delivery systems.
NASA Astrophysics Data System (ADS)
Mochizuki, Yuji; Yamashita, Katsumi; Fukuzawa, Kaori; Takematsu, Kazutomo; Watanabe, Hirofumi; Taguchi, Naoki; Okiyama, Yoshio; Tsuboi, Misako; Nakano, Tatsuya; Tanaka, Shigenori
2010-06-01
Two proteins on the influenza virus surface are well known. One is hemagglutinin (HA), which is associated with infection of cells. Fragment molecular orbital (FMO) calculations were performed on a complex consisting of an HA trimer and two Fab fragments at the third-order Møller-Plesset perturbation (MP3) level. The numbers of residues and 6-31G basis functions were 2,351 and 201,276, and a massively parallel-vector computer was therefore utilized to accelerate the processing. This FMO-MP3 job was completed in 5.8 h with 1,024 processors. The other protein is neuraminidase (NA), which is involved in the escape from infected cells. The FMO-MP3 calculation was also applied to analyze the interactions between oseltamivir and the surrounding residues in the pharmacophore.
Granular Materials and the Risks They Pose for Success on the Moon and Mars
NASA Technical Reports Server (NTRS)
Wilkinson, R. Allen; Behringer, Robert P.; Jenkins, James T.; Louge, Michel Y.
2004-01-01
Working with soil, sand, powders, ores, cement and sintered bricks, excavating, grading construction sites, driving off-road, transporting granules in chutes and pipes, sifting gravel, separating solids from gases, and using hoppers are so routine that it seems straightforward to do it on the Moon and Mars as we do it on Earth. This paper brings to the fore how little these processes are understood and the millennia-long trial-and-error practices that lead to today's massive over-design, high failure rate, and extensive incremental scaling up of industrial processes because of the inadequate predictive tools for design. We present a number of pragmatic scenarios where granular materials play a role, the risks involved, and what understanding is needed to greatly reduce the risks.
Location estimation in wireless sensor networks using spring-relaxation technique.
Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M
2010-01-01
Accurate and low-cost autonomous self-localization is a critical requirement of various applications of large-scale distributed wireless sensor networks (WSNs). Due to the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a given location estimation accuracy with a small number of nodes of known location. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.
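A minimal sketch of the spring-relaxation idea follows: each measured range behaves like a spring between neighbors, and non-anchor nodes are iteratively nudged along the net force until the range errors balance. The geometry, noise level, step size, and iteration count are illustrative assumptions, not parameters from the paper.

```python
# Toy spring-relaxation localizer: nodes 0-4 are anchors with known positions;
# the rest start at random and relax under "spring" forces from noisy ranges.
import numpy as np

rng = np.random.default_rng(0)
true = rng.uniform(0, 100, (30, 2))               # true sensor positions
pos = true.copy()
pos[5:] = rng.uniform(0, 100, (25, 2))            # unknowns start at random

d_true = np.linalg.norm(true[:, None] - true[None, :], axis=-1)
neigh = (d_true < 40) & ~np.eye(30, dtype=bool)   # who can range with whom
meas = d_true + rng.normal(0, 0.5, d_true.shape)  # noisy RSSI-style ranges

for _ in range(300):                              # relaxation iterations
    for i in range(5, 30):                        # anchors stay fixed
        vec = pos[neigh[i]] - pos[i]
        dist = np.linalg.norm(vec, axis=1) + 1e-9
        force = ((dist - meas[i, neigh[i]]) * (vec.T / dist)).T.sum(axis=0)
        pos[i] += 0.02 * force                    # move along the net force
print("mean error:", np.linalg.norm(pos[5:] - true[5:], axis=1).mean())
```

Because each node needs only its own neighbors' positions and ranges, the update is naturally distributed, which is what makes the technique attractive for low-cost WSNs.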
STAR CLUSTER FORMATION WITH STELLAR FEEDBACK AND LARGE-SCALE INFLOW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzner, Christopher D.; Jumper, Peter H., E-mail: matzner@astro.utoronto.ca
2015-12-10
During star cluster formation, ongoing mass accretion is resisted by stellar feedback in the form of protostellar outflows from the low-mass stars and photo-ionization and radiation pressure feedback from the massive stars. We model the evolution of cluster-forming regions during a phase in which both accretion and feedback are present and use these models to investigate how star cluster formation might terminate. Protostellar outflows are the strongest form of feedback in low-mass regions, but these cannot stop cluster formation if matter continues to flow in. In more massive clusters, radiation pressure and photo-ionization rapidly clear the cluster-forming gas when its column density is too small. We assess the rates of dynamical mass ejection and of evaporation, while accounting for the important effect of dust opacity on photo-ionization. Our models are consistent with the census of protostellar outflows in NGC 1333 and Serpens South and with the dust temperatures observed in regions of massive star formation. Comparing observations of massive cluster-forming regions against our model parameter space, and against our expectations for accretion-driven evolution, we infer that massive-star feedback is a likely cause of gas disruption in regions with velocity dispersions less than a few kilometers per second, but that more massive and more turbulent regions are too strongly bound for stellar feedback to be disruptive.
NASA Astrophysics Data System (ADS)
Lee, J. H.; Yoon, H.; Kitanidis, P. K.; Werth, C. J.; Valocchi, A. J.
2015-12-01
Characterizing subsurface properties, particularly hydraulic conductivity, is crucial for reliable and cost-effective groundwater supply management, contaminant remediation, and emerging deep subsurface activities such as geologic carbon storage and unconventional resources recovery. With recent advances in sensor technology, a large volume of hydro-geophysical and chemical data can be obtained to achieve high-resolution images of subsurface properties, which can be used for accurate subsurface flow and reactive transport predictions. However, subsurface characterization with a plethora of information requires high, often prohibitive, computational costs associated with "big data" processing and large-scale numerical simulations. As a result, traditional inversion techniques are not well suited to problems that require coupled multi-physics simulation models with massive data. In this work, we apply a scalable inversion method called the Principal Component Geostatistical Approach (PCGA) to characterize the heterogeneous hydraulic conductivity (K) distribution in a 3-D sand box. The PCGA is a Jacobian-free geostatistical inversion approach that uses the leading principal components of the prior information to reduce computational costs, sometimes dramatically, and can be easily linked with any simulation software. Sequential images of transient tracer concentrations in the sand box were obtained using the magnetic resonance imaging (MRI) technique, resulting in 6 million tracer-concentration data points [Yoon et al., 2008]. Since each individual tracer observation carries little information on the K distribution, the dimension of the data was reduced using temporal moments and the discrete cosine transform (DCT). Consequently, 100,000 unknown K values consistent with the scale of the MRI data (at a scale of 0.25^3 cm^3) were estimated by matching temporal moments and DCT coefficients of the original tracer data. The estimated K fields are close to the true K field, and even small-scale variability of the sand box was captured, highlighting high-K connectivity and contrasts between low- and high-K zones. A total of 1,000 MODFLOW and MT3DMS simulations were required to obtain the final estimates and the corresponding estimation uncertainty, showing the efficiency and effectiveness of our method.
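A rough sketch of the data-reduction step described above: each voxel's tracer breakthrough curve is compressed to temporal moments plus a few leading DCT coefficients before inversion. The toy curves and the number of retained modes are illustrative assumptions.

```python
# Compress per-voxel breakthrough curves (200 samples) to a handful of
# summary values: zeroth moment, mean arrival time, and 10 DCT modes.
import numpy as np
from scipy.fft import dct

t = np.linspace(0, 10, 200)                       # observation times
arrival = np.random.uniform(2, 8, (1000, 1))      # toy mean arrival times
curves = np.exp(-(t[None, :] - arrival) ** 2)     # toy breakthrough curves

dt = t[1] - t[0]
m0 = curves.sum(axis=1) * dt                      # zeroth temporal moment
m1 = (curves * t).sum(axis=1) * dt / m0           # mean arrival time
coeffs = dct(curves, type=2, axis=1, norm="ortho")[:, :10]  # leading modes

reduced = np.column_stack([m0, m1, coeffs])       # 200 values -> 12 per voxel
print(reduced.shape)                              # (1000, 12)
```

Matching these compressed summaries instead of every raw concentration is what keeps the number of forward MODFLOW/MT3DMS runs in the inversion down to the order of a thousand.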
Quantifying Bursty Star Formation and Dust Extinction in Dwarf Galaxies at 0.75 < z < 1.5
NASA Astrophysics Data System (ADS)
Siana, Brian
2014-10-01
Using the magnification provided by gravitational lensing, our team has recently uncovered an important population of star-forming dwarf galaxies at 1
Sediment depositions upstream of open check dams: new elements from small scale models
NASA Astrophysics Data System (ADS)
Piton, Guillaume; Le Guern, Jules; Carbonari, Costanza; Recking, Alain
2015-04-01
Torrent hazard mitigation remains a major issue in mountainous regions. In steep streams, and especially on their fans, torrential floods mainly result from abrupt and massive sediment deposits. To curtail such phenomena, soil conservation measures as well as torrent control works have been undertaken for decades. Since the 1950s, open check dams have complemented other structural and non-structural measures in watershed-scale mitigation plans [1]. They are often built to trap sediments near fan apexes. The development of earthmoving machinery after WWII facilitated the dredging operations of open check dams, and hundreds of these structures have been built over the past 60 years. Their design evolved with the improving understanding of torrential hydraulics and sediment transport; however, this kind of structure has a general tendency to trap most of the sediment supplied by the headwaters. Secondary effects such as channel incision downstream of the traps often followed the creation of an open check dam. This sediment-starvation trend tends to propagate to the main valley rivers and to disrupt past geomorphic equilibria. To take this into account, and to diminish useless dredging operations, better selectivity of sediment trapping must be sought in open check dams; i.e., an optimal open check dam would trap sediments during dangerous floods and flush them during normal small floods. An accurate description of the hydraulic and deposition processes that occur in sediment traps is needed to optimize existing structures and to design better-adjusted new structures. A literature review [2] showed that while design criteria exist for the structure itself, little information is available on the dynamics of sediment deposition upstream of open check dams: What geomorphic patterns occur during deposition? Which friction laws and sediment transport formulae best describe massive deposition in sediment traps? What ranges of Froude and Shields numbers do the flows tend to adopt? New small-scale model experiments have been undertaken focusing on deposition processes and the related hydraulics. Accurate photogrammetric measurements allowed us to better describe the deposition processes [3]. Large Scale Particle Image Velocimetry (LS-PIV) was performed to determine surface velocity fields in highly active channels with low grain submersion [4]. We will present preliminary results of our experiments showing the new elements we observed in massive deposit dynamics.
REFERENCES
[1] Armanini, A., Dellagiacoma, F. & Ferrari, L. From the check dam to the development of functional check dams. Fluvial Hydraulics of Mountain Regions 37, 331-344 (1991).
[2] Piton, G. & Recking, A. Design of sediment traps with open check dams: a review, part I: hydraulic and deposition processes. (Accepted by the) Journal of Hydraulic Engineering, 1-23 (2015).
[3] Le Guern, J. MSc thesis: Modélisation physique des plages de dépôt : analyse de la dynamique de remplissage (2014).
[4] Carbonari, C. MSc thesis: Small-scale experiments of deposition processes occurring in sediment traps, LS-PIV measurements and geomorphological descriptions (in preparation).
Parallel processing architecture for H.264 deblocking filter on multi-core platforms
NASA Astrophysics Data System (ADS)
Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao
2012-03-01
Massively parallel (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high-resolution, high-quality video compression technologies such as H.264. Such solutions provide not only exceptional quality but also efficiency, low power, and low latency, previously unattainable in software-based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low latency, low power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in an H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements, such as 10-bit pixel depth or a 4:2:2 chroma format, often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder can be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit depths and richer color subsampling patterns such as YUV 4:2:2 or 4:4:4. Low-power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. This work describes a scalable parallel architecture for an H.264-compliant deblocking filter for multi-core platforms such as HyperX technology. Parallel techniques such as parallel processing at the level of independent macroblocks, sub-blocks, and pixel rows are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be instantiated multiple times, catering to different performance needs. The DFM serves the data required by the configured number of DFUs and also manages all the neighboring data required for future processing by the DFUs. This approach achieves the scalability, flexibility, and performance excellence required of deblocking filters.
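One common way to expose the macroblock-level parallelism discussed above is a wavefront schedule: since deblocking a macroblock requires its left and top neighbors to be filtered first, all macroblocks on one anti-diagonal are mutually independent. The sketch below only computes that schedule; the frame dimensions are illustrative, and dispatching to DFUs is left as a comment.

```python
# Wavefront schedule for macroblock-parallel deblocking: diagonal index
# d = x + y; every macroblock on diagonal d depends only on diagonals < d.
W, H = 120, 68                                # macroblocks per row / column

waves = {}
for y in range(H):
    for x in range(W):
        waves.setdefault(x + y, []).append((x, y))

for d in sorted(waves):                       # sequential waves ...
    batch = waves[d]                          # ... each an independent batch
    # dispatch `batch` to the available DFU instances concurrently here
print("peak parallel macroblocks:", max(len(b) for b in waves.values()))
```

Peak parallelism grows to min(W, H) mid-frame, which is why adding DFU instances keeps paying off up to fairly large core counts.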
Dark Matter Coannihilation with a Lighter Species
NASA Astrophysics Data System (ADS)
Berlin, Asher
2017-09-01
We propose a new thermal freeze-out mechanism for ultraheavy dark matter. Dark matter coannihilates with a lighter unstable species that is nearby in mass, leading to an annihilation rate that is exponentially enhanced relative to that of standard weakly interacting massive particles. This scenario destabilizes any potential dark matter candidate. In order to remain consistent with astrophysical observations, our proposal necessitates very long-lived states, motivating striking phenomenology associated with the late decays of ultraheavy dark matter, potentially as massive as the scale of grand unified theories, M_GUT ~ 10^16 GeV.
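Schematically, in the usual coannihilation formalism, the exponential enhancement arises because the lighter partner X (with Δm = m_χ − m_X > 0) has a larger equilibrium abundance than the dark matter χ; this is a generic estimate, not a formula quoted from the paper:

$$
\frac{n_{X}^{\rm eq}}{n_{\chi}^{\rm eq}} \;\sim\; e^{+\Delta m/T},
$$

so the effective annihilation rate of the coupled χ–X system is boosted by this Boltzmann factor relative to a standard WIMP, allowing much heavier candidates to deplete to the observed relic abundance.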
The early growth of the first black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jarrett L.; Haardt, Francesco
2016-03-04
With detections of quasars powered by increasingly massive black holes at increasingly early times in cosmic history over the past decade, there has been correspondingly rapid progress on the theory of early black hole formation and growth. Here, we review the emerging picture of how the first massive black holes formed from the primordial gas and then grew to supermassive scales. We discuss the initial conditions for the formation of the progenitors of these seed black holes, the factors dictating the initial masses with which they form, and their initial stages of growth via accretion, which may occur at super-Eddington rates. Lastly, we briefly discuss how these results connect to large-scale simulations of the growth of supermassive black holes in the first billion years after the Big Bang.
On the curious spectrum of duality invariant higher-derivative gravity
NASA Astrophysics Data System (ADS)
Hohm, Olaf; Naseer, Usman; Zwiebach, Barton
2016-08-01
We analyze the spectrum of the exactly duality- and gauge-invariant higher-derivative double field theory. While this theory is based on a chiral CFT and does not correspond to a standard string theory, our analysis illuminates a number of issues central to string theory. The full quadratic action is rewritten as a two-derivative theory with additional fields. This allows for a simple analysis of the spectrum, which contains two massive spin-2 ghosts and massive scalars, in addition to the massless fields. Moreover, in this formulation the massless or tensionless limit α′ → ∞ is non-singular and leads to an enhanced gauge symmetry. We show that the massive modes can be integrated out exactly at the quadratic level, leading to an infinite series of higher-derivative corrections. Finally, we present a ghost-free massive extension of linearized double field theory, which employs a novel mass term for the dilaton and metric.
Exact solutions of massive gravity in three dimensions
NASA Astrophysics Data System (ADS)
Chakhad, Mohamed
In recent years, there has been an upsurge of interest in three-dimensional theories of gravity. In particular, two theories of massive gravity in three dimensions hold strong promise in the search for fully consistent theories of quantum gravity, an understanding of which will shed light on the problems of quantum gravity in four dimensions. One of these theories is the "old" third-order theory of topologically massive gravity (TMG); the other is a "new" fourth-order theory of massive gravity (NMG). Despite this increase in research activity, the problem of finding and classifying solutions of TMG and NMG remains a wide-open area of research. In this thesis, we provide explicit new solutions of massive gravity in three dimensions and suggest future directions of research. These solutions belong to the Kundt class of spacetimes. A systematic analysis of the Kundt solutions with constant scalar polynomial curvature invariants provides a glimpse of the structure of the spaces of solutions of the two theories of massive gravity. We also find explicit solutions of topologically massive gravity whose scalar polynomial curvature invariants are not all constant; these are the first such solutions. A number of properties of Kundt solutions of TMG and NMG, such as an identification of solutions which lie at the intersection of the full nonlinear and linearized theories, are also derived.
MOLECULAR GAS EVOLUTION ACROSS A SPIRAL ARM IN M51
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egusa, Fumi; Scoville, Nick; Koda, Jin, E-mail: fegusa@ir.isas.jaxa.jp
We present sensitive, high angular resolution CO(1-0) data obtained with the Combined Array for Research in Millimeter-wave Astronomy toward the nearby grand-design spiral galaxy M51. The angular resolution of 0.″7 corresponds to 30 pc, similar to the typical size of giant molecular clouds (GMCs), and the sensitivity is also high enough to detect typical GMCs. Within the 1′ field of view centered on a spiral arm, a number of GMC-scale structures are detected as clumps. However, only a few clumps are found to be associated with each giant molecular association (GMA), and more than 90% of the total flux is resolved out in our data. Considering the high sensitivity and resolution of our data, these results indicate that GMAs are not mere blends of unresolved GMCs but plausibly smooth structures. In addition, we have found that the most massive clumps are located downstream of the spiral arm, which suggests that they are at a later stage of molecular cloud evolution across the arm and are plausibly the cores of GMAs. By comparison with Hα and Paα images, most of these cores are found to have nearby star-forming regions. We thus propose an evolutionary scenario for the interstellar medium in which smaller molecular clouds collide to form smooth GMAs in spiral arm regions, and star formation is then triggered in the GMA cores. Our new CO data have revealed the internal structure of GMAs at GMC scales, finding the most massive substructures on the downstream side of the arm in close association with the brightest H II regions.
Gokmen, Tayfun; Vlasov, Yurii
2016-01-01
In recent years, deep neural networks (DNN) have demonstrated significant business impact in large-scale analysis and classification tasks such as speech recognition, visual object detection, and pattern extraction. Training of large DNNs, however, is universally considered a time-consuming and computationally intensive task that demands datacenter-scale computational resources recruited for many days. Here we propose a concept of resistive processing unit (RPU) devices that can potentially accelerate DNN training by orders of magnitude while using much less power. The proposed RPU device can store and update the weight values locally, thus minimizing data movement during training and fully exploiting the locality and the parallelism of the training algorithm. We evaluate the effect of various RPU device features/non-idealities and system parameters on performance in order to derive the device- and system-level specifications for implementation of an accelerator chip for DNN training in a realistic CMOS-compatible technology. For large DNNs with about 1 billion weights, this massively parallel RPU architecture can achieve acceleration factors of 30,000× compared to state-of-the-art microprocessors while providing a power efficiency of 84,000 GigaOps/s/W. Problems that currently require days of training on a datacenter-size cluster with thousands of machines could be addressed within hours on a single RPU accelerator. A system consisting of a cluster of RPU accelerators would be able to tackle Big Data problems with trillions of parameters that are impossible to address today, such as natural speech recognition and translation between all world languages, real-time analytics on large streams of business and scientific data, and integration and analysis of multimodal sensory data flows from a massive number of IoT (Internet of Things) sensors. PMID:27493624
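The update locality that the RPU concept exploits can be seen in a few lines of numpy: the backpropagation weight update is a rank-1 outer product, which a resistive crossbar applies at every cross-point in parallel without moving the weights. The sizes and learning rate below are illustrative, and this dense-matrix sketch only mimics what the analog array computes.

```python
# Rank-1 weight update: the operation an RPU crossbar performs in place,
# with activations driving rows and error signals driving columns.
import numpy as np

n_in, n_out = 512, 256
W = np.random.randn(n_out, n_in) * 0.01   # weights live *in* the device array

x = np.random.randn(n_in)                 # forward pass: input activations
y = W @ x                                 # analog multiply-accumulate
delta = np.random.randn(n_out)            # backward pass: error signal
lr = 0.01

W += lr * np.outer(delta, x)              # rank-1 update, applied at all
                                          # cross-points simultaneously
```

Because the update never leaves the array, the O(n_in × n_out) data movement of a conventional processor disappears, which is the source of the claimed speed and power advantages.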
NASA Technical Reports Server (NTRS)
Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.
1995-01-01
A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message-passing communications. Our implementation is the generalization to three dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes"), and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (>99%) occurs for a large, scaled problem with 64^3 particles per processing node (approximately 134 million particles on 512 nodes), which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code which are functions of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate, and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems--including those with inhomogeneous plasmas--on other parallel machines once the machine-dependent parameters are known.
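A rough sketch of the particle bookkeeping behind a slab decomposition of this kind: each particle belongs to the processor owning its slab, and after a push the fraction of particles crossing slab boundaries approximates the communication load relative to the computation. All numbers are illustrative, and real codes exchange the movers with MPI rather than counting them.

```python
# Slab ownership and boundary-crossing count for one toy push step.
import numpy as np

n_proc, Lz = 8, 1.0
z = np.random.rand(1_000_000)                    # particle z-positions
slab = lambda zz: np.minimum((zz * n_proc / Lz).astype(int), n_proc - 1)

owner = slab(z)
z_new = (z + np.random.normal(0, 0.01, z.size)) % Lz   # one (toy) push step
movers = owner != slab(z_new)                    # would be exchanged via MPI
print(f"fraction exchanged per step: {movers.mean():.3%}")
```

Small per-step displacements relative to the slab width keep this fraction low, consistent with the 0.3%-10% communication-to-computation ratios reported above.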
NASA Astrophysics Data System (ADS)
Kim, M.-H.; Cho, J. H.; Park, S.-J.; Eden, J. G.
2017-08-01
Plasmachemical systems based on the production of a specific molecule (O3) in literally thousands of microchannel plasmas simultaneously have been demonstrated, developed and engineered over the past seven years, and commercialized. At the heart of this new plasma technology is the plasma chip, a flat aluminum strip fabricated by photolithographic and wet chemical processes and comprising 24-48 channels, micromachined into nanoporous aluminum oxide, with embedded electrodes. By integrating 4-6 chips into a module, the mass output of an ozone microplasma system is scaled linearly with the number of modules operating in parallel. A 115 g/hr (2.7 kg/day) ozone system, for example, is realized by the combined output of 18 modules comprising 72 chips and 1,800 microchannels. The implications of this plasma processing architecture for scaling ozone production capability, and reducing capital and service costs when introducing redundancy into the system, are profound. In contrast to conventional ozone generator technology, microplasma systems operate reliably (albeit with reduced output) in ambient air and humidity levels up to 90%, a characteristic attributable to the water adsorption/desorption properties and electrical breakdown strength of nanoporous alumina. Extensive testing has documented chip and system lifetimes (MTBF) beyond 5,000 hours, and efficiencies >130 g/kWh when oxygen is the feedstock gas. Furthermore, the weight and volume of microplasma systems are a factor of 3-10 lower than those for conventional ozone systems of comparable output. Massively-parallel plasmachemical processing offers functionality, performance, and commercial value beyond that afforded by conventional technology, and is currently in operation in more than 30 countries worldwide.
Do Massive Galaxies at z~6 Present a Challenge for Hierarchical Merging?
NASA Astrophysics Data System (ADS)
Steinhardt, Charles L.; Capak, Peter L.; Masters, Daniel; Speagle, Josh S.; Splash
2015-01-01
The Spitzer Large Area Survey with Hyper-Suprime-Cam (SPLASH) recently released an initial view of the massive star-forming galaxy population at 4 < z < 6 over 1.8 square degrees. SPLASH found approximately 100 galaxy candidates with best-fit stellar masses over 10^11 solar masses. If even 10% of these are truly this massive and at such a high redshift, the corresponding number density would be inconsistent with the halo mass functions produced at these redshifts by numerical simulations. We will discuss these candidates, prospects for follow-up observations, and the potential implications for our understanding of the initial formation and early evolution of galaxies in the high-redshift universe.
Bounds on neutrino mass in viscous cosmology
NASA Astrophysics Data System (ADS)
Anand, Sampurn; Chaubal, Prakrut; Mazumdar, Arindam; Mohanty, Subhendra; Parashari, Priyank
2018-05-01
Effective field theoretic descriptions of the dark matter fluid on large scales predict a viscosity of order 10^-6 H_0 M_P^2. Recently, it has been shown that the same magnitude of viscosity can resolve the discordance between large-scale structure observations and Planck CMB data in the σ_8-Ω_m0 and H_0-Ω_m0 parameter spaces. On the other hand, massive neutrinos suppress the matter power spectrum on small length scales, much as viscosity does. It is therefore expected that the viscous dark matter setup, together with massive neutrinos, can provide a stringent constraint on the neutrino mass. In this article, we show that the inclusion of an effective viscosity, which arises from summing over nonlinear perturbations at small length scales, indeed severely tightens the cosmological bound on neutrino masses. In a joint analysis of Planck CMB and several large-scale structure data sets, we find that the 2σ upper bound on the sum of the neutrino masses decreases from ∑m_ν ≤ 0.396 eV (normal hierarchy) and ∑m_ν ≤ 0.378 eV (inverted hierarchy) to ∑m_ν ≤ 0.267 eV (normal hierarchy) and ∑m_ν ≤ 0.146 eV (inverted hierarchy), respectively.
The beaming of subhalo accretion
NASA Astrophysics Data System (ADS)
Libeskind, Noam I.
2016-10-01
We examine the infall pattern of subhaloes onto hosts in the context of the large-scale structure. We find that the infall pattern is essentially driven by the shear tensor of the ambient velocity field. Dark matter subhaloes are preferentially accreted along the principal axis of the shear tensor that corresponds to the direction of weakest collapse. We examine the dependence of this preferential infall on subhalo mass, host halo mass and redshift. Although strongest for the most massive hosts and the most massive subhaloes at high redshift, the preferential infall of subhaloes is effectively universal in the sense that it is always aligned with the axis of weakest collapse of the velocity shear tensor. It is the same shear tensor that dictates the structure of the cosmic web, and hence the shear field emerges as the key factor governing the local anisotropic pattern of structure formation. Since the small (sub-Mpc) scale is strongly correlated with the mid-range (~10 Mpc) scale - a scale accessible to current surveys of peculiar velocities - the findings presented here open a new window into the relation between the observed large-scale structure unveiled by current surveys of peculiar velocities and the preferential infall direction of the Local Group. This may shed light on the unexpected alignments of dwarf galaxies seen in the Local Group.
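For context, the velocity shear tensor in this literature is commonly defined (up to sign and normalization conventions, which vary between papers) as

$$
\Sigma_{ij} \;=\; -\frac{1}{2H_0}\left(\frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}\right),
$$

whose ordered eigenvalues λ_1 ≥ λ_2 ≥ λ_3 rank the local collapse; the eigenvector belonging to the smallest eigenvalue marks the axis of weakest collapse, along which the preferential infall described above occurs.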
Scaling bioinformatics applications on HPC.
Mikailov, Mike; Luo, Fu-Jyh; Barkley, Stuart; Valleru, Lohit; Whitney, Stephen; Liu, Zhichao; Thakkar, Shraddha; Tong, Weida; Petrick, Nicholas
2017-12-28
Recent breakthroughs in molecular biology and next-generation sequencing technologies have led to exponential growth of the sequence databases. Researchers use BLAST for processing these sequences. However, traditional software parallelization techniques (threads, message passing interface) applied in newer versions of BLAST are not adequate for processing these sequences in a timely manner. A new method for array job parallelization has been developed which offers O(T) theoretical speed-up in comparison to multi-threading and MPI techniques, where T is the number of array job tasks. (The number of CPUs used to complete the job equals T multiplied by the number of CPUs used by a single task.) The approach is based on segmentation of both input datasets to the BLAST process, combining partial solutions published earlier (Dhanker and Gupta, Int J Comput Sci Inf Technol 5:4818-4820, 2014; Grant et al., Bioinformatics 18:765-766, 2002; Mathog, Bioinformatics 19:1865-1866, 2003). It is accordingly referred to as a "dual segmentation" method. In order to implement the new method, the BLAST source code was modified to allow the researcher to pass to the program the number of records (effective number of sequences) in the original database. The team also developed methods to manage and consolidate the large number of partial results that get produced. Dual segmentation allows for massive parallelization, which lifts the scaling ceiling in exciting ways. BLAST jobs that hitherto failed or slogged inefficiently to completion now finish with speeds that characteristically reduce wall-clock time from 27 days on 40 CPUs to a single day using 4,104 tasks, each task utilizing eight CPUs and taking less than 7 minutes to complete. The massive increase in the number of tasks under dual segmentation reduces the size, scope and execution time of each task. Besides significant speed of completion, additional benefits include fine-grained checkpointing and increased flexibility of job submission. "Trickling in" a swarm of individual small tasks tempers competition for CPU time in the shared HPC environment, and jobs submitted during quiet periods can complete in extraordinarily short time frames. The smaller task size also allows the use of older and less powerful hardware; the CDRH workhorse cluster was commissioned in 2010, yet its eight-core CPUs with only 24GB RAM work well in 2017 for these dual segmentation jobs. Finally, these techniques are friendly to budget-conscious scientific research organizations, where probabilistic algorithms such as BLAST might otherwise discourage attempts at greater certainty because single runs represent a major resource drain. If a job that used to take 24 days can now be completed in less than an hour or on a space-available basis (which is the case at CDRH), repeated runs for more exhaustive analyses can be usefully contemplated.
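A rough sketch of how dual segmentation maps onto an array job: both the query set and the database are chunked, and each (query chunk, database chunk) pair becomes one task, so T = n_q × n_d. The chunk counts, file names, and task-id source are illustrative assumptions; the blastn flags shown are standard BLAST+ options.

```python
# Map a grid-engine array task id to one (query chunk, db chunk) pair and
# emit the corresponding blastn command line.
import os

n_query_chunks, n_db_chunks = 54, 76            # 54 * 76 = 4,104 tasks
tasks = [(q, d) for q in range(n_query_chunks) for d in range(n_db_chunks)]

task_id = int(os.environ.get("SGE_TASK_ID", "1")) - 1   # array index, 1-based
q, d = tasks[task_id]
cmd = (f"blastn -query query_part_{q}.fa -db db_part_{d} "
       f"-out hits_q{q}_d{d}.tsv -outfmt 6 -num_threads 8")
print(cmd)
# Partial result files hits_q*_d*.tsv are consolidated afterwards; statistics
# must be corrected against the effective size of the full database, which is
# why the modified BLAST accepts the original record count as an input.
```

This is also where the fine-grained checkpointing comes from: a failed task reruns one small (q, d) cell rather than the whole analysis.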
Gooding, Thomas Michael [Rochester, MN
2011-04-19
An analytical mechanism for a massively parallel computer system automatically analyzes data retrieved from the system and identifies nodes which exhibit anomalous behavior in comparison to their immediate neighbors. Preferably, anomalous behavior is determined by comparing call-return stack tracebacks for each node, grouping like nodes together, and identifying neighboring nodes which do not themselves belong to the group. A node, not itself in the group, having a large number of neighbors in the group, is a likely locality of error. The analyzer preferably presents this information to the user by sorting the neighbors according to the number of adjoining members of the group.
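A toy version of the grouping logic described in this record: nodes are grouped by identical stack tracebacks, and a node outside the majority group with many majority-group neighbors is flagged as a likely locality of error. The 8x8 mesh topology and the traceback strings are illustrative assumptions.

```python
# Group nodes by traceback, then rank non-majority nodes by how many of
# their mesh neighbours sit in the majority group.
from collections import Counter

tracebacks = {n: "main>solve>mpi_barrier" for n in range(64)}  # 8x8 mesh
tracebacks[27] = "main>solve>allreduce"                        # anomalous node

def neighbors(n, w=8, h=8):
    x, y = n % w, n // w
    return [yy * w + xx for xx, yy in ((x-1, y), (x+1, y), (x, y-1), (x, y+1))
            if 0 <= xx < w and 0 <= yy < h]

majority = Counter(tracebacks.values()).most_common(1)[0][0]
suspects = sorted((n for n, tb in tracebacks.items() if tb != majority),
                  key=lambda n: -sum(tracebacks[m] == majority
                                     for m in neighbors(n)))
print("likely locality of error:", suspects[:1])
```

The appeal of the approach at scale is that it needs only one traceback per node plus the machine's neighbor topology, not any application-level instrumentation.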
Large Eddy Simulation of High Reynolds Number Complex Flows
NASA Astrophysics Data System (ADS)
Verma, Aman
Marine configurations are subject to a variety of complex hydrodynamic phenomena affecting the overall performance of the vessel. The turbulent flow affects the hydrodynamic drag, propulsor performance and structural integrity, control-surface effectiveness, and acoustic signature of the marine vessel. Thanks to advances in massively parallel computers and numerical techniques, an unsteady simulation methodology such as Large Eddy Simulation (LES) is well suited to studying such complex turbulent flows, whose Reynolds numbers (Re) are typically on the order of 10^6. LES also promises increased accuracy over RANS-based methods in predicting unsteady phenomena such as cavitation and noise production. This dissertation develops the capability to enable LES of high-Re flows in complex geometries (e.g. a marine vessel) on unstructured grids and to provide physical insight into the turbulent flow. LES is performed to investigate the geometry-induced separated flow past a marine propeller attached to a hull in an off-design condition called crashback. LES shows good quantitative agreement with experiments and provides a physical mechanism to explain the increase in side-force on the propeller blades below an advance ratio of J = -0.7. Fundamental developments in the dynamic subgrid-scale model for LES are pursued to improve the LES predictions, especially for complex flows on unstructured grids. A dynamic procedure is proposed to estimate a Lagrangian time scale based on a surrogate correlation without any adjustable parameter. The proposed model is applied to turbulent channel, cylinder and marine propeller flows and predicts improved results over other model variants due to a physically consistent Lagrangian time scale. A wall model is proposed for application to LES of high Reynolds number wall-bounded flows. The wall model is formulated as the minimization of a generalized constraint in the dynamic model for LES and applied to LES of turbulent channel flow at Reynolds numbers up to Re_τ = 10,000 and coarse grid resolutions, obtaining significant improvement.
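For context, the dynamic subgrid-scale procedure referred to above rests on the Germano identity relating the resolved stresses at the grid filter (overbar) and test filter (hat) levels; this is the standard machinery, stated here schematically:

$$
L_{ij} \;=\; \widehat{\bar{u}_i \bar{u}_j} \;-\; \widehat{\bar{u}}_i\,\widehat{\bar{u}}_j \;=\; T_{ij} - \widehat{\tau}_{ij},
$$

from which the model coefficient is determined locally from the resolved field; the Lagrangian variant averages this identity along fluid-particle paths over a relaxation time scale, which is the quantity the dissertation proposes to estimate dynamically from a surrogate correlation.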
Massive blood transfusion during hospitalization for delivery in New York State, 1998-2007.
Mhyre, Jill M; Shilkrut, Alexander; Kuklina, Elena V; Callaghan, William M; Creanga, Andreea A; Kaminsky, Sari; Bateman, Brian T
2013-12-01
To define the frequency, risk factors, and outcomes of massive transfusion in obstetrics. The State Inpatient Dataset for New York (1998-2007) was used to identify all delivery hospitalizations for hospitals that reported at least one delivery-related transfusion per year. Multivariable logistic regression analysis was performed to examine the relationship between maternal age, race, and relevant clinical variables and the risk of massive blood transfusion, defined as 10 or more units of blood recorded. Massive blood transfusion complicated 6 of every 10,000 deliveries, with cases observed even in the smallest facilities. Risk factors with the strongest independent associations with massive blood transfusion included abnormal placentation (1.6/10,000 deliveries, adjusted odds ratio [OR] 18.5, 95% confidence interval [CI] 14.7-23.3), placental abruption (1.0/10,000, adjusted OR 14.6, 95% CI 11.2-19.0), severe preeclampsia (0.8/10,000, adjusted OR 10.4, 95% CI 7.7-14.2), and intrauterine fetal demise (0.7/10,000, adjusted OR 5.5, 95% CI 3.9-7.8). The most common etiologies of massive blood transfusion were abnormal placentation (26.6% of cases), uterine atony (21.2%), placental abruption (16.7%), and postpartum hemorrhage associated with coagulopathy (15.0%). A disproportionate number of women who received a massive blood transfusion experienced severe morbidity including renal failure, acute respiratory distress syndrome, sepsis, and in-hospital death. Massive blood transfusion was infrequent, regardless of facility size. In the presence of known risk for receipt of massive blood transfusion, women should be informed of this possibility, should deliver in a well-resourced facility if possible, and should receive appropriate blood product preparation and venous access in advance of delivery. Level of evidence: II.
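The statistical machinery named in the abstract is standard multivariable logistic regression reported as adjusted odds ratios with 95% confidence intervals. A hedged sketch of that analysis pattern, on synthetic data with invented column names (the actual State Inpatient Dataset variables are not given here), might look like this:

```python
# Multivariable logistic regression on delivery records; ORs = exp(coefs).
# All data below are synthetic and the column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50_000
df = pd.DataFrame({
    "abnormal_placentation": rng.binomial(1, 0.005, n),
    "placental_abruption": rng.binomial(1, 0.01, n),
    "maternal_age_35plus": rng.binomial(1, 0.15, n),
})
# Rare outcome whose log-odds rise with the risk factors (synthetic).
logit = -7 + 2.9 * df.abnormal_placentation + 2.7 * df.placental_abruption
df["massive_transfusion"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit(
    "massive_transfusion ~ abnormal_placentation + placental_abruption"
    " + maternal_age_35plus", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)          # adjusted ORs
ci = np.exp(model.conf_int())               # 95% confidence intervals
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```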
NASA Astrophysics Data System (ADS)
Petersen, S.; Augustin, N.; de Benedetti, A.; Esposito, A.; Gaertner, A.; Gemmell, B.; Gibson, H.; He, G.; Huegler, M.; Kleeberg, R.; Kuever, J.; Kummer, N. A.; Lackschewitz, K.; Lappe, F.; Monecke, T.; Perrin, K.; Peters, M.; Sharpe, R.; Simpson, K.; Smith, D.; Wan, B.
2007-12-01
Seafloor hydrothermal systems related to volcanic arcs are known from several localities in the Tyrrhenian Sea in water depths ranging from 650 m (Palinuro Seamount) to less than 50 m (Panarea). At Palinuro Seamount, 13 holes (<5 m) were drilled into the heavily sediment-covered deposit using Rockdrill 1 of the British Geological Survey, recovering 11 m of semi-massive to massive sulfides. Maximum recovery within a single core was 4.8 m of massive sulfides/sulfates with abundant late native sulfur overprint. The deposit is open to all sides and to depth, since all drill holes ended in mineralization. Metal enrichment at the top of the deposit is evident in some cores, with polymetallic (Zn, Pb, Ag) sulfides overlying more massive and dense pyritic ore. The massive sulfide mineralization at Palinuro Seamount contains a number of unusual minerals, including enargite, tennantite, luzonite, and Ag-sulfosalts, that are not commonly encountered in mid-ocean ridge massive sulfides. In analogy to epithermal deposits forming on land, the occurrence of these minerals suggests a high sulfidation state of the hydrothermal fluids during deposition, implying that the mineralizing fluids were acidic and oxidizing rather than near-neutral and reducing like those forming typical base-metal-rich massive sulfides along mid-ocean ridges. Oxidizing conditions during sulfide deposition can probably be related to the presence of magmatic volatiles in the mineralizing fluids that may be derived from a degassing magma chamber. Elevated temperatures within sediment cores and TV-grab stations (up to 60°C) indicate present-day hydrothermal fluid flow, as does the presence of small tube-worm bushes on top of the sediment. A number of drill holes were placed around the known phreatic gas-rich vents of Panarea and recovered intense clay alteration in some holes as well as abundant massive anhydrite/gypsum with only trace sulfides along a structural depression, suggesting the presence of an anhydrite seal to a larger hydrothermal system at depth. The aim of this study is to understand the role that magmatic volatiles and phase separation play in the formation of these precious- and trace-element-rich shallow-water (<750 m) hydrothermal systems in the volcanic arcs of the Tyrrhenian Sea.
Mobility Data Analytics Center.
DOT National Transportation Integrated Search
2016-01-01
Mobility Data Analytics Center aims at building a centralized data engine to efficiently manipulate large-scale data for smart decision making. Integrating and learning from the massive data are the key to the data engine. The ultimate goal of underst...
Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary
2015-01-01
Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic-scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic, and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic-scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.
Topologically massive gravity and galilean conformal algebra: a study of correlation functions
NASA Astrophysics Data System (ADS)
Bagchi, Arjun
2011-02-01
The Galilean Conformal Algebra (GCA) arises from the conformal algebra in the non-relativistic limit. In two dimensions, one can view it as a limit of linear combinations of the two copies of the Virasoro algebra. Recently, it has been argued that Topologically Massive Gravity (TMG) realizes the quantum 2d GCA in a particular scaling limit of the gravitational Chern-Simons term. To add strength to this claim, we demonstrate a matching of correlation functions on both sides of this correspondence. A priori, looking for spatially dependent correlators seems to force us to deal with high-spin operators in the bulk. We get around this difficulty by constructing the non-relativistic Energy-Momentum tensor and considering its correlation functions. On the gravity side, our analysis makes heavy use of recent results of Holographic Renormalization in Topologically Massive Gravity.
A massively parallel computational approach to coupled thermoelastic/porous gas flow problems
NASA Technical Reports Server (NTRS)
Shia, David; Mcmanus, Hugh L.
1995-01-01
A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form and solved on a massively parallel computer. The transpiration cooling problem is used as an example. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small-scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.
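The key property of the scheme, full explicitness, is that each grid point advances using only values from the previous time level, which is what makes the update embarrassingly parallel. The toy 1-D analogue below illustrates that structure only; the governing equations, coefficients, and boundary conditions of the actual transpiration cooling problem are more involved, and every number here is assumed.

```python
# Toy 1-D analogue of a fully explicit coupled update: temperature T and gas
# pressure P each advance from the previous time level only, so every grid
# point updates independently -- the property that maps well onto massively
# parallel machines. Coefficients and coupling are illustrative.
import numpy as np

nx, L = 200, 1.0
dx = L / (nx - 1)
alpha_T, alpha_P, couple = 1.0e-3, 5.0e-3, 0.1   # assumed diffusivities/coupling
dt = 0.2 * dx * dx / max(alpha_T, alpha_P)        # explicit stability limit

T = np.zeros(nx); T[0] = 1.0   # hot wall
P = np.zeros(nx); P[-1] = 1.0  # gas injected from the far side

def laplacian(f):
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return out

for _ in range(20_000):
    # Fully explicit: new values depend only on the previous time level.
    T_new = T + dt * (alpha_T * laplacian(T) - couple * P * T)  # gas cools solid
    P_new = P + dt * alpha_P * laplacian(P)
    T_new[0], T_new[-1] = 1.0, 0.0
    P_new[0], P_new[-1] = 0.0, 1.0
    T, P = T_new, P_new

print(f"mid-plane temperature after transient: {T[nx // 2]:.4f}")
```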
A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.
2014-12-01
Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics, and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth Science observing satellites and the magnitude of data from climate model output are predicted to grow into the tens of petabytes, challenging current data analysis paradigms. This same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while managing the uncertainties of scientific conclusions derived from such capabilities. This talk will provide an overview of JPL's efforts in developing a comprehensive architectural approach to data science.
NASA Astrophysics Data System (ADS)
Prochaska, J. Xavier; Lau, Marie Wingyee; Hennawi, Joseph F.
2014-12-01
We survey the incidence and absorption strength of the metal-line transitions C II 1334 and C IV 1548 from the circumgalactic medium (CGM) surrounding z ~ 2 quasars, which act as signposts for massive dark matter halos (M_halo ≈ 10^12.5 M_⊙). On scales of the virial radius (r_vir ≈ 160 kpc), we measure a high covering fraction f_C = 0.73 ± 0.10 for strong C II 1334 absorption (rest equivalent width W_1334 ≥ 0.2 Å), implying a massive reservoir of cool (T ~ 10^4 K) metal-enriched gas. We conservatively estimate a metal mass exceeding 10^8 M_⊙. We propose that these metals trace enrichment of the incipient intragroup/intracluster medium that these halos eventually inhabit. This cool CGM around quasars is the pinnacle among galaxies observed at all epochs, as regards the covering fraction and average equivalent width of H I Lyα and low-ion metal absorption. We argue that the properties of this cool CGM primarily reflect the halo mass, and that other factors such as feedback, star-formation rate, and accretion from the intergalactic medium are secondary. We further estimate that the CGM of massive z ~ 2 galaxies accounts for the majority of strong Mg II absorption along random quasar sightlines. Last, we detect an excess of strong C IV 1548 absorption (W_1548 ≥ 0.3 Å) over random incidence out to 1 Mpc physical impact parameter and measure the quasar-C IV cross-correlation function ξ_CIV-Q(r) = (r/r_0)^(-γ), with r_0 = 7.5 (+2.8/-1.4) h^-1 Mpc and γ = 1.7 (+0.1/-0.2). Consistent with previous work on larger scales, we infer that this highly ionized C IV gas traces massive (10^12 M_⊙) halos.
Classical and quantum cosmology of minimal massive bigravity
NASA Astrophysics Data System (ADS)
Darabi, F.; Mousavi, M.
2016-10-01
In a Friedmann-Robertson-Walker (FRW) space-time background, we study the classical cosmological models in the context of the recently proposed theory of nonlinear minimal massive bigravity. We show that in the presence of a perfect fluid the classical field equations acquire a contribution from the massive graviton as a cosmological term, which is positive or negative depending on the dynamical competition between the two scale factors of the bigravity metrics. We obtain the classical field equations for flat and open universes in the ordinary and Schutz representations of the perfect fluid. Focusing on the Schutz representation for the flat universe, we find classical solutions exhibiting singularities in the early universe with a vacuum equation of state. Then, in the Schutz representation, we study the quantum cosmology for the flat universe and derive the Schrödinger-Wheeler-DeWitt equation. We find its exact and wave packet solutions and discuss their properties, showing that the initial singularity in the classical solutions can be avoided by quantum cosmology. Similar to the study of the Hartle-Hawking no-boundary proposal in the quantum cosmology of de Rham, Gabadadze, and Tolley (dRGT) massive gravity, it turns out that the mass of the graviton predicted by the quantum cosmology of minimal massive bigravity is large in the early universe. This is in agreement with the fact that in the early universe the cosmological constant should be large.
Looking for early black holes signatures in the anisotropies of Cosmic backgrounds
NASA Astrophysics Data System (ADS)
Cappelluti, Nico
2016-04-01
We currently do not know how supermassive black holes are seeded and grow to form the observed massive QSOs at z~7. This is puzzling, because at that redshift the Universe was still too young to allow the growth of such massive black holes from stellar-remnant black hole seeds. Theoretical models, taking into account the paucity of metals in the early Universe, explain this by invoking the formation of massive black hole seeds at z>10 as Direct Collapse Black Holes rather than as remnants of dead Pop III stars. As of today we cannot claim any detection of a high-z (z>7) black hole in its early stage of life. However, our recent measurements of the arcminute-scale joint fluctuations of the Cosmic X-ray Background and the Cosmic Infrared Background by Chandra and Spitzer can be explained by a population of highly absorbed z>10 Direct Collapse Black Holes. I will review the recent discoveries obtained with different instruments and by different teams and critically discuss these findings and their interpretations.
Large-scale quantum transport calculations for electronic devices with over ten thousand atoms
NASA Astrophysics Data System (ADS)
Lu, Wenchang; Lu, Yan; Xiao, Zhongcan; Hodak, Miro; Briggs, Emil; Bernholc, Jerry
The non-equilibrium Green's function (NEGF) method has been implemented in our massively parallel DFT software, the real space multigrid (RMG) code suite. Our implementation employs multi-level parallelization strategies and fully utilizes both multi-core CPUs and GPU accelerators. Since the cost of the calculations increases dramatically with the number of orbitals, an optimal basis set is crucial for including a large number of atoms in the ``active device'' part of the simulations. In our implementation, the localized orbitals are separately optimized for each principal layer of the device region, in order to obtain an accurate and optimal basis set. As a large example, we calculated the transmission characteristics of a Si nanowire p-n junction. The nanowire is along the (110) direction in order to minimize the number of dangling bonds, which are saturated by H atoms. Its diameter is 3 nm. The length of 24 nm is necessary because of the long-range screening length in Si. Our calculations clearly show the I-V characteristics of a diode, i.e., the current increases exponentially with forward bias and is near zero with backward bias. Other examples will also be presented, including three-terminal transistors and large sensor structures.
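For readers unfamiliar with NEGF, the sketch below computes the transmission of a 1-D tight-binding chain between two semi-infinite leads, the textbook reduction of the method; the RMG implementation described above additionally uses optimized localized orbitals, principal layers, and GPU-accelerated parallel linear algebra, none of which is reproduced here.

```python
# Minimal NEGF transmission for a uniform 1-D tight-binding chain:
# T(E) = Tr[Gamma_L G Gamma_R G^dagger], with analytic lead self-energies.
import numpy as np

t = 1.0                      # hopping energy (assumed units)
n_dev = 8                    # "active device" sites
H = np.zeros((n_dev, n_dev))
for i in range(n_dev - 1):
    H[i, i + 1] = H[i + 1, i] = -t

def surface_g(E, eta=1e-9):
    """Retarded surface Green's function of a semi-infinite 1-D chain."""
    Ec = E + 1j * eta
    g = (Ec - np.sqrt(Ec**2 - 4 * t**2)) / (2 * t**2)
    # pick the retarded branch (Im g < 0)
    return g if g.imag < 0 else (Ec + np.sqrt(Ec**2 - 4 * t**2)) / (2 * t**2)

def transmission(E):
    sigma = t**2 * surface_g(E)              # lead self-energy
    Sigma_L = np.zeros_like(H, complex); Sigma_L[0, 0] = sigma
    Sigma_R = np.zeros_like(H, complex); Sigma_R[-1, -1] = sigma
    G = np.linalg.inv((E + 1e-9j) * np.eye(n_dev) - H - Sigma_L - Sigma_R)
    Gam_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gam_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return np.trace(Gam_L @ G @ Gam_R @ G.conj().T).real

for E in (-1.0, 0.0, 1.0):
    print(f"T({E:+.1f}) = {transmission(E):.3f}")   # ~1 inside the band
```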
Flares from Galactic Centre pulsars: a new class of X-ray transients?
NASA Astrophysics Data System (ADS)
Giannios, Dimitrios; Lorimer, Duncan R.
2016-06-01
Despite intensive searches, the only pulsar within 0.1 pc of the central black hole in our Galaxy, Sgr A*, is a radio-loud magnetar. Since magnetars are rare among the Galactic neutron star population, and a large number of massive stars are already known in this region, the Galactic Centre (GC) should harbour a large number of neutron stars. Population syntheses suggest several thousand neutron stars may be present in the GC. Many of these could be highly energetic millisecond pulsars, which are also proposed to be responsible for the GC gamma-ray excess. We propose that the presence of a neutron star within 0.03 pc of Sgr A* can be revealed by its shock interactions with the disc around the central black hole. As we demonstrate, these interactions result in observable transient non-thermal X-ray and gamma-ray emission over time-scales of months, provided that the spin-down luminosity of the neutron star is L_sd ~ 10^35 erg s^-1. Current limits on the population of normal and millisecond pulsars in the GC region suggest that a number of such pulsars are present with such luminosities.
Numerical simulation of the compressible Orszag-Tang vortex. Interim report, June 1988-February 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlburg, R.B.; Picone, J.M.
Results of fully compressible, Fourier collocation, numerical simulations of the Orszag-Tang vortex system are presented. Initial conditions consist of a nonrandom, periodic field in which the magnetic and velocity fields contain X-points but differ in modal structure along one spatial direction. The velocity field is initially solenoidal, with the total initial pressure field consisting of the superposition of the appropriate incompressible pressure distribution upon a flat pressure field corresponding to the initial average Mach number of the flow. In the numerical simulations, this initial Mach number is varied from 0.2 to 0.6. These values correspond to average plasma beta values ranging from 30.0 to 3.3, respectively. Compressible effects develop within one or two Alfven transit times, as manifested in the spectra of compressible quantities such as mass density and the nonsolenoidal flow field. These effects include (1) retardation of growth of correlation between the magnetic field and the velocity field, (2) emergence of compressible small-scale structure such as massive jets, and (3) bifurcation of eddies in the compressible-flow field. Differences between the incompressible and compressible results tend to increase with increasing initial average Mach number.
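For reference, the standard Orszag-Tang initial condition takes a solenoidal velocity and a magnetic field with doubled modal structure along one direction; the report's compressible setup then superposes a pressure field on top. The snippet below constructs the commonly used form of these fields and verifies that the velocity is divergence-free; the normalization B0 and the grid size are assumptions, as the abstract does not state them.

```python
# Classic Orszag-Tang fields on a periodic 2-pi box (standard literature
# form; the report's exact normalization may differ).
import numpy as np

n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

vx, vy = -np.sin(Y), np.sin(X)                   # solenoidal velocity
B0 = 1.0                                          # assumed normalization
bx, by = -B0 * np.sin(Y), B0 * np.sin(2.0 * X)    # differs modally along x

# Divergence check on the periodic grid via spectral derivatives:
kx = np.fft.fftfreq(n, d=2.0 * np.pi / n) * 2.0 * np.pi
div_v = (np.fft.ifft2(1j * kx[:, None] * np.fft.fft2(vx)) +
         np.fft.ifft2(1j * kx[None, :] * np.fft.fft2(vy))).real
print("max |div v| =", np.abs(div_v).max())       # ~1e-13: solenoidal
```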
SpS5 - II. Stellar and wind parameters
NASA Astrophysics Data System (ADS)
Martins, F.; Bergemann, M.; Bestenlehner, J. M.; Crowther, P. A.; Hamann, W. R.; Najarro, F.; Nieva, M. F.; Przybilla, N.; Freimanis, J.; Hou, W.; Kaper, L.
2015-03-01
The development of infrared observational facilities has revealed a number of massive stars in obscured environments throughout the Milky Way and beyond. The determination of their stellar and wind properties from infrared diagnostics is thus required to take full advantage of the wealth of observations available in the near and mid infrared. However, the task is challenging. This session addressed some of the problems encountered and showed the limitations and successes of infrared studies of massive stars.
Massive Multi-Agent Systems Control
NASA Technical Reports Server (NTRS)
Campagne, Jean-Charles; Gardon, Alain; Collomb, Etienne; Nishida, Toyoaki
2004-01-01
In order to build massive multi-agent systems, considered as complex and dynamic systems, one needs a method to analyze and control the system. We suggest an approach using morphology to represent and control the state of large organizations composed of a great number of light software agents. Morphology is understood as representing the state of the multi-agent system as shapes in an abstract geometrical space; this notion is close to the notion of phase space in physics.
Testing no-scale supergravity with the Fermi Space Telescope LAT
NASA Astrophysics Data System (ADS)
Li, Tianjun; Maxin, James A.; Nanopoulos, Dimitri V.; Walker, Joel W.
2014-05-01
We describe a methodology for testing no-scale supergravity with the LAT instrument onboard the Fermi Space Telescope via observation of gamma-ray emission from lightest supersymmetric (SUSY) neutralino annihilations. For our test vehicle we engage the framework of the SUSY grand unified model no-scale flipped SU(5) with extra vector-like flippon multiplets derived from F-theory, known as F-SU(5). We show that through compression of the light stau and light bino neutralino mass difference, where internal bremsstrahlung photons give a dominant contribution, the photon yield from annihilation of SUSY dark matter can be elevated to a number of events potentially observable by the Fermi-LAT in the coming years. Likewise, the increased yield in no-scale F-SU(5) may also have rendered the existing observation of a 133 GeV monochromatic gamma-ray line visible, if additional data should exclude systematic or statistical explanations. The question of intensity aside, no-scale F-SU(5) can indeed provide a natural weakly interacting massive particle candidate with a mass in the correct range to yield γγ and γZ emission lines at m_χ ~ 133 GeV and m_χ ~ 145 GeV, respectively. Additionally, we elucidate the emerging empirical connection between recent Planck satellite data and no-scale supergravity cosmological models which mimic the Starobinsky model of inflation. Together, these experiments furnish rich alternate avenues for testing no-scale F-SU(5), and similarly structured models, the results of which may lend independent credence to observations made at the Large Hadron Collider.
Identifying Protoclusters in the High Redshift Universe and Mapping Their Evolution
NASA Astrophysics Data System (ADS)
Franck, Jay Robert
2018-01-01
To investigate the growth and evolution of the earliest structures in the Universe, we identify more than 200 galaxy overdensities in the Candidate Cluster and Protocluster Catalog (CCPC). This compilation is produced by mining open astronomy data sets for overdensities of spectroscopically confirmed high-redshift galaxies. At these redshifts, the Universe is only a few billion years old. This data mining approach yields a nearly 10-fold increase in the number of known protoclusters in the literature. The CCPC also includes the highest-redshift, spectroscopically confirmed protocluster, at z = 6.56. For nearly 1500 galaxies contained in the CCPC between redshifts of 2.0
Extending applicability of bimetric theory: chameleon bigravity
NASA Astrophysics Data System (ADS)
De Felice, Antonio; Mukohyama, Shinji; Uzan, Jean-Philippe
2018-02-01
This article extends bimetric formulations of massive gravity to make the mass of the graviton depend on its environment. This minimal extension offers a novel way to reconcile massive gravity with local tests of general relativity without invoking the Vainshtein mechanism. On cosmological scales, it is argued that the model is stable and that it circumvents the Higuchi bound, hence relaxing the constraints on the parameter space. Moreover, with this extension the strong coupling scale is also environmentally dependent in such a way that it is kept sufficiently higher than the expansion rate all the way up to the very early universe, while the present graviton mass is low enough to be phenomenologically interesting. In this sense the extended bigravity theory serves as a partial UV completion of the standard bigravity theory. This extension is very generic and robust, and a simple specific example is described.
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction, such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
Massive cortical reorganization in sighted Braille readers.
Siuda-Krzywicka, Katarzyna; Bola, Łukasz; Paplińska, Małgorzata; Sumera, Ewa; Jednoróg, Katarzyna; Marchewka, Artur; Śliwińska, Magdalena W; Amedi, Amir; Szwed, Marcin
2016-03-15
The brain is capable of large-scale reorganization in blindness or after massive injury. Such reorganization crosses the division into separate sensory cortices (visual, somatosensory...). As a result, the visual cortex of the blind becomes active during tactile Braille reading. Although the possibility of such reorganization in the normal, adult brain has been raised, definitive evidence has been lacking. Here, we demonstrate such extensive reorganization in normal, sighted adults who learned Braille while their brain activity was investigated with fMRI and transcranial magnetic stimulation (TMS). Subjects showed enhanced activity for tactile reading in the visual cortex, including the visual word form area (VWFA), that was modulated by their Braille reading speed, and strengthened resting-state connectivity between visual and somatosensory cortices. Moreover, TMS disruption of VWFA activity decreased their tactile reading accuracy. Our results indicate that large-scale reorganization is a viable mechanism recruited when learning complex skills.
Rapid formation of supermassive black hole binaries in galaxy mergers with gas.
Mayer, L; Kazantzidis, S; Madau, P; Colpi, M; Quinn, T; Wadsley, J
2007-06-29
Supermassive black holes (SMBHs) are a ubiquitous component of the nuclei of galaxies. It is normally assumed that after the merger of two massive galaxies, a SMBH binary will form, shrink because of stellar or gas dynamical processes, and ultimately coalesce by emitting a burst of gravitational waves. However, so far it has not been possible to show how two SMBHs bind during a galaxy merger with gas because of the difficulty of modeling a wide range of spatial scales. Here we report hydrodynamical simulations that track the formation of a SMBH binary down to scales of a few light years after the collision between two spiral galaxies. A massive, turbulent, nuclear gaseous disk arises as a result of the galaxy merger. The black holes form an eccentric binary in the disk in less than 1 million years as a result of the gravitational drag from the gas rather than from the stars.
Why do galactic spins flip in the cosmic web? A Theory of Tidal Torques near saddles
NASA Astrophysics Data System (ADS)
Pichon, Christophe; Codis, Sandrine; Pogosyan, Dmitry; Dubois, Yohan; Desjacques, Vincent; Devriendt, Julien
2016-10-01
Filaments of the cosmic web drive spin acquisition of disc galaxies. The point process of filament-type saddles best represents this environment and can be used to revisit Tidal Torque Theory in the context of an anisotropic peak (saddle) background split. The constrained misalignment between the tidal tensor and the Hessian of the density field generated in the vicinity of filament saddle points simply explains the corresponding transverse and longitudinal point-reflection-symmetric geometry of the spin distribution. It predicts in particular an azimuthal orientation of the spins of more massive galaxies and spin alignment with the filament for less massive galaxies. Its scale dependence also allows us to relate the transition mass, corresponding to the alignment of dark matter halos' spin relative to the direction of their neighboring filament, to this geometry, and to predict accordingly its scaling with the mass of non-linearity, as was measured in simulations.
Direct formation of supermassive black holes via multi-scale gas inflows in galaxy mergers.
Mayer, L; Kazantzidis, S; Escala, A; Callegari, S
2010-08-26
Observations of distant quasars indicate that supermassive black holes of billions of solar masses already existed less than a billion years after the Big Bang. Models in which the 'seeds' of such black holes form by the collapse of primordial metal-free stars cannot explain the rapid appearance of these supermassive black holes because gas accretion is not sufficiently efficient. Alternatively, these black holes may form by direct collapse of gas within isolated protogalaxies, but current models require idealized conditions, such as metal-free gas, to prevent cooling and star formation from consuming the gas reservoir. Here we report simulations showing that mergers between massive protogalaxies naturally produce the conditions for direct collapse into a supermassive black hole with no need to suppress cooling and star formation. Merger-driven gas inflows give rise to an unstable, massive nuclear gas disk of a few billion solar masses, which funnels more than 10^8 solar masses of gas to a sub-parsec-scale gas cloud in only 100,000 years. The cloud undergoes gravitational collapse, which eventually leads to the formation of a massive black hole. The black hole can subsequently grow to a billion solar masses on timescales of about 10^8 years by accreting gas from the surrounding disk.
Planckian Interacting Massive Particles as Dark Matter.
Garny, Mathias; Sandora, McCullen; Sloth, Martin S
2016-03-11
The standard model could be self-consistent up to the Planck scale according to the present measurements of the Higgs boson mass and top quark Yukawa coupling. It is therefore possible that new physics is only coupled to the standard model through Planck-suppressed higher dimensional operators. In this case the weakly interacting massive particle miracle is a mirage, and instead minimality as dictated by Occam's razor would indicate that dark matter is related to the Planck scale, where quantum gravity is anyway expected to manifest itself. Assuming within this framework that dark matter is a Planckian interacting massive particle, we show that the most natural mass larger than 0.01 M_p is already ruled out by the absence of tensor modes in the cosmic microwave background (CMB). This also indicates that we expect tensor modes in the CMB to be observed soon for this type of minimal dark matter model. Finally, we touch upon the Kaluza-Klein graviton mode as a possible realization of this scenario within UV complete models, as well as further potential signatures and peculiar properties of this type of dark matter candidate. This paradigm therefore leads to a subtle connection between quantum gravity, the physics of primordial inflation, and the nature of dark matter.
General relativistic viscous hydrodynamics of differentially rotating neutron stars
NASA Astrophysics Data System (ADS)
Shibata, Masaru; Kiuchi, Kenta; Sekiguchi, Yu-ichiro
2017-04-01
Employing a simplified version of the Israel-Stewart formalism for general-relativistic shear-viscous hydrodynamics, we perform axisymmetric general-relativistic simulations for a rotating neutron star surrounded by a massive torus, which can be formed from differentially rotating stars. We show that with our choice of a shear-viscous hydrodynamics formalism, the simulations can be performed stably for a long time scale. We also demonstrate that with a possibly high shear-viscous coefficient, not only does viscous angular momentum transport operate, but an outflow could also be driven from a hot envelope around the neutron star for a time scale ≳100 ms with ejecta mass ≳10^-2 M_⊙, which is comparable to the typical mass of dynamical ejecta from binary neutron-star mergers. This suggests that massive neutron stars surrounded by a massive torus, which are typical outcomes formed after the merger of binary neutron stars, could be the dominant source of neutron-rich ejecta if the effective shear viscosity is sufficiently high, i.e., if the viscous α parameter is ≳10^-2. The present numerical result indicates the importance of future high-resolution magnetohydrodynamics simulations, which are the unique first-principles approach to clarifying the viscous effect in the merger remnants of binary neutron stars.
High-mass X-ray binary populations. 1: Galactic modeling
NASA Technical Reports Server (NTRS)
Dalton, William W.; Sarazin, Craig L.
1995-01-01
Modern stellar evolutionary tracks are used to calculate the evolution of a very large number of massive binary star systems (M_tot ≥ 15 M_⊙) which cover a wide range of total masses, mass ratios, and starting separations. Each binary is evolved accounting for mass and angular momentum loss through the supernova of the primary to the X-ray binary phase. Using the observed rate of star formation in our Galaxy and the properties of massive binaries, we calculate the expected high-mass X-ray binary (HMXRB) population in the Galaxy. We test various massive binary evolutionary scenarios by comparing the resulting HMXRB predictions with the X-ray observations. A major goal of this study is the determination of the fraction of matter lost from the system during the Roche lobe overflow phase. Curiously, we find that the total numbers of observable HMXRBs are nearly independent of this assumed mass-loss fraction, with any of the values tested here giving acceptable agreement between predicted and observed numbers. However, comparison of the period distribution of our HMXRB models with the observed period distribution does reveal a distinction among the various models. As a result of this comparison, we conclude that approximately 70% of the overflow matter is lost from a massive binary system during mass transfer in the Roche lobe overflow phase. We compare models constructed assuming that all X-ray emission is due to accretion onto the compact object from the donor star's wind with models that incorporate a simplified disk accretion scheme. By comparing the results of these models with observations, we conclude that the formation of disks in HMXRBs must be relatively common. We also calculate the rate of formation of double degenerate binaries, high velocity detached compact objects, and Thorne-Zytkow objects.
NASA Astrophysics Data System (ADS)
De Laurentis, Mariafelicia; De Martino, Ivan; Lazkoz, Ruth
2018-05-01
Alternative theories of gravity may serve to overcome several shortcomings of the standard cosmological model, but in their weak-field limit general relativity must be recovered so as to match the tight constraints at the Solar System scale. Therefore, testing such alternative models at the scale of stellar systems could give a unique opportunity to confirm or rule them out. One of the most straightforward modifications is represented by analytical f(R)-gravity models that introduce a Yukawa-like modification to the Newtonian potential, thus modifying the dynamics of particles. Using the geodesic equations, we have illustrated the amplitude of these modifications. First, we have numerically integrated the equations of motion, showing the orbital precession of a particle around a massive object. Second, we have computed an analytic expression for the periastron advance of systems having a semimajor axis much shorter than the Yukawa scale length. Finally, we have extended our results to the case of a binary system composed of two massive objects. Our analysis provides a powerful tool to obtain constraints on the underlying theory of gravity using current and forthcoming data sets.
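A small numerical experiment in the spirit of the paper's first step: integrate a test particle in a Yukawa-modified potential Phi(r) = -(GM/r)(1 + alpha*exp(-r/lambda)) and read off the periastron drift from successive minima of r(t). All parameter values below are illustrative, not the paper's.

```python
# Orbit integration in a Yukawa-modified Newtonian potential; the periastron
# angle drifts from orbit to orbit, unlike the closed Keplerian ellipse.
import numpy as np
from scipy.integrate import solve_ivp

GM, alpha, lam = 1.0, 0.1, 2.0     # assumed Yukawa strength and range

def rhs(_, s):
    x, y, vx, vy = s
    r = np.hypot(x, y)
    yuk = np.exp(-r / lam)
    # a(r) = -dPhi/dr = -(GM/r^2) * [1 + alpha*yuk*(1 + r/lam)]
    amag = -(GM / r**2) * (1.0 + alpha * yuk * (1.0 + r / lam))
    return [vx, vy, amag * x / r, amag * y / r]

# Mildly eccentric orbit with semi-major axis well inside lambda:
s0 = [1.2, 0.0, 0.0, 0.8]
sol = solve_ivp(rhs, (0.0, 200.0), s0, rtol=1e-10, atol=1e-12,
                dense_output=True)

t = np.linspace(0.0, 200.0, 200_001)
x, y = sol.sol(t)[:2]
r = np.hypot(x, y)
# Periastron passages are local minima of r(t):
mins = np.where((r[1:-1] < r[:-2]) & (r[1:-1] < r[2:]))[0] + 1
angles = np.unwrap(np.arctan2(y[mins], x[mins]))
print("mean periastron advance per orbit [rad]:", np.diff(angles).mean())
```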
Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.
Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre
2017-06-01
We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks, and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer a resource-efficient means of performing high-speed, neuromorphic, massively parallel pattern recognition and classification tasks.
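The NEF principle underlying the hardware, encoding with heterogeneous tuning curves and decoding with regularized least squares, can be captured in a few lines of NumPy. This sketch uses rectified-linear rate neurons and a scalar signal; the FPGA design instead time-multiplexes 64 physical spiking neurons per core, so everything below is a conceptual stand-in.

```python
# NEF-style encode/decode of a scalar x in [-1, 1] with 64 rate neurons.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 64
encoders = rng.choice([-1.0, 1.0], n_neurons)      # preferred directions
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)

def rates(x):
    """Population response (rectified-linear) to input array x."""
    drive = gains * np.multiply.outer(x, encoders) + biases
    return np.maximum(drive, 0.0)

# Solve for decoders d minimizing ||A d - x||^2 + reg*||d||^2 over samples:
xs = np.linspace(-1.0, 1.0, 200)
A = rates(xs)
reg = 0.1
d = np.linalg.solve(A.T @ A + reg * np.eye(n_neurons), A.T @ xs)

x_test = np.array([-0.7, 0.0, 0.4])
print("decoded:", rates(x_test) @ d)   # approximately recovers x_test
```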
NASA Astrophysics Data System (ADS)
Figura, Charles C.; Urquhart, James S.; Morgan, Lawrence
2015-01-01
We have conducted a detailed multi-wavelength investigation of a variety of massive star-forming regions in order to characterise the impact of the interactions between the substructure of the dense protostellar clumps and their local environment, including feedback from the embedded proto-cluster. A selection of 70 MYSOs and HII regions identified by the RMS survey have been followed up with observations of the ammonia (1,1) and (2,2) inversion transitions made with the KFPA on the GBT. These maps have been combined with archival CO data to investigate the thermal and kinematic structure of the extended envelopes down to the dense clumps. We complement this larger-scale picture with high-resolution near- and mid-infrared images to probe the properties of the embedded objects themselves. We present an overview of several sources from this sample that illustrate some of the interactions that we observe. We find that high molecular column densities and kinetic temperatures are coincident with embedded sources and with shocks and outflows as exhibited in the gas kinematics.
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)
2002-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach are demonstrated on several parallel computers.
The Circumgalactic Medium in Massive Halos
NASA Astrophysics Data System (ADS)
Chen, Hsiao-Wen
This chapter presents a review of the current state of knowledge on the cool (T ~ 10^4 K) halo gas content around massive galaxies at z ≈ 0.2-2. Over the last decade, significant progress has been made in characterizing the cool circumgalactic gas in massive halos of M_h ≈ 10^12-10^14 M_⊙ at intermediate redshifts using absorption spectroscopy. Systematic studies of halo gas around massive galaxies beyond the nearby universe are made possible by large spectroscopic samples of galaxies and quasars in public archives. In addition to accurate and precise constraints for the incidence of cool gas in massive halos, detailed characterizations of gas kinematics and chemical compositions around massive quiescent galaxies at z ≈ 0.5 have also been obtained. Combining all available measurements shows that infalling clouds from external sources are likely the primary source of cool gas detected at
Scalable Visual Analytics of Massive Textual Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.
2007-04-01
This paper describes the first scalable implementation of a text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.
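The parallelization pattern described, sharding documents across workers, computing partial text statistics locally, and merging, is essentially map/reduce. The toy below shows that shape with plain word counts and Python multiprocessing; the engine's real pipeline (and its cluster-level implementation) is far richer than this.

```python
# Shard a corpus across processes, count terms locally, merge globally.
from collections import Counter
from multiprocessing import Pool

DOCS = [
    "visual analytics of massive text",
    "parallel text processing engine",
    "massive data sets need parallel engines",
] * 1000  # stand-in corpus

def partial_counts(shard):
    c = Counter()
    for doc in shard:
        c.update(doc.split())
    return c

if __name__ == "__main__":
    n_workers = 4
    shards = [DOCS[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        merged = Counter()
        for part in pool.map(partial_counts, shards):
            merged.update(part)   # reduce step
    print(merged.most_common(3))
```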
Analytic solutions in nonlinear massive gravity.
Koyama, Kazuya; Niz, Gustavo; Tasinato, Gianmassimo
2011-09-23
We study spherically symmetric solutions in a covariant massive gravity model, which is a candidate for a ghost-free nonlinear completion of the Fierz-Pauli theory. There is a branch of solutions that exhibits the Vainshtein mechanism, recovering general relativity below a Vainshtein radius given by (r_g/m^2)^(1/3), where m is the graviton mass and r_g is the Schwarzschild radius of a matter source. Another branch of exact solutions exists, corresponding to de Sitter-Schwarzschild spacetimes where the curvature scale of de Sitter space is proportional to the mass squared of the graviton.
Mining the Obscured OB Star Population in Carina
NASA Astrophysics Data System (ADS)
Smith, Michael
2016-04-01
Massive OB stars are very influential objects in the ecology of galaxies like our own. Current catalogues of Galactic OB stars are heavily biased towards bright (g < 13) objects, typically including fainter objects only when they are found in prominent star clusters (Garmany et al., 1982; Reed, 2003; Maíz-Apellaniz et al., 2004). Exploitation of the VST Photometric Hα Survey (VPHAS+) allows us to build a robust catalogue of photometrically selected OB stars across the entire Southern Galactic plane, both within clusters and in the field, down to ∼20th magnitude in g. For the first time, a complete accounting of the OB star runaway phenomenon becomes possible. Along with making the primary selection using VPHAS+ colours, I have performed Markov-Chain Monte Carlo fitting of the spectral energy distributions of the selected stars by combining VPHAS+ u, g, r, i with published J, H, K photometry. This gives rough constraints on effective temperature and distance, whilst delivering much more precise reddening parameters A_0 and R_V, allowing us to build a much richer picture of how extinction and extinction laws vary across the Galactic plane. My thesis begins with a description of the method of photometric selection of OB star candidates and its validation across a 2 square degree field including the well-known young massive star cluster Westerlund 2 (Mohr-Smith et al., 2015). Following on from this, I present spectroscopy with AAOmega of 283 candidates identified by our method, which confirms that ∼94% of the sample are the expected O and early B stars. I then develop this method further and apply it to a Galactic plane strip of 42 square degrees that runs from the Carina Arm tangent region to the much-studied massive cluster in NGC 3603. A new aspect I attend to in this expansion of the method is tightening up the uniform photometric calibration of the data, paying particular attention to the always-challenging u band. This leads to a new and reliable catalogue of 5915 OB stars. As well as increasing the number of identified massive stars in this large region of the sky by nearly an order of magnitude, a more complete picture of massive star formation in the Carina Arm has emerged. I have found a broad over-density of O stars around the highly luminous cluster NGC 3603 and have uncovered two new candidate OB clusters/associations. I have also paired up the ionization sources of a number of HII regions catalogued by the RMS survey. It is also shown that the OB star scale height can serve as a rough standard ruler, leading to the result that the OB star layer shows the onset of warping at R_G ∼ 10 kpc. My results confirm that this entire region requires a non-standard (3.5 < R_V < 4.0) reddening law for distances greater than ∼2 kpc. The methods developed in this study are ready to roll out across the rest of the VPHAS+ footprint that has been observed to date. This extension will take in a strip of ∼±2 degrees across the entire Southern Galactic mid-plane (a sky area of over 700 square degrees), within which we expect to find the majority of massive OB stars. This will result in the largest catalogue of Galactic OB stars to date.
Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widlund, Olof B.
2015-06-09
The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations, such as those of linear elasticity and electromagnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large-scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver of a coarse model in order to have performance which is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.
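The report's central ingredient, a preconditioner that combines independent subdomain solves with a coarse-model correction inside conjugate gradients, can be demonstrated on a 1-D Poisson problem. The sketch below is serial and non-overlapping, a toy rather than the project's 3-D, MPI-parallel solvers; all sizes are arbitrary.

```python
# Two-level preconditioned CG for 1-D Poisson: non-overlapping subdomain
# solves (block Jacobi) plus a coarse-grid correction -- the coarse component
# that keeps iteration counts independent of the number of subdomains.
import numpy as np

n, n_sub = 255, 8                       # fine unknowns, subdomains
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Piecewise-linear interpolation from n_sub-1 interior coarse nodes:
xc = np.linspace(0.0, 1.0, n_sub + 1)[1:-1]
xf = np.linspace(h, 1.0 - h, n)
P = np.maximum(0.0, 1.0 - np.abs(xf[:, None] - xc[None, :]) * n_sub)
A0 = P.T @ A @ P                         # coarse model

blocks = np.array_split(np.arange(n), n_sub)
def precond(r):
    z = np.zeros_like(r)
    for idx in blocks:                   # independent local solves
        z[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    z += P @ np.linalg.solve(A0, P.T @ r)  # coarse correction
    return z

b = np.ones(n)
x = np.zeros(n); r = b.copy(); z = precond(r); p = z.copy()
rz = r @ z
for it in range(200):
    Ap = A @ p
    a = rz / (p @ Ap)
    x += a * p; r -= a * Ap
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break
    z = precond(r); rz_new = r @ z
    p = z + (rz_new / rz) * p; rz = rz_new
print(f"converged in {it + 1} iterations")
```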
The Mass, Color, and Structural Evolution of Today’s Massive Galaxies Since z ˜ 5
NASA Astrophysics Data System (ADS)
Hill, Allison R.; Muzzin, Adam; Franx, Marijn; Clauwens, Bart; Schreiber, Corentin; Marchesini, Danilo; Stefanon, Mauro; Labbe, Ivo; Brammer, Gabriel; Caputi, Karina; Fynbo, Johan; Milvang-Jensen, Bo; Skelton, Rosalind E.; van Dokkum, Pieter; Whitaker, Katherine E.
2017-03-01
In this paper, we use stacking analysis to trace the mass growth, color evolution, and structural evolution of present-day massive galaxies (log(M*/M_⊙) = 11.5) out to z = 5. We utilize the exceptional depth and area of the latest UltraVISTA data release, combined with the depth and unparalleled seeing of CANDELS, to gather a large, mass-selected sample of galaxies in the NIR (rest-frame optical to UV). Progenitors of present-day massive galaxies are identified via an evolving cumulative number density selection, which accounts for the effects of merging to correct for the systematic biases introduced using a fixed cumulative number density selection, and find progenitors grow in stellar mass by ≈1.5 dex since z = 5. Using stacking, we analyze the structural parameters of the progenitors and find that most of the stellar mass content in the central regions was in place by z ~ 2, and while galaxies continue to assemble mass at all radii, the outskirts experience the largest fractional increase in stellar mass. However, we find evidence of significant stellar mass build-up at r < 3 kpc beyond z > 4, probing an era of significant mass assembly in the interiors of present-day massive galaxies. We also compare mass assembly from progenitors in this study to the EAGLE simulation and find qualitatively similar assembly with z at r < 3 kpc. We identify z ~ 1.5 as a distinct epoch in the evolution of massive galaxies where progenitors transitioned from growing in mass and size primarily through in situ star formation in disks to a period of efficient growth in r_e consistent with the minor merger scenario.
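Progenitor selection at fixed cumulative number density amounts to inverting n(>M) at each redshift; the paper additionally evolves the target density with redshift to account for mergers. The sketch below shows the inversion step with toy Schechter parameters (not the UltraVISTA/CANDELS fits) and a constant target density for simplicity.

```python
# Invert a Schechter cumulative mass function n(>M) to find the mass at a
# fixed cumulative number density at an earlier epoch. Parameters are toys.
import numpy as np
from scipy.integrate import quad

def cumulative_density(logM, logMstar, phi_star, alpha):
    """n(> M) for a Schechter mass function, in the mass function's units."""
    integrand = lambda lm: (np.log(10.0) * phi_star
                            * 10.0 ** ((lm - logMstar) * (1.0 + alpha))
                            * np.exp(-(10.0 ** (lm - logMstar))))
    return quad(integrand, logM, 13.0)[0]

def mass_at_density(n_target, logMstar, phi_star, alpha):
    grid = np.linspace(8.0, 12.5, 2000)
    n = np.array([cumulative_density(lm, logMstar, phi_star, alpha)
                  for lm in grid])
    # n decreases with mass, so reverse both arrays for interpolation:
    return np.interp(np.log10(n_target), np.log10(n[::-1]), grid[::-1])

# Anchor: number density of log M = 11.5 galaxies today (toy parameters):
n0 = cumulative_density(11.5, 10.9, 3e-3, -1.2)
# Progenitor mass at an earlier epoch with a toy evolved mass function:
logM_prog = mass_at_density(n0, 10.6, 1e-3, -1.4)
print(f"n(>M) = {n0:.2e}; progenitor log(M*/Msun) ~ {logM_prog:.2f}")
```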
Cosmological structure formation in Decaying Dark Matter models
NASA Astrophysics Data System (ADS)
Cheng, Dalong; Chu, M.-C.; Tang, Jiayu
2015-07-01
The standard cold dark matter (CDM) model predicts too many and too dense small structures. We consider an alternative model in which the dark matter undergoes two-body decays, with cosmological lifetime τ, into only one type of massive daughter with non-relativistic recoil velocity V_k. This decaying dark matter model (DDM) can suppress structure formation below its free-streaming scale on time scales comparable to τ. Compared with warm dark matter (WDM), DDM can better reduce the small structures while remaining consistent with high-redshift observations. We study cosmological structure formation in DDM by performing self-consistent N-body simulations and point out that cosmological simulations are necessary to understand the DDM structures, especially on non-linear scales. We propose empirical fitting functions for the DDM suppression of the mass function and the concentration-mass relation, which depend on the decay parameters: lifetime τ, recoil velocity V_k, and redshift. The fitting functions lead to accurate reconstruction of the non-linear power transfer function of DDM to CDM in the framework of the halo model. Using these results, we set constraints on the DDM parameter space by demanding that DDM not induce larger suppression than the Lyman-α-constrained WDM models. We further generalize and constrain the DDM models to initial conditions with non-trivial mother fractions and show that the halo-model predictions are still valid after considering a global decayed fraction. Finally, we point out that DDM is unlikely to resolve the disagreement on cluster numbers between the Planck primary CMB prediction and the Sunyaev-Zel'dovich (SZ) effect number count for τ ~ H_0^-1.
Boozer, Allen H.
2017-03-24
The potential for damage, the magnitude of the extrapolation, and the importance of the atypical (incidents that occur once in a thousand shots) make theory and simulation essential for ensuring that relativistic runaway electrons will not prevent ITER from achieving its mission. Most of the theoretical literature on electron runaway assumes magnetic surfaces exist. ITER planning for the avoidance of halo and runaway currents is focused on massive gas or shattered-pellet injection of impurities. In simulations of experiments, such injections lead to a rapid large-scale magnetic-surface breakup. Surface breakup, which is a magnetic reconnection, can occur on a quasi-ideal Alfvénic time scale when the resistance is sufficiently small. Nevertheless, the removal of the bulk of the poloidal flux, as in halo-current mitigation, is on a resistive time scale. The acceleration of electrons to relativistic energies requires the confinement of some tubes of magnetic flux within the plasma and a resistive time scale. The interpretation of experiments on existing tokamaks and their extrapolation to ITER should carefully distinguish confined versus unconfined magnetic field lines and quasi-ideal versus resistive evolution. The separation of quasi-ideal from resistive evolution is extremely challenging numerically, but is greatly simplified by constraints of Maxwell's equations, in particular those associated with magnetic helicity. Thus, the physics of electron runaway along confined magnetic field lines is clarified by relations among the poloidal flux change required for an e-fold in the number of electrons, the energy distribution of the relativistic electrons, and the number of relativistic electron strikes that can be expected in a single disruption event.
NASA Astrophysics Data System (ADS)
Spilker, Justin; Bezanson, Rachel; Barišić, Ivana; Bell, Eric; Lagos, Claudia del P.; Maseda, Michael; Muzzin, Adam; Pacifici, Camilla; Sobral, David; Straatman, Caroline; van der Wel, Arjen; van Dokkum, Pieter; Weiner, Benjamin; Whitaker, Katherine; Williams, Christina C.; Wu, Po-Feng
2018-06-01
A decade of study has established that the molecular gas properties of star-forming galaxies follow coherent scaling relations out to z ∼ 3, suggesting remarkable regularity of the interplay between molecular gas, star formation, and stellar growth. Passive galaxies, however, are expected to be gas-poor and therefore faint, and thus little is known about molecular gas in passive galaxies beyond the local universe. Here we present deep Atacama Large Millimeter/submillimeter Array observations of CO(2-1) emission in eight massive (M_star ∼ 10^11 M_⊙) galaxies at z ∼ 0.7 selected to lie a factor of 3-10 below the star-forming sequence at this redshift, drawn from the Large Early Galaxy Astrophysics Census survey. We significantly detect half the sample, finding molecular gas fractions ≲0.1. We show that the molecular and stellar rotational axes are broadly consistent, arguing that the molecular gas was not accreted after the galaxies became quiescent. We find that scaling relations extrapolated from the star-forming population overpredict both the gas fraction and gas depletion time for passive objects, suggesting the existence of either a break or large increase in scatter in these relations at low specific star formation rate. Finally, we show that the gas fractions of the passive galaxies we have observed at intermediate redshifts are naturally consistent with evolution into local, massive early-type galaxies by continued low-level star formation, with no need for further gas accretion or dynamical stabilization of the gas reservoirs in the intervening 6 billion years.
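The measurement chain implied above, integrated CO line flux to line luminosity to molecular gas mass to gas fraction, follows the standard conversion of Solomon & Vanden Bout (2005). The numbers below (line flux, alpha_CO, the r21 line ratio, and the approximate luminosity distance) are assumptions for illustration, not the survey's fitted values.

```python
# Integrated CO(2-1) flux -> L'_CO -> M_H2 -> gas fraction (toy numbers).
def L_prime_CO(S_dv_Jy_kms, nu_obs_GHz, D_L_Mpc, z):
    """Solomon & Vanden Bout (2005): L'_CO in K km/s pc^2."""
    return 3.25e7 * S_dv_Jy_kms * D_L_Mpc**2 / (nu_obs_GHz**2 * (1.0 + z) ** 3)

z = 0.7
nu_rest_21 = 230.538           # CO(2-1) rest frequency [GHz]
S_dv = 0.2                     # integrated line flux [Jy km/s] (assumed)
D_L = 4350.0                   # approximate luminosity distance at z=0.7 [Mpc]
alpha_CO = 4.4                 # Milky-Way-like factor [Msun / (K km/s pc^2)]
r21 = 0.8                      # assumed CO(2-1)/CO(1-0) line ratio

Lp_21 = L_prime_CO(S_dv, nu_rest_21 / (1.0 + z), D_L, z)
M_H2 = alpha_CO * Lp_21 / r21  # convert via the (1-0) line luminosity
f_gas = M_H2 / 10.0**11        # against M_star ~ 1e11 Msun
print(f"L'_CO(2-1) = {Lp_21:.2e} K km/s pc^2, M_H2 = {M_H2:.2e} Msun, "
      f"f_gas = {f_gas:.2f}")
```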
NASA Astrophysics Data System (ADS)
Zeballos, M.; Hughes, D. H.; Aretxaga, I.; Wilson, G.
2011-10-01
We present an analysis of the number density and spatial distribution of the population of millimetre galaxies (MMGs) towards 17 high-z active galaxies using 1.1 mm observations taken with the AzTEC camera on the Atacama Submillimeter Telescope Experiment (ASTE) and the James Clerk Maxwell Telescope (JCMT). The sample allows us to study the properties of MMGs in protocluster environments and compare them to the population in blank (unbiased) fields. The goal is to identify if these biased environments are responsible for differences in the number and distribution of dust-obscured star-forming galaxies and whether these changes support the suggestion that MMGs are the progenitors of massive (elliptical) galaxies we see today in the centre of rich clusters.
Kononowicz, Andrzej A; Berman, Anne H; Stathakarou, Natalia; McGrath, Cormac; Bartyński, Tomasz; Nowakowski, Piotr; Malawski, Maciej; Zary, Nabil
2015-09-10
Massive open online courses (MOOCs) have been criticized for focusing on presentation of short video clip lectures and asking theoretical multiple-choice questions. A potential way of vitalizing these educational activities in the health sciences is to introduce virtual patients. Experiences from such extensions in MOOCs have not previously been reported in the literature. This study analyzes technical challenges and solutions for offering virtual patients in health-related MOOCs and describes patterns of virtual patient use in one such course. Our aims are to reduce the technical uncertainty related to these extensions, point to aspects that could be optimized for a better learner experience, and raise prospective research questions by describing indicators of virtual patient use on a massive scale. The Behavioral Medicine MOOC was offered by Karolinska Institutet, a medical university, on the EdX platform in the autumn of 2014. Course content was enhanced by two virtual patient scenarios presented in the OpenLabyrinth system and hosted on the VPH-Share cloud infrastructure. We analyzed web server and session logs and a participant satisfaction survey. Navigation pathways were summarized using a visual analytics tool developed for the purpose of this study. The number of course enrollments reached 19,236. At the official closing date, 2317 participants (12.1% of total enrollment) had declared completing the first virtual patient assignment and 1640 (8.5%) participants confirmed completion of the second virtual patient assignment. Peak activity involved 359 user sessions per day. The OpenLabyrinth system, deployed on four virtual servers, coped well with the workload. Participant survey respondents (n=479) regarded the activity as a helpful exercise in the course (83.1%). Technical challenges reported involved poor or restricted access to videos in certain areas of the world and occasional problems with lost sessions. The visual analyses of user pathways display the parts of virtual patient scenarios that elicited less interest and may have been perceived as nonchallenging options. Analyzing the user navigation pathways allowed us to detect indications of both surface and deep approaches to the content material among the MOOC participants. This study reported on the first inclusion of virtual patients in a MOOC. It adds to the body of knowledge by demonstrating how a biomedical cloud provider service can ensure technical capacity and flexible design of a virtual patient platform on a massive scale. The study also presents a new way of analyzing the use of branched virtual patients by visualization of user navigation pathways. Suggestions are offered on improvements to the design of virtual patients in MOOCs.
Number sense across the lifespan as revealed by a massive Internet-based sample
Halberda, Justin; Ly, Ryan; Wilmer, Jeremy B.; Naiman, Daniel Q.; Germine, Laura
2012-01-01
It has been difficult to determine how cognitive systems change over the grand time scale of an entire life, as few cognitive systems are well enough understood; observable in infants, adolescents, and adults; and simple enough to measure to empower comparisons across vastly different ages. Here we address this challenge with data from more than 10,000 participants ranging from 11 to 85 years of age and investigate the precision of basic numerical intuitions and their relation to students’ performance in school mathematics across the lifespan. We all share a foundational number sense that has been observed in adults, infants, and nonhuman animals, and that, in humans, is generated by neurons in the intraparietal sulcus. Individual differences in the precision of this evolutionarily ancient number sense may impact school mathematics performance in children; however, we know little of its role beyond childhood. Here we find that population trends suggest that the precision of one’s number sense improves throughout the school-age years, peaking quite late at ∼30 y. Despite this gradual developmental improvement, we find very large individual differences in number sense precision among people of the same age, and these differences relate to school mathematical performance throughout adolescence and the adult years. The large individual differences and prolonged development of number sense, paired with its consistent and specific link to mathematics ability across the age span, hold promise for the impact of educational interventions that target the number sense. PMID:22733748
NASA Astrophysics Data System (ADS)
Austermann, J. E.; Aretxaga, I.; Hughes, D. H.; Kang, Y.; Kim, S.; Lowenthal, J. D.; Perera, T. A.; Sanders, D. B.; Scott, K. S.; Scoville, N.; Wilson, G. W.; Yun, M. S.
2009-03-01
We report an overdensity of bright submillimetre galaxies (SMGs) in the 0.15 deg² AzTEC/COSMOS survey and a spatial correlation between the SMGs and the optical-IR galaxy density at z <~ 1.1. This portion of the COSMOS field shows a ~3σ overdensity of robust SMG detections when compared to a background, or `blank-field', population model that is consistent with SMG surveys of fields with no extragalactic bias. The SMG overdensity is most significant in the number of very bright detections (14 sources with measured fluxes S(1.1 mm) > 6 mJy), which is entirely incompatible with sample variance within our adopted blank-field number densities and implies an overdensity significance of ≫4σ. We find that the overdensity and spatial correlation to optical-IR galaxy density are most consistent with lensing of a background SMG population by foreground mass structures along the line of sight, rather than physical association of the SMGs with the z <~ 1.1 galaxies/clusters. The SMG positions are only weakly correlated with weak-lensing maps, suggesting that the dominant sources of correlation are individual galaxies and the more tenuous structures in the survey region, and not the massive and compact clusters. These results highlight the important roles cosmic variance and large-scale structure can play in the study of SMGs.
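As a toy version of this kind of significance estimate (the expectation value below is invented for illustration; the real analysis uses flux-dependent blank-field number counts and folds in sample variance), the Poisson tail probability of seeing 14 or more bright sources can be computed directly:

```python
from math import exp, factorial

n_obs, lam = 14, 3.0  # observed bright SMGs vs. a toy blank-field expectation

# Poisson tail probability P(N >= n_obs) given mean lam
p_tail = 1 - sum(exp(-lam) * lam**k / factorial(k) for k in range(n_obs))
print(f"P(N >= {n_obs} | lambda = {lam}) = {p_tail:.1e}")  # ~3e-06
```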
Prototype design for a predictive model to improve evacuation operations : technical report.
DOT National Transportation Integrated Search
2011-08-01
Mass evacuations of the Texas Gulf Coast remain a difficult challenge. These events are massive in scale, highly complex, and entail an intricate, ever-changing conglomeration of technical and jurisdictional issues. This project focused primarily...
An Overview of Mesoscale Modeling Software for Energetic Materials Research
2010-03-01
Table-of-contents excerpts: Section 2.9, Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS); Table 10, LAMMPS summary. Extensive reviews, lectures, and workshops are available on multiscale modeling of materials applications (76-78).
NASA Astrophysics Data System (ADS)
Commerçon, B.; Hennebelle, P.; Levrier, F.; Launhardt, R.; Henning, Th.
2012-03-01
I will present radiation-magneto-hydrodynamics calculations of low-mass and massive dense core collapse, focusing on the first collapse and the first hydrostatic core (first Larson core) formation. The influence of magnetic field and initial mass on the fragmentation properties will be investigated. In the first part, reporting low-mass dense core collapse calculations, synthetic observations of spectral energy distributions will be derived, as well as classical observational quantities such as bolometric temperature and luminosity. I will show how the dust continuum can help to identify first hydrostatic cores and to assess the nature of VeLLOs. Last, I will present synthetic ALMA observation predictions of first hydrostatic cores, which may give an answer, if not a definitive one, to the fragmentation issue at the early Class 0 stage. In the second part, I will report the results of radiation-magneto-hydrodynamics calculations in the context of high-mass star formation, using for the first time a self-consistent model for photon emission (i.e. via thermal emission and in radiative shocks) and with the high resolution necessary to resolve properly magnetic braking effects and radiative shocks on scales <100 AU (Commercon, Hennebelle & Henning ApJL 2011). In this study, we investigate the combined effects of magnetic field, turbulence, and radiative transfer on the early phases of the collapse and the fragmentation of massive dense cores (M = 100 M⊙). We identify a new mechanism that inhibits initial fragmentation of massive dense cores, where magnetic field and radiative transfer interplay. We show that this interplay becomes stronger as the magnetic field strength increases. We speculate that highly magnetized massive dense cores are good candidates for isolated massive star formation, while moderately magnetized massive dense cores are more appropriate to form OB associations or small star clusters. Finally, we will also present synthetic observations of these collapsing massive dense cores.
NASA Astrophysics Data System (ADS)
Botteon, A.; Shimwell, T. W.; Bonafede, A.; Dallacasa, D.; Brunetti, G.; Mandal, S.; van Weeren, R. J.; Brüggen, M.; Cassano, R.; de Gasperin, F.; Hoang, D. N.; Hoeft, M.; Röttgering, H. J. A.; Savini, F.; White, G. J.; Wilber, A.; Venturi, T.
2018-05-01
Radio halos and radio relics are diffuse synchrotron sources that extend over Mpc-scales and are found in a number of merger galaxy clusters. They are believed to form as a consequence of the energy that is dissipated by turbulence and shocks in the intra-cluster medium (ICM). However, the precise physical processes that generate these steep synchrotron spectrum sources are still poorly constrained. We present a new LOFAR observation of the double galaxy cluster Abell 1758. This system is composed of A1758N, a massive cluster hosting a known giant radio halo, and A1758S, which is a less massive cluster whose diffuse radio emission is confirmed here for the first time. Our observations have revealed a radio halo and a candidate radio relic in A1758S, and a suggestion of emission along the bridge connecting the two systems which deserves confirmation. We combined the LOFAR data with archival VLA and GMRT observations to constrain the spectral properties of the diffuse emission. We also analyzed a deep archival Chandra observation and used this to provide evidence that A1758N and A1758S are in a pre-merger phase. The ICM temperature across the bridge that connects the two systems shows a jump which might indicate the presence of a transversal shock generated in the initial stage of the merger.
Report on noninvasive prenatal testing: classical and alternative approaches.
Pantiukh, Kateryna S; Chekanov, Nikolay N; Zaigrin, Igor V; Zotov, Alexei M; Mazur, Alexander M; Prokhortchouk, Egor B
2016-01-01
Concerns about traditional prenatal aneuploidy testing methods, such as the low accuracy of noninvasive procedures and the health risks associated with invasive ones, were overcome with the introduction of novel genetics-based noninvasive prenatal testing (NIPT). These methods were rapidly adopted into clinical practice in many countries after a series of successful trials of various independent submethods. Here we present results of our own NIPT trial carried out in Moscow, Russia. 1012 samples were subjected to a method based on measuring chromosome coverage by massively parallel sequencing. Two alternative approaches are ascertained: one based on maternal/fetal differential methylation and another based on allelic difference. While the former failed to provide stable results, the latter was found to be promising and worthy of a large-scale trial. One critical point in any NIPT approach is the determination of the fetal cell-free DNA fraction, which dictates the reliability of the results obtained for a given sample. We show that two different chromosome Y representation measures, by real-time PCR and by whole-genome massively parallel sequencing, are practically interchangeable (r=0.94). We also propose a novel method based on maternal/fetal allelic difference which is applicable in pregnancies with fetuses of either sex. Even in its pilot form it correlates well with chromosome Y coverage estimates (r=0.74) and can be further improved by increasing the number of polymorphisms.
Interactions in Massive Colliding Wind Binaries
NASA Technical Reports Server (NTRS)
Corcoran, M.
2012-01-01
The most massive stars (M > 60 solar masses) play crucial roles in altering the chemical and thermodynamic properties of their host galaxies. Stellar mass is the fundamental stellar parameter that determines their ancillary properties and which ultimately determines the fate of these stars and their influence on their galactic environs. Unfortunately, stellar mass becomes observationally and theoretically less well constrained as it increases. Theory becomes uncertain mostly because very massive stars are prone to strong, variable mass loss which is difficult to model. Observational constraints are uncertain too. Massive stars are rare, and massive binary stars (needed for dynamical determination of mass) are rarer still; and of these systems only a fraction have suitably high orbital inclinations for direct photometric and spectroscopic radial-velocity analysis. Even in the small number of cases in which a high-inclination binary near the upper mass limit can be identified, rotational broadening and contamination of spectral line features from thick circumstellar material (either natal clouds or produced by strong stellar wind driven mass loss from one or both of the stellar components) biases the analysis. In the wilds of the upper HR diagram, we're often left with indirect and circumstantial means of determining mass, a rather unsatisfactory state of affairs.
Increasing the reach of forensic genetics with massively parallel sequencing.
Budowle, Bruce; Schmedes, Sarah E; Wendt, Frank R
2017-09-01
The field of forensic genetics has made great strides in the analysis of biological evidence related to criminal and civil matters. Moreover, the discipline has set a standard of performance and quality in the forensic sciences. The advent of massively parallel sequencing will allow the field to expand its capabilities substantially. This review describes the salient features of massively parallel sequencing and how it can impact forensic genetics. The features of this technology offer an increased number and greater variety of genetic markers that can be analyzed, higher throughput of samples, and the capability of targeting different organisms, all by one unifying methodology. While there are many applications, three are described where massively parallel sequencing will have immediate impact: molecular autopsy, microbial forensics and differentiation of monozygotic twins. The intent of this review is to expose the forensic science community to the potential enhancements that have or are soon to arrive and demonstrate the continued expansion of the field of forensic genetics and its service in the investigation of legal matters.
Gravitational Wave Signals from the First Massive Black Hole Seeds
NASA Astrophysics Data System (ADS)
Hartwig, Tilman; Agarwal, Bhaskar; Regan, John A.
2018-05-01
Recent numerical simulations reveal that the isothermal collapse of pristine gas in atomic cooling haloes may result in stellar binaries of supermassive stars with M* ≳ 10^4 M⊙. For the first time, we compute the in-situ merger rate for such massive black hole remnants by combining their abundance and multiplicity estimates. For black holes with initial masses in the range 10^4-10^6 M⊙ merging at redshifts z ≳ 15, our optimistic model predicts that LISA should be able to detect 0.6 mergers per year. This rate of detection can be attributed, without confusion, to the in-situ mergers of seeds from the collapse of very massive stars. Equally, in the case where LISA observes no mergers from heavy seeds at z ≳ 15 we can constrain the combined number density, multiplicity, and coalescence times of these high-redshift systems. This letter proposes gravitational wave signatures as a means to constrain theoretical models and processes that govern the abundance of massive black hole seeds in the early Universe.
Thought Leaders during Crises in Massive Social Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corley, Courtney D.; Farber, Robert M.; Reynolds, William
The vast amount of social media data that can be gathered from the internet, coupled with workflows that utilize both commodity systems and massively parallel supercomputers, such as the Cray XMT, opens new vistas for research to support health, defense, and national security. Computer technology now enables the analysis of graph structures containing more than 4 billion vertices joined by 34 billion edges, along with metrics and massively parallel algorithms that exhibit near-linear scalability with the number of processors. The challenge lies in making this massive data and analysis comprehensible to analysts and end-users who require actionable knowledge to carry out their duties. Simply stated, we have developed language and content agnostic techniques to reduce large graphs built from vast media corpora into forms people can understand. Specifically, our tools and metrics act as a survey tool to identify 'thought leaders' -- those members that lead or reflect the thoughts and opinions of an online community, independent of the source language.
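The abstract does not give the underlying metrics; as a minimal, hypothetical illustration of the idea (all names and edges below are invented), a crude 'thought leader' score can be computed as the number of distinct community members who reference an account:

```python
from collections import Counter

# directed edges (author, referenced_member), e.g. from replies or mentions
edges = {
    ("u1", "health_org"), ("u2", "health_org"), ("u3", "health_org"),
    ("u2", "u3"), ("u4", "skeptic42"), ("u5", "skeptic42"),
}

# in-degree over distinct edges: how many members reference this account
score = Counter(dst for _, dst in edges)

for member, s in score.most_common(3):
    print(member, s)   # health_org 3, skeptic42 2, u3 1
```

Systems at the scale described replace this counting with massively parallel graph metrics, but the ranking idea is the same.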
Colony size as a species character in massive reef corals
NASA Astrophysics Data System (ADS)
Soong, Keryea
1993-07-01
In a study of seven massive Caribbean corals, I have found major differences in reproductive behavior between species with large maximum colony sizes and species with smaller maximum colony sizes. Four species (Diploria clivosa, D. strigosa, Montastrea cavernosa, Siderastrea siderea) which attain large sizes (>1000 cm² in surface area) broadcast gametes during a short spawning season. Their puberty size is relatively large (>100 cm², except M. cavernosa). In contrast, two small massive species (<100 cm², Favia fragum and S. radians) and one medium-sized massive species (100-1000 cm², Porites astreoides) brood larvae during an extended season (year-round in Panama). The puberty size of the small species is only 2-4 cm². Given these close associations between maximum colony sizes and a number of fundamental reproductive attributes, greater attention should be given to the colony size distributions of different species of reef corals in nature, since many important life history and population characters may be inferred.
On the curious spectrum of duality invariant higher-derivative gravity
Hohm, Olaf; Naseer, Usman; Zwiebach, Barton
2016-08-31
Here, we analyze the spectrum of the exactly duality and gauge invariant higher-derivative double field theory. While this theory is based on a chiral CFT and does not correspond to a standard string theory, our analysis illuminates a number of issues central in string theory. The full quadratic action is rewritten as a two-derivative theory with additional fields. This allows for a simple analysis of the spectrum, which contains two massive spin-2 ghosts and massive scalars, in addition to the massless fields. Moreover, in this formulation, the massless or tensionless limit α' → ∞ is non-singular and leads to an enhanced gauge symmetry. We show that the massive modes can be integrated out exactly at the quadratic level, leading to an infinite series of higher-derivative corrections. Lastly, we present a ghost-free massive extension of linearized double field theory, which employs a novel mass term for the dilaton and metric.
Massive Binary Black Holes in the Cosmic Landscape
NASA Astrophysics Data System (ADS)
Colpi, Monica; Dotti, Massimo
2011-02-01
Binary black holes occupy a special place in our quest for understanding the evolution of galaxies along cosmic history. If massive black holes grow at the center of (pre-)galactic structures that experience a sequence of merger episodes, then dual black holes form as an inescapable outcome of galaxy assembly, and can in principle be detected as powerful dual quasars. But, if the black holes reach coalescence, during their inspiral inside the galaxy remnant, then they become the loudest sources of gravitational waves ever in the universe. The Laser Interferometer Space Antenna is being developed to reveal these waves that carry information on the mass and spin of these binary black holes out to very large look-back times. Nature seems to provide a pathway for the formation of these exotic binaries, and a number of key questions need to be addressed: How do massive black holes pair in a merger? Depending on the properties of the underlying galaxies, do black holes always form a close Keplerian binary? If a binary forms, does hardening proceed down to the domain controlled by gravitational wave back reaction? What is the role played by gas and/or stars in braking the black holes, and on which timescale does coalescence occur? Can the black holes accrete in flight and shine during their pathway to coalescence? After outlining key observational facts on dual/binary black holes, we review the progress made in tracing their dynamics in the habitat of a gas-rich merger down to the smallest scales ever probed with the help of powerful numerical simulations. N-Body/hydrodynamical codes have proven to be vital tools for studying their evolution, and progress in this field is expected to grow rapidly in the effort to describe, in full realism, the physics of stars and gas around the black holes, starting from the cosmological large scale of a merger. If detected in the new window provided by the upcoming gravitational wave experiments, binary black holes will provide a deep view into the process of hierarchical clustering which is at the heart of the current paradigm of galaxy formation. They will also be exquisite probes for testing General Relativity, as the theory of gravity. The waveforms emitted during the inspiral, coalescence and ring-down phase carry in their shape the sign of a dynamically evolving space-time and the proof of the existence of a horizon.
NASA Astrophysics Data System (ADS)
Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul
2017-03-01
We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24,440 basis functions and 91,280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.
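Of the three levels of parallelism described, the outermost is loosely coupled task parallelism over fragment calculations. A minimal Python/mpi4py sketch of that outer level only (not the authors' implementation; fragment_energy is a stand-in for an RI-MP2 fragment solve):

```python
# minimal sketch of loosely coupled task parallelism over fragments;
# each rank could further parallelize its solves with OpenMP/OpenACC
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n_fragments = 40                       # e.g. one task per AAT monomer

def fragment_energy(i):                # placeholder fragment solve
    return -0.1 * i

# static round-robin assignment of independent fragment tasks to ranks
local = sum(fragment_energy(i) for i in range(n_fragments) if i % size == rank)

total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("total correlation energy (toy):", total)
```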
NASA Astrophysics Data System (ADS)
Yu, C. W.; Hodges, B. R.; Liu, F.
2017-12-01
Development of continental-scale river network models creates challenges where the massive amount of boundary condition data encounters the sensitivity of a dynamic numerical model. The topographic data sets used to define the river channel characteristics may include either corrupt data or complex configurations that cause instabilities in a numerical solution of the Saint-Venant equations. For local-scale river models (e.g. HEC-RAS), modelers typically rely on past experience to make ad hoc boundary condition adjustments that ensure a stable solution - the proof of the adjustment is merely the stability of the solution. To date, there do not exist any formal methodologies or automated procedures for a priori detecting/fixing boundary conditions that cause instabilities in a dynamic model. Formal methodologies for data screening and adjustment are a critical need for simulations with a large number of river reaches that draw their boundary condition data from a wide variety of sources. At the continental scale, we simply cannot assume that we will have access to river-channel cross-section data that has been adequately analyzed and processed. Herein, we argue that problematic boundary condition data for unsteady dynamic modeling can be identified through numerical modeling with the steady-state Saint-Venant equations. The fragility of numerical stability increases with the complexity of branching in the river network system, and instabilities (even in an unsteady solution) are typically triggered by the nonlinear advection term in the Saint-Venant equations. It follows that the behavior of the simpler steady-state equations (which retain the nonlinear term) can be used to screen the boundary condition data for problematic regions. In this research, we propose a graph-theory based method to isolate the location of corrupted boundary condition data in a continental-scale river network and demonstrate its utility with a network of O(10^4) elements. Acknowledgement: This research is supported by the National Science Foundation under grant number CCF-1331610.
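The abstract leaves the method's details to the paper; the sketch below illustrates only the screening idea under stated assumptions (toy topology, invented data, and steady_state_ok standing in for a steady-state Saint-Venant solve):

```python
# hypothetical sketch: localize problematic boundary-condition data by
# checking each reach with a cheap steady-state test, then flagging
# everything downstream, since instabilities propagate through junctions
network = {"r1": "r3", "r2": "r3", "r3": "r4", "r4": None}  # reach -> downstream
channel_area = {"r1": 12.0, "r2": -3.0, "r3": 9.5, "r4": 8.1}  # r2 is corrupt

def steady_state_ok(reach):
    # stand-in for a steady Saint-Venant solve: reject impossible data
    return channel_area[reach] > 0

suspect = {r for r in network if not steady_state_ok(r)}

flagged = set(suspect)
for r in suspect:                      # walk downstream from each suspect reach
    d = network[r]
    while d is not None and d not in flagged:
        flagged.add(d)
        d = network[d]

print(sorted(flagged))                 # ['r2', 'r3', 'r4']
```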
Laboratory experiments on liquid fragmentation during Earth's core formation
NASA Astrophysics Data System (ADS)
Landeau, M.; Deguen, R.; Olson, P.
2013-12-01
Buoyancy-driven fragmentation of one liquid in another immiscible liquid likely occurred on a massive scale during the formation of the Earth, when dense liquid metal blobs were released within deep molten silicate magma oceans. Another example of this phenomenon is the sudden release of petroleum into the ocean during the Deepwater Horizon disaster (Gulf of Mexico, 2010). We present experiments on the instability and fragmentation of blobs of a heavy liquid released into a lighter immiscible liquid. During the fragmentation process, we observe deformation of the released fluid, formation of filamentary structures, capillary instability, and eventually drop formation. We find that, at low and intermediate Weber numbers (which measures the importance of inertia versus surface tension), the fragmentation regime mainly results from the competition between a Rayleigh-Taylor instability and the roll-up of a vortex ring. At sufficiently high Weber numbers (the relevant regime for core formation), the fragmentation process becomes turbulent. The large-scale flow then behaves as a turbulent vortex ring or a turbulent thermal: it forms a coherent structure whose shape remains self-similar during the fall and which grows by turbulent entrainment of ambient fluid. An integral model based on the entrainment assumption, and adapted to buoyant vortex rings with initial momentum, is consistent with our experimental data. This indicates that the concept of turbulent entrainment is valid for non-dispersed immiscible fluids at large Weber and Reynolds numbers. (Figure caption: series of photographs of the turbulent fragmentation regime at time intervals of about 0.2 s; portions marked by red boxes are magnified at right.)
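For reference, the Weber number invoked above has the standard definition

$$ \mathrm{We} = \frac{\rho\, U^{2} L}{\sigma}, $$

with ρ a fluid density, U the fall speed, L the blob diameter, and σ the interfacial tension (the exact choice of scales varies by study); inertia dominates surface tension when We ≫ 1, the regime argued here to be relevant for core formation.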
NASA Astrophysics Data System (ADS)
Massey, Philip
2000-08-01
We are proposing to survey M 31 for Wolf-Rayet stars (WRs) and red supergiants (RSGs), providing much needed information about how massive stars evolve at greater-than-solar metallicities. Our understanding of massive star evolution is hampered by the effects of mass-loss on these stars; at higher metallicities mass-loss effects become ever more pronounced. Our previous work on other Local Group galaxies (Massey & Johnson 1998) has shown that the ratio of RSGs to WRs correlates well with metallicity, changing by a factor of 6 from NGC 6822 (log O/H+12=8.3) to the inner parts of M 33 (8.7). Our study of five small regions in M 31 suggests that above this value the ratio of RSGs to WRs doesn't change: does this mean that no massive star that becomes a WR spends any time as an RSG at above-solar metallicities? We fear instead that our sample (selected, after all, for containing WR stars) was not sufficiently well-mixed in age to provide useful global values; the study we propose here will survey all of M 31. Detection of WRs will not only provide fundamental data on massive star evolution, but the WRs will also act as tracers of the most massive stars and improve our knowledge of recent star formation in the Andromeda Galaxy.
Energy efficiency and allometry of movement of swimming and flying animals.
Bale, Rahul; Hao, Max; Bhalla, Amneet Pal Singh; Patankar, Neelesh A
2014-05-27
Which animals use their energy better during movement? One metric to answer this question is the energy cost per unit distance per unit weight. Prior data show that this metric decreases with mass, which is considered to imply that massive animals are more efficient. Although useful, this metric also implies that two dynamically equivalent animals of different sizes will not be considered equally efficient. We resolve this longstanding issue by first determining the scaling of energy cost per unit distance traveled. The scale is found to be M^(2/3) or M^(1/2), where M is the animal mass. Second, we introduce an energy-consumption coefficient (C_E) defined as energy per unit distance traveled divided by this scale. C_E is a measure of efficiency of swimming and flying, analogous to how drag coefficient quantifies aerodynamic drag on vehicles. Derivation of the energy-cost scale reveals that the assumption that undulatory swimmers spend energy to overcome drag in the direction of swimming is inappropriate. We derive allometric scalings that capture trends in data of swimming and flying animals over 10-20 orders of magnitude by mass. The energy-consumption coefficient reveals that swimmers beyond a critical mass, and most fliers are almost equally efficient as if they are dynamically equivalent; increasingly massive animals are not more efficient according to the proposed metric. Distinct allometric scalings are discovered for large and small swimmers. Flying animals are found to require relatively more energy compared with swimmers.
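A toy numerical illustration of the proposed metric (all numbers invented; the dimensional prefactor of the scale is set to 1, and the M^(2/3) exponent is the swimmers' case):

```python
def consumption_coefficient(energy_per_distance, mass, exponent=2/3, k=1.0):
    """C_E = (energy per unit distance) / (k * M**exponent); k is a
    dimensional prefactor left at 1 for this toy comparison."""
    return energy_per_distance / (k * mass ** exponent)

# two dynamically similar swimmers: the larger one spends far more energy
# per metre in absolute terms, yet has the same C_E (equal efficiency)
small = consumption_coefficient(energy_per_distance=2.0, mass=1.0)
large = consumption_coefficient(energy_per_distance=200.0, mass=1000.0)
print(small, large)   # 2.0 2.0
```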
Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary
2015-01-01
Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis. PMID:25806784
NASA Astrophysics Data System (ADS)
Dann, J. C.
2007-12-01
A challenge of Archean volcanology is to reconstruct submarine flow fields by mapping and analyzing vertically dipping sequences of lavas. Some flow fields are bound by sediments and/or seafloor alteration that mark clear gaps in volcanism. Flow fields in the Lower Komati Fm are defined by alternating layers of komatiite (26% MgO) and komatiitic basalt (15% MgO). Five komatiite flow fields (100-200m thick) repeat the same stratigraphic zoning of spinifex overlying massive komatiite, and each flow field has a distinct Al2O3/CaO, a ratio unaffected by olivine fractionation, consistent with the contention that each komatiite flow field represents a distinct batch of mantle melting. Although massive and spinifex komatiite form distinct stratigraphic units on a map scale, detailed outcrop mapping reveals that the change in flow type represents a transition within a single flow field. In one type of transition, thin massive flows alternate with spinifex flow lobes of a compound flow unit. In another, a vesicular flow along the boundary links the underlying massive komatiite and overlying spinifex flows in time. The vesicular flow has alternating spinifex and vesicular layers that form a distinctive crust above a thick massive interior. Locally, this crust is tilted, intruded by massive komatiite from the interior, and overlain by a thick breccia including a spinifex flow broken into blocks and rotated like dominoes by the tilting. These outcrop relations indicate that spinifex flow lobes were starting to flow over the vesicular flow before it had undergone differential inflation, a temporal link between the lower massive and upper spinifex komatiites consistent with their belonging to the same flow field. The transition in flow type may reflect 1) an overlap of proximal and distal facies of komatiite flows as eruption rates waned and/or 2) thermal maturation prior to eruption. Early, cooler, crystal-rich, massive lava, flowing out as thick sheet flows, was replaced by hotter, crystal-poor, less degassed lava, flowing out as spinifex flows.
Standard Model Background of the Cosmological Collider.
Chen, Xingang; Wang, Yi; Xianyu, Zhong-Zhi
2017-06-30
The inflationary universe can be viewed as a "cosmological collider" with an energy of the Hubble scale, producing very massive particles and recording their characteristic signals in primordial non-Gaussianities. To utilize this collider to explore any new physics at very high scales, it is a prerequisite to understand the background signals from the particle physics standard model. In this Letter we describe the standard model background of the cosmological collider.
A Liver-centric Multiscale Modeling Framework for Xenobiotics
We describe a multi-scale framework for modeling acetaminophen-induced liver toxicity. Acetaminophen is a widely used analgesic. Overdose of acetaminophen can result in liver injury via its biotransformation into a toxic product, which further induces massive necrosis. Our study foc...
Mycotoxins: A fungal genomics perspective
USDA-ARS?s Scientific Manuscript database
The chemical and enzymatic diversity in the fungal kingdom is staggering. Large-scale fungal genome sequencing projects are generating a massive catalog of secondary metabolite biosynthetic genes and pathways. Fungal natural products are a boon and bane to man as valuable pharmaceuticals and harmful...
Constructing Neuronal Network Models in Massively Parallel Environments.
Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus
2017-01-01
Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.
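The loop-order point can be pictured with a toy construction loop: each parallel worker walks the full neuron list but performs the cheap locality test before any allocation, so non-local neurons cost one modulo operation rather than an object creation. A hypothetical Python sketch (production simulators do this in C++ across MPI processes and threads):

```python
# toy sketch of locality-aware network construction
def build_local_neurons(n_neurons, n_workers, worker_id):
    local = {}
    for gid in range(n_neurons):
        if gid % n_workers != worker_id:   # cheap ownership test first
            continue                       # skip non-local neurons early
        local[gid] = {"V_m": -70.0}        # only now pay for allocation
    return local

# 4 workers partition 10,000 neurons round-robin
parts = [build_local_neurons(10_000, 4, w) for w in range(4)]
print([len(p) for p in parts])             # [2500, 2500, 2500, 2500]
```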
A hot compact dust disk around a massive young stellar object.
Kraus, Stefan; Hofmann, Karl-Heinz; Menten, Karl M; Schertl, Dieter; Weigelt, Gerd; Wyrowski, Friedrich; Meilland, Anthony; Perraut, Karine; Petrov, Romain; Robbe-Dubois, Sylvie; Schilke, Peter; Testi, Leonardo
2010-07-15
Circumstellar disks are an essential ingredient of the formation of low-mass stars. It is unclear, however, whether the accretion-disk paradigm can also account for the formation of stars more massive than about 10 solar masses, in which strong radiation pressure might halt mass infall. Massive stars may form by stellar merging, although more recent theoretical investigations suggest that the radiative-pressure limit may be overcome by considering more complex, non-spherical infall geometries. Clear observational evidence, such as the detection of compact dusty disks around massive young stellar objects, is needed to identify unambiguously the formation mode of the most massive stars. Here we report near-infrared interferometric observations that spatially resolve the astronomical-unit-scale distribution of hot material around a high-mass (approximately 20 solar masses) young stellar object. The image shows an elongated structure with a size of approximately 13 × 19 astronomical units, consistent with a disk seen at an inclination angle of approximately 45 degrees. Using geometric and detailed physical models, we found a radial temperature gradient in the disk, with a dust-free region less than 9.5 astronomical units from the star, qualitatively and quantitatively similar to the disks observed in low-mass star formation. Perpendicular to the disk plane we observed a molecular outflow and two bow shocks, indicating that a bipolar outflow emanates from the inner regions of the system.
High pressure hydriding of sponge-Zr in steam-hydrogen mixtures
NASA Astrophysics Data System (ADS)
Soo Kim, Yeon; Wang, Wei-E.; Olander, D. R.; Yagnik, S. K.
1997-07-01
Hydriding kinetics of thin sponge-Zr layers metallurgically bonded to a Zircaloy disk have been studied by thermogravimetry in the temperature range 350-400°C in 7 MPa hydrogen-steam mixtures. Some specimens were prefilmed with a thin oxide layer prior to exposure to the reactant gas; all were coated with a thin layer of gold to avoid premature reaction at edges. Two types of hydriding were observed in prefilmed specimens, viz., a slow hydrogen absorption process that precedes an accelerated (massive) hydriding. At 7 MPa total pressure, the critical ratio of H2/H2O above which massive hydriding occurs at 400°C is ~200. The critical H2/H2O ratio is shifted to ~2.5 × 10^3 at 350°C. The slow hydriding process occurs only when conditions for hydriding and oxidation are approximately equally favorable. Based on maximum weight gain, the specimen is completely converted to δ-ZrH2 by massive hydriding in ~5 h at a hydriding rate of ~10^-6 mol H/cm^2 s. Incubation times of 10-20 h prior to the onset of massive hydriding increase with prefilm oxide thickness in the range of 0-10 μm. By changing to a steam-enriched gas, massive hydriding that initially started in a steam-starved condition was arrested by re-formation of a protective oxide scale.
Topical perspective on massive threading and parallelism.
Farber, Robert M
2011-09-01
Unquestionably computer architectures have undergone a recent and noteworthy paradigm shift that now delivers multi- and many-core systems with tens to many thousands of concurrent hardware processing elements per workstation or supercomputer node. GPGPU (General Purpose Graphics Processor Unit) technology in particular has attracted significant attention as new software development capabilities, namely CUDA (Compute Unified Device Architecture) and OpenCL™, have made it possible for students as well as small and large research organizations to achieve excellent speedup for many applications over more conventional computing architectures. The current scientific literature reflects this shift with numerous examples of GPGPU applications that have achieved one, two, and in some special cases three orders of magnitude increased computational performance through the use of massive threading to exploit parallelism. Multi-core architectures are also evolving quickly to exploit both massive threading and massive parallelism, as on the 1.3-million-thread Blue Waters supercomputer. The challenge confronting scientists in planning future experimental and theoretical research efforts -- be they individual efforts with one computer or collaborative efforts proposing to use the largest supercomputers in the world -- is how to capitalize on these new massively threaded computational architectures, especially as not all computational problems will scale to massive parallelism. In particular, the costs associated with restructuring software (and potentially redesigning algorithms) to exploit the parallelism of these multi- and many-threaded machines must be considered along with application scalability and lifespan. This perspective is an overview of the current state of threading and parallelism, with some insight into the future. Published by Elsevier Inc.
Barnes, S.-J.; Zientek, M.L.; Severson, M.J.
1997-01-01
The tectonic setting of intraplate magmas, typically a plume intersecting a rift, is ideal for the development of Ni - Cu - platinum-group element-bearing sulphides. The plume transports metal-rich magmas close to the mantle - crust boundary. The interaction of the rift and plume permits rapid transport of the magma into the crust, thus ensuring that no sulphides are lost from the magma en route to the crust. The rift may contain sediments which could provide the sulphur necessary to bring about sulphide saturation in the magmas. The plume provides large volumes of mafic magma; thus any sulphides that form can collect metals from a large volume of magma and consequently the sulphides will be metal rich. The large volume of magma provides sufficient heat to release large quantities of S from the crust, thus providing sufficient S to form a large sulphide deposit. The composition of the sulphides varies on a number of scales: (i) there is a variation between geographic areas, in which sulphides from the Noril'sk - Talnakh area are the richest in metals and those from the Muskox intrusion are poorest in metals; (ii) there is a variation between textural types of sulphides, in which disseminated sulphides are generally richer in metals than the associated massive and matrix sulphides; and (iii) the massive and matrix sulphides show a much wider range of compositions than the disseminated sulphides, and on the basis of their Ni/Cu ratio the massive and matrix sulphides can be divided into Cu rich and Fe rich. The Cu-rich sulphides are also enriched in Pt, Pd, and Au; in contrast, the Fe-rich sulphides are enriched in Fe, Os, Ir, Ru, and Rh. Nickel concentrations are similar in both. Differences in the composition between the sulphides from different areas may be attributed to a combination of differences in composition of the silicate magma from which the sulphides segregated and differences in the ratio of silicate to sulphide liquid (R factors). The higher metal content of the disseminated sulphides relative to the massive and matrix sulphides may be due to the fact that the disseminated sulphides equilibrated with a larger volume of magma than massive and matrix sulphides. The difference in composition between the Cu- and Fe-rich sulphides may be the result of the fractional crystallization of monosulphide solid solution from a sulphide liquid, with the Cu-rich sulphides representing the liquid and the Fe-rich sulphides representing the cumulate.
Finding Cardinality Heavy-Hitters in Massive Traffic Data and Its Application to Anomaly Detection
NASA Astrophysics Data System (ADS)
Ishibashi, Keisuke; Mori, Tatsuya; Kawahara, Ryoichi; Hirokawa, Yutaka; Kobayashi, Atsushi; Yamamoto, Kimihiro; Sakamoto, Hitoaki; Asano, Shoichiro
We propose an algorithm for finding heavy hitters in terms of cardinality (the number of distinct items in a set) in massive traffic data using a small amount of memory. Examples of such cardinality heavy-hitters are hosts that send large numbers of flows, or hosts that communicate with large numbers of other hosts. Finding these hosts is crucial to the provision of good communication quality because they significantly affect the communications of other hosts via either malicious activities such as worm scans, spam distribution, or botnet control or normal activities such as being a member of a flash crowd or performing peer-to-peer (P2P) communication. To precisely determine the cardinality of a host we need tables of previously seen items for each host (e.g., flow tables for every host) and this may be infeasible for a high-speed environment with a massive amount of traffic. In this paper, we use a cardinality estimation algorithm that does not require these tables but needs only a small amount of information, called the cardinality summary. This is made possible by relaxing the goal from exact counting to estimation of cardinality. In addition, we propose an algorithm that does not need to maintain the cardinality summary for each host, but only for partitioned addresses of a host. As a result, the required number of tables can be significantly decreased. We evaluated our algorithm using actual backbone traffic data to find the heavy-hitters in the number of flows and estimate the number of these flows. We found that while the accuracy degraded when estimating for hosts with few flows, the algorithm could accurately find the top-100 hosts in terms of the number of flows using a limited-sized memory. In addition, we found that the number of tables required to achieve a pre-defined accuracy increased logarithmically with respect to the total number of hosts, which indicates that our method is applicable to large traffic data with a very large number of hosts. We also introduce an application of our algorithm to anomaly detection. With actual traffic data, our method could successfully detect a sudden network scan.
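The paper's exact summary structure is not reproduced here; the following is a minimal sketch of the general idea (a HyperLogLog-style register array per host) showing how distinct-peer counts can be estimated without keeping per-host flow tables; all traffic below is synthetic:

```python
import hashlib

M = 64  # registers per summary (a power of two)

def h64(x: str) -> int:
    return int.from_bytes(hashlib.sha1(x.encode()).digest()[:8], "big")

class CardinalitySummary:
    """HyperLogLog-style sketch: estimates the number of distinct items
    seen using M small registers, without storing the items themselves."""
    def __init__(self):
        self.reg = [0] * M

    def add(self, item: str):
        h = h64(item)
        j, w = h % M, h // M          # register index, remaining bits
        rho = 1                       # 1 + number of trailing zero bits
        while w % 2 == 0 and rho < 58:
            w //= 2
            rho += 1
        self.reg[j] = max(self.reg[j], rho)

    def estimate(self) -> float:
        alpha = 0.709                 # bias-correction constant for M = 64
        return alpha * M * M / sum(2.0 ** -r for r in self.reg)

# one summary per source host: a sketch of its set of distinct peers
summaries = {}
for src, dst in (("10.0.0.1", f"peer{i}") for i in range(1000)):
    summaries.setdefault(src, CardinalitySummary()).add(dst)

top = max(summaries, key=lambda s: summaries[s].estimate())
print(top, round(summaries[top].estimate()))  # roughly 1000, within ~15%
```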
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okumura, Teppei; Seljak, Uroš; Desjacques, Vincent, E-mail: teppei@ewha.ac.kr, E-mail: useljak@berkeley.edu, E-mail: dvince@physik.uzh.ch
It was recently shown that the power spectrum in redshift space can be written as a sum of cross-power spectra between number-weighted velocity moments, of which the lowest are density and momentum density. We investigate numerically the properties of these power spectra for simulated galaxies and dark matter halos and compare them to the dark matter power spectra, generalizing the concept of the bias in density-density power spectra. Because all of the quantities are number weighted this approach is well defined even for sparse systems such as massive halos. This contrasts with previous approaches to RSD where velocity correlations have been explored, but velocity field is a poorly defined concept for sparse systems. We find that the number density weighting leads to a strong scale dependence of the bias terms for momentum density auto-correlation and cross-correlation with density. This trend becomes more significant for the more biased halos and leads to an enhancement of RSD power relative to the linear theory. Fingers-of-god effects, which in this formalism come from the correlations of the higher order moments beyond the momentum density, lead to smoothing of the power spectrum and can reduce this enhancement of power from the scale dependent bias, but are relatively small for halos with no small scale velocity dispersion. In comparison, for a more realistic galaxy sample with satellites the small scale velocity dispersion generated by satellite motions inside the halos leads to a larger power suppression on small scales, but this depends on the satellite fraction and on the details of how the satellites are distributed inside the halo. We investigate several statistics such as the two-dimensional power spectrum P(k,μ), where μ is the cosine of the angle between the Fourier mode and the line of sight, its multipole moments, its powers of μ^2, and configuration space statistics. Overall we find that the nonlinear effects in realistic galaxy samples such as luminous red galaxies affect the redshift space clustering on very large scales: for example, the quadrupole moment is affected by 10% for k < 0.1 h Mpc^-1, which means that these effects need to be understood if we want to extract cosmological information from the redshift space distortions.
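Schematically, the moment expansion referred to above reads, to lowest orders (conventions and prefactors vary between papers; this form is illustrative only):

$$ P^{ss}(k,\mu) = P_{00}(k) \;+\; 2\,\frac{k\mu}{aH}\,P_{01}(k) \;+\; \left(\frac{k\mu}{aH}\right)^{2}\big[P_{11}(k) + P_{02}(k)\big] \;+\; \cdots $$

where P_00 is the density auto-spectrum, P_01 the density-momentum cross-spectrum, P_11 the momentum auto-spectrum, and P_02 the cross-spectrum of density with the second velocity moment; the fingers-of-god suppression discussed above enters through the higher moments.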
Massive stars, disks, and clustered star formation
NASA Astrophysics Data System (ADS)
Moeckel, Nickolas Barry
The formation of an isolated massive star is inherently more complex than the relatively well-understood collapse of an isolated, low-mass star. The dense, clustered environment where massive stars are predominantly found further complicates the picture and suggests that interactions with other stars may play an important role in the early life of these objects. In this thesis we present the results of numerical hydrodynamic experiments investigating interactions between a massive protostar and its lower-mass cluster siblings. We explore the impact of these interactions on the orientation of disks and outflows, which are potentially observable indications of encounters during the formation of a star. We show that these encounters efficiently form eccentric binary systems, and in clusters similar to Orion they occur frequently enough to contribute to the high multiplicity of massive stars. We suggest that the massive protostar in Cepheus A is currently undergoing a series of interactions, and present simulations tailored to that system. We also apply the numerical techniques used in the massive star investigations to a much lower-mass regime, the formation of planetary systems around Solar-mass stars. We perform a small number of illustrative planet-planet scattering experiments, which have been used to explain the eccentricity distribution of extrasolar planets. We add the complication of a remnant gas disk and show that this feature has the potential to stabilize the system against strong encounters between planets. We present preliminary simulations of Bondi-Hoyle accretion onto a protoplanetary disk, and consider the impact of the flow on the disk properties as well as the impact of the disk on the accretion flow.
Smith, Bruce D.; Tippens, C.L.; Flanigan, V.J.; Sadek, Hamdy
1983-01-01
Laboratory spectral induced polarization (SIP) measurements on 29 carbonaceous schist samples from the Wadi Bidah district show that most are associated with very long polarization decays or, equivalently, large time constants. In contrast, measurements on two massive sulfide samples indicate shorter polarization decays or smaller time constants. This difference in time constants for the polarization process results in two differences in the phase spectra in the frequency range from 0.06 to 1 Hz. First, phase values of carbonaceous rocks generally decrease as a function of increasing frequency. Second, phase values of massive sulfide-bearing rocks increase as a function of increasing frequency. These results from laboratory measurements agree well with other reported SIP measurements on graphites and massive sulfides from the Canadian Shield. Four SIP lines, measured using a 50-m dipole-dipole array, were surveyed at the Rabathan 4 prospect to test how well the results of laboratory sample measurements can be applied to larger-scale field measurements. Along one line, located entirely over carbonaceous schists, the phase values decreased as a function of increasing frequency. Along a second line, located over both massive sulfides and carbonaceous schists as defined by drilling, the phase values measured over carbonaceous schists decreased as a function of increasing frequency, whereas those measured over massive sulfides increased. In addition, parts of two lines were surveyed down the axes of the massive sulfide and carbonaceous units. The phase values along these lines showed similar differences between the carbonaceous schists and massive sulfides. To date, the SIP survey and the SIP laboratory measurements have produced the only geophysical data that indicate an electrical difference between the massive sulfide-bearing rocks and the surrounding carbonaceous rocks in the Wadi Bidah district. However, additional sample and field measurements in areas of known mineralization would be needed to fully evaluate the SIP method as applied to various geologic environments and styles of massive sulfide mineralization. Additionally, the efficiency of SIP surveys in delineating areas of sulfide mineralization might be improved by surveying lines down the axes of known electrical conductors. An evaluation of the applied research done on the SIP method to date suggests that this technique offers significant applications to massive sulfide exploration in the Kingdom of Saudi Arabia.
Spontaneous Breaking of Scale Invariance in U(N) Chern-Simons Gauge Theories in Three Dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardeen, William A.
2015-09-24
I explore the existence of a massive phase in conformally invariant U(N) Chern-Simons gauge theories in D = 3 with matter fields in the fundamental representation. These models have attracted recent attention as being dual, in the conformal phase, to theories of higher-spin gravity on AdS4. Using the 't Hooft large-N expansion, exact solutions are obtained for scalar current correlators in the massive phase, where the conformal symmetry is spontaneously broken. A massless dilaton appears as a composite state, and its properties are discussed. Solutions exist for matter fields that are either bosons or fermions.
Utopian dream: a new farm bill.
Nestle, Marion
2012-01-01
In the fall of 2011, I taught a graduate food studies course at New York University devoted to the farm bill, a massive and massively opaque piece of legislation passed most recently in 2008 and up for renewal in 2012. The farm bill supports farmers, of course, but also specifies how the United States deals with such matters as conservation, forestry, energy policy, organic food production, international food aid, and domestic food assistance. My students came from programs in nutrition, food studies, public health, public policy, and law, all united in the belief that a smaller scale, more regionalized, and more sustainable food system would be healthier for people and the planet.
QCD corrections to massive color-octet vector boson pair production
NASA Astrophysics Data System (ADS)
Freitas, Ayres; Wiegand, Daniel
2017-09-01
This paper describes the calculation of the next-to-leading order (NLO) QCD corrections to massive color-octet vector boson pair production at hadron colliders. As a concrete framework, a two-site coloron model with an internal parity is chosen, which can be regarded as an effective low-energy approximation of Kaluza-Klein gluon physics in universal extra dimensions. The renormalization procedure involves several subtleties, which are discussed in detail. The impact of the NLO corrections is relatively modest, amounting to a reduction of 11-14% in the total cross-section, but they significantly reduce the scale dependence of the LO result.
Unsupervised Learning Through Randomized Algorithms for High-Volume High-Velocity Data (ULTRA-HV).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinar, Ali; Kolda, Tamara G.; Carlberg, Kevin Thomas
Through long-term investments in computing, algorithms, facilities, and instrumentation, DOE is an established leader in massive-scale, high-fidelity simulations, as well as science-leading experimentation. In both cases, DOE is generating more data than it can analyze, and the problem is intensifying quickly. The need for advanced algorithms that can automatically convert the abundance of data into a wealth of useful information by discovering hidden structures is well recognized. Such efforts, however, are hindered by the massive volume of the data and its high velocity. Here, the challenge is developing unsupervised learning methods to discover hidden structure in high-volume, high-velocity data.
NASA Astrophysics Data System (ADS)
Benini, Luca
2017-06-01
The "internet of everything" envisions trillions of connected objects loaded with high-bandwidth sensors requiring massive amounts of local signal processing, fusion, pattern extraction and classification. From the computational viewpoint, the challenge is formidable and can be addressed only by pushing computing fabrics toward massive parallelism and brain-like energy efficiency levels. CMOS technology can still take us a long way toward this goal, but technology scaling is losing steam. Energy efficiency improvement will increasingly hinge on architecture, circuits, design techniques such as heterogeneous 3D integration, mixed-signal preprocessing, event-based approximate computing and non-Von-Neumann architectures for scalable acceleration.
Achab, Sophia; Nicolier, Magali; Mauny, Frédéric; Monnin, Julie; Trojak, Benoit; Vandel, Pierre; Sechter, Daniel; Gorwood, Philip; Haffen, Emmanuel
2011-08-26
Massively Multiplayer Online Role-Playing Games (MMORPGs) are a very popular and enjoyable leisure activity, and there is a lack of internationally validated instruments to assess excessive gaming. With the growing number of gamers worldwide, adverse effects (isolation, hospitalizations, excessive use, etc.) are observed in a minority of gamers, which is a concern for society and for the scientific community. In the present study, we focused on screening gamers at potential risk of MMORPG addiction. In this exploratory study, we examined characteristics, online habits, and problematic overuse in adult MMORPG gamers. In addition to sociodemographic data and gamer behavioral patterns, 3 different instruments for screening addiction were used in French MMORPG gamers recruited online over 10 consecutive months: the substance dependence criteria of the Diagnostic and Statistical Manual of Mental Disorders, fourth edition, text revision (DSM-IV-TR), adapted for MMORPGs (DAS), the qualitative Goldberg Internet Addiction Disorder scale (GIAD), and the quantitative Orman Internet Stress Scale (ISS). For all scales, a score above a specific threshold defined positivity. The 448 participating adult gamers were mainly young adult university graduates living alone in urban areas. Participants showed high rates of both Internet addiction (44.2% for GIAD, 32.6% for ISS) and DAS positivity (27.5%). Compared to the DAS-negative group, DAS-positive gamers reported significantly higher rates of the tolerance phenomenon (an increased amount of time in online gaming to obtain the desired effect) and declared significantly more social, financial (OR: 4.85), marital (OR: 4.61), family (OR: 4.69), and/or professional difficulties (OR: 4.42) since they started online gaming. Furthermore, these gamers self-reported significantly higher rates (3 times more) of irritability, daytime sleepiness, sleep deprivation due to play, low mood, and emotional changes since online gaming onset. The DAS appeared to be a good first-line instrument to screen for MMORPG addiction in online gamers. This study found high MMORPG addiction rates and self-reported adverse symptoms in important aspects of life, including mood and sleep. This confirms the need to set up relevant prevention programs against online game overuse.
NASA Technical Reports Server (NTRS)
Cen, Renyue
1994-01-01
The mass and velocity distributions in the outskirts (0.5-3.0/h Mpc) of simulated clusters of galaxies are examined for a suite of cosmogonic models (two Ω_0 = 1 and two Ω_0 = 0.2 models) utilizing large-scale particle-mesh (PM) simulations. Through a series of model computations, designed to isolate the different effects, we find that both Ω_0 and P_k (λ ≤ 16/h Mpc) are important to the mass distributions in clusters of galaxies. There is a correlation between power, P_k, and the density profiles of massive clusters; more power tends toward a stronger correlation between α and M(r < 1.5/h Mpc), i.e., massive clusters being relatively extended and small-mass clusters being relatively concentrated. A lower-Ω_0 universe tends to produce relatively concentrated massive clusters and relatively extended small-mass clusters compared to their counterparts in a higher-Ω_0 model with the same power. Models with little (initial) small-scale power, such as the hot dark matter (HDM) model, produce more extended mass distributions than the isothermal distribution for most of the mass clusters, but the cold dark matter (CDM) models show mass distributions of most of the clusters more concentrated than the isothermal distribution. X-ray and gravitational lensing observations are beginning to provide useful information on the mass distribution in and around clusters; some interesting constraints on Ω_0 and/or the (initial) power of the density fluctuations on scales λ ≤ 16/h Mpc (where linear extrapolation is invalid) can be obtained when larger observational data sets, such as the Sloan Digital Sky Survey, become available.
Cosmic string with a light massive neutrino
NASA Technical Reports Server (NTRS)
Albrecht, Andreas; Stebbins, Albert
1992-01-01
We have estimated the power spectra of density fluctuations produced by cosmic strings with neutrino hot dark matter (HDM). Normalizing at 8/h Mpc, we find that the spectrum has more power on small scales than HDM + inflation, less than cold dark matter (CDM) + inflation, and significantly less than CDM + strings. With HDM, large wakes give a significant contribution to the power on the galaxy scale and may give rise to large sheets of galaxies.
Thermodynamic glass transition in a spin glass without time-reversal symmetry
Baños, Raquel Alvarez; Cruz, Andres; Fernandez, Luis Antonio; Gil-Narvion, Jose Miguel; Gordillo-Guerrero, Antonio; Guidetti, Marco; Iñiguez, David; Maiorano, Andrea; Marinari, Enzo; Martin-Mayor, Victor; Monforte-Garcia, Jorge; Muñoz Sudupe, Antonio; Navarro, Denis; Parisi, Giorgio; Perez-Gaviro, Sergio; Ruiz-Lorenzo, Juan Jesus; Schifano, Sebastiano Fabio; Seoane, Beatriz; Tarancon, Alfonso; Tellez, Pedro; Tripiccione, Raffaele; Yllanes, David
2012-01-01
Spin glasses are a longstanding model for the sluggish dynamics that appear at the glass transition. However, spin glasses differ from structural glasses in a crucial feature: they enjoy a time-reversal symmetry. This symmetry can be broken by applying an external magnetic field, but embarrassingly little is known about the critical behavior of a spin glass in a field. In this context, the space dimension is crucial. Simulations are easier to interpret in a large number of dimensions, but one must work below the upper critical dimension (i.e., in d < 6) in order for results to have relevance for experiments. Here we show conclusive evidence for the presence of a phase transition in a four-dimensional spin glass in a field. Two ingredients were crucial for this achievement: massive numerical simulations carried out on the Janus special-purpose computer, and a new and powerful finite-size scaling method. PMID:22493229
The formation and build-up of the red-sequence over the past 9 Gyr in VIPERS
NASA Astrophysics Data System (ADS)
Fritz, Alexander; Abbas, U.; Adami, C.; Arnouts, S.; Bel, J.; Bolzonella, M.; Bottini, D.; Branchini, E.; Burden, A.; Cappi, A.; Coupon, J.; Cucciati, O.; Davidzon, I.; De Lucia, G.; de la Torre, S.; Di Porto, C.; Franzetti, P.; Fumana, M.; Garilli, B.; Granett, B. R.; Guzzo, L.; Ilbert, O.; Iovino, A.; Krywult, J.; Le Brun, V.; Le Fèvre, O.; Maccagni, D.; Małek, K.; Marchetti, A.; Marinoni, C.; Marulli, F.; McCracken, H. J.; Mellier, Y.; Moscardini, L.; Nichol, R. C.; Paioro, L.; Peacock, J. A.; Percival, W. J.; Polletta, M.; Pollo, A.; Scodeggio, M.; Tasca, L. A. M.; Tojeiro, R.; Vergani, D.; Zamorani, G.; Zanichelli, A.; VIPERS Team
2015-02-01
We present the Luminosity Function (LF) and Colour-Magnitude Relation (CMR) using ~45000 galaxies drawn from the VIMOS Public Extragalactic Redshift Survey (VIPERS). Using different selection criteria, we define several samples of early-type galaxies and explore their impact on the evolution of the red-sequence (RS) and the effects of dust. Our results suggest a rapid build-up of the RS within a short time scale. We find a rise in the number density of early-type galaxies and a strong evolution in the LF and CMR. Massive galaxies were already in place 9 Gyr ago and experienced efficient quenching of their star formation at z = 1, followed by passive evolution with only limited merging activity. In contrast, low-mass galaxies indicate a different mass assembly history and drive a slow build-up of the CMR over cosmic time.
Missing matter in the vicinity of the sun
NASA Technical Reports Server (NTRS)
Bahcall, John N.
1986-01-01
The Poisson and Vlasov equations are solved numerically for realistic Galaxy models which include multiple disk components, a Population II spheroid, and an unseen massive halo. The total amount of matter in the vicinity of the sun is determined by comparing the observed distributions of tracer stars, samples of F dwarfs, and K giants with the predictions of the Galaxy models. Results are obtained for a number of different assumed distributions of the unseen disk mass. For all the observed samples, typical models imply that about half of the mass in the solar vicinity must be in the form of unobserved matter. The volume density of unobserved material near the sun is about 0.1 solar mass/cu pc; the corresponding column density is about 30 solar mass/sq pc. This so far unseen material must be in a disk with an exponential scale height of less than 0.7 kpc.
NASA Astrophysics Data System (ADS)
Vardi, Roni; Goldental, Amir; Sardi, Shira; Sheinin, Anton; Kanter, Ido
2016-11-01
The increasing number of recording electrodes enhances the capability of capturing the network's cooperative activity; however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in vitro, we show that the membrane potential of a single neuron is a reliable and super-sensitive probe for monitoring such cooperative activities and their detailed rhythms. Specifically, the membrane potential and the spiking activity of a single neuron are either highly correlated or highly anti-correlated with the time-dependent macroscopic activity of the entire network. This surprising observation also sheds light on the cooperative origin of neuronal bursts in cultured networks. Our findings present a flexible alternative to the approach based on a massive tiling of networks by large-scale arrays of electrodes to monitor their activity.
Applications of piezoelectric materials in oilfield services.
Goujon, Nicolas; Hori, Hiroshi; Liang, Kenneth K; Sinha, Bikash K
2012-09-01
Piezoelectric materials are used in many applications in the oilfield services industry. Four illustrative examples are given in this paper: marine seismic survey, precision pressure measurement, sonic logging-while-drilling, and ultrasonic bore-hole imaging. In marine seismics, piezoelectric hydrophones are deployed on a massive scale in a relatively benign environment. Hence, unit cost and device reliability are major considerations. The remaining three applications take place downhole in a characteristically harsh environment with high temperature and high pressure among other factors. The number of piezoelectric devices involved is generally small but otherwise highly valued. The selection of piezoelectric materials is limited, and the devices have to be engineered to withstand the operating conditions. With the global demand for energy increasing in the foreseeable future, the search for hydrocarbon resources is reaching into deeper and hotter wells. There is, therefore, a continuing and pressing need for high-temperature and high-coupling piezoelectric materials.
General relativity: An erfc metric
NASA Astrophysics Data System (ADS)
Plamondon, Réjean
2018-06-01
This paper proposes an erfc potential to incorporate in a symmetric metric. One key feature of this model is that it relies on the existence of an intrinsic physical constant σ, a star-specific proper length that scales all its surroundings. Based thereon, the new metric is used to study the space-time geometry of a static symmetric massive object, as seen from its interior. The analytical solutions to the Einstein equation are presented, highlighting the absence of singularities and discontinuities in such a model. The geodesics are derived in their second- and first-order differential forms. Recalling the slight impact of the new model on the classical general relativity tests in the solar system, a number of facts and open problems are briefly revisited on the basis of a heuristic definition of σ. Special attention is given to gravitational collapses and non-singular black holes.
aTRAM 2.0: An Improved, Flexible Locus Assembler for NGS Data
Allen, Julie M; LaFrance, Raphael; Folk, Ryan A; Johnson, Kevin P; Guralnick, Robert P
2018-01-01
Massive strides have been made in technologies for collecting genome-scale data. However, tools for efficiently and flexibly assembling raw outputs into downstream analytical workflows are still nascent. aTRAM 1.0 was designed to assemble any locus from genome sequencing data but was neither optimized for efficiency nor able to serve as a single toolkit for all assembly needs. We have completely re-implemented aTRAM and redesigned its structure for faster read retrieval while adding a number of key features to improve flexibility and functionality. The software can now (1) assemble single- or paired-end data, (2) utilize both read directions in the database, (3) use an additional de novo assembly module, and (4) leverage new built-in pipelines to automate common workflows in phylogenomics. Owing to reimplementation of databasing strategies, we demonstrate that aTRAM 2.0 is much faster across all applications compared to the previous version. PMID:29881251
Visualizing Internet routing changes.
Lad, Mohit; Massey, Dan; Zhang, Lixia
2006-01-01
Today's Internet provides a global data delivery service to millions of end users, and routing protocols play a critical role in this service. It is important to be able to identify and diagnose any problems occurring in Internet routing. However, the Internet's sheer size makes this task difficult. One cannot easily extract the most important or relevant routing information from the large amounts of data collected from multiple routers. To tackle this problem, we have developed Link-Rank, a tool to visualize Internet routing changes at the global scale. Link-Rank weighs links in a topological graph by the number of routes carried over each link and visually captures changes in link weights in the form of a topological graph with adjustable size. Using Link-Rank, network operators can easily observe important routing changes from massive amounts of routing data, discover otherwise unnoticed routing problems, understand the impact of topological events, and infer root causes of observed routing changes.
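The tool's internals are not given in the abstract, but the core weighting it describes, counting routes per link and diffing snapshots, is simple enough to sketch (the AS-path snapshots below are hypothetical, not Link-Rank's actual data format):

```python
from collections import Counter

def link_weights(routes):
    """Weigh each link by the number of routes traversing it."""
    weights = Counter()
    for path in routes:                      # a route is a node sequence, e.g. an AS path
        for link in zip(path, path[1:]):
            weights[link] += 1
    return weights

def weight_changes(before, after):
    """Report links whose carried-route count changed between two snapshots."""
    return {link: after.get(link, 0) - before.get(link, 0)
            for link in set(before) | set(after)
            if after.get(link, 0) != before.get(link, 0)}

# Hypothetical snapshots: one route to a prefix moves off AS2-AS3 onto AS2-AS4.
t0 = link_weights([("AS1", "AS2", "AS3"), ("AS5", "AS2", "AS3")])
t1 = link_weights([("AS1", "AS2", "AS4"), ("AS5", "AS2", "AS3")])
print(weight_changes(t0, t1))   # {('AS2','AS3'): -1, ('AS2','AS4'): 1}
```

Visualizing exactly these signed weight changes on the topology graph is what lets an operator spot a shift of many routes onto or off a link at a glance.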
Nonsingular cosmology from evolutionary quantum gravity
NASA Astrophysics Data System (ADS)
Cianfrani, Francesco; Montani, Giovanni; Pittorino, Fabrizio
2014-11-01
We provide a cosmological implementation of evolutionary quantum gravity, describing an isotropic Universe in the presence of a negative cosmological constant and a massive (preinflationary) scalar field. We demonstrate that the considered Universe has a nonsingular quantum behavior, associated with a primordial bounce, whose ground state has a high occupation number. Furthermore, in such a vacuum state, the super-Hamiltonian eigenvalue is negative, corresponding to a positive emerging dust energy density. The regularization of the model is performed via a polymer quantum approach to the Universe scale factor, and the proper classical limit is then recovered, in agreement with a preinflationary state of the Universe. Since the dust energy density is redshifted by the Universe's de Sitter phase and the cosmological constant does not enter the ground-state eigenvalue, we obtain a late-time cosmology compatible with present observations, endowed with a turning point in the far future.
The Sabin live poliovirus vaccination trials in the USSR, 1959.
Horstmann, D. M.
1991-01-01
Widespread use of the Sabin live attenuated poliovirus vaccine has had tremendous impact on the disease worldwide, virtually eliminating it from a number of countries, including the United States. Early proof of its safety and effectiveness was presented in 1959 by Russian investigators, who had staged massive trials in the USSR, involving millions of children. Their positive results were at first viewed in the United States and elsewhere with some skepticism, but the World Health Organization favored proceeding with large-scale trials, and responded to the claims made by Russian scientists by sending a representative to the USSR to review in detail the design and execution of the vaccine programs and the reliability of their results. The report that followed was a positive endorsement of the findings and contributed to the acceptance of the Sabin vaccine in the United States, where it has been the polio vaccine of choice since the mid-1960s. PMID:1814062
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griebel, M., E-mail: griebel@ins.uni-bonn.de; Rüttgers, A., E-mail: ruettgers@ins.uni-bonn.de
The multiscale FENE model is applied to a 3D square-square contraction flow problem. For this purpose, the stochastic Brownian configuration field (BCF) method has been coupled with our fully parallelized three-dimensional Navier-Stokes solver NaSt3DGPF. The robustness of the BCF method enables the numerical simulation of high-Deborah-number flows for which most macroscopic methods suffer from stability issues. The results of our simulations are compared with experimental measurements from the literature and show very good agreement. In particular, flow phenomena such as strong vortex enhancement, streamline divergence, and a flow inversion for highly elastic flows are reproduced. Due to their computational complexity, our simulations require massively parallel computations. Using a domain decomposition approach with MPI, the implementation achieves excellent scale-up results for up to 128 processors.
Quasar evolution and the growth of black holes
NASA Technical Reports Server (NTRS)
Small, Todd A.; Blandford, Roger D.
1992-01-01
A 'minimalist' model of AGN evolution is analyzed that links the measured luminosity function to an elementary description of black hole accretion. The observed luminosity function of bright AGN is extrapolated, and simple prescriptions for the growth and luminosity of black holes are introduced to infer quasar birth rates, mean fueling rates, and relict black hole distribution functions. It is deduced that the mean accretion rate scales as M^{-1/5} t^{-6.7} and that, for the most conservative model used, the number of relict black holes per decade declines only as M^{-0.4} for black hole masses between 3 × 10^7 and 3 × 10^9 solar masses. If all sufficiently massive galaxies pass through a quasar phase with asymptotic black hole mass a monotonic function of the galaxy mass, then it is possible to compare the space density of galaxies with estimated central masses to that of distant quasars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.
1997-12-31
The aim of the work performed is to develop a 3D parallel program for the numerical calculation of a gas dynamics problem with heat conductivity on distributed-memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shareable memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments has been carried out with different numbers of processors, up to 256, and the efficiency of parallelization has been evaluated as a function of the number of processors and their parameters.
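The abstract reports efficiency as a function of processor count without giving numbers; the standard back-of-the-envelope model for such curves is Amdahl's law, sketched below with an assumed 2% serial fraction (illustrative only, not a figure from the paper):

```python
def amdahl_speedup(p, serial_fraction):
    """Amdahl's law: speedup on p processors for a fixed serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# Assumed serial fraction of 2%; the paper does not report one.
for p in (8, 64, 256):
    s = amdahl_speedup(p, 0.02)
    print(f"{p:4d} procs: speedup {s:6.1f}, efficiency {s / p:.0%}")
```

Even a small serial fraction caps efficiency sharply at 256 processors, which is why decomposition strategies that avoid per-cycle reconstruction (the second approach above) matter at scale.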
The remnant of a merger between two dwarf galaxies in Andromeda II.
Amorisco, N C; Evans, N W; van de Ven, G
2014-03-20
Driven by gravity, massive structures like galaxies and clusters of galaxies are believed to grow continuously through hierarchical merging and accretion of smaller systems. Observational evidence of accretion events is provided by the coherent stellar streams crossing the outer haloes of massive galaxies, such as the Milky Way or Andromeda. At similar mass scales, around 10^11 solar masses in stars, further evidence of merging activity is also ample. Mergers of lower-mass galaxies are expected within the hierarchical process of galaxy formation, but have hitherto not been seen for galaxies with less than about 10^9 solar masses in stars. Here we report the kinematic detection of a stellar stream in one of the satellite galaxies of Andromeda, the dwarf spheroidal Andromeda II, which has a mass of only 10^7 solar masses in stars. The properties of the stream show that we are observing the remnant of a merger between two dwarf galaxies. This had a drastic influence on the dynamics of the remnant, which is now rotating around its projected major axis. The stellar stream in Andromeda II illustrates the scale-free character of the formation of galaxies, down to the lowest galactic mass scales.
Halo models of HI selected galaxies
NASA Astrophysics Data System (ADS)
Paul, Niladri; Choudhury, Tirthankar Roy; Paranjape, Aseem
2018-06-01
Modelling the distribution of neutral hydrogen (HI) in dark matter halos is important for studying galaxy evolution in the cosmological context. We use a novel approach to infer the HI-dark matter connection at the massive end (m_HI > 10^{9.8} M_⊙) from radio HI emission surveys, using optical properties of low-redshift galaxies as an intermediary. In particular, we use a previously calibrated optical halo occupation distribution (HOD) describing the luminosity- and colour-dependent clustering of SDSS galaxies and describe the HI content using a statistical scaling relation between the optical properties and HI mass. This allows us to compute the abundance and clustering properties of HI-selected galaxies and compare them with data from the ALFALFA survey. We apply an MCMC-based statistical analysis to constrain the free parameters related to the scaling relation. The resulting best-fit scaling relation identifies massive HI galaxies primarily with optically faint blue centrals, consistent with expectations from galaxy formation models. We compare the HI-stellar mass relation predicted by our model with independent observations from matched HI-optical galaxy samples, finding reasonable agreement. As a further application, we make some preliminary forecasts for future observations of HI and optical galaxies in the expected overlap volume of SKA and Euclid/LSST.
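The paper's likelihood and parametrization are not reproduced in the abstract; the sketch below only illustrates the generic step of constraining a power-law scaling relation with scatter via MCMC, using the emcee sampler on synthetic data (all parameter names and numbers are invented):

```python
import numpy as np
import emcee

rng = np.random.default_rng(0)

# Synthetic stand-in for an optical-to-HI scaling relation:
# log10 m_HI = a * (log10 L - 10) + b, with Gaussian scatter s.
a_true, b_true, s_true = 0.6, 9.9, 0.15
logL = rng.uniform(9.0, 11.0, 200)
logmHI = a_true * (logL - 10.0) + b_true + rng.normal(0, s_true, logL.size)

def log_prob(theta, x, y):
    a, b, s = theta
    if not (0 < s < 1 and -5 < a < 5 and 5 < b < 15):   # flat priors
        return -np.inf
    resid = y - (a * (x - 10.0) + b)
    return -0.5 * np.sum(resid**2 / s**2 + np.log(2 * np.pi * s**2))

ndim, nwalkers = 3, 32
p0 = np.array([0.5, 10.0, 0.2]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(logL, logmHI))
sampler.run_mcmc(p0, 3000)
chain = sampler.get_chain(discard=500, flat=True)
print(chain.mean(axis=0))   # recovers roughly (0.6, 9.9, 0.15)
```

In the actual analysis the likelihood would compare model-predicted abundances and clustering against ALFALFA measurements rather than fitting mock points directly.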
Formation of young massive clusters from turbulent molecular clouds
NASA Astrophysics Data System (ADS)
Fujii, Michiko; Portegies Zwart, Simon
2015-08-01
We simulate the formation and evolution of young star clusters using smoothed-particle hydrodynamics (SPH) and direct N-body methods. We start by performing SPH simulations of giant molecular clouds with a turbulent velocity field, a mass of 10^4 to 10^6 M_sun, and a density between 17 and 1700 cm^-3. We continue the SPH simulations for a free-fall time scale and analyze the resulting structure of the collapsed cloud. We subsequently replace a density-selected subset of SPH particles with stars. As a consequence, the local star formation efficiency exceeds 30 per cent, whereas globally only a few per cent of the gas is converted to stars. The stellar distribution is very clumpy, with typically a dozen bound conglomerates that consist of 100 to 10000 stars. We continue to evolve the stars dynamically using the collisional N-body method, which accurately treats all pairwise interactions, stellar collisions, and stellar evolution. We analyze the results of the N-body simulations at 2 Myr and 10 Myr. From dense massive molecular clouds, massive clusters grow via hierarchical merging of smaller clusters. The shape of the cluster mass function that originates from an individual molecular cloud is consistent with a Schechter function with a power-law slope of β = -1.73 at 2 Myr and β = -1.67 at 10 Myr, which fits the observed cluster mass function of the Carina region. The superposition of mass functions has a power-law slope of < -2, which fits the observed mass function of star clusters in the Milky Way, M31, and M83. We further find that the mass of the most massive cluster formed in a single molecular cloud of mass M_g scales as 6.1 M_g^{0.51}, which also agrees with recent observations in M51. The molecular clouds that can form massive clusters are much denser than those typical of the Milky Way. The velocity dispersion of such molecular clouds reaches 20 km/s, consistent with the relative velocity of the molecular clouds observed near NGC 3603 and Westerlund 2, for which triggered star formation by cloud-cloud collisions has been suggested.
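A hedged Monte Carlo sketch of the quoted mass function: cluster masses are drawn from a Schechter-like distribution dN/dM ∝ M^β exp(-M/M*) with the reported slope β = -1.73, via rejection sampling on a power-law proposal. M*, the mass limits, and the draw counts are invented for illustration; the paper's M_max ∝ M_g^{0.51} relation comes from the full simulations, and here one only sees qualitatively that richer draws yield a more massive maximum:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_schechter(n, beta=-1.73, m_star=1e5, m_min=1e2):
    """Draw n masses from dN/dM ∝ M^beta * exp(-M/m_star) by rejection
    sampling: propose from the pure power law, accept with exp(-M/m_star)."""
    out = []
    m_max = 20 * m_star
    g = 1.0 + beta                      # exponent in the power-law CDF
    while len(out) < n:
        u = rng.uniform(size=n)
        m = (m_min**g + u * (m_max**g - m_min**g)) ** (1.0 / g)
        m = m[rng.uniform(size=n) < np.exp(-m / m_star)]  # exponential cutoff
        out.extend(m.tolist())
    return np.array(out[:n])

# Illustrative trend: clouds hosting more clusters reach higher maximum masses.
for n_clusters in (10, 100, 1000):
    masses = sample_schechter(n_clusters)
    print(n_clusters, f"clusters -> max mass ~ {masses.max():.2e} M_sun")
```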
1985-09-01
velocity 75-mm gun that fired a tungsten carbide antitank round, and the massive Mark VI Tiger tank, which carried a version of the deadly 88-mm gun...guns would be able to penetrate the frontal armor of the massive Mark VI Tiger tank at a comfortable two thousand yards.27 Prior to the invasion, the...conceived counterattack aimed at recapturing the Sidi-bou-Zid position and was badly battered in the ensuing German ambush.17 A damaged Mark VI Tiger
A Massively Parallel Code for Polarization Calculations
NASA Astrophysics Data System (ADS)
Akiyama, Shizuka; Höflich, Peter
2001-03-01
We present an implementation of our Monte Carlo radiation transport method for rapidly expanding NLTE atmospheres on massively parallel computers, utilizing both the distributed- and shared-memory models. This allows us to take full advantage of the fast communication and low latency inherent to nodes with multiple CPUs, and to stretch the limits of scalability with the number of nodes compared to a version based on the shared-memory model alone. Test calculations on a local 20-node Beowulf cluster with dual CPUs showed scalability improved by about 40%.
Screening large-scale association study data: exploiting interactions using random forests.
Lunetta, Kathryn L; Hayward, L Brooke; Segal, Jonathan; Van Eerdewegh, Paul
2004-12-10
Genome-wide association studies for complex diseases will produce genotypes on hundreds of thousands of single nucleotide polymorphisms (SNPs). A logical first approach to dealing with massive numbers of SNPs is to use some test to screen the SNPs, retaining only those that meet some criterion for further study. For example, SNPs can be ranked by p-value, and those with the lowest p-values retained. When SNPs have large interaction effects but small marginal effects in a population, they are unlikely to be retained when univariate tests are used for screening. However, model-based screens that pre-specify interactions are impractical for data sets with thousands of SNPs. Random forest analysis is an alternative method that produces a single measure of importance for each predictor variable that takes into account interactions among variables without requiring model specification. Interactions increase the importance for the individual interacting variables, making them more likely to be given high importance relative to other variables. We test the performance of random forests as a screening procedure to identify small numbers of risk-associated SNPs from among large numbers of unassociated SNPs using complex disease models with up to 32 loci, incorporating both genetic heterogeneity and multi-locus interaction. Keeping other factors constant, if risk SNPs interact, the random forest importance measure significantly outperforms the Fisher Exact test as a screening tool. As the number of interacting SNPs increases, the improvement in performance of random forest analysis relative to Fisher Exact test for screening also increases. Random forests perform similarly to the univariate Fisher Exact test as a screening tool when SNPs in the analysis do not interact. In the context of large-scale genetic association studies where unknown interactions exist among true risk-associated SNPs or SNPs and environmental covariates, screening SNPs using random forest analyses can significantly reduce the number of SNPs that need to be retained for further study compared to standard univariate screening methods.
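A minimal sketch of the screening idea using scikit-learn's random forest on synthetic genotype data; this is not the authors' code, and the disease model below (risk depending jointly on two SNPs) is invented to make the interaction point concrete:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic genotypes: 1000 subjects x 100 SNPs coded 0/1/2 (minor-allele counts).
n, p = 1000, 100
X = rng.integers(0, 3, size=(n, p))

# Interacting risk pair: disease risk is elevated only when SNP 0 and SNP 1
# both carry a minor allele, so each contributes mainly through the interaction.
risk = (X[:, 0] > 0) & (X[:, 1] > 0)
y = rng.uniform(size=n) < np.where(risk, 0.7, 0.3)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
ranks = np.argsort(rf.feature_importances_)[::-1]
print("top 5 SNPs by importance:", ranks[:5])   # SNPs 0 and 1 should rank high
```

Ranking all SNPs by `feature_importances_` and retaining the top fraction is the screening step the abstract compares against univariate Fisher exact tests.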
New HST/STIS Spectroscopy of Massive Members of R136 in 30 Doradus
NASA Astrophysics Data System (ADS)
Bostroem, Kyra; Walborn, Nolan; Crowther, Paul; Caballero-Nieves, Saida; Lennon, Daniel; Maíz Apellániz, Jesús
2013-06-01
We display new (in some cases, the first ever) spatially resolved optical and UV spectroscopy of a number of early O-type stars in R136, the massive core cluster of 30 Doradus in the LMC. Some of them are of the earliest spectral types, O2-O3, which accompany the more luminous WN members that are the most massive stars known, near or exceeding 300 M_⊙ initially. These results are relevant to the very top of the IMF and to the structure and formation of starburst clusters. The data are from HST/STIS programs GO 12465/13052 (PI Crowther), in which the long slit was stepped across the inner 4 arcsec (1 parsec) of R136, yielding both optical photospheric and FUV stellar-wind spectra of at least 100 resolved members, many of them for the first time. The optical data were obtained at 4 epochs to support eventual radial-velocity detection of spectroscopic binaries. This program vitally complements the VLT-FLAMES Tarantula Survey of the wider stellar content of 30 Doradus by adding that of the massive core cluster, which is inaccessible to such observations from the ground. These combined datasets will provide unprecedented information about massive stellar evolution and starbursts.
Eta Carinae in the Context of the Most Massive Stars
NASA Technical Reports Server (NTRS)
Gull, Theodore R.; Damineli, Augusto
2009-01-01
Eta Car, with its historical outbursts, visible ejecta, and massive, variable winds, continues to challenge both observers and modelers. In just the past five years over 100 papers have been published on this fascinating object. We now know it to be a massive binary system with a 5.54-year period. In January 2009, Eta Car underwent one of its periodic low-states, associated with periastron passage of the two massive stars. This event was monitored by an intensive multi-wavelength campaign ranging from γ-rays to radio. A large amount of data was collected to test a number of evolving models, including 3-D models of the massive interacting winds. August 2009 was an excellent time for observers and theorists to come together and review the accumulated studies, as has occurred in four meetings devoted to Eta Car since 1998. Indeed, Eta Car behaved both predictably and unpredictably during this most recent periastron, spurring timely discussions. Coincidentally, WR140 also passed through periastron in early 2009. It, too, is an intensively studied massive interacting binary. Comparison of its properties, as well as the properties of other massive stars, with those of Eta Car is very instructive. These well-known examples of evolved massive binary systems provide many clues as to the fate of the most massive stars. What are the effects of the interacting winds, of individual stellar rotation, and of the circumstellar material on what we see as hypernovae/supernovae? We hope to learn. Topics discussed in this 1.5-day Joint Discussion were: Eta Car, the 2009.0 event: monitoring campaigns in X-rays, optical, radio, interferometry; WR140 and HD5980: similarities and differences to Eta Car; LBVs and Eta Carinae: what is the relationship?; massive binary systems, wind interactions and 3-D modeling; shapes of the Homunculus & Little Homunculus: what do we learn about mass ejection?; massive stars: the connection to supernovae, hypernovae and gamma-ray bursters; where do we go from here? (future directions). The Science Organizing Committee: Co-chairs: Augusto Damineli (Brazil) & Theodore R. Gull (USA). Members: D. John Hillier (USA), Gloria Koenigsberger (Mexico), Georges Meynet (Switzerland), Nidia Morrell (Chile), Atsuo T. Okazaki (Japan), Stanley P. Owocki (USA), Andy M.T. Pollock (Spain), Nathan Smith (USA), Christiaan L. Sterken (Belgium), Nicole St-Louis (Canada), Karel A. van der Hucht (Netherlands), Roberto Viotti (Italy) and Gerd Weigelt (Germany)
PREPping Students for Authentic Science
ERIC Educational Resources Information Center
Dolan, Erin L.; Lally, David J.; Brooks, Eric; Tax, Frans E.
2008-01-01
In this article, the authors describe a large-scale research collaboration, the Partnership for Research and Education in Plants (PREP), which has capitalized on publicly available databases that contain massive amounts of biological information; stock centers that house and distribute inexpensive organisms with different genotypes; and the…
Large-scale enrichment and discovery of gene-associated SNPs
USDA-ARS?s Scientific Manuscript database
With the recent advent of massively parallel pyrosequencing by 454 Life Sciences it has become feasible to cost-effectively identify numerous single nucleotide polymorphisms (SNPs) within the recombinogenic regions of the maize (Zea mays L.) genome. We developed a modified version of hypomethylated...
The massive scale of the 1997-1998 El Nino-associated coral bleaching event underscores the need for strategies to mitigate biodiversity losses resulting from temperature-induced coral mortality. As baseline sea surface temperatures continue to rise, climate change may represent ...
Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi
2016-08-05
The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. Functions to perform large-scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests show the high efficiency of the DC-DFTB-K program: a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.
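The interpolation algorithm itself is not described in the abstract, but the condition it parallelizes is standard: choose the Fermi level μ so that Fermi-Dirac occupations summed over all subsystem orbitals equal the total electron count. A plain-bisection sketch of that condition (toy spectrum and temperature; invented numbers, not the paper's scheme):

```python
import numpy as np

def total_electrons(mu, orbital_energies, beta=1.0 / 0.00095):
    """Electron count from Fermi-Dirac occupations over all subsystem
    orbitals (factor 2 for spin); beta ~ 1/(k_B T) in hartree at ~300 K."""
    x = np.clip(beta * (orbital_energies - mu), -500, 500)  # avoid overflow
    return 2.0 * np.sum(1.0 / (1.0 + np.exp(x)))

def find_fermi_level(orbital_energies, n_electrons, tol=1e-10):
    """Bisect on mu; total_electrons is monotonically increasing in mu."""
    lo, hi = orbital_energies.min() - 1.0, orbital_energies.max() + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total_electrons(mid, orbital_energies) < n_electrons:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy spectrum: 50 orbitals, 40 electrons (20 doubly occupied at T -> 0).
eps = np.sort(np.random.default_rng(3).normal(0.0, 0.3, 50))
mu = find_fermi_level(eps, 40)
print(mu, total_electrons(mu, eps))   # mu sits between eps[19] and eps[20]
```

In the DC setting the sum runs over orbitals of many distributed subsystems, so each bisection (or interpolation) step requires a global reduction, which is exactly what makes this step worth optimizing on thousands of nodes.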
Genomics pipelines and data integration: challenges and opportunities in the research setting
Davis-Turak, Jeremy; Courtney, Sean M.; Hazard, E. Starr; Glen, W. Bailey; da Silveira, Willian; Wesselman, Timothy; Harbin, Larry P.; Wolf, Bethany J.; Chung, Dongjun; Hardiman, Gary
2017-01-01
Introduction: The emergence and mass utilization of high-throughput (HT) technologies, including sequencing technologies (genomics) and mass spectrometry (proteomics, metabolomics, lipids), has allowed geneticists, biologists, and biostatisticians to bridge the gap between genotype and phenotype on a massive scale. These new technologies have brought rapid advances in our understanding of cell biology, evolutionary history, and microbial environments, and are increasingly providing new insights and applications towards clinical care and personalized medicine. Areas covered: The very success of this industry also translates into daunting big data challenges for researchers and institutions that extend beyond the traditional academic focus of algorithms and tools. The main obstacles revolve around analysis provenance, data management of massive datasets, ease of use of software, and interpretability and reproducibility of results. Expert commentary: The authors review the challenges associated with implementing bioinformatics best practices in a large-scale setting, and highlight the opportunity for establishing bioinformatics pipelines that incorporate data tracking and auditing, enabling greater consistency and reproducibility for basic research, translational, or clinical settings. PMID:28092471
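A minimal illustration of the data-tracking-and-auditing idea mentioned above, not any specific pipeline framework: record a content hash of every input and output plus the tool version for each step, so results can be traced and silently changed files detected (file and tool names are hypothetical):

```python
import hashlib, json, time

def file_sha256(path):
    """Content hash, so any silent change to an input or output is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_step(log_path, step, inputs, outputs, tool, version):
    """Append one provenance record per pipeline step (JSON lines)."""
    entry = {
        "step": step,
        "tool": tool,
        "version": version,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "inputs": {p: file_sha256(p) for p in inputs},
        "outputs": {p: file_sha256(p) for p in outputs},
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage after an alignment step:
# record_step("provenance.jsonl", "align", ["sample.fastq"], ["sample.bam"],
#             tool="bwa-mem", version="0.7.17")
```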
Massive outflow properties suggest AGN fade slowly
NASA Astrophysics Data System (ADS)
Zubovas, Kastytis
2018-01-01
Massive large-scale active galactic nucleus (AGN) outflows are an important element of galaxy evolution, being a way through which the AGN can affect most of the host galaxy. However, outflows evolve on time-scales much longer than typical AGN episode durations, so most AGN outflows are not observed simultaneously with the AGN episode that inflated them. It is therefore remarkable that rather tight correlations between outflow properties and AGN luminosity exist. In this paper, I show that such correlations can be preserved during the fading phase of the AGN episode, provided that the AGN luminosity evolves as a power law with exponent α_d ∼ 1 at late times. I also show that subsequent AGN episodes that illuminate an ongoing outflow are unlikely to produce outflow momentum or energy rates rising above the observed correlations. However, there may be many difficult-to-detect outflows with momentum and energy rates lower than expected from the current AGN luminosity. Detailed observations of AGN outflow properties might help constrain the activity histories of typical and/or individual AGN.
Projection Effects of Large-scale Structures on Weak-lensing Peak Abundances
NASA Astrophysics Data System (ADS)
Yuan, Shuo; Liu, Xiangkun; Pan, Chuzhong; Wang, Qiao; Fan, Zuhui
2018-04-01
High peaks in weak lensing (WL) maps originate predominantly from the lensing effects of single massive halos. Their abundance is therefore closely related to the halo mass function and is thus a powerful cosmological probe. However, besides individual massive halos, large-scale structures (LSS) along lines of sight also contribute to the peak signals. In this paper, with ray-tracing simulations, we investigate the LSS projection effects. We show that for current surveys with a large shape noise, the stochastic LSS effects are subdominant. For future WL surveys with source galaxies having a median redshift z_med ∼ 1 or higher, however, they are significant. For the cosmological constraints derived from observed WL high-peak counts, severe biases can occur if the LSS effects are not taken into account properly. We extend the model of Fan et al. by incorporating the LSS projection effects into the theoretical considerations. By comparing with simulation results, we demonstrate the good performance of the improved model and its applicability in cosmological studies.
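As a rough illustration of what "peak counts" means operationally, here is a sketch that counts local maxima above a signal-to-noise threshold in a noise-only mock convergence map (smoothing scale and map size are invented; real analyses add the halo signal and the correlated LSS modes discussed above):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)

# Noise-only mock: pixel shape noise smoothed with a Gaussian filter, then
# expressed in units of the map rms (signal-to-noise, nu).
npix = 512
noise = rng.normal(0.0, 1.0, (npix, npix))
kappa = ndimage.gaussian_filter(noise, sigma=3.0)
kappa /= kappa.std()

def count_peaks(field, nu):
    """Count local maxima above a signal-to-noise threshold nu."""
    is_max = field == ndimage.maximum_filter(field, size=3)
    return int(np.sum(is_max & (field > nu)))

for nu in (3.0, 4.0, 5.0):
    print(f"nu > {nu}: {count_peaks(kappa, nu)} peaks")
```

High peaks sourced by massive halos add on top of this noise floor; the paper's point is that correlated LSS modes along the line of sight shift and scatter those peak heights, which the extended model must absorb.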
Massive cortical reorganization in sighted Braille readers
Siuda-Krzywicka, Katarzyna; Bola, Łukasz; Paplińska, Małgorzata; Sumera, Ewa; Jednoróg, Katarzyna; Marchewka, Artur; Śliwińska, Magdalena W; Amedi, Amir; Szwed, Marcin
2016-01-01
The brain is capable of large-scale reorganization in blindness or after massive injury. Such reorganization crosses the division into separate sensory cortices (visual, somatosensory...). As its result, the visual cortex of the blind becomes active during tactile Braille reading. Although the possibility of such reorganization in the normal, adult brain has been raised, definitive evidence has been lacking. Here, we demonstrate such extensive reorganization in normal, sighted adults who learned Braille while their brain activity was investigated with fMRI and transcranial magnetic stimulation (TMS). Subjects showed enhanced activity for tactile reading in the visual cortex, including the visual word form area (VWFA) that was modulated by their Braille reading speed and strengthened resting-state connectivity between visual and somatosensory cortices. Moreover, TMS disruption of VWFA activity decreased their tactile reading accuracy. Our results indicate that large-scale reorganization is a viable mechanism recruited when learning complex skills. DOI: http://dx.doi.org/10.7554/eLife.10762.001 PMID:26976813
Tacchella, S; Carollo, C M; Renzini, A; Förster Schreiber, N M; Lang, P; Wuyts, S; Cresci, G; Dekel, A; Genzel, R; Lilly, S J; Mancini, C; Newman, S; Onodera, M; Shapley, A; Tacconi, L; Woo, J; Zamorani, G
2015-04-17
Most present-day galaxies with stellar masses ≥10^11 solar masses show no ongoing star formation and are dense spheroids. Ten billion years ago, similarly massive galaxies were typically forming stars at rates of hundreds of solar masses per year. It is debated how star formation ceased, on which time scales, and how this "quenching" relates to the emergence of dense spheroids. We measured stellar mass and star-formation rate surface density distributions in star-forming galaxies at redshift 2.2 with ~1-kiloparsec resolution. We find that, in the most massive galaxies, star formation is quenched from the inside out, on time scales of less than 1 billion years in the inner regions and up to a few billion years in the outer disks. These galaxies sustain high star-formation activity at large radii while hosting fully grown and already quenched bulges in their cores. Copyright © 2015, American Association for the Advancement of Science.
[First chemical mass attack in history of wars, Bolimów, January 31, 1915].
Zieliński, Andrzej
2010-01-01
World War I was the conflict during which chemical warfare was first used on a massive scale. The earliest chemical attack occurred on the Western Front in October 1914 at Neuve Chapelle, but its effects were so minimal that the Allies learned about it only after the war, from German documents. The attack in the area of Bolimów, made by the Germans against the Russian army with artillery shells containing gas T (xylyl and benzyl bromides), was therefore the first attack on a massive scale recorded on the victims' side. The attack made it possible to obtain some tactical success, but without a strategic breakthrough. Some of the later German attacks on the Eastern Front, where chlorine was used, proved to be more effective, but despite the many victims no major strategic success was achieved. The Russians did not attempt to use chemical weapons in the First World War.
Issues of nanoelectronics: a possible roadmap.
Wang, Kang L
2002-01-01
In this review, we will discuss a possible roadmap in scaling a nanoelectronic device from today's CMOS technology to the ultimate limit when the device fails. In other words, at the limit, CMOS will have a severe short channel effect, significant power dissipation in its quiescent (standby) state, and problems related to other essential characteristics. Efforts to use structures such as the double gate, vertical surround gate, and SOI to improve the gate control have continually been made. Other types of structures using SiGe source/drain, asymmetric Schottky source/drain, and the like will be investigated as viable structures to achieve ultimate CMOS. In reaching its scaling limit, tunneling will be an issue for CMOS. The tunneling current through the gate oxide and between the source and drain will limit the device operation. When tunneling becomes significant, circuits may incorporate tunneling devices with CMOS to further increase the functionality per device count. We will discuss both the top-down and bottom-up approaches in attaining the nanometer scale and eventually the atomic scale. Self-assembly is used as a bottom-up approach. The state of the art is reviewed, and the challenges of the multiple-step processing in using the self-assembly approach are outlined. Another facet of the scaling trend is to decrease the number of electrons in devices, ultimately leading to single electrons. If the size of a single-electron device is scaled in such a way that the Coulomb self-energy is higher than the thermal energy (at room temperature), a single-electron device will be able to operate at room temperature. In principle, the speed of the device will be fast as long as the capacitance of the load is also scaled accordingly. The single-electron device will have a small drive current, and thus the load capacitance, including those of interconnects and fanouts, must be small to achieve a reasonable speed. However, because the increase in the density (and/or functionality) of integrated circuits is the principal driver, the wiring or interconnects will increase and become the bottleneck for the design of future high-density and high-functionality circuits, particularly for single-electron devices. Furthermore, the massive interconnects needed in the architecture used today will result in an increase in load capacitance. Thus for single-electron device circuits, it is critical to have minimal interconnect loads. And new types of architectures with minimal numbers of global interconnects will be needed. Cellular automata, which need only nearest-neighbor interconnects, are discussed as a plausible example. Other architectures such as neural networks are also possible. Examples of signal processing using cellular automata are discussed. Quantum computing and information processing are based on quantum mechanical descriptions of individual particles correlated among each other. A quantum bit or qubit is described as a linear superposition of the wave functions of a two-state system, for example, the spin of a particle. With the interaction of two qubits, they are connected in a "wireless fashion" using wave functions via quantum mechanical interaction, referred to as entanglement. The interconnection by the nonlocality of wave functions affords a massive parallel nature for computing or so-called quantum parallelism. We will describe the potential and solid-state implementations of quantum computing and information, using electron spin and/or nuclear spin in Si and Ge. 
Group IV elements have a long coherence time and other advantages. The example of using SiGe for g-factor engineering will be described.
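The room-temperature condition quoted above for single-electron devices—a Coulomb self-energy exceeding the thermal energy—is easy to make concrete. The following back-of-the-envelope sketch is my own arithmetic, not from the review; the 10x safety margin is an illustrative assumption:

```python
# Room-temperature operating condition for a single-electron device:
# the Coulomb charging energy E_C = e^2 / (2C) must exceed k_B * T.
e = 1.602176634e-19   # elementary charge [C]
kB = 1.380649e-23     # Boltzmann constant [J/K]
T = 300.0             # room temperature [K]

kT = kB * T
C_max = e**2 / (2.0 * kT)      # island capacitance where E_C equals k_B T
C_margin = C_max / 10.0        # with a 10x margin, E_C = 10 k_B T (assumption)

print(f"k_B T at 300 K       : {kT:.3e} J")
print(f"C for E_C = k_B T    : {C_max:.3e} F")   # ~3.1e-18 F, i.e. ~3 aF
print(f"C for E_C = 10 k_B T : {C_margin:.3e} F")
```

A capacitance at the attofarad level corresponds to an island in the nanometre-to-tens-of-nanometres range, consistent with the review's point that room-temperature operation pushes device dimensions toward the atomic scale.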
LoCuSS: The infall of X-ray groups onto massive clusters
NASA Astrophysics Data System (ADS)
Haines, C. P.; Finoguenov, A.; Smith, G. P.; Babul, A.; Egami, E.; Mazzotta, P.; Okabe, N.; Pereira, M. J.; Bianconi, M.; McGee, S. L.; Ziparo, F.; Campusano, L. E.; Loyola, C.
2018-03-01
Galaxy clusters are expected to form hierarchically in a ΛCDM universe, growing primarily through mergers with lower mass clusters and the continual accretion of group-mass halos. Galaxy clusters assemble late, doubling their masses since z ˜ 0.5, and so the outer regions of clusters should be replete with accreting group-mass systems. We present an XMM-Newton survey to search for X-ray groups in the infall regions of 23 massive galaxy clusters (
Formation of massive, dense cores by cloud-cloud collisions
NASA Astrophysics Data System (ADS)
Takahira, Ken; Shima, Kazuhiro; Habe, Asao; Tasker, Elizabeth J.
2018-03-01
We performed sub-parsec (˜0.014 pc) scale simulations of cloud-cloud collisions of two idealized turbulent molecular clouds (MCs) with different masses in the range of (0.76-2.67) × 10^4 M_⊙ and with collision speeds of 5-30 km s^-1. These parameters are larger than in Takahira, Tasker, and Habe (2014, ApJ, 792, 63), whose simulated colliding system showed a partial gaseous arc morphology that supports the NANTEN observations of objects identified as colliding MCs. Gas clumps with density greater than 10^-20 g cm^-3 were identified as pre-stellar cores and tracked through the simulation to investigate the effects of the masses of the colliding clouds and of the collision speed on the resulting core population. Our results demonstrate that the properties of the smaller cloud are more important for the outcome of cloud-cloud collisions. The mass function of the formed cores can be approximated by a power-law relation with an index γ = -1.6 in slower cloud-cloud collisions (v ˜ 5 km s^-1), in good agreement with observations of MCs. A faster relative speed increases the number of cores formed in the early stage of the collision and shortens the gas accretion phase of cores in the shocked region, leading to the suppression of core growth. A bending point appears in the high-mass part of the core mass function, and the bending-point mass decreases with increasing collision speed for the same combination of colliding clouds. Above the bending point, the core mass function can be approximated by a power law with γ = -2 to -3, similar to the power index of the massive part of the observed stellar initial mass function. We discuss the implications of our results for massive-star formation in our Galaxy.
NASA Astrophysics Data System (ADS)
van der Burg, Remco F. J.; Hoekstra, Henk; Muzzin, Adam; Sifón, Cristóbal; Viola, Massimo; Bremer, Malcolm N.; Brough, Sarah; Driver, Simon P.; Erben, Thomas; Heymans, Catherine; Hildebrandt, Hendrik; Holwerda, Benne W.; Klaes, Dominik; Kuijken, Konrad; McGee, Sean; Nakajima, Reiko; Napolitano, Nicola; Norberg, Peder; Taylor, Edward N.; Valentijn, Edwin
2017-11-01
In recent years, many studies have reported substantial populations of large galaxies with low surface brightness in local galaxy clusters. Various theories that aim to explain the presence of such ultra-diffuse galaxies (UDGs) have since been proposed. A key question that will help to distinguish between models is whether UDGs have counterparts in host haloes with lower masses, and if so, what their abundance as a function of halo mass is. We here extend our previous study of UDGs in galaxy clusters to galaxy groups. We measure the abundance of UDGs in 325 spectroscopically selected groups from the Galaxy And Mass Assembly (GAMA) survey. We make use of the overlapping imaging from the ESO Kilo-Degree Survey (KiDS), from which we can identify galaxies with mean surface brightnesses within their effective radii down to 25.5 mag arcsec^-2 in the r band. We are able to measure a significant overdensity of UDGs (with sizes r_eff ≥ 1.5 kpc) in galaxy groups down to M200 = 10^12 M⊙, a regime where only about one in ten groups contains a UDG that we can detect. We combine measurements of the abundance of UDGs in haloes that cover three orders of magnitude in halo mass, finding that their numbers scale quite steeply with halo mass: N_UDG(R < R200) ∝ M200^(1.11±0.07). To better interpret this, we also measure the mass-richness relation for brighter galaxies down to M_r* + 2.5 in the same GAMA groups, and find a much shallower relation of N_Bright(R < R200) ∝ M200^(0.78±0.05). This shows that, compared to bright galaxies, UDGs are relatively more abundant in massive clusters than in groups. We discuss the implications, but it is still unclear whether this difference is related to a higher destruction rate of UDGs in groups or whether massive haloes have a positive effect on UDG formation.
NASA Astrophysics Data System (ADS)
Brockmann, J. M.; Schuh, W.-D.
2011-07-01
The estimation of the global Earth gravity field, parametrized as a finite spherical harmonic series, is computationally demanding. The computational effort depends on the one hand on the maximal resolution of the spherical harmonic expansion (i.e. the number of parameters to be estimated) and on the other hand on the number of observations (several millions, e.g. for observations from the GOCE satellite mission). To circumvent these restrictions, massively parallel software based on high-performance computing (HPC) libraries such as ScaLAPACK, PBLAS and BLACS was designed in the context of GOCE HPF WP6000 and the GOCO consortium. A prerequisite for the use of these libraries is that all matrices are block-cyclically distributed on a processor grid composed of a large number of (distributed memory) computers. Using this set of standard HPC libraries has the benefit that, once the matrices are distributed across the computer cluster, a huge set of efficient and highly scalable linear algebra operations can be used.
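For readers unfamiliar with the prerequisite mentioned above, the 2D block-cyclic layout assumed by ScaLAPACK/PBLAS can be illustrated in a few lines. The sketch below shows the standard mapping (my own illustration, not code from the GOCE HPF WP6000 software): which process owns a given global matrix entry, and where that entry lands locally.

```python
# Standard 2D block-cyclic distribution: global element (i, j) of a matrix
# is owned by process (pr, pc) on a Pr x Pc grid with square blocks of size nb.
def block_cyclic_owner(i, j, nb, Pr, Pc):
    """Return the owning process coordinates for global entry (i, j)."""
    return ((i // nb) % Pr, (j // nb) % Pc)

def local_index(g, nb, P):
    """Map a global index g to the local index on its owning process."""
    block = g // nb                  # which global block g falls in
    return (block // P) * nb + g % nb

# example: 2x2 blocks on a 2x2 process grid
nb, Pr, Pc = 2, 2, 2
print(block_cyclic_owner(5, 7, nb, Pr, Pc))   # -> (0, 1)
print(local_index(5, nb, Pr))                 # -> 3
```

The cyclic wrap-around is what keeps the triangular and banded operations in dense solvers load-balanced as the computation sweeps across the matrix.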
Computer analysis of digital sky surveys using citizen science and manual classification
NASA Astrophysics Data System (ADS)
Kuminski, Evan; Shamir, Lior
2015-01-01
As current and future digital sky surveys such as SDSS, LSST, DES, Pan-STARRS and Gaia create increasingly massive databases containing millions of galaxies, there is a growing need to be able to analyze these data efficiently. An effective way to do this is through manual analysis; however, this may be insufficient given the extremely vast pipelines of astronomical images generated by present and future surveys. Some efforts have been made to use citizen science to classify galaxies by their morphology on a larger scale than individuals or small groups of scientists can. While citizen science efforts such as Zooniverse have helped obtain reasonably accurate morphological information about large numbers of galaxies, they cannot scale to provide complete analysis of the billions of galaxy images that will be collected by future ventures such as LSST. Since current forms of manual classification cannot scale to the masses of data collected by digital sky surveys, it is clear that some form of automated data analysis will be required to keep up with the growing databases, working either independently or in combination with human analysis such as citizen science. Here we describe a computer vision method that can automatically analyze galaxy images and deduce galaxy morphology. Experiments using Galaxy Zoo 2 data show that the performance of the method increases as the degree of agreement between the citizen scientists gets higher, providing a cleaner dataset. For several morphological features, such as the spirality of the galaxy, the algorithm agreed with the citizen scientists on around 95% of the samples. However, the method failed to analyze some of the morphological features, such as the number of spiral arms, and provided an accuracy of just ~36%.
Light domain walls, massive neutrinos and the large scale structure of the Universe
NASA Technical Reports Server (NTRS)
Massarotti, Alessandro
1991-01-01
Domain walls generated through a cosmological phase transition are considered, which interact nongravitationally with light neutrinos. At a redshift z ≥ 10^4, the network grows rapidly and is virtually decoupled from the matter. As the friction with the matter becomes dominant, a comoving network scale close to the comoving horizon scale at z ≈ 10^4 gets frozen. During the later phases, the walls produce matter wakes of a thickness d ≈ 10 h^-1 Mpc that may become seeds for the formation of the large-scale structure observed in the Universe.
Hub-filament System in IRAS 05480+2545: Young Stellar Cluster and 6.7 GHz Methanol Maser
NASA Astrophysics Data System (ADS)
Dewangan, L. K.; Ojha, D. K.; Baug, T.
2017-07-01
To probe the star formation (SF) process, we present a multi-wavelength study of IRAS 05480+2545 (hereafter I05480+2545). Analysis of Herschel data reveals a massive clump (M_clump ˜ 1875 M_⊙; peak N(H2) ˜ 4.8 × 10^22 cm^-2, A_V ˜ 51 mag) containing the 6.7 GHz methanol maser and I05480+2545, seen in a temperature range of 18-26 K. Several noticeable parsec-scale filaments are detected in the Herschel 250 μm image and appear radially directed toward the massive clump, resembling a "hub-filament" system. Deeply embedded young stellar objects (YSOs) have been identified using the 1-5 μm photometric data, and a significant fraction of YSOs and their clustering are spatially found toward the massive clump, revealing intense SF activity. An infrared counterpart (IRc) of the maser is investigated in the Spitzer 3.6-4.5 μm images. The IRc does not appear as a point-like source and is most likely associated with the molecular outflow. Based on the 1.4 GHz and Hα continuum images, ionized emission is absent toward the IRc, indicating that the massive clump harbors an early phase of a massive protostar before the onset of an ultracompact H II region. Taken together, I05480+2545 is embedded in a "hub-filament" system very similar to those seen in the Rosette Molecular Cloud. The outcome of the present work indicates the role of filaments in the formation of the massive star-forming clump and the cluster of YSOs, helping channel material to the central hub configuration and the clump/core.
NASA Technical Reports Server (NTRS)
Dahlburg, R. B.; Picone, J. M.
1989-01-01
The results of fully compressible, Fourier collocation, numerical simulations of the Orszag-Tang vortex system are presented. The initial conditions for this system consist of a nonrandom, periodic field in which the magnetic and velocity field contain X points but differ in modal structure along one spatial direction. The velocity field is initially solenoidal, with the total initial pressure field consisting of the superposition of the appropriate incompressible pressure distribution upon a flat pressure field corresponding to the initial, average Mach number of the flow. In these numerical simulations, this initial Mach number is varied from 0.2 to 0.6. These values correspond to average plasma beta values ranging from 30.0 to 3.3, respectively. It is found that compressible effects develop within one or two Alfven transit times, as manifested in the spectra of compressible quantities such as the mass density and the nonsolenoidal flow field. These effects include (1) a retardation of growth of correlation between the magnetic field and the velocity field, (2) the emergence of compressible small-scale structure such as massive jets, and (3) bifurcation of eddies in the compressible flow field. Differences between the incompressible and compressible results tend to increase with increasing initial average Mach number.
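The Orszag-Tang vortex has a widely used incompressible form in which the velocity and magnetic fields both contain X points but differ in modal structure along one direction, as described above. A minimal sketch of that set-up follows; the grid size and unit amplitudes are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")   # X varies along axis 0

# solenoidal velocity field containing X points
vx, vy = -np.sin(Y), np.sin(X)
# magnetic field with a different modal structure along one direction
Bx, By = -np.sin(Y), np.sin(2.0 * X)

# both fields are divergence-free by construction; quick numerical check
div_v = np.gradient(vx, x, axis=0) + np.gradient(vy, x, axis=1)
print(f"max |div v| = {np.abs(div_v).max():.2e}")   # ~ 0
```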
Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP
NASA Technical Reports Server (NTRS)
Long, Lyle N.; Brentner, Kenneth S.
2000-01-01
This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
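A minimal sketch of the self-scheduling idea, in the spirit of the first example (many independent linear solves, one per angle of attack), might look as follows; the names, matrix sizes, and right-hand side are my own illustration, not the code from the paper's appendix:

```python
# Self-scheduling (manager/worker) farm of independent serial solves:
# idle workers grab the next job, so fast and slow jobs balance naturally.
import numpy as np
from multiprocessing import Pool

def solve_case(alpha):
    """One 'serial job': solve A x = b for a single angle of attack alpha."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200)) + 200.0 * np.eye(200)  # well-conditioned
    b = np.full(200, np.sin(np.radians(alpha)))                # alpha-dependent RHS
    return alpha, np.linalg.solve(A, b)

if __name__ == "__main__":
    angles = np.linspace(0.0, 20.0, 41)        # sweep of angles of attack
    with Pool() as pool:                       # workers take jobs as they free up
        for alpha, xvec in pool.imap_unordered(solve_case, angles):
            print(f"alpha = {alpha:5.1f} deg, |x| = {np.linalg.norm(xvec):.4e}")
```

The same pattern applies to the aeroacoustics example: replace the linear solve with one noise computation per observer location.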
Kjaergaard, Thomas; Baudin, Pablo; Bykov, Dmytro; ...
2016-11-16
Here, we present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide–Expand–Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide–Expand–Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using resolution-of-the-identity second-order Møller–Plesset perturbation theory (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24,440 basis functions and 91,280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.
Arthroscopic Repair for Chronic Massive Rotator Cuff Tears: A Systematic Review.
Henry, Patrick; Wasserstein, David; Park, Sam; Dwyer, Tim; Chahal, Jaskarndip; Slobogean, Gerard; Schemitsch, Emil
2015-12-01
To systematically review the available evidence for arthroscopic repair of chronic massive rotator cuff tears and identify patient demographics, pre- and post-operative functional limitations, reparability and repair techniques, and retear rates. Medline, Embase, the Cochrane Database of Systematic Reviews, and the Cochrane Central Register of Controlled Trials were searched to identify all clinical papers describing arthroscopic repair of chronic massive rotator cuff tears. Papers were excluded if a definition of "massive" was not provided, if the definition of "massive" was considered inappropriate by agreement between the 2 reviewers, or if patients with smaller tears were also included in the study population. Study quality and clinical outcome data were pooled and summarized. There were 18 papers that met the eligibility criteria; they involved 954 patients with a mean age of 63 years (range, 37 to 87), 48% of whom were female. There were 5 prospective and 13 retrospective study designs. The overall study quality was poor according to the Modified Coleman Methodology Score. Of the 954 repairs, 81% were complete repairs and 19% were partial repairs. The follow-up range was between 33 and 52 months, and the mean duration between symptom onset and surgery was 24 months. Single-row repairs were performed in 56% of patients, and double-row repairs were performed in 44%. A pooled analysis demonstrated an improvement in visual analog scale score from 5.9 to 1.7, active range of motion from 125° to 169°, and Constant-Murley score from 49 to 74. The pooled retear rate was 79%. Arthroscopic repair of chronic massive rotator cuff tears is associated with complete repair in the majority of cases and consistently improves pain, range of motion, and functional outcome scores; however, the retear rate is high. Existing research on massive rotator cuff repair is limited to poor- to fair-quality studies. Level IV, systematic review including Level IV studies. Copyright © 2015 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Probing massive stars around gamma-ray burst progenitors
NASA Astrophysics Data System (ADS)
Lu, Wenbin; Kumar, Pawan; Smoot, George F.
2015-10-01
Long gamma-ray bursts (GRBs) are produced by ultra-relativistic jets launched from core collapse of massive stars. Most massive stars form in binaries and/or in star clusters, which means that there may be a significant external photon field (EPF) around the GRB progenitor. We calculate the inverse-Compton scattering of the EPF by the hot electrons in the GRB jet. Three possible cases of EPF are considered: the progenitor is (I) in a massive binary system, (II) surrounded by a Wolf-Rayet-star wind and (III) in a dense star cluster. Typical luminosities of 10^46-10^50 erg s^-1 in the 1-100 GeV band are expected, depending on the stellar luminosity, binary separation (I), wind mass-loss rate (II), stellar number density (III), etc. We calculate the light curve and spectrum in each case, taking fully into account the equal-arrival-time surfaces and possible pair-production absorption with the prompt γ-rays. Observations can put constraints on the existence of such EPFs (and hence on the nature of GRB progenitors) and on the radius where the jet internal dissipation process accelerates electrons.
Do Vehicle Recalls Reduce the Number of Accidents? The Case of the U.S. Car Market
ERIC Educational Resources Information Center
Bae, Yong-Kyun; Benitez-Silva, Hugo
2011-01-01
The number of automobile recalls in the U.S. has increased sharply in the last two decades, and the numbers of units involved are often counted in the millions. In 2010 alone, over 20 million vehicles were recalled in the United States, and the massive recalls of full model lines by Toyota have brought this issue to the front pages around the…
NASA Astrophysics Data System (ADS)
Heinzeller, Dominikus; Duda, Michael G.; Kunstmann, Harald
2017-04-01
With strong financial and political support from national and international initiatives, exascale computing is projected for the end of this decade. Energy requirements and physical limitations imply the use of accelerators and scaling out to numbers of cores orders of magnitude larger than today to achieve this milestone. In order to fully exploit the capabilities of these exascale computing systems, existing applications need to undergo significant development. The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric core, an ocean core, a land-ice core and a sea-ice core. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings of global models on regular grids and of limited-area models nested in a forcing data set, with respect to parallel scalability, numerical accuracy and physical consistency. Here, we present work towards the application of the atmospheric core (MPAS-A) on current and future high performance computing systems for problems at extreme scale. In particular, we address the issue of massively parallel I/O by extending the model to support the highly scalable SIONlib library. Using global uniform meshes with a convection-permitting resolution of 2-3 km, we demonstrate the ability of MPAS-A to scale out to half a million cores while maintaining a high parallel efficiency. We also demonstrate the potential benefit of a hybrid parallelisation of the code (MPI/OpenMP) on the latest generation of Intel's Many Integrated Core Architecture, the Intel Xeon Phi Knights Landing.
Timescales of Massive Human Entrainment
Fusaroli, Riccardo; Perlman, Marcus; Mislove, Alan; Paxton, Alexandra; Matlock, Teenie; Dale, Rick
2015-01-01
The past two decades have seen an upsurge of interest in the collective behaviors of complex systems composed of many agents entrained to each other and to external events. In this paper, we extend the concept of entrainment to the dynamics of human collective attention. We conducted a detailed investigation of the unfolding of human entrainment—as expressed by the content and patterns of hundreds of thousands of messages on Twitter—during the 2012 US presidential debates. By time-locking these data sources, we quantify the impact of the unfolding debate on human attention at three time scales. We show that collective social behavior covaries second-by-second with the interactional dynamics of the debates: a candidate speaking induces rapid increases in mentions of his name on social media and decreases in mentions of the other candidate. Moreover, interruptions by an interlocutor increase the attention received. We also highlight a distinct time scale for the impact of salient content during the debates: across well-known remarks in each debate, mentions in social media start within 5-10 seconds after the remark occurs, peak at approximately one minute, and slowly decay in a consistent fashion across well-known events during the debates. Finally, we show that public attention after an initial burst slowly decays through the course of the debates. Thus we demonstrate that large-scale human entrainment may hold across a number of distinct scales, in an exquisitely time-locked fashion. The methods and results pave the way for careful study of the dynamics and mechanisms of large-scale human entrainment. PMID:25880357
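The time-locking analysis described above can be illustrated with a toy computation (my own construction, not the paper's pipeline): cross-correlating a per-second series of debate events with a per-second count of mentions recovers the few-second response lag.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 600                                              # ten minutes, 1-s bins
events = np.zeros(T); events[[60, 300, 480]] = 1.0   # salient remarks
lag_true, width = 8, 30                              # response starts ~8 s later
kernel = np.exp(-np.arange(0, 120) / width)          # assumed response shape
mentions = np.convolve(events, np.pad(kernel, (lag_true, 0)))[:T]
mentions += 0.05 * rng.standard_normal(T)            # noise floor

# lagged correlation between the event series and the mention counts
lags = np.arange(0, 60)
xcorr = [np.corrcoef(events[:T - l], mentions[l:])[0, 1] for l in lags]
print(f"peak response lag: {lags[int(np.argmax(xcorr))]} s")   # ~ 8 s
```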
ZnGeSb2: a promising thermoelectric material with tunable ultra-high conductivity.
Sreeparvathy, P C; Kanchana, V; Vaitheeswaran, G; Christensen, N E
2016-09-21
First principles calculations predict the promising thermoelectric material ZnGeSb2 with a huge power factor (S²σ/τ) on the order of 3 × 10^17 W m^-1 K^-2 s^-1, due to the ultra-high electrical conductivity, scaled by the relaxation time, of around 8.5 × 10^25 Ω^-1 m^-1 s^-1 observed in its massive Dirac state. The observed electrical conductivity is higher than in well-established Dirac materials, and is almost independent of carrier concentration, with similar behaviour for both n- and p-type carriers, which may certainly attract device applications. The low range of thermal conductivity is also evident from the phonon dispersion. Our present study further reports the gradual phase change of ZnGeSb2 from a normal semiconducting state, through massive Dirac states, to a topological semi-metal. The maximum power factor is observed in the massive Dirac states compared to the other two states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meru, Farzana; Juhász, Attila; Ilee, John D.
The young star Elias 2–27 has recently been observed to possess a massive circumstellar disk with two prominent large-scale spiral arms. In this Letter, we perform three-dimensional Smoothed Particle Hydrodynamics simulations, radiative transfer modeling, synthetic ALMA imaging, and an unsharped masking technique to explore three possibilities for the origin of the observed structures—an undetected companion either internal or external to the spirals, and a self-gravitating disk. We find that a gravitationally unstable disk and a disk with an external companion can produce morphology that is consistent with the observations. In addition, for the latter, we find that the companion could be a relatively massive planetary-mass companion (≲10–13 M_Jup) located at large radial distances (between ≈300 and 700 au). We therefore suggest that Elias 2–27 may be one of the first detections of a disk undergoing gravitational instabilities, or a disk that has recently undergone fragmentation to produce a massive companion.
Efficient discovery of overlapping communities in massive networks
Gopalan, Prem K.; Blei, David M.
2013-01-01
Detecting overlapping communities is essential to analyzing and exploring natural networks such as social networks, biological networks, and citation networks. However, most existing approaches do not scale to the size of networks that we regularly observe in the real world. In this paper, we develop a scalable approach to community detection that discovers overlapping communities in massive real-world networks. Our approach is based on a Bayesian model of networks that allows nodes to participate in multiple communities, and a corresponding algorithm that naturally interleaves subsampling from the network and updating an estimate of its communities. We demonstrate how we can discover the hidden community structure of several real-world networks, including 3.7 million US patents, 575,000 physics articles from the arXiv preprint server, and 875,000 connected Web pages from the Internet. Furthermore, we demonstrate on large simulated networks that our algorithm accurately discovers the true community structure. This paper opens the door to using sophisticated statistical models to analyze massive networks. PMID:23950224
Resolving Supercritical Orion Cores
NASA Astrophysics Data System (ADS)
Li, Di; Chapman, N.; Goldsmith, P.; Velusamy, T.
2009-01-01
The theoretical framework for high mass star formation (HMSF) is unclear. Observations reveal a seeming dichotomy between high- and low-mass star formation, with HMSF occurring only in Giant Molecular Clouds (GMCs), mostly in clusters, and with higher star formation efficiencies than low-mass star formation. One crucial constraint on any theoretical model is the dynamical state of massive cores, in particular, whether a massive core is in supercritical collapse. Based on the mass-size relation of dust emission, we select likely unstable targets from a sample of massive cores (Li et al. 2007, ApJ, 655, 351) in the nearest GMC, Orion. We have obtained N2H+ (1-0) maps using CARMA with resolution (2.5″, 0.006 pc) significantly better than existing observations. We present observational and modeling results for ORI22. By revealing the dynamic structure down to the Jeans scale, the CARMA data confirm the dominance of gravity over turbulence in this core. This work was performed by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
NASA Astrophysics Data System (ADS)
Hachay, O. A.; Khachay, O. Y.; Klimko, V. K.; Shipeev, O. V.
2012-04-01
The geological medium is an open dynamical system, influenced on different scales by natural and man-made impacts that change its state and lead to a complicated, hierarchically ranked evolution; this is the subject of geosynergetics. The paradigm of physical mesomechanics advanced by academician V. E. Panin and his scientific school, which incorporates the synergetic approach, is a constructive method for studying and changing the state of heterogeneous materials [1]; that result was obtained on specimens of different materials. Our research on non-stationary geological media, carried out as natural experiments in real rock massifs under strong man-made influence, showed that the dynamics of the state can be revealed using synergetics in a hierarchic medium. Active and passive geophysical monitoring, using electromagnetic and seismic fields, plays a very important role in studying the state of dynamical geological systems. Our experience shows that changes of the system state are revealed, on various space and time scales, in parameters linked to features of the medium of the second or higher ranks [2-5]. Seismological and electromagnetic observations provide mutually complementary information, at different space-time levels, on the state of rock massifs energetically loaded by the explosions used in mining technology. A change in the degree of nonlinearity of the massif state over time under active influence is revealed; describing the massif movement in the framework of a linear dynamical system does not match the practical situation. The results are significant because, for the first time, agreement was found between the mathematical theory of open systems and experimental results from nature with a very complicated structure. On that basis we developed a new processing method for seismological information that can be used in real time to estimate changes of the disaster potential in a mine massif. The work was supported by grant RFBR 10-05-00013. 1. Panin, V.E. et al. 1995. Physical Mesomechanics and Computer Construction of Materials. Novosibirsk: Nauka, SIFR, Vol. 1, 350 pp. 2. Hachay, O.A. 2006. "The problem of the research of redistribution of stress and phase states of massifs between high man-made influences," Mining Information and Analytical Bulletin, 5:109-115. 3. Hachay, O.A. and Khachay, O.Yu. 2008. "Theoretical approaches for validation of systems of geophysical control of the state of geological media under man-made influence," Mining Information and Analytical Bulletin, 1:161-169. 4. Hachay, O.A. and Khachay, O.Yu. 2009. "Results of electromagnetic and seismic monitoring of the state of rock massifs using the approach of open dynamical systems," presented at EGU General Assembly 2009, session: Thermo-hydro-mechanical coupling in stressed rock, 19-24 April 2009. 5. Hachay, O.A. 2009. "Synergetic events in geological media and nonlinear features of wave propagation," presented at EGU General Assembly 2009, session: Solid Earth geocomplexity: surface processes, morphology and natural resources over wide ranges of scale, 19-24 April 2009.
Hiding in Plain Sight: An Abundance of Compact Massive Spheroids in the Local Universe
NASA Astrophysics Data System (ADS)
Graham, Alister W.; Dullo, Bililign T.; Savorgnan, Giulia A. D.
2015-05-01
It has been widely remarked that compact, massive, elliptical-like galaxies are abundant at high redshifts but exceedingly rare in the universe today, implying significant evolution such that their sizes at z ˜ 2 ± 0.6 have increased by factors of 3 to 6 to become today's massive elliptical galaxies. These claims have been based on studies that measured the half-light radii of galaxies as though they are all single-component systems. Here we identify 21 spheroidal stellar systems within 90 Mpc that have half-light, major-axis radii R_e ≲ 2 kpc, stellar masses 0.7 × 10^11 < M*/M_⊙ < 1.4 × 10^11, and Sérsic indices typically around a value of n = 2-3. This abundance of compact, massive spheroids in our own backyard—with a number density of 6.9 × 10^-6 Mpc^-3 (or 3.5 × 10^-5 Mpc^-3 per dex in stellar mass)—and with the same physical properties as the high-redshift galaxies had been overlooked because they are encased in stellar disks that usually result in galaxy sizes notably larger than 2 kpc. Moreover, this number density is a lower limit because it has not come from a volume-limited sample; the actual density may be closer to 10^-4, although further work is required to confirm this. We therefore conclude that not all massive "spheroids" have undergone dramatic structural and size evolution since z ˜ 2 ± 0.6. Given that the bulges of local early-type disk galaxies are known to consist of predominantly old stars that existed at z ˜ 2, it seems likely that some of the observed high-redshift spheroids did not increase in size by building (three-dimensional) triaxial envelopes as commonly advocated, and that the growth of (two-dimensional) disks has also been important over the past 9-11 billion years.
NASA Astrophysics Data System (ADS)
Spurzem, R.; Berczik, P.; Zhong, S.; Nitadori, K.; Hamada, T.; Berentzen, I.; Veles, A.
2012-07-01
Astrophysical computer simulations of dense star clusters in galactic nuclei with supermassive black holes are presented, using new cost-efficient supercomputers in China accelerated by graphics processing units (GPUs). We use large high-accuracy direct N-body simulations with a Hermite scheme and block time steps, parallelised across a large number of nodes on the large scale and across many GPU thread processors on each node on the small scale. A sustained performance of more than 350 Tflop/s is reached for a science run using 1600 Fermi C2050 GPUs simultaneously; a detailed performance model is presented, with studies for the largest GPU clusters in China with up to Petaflop/s performance and 7000 Fermi GPU cards. In our case study we look at two supermassive black holes with equal and unequal masses embedded in a dense stellar cluster in a galactic nucleus. The hardening processes due to interactions between black holes and stars, effects of rotation in the stellar system, and relativistic forces between the black holes are simultaneously taken into account. The simulation stops at the complete relativistic merger of the black holes.
Splatterplots: overcoming overdraw in scatter plots.
Mayorga, Adrian; Gleicher, Michael
2013-09-01
We introduce Splatterplots, a novel presentation of scattered data that enables visualizations that scale beyond standard scatter plots. Traditional scatter plots suffer from overdraw (overlapping glyphs) as the number of points per unit area increases. Overdraw obscures outliers, hides data distributions, and makes the relationship among subgroups of the data difficult to discern. To address these issues, Splatterplots abstract away information such that the density of data shown in any unit of screen space is bounded, while allowing continuous zoom to reveal abstracted details. Abstraction automatically groups dense data points into contours and samples remaining points. We combine techniques for abstraction with perceptually based color blending to reveal the relationship between data subgroups. The resulting visualizations represent the dense regions of each subgroup of the data set as smooth closed shapes and show representative outliers explicitly. We present techniques that leverage the GPU for Splatterplot computation and rendering, enabling interaction with massive data sets. We show how Splatterplots can be an effective alternative to traditional methods of displaying scatter data communicating data trends, outliers, and data set relationships much like traditional scatter plots, but scaling to data sets of higher density and up to millions of points on the screen.
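A much-simplified, CPU-only sketch of the core Splatterplot recipe—bound the drawn density with a thresholded contour, then show only a subsample of the remaining points as explicit outliers—could look like this; the KDE, threshold quantile, and sample size are illustrative choices of mine, not the paper's GPU implementation:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
pts = np.concatenate([rng.normal(0.0, 0.5, (10_000, 2)),   # dense cluster
                      rng.normal(3.0, 1.0, (1_000, 2))])   # sparser cluster

kde = gaussian_kde(pts.T)
dens = kde(pts.T)
thresh = np.quantile(dens, 0.6)        # bound what is drawn as "dense"

# dense regions become a smooth filled contour instead of overplotted glyphs
grid = np.mgrid[-3:7:150j, -3:7:150j]
D = kde(grid.reshape(2, -1)).reshape(150, 150)
plt.contourf(grid[0], grid[1], D, levels=[thresh, D.max()], alpha=0.4)

# outside the contour, draw only a representative subsample of points
sparse = pts[dens < thresh]
keep = rng.choice(len(sparse), size=min(500, len(sparse)), replace=False)
plt.scatter(*sparse[keep].T, s=4, c="k")
plt.show()
```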
Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.
Slażyński, Leszek; Bohte, Sander
2012-01-01
The arrival of graphics processing (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms to fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state for each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures to the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate in better-than-realtime plausible spiking neural networks of up to 50 000 neurons, processing over 35 million spiking events per second.
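The additive membrane dynamics the authors exploit can be sketched in a few vectorized lines: each neuron's potential is a sum of exponentially decaying kernel contributions, so all neurons update independently per time step. The constants and kernel shape below are illustrative assumptions of mine, not the authors' GPU code:

```python
import numpy as np

N, T_steps, dt = 1000, 500, 1.0      # neurons, 1-ms time steps
tau_m, w, v_thresh = 10.0, 0.5, 1.0  # membrane time constant, weight, threshold
rng = np.random.default_rng(7)

v = np.zeros(N)                      # summed kernel state per neuron
spikes = 0
for _ in range(T_steps):
    inputs = (rng.random(N) < 0.2).astype(float)   # Poisson-like drive
    # exponential PSP kernel: decay old contributions, add new ones --
    # a purely additive update that is independent across neurons
    v = v * np.exp(-dt / tau_m) + w * inputs
    fired = v >= v_thresh
    v[fired] -= v_thresh             # subtract the reset kernel after a spike
    spikes += int(fired.sum())

print(f"mean firing rate: {spikes / (N * T_steps * dt * 1e-3):.1f} Hz")
```

On a GPU, the same per-neuron independence maps each update onto one thread, which is the parallelism the paper's optimizations build on.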
Dynamic Load Balancing for Grid Partitioning on a SP-2 Multiprocessor: A Framework
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance across processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as a decision maker, Jove, while others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove in a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while other processors continue working with the current data and load distribution. Jove goes through several steps to decide if the new data should be taken, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove running on a single IBM SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine IBM SP2.
The massive scale of the 1997–1998 El Nino–associated coral bleaching event underscores the need for strategies to mitigate biodiversity losses resulting from temperature-induced coral mortality. As baseline sea surface temperatures continue to rise, climate change may represent ...
The three-zone composite productivity model for a multi-fractured horizontal shale gas well
NASA Astrophysics Data System (ADS)
Qi, Qian; Zhu, Weiyao
2018-02-01
Due to the nano-micro pore structures and the massive multi-stage, multi-cluster hydraulic fracturing in shale gas reservoirs, the multi-scale seepage flows are much more complicated than in most conventional reservoirs, and are crucial for the economic development of shale gas. In this study, a new multi-scale non-linear flow model was established and simplified, based on different diffusion and slip correction coefficients. Because different flow laws hold in the fracture network and the matrix zone, a three-zone composite model was proposed. Then, according to conformal transformation combined with the law of equivalent percolation resistance, the productivity equation of a horizontal fractured well, with consideration given to diffusion, slip, desorption, and absorption, was built. An analytic solution was derived, and the interference of the multi-cluster fractures was analyzed. The results indicated that the diffusion of the shale gas was mainly in the transition and Fick diffusion regions. The matrix permeability was found to be influenced by slippage and diffusion, which is determined by the pore pressure and diameter according to the Knudsen number. With increased half-lengths of the fracture clusters, flow conductivity of the fractures, and permeability of the fracture network, the productivity of the fractured well also increased. Meanwhile, as the number of fractures increased, the distance between the fractures decreased and the productivity increased only slowly, due to the mutual interference of the fractures.
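Since the abstract notes that slippage and diffusion are controlled by pore pressure and diameter through the Knudsen number, a short sketch (my own, using the standard kinetic-theory mean free path and commonly quoted regime boundaries, not the paper's model) makes the regime classification concrete:

```python
import math

def mean_free_path(T, p, d_mol=0.38e-9):
    """Mean free path [m] of a methane-like gas; d_mol is the kinetic diameter."""
    kB = 1.380649e-23
    return kB * T / (math.sqrt(2.0) * math.pi * d_mol**2 * p)

def flow_regime(T, p, d_pore):
    """Classify the flow regime in a pore of diameter d_pore [m]."""
    Kn = mean_free_path(T, p) / d_pore
    if Kn < 1e-3:   regime = "continuum (Darcy)"
    elif Kn < 0.1:  regime = "slip flow"
    elif Kn < 10:   regime = "transition (Knudsen diffusion matters)"
    else:           regime = "free molecular"
    return Kn, regime

# example: a 10 nm pore at assumed reservoir conditions (350 K, 20 MPa)
print(flow_regime(350.0, 20e6, 10e-9))   # Kn ~ 0.04 -> slip flow
```

Lowering the pressure or shrinking the pore raises Kn and pushes the gas from slip flow into the transition regime, which is why matrix permeability in shale is pressure- and pore-size-dependent.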
The Spatial-Kinematic Structure of the Region of Massive Star Formation S255N on Various Scales
NASA Astrophysics Data System (ADS)
Zemlyanukha, P. M.; Zinchenko, I. I.; Salii, S. V.; Ryabukhina, O. L.; Liu, S.-Y.
2018-05-01
The results of a detailed analysis of SMA, VLA, and IRAM observations of the region of massive star formation S255N in the CO(2-1), N2H+(3-2), NH3(1,1), C18O(2-1) and some other lines are presented. Combining interferometer and single-dish data has enabled a more detailed investigation of the gas kinematics in the molecular core on various spatial scales. There are no signs of rotation or isotropic compression on the scale of the region as a whole. The largest fragments of gas (≈0.3 pc) are located near the boundary of the regions of ionized hydrogen S255 and S257. Some smaller-scale fragments are associated with protostellar clumps. The kinetic temperatures of these fragments lie in the range 10-80 K. A circumstellar torus with inner radius R_in ≈ 8000 AU and outer radius R_out ≈ 12,000 AU has been detected around the clump SMA1. The rotation profile indicates the existence of a central object with mass ≈8.5/sin²(i) M_⊙. SMA1 is resolved into two clumps, SMA1-NE and SMA1-SE, whose temperatures are ≈150 K and ≈25 K, respectively. To all appearances, the torus is involved in the accretion of surrounding gas onto the two protostellar clumps.
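The 1/sin²(i) dependence of the quoted central mass follows directly from Keplerian rotation, since only the line-of-sight velocity v sin(i) is observed. A rough sanity check (my own, with an assumed rotation speed, not the authors' fit):

```python
import math

G    = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
AU   = 1.495978707e11   # astronomical unit [m]
Msun = 1.989e30         # solar mass [kg]

def central_mass(v_obs_kms, R_au, inclination_deg=90.0):
    """Keplerian mass [M_sun] implied by line-of-sight rotation at radius R."""
    v = v_obs_kms * 1e3 / math.sin(math.radians(inclination_deg))
    return v**2 * (R_au * AU) / G / Msun

# e.g. ~1 km/s of line-of-sight rotation at the 8000 AU inner edge, edge-on:
print(f"{central_mass(1.0, 8000.0):.1f} M_sun")   # ~ 9 M_sun
```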
Relativistic space-charge-limited current for massive Dirac fermions
NASA Astrophysics Data System (ADS)
Ang, Y. S.; Zubair, M.; Ang, L. K.
2017-04-01
A theory of relativistic space-charge-limited current (SCLC) is formulated to determine the SCLC scaling, J ∝ V^α/L^β, for a finite band-gap Dirac material of length L biased under a voltage V. In one-dimensional (1D) bulk geometry, our model allows (α, β) to vary from (2, 3) for the nonrelativistic model in traditional solids to (3/2, 2) for the ultrarelativistic model of massless Dirac fermions. For 2D thin-film geometry we obtain α = β, which varies between 2 and 3/2, respectively, at the nonrelativistic and ultrarelativistic limits. We further provide rigorous proof, based on a Green's-function approach, that for a uniform SCLC model described by carrier-density-dependent mobility, the scaling relations of the 1D bulk model can be directly mapped into the case of 2D thin film for any contact geometries. Our simplified approach provides a convenient tool to obtain the 2D thin-film SCLC scaling relations without the need of explicitly solving the complicated 2D problems. Finally, this work clarifies the inconsistency in using the traditional SCLC models to explain the experimental measurement of a 2D Dirac semiconductor. We conclude that the voltage scaling 3/2 < α < 2 is a distinct signature of massive Dirac fermions in a Dirac semiconductor and is in agreement with experimental SCLC measurements in MoS2.
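A small numeric illustration (mine, not from the paper) of how the quoted exponents change device behaviour: scaling a reference current density under the two 1D bulk limits, (α, β) = (2, 3) nonrelativistic and (3/2, 2) ultrarelativistic:

```python
def sclc_scaling(J0, V, L, alpha, beta, V0=1.0, L0=1.0):
    """Scale a reference current density J0 at (V0, L0) to (V, L): J ~ V^a / L^b."""
    return J0 * (V / V0)**alpha / (L / L0)**beta

limits = {"nonrelativistic (2, 3)":   (2.0, 3.0),
          "ultrarelativistic (3/2, 2)": (1.5, 2.0)}
for label, (alpha, beta) in limits.items():
    # doubling the voltage and halving the length, relative to the reference
    factor = sclc_scaling(1.0, 2.0, 0.5, alpha, beta)
    print(f"{label:28s}: J grows by a factor {factor:.1f}")
# nonrelativistic: 2^2 / 0.5^3 = 32; ultrarelativistic: 2^1.5 / 0.5^2 ~ 11.3
```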
The build up of the correlation between halo spin and the large-scale structure
NASA Astrophysics Data System (ADS)
Wang, Peng; Kang, Xi
2018-01-01
Both simulations and observations have confirmed that the spin of haloes/galaxies is correlated with the large-scale structure (LSS), with a mass dependence such that the spin of low-mass haloes/galaxies tends to be parallel with the LSS, while that of massive haloes/galaxies tends to be perpendicular to it. It is still unclear how this mass dependence is built up over time. We use N-body simulations to trace the evolution of the halo spin-LSS correlation and find that at early times the spin of all halo progenitors is parallel with the LSS. As time goes on, mass collapse around massive haloes becomes more isotropic; in particular, recent mass accretion along the slowest collapsing direction is significant and brings the halo spin to be perpendicular to the LSS. Adopting the fractional anisotropy (FA) parameter to describe the degree of anisotropy of the large-scale environment, we find that the spin-LSS correlation is a strong function of the environment, such that a higher FA (more anisotropic environment) leads to an aligned signal, and a lower anisotropy leads to a misaligned signal. In general, our results show that the spin-LSS correlation is a combined consequence of mass flow and halo growth within the cosmic web. Our predicted environmental dependence between spin and large-scale structure can be further tested using galaxy surveys.
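For reference, fractional anisotropy is commonly computed from the eigenvalues of a local tensor field (e.g. the tidal or velocity-shear tensor); whether the paper uses exactly this normalisation is an assumption on my part:

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA = 0 for an isotropic environment, approaching 1 for a highly
    anisotropic (e.g. filamentary) one, from tensor eigenvalues l1, l2, l3."""
    num = (l1 - l2)**2 + (l2 - l3)**2 + (l3 - l1)**2
    den = 2.0 * (l1**2 + l2**2 + l3**2)
    return np.sqrt(num / den)

print(fractional_anisotropy(1.0, 1.0, 1.0))   # 0.0, isotropic
print(fractional_anisotropy(1.0, 0.1, 0.0))   # ~0.95, filament-like
```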
Sulphide mineralization and wall-rock alteration in ophiolites and modern oceanic spreading centres
Koski, R.A.
1983-01-01
Massive and stockwork Fe-Cu-Zn (Cyprus type) sulphide deposits in the upper parts of ophiolite complexes represent hydrothermal mineralization at ancient accretionary plate boundaries. These deposits are probable metallogenic analogues of the polymetallic sulphide deposits recently discovered along modern oceanic spreading centres. Genetic models for these deposits suggest that mineralization results from large-scale circulation of sea-water through basaltic basement along the tectonically active axis of spreading, a zone of high heat flow. The high geothermal gradient above 1 to 2 km deep magma chambers emplaced below the ridge axis drives the convective circulation cell. Cold oxidizing sea-water penetrating the crust on the ridge flanks becomes heated and evolves into a highly reduced, somewhat acidic hydrothermal solvent during interaction with basaltic wall-rock. Depending on the temperature and water/rock ratio, this fluid is capable of leaching and transporting iron, manganese, and base metals; dissolved sea-water sulphate is reduced to sulphide. At the ridge axis, the buoyant hydrothermal fluid rises through permeable wall-rocks, and fluid flow may be focussed along deep-seated fractures related to extensional tectonic processes. Metal sulphides are precipitated along channelways as the ascending fluid undergoes adiabatic expansion and then further cooling during mixing with ambient sub-sea-floor water. Vigorous fluid flow results in venting of reduced fluid at the sea-floor/sea-water interface and deposition of massive sulphide. A comparison of sulphide mineralization and wall-rock alteration in ancient and modern spreading-centre environments supports this genetic concept. Massive sulphide deposits in ophiolites generally occur in clusters of closely spaced (<1-5 km) deposits. Individual deposits are a composite of syngenetic massive sulphide and underlying epigenetic stockwork-vein mineralization. The massive sulphide occurs as concordant tabular, lenticular, or saucer-shaped bodies in pillow lavas and pillow-lava breccia; massive lava flows, hyaloclastite, tuff, and bedded radiolarian chert are less commonly associated rock types. These massive sulphide zones are as much as 700 m long, 200 m wide, and 50 m thick. The pipe-, funnel-, or keel-shaped stockwork zone may extend to a depth of 1 km in the sheeted-dike complex. Several deposits in Cyprus are confined to grabens or the hanging wall of premineralization normal faults. Polymetallic massive sulphide deposits and active hydrothermal vents at medium- to fast-rate spreading centres (the East Pacific Rise at lat. 21°N, the Galapagos Spreading Centre at long. 86°W, the Juan de Fuca Ridge at lat. 45°N, and the Southern Trough of Guaymas Basin, Gulf of California) have interdeposit spacings on a scale of tens or hundreds of metres, and are spatially associated with structural ridges or grabens within the narrow (<5 km) axial valleys of the rift zones. Although the most common substrate for massive sulphide accumulations is stacked sequences of pillow basalt and sheet flows, the sea-floor underlying numerous deposits in Guaymas Basin consists of diatomaceous ooze and terrigenous clastic sediment that is intruded by diabase sills. Mound-like massive sulphide deposits, as much as 30 m wide and 5 m high, occur over actively discharging vents on the East Pacific Rise, and many of these deposits serve as the base for narrow chimneys and spires of equal or greater height. Sulphides on the Juan de Fuca Ridge appear to form more widespread blanket deposits in the shallow axial-valley depression. The largest deposit found to date, along the axial ridge of the Galapagos Spreading Centre, has a tabular form and a length of 1000 m, a width of 200 m, and a height of 30 m. The sulphide assemblage in both massive and vein mineralization in Cyprus type deposits is characteristically simple: abundant pyrite or, less commonly, pyrrhotite accompanied by minor marcasite, chalcopyrite
NASA Astrophysics Data System (ADS)
Jensen, Kristan
2018-01-01
We conjecture a new sequence of dualities between Chern-Simons gauge theories simultaneously coupled to fundamental bosons and fermions. These dualities reduce to those proposed by Aharony when the number of bosons or fermions is zero. Our conjecture passes a number of consistency checks. These include the matching of global symmetries and consistency with level/rank duality in massive phases.
How Very Massive Metal-Free Stars Start Cosmological Reionization
NASA Technical Reports Server (NTRS)
Wise, John H.; Abel, Tom
2008-01-01
The initial conditions and relevant physics for the formation of the earliest galaxies are well specified in the concordance cosmology. Using ab initio cosmological Eulerian adaptive mesh refinement radiation hydrodynamical calculations, we discuss how very massive stars start the process of cosmological reionization. The models include nonequilibrium primordial gas chemistry and cooling processes and accurate radiation transport in the case B approximation using adaptively ray-traced photon packages, retaining the time derivative in the transport equation. Supernova feedback is modeled by thermal explosions triggered at parsec scales. All calculations resolve the local Jeans length by at least 16 grid cells at all times and as such cover a spatial dynamic range of ≈10^6. These first sources of reionization are highly intermittent and anisotropic and first photoionize the small-scale voids surrounding the halos they form in, rather than the dense filaments they are embedded in. As the merging objects form larger, dwarf-sized galaxies, the escape fraction of UV radiation decreases and the H II regions only break out on some sides of the galaxies, making them even more anisotropic. In three cases, SN blast waves induce star formation in overdense regions that were formed earlier from ionization front instabilities. These stars form tens of parsecs away from the center of their parent DM halo. Approximately five ionizing photons are needed per sustained ionization when star formation in 10^6 M⊙ halos is dominant in the calculation. As the halos become larger than ≈10^7 M⊙, the ionizing photon escape fraction decreases, which in turn increases the number of photons per ionization to 15-50, in calculations with stellar feedback only. Radiative feedback decreases clumping factors by 25% when compared to simulations without star formation and increases the average temperature of ionized gas to values between 3000 and 10,000 K.
Yan, Jia; Haaijer, Suzanne C M; Op den Camp, Huub J M; Niftrik, Laura; Stahl, David A; Könneke, Martin; Rush, Darci; Sinninghe Damsté, Jaap S; Hu, Yong Y; Jetten, Mike S M
2012-01-01
In marine oxygen minimum zones (OMZs), ammonia-oxidizing archaea (AOA) rather than marine ammonia-oxidizing bacteria (AOB) may provide nitrite to anaerobic ammonium-oxidizing (anammox) bacteria. Here we demonstrate the cooperation between marine anammox bacteria and nitrifiers in a laboratory-scale model system under oxygen limitation. A bioreactor containing ‘Candidatus Scalindua profunda’ marine anammox bacteria was supplemented with AOA (Nitrosopumilus maritimus strain SCM1) cells and limited amounts of oxygen. In this way a stable mixed culture of AOA and anammox bacteria was established within 200 days, while a substantial amount of endogenous AOB was also enriched. ‘Ca. Scalindua profunda’ and putative AOB and AOA morphologies were visualized by transmission electron microscopy, and a C18 anammox [3]-ladderane fatty acid was highly abundant in the oxygen-limited culture. The rapid oxygen consumption by AOA and AOB ensured that anammox activity was not affected. High expression of AOA, AOB and anammox genes encoding ammonium transport proteins was observed, likely caused by the increased competition for ammonium. The competition between AOA and AOB was found to be strongly related to the residual ammonium concentration based on amoA gene copy numbers. The abundance of archaeal amoA copy numbers increased markedly when the ammonium concentration was below 30 μM, finally resulting in an almost equal abundance of AOA and AOB amoA copy numbers. Massively parallel sequencing of mRNA and activity analyses further corroborated the equal abundance of AOA and AOB. PTIO addition, inhibiting AOA activity, was employed to determine the relative contribution of AOB versus AOA to ammonium oxidation. The present study provides the first direct evidence for cooperation of archaeal ammonia oxidation with anammox bacteria by provision of nitrite and consumption of oxygen. PMID:23057688
Research on Peer Grading in an Astronomy Massive Open Online Course
NASA Astrophysics Data System (ADS)
Formanek, Martin; Impey, Chris David; Wenger, Matthew; Sonam, Tenzin; Buxner, Sanlyn
2017-01-01
Massive Open Online Courses (MOOCs) are opportunities for thousands of students to take university-level courses at little to no cost. The aim of this talk is to present and analyze an often-used assessment tool in MOOCs: peer grading. We collected a wealth of data on the peer grading process during our session-based MOOC “Astronomy: Exploring Time and Space”, offered through Coursera in Spring 2015. We found that peer-grading participants differ from the general course population. Additionally, we found that peer grading participation is the single best predictor of course completion. We compared three different essay-based peer-graded assignments throughout the course according to the lengths of submitted essays, time spent grading, the number of essays graded by individual users, and the percentage of relevant videos watched. In all of these criteria, participation in the first assignment turned out to be statistically significantly different from the other two. Finally, we investigated the validity and reliability of peer graders by comparing their grades with those of trained undergraduate graders and instructors on a subsample of 300 essays. Although we found that the validity and reliability of peer grading are limited, we were still able to show that peer grading results correlate strongly with final course grades and with overall invested effort. Therefore, despite its shortcomings, peer grading still manages to identify good students and is a viable tool for MOOC-scale formative assessment.
Large Scale Document Inversion using a Multi-threaded Computing System
Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won
2018-01-01
Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, vast amounts of information are flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full-text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-threaded or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts •Information systems➝Information retrieval •Computing methodologies➝Massively parallel and high-performance simulations. PMID:29861701
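As an illustration of the underlying data structure, here is a minimal sequential sketch of hash-based document inversion (names and structure are ours; the paper's GPU/CUDA SPMD kernel is not reproduced):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Hash-based document inversion: map each term to a postings
    list of (doc_id, positions). A sequential analogue of the SPMD
    algorithm described in the abstract; all names are ours.
    """
    index = defaultdict(list)          # term -> postings list
    for doc_id, text in enumerate(docs):
        positions = defaultdict(list)  # term -> positions within this doc
        for pos, term in enumerate(text.lower().split()):
            positions[term].append(pos)
        for term, pos_list in positions.items():
            index[term].append((doc_id, pos_list))
    return index

docs = ["GPU computing is massively parallel",
        "inverted index enables full text search",
        "parallel document inversion on the GPU"]
index = build_inverted_index(docs)
print(index["gpu"])       # [(0, [0]), (2, [5])]
print(index["parallel"])  # [(0, [4]), (2, [0])]
```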
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrascosa, M.; García-Cabañes, A.; Jubera, M.
The application of evanescent photovoltaic (PV) fields, generated by visible illumination of Fe:LiNbO3 substrates, for parallel massive trapping and manipulation of micro- and nano-objects is critically reviewed. The technique has often been referred to as photovoltaic or photorefractive tweezers. The main advantage of the new method is that the involved electrophoretic and/or dielectrophoretic forces do not require any electrodes, and large-scale manipulation of nano-objects can be easily achieved using the patterning capabilities of light. The paper describes the experimental techniques for particle trapping and the main reported experimental results obtained with a variety of micro- and nano-particles (dielectric and conductive) and different illumination configurations (single beam, holographic geometry, and spatial light modulator projection). The report also pays attention to the physical basis of the method, namely, the coupling of the evanescent photorefractive fields to the dielectric response of the nano-particles. The role of a number of physical parameters, such as the contrast and spatial periodicities of the illumination pattern or the particle deposition method, is discussed. Moreover, the main properties of the obtained particle patterns in relation to potential applications are summarized, and first demonstrations reviewed. Finally, the PV method is discussed in comparison to other patterning strategies, such as those based on the pyroelectric response and the electric fields associated with domain poling of ferroelectric materials.
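For background, the dielectrophoretic force invoked above has a standard closed form for a small sphere (textbook dielectrophoresis, supplied for orientation, not a result of the review):

```latex
% Time-averaged DEP force on a small sphere of radius R in a medium
% of permittivity eps_m, governed by the Clausius--Mossotti factor K:
\langle \mathbf{F}_{\mathrm{DEP}} \rangle
  = 2\pi \varepsilon_m R^{3}\,
    \mathrm{Re}\!\left[ K(\omega) \right]
    \nabla \lvert \mathbf{E}_{\mathrm{rms}} \rvert^{2},
\qquad
K(\omega) = \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}
                 {\varepsilon_p^{*} + 2\varepsilon_m^{*}},
% so particles are pulled toward (Re[K] > 0) or pushed away from
% (Re[K] < 0) the high-field regions patterned by the illumination.
```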
Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.
Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias
2011-01-01
The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm, but in half the time and with a quarter of the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
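A minimal sketch of the sorted k-mer lists mentioned above, in sequential Python (our illustration; progressiveMauve's actual seeding machinery and the BG/P data layout are more involved):

```python
def sorted_kmer_list(seq, k):
    """Return the sorted list of (k-mer, offset) pairs for one sequence.
    Sorting lets two genomes be compared for shared seeds with a
    linear merge instead of a quadratic all-pairs scan.
    """
    kmers = [(seq[i:i + k], i) for i in range(len(seq) - k + 1)]
    kmers.sort()
    return kmers

def shared_seeds(list_a, list_b):
    """Linear merge of two sorted k-mer lists, yielding matching seeds.
    Duplicate k-mers are handled approximately; full all-pairs
    enumeration is omitted for brevity in this sketch.
    """
    i = j = 0
    while i < len(list_a) and j < len(list_b):
        ka, kb = list_a[i][0], list_b[j][0]
        if ka == kb:
            yield ka, list_a[i][1], list_b[j][1]
            i += 1
        elif ka < kb:
            i += 1
        else:
            j += 1

a = sorted_kmer_list("ACGTACGGA", 3)
b = sorted_kmer_list("TTACGTACG", 3)
print(list(shared_seeds(a, b)))  # shared 3-mer seeds with their offsets
```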
Tidal stresses and energy gaps in microstate geometries
NASA Astrophysics Data System (ADS)
Tyukov, Alexander; Walker, Robert; Warner, Nicholas P.
2018-02-01
We compute energy gaps and study infalling massive geodesic probes in the new families of scaling microstate geometries that have been constructed recently and for which the holographic duals are known. We find that in the deepest geometries, which have the lowest energy gaps, the geodesic deviation shows that the stress reaches the Planck scale long before the probe reaches the cap of the geometry. Such probes must therefore undergo a stringy transition as they fall into the microstate geometry. We discuss the scales associated with this transition and comment on the implications for scrambling in microstate geometries.
A Distributed Platform for Global-Scale Agent-Based Models of Disease Transmission
Parker, Jon; Epstein, Joshua M.
2013-01-01
The Global-Scale Agent Model (GSAM) is presented. The GSAM is a high-performance distributed platform for agent-based epidemic modeling capable of simulating a disease outbreak in a population of several billion agents. It is unprecedented in its scale, its speed, and its use of Java. Solutions to multiple challenges inherent in distributing massive agent-based models are presented. Communication, synchronization, and memory usage are among the topics covered in detail. The memory usage discussion is Java-specific; however, the communication and synchronization discussions apply broadly. We provide benchmarks illustrating the GSAM’s speed and scalability. PMID:24465120
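For orientation, a minimal single-process sketch of one synchronous tick of an agent-based SIR-style epidemic model (all names and parameters are ours; the GSAM itself is a distributed Java platform and far richer):

```python
import random

def step(statuses, contacts_per_agent, p_transmit, p_recover):
    """One synchronous tick of a minimal agent-based SIR model.
    Illustrative single-process analogue of what a distributed
    platform parallelises across machines; names are ours.
    statuses: list of 'S', 'I' or 'R', one entry per agent.
    """
    n = len(statuses)
    nxt = statuses[:]                    # double-buffer for a synchronous update
    for i, s in enumerate(statuses):
        if s == 'I':
            for _ in range(contacts_per_agent):
                j = random.randrange(n)  # random-mixing contact
                if statuses[j] == 'S' and random.random() < p_transmit:
                    nxt[j] = 'I'
            if random.random() < p_recover:
                nxt[i] = 'R'
    return nxt

agents = ['I'] * 10 + ['S'] * 9990
for day in range(60):
    agents = step(agents, contacts_per_agent=4, p_transmit=0.05, p_recover=0.2)
print(agents.count('S'), agents.count('I'), agents.count('R'))
```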
Shifting from Stewardship to Analytics of Massive Science Data
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Doyle, R.; Law, E.; Hughes, S.; Huang, T.; Mahabal, A.
2015-12-01
Currently, the analysis of large data collections is executed through traditional computational and data analysis approaches, which require users to bring data to their desktops and perform local data analysis. Data collection, archiving and analysis from future remote sensing missions, be it from Earth science satellites, planetary robotic missions, or massive radio observatories, may not scale as more capable instruments stress existing architectural approaches and systems with more continuous data streams, data from multiple observational platforms, and measurements and models from different agencies. A new paradigm is needed in order to increase the productivity and effectiveness of scientific data analysis. This paradigm must recognize that architectural choices, data processing, management, analysis, etc. are interrelated, and must be carefully coordinated in any system that aims to allow efficient, interactive scientific exploration and discovery to exploit massive data collections. Future observational systems, including satellite and airborne experiments, and research in climate modeling will significantly increase the size of the data, requiring new methodological approaches towards data analytics in which users can more effectively interact with the data and apply automated mechanisms for data reduction and fusion across these massive data repositories. This presentation will discuss architecture, use cases, and approaches for developing a big data analytics strategy across multiple science disciplines.
NASA Astrophysics Data System (ADS)
Xiong, Yao; Suen, Hoi K.
2018-03-01
The development of massive open online courses (MOOCs) has launched an era of large-scale interactive participation in education. While massive open enrolment and the advances of learning technology are creating exciting potentials for lifelong learning in formal and informal ways, the implementation of efficient and effective assessment is still problematic. To ensure that genuine learning occurs, both assessments for learning (formative assessments), which evaluate students' current progress, and assessments of learning (summative assessments), which record students' cumulative progress, are needed. Providers' more recent shift towards the granting of certificates and digital badges for course accomplishments also indicates the need for proper, secure and accurate assessment results to ensure accountability. This article examines possible assessment approaches that fit open online education from formative and summative assessment perspectives. The authors discuss the importance of, and challenges to, implementing assessments of MOOC learners' progress for both purposes. Various formative and summative assessment approaches are then identified. The authors examine and analyse their respective advantages and disadvantages. They conclude that peer assessment is quite possibly the only universally applicable approach in massive open online education. They discuss the promises, practical and technical challenges, current developments in and recommendations for implementing peer assessment. They also suggest some possible future research directions.
Macconi, Daniela; Bonomelli, Maria; Benigni, Ariela; Plati, Tiziana; Sangalli, Fabio; Longaretti, Lorena; Conti, Sara; Kawachi, Hiroshi; Hill, Prue; Remuzzi, Giuseppe; Remuzzi, Andrea
2006-01-01
Changes in podocyte number or density have been suggested to play an important role in renal disease progression. Here, we investigated the temporal relationship between glomerular podocyte number and the development of proteinuria and glomerulosclerosis in the male Munich Wistar Fromter (MWF) rat. We also assessed whether changes in podocyte number affect podocyte function, focusing specifically on the slit diaphragm-associated protein nephrin. Age-matched Wistar rats were used as controls. Podocyte number per glomerulus was estimated by digital morphometry of WT1-positive cells. MWF rats developed moderate hypertension, massive proteinuria, and glomerulosclerosis with age. Glomerular hypertrophy was already observed at 10 weeks of age and progressively increased thereafter. By contrast, mean podocyte number per glomerulus was lower than normal in young animals and further decreased with time. As a consequence, the capillary tuft volume per podocyte was increased more than threefold in older rats. Electron microscopy showed important changes in the podocyte structure of MWF rats, with expansion of the podocyte bodies surrounding the glomerular filtration membrane. Glomerular nephrin expression was markedly altered in MWF rats and inversely correlated with both podocyte loss and proteinuria. Our findings suggest that reduction in podocyte number is an important determinant of podocyte dysfunction and of the progressive impairment of glomerular permselectivity that leads to the development of massive proteinuria and ultimately to renal scarring. PMID:16400008
NASA Astrophysics Data System (ADS)
El Mellah, I.; Casse, F.
2017-05-01
Classical supergiant X-ray binaries host a neutron star orbiting a supergiant OB star and display persistent X-ray luminosities of 10^35-10^37 erg s^-1. The stellar wind from the massive companion is believed to be the main source of matter accreted by the compact object. With this first paper, we introduce a ballistic model to evaluate the influence of orbital effects on the structure of the accelerating winds that participate in the accretion process. Thanks to the parametrization we retained and the numerical pipeline we designed, we can investigate the supersonic flow and the subsequent observables as a function of a reduced set of characteristic numbers and scales. We show that the shape of the steady flow is entirely determined by the mass ratio, the filling factor, the Eddington factor and the α force multiplier that drives the stellar wind acceleration. Provided scales such as the orbital period are known, we can trace back the observables to evaluate the mass accretion rates, the accretion mechanism, the shearing of the inflow and the stellar parameters. We discuss the likelihood of wind-formed accretion discs around the accretors in each case and confront our model with three persistent supergiant X-ray binaries (Vela X-1, IGR J18027-2016, XTE J1855-026).
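For context, line-driven OB supergiant winds are often approximated by a β-velocity law; a small sketch under that standard parametrisation (the paper instead works with the α force multiplier directly; the names here are ours):

```python
import numpy as np

def beta_law_velocity(r, r_star, v_inf, beta=0.8, v0_frac=0.01):
    """Classic beta-law approximation for a line-driven stellar wind:
    v(r) = v0 + (v_inf - v0) * (1 - R*/r)**beta.
    Standard CAK-inspired parametrisation; the paper's ballistic model
    derives the acceleration from the force multiplier, which this
    closed form only mimics.
    """
    v0 = v0_frac * v_inf                 # small launch velocity at the photosphere
    return v0 + (v_inf - v0) * (1.0 - r_star / r) ** beta

r = np.linspace(1.01, 10.0, 5)           # radii in units of R*
print(beta_law_velocity(r, r_star=1.0, v_inf=1500.0))  # km/s
```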
ENVIRONMENTAL CONDITIONS IN NORTHERN GULF OF MEXICO COASTAL WATERS FOLLOWING HURRICANE KATRINA
On the morning of August 29, 2005 Hurricane Katrina struck the coast of Louisiana, between New Orleans and Biloxi, Mississippi, as a strong category three hurricane on the Saffir-Simpson scale. The massive winds and flooding had the potential for a tremendous environmental impac...
NASP and ISPA Response to the Japanese Natural Disaster
ERIC Educational Resources Information Center
Pfohl, Bill; Cowan, Katherine
2011-01-01
The authors have worked together with the NASP (National Association of School Psychologists) National Emergency Assistance Team (NEAT) for a decade to help coordinate communications around large-scale crisis response efforts. The massive earthquake and tsunami that devastated the northeastern part of Japan and the subsequent response represented…
Morality, Inquiry, and the University
ERIC Educational Resources Information Center
Mourad, Roger P.
2016-01-01
Given that human suffering persists globally on a massive scale, are scholars doing all they ought to be in the pursuit of knowledge? To explore this question, the author analyzes works by Alasdair MacIntyre, Nicholas Maxwell, and Bill Readings. Based on implications derived from their moral critiques of higher education, an alternative, broadened…
Coordinating the Commons: Diversity & Dynamics in Open Collaborations
ERIC Educational Resources Information Center
Morgan, Jonathan T.
2013-01-01
The success of Wikipedia demonstrates that open collaboration can be an effective model for organizing geographically-distributed volunteers to perform complex, sustained work at a massive scale. However, Wikipedia's history also demonstrates some of the challenges that large, long-term open collaborations face: the core community of Wikipedia…
Money, Policy Tangled in Wisconsin Labor Feud
ERIC Educational Resources Information Center
Cavanagh, Sean
2011-01-01
Gov. Scott Walker's sweeping proposal to scale back collective bargaining rights for most public employees in Wisconsin has sparked a rancorous standoff with teachers across the state--and fueled speculation about whether similar plans will gain traction in other parts of the country. But as massive demonstrations played out in Madison--an…
Revolutionizing the Use of Natural History Collections in Education
ERIC Educational Resources Information Center
Powers, Karen E.; Prather, L. Alan; Cook, Joseph A.; Woolley, James; Bart, Henry L., Jr.; Monfils, Anna K.; Sierwald, Petra
2014-01-01
Natural history collections are an irreplaceable and extensive record of life, and form the basis of our understanding of biodiversity on our planet. Broad-scale educational accessibility to these vast specimen collections, specimen images, and their associated data is currently severely hampered. With emerging technologies and massive efforts…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hennig, C.; Mohr, Joseph J.; Zenteno, A.
We study the galaxy populations in 74 Sunyaev-Zeldovich effect selected clusters from the South Pole Telescope survey, which have been imaged in the science verification phase of the Dark Energy Survey. The sample extends up to z ~ 1.1 with 4 × 10^14 M⊙ ≤ M200 ≤ 3 × 10^15 M⊙. Using the band containing the 4000 Å break and its redward neighbour, we study the colour-magnitude distributions of cluster galaxies to ~m* + 2, finding that: (1) the intrinsic rest frame g - r colour width of the red sequence (RS) population is ~0.03 out to z ~ 0.85 with a preference for an increase to ~0.07 at z = 1, and (2) the prominence of the RS declines beyond z ~ 0.6. The spatial distribution of cluster galaxies is well described by the NFW profile out to 4R200 with a concentration of c_g = 3.59^{+0.20}_{-0.18}, 5.37^{+0.27}_{-0.24} and 1.38^{+0.21}_{-0.19} for the full, the RS and the blue non-RS populations, respectively, but with ~40 to 55 per cent cluster-to-cluster variation and no statistically significant redshift or mass trends. The number of galaxies within the virial region, N200, exhibits a mass trend indicating that the number of galaxies per unit total mass is lower in the most massive clusters, and shows no significant redshift trend. The RS fraction within R200 is (68 ± 3) per cent at z = 0.46, varies from ~55 per cent at z = 1 to ~80 per cent at z = 0.1 and exhibits intrinsic variation among clusters of ~14 per cent. Finally, we discuss a model that suggests that the observed redshift trend in RS fraction favours a transformation time-scale for infalling field galaxies to become RS galaxies of 2-3 Gyr.
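The NFW profile referenced above has a simple closed form; a minimal sketch (standard formula; the free normalisation, and the use of a generic density rather than the paper's fitted galaxy number density, are our simplifications):

```python
import numpy as np

def nfw_profile(r, r200, c, rho0=1.0):
    """Navarro-Frenk-White profile:
    rho(r) = rho0 / ((r/rs) * (1 + r/rs)**2), with rs = r200 / c.
    Standard formula; rho0 is left as a free normalisation here,
    whereas the paper fits the projected galaxy number density.
    """
    rs = r200 / c
    x = r / rs
    return rho0 / (x * (1.0 + x) ** 2)

r = np.linspace(0.05, 4.0, 5)            # radii in units of R200
print(nfw_profile(r, r200=1.0, c=3.59))  # concentration of the full population
```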
Grindon, Christina; Harris, Sarah; Evans, Tom; Novik, Keir; Coveney, Peter; Laughton, Charles
2004-07-15
Molecular modelling played a central role in the discovery of the structure of DNA by Watson and Crick. Today, such modelling is done on computers: the more powerful these computers are, the more detailed and extensive can be the study of the dynamics of such biological macromolecules. To fully harness the power of modern massively parallel computers, however, we need to develop and deploy algorithms which can exploit the structure of such hardware. The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a scalable molecular dynamics code including long-range Coulomb interactions, which has been specifically designed to function efficiently on parallel platforms. Here we describe the implementation of the AMBER98 force field in LAMMPS and its validation for molecular dynamics investigations of DNA structure and flexibility against the benchmark of results obtained with the long-established code AMBER6 (Assisted Model Building with Energy Refinement, version 6). Extended molecular dynamics simulations on the hydrated DNA dodecamer d(CTTTTGCAAAAG)2, which has previously been the subject of extensive dynamical analysis using AMBER6, show that it is possible to obtain excellent agreement in terms of static, dynamic and thermodynamic parameters between AMBER6 and LAMMPS. In comparison with AMBER6, LAMMPS shows greatly improved scalability in massively parallel environments, opening up the possibility of efficient simulations of order-of-magnitude larger systems and/or for order-of-magnitude greater simulation times.
The Origin of IRS 16: Dynamically Driven In-Spiral of a Dense Star Cluster to the Galactic Center?
NASA Astrophysics Data System (ADS)
Portegies Zwart, Simon F.; McMillan, Stephen L. W.; Gerhard, Ortwin
2003-08-01
We use direct N-body simulations to study the in-spiral and internal evolution of dense star clusters near the Galactic center. These clusters sink toward the center owing to dynamical friction with the stellar background and may go into core collapse before being disrupted by the Galactic tidal field. If a cluster reaches core collapse before disruption, its dense core, which has become rich in massive stars, survives to reach close to the Galactic center. When it eventually dissolves, the cluster deposits a disproportionate number of massive stars in the innermost parsec of the Galactic nucleus. Comparing the spatial distribution and kinematics of the massive stars with observations of IRS 16, a group of young He I stars near the Galactic center, we argue that this association may have formed in this way.
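For scale, the in-spiral invoked here is usually estimated with Chandrasekhar's dynamical-friction formula; a standard textbook form (Binney & Tremaine), supplied for orientation rather than quoted from the paper:

```latex
% Dynamical-friction sinking time for a cluster of mass M starting on a
% circular orbit of radius r_i in a singular isothermal background with
% circular speed v_c (ln Lambda is the Coulomb logarithm):
t_{\mathrm{df}} \;\simeq\; \frac{1.17}{\ln \Lambda}\,
                 \frac{r_i^{2}\, v_c}{G\,M},
% so massive, compact clusters born close to the Galactic center can
% sink within their lifetimes, as in the scenario described above.
```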