Large scale, synchronous variability of marine fish populations driven by commercial exploitation.
Frank, Kenneth T; Petrie, Brian; Leggett, William C; Boyce, Daniel G
2016-07-19
Synchronous variations in the abundance of geographically distinct marine fish populations are known to occur across spatial scales on the order of 1,000 km and greater. The prevailing assumption is that this large-scale coherent variability is a response to coupled atmosphere-ocean dynamics, commonly represented by climate indexes, such as the Atlantic Multidecadal Oscillation and North Atlantic Oscillation. On the other hand, it has been suggested that exploitation might contribute to this coherent variability. This possibility has been generally ignored or dismissed on the grounds that exploitation is unlikely to operate synchronously at such large spatial scales. Our analysis of adult fishing mortality and spawning stock biomass of 22 North Atlantic cod (Gadus morhua) stocks revealed that both the temporal and spatial scales in fishing mortality and spawning stock biomass were equivalent to those of the climate drivers. From these results, we conclude that greater consideration must be given to the potential of exploitation as a driving force behind broad, coherent variability of heavily exploited fish species.
ATLAS and LHC computing on CRAY
NASA Astrophysics Data System (ADS)
Sciacca, F. G.; Haug, S.; ATLAS Collaboration
2017-10-01
Access to and exploitation of large-scale computing resources, such as those offered by general-purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potential due to their size and multidisciplinary usage, as well as potential gains due to economies of scale. Technical solutions, performance, expected return and future plans are discussed.
Spatial scale of similarity as an indicator of metacommunity stability in exploited marine systems.
Shackell, Nancy L; Fisher, Jonathan A D; Frank, Kenneth T; Lawton, Peter
2012-01-01
The spatial scale of similarity among fish communities is characteristically large in temperate marine systems: connectivity is enhanced by high rates of dispersal during the larval/juvenile stages and the increased mobility of large-bodied fish. A larger spatial scale of similarity (low beta diversity) is advantageous in heavily exploited systems because locally depleted populations are more likely to be "rescued" by neighboring areas. We explored whether the spatial scale of similarity changed from 1970 to 2006 due to overfishing of dominant, large-bodied groundfish across a 300 000-km2 region of the Northwest Atlantic. Annually, similarities among communities decayed slowly with increasing geographic distance in this open system, but through time the decorrelation distance declined by 33%, concomitant with widespread reductions in biomass, body size, and community evenness. The decline in connectivity stemmed from an erosion of community similarity among local subregions separated by distances as small as 100 km. Larger fish, of the same species, contribute proportionally more viable offspring, so observed body size reductions will have affected maternal output. The cumulative effect of nonlinear maternal influences on egg/larval quality may have compromised the spatial scale of effective larval dispersal, which may account for the delayed recovery of certain member species. Our study adds strong support for using the spatial scale of similarity as an indicator of metacommunity stability both to understand the spatial impacts of exploitation and to refine how spatial structure is used in management plans.
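A minimal sketch of the distance-decay calculation behind the decorrelation distance mentioned above, using synthetic abundance data (the coordinates, species abundances and the exponential-decay fit are illustrative assumptions, not the authors' survey data or method):

```python
# Sketch with synthetic data: pairwise community similarity vs. geographic
# distance, and the e-folding ("decorrelation") distance from an exponential fit.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_species = 40, 25
coords = rng.uniform(0, 600, size=(n_sites, 2))       # site positions (km)
centers = rng.uniform(0, 600, size=n_species)         # species optima along x (km)
abund = np.exp(-((coords[:, :1] - centers) ** 2) / (2 * 150.0 ** 2)) + 0.01

def bray_curtis_similarity(x, y):
    """1 - Bray-Curtis dissimilarity between two abundance vectors."""
    return 1.0 - np.abs(x - y).sum() / (x + y).sum()

dists, sims = [], []
for i in range(n_sites):
    for j in range(i + 1, n_sites):
        dists.append(np.linalg.norm(coords[i] - coords[j]))
        sims.append(bray_curtis_similarity(abund[i], abund[j]))
dists, sims = np.array(dists), np.array(sims)

# Fit ln(S) = a - d / L; L is the decorrelation (e-folding) distance.
A = np.column_stack([np.ones_like(dists), -dists])
(a, inv_L), *_ = np.linalg.lstsq(A, np.log(sims), rcond=None)
print(f"decorrelation distance ~ {1.0 / inv_L:.0f} km")
```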
Compromising genetic diversity in the wild: Unmonitored large-scale release of plants and animals
Linda Laikre; Michael K. Schwartz; Robin S. Waples; Nils Ryman; F. W. Allendorf; C. S. Baker; D. P. Gregovich; M. M. Hansen; J. A. Jackson; K. C. Kendall; K. McKelvey; M. C. Neel; I. Olivieri; R. Short Bull; J. B. Stetz; D. A. Tallmon; C. D. Vojta; D. M. Waller
2010-01-01
Large-scale exploitation of wild animals and plants through fishing, hunting and logging often depends on augmentation through releases of translocated or captively raised individuals. Such releases are performed worldwide in vast numbers. Augmentation can be demographically and economically beneficial but can also cause four types of adverse genetic change to wild...
Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...
2014-12-09
Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
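To make the notion of memory content similarity concrete, here is a hedged sketch (not the runtime proposed in the abstract; the page size, hash choice and toy snapshot are assumptions) that measures how many fixed-size pages of a memory snapshot duplicate one another:

```python
# Hedged sketch: measure memory-content similarity by hashing fixed-size
# "pages" of a snapshot and counting duplicated pages.
import hashlib
import os
from collections import Counter

PAGE_SIZE = 4096  # bytes; a typical page size (assumption)

def duplicated_page_fraction(snapshot: bytes, page_size: int = PAGE_SIZE) -> float:
    """Fraction of pages whose content is identical to at least one other page."""
    hashes = [hashlib.sha1(snapshot[i:i + page_size]).hexdigest()
              for i in range(0, len(snapshot), page_size)]
    counts = Counter(hashes)
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / max(len(hashes), 1)

# Toy snapshot: 64 zero-filled pages (highly similar) plus 16 random pages.
snapshot = b"\x00" * (64 * PAGE_SIZE) + os.urandom(16 * PAGE_SIZE)
print(f"duplicated page fraction: {duplicated_page_fraction(snapshot):.2f}")  # ~0.80
```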
Exploration–exploitation trade-off features a saltatory search behaviour
Volchenkov, Dimitri; Helbach, Jonathan; Tscherepanow, Marko; Kühnel, Sina
2013-01-01
Searching experiments conducted in different virtual environments with a gender-balanced group of people revealed a gender-independent, scale-free spread of searching activity on large spatio-temporal scales. We have suggested and solved analytically a simple statistical model of the coherent-noise type describing the exploration–exploitation trade-off in humans (‘should I stay’ or ‘should I go’). The model exhibits a variety of saltatory behaviours, ranging from Lévy flights occurring under uncertainty to Brownian walks performed by a treasure hunter confident of the eventual success. PMID:23782535
Research directions in large scale systems and decentralized control
NASA Technical Reports Server (NTRS)
Tenney, R. R.
1980-01-01
Control theory provides a well established framework for dealing with automatic decision problems and a set of techniques for automatic decision making which exploit special structure, but it does not deal well with complexity. The potential exists for combining control theoretic and knowledge based concepts into a unified approach. The elements of control theory are diagrammed, including modern control and large scale systems.
Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,
1985-10-07
[Garbled OCR from the DTIC report documentation page; recoverable details: G. Agha et al., Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge.]
Jung, Yousung; Shao, Yihan; Head-Gordon, Martin
2007-09-01
The scaled opposite spin Møller-Plesset method (SOS-MP2) is an economical way of obtaining correlation energies that are computationally cheaper, and yet, in a statistical sense, of higher quality than standard MP2 theory, by introducing one empirical parameter. But SOS-MP2 still has a fourth-order scaling step that makes the method inapplicable to very large molecular systems. We reduce the scaling of SOS-MP2 by exploiting the sparsity of expansion coefficients and local integral matrices, by performing local auxiliary basis expansions for the occupied-virtual product distributions. To exploit sparsity of 3-index local quantities, we use a blocking scheme in which entire zero-rows and columns, for a given third global index, are deleted by comparison against a numerical threshold. This approach minimizes sparse matrix book-keeping overhead, and also provides sufficiently large submatrices after blocking, to allow efficient matrix-matrix multiplies. The resulting algorithm is formally cubic scaling, and requires only moderate computational resources (quadratic memory and disk space) and, in favorable cases, is shown to yield effective quadratic scaling behavior in the size regime we can apply it to. Errors associated with local fitting using the attenuated Coulomb metric and numerical thresholds in the blocking procedure are found to be insignificant in terms of the predicted relative energies. A diverse set of test calculations shows that the size of system where significant computational savings can be achieved depends strongly on the dimensionality of the system, and the extent of localizability of the molecular orbitals. Copyright 2007 Wiley Periodicals, Inc.
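The blocking idea described above can be illustrated with a small sketch (an assumption for illustration only; it contracts a generic 3-index array rather than the actual SOS-MP2 quantities): rows and columns that are numerically zero for a given third index are deleted before a dense matrix-matrix multiply.

```python
# Sketch of threshold-based blocking: for each value of the third index, drop
# rows/columns that are numerically zero, then multiply the surviving block.
import numpy as np

def blocked_contract(B, C, thresh=1e-10):
    """Compute sum_Q B[:,:,Q] @ C[:,:,Q] exploiting row/column sparsity."""
    n, m, nQ = B.shape
    m2, k, _ = C.shape
    assert m == m2
    out = np.zeros((n, k))
    for Q in range(nQ):
        BQ, CQ = B[:, :, Q], C[:, :, Q]
        rows = np.abs(BQ).max(axis=1) > thresh          # surviving rows of B
        mids = (np.abs(BQ).max(axis=0) > thresh) & (np.abs(CQ).max(axis=1) > thresh)
        cols = np.abs(CQ).max(axis=0) > thresh          # surviving columns of C
        if rows.any() and mids.any() and cols.any():
            out[np.ix_(rows, cols)] += BQ[np.ix_(rows, mids)] @ CQ[np.ix_(mids, cols)]
    return out

rng = np.random.default_rng(1)
B = rng.standard_normal((50, 40, 8)); B[np.abs(B) < 1.2] = 0.0   # make it sparse
C = rng.standard_normal((40, 30, 8)); C[np.abs(C) < 1.2] = 0.0
ref = sum(B[:, :, q] @ C[:, :, q] for q in range(8))
assert np.allclose(blocked_contract(B, C), ref)
```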
Wavelet Analysis for RADARSAT Exploitation: Demonstration of Algorithms for Maritime Surveillance
2007-02-01
In this study, we demonstrate wavelet analysis for exploitation of RADARSAT ocean imagery, including wind direction estimation, oceanic and atmospheric ...of image striations that can arise as a texture pattern caused by turbulent coherent structures in the marine atmospheric boundary layer. The image...associated change in the pattern texture (i.e., the nature of the turbulent atmospheric structures) across the front. Due to the large spatial scale of
Supporting large scale applications on networks of workstations
NASA Technical Reports Server (NTRS)
Cooper, Robert; Birman, Kenneth P.
1989-01-01
Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.
Large-Angular-Scale Clustering as a Clue to the Source of UHECRs
NASA Astrophysics Data System (ADS)
Berlind, Andreas A.; Farrar, Glennys R.
We explore what can be learned about the sources of UHECRs from their large-angular-scale clustering (referred to as their "bias" by the cosmology community). Exploiting the clustering on large scales has the advantage over small-scale correlations of being insensitive to uncertainties in source direction from magnetic smearing or measurement error. In a Cold Dark Matter cosmology, the amplitude of large-scale clustering depends on the mass of the system, with more massive systems such as galaxy clusters clustering more strongly than less massive systems such as ordinary galaxies or AGN. Therefore, studying the large-scale clustering of UHECRs can help determine a mass scale for their sources, given the assumption that their redshift depth is as expected from the GZK cutoff. We investigate the constraining power of a given UHECR sample as a function of its cutoff energy and number of events. We show that current and future samples should be able to distinguish between the cases of their sources being galaxy clusters, ordinary galaxies, or sources that are uncorrelated with the large-scale structure of the universe.
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
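For orientation, the Bayesian inverse problem targeted by both components can be written in a standard form (a generic sketch with assumed notation, not the project's specific formulation), where F is the parameter-to-observable map whose adjoint-based gradients are exploited:

\[
\pi_{\mathrm{post}}(m \mid d) \;\propto\; \exp\!\Big(-\tfrac{1}{2}\,\|F(m)-d\|^{2}_{\Gamma_{\mathrm{noise}}^{-1}} \;-\; \tfrac{1}{2}\,\|m-m_{0}\|^{2}_{\Gamma_{\mathrm{prior}}^{-1}}\Big),
\qquad
\nabla_{m}\Phi(m) \;=\; J(m)^{T}\,\Gamma_{\mathrm{noise}}^{-1}\big(F(m)-d\big) \;+\; \Gamma_{\mathrm{prior}}^{-1}\big(m-m_{0}\big),
\]

with Φ the negative log-posterior and J the Jacobian of F, both obtainable through adjoint solves. In this notation, "reduce then sample" replaces F by a reduced-order surrogate before sampling, while "sample then reduce" keeps F and uses the gradient (and partial Hessian) information inside the sampler.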
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Large-scale absence of sharks on reefs in the greater-Caribbean: a footprint of human pressures.
Ward-Paige, Christine A; Mora, Camilo; Lotze, Heike K; Pattengill-Semmens, Christy; McClenachan, Loren; Arias-Castro, Ery; Myers, Ransom A
2010-08-05
In recent decades, large pelagic and coastal shark populations have declined dramatically with increased fishing; however, the status of sharks in other systems such as coral reefs remains largely unassessed despite a long history of exploitation. Here we explore the contemporary distribution and sighting frequency of sharks on reefs in the greater-Caribbean and assess the possible role of human pressures on observed patterns. We analyzed 76,340 underwater surveys carried out by trained volunteer divers between 1993 and 2008. Surveys were grouped within one-km2 cells, which allowed us to determine the contemporary geographical distribution and sighting frequency of sharks. Sighting frequency was calculated as the ratio of surveys with sharks to the total number of surveys in each cell. We compared sighting frequency to the number of people in the cell vicinity and used population viability analyses to assess the effects of exploitation on population trends. Sharks, with the exception of nurse sharks, occurred mainly in areas with very low human population or strong fishing regulations and marine conservation. Population viability analysis suggests that exploitation alone could explain the large-scale absence; however, this pattern is likely to be exacerbated by additional anthropogenic stressors, such as pollution and habitat degradation, that also correlate with human population. Human pressures in coastal zones have led to the broad-scale absence of sharks on reefs in the greater-Caribbean. Preventing further loss of sharks requires urgent management measures to curb fishing mortality and to mitigate other anthropogenic stressors to protect sites where sharks still exist. The fact that sharks still occur in some densely populated areas where strong fishing regulations are in place indicates the possibility of success and encourages the implementation of conservation measures.
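As a concrete illustration of the sighting-frequency definition above (a toy sketch with made-up survey records, not the volunteer-diver dataset analyzed in the study):

```python
# Sighting frequency per cell = surveys that recorded sharks / total surveys in the cell.
from collections import defaultdict

# Each survey record: (cell_id, shark_seen); cell ids and values are illustrative.
surveys = [("cell_001", True), ("cell_001", False), ("cell_001", False),
           ("cell_002", False), ("cell_002", False)]

totals, with_sharks = defaultdict(int), defaultdict(int)
for cell, seen in surveys:
    totals[cell] += 1
    with_sharks[cell] += int(seen)

sighting_freq = {cell: with_sharks[cell] / totals[cell] for cell in totals}
print(sighting_freq)   # {'cell_001': 0.33..., 'cell_002': 0.0}
```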
NASA Technical Reports Server (NTRS)
1975-01-01
Unregulated uses of the oceans may threaten the global ecological balance, alter plant and animal life and significantly impact the global climatic systems. Recent plans to locate large scale structures on the oceans and to exploit the mineral riches of the seas pose even greater risk to the ecological system. Finally, increasing use of the oceans for large scale transport greatly enhances the probability of collision, polluting spills and international conflict.
Skin Friction Reduction Through Large-Scale Forcing
NASA Astrophysics Data System (ADS)
Bhatt, Shibani; Artham, Sravan; Gnanamanickam, Ebenezer
2017-11-01
Flow structures in a turbulent boundary layer larger than an integral length scale (δ), referred to as large-scales, interact with the finer scales in a non-linear manner. By targeting these large-scales and exploiting this non-linear interaction, wall shear stress (WSS) reduction of over 10% has been achieved. The plane wall jet (PWJ), a boundary layer which has highly energetic large-scales that become turbulent independent of the near-wall finer scales, is the chosen model flow field. Its unique configuration allows for the independent control of the large-scales through acoustic forcing. Perturbation wavelengths from about 1 δ to 14 δ were considered with a reduction in WSS for all wavelengths considered. This reduction, over a large subset of the wavelengths, scales with both inner and outer variables indicating a mixed scaling to the underlying physics, while also showing dependence on the PWJ global properties. A triple decomposition of the velocity fields shows an increase in coherence due to forcing with a clear organization of the small scale turbulence with respect to the introduced large-scale. The maximum reduction in WSS occurs when the introduced large-scale acts in a manner so as to reduce the turbulent activity in the very near wall region. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-16-1-0194 monitored by Dr. Douglas Smith.
2011-12-28
...by CMEs; (2) the angular orientation of newly emerged magnetic flux on the solar surface relative to stable filaments plays a role in how rapidly the...potential of exploiting ISOON observations to increase our understanding of solar eruptions, a requirement for improved prediction and mitigation of space
Cockbain, Ella; Ashby, Matthew; Brayley, Helen
2017-10-01
Child sexual exploitation is increasingly recognized nationally and internationally as a pressing child protection, crime prevention, and public health issue. In the United Kingdom, for example, a recent series of high-profile cases has fueled pressure on policy makers and practitioners to improve responses. Yet, prevailing discourse, research, and interventions around child sexual exploitation have focused overwhelmingly on female victims. This study was designed to help redress fundamental knowledge gaps around boys affected by sexual exploitation. This was achieved through rigorous quantitative analysis of individual-level data for 9,042 users of child sexual exploitation services in the United Kingdom. One third of the sample were boys, and gender was associated with statistically significant differences on many variables. The results of this exploratory study highlight the need for further targeted research and more nuanced and inclusive counter-strategies.
Recent and future liquid metal experiments on homogeneous dynamo action and magnetic instabilities
NASA Astrophysics Data System (ADS)
Stefani, Frank; Gerbeth, Gunter; Giesecke, Andre; Gundrum, Thomas; Kirillov, Oleg; Seilmayer, Martin; Gellert, Marcus; Rüdiger, Günther; Gailitis, Agris
2011-10-01
The present status of the Riga dynamo experiment is summarized and the prospects for its future exploitation are evaluated. We further discuss the plans for a large-scale precession-driven dynamo experiment to be set up in the framework of the new installation DRESDYN (DREsden Sodium facility for dynamo and thermohydraulic studies) at Helmholtz-Zentrum Dresden-Rossendorf. We report recent investigations of the magnetorotational instability and the Tayler instability and sketch the plans for another large-scale liquid sodium facility devoted to the combined study of both effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fosalba, Pablo; Dore, Olivier
2007-11-15
Cross correlation between the cosmic microwave background (CMB) and large-scale structure is a powerful probe of dark energy and gravity on the largest physical scales. We introduce a novel estimator, the CMB-velocity correlation, that has most of its power on large scales and that, at low redshift, delivers up to a factor of 2 higher signal-to-noise ratio than the recently detected CMB-dark matter density correlation expected from the integrated Sachs-Wolfe effect. We propose to use a combination of peculiar velocities measured from supernovae type Ia and kinetic Sunyaev-Zeldovich cluster surveys to reveal this signal and forecast dark energy constraints that can be achieved with future surveys. We stress that low redshift peculiar velocity measurements should be exploited with complementary deeper large-scale structure surveys for precision cosmology.
NASA Astrophysics Data System (ADS)
Lanari, Riccardo; Bonano, Manuela; Buonanno, Sabatino; Casu, Francesco; De Luca, Claudio; Fusco, Adele; Manunta, Michele; Manzo, Mariarosaria; Pepe, Antonio; Zinno, Ivana
2017-04-01
The SENTINEL-1 (S1) mission is designed to provide operational capability for continuous mapping of the Earth thanks to its two polar-orbiting satellites (SENTINEL-1A and B) performing C-band synthetic aperture radar (SAR) imaging. It is, indeed, characterized by enhanced revisit frequency, coverage and reliability for operational services and applications requiring long SAR data time series. Moreover, SENTINEL-1 is specifically oriented to interferometry applications, with stringent requirements based on attitude and orbit accuracy, and it is intrinsically characterized by small spatial and temporal baselines. Consequently, SENTINEL-1 data are particularly suitable to be exploited through advanced interferometric techniques such as the well-known DInSAR algorithm referred to as Small BAseline Subset (SBAS), which allows the generation of deformation time series and displacement velocity maps. In this work we present an advanced interferometric processing chain, based on the Parallel SBAS (P-SBAS) approach, for the massive processing of S1 Interferometric Wide Swath (IWS) data aimed at generating deformation time series in an efficient, automatic and systematic way. Such a DInSAR chain is designed to exploit distributed computing infrastructures, and more specifically Cloud Computing environments, to properly deal with the storage and the processing of huge S1 datasets. In particular, since S1 IWS data are acquired with the innovative Terrain Observation with Progressive Scans (TOPS) mode, we could benefit from the structure of S1 data, which are composed of bursts that can be considered as separate acquisitions. Indeed, the processing is intrinsically parallelizable with respect to such independent input data, and therefore we exploited this coarse-granularity parallelization strategy in the majority of the steps of the SBAS processing chain. Moreover, we also implemented more sophisticated parallelization approaches, exploiting both multi-node and multi-core programming techniques. Currently, Cloud Computing environments make available large collections of computing resources and storage that can be effectively exploited through the presented S1 P-SBAS processing chain to carry out interferometric analyses at a very large scale, in reduced time. This also allows us to deal with the problems connected to the use of the S1 P-SBAS chain in operational contexts, related to hazard monitoring and risk prevention and mitigation, where handling large amounts of data represents a challenging task. As a significant experimental result we performed a large spatial scale SBAS analysis relevant to Central and Southern Italy by exploiting the Amazon Web Services Cloud Computing platform. In particular, we processed in parallel 300 S1 acquisitions covering the Italian peninsula from Lazio to Sicily through the presented S1 P-SBAS processing chain, generating 710 interferograms, thus finally obtaining the displacement time series of the whole processed area. This work has been partially supported by the CNR-DPC agreement, the H2020 EPOS-IP project (GA 676564) and the ESA GEP project.
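A heavily simplified sketch of the coarse-granularity, burst-level parallelization described above (the function name and burst list are placeholders; the actual P-SBAS chain involves co-registration, interferogram generation, unwrapping and inversion steps not shown here):

```python
# Map independent S1 TOPS bursts onto worker processes, then gather per-burst products.
from multiprocessing import Pool

def process_burst(burst_id):
    """Placeholder for per-burst SAR processing (co-registration, interferogram
    formation, ...); returns an identifier for the per-burst product."""
    return f"product_for_burst_{burst_id}"

if __name__ == "__main__":
    burst_ids = list(range(120))              # independent bursts from the S1 stack
    with Pool(processes=8) as pool:           # one worker per core/node slot
        products = pool.map(process_burst, burst_ids)
    print(len(products), "burst products ready for mosaicking / SBAS inversion")
```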
From Wardens Air Force to Boyds Air Force
2016-04-01
changing events. In this respect, armed forces can be viewed more accurately as perpetually evolving ecosystems than the unresponsive closed...large-scale full-motion video (FMV) exploitation. In the near-term, the service is already exploring emerging technology that can scan video for
Large Scale GW Calculations on the Cori System
NASA Astrophysics Data System (ADS)
Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven
The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.
NASA Astrophysics Data System (ADS)
Ray, R. K.; Syed, T. H.; Saha, Dipankar; Sarkar, B. C.; Patre, A. K.
2017-12-01
Extracted groundwater, 90% of which is used for irrigated agriculture, is central to the socio-economic development of India. A lack of regulation or implementation of regulations, alongside unrecorded extraction, often leads to overexploitation of large-scale common-pool resources like groundwater. Inevitably, management of groundwater extraction (draft) for irrigation is critical for sustainability of aquifers and the society at large. However, existing assessments of groundwater draft, which are mostly available at large spatial scales, are inadequate for managing groundwater resources that are primarily exploited by stakeholders at much finer scales. This study presents an estimate, projection and analysis of fine-scale groundwater draft in the Seonath-Kharun interfluve of central India. Using field surveys of instantaneous discharge from irrigation wells and boreholes, annual groundwater draft for irrigation in this area is estimated to be 212 × 10⁶ m³, most of which (89%) is withdrawn during the non-monsoon season. However, the density of wells/boreholes, and consequent extraction of groundwater, is controlled by the existing hydrogeological conditions. Based on trends in the number of abstraction structures (1982-2011), groundwater draft for the year 2020 is projected to be approximately 307 × 10⁶ m³; hence, groundwater draft for irrigation in the study area is predicted to increase by ˜44% within a span of 8 years. Central to the work presented here is the approach for estimation and prediction of groundwater draft at finer scales, which can be extended to critical groundwater zones of the country.
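A small sketch of the bookkeeping implied by this kind of draft estimate and projection (the well records, pumping schedules and well-count trend below are illustrative assumptions, not the survey data of the study):

```python
# Annual draft ~ sum over wells of (instantaneous discharge x pumping hours x days);
# a linear trend fitted to well counts is then used to scale the draft forward.
import numpy as np

# Per-well record: discharge (m^3/h), pumping hours per day, pumping days per year
wells = [(12.0, 6, 180), (8.5, 4, 150), (20.0, 8, 200)]
annual_draft = sum(q * h * d for q, h, d in wells)         # m^3 per year
print(f"annual draft from surveyed wells: {annual_draft:,.0f} m^3")

# Toy well-count series (1982-2011) and projection of draft to 2020.
years = np.arange(1982, 2012)
n_wells = 500 + 40 * (years - 1982) + np.random.default_rng(2).normal(0, 50, years.size)
slope, intercept = np.polyfit(years, n_wells, 1)
growth_2020 = (slope * 2020 + intercept) / (slope * 2011 + intercept)
print(f"projected 2020 draft ~ {annual_draft * growth_2020:,.0f} m^3")
```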
Atomic orbital-based SOS-MP2 with tensor hypercontraction. II. Local tensor hypercontraction
NASA Astrophysics Data System (ADS)
Song, Chenchen; Martínez, Todd J.
2017-01-01
In the first paper of the series [Paper I, C. Song and T. J. Martinez, J. Chem. Phys. 144, 174111 (2016)], we showed how tensor-hypercontracted (THC) SOS-MP2 could be accelerated by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs). This reduced the formal scaling of the SOS-MP2 energy calculation to cubic with respect to system size. The computational bottleneck then becomes the THC metric matrix inversion, which scales cubically with a large prefactor. In this work, the local THC approximation is proposed to reduce the computational cost of inverting the THC metric matrix to linear scaling with respect to molecular size. By doing so, we have removed the primary bottleneck to THC-SOS-MP2 calculations on large molecules with O(1000) atoms. The errors introduced by the local THC approximation are less than 0.6 kcal/mol for molecules with up to 200 atoms and 3300 basis functions. Together with the graphical processing unit techniques and locality-exploiting approaches introduced in previous work, the scaled opposite spin MP2 (SOS-MP2) calculations exhibit O(N^2.5) scaling in practice up to 10 000 basis functions. The new algorithms make it feasible to carry out SOS-MP2 calculations on small proteins like ubiquitin (1231 atoms/10 294 atomic basis functions) on a single node in less than a day.
NASA Technical Reports Server (NTRS)
Criswell, D. R. (Editor)
1976-01-01
The practicality of exploiting the moon, not only as a source of materials for large habitable structures at Lagrangian points, but also as a base for colonization is discussed in abstracts of papers presented at a special session on lunar utilization. Questions and answers which followed each presentation are included after the appropriate abstract. Author and subject indexes are provided.
The ups and downs of trophic control in continental shelf ecosystems.
Frank, Kenneth T; Petrie, Brian; Shackell, Nancy L
2007-05-01
Traditionally, marine ecosystem structure was thought to be determined by phytoplankton dynamics. However, an integrated view on the relative roles of top-down (consumer-driven) and bottom-up (resource-driven) forcing in large-scale, exploited marine ecosystems is emerging. Long time series of scientific survey data, underpinning the management of commercially exploited species such as cod, are being used to diagnose mechanisms that could affect the composition and relative abundance of species in marine food webs. By assembling published data from studies in exploited North Atlantic ecosystems, we found pronounced geographical variation in top-down and bottom-up trophic forcing. The data suggest that ecosystem susceptibility to top-down control and their resiliency to exploitation are related to species richness and oceanic temperature conditions. Such knowledge could be used to produce ecosystem guidelines to regulate and manage fisheries in a sustainable fashion.
Valuing Tropical Rainforest Protection Using the Contingent Valuation Method
Randall A. Kramer; D. Evan Mercer; Narendra Sharma
1996-01-01
In the last several decades, the intensity and scale of forest exploitation have increased significantly. A large number of developing countries experiencing increasing deforestation trends are also facing acute shortages of fuelwood, fodder, industrial timber, and other forest products for domestic use. Besides potential environmental degradation, depletion of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vixie, Kevin R.
This is the final report for the project "Geometric Analysis for Data Reduction and Structure Discovery," in which insights and tools from geometric analysis were developed and exploited for their potential to address large-scale data challenges.
Exploiting Large-Scale Drug-Protein Interaction Information for Computational Drug Repurposing
2014-06-20
other anti-HIV drugs, even though it is not approved as a monotherapy for HIV. Surprisingly, two statins, atorvastatin and lovastatin, scored among...infected persons. Amprenavir (19.6): for treatment of HIV-1 infection in combination with other antiretroviral agents. Atorvastatin (19.3): for
Body size mediated coexistence of consumers competing for resources in space
Basset, A.; Angelis, D.L.
2007-01-01
Body size is a major phenotypic trait of individuals that commonly differentiates co-occurring species. We analyzed inter-specific competitive interactions between a large consumer and smaller competitors, whose energetics, selection and giving-up behaviour on identical resource patches scaled with individual body size. The aim was to investigate whether pure metabolic constraints on patch behaviour of vagile species can determine coexistence conditions consistent with existing theoretical and experimental evidence. We used an individual-based spatially explicit simulation model at a spatial scale defined by the home range of the large consumer, which was assumed to be parthenogenic and semelparous. Under exploitative conditions, competitive coexistence occurred in a range of body size ratios between 2 and 10. Asymmetrical competition and the mechanism underlying asymmetry, determined by the scaling of energetics and patch behaviour with consumer body size, were the proximate determinants of inter-specific coexistence. The small consumer exploited patches more efficiently, but searched for profitable patches less effectively than the larger competitor. Therefore, body-size related constraints induced niche partitioning, allowing competitive coexistence within a set of conditions where the large consumer maintained control over the small consumer and resource dynamics. The model summarises and extends the existing evidence of species coexistence on a limiting resource, and provides a mechanistic explanation for decoding the size-abundance distribution patterns commonly observed at guild and community levels. © Oikos.
NASA Astrophysics Data System (ADS)
Valtonen, Katariina; Leppänen, Mauri
Governments worldwide are concerned with the efficient production of services for customers. To improve the quality of services and to make service production more efficient, information and communication technology (ICT) is widely exploited in public administration (PA). Succeeding in this exploitation calls for large-scale planning that embraces issues from the strategic to the technological level. In this planning the notion of enterprise architecture (EA) is commonly applied. One of the sub-architectures of EA is business architecture (BA). BA planning is challenging in PA due to a large number of stakeholders, a wide set of customers, and solid and hierarchical organizational structures. To support EA planning in Finland, a project to engineer a government EA (GEA) method was launched. In this chapter, we analyze the discussions and outputs of the project workshops and reflect the issues that emerged against the current e-government literature. We bring forth insights into and suggestions for government BA and its development.
Macroweather Predictions and Climate Projections using Scaling and Historical Observations
NASA Astrophysics Data System (ADS)
Hébert, R.; Lovejoy, S.; Del Rio Amador, L.
2017-12-01
There are two fundamental time scales that are pertinent to decadal forecasts and multidecadal projections. The first is the lifetime of planetary-scale structures, about 10 days (equal to the deterministic predictability limit), and the second is - in the anthropocene - the scale at which the forced anthropogenic variability exceeds the internal variability (around 16 - 18 years). These two time scales define three regimes of variability: weather, macroweather and climate, which are respectively characterized by increasing, decreasing and then increasing variability with scale. We discuss how macroweather temperature variability can be skilfully predicted to its theoretical stochastic predictability limits by exploiting its long-range memory with the Stochastic Seasonal and Interannual Prediction System (StocSIPS). At multi-decadal timescales, the temperature response to forcing is approximately linear and this can be exploited to make projections with a Green's function, or Climate Response Function (CRF). To make the problem tractable, we exploit the temporal scaling symmetry and restrict our attention to global mean forcing and temperature response using a scaling CRF characterized by the scaling exponent H and an inner scale of linearity τ. An aerosol linear scaling factor α and a non-linear volcanic damping exponent ν were introduced to account for the large uncertainty in these forcings. We estimate the model and forcing parameters by Bayesian inference using historical data; these allow us to analytically calculate a median (and likely 66% range) for the transient climate response and for the equilibrium climate sensitivity: 1.6 K ([1.5, 1.8] K) and 2.4 K ([1.9, 3.4] K), respectively. Aerosol forcing typically has large uncertainty, and we find a modern (2005) forcing very likely range (90%) of [-1.0, -0.3] W m⁻² with median at -0.7 W m⁻². Projecting to 2100, we find that to keep the warming below 1.5 K, future emissions must undergo cuts similar to Representative Concentration Pathway (RCP) 2.6, for which the probability to remain under 1.5 K is 48%. RCP 4.5 and RCP 8.5-like futures overshoot with very high probability. This underscores that over the next century, the state of the environment will be strongly influenced by past, present and future economic policies.
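In equation form, the projection machinery described above amounts to convolving the radiative forcing with a scaling climate response function; one plausible reading (a sketch only, the exact regularization below the inner scale τ is an assumption) is

\[
T(t) \;=\; \int_{0}^{\infty} G(s)\,F(t-s)\,\mathrm{d}s,
\qquad
G(s) \;\propto\;
\begin{cases}
s^{\,H-1}, & s > \tau,\\
\tau^{\,H-1}, & s \le \tau,
\end{cases}
\]

where T is the global mean temperature response, F the global mean forcing, H the scaling exponent and τ the inner scale of linearity; the factor α rescales the aerosol component of F and the exponent ν damps the volcanic component nonlinearly.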
Linking crop yield anomalies to large-scale atmospheric circulation in Europe.
Ceglar, Andrej; Turco, Marco; Toreti, Andrea; Doblas-Reyes, Francisco J
2017-06-15
Understanding the effects of climate variability and extremes on crop growth and development represents a necessary step to assess the resilience of agricultural systems to changing climate conditions. This study investigates the links between the large-scale atmospheric circulation and crop yields in Europe, providing the basis to develop seasonal crop yield forecasting and thus enabling a more effective and dynamic adaptation to climate variability and change. Four dominant modes of large-scale atmospheric variability have been used: North Atlantic Oscillation, Eastern Atlantic, Scandinavian and Eastern Atlantic-Western Russia patterns. Large-scale atmospheric circulation explains on average 43% of inter-annual winter wheat yield variability, ranging between 20% and 70% across countries. As for grain maize, the average explained variability is 38%, ranging between 20% and 58%. Spatially, the skill of the developed statistical models strongly depends on the large-scale atmospheric variability impact on weather at the regional level, especially during the most sensitive growth stages of flowering and grain filling. Our results also suggest that preceding atmospheric conditions might provide an important source of predictability especially for maize yields in south-eastern Europe. Since the seasonal predictability of large-scale atmospheric patterns is generally higher than the one of surface weather variables (e.g. precipitation) in Europe, seasonal crop yield prediction could benefit from the integration of derived statistical models exploiting the dynamical seasonal forecast of large-scale atmospheric circulation.
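A minimal sketch of the type of statistical model described above (synthetic data; the coefficients and noise level are assumptions): yield anomalies are regressed on seasonal circulation indices and the explained inter-annual variance is reported.

```python
# Regress yield anomalies on four circulation indices (NAO, EA, SCA, EA/WR) and
# report the fraction of inter-annual variability explained.
import numpy as np

rng = np.random.default_rng(3)
n_years = 30
X = rng.standard_normal((n_years, 4))                    # stand-in index values
true_beta = np.array([0.6, -0.3, 0.2, 0.1])
y = X @ true_beta + 0.5 * rng.standard_normal(n_years)   # yield anomaly (t/ha)

X1 = np.column_stack([np.ones(n_years), X])              # add intercept
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
resid = y - X1 @ beta
r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
print(f"explained inter-annual yield variability: {100 * r2:.0f}%")
```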
A Fine-Grained Pipelined Implementation for Large-Scale Matrix Inversion on FPGA
NASA Astrophysics Data System (ADS)
Zhou, Jie; Dou, Yong; Zhao, Jianxun; Xia, Fei; Lei, Yuanwu; Tang, Yuxing
Large-scale matrix inversion plays an important role in many applications. However, to the best of our knowledge, there is no FPGA-based implementation. In this paper, we explore the possibility of accelerating large-scale matrix inversion on FPGA. To exploit the computational potential of the FPGA, we introduce a fine-grained parallel algorithm for matrix inversion. A scalable linear array of processing elements (PEs), which is the core component of the FPGA accelerator, is proposed to implement this algorithm. A total of 12 PEs can be integrated into an Altera Stratix II EP2S130F1020C5 FPGA on our self-designed board. Experimental results show that a speedup factor of 2.6 and a maximum power-performance of 41 can be achieved compared to a Pentium dual-core CPU with two SSE threads.
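For orientation, a serial reference for the kind of elimination such a PE array streams through is sketched below (an illustrative assumption; the paper's contribution is the fine-grained pipelined FPGA design, not this loop): Gauss-Jordan elimination with partial pivoting.

```python
# Serial Gauss-Jordan inversion with partial pivoting, as a reference sketch.
import numpy as np

def gauss_jordan_inverse(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])          # augmented [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))    # partial pivoting
        M[[col, pivot]] = M[[pivot, col]]                # swap rows
        M[col] /= M[col, col]                            # normalize pivot row
        for row in range(n):                             # eliminate other entries
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]

A = np.random.default_rng(4).standard_normal((6, 6))
assert np.allclose(gauss_jordan_inverse(A) @ A, np.eye(6), atol=1e-8)
```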
NASA Astrophysics Data System (ADS)
Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.
2013-12-01
A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ˜10⁶ cores and sustained performance over ˜2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios.
Mizutani, Eiji; Demmel, James W
2003-01-01
This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).
Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
NASA Astrophysics Data System (ADS)
Nascetti, A.; Di Rita, M.; Ravanelli, R.; Amicuzi, M.; Esposito, S.; Crespi, M.
2017-05-01
The high-performance cloud-computing platform Google Earth Engine has been developed for global-scale analysis based on Earth observation data. In particular, in this work, the geometric accuracy of the two most widely used nearly-global free DSMs (SRTM and ASTER) has been evaluated on the territories of four American States (Colorado, Michigan, Nevada, Utah) and one Italian Region (Trentino Alto-Adige, Northern Italy), exploiting the potential of this platform. These are large areas characterized by different terrain morphology, land covers and slopes. The assessment has been performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. The accuracy of the DSMs has been evaluated through computation of standard statistical parameters, both at the global scale (considering the whole State/Region) and as a function of the terrain morphology using several slope classes. The geometric accuracy in terms of standard deviation and NMAD ranges, for SRTM, from 2-3 meters in the first slope class to about 45 meters in the last one, whereas for ASTER the values range from 5-6 to 30 meters. In general, the analysis shows better accuracy for SRTM in the flat areas, whereas the ASTER GDEM is more reliable in the steep areas, where the slopes increase. These preliminary results highlight the potential of GEE to perform DSM assessment on a global scale.
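The per-slope-class accuracy statistics named above can be computed in a few lines (a sketch with simulated elevation differences; the slope-class edges and error model are assumptions): standard deviation and NMAD of the DSM-minus-reference differences per class.

```python
# Per-slope-class accuracy: standard deviation and NMAD (1.4826 x median absolute
# deviation) of simulated DSM-minus-reference elevation differences.
import numpy as np

def nmad(errors):
    return 1.4826 * np.median(np.abs(errors - np.median(errors)))

rng = np.random.default_rng(5)
slope = rng.uniform(0, 60, 10000)                        # terrain slope (degrees)
error = rng.normal(0, 2 + 0.5 * slope)                   # DSM minus reference (m)

edges = [0, 5, 10, 20, 40, 60]                           # assumed slope classes
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (slope >= lo) & (slope < hi)
    print(f"slope {lo:2d}-{hi:2d} deg: std = {error[sel].std():5.1f} m, "
          f"NMAD = {nmad(error[sel]):5.1f} m")
```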
Behavior under the Microscope: Increasing the Resolution of Our Experimental Procedures
ERIC Educational Resources Information Center
Palmer, David C.
2010-01-01
Behavior analysis has exploited conceptual tools whose experimental validity has been amply demonstrated, but their relevance to large-scale and fine-grained behavioral phenomena remains uncertain, because the experimental analysis of these domains faces formidable obstacles of measurement and control. In this essay I suggest that, at least at the…
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biros, George
Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10 petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
Neighborhood Discriminant Hashing for Large-Scale Image Retrieval.
Tang, Jinhui; Li, Zechao; Wang, Meng; Zhao, Ruizhen
2015-09-01
With the proliferation of large-scale community-contributed images, hashing-based approximate nearest neighbor search in huge databases has aroused considerable interest from the fields of computer vision and multimedia in recent years because of its computational and memory efficiency. In this paper, we propose a novel hashing method, neighborhood discriminant hashing (NDH), to implement approximate similarity search. Different from previous work, we propose to learn a discriminant hashing function by exploiting local discriminative information, i.e., the labels of a sample can be inherited from the neighbor samples it selects. The hashing function is expected to be orthogonal to avoid redundancy in the learned hashing bits as much as possible, while an information-theoretic regularization is jointly exploited using the maximum entropy principle. As a consequence, the learned hashing function is compact and nonredundant among bits, while each bit is highly informative. Extensive experiments are carried out on four publicly available data sets, and the comparison results demonstrate that the proposed NDH method outperforms state-of-the-art hashing techniques.
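The retrieval side of such a scheme is easy to sketch (the random projection below merely stands in for the learned NDH hashing function; it is not the proposed learning algorithm): features are binarized with the hashing function and database items are ranked by Hamming distance.

```python
# Binary hashing retrieval: encode with sign(XW), rank database items by Hamming distance.
import numpy as np

rng = np.random.default_rng(6)
d, n_bits, n_db = 128, 32, 10000
W = rng.standard_normal((d, n_bits))                     # stand-in for a learned projection

def encode(X):
    return (X @ W > 0).astype(np.uint8)                  # n x n_bits binary codes

database = rng.standard_normal((n_db, d))
db_codes = encode(database)

query = rng.standard_normal((1, d))
q_code = encode(query)
hamming = (db_codes != q_code).sum(axis=1)               # distance to every code
top10 = np.argsort(hamming)[:10]
print("nearest items by Hamming distance:", top10)
```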
Polar ocean ecosystems in a changing world.
Smetacek, Victor; Nicol, Stephen
2005-09-15
Polar organisms have adapted their seasonal cycles to the dynamic interface between ice and water. This interface ranges from the micrometre-sized brine channels within sea ice to the planetary-scale advance and retreat of sea ice. Polar marine ecosystems are particularly sensitive to climate change because small temperature differences can have large effects on the extent and thickness of sea ice. Little is known about the interactions between large, long-lived organisms and their planktonic food supply. Disentangling the effects of human exploitation of upper trophic levels from basin-wide, decade-scale climate cycles to identify long-term, global trends is a daunting challenge facing polar bio-oceanography.
About the bears and the bees: Adaptive responses to asymmetric warfare
NASA Astrophysics Data System (ADS)
Ryan, Alex
Conventional military forces are organised to generate large scale effects against similarly structured adversaries. Asymmetric warfare is a 'game' between a conventional military force and a weaker adversary that is unable to match the scale of effects of the conventional force. In asymmetric warfare, an insurgents' strategy can be understood using a multi-scale perspective: by generating and exploiting fine scale complexity, insurgents prevent the conventional force from acting at the scale they are designed for. This paper presents a complex systems approach to the problem of asymmetric warfare, which shows how future force structures can be designed to adapt to environmental complexity at multiple scales and achieve full spectrum dominance.
Regional groundwater flow modeling of the Geba basin, northern Ethiopia
NASA Astrophysics Data System (ADS)
Gebreyohannes, Tesfamichael; De Smedt, Florimond; Walraevens, Kristine; Gebresilassie, Solomon; Hussien, Abdelwassie; Hagos, Miruts; Amare, Kassa; Deckers, Jozef; Gebrehiwot, Kindeya
2017-05-01
The Geba basin is one of the most food-insecure areas of the Tigray regional state in northern Ethiopia due to recurrent drought resulting from erratic distribution of rainfall. Since the beginning of the 1990s, rain-fed agriculture has been supported through small-scale irrigation schemes, mainly by surface-water harvesting, but success has been limited. Hence, use of groundwater for irrigation purposes has gained considerable attention. The main purpose of this study is to assess groundwater resources in the Geba basin by means of a MODFLOW modeling approach. The model is calibrated using observed groundwater levels, yielding a clear insight into the groundwater flow systems and reserves. Results show that none of the hydrogeological formations can be considered an aquifer suitable for large-scale groundwater exploitation. However, aquitards can be identified that can support small-scale groundwater abstraction for irrigation needs in regions that are either designated as groundwater discharge areas or where groundwater levels are shallow and can be tapped by hand-dug wells or shallow boreholes.
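For readers unfamiliar with what a MODFLOW-type model computes, the toy NumPy sketch below solves steady-state groundwater heads on a small finite-difference grid with fixed-head boundaries on two sides and no-flow boundaries on the others. The grid size, boundary heads, and homogeneous aquifer are invented for illustration and bear no relation to the Geba basin model itself.

```python
import numpy as np

# Hypothetical 50 x 50 cell grid; heads fixed on the west/east boundaries.
n = 50
h = np.zeros((n, n))
h[:, 0] = 100.0    # fixed head (m) along the western boundary
h[:, -1] = 80.0    # fixed head (m) along the eastern boundary

# Jacobi iteration on the steady-state flow equation for a homogeneous,
# confined aquifer: each interior head is the average of its four neighbours.
for _ in range(5000):
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                            h[1:-1, :-2] + h[1:-1, 2:])
    h[0, :], h[-1, :] = h[1, :], h[-2, :]   # no-flow (zero-gradient) north/south
    h[:, 0], h[:, -1] = 100.0, 80.0         # re-impose fixed heads west/east

print(h[n // 2, ::10])  # sampled head profile across the model domain
```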
Lichtenberg, Peter A; Gross, Evan; Ficker, Lisa J
2018-06-08
This work examines the clinical utility of the scoring system for the Lichtenberg Financial Decision-making Rating Scale (LFDRS) and its usefulness for assessing decision-making capacity and financial exploitation. Objective 1 was to examine the clinical utility of a person-centered, empirically supported financial decision-making scale. Objective 2 was to determine whether the risk-scoring system created for this rating scale is sufficiently accurate for the use of cutoff scores in cases of decisional capacity and cases of suspected financial exploitation. Objective 3 was to examine whether cognitive decline and decisional impairment predicted suspected financial exploitation. Two hundred independently living, non-demented, community-dwelling older adults comprised the sample. Participants completed the rating scale and other cognitive measures. Receiver operating characteristic curves were in the good to excellent range for decisional capacity scoring, and in the fair to good range for financial exploitation. Analyses supported the conceptual link between decision-making deficits and risk for exploitation, and supported the use of the risk-scoring system in a community-based population. This study adds to the empirical evidence supporting the use of the rating scale as a clinical tool assessing risk for financial decisional impairment and/or financial exploitation.
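The cutoff-score evaluation described above is an ordinary ROC analysis. The scikit-learn sketch below uses synthetic scores and labels that merely stand in for LFDRS risk scores and exploitation status; it shows how such a cutoff would typically be screened, not the study's actual analysis.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Synthetic stand-ins: 1 = suspected financial exploitation, 0 = no concern,
# and a risk score where higher values indicate more decisional difficulty.
y = rng.integers(0, 2, size=200)
score = y * rng.normal(12, 4, 200) + (1 - y) * rng.normal(8, 4, 200)

fpr, tpr, thresholds = roc_curve(y, score)
print("AUC:", round(roc_auc_score(y, score), 2))

# Youden's J picks the cutoff that best balances sensitivity and specificity.
j = tpr - fpr
best = thresholds[np.argmax(j)]
print("candidate cutoff:", round(best, 1))
```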
Constructing Neuronal Network Models in Massively Parallel Environments.
Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus
2017-01-01
Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
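For concreteness, the sketch below formulates a tiny fixed-cost, grid-based covering/location ILP in PuLP (assumed installed); the grid, coverage radius, and costs are invented and the model is only a generic stand-in for the GBLP formulation in the paper. A decomposition heuristic of the kind described would partition a large grid into sub-grids solved separately; only the exact ILP is shown here.

```python
import itertools
import pulp

# Toy grid-based location problem: choose grid cells on which to open
# facilities so every demand cell is covered, minimising fixed opening costs.
sites = list(itertools.product(range(4), range(4)))           # candidate cells
demands = [(0, 0), (3, 3), (1, 2), (2, 0)]                    # demand cells
cover_radius = 2                                              # Chebyshev radius
fixed_cost = {s: 10 + s[0] + s[1] for s in sites}             # hypothetical costs

def covers(site, demand):
    return max(abs(site[0] - demand[0]), abs(site[1] - demand[1])) <= cover_radius

prob = pulp.LpProblem("grid_location", pulp.LpMinimize)
x = pulp.LpVariable.dicts("open", sites, cat="Binary")

prob += pulp.lpSum(fixed_cost[s] * x[s] for s in sites)        # fixed-cost objective
for d in demands:
    prob += pulp.lpSum(x[s] for s in sites if covers(s, d)) >= 1  # cover each demand

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([s for s in sites if x[s].value() >= 0.5])
```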
Large-scale quantum networks based on graphs
NASA Astrophysics Data System (ADS)
Epping, Michael; Kampermann, Hermann; Bruß, Dagmar
2016-05-01
Society depends increasingly on information exchange and communication. In the quantum world, security and privacy are built-in features of information processing. The essential ingredient for exploiting these quantum advantages is the resource of entanglement, which can be shared between two or more parties. The distribution of entanglement over large distances constitutes a key challenge for current research and development. Due to losses of the transmitted quantum particles, which typically scale exponentially with the distance, intermediate quantum repeater stations are needed. Here we show how to generalise the quantum repeater concept to the multipartite case, by describing large-scale quantum networks, i.e. network nodes and their long-distance links, consistently in the language of graphs and graph states. This unifying approach comprises both the distribution of multipartite entanglement across the network, and the protection against errors via encoding. The correspondence to graph states also provides a tool for optimising the architecture of quantum networks.
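The exponential-loss argument for repeater stations is easy to quantify. The short calculation below compares the photon survival probability of direct fibre transmission with the per-segment probability when the link is divided by repeaters; the attenuation length and distances are illustrative numbers only, and nothing here touches the graph-state construction itself.

```python
import math

L_att = 22.0          # fibre attenuation length in km (typical order of magnitude)
L_total = 1000.0      # end-to-end distance, km

def p_direct(L):
    """Probability a single photon survives direct fibre transmission."""
    return math.exp(-L / L_att)

def p_segment(L, n_segments):
    """Survival probability over one segment when n-1 repeaters split the link."""
    return math.exp(-L / (n_segments * L_att))

print(f"direct 1000 km:      {p_direct(L_total):.2e}")
for n in (4, 10, 20):
    print(f"per-segment, n={n:2d}:  {p_segment(L_total, n):.2e}")
```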
Spreng, R. Nathan; Cassidy, Benjamin N; Darboh, Bri S; DuPre, Elizabeth; Lockrow, Amber W; Setton, Roni; Turner, Gary R
2017-01-01
Background: Age-related brain changes leading to altered socioemotional functioning may increase vulnerability to financial exploitation. If confirmed, this would suggest a novel mechanism leading to heightened financial exploitation risk in older adults. Development of predictive neural markers could facilitate increased vigilance and prevention. In this preliminary study, we sought to identify structural and functional brain differences associated with financial exploitation in older adults. Methods: Financially exploited older adults (n = 13, 7 female) and a matched cohort of older adults who had been exposed to, but avoided, a potentially exploitative situation (n = 13, 7 female) were evaluated. Using magnetic resonance imaging, we examined cortical thickness and resting state functional connectivity. Behavioral data were collected using standardized cognitive assessments and self-report measures of mood and social functioning. Results: The exploited group showed cortical thinning in anterior insula and posterior superior temporal cortices, regions associated with processing affective and social information, respectively. Functional connectivity encompassing these regions, within default and salience networks, was reduced, while between-network connectivity was increased. Self-reported anger and hostility was higher for the exploited group. Conclusions: We observed financial exploitation associated with brain differences in regions involved in socioemotional functioning. These exploratory and preliminary findings suggest that alterations in brain regions implicated in socioemotional functioning may be a marker of financial exploitation risk. Large-scale, prospective studies are necessary to validate this neural mechanism, and develop predictive markers for use in clinical practice.
NASA Astrophysics Data System (ADS)
Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.
2017-05-01
Semantic classification is a core remote sensing task as it provides the fundamental input for land-cover map generation. The very recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most of the recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Therefore, current architectures are well tailored to urban areas over restricted extents but are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, vegetated areas) by exploiting existing VHR land-cover maps for training.
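A "simple yet effective" per-pixel classifier of this kind can be sketched in a few lines of PyTorch. The network below is a generic, hypothetical fully convolutional model; the band count, depth, and class list are placeholders and not the architecture from the paper. Training would use existing land-cover maps as per-pixel labels, as the authors describe.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional classifier: per-pixel scores for 5 classes."""
    def __init__(self, in_bands=4, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, n_classes, 1)  # 1x1 conv -> class score maps

    def forward(self, x):
        return self.classifier(self.features(x))

# A fake 4-band, 256x256 SPOT-like tile stands in for real imagery.
tile = torch.randn(1, 4, 256, 256)
logits = TinyFCN()(tile)
pred = logits.argmax(dim=1)          # per-pixel class map
print(logits.shape, pred.shape)      # (1, 5, 256, 256) (1, 256, 256)
```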
Memristive crypto primitive for building highly secure physical unclonable functions
NASA Astrophysics Data System (ADS)
Gao, Yansong; Ranasinghe, Damith C.; Al-Sarawi, Said F.; Kavehei, Omid; Abbott, Derek
2015-08-01
Physical unclonable functions (PUFs) exploit the intrinsic complexity and irreproducibility of physical systems to generate secret information. The advantage is that PUFs have the potential to provide fundamentally higher security than traditional cryptographic methods by preventing the cloning of devices and the extraction of secret keys. Most PUF designs focus on exploiting process variations in Complementary Metal Oxide Semiconductor (CMOS) technology. In recent years, progress in nanoelectronic devices such as memristors has demonstrated the prevalence of process variations in scaling electronics down to the nano region. In this paper, we exploit the extremely large information density available in nanocrossbar architectures and the significant resistance variations of memristors to develop an on-chip memristive device based strong PUF (mrSPUF). Our novel architecture demonstrates desirable characteristics of PUFs, including uniqueness, reliability, and large number of challenge-response pairs (CRPs) and desirable characteristics of strong PUFs. More significantly, in contrast to most existing PUFs, our PUF can act as a reconfigurable PUF (rPUF) without additional hardware and is of benefit to applications needing revocation or update of secure key information.
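To make the challenge-response idea concrete, the NumPy sketch below simulates a toy memristive crossbar whose cell resistances carry large device-to-device variation, derives one response bit per challenge from a resistance comparison, and shows how re-programming the array plays the role of reconfiguration. The array size, distributions, and bit-extraction rule are all invented; real mrSPUF circuits derive responses from analog measurements of the fabricated crossbar.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 32x32 memristive crossbar: each cell gets a resistance drawn
# from a lognormal distribution to mimic large device-to-device variation.
R = rng.lognormal(mean=np.log(1e5), sigma=0.5, size=(32, 32))

def puf_response(R, rows, cols):
    """Toy challenge: compare the summed resistance of two selected cell paths.

    Returns one response bit; only an illustration of challenge-response behaviour.
    """
    path_a = R[rows, cols].sum()
    path_b = R[cols, rows].sum()
    return int(path_a > path_b)

challenge = (rng.permutation(32)[:8], rng.permutation(32)[:8])
print(puf_response(R, *challenge))

# Reconfiguration analogue: re-programming the memristors redraws the
# resistances, yielding a statistically independent set of CRPs.
R_new = rng.lognormal(mean=np.log(1e5), sigma=0.5, size=(32, 32))
print(puf_response(R_new, *challenge))
```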
Lithic Landscapes: Early Human Impact from Stone Tool Production on the Central Saharan Environment
Foley, Robert A.; Lahr, Marta Mirazón
2015-01-01
Humans have had a major impact on the environment. This has been particularly intense in the last millennium but has been noticeable since the development of food production and the associated higher population densities in the last 10,000 years. The use of fire and the over-exploitation of large mammals have also been recognized as having an effect on the world's ecology, going back perhaps 100,000 years or more. Here we report on an earlier anthropogenic environmental change. The use of stone tools, which dates back over 2.5 million years, and the subsequent evolution of a technologically-dependent lineage required the exploitation of very large quantities of rock. However, measures of the impact of hominin stone exploitation are rare and inherently difficult. The Messak Settafet, a sandstone massif in the Central Sahara (Libya), is littered with Pleistocene stone tools on an unprecedented scale and is, in effect, a man-made landscape. Surveys showed that parts of the Messak Settafet have as much as 75 lithics per square metre and that this fractured debris is a dominant element of the environment. The types of stone tools (Acheulean and Middle Stone Age) indicate that extensive stone tool manufacture occurred over the last half million years or more. The lithic-strewn pavement created by this ancient stone tool manufacture possibly represents the earliest human environmental impact at a landscape scale and is an example of anthropogenic change. The nature of the lithics and their inferred age may suggest that hominins other than modern humans were capable of unintentionally modifying their environment. The scale of debris also indicates the significance of stone as a critical resource for hominins and so provides insights into a novel evolutionary ecology.
Turbulence and entrainment length scales in large wind farms.
Andersen, Søren J; Sørensen, Jens N; Mikkelsen, Robert F
2017-04-13
A number of large wind farms are modelled using large eddy simulations to elucidate the entrainment process. A reference simulation without turbines and three farm simulations with different degrees of imposed atmospheric turbulence are presented. The entrainment process is assessed using proper orthogonal decomposition, which is employed to detect the largest and most energetic coherent turbulent structures. The dominant length scales responsible for the entrainment process are shown to grow further into the wind farm, but to be limited in extent by the streamwise turbine spacing, which could be taken into account when developing farm layouts. The self-organized motion of large coherent structures also yields high correlations between the power productions of consecutive turbines, which can be exploited through dynamic farm control. This article is part of the themed issue 'Wind energy in complex terrains'.
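Proper orthogonal decomposition as used above is, numerically, a singular value decomposition of a mean-subtracted snapshot matrix. The NumPy sketch below shows that step on random stand-in data (real input would be LES velocity snapshots); the leading modes and their energy fractions are what one inspects to identify the dominant coherent structures.

```python
import numpy as np

rng = np.random.default_rng(3)

# Snapshot matrix: each column is one flow-field snapshot flattened to a vector
# (random data stands in here for LES velocity fields in a wind farm).
n_points, n_snapshots = 5000, 200
snapshots = rng.standard_normal((n_points, n_snapshots))

# Subtract the temporal mean; the POD modes are the left singular vectors.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
modes, sing_vals, _ = np.linalg.svd(fluct, full_matrices=False)

# Energy content per mode; the leading modes capture the large coherent
# structures whose streamwise extent the paper relates to turbine spacing.
energy = sing_vals**2 / np.sum(sing_vals**2)
print(energy[:5])
```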
Parallel-vector solution of large-scale structural analysis problems on supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1989-01-01
A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
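The core operation, solving K u = f by Choleski factorization followed by forward/back substitution, looks as follows in a serial SciPy sketch. A random symmetric positive-definite matrix stands in for a structural stiffness matrix; the parallel-vector organisation described in the paper is not reproduced here.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(5)

# Build a small symmetric positive-definite stiffness-like matrix K and a load f.
n = 500
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)
f = rng.standard_normal(n)

# Choleski factorisation K = L L^T, then forward/back substitution for u.
c, low = cho_factor(K)
u = cho_solve((c, low), f)
print(np.allclose(K @ u, f))
```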
The Effects of Reducing Tracking in Upper Secondary School: Evidence from a Large-Scale Pilot Scheme
ERIC Educational Resources Information Center
Hall, Caroline
2012-01-01
By exploiting an extensive pilot scheme that preceded an educational reform, this paper evaluates the effects of introducing a more comprehensive upper secondary school system in Sweden. The reform reduced the differences between academic and vocational tracks through prolonging and increasing the academic content of the latter. As a result, all…
Computational Complexity of Bosons in Linear Networks
2017-03-01
photon statistics while strongly reducing emission probabilities: thus leading experimental teams pursuing large-scale BOSONSAMPLING have faced a hard... Potentially, this could motivate new validation protocols exploiting statistics that include this temporal degree of freedom. The impact of... photon statistics polluted by higher-order terms, which can be mistakenly interpreted as decreased photon-indistinguishability. In fact, in many cases
Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi
2017-10-10
We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of arbitrary matrix powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
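The Chebyshev expansion at the heart of such a solver can be illustrated on a small dense matrix: scale the spectrum into [-1, 1], compute expansion coefficients of the target function, and apply the three-term recursion. The NumPy sketch below does this for a Fermi-like smooth step, a stand-in for the density-matrix kernel; CheSS itself operates on sparse matrices and estimates spectral bounds cheaply, neither of which is reproduced in this toy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Small symmetric "Hamiltonian" with a modest spectral width.
n = 200
H = rng.standard_normal((n, n))
H = 0.5 * (H + H.T)

# Scale the spectrum into [-1, 1] (a production code works from spectral bounds).
emin, emax = np.linalg.eigvalsh(H)[[0, -1]]
Hs = (2.0 * H - (emax + emin) * np.eye(n)) / (emax - emin)

def cheb_coeffs(f, order):
    """Chebyshev coefficients of f on [-1, 1] via Chebyshev-Gauss sampling."""
    k = np.arange(order)
    theta = np.pi * (k + 0.5) / order
    fv = f(np.cos(theta))
    c = 2.0 / order * np.cos(np.outer(k, theta)) @ fv
    c[0] *= 0.5
    return c

# Example matrix function: a smooth step (Fermi-like), as used for density matrices.
beta, mu = 20.0, 0.1
f = lambda x: 1.0 / (1.0 + np.exp(beta * (x - mu)))
c = cheb_coeffs(f, order=80)

# Evaluate f(Hs) with the three-term recursion T_{k+1} = 2 Hs T_k - T_{k-1}.
T_prev, T_curr = np.eye(n), Hs.copy()
F = c[0] * T_prev + c[1] * T_curr
for ck in c[2:]:
    T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev
    F += ck * T_curr

# Check against the exact result from diagonalisation.
w, V = np.linalg.eigh(Hs)
F_exact = (V * f(w)) @ V.T
print(np.abs(F - F_exact).max())
```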
Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines
Mikut, Ralf
2017-01-01
Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout.
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
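The problem class here is non-negativity-constrained least squares with one design matrix and very many observation vectors. The SciPy sketch below shows only the naive column-by-column baseline on synthetic data; the fast combinatorial algorithm accelerates exactly this situation by grouping right-hand sides that share the same passive (unconstrained) variable set, so the expensive factorisations are computed once per group rather than once per column.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)

# Many observation vectors (columns of B) sharing one design matrix A,
# as in spectral unmixing: solve min ||A x - b||_2 with x >= 0 per column.
A = np.abs(rng.standard_normal((100, 5)))
X_true = np.abs(rng.standard_normal((5, 1000)))
B = A @ X_true + 0.01 * rng.standard_normal((100, 1000))

# Naive baseline: one active-set NNLS solve per column.
X_hat = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
print(np.abs(X_hat - X_true).mean())
```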
Cloud-based MOTIFSIM: Detecting Similarity in Large DNA Motif Data Sets.
Tran, Ngoc Tam L; Huang, Chun-Hsi
2017-05-01
We developed the cloud-based MOTIFSIM on the Amazon Web Services (AWS) cloud. The tool is an extended version of our web-based tool version 2.0, which was developed based on a novel algorithm for detecting similarity in multiple DNA motif data sets. This cloud-based version further allows researchers to exploit the computing resources available from AWS to detect similarity in multiple large-scale DNA motif data sets resulting from next-generation sequencing technology. The tool is highly scalable, exploiting the expandable computing resources of AWS.
NASA Astrophysics Data System (ADS)
Pettex, Emeline; David, Léa; Authier, Matthieu; Blanck, Aurélie; Dorémus, Ghislain; Falchetto, Hélène; Laran, Sophie; Monestiez, Pascal; Van Canneyt, Olivier; Virgili, Auriane; Ridoux, Vincent
2017-07-01
Scientific investigations in offshore areas are logistically challenging and expensive; therefore, the available knowledge on seabird at-sea distribution and abundance, as well as their seasonal variations, remains limited. To investigate the seasonal variability in seabird distribution and abundance in the North-Western Mediterranean Sea (NWMS), we conducted two large-scale aerial surveys in winter 2011-12 and summer 2012, covering a 181,400 km2 area. Following a strip-transect method, observers recorded a total of 4141 seabird sightings in winter and 2334 in summer, along 32,213 km. Using geostatistical methods, we generated sightings density maps for both seasons, as well as estimates of density and abundance. Most taxa showed seasonal variations in their density and distribution patterns, as they used the area either for wintering or for breeding. Highest densities of seabirds were recorded during winter, although large-sized shearwaters, storm petrels and terns were more abundant during summer. Consequently, with nearly 170,000 seabirds estimated in winter, the total abundance was twice as high in winter as in summer. Coastal waters of the continental shelf were generally more exploited by seabirds, even though some species, such as Mediterranean gulls, black-headed gulls, little gulls and storm petrels, were found at high densities far offshore. Our results revealed areas highly exploited by the seabird community in the NWMS, such as the Gulf of Lion, the Tuscan region, and the area between Corsica and Sardinia. In addition, these large-scale surveys provide a baseline for the monitoring of seabird at-sea distribution, and could inform the EU Marine Strategy Framework Directive.
Biomimetic surface structuring using cylindrical vector femtosecond laser beams
NASA Astrophysics Data System (ADS)
Skoulas, Evangelos; Manousaki, Alexandra; Fotakis, Costas; Stratakis, Emmanuel
2017-03-01
We report on a new, single-step and scalable method to fabricate highly ordered, multi-directional and complex surface structures that mimic the unique morphological features of certain species found in nature. Biomimetic surface structuring was realized by exploiting the unique and versatile angular profile and the electric field symmetry of cylindrical vector (CV) femtosecond (fs) laser beams. It is shown that highly controllable, periodic structures exhibiting sizes at nano-, micro- and dual micro/nano scales can be directly written on Ni upon line and large-area scanning with radial and azimuthal polarization beams. Depending on the irradiation conditions, new complex multi-directional nanostructures, inspired by the Shark's skin morphology, as well as superhydrophobic dual-scale structures mimicking the Lotus' leaf water repellent properties can be attained. It is concluded that the versatility and feature variations of the structures formed are by far superior to those obtained via laser processing with linearly polarized beams. More importantly, by exploiting the capabilities offered by fs CV fields, the present technique can be further extended to fabricate even more complex and unconventional structures. We believe that our approach provides a new concept in laser materials processing, which can be further exploited for expanding the breadth and novelty of applications.
ERIC Educational Resources Information Center
Hemmings, Philip
2006-01-01
This paper looks at ways of ensuring Czech regions and municipalities are fully motivated to make efficiency improvements in public service provision and so help achieve countrywide fiscal sustainability. The very large number of small municipalities in the Czech Republic means that scale economies are difficult to exploit and the policy options…
Bibliography--Unclassified Technical Reports, Special Reports, and Technical Notes: FY 1982.
1982-11-01
in each category are listed in chronological order under seven areas: manpower management, personnel administration, organization management, education...7633). Technical reports listed that have unlimited distribution can also be obtained from the National Technical Information Service, 5285 Port Royal...simulations of manpower systems. This research exploits the technology of computer-managed large-scale data bases. PERSONNEL ADMINISTRATION The personnel
Best Practices in Student Veteran Education: Making a "Veteran-Friendly" Institution
ERIC Educational Resources Information Center
Dillard, Robert J.; Yu, Helen H.
2016-01-01
With the conclusion of major military engagements in Iraq and Afghanistan, U.S. institutions of higher learning are experiencing an inflow of student veterans on a scale not seen since the conclusion of World War II. In response, a large number of American colleges and universities quickly sought to exploit this glut of new students by arbitrarily…
Möllmann, Christian; Conversi, Alessandra; Edwards, Martin
2011-08-23
Abrupt and rapid ecosystem shifts (where major reorganizations of food-web and community structures occur), commonly termed regime shifts, are changes between contrasting and persisting states of ecosystem structure and function. These shifts have been increasingly reported for exploited marine ecosystems around the world from the North Pacific to the North Atlantic. Understanding the drivers and mechanisms leading to marine ecosystem shifts is crucial in developing adaptive management strategies to achieve sustainable exploitation of marine ecosystems. An international workshop on a comparative approach to analysing these marine ecosystem shifts was held at Hamburg University, Institute for Hydrobiology and Fisheries Science, Germany on 1-3 November 2010. Twenty-seven scientists from 14 countries attended the meeting, representing specialists from seven marine regions, including the Baltic Sea, the North Sea, the Barents Sea, the Black Sea, the Mediterranean Sea, the Bay of Biscay and the Scotian Shelf off the Canadian East coast. The goal of the workshop was to conduct the first large-scale comparison of marine ecosystem regime shifts across multiple regional areas, in order to support the development of ecosystem-based management strategies.
Aqueous Two-Phase Systems at Large Scale: Challenges and Opportunities.
Torres-Acosta, Mario A; Mayolo-Deloisa, Karla; González-Valdez, José; Rito-Palomares, Marco
2018-06-07
Aqueous two-phase systems (ATPS) have proved to be an efficient and integrative operation to enhance recovery of industrially relevant bioproducts. Since the discovery of ATPS, a variety of works have been published regarding their scaling from 10 to 1000 L. Although ATPS have achieved high recovery and purity yields, there is still a gap between their bench-scale use and potential industrial applications. In this context, this review paper critically analyzes ATPS scale-up strategies to enhance their potential for industrial adoption. In particular, large-scale operation considerations, different phase separation procedures, the available optimization techniques (univariate, response surface methodology, and genetic algorithms) to maximize recovery and purity, and economic modeling to predict large-scale costs are discussed. Intensifying ATPS to increase the amount of sample processed in each system, developing recycling strategies, and creating highly efficient predictive models remain areas of great significance that can be further exploited with the use of high-throughput techniques. Moreover, the development of novel ATPS can maximize their specificity, increasing the possibilities for future industrial adoption of ATPS. This review work attempts to present the areas of opportunity to increase ATPS attractiveness at industrial levels.
Ficker, Lisa J.; Rahman-Filipiak, Annalise
2015-01-01
This study examines preliminary evidence for the Lichtenberg Financial Decision Rating Scale (LFDRS), a new person-centered approach to assessing capacity to make financial decisions, and its relationship to self-reported cases of financial exploitation in 69 older African Americans. More than one third of individuals reporting financial exploitation also had questionable decisional abilities. Overall, decisional ability score and current decision total were significantly associated with cognitive screening test and financial ability scores, demonstrating good criterion validity. Financially exploited and non-exploited individuals showed mean group differences on the Mini Mental State Exam, the Financial Situational Awareness, Psychological Vulnerability, Current Decisional Ability, and Susceptibility to Undue Influence subscales, and the total Lichtenberg Financial Decision Rating Scale score. Study findings suggest that impaired decisional abilities may render older adults more vulnerable to financial exploitation, and that the LFDRS is a valid tool for measuring both decisional abilities and financial exploitation.
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
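The statistical machinery described, a rank-based correlation estimate mapped to the latent Gaussian scale plus a sparse graphical-model fit, can be sketched with SciPy and scikit-learn as below. The returns are synthetic, the Kendall-tau/sine transform is the standard elliptical-copula estimator, and the graphical lasso stands in for the regularized structure inference; none of the portfolio-construction or rebalancing steps from the paper are included.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(11)

# Synthetic daily returns for 20 hypothetical stocks over about 3 years.
returns = rng.standard_normal((750, 20)) * 0.01

# Rank-based (Kendall's tau) correlation, mapped to the latent Gaussian scale
# via sin(pi/2 * tau), the transform used for elliptical copulas.
p = returns.shape[1]
tau = np.eye(p)
for i in range(p):
    for j in range(i + 1, p):
        tau[i, j] = tau[j, i] = kendalltau(returns[:, i], returns[:, j])[0]
corr = np.sin(np.pi / 2.0 * tau)
np.fill_diagonal(corr, 1.0)

# Sparse precision matrix from the graphical lasso; zero entries indicate
# conditional independence, which guides picking weakly dependent stocks.
_, precision = graphical_lasso(corr, alpha=0.05)
n_links = np.count_nonzero(np.triu(precision, k=1))
print("conditional-dependence links:", n_links)
```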
Tri-track: free software for large-scale particle tracking.
Vallotton, Pascal; Olivier, Sandra
2013-04-01
The ability to correctly track objects in time-lapse sequences is important in many applications of microscopy. Individual object motions typically display a level of dynamic regularity reflecting the existence of an underlying physics or biology. Best results are obtained when this local information is exploited. Additionally, if the particle number is known to be approximately constant, a large number of tracking scenarios may be rejected on the basis that they are not compatible with a known maximum particle velocity. This represents information of a global nature, which should ideally be exploited too. Some time ago, we devised an efficient algorithm that exploited both types of information. The tracking task was reduced to a max-flow min-cost problem instance through a novel graph structure that comprised vertices representing objects from three consecutive image frames. The algorithm is explained here for the first time. A user-friendly implementation is provided, and the specific relaxation mechanism responsible for the method's effectiveness is uncovered. The software is particularly competitive for complex dynamics such as dense antiparallel flows, or in situations where object displacements are considerable. As an application, we characterize a remarkable vortex structure formed by bacteria engaged in interstitial motility.
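The sketch below is deliberately simpler than the method described above: it links detections between just two frames with a gated linear assignment (SciPy's Hungarian solver) rather than the three-frame max-flow min-cost graph of Tri-track, but it illustrates how a known maximum particle velocity is used to exclude implausible tracking scenarios. All positions and thresholds are synthetic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(8)

# Particle positions in two consecutive frames (toy data).
frame0 = rng.uniform(0, 100, size=(50, 2))
frame1 = frame0 + rng.normal(0, 1.5, size=(50, 2))   # small random motion

max_disp = 10.0                       # known maximum particle velocity * dt
cost = cdist(frame0, frame1)
cost[cost > max_disp] = 1e6           # forbid implausible links (gating)

rows, cols = linear_sum_assignment(cost)
links = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e6]
print(len(links), "of", len(frame0), "particles linked")
```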
Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K
2016-07-12
We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Joslyn, Cliff A.; Chappell, Alan R.
As semantic datasets grow to be very large and divergent, there is a need to identify and exploit their inherent semantic structure for discovery and optimization. Towards that end, we present here a novel methodology to identify the semantic structures inherent in an arbitrary semantic graph dataset. We first present the concept of an extant ontology as a statistical description of the semantic relations present amongst the typed entities modeled in the graph. This serves as a model of the underlying semantic structure to aid in discovery and visualization. We then describe a method of ontological scaling in which the ontology is employed as a hierarchical scaling filter to infer different resolution levels at which the graph structures are to be viewed or analyzed. We illustrate these methods on three large and publicly available semantic datasets containing more than one billion edges each.
Large-scale imputation of epigenomic datasets for systematic annotation of diverse human tissues.
Ernst, Jason; Kellis, Manolis
2015-04-01
With hundreds of epigenomic maps, the opportunity arises to exploit the correlated nature of epigenetic signals, across both marks and samples, for large-scale prediction of additional datasets. Here, we undertake epigenome imputation by leveraging such correlations through an ensemble of regression trees. We impute 4,315 high-resolution signal maps, of which 26% are also experimentally observed. Imputed signal tracks show overall similarity to observed signals and surpass experimental datasets in consistency, recovery of gene annotations and enrichment for disease-associated variants. We use the imputed data to detect low-quality experimental datasets, to find genomic sites with unexpected epigenomic signals, to define high-priority marks for new experiments and to delineate chromatin states in 127 reference epigenomes spanning diverse tissues and cell types. Our imputed datasets provide the most comprehensive human regulatory region annotation to date, and our approach and the ChromImpute software constitute a useful complement to large-scale experimental mapping of epigenomic information.
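The regression-tree imputation idea can be illustrated with scikit-learn on synthetic signal tracks: train trees to predict a target mark from the other observed marks where it has been mapped, then predict it where it has not. ChromImpute's real feature set (neighbouring genomic bins, related marks across many samples) and its ensemble construction are richer than this toy, so treat the snippet as a sketch of the principle only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)

# Toy epigenomic signal tracks: rows are genomic bins, columns are observed
# marks/samples; the target mark is missing in the sample we want to impute.
n_bins = 20000
observed = rng.gamma(shape=2.0, scale=1.0, size=(n_bins, 6))
target = observed @ rng.uniform(0.1, 0.5, size=6) + rng.normal(0, 0.2, n_bins)

# Train trees on bins where the target mark was experimentally mapped, then
# predict it genome-wide for the bins where it was not.
train = rng.random(n_bins) < 0.5
model = RandomForestRegressor(n_estimators=50, max_depth=8, n_jobs=-1)
model.fit(observed[train], target[train])
imputed = model.predict(observed[~train])

print(np.corrcoef(imputed, target[~train])[0, 1])
```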
NASA Astrophysics Data System (ADS)
Hartmann, Alfred; Redfield, Steve
1989-04-01
This paper discusses the design of large-scale (1000 x 1000) optical crossbar switching networks for use in parallel processing supercomputers. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performances of alternative multiple access channel communications protocols (unslotted and slotted ALOHA and carrier sense multiple access, CSMA) are compared with the performance of the classic arbitrated bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication to arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to practical exploitation of highly parallel optical interconnects in advanced computer designs.
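The protocol comparison rests on textbook throughput formulas for random-access channels: slotted ALOHA carries at most 1/e of the channel capacity and pure (unslotted) ALOHA half of that, whereas an arbitrated crossbar bus can in principle approach full utilisation. The short calculation below reproduces those peaks; it says nothing about the optical hardware itself.

```python
import numpy as np

# Throughput S vs offered load G (packets per packet time) for the two
# classical random-access protocols compared in the paper.
G = np.linspace(0.01, 5, 500)
S_slotted = G * np.exp(-G)        # slotted ALOHA, peak 1/e ~ 0.37
S_pure = G * np.exp(-2 * G)       # pure ALOHA,    peak 1/(2e) ~ 0.18

print("slotted ALOHA peak:", round(S_slotted.max(), 3),
      "at G =", round(G[S_slotted.argmax()], 2))
print("pure ALOHA peak:   ", round(S_pure.max(), 3),
      "at G =", round(G[S_pure.argmax()], 2))
```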
Unsupervised DInSAR processing chain for multi-scale displacement analysis
NASA Astrophysics Data System (ADS)
Casu, Francesco; Manunta, Michele
2016-04-01
Earth Observation techniques can be very helpful for the estimation of several sources of ground deformation due to their characteristics of large spatial coverage, high resolution and cost effectiveness. In this scenario, Differential Synthetic Aperture Radar Interferometry (DInSAR) is one of the most effective methodologies for its capability to generate spatially dense deformation maps at both global and local spatial scale, with centimeter to millimeter accuracy. DInSAR exploits the phase difference (interferogram) between SAR image pairs relevant to acquisitions gathered at different times, but with the same illumination geometry and from sufficiently close flight tracks, whose separation is typically referred to as baseline. Among several approaches, the SBAS algorithm is one of the most widely used DInSAR techniques; it is aimed at generating displacement time series at a multi-scale level by exploiting a set of small baseline interferograms. SBAS, and DInSAR in general, has benefited from the large availability of spaceborne SAR data collected over the years by several satellite systems, with particular regard to the European ERS and ENVISAT sensors, which have acquired SAR images worldwide during approximately 20 years. Moreover, since 2014 the new generation of Copernicus Sentinel satellites has started to acquire data with a short revisit time (12 days) and a global coverage policy, thus flooding the scientific EO community with an unprecedented amount of data. To efficiently manage such an amount of data, proper processing facilities (such as those offered by emerging Cloud Computing technologies) have to be used, and novel algorithms aimed at their efficient exploitation have to be developed. In this work we present a set of results achieved by exploiting a recently proposed implementation of the SBAS algorithm, namely Parallel-SBAS (P-SBAS), which allows us to effectively process, in an unsupervised way and in a limited time frame, a huge number of SAR images, thus leading to the generation of interferometric products for both global and local scale displacement analysis. Among several examples, we will show a wide-area SBAS displacement processing carried out over southern California, during which the whole ascending ENVISAT data set of more than 740 images has been fully processed on a Cloud Computing environment in less than 9 hours, leading to the generation of a displacement map of about 150,000 square kilometres. The P-SBAS characteristics also allowed us to integrate the algorithm within the ESA Geohazard Exploitation Platform (GEP), which is based on the use of GRID and Cloud Computing facilities, thus making freely available to the EO community a web tool for massive and systematic interferometric displacement time series generation. This work has been partially supported by: the Italian MIUR under the RITMARE project; the CNR-DPC agreement and the ESA GEP project.
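At its core, an SBAS-type inversion recovers a per-pixel phase (displacement) time series from a network of small-baseline interferograms by least squares. The NumPy sketch below builds the design matrix for a toy set of acquisition dates and interferogram pairs and inverts it; real processing additionally handles phase unwrapping, disconnected interferogram subsets (via SVD), atmospheric filtering and orbital errors, none of which appear here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Acquisition dates (days) and a "true" displacement phase history for one pixel.
dates = np.array([0, 35, 70, 105, 140, 175, 210])
true_phase = 0.02 * dates + 0.5 * np.sin(dates / 60.0)

# Small-baseline interferogram pairs (indices into `dates`).
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 5), (5, 6), (4, 6)]
ifg = np.array([true_phase[j] - true_phase[i] for i, j in pairs])
ifg += rng.normal(0, 0.05, size=ifg.size)          # interferometric noise

# Design matrix relating the unknown phases (relative to the first date) to the
# interferometric observations; least squares recovers the time series.
A = np.zeros((len(pairs), len(dates) - 1))
for k, (i, j) in enumerate(pairs):
    if j > 0:
        A[k, j - 1] = 1.0
    if i > 0:
        A[k, i - 1] = -1.0

phase_rel, *_ = np.linalg.lstsq(A, ifg, rcond=None)
print(np.round(phase_rel, 2))
print(np.round(true_phase[1:] - true_phase[0], 2))
```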
Predictive Anomaly Management for Resilient Virtualized Computing Infrastructures
2015-05-27
PREC: Practical Root Exploit Containment for Android Devices, ACM Conference on Data and Application Security and Privacy (CODASPY), 03-MAR-14... 05-OCT-11: Hiep Nguyen, Yongmin Tan, Xiaohui Gu. Propagation-aware Anomaly Localization for Cloud Hosted Distributed Applications, ACM... Workshop on Managing Large-Scale Systems via the Analysis of System Logs and the Application of Machine Learning Techniques (SLAML) in conjunction with SOSP
Cosmic microwave background bispectrum from primordial magnetic fields on large angular scales.
Seshadri, T R; Subramanian, Kandaswamy
2009-08-21
Primordial magnetic fields lead to non-Gaussian signals in the cosmic microwave background (CMB) even at the lowest order, as magnetic stresses and the temperature anisotropy they induce depend quadratically on the magnetic field. In contrast, CMB non-Gaussianity due to inflationary scalar perturbations arises only as a higher-order effect. We propose a novel probe of stochastic primordial magnetic fields that exploits the characteristic CMB non-Gaussianity that they induce. We compute the CMB bispectrum b(l1,l2,l3) induced by such fields on large angular scales. We find a typical value of l1(l1+1) l3(l3+1) b(l1,l2,l3) ≈ 10^(-22) for magnetic fields of strength B0 ≈ 3 nG and with a nearly scale-invariant magnetic spectrum. Observational limits on the bispectrum allow us to set upper limits of B0 ≈ 35 nG.
Cosmological Higgs-Axion Interplay for a Naturally Small Electroweak Scale.
Espinosa, J R; Grojean, C; Panico, G; Pomarol, A; Pujolàs, O; Servant, G
2015-12-18
Recently, a new mechanism to generate a naturally small electroweak scale has been proposed. It exploits the coupling of the Higgs boson to an axionlike field and a long era in the early Universe where the axion unchains a dynamical screening of the Higgs mass. We present a new realization of this idea with the new feature that it leaves no sign of new physics at the electroweak scale, and up to a rather large scale, 10^{9} GeV, except for two very light and weakly coupled axionlike states. One of the scalars can be a viable dark matter candidate. Such a cosmological Higgs-axion interplay could be tested with a number of experimental strategies.
Thinking big: Towards ideal strains and processes for large-scale aerobic biofuels production
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMillan, James D.; Beckham, Gregg T.
In this study, global concerns about anthropogenic climate change, energy security and independence, and environmental consequences of continued fossil fuel exploitation are driving significant public and private sector interest and financing to hasten development and deployment of processes to produce renewable fuels, as well as bio-based chemicals and materials, towards scales commensurate with current fossil fuel-based production. Over the past two decades, anaerobic microbial production of ethanol from first-generation hexose sugars derived primarily from sugarcane and starch has reached significant market share worldwide, with fermentation bioreactor sizes often exceeding the million litre scale. More recently, industrial-scale lignocellulosic ethanol plants are emerging that produce ethanol from pentose and hexose sugars using genetically engineered microbes and bioreactor scales similar to first-generation biorefineries.
From local to national scale DInSAR analysis for the comprehension of Earth's surface dynamics.
NASA Astrophysics Data System (ADS)
De Luca, Claudio; Casu, Francesco; Manunta, Michele; Zinno, Ivana; lanari, Riccardo
2017-04-01
Earth Observation techniques can be very helpful for the estimation of several sources of ground deformation due to their characteristics of large spatial coverage, high resolution and cost effectiveness. In this scenario, Differential Synthetic Aperture Radar Interferometry (DInSAR) is one of the most effective methodologies for its capability to generate spatially dense deformation maps with centimeter to millimeter accuracy. DInSAR exploits the phase difference (interferogram) between SAR image pairs relevant to acquisitions gathered at different times, but with the same illumination geometry and from sufficiently close flight tracks, whose separation is typically referred to as baseline. Among several approaches, the SBAS algorithm is one of the most widely used DInSAR techniques; it is aimed at generating displacement time series at a multi-scale level by exploiting a set of small baseline interferograms. SBAS, and DInSAR in general, has benefited from the large availability of spaceborne SAR data collected over the years by several satellite systems, with particular regard to the European ERS and ENVISAT sensors, which have acquired SAR images worldwide during approximately 20 years. While the application of SBAS to ERS and ENVISAT data at local scale is widely testified, very few examples involving those archives for analysis at huge spatial scale are available in the literature. This is mainly due to the required processing power (in terms of CPUs, memory and storage) and the limited availability of automatic processing procedures (unsupervised tools), which are mandatory requirements for obtaining displacement results in a time-effective way. Accordingly, in this work we present a methodology for generating the Vertical and Horizontal (East-West) components of Earth's surface deformation at very large (national/continental) spatial scale. In particular, it relies on the availability of a set of SAR data collected over an Area of Interest (AoI), which could be some hundreds of thousands of square kilometers wide, from ascending and descending orbits. The exploited SAR data are processed, on a local basis, through the Parallel SBAS (P-SBAS) approach, thus generating the displacement time series and the corresponding mean deformation velocity maps. Subsequently, starting from the so generated DInSAR results, the proposed methodology relies on a proper mosaicking procedure to finally retrieve the mean velocity maps of the Vertical and Horizontal (East-West) deformation components relevant to the overall AoI. This technique makes it possible to account for possible regional trends (tectonic trends) not easily detectable by local scale DInSAR analyses. We tested the proposed methodology with the ENVISAT ASAR archives that have been acquired, from ascending and descending orbits, over California (US), covering an area of about 100,000 km2. The presented methodology can be easily applied also to other SAR satellite data. Above all, it is particularly suitable to deal with the very large data flow provided by the Sentinel-1 constellation, which collects data with a global coverage policy and an acquisition mode specifically designed for interferometric applications.
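Combining ascending and descending line-of-sight (LOS) velocities into vertical and east-west components, as described above, amounts to solving a small linear system per pixel. The NumPy sketch below shows that step with illustrative numbers and one common sign convention; real processing must use the exact LOS unit vectors derived from the orbit geometry, and the north component (poorly constrained by near-polar orbits) is neglected.

```python
import numpy as np

# Line-of-sight mean velocities of the same pixel from ascending and descending
# geometries (mm/yr), plus incidence and heading angles -- all values invented.
v_los = np.array([-4.0, 1.5])                   # [ascending, descending]
incidence = np.radians([23.0, 23.0])
heading = np.radians([-12.0, 192.0])            # satellite headings

# Each LOS observation is a projection of the (east, up) motion; solving the
# 2x2 system recovers the east-west and vertical components.
G = np.column_stack([
    -np.sin(incidence) * np.cos(heading),       # east coefficient (convention-dependent)
    np.cos(incidence),                          # up coefficient
])
v_east, v_up = np.linalg.solve(G, v_los)
print(round(v_east, 2), round(v_up, 2))
```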
Biomimetic surface structuring using cylindrical vector femtosecond laser beams
Skoulas, Evangelos; Manousaki, Alexandra; Fotakis, Costas; Stratakis, Emmanuel
2017-01-01
We report on a new, single-step and scalable method to fabricate highly ordered, multi-directional and complex surface structures that mimic the unique morphological features of certain species found in nature. Biomimetic surface structuring was realized by exploiting the unique and versatile angular profile and the electric field symmetry of cylindrical vector (CV) femtosecond (fs) laser beams. It is shown that highly controllable, periodic structures exhibiting sizes at nano-, micro- and dual micro/nano scales can be directly written on Ni upon line and large-area scanning with radially and azimuthally polarized beams. Depending on the irradiation conditions, new complex multi-directional nanostructures, inspired by the shark's skin morphology, as well as superhydrophobic dual-scale structures mimicking the lotus leaf's water-repellent properties, can be attained. It is concluded that the versatility and variety of the structures formed are far superior to those obtained via laser processing with linearly polarized beams. More importantly, by exploiting the capabilities offered by fs CV fields, the present technique can be further extended to fabricate even more complex and unconventional structures. We believe that our approach provides a new concept in laser materials processing, which can be further exploited for expanding the breadth and novelty of applications. PMID:28327611
Multi-format all-optical processing based on a large-scale, hybridly integrated photonic circuit.
Bougioukos, M; Kouloumentas, Ch; Spyropoulou, M; Giannoulis, G; Kalavrouziotis, D; Maziotis, A; Bakopoulos, P; Harmon, R; Rogers, D; Harrison, J; Poustie, A; Maxwell, G; Avramopoulos, H
2011-06-06
We investigate through numerical studies and experiments the performance of a large scale, silica-on-silicon photonic integrated circuit for multi-format regeneration and wavelength-conversion. The circuit encompasses a monolithically integrated array of four SOAs inside two parallel Mach-Zehnder structures, four delay interferometers and a large number of silica waveguides and couplers. Exploiting phase-incoherent techniques, the circuit is capable of processing OOK signals at variable bit rates, DPSK signals at 22 or 44 Gb/s and DQPSK signals at 44 Gbaud. Simulation studies reveal the wavelength-conversion potential of the circuit with enhanced regenerative capabilities for OOK and DPSK modulation formats and acceptable quality degradation for DQPSK format. Regeneration of 22 Gb/s OOK signals with amplified spontaneous emission (ASE) noise and DPSK data signals degraded with amplitude, phase and ASE noise is experimentally validated demonstrating a power penalty improvement up to 1.5 dB.
A semiparametric graphical modelling approach for large-scale equity selection
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507
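As a rough illustration of the underlying idea (not the authors' exact estimator or selection rule), one can map pairwise rank correlations to a latent Gaussian correlation matrix and run a graphical lasso on it; stocks whose precision-matrix rows contain few off-diagonal nonzeros are then nearly conditionally independent of the rest. The sine transform of Kendall's tau, the regularization level, and the selection rule below are simplified, assumed choices.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

def near_independent_stocks(returns, alpha=0.1, n_select=10):
    """returns: (n_days, n_stocks) array. Select stocks with the sparsest
    conditional-dependence structure in the estimated precision matrix."""
    n, p = returns.shape
    R = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            tau, _ = kendalltau(returns[:, i], returns[:, j])
            R[i, j] = R[j, i] = np.sin(np.pi * tau / 2.0)   # latent Gaussian correlation
    w, V = np.linalg.eigh(R)                                # project to positive definite
    R = V @ np.diag(np.clip(w, 1e-6, None)) @ V.T
    _, precision = graphical_lasso(R, alpha=alpha)
    n_links = (np.abs(precision) > 1e-8).sum(axis=1) - 1    # off-diagonal nonzeros per stock
    return np.argsort(n_links)[:n_select]                   # fewest links ~ most independent
```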
Examining Application Components to Reveal Android Malware
2013-03-01
RGBDroid: a novel response-based approach to android privilege escalation attacks ”. Proceedings of the 5th USENIX conference on Large-Scale Exploits and...Wetherall. “These aren’t the droids you’re looking for: retrofitting android to protect data from imperious applications”. Proceedings of the 18th ACM...copyright protection in the United States. AFIT-ENG-13-M-19 EXAMINING APPLICATION COMPONENTS TO REVEAL ANDROID MALWARE THESIS Presented to the Faculty
LLMapReduce: Multi-Level Map-Reduce for High Performance Data Analysis
2016-05-23
LLMapReduce works with several schedulers such as SLURM, Grid Engine and LSF. Keywords—LLMapReduce; map-reduce; performance; scheduler; Grid Engine ...SLURM; LSF I. INTRODUCTION Large scale computing is currently dominated by four ecosystems: supercomputing, database, enterprise , and big data [1...interconnects [6]), High performance math libraries (e.g., BLAS [7, 8], LAPACK [9], ScaLAPACK [10]) designed to exploit special processing hardware, High
The Tomographic Ionized-Carbon Mapping Experiment (TIME) CII Imaging Spectrometer
NASA Astrophysics Data System (ADS)
Staniszewski, Z.; Bock, J. J.; Bradford, C. M.; Brevik, J.; Cooray, A.; Gong, Y.; Hailey-Dunsheath, S.; O'Brient, R.; Santos, M.; Shirokoff, E.; Silva, M.; Zemcov, M.
2014-09-01
The Tomographic Ionized-Carbon Mapping Experiment (TIME) and TIME-Pilot are proposed imaging spectrometers to measure reionization and large scale structure at redshifts 5-9. We seek to exploit the 158 μm rest-frame emission of [CII], which becomes measurable at 200-300 GHz at reionization redshifts. Here we describe the scientific motivation, give an overview of the proposed instrument, and highlight key technological developments underway to enable these measurements.
ERIC Educational Resources Information Center
Lysaght, Zita; O'Leary, Michael
2017-01-01
Exploiting the potential that Assessment for Learning (AfL) offers to optimise student learning is contingent on both teachers' knowledge and use of AfL and the fidelity with which this translates into their daily classroom practices. Quantitative data derived from the use of an Assessment for Learning Audit Instrument (AfLAI) with a large sample…
Hedging against terrorism: Are US businesses prepared?
Kahan, Jerome H
2015-01-01
Private US companies face risks in connection with financial matters, but are not necessarily prepared to cope with risks that can seriously disrupt or even halt their operations, notably terrorist attacks and natural disasters. Enhancing the resilience of businesses when dealing with terrorism is especially challenging, as these groups or individuals can adapt tactics to exploit the vulnerabilities of companies they wish to target. Business managers need to formulate flexible preparedness plans that reduce risks from large-scale natural disasters as well as terrorist attacks. In doing so, they can take advantage of post-9/11 US government guidance for these endeavours as well as programmes that eliminate risks to private insurance entities so they can issue policies that cover terrorist strikes of high consequences. Just as business executives use hedging strategies in the world of finance, they also need operational hedging strategies as a means of exploiting as well as lowering the risks surrounding future uncertainties. Resources devoted to planning and hedging are investments that can increase the odds of businesses surviving and thriving, even if they experience high-impact terrorist attacks, threats or large-scale natural disasters, making suppliers, customers and stakeholders happy. The purpose of this paper is to give executives the incentive to take steps to do just that.
Koski, Matthew H; Ison, Jennifer L; Padilla, Ashley; Pham, Angela Q; Galloway, Laura F
2018-06-13
Seemingly mutualistic relationships can be exploited, in some cases reducing fitness of the exploited species. In plants, the insufficient receipt of pollen limits reproduction. While infrequent pollination commonly underlies pollen limitation (PL), frequent interactions with low-efficiency, exploitative pollinators may also cause PL. In the widespread protandrous herb Campanula americana, visitation by three pollinators explained 63% of the variation in PL among populations spanning the range. Bumblebees and the medium-sized Megachile campanulae enhanced reproductive success, but small solitary bees exacerbated PL. To dissect mechanisms behind these relationships, we scored sex-specific floral visitation, and the contributions of each pollinator to plant fitness using single flower visits. Small bees and M. campanulae overvisited male-phase flowers, but bumblebees frequently visited female-phase flowers. Fewer bumblebee visits were required to saturate seed set compared to other bees. Scaling pollinator efficiency metrics to populations, small bees deplete large amounts of pollen due to highly male-biased flower visitation and infrequent pollen deposition. Thus, small bees reduce plant reproduction by limiting pollen available for transfer by efficient pollinators, and appear to exploit the plant-pollinator mutualism, acting as functional parasites of C. americana. It is therefore unlikely that small bees will compensate for reproductive failure in C. americana when bumblebees are scarce. © 2018 The Author(s).
Successful commercialization of nanophotonic technology
NASA Astrophysics Data System (ADS)
Jaiswal, Supriya L.; Clarke, Roger B. M.; Hyde, Sam C. W.
2006-08-01
The exploitation of nanotechnology from proof of principle to realizable commercial applications encounters considerable challenges with regard to high-volume, large-scale, low-cost manufacturability and social ethics. This has led to concerns over converting powerful intellectual property into realizable, industry-attractive technologies. At The Technology Partnership we specifically address the issue of successful integration of nanophotonics into industry in markets such as biomedical, ophthalmic, energy, telecommunications, and packaging. In this paper we draw on a few examples where we have either developed industrial-scale nanophotonic technology or engineering platforms that may be used to fortify nano/microphotonic technologies and enhance their commercial viability.
Experiment-scale molecular simulation study of liquid crystal thin films
NASA Astrophysics Data System (ADS)
Nguyen, Trung Dac; Carrillo, Jan-Michael Y.; Matheson, Michael A.; Brown, W. Michael
2014-03-01
Supercomputers have now reached a performance level adequate for studying thin films with molecular detail at the relevant scales. By exploiting the power of GPU accelerators on Titan, we have been able to perform simulations of characteristic liquid crystal films that provide remarkable qualitative agreement with experimental images. We have demonstrated that key features of spinodal instability can only be observed with sufficiently large system sizes, which were not accessible with previous simulation studies. Our study emphasizes the capability and significance of petascale simulations in providing molecular-level insights in thin film systems as well as other interfacial phenomena.
NASA Astrophysics Data System (ADS)
Faes, Luca; Nollo, Giandomenico; Stramaglia, Sebastiano; Marinazzo, Daniele
2017-10-01
In the study of complex physical and biological systems represented by multivariate stochastic processes, an issue of great relevance is the description of the system dynamics spanning multiple temporal scales. While methods to assess the dynamic complexity of individual processes at different time scales are well established, multiscale analysis of directed interactions has never been formalized theoretically, and empirical evaluations are complicated by practical issues such as filtering and downsampling. Here we extend the very popular measure of Granger causality (GC), a prominent tool for assessing directed lagged interactions between joint processes, to quantify information transfer across multiple time scales. We show that the multiscale processing of a vector autoregressive (AR) process introduces a moving average (MA) component, and describe how to represent the resulting ARMA process using state space (SS) models and to combine the SS model parameters for computing exact GC values at arbitrarily large time scales. We exploit the theoretical formulation to identify peculiar features of multiscale GC in basic AR processes, and demonstrate with numerical simulations the much larger estimation accuracy of the SS approach compared to pure AR modeling of filtered and downsampled data. The improved computational reliability is exploited to disclose meaningful multiscale patterns of information transfer between global temperature and carbon dioxide concentration time series, both in paleoclimate and in recent years.
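For reference, the classical single-scale Granger causality estimate that the multiscale extension builds on can be written as the log ratio of restricted to full residual variances from fitted autoregressions. The sketch below is that textbook definition, not the authors' state-space multiscale formulation, which is designed precisely to avoid the biases introduced by naively filtering and downsampling before applying such an estimator; the lag order and function name are assumptions.

```python
import numpy as np

def granger_causality(x, y, p=5):
    """Granger causality from y to x: log(var(restricted residuals) / var(full residuals)),
    with the restricted model using only p lags of x and the full model adding p lags of y."""
    n = len(x)
    X_full, X_restr, target = [], [], []
    for t in range(p, n):
        lags_x = x[t - p:t][::-1]
        lags_y = y[t - p:t][::-1]
        X_restr.append(lags_x)
        X_full.append(np.concatenate([lags_x, lags_y]))
        target.append(x[t])
    X_full, X_restr, target = map(np.asarray, (X_full, X_restr, target))
    res_full = target - X_full @ np.linalg.lstsq(X_full, target, rcond=None)[0]
    res_restr = target - X_restr @ np.linalg.lstsq(X_restr, target, rcond=None)[0]
    return np.log(np.var(res_restr) / np.var(res_full))
```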
The 'dirty downside' of global sporting events: focus on human trafficking for sexual exploitation.
Finkel, R; Finkel, M L
2015-01-01
Human trafficking is a complex human rights and public health issue. The issue of human trafficking for sexual exploitation at large global sporting events has proven to be elusive given the clandestine nature of the industry. This piece examines the issue from a public health perspective. This is a literature review of the 'most comprehensive' studies published on the topic. A PubMed search was done using MeSH terms 'human trafficking' and 'sex trafficking' and 'human rights abuses'. Subheadings included 'statistics and numerical data', 'legislation and jurisprudence', 'prevention and control', and 'therapy'. Only papers published in English were reviewed. The search showed that very few well-designed empirical studies have been conducted on the topic and only one pertinent systematic review was identified. Findings show a high prevalence of physical violence among those trafficked compared to non-trafficked women. Sexually transmitted infections and HIV/AIDS are prevalent and preventive care is virtually non-existent. Quantifying human trafficking for sexual exploitation at large global sporting events has proven to be elusive given the clandestine nature of the industry. This is not to say that human trafficking for sex as well as forced sexual exploitation does not occur. It almost certainly exists, but to what extent is the big question. It is a hidden problem on a global scale in plain view with tremendous public health implications. Copyright © 2014 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
GLAD: a system for developing and deploying large-scale bioinformatics grid.
Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong
2005-03-01
Grid computing is used to solve large-scale bioinformatics problems with gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware, which exploits task-based parallelism. Two bioinformatics benchmark applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.
Big Data Analytics with Datalog Queries on Spark.
Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo
2016-01-01
There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.
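To make the recursion that such systems target concrete, the sketch below evaluates the textbook recursive Datalog query (transitive closure over an edge relation) using semi-naive evaluation in plain Python. It illustrates the semantics only; it is not the BigDatalog or Spark API, and the relation and function names are assumptions.

```python
def transitive_closure(edges):
    """edges: set of (src, dst) tuples; returns all reachable (src, dst) pairs.
    Datalog program:  tc(X, Y) <- edge(X, Y).
                      tc(X, Z) <- tc(X, Y), edge(Y, Z)."""
    tc = set(edges)          # facts derived so far
    delta = set(edges)       # facts newly derived in the last iteration
    while delta:
        # Semi-naive step: join only the *new* facts against edge, then keep what is new.
        new = {(x, z) for (x, y1) in delta for (y2, z) in edges if y1 == y2}
        delta = new - tc
        tc |= delta
    return tc

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
```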
Optimization and large scale computation of an entropy-based moment closure
NASA Astrophysics Data System (ADS)
Kristopher Garrett, C.; Hauck, Cory; Hill, Judith
2015-12-01
We present computational advances and results in the implementation of an entropy-based moment closure, M_N, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as P_N, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the M_N algorithm that do not appear for the P_N algorithm. We also observe that in weak scaling tests, the ratio in time to solution of M_N to P_N decreases.
NASA Technical Reports Server (NTRS)
Von Puttkamer, J.
1978-01-01
Manned spaceflight is considered within the framework of two broad categories: human exploitation of space for economic or scientific gain, and human habitation of space as a place where man may live, grow, and actualize himself. With the advent of the Space Shuttle, exploitation of space will take the form of new product development. This will continue during the 1990s as the new products are manufactured on a scale large enough to be profitable. The turn of the century should see major industries in space, and large space habitats. Thus, the question of mankind's existential needs arises. In addition to basic physical needs, the spiritual and cultural requirements of human beings must be considered. The impact of man's presence in space upon human culture in general is discussed with reference to international cooperation, public interest in space programs, scientific advancement, the basic urge to explore, and the destiny of mankind as a whole, which will become free of external constraints as we step into the cosmos.
A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.
Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang
2016-04-01
Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all information to improve the ranking performance becomes a new challenging problem. Previous methods only utilize part of such information and attempt to rank graph nodes according to link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms the algorithms that consider such graph information only partially.
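As a simplified illustration of injecting prior knowledge into link-based ranking (not the authors' SSP parameterization or learning framework), one can bias the PageRank teleportation vector with supervision-derived seed scores; the function name, damping factor, and dense-matrix formulation below are assumptions made for brevity.

```python
import numpy as np

def seeded_pagerank(A, seed_scores, damping=0.85, tol=1e-10, max_iter=1000):
    """A: (n, n) adjacency matrix; seed_scores: nonnegative prior relevance scores
    (e.g., derived from labels or node features). Dangling nodes simply leak rank
    mass here, which is acceptable for a sketch."""
    n = A.shape[0]
    out_deg = A.sum(axis=1, keepdims=True)
    P = np.divide(A, out_deg, out=np.zeros_like(A, dtype=float), where=out_deg > 0)
    v = seed_scores / seed_scores.sum()              # teleport vector biased by supervision
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = damping * P.T @ r + (1 - damping) * v
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r
```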
NASA Astrophysics Data System (ADS)
Shi, X.
2015-12-01
As NSF indicated - "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and geosciences. With the exponential growth of geodata, the challenge of scalable and high-performance computing for big data analytics becomes urgent because many research activities are constrained by software or tools that cannot complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions to employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise to achieve scalability and high performance by exploiting task and data levels of parallelism that are not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data as proved by our prior works, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.
Regional reanalysis without local data: Exploiting the downscaling paradigm
NASA Astrophysics Data System (ADS)
von Storch, Hans; Feser, Frauke; Geyer, Beate; Klehmet, Katharina; Li, Delei; Rockel, Burkhardt; Schubert-Frisius, Martina; Tim, Nele; Zorita, Eduardo
2017-08-01
This paper demonstrates two important aspects of regional dynamical downscaling of multidecadal atmospheric reanalysis. First, that in this way skillful regional descriptions of multidecadal climate variability may be constructed in regions with little or no local data. Second, that the concept of large-scale constraining allows global downscaling, so that global reanalyses may be completed by additions of consistent detail in all regions of the world. Global reanalyses suffer from inhomogeneities. However, their large-scale components are mostly homogeneous; therefore, the concept of downscaling may be applied to homogeneously complement the large-scale state of the reanalyses with regional detail—wherever the condition of homogeneity of the description of large scales is fulfilled. Technically, this can be done by dynamical downscaling using a regional or global climate model, whose large scales are constrained by spectral nudging. This approach has been developed and tested for the region of Europe, and a skillful representation of regional weather risks—in particular marine risks—was identified. We have run this system in regions with reduced or absent local data coverage, such as Central Siberia, the Bohai and Yellow Sea, Southwestern Africa, and the South Atlantic. Also, a global simulation was computed, which adds regional features to prescribed global dynamics. Our cases demonstrate that spatially detailed reconstructions of the climate state and its change in the recent three to six decades add useful supplementary information to existing observational data for midlatitude and subtropical regions of the world.
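A minimal sketch of the large-scale constraining idea follows, assuming a doubly periodic 2-D field and a simple low-wavenumber cutoff; real implementations typically nudge selected prognostic variables (often the winds) above the boundary layer with height-dependent weights, so the cutoff, relaxation coefficient, and function name here are illustrative assumptions.

```python
import numpy as np

def spectral_nudge(model_field, driving_field, n_keep=4, alpha=0.1):
    """Nudge only the large scales of a 2-D model field towards the driving
    (reanalysis) field: keep the lowest n_keep wavenumbers in each direction
    of the difference field and add a fraction alpha of that increment."""
    diff = np.fft.fft2(driving_field - model_field)
    mask = np.zeros_like(diff)
    mask[:n_keep, :n_keep] = 1        # low-wavenumber corner blocks in FFT layout
    mask[:n_keep, -n_keep:] = 1
    mask[-n_keep:, :n_keep] = 1
    mask[-n_keep:, -n_keep:] = 1
    large_scale_increment = np.real(np.fft.ifft2(diff * mask))
    return model_field + alpha * large_scale_increment
```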
Scalable Visual Analytics of Massive Textual Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.
2007-04-01
This paper describes the first scalable implementation of the text processing engine used in Visual Analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.
The Search for Efficiency in Arboreal Ray Tracing Applications
NASA Astrophysics Data System (ADS)
van Leeuwen, M.; Disney, M.; Chen, J. M.; Gomez-Dans, J.; Kelbe, D.; van Aardt, J. A.; Lewis, P.
2016-12-01
Forest structure significantly impacts a range of abiotic conditions, including humidity and the radiation regime, all of which affect the rate of net and gross primary productivity. Current forest productivity models typically consider abstract media to represent the transfer of radiation within the canopy. Examples include the representation of forest structure via a layered canopy model, where leaf area and inclination angles are stratified with canopy depth, or as turbid media where leaves are randomly distributed within space or within confined geometric solids such as blocks, spheres or cones. While these abstract models are known to produce accurate estimates of primary productivity at the stand level, their limited geometric resolution restricts applicability at fine spatial scales, such as the cell, leaf or shoot levels, thereby not addressing the full potential of assimilating data from laboratory and field measurements with that of remote sensing technology. Recent research efforts have explored the use of laser scanning to capture detailed tree morphology at millimeter accuracy. These data can subsequently be used to combine ray tracing with primary productivity models, providing an ability to explore trade-offs among different morphological traits or assimilate data from spatial scales spanning the leaf to the stand level. Ray tracing has a major advantage of allowing the most accurate structural description of the canopy, and can directly exploit new 3D structural measurements, e.g., from laser scanning. However, the biggest limitation of ray tracing models is their high computational cost, which currently limits their use for large-scale applications. In this talk, we explore ways to more efficiently exploit ray tracing simulations and capture this information in a readily computable form for future evaluation, thus potentially enabling large-scale first-principles forest growth modelling applications.
NASA Astrophysics Data System (ADS)
Wüest, Robert; Nebiker, Stephan
2018-05-01
In this paper we present an app framework for augmenting large-scale walkable maps and orthoimages in museums or public spaces using standard smartphones and tablets. We first introduce a novel approach for using huge orthoimage mosaic floor prints covering several hundred square meters as natural Augmented Reality (AR) markers. We then present a new app architecture and subsequent tests in the Swissarena of the Swiss National Transport Museum in Lucerne demonstrating the capabilities of accurately tracking and augmenting different map topics, including dynamic 3d data such as live air traffic. The resulting prototype was tested with everyday visitors of the museum to get feedback on the usability of the AR app and to identify pitfalls when using AR in the context of a potentially crowded museum. The prototype is to be rolled out to the public after successful testing and optimization of the app. We were able to show that AR apps on standard smartphone devices can dramatically enhance the interactive use of large-scale maps for different purposes such as education or serious gaming in a museum context.
Sign: large-scale gene network estimation environment for high performance computing.
Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .
Modeling CMB lensing cross correlations with CLEFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modi, Chirag; White, Martin; Vlah, Zvonimir, E-mail: modichirag@berkeley.edu, E-mail: mwhite@berkeley.edu, E-mail: zvlah@stanford.edu
2017-08-01
A new generation of surveys will soon map large fractions of sky to ever greater depths and their science goals can be enhanced by exploiting cross correlations between them. In this paper we study cross correlations between the lensing of the CMB and biased tracers of large-scale structure at high z. We motivate the need for more sophisticated bias models for modeling increasingly biased tracers at these redshifts and propose the use of perturbation theories, specifically Convolution Lagrangian Effective Field Theory (CLEFT). Since such signals reside at large scales and redshifts, they can be well described by perturbative approaches. We compare our model with the current approach of using scale-independent bias coupled with fitting functions for non-linear matter power spectra, showing that the latter will not be sufficient for upcoming surveys. We illustrate our ideas by estimating σ8 from the auto- and cross-spectra of mock surveys, finding that CLEFT returns accurate and unbiased results at high z. We discuss uncertainties due to the redshift distribution of the tracers, and several avenues for future development.
Extending large-scale forest inventories to assess urban forests.
Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter
2012-03-01
Urban areas are continuously expanding today, extending their influence on an increasingly large proportion of woods and trees located in or nearby urban and urbanizing areas, the so-called urban forests. Although these forests have the potential for significantly improving the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be achieved from the first phase of most large-scale forest inventories. A simulation study is carried out in order to check the performance of the considered estimators under various situations involving the spatial distribution of the urban forests over the study area. An application is worked out on the data from the Italian NFI.
Predicting the propagation of concentration and saturation fronts in fixed-bed filters.
Callery, O; Healy, M G
2017-10-15
The phenomenon of adsorption is widely exploited across a range of industries to remove contaminants from gases and liquids. Much recent research has focused on identifying low-cost adsorbents which have the potential to be used as alternatives to expensive industry standards like activated carbons. Evaluating these emerging adsorbents entails a considerable amount of labor-intensive and costly testing and analysis. This study proposes a simple, low-cost method to rapidly assess the potential of novel media for use in large-scale adsorption filters. The filter media investigated in this study were low-cost adsorbents which have been found to be capable of removing dissolved phosphorus from solution, namely: i) aluminum drinking water treatment residual, and ii) crushed concrete. Data collected from multiple small-scale column tests was used to construct a model capable of describing and predicting the progression of adsorbent saturation and the associated effluent concentration breakthrough curves. This model was used to predict the performance of long-term, large-scale filter columns packed with the same media. The approach proved highly successful, and just 24-36 h of experimental data from the small-scale column experiments were found to provide sufficient information to predict the performance of the large-scale filters for up to three months. Copyright © 2017 Elsevier Ltd. All rights reserved.
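One common way to describe such breakthrough data with very few parameters is a logistic (Yoon-Nelson type) curve fitted to small-column observations. The model choice and the data values below are illustrative assumptions, not necessarily the model or measurements used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def yoon_nelson(t, k, tau):
    """Effluent/influent concentration ratio C/C0 as a logistic breakthrough curve;
    k is a rate constant [1/h], tau the time to 50% breakthrough [h]."""
    return 1.0 / (1.0 + np.exp(k * (tau - t)))

# Hypothetical small-column data: time in hours, measured C/C0 for dissolved phosphorus.
t = np.array([1, 5, 10, 20, 30, 40, 50], dtype=float)
c_ratio = np.array([0.01, 0.03, 0.10, 0.35, 0.62, 0.85, 0.95])
(k_fit, tau_fit), _ = curve_fit(yoon_nelson, t, c_ratio, p0=[0.1, 25.0])
print(f"rate constant k = {k_fit:.3f} 1/h, 50% breakthrough time tau = {tau_fit:.1f} h")
```

Once fitted on short small-scale runs, such a curve can be rescaled to the bed depth and flow rate of a larger column to project when breakthrough is expected, which is the spirit of the rapid assessment the abstract describes.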
A scalable multi-photon coincidence detector based on superconducting nanowires.
Zhu, Di; Zhao, Qing-Yuan; Choi, Hyeongrak; Lu, Tsung-Ju; Dane, Andrew E; Englund, Dirk; Berggren, Karl K
2018-06-04
Coincidence detection of single photons is crucial in numerous quantum technologies and usually requires multiple time-resolved single-photon detectors. However, the electronic readout becomes a major challenge when the measurement basis scales to large numbers of spatial modes. Here, we address this problem by introducing a two-terminal coincidence detector that enables scalable readout of an array of detector segments based on superconducting nanowire microstrip transmission line. Exploiting timing logic, we demonstrate a sixteen-element detector that resolves all 136 possible single-photon and two-photon coincidence events. We further explore the pulse shapes of the detector output and resolve up to four-photon events in a four-element device, giving the detector photon-number-resolving capability. This new detector architecture and operating scheme will be particularly useful for multi-photon coincidence detection in large-scale photonic integrated circuits.
NASA Astrophysics Data System (ADS)
Langouët, Loïc; Daire, Marie-Yvane
2009-12-01
The present-day maritime landscape of Western France forms the geographical framework for a recent research project dedicated to the archaeological study of ancient fish-traps, combining regional-scale and site-scale investigations. Based on the compilation and exploitation of a large unpublished dataset including more than 550 sites, a preliminary synthetic study allows us to present some examples of synchronic and thematic approaches, and propose a morphological classification of the weirs. These encouraging first results open up new perspectives on fish-trap chronology closely linked to wider studies on Holocene sea-level changes.
Lichtenberg, Peter A; Ficker, Lisa; Rahman-Filipiak, Analise; Tatro, Ron; Farrell, Cynthia; Speir, James J; Mall, Sanford J; Simasko, Patrick; Collens, Howard H; Jackman, John Daniel
2016-01-01
One of the challenges in preventing the financial exploitation of older adults is that neither criminal justice nor noncriminal justice professionals are equipped to detect capacity deficits. Because decision-making capacity is a cornerstone assessment in cases of financial exploitation, effective instruments for measuring this capacity are essential. We introduce a new screening scale for financial decision making that can be administered to older adults. To explore the scale's implementation and assess construct validity, we conducted a pilot study of 29 older adults seen by APS (Adult Protective Services) workers and 79 seen by other professionals. Case examples are included.
Bio-inspired wooden actuators for large scale applications.
Rüggeberg, Markus; Burgert, Ingo
2015-01-01
Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules.
Bio-Inspired Wooden Actuators for Large Scale Applications
Rüggeberg, Markus; Burgert, Ingo
2015-01-01
Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules. PMID:25835386
Fine-scale flight strategies of gulls in urban airflows indicate risk and reward in city living
Shepard, Emily L. C.
2016-01-01
Birds modulate their flight paths in relation to regional and global airflows in order to reduce their travel costs. Birds should also respond to fine-scale airflows, although the incidence and value of this remains largely unknown. We resolved the three-dimensional trajectories of gulls flying along a built-up coastline, and used computational fluid dynamic models to examine how gulls reacted to airflows around buildings. Birds systematically altered their flight trajectories with wind conditions to exploit updraughts over features as small as a row of low-rise buildings. This provides the first evidence that human activities can change patterns of space-use in flying birds by altering the profitability of the airscape. At finer scales still, gulls varied their position to select a narrow range of updraught values, rather than exploiting the strongest updraughts available, and their precise positions were consistent with a strategy to increase their velocity control in gusty conditions. Ultimately, strategies such as these could help unmanned aerial vehicles negotiate complex airflows. Overall, airflows around fine-scale features have profound implications for flight control and energy use, and consideration of this could lead to a paradigm-shift in the way ecologists view the urban environment. This article is part of the themed issue ‘Moving in a moving medium: new perspectives on flight’. PMID:27528784
Large scale shell model study of nuclear spectroscopy in nuclei around 132Sn
NASA Astrophysics Data System (ADS)
Lo Iudice, N.; Bianco, D.; Andreozzi, F.; Porrino, A.; Knapp, F.
2012-10-01
The properties of low-lying 2+ states in chains of nuclei in the proximity of the magic number N=82 are investigated within a new shell model approach exploiting an iterative algorithm alternative to Lanczos. The calculation yields levels and transition strengths in overall good agreement with experiments. The comparative analysis of the E2 and M1 transitions supports, in many cases, the scheme provided by the interacting boson model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, Edmond
Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations, and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the factorization that is desired.
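A minimal sketch of this idea, assuming the factorization is an incomplete LU on the sparsity pattern of A with a nonzero diagonal: each entry of L and U satisfies one bilinear equation, and repeated sweeps (synchronous Jacobi-style here; in practice the updates run asynchronously across threads) drive the entries toward the fixed point. The dense arrays and function name are simplifications introduced for clarity.

```python
import numpy as np

def fixed_point_ilu(A, sweeps=5):
    """Fixed-point sweeps for an incomplete LU factorization on the pattern of A.
    For i > j:  l_ij = (a_ij - sum_{k<j} l_ik u_kj) / u_jj
    For i <= j: u_ij =  a_ij - sum_{k<i} l_ik u_kj
    L has a unit diagonal; A is assumed to have nonzero diagonal entries."""
    pattern = list(zip(*np.nonzero(A)))
    L, U = np.tril(A, -1), np.triu(A)          # initial guess
    np.fill_diagonal(L, 1.0)
    for _ in range(sweeps):
        L_new, U_new = L.copy(), U.copy()
        for (i, j) in pattern:
            s = sum(L[i, k] * U[k, j] for k in range(min(i, j)))
            if i > j:
                L_new[i, j] = (A[i, j] - s) / U[j, j]
            else:
                U_new[i, j] = A[i, j] - s
        L, U = L_new, U_new
    return L, U
```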
NASA Astrophysics Data System (ADS)
Spence, C. M.; Brown, C.; Doss-Gollin, J.
2016-12-01
Climate model projections are commonly used for water resources management and planning under nonstationarity, but they do not reliably reproduce intense short-term precipitation and are instead more skilled at broader spatial scales. To provide a credible estimate of flood trend that reflects climate uncertainty, we present a framework that exploits the connections between synoptic-scale oceanic and atmospheric patterns and local-scale flood-producing meteorological events to develop long-term flood hazard projections. We demonstrate the method for the Iowa River, where high flow episodes have been found to correlate with tropical moisture exports that are associated with a pressure dipole across the eastern continental United States. We characterize the relationship between flooding on the Iowa River and this pressure dipole through a nonstationary Pareto-Poisson peaks-over-threshold probability distribution estimated based on the historic record. We then combine the results of a trend analysis of the dipole index in the historic record with the results of a trend analysis of the dipole index as simulated by General Circulation Models (GCMs) under climate change conditions through a Bayesian framework. The resulting nonstationary posterior distribution of the dipole index, combined with the dipole-conditioned peaks-over-threshold flood frequency model, connects local flood hazard to changes in large-scale atmospheric pressure and circulation patterns that are related to flooding in a process-driven framework. The Iowa River example demonstrates that the resulting nonstationary, probabilistic flood hazard projection may be used to inform risk-based flood adaptation decisions.
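A hedged sketch of one way to set up such a covariate-dependent peaks-over-threshold likelihood follows; the exact parameterization used in the study may differ, and the synthetic inputs stand in for threshold exceedances of Iowa River peak flows and the dipole index series. Here the exceedance occurrence rate depends log-linearly on the dipole index, while exceedance magnitudes follow a generalized Pareto distribution with constant scale and shape.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genpareto

def neg_log_lik(params, exceedances, dipole_at_events, dipole_all, dt=1.0):
    """Poisson occurrence rate lambda(t) = exp(a + b*dipole(t)); GP magnitudes."""
    a, b, log_scale, shape = params
    rate = np.exp(a + b * dipole_all)                           # intensity per time step
    ll_occ = np.sum(a + b * dipole_at_events) - np.sum(rate) * dt
    ll_mag = np.sum(genpareto.logpdf(exceedances, c=shape, scale=np.exp(log_scale)))
    return -(ll_occ + ll_mag)

# Synthetic stand-in data for illustration only.
rng = np.random.default_rng(1)
dipole_all = rng.standard_normal(5000)
events = rng.random(5000) < np.exp(-4.0 + 0.8 * dipole_all)     # covariate-driven occurrences
dipole_at_events = dipole_all[events]
exceedances = genpareto.rvs(c=0.1, scale=100.0, size=events.sum(), random_state=2)

res = minimize(neg_log_lik, x0=[-3.0, 0.5, np.log(50.0), 0.05],
               args=(exceedances, dipole_at_events, dipole_all), method="Nelder-Mead")
print("fitted a, b, scale, shape:", res.x[0], res.x[1], np.exp(res.x[2]), res.x[3])
```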
Continuous data assimilation for downscaling large-footprint soil moisture retrievals
NASA Astrophysics Data System (ADS)
Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.
2016-10-01
Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture is found to be different from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equations and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse grid measurements and the fine grid model solution, is added to the model equations to constrain the model's large scale variability by available measurements. Soil moisture fields generated at a fine resolution by a physically-based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse resolution observations. This enables nudging of the model outputs towards values that honor the coarse resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine scale soil moisture fields across large extents, based on coarse scale observations. A likely application of this approach is the generation of fine- and intermediate-resolution soil moisture fields conditioned on the radiometer-based, coarse-resolution products from remote sensing satellites.
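The nudging construction can be illustrated on a toy 1-D diffusion model: the state is advanced on the fine grid while a term proportional to the misfit between interpolants of the coarse observations and of the coarse-sampled model state pulls the large scales toward the data. The grid sizes, coefficients, and the diffusion model itself are illustrative assumptions, not the HYDRUS setup used in the study.

```python
import numpy as np

def cda_step(u, obs_coarse, coarse_x, fine_x, dt=2e-5, dx=1e-2, kappa=1.0, mu=50.0):
    """One explicit step of a periodic 1-D diffusion model with a CDA nudging term
    mu*(I_h(obs) - I_h(u)), where I_h interpolates coarse-grid values to the fine grid."""
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    obs_fine = np.interp(fine_x, coarse_x, obs_coarse)       # interpolant of coarse obs
    u_at_coarse = np.interp(coarse_x, fine_x, u)             # model sampled at coarse points
    u_fine_from_coarse = np.interp(fine_x, coarse_x, u_at_coarse)
    return u + dt * (kappa * lap + mu * (obs_fine - u_fine_from_coarse))

# Toy usage: nudge a zero initial state towards coarse samples of a "true" field.
fine_x = np.linspace(0.0, 1.0, 100, endpoint=False)
coarse_x = fine_x[::10]
truth = np.sin(2 * np.pi * fine_x)
u = np.zeros_like(truth)
for _ in range(20000):
    u = cda_step(u, truth[::10], coarse_x, fine_x)
print("RMSE after assimilation:", float(np.sqrt(np.mean((u - truth) ** 2))))
```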
Multilevel Hierarchical Kernel Spectral Clustering for Real-Life Large Scale Complex Networks
Mall, Raghvendra; Langone, Rocco; Suykens, Johan A. K.
2014-01-01
Kernel spectral clustering corresponds to a weighted kernel principal component analysis problem in a constrained optimization framework. The primal formulation leads to an eigen-decomposition of a centered Laplacian matrix at the dual level. The dual formulation makes it possible to build a model on a representative subgraph of the large scale network in the training phase, and the model parameters are estimated in the validation stage. The KSC model has a powerful out-of-sample extension property which allows cluster affiliation for the unseen nodes of the big data network. In this paper we exploit the structure of the projections in the eigenspace during the validation stage to automatically determine a set of increasing distance thresholds. We use these distance thresholds in the test phase to obtain multiple levels of hierarchy for the large scale network. The hierarchical structure in the network is determined in a bottom-up fashion. We empirically showcase that real-world networks have multilevel hierarchical organization which cannot be detected efficiently by several state-of-the-art large scale hierarchical community detection techniques like the Louvain, OSLOM and Infomap methods. We show that a major advantage of our proposed approach is the ability to locate good quality clusters at both the finer and coarser levels of hierarchy using internal cluster quality metrics on 7 real-life networks. PMID:24949877
NASA Astrophysics Data System (ADS)
Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.
2018-01-01
This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that using E-HYPE with this upscaling methodology makes it possible to simulate, to the correct order of magnitude, the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.
Examples of Sentinel-2A Mission Exploitation Results
NASA Astrophysics Data System (ADS)
Koetz, Benjamin; Hoersch, Bianca; Gascon, Ferran; Desnos, Yves-Louis; Seifert, Frank Martin; Paganini, Marc; Ramoino, Fabrizio; Arino, Olivier
2017-04-01
The Sentinel-2 Copernicus mission will bring a significant breakthrough in the exploitation of space-borne optical data. Sentinel-2 time series will transform land cover, agriculture, forestry, in-land water and coastal EO applications from mapping to monitoring, from snapshot to time series data analysis, from image-based to pixel-based processing. The 5-day temporal revisit of the Sentinel-2 satellites, once both units are operated together, will usher in a new era for time series analysis at high spatial resolutions (HR) of 10-20 meters. The monitoring of seasonal variations and processes in phenology and hydrology is one example of the many R&D areas to be studied. The mission's large swath and systematic acquisitions will further support unprecedented coverage at the national scale, addressing information requirements of national to regional policies. Within ESA programs, such as the Data User Element (DUE), Scientific Exploitation of Operational Missions (SEOM) and Climate Change Initiative (CCI), several R&D activities are preparing the exploitation of the Sentinel-2 mission towards reliable measurements and monitoring of e.g. Essential Climate Variables and indicators for the Sustainable Development Goals. Early Sentinel-2 results will be presented related to a range of applications and scientific domains such as agricultural monitoring at national scale (DUE Sen2Agri), wetland extent and condition over African Ramsar sites (DUE GlobWetland-Africa), land cover mapping for climate change (CCI Land Cover), national land monitoring (Cadaster-Env), forest degradation (DUE ForMoSa), urban mapping (DUE EO4Urban), in-land water quality (DUE SPONGE), map of Mediterranean aquaculture (DUE SMART) and coral reef habitat mapping (SEOM S2-4Sci Coral). The above-mentioned activities are only a few examples from the very active international land imaging community building on the long-term Landsat and Spot heritage and knowledge.
Gautrot, Julien E.; Trappmann, Britta; Oceguera-Yanez, Fabian; Connelly, John; He, Ximin; Watt, Fiona M.; Huck, Wilhelm T.S.
2010-01-01
The control of the cell microenvironment on model patterned substrates allows the systematic study of cell biology in well defined conditions, potentially using automated systems. The extreme protein resistance of poly(oligo(ethylene glycol) methacrylate) (POEGMA) brushes is exploited to achieve high fidelity patterning of single cells. These coatings can be patterned by soft lithography on large areas (a microscope slide) and scale (substrates were typically prepared in batches of 200). The present protocol relies on the adsorption of extra-cellular matrix (ECM) proteins on unprotected areas using simple incubation and washing steps. The stability of POEGMA brushes, as examined via ellipsometry and SPR, is found to be excellent, both during storage and cell culture. The impact of substrate treatment, brush thickness and incubation protocol on ECM deposition, both for ultra-thin gold and glass substrates, is investigated via fluorescence microscopy and AFM. Optimised conditions result in high quality ECM patterns at the micron scale, even on glass substrates, that are suitable for controlling cell spreading and polarisation. These patterns are compatible with state-of-the-art technologies (fluorescence microscopy, FRET) used for live cell imaging. This technology, combined with single cell analysis methods, provides a platform for exploring the mechanisms that regulate cell behaviour. PMID:20347135
Thermal exploitation of wastes with lignite for energy production.
Grammelis, Panagiotis; Kakaras, Emmanuel; Skodras, George
2003-11-01
The thermal exploitation of wastewood with Greek lignite was investigated by performing tests in a laboratory-scale fluidized bed reactor, a 1-MW(th) semi-industrial circulating fluidized bed combustor, and an industrial boiler. Blends of natural wood, demolition wood, railroad sleepers, medium-density fiberboard residues, and power poles with lignite were used, and the co-combustion efficiency and the effect of wastewood addition on the emitted pollutants were investigated. Carbon monoxide, sulfur dioxide, and oxides of nitrogen emissions were continuously monitored, and, during the industrial-scale tests, the toxic emissions (polychlorinated dibenzodioxins and dibenzofurans and heavy metals) were determined. Ash samples were analyzed for heavy metals in an inductively coupled plasma-atomic emission spectroscopy spectrophotometer. Problems were observed during the preparation of wastewood, because species embedded with different compounds, such as railway sleepers and demolition wood, were not easily treated. All wastewood blends were proven good fuels; co-combustion proceeded smoothly and homogeneous temperature and pressure profiles were obtained. Although some fluctuations were observed, low emissions of gaseous pollutants were obtained for all fuel blends. The metal element emissions (in the flue gases and the solid residues) were lower than the legislative limits. Therefore, wastewood co-combustion with lignite can be realized, provided that the fuel handling and preparation can be practically performed in large-scale installations.
Cell-free protein synthesis: applications in proteomics and biotechnology.
He, Mingyue
2008-01-01
Protein production is one of the key steps in biotechnology and functional proteomics. Expression of proteins in heterologous hosts (such as E. coli) is generally lengthy and costly. Cell-free protein synthesis is thus emerging as an attractive alternative. In addition to its simplicity and speed for protein production, cell-free expression allows the generation of functional proteins that are difficult to produce by in vivo systems. Recent exploitation of cell-free systems has enabled the development of novel technologies for rapid discovery of proteins with desirable properties from very large libraries. This article reviews recent developments in cell-free systems and their application to large-scale protein analysis.
Beam Conditioning and Harmonic Generation in Free Electron Lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charman, A.E.; Penn, G.; Wolski, A.
2004-07-05
The next generation of large-scale free-electron lasers (FELs) such as Euro-XFEL and LCLS are to be devices which produce coherent X-rays using Self-Amplified Spontaneous Emission (SASE). The performance of these devices is limited by the spread in longitudinal velocities of the beam. In the case where this spread arises primarily from large transverse oscillation amplitudes, beam conditioning can significantly enhance FEL performance. Future X-ray sources may also exploit harmonic generation starting from laser-seeded modulation. Preliminary analysis of such devices is discussed, based on a novel trial-function/variational-principle approach, which shows good agreement with more lengthy numerical simulations.
NASA Astrophysics Data System (ADS)
Muñoz, Á. G.; Díaz-Lobatón, J.; Chourio, X.; Stock, M. J.
2016-05-01
The Lake Maracaibo Basin in North Western Venezuela has the highest annual lightning rate of any place in the world (~200 fl km⁻² yr⁻¹), whose electrical discharges occasionally impact human and animal lives (e.g., cattle) and frequently affect economic activities like oil and natural gas exploitation. Lightning activity is so common in this region that it has a proper name: Catatumbo Lightning (plural). Although short-term lightning forecasts are now common in different parts of the world, to the best of the authors' knowledge, seasonal prediction of lightning activity is still non-existent. This research discusses the relative role of both large-scale and local climate drivers as modulators of lightning activity in the region, and presents a formal predictability study at seasonal scale. Analysis of the Catatumbo Lightning Regional Mode, defined in terms of the second Empirical Orthogonal Function of monthly Lightning Imaging Sensor (LIS-TRMM) and Optical Transient Detector (OTD) satellite data for North Western South America, permits the identification of potential predictors at seasonal scale via a Canonical Correlation Analysis. Lightning activity in North Western Venezuela responds to well defined sea-surface temperature patterns (e.g., El Niño-Southern Oscillation, Atlantic Meridional Mode) and changes in the low-level meridional wind field that are associated with the Inter-Tropical Convergence Zone migrations, the Caribbean Low Level Jet and tropical cyclone activity, but it is also linked to local drivers like convection triggered by the topographic configuration and the effect of the Maracaibo Basin Nocturnal Low Level Jet. The analysis indicates that at seasonal scale the relative contribution of the large-scale drivers is more important than the local (basin-wide) ones, due to the synoptic control imposed by the former. Furthermore, meridional CAPE transport at 925 mb is identified as the best potential predictor for lightning activity in the Lake Maracaibo Basin. It is found that the predictive skill is slightly higher for the minimum lightning season (Jan-Feb) than for the maximum one (Sep-Oct), but that in general the skill is high enough to be useful for decision-making processes related to human safety, oil and natural gas exploitation, energy and food security.
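As a rough illustration of the statistical pipeline described above (an EOF decomposition of the satellite lightning record followed by a Canonical Correlation Analysis against candidate predictor fields), the sketch below uses synthetic data and scikit-learn; the array sizes, the choice of the second mode and the CCA setup are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_months, n_grid, n_sst = 120, 200, 40            # illustrative sizes only
lightning = rng.normal(size=(n_months, n_grid))   # monthly LIS/OTD-like anomalies
sst = rng.normal(size=(n_months, n_sst))          # candidate SST predictor field

# EOF decomposition: the principal components are the time series of each mode
anoms = lightning - lightning.mean(axis=0)
u, s, vt = np.linalg.svd(anoms, full_matrices=False)
pc2 = u[:, 1] * s[1]                              # second mode ~ regional lightning mode

# canonical correlation between the predictor field and the lightning mode
cca = CCA(n_components=1)
sst_c, pc_c = cca.fit_transform(sst, pc2.reshape(-1, 1))
print("canonical correlation:", np.corrcoef(sst_c[:, 0], pc_c[:, 0])[0, 1])
```

With real fields, cross-validated skill rather than the in-sample correlation printed here would be the quantity of interest.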
Lichtenberg, Peter A; Ficker, Lisa J; Rahman-Filipiak, Annalise
2016-01-01
This study examines preliminary evidence for the Lichtenberg Financial Decision Rating Scale (LFDRS), a new person-centered approach to assessing capacity to make financial decisions, and its relationship to self-reported cases of financial exploitation in 69 older African Americans. More than one third of individuals reporting financial exploitation also had questionable decisional abilities. Overall, decisional ability score and current decision total were significantly associated with cognitive screening test and financial ability scores, demonstrating good criterion validity. Study findings suggest that impaired decisional abilities may render older adults more vulnerable to financial exploitation, and that the LFDRS is a valid tool.
Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop
NASA Astrophysics Data System (ADS)
Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin
2014-06-01
Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one approach recently applied to large-scale or big-data problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each containing a small subset of WAMI images. The feature extraction for the WAMI images in each split is distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
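To make the distribution pattern concrete, here is a minimal Hadoop Streaming style mapper in Python (not the authors' implementation): each map task reads lines naming the image chips in its split, computes a toy feature vector, and emits tab-separated records for aggregation in HDFS. The extract_features helper and the file layout are assumptions.

```python
#!/usr/bin/env python
import sys
import json
import numpy as np
from PIL import Image

def extract_features(path):
    """Toy per-image feature: grey-level mean and variance (stand-in for real WAMI features)."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    return [float(img.mean()), float(img.var())]

for line in sys.stdin:
    image_path = line.strip()
    if not image_path:
        continue
    try:
        feats = extract_features(image_path)
        # key = image path, value = JSON-encoded feature vector; the reduce side
        # (or HDFS itself) aggregates these records over the whole collection
        print(f"{image_path}\t{json.dumps(feats)}")
    except OSError:
        continue  # skip unreadable images rather than failing the whole split
```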
Continental-scale patterns of canopy tree composition and function across Amazonia.
ter Steege, Hans; Pitman, Nigel C A; Phillips, Oliver L; Chave, Jerome; Sabatier, Daniel; Duque, Alvaro; Molino, Jean-François; Prévost, Marie-Françoise; Spichiger, Rodolphe; Castellanos, Hernán; von Hildebrand, Patricio; Vásquez, Rodolfo
2006-09-28
The world's greatest terrestrial stores of biodiversity and carbon are found in the forests of northern South America, where large-scale biogeographic patterns and processes have recently begun to be described. Seven of the nine countries with territory in the Amazon basin and the Guiana shield have carried out large-scale forest inventories, but such massive data sets have been little exploited by tropical plant ecologists. Although forest inventories often lack the species-level identifications favoured by tropical plant ecologists, their consistency of measurement and vast spatial coverage make them ideally suited for numerical analyses at large scales, and a valuable resource to describe the still poorly understood spatial variation of biomass, diversity, community composition and forest functioning across the South American tropics. Here we show, by using the seven forest inventories complemented with trait and inventory data collected elsewhere, two dominant gradients in tree composition and function across the Amazon, one paralleling a major gradient in soil fertility and the other paralleling a gradient in dry season length. The data set also indicates that the dominance of Fabaceae in the Guiana shield is not necessarily the result of root adaptations to poor soils (nodulation or ectomycorrhizal associations) but perhaps also the result of their remarkably high seed mass there as a potential adaptation to low rates of disturbance.
Intermittence for humans spreading 45,000 years ago: from Eurasia to the Americas.
Flores, J C; Hopp, Renato
2013-10-01
From northeastern Eurasia to the Americas, a three-stage spread of modern humans is considered through large-scale intermittence (exploitation/relocation). Conceptually, this work supports intermittence as a real strategy for colonization of new habitats. For the first stage, the northeastern Eurasia travel, we adapt our model to archaeological dates, determining the diffusion coefficient (exploitation phase) as D = 299.44 km²/yr and the velocity parameter (relocation phase) as v₀ = 4.8944 km/yr. The relative phase weight (≈0.46) between both kinds of motion is consistent with a moderate biological population rate (r′ ≈ 0.0046/yr). The second stage is related to population fragmentation. The last stage, reaching Alaska, corresponds essentially to relocation (v₀ ≈ 0.75 km/yr). Copyright © 2014 Wayne State University Press, Detroit, Michigan 48201-1309.
Promoting R & D in photobiological hydrogen production utilizing mariculture-raised cyanobacteria.
Sakurai, Hidehiro; Masukawa, Hajime
2007-01-01
This review article explores the potential of using mariculture-raised cyanobacteria as solar energy converters of hydrogen (H₂). The exploitation of the sea surface for large-scale renewable energy production and the reasons for selecting the economical, nitrogenase-based systems of cyanobacteria for H₂ production, are described in terms of societal benefits. Reports of cyanobacterial photobiological H₂ production are summarized with respect to specific activity, efficiency of solar energy conversion, and maximum H₂ concentration attainable. The need for further improvements in biological parameters such as low-light saturation properties, sustainability of H₂ production, and so forth, and the means to overcome these difficulties through the identification of promising wild-type strains followed by optimization of the selected strains using genetic engineering are also discussed. Finally, a possible mechanism for the development of economical large-scale mariculture operations in conjunction with international cooperation and social acceptance is outlined.
Endocytic reawakening of motility in jammed epithelia
NASA Astrophysics Data System (ADS)
Malinverno, Chiara; Corallino, Salvatore; Giavazzi, Fabio; Bergert, Martin; Li, Qingsen; Leoni, Marco; Disanza, Andrea; Frittoli, Emanuela; Oldani, Amanda; Martini, Emanuele; Lendenmann, Tobias; Deflorian, Gianluca; Beznoussenko, Galina V.; Poulikakos, Dimos; Ong, Kok Haur; Uroz, Marina; Trepat, Xavier; Parazzoli, Dario; Maiuri, Paolo; Yu, Weimiao; Ferrari, Aldo; Cerbino, Roberto; Scita, Giorgio
2017-05-01
The dynamics of epithelial monolayers have recently been interpreted in terms of a jamming or rigidity transition. How cells control such phase transitions is, however, unknown. Here we show that RAB5A, a key endocytic protein, is sufficient to induce large-scale, coordinated motility over tens of cells, and ballistic motion in otherwise kinetically arrested monolayers. This is linked to increased traction forces and to the extension of cell protrusions, which align with local velocity. Molecularly, impairing endocytosis, macropinocytosis or increasing fluid efflux abrogates RAB5A-induced collective motility. A simple model based on mechanical junctional tension and an active cell reorientation mechanism for the velocity of self-propelled cells identifies regimes of monolayer dynamics that explain endocytic reawakening of locomotion in terms of a combination of large-scale directed migration and local unjamming. These changes in multicellular dynamics enable collectives to migrate under physical constraints and may be exploited by tumours for interstitial dissemination.
Multidimensional quantum entanglement with large-scale integrated optics.
Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G
2018-04-20
The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Preconditioning strategies for nonlinear conjugate gradient methods, based on quasi-Newton updates
NASA Astrophysics Data System (ADS)
Caliciotti, Andrea; Fasano, Giovanni; Roma, Massimo
2016-10-01
This paper reports two proposals of possible preconditioners for the Nonlinear Conjugate Gradient (NCG) method in large-scale unconstrained optimization. On one hand, the common idea behind our preconditioners is inspired by L-BFGS quasi-Newton updates; on the other hand, we aim at explicitly approximating, in some sense, the inverse of the Hessian matrix. Since we deal with large-scale optimization problems, we propose matrix-free approaches where the preconditioners are built using symmetric low-rank updating formulae. Our distinctive new contributions rely on using information on the objective function collected as a by-product of the NCG at previous iterations. Broadly speaking, our first approach exploits the secant equation in order to impose interpolation conditions on the objective function. In the second proposal we adopt an ad hoc modified-secant approach, in order to possibly guarantee some additional theoretical properties.
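A minimal sketch of the general idea — quasi-Newton information gathered during the iterations used to precondition an NCG direction — is given below; the fixed step length, memory size and the L-BFGS-style two-loop recursion are illustrative assumptions, not the paper's exact formulae.

```python
import numpy as np

def lbfgs_apply(grad, pairs):
    """Apply an implicit inverse-Hessian approximation (two-loop recursion) to grad."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(pairs):               # newest to oldest
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append((a, rho, s, y))
        q -= a * y
    if pairs:
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)                 # simple initial scaling
    for a, rho, s, y in reversed(alphas):      # oldest to newest
        b = rho * (y @ q)
        q += (a - b) * s
    return q

def preconditioned_ncg(grad_fn, x0, iters=200, mem=5, step=1e-2):
    x, pairs = x0.copy(), []                   # stored (s, y) quasi-Newton pairs
    g_old = grad_fn(x)
    d = -lbfgs_apply(g_old, pairs)
    for _ in range(iters):
        x_new = x + step * d                   # fixed step stands in for a line search
        g_new = grad_fn(x_new)
        s, y = x_new - x, g_new - g_old
        if s @ y > 1e-12:                      # keep only curvature-positive pairs
            pairs = (pairs + [(s, y)])[-mem:]
        pg_old = lbfgs_apply(g_old, pairs)
        pg_new = lbfgs_apply(g_new, pairs)
        beta = max(0.0, (g_new @ (pg_new - pg_old)) / (g_old @ pg_old + 1e-30))
        d = -pg_new + beta * d                 # preconditioned Polak-Ribiere direction
        x, g_old = x_new, g_new
    return x

# toy usage on a quadratic: minimise 0.5*x'Ax - b'x
A = np.diag(np.linspace(1.0, 50.0, 20)); b = np.ones(20)
x_star = preconditioned_ncg(lambda x: A @ x - b, np.zeros(20))
```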
Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.; ...
2018-02-06
Satellite imagery often exhibits large spatial extent areas that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor types, often translate into class distribution shifts that introduce complex nonlinear factors and hamper the potential impact of machine learning classifiers. Here, this article investigates the challenge of exploiting satellite images using convolutional neural networks (CNN) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based on multiple modules to adapt a pretrained CNN and address the negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial extent areas we introduce several submodules: first, a human-in-the-loop element for relabeling of misclassified target domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples among the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source examples on the target domain. The workflow presents a novel and practical approach to achieve large-scale domain adaptation with binary classifiers that are based on CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensor, multitemporal, and multiangular conditions. Domain adaptation is assessed on source-target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.
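As an illustration of the second submodule (hashing to reduce redundancy among mass-selected examples), the toy sketch below keeps one image chip per average-hash bucket; the hash scheme and sizes are assumptions, not the paper's method.

```python
import numpy as np

def average_hash(img, size=8):
    """Downsample to size x size, threshold at the mean, pack into a bit string."""
    h, w = img.shape
    small = img[:h - h % size, :w - w % size]
    small = small.reshape(size, small.shape[0] // size, size, small.shape[1] // size).mean(axis=(1, 3))
    bits = (small > small.mean()).astype(int).ravel()
    return "".join(map(str, bits))

def deduplicate(chips):
    seen, kept = set(), []
    for chip in chips:
        key = average_hash(chip)
        if key not in seen:          # drop chips whose hash bucket is already represented
            seen.add(key)
            kept.append(chip)
    return kept

chips = [np.random.rand(64, 64) for _ in range(100)]
print(len(deduplicate(chips)), "unique-looking chips retained")
```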
Large-scale structural analysis: The structural analyst, the CSM Testbed and the NAS System
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Mccleary, Susan L.; Macy, Steven C.; Aminpour, Mohammad A.
1989-01-01
The Computational Structural Mechanics (CSM) activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM testbed methods development environment is presented and some numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.
Parallel-vector out-of-core equation solver for computational mechanics
NASA Technical Reports Server (NTRS)
Qin, J.; Agarwal, T. K.; Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.
1993-01-01
A parallel/vector out-of-core equation solver is developed for shared-memory computers, such as the Cray Y-MP machine. The input/output (I/O) time is reduced by using the asynchronous BUFFER IN and BUFFER OUT, which can be executed simultaneously with the CPU instructions. The parallel and vector capability provided by the supercomputers is also exploited to enhance the performance. Numerical applications in large-scale structural analysis are given to demonstrate the efficiency of the present out-of-core solver.
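The overlap of I/O with computation can be illustrated with a simple double-buffering sketch, a prefetch thread standing in for the asynchronous BUFFER IN/BUFFER OUT statements; the file layout, block size and toy per-block computation are assumptions.

```python
import threading
import numpy as np

BLOCK_ROWS = 1024

def read_block(mmap_array, start):
    return np.array(mmap_array[start:start + BLOCK_ROWS])

def process(block):
    # stand-in for the numerical work done on an in-core block
    return np.linalg.norm(block)

def out_of_core_pass(path, n_rows, n_cols):
    data = np.memmap(path, dtype=np.float64, mode="r", shape=(n_rows, n_cols))
    result = 0.0
    current = read_block(data, 0)
    for start in range(0, n_rows, BLOCK_ROWS):
        nxt, t = {}, None
        if start + BLOCK_ROWS < n_rows:
            # asynchronous "BUFFER IN": prefetch the next block while computing
            t = threading.Thread(
                target=lambda s=start: nxt.update(blk=read_block(data, s + BLOCK_ROWS)))
            t.start()
        result += process(current)        # compute overlaps with the prefetch
        if t is not None:
            t.join()
            current = nxt["blk"]
    return result
```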
Functional inks and printing of two-dimensional materials.
Hu, Guohua; Kang, Joohoon; Ng, Leonard W T; Zhu, Xiaoxi; Howe, Richard C T; Jones, Christopher G; Hersam, Mark C; Hasan, Tawfique
2018-05-08
Graphene and related two-dimensional materials provide an ideal platform for next generation disruptive technologies and applications. Exploiting these solution-processed two-dimensional materials in printing can accelerate this development by allowing additive patterning on both rigid and conformable substrates for flexible device design and large-scale, high-speed, cost-effective manufacturing. In this review, we summarise the current progress on ink formulation of two-dimensional materials and the printable applications enabled by them. We also present our perspectives on their research and technological future prospects.
NASA Technical Reports Server (NTRS)
Lee, J.; Kim, K.
1991-01-01
A Very Large Scale Integration (VLSI) architecture for robot direct kinematic computation suitable for industrial robot manipulators was investigated. The Denavit-Hartenberg transformations are reviewed to exploit a proper processing element, namely an augmented CORDIC. Specifically, two distinct implementations are elaborated on, such as the bit-serial and parallel. Performance of each scheme is analyzed with respect to the time to compute one location of the end-effector of a 6-links manipulator, and the number of transistors required.
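For readers unfamiliar with the processing element, a rotation-mode CORDIC computes the sines and cosines that enter a Denavit-Hartenberg transform using only shifts and adds; the floating-point sketch below illustrates the recurrence (an actual VLSI realization would use fixed-point arithmetic, and the augmented element described above adds further datapath logic).

```python
import math

def cordic_sincos(theta, iterations=24):
    # precomputed elementary angles and overall gain correction
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for a in angles:
        gain *= math.cos(a)
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0       # rotate towards the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * gain, y * gain             # (cos(theta), sin(theta))

print(cordic_sincos(math.pi / 3))          # approx (0.5, 0.866)
```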
Space Station services and design features for users
NASA Technical Reports Server (NTRS)
Kurzhals, Peter R.; Mckinney, Royce L.
1987-01-01
The operational design features and services planned for the NASA Space Station will furnish, in addition to novel opportunities and facilities, lower costs through interface standardization and automation and faster access by means of computer-aided integration and control processes. By furnishing a basis for large-scale space exploitation, the Space Station will possess industrial production and operational services capabilities that may be used by the private sector for commercial ventures; it could also ultimately support lunar and planetary exploration spacecraft assembly and launch facilities.
On the statistical mechanics of the 2D stochastic Euler equation
NASA Astrophysics Data System (ADS)
Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg
2011-12-01
The dynamics of vortices and large-scale structures is qualitatively very different in two-dimensional flows compared to their three-dimensional counterparts, due to the presence of multiple integrals of motion. These are believed to be responsible for a variety of phenomena observed in Euler flow, such as the formation of large-scale coherent structures, the existence of meta-stable states and random abrupt changes in the topology of the flow. In this paper we study the stochastic dynamics of a finite-dimensional approximation of the 2D Euler flow based on the Lie algebra su(N), which preserves all integrals of motion. In particular, we exploit the rich algebraic structure responsible for the existence of Euler's conservation laws to calculate the invariant measures, explore their properties and study the approach to equilibrium. Unexpectedly, we find deep connections between equilibrium measures of finite-dimensional su(N) truncations of the stochastic Euler equations and random matrix models. Our work can be regarded as a preparation for addressing the questions of large-scale structures, meta-stability and the dynamics of random transitions between different flow topologies in stochastic 2D Euler flows.
Mari, Lorenzo; Bonaventura, Luca; Storto, Andrea; Melià, Paco; Gatto, Marino; Masina, Simona; Casagrandi, Renato
2017-01-01
Protecting key hotspots of marine biodiversity is essential to maintain ecosystem services at large spatial scales. Protected areas serve not only as sources of propagules colonizing other habitats, but also as receptors, thus acting as protected nurseries. To quantify the geographical extent and the temporal persistence of ecological benefits resulting from protection, we investigate larval connectivity within a remote archipelago, characterized by a strong spatial gradient of human impact from pristine to heavily exploited: the Northern Line Islands (NLIs), including part of the Pacific Remote Islands Marine National Monument (PRI-MNM). Larvae are described as passive Lagrangian particles transported by oceanic currents obtained from an oceanographic reanalysis. We compare different simulation schemes and compute connectivity measures (larval exchange probabilities and minimum/average larval dispersal distances from target islands). To explore the role of PRI-MNM in protecting marine organisms with pelagic larval stages, we drive millions of individual-based simulations for various Pelagic Larval Durations (PLDs), in all release seasons, and over a two-decade time horizon (1991-2010). We find that connectivity in the NLIs is spatially asymmetric and displays significant intra- and inter-annual variations. The islands belonging to PRI-MNM act more as sinks than sources of larvae, and connectivity is higher during the winter-spring period. In multi-annual analyses, yearly averaged southward connectivity significantly and negatively correlates with climatological anomalies (El Niño). This points out a possible system fragility and susceptibility to global warming. Quantitative assessments of large-scale, long-term marine connectivity patterns help understand region-specific, ecologically relevant interactions between islands. This is fundamental for devising scientifically-based protection strategies, which must be space- and time-varying to cope with the challenges posed by the concurrent pressures of human exploitation and global climate change.
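The core connectivity calculation can be illustrated with a toy Lagrangian sketch (not the authors' model): particles released at each island are advected through an assumed velocity field for one pelagic larval duration, and the exchange probability matrix records the fraction of particles from island i that end within a settlement radius of island j.

```python
import numpy as np

rng = np.random.default_rng(1)
islands = np.array([[0.0, 0.0], [1.0, 0.3], [2.2, -0.1]])   # hypothetical positions (deg)
n_particles, pld_days, dt = 2000, 30, 1.0
capture_radius = 0.25                                        # assumed settlement distance (deg)

def velocity(p, t):
    # stand-in for currents taken from an oceanographic reanalysis
    u = 0.02 + 0.01 * np.sin(2 * np.pi * t / 30.0)
    v = 0.01 * np.cos(2 * np.pi * t / 15.0)
    return np.array([u, v])

connectivity = np.zeros((len(islands), len(islands)))
for i, src in enumerate(islands):
    pos = src + 0.05 * rng.standard_normal((n_particles, 2))  # release cloud
    for day in range(pld_days):
        drift = velocity(pos, day) * dt
        diffusion = 0.02 * np.sqrt(dt) * rng.standard_normal(pos.shape)
        pos += drift + diffusion
    for j, dst in enumerate(islands):
        settled = np.linalg.norm(pos - dst, axis=1) < capture_radius
        connectivity[i, j] = settled.mean()

print(np.round(connectivity, 3))      # row i: exchange probabilities from island i
```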
An ignition key for atomic-scale engines
NASA Astrophysics Data System (ADS)
Dundas, Daniel; Cunningham, Brian; Buchanan, Claire; Terasawa, Asako; Paxton, Anthony T.; Todorov, Tchavdar N.
2012-10-01
A current-carrying resonant nanoscale device, simulated by non-adiabatic molecular dynamics, exhibits sharp activation of non-conservative current-induced forces with bias. The result, above the critical bias, is generalized rotational atomic motion with a large gain in kinetic energy. The activation exploits sharp features in the electronic structure, and constitutes, in effect, an ignition key for atomic-scale motors. A controlling factor for the effect is the non-equilibrium dynamical response matrix for small-amplitude atomic motion under current. This matrix can be found from the steady-state electronic structure by a simpler static calculation, providing a way to detect the likely appearance, or otherwise, of non-conservative dynamics, in advance of real-time modelling.
Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne
2018-01-01
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
Ren, Nanqi; Guo, Wanqian; Liu, Bingfeng; Cao, Guangli; Ding, Jie
2011-06-01
Among different technologies of hydrogen production, bio-hydrogen production exhibits perhaps the greatest potential to replace fossil fuels. Based on recent research on dark fermentative hydrogen production, this article reviews the following aspects towards scaled-up application of this technology: bioreactor development and parameter optimization, process modeling and simulation, exploitation of cheaper raw materials and combining dark-fermentation with photo-fermentation. Bioreactors are necessary for dark-fermentation hydrogen production, so the design of reactor type and optimization of parameters are essential. Process modeling and simulation can help engineers design and optimize large-scale systems and operations. Use of cheaper raw materials will surely accelerate the pace of scaled-up production of biological hydrogen. And finally, combining dark-fermentation with photo-fermentation holds considerable promise, and has successfully achieved maximum overall hydrogen yield from a single substrate. Future development of bio-hydrogen production will also be discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Casu, F.; Bonano, M.; de Luca, C.; Lanari, R.; Manunta, M.; Manzo, M.; Zinno, I.
2017-12-01
Since its launch in 2014, the Sentinel-1 (S1) constellation has played a key role in SAR data availability and dissemination all over the world. Indeed, the free and open access data policy adopted by the European Copernicus program, together with the global coverage acquisition strategy, makes the Sentinel constellation a game changer in the Earth Observation scenario. As SAR data become ubiquitous, the technological and scientific challenge is focused on maximizing the exploitation of such a huge data flow. In this direction, the use of innovative processing algorithms and distributed computing infrastructures, such as Cloud Computing platforms, can play a crucial role. In this work we present a Cloud Computing solution for the advanced interferometric (DInSAR) processing chain based on the Parallel SBAS (P-SBAS) approach, aimed at processing S1 Interferometric Wide Swath (IWS) data for the generation of large spatial scale deformation time series in an efficient, automatic and systematic way. Such a DInSAR chain ingests Sentinel-1 SLC images and carries out several processing steps, to finally compute deformation time series and mean deformation velocity maps. Different parallel strategies have been designed ad hoc for each processing step of the P-SBAS S1 chain, encompassing both multi-core and multi-node programming techniques, in order to maximize the computational efficiency achieved within a Cloud Computing environment and cut down the relevant processing times. The presented P-SBAS S1 processing chain has been implemented on the Amazon Web Services platform, and a thorough analysis of the attained parallel performance has been carried out to identify and overcome the major bottlenecks to scalability. The presented approach is used to perform national-scale DInSAR analyses over Italy, involving the processing of more than 3000 S1 IWS images acquired from both ascending and descending orbits. Such an experiment confirms the big advantage of exploiting the large computational and storage resources of Cloud Computing platforms for large-scale DInSAR analysis. The presented Cloud Computing P-SBAS processing chain can be a valuable tool for developing operational services for the EO scientific community related to hazard monitoring and risk prevention and mitigation.
Potential gains from hospital mergers in Denmark.
Kristensen, Troels; Bogetoft, Peter; Pedersen, Kjeld Moeller
2010-12-01
The Danish hospital sector faces a major rebuilding program to centralize activity in fewer and larger hospitals. We aim to conduct an efficiency analysis of hospitals and to estimate the potential cost savings from the planned hospital mergers. We use Data Envelopment Analysis (DEA) to estimate a cost frontier. Based on this analysis, we calculate an efficiency score for each hospital and estimate the potential gains from the proposed mergers by comparing individual efficiencies with the efficiency of the combined hospitals. Furthermore, we apply a decomposition algorithm to split merger gains into technical efficiency, size (scale) and harmony (mix) gains. The motivation for this decomposition is that some of the apparent merger gains may actually be available with less than a full-scale merger, e.g., by sharing best practices and reallocating certain resources and tasks. Our results suggest that many hospitals are technically inefficient, and the expected "best practice" hospitals are quite efficient. Also, some mergers do not seem to lower costs. This finding indicates that some merged hospitals become too large and therefore experience diseconomies of scale. Other mergers lead to considerable cost reductions; we find potential gains resulting from learning better practices and the exploitation of economies of scope. To ensure robustness, we conduct a sensitivity analysis using two alternative returns-to-scale assumptions and two alternative estimation approaches. We consistently find potential gains from improving the technical efficiency and the exploitation of economies of scope from mergers.
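As a sketch of the DEA building block behind such an analysis, the following computes an input-oriented, constant-returns-to-scale efficiency score per hospital as a small linear programme; the toy cost/activity data are invented, and the decomposition of merger gains into technical, size and harmony components is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(inputs, outputs, k):
    """Input-oriented CRS efficiency of unit k, given inputs (m x n) and outputs (s x n)."""
    m, n = inputs.shape
    s = outputs.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                       # minimise theta
    A_in = np.hstack([-inputs[:, [k]], inputs])      # X lam <= theta * x_k
    A_out = np.hstack([np.zeros((s, 1)), -outputs])  # Y lam >= y_k
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -outputs[:, k]])
    bounds = [(None, None)] + [(0, None)] * n        # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# toy data: one input (cost) and two outputs (inpatient and outpatient activity)
costs = np.array([[100.0, 150.0, 120.0, 200.0]])
activity = np.array([[80.0, 100.0, 95.0, 120.0],
                     [50.0, 60.0, 70.0, 65.0]])
scores = [dea_efficiency(costs, activity, k) for k in range(costs.shape[1])]
print(np.round(scores, 3))
```

A merged unit can be evaluated the same way by summing the inputs and outputs of its members and comparing the combined score with the members' individual scores.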
Communication architecture for large geostationary platforms
NASA Technical Reports Server (NTRS)
Bond, F. E.
1979-01-01
Large platforms have been proposed for supporting multipurpose communication payloads to exploit economy of scale, reduce congestion in the geostationary orbit, provide interconnectivity between diverse earth stations, and obtain significant frequency reuse with large multibeam antennas. This paper addresses a specific system design, starting with traffic projections in the next two decades and discussing tradeoffs and design approaches for major components including: antennas, transponders, and switches. Other issues explored are selection of frequency bands, modulation, multiple access, switching methods, and techniques for servicing areas with nonuniform traffic demands. Three major services are considered: a high-volume trunking system, a direct-to-user system, and a broadcast system for video distribution and similar functions. Estimates of payload weight and d.c. power requirements are presented. Other subjects treated are: considerations of equipment layout for servicing by an orbit transfer vehicle, mechanical stability requirements for the large antennas, and reliability aspects of the large number of transponders employed.
Event-driven processing for hardware-efficient neural spike sorting
NASA Astrophysics Data System (ADS)
Liu, Yan; Pereira, João L.; Constandinou, Timothy G.
2018-02-01
Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide here a new, efficient means for hardware implementation that is completely activity-dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time-domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented on a low-power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
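A minimal sketch of the level-crossing idea is given below (the threshold spacing and the toy trace are assumptions): an event carrying a timestamp and a direction is emitted whenever the signal crosses the next quantisation level, so activity, not a fixed clock, drives the data rate.

```python
import numpy as np

def level_crossing_encode(signal, fs, delta):
    """Return (time, +1/-1) events for crossings of levels spaced by delta."""
    events = []
    ref = signal[0]
    for n, x in enumerate(signal[1:], start=1):
        while x - ref >= delta:          # upward crossing(s)
            ref += delta
            events.append((n / fs, +1))
        while ref - x >= delta:          # downward crossing(s)
            ref -= delta
            events.append((n / fs, -1))
    return events

# toy extracellular-like trace: noise plus one spike-shaped transient
fs = 24000.0
t = np.arange(0, 0.01, 1 / fs)
trace = 0.05 * np.random.randn(t.size) + 1.2 * np.exp(-((t - 0.005) / 3e-4) ** 2)
events = level_crossing_encode(trace, fs, delta=0.1)
print(f"{len(events)} events instead of {trace.size} uniform samples")
```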
The evolution and exploitation of the fiber-optic hydrophone
NASA Astrophysics Data System (ADS)
Hill, David J.
2007-07-01
In the late 1970s one of the first applications identified for fibre-optic sensing was the fibre-optic hydrophone. It was recognised that the technology had the potential to provide a cost effective solution for large-scale arrays of highly sensitive hydrophones which could be interrogated over large distances. Consequently both the United Kingdom and United States navies funded the development of this sonar technology to the point that it is now deployed on submarines and as seabed arrays. The basic design of a fibre-optic hydrophone has changed little; comprising a coil of optical fibre wound on a compliant mandrel, interrogated using interferometric techniques. Although other approaches are being investigated, including the development of fibre-laser hydrophones, the interferometric approach remains the most efficient way to create highly multiplexed arrays of acoustic sensors. So much so, that the underlying technology is now being exploited in civil applications. Recently the exploration and production sector of the oil and gas industry has begun funding the development of fibre-optic seismic sensing using seabed mounted, very large-scale arrays of four component (three accelerometers and a hydrophone) packages based upon the original technology developed for sonar systems. This has given new impetus to the development of the sensors and the associated interrogation systems which has led to the technology being adopted for other commercial uses. These include the development of networked in-road fibre-optic Weigh-in-Motion sensors and of intruder detection systems which are able to acoustically monitor long lengths of border, on both land and at sea. After two decades, the fibre-optic hydrophone and associated technology has matured and evolved into a number of highly capable sensing solutions used by a range of industries.
Where the Wild Things Are: Observational Constraints on Black Holes' Growth
NASA Astrophysics Data System (ADS)
Merloni, Andrea
2009-12-01
The physical and evolutionary relation between growing supermassive black holes (AGN) and host galaxies is currently the subject of intense research activity. Nevertheless, a deep theoretical understanding of such a relation is hampered by the unique multi-scale nature of the combined AGN-galaxy system, which defies any purely numerical, or semi-analytic, approach. Various physical processes active on different physical scales have signatures in different parts of the electromagnetic spectrum; thus, observations at different wavelengths and theoretical ideas can all contribute towards a "large dynamic range" view of the AGN phenomenon, capable of conceptually "resolving" the many scales involved. As an example, I will focus in this review on two major recent observational results on the cosmic evolution of supermassive black holes, focusing on the novel contribution given to the field by the COSMOS survey. First of all, I will discuss the evidence for the so-called "downsizing" in the AGN population as derived from large X-ray surveys. I will then present new constraints on the evolution of the black hole-galaxy scaling relation at 1
Adaptive-Grid Methods for Phase Field Models of Microstructure Development
NASA Technical Reports Server (NTRS)
Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.
1999-01-01
In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.
Time-Domain Filtering for Spatial Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. David
1997-01-01
An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
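One common causal realization of such a time-domain filter, given here as a generic example rather than the paper's specific kernel, is the exponential filter

$$\bar{u}(\mathbf{x},t)=\frac{1}{\Delta}\int_{-\infty}^{t}u(\mathbf{x},t')\,e^{-(t-t')/\Delta}\,dt' \quad\Longleftrightarrow\quad \frac{\partial\bar{u}}{\partial t}=\frac{u-\bar{u}}{\Delta},$$

which can be advanced in time alongside the flow equations; the residual stresses $\tau_{ij}=\overline{u_i u_j}-\bar{u}_i\bar{u}_j$ then play the role of the subgrid-scale terms that conventional, spatially filtered LES models.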
GraphReduce: Processing Large-Scale Graphs on Accelerator-Based Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Song, Shuaiwen; Agarwal, Kapil
2015-11-15
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device's internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and device.
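For orientation, the Gather-Apply-Scatter programming model named above can be illustrated with a tiny CPU-side PageRank sketch; GraphReduce's GPU streaming, partitioning and out-of-core data movement are not represented here.

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]    # directed toy graph
n = 4
out_deg = np.zeros(n)
for u, v in edges:
    out_deg[u] += 1

rank = np.full(n, 1.0 / n)
for _ in range(30):
    gathered = np.zeros(n)
    for u, v in edges:                 # gather: pull contributions along in-edges
        gathered[v] += rank[u] / out_deg[u]
    rank = 0.15 / n + 0.85 * gathered  # apply: update each vertex value
    # scatter: in a full GAS engine, vertices whose value changed would activate neighbours
print(np.round(rank, 4))
```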
Transposon facilitated DNA sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berg, D.E.; Berg, C.M.; Huang, H.V.
1990-01-01
The purpose of this research is to investigate and develop methods that exploit the power of bacterial transposable elements for large-scale DNA sequencing. Our premise is that the use of transposons to put primer binding sites randomly in target DNAs should provide access to all portions of large DNA fragments, without the inefficiencies of methods involving random subcloning and attendant repetitive sequencing, or of sequential synthesis of many oligonucleotide primers that are used to march systematically along a DNA molecule. Two unrelated bacterial transposons, Tn5 and γδ, are being used because they have both proven useful for molecular analyses, and because they differ sufficiently in mechanism and specificity of transposition to merit parallel development.
Megacity pumping and preferential flow threaten groundwater quality
Khan, Mahfuzur R.; Koneshloo, Mohammad; Knappett, Peter S. K.; Ahmed, Kazi M.; Bostick, Benjamin C.; Mailloux, Brian J.; Mozumder, Rajib H.; Zahid, Anwar; Harvey, Charles F.; van Geen, Alexander; Michael, Holly A.
2016-01-01
Many of the world's megacities depend on groundwater from geologically complex aquifers that are over-exploited and threatened by contamination. Here, using the example of Dhaka, Bangladesh, we illustrate how interactions between aquifer heterogeneity and groundwater exploitation jeopardize groundwater resources regionally. Groundwater pumping in Dhaka has caused large-scale drawdown that extends into outlying areas where arsenic-contaminated shallow groundwater is pervasive and has potential to migrate downward. We evaluate the vulnerability of deep, low-arsenic groundwater with groundwater models that incorporate geostatistical simulations of aquifer heterogeneity. Simulations show that preferential flow through stratigraphy typical of fluvio-deltaic aquifers could contaminate deep (>150 m) groundwater within a decade, nearly a century faster than predicted through homogeneous models calibrated to the same data. The most critical fast flowpaths cannot be predicted by simplified models or identified by standard measurements. Such complex vulnerability beyond city limits could become a limiting factor for megacity groundwater supplies in aquifers worldwide. PMID:27673729
Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2
NASA Technical Reports Server (NTRS)
Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)
2000-01-01
A new MDO method, BLISS, and two different variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a modular system optimization into several subtask optimizations, which may be executed concurrently, and a system optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited for exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of the compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.
Causes and projections of abrupt climate-driven ecosystem shifts in the North Atlantic.
Beaugrand, Grégory; Edwards, Martin; Brander, Keith; Luczak, Christophe; Ibanez, Frederic
2008-11-01
Warming of the global climate is now unequivocal and its impact on Earth's functional units has become more apparent. Here, we show that marine ecosystems are not equally sensitive to climate change and reveal a critical thermal boundary where a small increase in temperature triggers abrupt ecosystem shifts seen across multiple trophic levels. This large-scale boundary is located in regions where abrupt ecosystem shifts have been reported in the North Atlantic sector and thereby allows us to link these shifts by a global common phenomenon. We show that these changes alter the biodiversity and carrying capacity of ecosystems and may, combined with fishing, precipitate the reduction of some stocks of Atlantic cod already severely impacted by exploitation. These findings offer a way to anticipate major ecosystem changes and to propose adaptive strategies for marine exploited resources such as cod in order to minimize social and economic consequences.
Coupling bimolecular PARylation biosensors with genetic screens to identify PARylation targets.
Krastev, Dragomir B; Pettitt, Stephen J; Campbell, James; Song, Feifei; Tanos, Barbara E; Stoynov, Stoyno S; Ashworth, Alan; Lord, Christopher J
2018-05-22
Poly (ADP-ribose)ylation is a dynamic protein modification that regulates multiple cellular processes. Here, we describe a system for identifying and characterizing PARylation events that exploits the ability of a PBZ (PAR-binding zinc finger) protein domain to bind PAR with high-affinity. By linking PBZ domains to bimolecular fluorescent complementation biosensors, we developed fluorescent PAR biosensors that allow the detection of temporal and spatial PARylation events in live cells. Exploiting transposon-mediated recombination, we integrate the PAR biosensor en masse into thousands of protein coding genes in living cells. Using these PAR-biosensor "tagged" cells in a genetic screen we carry out a large-scale identification of PARylation targets. This identifies CTIF (CBP80/CBP20-dependent translation initiation factor) as a novel PARylation target of the tankyrase enzymes in the centrosomal region of cells, which plays a role in the distribution of the centrosomal satellites.
The Large Ring Laser G for Continuous Earth Rotation Monitoring
NASA Astrophysics Data System (ADS)
Schreiber, K. U.; Klügel, T.; Velikoseltsev, A.; Schlüter, W.; Stedman, G. E.; Wells, J.-P. R.
2009-09-01
Ring laser gyroscopes exploit the Sagnac effect and measure rotation absolutely. They do not require an external reference frame and therefore provide an independent method to monitor Earth rotation. Large-scale versions of these gyroscopes promise to eventually provide a resolution for measuring variations in the Earth rotation rate similar to that of the established methods based on VLBI and GNSS. This would open the door to a continuous monitoring of LOD (Length of Day) and polar motion, which is not yet available today. Another advantage is access to the sub-daily frequency regime of Earth rotation. The ring laser “G” (Grossring), located at the Geodetic Observatory Wettzell (Germany), is the most advanced realization of such a large gyroscope. This paper outlines the current sensor design and properties.
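For reference, the standard Sagnac scale relation for a planar ring laser, quoted here in its textbook form rather than with values specific to G, is

$$\delta f=\frac{4A}{\lambda P}\,\hat{\mathbf{n}}\cdot\boldsymbol{\Omega},$$

where $A$ is the area enclosed by the beam path, $P$ its perimeter, $\lambda$ the laser wavelength, $\hat{\mathbf{n}}$ the normal to the ring plane and $\boldsymbol{\Omega}$ the rotation vector; the large area-to-perimeter ratio of instruments such as G is what raises the beat frequency $\delta f$ and hence the attainable resolution.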
Yuan, Liang (Leon); Herman, Peter R.
2016-01-01
Three-dimensional (3D) periodic nanostructures underpin a promising research direction on the frontiers of nanoscience and technology to generate advanced materials for exploiting novel photonic crystal (PC) and nanofluidic functionalities. However, formation of uniform and defect-free 3D periodic structures over large areas that can further integrate into multifunctional devices has remained a major challenge. Here, we introduce a laser scanning holographic method for 3D exposure in thick photoresist that combines the unique advantages of large area 3D holographic interference lithography (HIL) with the flexible patterning of laser direct writing to form both micro- and nano-structures in a single exposure step. Phase mask interference patterns accumulated over multiple overlapping scans are shown to stitch seamlessly and form uniform 3D nanostructure with beam size scaled to small 200 μm diameter. In this way, laser scanning is presented as a facile means to embed 3D PC structure within microfluidic channels for integration into an optofluidic lab-on-chip, demonstrating a new laser HIL writing approach for creating multi-scale integrated microsystems. PMID:26922872
Tait, E. W.; Ratcliff, L. E.; Payne, M. C.; ...
2016-04-20
Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the ONETEP linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane wave pseudopotential methods to a high degree of precision. The ability of ONETEP to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. As a result, we apply the novel functionality to a study of the electron energy loss spectra of defects on the (1 0 1) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable.
Winkel, Lenny H. E.; Trang, Pham Thi Kim; Lan, Vi Mai; Stengel, Caroline; Amini, Manouchehr; Ha, Nguyen Thi; Viet, Pham Hung; Berg, Michael
2011-01-01
Arsenic contamination of shallow groundwater is among the biggest health threats in the developing world. Targeting uncontaminated deep aquifers is a popular mitigation option although its long-term impact remains unknown. Here we present the alarming results of a large-scale groundwater survey covering the entire Red River Delta and a unique probability model based on three-dimensional Quaternary geology. Our unprecedented dataset reveals that ∼7 million delta inhabitants use groundwater contaminated with toxic elements, including manganese, selenium, and barium. Depth-resolved probabilities and arsenic concentrations indicate drawdown of arsenic-enriched waters from Holocene aquifers to naturally uncontaminated Pleistocene aquifers as a result of > 100 years of groundwater abstraction. Vertical arsenic migration induced by large-scale pumping from deep aquifers has been discussed to occur elsewhere, but has never been shown to occur at the scale seen here. The present situation in the Red River Delta is a warning for other As-affected regions where groundwater is extensively pumped from uncontaminated aquifers underlying high arsenic aquifers or zones. PMID:21245347
Industrial biomanufacturing: The future of chemical production.
Clomburg, James M; Crumbley, Anna M; Gonzalez, Ramon
2017-01-06
The current model for industrial chemical manufacturing employs large-scale megafacilities that benefit from economies of unit scale. However, this strategy faces environmental, geographical, political, and economic challenges associated with energy and manufacturing demands. We review how exploiting biological processes for manufacturing (i.e., industrial biomanufacturing) addresses these concerns while also supporting and benefiting from economies of unit number. Key to this approach is the inherent small scale and capital efficiency of bioprocesses and the ability of engineered biocatalysts to produce designer products at high carbon and energy efficiency with adjustable output, at high selectivity, and under mild process conditions. The biological conversion of single-carbon compounds represents a test bed to establish this paradigm, enabling rapid, mobile, and widespread deployment, access to remote and distributed resources, and adaptation to new and changing markets. Copyright © 2017, American Association for the Advancement of Science.
Janes, J K; Roe, A D; Rice, A V; Gorrell, J C; Coltman, D W; Langor, D W; Sperling, F A H
2016-01-01
An understanding of mating systems and fine-scale spatial genetic structure is required to effectively manage forest pest species such as Dendroctonus ponderosae (mountain pine beetle). Here we used genome-wide single-nucleotide polymorphisms to assess the fine-scale genetic structure and mating system of D. ponderosae collected from a single stand in Alberta, Canada. Fine-scale spatial genetic structure was absent within the stand and the majority of genetic variation was best explained at the individual level. Relatedness estimates support previous reports of pre-emergence mating. Parentage assignment tests indicate that a polygamous mating system better explains the relationships among individuals within a gallery than the previously reported female monogamous/male polygynous system. Furthermore, there is some evidence to suggest that females may exploit the galleries of other females, at least under epidemic conditions. Our results suggest that current management models are likely to be effective across large geographic areas based on the absence of fine-scale genetic structure. PMID:26286666
Automatic Energy Schemes for High Performance Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundriyal, Vaibhav
Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers affect significantly their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, network interconnect, such as Infiniband, may be exploited to maximize energy savings while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications to group them into phases and apply frequency scaling to them to save energy by exploiting the architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling to them apart from DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for the realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Honegger, J.L.; Czernichowski-Lauriol, I.; Criaud, A.
1989-01-01
The fluid of the Dogger aquifer is always used through a closed loop formed by the production well, the heating plant and the injection well. After two or three years of exploitation of the geothermal doublets in the northern part of the Paris basin, scaling and plugging problems have appeared in some cases. The results of the detailed study carried out at La Courneuve Nord, a typical site of this area, are presented. The drawdown of production rate, scaling in the heat exchanger and the increase of injection pressure required a rapid decision for workover operations on the wells. These cleaning operations and joint research studies allowed the authors to identify the cause of the plugging as well as to locate these deposits and to estimate their importance. After cleaning operations, the hydraulic properties of the reservoir seem to be recovered. Chemical and mineralogical analyses of these deposits identified the presence of a large variety of iron sulfide and a typical corrosion product. Biochemical and bacteriological studies show a very high content of micro-organisms. A chemical model, IPDEGAZ, is used to calculate the evolution of the saturation indexes of the fluid with respect to iron sulfide phases. The effects of parameters such as pressure, temperature, degassing and addition of iron by corrosion are simulated. The results of the observation and modeling approaches are compared.
High-yield production of graphene by liquid-phase exfoliation of graphite.
Hernandez, Yenny; Nicolosi, Valeria; Lotya, Mustafa; Blighe, Fiona M; Sun, Zhenyu; De, Sukanta; McGovern, I T; Holland, Brendan; Byrne, Michele; Gun'Ko, Yurii K; Boland, John J; Niraj, Peter; Duesberg, Georg; Krishnamurthy, Satheesh; Goodhue, Robbie; Hutchison, John; Scardaci, Vittorio; Ferrari, Andrea C; Coleman, Jonathan N
2008-09-01
Fully exploiting the properties of graphene will require a method for the mass production of this remarkable material. Two main routes are possible: large-scale growth or large-scale exfoliation. Here, we demonstrate graphene dispersions with concentrations up to approximately 0.01 mg ml⁻¹, produced by dispersion and exfoliation of graphite in organic solvents such as N-methyl-pyrrolidone. This is possible because the energy required to exfoliate graphene is balanced by the solvent-graphene interaction for solvents whose surface energies match that of graphene. We confirm the presence of individual graphene sheets by Raman spectroscopy, transmission electron microscopy and electron diffraction. Our method results in a monolayer yield of approximately 1 wt%, which could potentially be improved to 7-12 wt% with further processing. The absence of defects or oxides is confirmed by X-ray photoelectron, infrared and Raman spectroscopies. We are able to produce semi-transparent conducting films and conducting composites. Solution processing of graphene opens up a range of potential large-area applications, from device and sensor fabrication to liquid-phase chemistry.
Avoiding and tolerating latency in large-scale next-generation shared-memory multiprocessors
NASA Technical Reports Server (NTRS)
Probst, David K.
1993-01-01
A scalable solution to the memory-latency problem is necessary to prevent the large latencies of synchronization and memory operations inherent in large-scale shared-memory multiprocessors from reducing high performance. We distinguish latency avoidance and latency tolerance. Latency is avoided when data is brought to nearby locales for future reference. Latency is tolerated when references are overlapped with other computation. Latency-avoiding locales include: processor registers, data caches used temporally, and nearby memory modules. Tolerating communication latency requires parallelism, allowing the overlap of communication and computation. Latency-tolerating techniques include: vector pipelining, data caches used spatially, prefetching in various forms, and multithreading in various forms. Relaxing the consistency model permits increased use of avoidance and tolerance techniques. Each model is a mapping from the program text to sets of partial orders on program operations; it is a convention about which temporal precedences among program operations are necessary. Information about temporal locality and parallelism constrains the use of avoidance and tolerance techniques. Suitable architectural primitives and compiler technology are required to exploit the increased freedom to reorder and overlap operations in relaxed models.
Genetic structuring of northern myotis (Myotis septentrionalis) at multiple spatial scales
Johnson, Joshua B.; Roberts, James H.; King, Timothy L.; Edwards, John W.; Ford, W. Mark; Ray, David A.
2014-01-01
Although groups of bats may be genetically distinguishable at large spatial scales, the effects of forest disturbances, particularly permanent land use conversions on fine-scale population structure and gene flow of summer aggregations of philopatric bat species are less clear. We genotyped and analyzed variation at 10 nuclear DNA microsatellite markers in 182 individuals of the forest-dwelling northern myotis (Myotis septentrionalis) at multiple spatial scales, from within first-order watersheds scaling up to larger regional areas in West Virginia and New York. Our results indicate that groups of northern myotis were genetically indistinguishable at any spatial scale we considered, and the collective population maintained high genetic diversity. It is likely that the ability to migrate, exploit small forest patches, and use networks of mating sites located throughout the Appalachian Mountains, Interior Highlands, and elsewhere in the hibernation range have allowed northern myotis to maintain high genetic diversity and gene flow regardless of forest disturbances at local and regional spatial scales. A consequence of maintaining high gene flow might be the potential to minimize genetic founder effects following population declines caused currently by the enzootic White-nose Syndrome.
Active Exploration of Large 3D Model Repositories.
Gao, Lin; Cao, Yan-Pei; Lai, Yu-Kun; Huang, Hao-Zhi; Kobbelt, Leif; Hu, Shi-Min
2015-12-01
With broader availability of large-scale 3D model repositories, the need for efficient and effective exploration becomes more and more urgent. Existing model retrieval techniques do not scale well with the size of the database since often a large number of very similar objects are returned for a query, and the possibilities to refine the search are quite limited. We propose an interactive approach where the user feeds an active learning procedure by labeling either entire models or parts of them as "like" or "dislike" such that the system can automatically update an active set of recommended models. To provide an intuitive user interface, candidate models are presented based on their estimated relevance for the current query. From the methodological point of view, our main contribution is to exploit not only the similarity between a query and the database models but also the similarities among the database models themselves. We achieve this by an offline pre-processing stage, where global and local shape descriptors are computed for each model and a sparse distance metric is derived that can be evaluated efficiently even for very large databases. We demonstrate the effectiveness of our method by interactively exploring a repository containing over 100 K models.
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
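To make the compression step described above concrete, the following is a minimal NumPy sketch of a randomized SVD applied to a synthetic dictionary; the dictionary size, rank, oversampling, and matching procedure are illustrative assumptions rather than values or code from the paper.

```python
import numpy as np

def randomized_svd(D, k, oversample=10, seed=0):
    """Approximate rank-k SVD of a dictionary D (entries x time points)
    via the standard randomized range-finder."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((D.shape[1], k + oversample))
    Q, _ = np.linalg.qr(D @ omega)       # orthonormal basis for the sketched range of D
    Ub, s, Vt = np.linalg.svd(Q.T @ D, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

# Illustrative sizes only: 20,000 dictionary entries, 500 time points, rank 25.
rng = np.random.default_rng(1)
D = rng.standard_normal((20_000, 500)).astype(np.float32)
U, s, Vt = randomized_svd(D, k=25)

# Match in the compressed (rank-k) space: memory per entry drops from 500 to 25 values.
D_c = D @ Vt.T                                        # compressed dictionary
signal = D[1234] + 0.01 * rng.standard_normal(500)    # noisy copy of entry 1234
s_c = Vt @ signal                                     # compressed signal
scores = (D_c @ s_c) / (np.linalg.norm(D_c, axis=1) * np.linalg.norm(s_c))
print("best match:", int(np.argmax(scores)))          # expected: 1234
```

The point of the sketch is the memory argument: once the time basis Vt is available, both the dictionary and each measured signal can be matched in the k-dimensional subspace instead of the full time dimension.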
NASA Astrophysics Data System (ADS)
Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae
2018-02-01
This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic demonstrates ways to maximize space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space layer by layer. The iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.
Parallel Computing for Probabilistic Response Analysis of High Temperature Composites
NASA Technical Reports Server (NTRS)
Sues, R. H.; Lua, Y. J.; Smith, M. D.
1994-01-01
The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Supinski, B.; Caliga, D.
2017-09-28
The primary objective of this project was to develop memory optimization technology to efficiently deliver data to, and distribute data within, the SRC-6's Field Programmable Gate Array ("FPGA")-based Multi-Adaptive Processors (MAPs). The hardware/software approach was to explore efficient MAP configurations and generate the compiler technology to exploit those configurations. This memory accessing technology represents an important step towards making reconfigurable symmetric multi-processor (SMP) architectures that will be a cost-effective solution for large-scale scientific computing.
NASA Astrophysics Data System (ADS)
Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.
2016-10-01
An analytical benchmark and a simple consistent Mathematica program are proposed for graphene and carbon nanotubes, which may serve to test any molecular dynamics code implemented with REBO potentials. By exploiting the benchmark, we checked results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second-generation Brenner potential; we show that this code, in its current implementation, produces results which are offset from those of the benchmark by a significant amount, and we provide evidence of the reason.
CSM Testbed Development and Large-Scale Structural Applications
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Gillian, R. E.; Mccleary, Susan L.; Lotts, C. G.; Poole, E. L.; Overman, A. L.; Macy, S. C.
1989-01-01
A research activity called Computational Structural Mechanics (CSM) conducted at the NASA Langley Research Center is described. This activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM Testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM Testbed methods development environment is presented and some new numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.
Lichtenberg, P.A.; Howard, H; Simaskp, P.; Mall, S.; Speir, J.; Farrell, C.; Tatro, R; Rahman-Filipiak, A.; Ficker, L.J.
2016-01-01
One of the challenges in preventing the financial exploitation of older adults is that neither criminal justice nor noncriminal justice professionals are equipped to detect capacity deficits. Because decision-making capacity is a cornerstone assessment in cases of financial exploitation, effective instruments for measuring this capacity are essential. We introduce a new screening scale for financial decision making that can be administered to older adults. To explore the scale’s implementation and assess construct validity, we conducted a pilot study of 29 older adults seen by APS workers and 79 seen by other professionals. Case examples are included. PMID:27010780
Insular threat associations within taxa worldwide.
Leclerc, Camille; Courchamp, Franck; Bellard, Céline
2018-04-23
The global loss of biodiversity can be attributed to numerous threats. While pioneer studies have investigated their relative importance, the majority of those studies are restricted to specific geographic regions and/or taxonomic groups and only consider a small subset of threats, generally in isolation despite their frequent interaction. Here, we investigated 11 major threats responsible for species decline on islands worldwide. We applied an innovative method of network analyses to disentangle the associations of multiple threats on vertebrates, invertebrates, and plants in 15 insular regions. Biological invasions, wildlife exploitation, and cultivation, either alone or in association, were found to be the three most important drivers of species extinction and decline on islands. Specifically, wildlife exploitation and cultivation are largely associated with the decline of threatened plants and terrestrial vertebrates, whereas biological invasions mostly threaten invertebrates and freshwater fish. Furthermore, biodiversity in the Indian Ocean and near the Asian coasts is mostly affected by wildlife exploitation and cultivation compared to biological invasions in the Pacific and Atlantic insular regions. We highlighted specific associations of threats at different scales, showing that the analysis of each threat in isolation might be inadequate for developing effective conservation policies and managements.
Mishra, Bud; Daruwala, Raoul-Sam; Zhou, Yi; Ugel, Nadia; Policriti, Alberto; Antoniotti, Marco; Paxia, Salvatore; Rejali, Marc; Rudra, Archisman; Cherepinsky, Vera; Silver, Naomi; Casey, William; Piazza, Carla; Simeoni, Marta; Barbano, Paolo; Spivak, Marina; Feng, Jiawu; Gill, Ofer; Venkatesh, Mysore; Cheng, Fang; Sun, Bing; Ioniata, Iuliana; Anantharaman, Thomas; Hubbard, E Jane Albert; Pnueli, Amir; Harel, David; Chandru, Vijay; Hariharan, Ramesh; Wigler, Michael; Park, Frank; Lin, Shih-Chieh; Lazebnik, Yuri; Winkler, Franz; Cantor, Charles R; Carbone, Alessandra; Gromov, Mikhael
2003-01-01
We collaborate in a research program aimed at creating a rigorous framework, experimental infrastructure, and computational environment for understanding, experimenting with, manipulating, and modifying a diverse set of fundamental biological processes at multiple scales and spatio-temporal modes. The novelty of our research is based on an approach that (i) requires coevolution of experimental science and theoretical techniques and (ii) exploits a certain universality in biology guided by a parsimonious model of evolutionary mechanisms operating at the genomic level and manifesting at the proteomic, transcriptomic, phylogenic, and other higher levels. Our current program in "systems biology" endeavors to marry large-scale biological experiments with the tools to ponder and reason about large, complex, and subtle natural systems. To achieve this ambitious goal, ideas and concepts are combined from many different fields: biological experimentation, applied mathematical modeling, computational reasoning schemes, and large-scale numerical and symbolic simulations. From a biological viewpoint, the basic issues are many: (i) understanding common and shared structural motifs among biological processes; (ii) modeling biological noise due to interactions among a small number of key molecules or loss of synchrony; (iii) explaining the robustness of these systems in spite of such noise; and (iv) cataloging multistatic behavior and adaptation exhibited by many biological processes.
Azad, Ariful; Ouzounis, Christos A; Kyrpides, Nikos C; Buluç, Aydin
2018-01-01
Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. Here, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ∼70 million nodes with ∼68 billion edges in ∼2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license. PMID:29315405
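For orientation, the serial core that HipMCL parallelizes is the classic MCL iteration of expansion and inflation on a column-stochastic matrix. Below is a minimal dense NumPy sketch of that iteration on a toy graph; the real algorithm works on very large sparse matrices with pruning, so the dense representation, inflation value, and toy graph are purely illustrative.

```python
import numpy as np

def mcl(adj, inflation=2.0, max_iter=100, tol=1e-6):
    """Minimal dense Markov Clustering: alternate expansion (matrix squaring)
    with inflation (elementwise power followed by column renormalization)."""
    A = adj.astype(float) + np.eye(len(adj))        # add self-loops
    M = A / A.sum(axis=0, keepdims=True)            # column-stochastic matrix
    for _ in range(max_iter):
        M_prev = M
        M = M @ M                                   # expansion: flow along longer paths
        M = M ** inflation                          # inflation: strengthen strong flows
        M = M / M.sum(axis=0, keepdims=True)
        if np.abs(M - M_prev).max() < tol:
            break
    clusters = {}                                   # group nodes by their attractor row
    for node, attractor in enumerate(M.argmax(axis=0)):
        clusters.setdefault(int(attractor), []).append(node)
    return list(clusters.values())

# Toy graph: two triangles joined by a single bridge edge (2-3).
adj = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])
print(mcl(adj))   # typically two clusters: [[0, 1, 2], [3, 4, 5]]
```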
Azad, Ariful; Pavlopoulos, Georgios A.; Ouzounis, Christos A.; ...
2018-01-05
Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. In this paper, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ~70 million nodes with ~68 billion edges in ~2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. Finally, HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license.
Accelerating large-scale protein structure alignments with graphics processing units
2012-01-01
Background: Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings: We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions: ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using massive parallel computing power of GPU. PMID:22357132
Gamifying Video Object Segmentation.
Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela
2017-10-01
Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.
NASA Astrophysics Data System (ADS)
Klose, C. D.
2006-12-01
This presentation emphasizes the dualism of natural resources exploitation and economic growth versus geomechanical pollution and risks of human-triggered earthquakes. Large-scale geoengineering activities, e.g., mining, reservoir impoundment, oil/gas production, water exploitation or fluid injection, alter pre-existing lithostatic stress states in the earth's crust and are anticipated to trigger earthquakes. Such processes of in- situ stress alteration are termed geomechanical pollution. Moreover, since the 19th century more than 200 earthquakes have been documented worldwide with a seismic moment magnitude of 4.5
Multi-scale Material Appearance
NASA Astrophysics Data System (ADS)
Wu, Hongzhi
Modeling and rendering the appearance of materials is important for a diverse range of applications of computer graphics - from automobile design to movies and cultural heritage. The appearance of materials varies considerably at different scales, posing significant challenges due to the sheer complexity of the data, as well as the need to maintain inter-scale consistency constraints. This thesis presents a series of studies around the modeling, rendering and editing of multi-scale material appearance. To efficiently render material appearance at multiple scales, we develop an object-space precomputed adaptive sampling method, which precomputes a hierarchy of view-independent points that preserve multi-level appearance. To support bi-scale material appearance design, we propose a novel reflectance filtering algorithm, which rapidly computes the large-scale appearance from small-scale details, by exploiting the low-rank structures of Bidirectional Visible Normal Distribution Functions and pre-rotated Bidirectional Reflectance Distribution Functions in the matrix formulation of the rendering algorithm. This approach can guide the physical realization of appearance, as well as the modeling of real-world materials using very sparse measurements. Finally, we present a bi-scale-inspired high-quality general representation for material appearance described by Bidirectional Texture Functions. Our representation is at once compact, easily editable, and amenable to efficient rendering.
Deep convolutional neural network based antenna selection in multiple-input multiple-output system
NASA Astrophysics Data System (ADS)
Cai, Jiaxin; Li, Yan; Hu, Ying
2018-03-01
Antenna selection for wireless communication systems has attracted increasing attention due to the challenge of keeping a balance between communication performance and computational complexity in large-scale Multiple-Input Multiple-Output antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, the label of the attenuation coefficients channel matrix is generated by minimizing the key performance indicator of the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues of attenuation coefficients is learned on the training antenna systems. Finally, we use the adopted deep convolutional neural network to classify the channel matrix labels of test antennas and select the optimal antenna subset. Simulation experimental results demonstrate that our method can achieve better performance than the state-of-the-art baselines for data-driven based wireless antenna selection.
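The abstract does not give the network architecture, so the following PyTorch sketch only illustrates the general idea of treating antenna-subset selection as classification over the channel matrix; the array sizes, number of candidate subsets, and two-channel real/imaginary input encoding are all assumptions.

```python
import torch
import torch.nn as nn

class AntennaSelectCNN(nn.Module):
    """Toy CNN mapping a complex channel matrix H (as two real-valued channels)
    to scores over candidate antenna subsets treated as class labels."""
    def __init__(self, n_rx=8, n_tx=8, n_subsets=28):  # e.g. 2 of 8 antennas: C(8,2)=28 (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(32 * n_rx * n_tx, n_subsets)

    def forward(self, h_real, h_imag):
        x = torch.stack([h_real, h_imag], dim=1)    # (batch, 2, n_rx, n_tx)
        x = self.features(x).flatten(1)
        return self.classifier(x)                   # logits over candidate subsets

model = AntennaSelectCNN()
h_real, h_imag = torch.randn(4, 8, 8), torch.randn(4, 8, 8)
logits = model(h_real, h_imag)
best_subset = logits.argmax(dim=1)                  # selected subset index per sample
print(best_subset.shape)                            # torch.Size([4])
```

In a setup like this, the class labels would come from evaluating the chosen key performance indicator for every candidate subset on the training channels, so the network learns to predict the best subset directly from the attenuation coefficients.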
The large-scale three-point correlation function of the SDSS BOSS DR12 CMASS galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.; Beutler, Florian; Chuang, Chia-Hsun; Cuesta, Antonio J.; Ge, Jian; Gil-Marín, Héctor; Ho, Shirley; Kitaura, Francisco-Shu; McBride, Cameron K.; Nichol, Robert C.; Percival, Will J.; Rodríguez-Torres, Sergio; Ross, Ashley J.; Scoccimarro, Román; Seo, Hee-Jong; Tinker, Jeremy; Tojeiro, Rita; Vargas-Magaña, Mariana
2017-06-01
We report a measurement of the large-scale three-point correlation function of galaxies using the largest data set for this purpose to date, 777 202 luminous red galaxies in the Sloan Digital Sky Survey Baryon Acoustic Oscillation Spectroscopic Survey (SDSS BOSS) DR12 CMASS sample. This work exploits the novel algorithm of Slepian & Eisenstein to compute the multipole moments of the 3PCF in O(N^2) time, with N the number of galaxies. Leading-order perturbation theory models the data well in a compressed basis where one triangle side is integrated out. We also present an accurate and computationally efficient means of estimating the covariance matrix. With these techniques, the redshift-space linear and non-linear bias are measured, with 2.6 per cent precision on the former if σ8 is fixed. The data also indicate a 2.8σ preference for the BAO, confirming the presence of BAO in the three-point function.
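The O(N²) scaling quoted above rests on the spherical-harmonic addition theorem: the Legendre-weighted sum over all neighbour pairs around a primary galaxy factorizes into per-radial-bin a_lm coefficients that require only one pass over the neighbours. The sketch below shows that factorization for a single primary; weights, survey geometry, edge corrections and the full estimator are omitted, so it is a schematic of the idea rather than the survey pipeline.

```python
import numpy as np
from scipy.special import sph_harm

def multipoles_around_primary(primary, neighbors, r_edges, ell_max=4):
    """Multipole moments zeta_ell(bin1, bin2) of the galaxy field around one
    primary, using the addition theorem
        sum_{j,k} P_ell(cos theta_jk) = 4*pi/(2*ell+1) * sum_m a_lm(bin1) * conj(a_lm(bin2)),
    so each primary costs O(N_neighbors) rather than O(N_neighbors^2)."""
    d = neighbors - primary                           # assumes primary not in neighbors
    r = np.linalg.norm(d, axis=1)
    az = np.arctan2(d[:, 1], d[:, 0])                 # azimuthal angle
    pol = np.arccos(np.clip(d[:, 2] / r, -1.0, 1.0))  # polar angle
    nbins = len(r_edges) - 1
    rbin = np.digitize(r, r_edges) - 1

    zeta = np.zeros((ell_max + 1, nbins, nbins))
    for ell in range(ell_max + 1):
        a = np.zeros((2 * ell + 1, nbins), dtype=complex)
        for m in range(-ell, ell + 1):
            ylm = sph_harm(m, ell, az, pol)           # scipy: sph_harm(m, l, azimuth, polar)
            for b in range(nbins):
                a[m + ell, b] = ylm[rbin == b].sum()
        zeta[ell] = (4 * np.pi / (2 * ell + 1)) * np.real(a.T @ a.conj())
    return zeta

# Toy usage with random points in a unit box.
rng = np.random.default_rng(0)
pts = rng.random((5000, 3))
z = multipoles_around_primary(pts[0], pts[1:], r_edges=np.linspace(0.02, 0.2, 7))
print(z.shape)   # (ell_max+1, nbins, nbins) = (5, 6, 6)
```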
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2017-07-01
Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
On the decentralized control of large-scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chong, C.
1973-01-01
The decentralized control of stochastic large scale systems was considered. Particular emphasis was given to control strategies which utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case when each decision variable depends on different information and the constraint is only required to be satisfied on the average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled and then certain constraints are required to be satisfied, either in an off-line or on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained. The lower level problems are all uncoupled. For on-line coordination, distinction is made between open loop feedback optimal coordination and closed loop optimal coordination.
Helium ion microscopy of Lepidoptera scales.
Boden, Stuart A; Asadollahbaik, Asa; Rutt, Harvey N; Bagnall, Darren M
2012-01-01
In this report, helium ion microscopy (HIM) is used to study the micro- and nanostructures responsible for structural color in the wings of two species of Lepidoptera from the Papilionidae family: Papilio ulysses (Blue Mountain Butterfly) and Parides sesostris (Emerald-patched Cattleheart). Electronic charging of uncoated scales from the wings of these butterflies, due to the incident ion beam, is successfully neutralized, leading to images displaying a large depth-of-field and a high level of surface detail, which would normally be obscured by traditional coating methods used for scanning electron microscopy (SEM). The images are compared with those from variable pressure SEM, demonstrating the superiority of HIM at high magnifications. In addition, the large depth-of-field capabilities of HIM are exploited through the creation of stereo pairs that allow the exploration of the third dimension. Furthermore, the extraction of quantitative height information, which matches well with cross-sectional transmission electron microscopy measurements from the literature, is demonstrated. © Wiley Periodicals, Inc.
On the impact of approximate computation in an analog DeSTIN architecture.
Young, Steven; Lu, Junjie; Holleman, Jeremy; Arel, Itamar
2014-05-01
Deep machine learning (DML) holds the potential to revolutionize machine learning by automating rich feature extraction, which has become the primary bottleneck of human engineering in pattern recognition systems. However, the heavy computational burden renders DML systems implemented on conventional digital processors impractical for large-scale problems. The highly parallel computations required to implement large-scale deep learning systems are well suited to custom hardware. Analog computation has demonstrated power efficiency advantages of multiple orders of magnitude relative to digital systems while performing nonideal computations. In this paper, we investigate typical error sources introduced by analog computational elements and their impact on system-level performance in DeSTIN--a compositional deep learning architecture. These inaccuracies are evaluated on a pattern classification benchmark, clearly demonstrating the robustness of the underlying algorithm to the errors introduced by analog computational elements. A clear understanding of the impacts of nonideal computations is necessary to fully exploit the efficiency of analog circuits.
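As a toy analogue of the robustness question studied above (not the paper's DeSTIN architecture or its circuit models), the sketch below injects two generic analog nonidealities, static per-weight gain mismatch and additive output offset noise, into the matrix multiply of a linear classifier and reports the resulting accuracy drop; all noise magnitudes and the synthetic task are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" linear classifier on synthetic two-class data.
n, d, classes = 2000, 64, 2
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, classes))     # pretend these are the learned weights
y = (X @ W).argmax(axis=1)                # labels consistent with the noiseless weights

def classify(X, W, gain_sigma=0.0, offset_sigma=0.0):
    """Matrix multiply with analog-style nonidealities: static per-weight gain
    mismatch (multiplicative) and additive per-output offset noise."""
    W_eff = W * (1 + gain_sigma * rng.standard_normal(W.shape))
    scores = X @ W_eff + offset_sigma * rng.standard_normal((X.shape[0], W.shape[1]))
    return scores.argmax(axis=1)

for gain, offset in [(0.0, 0.0), (0.05, 0.1), (0.2, 0.5)]:
    acc = (classify(X, W, gain, offset) == y).mean()
    print(f"gain mismatch {gain:.2f}, offset noise {offset:.2f}: accuracy {acc:.3f}")
```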
Collaborative mining and interpretation of large-scale data for biomedical research insights.
Tsiliki, Georgia; Karacapilidis, Nikos; Christodoulou, Spyros; Tzagarakis, Manolis
2014-01-01
Biomedical research becomes increasingly interdisciplinary and collaborative in nature. Researchers need to efficiently and effectively collaborate and make decisions by meaningfully assembling, mining and analyzing available large-scale volumes of complex multi-faceted data residing in different sources. In line with related research directives revealing that, in spite of the recent advances in data mining and computational analysis, humans can easily detect patterns which computer algorithms may have difficulty in finding, this paper reports on the practical use of an innovative web-based collaboration support platform in a biomedical research context. Arguing that dealing with data-intensive and cognitively complex settings is not a technical problem alone, the proposed platform adopts a hybrid approach that builds on the synergy between machine and human intelligence to facilitate the underlying sense-making and decision making processes. User experience shows that the platform enables more informed and quicker decisions, by displaying the aggregated information according to their needs, while also exploiting the associated human intelligence.
Solving the shrinkage-induced PDMS alignment registration issue in multilayer soft lithography
NASA Astrophysics Data System (ADS)
Moraes, Christopher; Sun, Yu; Simmons, Craig A.
2009-06-01
Shrinkage of polydimethylsiloxane (PDMS) complicates alignment registration between layers during multilayer soft lithography fabrication. This often hinders the development of large-scale microfabricated arrayed devices. Here we report a rapid method to construct large-area, multilayered devices with stringent alignment requirements. This technique, which exploits a previously unrecognized aspect of sandwich mold fabrication, improves device yield, enables highly accurate alignment over large areas of multilayered devices and does not require strict regulation of fabrication conditions or extensive calibration processes. To demonstrate this technique, a microfabricated Braille display was developed and characterized. High device yield and accurate alignment within 15 µm were achieved over three layers for an array of 108 Braille units spread over a 6.5 cm2 area, demonstrating the fabrication of well-aligned devices with greater ease and efficiency than previously possible.
Sulfide scaling in low enthalpy geothermal environments; A survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Criaud, A.; Fouillac, C.
1989-01-01
A review of the sulfide scaling phenomena in low-temperature environments is presented. While high-temperature fluids tend to deposit metal sulfides because of their high concentrations of dissolved metals and variations of temperature, pressure and fluid chemistry, low temperature media are characterized by very low metal content but much higher dissolved sulfide. In the case of the geothermal wells of the Paris Basin, detailed studies demonstrate that the relatively large concentrations of chloride and dissolved sulfide are responsible for corrosion and consequent formation of iron sulfide scale composed of mackinawite, pyrite and pyrrhotite. The effects of the exploitation schemes are far less important than the corrosion of the casings. The low-enthalpy fluids that do not originate from sedimentary aquifers (such as in Iceland and Bulgaria) have a limited corrosion potential, and the thin sulfide film that appears may prevent the progress of corrosion.
History of Missouri Forests in the Era of Exploitation and Conservation
David Benac; Susan Flader
2004-01-01
The era of timber exploitation and early conservation in the Missouri Ozarks occurred roughly from 1880 to 1950, beginning when large timber companies moved into the region to harvest the pine and oak of the valleys and ridgelines. Pine was largely depleted by 1910, but oak harvest continued. Resident Ozarkers, who came largely from a tradition of subsistence hunting,...
GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen
2015-09-30
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and the device.
Query-based biclustering of gene expression data using Probabilistic Relational Models.
Zhao, Hui; Cloots, Lore; Van den Bulcke, Tim; Wu, Yan; De Smet, Riet; Storms, Valerie; Meysman, Pieter; Engelen, Kristof; Marchal, Kathleen
2011-02-15
With the availability of large scale expression compendia it is now possible to view one's own findings in the light of what is already available and retrieve genes with an expression profile similar to a set of genes of interest (i.e., a query or seed set) for a subset of conditions. To that end, a query-based strategy is needed that maximally exploits the coexpression behaviour of the seed genes to guide the biclustering, but that at the same time is robust against the presence of noisy genes in the seed set, as seed genes are often assumed, but not guaranteed, to be coexpressed in the queried compendium. Therefore, we developed ProBic, a query-based biclustering strategy based on Probabilistic Relational Models (PRMs) that exploits the use of prior distributions to extract the information contained within the seed set. We applied ProBic to a large scale Escherichia coli compendium to extend partially described regulons with potentially novel members. We compared ProBic's performance with previously published query-based biclustering algorithms, namely ISA and QDB, from the perspective of bicluster expression quality, robustness of the outcome against noisy seed sets, and biological relevance. This comparison shows that ProBic is able to retrieve biologically relevant, high quality biclusters that retain their seed genes, and that it is particularly strong in handling noisy seeds. ProBic is a query-based biclustering algorithm developed in a flexible framework, designed to detect biologically relevant, high quality biclusters that retain relevant seed genes even in the presence of noise or when dealing with low quality seed sets.
Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...
2015-07-14
Sparse matrix vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high end multi-core architectures and we use a message passing interface + open multiprocessing hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
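The symmetry optimization mentioned above amounts to letting each stored upper-triangle entry contribute to two entries of the result vector. Here is a minimal serial NumPy/SciPy sketch of that idea; the distributed MPI+OpenMP implementation, communication overlap, and topology-aware mapping discussed in the abstract are not shown.

```python
import numpy as np
import scipy.sparse as sp

def symmetric_spmv_upper(A_upper, x):
    """y = A x for symmetric A, given only its upper triangle (incl. diagonal)
    in CSR form: every off-diagonal entry a_ij is applied twice,
    once as a_ij * x[j] into y[i] and once as a_ij * x[i] into y[j]."""
    y = np.zeros_like(x, dtype=float)
    indptr, indices, data = A_upper.indptr, A_upper.indices, A_upper.data
    for i in range(A_upper.shape[0]):
        for k in range(indptr[i], indptr[i + 1]):
            j, a = indices[k], data[k]
            y[i] += a * x[j]
            if j != i:
                y[j] += a * x[i]          # symmetric counterpart
    return y

# Check against the full symmetric matrix.
rng = np.random.default_rng(1)
A = sp.random(200, 200, density=0.05, random_state=1)
A = A + A.T                                # make it symmetric
A_upper = sp.triu(A).tocsr()               # store only the upper triangle
x = rng.standard_normal(200)
assert np.allclose(symmetric_spmv_upper(A_upper, x), A @ x)
print("symmetric SpMV matches full SpMV")
```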
Scaling identity connects human mobility and social interactions.
Deville, Pierre; Song, Chaoming; Eagle, Nathan; Blondel, Vincent D; Barabási, Albert-László; Wang, Dashun
2016-06-28
Massive datasets that capture human movements and social interactions have catalyzed rapid advances in our quantitative understanding of human behavior during the past years. One important aspect affecting both areas is the critical role space plays. Indeed, growing evidence suggests both our movements and communication patterns are associated with spatial costs that follow reproducible scaling laws, each characterized by its specific critical exponents. Although human mobility and social networks develop concomitantly as two prolific yet largely separated fields, we lack any known relationships between the critical exponents explored by them, despite the fact that they often study the same datasets. Here, by exploiting three different mobile phone datasets that capture simultaneously these two aspects, we discovered a new scaling relationship, mediated by a universal flux distribution, which links the critical exponents characterizing the spatial dependencies in human mobility and social networks. Therefore, the widely studied scaling laws uncovered in these two areas are not independent but connected through a deeper underlying reality.
A Spectral Method for Spatial Downscaling
Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.
2014-01-01
Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. PMID:24965037
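The central idea above, calibrating model output against observations separately at each spatial scale and using the model only where it is informative, can be illustrated with a toy one-dimensional Fourier version. The actual method handles irregular monitoring locations within a Bayesian spectral framework, so the gridded "observations", frequency bands, and correlation threshold below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

# Toy 1-D fields: the "model" shares the large-scale signal with the "observations"
# but has an amplitude bias and its own uncorrelated small-scale structure.
x = np.arange(n)
large = np.sin(2 * np.pi * x / 64)
obs = large + 0.3 * rng.standard_normal(n)
model = 1.5 * large + 0.3 * rng.standard_normal(n)

F_model, F_obs = np.fft.rfft(model), np.fft.rfft(obs)
freqs = np.fft.rfftfreq(n)

# Calibrate scale by scale: regress observations on model output within frequency
# bands and keep only the bands where the two are usefully correlated.
band_edges = [0.0, 0.05, 0.15, 0.5]
F_pred = np.zeros_like(F_model)
for lo, hi in zip(band_edges[:-1], band_edges[1:]):
    band = (freqs >= lo) & (freqs < hi)
    mf, of = F_model[band], F_obs[band]
    power = np.vdot(mf, mf).real
    coef = np.vdot(mf, of).real / power if power > 0 else 0.0
    corr = abs(np.vdot(mf, of)) / np.sqrt(power * np.vdot(of, of).real + 1e-12)
    if corr > 0.5:                 # use the model only at scales where it is informative
        F_pred[band] = coef * mf

prediction = np.fft.irfft(F_pred, n)
print("RMSE, raw model:        ", np.sqrt(np.mean((model - obs) ** 2)))
print("RMSE, scale-calibrated: ", np.sqrt(np.mean((prediction - obs) ** 2)))
```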
NASA Astrophysics Data System (ADS)
Duda, Mandy; Bracke, Rolf; Stöckhert, Ferdinand; Wittig, Volker
2017-04-01
A fundamental problem of technological applications related to the exploration and provision of geothermal energy is the inaccessibility of subsurface processes. As a result, actual reservoir properties can only be determined using (a) indirect measurement techniques such as seismic surveys, machine feedback and geophysical borehole logging, (b) laboratory experiments capable of simulating in-situ properties, but failing to preserve temporal and spatial scales, or vice versa, and (c) numerical simulations. Moreover, technological applications related to the drilling process, the completion and cementation of a wellbore or the stimulation and exploitation of the reservoir are exposed to high pressure and temperature conditions as well as corrosive environments resulting from both rock formation and geofluid characteristics. To address fundamental and applied questions in the context of geothermal energy provision and subsurface exploration in general, one of Europe's largest geoscientific laboratory infrastructures is introduced. The in-situ Borehole and Geofluid Simulator (i.BOGS) allows quasi scale-preserving processes to be simulated at reservoir conditions up to depths of 5000 m and represents a large scale pressure vessel for iso-/hydrostatic and pore pressures up to 125 MPa and temperatures from -10°C to 180°C. The autoclave can either be filled with large rock core samples (25 cm in diameter, up to 3 m length) or with fluids and technical borehole devices (e.g. pumps, sensors). The pressure vessel is equipped with an ultrasound system for active transmission and passive recording of acoustic emissions, and can be complemented by additional sensors. The i.BOGS forms the basic module of the Match.BOGS, which will ultimately consist of three modules: (A) the i.BOGS; (B) the Drill.BOGS, a drilling module to be attached to the i.BOGS, capable of applying realistic torques and contact forces to a drilling device that enters the i.BOGS; and (C) the Fluid.BOGS, a geofluid reactor for composing the highly corrosive geofluids that serve as synthetic groundwater / pore fluid in the i.BOGS. The i.BOGS will support scientists and engineers in developing instruments and applications such as drilling tooling and drillstrings, borehole cements and cementation procedures, geophysical tooling and sensors, or logging/measuring-while-drilling equipment, but will also contribute to optimized reservoir exploitation methods, for example related to stimulation techniques, pumping equipment and long-term reservoir accessibility.
Elmhagen, B; Ludwig, G; Rushton, S P; Helle, P; Lindén, H
2010-07-01
1. The Mesopredator Release Hypothesis (MRH) suggests that top predator suppression of mesopredators is a key ecosystem function with cascading impacts on herbivore prey, but it remains to be shown that this top-down cascade impacts the large-scale structure of ecosystems. 2. The Exploitation Ecosystems Hypothesis (EEH) predicts that regional ecosystem structures are determined by top-down exploitation and bottom-up productivity. In contrast to MRH, EEH assumes that interference among predators has a negligible impact on the structure of ecosystems with three trophic levels. 3. We use the recolonization of a top predator in a three-level boreal ecosystem as a natural experiment to test if large-scale biomass distributions and population trends support MRH. Inspired by EEH, we also test if top-down interference and bottom-up productivity impact regional ecosystem structures. 4. We use data from the Finnish Wildlife Triangle Scheme, which has monitored top predator (lynx, Lynx lynx), mesopredator (red fox, Vulpes vulpes) and prey (mountain hare, Lepus timidus) abundance for 17 years in a 200 000 km² study area which covers a distinct productivity gradient. 5. Fox biomass was lower than expected from productivity where lynx biomass was high, whilst hare biomass was lower than expected from productivity where fox biomass was high. Hence, where interference controlled fox abundance, lynx had an indirect positive impact on hare abundance as predicted by MRH. The rates of change indicated that lynx expansion gradually suppressed fox biomass. 6. Lynx status caused shifts between ecosystem structures. In the 'interference ecosystem', lynx and hare biomass increased with productivity whilst fox biomass did not. In the 'mesopredator release ecosystem', fox biomass increased with productivity but hare biomass did not. Thus, biomass that was controlled top-down did not respond to changes in productivity. This fulfils a critical prediction of EEH. 7. We conclude that the cascade involving top predators, mesopredators and their prey can determine large-scale biomass distribution patterns and regional ecosystem structures. Hence, interference within trophic levels has to be taken into account to understand how terrestrial ecosystem structures are shaped.
General relativistic screening in cosmological simulations
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Paranjape, Aseem
2016-10-01
We revisit the issue of interpreting the results of large volume cosmological simulations in the context of large-scale general relativistic effects. We look for simple modifications to the nonlinear evolution of the gravitational potential ψ that lead on large scales to the correct, fully relativistic description of density perturbations in the Newtonian gauge. We note that the relativistic constraint equation for ψ can be cast as a diffusion equation, with a diffusion length scale determined by the expansion of the Universe. Exploiting the weak time evolution of ψ in all regimes of interest, this equation can be further accurately approximated as a Helmholtz equation, with an effective relativistic "screening" scale ℓ related to the Hubble radius. We demonstrate that it is thus possible to carry out N-body simulations in the Newtonian gauge by replacing Poisson's equation with this Helmholtz equation, involving a trivial change in the Green's function kernel. Our results also motivate a simple, approximate (but very accurate) gauge transformation, δ_N(k) ≈ δ_sim(k) × (k² + ℓ⁻²)/k², to convert the density field δ_sim of standard collisionless N-body simulations (initialized in the comoving synchronous gauge) into the Newtonian gauge density δ_N at arbitrary times. A similar conversion can also be written in terms of particle positions. Our results can be interpreted in terms of a Jeans stability criterion induced by the expansion of the Universe. The appearance of the screening scale ℓ in the evolution of ψ, in particular, leads to a natural resolution of the "Jeans swindle" in the presence of superhorizon modes.
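The quoted gauge conversion is simple enough to apply as a Fourier-space post-processing step. The following is a minimal numpy sketch under stated assumptions (a periodic cubic box and a real density contrast field); the function and parameter names (newtonian_gauge_density, box_size, ell) are illustrative and not from the paper.

```python
import numpy as np

def newtonian_gauge_density(delta_sim, box_size, ell):
    """Convert an N-body (comoving synchronous gauge) density contrast field
    to the Newtonian gauge via delta_N(k) = delta_sim(k) * (k^2 + ell^-2) / k^2.

    delta_sim : 3D numpy array of the simulated density contrast
    box_size  : comoving box side length (same units as ell)
    ell       : relativistic screening length (of order the Hubble radius)
    """
    n = delta_sim.shape[0]
    delta_k = np.fft.rfftn(delta_sim)
    # Wavenumber grids for an n^3 periodic box
    kx = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    k2 = kx[:, None, None] ** 2 + kx[None, :, None] ** 2 + kz[None, None, :] ** 2
    factor = np.ones_like(k2)
    nonzero = k2 > 0
    factor[nonzero] = (k2[nonzero] + ell ** -2) / k2[nonzero]  # k = 0 mode left untouched
    return np.fft.irfftn(delta_k * factor, s=delta_sim.shape)
```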
NASA Astrophysics Data System (ADS)
Leutwyler, David; Fuhrer, Oliver; Cumming, Benjamin; Lapillonne, Xavier; Gysi, Tobias; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph
2014-05-01
The representation of moist convection is a major shortcoming of current global and regional climate models. State-of-the-art global models usually operate at grid spacings of 10-300 km and therefore cannot fully resolve the relevant upscale and downscale energy cascades, so parametrization of the relevant sub-grid scale processes is required. Several studies have shown that this approach entails major uncertainties for precipitation processes, which raises concerns about the models' ability to represent precipitation statistics and associated feedback processes, as well as their sensitivities to large-scale conditions. Further refining the model resolution to the kilometer scale allows these processes to be represented much closer to first principles and thus should yield an improved representation of the water cycle, including the drivers of extreme events. Although cloud-resolving simulations are very useful tools for climate simulations and numerical weather prediction, their high horizontal resolution, and consequently the small time steps needed, challenge current supercomputers when modeling large domains and long time scales. Recent innovations in hybrid supercomputers have led to mixed node designs with a conventional CPU and an accelerator such as a graphics processing unit (GPU). GPUs relax the necessity for cache coherency and complex memory hierarchies while offering larger system memory bandwidth. This is highly beneficial for low compute intensity codes such as atmospheric stencil-based models. However, to efficiently exploit these hybrid architectures, climate models need to be ported and/or redesigned. Within the framework of the Swiss High Performance High Productivity Computing initiative (HP2C), a project to port the COSMO model to hybrid architectures has recently come to an end. The product of these efforts is a version of COSMO with improved performance on traditional x86-based clusters as well as on hybrid architectures with GPUs. We present our redesign and porting approach as well as our experience and lessons learned. Furthermore, we discuss relevant performance benchmarks obtained on the new hybrid Cray XC30 system "Piz Daint" installed at the Swiss National Supercomputing Centre (CSCS), both in terms of time-to-solution and energy consumption. We will demonstrate a first set of short cloud-resolving climate simulations at the European scale using the GPU-enabled COSMO prototype and elaborate on our future plans for exploiting this new model capability.
Multi-scale comparison of source parameter estimation using empirical Green's function approach
NASA Astrophysics Data System (ADS)
Chen, X.; Cheng, Y.
2015-12-01
Analysis of earthquake source parameters requires correction for path effects, site responses, and instrument responses. The empirical Green's function (EGF) method is one of the most effective ways of removing path effects and station responses, by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high quality estimates for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which use stacking to obtain systematic source parameter estimates for a large number of events at the same time. This allows us to examine large quantities of events systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed in this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also introduces biases. Here, using two regionally focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, I compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods, across completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimations and the associated problems.
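As a rough illustration of the spectral-ratio step underlying EGF analysis, here is a minimal numpy sketch under stated assumptions (two waveforms of the same phase recorded at the same station, common sampling interval dt); the helper names and the omega-square ratio parameterization (brune_ratio, n_exp) are illustrative, and the actual study may use different models and fitting procedures.

```python
import numpy as np

def spectral_ratio(main_wave, egf_wave, dt):
    """Amplitude spectral ratio between a target event and a nearby smaller
    (empirical Green's function) event recorded at the same station.
    Path and site terms common to both records cancel in the ratio."""
    n = max(len(main_wave), len(egf_wave))
    freqs = np.fft.rfftfreq(n, d=dt)
    ratio = np.abs(np.fft.rfft(main_wave, n)) / (np.abs(np.fft.rfft(egf_wave, n)) + 1e-12)
    return freqs, ratio

def brune_ratio(f, moment_ratio, fc1, fc2, n_exp=2.0):
    """Theoretical ratio of two Brune-type omega-square source spectra with
    corner frequencies fc1 (larger event) and fc2 (smaller event); this is the
    model curve one would fit to the observed ratio (e.g. with
    scipy.optimize.curve_fit)."""
    return moment_ratio * (1 + (f / fc2) ** n_exp) / (1 + (f / fc1) ** n_exp)
```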
3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study.
Dolz, Jose; Desrosiers, Christian; Ben Ayed, Ismail
2018-04-15
This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully CNNs. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate a state-of-the-art performance on the ISBR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data. Copyright © 2017 Elsevier Inc. All rights reserved.
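For readers unfamiliar with the small-kernel idea, the sketch below shows how stacked 3×3×3 convolutions yield a deep, fully convolutional 3D network with dense voxel-wise output. It is a hedged, minimal PyTorch sketch, not the authors' architecture; layer counts, widths, and the number of classes are placeholders.

```python
import torch
import torch.nn as nn

class Small3DSegNet(nn.Module):
    """Minimal fully convolutional 3D network using small 3x3x3 kernels so that
    depth (and hence receptive field) is gained without large memory cost."""
    def __init__(self, in_channels=1, n_classes=15, width=32, depth=4):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(depth):
            layers += [nn.Conv3d(c, width, kernel_size=3, padding=1),
                       nn.BatchNorm3d(width),
                       nn.ReLU(inplace=True)]
            c = width
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Conv3d(c, n_classes, kernel_size=1)  # dense voxel-wise prediction

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: dense inference on a 3D sub-volume (batch, channel, D, H, W)
net = Small3DSegNet()
logits = net(torch.randn(1, 1, 32, 32, 32))  # -> (1, n_classes, 32, 32, 32)
```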
Imaging the distribution of transient viscosity after the 2016 Mw 7.1 Kumamoto earthquake
NASA Astrophysics Data System (ADS)
Moore, James D. P.; Yu, Hang; Tang, Chi-Hsien; Wang, Teng; Barbot, Sylvain; Peng, Dongju; Masuti, Sagar; Dauwels, Justin; Hsu, Ya-Ju; Lambert, Valère; Nanjundiah, Priyamvada; Wei, Shengji; Lindsey, Eric; Feng, Lujia; Shibazaki, Bunichiro
2017-04-01
The deformation of mantle and crustal rocks in response to stress plays a crucial role in the distribution of seismic and volcanic hazards, controlling tectonic processes ranging from continental drift to earthquake triggering. However, the spatial variation of these dynamic properties is poorly understood as they are difficult to measure. We exploited the large stress perturbation incurred by the 2016 earthquake sequence in Kumamoto, Japan, to directly image localized and distributed deformation. The earthquakes illuminated distinct regions of low effective viscosity in the lower crust, notably beneath the Mount Aso and Mount Kuju volcanoes, surrounded by larger-scale variations of viscosity across the back-arc. This study demonstrates a new potential for geodesy to directly probe rock rheology in situ across many spatial and temporal scales.
General Entanglement Scaling Laws from Time Evolution
NASA Astrophysics Data System (ADS)
Eisert, Jens; Osborne, Tobias J.
2006-10-01
We establish a general scaling law for the entanglement of a large class of ground states and dynamically evolving states of quantum spin chains: we show that the geometric entropy of a distinguished block saturates, and hence follows an entanglement-boundary law. These results apply to any ground state of a gapped model resulting from dynamics generated by a local Hamiltonian, as well as, dually, to states that are generated via a sudden quench of an interaction as recently studied in the case of dynamics of quantum phase transitions. We achieve these results by exploiting ideas from quantum information theory and tools provided by Lieb-Robinson bounds. We also show that there exist noncritical fermionic systems and equivalent spin chains with rapidly decaying interactions violating this entanglement-boundary law. Implications for the classical simulatability are outlined.
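For orientation, the contrast at stake can be written schematically as follows; these are the standard boundary-law and critical-chain scalings for the entropy of a block of L contiguous spins, with constants that are illustrative rather than taken from this work.

```latex
% Schematic entanglement-boundary ("area") law for a block of L contiguous
% spins, contrasted with the logarithmic growth familiar from critical
% (conformally invariant) chains; c_0 is a constant and c the central charge.
\begin{align*}
  \text{boundary law:} \quad & S(\rho_L) \le c_0 \quad \text{for all } L, \\
  \text{critical chain:} \quad & S(\rho_L) \sim \tfrac{c}{3}\,\log L .
\end{align*}
```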
Fractal Tempo Fluctuation and Pulse Prediction
Rankin, Summer K.; Large, Edward W.; Fink, Philip W.
2010-01-01
We investigated people's ability to adapt to the fluctuating tempi of music performance. In Experiment 1, four pieces from different musical styles were chosen, and performances were recorded from a skilled pianist who was instructed to play with natural expression. Spectral and rescaled range analyses on interbeat interval time-series revealed long-range (1/f type) serial correlations and fractal scaling in each piece. Stimuli for Experiment 2 included two of the performances from Experiment 1, with mechanical versions serving as controls. Participants tapped the beat at ¼- and ⅛-note metrical levels, successfully adapting to large tempo fluctuations in both performances. Participants predicted the structured tempo fluctuations, with superior performance at the ¼-note level. Thus, listeners may exploit long-range correlations and fractal scaling to predict tempo changes in music. PMID:25190901
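A generic way to quantify 1/f-type structure in an interbeat-interval series is a log-log fit to its power spectrum; the numpy sketch below illustrates this under simple assumptions (evenly indexed beats, a single global fit) and is not the authors' exact spectral or rescaled-range procedure.

```python
import numpy as np

def spectral_exponent(ibi, dt=1.0):
    """Estimate the 1/f exponent beta of an interbeat-interval series by
    fitting a line to the log-log power spectrum: S(f) ~ 1/f^beta."""
    x = np.asarray(ibi, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    keep = freqs > 0                       # drop the zero-frequency bin
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)
    return -slope                          # beta near 1 indicates 1/f-type correlations
```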
A QoS adaptive multimedia transport system: design, implementation and experiences
NASA Astrophysics Data System (ADS)
Campbell, Andrew; Coulson, Geoff
1997-03-01
The long-awaited 'new environment' of high speed broadband networks and multimedia applications is fast becoming a reality. However, few systems in existence today, whether they be large scale pilots or small scale test-beds in research laboratories, offer a fully integrated and flexible environment where multimedia applications can maximally exploit the quality of service (QoS) capabilities of supporting networks and end-systems. In this paper we describe the implementation of an adaptive transport system that incorporates a QoS oriented API and a range of mechanisms to assist applications in exploiting QoS and adapting to fluctuations in QoS. The system, which is an instantiation of the Lancaster QoS Architecture, is implemented in a multi-ATM-switch network environment with Linux-based PC end systems and continuous media file servers. A performance evaluation of the system configured to support a video-on-demand application scenario is presented and discussed. Emphasis is placed on novel features of the system and on their integration into a complete prototype. The most prominent novelty of our design is a 'distributed QoS adaptation' scheme which allows applications to delegate to the system responsibility for augmenting and reducing the perceptual quality of video and audio flows when resource availability increases or decreases.
Mass production of bulk artificial nacre with excellent mechanical properties.
Gao, Huai-Ling; Chen, Si-Ming; Mao, Li-Bo; Song, Zhao-Qiang; Yao, Hong-Bin; Cölfen, Helmut; Luo, Xi-Sheng; Zhang, Fu; Pan, Zhao; Meng, Yu-Feng; Ni, Yong; Yu, Shu-Hong
2017-08-18
Various methods have been exploited to replicate nacre features into artificial structural materials with impressive structural and mechanical similarity. However, it is still very challenging to produce nacre-mimetics in three-dimensional bulk form, especially for further scale-up. Herein, we demonstrate that large-sized, three-dimensional bulk artificial nacre with comprehensive mimicry of the hierarchical structures and the toughening mechanisms of natural nacre can be facilely fabricated via a bottom-up assembly process based on laminating pre-fabricated two-dimensional nacre-mimetic films. By optimizing the hierarchical architecture from the molecular level to the macroscopic level, the mechanical performance of the artificial nacre is superior to that of natural nacre and many engineering materials. This bottom-up strategy has no size restriction or fundamental barrier for further scale-up, and can be easily extended to other material systems, opening an avenue for mass production of high-performance bulk nacre-mimetic structural materials in an efficient and cost-effective way for practical applications. Artificial materials that replicate the mechanical properties of nacre represent important structural materials, but are difficult to produce in bulk. Here, the authors exploit the bottom-up assembly of 2D nacre-mimetic films to fabricate 3D bulk artificial nacre with an optimized architecture and excellent mechanical properties.
Morphological evidence for discrete stocks of yellow perch in Lake Erie
Kocovsky, Patrick M.; Knight, Carey T.
2012-01-01
Identification and management of unique stocks of exploited fish species are high-priority management goals in the Laurentian Great Lakes. We analyzed whole-body morphometrics of 1430 yellow perch Perca flavescens captured during 2007–2009 from seven known spawning areas in Lake Erie to determine if morphometrics vary among sites and management units to assist in identification of spawning stocks of this heavily exploited species. Truss-based morphometrics (n = 21 measurements) were analyzed using principal component analysis followed by ANOVA of the first three principal components to determine whether yellow perch from the several sampling sites varied morphometrically. Duncan's multiple range test was used to determine which sites differed from one another to test whether morphometrics varied at scales finer than management unit. Morphometrics varied significantly among sites and annually, but differences among sites were much greater. Sites within the same management unit typically differed significantly from one another, indicating morphometric variation at a scale finer than management unit. These results are largely congruent with recently-published studies on genetic variation of yellow perch from many of the same sampling sites. Thus, our results provide additional evidence that there are discrete stocks of yellow perch in Lake Erie and that management units likely comprise multiple stocks.
An engineering closure for heavily under-resolved coarse-grid CFD in large applications
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Yu, Fujiang; Jordan, Thomas
2016-11-01
Even though high performance computation allows a very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly resolve only the large scales, thus speeding up simulation time. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns, as often observed in heat exchangers or porous structures. It is proposed to use the inviscid Euler equations on a very coarse numerical mesh. This coarse mesh need not conform to the geometry in all details. To reinstate the physics of the smaller scales, inexpensive subgrid models are employed. Subgrid models are constructed systematically by analyzing well-resolved generic representative simulations. By varying the flow conditions in these simulations, correlations are obtained. These provide, for each individual coarse mesh cell, a volume force vector and a volume porosity; moreover, surface porosities are derived for all vertices. CGCFD is related to the immersed boundary method, as both exploit volume forces and non-body-conformal meshes. Yet CGCFD differs with respect to the coarser mesh and the use of the Euler equations. We describe the methodology based on a simple test case and the application of the method to a 127-pin wire-wrap fuel bundle.
EvoluCode: Evolutionary Barcodes as a Unifying Framework for Multilevel Evolutionary Data.
Linard, Benjamin; Nguyen, Ngoc Hoan; Prosdocimi, Francisco; Poch, Olivier; Thompson, Julie D
2012-01-01
Evolutionary systems biology aims to uncover the general trends and principles governing the evolution of biological networks. An essential part of this process is the reconstruction and analysis of the evolutionary histories of these complex, dynamic networks. Unfortunately, the methodologies for representing and exploiting such complex evolutionary histories in large scale studies are currently limited. Here, we propose a new formalism, called EvoluCode (Evolutionary barCode), which allows the integration of different evolutionary parameters (e.g., sequence conservation, orthology, synteny, etc.) in a unifying format and facilitates the multilevel analysis and visualization of complex evolutionary histories at the genome scale. The advantages of the approach are demonstrated by constructing barcodes representing the evolution of the complete human proteome. Two large-scale studies are then described: (i) the mapping and visualization of the barcodes on the human chromosomes and (ii) automatic clustering of the barcodes to highlight protein subsets sharing similar evolutionary histories and their functional analysis. The methodologies developed here open the way to the efficient application of other data mining and knowledge extraction techniques in evolutionary systems biology studies. A database containing all EvoluCode data is available at: http://lbgi.igbmc.fr/barcodes.
Using Unplanned Fires to Help Suppressing Future Large Fires in Mediterranean Forests
Regos, Adrián; Aquilué, Núria; Retana, Javier; De Cáceres, Miquel; Brotons, Lluís
2014-01-01
Despite the huge resources invested in fire suppression, the impact of wildfires has considerably increased across the Mediterranean region since the second half of the 20th century. Modulating fire suppression efforts in mild weather conditions is an appealing but hotly-debated strategy to use unplanned fires and associated fuel reduction to create opportunities for suppression of large fires in future adverse weather conditions. Using a spatially-explicit fire–succession model developed for Catalonia (Spain), we assessed this opportunistic policy by using two fire suppression strategies that reproduce how firefighters in extreme weather conditions exploit previous fire scars as firefighting opportunities. We designed scenarios by combining different levels of fire suppression efficiency and climatic severity for a 50-year period (2000–2050). An opportunistic fire suppression policy induced large-scale changes in fire regimes and decreased the area burnt under extreme climate conditions, but only accounted for up to 18–22% of the area to be burnt in reference scenarios. The area suppressed in adverse years tended to increase in scenarios with increasing amounts of area burnt during years dominated by mild weather. Climate change had counterintuitive effects on opportunistic fire suppression strategies. Climate warming increased the incidence of large fires under uncontrolled conditions but also indirectly increased opportunities for enhanced fire suppression. Therefore, to shift fire suppression opportunities from adverse to mild years, we would require a disproportionately large amount of area burnt in mild years. We conclude that the strategic planning of fire suppression resources has the potential to become an important cost-effective fuel-reduction strategy at large spatial scale. We do however suggest that this strategy should probably be accompanied by other fuel-reduction treatments applied at broad scales if large-scale changes in fire regimes are to be achieved, especially in the wider context of climate change. PMID:24727853
NASA Astrophysics Data System (ADS)
Chan, YinThai
2016-03-01
Colloidal semiconductor nanocrystals are ideal fluorophores for clinical diagnostics, therapeutics, and highly sensitive biochip applications due to their high photostability, size-tunable color of emission and flexible surface chemistry. The relatively recent development of core-seeded semiconductor nanorods showed that the presence of a rod-like shell can confer even more advantageous physicochemical properties than their spherical counterparts, such as large multi-photon absorption cross-sections and facet-specific chemistry that can be exploited to deposit secondary nanoparticles. It may be envisaged that these highly fluorescent nanorods can be integrated with large scale integrated (LSI) microfluidic systems that allow miniaturization and integration of multiple biochemical processes in a single device at the nanoliter scale, resulting in a highly sensitive and automated detection platform. In this talk, I will describe a LSI microfluidic device that integrates RNA extraction, reverse transcription to cDNA, amplification and target pull-down to detect histidine decarboxylase (HDC) gene directly from human white blood cells samples. When anisotropic colloidal semiconductor nanorods (NRs) were used as the fluorescent readout, the detection limit was found to be 0.4 ng of total RNA, which was much lower than that obtained using spherical quantum dots (QDs) or organic dyes. This was attributed to the large action cross-section of NRs and their high probability of target capture in a pull-down detection scheme. The combination of large scale integrated microfluidics with highly fluorescent semiconductor NRs may find widespread utility in point-of-care devices and multi-target diagnostics.
Ryan, Aoife A; Senge, Mathias O
2015-04-01
As the world strives to create a more sustainable environment, green chemistry has come to the fore in attempts to minimize the use of hazardous materials and shift the focus towards renewable sources. Chlorophylls, being the definitive "green" chemicals, are rarely used for such purposes, and this article focuses on the exploitation of this natural resource and the current applications of chlorophylls and their derivatives, whilst also providing a perspective on the commercial potential of large-scale isolation of these pigments from biomass for energy and medicinal applications.
Programming Probabilistic Structural Analysis for Parallel Processing Computer
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.
1991-01-01
The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient and hence practical for the design environment by achieving large scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speeds within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.
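For reference, the sketch below shows the standard (deterministic) preconditioned conjugate gradient iteration on which such solvers build; it is a minimal numpy version, not the stochastic variant developed in the paper, and the Jacobi preconditioner in the usage note is only an example.

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """Standard preconditioned conjugate gradient solver for A x = b with a
    symmetric positive-definite A and preconditioner action M_inv(r) ~ M^-1 r."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example usage with a Jacobi (diagonal) preconditioner:
# x = preconditioned_cg(A, b, lambda r: r / np.diag(A))
```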
On Taylor-Series Approximations of Residual Stress
NASA Technical Reports Server (NTRS)
Pruett, C. David
1999-01-01
Although subgrid-scale models of similarity type are insufficiently dissipative for practical applications to large-eddy simulation, in recently published a priori analyses, they perform remarkably well in the sense of correlating highly against exact residual stresses. Here, Taylor-series expansions of residual stress are exploited to explain the observed behavior and "success" of similarity models. Until very recently, little attention has been given to issues related to the convergence of such expansions. Here, we re-express the convergence criterion of Vasilyev [J. Comput. Phys., 146 (1998)] in terms of the transfer function and the wavenumber cutoff of the grid filter.
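As background for the expansions discussed above, the leading term of a Taylor expansion of the residual stress for a smooth, symmetric filter of width Δ takes the familiar gradient (Clark-type) form shown below; the coefficient Δ²/12 corresponds to Gaussian or top-hat filters and is given here for illustration, not necessarily in the exact form used in the report.

```latex
% Residual (subgrid-scale) stress and the leading term of its Taylor expansion
% for a smooth, symmetric filter of width \Delta (gradient/Clark-type model):
\begin{align*}
  \tau_{ij} \;=\; \overline{u_i u_j} - \bar{u}_i \bar{u}_j
  \;\approx\; \frac{\Delta^{2}}{12}\,
     \frac{\partial \bar{u}_i}{\partial x_k}\,
     \frac{\partial \bar{u}_j}{\partial x_k}
  \;+\; O(\Delta^{4}).
\end{align*}
```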
Hierarchical ensemble of global and local classifiers for face recognition.
Su, Yu; Shan, Shiguang; Chen, Xilin; Gao, Wen
2009-08-01
In the literature of psychophysics and neurophysiology, many studies have shown that both global and local features are crucial for face representation and recognition. This paper proposes a novel face recognition method which exploits both global and local discriminative features. In this method, global features are extracted from the whole face images by keeping the low-frequency coefficients of Fourier transform, which we believe encodes the holistic facial information, such as facial contour. For local feature extraction, Gabor wavelets are exploited considering their biological relevance. After that, Fisher's linear discriminant (FLD) is separately applied to the global Fourier features and each local patch of Gabor features. Thus, multiple FLD classifiers are obtained, each embodying different facial evidences for face recognition. Finally, all these classifiers are combined to form a hierarchical ensemble classifier. We evaluate the proposed method using two large-scale face databases: FERET and FRGC version 2.0. Experiments show that the results of our method are impressively better than the best known results with the same evaluation protocol.
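The global-plus-local ensemble idea can be sketched in a few lines: low-frequency Fourier coefficients as the global descriptor, Gabor responses on a patch grid as local descriptors, one linear discriminant per feature set, and fusion of the resulting classifiers at test time. The Python sketch below uses numpy, scikit-image and scikit-learn and is only a hedged approximation; the patch grid, Gabor parameters and training interface are placeholders rather than the authors' configuration.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def global_features(img, keep=12):
    """Global descriptor: low-frequency coefficients of the 2D Fourier transform."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    c = np.array(spec.shape) // 2
    block = spec[c[0]-keep:c[0]+keep, c[1]-keep:c[1]+keep]
    return np.concatenate([block.real.ravel(), block.imag.ravel()])

def local_features(img, grid=4, frequency=0.25):
    """Local descriptors: Gabor magnitude responses on a grid of patches."""
    real, imag = gabor(img, frequency=frequency)
    mag = np.hypot(real, imag)
    h, w = mag.shape
    return [mag[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].ravel()
            for i in range(grid) for j in range(grid)]

def train_ensemble(images, labels):
    """One LDA classifier on the global features and one per local patch;
    at test time the per-classifier scores would be combined (e.g. summed)."""
    g = [global_features(im) for im in images]
    clf_global = LinearDiscriminantAnalysis().fit(g, labels)
    per_patch = list(zip(*[local_features(im) for im in images]))  # patch-major
    clf_local = [LinearDiscriminantAnalysis().fit(p, labels) for p in per_patch]
    return clf_global, clf_local
```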
Ferretti, Francesco; Osio, Giacomo C.; Jenkins, Chris J.; Rosenberg, Andrew A.; Lotze, Heike K.
2013-01-01
The abundance of sharks and rays can decline considerably with fishing. Community changes, however, are more complex because of species interactions and variable vulnerability and exposure to fishing. We evaluated long-term changes in the elasmobranch community of the Adriatic Sea, a heavily exploited Mediterranean basin where top predators have been strongly depleted historically and where fishing developed unevenly between the western and eastern sides. Combining and standardizing catch data from five trawl surveys spanning 1948–2005, we estimated abundance trends and explained community changes using life histories, fish-market and effort data, and historical information. We identified a highly depleted elasmobranch community. Since 1948, catch rates have declined by >94% and 11 species have ceased to be detected. The exploitation history and spatial gradients in fishing pressure explained most patterns in abundance and diversity, including the absence of strong compensatory increases. Ecological corridors and large-scale protected areas emerged as potential management options for elasmobranch conservation. PMID:23308344
Levin, Michael; Pezzulo, Giovanni; Finkelstein, Joshua M
2017-06-21
Living systems exhibit remarkable abilities to self-assemble, regenerate, and remodel complex shapes. How cellular networks construct and repair specific anatomical outcomes is an open question at the heart of the next-generation science of bioengineering. Developmental bioelectricity is an exciting emerging discipline that exploits endogenous bioelectric signaling among many cell types to regulate pattern formation. We provide a brief overview of this field, review recent data in which bioelectricity is used to control patterning in a range of model systems, and describe the molecular tools being used to probe the role of bioelectrics in the dynamic control of complex anatomy. We suggest that quantitative strategies recently developed to infer semantic content and information processing from ionic activity in the brain might provide important clues to cracking the bioelectric code. Gaining control of the mechanisms by which large-scale shape is regulated in vivo will drive transformative advances in bioengineering, regenerative medicine, and synthetic morphology, and could be used to therapeutically address birth defects, traumatic injury, and cancer.
Evidence of trapline foraging in honeybees.
Buatois, Alexis; Lihoreau, Mathieu
2016-08-15
Central-place foragers exploiting floral resources often use multi-destination routes (traplines) to maximise their foraging efficiency. Recent studies on bumblebees have shown how solitary foragers can learn traplines, minimising travel costs between multiple replenishing feeding locations. Here we demonstrate a similar routing strategy in the honeybee (Apis mellifera), a major pollinator known to recruit nestmates to discovered food resources. Individual honeybees trained to collect sucrose solution from four artificial flowers arranged within 10 m of the hive location developed repeatable visitation sequences both in the laboratory and in the field. A 10-fold increase in between-flower distances considerably intensified this routing behaviour, with bees establishing more stable and more efficient routes at larger spatial scales. In these advanced social insects, trapline foraging may complement cooperative foraging for exploiting food resources near the hive (where dance recruitment is not used) or when resources are not large enough to sustain multiple foragers at once. © 2016. Published by The Company of Biologists Ltd.
GPU accelerated particle visualization with Splotch
NASA Astrophysics Data System (ADS)
Rivi, M.; Gheller, C.; Dykes, T.; Krokos, M.; Dolag, K.
2014-07-01
Splotch is a rendering algorithm for exploration and visual discovery in particle-based datasets coming from astronomical observations or numerical simulations. The strengths of the approach are the production of high quality imagery and support for very large-scale datasets through an effective mix of the OpenMP and MPI parallel programming paradigms. This article reports our experiences in re-designing Splotch to exploit emerging HPC architectures, which are nowadays increasingly populated with GPUs. A performance model is introduced to guide our re-factoring of Splotch. A number of parallelization issues are discussed, in particular relating to race conditions and workload balancing, towards achieving optimal performance. Our implementation was accomplished using the CUDA programming paradigm. Our strategy is founded on novel schemes achieving optimized data organization and classification of particles. We deploy a reference cosmological simulation to present performance results on acceleration gains and scalability. We finally outline our vision for future work, including possibilities for further optimizations and exploitation of hybrid systems and emerging accelerators.
What is driving range expansion in a common bat? Hints from thermoregulation and habitat selection.
Ancillotto, Leonardo; Budinski, Ivana; Nardone, Valentina; Di Salvo, Ivy; Corte, Martina Della; Bosso, Luciano; Conti, Paola; Russo, Danilo
2018-06-02
Human-induced alterations often lead to changes in the geographical range of plants and animals. While modelling exercises may contribute to understanding such dynamics at large spatial scales, they rarely offer insights into the mechanisms that prompt the process at a local scale. Savi's pipistrelle (Hypsugo savii) is a vespertilionid bat widespread throughout the Mediterranean region. The species' recent range expansion towards northeastern Europe is thought to be induced by urbanization, yet no study has actually tested this hypothesis, and climate change is a potential alternative driver. In this radio-telemetry study, set in the Vesuvius National Park (Campania region, Southern Italy), we provide insights into the species' thermal physiology and foraging ecology and investigate their relationships with potential large-scale responses to climate and land-use changes. Specifically, we test whether H. savii i) exploits urbanisation through a selection of urban areas for roosting and foraging, and ii) tolerates heatwaves (a proxy for thermophily) through a plastic use of thermoregulation. Tolerance to heatwaves would be consistent with the observation that the species' geographic range is not shifting but expanding northwards. Tracked bats roosted mainly in buildings but avoided urban habitats while foraging, actively selecting non-intensive farmland and natural wooded areas. Hypsugo savii showed tolerance to heat, reaching the highest body temperature ever recorded for a free-ranging bat (46.5 °C) and performing long periods of overheating. We conclude that H. savii is not a strictly synurbic species because it exploits urban areas mainly for roosting and avoids them for foraging: this questions the role of synurbization as a range expansion driver. On the other hand, the species' extreme heat tolerance and plastic thermoregulatory behaviour represent winning traits to cope with the heatwaves typical of climate change-related weather fluctuations. Copyright © 2018 Elsevier B.V. All rights reserved.
Malucelli, Emil; Procopio, Alessandra; Fratini, Michela; Gianoncelli, Alessandra; Notargiacomo, Andrea; Merolle, Lucia; Sargenti, Azzurra; Castiglioni, Sara; Cappadone, Concettina; Farruggia, Giovanna; Lombardo, Marco; Lagomarsino, Stefano; Maier, Jeanette A; Iotti, Stefano
2018-01-01
The quantification of elemental concentrations in cells is usually performed by analytical assays on large populations, missing peculiar but important rare cells. The present article aims to compare elemental quantification in single cells and in cell populations for three different cell types, using a new approach for single-cell elemental analysis performed at sub-micrometer scale that combines X-ray fluorescence microscopy and atomic force microscopy. Attention is focused on the light element Mg, exploiting the opportunity to compare single-cell quantification with cell-population analysis carried out by a highly Mg-selective fluorescent chemosensor. The results show that the single-cell analysis reveals the same Mg differences found in large populations of the different cell strains studied. However, in one of the cell strains, single-cell analysis reveals two cells with an exceptionally high intracellular Mg content compared with the other cells of the same strain. The single-cell analysis allows mapping of Mg and other light elements in whole cells at sub-micrometer scale. A detailed intensity correlation analysis on the two cells with the highest Mg content reveals that Mg subcellular localization correlates with oxygen in a different fashion with respect to the other sister cells of the same strain. Graphical abstract: single cells or large population analysis, this is the question!
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spagliardi, Fabio
Liquid argon Time Projection Chambers (LArTPCs) are becoming widely used as neutrino detectors because of their image-like event reconstruction, which enables precision neutrino measurements. They primarily use ionisation charge to reconstruct neutrino events. It has been shown, however, that the scintillation light emitted by liquid argon could be exploited to improve their performance. As the neutrino measurements planned in the near future require large-scale experiments, their construction presents challenges in terms of both charge and light collection. In this dissertation we present solutions developed to improve the performance in both aspects of these detectors. We present a new wire tensioning measurement method that allows a remote measurement of the tension of the large number of wires that constitute the TPC anode. We also discuss the development and installation of WLS-compound covered foils for the SBND neutrino detector at Fermilab, a technique proposed to augment light collection in LArTPCs. This included preparing a SBND-like mesh cathode and testing it in Run III of LArIAT, a test beam detector also located at Fermilab. Finally, we present a study aimed at understanding late scintillation light emitted by recombining positive argon ions using LArIAT data, which could affect large scale surface detectors.
Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.
Slażyński, Leszek; Bohte, Sander
2012-01-01
The arrival of graphics processing (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms to fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state for each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures to the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate in better-than-realtime plausible spiking neural networks of up to 50 000 neurons, processing over 35 million spiking events per second.
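The additive-update idea can be illustrated with a small vectorized sketch: each neuron's membrane potential is a sum of filtered input spikes, so all neurons can be updated independently (here with numpy; on a GPU the same pattern maps to one thread per neuron). Kernel shape, sizes and the omission of reset/refractoriness are simplifications, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_inputs, kernel_len = 1000, 200, 64
weights = rng.normal(0, 0.1, size=(n_neurons, n_inputs))
kernel = np.exp(-np.arange(kernel_len) / 10.0)       # PSP filter (exponential decay)
spike_history = np.zeros((kernel_len, n_inputs))     # ring buffer of recent input spikes
threshold = 1.0

def step(input_spikes):
    """One simulation step: every neuron's potential is an additive sum of
    filtered input spikes, so all rows can be computed independently
    (vectorized here, trivially parallel on a GPU)."""
    global spike_history
    spike_history = np.roll(spike_history, 1, axis=0)
    spike_history[0] = input_spikes
    filtered = kernel @ spike_history                 # (n_inputs,) filtered drive
    potentials = weights @ filtered                   # (n_neurons,) membrane potentials
    return potentials >= threshold                    # boolean spike vector

out = step(rng.random(n_inputs) < 0.05)
```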
NASA Astrophysics Data System (ADS)
Shrestha, K.; Chou, M.; Graf, D.; Yang, H. D.; Lorenz, B.; Chu, C. W.
2017-05-01
Weak antilocalization (WAL) effects in Bi2Te3 single crystals have been investigated at high and low bulk charge-carrier concentrations. At low charge-carrier density the WAL curves scale with the normal component of the magnetic field, demonstrating the dominance of topological surface states in magnetoconductivity. At high charge-carrier density the WAL curves scale with neither the applied field nor its normal component, implying a mixture of bulk and surface conduction. WAL due to topological surface states shows no dependence on the nature (electrons or holes) of the bulk charge carriers. The observations of an extremely large nonsaturating magnetoresistance and ultrahigh mobility in the samples with lower carrier density further support the presence of surface states. The physical parameters characterizing the WAL effects are calculated using the Hikami-Larkin-Nagaoka formula. At high charge-carrier concentrations, there is a greater number of conduction channels and a decrease in the phase coherence length compared to low charge-carrier concentrations. The extremely large magnetoresistance and high mobility of topological insulators have great technological value and can be exploited in magnetoelectric sensors and memory devices.
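For reference, the simplified Hikami-Larkin-Nagaoka expression commonly used to fit weak-antilocalization magnetoconductivity in topological insulators is reproduced below; sign and prefactor conventions vary between papers, so this is indicative rather than the authors' exact fitting function.

```latex
% Simplified Hikami-Larkin-Nagaoka expression for the 2D weak-antilocalization
% magnetoconductivity correction, with prefactor \alpha, digamma function \psi,
% and dephasing field B_\phi set by the phase coherence length l_\phi:
\begin{align*}
  \Delta\sigma(B) &= -\,\alpha\,\frac{e^{2}}{2\pi^{2}\hbar}
      \left[ \ln\!\left(\frac{B_\phi}{B}\right)
           - \psi\!\left(\frac{1}{2} + \frac{B_\phi}{B}\right) \right],
  \qquad
  B_\phi = \frac{\hbar}{4 e\, l_\phi^{2}} .
\end{align*}
```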
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Shengwei; Yu Jiaguo
Bi₂WO₆ hierarchical multilayered flower-like assemblies are fabricated on a large scale by a simple hydrothermal method in the presence of polymeric poly(sodium 4-styrenesulfonate). Such 3D Bi₂WO₆ assemblies are constructed from orderly arranged 2D layers, which are further composed of a large number of interconnected nanoplates with a mean side length of ca. 50 nm. The bimodal mesopores associated with such hierarchical assembly exhibit a peak mesopore size of ca. 4 nm for the voids within a layer and a peak mesopore size of ca. 40 nm corresponding to the interspaces between stacked layers, respectively. The formation process is discussed on the basis of the results of time-dependent experiments, which support a novel 'coupled cooperative assembly and localized ripening' formation mechanism. More interestingly, we have noticed that the collective effect related to such hierarchical assembly induces a significantly enhanced optical absorbance in the UV-visible region. This work may shed some light on the design of complex architectures and the exploitation of their potential applications.
NASA Astrophysics Data System (ADS)
Korenaga, Jun
2011-05-01
The seismic structure of large igneous provinces provides unique constraints on the nature of their parental mantle, allowing us to investigate past mantle dynamics from present crustal structure. To exploit this crust-mantle connection, however, it is a prerequisite to quantify the uncertainty of a crustal velocity model, as such a model could suffer from considerable velocity-depth ambiguity. In this contribution, a practical strategy is suggested for estimating the model uncertainty by explicitly exploring the degree of velocity-depth ambiguity in the model space. In addition, wide-angle seismic data collected over the Ontong Java Plateau are revisited to provide a worked example of the new approach. My analysis indicates that the crustal structure of this gigantic plateau is difficult to reconcile with the melting of a pyrolitic mantle, pointing to the possibility of large-scale compositional heterogeneity in the convecting mantle.
Protection of surface states in topological nanoparticles
NASA Astrophysics Data System (ADS)
Siroki, Gleb; Haynes, Peter D.; Lee, Derek K. K.; Giannini, Vincenzo
2017-07-01
Topological insulators host protected electronic states at their surface. These states show little sensitivity to disorder. For miniaturization one wants to exploit their robustness at the smallest sizes possible. This is also beneficial for optical applications and catalysis, which favor large surface-to-volume ratios. However, it is not known whether discrete states in particles share the protection of their continuous counterparts in large crystals. Here we study the protection of the states hosted by topological insulator nanoparticles. Using both analytical and tight-binding simulations, we show that the states benefit from the same level of protection as those on a planar surface. The results hold for many shapes and sustain surface roughness which may be useful in photonics, spectroscopy, and chemistry. They complement past studies of large crystals—at the other end of possible length scales. The protection of the nanoparticles suggests that samples of all intermediate sizes also possess protected states.
Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howison, Mark; Bethel, E. Wes; Childs, Hank
2012-01-01
With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
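The hybrid pattern itself (distributed-memory ranks across nodes, shared-memory workers within a node) can be sketched generically; the Python example below uses mpi4py plus a thread pool for a simple reduction and is only a schematic stand-in for the paper's MPI-plus-threads ray-casting and compositing code.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from mpi4py import MPI  # distributed-memory layer (e.g. one rank per node)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns one slab of the global dataset (synthesized locally here).
local_data = np.random.default_rng(rank).random((8, 1_000_000))

def partial_sum(chunk):
    """Work done by one thread within a node (shared-memory parallelism)."""
    return chunk.sum()

# Shared-memory layer: threads process the rank-local chunks concurrently.
with ThreadPoolExecutor(max_workers=8) as pool:
    local_total = sum(pool.map(partial_sum, local_data))

# Distributed-memory layer: combine the per-rank results across all nodes.
global_total = comm.reduce(local_total, op=MPI.SUM, root=0)
if rank == 0:
    print("global sum:", global_total)
```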
Monitoring of Sea Ice Dynamic by Means of ERS-Envisat Tandem Cross-Interferometry
NASA Astrophysics Data System (ADS)
Pasquali, Paolo; Cantone, Alessio; Barbieri, Massimo; Engdahl, Marcus
2010-03-01
The interest in the monitoring of sea ice masses has increased greatly over the past decades for a variety of reasons. These include: navigation in northern-latitude waters; transportation of petroleum; exploitation of mineral deposits in the Arctic; and the use of icebergs as a source of fresh water. The availability of ERS-Envisat 28-minute tandem acquisitions from dedicated campaigns, covering large areas at northern latitudes with large geometrical baselines and very short temporal separation, allows the precise estimation of sea ice displacement fields with an accuracy that cannot be obtained at large scale from any other instrument. This article presents different results of sea ice dynamics monitoring over northern Canada obtained within the "ERS-Envisat Tandem Cross-Interferometry Campaigns: CInSAR processing and studies over extended areas" project, from data acquired during the 2008-2009 Tandem campaign.
Chemistry of a low temperature geothermal reservoir: The Triassic sandstone aquifer at Melleray, FR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vuataz, Francois-David; Fouillac, Christian; Detoc, Aylvie
1988-01-01
The Triassic sandstone aquifer offers, on a regional scale, a large potential for low-temperature geothermal exploitation in the Paris Basin. The Na-Cl water in the aquifer has highly variable mineralization (TDS = 4 to 110 g/l) and a wide range of temperature (50° to >100°C). Chemical studies have been carried out on the Melleray site near Orléans, where a single well was producing Na-Cl geothermal water (TDS = 35 g/l) at a wellhead temperature of 72°C to provide heat for greenhouses. The purpose of these studies is to understand the chemical phenomena occurring in the geothermal loop and to determine the treatment of the fluid and the exploitation procedures necessary for proper reinjection conditions to be achieved. During the tests performed after the drilling operations, chemical variations in the fluid were noticed between several producing zones in the aquifer. Daily geochemical monitoring of the fluid was carried out during two periods of differing exploitation conditions, respectively pumping at 148 m³/h and artesian flow at 36 m³/h. Vertical heterogeneities of the aquifer can explain the variations observed at the high flowrate. Filtration experiments revealed that the particle load varies with the discharge rate and that over 95 weight % of the particles are smaller than 1 micrometer. The chemistry of the particles varies greatly, according to their origin as corrosion products from the well casing, particles drawn out of the rock, or minerals newly formed through water-rock reactions. Finally, small-scale oxidation experiments were carried out on the geothermal fluid to observe the behavior of Fe and SiO₂ and to favour particle aggregates for easier filtration or decantation processes.
Fava, Fabio; Zanaroli, Giulio; Vannini, Lucia; Guerzoni, Elisabetta; Bordoni, Alessandra; Viaggi, Davide; Robertson, Jim; Waldron, Keith; Bald, Carlos; Esturo, Aintzane; Talens, Clara; Tueros, Itziar; Cebrián, Marta; Sebők, András; Kuti, Tunde; Broeze, Jan; Macias, Marta; Brendle, Hans-Georg
2013-09-25
By-products generated every year by the European fruit and cereal processing industry currently exceed several million tons. They are disposed of mainly through landfills and thus are largely unexploited sources of several valuable biobased compounds potentially profitable in the formulation of novel food products. The opportunity to design novel strategies to turn them into added-value products and food ingredients via novel and sustainable processes is the main target of the recently EC-funded FP7 project NAMASTE-EU. NAMASTE-EU aims at developing new laboratory-scale protocols and processes for the exploitation of citrus processing by-products and wheat bran surpluses via the production of ingredients useful for the formulation of new beverage and food products. The main results achieved in the first two years of the project include the development and assessment of procedures for the selection, stabilization and physical/biological treatment of citrus and wheat processing by-products, the recovery of several bioactive molecules and ingredients, and the development of procedures for assessing the quality of the obtained ingredients and for their exploitation in the preparation of new food products. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large volumes of climate simulation output that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
Barlow, Jos; Ewers, Robert M; Anderson, Liana; Aragao, Luiz E O C; Baker, Tim R; Boyd, Emily; Feldpausch, Ted R; Gloor, Emanuel; Hall, Anthony; Malhi, Yadvinder; Milliken, William; Mulligan, Mark; Parry, Luke; Pennington, Toby; Peres, Carlos A; Phillips, Oliver L; Roman-Cuesta, Rosa Maria; Tobias, Joseph A; Gardner, Toby A
2011-05-01
Developing high-quality scientific research will be most effective if research communities with diverse skills and interests are able to share information and knowledge, are aware of the major challenges across disciplines, and can exploit economies of scale to provide robust answers and better inform policy. We evaluate opportunities and challenges facing the development of a more interactive research environment by developing an interdisciplinary synthesis of research on a single geographic region. We focus on the Amazon as it is of enormous regional and global environmental importance and faces a highly uncertain future. To take stock of existing knowledge and provide a framework for analysis we present a set of mini-reviews from fourteen different areas of research, encompassing taxonomy, biodiversity, biogeography, vegetation dynamics, landscape ecology, earth-atmosphere interactions, ecosystem processes, fire, deforestation dynamics, hydrology, hunting, conservation planning, livelihoods, and payments for ecosystem services. Each review highlights the current state of knowledge and identifies research priorities, including major challenges and opportunities. We show that while substantial progress is being made across many areas of scientific research, our understanding of specific issues is often dependent on knowledge from other disciplines. Accelerating the acquisition of reliable and contextualized knowledge about the fate of complex pristine and modified ecosystems is partly dependent on our ability to exploit economies of scale in shared resources and technical expertise, recognise and make explicit interconnections and feedbacks among sub-disciplines, increase the temporal and spatial scale of existing studies, and improve the dissemination of scientific findings to policy makers and society at large. Enhancing interaction among research efforts is vital if we are to make the most of limited funds and overcome the challenges posed by addressing large-scale interdisciplinary questions. Bringing together a diverse scientific community with a single geographic focus can help increase awareness of research questions both within and among disciplines, and reveal the opportunities that may exist for advancing acquisition of reliable knowledge. This approach could be useful for a variety of globally important scientific questions. © 2010 The Authors. Biological Reviews © 2010 Cambridge Philosophical Society.
NASA Astrophysics Data System (ADS)
Cañellas-Boltà, N.; Riera-Mora, S.; Orengo, H. A.; Livarda, A.; Knappett, C.
2018-03-01
On the east Mediterranean island of Crete, a hierarchical society centred on large palatial complexes emerged during the Bronze Age. The economic basis for this significant social change has long been debated, particularly concerning the role of olive cultivation in the island's agricultural system. With the aim of studying vegetation changes and human management to understand the landscape history from the Late Neolithic to the Bronze Age, two palaeoenvironmental records have been studied at Kouremenos marsh, near the site of Palaikastro (Eastern Crete). Analyses of pollen, non-pollen palynomorphs (NPPs) and charcoal particles evidenced seven phases of landscape change, resulting from different agricultural and pastoral practices and the use of fire, probably to manage vegetation. Moreover, the Kouremenos records show the importance of the olive tree in the area. They reflect a clear trend of increasing use and exploitation of the tree from 3600 cal yr BC (Final Neolithic) to the Early Minoan period, coeval with an opening of the landscape. The increase of Olea pollen was due to the expansion of the tree and its management using pruning and mechanical cleaning. The onset of olive expansion at c. 3600 cal yr BC places Crete among the first locales in the eastern Mediterranean to manage this tree. Between c. 2780 and 2525 cal yr BC the landscape was largely occupied by olive and grasslands, coinciding with an increase in grazing practices. The high Olea pollen percentages (40-45%) suggest an intensive and large-scale exploitation of the olive tree. The results suggest that a complex and organized landscape with complementary land uses and activities was already in place by the Final Neolithic. The notable expansion of olive trees suggests the relevance of olive exploitation in the socio-economic development of the Minoan towns of eastern Crete. Other crops, such as cereals and vine, and activities such as grazing also played an important role in the configuration of the past landscape.
Indurkhya, Sagar; Beal, Jacob
2010-01-06
ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires only O(N) storage for N reactions, rather than the O(N²) required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.
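As a hedged illustration of the dependency-graph idea described above (not the authors' LOLCAT implementation), the following minimal Python sketch of the Gillespie direct method keeps a species-to-reactions map so that, after a reaction fires, only the propensities that depend on the changed species are recomputed; the toy reaction set and rate constants are assumptions made purely for illustration.

    import math
    import random

    # Toy reversible system: A + B -> C (k1), C -> A + B (k2); rates are assumptions.
    rates = [0.01, 0.1]
    stoich = [{0: -1, 1: -1, 2: +1},      # net species change per reaction (A, B, C)
              {0: +1, 1: +1, 2: -1}]
    reactants = [{0: 1, 1: 1}, {2: 1}]    # species -> stoichiometric order per reaction

    # bipartite dependency: species index -> reactions whose propensity uses it
    depends_on = {0: [0], 1: [0], 2: [1]}

    def propensity(j, x):
        a = rates[j]
        for s, order in reactants[j].items():
            for i in range(order):
                a *= max(x[s] - i, 0)
        return a

    def ssa(x, t_end=10.0):
        t = 0.0
        a = [propensity(j, x) for j in range(len(rates))]
        while t < t_end and sum(a) > 0:
            a0 = sum(a)
            t += -math.log(random.random()) / a0            # exponential waiting time
            j = random.choices(range(len(rates)), weights=a)[0]
            for s, dx in stoich[j].items():
                x[s] += dx
            # update only the propensities that depend on the species just changed
            for s in stoich[j]:
                for k in depends_on[s]:
                    a[k] = propensity(k, x)
        return x

    print(ssa([100, 80, 0]))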
Pezzulo, G; Levin, M
2015-12-01
A major goal of regenerative medicine and bioengineering is the regeneration of complex organs, such as limbs, and the capability to create artificial constructs (so-called biobots) with defined morphologies and robust self-repair capabilities. Developmental biology presents remarkable examples of systems that self-assemble and regenerate complex structures toward their correct shape despite significant perturbations. A fundamental challenge is to translate progress in molecular genetics into control of large-scale organismal anatomy, and the field is still searching for an appropriate theoretical paradigm for facilitating control of pattern homeostasis. However, computational neuroscience provides many examples in which cell networks - brains - store memories (e.g., of geometric configurations, rules, and patterns) and coordinate their activity towards proximal and distant goals. In this Perspective, we propose that programming large-scale morphogenesis requires exploiting the information processing by which cellular structures work toward specific shapes. In non-neural cells, as in the brain, bioelectric signaling implements information processing, decision-making, and memory in regulating pattern and its remodeling. Thus, approaches used in computational neuroscience to understand goal-seeking neural systems offer a toolbox of techniques to model and control regenerative pattern formation. Here, we review recent data on developmental bioelectricity as a regulator of patterning, and propose that target morphology could be encoded within tissues as a kind of memory, using the same molecular mechanisms and algorithms so successfully exploited by the brain. We highlight the next steps of an unconventional research program, which may allow top-down control of growth and form for numerous applications in regenerative medicine and synthetic bioengineering.
The Next Generation of the Montage Image Mosaic Engine
NASA Astrophysics Data System (ADS)
Berriman, G. Bruce; Good, John; Rusholme, Ben; Robitaille, Thomas
2016-01-01
We have released a major upgrade of the Montage image mosaic engine (http://montage.ipac.caltech.edu), as part of a program to develop the next generation of the engine in response to the rapid changes in the data processing landscape in astronomy, which is generating ever larger data sets in ever more complex formats. The new release (version 4) contains modules dedicated to creating and managing mosaics of data stored as multi-dimensional arrays ("data cubes"). The new release inherits the architectural benefits of portability and scalability of the original design. The code is publicly available on GitHub and the Montage web page. The release includes a command-line tool that supports visualization of large images, and the beta release of a Python interface to the visualization tool. We will provide examples of how to use these features. We are generating a mosaic of the Galactic Arecibo L-band Feed Array HI (GALFA-HI) Survey maps of neutral hydrogen in and around our Milky Way Galaxy, to assess the performance at scale and to develop tools and methodologies that will enable scientists inexpert in cloud processing to exploit cloud platforms for data processing and product generation at scale. Future releases will include support for an R-tree based mechanism for fast discovery of and access to large data sets, and on-demand access to calibrated SDSS DR9 data that exploits it; support for the Hierarchical Equal Area isoLatitude Pixelization (HEALPix) scheme, now standard for projects investigating cosmic background radiation (Gorski et al. 2005); support for the Tessellated Octahedral Adaptive Subdivision Transform (TOAST), the sky partitioning scheme used by the WorldWide Telescope (WWT); and a public applications programming interface (API) in C that can be called from other languages, especially Python.
NASA Astrophysics Data System (ADS)
Bonano, Manuela; Buonanno, Sabatino; Ojha, Chandrakanta; Berardino, Paolo; Lanari, Riccardo; Zeni, Giovanni; Manunta, Michele
2017-04-01
The advanced DInSAR technique referred to as the Small BAseline Subset (SBAS) algorithm has already largely demonstrated its effectiveness for carrying out multi-scale and multi-platform surface deformation analyses relevant to both natural and man-made hazards. Thanks to its capability to generate displacement maps and long-term deformation time series at both regional (low-resolution analysis) and local (full-resolution analysis) spatial scales, it allows one to gain more insight into the spatial and temporal patterns of localized displacements of single buildings and infrastructures over extended urban areas, with a key role in supporting risk mitigation and preservation activities. The extensive application of the multi-scale SBAS-DInSAR approach in many scientific contexts has gone hand in hand with the development of new SAR satellite missions, characterized by different frequency bands, spatial resolutions, revisit times and ground coverage. This has led to the generation of huge DInSAR data stacks that have to be efficiently handled, processed and archived, with a strong impact on both the data storage and the computational requirements needed for generating the full-resolution SBAS-DInSAR results. Accordingly, innovative and effective solutions for the automatic processing of massive SAR data archives and for the operational management of the derived SBAS-DInSAR products need to be designed and implemented, by exploiting the high efficiency (in terms of portability, scalability and computing performance) of new ICT methodologies. In this work, we present a novel parallel implementation of the full-resolution SBAS-DInSAR processing chain, aimed at investigating localized displacements affecting single buildings and infrastructures over very large urban areas, relying on parallelization strategies with different granularity levels. The image granularity level is applied in most steps of the SBAS-DInSAR processing chain and exploits multiprocessor systems with distributed memory. Moreover, in some computationally very heavy processing steps, Graphics Processing Units (GPUs) are exploited for processing blocks that work on a pixel-by-pixel basis, requiring strong modifications to some key parts of the sequential full-resolution SBAS-DInSAR processing chain. GPU processing is implemented by efficiently exploiting parallel processing architectures (such as CUDA) to increase the computing performance, in terms of optimization of the available GPU memory, as well as reduction of the input/output operations on the GPU and of the overall processing time for specific blocks with respect to the corresponding sequential implementation, which is particularly critical in the presence of huge DInSAR datasets. Moreover, to efficiently handle the massive amount of DInSAR measurements provided by the new-generation SAR constellations (CSK and Sentinel-1), we carry out a re-design aimed at the robust assimilation of the full-resolution SBAS-DInSAR results into the web-based GeoNode platform of the Spatial Data Infrastructure, thus allowing the efficient management, analysis and integration of the interferometric results with different data sources.
Unintended consequences of increasing block tariffs pricing policy in urban water
NASA Astrophysics Data System (ADS)
Dahan, Momi; Nisan, Udi
2007-03-01
We exploit a unique data set to estimate the degree of economies of scale in water consumption, controlling for the standard demand factors. We found a linear Engel curve in water consumption: each additional household member consumes the same water quantity regardless of household size, except for a single-person household. Our evidence suggests that the increasing block tariffs (IBT) structure, which is indifferent to household size, has unintended consequences. Large households, which are also likely to be poor given the negative correlation between income and household size, are charged a higher price for water. The degree of economies of scale found here erodes the effectiveness of IBT price structure as a way to introduce an equity consideration. This implication is important in view of the global trend toward the use of IBT.
Imaging the distribution of transient viscosity after the 2016 Mw 7.1 Kumamoto earthquake.
Moore, James D P; Yu, Hang; Tang, Chi-Hsien; Wang, Teng; Barbot, Sylvain; Peng, Dongju; Masuti, Sagar; Dauwels, Justin; Hsu, Ya-Ju; Lambert, Valère; Nanjundiah, Priyamvada; Wei, Shengji; Lindsey, Eric; Feng, Lujia; Shibazaki, Bunichiro
2017-04-14
The deformation of mantle and crustal rocks in response to stress plays a crucial role in the distribution of seismic and volcanic hazards, controlling tectonic processes ranging from continental drift to earthquake triggering. However, the spatial variation of these dynamic properties is poorly understood as they are difficult to measure. We exploited the large stress perturbation incurred by the 2016 earthquake sequence in Kumamoto, Japan, to directly image localized and distributed deformation. The earthquakes illuminated distinct regions of low effective viscosity in the lower crust, notably beneath the Mount Aso and Mount Kuju volcanoes, surrounded by larger-scale variations of viscosity across the back-arc. This study demonstrates a new potential for geodesy to directly probe rock rheology in situ across many spatial and temporal scales. Copyright © 2017, American Association for the Advancement of Science.
Computational investigation of large-scale vortex interaction with flexible bodies
NASA Astrophysics Data System (ADS)
Connell, Benjamin; Yue, Dick K. P.
2003-11-01
The interaction of large-scale vortices with flexible bodies is examined with particular interest paid to the energy and momentum budgets of the system. Finite difference direct numerical simulation of the Navier-Stokes equations on a moving curvilinear grid is coupled with a finite difference structural solver of both a linear membrane under tension and linear Euler-Bernoulli beam. The hydrodynamics and structural dynamics are solved simultaneously using an iterative procedure with the external structural forcing calculated from the hydrodynamics at the surface and the flow-field velocity boundary condition given by the structural motion. We focus on an investigation into the canonical problem of a vortex-dipole impinging on a flexible membrane. It is discovered that the structural properties of the membrane direct the interaction in terms of the flow evolution and the energy budget. Pressure gradients associated with resonant membrane response are shown to sustain the oscillatory motion of the vortex pair. Understanding how the key mechanisms in vortex-body interactions are guided by the structural properties of the body is a prerequisite to exploiting these mechanisms.
Lake, B.C.; Schmutz, J.A.; Lindberg, M.S.; Ely, Craig R.; Eldridge, W.D.; Broerman, F.J.
2008-01-01
We studied body mass of prefledging Emperor Geese Chen canagica at three locations across the Yukon-Kuskokwim Delta, Alaska, during 1990-2004 to investigate whether large-scale variation in body mass was related to interspecific competition for food. From 1990 to 2004, densities of Cackling Geese Branta hutchinsii minima more than doubled and were c. 2-5× greater than densities of Emperor Geese, which were relatively constant over time. Body mass of prefledging Emperor Geese was strongly related (negatively) to interspecific densities of geese (combined density of Cackling and Emperor Geese) and positively related to measures of food availability (grazing lawn extent and net above-ground primary productivity (NAPP)). Grazing by geese resulted in consumption of ≥ 90% of the NAPP that occurred in grazing lawns during the brood-rearing period, suggesting that density-dependent interspecific competition was from exploitation of common food resources. Efforts to increase the population size of Emperor Geese would benefit from considering competitive interactions among goose species and with forage plants. © 2008 The Authors.
Tommasin, Silvia; Mascali, Daniele; Moraschi, Marta; Gili, Tommaso; Assan, Ibrahim Eid; Fratini, Michela; DiNuzzo, Mauro; Wise, Richard G; Mangia, Silvia; Macaluso, Emiliano; Giove, Federico
2018-06-14
Brain activity at rest is characterized by widely distributed and spatially specific patterns of synchronized low-frequency blood-oxygenation level-dependent (BOLD) fluctuations, which correspond to physiologically relevant brain networks. This network behaviour is known to persist also during task execution, yet the details underlying task-associated modulations of within- and between-network connectivity are largely unknown. In this study we exploited a multi-parametric and multi-scale approach to investigate how low-frequency fluctuations adapt to a sustained n-back working memory task. We found that the transition from the resting state to the task state involves a behaviourally relevant and scale-invariant modulation of synchronization patterns within both task-positive and default mode networks. Specifically, decreases of connectivity within networks are accompanied by increases of connectivity between networks. In spite of large and widespread changes of connectivity strength, the overall topology of brain networks is remarkably preserved. We show that these findings are strongly influenced by connectivity at rest, suggesting that the absolute change of connectivity (i.e., disregarding the baseline) may be not the most suitable metric to study dynamic modulations of functional connectivity. Our results indicate that a task can evoke scale-invariant, distributed changes of BOLD fluctuations, further confirming that low frequency BOLD oscillations show a specialized response and are tightly bound to task-evoked activation. Copyright © 2018. Published by Elsevier Inc.
The impact of galaxy formation on satellite kinematics and redshift-space distortions
NASA Astrophysics Data System (ADS)
Orsi, Álvaro A.; Angulo, Raúl E.
2018-04-01
Galaxy surveys aim to map the large-scale structure of the Universe and use redshift-space distortions to constrain deviations from general relativity and probe the existence of massive neutrinos. However, the amount of information that can be extracted is limited by the accuracy of theoretical models used to analyse the data. Here, by using the L-Galaxies semi-analytical model run over the Millennium-XXL N-body simulation, we assess the impact of galaxy formation on satellite kinematics and the theoretical modelling of redshift-space distortions. We show that different galaxy selection criteria lead to noticeable differences in the radial distributions and velocity structure of satellite galaxies. Specifically, whereas samples of stellar mass selected galaxies feature satellites that roughly follow the dark matter, emission line satellite galaxies are located preferentially in the outskirts of haloes and display net infall velocities. We demonstrate that capturing these differences is crucial for modelling the multipoles of the correlation function in redshift space, even on large scales. In particular, we show how modelling small-scale velocities with a single Gaussian distribution leads to a poor description of the measured clustering. In contrast, we propose a parametrization that is flexible enough to model the satellite kinematics and that leads to an accurate description of the correlation function down to sub-Mpc scales. We anticipate that our model will be a necessary ingredient in improved theoretical descriptions of redshift-space distortions, which together could result in significantly tighter cosmological constraints and a more optimal exploitation of future large data sets.
NASA Astrophysics Data System (ADS)
Sengupta, A.; Kletzing, C.; Howk, R.; Kurth, W. S.
2017-12-01
An important goal of the Van Allen Probes mission is to understand the wave-particle interactions that can energize relativistic electrons in the Earth's Van Allen radiation belts. The EMFISIS instrumentation suite provides measurements of the wave electric and magnetic fields of features, such as chorus, that participate in these interactions. Geometric signal processing discovers structural relationships, e.g. connectivity across ridge-like features in chorus elements, to reveal properties such as the dominant angle of an element (frequency sweep rate) and the integrated power along a given chorus element. These techniques disambiguate such wave features against background hiss-like chorus. This enables autonomous discovery of chorus elements across the large volumes of EMFISIS data. At the scale of individual or overlapping chorus elements, topological pattern recognition techniques enable interpretation of chorus microstructure by discovering connectivity and other geometric features within the wave signature of a single chorus element or between overlapping chorus elements. Thus chorus wave features can be quantified and studied at multiple scales of spectral geometry using geometric signal processing techniques. We present recently developed computational techniques that exploit the spectral geometry of chorus elements and whistlers to enable large-scale automated discovery, detection and statistical analysis of these events over EMFISIS data. Specifically, we present case studies across a diverse portfolio of chorus elements and discuss the performance of our algorithms regarding precision of detection as well as interpretation of chorus microstructure. We also provide large-scale statistical analysis of the distribution of dominant sweep rates and other properties of the detected chorus elements.
Simulation of all-scale atmospheric dynamics on unstructured meshes
NASA Astrophysics Data System (ADS)
Smolarkiewicz, Piotr K.; Szmelter, Joanna; Xiao, Feng
2016-10-01
The advance of massively parallel computing in the nineteen nineties and beyond encouraged finer grid intervals in numerical weather-prediction models. This has improved resolution of weather systems and enhanced the accuracy of forecasts, while setting the trend for development of unified all-scale atmospheric models. This paper first outlines the historical background to a wide range of numerical methods advanced in the process. Next, the trend is illustrated with a technical review of a versatile nonoscillatory forward-in-time finite-volume (NFTFV) approach, proven effective in simulations of atmospheric flows from small-scale dynamics to global circulations and climate. The outlined approach exploits the synergy of two specific ingredients: the MPDATA methods for the simulation of fluid flows based on the sign-preserving properties of upstream differencing; and the flexible finite-volume median-dual unstructured-mesh discretisation of the spatial differential operators comprising PDEs of atmospheric dynamics. The paper consolidates the concepts leading to a family of generalised nonhydrostatic NFTFV flow solvers that include soundproof PDEs of incompressible Boussinesq, anelastic and pseudo-incompressible systems, common in large-eddy simulation of small- and meso-scale dynamics, as well as all-scale compressible Euler equations. Such a framework naturally extends predictive skills of large-eddy simulation to the global atmosphere, providing a bottom-up alternative to the reverse approach pursued in the weather-prediction models. Theoretical considerations are substantiated by calculations attesting to the versatility and efficacy of the NFTFV approach. Some prospective developments are also discussed.
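As a hedged, self-contained illustration of the sign-preserving upstream differencing that MPDATA builds on, the following Python sketch advects a 1-D field with a donor-cell (upwind) pass followed by a single antidiffusive corrective pass; the grid size, Courant number and initial field are illustrative assumptions, and this is a toy scheme rather than the NFTFV solver itself.

    # 1-D MPDATA sketch: donor-cell (upwind) step followed by one corrective pass
    # with antidiffusive pseudo-velocities. Periodic domain, constant Courant number.
    import numpy as np

    def upwind(psi, c):
        # flux through the i+1/2 face for face Courant numbers c[i] = C_{i+1/2}
        flux = np.maximum(c, 0.0) * psi + np.minimum(c, 0.0) * np.roll(psi, -1)
        return psi - (flux - np.roll(flux, 1))

    def mpdata_step(psi, courant, eps=1e-15):
        psi1 = upwind(psi, np.full_like(psi, courant))
        # antidiffusive face velocity C~_{i+1/2} built from the first-pass field
        num = np.roll(psi1, -1) - psi1
        den = np.roll(psi1, -1) + psi1 + eps
        c_corr = (abs(courant) - courant ** 2) * num / den
        return upwind(psi1, c_corr)

    n, courant = 100, 0.4                                   # assumed grid and Courant number
    psi = np.where((np.arange(n) > 40) & (np.arange(n) < 60), 1.0, 0.0)
    for _ in range(50):
        psi = mpdata_step(psi, courant)
    print(psi.min(), psi.max(), psi.sum())                  # stays sign-preserving and conservative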
Performance of the Heavy Flavor Tracker (HFT) detector in the STAR experiment at RHIC
NASA Astrophysics Data System (ADS)
Alruwaili, Manal
With advancing technology, processor counts are becoming massive, and today's supercomputer-class processing will be available on desktops within the next decade. For mass-scale application software development on the massively parallel computing available on desktops, existing popular languages with large libraries have to be augmented with new constructs and paradigms that exploit massively parallel computing and distributed memory models while retaining user-friendliness. Currently available object-oriented languages for massively parallel computing, such as Chapel, X10 and UPC++, exploit distributed computing, data-parallel computing and thread-level parallelism at the process level in the PGAS (Partitioned Global Address Space) memory model. However, they do not incorporate: 1) any extension for object distribution to exploit the PGAS model; 2) the flexibility of migrating or cloning an object between places to exploit load balancing; and 3) the programming paradigms that result from integrating data- and thread-level parallelism with object distribution. In the proposed thesis, I compare different languages in the PGAS model; propose new constructs that extend C++ with object distribution, object migration and object cloning; and integrate PGAS-based process constructs with these extensions on distributed objects. A new paradigm, MIDD (Multiple Invocation Distributed Data), is also presented, in which different copies of the same class can be invoked and work concurrently on different elements of distributed data using remote method invocations. I present the new constructs, their grammar and their behavior, and explain them using simple programs that utilize these constructs.
Action detection by double hierarchical multi-structure space-time statistical matching model
NASA Astrophysics Data System (ADS)
Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang
2018-03-01
To cope with the complex information in videos and the low efficiency of existing detectors, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to obtain two similarity matrices at both large and small scales, which combines double hierarchical structural constraints from both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. In addition, a multi-scale composite template extends the model to multi-view application. Experimental results of DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.
Computational examination of utility scale wind turbine wake interactions
Okosun, Tyamo; Zhou, Chenn Q.
2015-07-14
We performed numerical simulations of small, utility-scale wind turbine groupings to determine how wakes generated by upstream turbines affect the performance of the small turbine group as a whole. Specifically, various wind turbine arrangements were simulated to better understand how turbine location influences small-group wake interactions. The minimization of power losses due to wake interactions certainly plays a significant role in the optimization of wind farms. Since wind turbines extract kinetic energy from the wind, the air passing through a wind turbine decreases in velocity, and turbines downstream of the initial turbine experience flows of lower energy, resulting in reduced power output. Our study proposes two arrangements of turbines that could generate more power by exploiting the momentum of the wind to increase velocity at downstream turbines, while maintaining low wake interactions at the same time. Furthermore, simulations using Computational Fluid Dynamics are used to obtain results much more quickly than methods requiring wind tunnel models or a large-scale experimental test.
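The study itself relies on CFD; purely as a back-of-the-envelope illustration of how an upstream wake erodes downstream power, the following Python sketch evaluates the classical Jensen (Park) wake model for an aligned turbine pair, with rotor diameter, thrust coefficient, wake decay constant and spacings chosen as assumptions rather than taken from the paper.

    # Jensen (Park) wake model sketch for an upstream/downstream turbine pair.
    # Not the CFD approach of the study; all parameters below are illustrative.
    import math

    def jensen_deficit(ct, rotor_d, x, k=0.075):
        """Fractional velocity deficit a distance x downstream of a turbine."""
        r0 = rotor_d / 2.0
        return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2

    u_inf = 10.0                 # free-stream wind speed [m/s] (assumed)
    ct, rotor_d = 0.8, 80.0      # thrust coefficient and rotor diameter [m] (assumed)
    for spacing in (3, 5, 7, 10):                 # downstream distance in rotor diameters
        u_wake = u_inf * (1.0 - jensen_deficit(ct, rotor_d, spacing * rotor_d))
        power_ratio = (u_wake / u_inf) ** 3       # power scales with the cube of wind speed
        print(f"{spacing}D spacing: wake speed {u_wake:.2f} m/s, "
              f"downstream power ~{100 * power_ratio:.0f}% of upstream")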
Dannemann, Teodoro; Boyer, Denis; Miramontes, Octavio
2018-04-10
Multiple-scale mobility is ubiquitous in nature and has become instrumental for understanding and modeling animal foraging behavior. However, the impact of individual movements on the long-term stability of populations remains largely unexplored. We analyze deterministic and stochastic Lotka-Volterra systems, where mobile predators consume scarce resources (prey) confined in patches. In fragile systems (that is, those unfavorable to species coexistence), the predator species has a maximized abundance and is resilient to degraded prey conditions when individual mobility is multiple scaled. Within the Lévy flight model, highly superdiffusive foragers rarely encounter prey patches and go extinct, whereas normally diffusing foragers tend to proliferate within patches, causing extinctions by overexploitation. Lévy flights of intermediate index allow a sustainable balance between patch exploitation and regeneration over wide ranges of demographic rates. Our analytical and simulated results can explain field observations and suggest that scale-free random movements are an important mechanism by which entire populations adapt to scarcity in fragmented ecosystems.
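A hedged sketch of the step-length sampling that underlies such Lévy-flight foraging models is given below: step lengths follow a power law P(l) proportional to l^-mu above a minimum step, drawn by inverse-transform sampling; the exponent values and minimum step are illustrative assumptions, not parameters from the paper.

    # Lévy-flight step lengths with P(l) ~ l^(-mu) for l >= l_min, via the Pareto
    # inverse CDF. mu near 2 corresponds to the intermediate regime discussed in
    # the abstract; mu -> 3 approaches Brownian-like motion. Values illustrative.
    import numpy as np

    def levy_steps(n, mu, l_min=1.0, rng=np.random.default_rng(0)):
        u = rng.random(n)
        return l_min * (1.0 - u) ** (-1.0 / (mu - 1.0))    # requires mu > 1

    for mu in (1.5, 2.0, 2.5, 3.0):
        steps = levy_steps(100000, mu)
        print(f"mu={mu}: median step {np.median(steps):.2f}, "
              f"longest step {steps.max():.1f}")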
Strong gravitational lensing probes of the particle nature of dark matter
NASA Astrophysics Data System (ADS)
Moustakas, Leonidas A.; Abazajian, Kevork; Benson, Andrew; Bolton, Adam S.; Bullock, James S.; Chen, Jacqueline; Cheng, Edward; Coe, Dan; Congdon, Arthur B.; Dalal, Neal; Diemand, Juerg; Dobke, Benjamin M.; Dobler, Greg; Dore, Olivier; Dutton, Aaron; Ellis, Richard; Fassnacht, Chris D.; Ferguson, Henry; Finkbeiner, Douglas; Gavassi, Raphael; High, Fredrick William; Jeltema, Telsa; Jullo, Eric; Kaplinghat, Manoj; Keeton, Charles R.; Kneib, Jean-Paul; Koopmans, Leon V.E.; Koishiappas, Savvas M.; Kuhlen, Michael; Kusenko, Alexander; Lawrence, Charles R.; Loeb, Avi; Madae, Piero; Marshall, Phil; Metcalf, R. Ben; Natarajan, Priya; Primack, Joel R.; Profumo, Stefano; Seiffert, Michael D.; Simon, Josh; Stern, Daniel; Strigari, Louis; Taylor, James E.; Wayth, Randall; Wambsganss, Joachim; Wechsler, Risa; Zentner, Andrew
There is a vast menagerie of plausible candidates for the constituents of dark matter, both within and beyond extensions of the Standard Model of particle physics. Each of these candidates may have scattering (and other) cross section properties that are consistent with the dark matter abundance, BBN, and most scales in the matter power spectrum; but which may have vastly different behavior at sub-galactic "cutoff" scales, below which dark matter density fluctuations are smoothed out. The only way to quantitatively measure the power spectrum behavior at sub-galactic scales at distances beyond the local universe, and indeed over cosmic time, is through probes available in multiply imaged strong gravitational lenses. Gravitational potential perturbations by dark matter substructure encode information in the observed relative magnifications, positions, and time delays in a strong lens. Each of these is sensitive to a different moment of the substructure mass function and to different effective mass ranges of the substructure. The time delay perturbations, in particular, are proving to be largely immune to the degeneracies and systematic uncertainties that have impacted exploitation of strong lenses for such studies. There is great potential for a coordinated theoretical and observational effort to enable a sophisticated exploitation of strong gravitational lenses as direct probes of dark matter properties. This opportunity motivates this white paper, and drives the need for: a) strong support of the theoretical work necessary to understand all astrophysical consequences for different dark matter candidates; and b) tailored observational campaigns, and even a fully dedicated mission, to obtain the requisite data.
Clay and Shale Permeability at Lab to Regional Scale
NASA Astrophysics Data System (ADS)
Neuzil, C.
2017-12-01
Because clays, shales, and other clay-rich media tend to be only poorly permeable, and are laterally extensive and voluminous, they play key roles in problems as diverse as groundwater supply, waste confinement, exploitation of conventional and unconventional oil and gas, and deformation and failure in the crust. Clay and shale permeability is a crucial but often highly uncertain analysis parameter; direct measurements are challenging, error-prone, and - perhaps most importantly - provide information only at quite small scales. Fortunately, there has been a dramatic increase in clay and shale permeability data from sources that include scientific ocean drilling, nuclear waste repository research, groundwater resource studies, liquid waste and CO2 sequestration, and oil and gas research. The effect of lithology as well as porosity on matrix permeability can now be examined and permeability - scale relations are becoming discernable. A significant number of large-scale permeability estimates have been obtained by inverse methods that essentially treat large-scale flow systems as natural experiments. They suggest surprisingly little scale-dependence in clay and shale permeabilities in subsiding basins and accretionary complexes. Stable continental settings present a different picture; as depths increase beyond 1 km, scale dependence mostly disappears even over the largest areas. At depths less than 1 km, secondary permeability is not always present over areas of 1 - 10 km2, but always evident for areas in excess of about 103 km2. Transmissive fractures have been observed in very low porosity (< 0.03) shales in these settings, but the cause of scale dependence in other cases is unclear; it may reflect time-dependent, or "dynamic" conditions, including irreversible and ongoing changes imposed on subsurface flow systems by human activities.
Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl
2015-11-01
Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
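As a hedged building block of this kind of probabilistic tracker (a generic constant-velocity Kalman filter, not the authors' two-step multi-frame association algorithm), the following Python sketch predicts and updates a single particle's 2-D position and velocity from noisy detections; the noise covariances and detections are assumptions.

    # Constant-velocity Kalman filter sketch for one particle in 2-D
    # (state: x, y, vx, vy). Generic illustration; noise levels are assumed.
    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)        # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)        # only position is observed
    Q = 0.01 * np.eye(4)                       # process noise (assumed)
    R = 0.5 * np.eye(2)                        # detection noise (assumed)

    def kalman_step(x, P, z):
        # predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # update with the detected position z
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(4) - K @ H) @ P_pred
        return x_new, P_new

    x, P = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
    for z in np.array([[1.1, 0.4], [2.0, 1.1], [3.2, 1.4]]):   # noisy detections (assumed)
        x, P = kalman_step(x, P, z)
        print(x[:2])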
Oscar: a portable prototype system for the study of climate variability
NASA Astrophysics Data System (ADS)
Madonna, Fabio; Rosoldi, Marco; Amato, Francesco
2015-04-01
The study of techniques for the exploitation of solar energy requires knowledge of the natural environment, the ecosystem, biological factors and the local climate. Clouds, fog, water vapor, and the presence of large concentrations of dust can significantly affect the way solar energy can be exploited. Therefore, a quantitative characterization of the impact of climate variability at the regional scale is needed to increase the efficiency and sustainability of the energy system. The OSCAR (Observation System for Climate Application at Regional scale) project, funded in the frame of the PO FESR 2007-2013 programme, aims at the design of a portable prototype system for the study of correlations among the trends of several Essential Climate Variables (ECVs) and the change in the amount of solar irradiance at ground level. The final goal of this project is to provide a user-friendly, low-cost solution for quantifying the impact of regional climate variability on the efficiency of solar cells and concentrators, in order to improve the exploitation of natural sources. The prototype has been designed on the basis of historical measurements performed at the CNR-IMAA Atmospheric Observatory (CIAO). Satellite measurements and model data have also been considered as ancillary to the study, above all to fill the gaps of existing datasets. In this work, the results of the project activities will be presented. The results include: the design and implementation of the prototype system; the development of a methodology for estimating the impact of climate variability, mainly due to aerosol, cloud and water vapor, on the solar irradiance using the integration of the observations provided by the prototype; and the study of the correlation between surface radiation, precipitation and aerosol transport. In particular, a statistical study will be presented to assess the impact of the atmosphere on the solar irradiance at the ground, quantifying the contribution due to aerosols and clouds and separating their effects on the direct and diffuse components of the solar radiation. This also aims to provide recommendations to the manufacturers of the devices used to exploit solar radiation.
NASA Technical Reports Server (NTRS)
Schmetz, Johannes; Menzel, W. Paul; Velden, Christopher; Wu, Xiangqian; Vandeberg, Leo; Nieman, Steve; Hayden, Christopher; Holmlund, Kenneth; Geijo, Carlos
1995-01-01
This paper describes the results from a collaborative study between the European Space Operations Center, the European Organization for the Exploitation of Meteorological Satellites, the National Oceanic and Atmospheric Administration, and the Cooperative Institute for Meteorological Satellite Studies investigating the relationship between satellite-derived monthly mean fields of wind and humidity in the upper troposphere for March 1994. Three geostationary meteorological satellites GOES-7, Meteosat-3, and Meteosat-5 are used to cover an area from roughly 160 deg W to 50 deg E. The wind fields are derived from tracking features in successive images of upper-tropospheric water vapor (WV) as depicted in the 6.5-micron absorption band. The upper-tropospheric relative humidity (UTH) is inferred from measured water vapor radiances with a physical retrieval scheme based on radiative forward calculations. Quantitative information on large-scale circulation patterns in the upper troposphere is possible with the dense spatial coverage of the WV wind vectors. The monthly mean wind field is used to estimate the large-scale divergence; values range between about -5 x 10^-6 and 5 x 10^-6 s^-1 when averaged over a scale length of about 1000-2000 km. The spatial patterns of the UTH field and the divergence of the wind field closely resemble one another, suggesting that UTH patterns are principally determined by the large-scale circulation. Since the upper-tropospheric humidity absorbs upwelling radiation from lower-tropospheric levels and therefore contributes significantly to the atmospheric greenhouse effect, this work implies that studies on the climate relevance of water vapor should include three-dimensional modeling of the atmospheric dynamics. The fields of UTH and WV winds are useful parameters for a climate-monitoring system based on satellite data. The results from this 1-month analysis suggest the desirability of further GOES and Meteosat studies to characterize the changes in the upper-tropospheric moisture sources and sinks over the past decade.
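As a hedged illustration of the divergence diagnostic discussed above, the following numpy sketch computes the horizontal divergence of a gridded wind field on a regular latitude-longitude grid, including the metric term for converging meridians; the 2.5-degree grid and the toy wind components are assumptions, not the satellite-derived winds.

    # Horizontal divergence du/dx + dv/dy of a gridded wind field on a regular
    # lat/lon grid, with the spherical metric term. Toy field and spacing assumed.
    import numpy as np

    R_EARTH = 6.371e6
    lat = np.deg2rad(np.arange(-60, 60.1, 2.5))
    lon = np.deg2rad(np.arange(0, 360, 2.5))
    LON, LAT = np.meshgrid(lon, lat)

    # toy upper-tropospheric wind components [m/s]
    u = 20.0 * np.cos(LAT) + 5.0 * np.sin(3 * LON)
    v = 5.0 * np.cos(2 * LON) * np.cos(LAT)

    dlat = lat[1] - lat[0]
    dlon = lon[1] - lon[0]
    du_dx = np.gradient(u, dlon, axis=1) / (R_EARTH * np.cos(LAT))
    dv_dy = np.gradient(v * np.cos(LAT), dlat, axis=0) / (R_EARTH * np.cos(LAT))
    div = du_dx + dv_dy
    print(f"divergence range: {div.min():.2e} to {div.max():.2e} s^-1")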
A novel 3D deformation measurement method under optical microscope for micro-scale bulge-test
NASA Astrophysics Data System (ADS)
Wu, Dan; Xie, Huimin
2017-11-01
A micro-scale 3D deformation measurement method combined with an optical microscope is proposed in this paper. The method is based on gratings and a phase-shifting algorithm. By recording the grating images before and after deformation from two symmetrical angles and calculating the phases of the grating patterns, the 3D deformation field of the specimen can be extracted. The proposed method was applied to the micro-scale bulge test. A micro-scale thermal/mechanical coupled bulge-test apparatus matched with a super-depth microscope was developed. With the gratings fabricated onto the film, the deformed morphology of the bulged film was measured reliably. The experimental results show that the proposed method and the developed bulge-test apparatus can be used to successfully characterize the thermal/mechanical properties of films at the micro-scale.
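A hedged sketch of the standard four-step phase-shifting recovery on which such grating methods rely is given below; the synthetic fringe pattern, carrier and phase steps are assumptions standing in for the recorded grating images, and this is not the authors' full 3D reconstruction.

    # Four-step phase-shifting sketch: recover the wrapped phase of a fringe
    # (grating) pattern from four images shifted by 90 degrees each.
    import numpy as np

    x = np.linspace(0, 4 * np.pi, 512)             # carrier phase of the grating (assumed)
    true_phase = 0.8 * np.sin(x / 3.0)             # assumed deformation-induced phase
    bias, amp = 1.0, 0.5

    shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
    I1, I2, I3, I4 = [bias + amp * np.cos(x + true_phase + d) for d in shifts]

    wrapped = np.arctan2(I4 - I2, I1 - I3)         # standard 4-step formula
    unwrapped = np.unwrap(wrapped) - x             # remove the carrier, leaving the phase
    print(np.max(np.abs(unwrapped - true_phase)))  # residual should be tiny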
NASA's computer science research program
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1983-01-01
Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.
NASA Astrophysics Data System (ADS)
Ravanelli, R.; Nascetti, A.; Cirigliano, R. V.; Di Rico, C.; Monti, P.; Crespi, M.
2018-04-01
The aim of this work is to exploit the large-scale analysis capabilities of the innovative Google Earth Engine platform in order to investigate the temporal variations of the Urban Heat Island (UHI) phenomenon as a whole. An intuitive methodology implementing a large-scale correlation analysis between Land Surface Temperature (LST) and Land Cover alterations was thus developed. The results obtained for the Phoenix metropolitan area are promising and show how urbanization heavily affects the magnitude of the UHI effect, with significant increases in LST. The proposed methodology is therefore able to efficiently monitor the UHI phenomenon.
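As a hedged sketch of the kind of Earth Engine query that such an analysis builds on, the snippet below averages MODIS daytime land surface temperature over a rectangle roughly covering Phoenix; the asset identifier, dates and bounding box are assumptions, and the code presumes an authenticated Earth Engine Python client.

    # Hedged Google Earth Engine sketch: mean summer daytime LST over an assumed
    # Phoenix bounding box. Requires prior ee.Authenticate() on the machine.
    import ee

    ee.Initialize()

    region = ee.Geometry.Rectangle([-112.4, 33.2, -111.6, 33.8])   # assumed bbox
    lst = (ee.ImageCollection('MODIS/006/MOD11A2')                 # 8-day LST product (assumed ID)
             .filterDate('2010-06-01', '2010-09-01')
             .select('LST_Day_1km')
             .mean()
             .multiply(0.02))                                      # scale factor to kelvin

    stats = lst.reduceRegion(reducer=ee.Reducer.mean(),
                             geometry=region, scale=1000)
    print(stats.getInfo())    # {'LST_Day_1km': ...} mean LST in kelvin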
Associative Pattern Recognition In Analog VLSI Circuits
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1995-01-01
Winner-take-all circuit selects best-match stored pattern. Prototype cascadable very-large-scale integrated (VLSI) circuit chips built and tested to demonstrate concept of electronic associative pattern recognition. Based on low-power, sub-threshold analog complementary metal oxide/semiconductor (CMOS) VLSI circuitry, each chip can store 128 sets (vectors) of 16 analog values (vector components), vectors representing known patterns as diverse as spectra, histograms, graphs, or brightnesses of pixels in images. Chips exploit parallel nature of vector quantization architecture to implement highly parallel processing in relatively simple computational cells. Through collective action, cells classify input pattern in fraction of microsecond while consuming power of few microwatts.
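A hedged software analogue of the chip's associative recall is sketched below: given 128 stored 16-component vectors, the winner-take-all step simply returns the stored vector closest to a noisy query; the random codebook stands in for real spectra, histograms or pixel vectors.

    # Software analogue of winner-take-all recall: pick the stored 16-component
    # vector closest to the input. Stored patterns here are random stand-ins.
    import numpy as np

    rng = np.random.default_rng(1)
    codebook = rng.random((128, 16))          # 128 stored vectors of 16 components
    query = codebook[42] + 0.02 * rng.standard_normal(16)   # noisy version of entry 42

    distances = np.abs(codebook - query).sum(axis=1)   # L1 distance to every entry
    winner = int(np.argmin(distances))                 # "winner-take-all"
    print(winner)                                      # expected: 42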
Sensitivity of the orbiting JEM-EUSO mission to large-scale anisotropies
NASA Astrophysics Data System (ADS)
Weiler, Thomas; Anchordoqui, Luis; Denton, Peter
2013-04-01
Uniform sky coverage and very large apertures are advantages of future extreme-energy, space-based cosmic-ray observatories. In this talk we will quantify the advantage of an all-sky/4pi observatory such as JEM-EUSO over the one to two steradian coverage of a ground-based observatory such as Auger. We exploit the availability of spherical harmonics in the case of 4pi coverage. The resulting Y(lm) coefficients will likely become a standard analysis tool for near-future, space-based, cosmic-ray astronomy. We demonstrate the use of Y(lm)'s with extractions of simulated dipole and quadrupole anisotropies. (A dipole anisotropy is expected if a single source-region such as Cen A dominates the sky, while a quadrupole moment is expected if a 2D source region such as the Supergalactic Plane dominates the sky.)
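As a hedged illustration of the advantage of uniform exposure, the following numpy sketch recovers a dipole anisotropy from simulated full-sky arrival directions with the simple 3<n> estimator, which is only unbiased for 4-pi coverage; the injected dipole amplitude and direction are assumptions.

    # Recover a dipole anisotropy from simulated full-sky arrival directions using
    # the 3*<n> estimator (valid for uniform exposure). Injected values assumed.
    import numpy as np

    rng = np.random.default_rng(2)
    A_true = 0.10                          # injected dipole amplitude (assumed)
    d_hat = np.array([0.0, 0.0, 1.0])      # dipole pointing toward the "north" pole

    def isotropic(n):
        v = rng.standard_normal((n, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    # accept-reject sampling of directions with flux ~ 1 + A * cos(theta)
    dirs = isotropic(500000)
    keep = rng.random(len(dirs)) < (1 + A_true * dirs @ d_hat) / (1 + A_true)
    events = dirs[keep]

    dipole_vec = 3.0 * events.mean(axis=0)
    print(f"recovered amplitude {np.linalg.norm(dipole_vec):.3f} (injected {A_true})")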
Taking Open Innovation to the Molecular Level - Strengths and Limitations.
Zdrazil, Barbara; Blomberg, Niklas; Ecker, Gerhard F
2012-08-01
The ever-growing availability of large-scale open data and its maturation is having a significant impact on industrial drug discovery, as well as on academic and non-profit research. As industry shifts to an 'open innovation' business concept, precompetitive initiatives and strong public-private partnerships including academic research cooperation partners are gaining more and more importance. The bioinformatics and cheminformatics communities are now seeking web tools that allow the integration of the large volume of life science datasets available in the public domain. Such a data exploitation tool would ideally be able to answer complex biological questions by formulating only one search query. In this short review/perspective, we outline the use of semantic web approaches for data and knowledge integration. Further, we discuss the strengths and current limitations of publicly available data retrieval tools and integrated platforms.
A Computational framework for telemedicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.
1998-07-01
Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze the requirements for a telemedical computing infrastructure and compare them with the requirements found in a typical metacomputing environment. We show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low-level mechanisms to enable a large-scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements of telemedicine.
NASA Astrophysics Data System (ADS)
Lyakh, Dmitry I.
2018-03-01
A novel reduced-scaling, general-order coupled-cluster approach is formulated by exploiting hierarchical representations of many-body tensors, combined with the recently suggested formalism of scale-adaptive tensor algebra. Inspired by the hierarchical techniques from the renormalisation group approach, H/H2-matrix algebra and fast multipole method, the computational scaling reduction in our formalism is achieved via coarsening of quantum many-body interactions at larger interaction scales, thus imposing a hierarchical structure on many-body tensors of coupled-cluster theory. In our approach, the interaction scale can be defined on any appropriate Euclidean domain (spatial domain, momentum-space domain, energy domain, etc.). We show that the hierarchically resolved many-body tensors can reduce the storage requirements to O(N), where N is the number of simulated quantum particles. Subsequently, we prove that any connected many-body diagram consisting of a finite number of arbitrary-order tensors, e.g. an arbitrary coupled-cluster diagram, can be evaluated in O(NlogN) floating-point operations. On top of that, we suggest an additional approximation to further reduce the computational complexity of higher order coupled-cluster equations, i.e. equations involving higher than double excitations, which otherwise would introduce a large prefactor into formal O(NlogN) scaling.
Using LUCAS topsoil database to estimate soil organic carbon content in local spectral libraries
NASA Astrophysics Data System (ADS)
Castaldi, Fabio; van Wesemael, Bas; Chabrillat, Sabine; Chartin, Caroline
2017-04-01
The quantification of the soil organic carbon (SOC) content over large areas is mandatory to obtain accurate soil characterization and classification, which can improve site specific management at local or regional scale exploiting the strong relationship between SOC and crop growth. The estimation of the SOC is not only important for agricultural purposes: in recent years, the increasing attention towards global warming highlighted the crucial role of the soil in the global carbon cycle. In this context, soil spectroscopy is a well consolidated and widespread method to estimate soil variables exploiting the interaction between chromophores and electromagnetic radiation. The importance of spectroscopy in soil science is reflected by the increasing number of large soil spectral libraries collected in the world. These large libraries contain soil samples derived from a consistent number of pedological regions and thus from different parent material and soil types; this heterogeneity entails, in turn, a large variability in terms of mineralogical and organic composition. In the light of the huge variability of the spectral responses to SOC content and composition, a rigorous classification process is necessary to subset large spectral libraries and to avoid the calibration of global models failing to predict local variation in SOC content. In this regard, this study proposes a method to subset the European LUCAS topsoil database into soil classes using a clustering analysis based on a large number of soil properties. The LUCAS database was chosen to apply a standardized multivariate calibration approach valid for large areas without the need for extensive field and laboratory work for calibration of local models. Seven soil classes were detected by the clustering analyses and the samples belonging to each class were used to calibrate specific partial least square regression (PLSR) models to estimate SOC content of three local libraries collected in Belgium (Loam belt and Wallonia) and Luxembourg. The three local libraries only consist of spectral data (199 samples) acquired using the same protocol as the one used for the LUCAS database. SOC was estimated with a good accuracy both within each local library (RMSE: 1.2 ÷ 5.4 g kg-1; RPD: 1.41 ÷ 2.06) and for the samples of the three libraries together (RMSE: 3.9 g kg-1; RPD: 2.47). The proposed approach could allow to estimate SOC everywhere in Europe only collecting spectra, without the need for chemical laboratory analyses, exploiting the potentiality of the LUCAS database and specific PLSR models.
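A hedged scikit-learn sketch of the cluster-then-calibrate logic described above is shown below: a clustering of soil properties defines classes, and one PLS regression per class maps spectra to SOC; the synthetic arrays stand in for the LUCAS and local libraries, and the rule used here to assign local (spectra-only) samples to a class is one possible choice, not necessarily the paper's.

    # Cluster-then-calibrate sketch: KMeans on soil properties defines classes,
    # one PLSR per class maps spectra to SOC. All data below are synthetic stand-ins.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    n_lib, n_bands, n_props = 2000, 200, 6
    props = rng.random((n_lib, n_props))                 # clay, pH, CaCO3, ... (stand-ins)
    spectra = rng.random((n_lib, n_bands))
    soc = spectra[:, :20].mean(axis=1) * 30 + rng.normal(0, 1, n_lib)   # toy SOC [g/kg]

    # 1) split the large library into soil classes
    kmeans = KMeans(n_clusters=7, n_init=10, random_state=0).fit(props)

    # 2) calibrate one PLSR model per class
    models = {}
    for c in range(7):
        idx = kmeans.labels_ == c
        models[c] = PLSRegression(n_components=10).fit(spectra[idx], soc[idx])

    # 3) for a local library with spectra only, assign each sample to the class whose
    #    mean spectrum is closest (one possible rule), then apply that class's model
    centroids = np.vstack([spectra[kmeans.labels_ == c].mean(axis=0) for c in range(7)])
    local_spectra = rng.random((50, n_bands))
    classes = np.argmin(((local_spectra[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    soc_pred = np.array([models[c].predict(s[None, :])[0, 0]
                         for c, s in zip(classes, local_spectra)])
    print(soc_pred[:5])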
NASA Astrophysics Data System (ADS)
Vilotte, J.-P.; Atkinson, M.; Michelini, A.; Igel, H.; van Eck, T.
2012-04-01
Increasingly dense seismic and geodetic networks are continuously transmitting a growing wealth of data from around the world. The multi-use of these data led the seismological community to pioneer globally distributed open-access data infrastructures, standard services and formats, e.g., the Federation of Digital Seismic Networks (FDSN) and the European Integrated Data Archives (EIDA). Our ability to acquire observational data outpaces our ability to manage, analyze and model them. Research in seismology is today facing a fundamental paradigm shift. Enabling advanced data-intensive analysis and modeling applications challenges conventional storage, computation and communication models and requires a new holistic approach. It is instrumental to exploit the cornucopia of data, and to guarantee optimal operation and design of the high-cost monitoring facilities. The strategy of VERCE is driven by the needs of seismological data-intensive applications in data analysis and modeling. It aims to provide a comprehensive architecture and framework adapted to the scale and the diversity of those applications, integrating the data infrastructures with Grid, Cloud and HPC infrastructures. It will allow prototyping solutions for new use cases as they emerge within the European Plate Observing System (EPOS), the ESFRI initiative of the solid Earth community. Computational seismology and information management are increasingly revolving around massive amounts of data that stem from: (1) the flood of data from the observational systems; (2) the flood of data from large-scale simulations and inversions; (3) the ability to economically store petabytes of data online; (4) the evolving Internet and data-aware computing capabilities. As data-intensive applications are rapidly increasing in scale and complexity, they require additional service-oriented architectures offering virtualization-based flexibility for complex and re-usable workflows. Scientific information management poses computer science challenges: acquisition, organization, query and visualization tasks scale almost linearly with the data volumes. The commonly used FTP-GREP metaphor today allows scanning of gigabyte-sized datasets but will not work for terabyte-sized continuous waveform datasets. New data analysis and modeling methods, exploiting the signal coherence within dense network arrays, are nonlinear. Pair-algorithms on N points scale as N². Waveform inversion and stochastic simulations raise computing and data-handling challenges. These applications are unfeasible for tera-scale datasets without new parallel algorithms that use near-linear processing, storage and bandwidth, and that can exploit new computing paradigms enabled by the intersection of several technologies (HPC, parallel scalable database crawlers, data-aware HPC). These issues will be discussed based on a number of core pilot data-intensive applications and use cases retained in VERCE. These core applications are related to: (1) data processing and data analysis methods based on correlation techniques (see the sketch following this abstract); (2) CPU-intensive applications such as large-scale simulation of synthetic waveforms in complex earth systems, and full waveform inversion and tomography. We shall analyze their workflow and data flow, and their requirements for a new service-oriented architecture and a data-aware platform with services and tools.
Finally, we will outline the importance of a new collaborative environment between seismology and computer science, together with the need for the emergence and the recognition of 'research technologists' mastering the evolving data-aware technologies and the data-intensive research goals in seismology.
[Impacts of hydroelectric cascade exploitation on river ecosystem and landscape: a review].
Yang, Kun; Deng, Xi; Li, Xue-Ling; Wen, Ping
2011-05-01
Hydroelectric cascade exploitation, one of the major ways of exploiting water resources and developing hydropower, not only satisfies the needs of various national economic sectors, but also promotes the sustainable socio-economic development of the river basin; at the same time, it brings unavoidable anthropogenic impacts on the entire basin ecosystem. Based on the process of hydroelectric cascade exploitation and the ecological characteristics of river basins, this paper reviewed the major impacts of hydroelectric cascade exploitation on dam-area ecosystems, river reservoir micro-climate, riparian ecosystems, river aquatic ecosystems, wetlands, and river landscapes. Some prospects for future research were offered, e.g., strengthening the research on chain reactions and cumulative effects of ecological factors affected by hydroelectric cascade exploitation, intensifying the study of positive and negative ecological effects under dam networks and their joint operations, and improving the research on the successional development and stability of basin ecosystems at different temporal and spatial scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seyedhosseini, Mojtaba; Kumar, Ritwik; Jurrus, Elizabeth R.
2011-10-01
Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework which is capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output of each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier and has the potential to be used in any context based framework. We demonstrate that our method outperforms the state-of-the-art algorithms in detection of neuron membranes in EM images.
Nearest neighbor density ratio estimation for large-scale applications in astronomy
NASA Astrophysics Data System (ADS)
Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.
2015-09-01
In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
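A minimal sketch of the general idea, assuming a plain k-nearest-neighbour density-ratio estimate for covariate-shift weights; the specific estimator and the unbiased cross-validation criterion used in the paper are not reproduced here, and all data are synthetic.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_density_ratio(X_train, X_test, X_query, k=10):
    """Estimate w(x) = p_test(x) / p_train(x) from k-NN radii in each sample."""
    d = X_train.shape[1]
    r_train = NearestNeighbors(n_neighbors=k).fit(X_train).kneighbors(X_query)[0][:, -1]
    r_test = NearestNeighbors(n_neighbors=k).fit(X_test).kneighbors(X_query)[0][:, -1]
    # p_hat(x) ~ k / (n * volume(r_k)); the dimension-dependent constants cancel
    return (len(X_train) / len(X_test)) * (r_train / r_test) ** d

rng = np.random.default_rng(1)
X_labeled = rng.normal(0.0, 1.0, size=(2000, 2))     # labeled (training) sample
X_unlabeled = rng.normal(0.5, 1.0, size=(2000, 2))   # shifted unlabeled (target) sample
weights = knn_density_ratio(X_labeled, X_unlabeled, X_labeled, k=20)
print(weights.mean(), weights.min(), weights.max())  # importance weights for training
```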
Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.
Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo
2016-10-01
Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning compact binary codes that preserve semantic information given by labels. The overwhelming majority of these methods are similarity preserving approaches which approximate the pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes indistinguishable and therefore reduces the accuracy and robustness of nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates the hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. Then, it exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved for hash codes to produce similar binary codes in the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for the large-scale cross-modal retrieval task.
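For orientation, the sketch below shows a classical cross-modal binary-embedding baseline: CCA projections of both modalities binarized by sign, with retrieval by Hamming distance. It is only a stand-in to illustrate the hash-code idea, not the MDBE method, and it uses synthetic data.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, d_img, d_txt, n_bits = 1000, 128, 64, 16
latent = rng.normal(size=(n, n_bits))                      # shared semantic factor (mock)
X_img = latent @ rng.normal(size=(n_bits, d_img)) + 0.1 * rng.normal(size=(n, d_img))
X_txt = latent @ rng.normal(size=(n_bits, d_txt)) + 0.1 * rng.normal(size=(n, d_txt))

cca = CCA(n_components=n_bits).fit(X_img, X_txt)
Z_img, Z_txt = cca.transform(X_img, X_txt)                 # shared latent projections
B_img = (Z_img > 0).astype(np.uint8)                       # binary codes per modality
B_txt = (Z_txt > 0).astype(np.uint8)

# text-to-image retrieval: rank images by Hamming distance to a text query code
query = B_txt[0]
hamming = (B_img != query).sum(axis=1)
print("top-5 image indices for text query 0:", np.argsort(hamming)[:5])
```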
Production of primary mirror segments for the Giant Magellan Telescope
NASA Astrophysics Data System (ADS)
Martin, H. M.; Allen, R. G.; Burge, J. H.; Davis, J. M.; Davison, W. B.; Johns, M.; Kim, D. W.; Kingsley, J. S.; Law, K.; Lutz, R. D.; Strittmatter, P. A.; Su, P.; Tuell, M. T.; West, S. C.; Zhou, P.
2014-07-01
Segment production for the Giant Magellan Telescope is well underway, with the off-axis Segment 1 completed, off-axis Segments 2 and 3 already cast, and mold construction in progress for the casting of Segment 4, the center segment. All equipment and techniques required for segment fabrication and testing have been demonstrated in the manufacture of Segment 1. The equipment includes a 28 m test tower that incorporates four independent measurements of the segment's figure and geometry. The interferometric test uses a large asymmetric null corrector with three elements including a 3.75 m spherical mirror and a computer-generated hologram. For independent verification of the large-scale segment shape, we use a scanning pentaprism test that exploits the natural geometry of the telescope to focus collimated light to a point. The Software Configurable Optical Test System, loosely based on the Hartmann test, measures slope errors to submicroradian accuracy at high resolution over the full aperture. An enhanced laser tracker system guides the figuring through grinding and initial polishing. All measurements agree within the expected uncertainties, including three independent measurements of radius of curvature that agree within 0.3 mm. Segment 1 was polished using a 1.2 m stressed lap for smoothing and large-scale figuring, and a set of smaller passive rigid-conformal laps on an orbital polisher for deterministic small-scale figuring. For the remaining segments, the Mirror Lab is building a smaller, orbital stressed lap to combine the smoothing capability with deterministic figuring.
NASA Astrophysics Data System (ADS)
Vallier, Bérénice; Magnenet, Vincent; Fond, Christophe; Schmittbuhl, Jean
2017-04-01
Many numerical models have been developed in deep geothermal reservoir engineering to interpret field measurements of the natural hydro-thermal circulations or to predict exploitation scenarios. They typically aim at analyzing the Thermo-Hydro-Mechanical and Chemical (THMC) coupling, including complex rheologies of the rock matrix such as thermo-poro-elasticity. Few approaches address in detail the role of the fluid rheology and, more specifically, the non-linear sensitivity of the brine rheology to temperature and pressure. Here we use the finite element Code_Aster to solve the balance equations of a 2D THM model of the Soultz-sous-Forêts reservoir. The brine properties are assumed to depend on the fluid pressure and the temperature as in Magnenet et al. (2014). A sensitive parameter is the thermal dilatation of the brine, which is assumed to depend quadratically on temperature as proposed by the experimental measurements of Rowe and Chou (1970). The rock matrix is homogenized at the scale of the equation resolution, assuming a representative elementary volume of the fractured medium smaller than the mesh size. We chose four main geological units to adjust the rock-physics parameters at large scale: thermal conductivity, permeability, radioactive source production rate, elastic and Biot parameters. We obtain a three-layer solution with a large hydro-thermal convection below the cover-basement transition. Interestingly, the geothermal gradient in the sedimentary layer is controlled by the radioactive production rate in the upper altered granite. The second part of the study deals with an inversion approach of the homogenized solid and fluid parameters at large scale using our direct THM model. The goal is to compare the large-scale inverted estimates of the rock and brine properties with direct laboratory measurements on cores and discuss their upscaling in the context of a hydraulically active fracture network. Magnenet V., Fond C., Genter A. and Schmittbuhl J.: Two-dimensional THM modelling of the large-scale natural hydrothermal circulation at Soultz-sous-Forêts, Geothermal Energy, (2014), 2, 1-17. Rowe A.M. and Chou J.C.S.: Pressure-volume-temperature-concentration relation of aqueous NaCl solutions, J. Chem. Eng. Data, (1970), 15, 61-66.
Anderson, D.R.
1975-01-01
Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population. Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, or harvest rate, or designed to maintain a constant breeding population size is inefficient.
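A toy sketch of the stochastic dynamic programming formulation described above: value iteration over a discretized (population, environment) state space with a Markovian environment and an additive-mortality harvest model. All dynamics, grids and numbers are illustrative assumptions, not the parameter estimates of the study.

```python
import numpy as np

pop_states = np.linspace(2.0, 12.0, 21)          # breeding population (millions, mock)
env_states = [0, 1]                              # poor / good pond conditions
P_env = np.array([[0.7, 0.3],                    # serially correlated (Markovian) environment
                  [0.4, 0.6]])
harvest_rates = np.linspace(0.0, 0.4, 9)
growth = {0: 1.05, 1: 1.25}                      # per-capita growth by environment (assumed)

def step(pop, env, h):
    """Additive-mortality toy dynamics: harvest, then density-dependent growth."""
    survivors = pop * (1.0 - h)
    next_pop = survivors * growth[env] * (1.0 - survivors / 20.0)
    return np.clip(next_pop, pop_states[0], pop_states[-1])

V = np.zeros((len(pop_states), len(env_states)))
for _ in range(200):                             # value iteration with discount 0.95
    V_new = np.zeros_like(V)
    for i, pop in enumerate(pop_states):
        for e in env_states:
            best = -np.inf
            for h in harvest_rates:
                j = np.abs(pop_states - step(pop, e, h)).argmin()
                cont = sum(P_env[e, e2] * V[j, e2] for e2 in env_states)
                best = max(best, pop * h + 0.95 * cont)
            V_new[i, e] = best
    V = V_new
print(np.round(V, 2))                            # value of each (population, environment) state
```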
[Protection regionalization of Houshi Forest Park based on landscape sensitivity].
Zhou, Rui; Li, Yue-hui; Hu, Yuan-man; Zhang, Jia-hui; Liu, Miao
2009-03-01
By using GIS technology, and selecting slope, relative distance to viewpoints, relative distance to tourism roads, visual probability of viewpoints, and visual probability of tourism roads as the indices, the landscape sensitivity of Houshi Forest Park was assessed, and an integrated assessment model was established. The AHP method was utilized to determine the weights of the indices and, further, to identify the integrated sensitivity class of the areas in the Park. Four classes of integrated sensitivity area were delineated. Class I had an area of 297.24 hm2, occupying 22.9% of the total area of the Park; it should be strictly protected to maintain the natural landscape, with any exploitation or construction prohibited. Class II had an area of 359.72 hm2, accounting for 27.8% of the total. The hills in this area should be protected from destruction to preserve vegetation and water, although simple byways and stone paths could be built. Class III had an area of 495.80 hm2, occupying 38.3% of the total; it could be moderately exploited, with artificial landscape advocated to beautify and set off the natural landscape. Class IV had the smallest area (142.80 hm2), accounting for 11% of the total, and had the greatest exploitation potential, making it possible to build large-scale integrated tourism facilities and travel roads.
NASA Astrophysics Data System (ADS)
Athenodorou, Andreas; Boucaud, Philippe; de Soto, Feliciano; Rodríguez-Quintero, José; Zafeiropoulos, Savvas
2018-03-01
We report on an instanton-based analysis of the gluon Green functions in the Landau gauge for low momenta; in particular we use lattice results for αs in the symmetric momentum subtraction scheme (MOM) for large-volume lattice simulations. We have exploited quenched gauge field configurations, Nf = 0, with both Wilson and tree-level Symanzik improved actions, and unquenched ones with Nf = 2 + 1 and Nf = 2 + 1 + 1 dynamical flavors (domain wall and twisted-mass fermions, respectively). We show that the dominance of instanton correlations on the low-momenta gluon Green functions can be applied to the determination of phenomenological parameters of the instanton liquid and, eventually, to a determination of the lattice spacing. We furthermore apply the Gradient Flow to remove short-distance fluctuations. The Gradient Flow gets rid of the QCD scale, ΛQCD, and reveals that the instanton prediction extends to large momenta. For those gauge field configurations free of quantum fluctuations, the direct study of topological charge density shows the appearance of large-scale lumps that can be identified as instantons, giving access to a direct study of the instanton density and size distribution that is compatible with those extracted from the analysis of the Green functions.
Understanding middle managers' influence in implementing patient safety culture.
Gutberg, Jennifer; Berta, Whitney
2017-08-22
The past fifteen years have been marked by large-scale change efforts undertaken by healthcare organizations to improve patient safety and patient-centered care. Despite substantial investment of effort and resources, many of these large-scale or "radical change" initiatives, like those in other industries, have enjoyed limited success - with practice and behavioural changes neither fully adopted nor ultimately sustained - which has in large part been ascribed to inadequate implementation efforts. Culture change to "patient safety culture" (PSC) is among these radical change initiatives, where results to date have been mixed at best. This paper responds to calls for research that focus on explicating factors that affect efforts to implement radical change in healthcare contexts, and focuses on PSC as the radical change implementation. Specifically, this paper offers a novel conceptual model based on Organizational Learning Theory to explain the ability of middle managers in healthcare organizations to influence patient safety culture change. We propose that middle managers can capitalize on their unique position between upper and lower levels in the organization and engage in 'ambidextrous' learning that is critical to implementing and sustaining radical change. This organizational learning perspective offers an innovative way of framing the mid-level managers' role, through both explorative and exploitative activities, which further considers the necessary organizational context in which they operate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
We present the Clenshaw–Curtis Spectral Quadrature (SQ) method for real-space O(N) Density Functional Theory (DFT) calculations. In this approach, all quantities of interest are expressed as bilinear forms or sums over bilinear forms, which are then approximated by spatially localized Clenshaw–Curtis quadrature rules. This technique is identically applicable to both insulating and metallic systems, and in conjunction with local reformulation of the electrostatics, enables the O(N) evaluation of the electronic density, energy, and atomic forces. The SQ approach also permits infinite-cell calculations without recourse to Brillouin zone integration or large supercells. We employ a finite difference representation in order to exploit the locality of electronic interactions in real space, enable systematic convergence, and facilitate large-scale parallel implementation. In particular, we derive expressions for the electronic density, total energy, and atomic forces that can be evaluated in O(N) operations. We demonstrate the systematic convergence of energies and forces with respect to quadrature order as well as truncation radius to the exact diagonalization result. In addition, we show convergence with respect to mesh size to established O(N³) planewave results. In conclusion, we establish the efficiency of the proposed approach for high temperature calculations and discuss its particular suitability for large-scale parallel computation.
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-02
We present the Clenshaw–Curtis Spectral Quadrature (SQ) method for real-space O(N) Density Functional Theory (DFT) calculations. In this approach, all quantities of interest are expressed as bilinear forms or sums over bilinear forms, which are then approximated by spatially localized Clenshaw–Curtis quadrature rules. This technique is identically applicable to both insulating and metallic systems, and in conjunction with local reformulation of the electrostatics, enables the O(N) evaluation of the electronic density, energy, and atomic forces. The SQ approach also permits infinite-cell calculations without recourse to Brillouin zone integration or large supercells. We employ a finite difference representation in order to exploit the locality of electronic interactions in real space, enable systematic convergence, and facilitate large-scale parallel implementation. In particular, we derive expressions for the electronic density, total energy, and atomic forces that can be evaluated in O(N) operations. We demonstrate the systematic convergence of energies and forces with respect to quadrature order as well as truncation radius to the exact diagonalization result. In addition, we show convergence with respect to mesh size to established O(N³) planewave results. In conclusion, we establish the efficiency of the proposed approach for high temperature calculations and discuss its particular suitability for large-scale parallel computation.
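For readers unfamiliar with the underlying quadrature, the sketch below builds Clenshaw–Curtis nodes and weights on [-1, 1] using a standard construction and checks them against a known integral. It illustrates only the quadrature rule itself, not the spatially localized O(N) density functional machinery of the method.

```python
import numpy as np

def clencurt(N):
    """Clenshaw-Curtis nodes and weights on [-1, 1] (standard cosine-series construction)."""
    theta = np.pi * np.arange(N + 1) / N
    x = np.cos(theta)
    w = np.zeros(N + 1)
    v = np.ones(N - 1)
    if N % 2 == 0:
        w[0] = w[N] = 1.0 / (N**2 - 1)
        for k in range(1, N // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:N]) / (4 * k**2 - 1)
        v -= np.cos(N * theta[1:N]) / (N**2 - 1)
    else:
        w[0] = w[N] = 1.0 / N**2
        for k in range(1, (N - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:N]) / (4 * k**2 - 1)
    w[1:N] = 2.0 * v / N
    return x, w

x, w = clencurt(16)
print(w @ np.exp(x), np.exp(1) - np.exp(-1))     # quadrature result vs exact integral of exp on [-1, 1]
```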
Pore-Scale Simulation and Sensitivity Analysis of Apparent Gas Permeability in Shale Matrix
Zhang, Pengwei; Hu, Liming; Meegoda, Jay N.
2017-01-01
Extremely low permeability due to nano-scale pores is a distinctive feature of gas transport in a shale matrix. The permeability of shale depends on pore pressure, porosity, pore throat size and gas type. The pore network model is a practical way to explain the macro flow behavior of porous media from a microscopic point of view. In this research, gas flow in a shale matrix is simulated using a previously developed three-dimensional pore network model that includes typical bimodal pore size distribution, anisotropy and low connectivity of the pore structure in shale. The apparent gas permeability of shale matrix was calculated under different reservoir pressures corresponding to different gas exploitation stages. Results indicate that gas permeability is strongly related to reservoir gas pressure, and hence the apparent permeability is not a unique value during the shale gas exploitation, and simulations suggested that a constant permeability for continuum-scale simulation is not accurate. Hence, the reservoir pressures of different shale gas exploitations should be considered. In addition, a sensitivity analysis was also performed to determine the contributions to apparent permeability of a shale matrix from petro-physical properties of shale such as pore throat size and porosity. Finally, the impact of connectivity of nano-scale pores on shale gas flux was analyzed. These results would provide an insight into understanding nano/micro scale flows of shale gas in the shale matrix. PMID:28772465
Pore-Scale Simulation and Sensitivity Analysis of Apparent Gas Permeability in Shale Matrix.
Zhang, Pengwei; Hu, Liming; Meegoda, Jay N
2017-01-25
Extremely low permeability due to nano-scale pores is a distinctive feature of gas transport in a shale matrix. The permeability of shale depends on pore pressure, porosity, pore throat size and gas type. The pore network model is a practical way to explain the macro flow behavior of porous media from a microscopic point of view. In this research, gas flow in a shale matrix is simulated using a previously developed three-dimensional pore network model that includes typical bimodal pore size distribution, anisotropy and low connectivity of the pore structure in shale. The apparent gas permeability of shale matrix was calculated under different reservoir pressures corresponding to different gas exploitation stages. Results indicate that gas permeability is strongly related to reservoir gas pressure, and hence the apparent permeability is not a unique value during the shale gas exploitation, and simulations suggested that a constant permeability for continuum-scale simulation is not accurate. Hence, the reservoir pressures of different shale gas exploitations should be considered. In addition, a sensitivity analysis was also performed to determine the contributions to apparent permeability of a shale matrix from petro-physical properties of shale such as pore throat size and porosity. Finally, the impact of connectivity of nano-scale pores on shale gas flux was analyzed. These results would provide an insight into understanding nano/micro scale flows of shale gas in the shale matrix.
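The pressure dependence noted above can be illustrated with the classical Klinkenberg correction, k_app = k_inf (1 + b/p); this textbook relation is only a stand-in for the pore-network model of the study, and the parameter values below are assumed.

```python
import numpy as np

k_inf = 1.0e-19          # intrinsic (liquid) permeability, m^2 (assumed)
b = 5.0e6                # Klinkenberg slip factor, Pa (assumed)
reservoir_pressures = np.array([30e6, 20e6, 10e6, 5e6, 1e6])   # Pa, successive exploitation stages

# apparent permeability grows as reservoir pressure drops during exploitation
k_app = k_inf * (1.0 + b / reservoir_pressures)
for p, k in zip(reservoir_pressures, k_app):
    print(f"p = {p/1e6:5.1f} MPa  ->  k_app = {k:.3e} m^2")
```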
Electromagnetic liquid pistons for capillarity-based pumping
NASA Astrophysics Data System (ADS)
Malouin, Bernard; Olles, Joseph; Cheng, Lili; Hirsa, Amir; Vogel, Michael
2011-11-01
Two adjoining ferrofluid droplets can behave as an electronically-controlled oscillator or switch by an appropriate balance of magnetic, capillary, and inertial forces. Their motion can be exploited to displace a surrounding liquid, forming electromagnetic liquid pistons. Such ferrofluid pistons can pump a precise volume of liquid via finely tunable amplitudes or resonant frequencies with no solid moving parts. Here we demonstrate the use of these liquid pistons in capillarity-dominated systems for variable focal distance liquid lenses with nearly perfect spherical interfaces. These liquid/liquid lenses feature many promising qualities not previously realized together in a liquid lens, including large apertures, immunity to evaporation, invariance to orientation relative to gravity, and low driving voltages. The dynamics of these liquid pistons is examined, with experimental measurements showing good agreement with a spherical cap model. A centimeter-scale lens was shown to respond in excess of 30 Hz, with resonant frequencies over 1 kHz predicted for scaled down systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Aiman; Laguna, Ignacio; Sato, Kento
Future high-performance computing systems may face frequent failures with their rapid increase in scale and complexity. Resilience to faults has become a major challenge for large-scale applications running on supercomputers, which demands fault tolerance support for prevalent MPI applications. Among failure scenarios, process failures are one of the most severe issues as they usually lead to termination of applications. However, the widely used MPI implementations do not provide mechanisms for fault tolerance. We propose FTA-MPI (Fault Tolerance Assistant MPI), a programming model that provides support for failure detection, failure notification and recovery. Specifically, FTA-MPI exploits a try/catch model that enables failure localization and transparent recovery of process failures in MPI applications. We demonstrate FTA-MPI with synthetic applications and a molecular dynamics code CoMD, and show that FTA-MPI provides high programmability for users and enables convenient and flexible recovery of process failures.
Materials by Design—A Perspective From Atoms to Structures
Buehler, Markus J.
2013-01-01
Biological materials are effectively synthesized, controlled, and used for a variety of purposes—in spite of limitations in energy, quality, and quantity of their building blocks. Whereas the chemical composition of materials in the living world plays some role in achieving functional properties, the way components are connected at different length scales defines what material properties can be achieved, how they can be altered to meet functional requirements, and how they fail in disease states and other extreme conditions. Recent work has demonstrated this by using large-scale computer simulations to predict materials properties from fundamental molecular principles, combined with experimental work and new mathematical techniques to categorize complex structure-property relationships into a systematic framework. Enabled by such categorization, we discuss opportunities based on the exploitation of concepts from distinct hierarchical systems that share common principles in how function is created, linking music to materials science. PMID:24163499
The effect of nanoclay on the rheology and dynamics of polychlorinated biphenyl.
Roy, D; Casalini, R; Roland, C M
2015-12-28
The thermal, rheological, mechanical, and dielectric relaxation properties of exfoliated dispersions of montmorillonite clay in a molecular liquid, polychlorobiphenyl (PCB), were studied. The viscosity enhancement at low concentrations of clay (≤5%) exceeded by a factor of 50 the increase obtainable with conventional fillers. However, the effect of the nanoclay on the local dynamics, including the glass transition temperature, was quite small. All materials herein conformed to density-scaling of the reorientation relaxation time of the PCB for a common value of the scaling exponent. A new relaxation process was observed in the mixtures, associated with PCB molecules in proximity to the clay surface. This process has an anomalously high dielectric strength, suggesting a means to exploit nanoparticles to achieve large electrical energy absorption. This lower frequency dispersion has a weaker dependence on pressure and density, consistent with dynamics constrained by interactions with the particle surface.
Abbarchi, Marco; Naffouti, Meher; Vial, Benjamin; Benkouider, Abdelmalek; Lermusiaux, Laurent; Favre, Luc; Ronda, Antoine; Bidault, Sébastien; Berbezier, Isabelle; Bonod, Nicolas
2014-11-25
Subwavelength-sized dielectric Mie resonators have recently emerged as a promising photonic platform, as they combine the advantages of dielectric microstructures and metallic nanoparticles supporting surface plasmon polaritons. Here, we report the capabilities of a dewetting-based process, independent of the sample size, to fabricate Si-based resonators over large scales starting from commercial silicon-on-insulator (SOI) substrates. Spontaneous dewetting is shown to allow the production of monocrystalline Mie-resonators that feature two resonant modes in the visible spectrum, as observed in confocal scattering spectroscopy. Homogeneous scattering responses and improved spatial ordering of the Si-based resonators are observed when dewetting is assisted by electron beam lithography. Finally, exploiting different thermal agglomeration regimes, we highlight the versatility of this technique, which, when assisted by focused ion beam nanopatterning, produces monocrystalline nanocrystals with ad hoc size, position, and organization in complex multimers.
A New Event Builder for CMS Run II
NASA Astrophysics Data System (ADS)
Albertsson, K.; Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.
2015-12-01
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.
Systems Proteomics for Translational Network Medicine
Arrell, D. Kent; Terzic, Andre
2012-01-01
Universal principles underlying network science, and their ever-increasing applications in biomedicine, underscore the unprecedented capacity of systems biology based strategies to synthesize and resolve massive high throughput generated datasets. Enabling previously unattainable comprehension of biological complexity, systems approaches have accelerated progress in elucidating disease prediction, progression, and outcome. Applied to the spectrum of states spanning health and disease, network proteomics establishes a collation, integration, and prioritization algorithm to guide mapping and decoding of proteome landscapes from large-scale raw data. Providing unparalleled deconvolution of protein lists into global interactomes, integrative systems proteomics enables objective, multi-modal interpretation at molecular, pathway, and network scales, merging individual molecular components, their plurality of interactions, and functional contributions for systems comprehension. As such, network systems approaches are increasingly exploited for objective interpretation of cardiovascular proteomics studies. Here, we highlight network systems proteomic analysis pipelines for integration and biological interpretation through protein cartography, ontological categorization, pathway and functional enrichment and complex network analysis. PMID:22896016
Broadband and scalable optical coupling for silicon photonics using polymer waveguides
NASA Astrophysics Data System (ADS)
La Porta, Antonio; Weiss, Jonas; Dangel, Roger; Jubin, Daniel; Meier, Norbert; Horst, Folkert; Offrein, Bert Jan
2018-04-01
We present optical coupling schemes for silicon integrated photonics circuits that account for the challenges in large-scale data processing systems such as those used for emerging big data workloads. Our waveguide-based approach allows optimal exploitation of the on-chip optical feature size and of chip and package real estate. It further scales well to high numbers of channels and is compatible with state-of-the-art flip-chip die packaging. We demonstrate silicon waveguide to polymer waveguide coupling losses below 1.5 dB for both the O- and C-bands with a polarisation dependent loss of <1 dB. Over 100 optical silicon waveguide to polymer waveguide interfaces were assembled within a single alignment step, resulting in a physical I/O channel density of up to 13 waveguides per millimetre along the chip-edge, with an average coupling loss of below 3.4 dB measured at 1310 nm.
A new event builder for CMS Run II
Albertsson, K.; Andre, J-M; Andronidis, A.; ...
2015-12-23
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.
Control of finite critical behaviour in a small-scale social system
NASA Astrophysics Data System (ADS)
Daniels, Bryan C.; Krakauer, David C.; Flack, Jessica C.
2017-02-01
Many adaptive systems sit near a tipping or critical point. For systems near a critical point small changes to component behaviour can induce large-scale changes in aggregate structure and function. Criticality can be adaptive when the environment is changing, but entails reduced robustness through sensitivity. This tradeoff can be resolved when criticality can be tuned. We address the control of finite measures of criticality using data on fight sizes from an animal society model system (Macaca nemestrina, n=48). We find that a heterogeneous, socially organized system, like homogeneous, spatial systems (flocks and schools), sits near a critical point; the contributions individuals make to collective phenomena can be quantified; there is heterogeneity in these contributions; and distance from the critical point (DFC) can be controlled through biologically plausible mechanisms exploiting heterogeneity. We propose two alternative hypotheses for why a system decreases the distance from the critical point.
Scalable Static and Dynamic Community Detection Using Grappolo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halappanavar, Mahantesh; Lu, Hao; Kalyanaraman, Anantharaman
Graph clustering, popularly known as community detection, is a fundamental kernel for several applications of relevance to the Defense Advanced Research Projects Agency’s (DARPA) Hierarchical Identify Verify Exploit (HIVE) Program. Clusters or communities represent natural divisions within a network that are densely connected within a cluster and sparsely connected to the rest of the network. The need to compute clustering on large scale data necessitates the development of efficient algorithms that can exploit modern architectures that are fundamentally parallel in nature. However, due to their irregular and inherently sequential nature, many of the current algorithms for community detection are challenging to parallelize. In response to the HIVE Graph Challenge, we present several parallelization heuristics for fast community detection using the Louvain method as the serial template. We implement all the heuristics in a software library called Grappolo. Using the inputs from the HIVE Challenge, we demonstrate superior performance and high quality solutions based on four parallelization heuristics. We use Grappolo on static graphs as the first step towards community detection on streaming graphs.
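As a point of reference for the serial template mentioned above, the sketch below runs the Louvain method on a synthetic planted-partition graph using NetworkX (2.8 or later); Grappolo's parallelization heuristics are not reproduced here.

```python
import networkx as nx

# four planted communities of 50 nodes each, dense within and sparse between
G = nx.planted_partition_graph(l=4, k=50, p_in=0.3, p_out=0.01, seed=42)

communities = nx.community.louvain_communities(G, seed=42)   # serial Louvain method
print(f"found {len(communities)} communities")
print(f"modularity = {nx.community.modularity(G, communities):.3f}")
```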
Micro- and nanotechnologies in plankton research
NASA Astrophysics Data System (ADS)
Mohammed, Javeed Shaikh
2015-05-01
A better understanding of the vast range of plankton and their interactions with the marine environment would allow prediction of their large-scale impact on the marine ecosystem, and provide in-depth knowledge on pollution and climate change. Numerous technologies, especially lab-on-a-chip microsystems, are being used to this end. Marine biofouling is a global issue with significant economic consequences. Ecofriendly polymer nanotechnologies are being developed to combat marine biofouling. Furthermore, nanomaterials hold great potential for bioremediation and biofuel production. Excellent reviews covering focused topics in plankton research exist, with only a handful discussing both micro- and nanotechnologies. This work reviews both micro- and nanotechnologies applied to broad-ranging plankton research topics including flow cytometry, chemotaxis/toxicity assays, biofilm formation, marine antifouling/fouling-release surfaces and coatings, green energy, green nanomaterials, microalgae immobilization, and bioremediation. It is anticipated that developments in plankton research will see engineered exploitation of micro- and nanotechnologies. The current review is therefore intended to promote micro-/nanotechnology researchers to team up with limnologists/oceanographers, and develop novel strategies for understanding and green exploitation of the complex marine ecosystem.
Wavelet-enabled progressive data Access and Storage Protocol (WASP)
NASA Astrophysics Data System (ADS)
Clyne, J.; Frank, L.; Lesperance, T.; Norton, A.
2015-12-01
Current practices for storing numerical simulation outputs hail from an era when the disparity between compute and I/O performance was not as great as it is today. The memory contents for every sample, computed at every grid point location, are simply saved at some prescribed temporal frequency. Though straightforward, this approach fails to take advantage of the coherency in neighboring grid points that invariably exists in numerical solutions to mathematical models. Exploiting such coherence is essential to digital multimedia; DVD-Video, digital cameras, streaming movies and audio are all possible today because of transform-based compression schemes that make substantial reductions in data possible by taking advantage of the strong correlation between adjacent samples in both space and time. Such methods can also be exploited to enable progressive data refinement in a manner akin to that used in ubiquitous digital mapping applications: views from far away are shown in coarsened detail to provide context, and can be progressively refined as the user zooms in on a localized region of interest. The NSF funded WASP project aims to provide a common, NetCDF-compatible software framework for supporting wavelet-based, multi-scale, progressive data, enabling interactive exploration of large data sets for the geoscience communities. This presentation will provide an overview of this work in progress to develop community cyber-infrastructure for the efficient analysis of very large data sets.
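A minimal sketch of the progressive-refinement idea, assuming PyWavelets as the transform library and a synthetic 2-D field: the full decomposition is stored once, and coarser approximations are rebuilt by dropping the finer detail bands. WASP's NetCDF-compatible format and API are not shown.

```python
import numpy as np
import pywt

field = np.random.default_rng(0).random((256, 256))      # stand-in for a simulation field
coeffs = pywt.wavedec2(field, wavelet="db4", level=4)     # store the full decomposition once

def reconstruct(coeffs, keep_levels, wavelet="db4"):
    """Rebuild from the approximation plus the `keep_levels` coarsest detail bands."""
    partial = [coeffs[0]]
    for i, detail in enumerate(coeffs[1:], start=1):
        if i <= keep_levels:
            partial.append(detail)
        else:
            partial.append(tuple(np.zeros_like(d) for d in detail))
    return pywt.waverec2(partial, wavelet=wavelet)

for levels in range(5):                                   # 0 = coarsest view, 4 = full detail
    approx = reconstruct(coeffs, levels)[:field.shape[0], :field.shape[1]]
    print(f"detail levels kept: {levels}, max abs error: {np.abs(approx - field).max():.4f}")
```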
magHD: a new approach to multi-dimensional data storage, analysis, display and exploitation
NASA Astrophysics Data System (ADS)
Angleraud, Christophe
2014-06-01
The ever-increasing amount of data and processing capability - following the well-known Moore's law - is challenging the way scientists and engineers currently exploit large datasets. Scientific visualization tools, although quite powerful, are often too generic and provide abstract views of phenomena, thus preventing cross-discipline fertilization. On the other hand, Geographic Information Systems allow visually appealing maps to be built, but they often become cluttered as more layers are added. Moreover, the introduction of time as a fourth analysis dimension, allowing analysis of time-dependent phenomena such as meteorological or climate models, is encouraging real-time data exploration techniques in which spatio-temporal points of interest are detected through the human brain's integration of moving images. Magellium has been involved in high-performance image processing chains for satellite imagery, as well as scientific signal analysis and geographic information management, since its creation (2003). We believe that recent work on big data, GPU and peer-to-peer collaborative processing can open a new breakthrough in data analysis and display that will serve many new applications in collaborative scientific computing, and environment mapping and understanding. The magHD (for Magellium Hyper-Dimension) project aims at developing software solutions that bring highly interactive tools for complex dataset analysis and exploration to commodity hardware, targeting small to medium-scale clusters with expansion capabilities to large cloud-based clusters.
A framework for activity detection in wide-area motion imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, Reid B; Ruggiero, Christy E; Morrison, Jack D
2009-01-01
Wide-area persistent imaging systems are becoming increasingly cost-effective, and large areas of the earth can now be imaged at relatively high frame rates (1-2 fps). The efficient exploitation of the large geo-spatial-temporal datasets produced by these systems poses significant technical challenges for image and video analysis and data mining. In recent years significant progress has been made on stabilization, moving object detection and tracking, and automated systems now generate hundreds to thousands of vehicle tracks from raw data with little human intervention. However, the tracking performance at this scale is unreliable and the average track length is much smaller than the average vehicle route. This is a limiting factor for applications which depend heavily on track identity, i.e. tracking vehicles from their points of origin to their final destination. In this paper we propose and investigate a framework for wide-area motion imagery (WAMI) exploitation that minimizes the dependence on track identity. In its current form this framework takes noisy, incomplete moving object detection tracks as input, and produces a small set of activities (e.g. multi-vehicle meetings) as output. The framework can be used to focus and direct human users and additional computation, and suggests a path towards high-level content extraction by learning from the human-in-the-loop.
The Promiscuity of β-Strand Pairing Allows for Rational Design of β-Sheet Face Inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makabe, Koki; Koide, Shohei
2009-06-17
Recent studies suggest the dominant role of main-chain H-bond formation in specifying β-sheet topology. Its essentially sequence-independent nature implies a large degree of freedom in designing β-sheet-based nanomaterials. Here we show rational design of β-sheet face inversions by incremental deletions of β-strands from the single-layer β-sheet of Borrelia outer surface protein A. We show that a β-sheet structure can be maintained when a large number of native contacts are removed and that one can design large-scale conformational transitions of a β-sheet such as face inversion by exploiting the promiscuity of strand-strand interactions. High-resolution X-ray crystal structures confirmed the success of the design and supported the importance of main-chain H-bonds in determining β-sheet topology. This work suggests a simple but effective strategy for designing and controlling nanomaterials based on β-rich peptide self-assemblies.
A new tool called DISSECT for analysing large genomic data sets using a Big Data approach
Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert
2015-01-01
Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is a new and freely available software that is able to exploit the distributed-memory parallel computational architectures of compute clusters, to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010
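As a small-scale illustration of mixed-linear-model prediction, the sketch below performs GBLUP-style kernel ridge prediction with a genomic relationship matrix on mock genotypes. It is a stand-in only: the variance ratio is assumed rather than estimated by REML, and none of DISSECT's distributed-memory machinery is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 2000                                   # individuals, SNPs (mock sizes)
geno = rng.binomial(2, 0.3, size=(n, m)).astype(float)
effects = rng.normal(0, 0.05, size=m)
pheno = geno @ effects + rng.normal(0, 1.0, size=n)

Z = (geno - geno.mean(axis=0)) / geno.std(axis=0)  # standardized genotypes
G = Z @ Z.T / m                                    # genomic relationship matrix
lam = 1.0                                          # ratio sigma_e^2 / sigma_g^2 (assumed, not estimated)

train, test = np.arange(300), np.arange(300, n)
y_centered = pheno[train] - pheno[train].mean()
alpha = np.linalg.solve(G[np.ix_(train, train)] + lam * np.eye(len(train)), y_centered)
pred = pheno[train].mean() + G[np.ix_(test, train)] @ alpha
print("correlation(pred, true):", np.corrcoef(pred, pheno[test])[0, 1].round(3))
```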
Assessment Methods of Groundwater Overdraft Area and Its Application
NASA Astrophysics Data System (ADS)
Dong, Yanan; Xing, Liting; Zhang, Xinhui; Cao, Qianqian; Lan, Xiaoxun
2018-05-01
Groundwater is an important source of water, and long-term heavy demand has led to its over-exploitation. Over-exploitation causes many environmental and geological problems. This paper explores the concept of the over-exploitation area, summarizes the natural and social attributes of over-exploitation areas, and expounds their assessment methods, including single-factor evaluation, multi-factor system analysis and numerical methods. The different methods are also compared and analyzed. Then, taking Northern Weifang as an example, the paper illustrates the practicality of the assessment methods.
Scene-Aware Adaptive Updating for Visual Tracking via Correlation Filters
Zhang, Sirou; Qiao, Xiaoya
2017-01-01
In recent years, visual object tracking has been widely used in military guidance, human-computer interaction, road traffic, scene monitoring and many other fields. Tracking algorithms based on correlation filters have shown good performance in terms of accuracy and tracking speed. However, their performance is not satisfactory in scenes with scale variation, deformation, and occlusion. In this paper, we propose a scene-aware adaptive updating mechanism for visual tracking via a kernel correlation filter (KCF). First, a low-complexity scale estimation method is presented, in which the corresponding weights at five scales are employed to determine the final target scale. Then, an adaptive updating mechanism is presented based on scene classification. We classify the video scenes into four categories by video content analysis. According to the target scene, we exploit the adaptive updating mechanism to update the kernel correlation filter to improve the robustness of the tracker, especially in scenes with scale variation, deformation, and occlusion. We evaluate our tracker on the CVPR2013 benchmark. The experimental results obtained with the proposed algorithm are improved by 33.3%, 15%, 6%, 21.9% and 19.8% compared to those of the KCF tracker on scenes with scale variation, partial or long-term large-area occlusion, deformation, fast motion and out-of-view targets. PMID:29140311
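The core operation that KCF-type trackers build on can be sketched with a single-channel correlation filter trained in the Fourier domain (MOSSE-style), as below; the kernel trick, the five-scale search and the scene-aware updating of the proposed tracker are not reproduced, and the patch is synthetic.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peak at the patch centre."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2.0 * sigma**2))

rng = np.random.default_rng(0)
patch = rng.random((64, 64))                         # training patch centred on the target
y = gaussian_response(patch.shape)

X, Y = np.fft.fft2(patch), np.fft.fft2(y)
H_conj = (Y * np.conj(X)) / (X * np.conj(X) + 1e-2)  # regularized filter in the Fourier domain

shifted = np.roll(patch, shift=(5, -3), axis=(0, 1))          # target moved in the next frame
response = np.real(np.fft.ifft2(np.fft.fft2(shifted) * H_conj))
dy, dx = np.unravel_index(response.argmax(), response.shape)
print("response peak at:", dy, dx)                   # displaced from (32, 32) by the shift
```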
Interaction of mining activities and aquatic environment: A review from Greek mine sites.
NASA Astrophysics Data System (ADS)
Vasileiou, Eleni; Kallioras, Andreas
2016-04-01
In Greece, a significant number of mineral and ore deposits have been recorded, accompanied by large industrial interest and a long mining history. Today many active and/or abandoned mine sites are scattered across the country, while mining activities take place at different sites to exploit various deposits (clay, limestone, slate, gypsum, kaolin, mixed sulphide ores (lead, zinc), olivine, pozzolan, quartz, lignite, nickel, magnesite, aluminum, bauxite, gold, marbles, etc.). The most prominent recent ones are: (i) the lignite exploitation extending over the areas of Ptolemais (Western Macedonia) and Megalopolis (Central Peloponnese); and (ii) the major bauxite deposits located in central Greece within the Parnassos-Ghiona geotectonic zone and on Euboea Island. In the latter area, significant magnesite and mixed sulphide ores were also exploited. Centuries of intensive mining exploitation and metallurgical treatment of lead-silver deposits in Greece have also left significant abandoned sites, such as the one in Lavrion. Mining activities in Lavrion were initiated in ancient times and continued until the 1980s, resulting in the production of significant waste stockpiles deposited in the area, which are critical for the local water resources. In many mining sites, environmental pressures on the aquatic environment are also recorded after mine closure, as the surface waters flow through waste dump areas and contaminated soils. This paper aims at the geospatial visualization of the mining activities in Greece, in connection with their negative (surface- and/or ground-water pollution; overpumping due to extensive dewatering practices) or positive (enhanced groundwater recharge; pit lakes; improvement of the water budget at the catchment scale) impacts on local water resources.
The compression–error trade-off for large gridded data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silver, Jeremy D.; Zender, Charles S.
The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data were strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.
The compression–error trade-off for large gridded data sets
Silver, Jeremy D.; Zender, Charles S.
2017-01-27
The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data were strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.
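A minimal numpy sketch of the layer-packing idea: one scale/offset pair per vertical layer, packed to 16-bit integers. The actual netCDF attribute layout and the bit-grooming comparison of the study are not reproduced; the field is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
nlev, ny, nx = 30, 90, 180
# mock field whose magnitude varies strongly with the vertical level
data = rng.random((nlev, ny, nx)) * np.logspace(-3, 3, nlev)[:, None, None]

def pack_by_layer(arr, nbits=16):
    """Linear int packing with a separate (scale, offset) pair per layer."""
    lo = arr.min(axis=(1, 2), keepdims=True)
    hi = arr.max(axis=(1, 2), keepdims=True)
    scale = (hi - lo) / (2**nbits - 1)
    packed = np.round((arr - lo) / scale).astype(np.uint16)
    return packed, scale, lo

def unpack(packed, scale, offset):
    return packed * scale + offset

packed, scale, offset = pack_by_layer(data)
restored = unpack(packed, scale, offset)
rel_err = np.abs(restored - data) / np.abs(data)
print("max relative error with per-layer packing:", rel_err.max())
```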
NASA Astrophysics Data System (ADS)
Di Tullio, M.; Nocchi, F.; Camplani, A.; Emanuelli, N.; Nascetti, A.; Crespi, M.
2018-04-01
Glaciers are a natural global resource and one of the principal climate change indicators at global and local scales, being influenced by changes in temperature and snow precipitation. Among the parameters used for glacier monitoring, surface velocity is a key element, since it is connected to glacier changes (mass balance, hydro balance, glacier stability, landscape erosion). The leading idea of this work is to continuously retrieve glacier surface velocity using free ESA Sentinel-1 SAR imagery and exploiting the potential of the Google Earth Engine (GEE) platform. GEE has recently been released by Google as a platform for petabyte-scale scientific analysis and visualization of geospatial datasets. The SAR offset-tracking algorithm developed at the Geodesy and Geomatics Division of the University of Rome La Sapienza has been integrated into a cloud-based platform that automatically processes large stacks of Sentinel-1 data to retrieve glacier surface velocity field time series. We processed about 600 Sentinel-1 image pairs to obtain a continuous time series of velocity field measurements over 3 years, from January 2015 to January 2018, for two large glaciers located in the Northern Patagonian Ice Field (NPIF), the San Rafael and San Quintin glaciers. Several results for these glaciers, also validated against already available and renowned software (i.e., ESA SNAP, CIAS) and against optical sensor measurements (i.e., LANDSAT 8), highlight the potential of Big Data analysis to automatically monitor glacier surface velocity fields at global scale, exploiting the synergy between GEE and Sentinel-1 imagery.
SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.
Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga
2013-01-01
High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.
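The algebraic-curve features themselves are not reproduced here; as a hedged sketch of the classification stage only, generic patch statistics stand in for the paper's shape and texture features and feed a random forest (scikit-learn assumed available; labels and data below are placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(patch):
    """Very crude stand-in for the paper's algebraic-curve + texture features."""
    gy, gx = np.gradient(patch.astype(float))
    return [patch.mean(), patch.std(), np.abs(gx).mean(), np.abs(gy).mean()]

# X: features of labelled training patches, y: 1 = mitochondrion, 0 = background.
rng = np.random.default_rng(0)
patches = rng.random((200, 32, 32))
y = rng.integers(0, 2, 200)                      # placeholder labels
X = np.array([patch_features(p) for p in patches])

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
prob_mito = clf.predict_proba(X)[:, 1]           # per-patch mitochondria probability
```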
Bayesian sparse channel estimation
NASA Astrophysics Data System (ADS)
Chen, Chulong; Zoltowski, Michael D.
2012-05-01
In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high data rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially for wideband and ultra-wideband systems. In order to exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make the compressed channel estimation more feasible for practical applications, it is investigated from a perspective of Bayesian learning. Under the Bayesian learning framework, the large-scale compressed sensing problem, as well as large time delay for the estimation of the doubly selective channel over multiple consecutive OFDM symbols, can be avoided. Simulation studies show a significant improvement in channel estimation MSE and less computing time compared to the conventional compressed channel estimation techniques.
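The Bayesian estimator of the article is not reproduced here; the sketch below only illustrates the underlying compressed-sensing setup, recovering a sparse channel impulse response from a few pilot tones with a plain orthogonal matching pursuit (all sizes and indices are illustrative):

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse channel h from pilot observations y = A h + noise
    using orthogonal matching pursuit (illustrative, not the Bayesian method)."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    h = np.zeros(A.shape[1], dtype=complex)
    h[support] = coef
    return h

# Example: 16 pilot tones observing a 64-tap channel with 3 non-zero taps.
n_taps, n_pilots, rng = 64, 16, np.random.default_rng(1)
pilot_idx = rng.choice(n_taps, n_pilots, replace=False)
F = np.exp(-2j * np.pi * np.outer(pilot_idx, np.arange(n_taps)) / n_taps)  # partial DFT
h_true = np.zeros(n_taps, dtype=complex)
h_true[[3, 17, 40]] = rng.standard_normal(3)
y = F @ h_true + 0.01 * rng.standard_normal(n_pilots)
print(np.linalg.norm(omp(F, y, 3) - h_true))
```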
A sodium-ion battery exploiting layered oxide cathode, graphite anode and glyme-based electrolyte
NASA Astrophysics Data System (ADS)
Hasa, Ivana; Dou, Xinwei; Buchholz, Daniel; Shao-Horn, Yang; Hassoun, Jusef; Passerini, Stefano; Scrosati, Bruno
2016-04-01
Room-temperature rechargeable sodium-ion batteries (SIBs), in view of the large availability and low cost of sodium raw materials, represent an important class of electrochemical systems suitable for application in large-scale energy storage. In this work, we report a novel, high power SIB formed by coupling the layered P2-Na0.7CoO2 cathode with the graphite anode in an optimized ether-based electrolyte. The study firstly addresses the electrochemical optimization of the two electrode materials and then the realization and characterization of the novel SIB based on their combination. The cell represents an original sodium rocking-chair battery obtained by combining the intercalation/de-intercalation processes of sodium within the cathode and anode layers. We show herein that this battery, favored by a suitable electrode/electrolyte combination, offers unique performance in terms of cycle life, efficiency and, especially, power capability.
Dredging displaces bottlenose dolphins from an urbanised foraging patch.
Pirotta, Enrico; Laesser, Barbara Eva; Hardaker, Andrea; Riddoch, Nicholas; Marcoux, Marianne; Lusseau, David
2013-09-15
The exponential growth of the human population and its increasing industrial development often involve large scale modifications of the environment. In the marine context, coastal urbanisation and harbour expansion to accommodate the rising levels of shipping and offshore energy exploitation require dredging to modify the shoreline and sea floor. While the consequences of dredging on invertebrates and fish are relatively well documented, no study has robustly tested the effects on large marine vertebrates. We monitored the attendance of common bottlenose dolphins (Tursiops truncatus) to a recently established urbanised foraging patch, Aberdeen harbour (Scotland), and modelled the effect of dredging operations on site usage. We found that higher intensities of dredging caused the dolphins to spend less time in the harbour, despite high baseline levels of disturbance and the importance of the area as a foraging patch.
Phase magnification by two-axis countertwisting for detection-noise robust interferometry
NASA Astrophysics Data System (ADS)
Anders, Fabian; Pezzè, Luca; Smerzi, Augusto; Klempt, Carsten
2018-04-01
Entanglement-enhanced atom interferometry has the potential of surpassing the standard quantum limit and eventually reaching the ultimate Heisenberg bound. The experimental progress is, however, hindered by various technical noise sources, including the noise in the detection of the output quantum state. The influence of detection noise can be largely overcome by exploiting echo schemes, where the entanglement-generating interaction is repeated after the interferometer sequence. Here, we propose an echo protocol that uses two-axis countertwisting as the main nonlinear interaction. We demonstrate that the scheme is robust to detection noise and its performance is superior compared to the already demonstrated one-axis twisting echo scheme. In particular, the sensitivity maintains the Heisenberg scaling in the limit of a large particle number. Finally, we show that the protocol can be implemented with spinor Bose-Einstein condensates. Our results thus outline a realistic approach to mitigate the detection noise in quantum-enhanced interferometry.
Changing skewness: an early warning signal of regime shifts in ecosystems.
Guttal, Vishwesha; Jayaprakash, Ciriyam
2008-05-01
Empirical evidence for large-scale abrupt changes in ecosystems such as lakes and vegetation of semi-arid regions is growing. Such changes, called regime shifts, can lead to degradation of ecological services. We study simple ecological models that show a catastrophic transition as a control parameter is varied and propose a novel early warning signal that exploits two ubiquitous features of ecological systems: nonlinearity and large external fluctuations. Either reduced resilience or increased external fluctuations can tip ecosystems to an alternative stable state. It is shown that a change in the asymmetry of the distribution of time series data, quantified by changing skewness, is a model-independent and reliable early warning signal for both routes to regime shifts. Furthermore, using model simulations that mimic field measurements and a simple analysis of real data from abrupt climate change in the Sahara, we study the feasibility of skewness calculations using data available from routine monitoring.
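A hedged sketch of the proposed indicator: compute the skewness of the state variable in a sliding window and watch for a sustained trend (window length and the toy data below are illustrative):

```python
import numpy as np
from scipy.stats import skew

def rolling_skewness(x, window):
    """Skewness of x in a sliding window; a sustained trend in this series is
    the kind of early warning signal discussed above."""
    return np.array([skew(x[i:i + window]) for i in range(len(x) - window + 1)])

# Toy example: noise whose distribution becomes increasingly asymmetric over time.
rng = np.random.default_rng(0)
t = np.arange(2000)
x = rng.gamma(shape=np.linspace(20, 2, t.size), scale=1.0)  # skewness grows with time
print(rolling_skewness(x, window=200)[[0, -1]])             # early vs late skewness
```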
Roccuzzo, Sebastiana; Beckerman, Andrew P; Pandhal, Jagroop
2016-12-01
Open raceway ponds are regarded as the most economically viable option for large-scale cultivation of microalgae for low to mid-value bio-products, such as biodiesel. However, improvements are required including reducing the costs associated with harvesting biomass. There is now a growing interest in exploiting natural ecological processes within biotechnology. We review how chemical cues produced by algal grazers induce colony formation in algal cells, which subsequently leads to their sedimentation. A statistical meta-analysis of more than 80 studies reveals that Daphnia grazers can induce high levels of colony formation and sedimentation in Scenedesmus obliquus and that these natural, infochemical induced sedimentation rates are comparable to using commercial chemical equivalents. These data suggest that natural ecological interactions can be co-opted in biotechnology as part of a promising, low energy and clean harvesting method for use in large raceway systems.
Simulation Based Exploration of Critical Zone Dynamics in Intensively Managed Landscapes
NASA Astrophysics Data System (ADS)
Kumar, P.
2017-12-01
The advent of high-resolution measurements of topographic and (vertical) vegetation features using aerial LiDAR is enabling us to resolve micro-scale (~1 m) landscape structural characteristics over large areas. Availability of hyperspectral measurements is further augmenting these LiDAR data by enabling the biogeochemical characterization of vegetation and soils at unprecedented spatial resolutions (~1-10 m). Such data have opened up novel opportunities for modeling Critical Zone processes and exploring questions that were not possible before. We show how an integrated 3-D model at 1 m grid resolution can enable us to resolve micro-topographic and ecological dynamics and their control on hydrologic and biogeochemical processes over large areas. We address the computational challenge of such detailed modeling by exploiting hybrid CPU and GPU computing technologies. We show results of moisture, biogeochemical, and vegetation dynamics from studies in the Critical Zone Observatory for Intensively Managed Landscapes (IMLCZO) in the Midwestern United States.
Unstructured-grid coastal ocean modelling in Southern Adriatic and Northern Ionian Seas
NASA Astrophysics Data System (ADS)
Federico, Ivan; Pinardi, Nadia; Coppini, Giovanni; Oddo, Paolo
2016-04-01
The Southern Adriatic Northern Ionian coastal Forecasting System (SANIFS) is a short-term forecasting system based on an unstructured-grid approach. The model component is built on the SHYFEM finite-element three-dimensional hydrodynamic model. The operational chain exploits a downscaling approach starting from the Mediterranean oceanographic-scale model MFS (Mediterranean Forecasting System, operated by INGV). The implementation set-up has been designed to provide accurate hydrodynamics and active tracer processes in the coastal waters of South-Eastern Italy (Apulia, Basilicata and Calabria regions), where the model is characterized by a variable resolution in the range of 50-500 m. The horizontal resolution is also high in open-sea areas, where the element size is approximately 3 km. The model is forced: (i) at the lateral open boundaries through a full nesting strategy directly with the MFS (temperature, salinity, non-tidal sea surface height and currents) and OTPS (tidal forcing) fields; (ii) at the surface through two alternative atmospheric forcing datasets (ECMWF and COSMOME) via MFS bulk formulae. Given that the coastal fields are driven by a combination of both local/coastal and deep-ocean forcings propagating along the shelf, the performance of SANIFS was verified first (i) at the large and shelf-coastal scales by comparison with a large-scale CTD survey and then (ii) at the coastal-harbour scale by comparison with CTD, ADCP and tide gauge data. Sensitivity tests were performed on initialization conditions (mainly focused on spin-up procedures) and on surface boundary conditions by assessing the reliability of two alternative datasets at different horizontal resolutions (12.5 and 7 km). The present work highlights how downscaling can improve the simulation of the flow field, going from typical open-ocean scales of the order of several kilometres to coastal (and harbour) scales of tens to hundreds of metres.
Strontium isotopes delineate fine-scale natal origins and migration histories of Pacific salmon
Brennan, Sean R.; Zimmerman, Christian E.; Fernandez, Diego P.; Cerling, Thure E.; McPhee, Megan V.; Wooller, Matthew J.
2015-01-01
Highly migratory organisms present major challenges to conservation efforts. This is especially true for exploited anadromous fish species, which exhibit long-range dispersals from natal sites, complex population structures, and extensive mixing of distinct populations during exploitation. By tracing the migratory histories of individual Chinook salmon caught in fisheries using strontium isotopes, we determined the relative production of natal habitats at fine spatial scales and different life histories. Although strontium isotopes have been widely used in provenance research, we present a new robust framework to simultaneously assess natal sources and migrations of individuals within fishery harvests through time. Our results pave the way for investigating how fine-scale habitat production and life histories of salmon respond to perturbations—providing crucial insights for conservation.
Gore, Meredith L.; Lute, Michelle L.; Ratsimbazafy, Jonah H.; Rajaonson, Andry
2016-01-01
Environmental insecurity is a source and outcome of biodiversity declines and social conflict. One challenge to scaling insecurity reduction policies is that empirical evidence about local attitudes is overwhelmingly missing. We set three objectives: determine how local people rank risk associated with different sources of environmental insecurity; assess perceptions of environmental insecurity, biodiversity exploitation, myths of nature and risk management preferences; and explore relationships between perceptions and biodiversity exploitation. We conducted interviews (N = 88) with residents of Madagascar’s Torotorofotsy Protected Area, 2014. Risk perceptions had a moderate effect on perceptions of environmental insecurity. We found no effects of environmental insecurity on biodiversity exploitation. Results offer one if not the first exploration of local perceptions of illegal biodiversity exploitation and environmental security. Local people’s perception of risk seriousness associated with illegal biodiversity exploitation such as lemur hunting (low overall) may not reflect perceptions of policy-makers (considered to be high). Discord is a key entry point for attention. PMID:27082106
Experimental application of OMA solutions on the model of industrial structure
NASA Astrophysics Data System (ADS)
Mironov, A.; Mironovs, D.
2017-10-01
Maintaining the reliability of industrial structures is important and sometimes even vital. High quality control during production and structural health monitoring (SHM) during exploitation ensure the reliable functioning of large, massive and remote structures, such as wind generators, pipelines, power line posts, etc. This paper introduces a complex of technological and methodical solutions for SHM and diagnostics of industrial structures, including those that are actuated by periodic forces. The solutions were verified on a scaled wind generator model with an integrated system of piezo-film deformation sensors. Simultaneous and multi-patch Operational Modal Analysis (OMA) approaches were implemented as methodical means for structural diagnostics and monitoring. Specially designed data processing algorithms provide objective evaluation of structural state modification.
Exploiting Locality in Quantum Computation for Quantum Chemistry.
McClean, Jarrod R; Babbush, Ryan; Love, Peter J; Aspuru-Guzik, Alán
2014-12-18
Accurate prediction of chemical and material properties from first-principles quantum chemistry is a challenging task on traditional computers. Recent developments in quantum computation offer a route toward highly accurate solutions with polynomial cost; however, this solution still carries a large overhead. In this Perspective, we aim to bring together known results about the locality of physical interactions from quantum chemistry with ideas from quantum computation. We show that the utilization of spatial locality combined with the Bravyi-Kitaev transformation offers an improvement in the scaling of known quantum algorithms for quantum chemistry and provides numerical examples to help illustrate this point. We combine these developments to improve the outlook for the future of quantum chemistry on quantum computers.
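For readers who want to experiment, a minimal sketch using the OpenFermion package (this assumes OpenFermion exposes `FermionOperator`, `jordan_wigner` and `bravyi_kitaev` at the top level, as in recent releases) contrasts the qubit-operator locality of the two encodings for a single one-body term:

```python
from openfermion import FermionOperator, jordan_wigner, bravyi_kitaev

# A one-body excitation term a_9^dagger a_3 on a small spin-orbital register.
term = FermionOperator('9^ 3')

jw = jordan_wigner(term)    # Pauli strings act on every qubit between 3 and 9
bk = bravyi_kitaev(term)    # Pauli strings touch only O(log N) qubits

print('Jordan-Wigner terms:', jw)
print('Bravyi-Kitaev terms:', bk)
```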
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Cooper, Robert; Marzullo, Keith
1990-01-01
The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High-performance multicast, large-scale applications, and wide-area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project addresses distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.
Challenges and opportunities for improved understanding of regional climate dynamics
NASA Astrophysics Data System (ADS)
Collins, Matthew; Minobe, Shoshiro; Barreiro, Marcelo; Bordoni, Simona; Kaspi, Yohai; Kuwano-Yoshida, Akira; Keenlyside, Noel; Manzini, Elisa; O'Reilly, Christopher H.; Sutton, Rowan; Xie, Shang-Ping; Zolina, Olga
2018-01-01
Dynamical processes in the atmosphere and ocean are central to determining the large-scale drivers of regional climate change, yet their predictive understanding is poor. Here, we identify three frontline challenges in climate dynamics where significant progress can be made to inform adaptation: response of storms, blocks and jet streams to external forcing; basin-to-basin and tropical-extratropical teleconnections; and the development of non-linear predictive theory. We highlight opportunities and techniques for making immediate progress in these areas, which critically involve the development of high-resolution coupled model simulations, partial coupling or pacemaker experiments, as well as the development and use of dynamical metrics and exploitation of hierarchies of models.
Astrophysical data analysis with information field theory
NASA Astrophysics Data System (ADS)
Enßlin, Torsten
2014-12-01
Non-parametric imaging and data analysis in astrophysics and cosmology can be addressed by information field theory (IFT), a means of Bayesian, data-based inference on spatially distributed signal fields. IFT is a statistical field theory, which permits the construction of optimal signal recovery algorithms. It exploits spatial correlations of the signal fields even for nonlinear and non-Gaussian signal inference problems. The alleviation of a perception threshold for recovering signals of unknown correlation structure by using IFT will be discussed in particular, as well as a novel improvement on instrumental self-calibration schemes. IFT can be applied to many areas. Here, applications in cosmology (cosmic microwave background, large-scale structure) and astrophysics (galactic magnetism, radio interferometry) are presented.
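The simplest concrete instance of IFT is the Wiener filter, the posterior mean m = (S^{-1} + R^T N^{-1} R)^{-1} R^T N^{-1} d for Gaussian signal and noise; a toy numpy sketch with assumed covariances (not an IFT-library implementation) is:

```python
import numpy as np

# Minimal real-space Wiener filter, the Gaussian/linear special case of IFT.
rng = np.random.default_rng(2)
n = 200
x = np.arange(n)
S = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 10.0 ** 2) + 1e-6 * np.eye(n)  # prior covariance
R = np.eye(n)[::4]                                        # instrument responds to every 4th pixel
N = 0.05 * np.eye(R.shape[0])                             # noise covariance

signal = np.linalg.cholesky(S) @ rng.standard_normal(n)   # draw a correlated signal
data = R @ signal + rng.multivariate_normal(np.zeros(R.shape[0]), N)

D_inv = np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R     # inverse posterior covariance
m = np.linalg.solve(D_inv, R.T @ np.linalg.inv(N) @ data) # posterior mean (Wiener filter)
print(float(np.mean((m - signal) ** 2)))                  # reconstruction error
```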
On-chip synthesis of circularly polarized emission of light with integrated photonic circuits.
He, Li; Li, Mo
2014-05-01
The helicity of circularly polarized (CP) light plays an important role in the light-matter interaction in magnetic and quantum material systems. Exploiting CP light in integrated photonic circuits could lead to on-chip integration of novel optical helicity-dependent devices for applications ranging from spintronics to quantum optics. In this Letter, we demonstrate a silicon photonic circuit coupled with a 2D grating emitter operating at a telecom wavelength to synthesize vertically emitting CP light from a quasi-TE waveguide mode. The handedness of the emitted circularly polarized light can be thermally controlled with an integrated microheater. The compact device footprint enables a small beam diameter, which is desirable for large-scale integration.
Anderson-Schmidt, Heike; Adler, Lothar; Aly, Chadiga; Anghelescu, Ion-George; Bauer, Michael; Baumgärtner, Jessica; Becker, Joachim; Bianco, Roswitha; Becker, Thomas; Bitter, Cosima; Bönsch, Dominikus; Buckow, Karoline; Budde, Monika; Bührig, Martin; Deckert, Jürgen; Demiroglu, Sara Y; Dietrich, Detlef; Dümpelmann, Michael; Engelhardt, Uta; Fallgatter, Andreas J; Feldhaus, Daniel; Figge, Christian; Folkerts, Here; Franz, Michael; Gade, Katrin; Gaebel, Wolfgang; Grabe, Hans-Jörgen; Gruber, Oliver; Gullatz, Verena; Gusky, Linda; Heilbronner, Urs; Helbing, Krister; Hegerl, Ulrich; Heinz, Andreas; Hensch, Tilman; Hiemke, Christoph; Jäger, Markus; Jahn-Brodmann, Anke; Juckel, Georg; Kandulski, Franz; Kaschka, Wolfgang P; Kircher, Tilo; Koller, Manfred; Konrad, Carsten; Kornhuber, Johannes; Krause, Marina; Krug, Axel; Lee, Mahsa; Leweke, Markus; Lieb, Klaus; Mammes, Mechthild; Meyer-Lindenberg, Andreas; Mühlbacher, Moritz; Müller, Matthias J; Nieratschker, Vanessa; Nierste, Barbara; Ohle, Jacqueline; Pfennig, Andrea; Pieper, Marlenna; Quade, Matthias; Reich-Erkelenz, Daniela; Reif, Andreas; Reitt, Markus; Reininghaus, Bernd; Reininghaus, Eva Z; Riemenschneider, Matthias; Rienhoff, Otto; Roser, Patrik; Rujescu, Dan; Schennach, Rebecca; Scherk, Harald; Schmauss, Max; Schneider, Frank; Schosser, Alexandra; Schott, Björn H; Schwab, Sybille G; Schwanke, Jens; Skrowny, Daniela; Spitzer, Carsten; Stierl, Sebastian; Stöckel, Judith; Stübner, Susanne; Thiel, Andreas; Volz, Hans-Peter; von Hagen, Martin; Walter, Henrik; Witt, Stephanie H; Wobrock, Thomas; Zielasek, Jürgen; Zimmermann, Jörg; Zitzelsberger, Antje; Maier, Wolfgang; Falkai, Peter G; Rietschel, Marcella; Schulze, Thomas G
2013-12-01
The German Association for Psychiatry and Psychotherapy (DGPPN) has committed itself to establish a prospective national cohort of patients with major psychiatric disorders, the so-called DGPPN-Cohort. This project will enable the scientific exploitation of high-quality data and biomaterial from psychiatric patients for research. It will be set up using harmonised data sets and procedures for sample generation and guided by transparent rules for data access and data sharing regarding the central research database. While the main focus lies on biological research, it will be open to all kinds of scientific investigations, including epidemiological, clinical or health-service research.
Terabyte IDE RAID-5 Disk Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
David A. Sanders et al.
2003-09-30
High energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. We examine some techniques that exploit recent developments in commodity hardware. We report on tests of redundant arrays of integrated drive electronics (IDE) disk drives for use in offline high energy physics data analysis. IDE redundant array of inexpensive disks (RAID) prices now are less than the cost per terabyte of million-dollar tape robots! The arrays can be scaled to sizes affordable to institutions without robots and used when fast random access at low cost is important.
Uncertainty of exploitation estimates made from tag returns
Miranda, L.E.; Brock, R.E.; Dorr, B.S.
2002-01-01
Over 6,000 crappies Pomoxis spp. were tagged in five water bodies to estimate exploitation rates by anglers. Exploitation rates were computed as the percentage of tags returned after adjustment for three sources of uncertainty: postrelease mortality due to the tagging process, tag loss, and the reporting rate of tagged fish. Confidence intervals around exploitation rates were estimated by resampling from the probability distributions of tagging mortality, tag loss, and reporting rate. Estimates of exploitation rates ranged from 17% to 54% among the five study systems. Uncertainty around estimates of tagging mortality, tag loss, and reporting resulted in 90% confidence intervals around the median exploitation rate as narrow as 15 percentage points and as broad as 46 percentage points. The greatest source of estimation error was uncertainty about tag reporting. Because the large investments required by tagging and reward operations produce imprecise estimates of the exploitation rate, it may be worth considering other approaches to estimating it or simply circumventing the exploitation question altogether.
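A hedged numerical sketch of the resampling idea (all counts and distribution parameters below are illustrative, not the study's values): the raw tag-return fraction is divided by draws of tagging survival, tag retention and reporting rate so that their uncertainty propagates into the exploitation estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
released, returned = 1200, 180                # hypothetical tagging and return totals
n_sim = 10_000

# Assumed probability distributions for the three adjustment factors.
survive_tagging = rng.beta(90, 10, n_sim)     # ~90% survive the tagging process
retain_tag      = rng.beta(85, 15, n_sim)     # ~85% of tags are retained
report_rate     = rng.beta(40, 60, n_sim)     # ~40% of recovered tags are reported

# Adjusted exploitation rate for each draw, then a 90% interval.
u = (returned / released) / (survive_tagging * retain_tag * report_rate)
print(np.median(u), np.percentile(u, [5, 95]))
```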
Diversity, competition, extinction: the ecophysics of language change.
Solé, Ricard V; Corominas-Murtra, Bernat; Fortuny, Jordi
2010-12-06
As indicated early by Charles Darwin, languages behave and change very much like living species. They display high diversity, differentiate in space and time, emerge and disappear. A large body of literature has explored the role of information exchanges and communicative constraints in groups of agents under selective scenarios. These models have been very helpful in providing a rationale on how complex forms of communication emerge under evolutionary pressures. However, other patterns of large-scale organization can be described using mathematical methods ignoring communicative traits. These approaches consider shorter time scales and have been developed by exploiting both theoretical ecology and statistical physics methods. The models are reviewed here and include extinction, invasion, origination, spatial organization, coexistence and diversity as key concepts and are very simple in their defining rules. Such simplicity is used in order to catch the most fundamental laws of organization and those universal ingredients responsible for qualitative traits. The similarities between observed and predicted patterns indicate that an ecological theory of language is emerging, supporting (on a quantitative basis) its ecological nature, although key differences are also present. Here, we critically review some recent advances and outline their implications and limitations as well as highlight problems for future research.
Decentralized Adaptive Neural Output-Feedback DSC for Switched Large-Scale Nonlinear Systems.
Lijun Long; Jun Zhao
2017-04-01
In this paper, for a class of switched large-scale uncertain nonlinear systems with unknown control coefficients and unmeasurable states, a switched-dynamic-surface-based decentralized adaptive neural output-feedback control approach is developed. The approach proposed extends the classical dynamic surface control (DSC) technique from the nonswitched to the switched setting by designing switched first-order filters, which overcomes the problem of multiple "explosion of complexity." Also, a dual common coordinates transformation of all subsystems is exploited to avoid the individual coordinate transformations for subsystems that are required when applying the backstepping recursive design scheme. Nussbaum-type functions are utilized to handle the unknown control coefficients, and a switched neural network observer is constructed to estimate the unmeasurable states. Combining the average dwell time method with backstepping and the DSC technique, decentralized adaptive neural controllers of subsystems are explicitly designed. It is proved that the approach provided can guarantee the semiglobal uniform ultimate boundedness of all the signals in the closed-loop system under a class of switching signals with average dwell time, and the convergence of the tracking errors to a small neighborhood of the origin. A two-inverted-pendulums system is provided to demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Newman, Gregory A.
2014-01-01
Many geoscientific applications exploit electrostatic and electromagnetic fields to interrogate and map subsurface electrical resistivity—an important geophysical attribute for characterizing mineral, energy, and water resources. In complex three-dimensional geologies, where many of these resources remain to be found, resistivity mapping requires large-scale modeling and imaging capabilities, as well as the ability to treat significant data volumes, which can easily overwhelm single-core and modest multicore computing hardware. To treat such problems requires large-scale parallel computational resources, necessary for reducing the time to solution to a time frame acceptable to the exploration process. The recognition that significant parallel computing processes must be brought to bear on these problems gives rise to choices that must be made in parallel computing hardware and software. In this review, some of these choices are presented, along with the resulting trade-offs. We also discuss future trends in high-performance computing and the anticipated impact on electromagnetic (EM) geophysics. Topics discussed in this review article include a survey of parallel computing platforms, ranging from graphics processing units to multicore CPUs with a fast interconnect, along with parallel solvers and associated solver libraries effective for inductive EM modeling and imaging.
Ocean-wide tracking of pelagic sharks reveals extent of overlap with longline fishing hotspots.
Queiroz, Nuno; Humphries, Nicolas E; Mucientes, Gonzalo; Hammerschlag, Neil; Lima, Fernando P; Scales, Kylie L; Miller, Peter I; Sousa, Lara L; Seabra, Rui; Sims, David W
2016-02-09
Overfishing is arguably the greatest ecological threat facing the oceans, yet catches of many highly migratory fishes including oceanic sharks remain largely unregulated with poor monitoring and data reporting. Oceanic shark conservation is hampered by basic knowledge gaps about where sharks aggregate across population ranges and precisely where they overlap with fishers. Using satellite tracking data from six shark species across the North Atlantic, we show that pelagic sharks occupy predictable habitat hotspots of high space use. Movement modeling showed sharks preferred habitats characterized by strong sea surface-temperature gradients (fronts) over other available habitats. However, simultaneous Global Positioning System (GPS) tracking of the entire Spanish and Portuguese longline-vessel fishing fleets show an 80% overlap of fished areas with hotspots, potentially increasing shark susceptibility to fishing exploitation. Regions of high overlap between oceanic tagged sharks and longliners included the North Atlantic Current/Labrador Current convergence zone and the Mid-Atlantic Ridge southwest of the Azores. In these main regions, and subareas within them, shark/vessel co-occurrence was spatially and temporally persistent between years, highlighting how broadly the fishing exploitation efficiently "tracks" oceanic sharks within their space-use hotspots year-round. Given this intense focus of longliners on shark hotspots, our study argues the need for international catch limits for pelagic sharks and identifies a future role of combining fine-scale fish and vessel telemetry to inform the ocean-scale management of fisheries.
Marine Mammal Impacts in Exploited Ecosystems: Would Large Scale Culling Benefit Fisheries?
Morissette, Lyne; Christensen, Villy; Pauly, Daniel
2012-01-01
Competition between marine mammals and fisheries for marine resources—whether real or perceived—has become a major issue for several countries and in international fora. We examined trophic interactions between marine mammals and fisheries based on a resource overlap index, using seven Ecopath models including marine mammal groups. On a global scale, most food consumed by marine mammals consisted of prey types that were not the main target of fisheries. For each ecosystem, the primary production required (PPR) to sustain marine mammals was less than half the PPR to sustain fisheries catches. We also developed an index representing the mean trophic level of marine mammals' consumption (TLQ) and compared it with the mean trophic level of fisheries' catches (TLC). Our results showed that overall TLQ was lower than TLC (2.88 versus 3.42). As fisheries increasingly exploit lower-trophic level species, the competition with marine mammals may become more important. We used mixed trophic impact analysis to evaluate indirect trophic effects of marine mammals, and in some cases found beneficial effects on some prey. Finally, we assessed the change in the trophic structure of an ecosystem after a simulated extirpation of marine mammal populations. We found that this led to alterations in the structure of the ecosystems, and that there was no clear and direct relationship between marine mammals' predation and the potential catch by fisheries. Indeed, total biomass, with no marine mammals in the ecosystem, generally remained surprisingly similar, or even decreased for some species. PMID:22970153
Theory of electrically controlled resonant tunneling spin devices
NASA Technical Reports Server (NTRS)
Ting, David Z. -Y.; Cartoixa, Xavier
2004-01-01
We report device concepts that exploit spin-orbit coupling for creating spin-polarized current sources using nonmagnetic semiconductor resonant tunneling heterostructures, without external magnetic fields. The resonant interband tunneling spin filter exploits the large valence band spin-orbit interaction to provide strong spin selectivity.
Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu
2018-09-01
The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for human visual systems. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.
Scaling Optimization of the SIESTA MHD Code
NASA Astrophysics Data System (ADS)
Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan
2013-10-01
SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
Design and test of 1/5th scale horizontal axis tidal current turbine
NASA Astrophysics Data System (ADS)
Liu, Hong-wei; Zhou, Hong-bin; Lin, Yong-gang; Li, Wei; Gu, Hai-gang
2016-06-01
Tidal current energy is a prominent renewable resource. Great progress has been made in tidal current energy exploitation technology all over the world in recent years, and large-scale devices have become the trend for tidal current turbines (TCTs) because of their economies of scale. Despite its similarity to the wind turbine, the tidal turbine has distinctive characteristics: high hydrodynamic efficiency, large thrust, a reliable sealing system, a compact power transmission structure, etc. In this paper, a 1/5th-scale horizontal axis tidal current turbine has been designed, manufactured and tested before the full-scale device design. Firstly, the three-blade horizontal axis rotor was designed based on traditional blade element momentum theory and its hydrodynamic performance was predicted with a numerical model. Then the power train system and stand-alone electrical control unit of the tidal current turbine, whose performances were assessed through bench tests carried out in the workshop, were designed and presented. Finally, offshore tests were carried out and the power performance of the rotor was obtained and compared with the published literature; the results showed that the power coefficient was satisfactory, in agreement with the theoretical predictions.
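For orientation, the rotor's hydrodynamic power follows the standard relation P = 1/2 ρ Cp A v^3; a short sketch with purely illustrative numbers for a scaled rotor (none of these values are taken from the paper):

```python
import numpy as np

def rotor_power(radius_m, current_speed_ms, cp=0.4, rho=1025.0):
    """Hydrodynamic power of a horizontal-axis rotor, P = 0.5*rho*Cp*A*v^3.
    Radius, speed and power coefficient here are illustrative assumptions."""
    area = np.pi * radius_m ** 2
    return 0.5 * rho * cp * area * current_speed_ms ** 3

print(rotor_power(radius_m=1.0, current_speed_ms=1.5) / 1e3, "kW")  # roughly 2 kW
```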
Multiple time step integrators in ab initio molecular dynamics.
Luehr, Nathan; Markland, Thomas E; Martínez, Todd J
2014-02-28
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
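A generic reversible multiple-time-step (RESPA-style) integrator illustrates the idea; the fast/slow force split below is a toy stand-in, not the fragment-based or range-separated ab initio splitting introduced in the paper:

```python
import numpy as np

def respa_step(x, v, fast_force, slow_force, dt_outer, n_inner, mass=1.0):
    """One reversible RESPA step: the slow force is applied with the outer time
    step, the fast force with dt_outer / n_inner (generic sketch only)."""
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * slow_force(x) / mass          # slow half-kick
    for _ in range(n_inner):                            # velocity Verlet on the fast part
        v += 0.5 * dt_inner * fast_force(x) / mass
        x += dt_inner * v
        v += 0.5 * dt_inner * fast_force(x) / mass
    v += 0.5 * dt_outer * slow_force(x) / mass          # slow half-kick
    return x, v

# Toy system: a stiff "bond" (fast) plus a soft external well (slow).
fast = lambda x: -100.0 * x
slow = lambda x: -0.1 * x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, fast, slow, dt_outer=0.05, n_inner=10)
print(x, v)
```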
Object classification and outliers analysis in the forthcoming Gaia mission
NASA Astrophysics Data System (ADS)
Ordóñez-Blanco, D.; Arcay, B.; Dafonte, C.; Manteiga, M.; Ulla, A.
2010-12-01
Astrophysics is evolving towards the rational optimization of costly observational material by the intelligent exploitation of large astronomical databases from both terrestrial telescopes and spatial mission archives. However, there has been relatively little advance in the development of highly scalable data exploitation and analysis tools needed to generate the scientific returns from these large and expensively obtained datasets. Among the upcoming projects of astronomical instrumentation, Gaia is the next cornerstone ESA mission. The Gaia survey foresees the creation of a data archive and its future exploitation with automated or semi-automated analysis tools. This work reviews some of the work that is being developed by the Gaia Data Processing and Analysis Consortium for the object classification and analysis of outliers in the forthcoming mission.
Scaling Principles for Understanding and Exploiting Adhesion
NASA Astrophysics Data System (ADS)
Crosby, Alfred
A grand challenge in the science of adhesion is the development of a general design paradigm for adhesive materials that can sustain large forces across an interface yet be detached with minimal force upon command. Essential to this challenge is the generality of achieving this performance under a wide set of external conditions and across an extensive range of forces. Nature has provided some guidance through various examples, e.g. geckos, for how to meet this challenge; however, a single solution is not evident upon initial investigation. To help provide insight into nature's ability to scale reversible adhesion and adapt to different external constraints, we have developed a general scaling theory that describes the force capacity of an adhesive interface in the context of biological locomotion. We have demonstrated that this scaling theory can be used to understand the relative performance of a wide range of organisms, including numerous gecko species and insects, as well as an extensive library of synthetic adhesive materials. We will present the development and testing of this scaling theory, and how this understanding has helped guide the development of new composite materials for high capacity adhesives. We will also demonstrate how this scaling theory has led to the development of new strategies for transfer printing and adhesive applications in manufacturing processes. Overall, the developed scaling principles provide a framework for guiding the design of adhesives.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, George; Marquez, Andres; Choudhury, Sutanay
2012-09-01
Triadic analysis encompasses a useful set of graph mining methods that is centered on the concept of a triad, which is a subgraph of three nodes and the configuration of directed edges across the nodes. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census that counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triadic census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to efficiently execute on shared memory architectures. We will retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code’s data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We will recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we will compare the performances of triad census algorithm versions on three specific systems: Cray XMT, HP Superdome, and AMD multi-core NUMA machine. These three systems have shared memory architectures but with markedly different hardware capabilities to manage parallelism.
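As a hedged serial sketch of what a triad census computes (the shared-memory parallel optimizations of the paper are not shown, and the raw edge configurations are not collapsed into the usual 16 isomorphism classes):

```python
from itertools import combinations
from collections import Counter

def triad_code(adj, a, b, c):
    """Encode the directed-edge configuration among nodes a, b, c as a 6-bit key."""
    pairs = [(a, b), (b, a), (a, c), (c, a), (b, c), (c, b)]
    return sum(1 << i for i, (u, v) in enumerate(pairs) if v in adj.get(u, ()))

def triad_census(adj):
    """Naive O(n^3) census over all node triples; adj maps node -> set of successors."""
    nodes = list(adj)
    return Counter(triad_code(adj, *t) for t in combinations(nodes, 3))

# Tiny example graph.
adj = {0: {1}, 1: {2}, 2: {0, 1}, 3: set()}
print(triad_census(adj))
```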
Transient Structures and Possible Limits of Data Recording in Phase-Change Materials.
Hu, Jianbo; Vanacore, Giovanni M; Yang, Zhe; Miao, Xiangshui; Zewail, Ahmed H
2015-07-28
Phase-change materials (PCMs) represent the leading candidates for universal data storage devices, which exploit the large difference in the physical properties of their transitional lattice structures. On a nanoscale, it is fundamental to determine their performance, which is ultimately controlled by the speed limit of transformation among the different structures involved. Here, we report observation with atomic-scale resolution of transient structures of nanofilms of crystalline germanium telluride, a prototypical PCM, using ultrafast electron crystallography. A nonthermal transformation from the initial rhombohedral phase to the cubic structure was found to occur in 12 ps. On a much longer time scale, hundreds of picoseconds, equilibrium heating of the nanofilm is reached, driving the system toward amorphization, provided that high excitation energy is invoked. These results elucidate the elementary steps defining the structural pathway in the transformation of crystalline-to-amorphous phase transitions and describe the essential atomic motions involved when driven by an ultrafast excitation. The establishment of the time scales of the different transient structures, as reported here, permits determination of the possible limit of performance, which is crucial for high-speed recording applications of PCMs.
Robust visual tracking via multiscale deep sparse networks
NASA Astrophysics Data System (ADS)
Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo
2017-04-01
In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features. It has significant success solving the tracking drift in a complicated environment. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on the stacked sparse autoencoders and rectifier linear unit, the tracker has a flexible and adjustable architecture without the offline pretraining process and exploits the robust and powerful features effectively only through online training of limited labeled data. Meanwhile, the tracker builds four deep sparse networks of different scales, according to the target's profile type. During tracking, the tracker selects the matched tracking network adaptively in accordance with the initial target's profile type. It preserves the inherent structural information more efficiently than the single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.
Anderson, D.R.
1974-01-01
Optimal exploitation strategies were studied for an animal population in a stochastic, serially correlated environment. This is a general case and encompasses a number of important cases as simplifications. Data on the mallard (Anas platyrhynchos) were used to explore the exploitation strategies and test several hypotheses because relatively much is known concerning the life history and general ecology of this species and extensive empirical data are available for analysis. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. Desirable properties of an optimal exploitation strategy were defined. A mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. Both the literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, alternative hypotheses were formulated: (1) exploitation mortality represents a largely additive form of mortality, or (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. Assuming that exploitation is largely an additive force of mortality, optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Optimal exploitation under this hypothesis tends to reduce the variance of the size of the population. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the breeding population. Environmental variables may be somewhat more important than the size of the breeding population to the production of young mallards. In contrast, the size of the breeding population appears to be more important in the exploitation process than is the state of the environment. The form of the exploitation strategy appears to be relatively insensitive to small changes in the production rate. In general, the relative importance of the size of the breeding population may decrease as fecundity increases. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, harvest rate, or designed to maintain a constant breeding population size is inefficient.
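A hedged sketch of the stochastic dynamic programming approach under the additive-mortality hypothesis (a toy model with invented parameters, not the mallard model above): value iteration over a population grid yields a feedback harvest policy that depends on the observed state.

```python
import numpy as np

# Toy stochastic dynamic program for choosing a harvest rate (illustrative only).
pops = np.linspace(0.1, 2.0, 40)           # breeding-population grid
harvests = np.linspace(0.0, 0.5, 26)       # candidate exploitation rates
env_states, env_probs = np.array([0.6, 1.0, 1.4]), np.array([0.25, 0.5, 0.25])
survival, recruit, beta = 0.5, 0.9, 0.95   # natural survival, recruitment, discount

def next_pop(n, h, env):
    # Additive-mortality hypothesis: harvest mortality adds to natural mortality.
    return np.clip(n * survival * (1.0 - h) + recruit * n * env, pops[0], pops[-1])

def q_values(V):
    """Expected return of each (population, harvest) pair given value function V."""
    q = np.empty((pops.size, harvests.size))
    for i, n in enumerate(pops):
        for j, h in enumerate(harvests):
            future = sum(p * np.interp(next_pop(n, h, e), pops, V)
                         for p, e in zip(env_probs, env_states))
            q[i, j] = h * n + beta * future        # reward = expected harvest
    return q

V = np.zeros(pops.size)
for _ in range(200):                               # value iteration
    V = q_values(V).max(axis=1)
policy = harvests[q_values(V).argmax(axis=1)]      # optimal harvest rate per state
print(np.round(policy[::8], 2))
```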
Anderson, Eric C
2012-11-08
Advances in genotyping that allow tens of thousands of individuals to be genotyped at a moderate number of single nucleotide polymorphisms (SNPs) permit parentage inference to be pursued on a very large scale. The intergenerational tagging this capacity allows is revolutionizing the management of cultured organisms (cows, salmon, etc.) and is poised to do the same for scientific studies of natural populations. Currently, however, there are no likelihood-based methods of parentage inference which are implemented in a manner that allows them to quickly handle a very large number of potential parents or parent pairs. Here we introduce an efficient likelihood-based method applicable to the specialized case of cultured organisms in which both parents can be reliably sampled. We develop a Markov chain representation for the cumulative number of Mendelian incompatibilities between an offspring and its putative parents and we exploit it to develop a fast algorithm for simulation-based estimates of statistical confidence in SNP-based assignments of offspring to pairs of parents. The method is implemented in the freely available software SNPPIT. We describe the method in detail, then assess its performance in a large simulation study using known allele frequencies at 96 SNPs from ten hatchery salmon populations. The simulations verify that the method is fast and accurate and that 96 well-chosen SNPs can provide sufficient power to identify the correct pair of parents from amongst millions of candidate pairs.
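A hedged sketch of the Mendelian-incompatibility count at the heart of the method (genotypes coded as 0/1/2 copies of the reference allele; SNPPIT additionally models genotyping error and uses the Markov-chain and simulation machinery described above):

```python
def incompatible(parent1, parent2, offspring):
    """True if genotypes at one biallelic SNP violate Mendelian inheritance for the trio."""
    gametes1 = {0: {0}, 1: {0, 1}, 2: {1}}[parent1]    # alleles parent 1 can pass on
    gametes2 = {0: {0}, 1: {0, 1}, 2: {1}}[parent2]
    possible = {a + b for a in gametes1 for b in gametes2}
    return offspring not in possible

def count_incompatibilities(p1_genos, p2_genos, off_genos):
    """Number of SNPs at which the trio is Mendelian-incompatible; this count is
    what gets compared against a simulated distribution to assign parent pairs."""
    return sum(incompatible(a, b, c) for a, b, c in zip(p1_genos, p2_genos, off_genos))

# Example: 5 SNPs; only the last locus (parents 0/0, offspring 2) is incompatible.
print(count_incompatibilities([0, 1, 2, 1, 0], [1, 1, 2, 0, 0], [0, 2, 2, 1, 2]))  # -> 1
```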
Bravo, Àlex; Piñero, Janet; Queralt-Rosinach, Núria; Rautschka, Michael; Furlong, Laura I
2015-02-21
Current biomedical research needs to leverage and exploit the large amount of information reported in scientific publications. Automated text mining approaches, in particular those aimed at finding relationships between entities, are key for identification of actionable knowledge from free text repositories. We present the BeFree system aimed at identifying relationships between biomedical entities with a special focus on genes and their associated diseases. By exploiting morpho-syntactic information of the text, BeFree is able to identify gene-disease, drug-disease and drug-target associations with state-of-the-art performance. The application of BeFree to real-case scenarios shows its effectiveness in extracting information relevant for translational research. We show the value of the gene-disease associations extracted by BeFree through a number of analyses and integration with other data sources. BeFree succeeds in identifying genes associated to a major cause of morbidity worldwide, depression, which are not present in other public resources. Moreover, large-scale extraction and analysis of gene-disease associations, and integration with current biomedical knowledge, provided interesting insights on the kind of information that can be found in the literature, and raised challenges regarding data prioritization and curation. We found that only a small proportion of the gene-disease associations discovered by using BeFree is collected in expert-curated databases. Thus, there is a pressing need to find alternative strategies to manual curation, in order to review, prioritize and curate text-mining data and incorporate it into domain-specific databases. We present our strategy for data prioritization and discuss its implications for supporting biomedical research and applications. BeFree is a novel text mining system that performs competitively for the identification of gene-disease, drug-disease and drug-target associations. Our analyses show that mining only a small fraction of MEDLINE results in a large dataset of gene-disease associations, and only a small proportion of this dataset is actually recorded in curated resources (2%), raising several issues on data prioritization and curation. We propose that joint analysis of text mined data with data curated by experts appears as a suitable approach to both assess data quality and highlight novel and interesting information.
Systematic effects of foreground removal in 21-cm surveys of reionization
NASA Astrophysics Data System (ADS)
Petrovic, Nada; Oh, S. Peng
2011-05-01
21-cm observations have the potential to revolutionize our understanding of the high-redshift Universe. Whilst extremely bright radio continuum foregrounds exist at these frequencies, their spectral smoothness can be exploited to allow efficient foreground subtraction. It is well known that - regardless of other instrumental effects - this removes power on scales comparable to the survey bandwidth. We investigate associated systematic biases. We show that removing line-of-sight fluctuations on large scales aliases into suppression of the 3D power spectrum across a broad range of scales. This bias can be dealt with by correctly marginalizing over small wavenumbers in the 1D power spectrum; however, the unbiased estimator will have unavoidably larger variance. We also show that Gaussian realizations of the power spectrum permit accurate and extremely rapid Monte Carlo simulations for error analysis; repeated realizations of the fully non-Gaussian field are unnecessary. We perform Monte Carlo maximum likelihood simulations of foreground removal which yield unbiased, minimum variance estimates of the power spectrum in agreement with Fisher matrix estimates. Foreground removal also distorts the 21-cm probability distribution function (PDF), reducing the contrast between neutral and ionized regions, with potentially serious consequences for efforts to extract information from the PDF. We show that it is the subtraction of large-scale modes which is responsible for this distortion, and that it is less severe in the earlier stages of reionization. It can be reduced by using larger bandwidths. In the late stages of reionization, identification of the largest ionized regions (which consist of foreground emission only) provides calibration points which potentially allow recovery of large-scale modes. Finally, we also show that (i) the broad frequency response of synchrotron and free-free emission will smear out any features in the electron momentum distribution and ensure spectrally smooth foregrounds and (ii) extragalactic radio recombination lines should be negligible foregrounds.
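The large-scale power suppression can be illustrated with a quick Monte Carlo over Gaussian realizations, in the spirit described above. The sketch below draws 1D line-of-sight realizations from an assumed power-law spectrum, removes the smooth (large-scale) component with a low-order polynomial as a crude stand-in for foreground subtraction, and measures the resulting suppression of band powers; the spectrum shape, bandwidth and subtraction order are illustrative assumptions, not the survey configuration analysed in the paper.

    import numpy as np

    n, nreal = 512, 200                      # samples per sightline, realizations
    k = np.fft.rfftfreq(n)                   # wavenumbers in arbitrary units
    P_in = np.zeros(k.size)
    P_in[1:] = k[1:]**-2.0                   # assumed input power spectrum

    rng = np.random.default_rng(0)
    x = np.arange(n)
    P_raw = np.zeros(k.size)
    P_sub = np.zeros(k.size)
    for _ in range(nreal):
        amp = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * np.sqrt(P_in / 2)
        d = np.fft.irfft(amp, n=n)                        # one Gaussian realization
        P_raw += np.abs(np.fft.rfft(d))**2
        d = d - np.polyval(np.polyfit(x, d, 2), x)        # remove smooth line-of-sight modes
        P_sub += np.abs(np.fft.rfft(d))**2

    suppression = P_sub / np.maximum(P_raw, 1e-30)
    # suppression is ~1 at high k but well below 1 at the lowest wavenumbers,
    # i.e. cleaning removes power on scales comparable to the bandwidth.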
SCALES: SEVIRI and GERB CaL/VaL area for large-scale field experiments
NASA Astrophysics Data System (ADS)
Lopez-Baeza, Ernesto; Belda, Fernando; Bodas, Alejandro; Crommelynck, Dominique; Dewitte, Steven; Domenech, Carlos; Gimeno, Jaume F.; Harries, John E.; Jorge Sanchez, Joan; Pineda, Nicolau; Pino, David; Rius, Antonio; Saleh, Kauzar; Tarruella, Ramon; Velazquez, Almudena
2004-02-01
The main objective of the SCALES Project is to exploit the unique opportunity offered by the recent launch of the first European METEOSAT Second Generation geostationary satellite (MSG-1) to generate and validate new radiation budget and cloud products provided by the GERB (Geostationary Earth Radiation Budget) instrument. SCALES' specific objectives are: (i) definition and characterization of a large reasonably homogeneous area compatible with GERB pixel size (around 50 x 50 km2), (ii) validation of GERB TOA radiances and fluxes derived by means of angular distribution models, (iii) development of algorithms to estimate surface net radiation from GERB TOA measurements, and (iv) development of accurate methodologies to measure radiation flux divergence and analyze its influence on the thermal regime and dynamics of the atmosphere, also using GERB data. SCALES is highly innovative: it focuses on a new and unique space instrument and develops a new specific validation methodology for low resolution sensors that is based on the use of a robust reference meteorological station (Valencia Anchor Station) around which 3D high resolution meteorological fields are obtained from the MM5 Meteorological Model. During the 1st GERB Ground Validation Campaign (18th-24th June, 2003), CERES instruments on Aqua and Terra provided additional radiance measurements to support validation efforts. The CERES instruments operated in the PAPS mode (Programmable Azimuth Plane Scanning), focusing on the station. Ground measurements were taken by lidar, sun photometer, GPS precipitable water content, radiosounding ascents, Anchor Station operational meteorological measurements at 2 m and 15 m, 4 radiation components at 2 m, and mobile stations to characterize a large area. In addition, measurements during LANDSAT overpasses on June 14th and 30th were also performed. These activities were carried out within the GIST (GERB International Science Team) framework, during the GERB Commissioning Period.
Cosmological neutrino simulations at extreme scale
Emberson, J. D.; Yu, Hao-Ran; Inman, Derek; ...
2017-08-01
Constraining neutrino mass remains an elusive challenge in modern physics. Precision measurements are expected from several upcoming cosmological probes of large-scale structure. Achieving this goal relies on an equal level of precision from theoretical predictions of neutrino clustering. Numerical simulations of the non-linear evolution of cold dark matter and neutrinos play a pivotal role in this process. We incorporate neutrinos into the cosmological N-body code CUBEP3M and discuss the challenges associated with pushing to the extreme scales demanded by the neutrino problem. We highlight code optimizations made to exploit modern high performance computing architectures and present a novel method of data compression that reduces the phase-space particle footprint from 24 bytes in single precision to roughly 9 bytes. We scale the neutrino problem to the Tianhe-2 supercomputer and provide details of our production run, named TianNu, which uses 86% of the machine (13,824 compute nodes). With a total of 2.97 trillion particles, TianNu is currently the world’s largest cosmological N-body simulation and improves upon previous neutrino simulations by two orders of magnitude in scale. We finish with a discussion of the unanticipated computational challenges that were encountered during the TianNu runtime.
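To illustrate how a roughly 9-byte phase-space footprint can be reached, the sketch below stores each position as an 8-bit offset within its parent mesh cell (3 bytes) and each velocity component as a 16-bit integer (6 bytes). The mesh size, velocity cap and layout are assumptions made for the sketch; the actual TianNu format may differ in detail, and in practice the parent-cell index is implicit in the particle ordering rather than stored per particle.

    import numpy as np

    ncell, box, vmax = 64, 100.0, 2000.0      # illustrative mesh, box size, velocity cap
    cell = box / ncell

    def compress(pos, vel):
        cidx = np.floor(pos / cell).astype(np.int32)              # implicit in ordering in practice
        frac = (pos / cell - cidx) * 256.0
        pos8 = np.clip(frac, 0, 255).astype(np.uint8)             # 1 byte per coordinate
        vel16 = np.clip(vel / vmax * 32767.0, -32768, 32767).astype(np.int16)
        return cidx, pos8, vel16                                  # 3 + 6 bytes per particle

    def decompress(cidx, pos8, vel16):
        pos = (cidx + (pos8.astype(np.float32) + 0.5) / 256.0) * cell
        vel = vel16.astype(np.float32) / 32767.0 * vmax
        return pos, vel

    rng = np.random.default_rng(1)
    pos = rng.uniform(0, box, size=(1000, 3)).astype(np.float32)
    vel = rng.normal(0.0, 300.0, size=(1000, 3)).astype(np.float32)
    pos2, vel2 = decompress(*compress(pos, vel))
    print(np.abs(pos2 - pos).max(), np.abs(vel2 - vel).max())     # quantization error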
Herbivore-induced plant volatiles and tritrophic interactions across spatial scales.
Aartsma, Yavanna; Bianchi, Felix J J A; van der Werf, Wopke; Poelman, Erik H; Dicke, Marcel
2017-12-01
Herbivore-induced plant volatiles (HIPVs) are an important cue used in herbivore location by carnivorous arthropods such as parasitoids. The effects of plant volatiles on parasitoids have been well characterised at small spatial scales, but little research has been done on their effects at larger spatial scales. The spatial matrix of volatiles ('volatile mosaic') within which parasitoids locate their hosts is dynamic and heterogeneous. It is shaped by the spatial pattern of HIPV-emitting plants, the concentration, chemical composition and breakdown of the emitted HIPV blends, and by environmental factors such as wind, turbulence and vegetation that affect transport and mixing of odour plumes. The volatile mosaic may be exploited differentially by different parasitoid species, in relation to species traits such as sensory ability to perceive volatiles and the physical ability to move towards the source. Understanding how HIPVs influence parasitoids at larger spatial scales is crucial for our understanding of tritrophic interactions and sustainable pest management in agriculture. However, there is a large gap in our knowledge on how volatiles influence the process of host location by parasitoids at the landscape scale. Future studies should bridge the gap between the chemical and behavioural ecology of tritrophic interactions and landscape ecology. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Zhu, Dan; Ciais, Philippe; Chang, Jinfeng; Krinner, Gerhard; Peng, Shushi; Viovy, Nicolas; Peñuelas, Josep; Zimov, Sergey
2018-04-01
Large herbivores are a major agent in ecosystems, influencing vegetation structure, and carbon and nutrient flows. During the last glacial period, a mammoth steppe ecosystem prevailed in the unglaciated northern lands, supporting a high diversity and density of megafaunal herbivores. The apparent discrepancy between abundant megafauna and the expected low vegetation productivity under a generally harsher climate with a lower CO2 concentration, termed the productivity paradox, requires large-scale quantitative analysis using process-based ecosystem models. However, most current dynamic global vegetation models (DGVMs) lack explicit representation of large herbivores. Here we incorporated a grazing module in a DGVM based on physiological and demographic equations for wild large grazers, taking into account feedbacks of large grazers on vegetation. The model was applied globally for present-day and the Last Glacial Maximum (LGM). The present-day results of potential grazer biomass, combined with an empirical land-use map, infer a reduction in wild grazer biomass by 79-93% owing to anthropogenic land replacement of natural grasslands. For the LGM, we find that the larger mean body size of mammalian herbivores than today is the crucial clue to explain the productivity paradox, due to a more efficient exploitation of grass production by grazers with a large body size.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athenodorou, Andreas; Boucaud, Philippe; de Soto, Feliciano
We report on an instanton-based analysis of the gluon Green functions in the Landau gauge for low momenta; in particular we use lattice results for αs in the symmetric momentum subtraction scheme (MOM) for large-volume lattice simulations. We have exploited quenched gauge field configurations, Nf = 0, with both Wilson and tree-level Symanzik improved actions, and unquenched ones with Nf = 2 + 1 and Nf = 2 + 1 + 1 dynamical flavors (domain wall and twisted-mass fermions, respectively). We show that the dominance of instanton correlations on the low-momenta gluon Green functions can be applied to the determination of phenomenological parameters of the instanton liquid and, eventually, to a determination of the lattice spacing. We furthermore apply the Gradient Flow to remove short-distance fluctuations. The Gradient Flow gets rid of the QCD scale, ΛQCD, and reveals that the instanton prediction extends to large momenta. For those gauge field configurations free of quantum fluctuations, the direct study of topological charge density shows the appearance of large-scale lumps that can be identified as instantons, giving access to a direct study of the instanton density and size distribution that is compatible with those extracted from the analysis of the Green functions.
Lipid metabolism and potentials of biofuel and high added-value oil production in red algae.
Sato, Naoki; Moriyama, Takashi; Mori, Natsumi; Toyoshima, Masakazu
2017-04-01
Biomass production is currently explored in microalgae, macroalgae and land plants. Microalgal biofuel development has been performed mostly in green algae. In the Japanese tradition, macrophytic red algae such as Pyropia yezoensis and Gelidium crinale have been utilized as food and industrial materials. Research on the utilization of unicellular red microalgae such as Cyanidioschyzon merolae and Porphyridium purpureum started only quite recently. Red algae have relatively large plastid genomes harboring more than 200 protein-coding genes that support the biosynthetic capacity of the plastid. Engineering the plastid genome is a unique potential of red microalgae. In addition, large-scale growth facilities of P. purpureum have been developed for industrial production of biofuels. C. merolae has been studied as a model alga for cell and molecular biological analyses with its completely determined genomes and transformation techniques. Its acidic and warm habitat makes it easy to grow this alga axenically at large scale. Its potential as a biofuel producer has recently been documented under nitrogen-limited conditions. Metabolic pathways of the accumulation of starch and triacylglycerol and the enzymes involved therein are being elucidated. Engineering these regulatory mechanisms will open a possibility of exploiting the full capability of production of biofuel and high added-value oil. In the present review, we will describe the characteristics and potential of these algae as biotechnological seeds.
X-ray techniques for innovation in industry
Lawniczak-Jablonska, Krystyna; Cutler, Jeffrey
2014-01-01
The smart specialization declared in the European program Horizon 2020, and the increasing cooperation between research and development found in companies and researchers at universities and research institutions have created a new paradigm where many calls for proposals require participation and funding from public and private entities. This has created a unique opportunity for large-scale facilities, such as synchrotron research laboratories, to participate in and support applied research programs. Scientific staff at synchrotron facilities have developed many advanced tools that make optimal use of the characteristics of the light generated by the storage ring. These tools have been exceptionally valuable for materials characterization including X-ray absorption spectroscopy, diffraction, tomography and scattering, and have been key in solving many research and development issues. Progress in optics and detectors, as well as a large effort put into the improvement of data analysis codes, have resulted in the development of reliable and reproducible procedures for materials characterization. Research with photons has contributed to the development of a wide variety of products such as plastics, cosmetics, chemicals, building materials, packaging materials and pharma. In this review, a few examples are highlighted of successful cooperation leading to solutions of a variety of industrial technological problems that have been exploited by industry, including lessons learned from the Science Link project, supported by the European Commission, as a new approach to increase the number of commercial users at large-scale research infrastructures.
hEIDI: An Intuitive Application Tool To Organize and Treat Large-Scale Proteomics Data.
Hesse, Anne-Marie; Dupierris, Véronique; Adam, Claire; Court, Magali; Barthe, Damien; Emadali, Anouk; Masselon, Christophe; Ferro, Myriam; Bruley, Christophe
2016-10-07
Advances in high-throughput proteomics have led to a rapid increase in the number, size, and complexity of the associated data sets. Managing and extracting reliable information from such large series of data sets require the use of dedicated software organized in a consistent pipeline to reduce, validate, exploit, and ultimately export data. The compilation of multiple mass-spectrometry-based identification and quantification results obtained in the context of a large-scale project represents a real challenge for developers of bioinformatics solutions. In response to this challenge, we developed a dedicated software suite called hEIDI to manage and combine both identifications and semiquantitative data related to multiple LC-MS/MS analyses. This paper describes how, through a user-friendly interface, hEIDI can be used to compile analyses and retrieve lists of nonredundant protein groups. Moreover, hEIDI allows direct comparison of series of analyses, on the basis of protein groups, while ensuring consistent protein inference and also computing spectral counts. hEIDI ensures that validated results are compliant with MIAPE guidelines as all information related to samples and results is stored in appropriate databases. Thanks to the database structure, validated results generated within hEIDI can be easily exported in the PRIDE XML format for subsequent publication. hEIDI can be downloaded from http://biodev.extra.cea.fr/docs/heidi .
CryoSat Plus For Oceans: an ESA Project for CryoSat-2 Data Exploitation Over Ocean
NASA Astrophysics Data System (ADS)
Benveniste, J.; Cotton, D.; Clarizia, M.; Roca, M.; Gommenginger, C. P.; Naeije, M. C.; Labroue, S.; Picot, N.; Fernandes, J.; Andersen, O. B.; Cancet, M.; Dinardo, S.; Lucas, B. M.
2012-12-01
The ESA CryoSat-2 mission is the first space mission to carry a space-borne radar altimeter that is able to operate in the conventional pulsewidth-limited (LRM) mode and in the novel Synthetic Aperture Radar (SAR) mode. Although the prime objective of the Cryosat-2 mission is dedicated to monitoring land and marine ice, the SAR mode capability of the Cryosat-2 SIRAL altimeter also presents the possibility of demonstrating significant potential benefits of SAR altimetry for ocean applications, based on expected performance enhancements which include improved range precision and finer along track spatial resolution. With this scope in mind, the "CryoSat Plus for Oceans" (CP4O) Project, dedicated to the exploitation of CryoSat-2 Data over ocean, supported by the ESA STSE (Support To Science Element) programme, brings together an expert European consortium comprising: DTU Space, isardSAT, National Oceanography Centre, Noveltis, SatOC, Starlab, TU Delft, the University of Porto and CLS (supported by CNES). The objectives of CP4O are: - to build a sound scientific basis for new scientific and operational applications of Cryosat-2 data over the open ocean, polar ocean, coastal seas and for sea-floor mapping. - to generate and evaluate new methods and products that will enable the full exploitation of the capabilities of the Cryosat-2 SIRAL altimeter, and extend their application beyond the initial mission objectives. - to ensure that the scientific return of the Cryosat-2 mission is maximised. In particular four themes will be addressed: - Open Ocean Altimetry: Combining GOCE Geoid Model with CryoSat Oceanographic LRM Products for the retrieval of CryoSat MSS/MDT model over open ocean surfaces and for analysis of mesoscale and large scale prominent open ocean features. Under this priority the project will also foster the exploitation of the finer resolution and higher SNR of novel CryoSat SAR Data to detect short spatial scale open ocean features. - High Resolution Polar Ocean Altimetry: Combination of GOCE Geoid Model with CryoSat Oceanographic SAR Products over polar oceans for the retrieval of CryoSat MSS/MDT and current circulation systems, improving the polar tide models and studying the coupling between blowing wind and current patterns. - High Resolution Coastal Zone Altimetry: Exploitation of the finer resolution and higher SNR of novel CryoSat SAR Data to get the radar altimetry closer to the shore, exploiting the SARIn mode for the discrimination of off-nadir land targets (e.g. steep cliffs) in the radar footprint from nadir sea return. - High Resolution Sea-Floor Altimetry: Exploitation of the finer resolution and higher SNR of novel CryoSat SAR Data to resolve the weak short-wavelength sea surface signals caused by sea-floor topography elements and to map uncharted sea-mounts/trenches. One of the first project activities is the consolidation of preliminary scientific requirements for the four themes under investigation. This paper will present the CP4O project content and objectives and will address the initial results from the on-going work to define the scientific requirements.
NASA Astrophysics Data System (ADS)
Stumpf, André; Malet, Jean-Philippe
2016-04-01
For more than 20 years, "Earth Observation" (EO) satellites developed or operated by ESA have provided a wealth of data. In the coming years, the Sentinel missions, along with the Copernicus Contributing Missions as well as Earth Explorers and other Third Party missions, will provide routine monitoring of our environment at the global scale, thereby delivering an unprecedented amount of data. While the availability of the growing volume of environmental data from space represents a unique opportunity for science, general R&D, and applications, it also poses major challenges to fully exploit the potential of archived and daily incoming datasets. Those challenges do not only comprise the discovery, access, processing, and visualization of large data volumes but also an increasing diversity of data sources and end users from different fields (e.g. EO, in-situ monitoring, and modeling). In this context, the GTEP (Geohazards Thematic Exploitation Platform) initiative aims to build an operational distributed processing platform to maximize the exploitation of EO data from past and future satellite missions for the detection and monitoring of natural hazards. This presentation focuses on the "Optical Image Correlation" Pilot Project (funded by ESA within the GTEP platform), whose objectives are to develop an easy-to-use, flexible and distributed processing chain for: 1) the automated reconstruction of surface Digital Elevation Models from stereo (and tristereo) pairs of Spot 6/7 and Pléiades satellite imagery, 2) the creation of ortho-images (panchromatic and multi-spectral) of Landsat 8, Sentinel-2, Spot 6/7 and Pléiades scenes, 3) the calculation of horizontal (E-N) displacement vectors based on sub-pixel image correlation. The processing chain is being implemented on the GEP cloud-based (Hadoop, MapReduce) environment and designed for analysis of surface displacements at local to regional scale (10-1000 km2) targeting in particular co-seismic displacement and slow-moving landslides. The processing targets both the analysis of time-series of archived data (Pléiades, Landsat 8) and current satellite missions Spot 6/7 and Sentinel-2. The possibility of rapid calculation in near-real time is an important aspect of the design of the processing chain. Archived datasets will be processed for some 'demonstrator' test sites in order to develop and test the implemented workflows.
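As a minimal illustration of the displacement-measurement step, the sketch below estimates the shift between two image patches by phase correlation with a simple parabolic peak refinement. It is a generic sub-pixel correlation scheme under these assumptions, not the actual GTEP processing chain.

    import numpy as np

    def _parabolic(y_m, y_0, y_p):
        """Vertex offset of the parabola through three samples around a peak."""
        denom = y_m - 2.0 * y_0 + y_p
        return 0.0 if denom == 0 else 0.5 * (y_m - y_p) / denom

    def subpixel_shift(a, b):
        """Shift (rows, cols) such that b is approximately a displaced by that amount."""
        F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
        r = np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12)).real
        i, j = np.unravel_index(np.argmax(r), r.shape)
        di = _parabolic(r[(i - 1) % r.shape[0], j], r[i, j], r[(i + 1) % r.shape[0], j])
        dj = _parabolic(r[i, (j - 1) % r.shape[1]], r[i, j], r[i, (j + 1) % r.shape[1]])
        si, sj = i + di, j + dj
        if si > r.shape[0] / 2: si -= r.shape[0]          # wrap to signed shifts
        if sj > r.shape[1] / 2: sj -= r.shape[1]
        return si, sj

    rng = np.random.default_rng(0)
    a = rng.random((64, 64))
    b = np.roll(a, shift=(3, -5), axis=(0, 1))            # synthetic integer displacement
    print(subpixel_shift(a, b))                           # approximately (3.0, -5.0)

In an operational chain such a measurement would be repeated over a grid of correlation windows on co-registered ortho-images to build the E-N displacement field.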
NASA Astrophysics Data System (ADS)
Song, Chenchen; Martínez, Todd J.
2016-05-01
We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).
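For context, the opposite-spin MP2 energy being approximated can be written in the standard closed-shell form below, together with the generic tensor hypercontraction factorization of the two-electron integrals and the Laplace quadrature of the energy denominator; these are textbook-style expressions in the usual notation (occupied i, j; virtual a, b), not equations quoted from the paper.

    E_{\mathrm{SOS\text{-}MP2}} = -\, c_{\mathrm{OS}} \sum_{ijab}
        \frac{(ia|jb)^2}{\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j},
    \qquad
    (ia|jb) \;\approx\; \sum_{PQ} X_i^P X_a^P \, Z_{PQ} \, X_j^Q X_b^Q,
    \qquad
    \frac{1}{\Delta} = \int_0^\infty e^{-t\Delta}\, dt \;\approx\; \sum_\tau w_\tau e^{-t_\tau \Delta}.

Replacing the four-index integrals by the THC factors and the denominator by a short exponential quadrature is what allows the nested summations to be regrouped at reduced formal scaling.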
Theoretical and experimental study of a new algorithm for factoring numbers
NASA Astrophysics Data System (ADS)
Tamma, Vincenzo
The security of codes, for example in credit card and government information, relies on the fact that the factorization of a large integer N is a rather costly process on a classical digital computer. Such security is endangered by Shor's algorithm, which employs entangled quantum systems to find, with a polynomial number of resources, the period of a function which is connected with the factors of N. We can surely expect a possible future realization of such a method for large numbers, but so far the period of Shor's function has been only computed for the number 15. Inspired by Shor's idea, our work aims at methods of factorization based on the periodicity measurement of a given continuous periodic "factoring function" which is physically implementable using an analogue computer. In particular, we have focused on both the theoretical and the experimental analysis of Gauss sums with continuous arguments leading to a new factorization algorithm. The procedure allows, for the first time, several numbers to be factored by measuring the periodicity of Gauss sums in first-order "factoring" interference processes. We experimentally implemented this idea by exploiting polychromatic optical interference in the visible range with a multi-path interferometer, and achieved the factorization of seven-digit numbers. The physical principle behind this "factoring" interference procedure can be potentially exploited also on entangled systems, such as multi-photon entangled states, in order to achieve a polynomial scaling in the number of resources.
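The core numerical idea can be sketched in a few lines: a normalized truncated Gauss sum has modulus close to one when the trial divisor divides N and is suppressed otherwise, so factors can be read off from its periodicity. The truncation order, acceptance threshold and test number below are illustrative choices, not the experimental parameters.

    import numpy as np

    def gauss_sum(N, l, M=20):
        """|(1/(M+1)) * sum_m exp(2*pi*i*m^2*N/l)| for m = 0..M."""
        m = np.arange(M + 1)
        return np.abs(np.exp(2j * np.pi * m**2 * N / l).sum()) / (M + 1)

    N = 9983                                   # = 67 * 149
    factors = [l for l in range(2, int(N**0.5) + 1) if gauss_sum(N, l) > 0.9]
    print(factors)                             # expect [67]; ghost factors can appear
                                               # for very short truncations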
Rich client data exploration and research prototyping for NOAA
NASA Astrophysics Data System (ADS)
Grossberg, Michael; Gladkova, Irina; Guch, Ingrid; Alabi, Paul; Shahriar, Fazlul; Bonev, George; Aizenman, Hannah
2009-08-01
Data from satellites and model simulations is increasing exponentially as observations and model computing power improve rapidly. Not only is technology producing more data, but it often comes from sources all over the world. Researchers and scientists who must collaborate are also located globally. This work presents a software design and technologies which will make it possible for groups of researchers to explore large data sets visually together without the need to download these data sets locally. The design will also make it possible to exploit high performance computing remotely and transparently to analyze and explore large data sets. Computer power, high quality sensing, and data storage capacity have improved at a rate that outstrips our ability to develop software applications that exploit these resources. It is impractical for NOAA scientists to download all of the satellite and model data that may be relevant to a given problem and the computing environments available to a given researcher range from supercomputers to only a web browser. The size and volume of satellite and model data are increasing exponentially. There are at least 50 multisensor satellite platforms collecting Earth science data. On the ground and in the sea there are sensor networks, as well as networks of ground based radar stations, producing a rich real-time stream of data. This new wealth of data would have limited use were it not for the arrival of large-scale high-performance computation provided by parallel computers, clusters, grids, and clouds. With these computational resources and vast archives available, it is now possible to analyze subtle relationships which are global, multi-modal and cut across many data sources. Researchers, educators, and even the general public, need tools to access, discover, and use vast data center archives and high performance computing through a simple yet flexible interface.
The Future of Stellar Populations Studies in the Milky Way and the Local Group
NASA Astrophysics Data System (ADS)
Majewski, Steven R.
2010-04-01
The last decade has seen enormous progress in understanding the structure of the Milky Way and neighboring galaxies via the production of large-scale digital surveys of the sky like 2MASS and SDSS, as well as specialized, counterpart imaging surveys of other Local Group systems. Apart from providing snapshots of galaxy structure, these “cartographic” surveys lend insights into the formation and evolution of galaxies when supplemented with additional data (e.g., spectroscopy, astrometry) and when referenced to theoretical models and simulations of galaxy evolution. These increasingly sophisticated simulations are making ever more specific predictions about the detailed chemistry and dynamics of stellar populations in galaxies. To fully exploit, test and constrain these theoretical ventures demands similar commitments of observational effort as have been put into the previous imaging surveys to fill out other dimensions of parameter space with statistically significant intensity. Fortunately, the future of large-scale stellar population studies is bright with a number of grand projects on the horizon that collectively will contribute a breathtaking volume of information on individual stars in Local Group galaxies. These projects include: (1) additional imaging surveys, such as Pan-STARRS, SkyMapper and LSST, which, apart from providing deep, multicolor imaging, yield time series data useful for revealing variable stars (including critical standard candles, like RR Lyrae variables) and creating large-scale, deep proper motion catalogs; (2) higher accuracy, space-based astrometric missions, such as Gaia and SIM-Lite, which stand to provide critical, high precision dynamical data on stars in the Milky Way and its satellites; and (3) large-scale spectroscopic surveys provided by RAVE, APOGEE, HERMES, LAMOST, and the Gaia spectrometer, which will yield not only enormous numbers of stellar radial velocities, but extremely comprehensive views of the chemistry of stellar populations. Meanwhile, previously dust-obscured regions of the Milky Way will continue to be systematically exposed via large infrared surveys underway or on the way, such as the various GLIMPSE surveys from Spitzer's IRAC instrument, UKIDSS, APOGEE, JASMINE and WISE.
Annealed Scaling for a Charged Polymer
NASA Astrophysics Data System (ADS)
Caravenna, F.; den Hollander, F.; Pétrélis, N.; Poisat, J.
2016-03-01
This paper studies an undirected polymer chain living on the one-dimensional integer lattice and carrying i.i.d. random charges. Each self-intersection of the polymer chain contributes to the interaction Hamiltonian an energy that is equal to the product of the charges of the two monomers that meet. The joint probability distribution for the polymer chain and the charges is given by the Gibbs distribution associated with the interaction Hamiltonian. The focus is on the annealed free energy per monomer in the limit as the length of the polymer chain tends to infinity. We derive a spectral representation for the free energy and use this to prove that there is a critical curve in the parameter plane of charge bias versus inverse temperature separating a ballistic phase from a subballistic phase. We show that the phase transition is first order. We prove large deviation principles for the laws of the empirical speed and the empirical charge, and derive a spectral representation for the associated rate functions. Interestingly, in both phases both rate functions exhibit flat pieces, which correspond to an inhomogeneous strategy for the polymer to realise a large deviation. The large deviation principles in turn lead to laws of large numbers and central limit theorems. We identify the scaling behaviour of the critical curve for small and for large charge bias. In addition, we identify the scaling behaviour of the free energy for small charge bias and small inverse temperature. Both are linked to an associated Sturm-Liouville eigenvalue problem. A key tool in our analysis is the Ray-Knight formula for the local times of the one-dimensional simple random walk. This formula is exploited to derive a closed form expression for the generating function of the annealed partition function, and for several related quantities. This expression in turn serves as the starting point for the derivation of the spectral representation for the free energy, and for the scaling theorems. What happens for the quenched free energy per monomer remains open. We state two modest results and raise a few questions.
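In the standard notation for this model (simple random walk S = (S_i) and i.i.d. charges ω = (ω_i)), the self-intersection Hamiltonian described above and the associated Gibbs weight take the form below; this is the usual way such a charged-polymer Hamiltonian is written, included for orientation rather than quoted from the paper.

    H_n(S, \omega) = \sum_{1 \le i < j \le n} \omega_i \, \omega_j \, \mathbf{1}\{ S_i = S_j \},
    \qquad
    dP_n^{\beta,\omega}(S) \;\propto\; e^{-\beta H_n(S,\omega)} \, dP(S),

with β the inverse temperature; the annealed free energy per monomer is obtained from the partition function after averaging over the (biased) charge distribution and letting n tend to infinity.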
Faust, Matthew D.; Hansen, Michael J.
2017-01-01
Muskellunge anglers desire to catch large fish, and release rates by recreational anglers often approach 100% (Isermann et al. 2011). Muskellunge are also a culturally significant fish for Chippewa tribes and support a subsistence spearing fishery in Wisconsin’s Ceded Territory (Erickson 2007). Although Muskellunge populations within the state’s Ceded Territory are exposed to both angling and spearing fishery exploitation, Faust and Hansen (2016) suggested that under certain conditions (e.g., high minimum length limits (MLL) and low spearing exploitation) Muskellunge fisheries with disparate motivations could coexist (i.e., sufficient numbers of large individuals remained despite harvest from consumptive fishery), but noted that larger declines in trophy Muskellunge abundance were predicted at lower MLLs (e.g., 102-cm). Fisheries managers with the Wisconsin Department of Natural Resources (WDNR) wished to further understand how specific relative stock densities (RSD), used by the WDNR to define and monitor trophy Muskellunge fisheries, are reduced at exploitation rates commonly experienced by populations in northern Wisconsin. Similarly, understanding how trophy Muskellunge abundance may have declined under the previous statewide MLL (i.e., 86-cm) at these levels of exploitation was also desired. Thus, our objectives were to 1) determine if observed levels of angling and spearing exploitation reduced predicted RSD indices below thresholds used by the WDNR to define trophy Muskellunge fisheries for three typical Muskellunge growth potentials in northern Wisconsin across a variety of MLLs; and 2) quantify how numbers of trophy Muskellunge declined under an 86-cm MLL at observed levels of exploitation.
The MICE Grand Challenge lightcone simulation - II. Halo and galaxy catalogues
NASA Astrophysics Data System (ADS)
Crocce, M.; Castander, F. J.; Gaztañaga, E.; Fosalba, P.; Carretero, J.
2015-10-01
This is the second in a series of three papers in which we present an end-to-end simulation from the MICE collaboration, the MICE Grand Challenge (MICE-GC) run. The N-body run contains about 70 billion dark-matter particles in a (3 h^-1 Gpc)^3 comoving volume spanning five orders of magnitude in dynamical range. Here, we introduce the halo and galaxy catalogues built upon it, both in a wide (5000 deg2) and deep (z < 1.4) lightcone and in several comoving snapshots. Haloes were resolved down to a few 10^11 h^-1 M⊙. This allowed us to model galaxies down to absolute magnitude M_r < -18.9. We used a new hybrid halo occupation distribution and abundance matching technique for galaxy assignment. The catalogue includes the spectral energy distributions of all galaxies. We describe a variety of halo and galaxy clustering applications. We discuss how mass resolution effects can bias the large-scale two-pt clustering amplitude of poorly resolved haloes at the ≲5 per cent level, and their three-pt correlation function. We find a characteristic scale-dependent bias of ≲6 per cent across the BAO feature for haloes well above M⋆ ~ 10^12 h^-1 M⊙ and for luminous-red-galaxy-like galaxies. For haloes well below M⋆ the scale dependence at 100 h^-1 Mpc is ≲2 per cent. Lastly, we discuss the validity of the large-scale Kaiser limit across redshift and departures from it towards non-linear scales. We make the current version of the lightcone halo and galaxy catalogue (
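For orientation, a halo occupation distribution assigns galaxies to haloes through mean occupation functions of halo mass; a commonly used parameterization is sketched below with made-up parameter values. This is a generic illustration of the ingredient, not the hybrid HOD plus abundance-matching recipe actually used for the MICE-GC catalogue.

    import numpy as np
    from scipy.special import erf

    def n_central(M, log_Mmin=12.0, sigma=0.3):
        """Mean number of central galaxies in a halo of mass M (Msun/h)."""
        return 0.5 * (1.0 + erf((np.log10(M) - log_Mmin) / sigma))

    def n_satellite(M, M0=10**12.2, M1=10**13.3, alpha=1.0):
        """Mean number of satellite galaxies in a halo of mass M (Msun/h)."""
        return (np.clip(np.asarray(M, dtype=float) - M0, 0.0, None) / M1) ** alpha

    halo_mass = np.logspace(11.5, 15.0, 8)
    mean_galaxies = n_central(halo_mass) * (1.0 + n_satellite(halo_mass))
    print(np.round(mean_galaxies, 3))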
Coaching the exploration and exploitation in active learning for interactive video retrieval.
Wei, Xiao-Yong; Yang, Zhen-Qun
2013-03-01
Conventional active learning approaches for interactive video/image retrieval usually assume the query distribution is unknown, as it is difficult to estimate with only a limited number of labeled instances available. Thus, it is easy to put the system in a dilemma whether to explore the feature space in uncertain areas for a better understanding of the query distribution or to harvest in certain areas for more relevant instances. In this paper, we propose a novel approach called coached active learning that makes the query distribution predictable through training and, therefore, avoids the risk of searching on a completely unknown space. The estimated distribution, which provides a more global view of the feature space, can be used to schedule not only the timing but also the step sizes of the exploration and the exploitation in a principled way. The results of the experiments on a large-scale data set from TRECVID 2005-2009 validate the efficiency and effectiveness of our approach, which demonstrates an encouraging performance when facing domain-shift, outperforms eight conventional active learning methods, and shows superiority to six state-of-the-art interactive video retrieval systems.
Multiple-robot drug delivery strategy through coordinated teams of microswimmers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kei Cheang, U; Kim, Min Jun, E-mail: mkim@coe.drexel.edu; Lee, Kyoungwoo
2014-08-25
Untethered robotic microswimmers are very promising to significantly improve various types of minimally invasive surgeries by offering high accuracy at extremely small scales. A prime example is drug delivery, for which a large number of microswimmers is required to deliver sufficient dosages to target sites. For this reason, the controllability of groups of microswimmers is essential. In this paper, we demonstrate simultaneous control of multiple geometrically similar but magnetically different microswimmers using a single global rotating magnetic field. By exploiting the differences in their magnetic properties, we triggered different swimming behaviors from the microswimmers by controlling the frequency and the strength of the global field; for example, one swims and the other does not while exposed to the same control input. Our results show that the balance between the applied magnetic torque and the hydrodynamic torque can be exploited for simultaneous control of two microswimmers to swim in opposite directions, with different velocities, and with similar velocities. This work will serve to establish important concepts for future developments of control systems to manipulate multiple magnetically actuated microswimmers and is a step towards using swarms of microswimmers as viable workforces for complex operations.
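A back-of-the-envelope version of the selection principle: a swimmer follows the rotating field only below its step-out frequency, where the available magnetic torque can still balance the rotational viscous drag. The magnetic moments, drag coefficient and field strength below are made-up numbers for illustration, not measured values from the paper.

    def step_out_frequency(moment, field, rot_drag):
        """Maximum synchronous rotation rate (rad/s): magnetic torque m*B
        balancing viscous torque f*omega."""
        return moment * field / rot_drag

    swimmers = {"A": 1.0e-14, "B": 3.0e-15}     # magnetic moments, A*m^2 (illustrative)
    rot_drag = 2.0e-20                          # rotational drag coefficient, N*m*s
    B_field = 5.0e-3                            # field strength, T

    for name, m in swimmers.items():
        print(name, step_out_frequency(m, B_field, rot_drag), "rad/s")

    # Driving the field at a rate between the two step-out frequencies rotates
    # (and hence propels) swimmer A while swimmer B slips and stays nearly still.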
Commercial imagery archive, management, exploitation, and distribution project development
NASA Astrophysics Data System (ADS)
Hollinger, Bruce; Sakkas, Alysa
1999-10-01
The Lockheed Martin (LM) team had garnered over a decade of operational experience on the U.S. Government's IDEX II (Imagery Dissemination and Exploitation) system. Recently, it set out to create a new commercial product to serve the needs of large-scale imagery archiving and analysis markets worldwide. LM decided to provide a turnkey commercial solution to receive, store, retrieve, process, analyze and disseminate in 'push' or 'pull' modes imagery, data and data products using a variety of sources and formats. LM selected 'best of breed' hardware and software components and adapted and developed its own algorithms to provide added functionality not commercially available elsewhere. The resultant product, Intelligent Library System (ILS)™, satisfies requirements for (1) a potentially unbounded data archive (5000 TB range); (2) automated workflow management for increased user productivity; (3) automatic tracking and management of files stored on shelves; (4) ability to ingest, process and disseminate data volumes with bandwidths ranging up to multi-gigabit per second; (5) access through a thin client-to-server network environment; (6) multiple interactive users needing retrieval of files in seconds, from archived images or in real time; and (7) scalability that maintains information throughput performance as the size of the digital library grows.
Commercial imagery archive, management, exploitation, and distribution product development
NASA Astrophysics Data System (ADS)
Hollinger, Bruce; Sakkas, Alysa
1999-12-01
The Lockheed Martin (LM) team had garnered over a decade of operational experience on the U.S. Government's IDEX II (Imagery Dissemination and Exploitation) system. Recently, it set out to create a new commercial product to serve the needs of large-scale imagery archiving and analysis markets worldwide. LM decided to provide a turnkey commercial solution to receive, store, retrieve, process, analyze and disseminate in 'push' or 'pull' modes imagery, data and data products using a variety of sources and formats. LM selected 'best of breed' hardware and software components and adapted and developed its own algorithms to provide added functionality not commercially available elsewhere. The resultant product, Intelligent Library System (ILS)™, satisfies requirements for (a) a potentially unbounded data archive (5000 TB range); (b) automated workflow management for increased user productivity; (c) automatic tracking and management of files stored on shelves; (d) ability to ingest, process and disseminate data volumes with bandwidths ranging up to multi-gigabit per second; (e) access through a thin client-to-server network environment; (f) multiple interactive users needing retrieval of files in seconds, from archived images or in real time; and (g) scalability that maintains information throughput performance as the size of the digital library grows.
Industrial production of acetone and butanol by fermentation-100 years later.
Sauer, Michael
2016-07-01
Microbial production of acetone and butanol was one of the first large-scale industrial fermentation processes of global importance. During the first part of the 20th century, it was indeed the second largest fermentation process, superseded in importance only by the ethanol fermentation. After a rapid decline after the 1950s, acetone-butanol-ethanol (ABE) fermentation has recently gained renewed interest in the context of biorefinery approaches for the production of fuels and chemicals from renewable resources. The availability of new methods and knowledge opens many new doors for industrial microbiology, and a comprehensive view on this process is worthwhile due to the new interest. This thematic issue of FEMS Microbiology Letters, dedicated to the 100th anniversary of the first industrial exploitation of Chaim Weizmann's ABE fermentation process, covers the main aspects of old and new developments, thereby outlining a model development in biotechnology. All major aspects of industrial microbiology are exemplified by this single process. This includes new technologies, such as the latest developments in metabolic engineering, the exploitation of biodiversity and discoveries of new regulatory systems such as for microbial stress tolerance, as well as technological aspects, such as bio- and down-stream processing. © FEMS 2016.
Zu Ermgassen, Philine S. E.; Spalding, Mark D.; Blake, Brady; Coen, Loren D.; Dumbauld, Brett; Geiger, Steve; Grabowski, Jonathan H.; Grizzle, Raymond; Luckenbach, Mark; McGraw, Kay; Rodney, William; Ruesink, Jennifer L.; Powers, Sean P.; Brumbaugh, Robert
2012-01-01
Historic baselines are important in developing our understanding of ecosystems in the face of rapid global change. While a number of studies have sought to determine changes in extent of exploited habitats over historic timescales, few have quantified such changes prior to late twentieth century baselines. Here, we present, to our knowledge, the first ever large-scale quantitative assessment of the extent and biomass of marine habitat-forming species over a 100-year time frame. We examined records of wild native oyster abundance in the United States from a historic, yet already exploited, baseline between 1878 and 1935 (predominantly 1885–1915), and a current baseline between 1968 and 2010 (predominantly 2000–2010). We quantified the extent of oyster grounds in 39 estuaries historically and 51 estuaries from recent times. Data from 24 estuaries allowed comparison of historic to present extent and biomass. We found evidence for a 64 per cent decline in the spatial extent of oyster habitat and an 88 per cent decline in oyster biomass over time. The difference between these two numbers illustrates that current areal extent measures may be masking significant loss of habitat through degradation.
Ensembl Genomes 2013: scaling up access to genome-wide data
USDA-ARS's Scientific Manuscript database
Ensembl Genomes (http://www.ensemblgenomes.org) is an integrating resource for genome-scale data from non-vertebrate species. The project exploits and extends technologies for genome annotation, analysis and dissemination, developed in the context of the vertebrate-focused Ensembl project, and provi...
NASA Astrophysics Data System (ADS)
Aghion, S.; Ariga, A.; Bollani, M.; Ereditato, A.; Ferragut, R.; Giammarchi, M.; Lodari, M.; Pistillo, C.; Sala, S.; Scampoli, P.; Vladymyrov, M.
2018-05-01
Nuclear emulsions are capable of very high position resolution in the detection of ionizing particles. This feature can be exploited to directly resolve the micrometric-scale fringe pattern produced by a matter-wave interferometer for low energy positrons (in the 10–20 keV range). We have tested the performance of emulsion films in this specific scenario. Exploiting silicon nitride diffraction gratings as absorption masks, we produced periodic patterns with features comparable to the expected interferometer signal. Test samples with periodicities of 6, 7 and 20 μm were exposed to the positron beam, and the patterns were clearly reconstructed. Our results support the feasibility of matter-wave interferometry experiments with positrons.
Towards a New Assessment of Urban Areas from Local to Global Scales
NASA Astrophysics Data System (ADS)
Bhaduri, B. L.; Roy Chowdhury, P. K.; McKee, J.; Weaver, J.; Bright, E.; Weber, E.
2015-12-01
Since early 2000s, starting with NASA MODIS, satellite based remote sensing has facilitated collection of imagery with medium spatial resolution but high temporal resolution (daily). This trend continues with an increasing number of sensors and data products. Increasing spatial and temporal resolutions of remotely sensed data archives, from both public and commercial sources, have significantly enhanced the quality of mapping and change data products. However, even with automation of such analysis on evolving computing platforms, rates of data processing have been suboptimal largely because of the ever-increasing pixel to processor ratio coupled with limitations of the computing architectures. Novel approaches utilizing spatiotemporal data mining techniques and computational architectures have emerged that demonstrates the potential for sustained and geographically scalable landscape monitoring to be operational. We exemplify this challenge with two broad research initiatives on High Performance Geocomputation at Oak Ridge National Laboratory: (a) mapping global settlement distribution; (b) developing national critical infrastructure databases. Our present effort, on large GPU based architectures, to exploit high resolution (1m or less) satellite and airborne imagery for extracting settlements at global scale is yielding understanding of human settlement patterns and urban areas at unprecedented resolution. Comparison of such urban land cover database, with existing national and global land cover products, at various geographic scales in selected parts of the world is revealing intriguing patterns and insights for urban assessment. Early results, from the USA, Taiwan, and Egypt, indicate closer agreements (5-10%) in urban area assessments among databases at larger, aggregated geographic extents. However, spatial variability at local scales could be significantly different (over 50% disagreement).
Inter-relationships between corrosion and mineral-scale deposition in aqueous systems.
Hodgkiess, T
2004-01-01
The processes of corrosion and scale deposition in natural and process waters are often linked and this paper considers a number of instances of interactions between the two phenomena. In some circumstances a scale layer (e.g. calcium carbonate) can be advantageously utilised as a corrosion-protection coating on components and this feature has been exploited for many decades in the conditioning of water to induce spontaneous precipitation of a scale layer upon the surfaces of engineering equipment. The electrochemical mechanisms associated with some corrosion and corrosion-control processes can promote alkaline-scale deposition directly upon component surfaces. This is a feature that can be exploited in the operation of cathodic protection (CP) of structures and components submerged in certain types of water (e.g. seawater). Similar phenomena can occur during bi-metallic corrosion and a case study, involving carbon steel/stainless steel couples in seawater, is presented. Additional complexities pertain during cyclic loading of submerged reinforced concrete members in which scale deposition may reduce the severity of fatigue stresses but can be associated with severe corrosion damage to embedded reinforcing steel. Also considered are scale-control/corrosion interactions in thermal desalination plant and an indirect consequence of the scale-control strategy on vapourside corrosion is discussed.
Energy and Quality-Aware Multimedia Signal Processing
NASA Astrophysics Data System (ADS)
Emre, Yunus
Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications such as FIR filter and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on a combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for the Discrete Cosine Transform shows, on average, 33% to 46% reduction in energy consumption while incurring only 0.5 dB to 1.5 dB loss in PSNR.
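A sketch of the kind of truncated sum-of-absolute-differences computation described above: absolute pixel differences are saturated to a few low-order bits before accumulation, which approximates the full SAD well when most differences are small. The bit width and block size are illustrative, and the saturation rule is one plausible reading of MSB truncation rather than the exact hardware scheme of the dissertation.

    import numpy as np

    def sad_full(a, b):
        return int(np.abs(a.astype(np.int16) - b.astype(np.int16)).sum())

    def sad_msb_truncated(a, b, keep_bits=4):
        ad = np.abs(a.astype(np.int16) - b.astype(np.int16))
        ad = np.minimum(ad, (1 << keep_bits) - 1)   # saturate away the high-order bits
        return int(ad.sum())

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
    cand = np.clip(ref.astype(np.int16) + rng.integers(-7, 8, size=(16, 16)), 0, 255).astype(np.uint8)
    print(sad_full(ref, cand), sad_msb_truncated(ref, cand))   # close for well-matched blocks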
Using Powder Cored Tubular Wire Technology to Enhance Electron Beam Freeform Fabricated Structures
NASA Technical Reports Server (NTRS)
Gonzales, Devon; Liu, Stephen; Domack, Marcia; Hafley, Robert
2016-01-01
Electron Beam Freeform Fabrication (EBF3) is an additive manufacturing technique, developed at NASA Langley Research Center, capable of fabricating large-scale aerospace parts. Advantages of using EBF3 as opposed to conventional manufacturing methods include decreased design-to-product time, decreased wasted material, and the ability to adapt controls to produce geometrically complex parts with properties comparable to wrought products. However, to fully exploit the potential of the EBF3 process, development of materials tailored to the process is required. Powder cored tubular wire (PCTW) technology was used to modify Ti-6Al-4V and Al 6061 feedstock to enhance alloy content, refine grain size, and create a metal matrix composite in the as-solidified structures, respectively.
Exploration versus exploitation in space, mind, and society
Hills, Thomas T.; Todd, Peter M.; Lazer, David; Redish, A. David; Couzin, Iain D.
2015-01-01
Search is a ubiquitous property of life. Although diverse domains have worked on search problems largely in isolation, recent trends across disciplines indicate that the formal properties of these problems share similar structures and, often, similar solutions. Moreover, internal search (e.g., memory search) shows similar characteristics to external search (e.g., spatial foraging), including shared neural mechanisms consistent with a common evolutionary origin across species. Search problems and their solutions also scale from individuals to societies, underlying and constraining problem solving, memory, information search, and scientific and cultural innovation. In summary, search represents a core feature of cognition, with a vast influence on its evolution and processes across contexts and requiring input from multiple domains to understand its implications and scope.
Enhanced Conformational Sampling Using Replica Exchange with Collective-Variable Tempering.
Gil-Ley, Alejandro; Bussi, Giovanni
2015-03-10
The computational study of conformational transitions in RNA and proteins with atomistic molecular dynamics often requires suitable enhanced sampling techniques. We here introduce a novel method where concurrent metadynamics are integrated in a Hamiltonian replica-exchange scheme. The ladder of replicas is built with different strengths of the bias potential exploiting the tunability of well-tempered metadynamics. Using this method, free-energy barriers of individual collective variables are significantly reduced compared with simple force-field scaling. The introduced methodology is flexible and allows adaptive bias potentials to be self-consistently constructed for a large number of simple collective variables, such as distances and dihedral angles. The method is tested on alanine dipeptide and applied to the difficult problem of conformational sampling in a tetranucleotide.
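The exchange step in such a Hamiltonian replica-exchange scheme can be written generically as below: two replicas at the same temperature, carrying bias potentials of different strength, swap configurations with a Metropolis probability in which the shared unbiased force-field energy cancels. This is the standard acceptance rule, sketched with hypothetical names; it is not the metadynamics implementation itself.

    import math
    import random

    def swap_accepted(bias_i, bias_j, x_i, x_j, kT, rng=random):
        """Metropolis test for exchanging configurations x_i, x_j between replicas
        i and j whose Hamiltonians differ only by the bias potentials bias_i, bias_j."""
        delta = (bias_i(x_i) + bias_j(x_j)) - (bias_i(x_j) + bias_j(x_i))
        if delta >= 0:
            return True
        return rng.random() < math.exp(delta / kT)

    # toy example: harmonic biases of different strength on one collective variable
    strong = lambda s: 5.0 * (s - 1.0) ** 2
    weak = lambda s: 1.0 * (s - 1.0) ** 2
    print(swap_accepted(strong, weak, x_i=0.2, x_j=1.1, kT=2.5))

Because only the bias energies enter the test, a ladder of replicas differing solely in bias strength can exchange configurations cheaply, which is what lets the unbiased replica profit from the enhanced sampling of the strongly biased ones.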
Computing Properties of Hadrons, Nuclei and Nuclear Matter from Quantum Chromodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, Martin J.
This project was part of a coordinated software development effort that the nuclear physics lattice QCD community pursues in order to ensure that lattice calculations can make optimal use of present and forthcoming leadership-class and dedicated hardware, including that of the national laboratories, and to prepare for the exploitation of future computational resources in the exascale era. The UW team improved and extended software libraries used in lattice QCD calculations related to multi-nucleon systems, enhanced production running codes related to load balancing multi-nucleon production on large-scale computing platforms, developed SQLite (addressable database) interfaces to efficiently archive and analyze multi-nucleon data, and developed a Mathematica interface for the SQLite databases.
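As an illustration of the kind of SQLite archiving interface described above, the following sketch stores and retrieves per-configuration correlator values with Python's sqlite3 module; the table layout and function names are hypothetical and are not the project's actual schema.

```python
import sqlite3

# Hypothetical schema for archiving multi-nucleon correlator measurements;
# column names are illustrative only.
SCHEMA = """
CREATE TABLE IF NOT EXISTS correlators (
    ensemble   TEXT,
    config_id  INTEGER,
    channel    TEXT,
    t          INTEGER,
    re_value   REAL,
    im_value   REAL,
    PRIMARY KEY (ensemble, config_id, channel, t)
);
"""

def archive(db_path, ensemble, config_id, channel, values):
    """Store one correlator (a list of complex samples indexed by time slice)."""
    with sqlite3.connect(db_path) as con:
        con.execute(SCHEMA)
        con.executemany(
            "INSERT OR REPLACE INTO correlators VALUES (?, ?, ?, ?, ?, ?)",
            [(ensemble, config_id, channel, t, v.real, v.imag)
             for t, v in enumerate(values)],
        )

def load(db_path, ensemble, channel):
    """Return {config_id: [(t, re, im), ...]} for later averaging/analysis."""
    with sqlite3.connect(db_path) as con:
        rows = con.execute(
            "SELECT config_id, t, re_value, im_value FROM correlators "
            "WHERE ensemble = ? AND channel = ? ORDER BY config_id, t",
            (ensemble, channel),
        ).fetchall()
    out = {}
    for cfg, t, re, im in rows:
        out.setdefault(cfg, []).append((t, re, im))
    return out
```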
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
Thermoelectric harvesting of low temperature natural/waste heat
NASA Astrophysics Data System (ADS)
Rowe, David Michael
2012-06-01
Apart from specialized space requirements, current development in applications of thermoelectric generation mainly relates to reducing harmful carbon emissions and decreasing costly fuel consumption through the recovery of exhaust heat from fossil fuel powered engines and emissions from industrial utilities. The focus on these applications is to the detriment of wider exploitation of thermoelectrics with other sources of heat energy; in particular, naturally occurring and waste low temperature heat receives little, if any, attention. In this presentation, thermoelectric generation applications, both potential and real, in harvesting low temperature waste/natural heat are reviewed. The use of thermoelectrics to harvest solar energy, ocean thermal energy, geothermal heat and waste heat is discussed, and their credibility as future large-scale sources of electrical power is assessed.
Bioremediation of mercury: not properly exploited in contaminated soils!
Mahbub, Khandaker Rayhan; Bahar, Md Mezbaul; Labbate, Maurizio; Krishnan, Kannan; Andrews, Stuart; Naidu, Ravi; Megharaj, Mallavarapu
2017-02-01
Contamination of land and water caused by the heavy metal mercury (Hg) poses a serious threat to biota worldwide. The seriousness of this neurotoxin is characterized by its ability to accumulate along food chains and to bind to thiol groups in living tissue. Therefore, different remediation approaches have been implemented to rehabilitate Hg-contaminated sites. Bioremediation is considered a cheaper and greener technology than conventional physico-chemical means. Hg-volatilizing bacteria have been used at large scale to clean up Hg-contaminated waters, but no comparable approach exists for remediating Hg-contaminated soils. This review focuses on recent uses of Hg-resistant bacteria in the bioremediation of mercury-contaminated sites, the limitations and advantages of this approach, and the gaps in existing research.
Towards scalable Byzantine fault-tolerant replication
NASA Astrophysics Data System (ADS)
Zbierski, Maciej
2017-08-01
Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.
Deep Learning with Hierarchical Convolutional Factor Analysis
Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence
2013-01-01
Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342
Bromelain: an overview of industrial application and purification strategies.
Arshad, Zatul Iffah Mohd; Amid, Azura; Yusof, Faridah; Jaswir, Irwandi; Ahmad, Kausar; Loke, Show Pau
2014-09-01
This review highlights the use of bromelain in various applications with up-to-date literature on the purification of bromelain from pineapple fruit and waste such as peel, core, crown, and leaves. Bromelain, a cysteine protease, has been exploited commercially in many applications in the food, beverage, tenderization, cosmetic, pharmaceutical, and textile industries. Researchers worldwide have been directing their interest to purification strategies by applying conventional and modern approaches, such as manipulating the pH, affinity, hydrophobicity, and temperature conditions in accord with the unique properties of bromelain. The amount of downstream processing will depend on its intended application in industries. The breakthrough of recombinant DNA technology has facilitated the large-scale production and purification of recombinant bromelain for novel applications in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chao; Pouransari, Hadi; Rajamanickam, Sivasankaran
We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
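The core idea of exploiting low-rank fill-in blocks can be illustrated with a minimal sketch: compress an off-diagonal block with a truncated SVD whose rank is chosen from a tolerance, so the block can later be stored and applied in factored form. This is only a schematic of the compression step, not the authors' distributed solver.

```python
import numpy as np

def compress_block(block, tol=1e-6):
    """Low-rank factorization block ~ U @ V, keeping singular values above
    tol * s_max.  Fill-in blocks that are numerically low-rank can then be
    stored and applied far more cheaply than dense blocks."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k] * s[:k], Vt[:k, :]    # shapes (m, k) and (k, n)

# toy check on a smooth (hence numerically low-rank) interaction block
x = np.linspace(0.0, 1.0, 200)
block = 1.0 / (2.0 + np.abs(x[:, None] - x[None, :]))
U, V = compress_block(block, tol=1e-8)
print(U.shape[1], np.linalg.norm(block - U @ V) / np.linalg.norm(block))
```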
Regional variation in diets of breeding Red-shouldered hawks
Strobel, Bradley N.; Boal, Clint W.
2010-01-01
We collected data on breeding season diet composition of Red-shouldered Hawks (Buteo lineatus) in south Texas and compared these data with those reported from studies elsewhere to examine large-scale spatial variation in prey use in eastern North America. Red-shouldered Hawk diets aligned into two significantly different groups, which appear to correlate with latitude. The diets of Red-shouldered Hawks in group 1, which are of more northern latitudes, had significantly more mammalian prey and significantly less amphibian prey than those in group 2, which are at more southerly latitudes. Our meta-analysis demonstrated the dietary flexibility of Red-shouldered Hawks, which likely accounts for their broad distribution through exploitation of regional variation in taxon-specific prey availability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, G.A.; Commer, M.
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover, when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required that exploit computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.
Frequency-encoded photonic qubits for scalable quantum information processing
Lukens, Joseph M.; Lougovski, Pavel
2016-12-21
Among the objectives for large-scale quantum computation is the quantum interconnect: a device that uses photons to interface qubits that otherwise could not interact. However, the current approaches require photons indistinguishable in frequency—a major challenge for systems experiencing different local environments or of different physical compositions altogether. Here, we develop an entirely new platform that actually exploits such frequency mismatch for processing quantum information. Labeled “spectral linear optical quantum computation” (spectral LOQC), our protocol offers favorable linear scaling of optical resources and enjoys an unprecedented degree of parallelism, as an arbitrary N-qubit quantum gate may be performed in parallel on multiple N-qubit sets in the same linear optical device. Not only does spectral LOQC offer new potential for optical interconnects, but it also brings the ubiquitous technology of high-speed fiber optics to bear on photonic quantum information, making wavelength-configurable and robust optical quantum systems within reach.
Principles of cooperation across systems: from human sharing to multicellularity and cancer.
Aktipis, Athena
2016-01-01
From cells to societies, several general principles arise again and again that facilitate cooperation and suppress conflict. In this study, I describe three general principles of cooperation and how they operate across systems including human sharing, cooperation in animal and insect societies and the massively large-scale cooperation that occurs in our multicellular bodies. The first principle is that of Walk Away: that cooperation is enhanced when individuals can leave uncooperative partners. The second principle is that resource sharing is often based on the need of the recipient (i.e., need-based transfers) rather than on strict account-keeping. And the last principle is that effective scaling up of cooperation requires increasingly sophisticated and costly cheater suppression mechanisms. By comparing how these principles operate across systems, we can better understand the constraints on cooperation. This can facilitate the discovery of novel ways to enhance cooperation and suppress cheating in its many forms, from social exploitation to cancer.
Morphological filtering and multiresolution fusion for mammographic microcalcification detection
NASA Astrophysics Data System (ADS)
Chen, Lulin; Chen, Chang W.; Parker, Kevin J.
1997-04-01
Mammographic images are often of relatively low contrast and poor sharpness with non-stationary background or clutter, and are usually corrupted by noise. In this paper, we propose a new method for microcalcification detection using gray-scale morphological filtering followed by multiresolution fusion, and present a unified general filtering form called the local operating transformation for whitening filtering and adaptive thresholding. The gray-scale morphological filters are used to remove all large areas that are considered non-stationary background or clutter variations, i.e., to prewhiten the images. The multiresolution fusion decision is based on matched filter theory. In addition to the normal matched filter, the Laplacian matched filter, which is directly related through the wavelet transforms to multiresolution analysis, is exploited for microcalcification feature detection. At the multiresolution fusion stage, region-growing techniques are used at each resolution level, and the parent-child relations between resolution levels are adopted to make the final detection decision. FROC performance is computed from tests on the Nijmegen database.
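A minimal sketch of the gray-scale morphological prewhitening step is given below, using a white top-hat (image minus gray-scale opening) to suppress large-area background while keeping small bright structures; the structuring-element size and the use of scipy.ndimage are illustrative assumptions, not the paper's exact filter.

```python
import numpy as np
from scipy import ndimage

def tophat_prewhiten(image, background_size=15):
    """Suppress slowly varying background with a gray-scale opening and
    return the residual ("white top-hat").  Structures wider than
    `background_size` pixels are treated as background/clutter, while
    small bright spots (candidate microcalcifications) are preserved.
    Generic illustration only, not the paper's exact filter bank.
    """
    background = ndimage.grey_opening(image, size=(background_size, background_size))
    return image - background

# toy usage: a small bright spot on a smooth ramp background
img = np.add.outer(np.linspace(0.0, 50.0, 64), np.linspace(0.0, 50.0, 64))
img[30:32, 30:32] += 40.0
residual = tophat_prewhiten(img)
print(residual.max(), residual[:20, :20].max())  # spot stands out; ramp largely suppressed
```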
High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.
Song, Shiyu; Chandraker, Manmohan; Guest, Clark C
2016-04-01
We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use the known height of the camera above the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
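The ground-plane scale correction reduces to a simple idea: rescale the up-to-scale translations so the estimated camera height matches the known mounting height. The sketch below is a toy illustration under that assumption and omits the paper's cue-combination machinery; the 1.7 m mounting height is a placeholder value.

```python
import numpy as np

def correct_scale(translations, estimated_cam_height, true_cam_height=1.7):
    """Rescale up-to-scale monocular translations so that the estimated
    camera height above the ground plane matches the known mounting height
    (1.7 m here is an assumed value; adjust for the actual rig)."""
    s = true_cam_height / estimated_cam_height
    return [s * t for t in translations]

# toy usage: SFM places the ground plane 0.85 "units" below the camera,
# but the camera is mounted 1.70 m above the road -> scale factor 2.0
t_rel = [np.array([0.0, 0.0, 0.5]), np.array([0.01, 0.0, 0.48])]
print(correct_scale(t_rel, estimated_cam_height=0.85))
```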
A Novel Device Addressing Design Challenges for Passive Fluid Phase Separations Aboard Spacecraft
NASA Astrophysics Data System (ADS)
Weislogel, M. M.; Thomas, E. A.; Graf, J. C.
2009-07-01
Capillary solutions have long existed for the control of liquid inventories in spacecraft fluid systems such as liquid propellants, cryogens and thermal fluids for temperature control. Such large length scale, 'low-gravity,' capillary systems exploit container geometry and fluid properties—primarily wetting—to passively locate or transport fluids to desired positions for a variety of purposes. Such methods have only been confidently established if the wetting conditions are known and favorable. In this paper, several of the significant challenges for 'capillary solutions' to low-gravity multiphase fluids management aboard spacecraft are briefly reviewed in light of applications common to life support systems that emphasize the impact of the widely varying wetting properties typical of aqueous systems. A restrictive though no less typifying example of passive phase separation in a urine collection system is highlighted that identifies key design considerations potentially met by predominately capillary solutions. Sample results from novel scale model prototype testing aboard a NASA low-g aircraft are presented that support the various design considerations.
Averill, Colin
2014-10-01
Allocation trade-offs shape ecological and biogeochemical phenomena at local to global scale. Plant allocation strategies drive major changes in ecosystem carbon cycling. Microbial allocation to enzymes that decompose carbon vs. organic nutrients may similarly affect ecosystem carbon cycling. Current solutions to this allocation problem prioritise stoichiometric tradeoffs implemented in plant ecology. These solutions may not maximise microbial growth and fitness under all conditions, because organic nutrients are also a significant carbon resource for microbes. I created multiple allocation frameworks and simulated microbial growth using a microbial explicit biogeochemical model. I demonstrate that prioritising stoichiometric trade-offs does not optimise microbial allocation, while exploiting organic nutrients as carbon resources does. Analysis of continental-scale enzyme data supports the allocation patterns predicted by this framework, and modelling suggests large deviations in soil C loss based on which strategy is implemented. Therefore, understanding microbial allocation strategies will likely improve our understanding of carbon cycling and climate. © 2014 John Wiley & Sons Ltd/CNRS.
OWL: A scalable Monte Carlo simulation suite for finite-temperature study of materials
NASA Astrophysics Data System (ADS)
Li, Ying Wai; Yuk, Simuck F.; Cooper, Valentino R.; Eisenbach, Markus; Odbadrakh, Khorgolkhuu
The OWL suite is a simulation package for performing large-scale Monte Carlo simulations. Its object-oriented, modular design enables it to interface with various external packages for energy evaluations. It is therefore applicable to study the finite-temperature properties for a wide range of systems: from simple classical spin models to materials where the energy is evaluated by ab initio methods. This scheme not only allows for the study of thermodynamic properties based on first-principles statistical mechanics, it also provides a means for massive, multi-level parallelism to fully exploit the capacity of modern heterogeneous computer architectures. We will demonstrate how improved strong and weak scaling is achieved by employing novel, parallel and scalable Monte Carlo algorithms, as well as the applications of OWL to a few selected frontier materials research problems. This research was supported by the Office of Science of the Department of Energy under contract DE-AC05-00OR22725.
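The modular design described above, in which energy evaluation is decoupled from the sampling driver, can be sketched as a Metropolis loop that takes the energy function as a plug-in callable; this toy Python version (with a 1D Ising chain standing in for an external energy package) is only a schematic of the idea, not OWL's actual interface.

```python
import math
import random

def metropolis(energy, propose, state, beta, n_steps=10_000):
    """Generic Metropolis driver: `energy(state)` and `propose(state)` are
    plug-in callables, so the sampler is agnostic to how the energy is
    evaluated (classical spin model, ab initio code wrapper, ...)."""
    e = energy(state)
    for _ in range(n_steps):
        cand = propose(state)
        e_cand = energy(cand)
        if e_cand <= e or random.random() < math.exp(-beta * (e_cand - e)):
            state, e = cand, e_cand
    return state, e

# toy usage: 1D Ising chain with periodic boundaries
N = 32
def ising_energy(spins):
    return -sum(spins[i] * spins[(i + 1) % N] for i in range(N))
def flip_one(spins):
    i = random.randrange(N)
    return spins[:i] + [-spins[i]] + spins[i + 1:]

spins0 = [random.choice([-1, 1]) for _ in range(N)]
print(metropolis(ising_energy, flip_one, spins0, beta=1.0)[1])
```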
Stanton, Kasey; Daly, Elizabeth; Stasik-O'Brien, Sara M; Ellickson-Larew, Stephanie; Clark, Lee Anna; Watson, David
2017-09-01
The primary goal of this study was to explicate the construct validity of the Narcissistic Personality Inventory (NPI) and the Hypomanic Personality Scale (HPS) by examining their relations both to each other and to measures of personality and psychopathology in a community sample (N = 255). Structural evidence indicates that the NPI is defined by Leadership/Authority, Grandiose Exhibitionism, and Entitlement/Exploitativeness factors, whereas the HPS is characterized by specific dimensions reflecting Social Vitality, Mood Volatility, and Excitement. Our results establish that (a) factor-based subscales from these instruments display divergent patterns of relations that are obscured when relying exclusively on total scores and (b) some NPI and HPS subscales more clearly tap content specifically relevant to narcissism and mania, respectively, than others. In particular, our findings challenge the construct validity of the NPI Leadership/Authority and HPS Social Vitality subscales, which appear to assess overlapping assertiveness content that is largely adaptive in nature.
Towards the Development of THz-Sensors for the Detection of African Trypanosomes
NASA Astrophysics Data System (ADS)
Knieß, Robert; Wagner, Carolin B.; Ulrich Göringer, H.; Mueh, Mario; Damm, Christian; Sawallich, Simon; Chmielak, Bartos; Plachetka, Ulrich; Lemme, Max
2018-03-01
Human African trypanosomiasis (HAT) is a neglected tropical disease (NTD) for which adequate therapeutic and diagnostic measures are still lacking. The causative agent of HAT is the African trypanosome, a single-cell parasite, which propagates in the blood and cerebrospinal fluid of infected patients. Although different testing methods for the pathogen exist, none is robust, reliable and cost-efficient enough to support large-scale screening and control programs. Here we propose the design of a new sensor type for the detection of infective-stage trypanosomes. The sensor exploits the highly selective binding capacity of nucleic acid aptamers to the surface of the parasite in combination with passive sensor structures to allow an electromagnetic remote read-out using terahertz (THz) radiation. The short wavelength interacts more strongly with the parasite cells than longer wavelengths do, which is essential for high sensitivity. We present two different sensor structures, using both micro- and nano-scale elements, as well as different measurement principles.
Zhou, Xi; Xu, Huihua; Cheng, Jiyi; Zhao, Ni; Chen, Shih-Chi
2015-01-01
A continuous roll-to-roll microcontact printing (MCP) platform promises large-area nanoscale patterning with significantly improved throughput and a great variety of applications, e.g. precision patterning of metals, bio-molecules, colloidal nanocrystals, etc. Compared with nanoimprint lithography, MCP does not require a thermal imprinting step (which limits the speed and material choices), but it instead demands extreme precision, with multi-axis positioning and misalignment correction capabilities for large-area adaptation. In this work, we exploit a flexure-based mechanism that enables continuous MCP with 500 nm precision and 0.05 N force control. The fully automated roll-to-roll platform is coupled with a new backfilling MCP chemistry optimized for high-speed patterning of gold and silver. Gratings of 300, 400, and 600 nm line-width at various locations on a 4-inch plastic substrate are fabricated at a speed of 60 cm/min. Our work represents the first example of roll-to-roll MCP with high reproducibility and wafer-scale production capability at nanometer resolution. The precision roll-to-roll platform can be readily applied to other material systems. PMID:26037147
NASA Astrophysics Data System (ADS)
Shuang, Y.; Sutou, Y.; Hatayama, S.; Shindo, S.; Song, Y. H.; Ando, D.; Koike, J.
2018-04-01
Phase-change random access memory (PCRAM) is enabled by a large resistance contrast between amorphous and crystalline phases upon reversible switching between the two states. Thus, great efforts have been devoted to identifying potential phase-change materials (PCMs) with large electrical contrast to realize a more accurate reading operation. In contrast, although the truly dominant resistance in a scaled PCRAM cell is contact resistance, less attention has been paid toward the investigation of the contact property between PCMs and electrode metals. This study aims to propose a non-bulk-resistance-dominant PCRAM whose resistance is modulated only by contact. The contact-resistance-dominated PCM exploited here is N-doped Cr2Ge2Te6 (NCrGT), which exhibits almost no electrical resistivity difference between the two phases but exhibits a typical switching behavior involving a three-order-of-magnitude SET/RESET resistance ratio owing to its large contact resistance contrast. The conduction mechanism was discussed on the basis of current-voltage characteristics of the interface between the NCrGT and the W electrode.
Landslide Hazard Assessment In Mountaneous Area of Uzbekistan
NASA Astrophysics Data System (ADS)
Nyazov, R. A.; Nurtaev, B. S.
Because of the growth of population and the taking of flat areas for agriculture, mountain areas in Uzbekistan have been intensively developed in recent years, producing an increase in natural and technogenic processes. Landslides are the most dangerous of these phenomena, and 7240 of them occurred during the last 40 years; more than 50% took place between 1991 and 2000. The situation is aggravated because these regions are situated in zones where disastrous earthquakes with M > 7 occurred in the past and are expected in the future. The continuing seismic gap in Uzbekistan during the last 15-20 years and the recent disastrous earthquakes in Afghanistan, Iran, Turkey, Greece, Taiwan and India are cause for concern. On the basis of long-term observations, criteria for landslide hazard assessment (suddenness, displacement interval, straight-line directivity, and kind of destruction of residential buildings) are proposed. This methodology was developed at two geographic levels: local (town scale) and regional (region scale). Detailed risk analysis is performed at the local scale and extrapolated to the regional scale. The engineering-geologic parameters used in hazard estimation for landslides and mud flows are also divided into regional and local levels. Four degrees of danger of sliding processes are distinguished for compiling small-, medium- and large-scale maps. The Angren industrial area in the Tien Shan mountains is characterized by an initial seismic intensity of 8-9 (MSK scale). Here, human technological activity (open-cast mining) has initiated the formation of a large landslide that covers more than 8 square kilometers and corresponds to a volume of 800 billion cubic meters. In turn, the landslide can become the source of industrial emergencies. Using the Angren industrial mining region as an example, different scenarios for the safe movement of people and transport, the definition of regulating technologies for field improvement, and the exploitation of mountain water reservoirs are proposed for the prevention of dangerous geological processes.
NASA Astrophysics Data System (ADS)
Amann, Florian; Gischig, Valentin; Evans, Keith; Doetsch, Joseph; Jalali, Reza; Valley, Benoît; Krietsch, Hannes; Dutler, Nathan; Villiger, Linus; Brixel, Bernard; Klepikova, Maria; Kittilä, Anniina; Madonna, Claudio; Wiemer, Stefan; Saar, Martin O.; Loew, Simon; Driesner, Thomas; Maurer, Hansruedi; Giardini, Domenico
2018-02-01
In this contribution, we present a review of scientific research results that address seismo-hydromechanically coupled processes relevant for the development of a sustainable heat exchanger in low-permeability crystalline rock and introduce the design of the In situ Stimulation and Circulation (ISC) experiment at the Grimsel Test Site dedicated to studying such processes under controlled conditions. The review shows that research on reservoir stimulation for deep geothermal energy exploitation has been largely based on laboratory observations, large-scale projects and numerical models. Observations of full-scale reservoir stimulations have yielded important results. However, the limited access to the reservoir and the limited control over experimental conditions during deep reservoir stimulations are insufficient to resolve the details of the hydromechanical processes in a way that would enhance process understanding and aid future stimulation design. Small-scale laboratory experiments provide fundamental insights into various processes relevant for enhanced geothermal energy, but suffer from (1) difficulties and uncertainties in upscaling the results to the field scale and (2) relatively homogeneous material and stress conditions that lead to an oversimplistic fracture flow and/or hydraulic fracture propagation behavior that is not representative of a heterogeneous reservoir. Thus, there is a need for intermediate-scale hydraulic stimulation experiments with high experimental control that bridge the various scales and for which access to the target rock mass with a comprehensive monitoring system is possible. The ISC experiment is designed to address open research questions in a naturally fractured and faulted crystalline rock mass at the Grimsel Test Site (Switzerland). Two hydraulic injection phases were executed to enhance the permeability of the rock mass. During the injection phases, the rock mass deformation across fractures and within intact rock, the pore pressure distribution and propagation, and the microseismic response were monitored at a high spatial and temporal resolution.
Incorporating human-water dynamics in a hyper-resolution land surface model
NASA Astrophysics Data System (ADS)
Vergopolan, N.; Chaney, N.; Wanders, N.; Sheffield, J.; Wood, E. F.
2017-12-01
The increasing demand for water, energy, and food is leading to unsustainable groundwater and surface water exploitation. As a result, the human interactions with the environment, through alteration of land and water resources dynamics, need to be reflected in hydrologic and land surface models (LSMs). Advancements in representing human-water dynamics still leave challenges related to the lack of water use data, water allocation algorithms, and modeling scales. This leads to an over-simplistic representation of human water use in large-scale models, which in turn leads to an inability to capture extreme event signatures and to provide reliable information at stakeholder-level spatial scales. The emergence of hyper-resolution models allows one to address these challenges by simulating the hydrological processes and interactions with the human impacts at field scales. We integrated human-water dynamics into HydroBlocks - a hyper-resolution, field-scale resolving LSM. HydroBlocks explicitly solves the field-scale spatial heterogeneity of land surface processes through interacting hydrologic response units (HRUs); and its HRU-based model parallelization allows computationally efficient long-term simulations as well as ensemble predictions. The implemented human-water dynamics include groundwater and surface water abstraction to meet agricultural, domestic and industrial water demands. Furthermore, a supply-demand water allocation scheme based on relative costs helps to determine sectoral water use requirements and tradeoffs. A set of HydroBlocks simulations over the Midwest United States (daily, at 30-m spatial resolution for 30 years) is used to quantify the irrigation impacts on water availability. The model captures large reductions in total soil moisture and water table levels, as well as spatiotemporal changes in evapotranspiration and runoff peaks, with their intensity related to the adopted water management strategy. By incorporating human-water dynamics in a hyper-resolution LSM, this work allows for progress on hydrological monitoring and predictions, as well as drought preparedness and water impact assessments at relevant decision-making scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Chenchen; Martínez, Todd J.; SLAC National Accelerator Laboratory, Menlo Park, California 94025
We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).
Energy storage inherent in large tidal turbine farms
Vennell, Ross; Adcock, Thomas A. A.
2014-01-01
While wind farms have no inherent storage to supply power in calm conditions, this paper demonstrates that large tidal turbine farms in channels have short-term energy storage. This storage lies in the inertia of the oscillating flow and can be used to exceed the previously published upper limit for power production by currents in a tidal channel, while simultaneously maintaining stronger currents. Inertial storage exploits the ability of large farms to manipulate the phase of the oscillating currents by varying the farm's drag coefficient. This work shows that by optimizing how a large farm's drag coefficient varies during the tidal cycle it is possible to have some flexibility about when power is produced. This flexibility can be used in many ways, e.g. producing more power, or to better meet short predictable peaks in demand. This flexibility also allows trading total power production off against meeting peak demand, or mitigating the flow speed reduction owing to power extraction. The effectiveness of inertial storage is governed by the frictional time scale relative to either the duration of a half tidal cycle or the duration of a peak in power demand, thus has greater benefits in larger channels. PMID:24910516
SURFING BIOLOGICAL SURFACES: EXPLOITING THE NUCLEOID FOR PARTITION AND TRANSPORT IN BACTERIA
Vecchiarelli, Anthony G.; Mizuuchi, Kiyoshi; Funnell, Barbara E.
2012-01-01
The ParA family of ATPases are responsible for transporting bacterial chromosomes, plasmids, and large protein machineries. ParAs pattern the nucleoid in vivo, but how patterning functions or is exploited in transport is of considerable debate. Here we discuss the process of self-organization into patterns on the bacterial nucleoid and explore how it relates to the molecular mechanism of ParA action. We review ParA-mediated DNA partition as a general mechanism of how ATP-driven protein gradients on biological surfaces can result in spatial organization on a mesoscale. We also discuss how the nucleoid acts as a formidable diffusion barrier for large bodies in the cell, and make the case that the ParA family evolved to overcome the barrier by exploiting the nucleoid as a matrix for movement. PMID:22934804
Birds of a Feather: Neanderthal Exploitation of Raptors and Corvids
Finlayson, Clive; Brown, Kimberly; Blasco, Ruth; Rosell, Jordi; Negro, Juan José; Finlayson, Geraldine; Sánchez Marco, Antonio; Giles Pacheco, Francisco; Rodríguez Vidal, Joaquín; Carrión, José S.; Fa, Darren A.; Rodríguez Llanes, José M.
2012-01-01
The hypothesis that Neanderthals exploited birds for the use of their feathers or claws as personal ornaments in symbolic behaviour is revolutionary as it assigns unprecedented cognitive abilities to these hominins. This inference, however, is based on modest faunal samples and thus may not represent a regular or systematic behaviour. Here we address this issue by looking for evidence of such behaviour across a large temporal and geographical framework. Our analyses try to answer four main questions: 1) does a Neanderthal to raptor-corvid connection exist at a large scale, thus avoiding associations that might be regarded as local in space or time?; 2) did Middle (associated with Neanderthals) and Upper Palaeolithic (associated with modern humans) sites contain a greater range of these species than Late Pleistocene paleontological sites?; 3) is there a taphonomic association between Neanderthals and corvids-raptors at Middle Palaeolithic sites on Gibraltar, specifically Gorham's, Vanguard and Ibex Caves? and; 4) was the extraction of wing feathers a local phenomenon exclusive to the Neanderthals at these sites or was it a geographically wider phenomenon?. We compiled a database of 1699 Pleistocene Palearctic sites based on fossil bird sites. We also compiled a taphonomical database from the Middle Palaeolithic assemblages of Gibraltar. We establish a clear, previously unknown and widespread, association between Neanderthals, raptors and corvids. We show that the association involved the direct intervention of Neanderthals on the bones of these birds, which we interpret as evidence of extraction of large flight feathers. The large number of bones, the variety of species processed and the different temporal periods when the behaviour is observed, indicate that this was a systematic, geographically and temporally broad, activity that the Neanderthals undertook. Our results, providing clear evidence that Neanderthal cognitive capacities were comparable to those of Modern Humans, constitute a major advance in the study of human evolution. PMID:23029321
Birds of a feather: Neanderthal exploitation of raptors and corvids.
Finlayson, Clive; Brown, Kimberly; Blasco, Ruth; Rosell, Jordi; Negro, Juan José; Bortolotti, Gary R; Finlayson, Geraldine; Sánchez Marco, Antonio; Giles Pacheco, Francisco; Rodríguez Vidal, Joaquín; Carrión, José S; Fa, Darren A; Rodríguez Llanes, José M
2012-01-01
The hypothesis that Neanderthals exploited birds for the use of their feathers or claws as personal ornaments in symbolic behaviour is revolutionary as it assigns unprecedented cognitive abilities to these hominins. This inference, however, is based on modest faunal samples and thus may not represent a regular or systematic behaviour. Here we address this issue by looking for evidence of such behaviour across a large temporal and geographical framework. Our analyses try to answer four main questions: 1) does a Neanderthal to raptor-corvid connection exist at a large scale, thus avoiding associations that might be regarded as local in space or time?; 2) did Middle (associated with Neanderthals) and Upper Palaeolithic (associated with modern humans) sites contain a greater range of these species than Late Pleistocene paleontological sites?; 3) is there a taphonomic association between Neanderthals and corvids-raptors at Middle Palaeolithic sites on Gibraltar, specifically Gorham's, Vanguard and Ibex Caves? and; 4) was the extraction of wing feathers a local phenomenon exclusive to the Neanderthals at these sites or was it a geographically wider phenomenon?. We compiled a database of 1699 Pleistocene Palearctic sites based on fossil bird sites. We also compiled a taphonomical database from the Middle Palaeolithic assemblages of Gibraltar. We establish a clear, previously unknown and widespread, association between Neanderthals, raptors and corvids. We show that the association involved the direct intervention of Neanderthals on the bones of these birds, which we interpret as evidence of extraction of large flight feathers. The large number of bones, the variety of species processed and the different temporal periods when the behaviour is observed, indicate that this was a systematic, geographically and temporally broad, activity that the Neanderthals undertook. Our results, providing clear evidence that Neanderthal cognitive capacities were comparable to those of Modern Humans, constitute a major advance in the study of human evolution.
Detecting recurrent gene mutation in interaction network context using multi-scale graph diffusion.
Babaei, Sepideh; Hulsman, Marc; Reinders, Marcel; de Ridder, Jeroen
2013-01-23
Delineating the molecular drivers of cancer, i.e. determining cancer genes and the pathways which they deregulate, is an important challenge in cancer research. In this study, we aim to identify pathways of frequently mutated genes by exploiting their network neighborhood encoded in the protein-protein interaction network. To this end, we introduce a multi-scale diffusion kernel and apply it to a large collection of murine retroviral insertional mutagenesis data. The diffusion strength plays the role of scale parameter, determining the size of the network neighborhood that is taken into account. As a result, in addition to detecting genes with frequent mutations in their genomic vicinity, we find genes that harbor frequent mutations in their interaction network context. We identify densely connected components of known and putatively novel cancer genes and demonstrate that they are strongly enriched for cancer related pathways across the diffusion scales. Moreover, the mutations in the clusters exhibit a significant pattern of mutual exclusion, supporting the conjecture that such genes are functionally linked. Using multi-scale diffusion kernel, various infrequently mutated genes are found to harbor significant numbers of mutations in their interaction network neighborhood. Many of them are well-known cancer genes. The results demonstrate the importance of defining recurrent mutations while taking into account the interaction network context. Importantly, the putative cancer genes and networks detected in this study are found to be significant at different diffusion scales, confirming the necessity of a multi-scale analysis.
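A minimal sketch of the diffusion idea is shown below: per-gene mutation counts are smoothed over an interaction network with the kernel exp(-beta*L), where the diffusion strength beta acts as the scale parameter determining the size of the network neighborhood; the Laplacian form and toy network are illustrative assumptions rather than the study's exact kernel.

```python
import numpy as np
from scipy.linalg import expm

def diffuse_scores(adjacency, scores, betas=(0.1, 0.5, 1.0, 2.0)):
    """Smooth node scores (e.g. per-gene mutation counts) over an
    interaction network with the diffusion kernel K = exp(-beta * L),
    returning one smoothed profile per diffusion strength beta.
    Larger beta means a larger network neighborhood is taken into account."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    return {b: expm(-b * laplacian) @ scores for b in betas}

# toy network: a 5-gene path graph with mutations observed only at gene 0
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
counts = np.array([3.0, 0.0, 0.0, 0.0, 0.0])
for beta, smoothed in diffuse_scores(A, counts).items():
    print(beta, np.round(smoothed, 3))
```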
Does Fire Influence the Landscape-Scale Distribution of an Invasive Mesopredator?
Payne, Catherine J.; Ritchie, Euan G.; Kelly, Luke T.; Nimmo, Dale G.
2014-01-01
Predation and fire shape the structure and function of ecosystems globally. However, studies exploring interactions between these two processes are rare, especially at large spatial scales. This knowledge gap is significant not only for ecological theory, but also in an applied context, because it limits the ability of landscape managers to predict the outcomes of manipulating fire and predators. We examined the influence of fire on the occurrence of an introduced and widespread mesopredator, the red fox (Vulpes vulpes), in semi-arid Australia. We used two extensive and complementary datasets collected at two spatial scales. At the landscape-scale, we surveyed red foxes using sand-plots within 28 study landscapes – which incorporated variation in the diversity and proportional extent of fire-age classes – located across a 104 000 km2 study area. At the site-scale, we surveyed red foxes using camera traps at 108 sites stratified along a century-long post-fire chronosequence (0–105 years) within a 6630 km2 study area. Red foxes were widespread both at the landscape and site-scale. Fire did not influence fox distribution at either spatial scale, nor did other environmental variables that we measured. Our results show that red foxes exploit a broad range of environmental conditions within semi-arid Australia. The presence of red foxes throughout much of the landscape is likely to have significant implications for native fauna, particularly in recently burnt habitats where reduced cover may increase prey species’ predation risk. PMID:25291186
Arunachalam, Kantha D; Annamalai, Sathesh Kumar
2013-01-01
The exploitation of various plant materials for the biosynthesis of nanoparticles is considered a green technology as it does not involve any harmful chemicals. The aim of this study was to develop a simple biological method for the synthesis of silver and gold nanoparticles using Chrysopogon zizanioides. An aqueous leaf extract of C. zizanioides was used to synthesize silver and gold nanoparticles by the bioreduction of silver nitrate (AgNO3) and chloroauric acid (HAuCl4), respectively. Water-soluble organics present in the plant materials were mainly responsible for reducing silver or gold ions to nanosized Ag or Au particles. The synthesized silver and gold nanoparticles were characterized by ultraviolet (UV)-visible spectroscopy, scanning electron microscopy (SEM), energy dispersive X-ray analysis (EDAX), Fourier transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) analysis. The decline kinetics of aqueous silver/gold ions reacting with the C. zizanioides crude extract were determined by UV-visible spectroscopy. SEM analysis showed that aqueous gold ions, when exposed to the extract, were reduced, resulting in the biosynthesis of gold nanoparticles in the size range 20–50 nm. This eco-friendly approach to nanoparticle synthesis is simple and can be scaled up for large-scale production, with powerful bioactivity as demonstrated by the synthesized silver nanoparticles. The synthesized nanoparticles may have clinical use as antibacterial, antioxidant, and cytotoxic agents and can be used for biomedical applications. PMID:23861583
The Use of Intensity Scales In Exploiting Tsunami Historical Databases
NASA Astrophysics Data System (ADS)
Barberopoulou, A.; Scheele, F.
2015-12-01
Post-disaster assessments for historical tsunami events (>15 years old) are either scarce or contain limited information. In this study, we assess ways to examine tsunami impacts by utilizing data from old events, but more importantly we examine how to best utilize information contained in tsunami historical databases in order to provide meaningful products that describe the impact of an event. As such, a tsunami intensity scale was applied to two historical events that were observed in New Zealand (one local and one distant), in order to utilize the largest possible number of observations in our dataset. This is especially important for countries like New Zealand, where the tsunami historical record is short, going back only to the 19th century, and where instrument recordings are only available for the most recent events. We found that despite a number of challenges in using intensities - uncertainties partly due to limitations of historical event data - these data, with the help of GIS tools, can be used to produce hazard maps and offer an alternative way to exploit tsunami historical records. Most importantly, the assignment of intensities at each point of observation allows for the utilization of many more observations than if one depends on physical information alone, such as water heights. We hope these results may be used towards developing a well-defined methodology for hazard assessments and to refine our knowledge of past tsunami events for which the tsunami sources are largely unknown, and for which physical quantities describing the tsunami (e.g. water height, flood depth, run-up) are scarce.
High resilience in the Yamal-Nenets social–ecological system, West Siberian Arctic, Russia
Forbes, Bruce C.; Stammler, Florian; Kumpula, Timo; Meschtyb, Nina; Pajunen, Anu; Kaarlejärvi, Elina
2009-01-01
Tundra ecosystems are vulnerable to hydrocarbon development, in part because small-scale, low-intensity disturbances can affect vegetation, permafrost soils, and wildlife out of proportion to their spatial extent. Scaling up to include human residents, tightly integrated arctic social-ecological systems (SESs) are believed similarly susceptible to industrial impacts and climate change. In contrast to northern Alaska and Canada, most terrestrial and aquatic components of West Siberian oil and gas fields are seasonally exploited by migratory herders, hunters, fishers, and domesticated reindeer (Rangifer tarandus L.). Despite anthropogenic fragmentation and transformation of a large proportion of the environment, recent socioeconomic upheaval, and pronounced climate warming, we find the Yamal-Nenets SES highly resilient according to a few key measures. We detail the remarkable extent to which the system has successfully reorganized in response to recent shocks and evaluate the limits of the system's capacity to respond. Our analytical approach combines quantitative methods with participant observation to understand the overall effects of rapid land use and climate change at the level of the entire Yamal system, detect thresholds crossed using surrogates, and identify potential traps. Institutional constraints and drivers were as important as the documented ecological changes. Particularly crucial to success is the unfettered movement of people and animals in space and time, which allows them to alternately avoid or exploit a wide range of natural and anthropogenic habitats. However, expansion of infrastructure, concomitant terrestrial and freshwater ecosystem degradation, climate change, and a massive influx of workers underway present a looming threat to future resilience. PMID:20007776
A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.
Rutledge, Robert G
2011-03-02
Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.
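The regression step at the heart of LRE can be sketched as follows: per-cycle efficiencies are computed from successive fluorescence readings and regressed against fluorescence over the central region of the profile, the intercept giving the maximal efficiency. The window choice and synthetic profile below are illustrative assumptions, and the conversion of the fit into an absolute target quantity (as performed by the LRE Analyzer) is omitted.

```python
import numpy as np

def lre_fit(fluorescence, window):
    """Estimate maximal amplification efficiency (Emax) by linear regression
    of cycle efficiency against fluorescence over the central `window` of
    cycles (0-based indices).  E_c = F_c / F_{c-1} - 1 declines roughly
    linearly with F_c in that region; the intercept of the fit is Emax.
    Converting Emax and the slope into an absolute target quantity follows
    the sigmoidal equations of the LRE method and is not shown here.
    """
    f = np.asarray(fluorescence, dtype=float)
    eff = f[1:] / f[:-1] - 1.0                 # per-cycle efficiency
    lo, hi = window
    slope, intercept = np.polyfit(f[1:][lo:hi], eff[lo:hi], 1)
    return intercept, -slope                   # Emax, deltaE

# toy usage with a synthetic logistic-like amplification profile
cycles = np.arange(40)
profile = 10.0 / (1.0 + np.exp(-(cycles - 22) / 2.0)) + 0.05
emax, delta_e = lre_fit(profile, window=(18, 26))
print(round(emax, 3), round(delta_e, 4))
```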
A Java Program for LRE-Based Real-Time qPCR that Enables Large-Scale Absolute Quantification
Rutledge, Robert G.
2011-01-01
Background Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Findings Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. Conclusions The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples. PMID:21407812
A scalable moment-closure approximation for large-scale biochemical reaction networks
Kazeroonian, Atefeh; Theis, Fabian J.; Hasenauer, Jan
2017-01-01
Abstract Motivation: Stochastic molecular processes are a leading cause of cell-to-cell variability. Their dynamics are often described by continuous-time discrete-state Markov chains and simulated using stochastic simulation algorithms. As these stochastic simulations are computationally demanding, ordinary differential equation models for the dynamics of the statistical moments have been developed. The number of state variables of these approximating models, however, grows at least quadratically with the number of biochemical species. This limits their application to small- and medium-sized processes. Results: In this article, we present a scalable moment-closure approximation (sMA) for the simulation of statistical moments of large-scale stochastic processes. The sMA exploits the structure of the biochemical reaction network to reduce the covariance matrix. We prove that sMA yields approximating models whose number of state variables depends predominantly on local properties, i.e. the average node degree of the reaction network, instead of the overall network size. The resulting complexity reduction is assessed by studying a range of medium- and large-scale biochemical reaction networks. To evaluate the approximation accuracy and the improvement in computational efficiency, we study models for JAK2/STAT5 signalling and NFκB signalling. Our method is applicable to generic biochemical reaction networks and we provide an implementation, including an SBML interface, which renders the sMA easily accessible. Availability and implementation: The sMA is implemented in the open-source MATLAB toolbox CERENA and is available from https://github.com/CERENADevelopers/CERENA. Contact: jan.hasenauer@helmholtz-muenchen.de or atefeh.kazeroonian@tum.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881983
Changes in channel morphology over human time scales [Chapter 32
John M. Buffington
2012-01-01
Rivers are exposed to changing environmental conditions over multiple spatial and temporal scales, with the imposed environmental conditions and response potential of the river modulated to varying degrees by human activity and our exploitation of natural resources. Watershed features that control river morphology include topography (valley slope and channel...
Precise stellar surface gravities from the time scales of convectively driven brightness variations
Kallinger, Thomas; Hekker, Saskia; García, Rafael A.; Huber, Daniel; Matthews, Jaymie M.
2016-01-01
A significant part of the intrinsic brightness variations in cool stars of low and intermediate mass arises from surface convection (seen as granulation) and acoustic oscillations (p-mode pulsations). The characteristics of these phenomena are largely determined by the stars’ surface gravity (g). Detailed photometric measurements of either signal can yield an accurate value of g. However, even with ultraprecise photometry from NASA’s Kepler mission, many stars are too faint for current methods or only moderate accuracy can be achieved in a limited range of stellar evolutionary stages. This means that many of the stars in the Kepler sample, including exoplanet hosts, are not sufficiently characterized to fully describe the sample and exoplanet properties. We present a novel way to measure surface gravities with accuracies of about 4%. Our technique exploits the tight relation between g and the characteristic time scale of the combined granulation and p-mode oscillation signal. It is applicable to all stars with a convective envelope, including active stars. It can measure g in stars for which no other analysis is now possible. Because it depends on the time scale (and no other properties) of the signal, our technique is largely independent of the type of measurement (for example, photometry or radial velocity measurements) and the calibration of the instrumentation used. However, the oscillation signal must be temporally resolved; thus, it cannot be applied to dwarf stars observed by Kepler in its long-cadence mode. PMID:26767193
Precise stellar surface gravities from the time scales of convectively driven brightness variations.
Kallinger, Thomas; Hekker, Saskia; García, Rafael A; Huber, Daniel; Matthews, Jaymie M
2016-01-01
A significant part of the intrinsic brightness variations in cool stars of low and intermediate mass arises from surface convection (seen as granulation) and acoustic oscillations (p-mode pulsations). The characteristics of these phenomena are largely determined by the stars' surface gravity (g). Detailed photometric measurements of either signal can yield an accurate value of g. However, even with ultraprecise photometry from NASA's Kepler mission, many stars are too faint for current methods or only moderate accuracy can be achieved in a limited range of stellar evolutionary stages. This means that many of the stars in the Kepler sample, including exoplanet hosts, are not sufficiently characterized to fully describe the sample and exoplanet properties. We present a novel way to measure surface gravities with accuracies of about 4%. Our technique exploits the tight relation between g and the characteristic time scale of the combined granulation and p-mode oscillation signal. It is applicable to all stars with a convective envelope, including active stars. It can measure g in stars for which no other analysis is now possible. Because it depends on the time scale (and no other properties) of the signal, our technique is largely independent of the type of measurement (for example, photometry or radial velocity measurements) and the calibration of the instrumentation used. However, the oscillation signal must be temporally resolved; thus, it cannot be applied to dwarf stars observed by Kepler in its long-cadence mode.
Constraining local reionization histories with 21 cm observations
NASA Astrophysics Data System (ADS)
Beardsley, Adam
2018-01-01
Several low-frequency radio instruments are poised to detect faint signatures of the Epoch of Reionization through the redshifted HI 21 cm line, while near-infrared (NIR) observations continue to push studies of galactic properties to higher redshifts. With ongoing upgrades to radio facilities and the imminent launch of JWST, both fields are on the verge of revolutionary advances in the characterization of the EoR. But there remains an open question of how to marry these observations in a way that will fully exploit their complementary nature. NIR observations will reveal how galaxies formed and evolved, while 21 cm observations will trace the impact early galaxies had on their large-scale environments. Simultaneous observation of these two physical processes is difficult owing to the disparate scales probed by the instruments. I will present a method for using images of large-scale 21 cm brightness temperature to constrain the reionization history of a given region of a cosmological volume, thereby providing environmental context to galaxy observations in the same region. This framework is complicated by foreground contamination in 21 cm EoR images, but the contamination can be mitigated with filtering techniques, and useful information is still recoverable. By adding local reionization history to NIR surveys, we will be able to distinguish between galaxies in a variety of environments. This has the potential to enhance expected signatures such as the flattening of the luminosity function at high redshift, or reveal new observables altogether.
Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series
NASA Astrophysics Data System (ADS)
Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth
2017-12-01
The growing field of large-scale time-domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GP modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
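The paper's reference implementations are in C++, Python, and Julia; as a minimal usage sketch, the successor Python package celerite2 exposes the same SHO-based kernels (the simulated data and parameter values below are placeholders):

```python
import numpy as np
import celerite2
from celerite2 import terms

# Irregularly sampled synthetic light curve
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0, 10, 200))
yerr = 0.1 * np.ones_like(t)
y = np.sin(3.0 * t) + yerr * rng.standard_normal(len(t))

# A stochastically driven damped harmonic oscillator term: its covariance is a mixture of
# complex exponentials, so the likelihood evaluation below scales linearly with N.
kernel = terms.SHOTerm(S0=1.0, w0=3.0, Q=2.0)
gp = celerite2.GaussianProcess(kernel, mean=0.0)
gp.compute(t, yerr=yerr)
print("log likelihood:", gp.log_likelihood(y))
```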
Rascher, U; Alonso, L; Burkart, A; Cilia, C; Cogliati, S; Colombo, R; Damm, A; Drusch, M; Guanter, L; Hanus, J; Hyvärinen, T; Julitta, T; Jussila, J; Kataja, K; Kokkalis, P; Kraft, S; Kraska, T; Matveeva, M; Moreno, J; Muller, O; Panigada, C; Pikl, M; Pinto, F; Prey, L; Pude, R; Rossini, M; Schickling, A; Schurr, U; Schüttemeyer, D; Verrelst, J; Zemek, F
2015-12-01
Variations in photosynthesis still cause substantial uncertainties in predicting photosynthetic CO2 uptake rates and monitoring plant stress. Changes in actual photosynthesis that are not related to greenness of vegetation are difficult to measure by reflectance-based optical remote sensing techniques. Several activities are underway to evaluate the sun-induced fluorescence signal on the ground and on a coarse spatial scale using space-borne imaging spectrometers. Intermediate-scale observations using airborne imaging spectroscopy, which are critical to bridge the existing gap between small-scale field studies and global observations, are still insufficient. Here we present the first validated maps of sun-induced fluorescence at that critical, intermediate spatial resolution, employing the novel airborne imaging spectrometer HyPlant. HyPlant has an unprecedented spectral resolution, which for the first time allows quantifying sun-induced fluorescence fluxes in physical units according to the Fraunhofer Line Depth Principle that exploits solar and atmospheric absorption bands. Maps of sun-induced fluorescence show a large spatial variability between different vegetation types, which complements classical remote sensing approaches. Different crop types differ greatly in emitted fluorescence, which additionally changes over the seasonal cycle and thus may be related to the seasonal activation and deactivation of the photosynthetic machinery. We argue that sun-induced fluorescence emission is related to two processes: (i) the total absorbed radiation by photosynthetically active chlorophyll; and (ii) the functional status of actual photosynthesis and vegetation stress. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Chirayath, V.
2014-12-01
Fluid Lensing is a theoretical model and algorithm I present for fluid-optical interactions in turbulent flows as well as two-fluid surface boundaries that, when coupled with a unique computer vision and image-processing pipeline, may be used to significantly enhance the angular resolution of a remote sensing optical system, with applicability to high-resolution 3D imaging of subaqueous regions and through turbulent fluid flows. This novel remote sensing technology has recently been implemented on a quadcopter-based UAS for imaging shallow benthic systems to create the first dataset of a biosphere with unprecedented sub-cm-level imagery in 3D over areas as large as 15 square kilometers. Perturbed two-fluid boundaries with different refractive indices, such as the surface between the ocean and air, may be exploited for use as lensing elements for imaging targets on either side of the interface with enhanced angular resolution. I present theoretical developments behind Fluid Lensing and experimental results from its recent implementation for the Reactive Reefs project to image shallow reef ecosystems at cm scales. Preliminary results from petabyte-scale aerial survey efforts using Fluid Lensing to image at-risk coral reefs in American Samoa (August 2013) show broad applicability to large-scale automated species identification, morphology studies and reef ecosystem characterization for shallow marine environments and terrestrial biospheres, of crucial importance to understanding climate change's impact on coastal zones, global oxygen production and carbon sequestration.
Environmental impact of geometric earthwork construction in pre-Columbian Amazonia.
Carson, John Francis; Whitney, Bronwen S; Mayle, Francis E; Iriarte, José; Prümers, Heiko; Soto, J Daniel; Watling, Jennifer
2014-07-22
There is considerable controversy over whether pre-Columbian (pre-A.D. 1492) Amazonia was largely "pristine" and sparsely populated by slash-and-burn agriculturists, or instead a densely populated, domesticated landscape, heavily altered by extensive deforestation and anthropogenic burning. The discovery of hundreds of large geometric earthworks beneath intact rainforest across southern Amazonia challenges its status as a pristine landscape, and has been assumed to indicate extensive pre-Columbian deforestation by large populations. We tested these assumptions using coupled local- and regional-scale paleoecological records to reconstruct land use on an earthwork site in northeast Bolivia within the context of regional, climate-driven biome changes. This approach revealed evidence for an alternative scenario of Amazonian land use, which did not necessitate labor-intensive rainforest clearance for earthwork construction. Instead, we show that the inhabitants exploited a naturally open savanna landscape that they maintained around their settlement despite the climatically driven rainforest expansion that began ∼2,000 y ago across the region. Earthwork construction and agriculture on terra firme landscapes currently occupied by the seasonal rainforests of southern Amazonia may therefore not have necessitated large-scale deforestation using stone tools. This finding implies far less labor--and potentially lower population density--than previously supposed. Our findings demonstrate that current debates over the magnitude and nature of pre-Columbian Amazonian land use, and its impact on global biogeochemical cycling, are potentially flawed because they do not consider this land use in the context of climate-driven forest-savanna biome shifts through the mid-to-late Holocene.
Environmental impact of geometric earthwork construction in pre-Columbian Amazonia
Carson, John Francis; Whitney, Bronwen S.; Mayle, Francis E.; Iriarte, José; Prümers, Heiko; Soto, J. Daniel; Watling, Jennifer
2014-01-01
There is considerable controversy over whether pre-Columbian (pre-A.D. 1492) Amazonia was largely “pristine” and sparsely populated by slash-and-burn agriculturists, or instead a densely populated, domesticated landscape, heavily altered by extensive deforestation and anthropogenic burning. The discovery of hundreds of large geometric earthworks beneath intact rainforest across southern Amazonia challenges its status as a pristine landscape, and has been assumed to indicate extensive pre-Columbian deforestation by large populations. We tested these assumptions using coupled local- and regional-scale paleoecological records to reconstruct land use on an earthwork site in northeast Bolivia within the context of regional, climate-driven biome changes. This approach revealed evidence for an alternative scenario of Amazonian land use, which did not necessitate labor-intensive rainforest clearance for earthwork construction. Instead, we show that the inhabitants exploited a naturally open savanna landscape that they maintained around their settlement despite the climatically driven rainforest expansion that began ∼2,000 y ago across the region. Earthwork construction and agriculture on terra firme landscapes currently occupied by the seasonal rainforests of southern Amazonia may therefore not have necessitated large-scale deforestation using stone tools. This finding implies far less labor—and potentially lower population density—than previously supposed. Our findings demonstrate that current debates over the magnitude and nature of pre-Columbian Amazonian land use, and its impact on global biogeochemical cycling, are potentially flawed because they do not consider this land use in the context of climate-driven forest–savanna biome shifts through the mid-to-late Holocene. PMID:25002502
Meyners, Christian; Baud, Matthias G J; Fuchter, Matthew J; Meyer-Almes, Franz-Josef
2014-09-01
Performing kinetic studies on protein-ligand interactions provides important information on complex formation and dissociation. Besides kinetic parameters such as association rates and residence times, kinetic experiments also reveal insights into reaction mechanisms. Exploiting intrinsic tryptophan fluorescence, a parallelized high-throughput Förster resonance energy transfer (FRET)-based reporter displacement assay with very low protein consumption was developed to enable the large-scale kinetic characterization of the binding of ligands to recombinant human histone deacetylases (HDACs) and a bacterial histone deacetylase-like amidohydrolase (HDAH) from Bordetella/Alcaligenes. For the binding of trichostatin A (TSA), suberoylanilide hydroxamic acid (SAHA), and two other SAHA derivatives to HDAH, two different modes of action, simple one-step binding and a two-step mechanism comprising initial binding and induced fit, were verified. In contrast to HDAH, all compounds bound to human HDAC1, HDAC6, and HDAC8 through a two-step mechanism. A quantitative view on the inhibitor-HDAC systems revealed two types of interaction, fast binding and slow dissociation. We provide arguments for the thesis that the relationship between quantitative kinetic and mechanistic information and chemical structures of compounds will serve as a valuable tool for drug optimization. Copyright © 2014 Elsevier Inc. All rights reserved.
Gravitational Lensing: Einstein's unfinished symphony
NASA Astrophysics Data System (ADS)
Treu, Tommaso; Ellis, Richard S.
2015-01-01
Gravitational lensing - the deflection of light rays by gravitating matter - has become a major tool in the armoury of the modern cosmologist. Proposed nearly a hundred years ago as a key feature of Einstein's theory of general relativity, the effect is traced here through its historical development since its verification at a solar eclipse in 1919. Einstein was apparently cautious about its practical utility and the subject lay dormant observationally for nearly 60 years. Nonetheless, there has been rapid progress over the past twenty years. The technique allows astronomers to chart the distribution of dark matter on large and small scales, thereby testing predictions of the standard cosmological model, which assumes dark matter comprises a massive, weakly interacting particle. By measuring the distances and tracing the growth of dark matter structure over cosmic time, gravitational lensing also holds great promise in determining whether the dark energy, postulated to explain the accelerated cosmic expansion, is a vacuum energy density or a failure of general relativity on large scales. We illustrate the wide range of applications which harness the power of gravitational lensing, from searches for the earliest galaxies magnified by massive clusters to those for extrasolar planets which temporarily brighten a background star. We summarise the future prospects with dedicated ground and space-based facilities designed to exploit this remarkable physical phenomenon.
The origins of intensive marine fishing in medieval Europe: the English evidence.
Barrett, James H; Locker, Alison M; Roberts, Callum M
2004-12-07
The catastrophic impact of fishing pressure on species such as cod and herring is well documented. However, the antiquity of their intensive exploitation has not been established. Systematic catch statistics are only available for ca.100 years, but large-scale fishing industries existed in medieval Europe and the expansion of cod fishing from the fourteenth century (first in Iceland, then in Newfoundland) played an important role in the European colonization of the Northwest Atlantic. History has demonstrated the scale of these late medieval and post-medieval fisheries, but only archaeology can illuminate earlier practices. Zooarchaeological evidence shows that the clearest changes in marine fishing in England between AD 600 and 1600 occurred rapidly around AD 1000 and involved large increases in catches of herring and cod. Surprisingly, this revolution predated the documented post-medieval expansion of England's sea fisheries and coincided with the Medieval Warm Period--when natural herring and cod productivity was probably low in the North Sea. This counterintuitive discovery can be explained by the concurrent rise of urbanism and human impacts on freshwater ecosystems. The search for 'pristine' baselines regarding marine ecosystems will thus need to employ medieval palaeoecological proxies in addition to recent fisheries data and early modern historical records.
Increasing CAD system efficacy for lung texture analysis using a convolutional network
NASA Astrophysics Data System (ADS)
Tarando, Sebastian Roberto; Fetita, Catalin; Faccinetto, Alex; Brillet, Pierre-Yves
2016-03-01
The infiltrative lung diseases are a class of irreversible, non-neoplastic lung pathologies requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status requires the development of automated classification tools for lung texture. For the large majority of CAD systems, such classification relies on a two-dimensional analysis of axial CT images. In a previously developed CAD system, we proposed a fully 3D approach exploiting a multi-scale morphological analysis, which showed good performance in detecting diseased areas but had the major drawback of sometimes overestimating the pathological areas and mixing different types of lung patterns. This paper proposes a combination of the existing CAD system with the classification outcome provided by a convolutional network, specifically tuned, in order to increase the specificity of the classification and the confidence in the diagnosis. The advantage of using a deep learning approach is a better regularization of the classification output (because of a deeper insight into a given pathological class over a large series of samples) where the previous system is extra-sensitive due to the multi-scale response on patient-specific, localized patterns. In a preliminary evaluation, the combined approach was tested on a 10-patient database of various lung pathologies, showing a sharp increase in true detections.
Jensen, Erik C.; Stockton, Amanda M.; Chiesl, Thomas N.; Kim, Jungkyu; Bera, Abhisek; Mathies, Richard A.
2013-01-01
A digitally programmable microfluidic Automaton consisting of a 2-dimensional array of pneumatically actuated microvalves is programmed to perform new multiscale mixing and sample processing operations. Large (µL-scale) volume processing operations are enabled by precise metering of multiple reagents within individual nL-scale valves followed by serial repetitive transfer to programmed locations in the array. A novel process exploiting new combining valve concepts is developed for continuous rapid and complete mixing of reagents in less than 800 ms. Mixing, transfer, storage, and rinsing operations are implemented combinatorially to achieve complex assay automation protocols. The practical utility of this technology is demonstrated by performing automated serial dilution for quantitative analysis as well as the first demonstration of on-chip fluorescent derivatization of biomarker targets (carboxylic acids) for microchip capillary electrophoresis on the Mars Organic Analyzer. A language is developed to describe how unit operations are combined to form a microfluidic program. Finally, this technology is used to develop a novel microfluidic 6-sample processor for combinatorial mixing of large sets (>26 unique combinations) of reagents. The digitally programmable microfluidic Automaton is a versatile programmable sample processor for a wide range of process volumes, for multiple samples, and for different types of analyses. PMID:23172232
NASA Astrophysics Data System (ADS)
di Franco, Antonio; Thiriet, Pierre; di Carlo, Giuseppe; Dimitriadis, Charalampos; Francour, Patrice; Gutiérrez, Nicolas L.; Jeudy de Grissac, Alain; Koutsoubas, Drosos; Milazzo, Marco; Otero, María Del Mar; Piante, Catherine; Plass-Johnson, Jeremiah; Sainz-Trapaga, Susana; Santarossa, Luca; Tudela, Sergi; Guidetti, Paolo
2016-12-01
Marine protected areas (MPAs) have largely proven to be effective tools for conserving marine ecosystems, while socio-economic benefits generated by MPAs to fisheries are still under debate. Many MPAs embed a no-take zone, aiming to preserve natural populations and ecosystems, within a buffer zone where potentially sustainable activities are allowed. Small-scale fisheries (SSF) within buffer zones can be highly beneficial by promoting local socio-economies. However, guidelines to successfully manage SSFs within MPAs, ensuring both conservation and fisheries goals, and reaching a win-win scenario, are largely unavailable. From the peer-reviewed literature, grey literature and interviews, we assembled a unique database of ecological, social and economic attributes of SSF in 25 Mediterranean MPAs. Using random forests with the Boruta algorithm, we identified a set of attributes determining successful SSF management within MPAs. We show that fish stocks are healthier, fishermen incomes are higher and the social acceptance of management practices is fostered if five attributes are present (i.e. high MPA enforcement, presence of a management plan, fishermen engagement in MPA management, a fishermen representative on the MPA board, and promotion of sustainable fishing). These findings are pivotal to Mediterranean coastal communities so they can achieve conservation goals while allowing for profitable exploitation of fisheries resources.
Yang, Qingchun; Wang, Luchen; Ma, Hongyun; Yu, Kun; Martín, Jordi Delgado
2016-09-01
The Ordos Basin is located in an arid and semi-arid region of northwestern China and is one of the most important energy source bases in China. The Salawusu Formation (Q3s) is one of the most important aquifer systems of the Ordos Basin and is adjacent to Jurassic coalfield areas. Large-scale exploitation of Jurassic coal resources over more than ten years has had a series of impacts on the coal-bearing strata, such as exposure to oxidation and dissolution into the groundwater through precipitation infiltration. Therefore, how these processes affect groundwater quality is of great concern. In this paper, descriptive statistics, the Piper trilinear diagram, ratios of major ions and canonical correspondence analysis are employed to investigate the hydrochemical evolution, determine the possible sources and processes of pollution, and assess the controls on groundwater composition using monitoring data from 2004 and 2014 (before and after large-scale coal mining). Results showed that long-term exploitation of coal resources has not resulted in serious groundwater pollution. The hydrochemical types changed from HCO3(-)-CO3(2-) facies to SO4(2-)-Cl facies over the 10 years. Groundwater hardness, nitrate and sulfate pollution were identified in 2014, most likely caused by agricultural activities. Copyright © 2016 Elsevier Ltd. All rights reserved.
The origins of intensive marine fishing in medieval Europe: the English evidence.
Barrett, James H.; Locker, Alison M.; Roberts, Callum M.
2004-01-01
The catastrophic impact of fishing pressure on species such as cod and herring is well documented. However, the antiquity of their intensive exploitation has not been established. Systematic catch statistics are only available for ca.100 years, but large-scale fishing industries existed in medieval Europe and the expansion of cod fishing from the fourteenth century (first in Iceland, then in Newfoundland) played an important role in the European colonization of the Northwest Atlantic. History has demonstrated the scale of these late medieval and post-medieval fisheries, but only archaeology can illuminate earlier practices. Zooarchaeological evidence shows that the clearest changes in marine fishing in England between AD 600 and 1600 occurred rapidly around AD 1000 and involved large increases in catches of herring and cod. Surprisingly, this revolution predated the documented post-medieval expansion of England's sea fisheries and coincided with the Medieval Warm Period--when natural herring and cod productivity was probably low in the North Sea. This counterintuitive discovery can be explained by the concurrent rise of urbanism and human impacts on freshwater ecosystems. The search for 'pristine' baselines regarding marine ecosystems will thus need to employ medieval palaeoecological proxies in addition to recent fisheries data and early modern historical records. PMID:15590590
Wong, Un-Hong; Wu, Yunzhao; Wong, Hon-Cheng; Liang, Yanyan; Tang, Zesheng
2014-01-01
In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. Existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from the Interference Imaging Spectrometer (IIM) on an orbiter were affected not only by the composition of minerals but also by environmental factors. These factors cannot be well addressed by a single model alone. Our method implements Monte Carlo ray tracing to simulate large-scale effects, such as reflection from the topography of the lunar soil, and Hapke's model to calculate the reflection intensity due to internal scattering within the particles of the lunar soil. Therefore, both large-scale and microscale effects are considered, providing a more accurate model of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data to remove the influence of lunar topography on the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface.
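For reference, the particle-scale half of such a scheme can be sketched with the simplest form of Hapke's bidirectional reflectance (isotropic scatterers, no macroscopic roughness); the Monte Carlo ray tracing over topography is not shown, and the parameter values are illustrative only:

```python
import numpy as np

def hapke_reflectance(w, mu0, mu, phase=1.0, opposition=0.0):
    """Simplified Hapke bidirectional reflectance for isotropic scatterers.

    w: single-scattering albedo; mu0, mu: cosines of the incidence and emission angles.
    `phase` and `opposition` stand in for the phase function P(g) and opposition surge B(g).
    """
    def H(x):
        # Standard approximation to the Chandrasekhar H-function
        gamma = np.sqrt(1.0 - w)
        return (1.0 + 2.0 * x) / (1.0 + 2.0 * x * gamma)

    return (w / (4.0 * np.pi)) * (mu0 / (mu0 + mu)) * (
        (1.0 + opposition) * phase + H(mu0) * H(mu) - 1.0)

print(hapke_reflectance(w=0.3, mu0=np.cos(np.radians(30.0)), mu=1.0))
```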
Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.
Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng
2016-10-01
Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degrades the discriminative power when Hamming distance ranking is used. Moreover, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at the bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of the hash functions and their complementarity for nearest neighbor search. At the tablewise level, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusion in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains for single- and multiple-table search over state-of-the-art methods.
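The bitwise half of the idea can be illustrated with a weighted Hamming ranking (a generic sketch, not the paper's weighting scheme or its graph-diffusion rank fusion; the codes and weights below are random placeholders):

```python
import numpy as np

def weighted_hamming_rank(query_code, db_codes, bit_weights):
    """Rank binary codes by a query-adaptive weighted Hamming distance."""
    mismatches = (db_codes != query_code).astype(float)   # bitwise disagreement
    distances = mismatches @ bit_weights                   # weighted Hamming distance
    return np.argsort(distances)

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(1000, 64))       # 1000 database items, 64-bit hash codes
q = rng.integers(0, 2, size=64)                # query code
w = rng.uniform(0.5, 1.5, size=64)             # stand-in per-bit, query-adaptive weights
print(weighted_hamming_rank(q, db, w)[:5])     # indices of the five nearest neighbours
```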
Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System.
Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin
2016-08-18
Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, a secure and trustworthy energy supply requires real-time supervision and online power quality assessment. Harmonics measurement is necessary in power quality evaluation. However, under a large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which results from latencies in sensing or communication and introduces deviations in data fusion. This paper describes a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive model with exogenous inputs (NARX) network to reorder the out-of-sequence measurement data. The NARX network learns the characteristics of the electrical harmonics from practical data rather than from kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameter with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments are implemented on a practical cyber-physical system testbed, and harmonic measurement and analysis accuracy are used to evaluate the measuring mechanism under a distributed metering network. The results demonstrate an improvement in harmonic analysis precision and validate the asynchronous measuring method in cyber-physical energy systems.
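A generic NARX-style regression can be sketched as follows (this is not the paper's network, data or retrodiction procedure; the synthetic signal, lag depth and network size are assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def narx_features(y, u, lags=3):
    """Build NARX-style regressors: past outputs y plus current and past exogenous inputs u."""
    X, target = [], []
    for k in range(lags, len(y)):
        X.append(np.r_[y[k - lags:k], u[k - lags:k + 1]])
        target.append(y[k])
    return np.array(X), np.array(target)

# Synthetic harmonic magnitude drifting with an exogenous input (stand-in for meter data)
rng = np.random.default_rng(1)
t = np.arange(2000)
u = np.sin(2 * np.pi * t / 500)                        # exogenous input, e.g. load level
y = 0.1 + 0.05 * u + 0.005 * rng.standard_normal(len(t))

X, target = narx_features(y, u, lags=3)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, target)

# "Retrodiction": re-estimate what a late-arriving sample should have been from its context
print(model.predict(X[-1:]))
```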
Di Franco, Antonio; Thiriet, Pierre; Di Carlo, Giuseppe; Dimitriadis, Charalampos; Francour, Patrice; Gutiérrez, Nicolas L; Jeudy de Grissac, Alain; Koutsoubas, Drosos; Milazzo, Marco; Otero, María Del Mar; Piante, Catherine; Plass-Johnson, Jeremiah; Sainz-Trapaga, Susana; Santarossa, Luca; Tudela, Sergi; Guidetti, Paolo
2016-12-01
Marine protected areas (MPAs) have largely proven to be effective tools for conserving marine ecosystem, while socio-economic benefits generated by MPAs to fisheries are still under debate. Many MPAs embed a no-take zone, aiming to preserve natural populations and ecosystems, within a buffer zone where potentially sustainable activities are allowed. Small-scale fisheries (SSF) within buffer zones can be highly beneficial by promoting local socio-economies. However, guidelines to successfully manage SSFs within MPAs, ensuring both conservation and fisheries goals, and reaching a win-win scenario, are largely unavailable. From the peer-reviewed literature, grey-literature and interviews, we assembled a unique database of ecological, social and economic attributes of SSF in 25 Mediterranean MPAs. Using random forest with Boruta algorithm we identified a set of attributes determining successful SSFs management within MPAs. We show that fish stocks are healthier, fishermen incomes are higher and the social acceptance of management practices is fostered if five attributes are present (i.e. high MPA enforcement, presence of a management plan, fishermen engagement in MPA management, fishermen representative in the MPA board, and promotion of sustainable fishing). These findings are pivotal to Mediterranean coastal communities so they can achieve conservation goals while allowing for profitable exploitation of fisheries resources.
Cod Gadus morhua and climate change: processes, productivity and prediction.
Brander, K M
2010-11-01
Environmental factors act on individual fishes directly and indirectly. The direct effects on rates and behaviour can be studied experimentally and in the field, particularly with the advent of ever smarter tags for tracking fishes and their environment. Indirect effects due to changes in food, predators, parasites and diseases are much more difficult to estimate and predict. Climate can affect all life-history stages through direct and indirect processes and although the consequences in terms of growth, survival and reproductive output can be monitored, it is often difficult to determine the causes. Investigation of cod Gadus morhua populations across the whole North Atlantic Ocean has shown large-scale patterns of change in productivity due to lower individual growth and condition, caused by large-scale climate forcing. If a population is being heavily exploited then a drop in productivity can push it into decline unless the level of fishing is reduced: the idea of a stable carrying capacity is a dangerous myth. Overexploitation can be avoided by keeping fishing mortality low and by monitoring and responding rapidly to changes in productivity. There are signs that this lesson has been learned and that G. morhua will continue to be a mainstay of the human diet. © 2010 The Author. Journal of Fish Biology © 2010 The Fisheries Society of the British Isles.
Topical perspective on massive threading and parallelism.
Farber, Robert M
2011-09-01
Unquestionably, computer architectures have undergone a recent and noteworthy paradigm shift that now delivers multi- and many-core systems with tens to many thousands of concurrent hardware processing elements per workstation or supercomputer node. GPGPU (General Purpose Graphics Processor Unit) technology in particular has attracted significant attention as new software development capabilities, namely CUDA (Compute Unified Device Architecture) and OpenCL™, have made it possible for students as well as small and large research organizations to achieve excellent speedup for many applications over more conventional computing architectures. The current scientific literature reflects this shift with numerous examples of GPGPU applications that have achieved one, two, and in some special cases three orders of magnitude increased computational performance through the use of massive threading to exploit parallelism. Multi-core architectures are also evolving quickly to exploit both massive threading and massive parallelism, such as the 1.3-million-thread Blue Waters supercomputer. The challenge confronting scientists in planning future experimental and theoretical research efforts--be they individual efforts with one computer or collaborative efforts proposing to use the largest supercomputers in the world--is how to capitalize on these new massively threaded computational architectures, especially as not all computational problems will scale to massive parallelism. In particular, the costs associated with restructuring software (and potentially redesigning algorithms) to exploit the parallelism of these multi- and many-threaded machines must be considered along with application scalability and lifespan. This perspective is an overview of the current state of threading and parallelism with some insight into the future. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Gilliams, S. J.
2017-12-01
In line with the paradigm shift in Earth Observation of "Bringing the users to the data", ESA provides collaborative, virtual work environments giving access to EO data and tools, processors, and ICT resources through coherent interfaces. These coherent interfaces are categorized thematically, tailored to the related user communities and named Thematic Exploitation Platforms (TEP). The Food Security Thematic Exploitation Platform (FS-TEP) is the youngest out of seven TEPs and is developed in an agile mode in close coordination with its users. It will provide a "one-stop platform" for the extraction of information from EO data for services in the food security sector mainly in Europe & Africa, allowing both access to EO data and processing of these data sets. Thereby it will foster smart, data-intensive agricultural and aquacultural applications in the scientific, private and public domain. The FS-TEP builds on a large and heterogeneous user community, spanning from application developers in agriculture and aquaculture, from small-scale farmers to agricultural industry, from public science to the finance and insurance sectors, and from local and national administration to international agencies. To meet the requirements of these groups, the FS-TEP will provide different frontend interfaces. Service pilots will demonstrate the platform's ability to support agriculture and aquaculture with tailored EO-based information services. The project team developing the FS-TEP and implementing pilot services during a 30-month period (started in April 2017) is led by Vista GmbH, Germany, supported by CGI Italy, VITO, Belgium, and Hatfield Consultants, Canada. It is funded by ESA under contract number 4000120074/17/I-EF.
The Geological Grading Scale: Every million Points Counts!
NASA Astrophysics Data System (ADS)
Stegman, D. R.; Cooper, C. M.
2006-12-01
The concept of geological time, ranging from thousands to billions of years, is naturally quite difficult for students to grasp initially, as it is much longer than the timescales over which they experience everyday life. Moreover, universities operate on a few key timescales (hourly lectures, weekly assignments, mid-term examinations) on which students' attention is most focused, largely driven by graded assessment. The geological grading scale exploits the overwhelming interest students have in grades as an opportunity to instill familiarity with geological time. With the geological grading scale, the number of possible points/marks/grades available in the course is scaled to 4.5 billion points --- collapsing the entirety of Earth history into one semester. Alternatively, geological time can be compressed into each assignment, with scores for weekly homeworks not worth 100 points each, but 4.5 billion! Homeworks left incomplete with questions unanswered lose hundreds of millions of points - equivalent to missing the Paleozoic era. The expected quality of presentation for problem sets can be established with great impact in the first week by docking assignments an insignificant number of points for handing in messy work, though likely more points than they've lost in their entire schooling history combined. Use this grading scale and your students will gradually begin to appreciate exactly how much time represents a geological blink of the eye.
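The rescaling itself is simple arithmetic; a small sketch, assuming a hypothetical 1000-point conventional course total:

```python
TOTAL_POINTS = 4.5e9        # the whole course, i.e. all of Earth history
COURSE_MAX = 1000           # hypothetical conventional course total (an assumption)

def to_geological_points(score, course_max=COURSE_MAX):
    """Rescale a conventional score onto the 4.5-billion-point geological scale."""
    return score / course_max * TOTAL_POINTS

# A question worth 10 conventional points becomes 45 million geological points, so skipping
# it "costs" roughly 45 million years of Earth history.
print(f"{to_geological_points(10):,.0f} points")
```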
A spectral method for spatial downscaling
Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this paper, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing ch
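A one-dimensional toy of the scale-separation idea (not the paper's spectral model; the synthetic fields and the wavenumber cutoff are assumptions) reproduces the qualitative finding of high large-scale and low small-scale correlation:

```python
import numpy as np

def split_scales(field, cutoff):
    """Split a 1-D transect into large-scale (low wavenumber) and small-scale parts via FFT."""
    spec = np.fft.rfft(field)
    low = spec.copy()
    low[cutoff:] = 0.0
    large = np.fft.irfft(low, n=len(field))
    return large, field - large

rng = np.random.default_rng(1)
truth_large = np.sin(np.linspace(0, 2 * np.pi, 256))
truth_small = 0.2 * rng.standard_normal(256)
model_out = truth_large + 0.5 * rng.standard_normal(256)   # skilful only at large scales
monitors = truth_large + truth_small                        # "monitoring data"

m_large, m_small = split_scales(model_out, cutoff=5)
o_large, o_small = split_scales(monitors, cutoff=5)
print("large-scale correlation:", round(np.corrcoef(m_large, o_large)[0, 1], 2))
print("small-scale correlation:", round(np.corrcoef(m_small, o_small)[0, 1], 2))
```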
Communication: Unusual structure and transport in ionic liquid-hexane mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Min; Khatun, Sufia; Castner, Edward W., E-mail: ecastner@rci.rutgers.edu
2015-03-28
Ionic liquids having a sufficiently amphiphilic cation can dissolve large volume fractions of alkanes, leading to mixtures with intriguing properties on molecular length scales. The trihexyl(tetradecyl)phosphonium cation paired with the bis(trifluoromethylsulfonyl)amide anion provides an ionic liquid that can dissolve large mole fractions of hexane. We present experimental results on mixtures of n-C6D14 with this ionic liquid. High-energy X-ray scattering studies reveal a persistence of the characteristic features of ionic liquid structure even for 80% dilution with n-C6D14. Nuclear magnetic resonance self-diffusion results reveal decidedly non-hydrodynamic behavior where the self-diffusion of the neutral, non-polar n-C6D14 is on average a factor of 21 times faster than for the cation. Exploitation of the unique structural and transport properties of these mixtures may lead to new opportunities for designer solvents for enhanced chemical reactivity and interface science.
Superfluid high REynolds von Kármán experiment.
Rousset, B; Bonnay, P; Diribarne, P; Girard, A; Poncet, J M; Herbert, E; Salort, J; Baudet, C; Castaing, B; Chevillard, L; Daviaud, F; Dubrulle, B; Gagne, Y; Gibert, M; Hébral, B; Lehner, Th; Roche, P-E; Saint-Michel, B; Bon Mardion, M
2014-10-01
The Superfluid High REynolds von Kármán experiment facility exploits the capacities of a high cooling power refrigerator (400 W at 1.8 K) for a large dimension von Kármán flow (inner diameter 0.78 m), which can work with gaseous or subcooled liquid (He-I or He-II) from room temperature down to 1.6 K. The flow is produced between two counter-rotating or co-rotating disks. The large size of the experiment allows exploration of ultra-high Reynolds numbers based on Taylor microscale and rms velocity [S. B. Pope, Turbulent Flows (Cambridge University Press, 2000)] (Rλ > 10000) or resolution of the dissipative scale for lower Re. This article presents the design and first performance of this apparatus. Measurements carried out in the first runs of the facility address the global flow behavior: calorimetric measurement of the dissipation, torque and velocity measurements on the two turbines. Moreover, first local measurements (micro-Pitot, hot wire,…) have been installed and are presented.
Highly Stretchable and Self-Healable Supercapacitor with Reduced Graphene Oxide Based Fiber Springs.
Wang, Siliang; Liu, Nishuang; Su, Jun; Li, Luying; Long, Fei; Zou, Zhengguang; Jiang, Xueliang; Gao, Yihua
2017-02-28
In large-scale applications of portable and wearable electronic devices, high-performance supercapacitors are important energy supply sources. However, since the reliability and stability of supercapacitors are generally destroyed by mechanical deformation and damage during practical use, stretchability and self-healability must be engineered into supercapacitors. Preparing highly stretchable and self-healable electrodes remains a challenge. Here, we report reduced graphene oxide fiber-based springs as electrodes for stretchable and self-healable supercapacitors. The fiber springs (diameters of 295 μm) are thick enough to reconnect the broken electrodes accurately by visual inspection. By wrapping fiber springs with a self-healing polymer outer shell, a stretchable and self-healable supercapacitor is successfully realized. The supercapacitor has 82.4% capacitance retention after a large stretch (100%), and 54.2% capacitance retention after the third healing. This work provides an essential strategy for designing and fabricating stretchable and self-healable supercapacitors in next-generation multifunctional electronic devices.
A large hadron electron collider at CERN
Abelleira Fernandez, J. L.
2015-04-06
This document provides a brief overview of the recently published report on the design of the Large Hadron Electron Collider (LHeC), which comprises its physics programme, accelerator physics, technology and main detector concepts. The LHeC exploits and develops challenging, though principally existing, accelerator and detector technologies. This summary is complemented by brief illustrations of some of the highlights of the physics programme, which relies on a vastly extended kinematic range, luminosity and unprecedented precision in deep inelastic scattering. Illustrations are provided regarding high-precision QCD, new physics (Higgs, SUSY) and electron-ion physics. The LHeC is designed to run synchronously with the LHC in the twenties and to achieve an integrated luminosity of O(100) fb⁻¹. It will become the cleanest high-resolution microscope of mankind and will substantially extend as well as complement the investigation of the physics of the TeV energy scale, which has been enabled by the LHC.
Superfluid high REynolds von Kármán experiment
NASA Astrophysics Data System (ADS)
Rousset, B.; Bonnay, P.; Diribarne, P.; Girard, A.; Poncet, J. M.; Herbert, E.; Salort, J.; Baudet, C.; Castaing, B.; Chevillard, L.; Daviaud, F.; Dubrulle, B.; Gagne, Y.; Gibert, M.; Hébral, B.; Lehner, Th.; Roche, P.-E.; Saint-Michel, B.; Bon Mardion, M.
2014-10-01
The Superfluid High REynolds von Kármán experiment facility exploits the capacities of a high cooling power refrigerator (400 W at 1.8 K) for a large dimension von Kármán flow (inner diameter 0.78 m), which can work with gaseous or subcooled liquid (He-I or He-II) from room temperature down to 1.6 K. The flow is produced between two counter-rotating or co-rotating disks. The large size of the experiment allows exploration of ultra-high Reynolds numbers based on Taylor microscale and rms velocity [S. B. Pope, Turbulent Flows (Cambridge University Press, 2000)] (Rλ > 10000) or resolution of the dissipative scale for lower Re. This article presents the design and first performance of this apparatus. Measurements carried out in the first runs of the facility address the global flow behavior: calorimetric measurement of the dissipation, torque and velocity measurements on the two turbines. Moreover, first local measurements (micro-Pitot, hot wire,…) have been installed and are presented.
Kontoes, Charalampos; Keramitsoglou, Iphigenia; Papoutsis, Ioannis; Sifakis, Nicolas I.; Xofis, Panteleimon
2013-01-01
This paper presents the results of an operational nationwide burnt area mapping service realized over Greece for the years 2007–2011, through the implementation of the dedicated BSM_NOA method developed at the National Observatory of Athens for post-fire recovery management. The method exploits multispectral satellite imagery, such as Landsat-TM, SPOT, FORMOSAT-2, WorldView and IKONOS. The analysis of fire size distribution reveals that a high number of fire events evolve into large and extremely large wildfires under favorable wildfire conditions, confirming the reported trend of increasing fire severity in recent years. Furthermore, under such conditions wildfires disproportionately affect areas at high altitudes, threatening the existence of ecologically significant ecosystems. Finally, recent socioeconomic changes and land abandonment have resulted in the encroachment of former agricultural areas of limited productivity by shrubs and trees, resulting both in increased fuel availability and continuity, and subsequently increased burnability. PMID:23966201
Communication: Unusual structure and transport in ionic liquid-hexane mixtures
Liang, Min; Khatun, Sufia; Castner, Edward W.
2015-03-28
Ionic liquids having a sufficiently amphiphilic cation can dissolve large volume fractions of alkanes, leading to mixtures with intriguing properties on molecular length scales. The trihexyl(tetradecyl)phosphonium cation paired with the bis(trifluoromethylsulfonyl)amide anion provides an ionic liquid that can dissolve large mole fractions of hexane. We present experimental results on mixtures of n-C6D14 with this ionic liquid. High-energy X-ray scattering studies reveal a persistence of the characteristic features of ionic liquid structure even for 80% dilution with n-C6D14. NMR self-diffusion results reveal decidedly non-hydrodynamic behavior where the self-diffusion of the neutral, non-polar n-C6D14 is on average a factor of 21 times faster than for the cation. Exploitation of the unique structural and transport properties of these mixtures may lead to new opportunities for designer solvents for enhanced chemical reactivity and interface science.
Hybridizable discontinuous Galerkin method for the 2-D frequency-domain elastic wave equations
NASA Astrophysics Data System (ADS)
Bonnasse-Gahot, Marie; Calandra, Henri; Diaz, Julien; Lanteri, Stéphane
2018-04-01
Discontinuous Galerkin (DG) methods are nowadays actively studied and increasingly exploited for the simulation of large-scale time-domain (i.e. unsteady) seismic wave propagation problems. Although theoretically applicable to frequency-domain problems as well, their use in this context has been hampered by the potentially large number of coupled unknowns they incur, especially in the 3-D case, as compared to classical continuous finite element methods. In this paper, we address this issue in the framework of the so-called hybridizable discontinuous Galerkin (HDG) formulations. As a first step, we study an HDG method for the resolution of the frequency-domain elastic wave equations in the 2-D case. We describe the weak formulation of the method and provide some implementation details. The proposed HDG method is assessed numerically including a comparison with a classical upwind flux-based DG method, showing better overall computational efficiency as a result of the drastic reduction of the number of globally coupled unknowns in the resulting discrete HDG system.
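For reference, one standard statement of the governing time-harmonic system that such a solver discretises (the paper's exact first-order formulation and sign conventions may differ):

```latex
% Frequency-domain elastic wave equations in velocity-stress form (time convention assumed)
\begin{aligned}
  \mathrm{i}\omega\,\rho\,\mathbf{v} &= \nabla\cdot\boldsymbol{\sigma} + \mathbf{f},\\
  \mathrm{i}\omega\,\boldsymbol{\sigma} &= \mathbf{C} : \boldsymbol{\varepsilon}(\mathbf{v}),
  \qquad
  \boldsymbol{\varepsilon}(\mathbf{v}) = \tfrac{1}{2}\bigl(\nabla\mathbf{v} + \nabla\mathbf{v}^{\top}\bigr),
\end{aligned}
```

where ρ is the density, C the elasticity tensor and f the source term; the HDG approach introduces an additional trace unknown on element faces and couples only those trace degrees of freedom globally, which is the source of the reduction in globally coupled unknowns mentioned above.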
Radiation biophysical aspects of charged particles: From the nanoscale to therapy
NASA Astrophysics Data System (ADS)
Scifoni, Emanuele
2015-06-01
Charged particle applications for radiotherapy are motivated by their specific advantages in terms of dose delivery and biological effect. These advantages have to a large extent originated from the peculiarities of ion beam energy deposition patterns in the medium on a microscopic, down to a nanoscopic scale. A large amount of research was conducted in this direction, especially in the last two decades, profiting also from the parallel investigations going on in radiation protection for space exploration. The main biophysical aspects of charged particles that are relevant to hadrontherapy are briefly reviewed in the present contribution, focusing on relative biological effectiveness (RBE), oxygen enhancement ratio (OER) and combination with radiosensitizers. A summary of the present major research directions on both microscopic and macroscopic assessment of the specific mechanisms of radiation damage will be given, as well as several open challenges for a better understanding of the whole process, which still limit the full exploitation of ion beams for radiotherapy.
NASA Astrophysics Data System (ADS)
Pascoe, Stephen; Iwi, Alan; kershaw, philip; Stephens, Ag; Lawrence, Bryan
2014-05-01
The advent of large-scale data and the consequential analysis problems have led to two new challenges for the research community: how to share such data to get the maximum value and how to carry out efficient analysis. Solving both challenges requires a form of parallelisation: the first is social parallelisation (involving trust and information sharing), the second data parallelisation (involving new algorithms and tools). The JASMIN infrastructure supports both kinds of parallelism by providing a multi-tenant environment with petabyte-scale storage, VM provisioning and batch cluster facilities. The JASMIN Analysis Platform (JAP) is an analysis software layer for JASMIN which emphasises ease of transition from a researcher's local environment to JASMIN. JAP brings together tools traditionally used by multiple communities and configures them to work together, enabling users to move analysis from their local environment to JASMIN without rewriting code. JAP also provides facilities to exploit JASMIN's parallel capabilities whilst maintaining a familiar analysis environment wherever possible. Modern open-source analysis tools typically have multiple dependent packages, increasing the installation burden on system administrators. When you consider a suite of tools, often with both common and conflicting dependencies, analysis pipelines can become locked to a particular installation simply because of the effort required to reconstruct the dependency tree. JAP addresses this problem by providing a consistent suite of RPMs compatible with RedHat Enterprise Linux and CentOS 6.4. Researchers can install JAP locally, either as RPMs or through a pre-built VM image, giving them the confidence that moving analysis to JASMIN will not disrupt their environment. Analysis parallelisation is in its infancy in the climate sciences, with few tools capable of exploiting any parallel environment beyond manual scripting of the use of multiple processors. JAP begins to bridge this gap through a variety of higher-level tools for parallelisation and job scheduling, such as IPython-parallel and MPI support for interactive analysis languages. We find that enabling even simple parallelisation of workflows, together with the state-of-the-art I/O performance of JASMIN storage, provides many users with the large increases in efficiency they need to scale their analyses to contemporary data volumes and tackle new, previously inaccessible problems.
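A minimal sketch of the kind of simple workflow parallelisation described above, assuming an ipyparallel (IPython-parallel) cluster is already running; the analysis function and file paths are placeholders, not JASMIN specifics:

```python
import ipyparallel as ipp

def analyse_file(path):
    # Stand-in for a per-file analysis step (open dataset, compute a statistic, ...)
    return path, 42.0

rc = ipp.Client()                    # connect to the running engines
view = rc.load_balanced_view()       # distribute tasks across the engines

files = [f"/path/to/model_output/chunk_{i:03d}.nc" for i in range(100)]  # hypothetical paths
results = view.map_sync(analyse_file, files)
print(len(results), "files processed")
```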
Large-Scale Event Extraction from Literature with Multi-Level Gene Normalization
Wei, Chih-Hsuan; Hakala, Kai; Pyysalo, Sampo; Ananiadou, Sophia; Kao, Hung-Yu; Lu, Zhiyong; Salakoski, Tapio; Van de Peer, Yves; Ginter, Filip
2013-01-01
Text mining for the life sciences aims to aid database curation, knowledge summarization and information retrieval through the automated processing of biomedical texts. To provide comprehensive coverage and enable full integration with existing biomolecular database records, it is crucial that text mining tools scale up to millions of articles and that their analyses can be unambiguously linked to information recorded in resources such as UniProt, KEGG, BioGRID and NCBI databases. In this study, we investigate how fully automated text mining of complex biomolecular events can be augmented with a normalization strategy that identifies biological concepts in text, mapping them to identifiers at varying levels of granularity, ranging from canonicalized symbols to unique genes and proteins and broad gene families. To this end, we have combined two state-of-the-art text mining components, previously evaluated on two community-wide challenges, and have extended and improved upon these methods by exploiting their complementary nature. Using these systems, we perform normalization and event extraction to create a large-scale resource that is publicly available, unique in semantic scope, and covers all 21.9 million PubMed abstracts and 460 thousand PubMed Central open access full-text articles. This dataset contains 40 million biomolecular events involving 76 million gene/protein mentions, linked to 122 thousand distinct genes from 5032 species across the full taxonomic tree. Detailed evaluations and analyses reveal promising results for application of this data in database and pathway curation efforts. The main software components used in this study are released under an open-source license. Further, the resulting dataset is freely accessible through a novel API, providing programmatic and customized access (http://www.evexdb.org/api/v001/). Finally, to allow for large-scale bioinformatic analyses, the entire resource is available for bulk download from http://evexdb.org/download/, under the Creative Commons – Attribution – Share Alike (CC BY-SA) license. PMID:23613707
Mineral resources of parts of the Departments of Antioquia and Caldas, Zone II, Colombia
Hall, R.B.; Feininger, Tomas; Barrero, L.; Dario, Rico H.; ,; Alvarez, A.
1970-01-01
The mineral resources of an area of 40,000 sq km, principally in the Department of Antioquia, but including small parts of the Departments of Caldas, Córdoba, Risaralda, and Tolima, were investigated during the period 1964-68. The area is designated Zone II by the Colombian Inventario Minero Nacional (IMN). The geology of approximately 45 percent of this area, or 18,000 sq km, has been mapped by IMN. Zone II has been a gold producer for centuries and still produces 75 percent of Colombia's gold. Silver is recovered as a byproduct. Ferruginous laterites have been investigated as potential sources of iron ore but are not commercially exploitable. Nickeliferous laterite on serpentinite near Ure in the extreme northwest corner of the Zone is potentially exploitable, although less promising than similar laterites at Cerro Matoso, north of the Zone boundary. Known deposits of mercury, chromium, manganese, and copper are small and have limited economic potential. Cement raw materials are important among nonmetallic resources, and four companies are engaged in the manufacture of portland cement. The eastern half of Zone II contains large carbonate rock reserves, but poor accessibility is a handicap to greater development at present. Dolomite near Amalfi is quarried for the glass-making and other industries. Clay saprolite is abundant and widely used in making brick and tiles in backyard kilns. Kaolin of good quality near La Union is used by the ceramic industry. Subbituminous coal beds of Tertiary age are an important resource in the western part of the zone and have good potential for greater development. Aggregate materials for construction are varied and abundant. Deposits of sodic feldspar, talc, decorative stone, and silica are exploited on a small scale. Chrysotile asbestos deposits north of Campamento are being developed to supply fiber for Colombia's thriving asbestos-cement industry, which is presently dependent upon imported fiber. Wollastonite and andalusite are potential resources not exploitable at present.
Verliin, Aare; Ojaveer, Henn; Kaju, Katre; Tammiksaar, Erki
2013-01-01
Historical perspectives on fisheries and related human behaviour provide valuable information on fishery resources and their exploitation, helping to set management targets more appropriately and determine relevant reference levels. In this study we analyse historical fisheries and fish trade on the north-eastern Baltic Sea coast in the late 17th century. Local consumption and export together amounted to the annual removal of about 200 tonnes of fish from the nearby sea and freshwater bodies. The fishery was very diverse, exploiting one cyclostome and 17 fish species in total, with over 90% of the catch being consumed locally. The exported fish consisted almost entirely of high-value species, with Stockholm (Sweden) being the most important export destination. Given the rich political history and natural features of the region, we suggest that the documented evidence of this small-scale fishery should be considered the first quantitative summary of the exploitation of aquatic living resources in the region and can provide a background for future analyses. PMID:23861914
Metric Documentation of Cultural Heritage: Research Directions from the Italian Gamher Project
NASA Astrophysics Data System (ADS)
Bitelli, G.; Balletti, C.; Brumana, R.; Barazzetti, L.; D'Urso, M. G.; Rinaudo, F.; Tucci, G.
2017-08-01
GAMHer is a collaborative project that aims at exploiting and validating Geomatics algorithms, methodologies and procedures in the framework of new European regulations, which require a more extensive and productive use of digital information, as requested by the Digital Agenda for Europe, one of the seven pillars of the Europe 2020 Strategy. To this end, GAMHer focuses on the need for certified accuracy in surveying and monitoring projects with photogrammetry and laser scanning technologies, especially when used in a multiscale approach for landscape and built heritage documentation, conservation, and management. The approach follows a multi-LoD (level of detail) transition that exploits GIS systems at the landscape scale, BIM technology and "point cloud based" 3D modelling at the scale of the building, and an innovative BIM/GIS integrated approach to foster innovation, promote users' collaboration and encourage communication between users. The outcomes of GAMHer are not intended to be used only by a community of Geomatics specialists, but also by a heterogeneous user community that exploits images and laser scans in their professional activities.
NASA Astrophysics Data System (ADS)
Voter, Arthur
Many important materials processes take place on time scales that far exceed the roughly one microsecond accessible to molecular dynamics simulation. Typically, this long-time evolution is characterized by a succession of thermally activated infrequent events involving defects in the material. In the accelerated molecular dynamics (AMD) methodology, known characteristics of infrequent-event systems are exploited to make reactive events take place more frequently, in a dynamically correct way. For certain processes, this approach has been remarkably successful, offering a view of complex dynamical evolution on time scales of microseconds, milliseconds, and sometimes beyond. We have recently made advances in all three of the basic AMD methods (hyperdynamics, parallel replica dynamics, and temperature accelerated dynamics (TAD)), exploiting both algorithmic advances and novel parallelization approaches. I will describe these advances, present some examples of our latest results, and discuss what should be possible when exascale computing arrives in roughly five years. Funded by the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, and by the Los Alamos Laboratory Directed Research and Development program.
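For reference, the statistical basis of parallel replica dynamics (the second of the AMD methods named above) can be stated compactly; this is the standard form of the argument, not an excerpt from the abstract. For a first-escape process with rate constant k, the escape-time distribution is exponential, so running M statistically independent replicas and summing the simulation time they accumulate until the first transition preserves the correct escape statistics while giving close to M-fold speed-up:

p(t)\,dt = k\,e^{-k t}\,dt, \qquad t_{\mathrm{escape}} = \sum_{i=1}^{M} t_i .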
Scaling Semantic Graph Databases in Size and Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morari, Alessandro; Castellana, Vito G.; Villa, Oreste
In this paper we present SGEM, a full software system for accelerating large-scale semantic graph databases on commodity clusters. Unlike current approaches, SGEM addresses semantic graph databases by employing graph methods at all levels of the stack. On one hand, this allows exploiting the space efficiency of graph data structures and the inherent parallelism of graph algorithms. These features adapt well to the increasing system memory and core counts of modern commodity clusters. On the other hand, these systems are optimized for regular computation and batched data transfers, while graph methods are usually irregular and generate fine-grained data accesses with poor spatial and temporal locality. Our framework comprises a compiler from SPARQL to data-parallel C, a library of parallel graph methods and a custom, multithreaded runtime system. We introduce our stack, motivate its advantages with respect to other solutions and show how we solved the challenges posed by irregular behaviors. We present the results of our software stack on the Berlin SPARQL benchmarks with datasets of up to 10 billion triples (a triple corresponds to a graph edge), demonstrating scaling in dataset size and in performance as more nodes are added to the cluster.
Supersampling and Network Reconstruction of Urban Mobility.
Sagarra, Oleguer; Szell, Michael; Santi, Paolo; Díaz-Guilera, Albert; Ratti, Carlo
2015-01-01
Understanding human mobility is of vital importance for urban planning, epidemiology, and many other fields that draw policies from the activities of humans in space. Despite the recent availability of large-scale data sets of GPS traces or mobile phone records capturing human mobility, typically only a subsample of the population of interest is represented, giving a possibly incomplete picture of the entire system under study. Methods to reliably extract mobility information from such reduced data and to assess their sampling biases are lacking. To that end, we analyzed a data set of millions of taxi movements in New York City. We first show that, once they are appropriately transformed, mobility patterns are highly stable over long time scales. Based on this observation, we develop a supersampling methodology to reliably extrapolate mobility records from a reduced sample based on an entropy maximization procedure, and we propose a number of network-based metrics to assess the accuracy of the predicted vehicle flows. Our approach provides a well founded way to exploit temporal patterns to save effort in recording mobility data, and opens the possibility to scale up data from limited records when information on the full system is required.
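One simple, standard way to realise an entropy-maximising extrapolation of the kind described above is iterative proportional fitting of a sampled origin-destination matrix to known trip totals. The sketch below is illustrative only and is not the authors' procedure or metrics; the sample matrix and the target totals are hypothetical.

import numpy as np

def ipf(sample, row_totals, col_totals, iters=100, tol=1e-9):
    # Scale a sampled origin-destination matrix so its margins match the target totals
    # (iterative proportional fitting, equivalent to a constrained entropy maximisation).
    w = sample.astype(float) + 1e-12        # avoid division by zero
    for _ in range(iters):
        w *= (row_totals / w.sum(axis=1))[:, None]
        w *= (col_totals / w.sum(axis=0))[None, :]
        if np.allclose(w.sum(axis=1), row_totals, rtol=tol):
            break
    return w

# Hypothetical example: a 3x3 OD matrix observed from a small taxi sample,
# extrapolated to known total departures and arrivals per zone.
sample = np.array([[12, 3, 5], [4, 20, 6], [7, 2, 15]])
estimate = ipf(sample,
               row_totals=np.array([200, 300, 240]),
               col_totals=np.array([230, 250, 260]))
print(estimate.round(1))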
A new paradigm for atomically detailed simulations of kinetics in biophysical systems.
Elber, Ron
2017-01-01
The kinetics of biochemical and biophysical events determine the course of life processes and have attracted considerable interest and research. For example, modeling of biological networks and cellular responses relies on the availability of information on rate coefficients. Atomically detailed simulations hold the promise of supplementing experimental data to obtain a more complete kinetic picture. However, simulations at biological time scales are challenging. Typical computer resources are insufficient to provide the ensemble of trajectories, at the required length, needed for straightforward calculations of time scales. In recent years, new technologies have emerged that make atomically detailed simulations of rate coefficients possible. Instead of computing complete trajectories from reactants to products, these approaches launch a large number of short trajectories at different positions. Since the trajectories are short, they are trivially computed in parallel on modern computer architectures. The starting and termination positions of the short trajectories are chosen, following statistical mechanics theory, to enhance efficiency. Analysis of these trajectories produces accurate estimates of time scales as long as hours. The theory of Milestoning, which exploits the use of short trajectories, is discussed, and several applications are described.
Estimating fish exploitation and aquatic habitat loss across diffuse inland recreational fisheries.
de Kerckhove, Derrick Tupper; Minns, Charles Kenneth; Chu, Cindy
2015-01-01
The current state of many freshwater fish stocks worldwide is largely unknown but suspected to be vulnerable to exploitation from recreational fisheries and habitat degradation. Both these factors, combined with complex ecological dynamics and the diffuse nature of inland fisheries, could lead to an invisible collapse: the drastic decline in fish stocks without great public or management awareness. In this study we provide a method to address the pervasive knowledge gaps in regional rates of exploitation and habitat degradation, and demonstrate its use in one of North America's largest and most diffuse recreational freshwater fisheries (Ontario, Canada). We estimated that (1) fish stocks were highly exploited and in apparent danger of collapse in management zones close to large population centres, and (2) fish habitat was under a low but constant threat of degradation at rates comparable to deforestation in Ontario and throughout Canada. These findings confirm some commonly held, but difficult to quantify, beliefs in inland fisheries management but also provide further insights, including (1) large anthropogenic projects greater than one hectare could contribute much more to fish habitat loss on an area basis than the cumulative effect of smaller projects within one year, (2) hooking mortality from catch-and-release fisheries is likely a greater source of mortality than the harvest itself, and (3) in most northern management zones over 50% of the fisheries resources are not yet accessible to anglers. While this model primarily provides a framework to prioritize management decisions and further targeted stock assessments, we note that our regional estimates of fisheries productivity and exploitation were similar to broadscale monitoring efforts by the Province of Ontario. We discuss the policy implications of our results and the extension of the model to other jurisdictions and countries.
10th Anniversary Review: a changing climate for coral reefs.
Lough, Janice M
2008-01-01
Tropical coral reefs are charismatic ecosystems that house a significant proportion of the world's marine biodiversity. Their valuable goods and services are fundamental to the livelihood of large coastal populations in the tropics. The health of many of the world's coral reefs, and the goods and services they provide, has already been severely compromised, largely due to over-exploitation by a range of human activities. These local-scale impacts, with the appropriate government instruments, support and management actions, can potentially be controlled and even ameliorated. Unfortunately, other human actions (largely in countries outside of the tropics), by changing global climate, have added global-scale threats to the continued survival of present-day coral reefs. Moderate warming of the tropical oceans has already resulted in an increase in mass coral bleaching events, affecting nearly all of the world's coral reef regions. The frequency of these events will only increase as global temperatures continue to rise. Weakening of coral reef structures will be a more insidious effect of changing ocean chemistry, as the oceans absorb part of the excess atmospheric carbon dioxide. More intense tropical cyclones and changed atmospheric and ocean circulation patterns will all affect coral reef ecosystems and the many associated plants and animals. Coral reefs will not disappear, but their appearance, structure and community make-up will radically change. Drastic greenhouse gas mitigation strategies are necessary to prevent the full consequences of human activities causing such alterations to coral reef ecosystems.
NASA Astrophysics Data System (ADS)
Carrière, Simon D.; Chalikakis, Konstantinos; Danquigny, Charles; Davi, Hendrik; Mazzilli, Naomi; Ollivier, Chloé; Emblanch, Christophe
2016-11-01
Some portions of the porous rock matrix in the karst unsaturated zone (UZ) can contain large volumes of water and play a major role in water flow regulation. The essential results of a local-scale study conducted in 2011 and 2012 above the Low Noise Underground Laboratory (LSBB - Laboratoire Souterrain à Bas Bruit) at Rustrel, southeastern France, are presented. Previous research revealed the geological structure and water-related features of the study site and illustrated the feasibility of specific hydrogeophysical measurements. In this study, the focus is on hydrodynamics at the seasonal and event timescales. Magnetic resonance sounding (MRS) measured a high water content (more than 10 %) in a large volume of rock. This large volume of water cannot be stored in fractures and conduits within the UZ. MRS was also used to measure the seasonal variation of water stored in the karst UZ. A process-based model was developed to simulate the effect of vegetation on groundwater recharge dynamics. In addition, electrical resistivity tomography (ERT) monitoring was used to assess preferential water pathways during a rain event. This study demonstrates the major influence of water flow within the porous rock matrix on the hydrogeological functioning of the UZ at both the local (LSBB) and regional (Fontaine de Vaucluse) scales. Accounting for the role of the porous matrix in water flow regulation may significantly improve karst groundwater hydrodynamic modelling, exploitation, and sustainable management.
Surfing biological surfaces: exploiting the nucleoid for partition and transport in bacteria.
Vecchiarelli, Anthony G; Mizuuchi, Kiyoshi; Funnell, Barbara E
2012-11-01
The ParA family of ATPases is responsible for transporting bacterial chromosomes, plasmids and large protein machineries. ParAs pattern the nucleoid in vivo, but how patterning functions or is exploited in transport remains a matter of considerable debate. Here we discuss the process of self-organization into patterns on the bacterial nucleoid and explore how it relates to the molecular mechanism of ParA action. We review ParA-mediated DNA partition as a general mechanism of how ATP-driven protein gradients on biological surfaces can result in spatial organization at the mesoscale. We also discuss how the nucleoid acts as a formidable diffusion barrier for large bodies in the cell, and make the case that the ParA family evolved to overcome this barrier by exploiting the nucleoid as a matrix for movement. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.
Massie, Danielle L.; Smith, Geoffrey; Bonvechio, Timothy F.; Bunch, Aaron J.; Lucchesi, David O.; Wagner, Tyler
2018-01-01
Quantifying spatial variability in fish growth and identifying large‐scale drivers of growth are fundamental to many conservation and management decisions. Although fish growth studies often focus on a single population, it is becoming increasingly clear that large‐scale studies are likely needed for addressing transboundary management needs. This is particularly true for species with high recreational value and for those with negative ecological consequences when introduced outside of their native range, such as the Flathead Catfish Pylodictis olivaris. This study quantified growth variability of the Flathead Catfish across a large portion of its contemporary range to determine whether growth differences existed between habitat types (i.e., reservoirs and rivers) and between native and introduced populations. Additionally, we investigated whether growth parameters varied as a function of latitude and time since introduction (for introduced populations). Length‐at‐age data from 26 populations across 11 states in the USA were modeled using a Bayesian hierarchical von Bertalanffy growth model. Population‐specific growth trajectories revealed large variation in Flathead Catfish growth and relatively high uncertainty in growth parameters for some populations. Relatively high uncertainty was also evident when comparing populations and when quantifying large‐scale patterns. Growth parameters (Brody growth coefficient [K] and theoretical maximum average length [L∞]) were not different (based on overlapping 90% credible intervals) between habitat types or between native and introduced populations. For populations within the introduced range of Flathead Catfish, latitude was negatively correlated with K. For native populations, we estimated an 85% probability that L∞ estimates were negatively correlated with latitude. Contrary to predictions, time since introduction was not correlated with growth parameters in introduced populations of Flathead Catfish. Results of this study suggest that Flathead Catfish growth patterns are likely shaped more strongly by finer‐scale processes (e.g., exploitation or prey abundances) as opposed to macro‐scale drivers.
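For reference, the growth parameters discussed above (K and L∞) are those of the standard von Bertalanffy growth equation; the hierarchical priors of the Bayesian fit are not reproduced here. In the usual notation, with t_0 the theoretical age at zero length, the expected length at age a is

L_a = L_{\infty}\left(1 - e^{-K\,(a - t_0)}\right).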
LASSIE: simulating large-scale models of biochemical systems on GPUs.
Tangherloni, Andrea; Nobile, Marco S; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo
2017-05-10
Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods, to achieve a thorough understanding of emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models, consisting of hundreds or thousands of reactions and molecular species, can rapidly overtake the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs. LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and not requiring any expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness and the first-order Backward Differentiation Formulae in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm. LASSIE adopts a novel fine-grained parallelization strategy to distribute across the GPU cores all the calculations required to solve the system of ODEs. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, reducing the running time from approximately 1 month down to 8 h to simulate models consisting of, for instance, four thousand reactions and species. Notably, thanks to its smaller memory footprint, LASSIE is able to perform fast simulations of even larger models, for which the tested CPU implementation of LSODA failed to reach termination. LASSIE is therefore expected to make an important breakthrough in Systems Biology applications, enabling faster and more in-depth computational analyses of large-scale models of complex biological systems.
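The core idea, generating mass-action ODEs from a reaction list and handing them to an integrator that copes with stiffness, can be sketched on a CPU with SciPy as below. This is an illustrative toy, not the LASSIE implementation (which runs on GPUs and switches between Runge-Kutta-Fehlberg and first-order BDF); the two-reaction model and its rate constants are hypothetical.

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical toy model: A + B -> C (k = 0.02), C -> A + B (k = 0.5).
reactions = [
    ({"A": 1, "B": 1}, {"C": 1}, 0.02),
    ({"C": 1}, {"A": 1, "B": 1}, 0.5),
]
species = ["A", "B", "C"]
idx = {s: i for i, s in enumerate(species)}

def mass_action_odes(t, y):
    # Build dy/dt from the reaction list under mass-action kinetics.
    dydt = np.zeros_like(y)
    for reactants, products, k in reactions:
        rate = k * np.prod([y[idx[s]] ** n for s, n in reactants.items()])
        for s, n in reactants.items():
            dydt[idx[s]] -= n * rate
        for s, n in products.items():
            dydt[idx[s]] += n * rate
    return dydt

y0 = np.array([100.0, 80.0, 0.0])
# SciPy's LSODA switches automatically between non-stiff and stiff integrators,
# standing in here for LASSIE's RKF/BDF switching.
sol = solve_ivp(mass_action_odes, (0.0, 50.0), y0, method="LSODA")
print(sol.y[:, -1])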
Demographic threats to the sustainability of Brazil nut exploitation.
Peres, Carlos A; Baider, Claudia; Zuidema, Pieter A; Wadt, Lúcia H O; Kainer, Karen A; Gomes-Silva, Daisy A P; Salomão, Rafael P; Simões, Luciana L; Franciosi, Eduardo R N; Cornejo Valverde, Fernando; Gribel, Rogério; Shepard, Glenn H; Kanashiro, Milton; Coventry, Peter; Yu, Douglas W; Watkinson, Andrew R; Freckleton, Robert P
2003-12-19
A comparative analysis of 23 populations of the Brazil nut tree (Bertholletia excelsa) across the Brazilian, Peruvian, and Bolivian Amazon shows that the history and intensity of Brazil nut exploitation are major determinants of population size structure. Populations subjected to persistent levels of harvest lack juvenile trees less than 60 centimeters in diameter at breast height; only populations with a history of either light or recent exploitation contain large numbers of juvenile trees. A harvesting model confirms that intensive exploitation levels over the past century are such that juvenile recruitment is insufficient to maintain populations over the long term. Without management, intensively harvested populations will succumb to a process of senescence and demographic collapse, threatening this cornerstone of the Amazonian extractive economy.
Organization of Single Molecule Magnets on Surfaces
NASA Astrophysics Data System (ADS)
Sessoli, Roberta
2006-03-01
The field of magnetic molecular clusters showing slow relaxation of the magnetization has attracted great interest because of the spectacular quantum effects in the dynamics of the magnetization, which range from resonant quantum tunneling to topological interference. Recently these systems, known as Single Molecule Magnets (SMMs), have also been proposed as model systems for the investigation of flame propagation in flammable substances. A renewed interest in SMMs also comes from the possibility of exploiting their rich and complex magnetic behavior in nano-spintronics. In the crystalline state, however, these molecular materials are substantially insulating. They can nevertheless exhibit significant transport properties if conduction occurs through one molecule connected to two metal electrodes, or through a tunneling mechanism when the SMM is grafted on a conducting surface, as occurs in scanning tunneling microscopy experiments. Molecular compounds can be organized on surfaces thanks to the self-assembly technique, which exploits the strong affinity of some groups for the surface, e.g. thiols for gold surfaces. However, the deposition of large molecules held together mainly by relatively weak coordinative bonds is far from trivial. Several different approaches have started to be investigated. We briefly review here the strategies developed in a collaboration between the Universities of Florence and Modena. Well-isolated molecules on Au(111) surfaces have been obtained with sub-monolayer coverage and different spacers. Organization of micrometric structures on a large scale has been achieved thanks to micro-contact printing. The magnetic properties of the grafted molecules have been investigated through magneto-optical techniques, and the results show a significant change in the magnetization dynamics whose origin is still under investigation.
Armstrong, Graeme; Phillips, Ben
2012-01-01
Wildfire is a fundamental disturbance process in many ecological communities and is critical in maintaining the structure of some plant communities. In the past century, changes in global land use practices have led to changes in fire regimes that have radically altered the composition of many plant communities. As the severe biodiversity impacts of inappropriate fire management regimes are recognized, attempts are being made to manage fires within a more 'natural' regime. To this end, the focus has typically been on determining the fire regime to which the community has adapted. Here we take a subtly different approach and focus on the probability of a patch being burnt. We hypothesize that competing sympatric taxa from different plant functional groups are able to coexist due to the stochasticity of the fire regime, which creates opportunities in both time and space that are exploited differentially by each group. We exploit this situation to find the fire probability at which three sympatric grasses, from different functional groups, are able to coexist. We do this by parameterizing a spatio-temporal simulation model with the life-history strategies of the three species and then searching for the fire frequency and scale at which they are able to coexist when in competition. The simulation gives a clear result: these species coexist only across a very narrow range of fire probabilities, centred at 0.2. Conversely, fire scale was found to be important only at very large scales. Our work demonstrates the efficacy of using competing sympatric species with different regeneration niches to determine the probability of fire in any given patch. Estimating this probability allows us to construct an expected historical distribution of fire return intervals for the community; a critical resource for managing fire-driven biodiversity in the face of a growing carbon economy and ongoing climate change. PMID:22363670
Inventory of anthropogenic methane emissions in mainland China from 1980 to 2010
NASA Astrophysics Data System (ADS)
Peng, Shushi; Piao, Shilong; Bousquet, Philippe; Ciais, Philippe; Li, Bengang; Lin, Xin; Tao, Shu; Wang, Zhiping; Zhang, Yuan; Zhou, Feng
2016-11-01
Methane (CH4) has a 28-fold greater global warming potential than CO2 over 100 years. Atmospheric CH4 concentration has tripled since 1750. Anthropogenic CH4 emissions from China have been growing rapidly in the past decades and contribute more than 10 % of global anthropogenic CH4 emissions, with large uncertainties in existing global inventories, which are generally limited to country-scale statistics. To date, a long-term CH4 emission inventory including the major source sectors and based on province-level emission factors is still lacking. In this study, we produced a detailed annual bottom-up inventory of anthropogenic CH4 emissions from the eight major source sectors in China for the period 1980-2010. In the past 3 decades, total CH4 emissions increased from 24.4 [18.6-30.5] Tg CH4 yr-1 in 1980 (mean [minimum-maximum of 95 % confidence interval]) to 44.9 [36.6-56.4] Tg CH4 yr-1 in 2010. Most of this increase took place in the 2000s, with average yearly emissions of 38.5 [30.6-48.3] Tg CH4 yr-1. This fast increase in total CH4 emissions after 2000 is mainly driven by CH4 emissions from coal exploitation. The largest contribution to total CH4 emissions also shifted from rice cultivation in 1980 to coal exploitation in 2010. The total emissions inferred in this work compare well with the EPA inventory but appear to be 36 and 18 % lower than the EDGAR4.2 inventory and than estimates using the same method but IPCC default emission factors, respectively. The uncertainty of our inventory is investigated using emission factors collected from the state-of-the-art published literature. We also distributed province-scale emissions onto 0.1° × 0.1° maps using socioeconomic activity data. This new inventory could help improve understanding of CH4 budgets at the regional scale and guide CH4 mitigation policies in China.
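The bottom-up accounting underlying such an inventory can be written generically as follows; this is the standard form of the method, not a formula quoted from the study. For year y, total emissions are the sum over source sectors s and provinces p of an activity level multiplied by a province-level emission factor,

E_{\mathrm{CH_4}}(y) = \sum_{s}\sum_{p} A_{s,p}(y)\,\mathrm{EF}_{s,p},

where A_{s,p}(y) is the activity data (e.g. tonnes of coal mined or area of paddy rice) and EF_{s,p} is the corresponding emission factor.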
Large size space construction for space exploitation
NASA Astrophysics Data System (ADS)
Kondyurin, Alexey
2016-07-01
Space exploitation is impossible without large space structures. We need sufficiently large volumes of pressurized protective frames for crew, passengers, space processing equipment, and so on; construction size should not be a limiting factor in space. At present, the size and mass of space structures are limited by the capacity of the launch vehicle, which limits the future of human space exploitation and the development of a space industry. Large space structures can be made using the curing technology of fibre-filled composites with a reactive matrix, applied directly in free space. For curing, a fabric impregnated with a liquid matrix (prepreg) is prepared on the ground and shipped to orbit in a container. In due course the prepreg is unfolded by inflation. After the polymerization reaction, the durable structure can be fitted out with air, equipment and life support systems. Our experimental studies of the curing processes in a simulated free-space environment showed that curing of the composite in free space is possible and that large space structures can be developed. Projects for a space station, Moon base, Mars base, mining station, interplanetary spaceship, telecommunications station, space observatory, space factory, antenna dish, radiation shield and solar sail are proposed and reviewed. The study was supported by the Humboldt Foundation, ESA (contract 17083/03/NL/SFe), the NASA stratospheric balloon program and RFBR grants (05-08-18277, 12-08-00970 and 14-08-96011).