Assessing sufficiency of thermal riverscapes for resilient ...
Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small- and large-scale thermal features to salmon populations has been challenged by the difficulty of both mapping thermal regimes at sufficient spatial and temporal resolutions and integrating thermal regimes into population models. We attempt to address these challenges by using newly available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing the sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
Gas-Centered Swirl Coaxial Liquid Injector Evaluations
NASA Technical Reports Server (NTRS)
Cohn, A. K.; Strakey, P. A.; Talley, D. G.
2005-01-01
Development of liquid rocket engines is expensive, and extensive testing at large scales is usually required. Verifying engine lifetime demands a large number of tests, yet limited resources are available for development. Sub-scale cold-flow and hot-fire testing is extremely cost effective; it could serve as a necessary (but not sufficient) condition for long engine lifetime, reducing the overall costs and risk of large-scale testing. Goal: determine what knowledge can be gained from sub-scale cold-flow and hot-fire evaluations of LRE injectors, and determine the relationships between cold-flow and hot-fire data.
Twisted versus braided magnetic flux ropes in coronal geometry. II. Comparative behaviour
NASA Astrophysics Data System (ADS)
Prior, C.; Yeates, A. R.
2016-06-01
Aims: Sigmoidal structures in the solar corona are commonly associated with magnetic flux ropes whose magnetic field lines are twisted about a mutual axis. Their dynamical evolution is well studied, with sufficient twisting leading to large-scale rotation (writhing) and vertical expansion, possibly leading to ejection. Here, we investigate the behaviour of flux ropes whose field lines have more complex entangled/braided configurations. Our hypothesis is that this internal structure will inhibit the large-scale morphological changes. Additionally, we investigate the influence of the background field within which the rope is embedded. Methods: A technique for generating tubular magnetic fields with arbitrary axial geometry and internal structure, introduced in part I of this study, provides the initial conditions for resistive-MHD simulations. The tubular fields are embedded in a linear force-free background, and we consider various internal structures for the tubular field, including both twisted and braided topologies. These embedded flux ropes are then evolved using a 3D MHD code. Results: Firstly, in a background where twisted flux ropes evolve through the expected non-linear writhing and vertical expansion, we find that flux ropes with sufficiently braided/entangled interiors show no such large-scale changes. Secondly, embedding a twisted flux rope in a background field with a sigmoidal inversion line leads to eventual reversal of the large-scale rotation. Thirdly, in some cases a braided flux rope splits due to reconnection into two twisted flux ropes of opposing chirality - a phenomenon previously observed in cylindrical configurations. Conclusions: Sufficiently complex entanglement of the magnetic field lines within a flux rope can suppress large-scale morphological changes of its axis, with magnetic energy reduced instead through reconnection and expansion. 
The structure of the background magnetic field can significantly affect the changing morphology of a flux rope.
Stability and stabilisation of a class of networked dynamic systems
NASA Astrophysics Data System (ADS)
Liu, H. B.; Wang, D. Q.
2018-04-01
We investigate the stability and stabilisation of a linear time-invariant networked heterogeneous system with arbitrarily connected subsystems. A new linear-matrix-inequality-based necessary and sufficient condition for stability is derived, from which a stabilisation procedure is obtained. The conditions efficiently exploit the block-diagonal structure of the system parameter matrices and the sparseness of the subsystem connection matrix. Moreover, a sufficient condition depending only on each individual subsystem is also presented for the stabilisation of large-scale networked systems. Numerical simulations show that these conditions are computationally effective for the analysis and synthesis of a large-scale networked system.
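The abstract does not reproduce the LMI condition itself. As a hedged illustration of the underlying idea, the sketch below certifies stability of a coupled block-diagonal system with the classical Lyapunov-equation test (a standard special case of such LMI conditions); the matrices are invented examples, not the paper's.

```python
# Minimal sketch (not the paper's LMI condition): certify stability of a
# networked LTI system x' = A x by solving the Lyapunov equation
#   A^T P + P A = -I;  A is Hurwitz iff the solution P is positive definite.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def is_stable(A, tol=1e-9):
    """Return True if all trajectories of x' = A x decay to zero."""
    n = A.shape[0]
    P = solve_continuous_lyapunov(A.T, -np.eye(n))  # solves A^T P + P A = -I
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > tol))

# Two stable subsystems (block-diagonal dynamics) with a weak, sparse
# coupling term, mirroring the structure exploited in the abstract.
A_blocks = np.block([
    [np.array([[-1.0, 0.5], [0.0, -2.0]]), 0.1 * np.ones((2, 2))],
    [0.1 * np.ones((2, 2)), np.array([[-3.0, 0.0], [1.0, -1.5]])],
])
print(is_stable(A_blocks))  # the weakly coupled system remains stable
```

The block-diagonal plus sparse-coupling structure is exactly what makes such certificates cheap to evaluate at scale.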
Large- and small-scale constraints on power spectra in Omega = 1 universes
NASA Technical Reports Server (NTRS)
Gelb, James M.; Gradwohl, Ben-Ami; Frieman, Joshua A.
1993-01-01
The CDM model of structure formation, normalized on large scales, leads to excessive pairwise velocity dispersions on small scales. In an attempt to circumvent this problem, we study three scenarios (all with Omega = 1) with more large-scale and less small-scale power than the standard CDM model: (1) cold dark matter with significantly reduced small-scale power (inspired by models with an admixture of cold and hot dark matter); (2) cold dark matter with a non-scale-invariant power spectrum; and (3) cold dark matter with coupling of dark matter to a long-range vector field. When normalized to COBE on large scales, such models do lead to reduced velocities on small scales and they produce fewer halos compared with CDM. However, models with sufficiently low small-scale velocities apparently fail to produce an adequate number of halos.
Limitations and tradeoffs in synchronization of large-scale networks with uncertain links
Diwadkar, Amit; Vaidya, Umesh
2016-01-01
The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
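As a sketch of the network-topology ingredient of such conditions, the following computes the Laplacian spectrum (the algebraic connectivity λ2 and the largest eigenvalue) of k-nearest-neighbour ring networks; the paper's actual synchronization-margin formula is not reproduced here, and the network sizes are illustrative.

```python
# The sufficient condition in the abstract depends on the Laplacian
# eigenvalues of the nominal interconnection. This computes that spectrum
# for a ring where each node links to its k nearest neighbours per side.
import numpy as np

def ring_laplacian(n, k):
    """Graph Laplacian of a k-nearest-neighbour ring of n nodes."""
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            A[i, (i + d) % n] = A[i, (i - d) % n] = 1.0
    return np.diag(A.sum(axis=1)) - A

for k in (1, 2, 5):
    lam = np.sort(np.linalg.eigvalsh(ring_laplacian(20, k)))
    print(k, round(lam[1], 3), round(lam[-1], 3))  # lambda_2, lambda_max
```

Sweeping k this way is the natural numerical experiment behind the abstract's claim that an optimal number of neighbours maximizes the synchronization margin.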
An investigation of small scales of turbulence in a boundary layer at high Reynolds numbers
NASA Technical Reports Server (NTRS)
Wallace, James M.; Ong, L.; Balint, J.-L.
1993-01-01
The assumption that turbulence at large wave-numbers is isotropic and has universal spectral characteristics which are independent of the flow geometry, at least for high Reynolds numbers, has been a cornerstone of closure theories as well as of the most promising recent development in the effort to predict turbulent flows, viz. large eddy simulations. This hypothesis was first advanced by Kolmogorov based on the supposition that turbulent kinetic energy cascades down the scales (up the wave-numbers) of turbulence and that, if the number of these cascade steps is sufficiently large (i.e. the wave-number range is large), then the effects of anisotropies at the large scales are lost in the energy transfer process. Experimental attempts were repeatedly made to verify this fundamental assumption. However, Van Atta has recently suggested that an examination of the scalar and velocity gradient fields is necessary to definitively verify this hypothesis or prove it to be unfounded. Of course, this must be carried out in a flow with a sufficiently high Reynolds number to provide the separation of scales needed to allow unambiguously for the possibility of local isotropy at large wave-numbers. An opportunity to use our 12-sensor hot-wire probe to address this issue directly was made available at the 80'x120' wind tunnel at the NASA Ames Research Center, which is normally used for full-scale aircraft tests. An initial report on this high Reynolds number experiment and progress toward its evaluation is presented.
Studies of Postdisaster Economic Recovery: Analysis, Synthesis, and Assessment
1987-06-01
of a large-scale nuclear disaster can be viewed in the aggregate as attempting to answer two broad questions: 1. Do resources survive in sufficient... With respect to economic institutional issues in the aftermath of a nuclear disaster, published research has been, almost without exception, speculative... possibilities. There are at least three major themes that permeate the literature on economic control in the event of a large-scale nuclear disaster. First
Gravitational waves and large field inflation
NASA Astrophysics Data System (ADS)
Linde, Andrei
2017-02-01
According to the famous Lyth bound, one can confirm large field inflation by finding tensor modes with sufficiently large tensor-to-scalar ratio r. Here we will try to answer two related questions: is it possible to rule out all large field inflationary models by not finding tensor modes with r above some critical value, and what can we say about the scale of inflation by measuring r? However, in order to answer these questions one should distinguish between two different definitions of the large field inflation and three different definitions of the scale of inflation. We will examine these issues using the theory of cosmological α-attractors as a convenient testing ground.
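For orientation, the Lyth bound invoked above can be written, at leading order and assuming r is roughly constant over N e-folds, as Δφ/M_Pl ≈ sqrt(r/8)·N, so a detectable tensor-to-scalar ratio implies a super-Planckian field excursion. A quick numerical check of this standard form:

```python
# Leading-order Lyth bound (standard textbook form, assuming r is roughly
# constant over N e-folds of inflation):
#   delta_phi / M_Pl ~= sqrt(r / 8) * N
import math

def lyth_excursion(r, N=60.0):
    """Inflaton field excursion in Planck units for tensor-to-scalar ratio r."""
    return math.sqrt(r / 8.0) * N

for r in (0.1, 0.01, 0.001):
    print(r, round(lyth_excursion(r), 2))
# r = 0.01 with N = 60 already gives delta_phi ~ 2 M_Pl, i.e. large-field
# inflation in the field-excursion sense.
```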
Unsuppressed primordial standard clocks in warm quasi-single field inflation
NASA Astrophysics Data System (ADS)
Tong, Xi; Wang, Yi; Zhou, Siyi
2018-06-01
We study the non-Gaussianities in quasi-single field inflation with a warm inflation background. The thermal effects at small scales can sufficiently enhance the magnitude of the primordial standard clock signal. This scenario offers the possibility of probing the UV physics of the very early universe without the exponentially small Boltzmann suppression factor when the mass of the isocurvaton is much larger than the Hubble scale. The thermal effects at small scales can be studied using flat-space thermal field theory, connected to an effective description using a non-Bunch-Davies vacuum at large scales, with a large clock signal.
Monitoring conservation success in a large oak woodland landscape
Rich Reiner; Emma Underwood; John-O Niles
2002-01-01
Monitoring is essential in understanding the success or failure of a conservation project and provides the information needed to conduct adaptive management. Although there is a large body of literature on monitoring design, it fails to provide sufficient information to practitioners on how to organize and apply monitoring when implementing landscape-scale conservation...
Commentary: Environmental nanophotonics and energy
NASA Astrophysics Data System (ADS)
Smith, Geoff B.
2011-01-01
The reasons nanophotonics is proving central to meeting the need for large gains in energy efficiency and renewable energy supply are analyzed. It enables optimum management and use of environmental energy flows at low cost and on a sufficient scale by providing spectral, directional and temporal control in tune with radiant flows from the sun and the local atmosphere. Benefits and problems involved in large-scale manufacture and deployment are discussed, including how safety issues in some nanosystems can be managed and avoided, a process long established in nature.
Liu, Ke; Zhang, Jian; Bao, Jie
2015-11-01
A two-stage hydrolysis of corn stover was designed to resolve the conflict between sufficient mixing at high solids content and high power input encountered in large-scale bioreactors. The process starts with quick liquefaction, converting solid cellulose to a liquid slurry with strong mixing in small reactors, followed by comprehensive hydrolysis to complete saccharification into fermentable sugars in large reactors without agitation apparatus. Sixty percent of the mixing energy consumption was saved by removing the mixing apparatus from the large-scale vessels. The scale-up ratio was small for the first-stage hydrolysis reactors because of the reduced reactor volume. For the large saccharification reactors in the second stage, scale-up was easy because no mixing mechanism was involved. This two-stage hydrolysis is applicable to either simple hydrolysis or combined fermentation processes. The method provides a practical process option for industrial-scale biorefinery processing of lignocellulose biomass.
Sakurai, Hidehiro; Masukawa, Hajime; Kitashima, Masaharu; Inoue, Kazuhito
2010-01-01
In order to decrease CO2 emissions from the burning of fossil fuels, the development of new renewable energy sources of sufficiently large quantity is essential. To meet this need, we propose large-scale H2 production on the sea surface utilizing cyanobacteria. Although many of the relevant technologies are in an early stage of development, this chapter briefly examines the feasibility of such H2 production, in order to illustrate that under certain conditions large-scale photobiological H2 production can be viable. Assuming that solar energy is converted to H2 at 1.2% efficiency, the future cost of H2 can be estimated at about 11 cents kWh-1 with pipeline delivery and 26.4 cents kWh-1 with compression and marine transportation.
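A back-of-envelope sketch of the yield implied by the quoted 1.2% solar-to-H2 efficiency; only that efficiency figure comes from the abstract, while the insolation and culture area below are assumed placeholder values, not numbers from the chapter.

```python
# Order-of-magnitude yield for sea-surface photobiological H2 production.
insolation_w_per_m2 = 200.0   # assumed annual-mean surface solar flux (placeholder)
efficiency = 0.012            # solar-to-H2 conversion (1.2%, from the abstract)
area_m2 = 1.0e6               # 1 km^2 of culture area (assumed)
hours_per_year = 8760.0

h2_power_kw = insolation_w_per_m2 * efficiency * area_m2 / 1000.0
h2_energy_kwh_per_year = h2_power_kw * hours_per_year
print(round(h2_power_kw), round(h2_energy_kwh_per_year / 1e6, 1))  # kW, GWh/yr
```

Under these assumptions a square kilometre delivers a time-averaged ~2.4 MW of H2 energy, which is the kind of scale argument the chapter's cost estimate rests on.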
NASA Astrophysics Data System (ADS)
Nolan, R. H.; Boer, M. M.; Resco de Dios, V.; Caccamo, G.; Bradstock, R. A.
2016-05-01
The occurrence of large, high-intensity wildfires requires plant biomass, or fuel, that is sufficiently dry to burn. This poses the question, what is "sufficiently dry"? Until recently, the ability to address this question has been constrained by the spatiotemporal scale of available methods to monitor the moisture contents of both dead and live fuels. Here we take advantage of recent developments in macroscale monitoring of fuel moisture through a combination of remote sensing and climatic modeling. We show there are clear thresholds of fuel moisture content associated with the occurrence of wildfires in forests and woodlands. Furthermore, we show that transformations in fuel moisture conditions across these thresholds can occur rapidly, within a month. Both the approach presented here, and our findings, can be immediately applied and may greatly improve fire risk assessments in forests and woodlands globally.
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high-resolution camera with a large field of view, capable of imaging dim emissions in the far-ultraviolet, is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible-light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes, and the camera is capable of a spatial resolution of >20 km. The optics and filters are emphasized.
Cosmic homogeneity: a spectroscopic and model-independent measurement
NASA Astrophysics Data System (ADS)
Gonçalves, R. S.; Carvalho, G. C.; Bengaly, C. A. P., Jr.; Carvalho, J. C.; Bernui, A.; Alcaniz, J. S.; Maartens, R.
2018-03-01
Cosmology relies on the Cosmological Principle, i.e. the hypothesis that the Universe is homogeneous and isotropic on large scales. This implies in particular that the counts of galaxies should approach a homogeneous scaling with volume at sufficiently large scales. Testing homogeneity is crucial to obtain a correct interpretation of the physical assumptions underlying the current cosmic acceleration and structure formation of the Universe. In this letter, we use the Baryon Oscillation Spectroscopic Survey to make the first spectroscopic and model-independent measurements of the angular homogeneity scale θh. Applying four statistical estimators, we show that the angular distribution of galaxies in the range 0.46 < z < 0.62 is consistent with homogeneity at large scales, and that θh varies with redshift, indicating a smoother Universe in the past. These results are in agreement with the foundations of the standard cosmological paradigm.
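The counts-in-spheres idea behind such homogeneity tests can be sketched numerically: for a statistically homogeneous point set, N(<r) ∝ r³, i.e. the correlation dimension D2 → 3. The toy below checks this on uniform random points in a periodic box; the real analysis uses galaxy survey data and angular scales, as the abstract describes.

```python
# Toy counts-in-spheres homogeneity estimator on uniform random points.
import numpy as np

rng = np.random.default_rng(42)
pts = rng.random((4000, 3))               # uniform points in a unit periodic box
radii = np.array([0.06, 0.09, 0.12, 0.18])

def mean_counts(points, radii, n_centers=200):
    centers = points[:n_centers]
    d = np.abs(points[None, :, :] - centers[:, None, :])
    d = np.minimum(d, 1.0 - d)            # periodic minimum-image separations
    dist = np.sqrt((d ** 2).sum(axis=2))
    # mean number of neighbours within r, excluding the center point itself
    return np.array([((dist < r).sum(axis=1) - 1).mean() for r in radii])

counts = mean_counts(pts, radii)
D2 = np.polyfit(np.log(radii), np.log(counts), 1)[0]
print(round(D2, 2))  # close to 3 for a homogeneous distribution
```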
Development of optimal grinding and polishing tools for aspheric surfaces
NASA Astrophysics Data System (ADS)
Burge, James H.; Anderson, Bill; Benjamin, Scott; Cho, Myung K.; Smith, Koby Z.; Valente, Martin J.
2001-12-01
The ability to grind and polish steep aspheric surfaces to high quality is limited by the tools used for working the surface. The optician prefers to use large, stiff tools to get good natural smoothing, avoiding small scale surface errors. This is difficult for steep aspheres because the tools must have sufficient compliance to fit the aspheric surface, yet we wish the tools to be stiff so they wear down high regions on the surface. This paper presents a toolkit for designing optimal tools that provide large scale compliance to fit the aspheric surface, yet maintain small scale stiffness for efficient polishing.
Large scale structure in universes dominated by cold dark matter
NASA Technical Reports Server (NTRS)
Bond, J. Richard
1986-01-01
The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.
Insufficiency of avoided crossings for witnessing large-scale quantum coherence in flux qubits
NASA Astrophysics Data System (ADS)
Fröwis, Florian; Yadin, Benjamin; Gisin, Nicolas
2018-04-01
Do experiments based on superconducting loops segmented with Josephson junctions (e.g., flux qubits) show macroscopic quantum behavior in the sense of Schrödinger's cat example? Various arguments based on microscopic and phenomenological models were recently adduced in this debate. We approach this problem by adapting (to flux qubits) the framework of large-scale quantum coherence, which was already successfully applied to spin ensembles and photonic systems. We show that contemporary experiments might show quantum coherence more than 100 times larger than experiments in the classical regime. However, we argue that the often-used demonstration of an avoided crossing in the energy spectrum is not sufficient to make a conclusion about the presence of large-scale quantum coherence. Alternative, rigorous witnesses are proposed.
Hollow microcarriers for large-scale expansion of anchorage-dependent cells in a stirred bioreactor.
YekrangSafakar, Ashkan; Acun, Aylin; Choi, Jin-Woo; Song, Edward; Zorlutuna, Pinar; Park, Kidong
2018-03-26
With recent advances in biotechnology, mammalian cells are used in the biopharmaceutical industry to produce valuable protein therapeutics and are investigated as effective therapeutic agents for degenerative diseases in cell-based therapy. In these exciting and actively expanding fields, a reliable, efficient, and affordable platform to culture mammalian cells on a large scale is one of the most vital necessities. To produce and maintain a very large population of anchorage-dependent cells, a microcarrier-based stirred-tank bioreactor is commonly used. In this approach, the cells are exposed to harmful hydrodynamic shear stress in the bioreactor, and the mass transfer rates of nutrients and gases are often kept below an optimal level to prevent cellular damage from the shear stress. In this paper, a hollow microcarrier (HMC) is presented as a novel solution to protect cells from shear stress in stirred bioreactors while ensuring a sufficient and uniform mass transfer rate of gases and nutrients. The HMC is a hollow microsphere on whose inner surface cells are cultured for protection, while openings on the HMC provide sufficient exchange of media inside it. As a proof of concept, we demonstrated the expansion of NIH/3T3 fibroblasts and the expansion and cardiac differentiation of human induced pluripotent stem cells, along with detailed numerical analysis. We believe that the developed HMC can be a practical solution to enable large-scale expansion of shear-sensitive anchorage-dependent cells at an industrial scale with stirred bioreactors.
SAR STUDY OF NASAL TOXICITY: LESSONS FOR MODELING SMALL TOXICITY DATASETS
Most toxicity data, particularly from whole animal bioassays, are generated without the needs or capabilities of structure-activity relationship (SAR) modeling in mind. Some toxicity endpoints have been of sufficient regulatory concern to warrant large scale testing efforts (e.g....
Constraints to commercialization of algal fuels.
Chisti, Yusuf
2013-09-10
Production of algal crude oil has been achieved in various pilot-scale facilities, but whether algal fuels can be produced in sufficient quantity to meaningfully displace petroleum fuels has been largely overlooked. Limitations to commercialization of algal fuels need to be understood and addressed for any future commercialization. This review identifies the major constraints to commercialization of transport fuels from microalgae. Algae-derived fuels are expensive compared to petroleum-derived fuels, but this could change. Unfortunately, improved economics of production are not sufficient for an environmentally sustainable production, or for its large-scale feasibility. A low-cost point supply of concentrated carbon dioxide colocated with the other essential resources is necessary for producing algal fuels. An insufficiency of concentrated carbon dioxide is actually a major impediment to any substantial production of algal fuels. Sustainability of production requires the development of an ability to almost fully recycle the phosphorus and nitrogen nutrients that are necessary for algae culture. Development of a nitrogen biofixation ability to support production of algal fuels ought to be an important long-term objective. At sufficiently large scale, a limited supply of freshwater will pose a significant limitation to production even if marine algae are used. Processes for recovering energy from the algal biomass left after the extraction of oil are required for achieving a net positive energy balance in the algal fuel oil. The near-term outlook for widespread use of algal fuels appears bleak, but fuels for niche applications such as aviation may be likely in the medium term. Genetic and metabolic engineering of microalgae to boost production of fuel oil and ease its recovery are essential for commercialization of algal fuels. Algae will need to be genetically modified for improved photosynthetic efficiency in the long term.
Modified dispersion relations, inflation, and scale invariance
NASA Astrophysics Data System (ADS)
Bianco, Stefano; Friedhoff, Victor Nicolai; Wilson-Ewing, Edward
2018-02-01
For a certain type of modified dispersion relations, the vacuum quantum state for very short wavelength cosmological perturbations is scale-invariant, and it has been suggested that this may be the source of the scale-invariance observed in the temperature anisotropies in the cosmic microwave background. We point out that for this scenario to be possible, it is necessary to redshift these short wavelength modes to cosmological scales in such a way that the scale-invariance is not lost. This requires nontrivial background dynamics before the onset of standard radiation-dominated cosmology; we demonstrate that one possible solution is inflation with a sufficiently large Hubble rate; for this, slow roll is not necessary. In addition, we also show that if the slow-roll condition is added to inflation with a large Hubble rate, then for any power-law modified dispersion relation quantum vacuum fluctuations become nearly scale-invariant when they exit the Hubble radius.
LES with and without explicit filtering: comparison and assessment of various models
NASA Astrophysics Data System (ADS)
Winckelmans, Gregoire S.; Jeanmart, Herve; Wray, Alan A.; Carati, Daniele
2000-11-01
The proper mathematical formalism for large eddy simulation (LES) of turbulent flows assumes that a regular "explicit" filter (i.e., a filter with a well-defined second moment, such as the Gaussian, the top hat, etc.) is applied to the equations of fluid motion. This filter is then responsible for a "filtered-scale" stress. Because of the discretization of the filtered equations, using the LES grid, there is also a "subgrid-scale" stress. The global effective stress is found to be the discretization of a filtered-scale stress plus a subgrid-scale stress. The former can be partially reconstructed from an exact, infinite series, the first term of which is the "tensor-diffusivity" model of Leonard and is found, in practice, to be sufficient for modeling. Alternatively, sufficient reconstruction can also be achieved using the "scale-similarity" model of Bardina. The latter corresponds to loss of information: it cannot be reconstructed; its effect (essentially dissipation) must be modeled using ad hoc modeling strategies (such as the dynamic version of the "effective viscosity" model of Smagorinsky). Practitioners also often assume LES without explicit filtering: the effective stress is then only a subgrid-scale stress. We here compare the performance of various LES models for both approaches (with and without explicit filtering), and for cases without solid boundaries: (1) decay of isotropic turbulence; (2) decay of aircraft wake vortices in a turbulent atmosphere. One main conclusion is that better subgrid-scale models are still needed, the effective viscosity models being too active at the large scales.
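The explicit-filtering decomposition above can be illustrated in one dimension: apply a top-hat filter to a synthetic velocity field and form the scale-similarity (Bardina-type) subfilter stress τ = bar(u·u) − bar(u)·bar(u). This is a minimal sketch of that decomposition, not the tensor-diffusivity model or any of the paper's actual test cases.

```python
# 1-D illustration of explicit filtering in LES with a top-hat filter.
import numpy as np

def top_hat_filter(u, width):
    """Top-hat filter of odd width, with periodic boundaries."""
    kernel = np.ones(width) / width
    pad = width // 2
    up = np.concatenate([u[-pad:], u, u[:pad]])
    return np.convolve(up, kernel, mode="valid")

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(17.0 * x) + 0.1 * rng.standard_normal(512)

u_bar = top_hat_filter(u, 9)
tau = top_hat_filter(u * u, 9) - u_bar * u_bar   # scale-similarity stress
print(u.var() > u_bar.var(), tau.mean() > 0.0)   # filtering removes variance;
                                                 # the subfilter stress picks it up
```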
Large-scale Density Structures in Magneto-rotational Disk Turbulence
NASA Astrophysics Data System (ADS)
Youdin, Andrew; Johansen, A.; Klahr, H.
2009-01-01
Turbulence generated by the magneto-rotational instability (MRI) is a strong candidate to drive accretion flows in disks, including sufficiently ionized regions of protoplanetary disks. The MRI is often studied in local shearing boxes, which model a small section of the disk at high resolution. I will present simulations of large, stratified shearing boxes which extend up to 10 gas scale-heights across. These simulations are a useful bridge to fully global disk simulations. We find that MRI turbulence produces large-scale, axisymmetric density perturbations. These structures are part of a zonal flow --- analogous to the banded flow in Jupiter's atmosphere --- which survives in near-geostrophic balance for tens of orbits. The launching mechanism is large-scale magnetic tension generated by an inverse cascade. We demonstrate the robustness of these results by careful study of various box sizes, grid resolutions, and microscopic diffusion parameterizations. These gas structures can trap solid material (in the form of large dust or ice particles) with important implications for planet formation. Resolved disk images at mm wavelengths (e.g. from ALMA) will verify or constrain the existence of these structures.
Critical gravitational collapse with angular momentum. II. Soft equations of state
NASA Astrophysics Data System (ADS)
Gundlach, Carsten; Baumgarte, Thomas W.
2018-03-01
We study critical phenomena in the collapse of rotating ultrarelativistic perfect fluids, in which the pressure P is related to the total energy density ρ by P = κρ, where κ is a constant. We generalize earlier results for radiation fluids with κ = 1/3 to other values of κ, focusing on κ < 1/9. For 1/9 < κ ≲ 0.49, the critical solution has only one unstable, growing mode, which is spherically symmetric. For supercritical data it controls the black-hole mass, while for subcritical data it controls the maximum density. For κ < 1/9, an additional axial l = 1 mode becomes unstable. This controls either the black-hole angular momentum or the maximum angular velocity. In theory, the additional unstable l = 1 mode changes the nature of the black-hole threshold completely: at sufficiently large initial rotation rates Ω and sufficient fine-tuning of the initial data to the black-hole threshold, we expect to observe nontrivial universal scaling functions (familiar from critical phase transitions in thermodynamics) governing the black-hole mass and angular momentum, and, with further fine-tuning, eventually a finite black-hole mass almost everywhere on the threshold. In practice, however, the second unstable mode grows so slowly that we do not observe this breakdown of scaling at the level of fine-tuning we can achieve, nor systematic deviations from the leading-order power-law scalings of the black-hole mass. We do see systematic effects in the black-hole angular momentum, but it is not yet clear if these are due to the predicted nontrivial scaling functions or to nonlinear effects at sufficiently large initial angular momentum (which we do not account for in our theoretical model).
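The leading-order scaling discussed above is a power law, M_BH ∝ (p − p*)^γ, for supercritical data near the threshold p*. A common way to extract γ from simulation output is a log-log fit; the sketch below recovers it from synthetic, noise-free data (the threshold, amplitude, and exponent values are illustrative, not results from the paper).

```python
# Recover the critical exponent gamma from synthetic near-threshold data.
import numpy as np

p_star, gamma_true = 0.5, 0.36
p = p_star + np.logspace(-6, -2, 20)           # supercritical parameter values
mass = 2.0 * (p - p_star) ** gamma_true        # idealized black-hole masses

slope, _ = np.polyfit(np.log(p - p_star), np.log(mass), 1)
print(round(slope, 3))  # the fitted slope is gamma
```

In real simulations, deviations from a clean straight line in this fit are exactly the kind of "systematic deviations from the leading-order power-law scalings" the abstract looks for.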
NASA Astrophysics Data System (ADS)
Wolf-Grosse, Tobias; Esau, Igor; Reuder, Joachim
2017-06-01
Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that the concentrations decrease monotonically with rising wind speed. This convenient assumption breaks down when applied to flows with local recirculations, such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s-1. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor.
We found that a relatively small local water body acted as a barrier to the horizontal transport of air pollutants from the largest street in the valley and along the valley bottom, transporting them vertically instead and hence diluting them. We also found that stable stratification accumulates street-level pollution from the transport corridor in shallow air pockets near the surface. The polluted air pockets are transported by the local recirculations to other, less polluted areas with only slow dilution. This combination of relatively long-distance and complex transport paths together with weak dispersion is not sufficiently resolved in classical air pollution models. The findings have important implications for air quality prediction over urban areas. Any prediction not resolving these, or similar, local dynamic features might not be able to correctly simulate the dispersion of pollutants in cities.
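The monotonic-dilution assumption that this study challenges can be made concrete with a one-line box model. The sketch below is generic textbook material, not the dispersion model used or criticized in the study; the function name and all numbers are illustrative:

```python
def box_model_concentration(emission_rate, wind_speed, height, width):
    """Steady-state concentration in a well-mixed street-canyon box:
    C = Q / (u * H * W). Units are arbitrary but consistent."""
    return emission_rate / (wind_speed * height * width)

# Under this assumption, doubling the wind speed halves the concentration --
# exactly the monotonic behaviour that breaks down in recirculating flows
# such as Bergen's coastal valley.
c_weak = box_model_concentration(1.0, 1.0, 50.0, 20.0)
c_strong = box_model_concentration(1.0, 2.0, 50.0, 20.0)
print(c_weak / c_strong)  # 2.0
```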
Optical Communications With A Geiger Mode APD Array
2016-02-09
spurious fires from numerous sources, including crosstalk from other detectors in the same array. Additionally, after a successful detection, the... be combined into arrays with large numbers of detectors, allowing for scaling of dynamic range with relatively little overhead on space and power... overall higher rate of dark counts than a single detector, this is more than compensated for by the extra detectors. A sufficiently large APD array could
NASA Astrophysics Data System (ADS)
Feldmann, Daniel; Bauer, Christian; Wagner, Claus
2018-03-01
We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures as a function of Reτ and to assess a minimum ? required for relevant turbulent scales to be captured and a minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain length dependencies for pipes shorter than 14R and 7R, depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies, and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ⪆1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectrum does not yet indicate sufficient scale separation between the most energetic and the very long motions.
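The domain-length assessment above rests on one-dimensional pre-multiplied energy spectra, k·E(k), whose peak locates the energetically dominant wavelength. A minimal sketch of computing one for a periodic signal follows; this is generic signal-processing code under simple assumptions (real periodic input, standard FFT normalization), not the DNS post-processing used in the paper:

```python
import numpy as np

def premultiplied_spectrum(u, dx):
    """One-dimensional pre-multiplied energy spectrum k*E(k)
    of a real periodic signal u sampled with spacing dx."""
    n = len(u)
    uhat = np.fft.rfft(u - u.mean())
    E = np.abs(uhat) ** 2 / n**2                # discrete energy per mode
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)  # angular wavenumbers
    return k, k * E

# Synthetic signal with all its energy at one wavelength: the
# pre-multiplied spectrum peaks at the corresponding wavenumber.
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
k, kE = premultiplied_spectrum(np.sin(8.0 * x), x[1] - x[0])
print(k[np.argmax(kE)])  # ~8.0
```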
Neighborhood scale quantification of ecosystem goods and ...
Ecosystem goods and services (EGS) are those ecological structures and functions that humans can directly relate to their state of well-being. They include, but are not limited to, a sufficient fresh water supply, fertile lands to produce agricultural products, shading, air and water of sufficient quality for designated uses, flood water retention, and places to recreate. The US Environmental Protection Agency (USEPA) Office of Research and Development's Tampa Bay Ecosystem Services Demonstration Project (TBESDP) modeling efforts organized existing literature values for biophysical attributes and processes related to EGS. The goal was to develop a database for informing map-based EGS assessments for current and future land cover/use scenarios at multiple scales. This report serves as a demonstration of applying an EGS assessment approach at the large neighborhood scale (~1,000 acres of residential parcels plus common areas). Here, we present mapped inventories of ecosystem goods and services production at a neighborhood scale within the Tampa Bay, FL region. Comparisons of the inventory between two alternative neighborhood designs are presented as an example of how one might apply EGS concepts at this scale.
Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.
Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner
2016-01-01
Sampling limitations in electron microscopy raise the question of whether an analysis is representative of the bulk material, especially when analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images with sufficient resolution, subsequently stitched together to generate a large-area map using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large-scale maps provides representative information that can be related to the synthesis and process conditions of hierarchical materials, which moves electron microscopy analysis towards becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales.
Gaussian processes for personalized e-health monitoring with wearable sensors.
Clifton, Lei; Clifton, David A; Pimentel, Marco A F; Watkinson, Peter J; Tarassenko, Lionel
2013-01-01
Advances in wearable sensing and communications infrastructure have allowed the widespread development of prototype medical devices for patient monitoring. However, such devices have not penetrated into clinical practice, primarily due to a lack of research into "intelligent" analysis methods that are sufficiently robust to support large-scale deployment. Existing systems are typically plagued by large false-alarm rates and an inability to cope with sensor artifact in a principled manner. This paper has two aims: 1) proposal of a novel, patient-personalized system for analysis and inference in the presence of data uncertainty, typically caused by sensor artifact and data incompleteness; and 2) demonstration of the method using a large-scale clinical study in which 200 patients were monitored using the proposed system. The latter provides much-needed evidence that personalized e-health monitoring is feasible within an actual clinical environment, at scale, and that the method is capable of improving patient outcomes via personalized healthcare.
Metabolic rates of giant pandas inform conservation strategies.
Fei, Yuxiang; Hou, Rong; Spotila, James R; Paladino, Frank V; Qi, Dunwu; Zhang, Zhihe
2016-06-06
The giant panda is an icon of conservation and survived a large-scale bamboo die-off in the 1980s in China. Captive breeding programs have produced a large population in zoos and efforts continue to reintroduce those animals into the wild. However, we lack sufficient knowledge of their physiological ecology to determine requirements for survival now and in the face of climate change. We measured resting and active metabolic rates of giant pandas in order to determine if current bamboo resources were sufficient for adding additional animals to populations in natural reserves. Resting metabolic rates were somewhat below average for a panda-sized mammal and active metabolic rates were in the normal range. Pandas do not have exceptionally low metabolic rates. Nevertheless, there is enough bamboo in natural reserves to support both natural populations and large numbers of reintroduced pandas. Bamboo will not be the limiting factor in successful reintroduction.
Correcting Measurement Error in Latent Regression Covariates via the MC-SIMEX Method
ERIC Educational Resources Information Center
Rutkowski, Leslie; Zhou, Yan
2015-01-01
Given the importance of large-scale assessments to educational policy conversations, it is critical that subpopulation achievement is estimated reliably and with sufficient precision. Despite this importance, biased subpopulation estimates have been found to occur when variables in the conditioning model side of a latent regression model contain…
Developments in Hollow Graphite Fiber Technology
NASA Technical Reports Server (NTRS)
Stallcup, Michael; Brantley, Lott W., Jr. (Technical Monitor)
2002-01-01
Hollow graphite fibers will be lighter than standard solid graphite fibers and, thus, will save weight in optical components. This program will optimize the processing and properties of hollow carbon fibers developed by MER and scale up the processing to produce sufficient fiber for fabricating a large ultra-lightweight mirror for delivery to NASA.
Characterizing dispersal patterns in a threatened seabird with limited genetic structure
Laurie A. Hall; Per J. Palsboll; Steven R. Beissinger; James T. Harvey; Martine Berube; Martin G. Raphael; Kim Nelson; Richard T. Golightly; Laura McFarlane-Tranquilla; Scott H. Newman; M. Zachariah Peery
2009-01-01
Genetic assignment methods provide an appealing approach for characterizing dispersal patterns on ecological time scales, but require sufficient genetic differentiation to accurately identify migrants and a large enough sample size of migrants to, for example, compare dispersal between sexes or age classes. We demonstrate that assignment methods can be rigorously used...
ERIC Educational Resources Information Center
Nielsen, Kristen
2014-01-01
Student writing achievement is essential to lifelong learner success, but supporting writing can be challenging for teachers. Several large-scale analyses of publications on writing have called for further study of instructional methods, as the current literature does not sufficiently address the need to support best teaching practices.…
NASA Technical Reports Server (NTRS)
Land, Norman S.; Pelz, Charles A.
1952-01-01
Force characteristics determined from tank tests of a 1/5.78 scale model of a hydro-ski-wheel combination for the Grumman JRF-5 airplane are presented. The model was tested in both the submerged and planing conditions over a range of trim, speed, and load sufficiently large to represent the most probable full-size conditions.
Scale-invariance underlying the logistic equation and its social applications
NASA Astrophysics Data System (ADS)
Hernando, A.; Plastino, A.
2013-01-01
On the basis of dynamical principles we i) advance a derivation of the Logistic Equation (LE), widely employed (among multiple applications) in the simulation of population growth, and ii) demonstrate that scale-invariance and a mean-value constraint are sufficient and necessary conditions for obtaining it. We also generalize the LE to multi-component systems and show that the above dynamical mechanisms underlie a large number of scale-free processes. Examples are presented regarding city-populations, diffusion in complex networks, and popularity of technological products, all of them obeying the multi-component logistic equation in an either stochastic or deterministic way.
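The Logistic Equation and the scale invariance it rests on can be illustrated numerically. The sketch below uses the standard closed-form solution of dN/dt = rN(1 − N/K); it is a generic illustration, not the authors' derivation, and all parameter values are arbitrary:

```python
import numpy as np

def logistic(t, n0, r, K):
    """Closed-form solution of the logistic equation dN/dt = r N (1 - N/K),
    with initial population n0 and carrying capacity K."""
    return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

# One simple manifestation of scale invariance: rescaling the initial
# population and the carrying capacity by a common factor rescales the
# entire trajectory by that same factor.
t = np.linspace(0.0, 10.0, 101)
print(np.allclose(5.0 * logistic(t, 1.0, 0.8, 100.0),
                  logistic(t, 5.0, 0.8, 500.0)))  # True
```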
Large-scale anisotropy in stably stratified rotating flows
Marino, R.; Mininni, P. D.; Rosenberg, D. L.; ...
2014-08-28
We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. 
The end product of this work will be a cross-flow turbine actuator line model to be used as an extension to the OpenFOAM computational fluid dynamics (CFD) software framework, which will likely require modifications to commonly-used dynamic stall models, in consideration of the turbines' high angle of attack excursions during normal operation.
NASA Technical Reports Server (NTRS)
Griffin, Roy N., Jr.; Holzhauser, Curt A.; Weiberg, James A.
1958-01-01
An investigation was made to determine the lifting effectiveness and flow requirements of blowing over the trailing-edge flaps and ailerons on a large-scale model of a twin-engine, propeller-driven airplane having a high-aspect-ratio, thick, straight wing. With sufficient blowing jet momentum to prevent flow separation on the flap, the lift increment increased for flap deflections up to 80 deg (the maximum tested). This lift increment also increased with increasing propeller thrust coefficient. The blowing jet momentum coefficient required for attached flow on the flaps was not significantly affected by thrust coefficient, angle of attack, or blowing nozzle height.
Evaluation of advanced microelectronics for inclusion in MIL-STD-975
NASA Technical Reports Server (NTRS)
Scott, W. Richard
1991-01-01
The approach taken by NASA and JPL (Jet Propulsion Laboratory) in the development of a MIL-STD-975 section which contains advanced technology such as Large Scale Integration and Very Large Scale Integration (LSI/VLSI) microelectronic devices is described. The parts listed in this section are recommended as satisfactory for NASA flight applications, in the absence of alternate qualified devices, based on satisfactory results of a vendor capability audit, the availability of sufficient characterization and reliability data from manufacturers and users, and negotiated detail procurement specifications. The criteria used in the selection and evaluation of the vendors and candidate parts, the preparation of procurement specifications, and the status of this activity are discussed.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., for a line right-of-way in excess of 100 feet in width or for a structure or facility right-of-way of over 10,000 square feet must state the reasons why the larger right-of-way is required. Rights-of-way... drawing on a scale sufficiently large to show clearly their dimensions and relative positions. When two or...
2016 Offshore Wind Energy Resource Assessment for the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musial, Walt; Heimiller, Donna; Beiter, Philipp
2016-09-01
This report, the 2016 Offshore Wind Energy Resource Assessment for the United States, was developed by the National Renewable Energy Laboratory. It updates a previous national resource assessment study and refines and reaffirms the finding that the available wind resource is sufficient for offshore wind to be a large-scale contributor to the nation's electric energy supply.
Detection of submicron scale cracks and other surface anomalies using positron emission tomography
Cowan, Thomas E.; Howell, Richard H.; Colmenares, Carlos A.
2004-02-17
Detection of submicron-scale cracks and other mechanical and chemical surface anomalies using PET. This surface technique has sufficient sensitivity to detect single voids or pits of sub-millimeter size and single cracks or fissures of millimeter-scale length, micrometer-scale depth, and nanometer-scale width. This technique can also be applied to detect surface regions of differing chemical reactivity. It may be utilized in a scanning or survey mode to simultaneously detect such mechanical or chemical features over large interior or exterior surface areas of parts as large as about 50 cm in diameter. The technique involves exposing a surface to short-lived radioactive gas for a time period, removing the excess gas to leave a partial monolayer, determining the location and shape of the cracks, voids, porous regions, etc., and calculating the width, depth, and length thereof. Detection of 0.01 mm deep cracks using a 3 mm detector resolution has been accomplished using this technique.
Trietsch, Jasper; van Steenkiste, Ben; Hobma, Sjoerd; Frericks, Arnoud; Grol, Richard; Metsemakers, Job; van der Weijden, Trudy
2014-12-01
A quality improvement strategy consisting of comparative feedback and peer review embedded in available local quality improvement collaboratives proved to be effective in changing the test-ordering behaviour of general practitioners. However, implementing this strategy was problematic. We aimed for large-scale implementation of an adapted strategy covering both test-ordering and prescribing performance. Because we failed to achieve large-scale implementation, the aim of this study was to describe and analyse the challenges of the transfer process. In a qualitative study, 19 regional health officers, pharmacists, laboratory specialists and general practitioners were interviewed within 6 months after the transfer period. The interviews were audiotaped, transcribed and independently coded by two of the authors. The codes were matched to the dimensions of the normalization process theory. The general idea of the strategy was widely supported, but generating the feedback was more complex than expected, and the need for external support after transfer of the strategy remained high because participants did not assume responsibility for the work and the distribution of resources that came with it. Evidence on effectiveness, a national infrastructure for these collaboratives and a generally positive attitude were not sufficient for normalization. Planning for the management of large databases, the assignment of responsibility for tasks and the distribution of resources should start as early as possible when complex quality improvement strategies are designed. Merely exploring the barriers and facilitators experienced in a preceding trial is not sufficient. Although multifaceted implementation strategies to change professional behaviour are attractive, their inherent complexity is also a pitfall for large-scale implementation.
Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances
Parker, V. Thomas
2015-01-01
Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by asking whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and require fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire-adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host.
Note on a modified return period scale for upper-truncated unbounded flood distributions
NASA Astrophysics Data System (ADS)
Bardsley, Earl
2017-01-01
Probability distributions unbounded to the right often give good fits to annual discharge maxima. However, all hydrological processes are in reality constrained by physical upper limits, though these are not necessarily well defined. A result of this contradiction is that, for sufficiently small exceedance probabilities, the unbounded distributions anticipate flood magnitudes that are impossibly large. This raises the question of whether displayed return period scales should, as is current practice, have some given number of years, such as 500 years, as the terminating rightmost tick-mark. This carries the implication that the scale might be extended indefinitely to the right with a corresponding indefinite increase in flood magnitude. An alternative, suggested here, is to introduce a sufficiently high upper truncation point to the flood distribution and modify the return period scale accordingly. The rightmost tick-mark then becomes infinity, corresponding to the upper truncation point discharge. The truncation point is likely to be set as being above any physical upper bound, and the return period scale will change only slightly over all practical return periods of operational interest. The rightmost infinity tick-mark is therefore proposed, not as an operational measure, but rather to signal in flood plots that the return period scale does not extend indefinitely to the right.
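The modification described above can be sketched with a Gumbel distribution as the unbounded parent: truncating at x_trunc rescales the CDF to F*(x) = F(x)/F(x_trunc), so the return period 1/(1 − F*) diverges as discharge approaches the truncation point. The location, scale, and truncation values below are illustrative numbers, not fitted to any record:

```python
import math

def gumbel_cdf(x, loc, scale):
    """CDF of the Gumbel (EV1) distribution."""
    return math.exp(-math.exp(-(x - loc) / scale))

def return_period(x, loc, scale):
    """Conventional return period for the unbounded distribution."""
    return 1.0 / (1.0 - gumbel_cdf(x, loc, scale))

def truncated_return_period(x, loc, scale, x_trunc):
    """Return period after upper truncation at x_trunc:
    F*(x) = F(x) / F(x_trunc), so T*(x) -> infinity as x -> x_trunc."""
    f_star = gumbel_cdf(x, loc, scale) / gumbel_cdf(x_trunc, loc, scale)
    return 1.0 / (1.0 - f_star)

# With the truncation point set well above magnitudes of interest, the two
# scales nearly coincide over practical return periods...
print(return_period(350.0, 100.0, 50.0))                    # ~149 years
print(truncated_return_period(350.0, 100.0, 50.0, 600.0))   # ~150 years
# ...but the truncated scale diverges as discharge approaches x_trunc.
print(truncated_return_period(599.0, 100.0, 50.0, 600.0))   # ~10^6 years
```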
Inflationary tensor perturbations after BICEP2.
Caligiuri, Jerod; Kosowsky, Arthur
2014-05-16
The measurement of B-mode polarization of the cosmic microwave background at large angular scales by the BICEP experiment suggests a stochastic gravitational wave background from early-Universe inflation with a surprisingly large amplitude. The power spectrum of these tensor perturbations can be probed both with further measurements of the microwave background polarization at smaller scales and also directly via interferometry in space. We show that sufficiently sensitive high-resolution B-mode measurements will ultimately have the ability to test the inflationary consistency relation between the amplitude and spectrum of the tensor perturbations, confirming their inflationary origin. Additionally, a precise B-mode measurement of the tensor spectrum will predict the tensor amplitude on solar system scales to 20% accuracy for an exact power-law tensor spectrum, so a direct detection will then measure the running of the tensor spectral index to high precision.
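The consistency relation referred to above is, for single-field slow-roll inflation, n_t = −r/8, linking the tensor-to-scalar ratio r to the tensor spectral tilt. The sketch below extrapolates a power-law tensor spectrum over many decades in scale; the pivot scale, amplitude, and scale ratio are illustrative stand-ins, not the values used in the paper:

```python
def tensor_tilt(r):
    """Tensor spectral index from the slow-roll consistency relation n_t = -r/8."""
    return -r / 8.0

def tensor_power(k, k_pivot, A_t, r):
    """Exact power-law tensor spectrum P_t(k) = A_t * (k / k_pivot)**n_t."""
    return A_t * (k / k_pivot) ** tensor_tilt(r)

r = 0.2                      # tensor-to-scalar ratio of order that suggested by BICEP2
print(tensor_tilt(r))        # -0.025
# Even ~18 decades of extrapolation toward much smaller scales suppresses
# an exact power-law spectrum only mildly (10**(-0.45) of pivot amplitude):
print(tensor_power(1e18, 1.0, 1.0, r))  # ~0.355
```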
NASA Astrophysics Data System (ADS)
Ghosh, Sayantan; Manimaran, P.; Panigrahi, Prasanta K.
2011-11-01
We make use of the wavelet transform to study the multi-scale, self-similar behavior, and deviations thereof, in the stock prices of large companies belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that wavelets belonging to the Daubechies (Db) basis enable one to isolate local polynomial trends of different degrees plays the key role in isolating fluctuations at different scales. One of the primary motivations of this work is to study the emergence of the k^-3 behavior [X. Gabaix, P. Gopikrishnan, V. Plerou, H. Stanley, A theory of power law distributions in financial market fluctuations, Nature 423 (2003) 267-270] of the fluctuations starting with high frequency fluctuations. We make use of the Db4 and Db6 basis sets to respectively isolate local linear and quadratic trends at different scales in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat-tailed non-Gaussian behavior and unstable periodic modulations at finer scales, from which the characteristic k^-3 power law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
Power-law versus log-law in wall-bounded turbulence: A large-eddy simulation perspective
NASA Astrophysics Data System (ADS)
Cheng, W.; Samtaney, R.
2014-01-01
The debate whether the mean streamwise velocity in wall-bounded turbulent flows obeys a log-law or a power-law scaling originated over two decades ago, and continues to ferment in recent years. As experiments and direct numerical simulation cannot provide sufficient clues, in this study we present an insight into this debate from a large-eddy simulation (LES) viewpoint. The LES organically combines state-of-the-art models (the stretched-vortex model and inflow rescaling method) with a virtual-wall model derived under different scaling law assumptions (the log-law or the power-law by George and Castillo ["Zero-pressure-gradient turbulent boundary layer," Appl. Mech. Rev. 50, 689 (1997)]). Comparisons of LES results for Reθ ranging from 10^5 to 10^11 for zero-pressure-gradient turbulent boundary layer flows are carried out for the mean streamwise velocity, its gradient, and its scaled gradient. Our results provide strong evidence that for both sets of modeling assumptions (log law or power law), the turbulence gravitates naturally towards the log-law scaling at extremely large Reynolds numbers.
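The two candidate scalings at issue can be written side by side in wall units. The constants below (von Kármán constant κ = 0.41, B = 5.0, and a classical 1/7-power fit) are typical textbook values, not the calibrated constants of either the paper or the George-Castillo theory:

```python
import numpy as np

def log_law(y_plus, kappa=0.41, B=5.0):
    """Logarithmic law of the wall: u+ = (1/kappa) ln(y+) + B."""
    return np.log(y_plus) / kappa + B

def power_law(y_plus, C=8.3, gamma=1.0 / 7.0):
    """Classical power-law alternative: u+ = C * (y+)**gamma."""
    return C * y_plus**gamma

# The two forms nearly coincide at moderate y+ but separate as y+ grows,
# which is why very high Reynolds numbers are needed to discriminate.
for yp in np.logspace(2, 4, 3):
    print(f"y+={yp:8.0f}  log-law={log_law(yp):6.2f}  power-law={power_law(yp):6.2f}")
```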
NASA Technical Reports Server (NTRS)
Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.
1987-01-01
Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
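The "universal dependence on threshold density" for random-phase fields has a well-known closed form: the genus curve is proportional to (1 − ν²)exp(−ν²/2), where ν is the threshold in standard deviations from the mean. A short sketch follows; the amplitude is left as a free factor since it depends on the power spectrum and smoothing length:

```python
import numpy as np

def genus_curve(nu, A=1.0):
    """Universal genus-per-threshold curve for a Gaussian (random-phase)
    density field; A is a spectrum-dependent amplitude."""
    return A * (1.0 - nu**2) * np.exp(-nu**2 / 2.0)

# Positive genus (spongelike topology) near the median threshold,
# negative genus (isolated clusters or voids) at large |nu|; the
# symmetry about nu = 0 is a hallmark of random-phase initial conditions.
nu = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(genus_curve(nu))
```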
He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe
2007-01-01
FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is often, however, insufficient for the testing of the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card-bound DNA produces sufficient material for the determination of thousands of SNP genotypes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Duane L; Pouquet, Dr. Annick; Mininni, Dr. Pablo D.
2015-01-01
We report results on rotating stratified turbulence in the absence of forcing, with large-scale isotropic initial conditions, using direct numerical simulations computed on grids of up to 4096^3 points. The Reynolds and Froude numbers are respectively equal to Re = 5.4 × 10^4 and Fr = 0.0242. The ratio of the Brunt-Väisälä to the inertial wave frequency, N/f, is taken to be equal to 5, a choice appropriate to model the dynamics of the southern abyssal ocean at mid latitudes. This gives a global buoyancy Reynolds number R_B = Re Fr^2 = 32, a value sufficient for some isotropy to be recovered in the small scales beyond the Ozmidov scale, but still moderate enough that the intermediate scales where waves are prevalent are well resolved. We concentrate on the large-scale dynamics and confirm that the Froude number based on a typical vertical length scale is of order unity, with strong gradients in the vertical. Two characteristic scales emerge from this computation, and are identified from sharp variations in the spectral distribution of either total energy or helicity. A spectral break is also observed at a scale at which the partition of energy between the kinetic and potential modes changes abruptly, and beyond which a Kolmogorov-like spectrum recovers. Large slanted layers are ubiquitous in the flow in the velocity and temperature fields, and a large-scale enhancement of energy is also observed, directly attributable to the effect of rotation.
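The buoyancy Reynolds number quoted above is a simple product of the two stated control parameters; a quick arithmetic check (values copied from the abstract):

```python
# Dimensionless parameters quoted in the abstract.
Re = 5.4e4    # Reynolds number
Fr = 0.0242   # Froude number

# Global buoyancy Reynolds number, R_B = Re * Fr^2.
R_B = Re * Fr**2
print(f"R_B = {R_B:.1f}")  # ~31.6, quoted as 32 in the text
```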
Large eddy simulation of fine water sprays: comparative analysis of two models and computer codes
NASA Astrophysics Data System (ADS)
Tsoy, A. S.; Snegirev, A. Yu.
2015-09-01
The model and the computer code FDS, albeit widely used in engineering practice to predict fire development, are not sufficiently validated for fire suppression by fine water sprays. In this work, the effect of the numerical resolution of the large-scale turbulent pulsations on the accuracy of predicted time-averaged spray parameters is evaluated. Comparison of the simulation results obtained with the two versions of the model and code, as well as of the predicted and measured radial distributions of the liquid flow rate, revealed the need to apply monotonic and yet sufficiently accurate discrete approximations of the convective terms. Failure to do so delays jet break-up, otherwise induced by large turbulent eddies, thereby excessively focusing the predicted flow around its axis. The effect of the pressure drop in the spray nozzle is also examined; its increase has been shown to cause only a weak increase of the evaporated fraction and vapor concentration despite the significant increase of flow velocity.
Baryon asymmetry from hypermagnetic helicity in dilaton hypercharge electromagnetism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bamba, Kazuharu
2006-12-15
The generation of the baryon asymmetry of the Universe from the hypermagnetic helicity, the physical interpretation of which is given in terms of hypermagnetic knots, is studied in inflationary cosmology, taking into account the breaking of the conformal invariance of hypercharge electromagnetic fields through both a coupling with the dilaton and with a pseudoscalar field. It is shown that, if the electroweak phase transition is strongly first order and the present amplitude of the generated magnetic fields on the horizon scale is sufficiently large, a baryon asymmetry with a sufficient magnitude to account for the observed baryon-to-entropy ratio can be generated.
Omega from the anisotropy of the redshift correlation function
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1993-01-01
Peculiar velocities distort the correlation function of galaxies observed in redshift space. In the large-scale, linear regime, the distortion takes a characteristic quadrupole plus hexadecapole form, with the amplitude of the distortion depending on the cosmological density parameter omega. Preliminary measurements are reported here of the harmonics of the correlation function in the CfA, SSRS, and IRAS 2 Jansky redshift surveys. The observed behavior of the harmonics agrees qualitatively with the predictions of linear theory on large scales in every survey. However, real anisotropy in the galaxy distribution induces large fluctuations in samples which do not yet probe a sufficiently fair volume of the Universe. In the CfA 14.5 sample in particular, the Great Wall induces a large negative quadrupole, which taken at face value implies an unrealistically large omega of about 20. The IRAS 2 Jy survey, which covers a substantially larger volume than the optical surveys and is less affected by fingers-of-god, yields a more reliable and believable value, omega = 0.5 (+0.5, -0.25).
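The linear-theory distortion referred to above is commonly phrased through the quadrupole-to-monopole ratio of the redshift-space power spectrum. The sketch below is an illustration of that relation, not the paper's estimator, and it assumes an unbiased tracer (b = 1), so that beta = omega**0.6:

```python
def quad_to_mono(beta):
    """Linear-theory quadrupole-to-monopole ratio of the redshift-space
    power spectrum: (4b/3 + 4b^2/7) / (1 + 2b/3 + b^2/5) with b = beta."""
    p2 = 4.0 * beta / 3.0 + 4.0 * beta**2 / 7.0
    p0 = 1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0
    return p2 / p0

# beta = omega**0.6 / bias; bias = 1 is an assumption for illustration.
omega = 0.5            # the value reported for the IRAS 2 Jy survey
beta = omega**0.6
print(f"beta = {beta:.3f}, P2/P0 = {quad_to_mono(beta):.3f}")
```

Because the ratio grows monotonically with beta over this range, a measured quadrupole amplitude translates directly into a constraint on omega.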
Universal scaling and nonlinearity of aggregate price impact in financial markets.
Patzelt, Felix; Bouchaud, Jean-Philippe
2018-01-01
How and why stock prices move is a centuries-old question still not answered conclusively. More recently, attention shifted to higher frequencies, where trades are processed piecewise across different time scales. Here we reveal that price impact has a universal nonlinear shape for trades aggregated on any intraday scale. Its shape varies little across instruments, but drastically different master curves are obtained for order-volume and -sign impact. The scaling is largely determined by the relevant Hurst exponents. We further show that extreme order-flow imbalance is not associated with large returns. To the contrary, it is observed when the price is pinned to a particular level. Prices move only when there is sufficient balance in the local order flow. In fact, the probability that a trade changes the midprice falls to zero with increasing (absolute) order-sign bias along an arc-shaped curve for all intraday scales. Our findings challenge the widespread assumption of linear aggregate impact. They imply that market dynamics on all intraday time scales are shaped by correlations and bilateral adaptation in the flows of liquidity provision and taking.
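The abstract ties the aggregate-impact scaling to the relevant Hurst exponents. As a minimal, self-contained illustration (none of this comes from the paper's data), H can be estimated from the variance scaling of increments, Var[x(t+tau) - x(t)] ~ tau^(2H), here applied to a plain random walk, for which H = 0.5:

```python
import math
import random

random.seed(7)  # deterministic demo
x = [0.0]
for _ in range(20000):
    x.append(x[-1] + random.choice((-1.0, 1.0)))  # uncorrelated walk, H = 0.5

def lag_var(series, tau):
    """Sample variance of increments at lag tau."""
    d = [series[i + tau] - series[i] for i in range(len(series) - tau)]
    m = sum(d) / len(d)
    return sum((v - m) ** 2 for v in d) / len(d)

tau1, tau2 = 10, 100
H = 0.5 * math.log(lag_var(x, tau2) / lag_var(x, tau1)) / math.log(tau2 / tau1)
print(f"estimated H = {H:.2f}")  # close to 0.5 for an uncorrelated series
```

A persistent series (H > 0.5) or an anti-persistent one (H < 0.5) would shift this estimate accordingly, which is the kind of dependence the master curves in the paper absorb.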
Temporal Gain Correction for X-Ray Calorimeter Spectrometers
NASA Technical Reports Server (NTRS)
Porter, F. S.; Chiao, M. P.; Eckart, M. E.; Fujimoto, R.; Ishisaki, Y.; Kelley, R. L.; Kilbourne, C. A.; Leutenegger, M. A.; McCammon, D.; Mitsuda, K.
2016-01-01
Calorimetric X-ray detectors are very sensitive to their environment. The boundary conditions can have a profound effect on the gain, including the heat sink temperature, the local radiation temperature, bias, and the temperature of the readout electronics. Any variation in the boundary conditions can cause temporal variations in the gain of the detector and compromise both the energy scale and the resolving power of the spectrometer. Most production X-ray calorimeter spectrometers, both on the ground and in space, have some means of tracking the gain as a function of time, often using a calibration spectral line. For small gain changes, a linear stretch correction is often sufficient. However, the detectors are intrinsically non-linear, and often the event analysis, i.e., shaping, optimal filters, etc., adds additional non-linearity. Thus, for large gain variations or when the best possible precision is required, a linear stretch correction is not sufficient. Here, we discuss a new correction technique based on non-linear interpolation of the energy-scale functions. Using Astro-H SXS calibration data, we demonstrate that the correction can recover the X-ray energy to better than 1 part in 10^4 over the entire spectral band to above 12 keV, even for large-scale gain variations. This method will be used to correct any temporal drift of the on-orbit per-pixel gain using on-board calibration sources for the SXS instrument on the Astro-H observatory.
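A minimal sketch of the idea, under invented calibrations (the quadratic energy-scale functions and all numbers below are stand-ins, not Astro-H values): rather than a linear stretch, a tracked calibration line sets an interpolation weight between two measured energy-scale functions, and that weight is then applied across the whole band:

```python
# Hypothetical pulse-height-to-energy calibrations at two gain states.
def e_scale_a(ph):
    return 1.00 * ph + 1e-6 * ph**2

def e_scale_b(ph):
    return 0.98 * ph + 1e-6 * ph**2

def interp_weight(ph_cal, e_cal):
    """Weight w that makes the mixed scale pass through the calibration line."""
    ea, eb = e_scale_a(ph_cal), e_scale_b(ph_cal)
    return (e_cal - ea) / (eb - ea)

def corrected_energy(ph, w):
    """Interpolate between the two energy-scale functions at weight w."""
    return (1.0 - w) * e_scale_a(ph) + w * e_scale_b(ph)

# A calibration line of known energy 5900 eV observed at pulse height 5880.
w = interp_weight(5880.0, 5900.0)
print(f"w = {w:.3f}")  # fractional gain state between calibrations A and B
```

By construction the mixed scale recovers the calibration energy exactly at the line, while at other pulse heights it inherits the non-linearity of the measured curves instead of a single stretch factor.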
DOE Office of Scientific and Technical Information (OSTI.GOV)
Capela, Fabio; Ramazanov, Sabir, E-mail: fc403@cam.ac.uk, E-mail: Sabir.Ramazanov@ulb.ac.be
At large scales and for sufficiently early times, dark matter is described as a pressureless perfect fluid (dust) non-interacting with Standard Model fields. These features are captured by a simple model with two scalars: a Lagrange multiplier and another playing the role of the velocity potential. That model arises naturally in some gravitational frameworks, e.g., the mimetic dark matter scenario. We consider an extension of the model by means of higher derivative terms, such that the dust solutions are preserved at the background level, but there is a non-zero sound speed at the linear level. We associate this Modified Dust with dark matter, and study the linear evolution of cosmological perturbations in that picture. The most prominent effect is the suppression of their power spectrum for sufficiently large cosmological momenta. This can be relevant in view of the problems that cold dark matter faces at sub-galactic scales, e.g., the missing satellites problem. At even shorter scales, however, perturbations of Modified Dust are enhanced compared to the predictions of more common particle dark matter scenarios. This is a peculiarity of their evolution in a radiation dominated background. We also briefly discuss clustering of Modified Dust. We write the system of equations in the Newtonian limit, and sketch the possible mechanism which could prevent the appearance of caustic singularities. The same mechanism may be relevant in light of the core-cusp problem.
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2013-01-01
Large-scale experiments that involve nested structures may assign treatment conditions either to subgroups such as classrooms or to individuals such as students within subgroups. Key aspects of the design of such experiments include knowledge of the variance structure in higher levels and the sample sizes necessary to reach sufficient power to…
ERIC Educational Resources Information Center
Greene, Jay; Loveless, Tom; MacLeod, W. Bentley; Nechyba, Thomas; Peterson, Paul; Rosenthal, Meredith; Whitehurst, Grover
2010-01-01
Choice is most frequently realized within the public sector using the mechanisms of residence, magnet schools, and open enrollment systems, whereas the voucher-like systems applauded by choice advocates and feared by opponents are extremely rare. Further, the charter sector is neither large enough nor sufficiently prepared to go to scale to…
The X-ray luminosity functions of Abell clusters from the Einstein Cluster Survey
NASA Technical Reports Server (NTRS)
Burg, R.; Giacconi, R.; Forman, W.; Jones, C.
1994-01-01
We have derived the present epoch X-ray luminosity function of northern Abell clusters using luminosities from the Einstein Cluster Survey. The sample is sufficiently large that we can determine the luminosity function for each richness class separately, with sufficient precision to study and compare the different luminosity functions. We find that, within each richness class, the range of X-ray luminosity is quite large and spans nearly a factor of 25. Characterizing the luminosity function for each richness class with a Schechter function, we find that the characteristic X-ray luminosity, L*, scales with richness class as L* ∝ N*^gamma, where N* is the corrected mean number of galaxies in a richness class, and the best-fitting exponent is gamma = 1.3 +/- 0.4. Finally, our analysis suggests that there is a lower limit to the X-ray luminosity of clusters, which is determined by the integrated emission of the cluster member galaxies, and this also scales with richness class. The present sample forms a baseline for testing cosmological evolution of Abell-like clusters when an appropriate high-redshift cluster sample becomes available.
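The quoted scaling of the characteristic luminosity with richness is easy to read off as a ratio between richness classes; a small worked example (the richness counts below are illustrative, not survey values):

```python
gamma = 1.3  # best-fitting exponent quoted above (+/- 0.4)

def lstar_ratio(n_star_1, n_star_2):
    """Ratio L*_2 / L*_1 implied by L* proportional to N*^gamma."""
    return (n_star_2 / n_star_1) ** gamma

# Doubling the corrected mean galaxy count multiplies L* by 2**1.3 ~ 2.46.
print(f"{lstar_ratio(50.0, 100.0):.2f}")
```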
HIV Topical Microbicides: Steer the Ship or Run Aground
Gross, Michael
2004-01-01
Six HIV candidate microbicides are scheduled to enter 6 large-scale effectiveness trials in the next year. The selection of products for testing and the design of this group of trials should be reconsidered to provide an answer to a key question now before the field: Does a sulfonated polyanion, delivered intravaginally as a gel, block HIV attachment to target cells with sufficient potency to protect women from sexually acquired HIV infection? Paradoxically, entering more candidates into more trials may confuse or compromise efforts to identify an effective product. Instead, a single trial of the most promising product(s) best serves the current candidates while also preserving resources needed to promptly advance innovative new protective concepts into future large-scale trials. PMID:15226123
Remote sensing applied to numerical modelling. [water resources pollution
NASA Technical Reports Server (NTRS)
Sengupta, S.; Lee, S. S.; Veziroglu, T. N.; Bland, R.
1975-01-01
Progress and remaining difficulties in the construction of predictive mathematical models of large bodies of water as ecosystems are reviewed. Surface temperature is at present the only variable that can be measured accurately and reliably by remote sensing techniques, but satellite infrared data are of sufficient resolution for macro-scale modeling of oceans and large lakes, and airborne radiometers are useful in meso-scale analysis (of lakes, bays, and thermal plumes). Finite-element and finite-difference techniques applied to the solution of relevant coupled time-dependent nonlinear partial differential equations are compared, and the specific problem of the Biscayne Bay and environs ecosystem is tackled in a finite-differences treatment using the rigid-lid model and a rigid-line grid system.
Liu, Yuqiong; Du, Qingyun; Wang, Qi; Yu, Huanyun; Liu, Jianfeng; Tian, Yu; Chang, Chunying; Lei, Jing
2017-07-01
The causation between the bioavailability of heavy metals and environmental factors is generally established from field experiments at local scales, and currently lacks sufficient evidence from large scales. Inferring causation between the bioavailability of heavy metals and environmental factors across large-scale regions is challenging, because the conventional correlation-based approaches used for causation assessment across large-scale regions can, at the expense of actual causation, yield spurious insights. In this study, a general approach framework, Intervention calculus when the directed acyclic graph (DAG) is absent (IDA) combined with the backdoor criterion (BC), was introduced to identify causation between the bioavailability of heavy metals and potential environmental factors across large-scale regions. We take the Pearl River Delta (PRD) in China as a case study. The causal structures and effects were identified based on the concentrations of heavy metals (Zn, As, Cu, Hg, Pb, Cr, Ni and Cd) in soil (0-20 cm depth) and vegetable (lettuce) samples and 40 environmental factors (soil properties, extractable heavy metals and weathering indices) in 94 samples across the PRD. Results show that the bioavailability of heavy metals (Cd, Zn, Cr, Ni and As) was causally influenced by soil properties and soil weathering factors, whereas no causal factor impacted the bioavailability of Cu, Hg and Pb. No latent factor was found between the bioavailability of heavy metals and environmental factors. The causation between the bioavailability of heavy metals and environmental factors found in field experiments is consistent with that at the large scale. IDA combined with the BC provides a powerful tool to identify causation between the bioavailability of heavy metals and environmental factors across large-scale regions. Causal inference in a large system with dynamic changes has great implications for system-based risk management.
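The backdoor adjustment that IDA relies on can be illustrated with a toy binary example (all counts below are invented, not PRD data): the effect of an exposure X on an outcome Y is estimated by averaging the conditional P(y | x, z) over the marginal of a confounder Z, i.e. P(y | do(x)) = sum_z P(y | x, z) P(z):

```python
# Hypothetical joint counts over (Z, X, Y), all binary.
counts = {
    (0, 0, 0): 40, (0, 0, 1): 10, (0, 1, 0): 15, (0, 1, 1): 35,
    (1, 0, 0): 10, (1, 0, 1): 15, (1, 1, 0): 5,  (1, 1, 1): 20,
}
total = sum(counts.values())

def p_z(z):
    """Marginal probability of the confounder Z."""
    return sum(c for (zz, _, _), c in counts.items() if zz == z) / total

def p_y_given_xz(y, x, z):
    """Conditional P(y | x, z) from the counts."""
    return counts[(z, x, y)] / (counts[(z, x, 0)] + counts[(z, x, 1)])

def p_y_do_x(y, x):
    """Backdoor adjustment: sum_z P(y | x, z) * P(z)."""
    return sum(p_y_given_xz(y, x, z) * p_z(z) for z in (0, 1))

effect = p_y_do_x(1, 1) - p_y_do_x(1, 0)
print(f"backdoor-adjusted effect: {effect:.3f}")
```

The adjusted difference deliberately differs from the naive (unadjusted) association whenever Z is unevenly distributed across exposure groups, which is exactly the spurious-correlation problem the abstract describes.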
What Determines Upscale Growth of Oceanic Convection into MCSs?
NASA Astrophysics Data System (ADS)
Zipser, E. J.
2017-12-01
Over tropical oceans, widely scattered convection of various depths may or may not grow upscale into mesoscale convective systems (MCSs). But what distinguishes the large-scale environment that favors such upscale growth from that favoring "unorganized", scattered convection? Is it some combination of large-scale low-level convergence and ascending motion, combined with sufficient instability? We recently put this to a test with ERA-I reanalysis data, with disappointing results. The "usual suspects" of total column water vapor, large-scale ascent, and CAPE may all be required to some extent, but their differences between large MCSs and scattered convection are small. The main positive results from this work (already published) demonstrate that the strength of convection is well correlated with the size and perhaps "organization" of convective features over tropical oceans, in contrast to tropical land, where strong convection is common for large or small convective features. So, important questions remain: Over tropical oceans, how should we define "organized" convection? By size of the precipitation area? And what environmental conditions lead to larger and better organized MCSs? Some recent attempts to answer these questions will be described, but good answers may require more data, and more insights.
Finite-size scaling above the upper critical dimension in Ising models with long-range interactions
NASA Astrophysics Data System (ADS)
Flores-Sola, Emilio J.; Berche, Bertrand; Kenna, Ralph; Weigel, Martin
2015-01-01
The correlation length plays a pivotal role in finite-size scaling and hyperscaling at continuous phase transitions. Below the upper critical dimension, where the correlation length is proportional to the system length, both finite-size scaling and hyperscaling take conventional forms. Above the upper critical dimension these forms break down and a new scaling scenario appears. Here we investigate this scaling behaviour by simulating one-dimensional Ising ferromagnets with long-range interactions. We show that the correlation length scales as a non-trivial power of the linear system size and investigate the scaling forms. For interactions of sufficiently long range, the disparity between the correlation length and the system length can be made arbitrarily large, while maintaining the new scaling scenarios. We also investigate the behavior of the correlation function above the upper critical dimension and the modifications imposed by the new scaling scenario onto the associated Fisher relation.
Natural snowfall reveals large-scale flow structures in the wake of a 2.5-MW wind turbine.
Hong, Jiarong; Toloui, Mostafa; Chamorro, Leonardo P; Guala, Michele; Howard, Kevin; Riley, Sean; Tucker, James; Sotiropoulos, Fotis
2014-06-24
To improve power production and structural reliability of wind turbines, there is a pressing need to understand how turbines interact with the atmospheric boundary layer. However, experimental techniques capable of quantifying or even qualitatively visualizing the large-scale turbulent flow structures around full-scale turbines do not exist today. Here we use snowflakes from a winter snowstorm as flow tracers to obtain velocity fields downwind of a 2.5-MW wind turbine in a sampling area of ~36 × 36 m^2. The spatial and temporal resolutions of the measurements are sufficiently high to quantify the evolution of blade-generated coherent motions, such as the tip and trailing sheet vortices, identify their instability mechanisms and correlate them with turbine operation, control and performance. Our experiment provides an unprecedented in situ characterization of flow structures around utility-scale turbines, and yields significant insights into the Reynolds number similarity issues presented in wind energy applications.
NASA Astrophysics Data System (ADS)
Lague, Marysa
Vegetation influences the atmosphere in complex and non-linear ways, such that large-scale changes in vegetation cover can drive changes in climate on both local and global scales. Large-scale land surface changes have been shown to introduce excess energy to one hemisphere, causing a shift in atmospheric circulation on a global scale. However, past work has not quantified how the climate response scales with the area of vegetation. Here, we systematically evaluate the response of climate to linearly increasing the area of forest cover over the northern mid-latitudes. We show that the magnitude of afforestation of the northern mid-latitudes determines the climate response in a non-linear fashion, and identify a threshold in vegetation-induced cloud feedbacks - a concept not previously addressed by large-scale vegetation manipulation experiments. Small increases in tree cover drive compensating cloud feedbacks, while latent heat fluxes reach a threshold after sufficiently large increases in tree cover, causing the troposphere to warm and dry, subsequently reducing cloud cover. Increased absorption of solar radiation at the surface is driven by both surface albedo changes and cloud feedbacks. We identify how vegetation-induced changes in cloud cover further feed back on changes in the global energy balance. We also show how atmospheric cross-equatorial energy transport changes as the area of afforestation is incrementally increased (a relationship which has not previously been demonstrated). This work demonstrates that while some climate effects (such as energy transport) of large-scale mid-latitude afforestation scale roughly linearly across a wide range of afforestation areas, others (such as the local partitioning of the surface energy budget) are non-linear, and sensitive to the particular magnitude of mid-latitude forcing.
Our results highlight the importance of considering both local and remote climate responses to large-scale vegetation change, and explore the scaling relationship between changes in vegetation cover and the resulting climate impacts.
Nunez, Paul L.; Srinivasan, Ramesh
2013-01-01
The brain is treated as a nested hierarchical complex system with substantial interactions across spatial scales. Local networks are pictured as embedded within global fields of synaptic action and action potentials. Global fields may act top-down on multiple networks, acting to bind remote networks. Because of scale-dependent properties, experimental electrophysiology requires both local and global models that match observational scales. Multiple local alpha rhythms are embedded in a global alpha rhythm. Global models are outlined in which cm-scale dynamic behaviors result largely from propagation delays in cortico-cortical axons and cortical background excitation level, controlled by neuromodulators on long time scales. The idealized global models ignore the bottom-up influences of local networks on global fields so as to employ relatively simple mathematics. The resulting models are transparently related to several EEG and steady state visually evoked potentials correlated with cognitive states, including estimates of neocortical coherence structure, traveling waves, and standing waves. The global models suggest that global oscillatory behavior of self-sustained (limit-cycle) modes lower than about 20 Hz may easily occur in neocortical/white matter systems provided: Background cortical excitability is sufficiently high; the strength of long cortico-cortical axon systems is sufficiently high; and the bottom-up influence of local networks on the global dynamic field is sufficiently weak. The global models provide "entry points" to more detailed studies of global top-down influences, including binding of weakly connected networks, modulation of gamma oscillations by theta or alpha rhythms, and the effects of white matter deficits. PMID:24505628
Scale Effects on Magnet Systems of Heliotron-Type Reactors
NASA Astrophysics Data System (ADS)
Imagawa, S.; Sagara, A.
2005-02-01
For power plants, heliotron-type reactors have attractive advantages, such as no current disruptions, no current drive, and wide space between helical coils for the maintenance of in-vessel components. However, one disadvantage is that the major radius has to be large enough to obtain a large Q-value or to provide sufficient space for blankets. Although a larger radius is considered to increase the construction cost, its influence has not yet been clearly understood. Scale effects on superconducting magnet systems have been estimated under the conditions of a constant energy confinement time and similar geometrical parameters. Since the necessary magnetic field becomes lower at a larger radius, the weight of the coil support grows with the major radius with an exponent of less than one half. The necessary major radius will be determined mainly by the blanket space. The appropriate major radius will be around 13 m for a reactor similar to the Large Helical Device (LHD).
The impact of Lyman-α radiative transfer on large-scale clustering in the Illustris simulation
NASA Astrophysics Data System (ADS)
Behrens, C.; Byrohl, C.; Saito, S.; Niemeyer, J. C.
2018-06-01
Context. Lyman-α emitters (LAEs) are a promising probe of the large-scale structure at high redshift, z ≳ 2. In particular, the Hobby-Eberly Telescope Dark Energy Experiment aims at observing LAEs at 1.9 < z < 3.5 to measure the baryon acoustic oscillation (BAO) scale and the redshift-space distortion (RSD). However, it has been pointed out that the complicated radiative transfer (RT) of the resonant Lyman-α emission line generates an anisotropic selection bias in the LAE clustering on large scales, s ≳ 10 Mpc. This effect could potentially induce a systematic error in the BAO and RSD measurements. Also, there exists a recent claim to have observational evidence of the effect in the Lyman-α intensity map, albeit statistically insignificant. Aims: We aim at quantifying the impact of the Lyman-α RT on the large-scale galaxy clustering in detail. For this purpose, we study the correlations between the large-scale environment and the ratio of an apparent Lyman-α luminosity to an intrinsic one, which we call the "observed fraction", at 2 < z < 6. Methods: We apply our Lyman-α RT code by post-processing the full Illustris simulations. We simply assume that the intrinsic luminosity of the Lyman-α emission is proportional to the star formation rate of galaxies in Illustris, yielding a sufficiently large sample of LAEs to measure the anisotropic selection bias. Results: We find little correlation between large-scale environment and the observed fraction induced by the RT, and hence a smaller anisotropic selection bias than has previously been claimed. We argue that the anisotropy was overestimated in previous work due to insufficient spatial resolution; it is important to keep the resolution such that it resolves the high-density region down to the scale of the interstellar medium, that is, 1 physical kpc. We also find that the correlation can be further enhanced by assumptions in modeling intrinsic Lyman-α emission.
How well can regional fluxes be derived from smaller-scale estimates?
NASA Technical Reports Server (NTRS)
Moore, Kathleen E.; Fitzjarrald, David R.; Ritter, John A.
1992-01-01
Regional surface fluxes are essential lower boundary conditions for large scale numerical weather and climate models and are the elements of global budgets of important trace gases. Surface properties affecting the exchange of heat, moisture, momentum and trace gases vary with length scales from one meter to hundreds of km. A classical difficulty is that fluxes have been measured directly only at points or along lines. The process of scaling up observations limited in space and/or time to represent larger areas was done by assigning properties to surface classes and combining estimated or calculated fluxes using an area weighted average. It is not clear that a simple area weighted average is sufficient to produce the large scale from the small scale, chiefly due to the effect of internal boundary layers, nor is it known how important the uncertainty is to large scale model outcomes. Simultaneous aircraft and tower data obtained in the relatively simple terrain of the western Alaska tundra were used to determine the extent to which surface type variation can be related to fluxes of heat, moisture, and other properties. Surface type was classified as lake or land with aircraft borne infrared thermometer, and flight level heat and moisture fluxes were related to surface type. The magnitude and variety of sampling errors inherent in eddy correlation flux estimation place limits on how well any flux can be known even in simple geometries.
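The area-weighted averaging the authors question can be written in one line; a trivial sketch with invented numbers (two surface classes, land and lake, as in the Alaska comparison, but the flux values and areas are illustrative only):

```python
def area_weighted_flux(fluxes, areas):
    """Combine per-class surface fluxes (e.g. W/m^2) weighted by area."""
    total = sum(areas)
    return sum(f * a for f, a in zip(fluxes, areas)) / total

# Two-class example: land heat flux 120 W/m^2 over 70 km^2,
# lake flux 40 W/m^2 over 30 km^2.
print(area_weighted_flux([120.0, 40.0], [70.0, 30.0]))  # 96.0
```

The paper's point is that this estimate ignores internal boundary layers at class transitions, so the true regional flux need not equal the weighted average even when the class map is exact.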
Inflation physics from the cosmic microwave background and large scale structure
NASA Astrophysics Data System (ADS)
Abazajian, K. N.; Arnold, K.; Austermann, J.; Benson, B. A.; Bischoff, C.; Bock, J.; Bond, J. R.; Borrill, J.; Buder, I.; Burke, D. L.; Calabrese, E.; Carlstrom, J. E.; Carvalho, C. S.; Chang, C. L.; Chiang, H. C.; Church, S.; Cooray, A.; Crawford, T. M.; Crill, B. P.; Dawson, K. S.; Das, S.; Devlin, M. J.; Dobbs, M.; Dodelson, S.; Doré, O.; Dunkley, J.; Feng, J. L.; Fraisse, A.; Gallicchio, J.; Giddings, S. B.; Green, D.; Halverson, N. W.; Hanany, S.; Hanson, D.; Hildebrandt, S. R.; Hincks, A.; Hlozek, R.; Holder, G.; Holzapfel, W. L.; Honscheid, K.; Horowitz, G.; Hu, W.; Hubmayr, J.; Irwin, K.; Jackson, M.; Jones, W. C.; Kallosh, R.; Kamionkowski, M.; Keating, B.; Keisler, R.; Kinney, W.; Knox, L.; Komatsu, E.; Kovac, J.; Kuo, C.-L.; Kusaka, A.; Lawrence, C.; Lee, A. T.; Leitch, E.; Linde, A.; Linder, E.; Lubin, P.; Maldacena, J.; Martinec, E.; McMahon, J.; Miller, A.; Mukhanov, V.; Newburgh, L.; Niemack, M. D.; Nguyen, H.; Nguyen, H. T.; Page, L.; Pryke, C.; Reichardt, C. L.; Ruhl, J. E.; Sehgal, N.; Seljak, U.; Senatore, L.; Sievers, J.; Silverstein, E.; Slosar, A.; Smith, K. M.; Spergel, D.; Staggs, S. T.; Stark, A.; Stompor, R.; Vieregg, A. G.; Wang, G.; Watson, S.; Wollack, E. J.; Wu, W. L. K.; Yoon, K. W.; Zahn, O.; Zaldarriaga, M.
2015-03-01
Fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments-the theory of cosmic inflation-and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1% of the sky to a depth of 1 uK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5 σ measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B-mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.
Inflation Physics from the Cosmic Microwave Background and Large Scale Structure
NASA Technical Reports Server (NTRS)
Abazajian, K. N.; Arnold, K.; Austermann, J.; Benson, B. A.; Bischoff, C.; Bock, J.; Bond, J. R.; Borrill, J.; Buder, I.; Burke, D. L.;
2013-01-01
Fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments, the theory of cosmic inflation, and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1% of the sky to a depth of 1 uK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5σ measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B-mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.
Inflation physics from the cosmic microwave background and large scale structure
Abazajian, K. N.; Arnold, K.; Austermann, J.; ...
2014-06-26
Here, fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments—the theory of cosmic inflation—and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1% of the sky to a depth of 1 uK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5σ measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B-mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.
Investigating a link between large and small-scale chaos features on Europa
NASA Astrophysics Data System (ADS)
Tognetti, L.; Rhoden, A.; Nelson, D. M.
2017-12-01
Chaos is one of the most recognizable, and studied, features on Europa's surface. Most models of chaos formation invoke liquid water at shallow depths within the ice shell; the liquid destabilizes the overlying ice layer, breaking it into mobile rafts and destroying pre-existing terrain. This class of model has been applied to both large-scale chaos like Conamara and small-scale features (i.e. microchaos), which are typically <10 km in diameter. Currently unknown, however, is whether both large-scale and small-scale features are produced together, e.g. through a network of smaller sills linked to a larger liquid water pocket. If microchaos features do form as satellites of large-scale chaos features, we would expect a drop-off in the number density of microchaos with increasing distance from the large chaos feature; the trend should not be observed in regions without large-scale chaos features. Here, we test the hypothesis that large chaos features create "satellite" systems of smaller chaos features. Either outcome will help us better understand the relationship between large-scale chaos and microchaos. We focus first on regions surrounding the large chaos features Conamara and Murias (e.g. the Mitten). We map all chaos features within 90,000 sq km of the main chaos feature and assign each one a ranking (High Confidence, Probable, or Low Confidence) based on the observed characteristics of each feature. In particular, we look for a distinct boundary, loss of preexisting terrain, the existence of rafts or blocks, and the overall smoothness of the feature. We also note features that are chaos-like but lack sufficient characteristics to be classified as chaos. We then apply the same criteria to map microchaos features in regions of similar area (~90,000 sq km) that lack large chaos features.
By plotting the distribution of microchaos with distance from the center point of the large chaos feature or the mapping region (for the cases without a large feature), we determine whether there is a distinct signature linking large-scale chaos features with nearby microchaos. We discuss the implications of these results on the process of chaos formation and the extent of liquid water within Europa's ice shell.
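The radial number-density test described above can be sketched as a simple annular binning: count microchaos features in distance bins from the large chaos feature's center and compare densities across bins. The feature coordinates and bin edges below are hypothetical, purely for illustration.

```python
import math

def radial_density(features, center, bin_edges):
    """Number density of features (count per unit area) in annular distance bins."""
    counts = [0] * (len(bin_edges) - 1)
    for (x, y) in features:
        d = math.hypot(x - center[0], y - center[1])
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= d < bin_edges[i + 1]:
                counts[i] += 1
                break
    # Normalize by annulus area so densities are comparable across bins.
    areas = [math.pi * (bin_edges[i + 1] ** 2 - bin_edges[i] ** 2)
             for i in range(len(bin_edges) - 1)]
    return [c / a for c, a in zip(counts, areas)]

# Hypothetical microchaos positions (km), clustered near a large chaos feature at the origin
features = [(5, 2), (8, -3), (12, 7), (20, 1), (35, -10), (60, 40), (90, -80)]
density = radial_density(features, (0, 0), [0, 25, 50, 100])
declining = all(density[i] >= density[i + 1] for i in range(len(density) - 1))
```

A declining density with distance would be consistent with the "satellite" hypothesis; a flat profile in regions without large features provides the control comparison.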
Ice-Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel
NASA Technical Reports Server (NTRS)
Broeren, Andy P.; Potapczuk, Mark G.; Lee, Sam; Malone, Adam M.; Paul, Bernard P., Jr.; Woodard, Brian S.
2016-01-01
Icing simulation tools and computational fluid dynamics codes are reaching levels of maturity such that they are being proposed by manufacturers for use in certification of aircraft for flight in icing conditions with increasingly less reliance on natural-icing flight testing and icing-wind-tunnel testing. Sufficient high-quality data to evaluate the performance of these tools is not currently available. The objective of this work was to generate a database of ice-accretion geometry that can be used for development and validation of icing simulation tools as well as for aerodynamic testing. Three large-scale swept wing models were built and tested at the NASA Glenn Icing Research Tunnel (IRT). The models represented the Inboard (20% semispan), Midspan (64% semispan) and Outboard stations (83% semispan) of a wing based upon a 65% scale version of the Common Research Model (CRM). The IRT models utilized a hybrid design that maintained the full-scale leading-edge geometry with a truncated afterbody and flap. The models were instrumented with surface pressure taps in order to acquire sufficient aerodynamic data to verify the hybrid model design capability to simulate the full-scale wing section. A series of ice-accretion tests were conducted over a range of total temperatures from -23.8 deg C to -1.4 deg C with all other conditions held constant. The results showed the changing ice-accretion morphology from rime ice at the colder temperatures to highly 3-D scallop ice in the range of -11.2 deg C to -6.3 deg C. Warmer temperatures generated highly 3-D ice accretion with glaze ice characteristics. The results indicated that the general scallop ice morphology was similar for all three models. Icing results were documented for limited parametric variations in angle of attack, drop size and cloud liquid-water content (LWC). The effect of velocity on ice accretion was documented for the Midspan and Outboard models for a limited number of test cases. 
The data suggest that there are morphological characteristics of glaze and scallop ice accretion on these swept-wing models that are dependent upon the velocity. This work has resulted in a large database of ice-accretion geometry on large-scale, swept-wing models.
On the Subgrid-Scale Modeling of Compressible Turbulence
NASA Technical Reports Server (NTRS)
Squires, Kyle; Zeman, Otto
1990-01-01
A new sub-grid scale model is presented for the large-eddy simulation of compressible turbulence. In the proposed model, compressibility contributions have been incorporated in the sub-grid scale eddy viscosity which, in the incompressible limit, reduce to a form originally proposed by Smagorinsky (1963). The model has been tested against a simple extension of the traditional Smagorinsky eddy viscosity model using simulations of decaying, compressible homogeneous turbulence. Simulation results show that the proposed model provides greater dissipation of the compressive modes of the resolved-scale velocity field than does the Smagorinsky eddy viscosity model. For an initial r.m.s. turbulence Mach number of 1.0, simulations performed using the Smagorinsky model become physically unrealizable (i.e., negative energies) because of the inability of the model to sufficiently dissipate fluctuations due to resolved scale velocity dilations. The proposed model is able to provide the necessary dissipation of this energy and maintain the realizability of the flow. Following Zeman (1990), turbulent shocklets are considered to dissipate energy independent of the Kolmogorov energy cascade. A possible parameterization of dissipation by turbulent shocklets for Large-Eddy Simulation is also presented.
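The incompressible-limit form that the proposed model reduces to can be sketched as follows: the Smagorinsky eddy viscosity is ν_t = (C_s Δ)² |S|, with |S| the resolved strain-rate magnitude. The constant C_s, the grid spacing, and the toy velocity-gradient tensor below are illustrative assumptions; the authors' compressibility correction is not reproduced here.

```python
import math

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    """Smagorinsky eddy viscosity nu_t = (C_s * delta)^2 * |S|,
    where S_ij = 0.5*(du_i/dx_j + du_j/dx_i) and |S| = sqrt(2 * S_ij * S_ij)."""
    n = len(grad_u)
    s = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(n)] for i in range(n)]
    s_mag = math.sqrt(2.0 * sum(s[i][j] ** 2 for i in range(n) for j in range(n)))
    return (c_s * delta) ** 2 * s_mag

# Toy resolved velocity-gradient tensor (1/s): a pure shear du/dy = 1,
# on a grid with filter width delta = 0.01 m
grad_u = [[0.0, 1.0, 0.0],
          [0.0, 0.0, 0.0],
          [0.0, 0.0, 0.0]]
nu_t = smagorinsky_nu_t(grad_u, delta=0.01)
```

For this pure-shear case |S| = 1, so ν_t = (0.17 × 0.01)². A compressible extension would add a term sensitive to the dilatation (the trace of grad_u), which vanishes here.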
Solving large scale structure in ten easy steps with COLA
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_solar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_solar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
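The frame-shift idea behind COLA can be illustrated with a toy one-dimensional analogy (not the cosmological equations): integrate only the residual around a known analytic solution of the dominant dynamics, so that coarse timesteps still recover the dominant behavior exactly. Everything below (the oscillator, the small coupling EPS standing in for small-scale forces, the step counts) is an illustrative assumption.

```python
import math

def leapfrog(accel, y, v, dt, n_steps):
    """Kick-drift-kick leapfrog integration of y'' = accel(t, y); returns final y."""
    t = 0.0
    for _ in range(n_steps):
        v += 0.5 * dt * accel(t, y)
        y += dt * v
        t += dt
        v += 0.5 * dt * accel(t, y)
    return y

EPS = 0.01          # weak anharmonic coupling (plays the role of small-scale forces)
T, N = 6.0, 20      # deliberately coarse integration: only 20 steps
dt = T / N

# Direct coarse integration of the full equation y'' = -y - EPS*y^3
y_direct = leapfrog(lambda t, y: -y - EPS * y**3, 1.0, 0.0, dt, N)

# "COLA-style" integration: move to a frame comoving with the known analytic
# solution y0 = cos(t) of the dominant (linear) part, and integrate only the
# residual d = y - y0 with the same coarse steps.
def accel_residual(t, d):
    y0 = math.cos(t)
    return -d - EPS * (y0 + d) ** 3   # the -y0 term cancels against y0'' exactly

d = leapfrog(accel_residual, 0.0, 0.0, dt, N)
y_cola = math.cos(T) + d

# Fine-step reference solution of the full equation
y_ref = leapfrog(lambda t, y: -y - EPS * y**3, 1.0, 0.0, T / 60000, 60000)

err_direct = abs(y_direct - y_ref)
err_cola = abs(y_cola - y_ref)
```

With the dominant dynamics handled analytically, the coarse integrator only has to track the small residual, so err_cola comes out well below err_direct; this is the same trade COLA makes by computing large-scale growth with LPT.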
Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing
2018-04-26
One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image, indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.
TomoMiner and TomoMinerCloud: A software platform for large-scale subtomogram structural analysis
Frazier, Zachary; Xu, Min; Alber, Frank
2017-01-01
Cryo-electron tomography (cryoET) captures the 3D electron density distribution of macromolecular complexes in close to native state. With the rapid advance of cryoET acquisition technologies, it is possible to generate large numbers (>100,000) of subtomograms, each containing a macromolecular complex. Often, these subtomograms represent a heterogeneous sample due to variations in the structure and composition of a complex in its in situ form, or because the particles are a mixture of different complexes. In this case subtomograms must be classified. However, classification of large numbers of subtomograms is a time-intensive task and often a limiting bottleneck. This paper introduces an open-source software platform, TomoMiner, for large-scale subtomogram classification, template matching, subtomogram averaging, and alignment. Its scalable and robust parallel processing allows efficient classification of tens to hundreds of thousands of subtomograms. Additionally, TomoMiner provides a pre-configured TomoMinerCloud computing service permitting users without sufficient computing resources instant access to TomoMiner's high-performance features. PMID:28552576
PLASMA TURBULENCE AND KINETIC INSTABILITIES AT ION SCALES IN THE EXPANDING SOLAR WIND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hellinger, Petr; Trávníček, Pavel M.; Matteini, Lorenzo
The relationship between a decaying strong turbulence and kinetic instabilities in a slowly expanding plasma is investigated using two-dimensional (2D) hybrid expanding box simulations. We impose an initial ambient magnetic field perpendicular to the simulation box, and we start with a spectrum of large-scale, linearly polarized, random-phase Alfvénic fluctuations that have energy equipartition between kinetic and magnetic fluctuations and vanishing correlation between the two fields. A turbulent cascade rapidly develops; magnetic field fluctuations exhibit a power-law spectrum at large scales and a steeper spectrum at ion scales. The turbulent cascade leads to an overall anisotropic proton heating: protons are heated in the perpendicular direction and, initially, also in the parallel direction. The imposed expansion leads to generation of a large parallel proton temperature anisotropy which is at later stages partly reduced by turbulence. The turbulent heating is not sufficient to overcome the expansion-driven perpendicular cooling and the system eventually drives the oblique firehose instability in a form of localized nonlinear wave packets which efficiently reduce the parallel temperature anisotropy. This work demonstrates that kinetic instabilities may coexist with strong plasma turbulence even in a constrained 2D regime.
Drivers and barriers to e-invoicing adoption in Greek large scale manufacturing industries
NASA Astrophysics Data System (ADS)
Marinagi, Catherine; Trivellas, Panagiotis; Reklitis, Panagiotis; Skourlas, Christos
2015-02-01
This paper attempts to investigate the drivers and barriers that large-scale Greek manufacturing industries experience in adopting electronic invoices (e-invoices), based on three case studies of organizations with an international presence in many countries. The study focuses on the drivers that may affect the increase of the adoption and use of e-invoicing, including customer demand for e-invoices and sufficient know-how and adoption of e-invoicing in organizations. In addition, the study reveals important barriers that prevent the expansion of e-invoicing, such as suppliers' reluctance to implement e-invoicing and IT infrastructure incompatibilities. Other issues examined by this study include the observed benefits of e-invoicing implementation and the financial priorities of the organizations that e-invoicing is assumed to support.
Large-scale Organized Magnetic Fields in O, B and A Stars
NASA Astrophysics Data System (ADS)
Mathys, G.
2009-06-01
The status of our current knowledge of magnetic fields in stars of spectral types ranging from early F to O is reviewed. Fields with large-scale organised structure have now been detected and measured throughout this range. These fields are consistent with the oblique rotator model. In early F to late B stars, their occurrence is restricted to the subgroup of the Ap stars, which have the best-studied fields among the early-type stars. The presence of fields with more complex topologies in other A and late B stars has been suggested, but is not firmly established. Magnetic fields have not yet been studied in a sufficient number of OB stars to establish whether they occur in all of these stars or only in some subset of them.
Predicting the propagation of concentration and saturation fronts in fixed-bed filters.
Callery, O; Healy, M G
2017-10-15
The phenomenon of adsorption is widely exploited across a range of industries to remove contaminants from gases and liquids. Much recent research has focused on identifying low-cost adsorbents with the potential to be used as alternatives to expensive industry standards like activated carbons. Evaluating these emerging adsorbents entails a considerable amount of labor-intensive and costly testing and analysis. This study proposes a simple, low-cost method to rapidly assess the suitability of novel media for use in large-scale adsorption filters. The filter media investigated in this study were low-cost adsorbents capable of removing dissolved phosphorus from solution, namely: i) aluminum drinking water treatment residual, and ii) crushed concrete. Data collected from multiple small-scale column tests were used to construct a model capable of describing and predicting the progression of adsorbent saturation and the associated effluent concentration breakthrough curves. This model was used to predict the performance of long-term, large-scale filter columns packed with the same media. The approach proved highly successful: just 24-36 h of experimental data from the small-scale column experiments provided sufficient information to predict the performance of the large-scale filters for up to three months. Copyright © 2017 Elsevier Ltd. All rights reserved.
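A hedged sketch of the general approach: fit a logistic (Thomas-type) breakthrough curve, C/C0 = 1/(1 + exp(k(τ − t))), to short small-column data, then scale the half-breakthrough time τ to a larger bed. The rate constant, the data points, and the 4x scaling factor are illustrative assumptions, not the authors' fitted model.

```python
import math

def breakthrough(t, k, tau):
    """Logistic (Thomas-type) breakthrough curve: effluent/influent ratio C/C0."""
    return 1.0 / (1.0 + math.exp(k * (tau - t)))

def fit_tau(times, ratios, k):
    """Grid-search the half-breakthrough time tau minimizing squared error."""
    best_tau, best_err = None, float("inf")
    for tau in [x * 0.5 for x in range(1, 400)]:
        err = sum((breakthrough(t, k, tau) - r) ** 2 for t, r in zip(times, ratios))
        if err < best_err:
            best_tau, best_err = tau, err
    return best_tau

# Hypothetical small-column data: (hours, C/C0)
times = [6, 12, 18, 24, 30, 36]
ratios = [0.05, 0.12, 0.27, 0.50, 0.73, 0.88]
k = 0.15                        # assumed rate constant, 1/h
tau_small = fit_tau(times, ratios, k)

# Under ideal scaling, tau grows with adsorbent mass per unit flow; a column
# with 4x the media (hypothetical) reaches half-breakthrough near 4*tau.
tau_large = 4 * tau_small
```

The fitted τ is the time at which effluent reaches half the influent concentration; extrapolating it to a larger bed is the step that lets a few days of small-column data predict months of full-scale operation.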
Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.
2002-01-01
Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and from reducing the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.
Modeling CMB lensing cross correlations with CLEFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modi, Chirag; White, Martin; Vlah, Zvonimir, E-mail: modichirag@berkeley.edu, E-mail: mwhite@berkeley.edu, E-mail: zvlah@stanford.edu
2017-08-01
A new generation of surveys will soon map large fractions of sky to ever greater depths and their science goals can be enhanced by exploiting cross correlations between them. In this paper we study cross correlations between the lensing of the CMB and biased tracers of large-scale structure at high z. We motivate the need for more sophisticated bias models for modeling increasingly biased tracers at these redshifts and propose the use of perturbation theories, specifically Convolution Lagrangian Effective Field Theory (CLEFT). Since such signals reside at large scales and redshifts, they can be well described by perturbative approaches. We compare our model with the current approach of using scale-independent bias coupled with fitting functions for non-linear matter power spectra, showing that the latter will not be sufficient for upcoming surveys. We illustrate our ideas by estimating σ_8 from the auto- and cross-spectra of mock surveys, finding that CLEFT returns accurate and unbiased results at high z. We discuss uncertainties due to the redshift distribution of the tracers, and several avenues for future development.
Experiment-scale molecular simulation study of liquid crystal thin films
NASA Astrophysics Data System (ADS)
Nguyen, Trung Dac; Carrillo, Jan-Michael Y.; Matheson, Michael A.; Brown, W. Michael
2014-03-01
Supercomputers have now reached a performance level adequate for studying thin films with molecular detail at the relevant scales. By exploiting the power of GPU accelerators on Titan, we have been able to perform simulations of characteristic liquid crystal films that provide remarkable qualitative agreement with experimental images. We have demonstrated that key features of spinodal instability can only be observed with sufficiently large system sizes, which were not accessible with previous simulation studies. Our study emphasizes the capability and significance of petascale simulations in providing molecular-level insights in thin film systems as well as other interfacial phenomena.
Shape Memory Alloys for Vibration Isolation and Damping of Large-Scale Space Structures
2010-08-04
…Porto, Portugal (2007). Figure 24 – Comparison of martensitic SMA with steel in sine upsweep. 3.2.2.4 Dwell Test Comparison with Sine Sweep Results… International Conference on Experimental Vibration Analysis for Civil Engineering Structures (EVACES), Porto, Portugal (2007). Lammering, Rolf… a unique jump in amplitude during a sine sweep if sufficient pre-stretch is applied. These results were significant, but investigation of more…
Resource Provisioning in Large-Scale Self-Organizing Distributed Systems
2012-06-01
…using the Provisioning Norm (α=0.99999) and the modified Kullback-Leibler. Figure 15 is the plot of how many services and nodes each method included in… "Problem," European Journal of Operational Research, vol. 174, issue 1, pp. 54-68, 2006. [64] S. Kullback, R. A. Leibler, "On Information and Sufficiency"… Description Language; KB Kilobytes; KL Kullback-Leibler; KLD Kullback-Leibler Distance; LRU Least Recently Used; Mb…
Uncorrelated Encounter Model of the National Airspace System, Version 2.0
2013-08-19
…can exist to certify avoidance systems for operational use. Evaluations typically include flight tests, operational impact studies, and simulation of… appropriate for large-scale air traffic impact studies, for example, examination of sector loading or conflict rates. The focus here includes two types of… between two IFR aircraft in oceanic airspace. The reason for this is that one cannot observe encounters of sufficient fidelity in the available data
Bimler, David; Kirkland, John; Pichler, Shaun
2004-02-01
The structure of color perception can be examined by collecting judgments about color dissimilarities. In the procedure used here, stimuli are presented three at a time on a computer monitor and the spontaneous grouping of most-similar stimuli into gestalts provides the dissimilarity comparisons. Analysis with multidimensional scaling allows such judgments to be pooled from a number of observers without obscuring the variations among them. The anomalous perceptions of color-deficient observers produce comparisons that are represented well by a geometric model of compressed individual color spaces, with different forms of deficiency distinguished by different directions of compression. The geometrical model is also capable of accommodating the normal spectrum of variation, so that there is greater variation in compression parameters between tests on normal subjects than in those between repeated tests on individual subjects. The method is sufficiently sensitive and the variations sufficiently large that they are not obscured by the use of a range of monitors, even under somewhat loosely controlled conditions.
Lee, Yi-Hsuan; von Davier, Alina A
2013-07-01
Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
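One of the quality-control tools mentioned above, a one-sided CUSUM chart for detecting an abrupt upward shift in mean scale scores, can be sketched as follows. The reference value k, decision interval h, and the score series are illustrative assumptions, not the paper's operational settings.

```python
def cusum(scores, target, k=0.5, h=2.0):
    """One-sided CUSUM chart: accumulate the excess of each observation over
    target + k; signal at the index where the cumulative sum exceeds h.
    Returns that index, or -1 if no shift is signaled."""
    s = 0.0
    for i, x in enumerate(scores):
        s = max(0.0, s + (x - target) - k)
        if s > h:
            return i
    return -1

# Hypothetical mean scale scores (standardized units) for 12 administrations;
# a shift of roughly +1 unit begins at administration index 6.
scores = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2, 1.1, 0.9, 1.2, 1.0, 1.3, 1.1]
signal_at = cusum(scores, target=0.0)
```

The chart tolerates customary variation (the pre-shift values never accumulate) but flags the sustained shift within a few administrations of its onset, which is the "timely" detection the traditional equating-based checks cannot provide.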
Integrating scales of seagrass monitoring to meet conservation needs
Neckles, Hilary A.; Kopp, Blaine S.; Peterson, Bradley J.; Pooler, Penelope S.
2012-01-01
We evaluated a hierarchical framework for seagrass monitoring in two estuaries in the northeastern USA: Little Pleasant Bay, Massachusetts, and Great South Bay/Moriches Bay, New York. This approach includes three tiers of monitoring that are integrated across spatial scales and sampling intensities. We identified monitoring attributes for determining attainment of conservation objectives to protect seagrass ecosystems from estuarine nutrient enrichment. Existing mapping programs provided large-scale information on seagrass distribution and bed sizes (tier 1 monitoring). We supplemented this with bay-wide, quadrat-based assessments of seagrass percent cover and canopy height at permanent sampling stations following a spatially distributed random design (tier 2 monitoring). Resampling simulations showed that four observations per station were sufficient to minimize bias in estimating mean percent cover on a bay-wide scale, and sample sizes of 55 stations in a 624-ha system and 198 stations in a 9,220-ha system were sufficient to detect absolute temporal increases in seagrass abundance from 25% to 49% cover and from 4% to 12% cover, respectively. We made high-resolution measurements of seagrass condition (percent cover, canopy height, total and reproductive shoot density, biomass, and seagrass depth limit) at a representative index site in each system (tier 3 monitoring). Tier 3 data helped explain system-wide changes. Our results suggest tiered monitoring as an efficient and feasible way to detect and predict changes in seagrass systems relative to multi-scale conservation objectives.
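The sample-size reasoning behind the tier 2 design can be sketched with a small Monte Carlo power calculation: simulate repeated surveys of n stations before and after a change in mean percent cover and count how often a simple test detects it. The station-level standard deviation, the normality assumption, and the z-test itself are illustrative assumptions; the authors used resampling of their actual field data.

```python
import random

def detect_power(n_stations, mean_before, mean_after, sd=25.0,
                 n_sims=500, z_crit=1.96):
    """Monte Carlo power: fraction of simulated resurveys in which a
    two-sample z-test detects the change in mean percent cover."""
    rng = random.Random(42)  # seeded for reproducibility
    hits = 0
    for _ in range(n_sims):
        before = [rng.gauss(mean_before, sd) for _ in range(n_stations)]
        after = [rng.gauss(mean_after, sd) for _ in range(n_stations)]
        diff = sum(after) / n_stations - sum(before) / n_stations
        se = (2 * sd * sd / n_stations) ** 0.5
        if abs(diff) / se > z_crit:
            hits += 1
    return hits / n_sims

power_55 = detect_power(55, 25.0, 49.0)   # 24-point change, 55 stations
power_10 = detect_power(10, 25.0, 49.0)   # same change, far fewer stations
```

Under these toy assumptions, 55 stations detect a 25%-to-49% change essentially every time, while 10 stations miss it a large fraction of the time, illustrating why the required station count scales with system size and the magnitude of change to be detected.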
Magnetic Fields Recorded by Chondrules Formed in Nebular Shocks
NASA Astrophysics Data System (ADS)
Mai, Chuhong; Desch, Steven J.; Boley, Aaron C.; Weiss, Benjamin P.
2018-04-01
Recent laboratory efforts have constrained the remanent magnetizations of chondrules and the magnetic field strengths to which the chondrules were exposed as they cooled below their Curie points. An outstanding question is whether the inferred paleofields represent the background magnetic field of the solar nebula or were unique to the chondrule-forming environment. We investigate the amplification of the magnetic field above background values for two proposed chondrule formation mechanisms, large-scale nebular shocks and planetary bow shocks. Behind large-scale shocks, the magnetic field parallel to the shock front is amplified by factors of ∼10–30, regardless of the magnetic diffusivity. Therefore, chondrules melted in these shocks probably recorded an amplified magnetic field. Behind planetary bow shocks, the field amplification is sensitive to the magnetic diffusivity. We compute the gas properties behind a bow shock around a 3000 km radius planetary embryo, with and without atmospheres, using hydrodynamics models. We calculate the ionization state of the hot, shocked gas, including thermionic emission from dust, thermal ionization of gas-phase potassium atoms, and the magnetic diffusivity due to Ohmic dissipation and ambipolar diffusion. We find that the diffusivity is sufficiently large that magnetic fields have already relaxed to background values in the shock downstream where chondrules acquire magnetizations, and that these locations are sufficiently far from the planetary embryos that chondrules should not have recorded a significant putative dynamo field generated on these bodies. We conclude that, if melted in planetary bow shocks, chondrules probably recorded the background nebular field.
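The field amplification behind a shock can be sketched from ideal-MHD flux freezing: the component of the magnetic field parallel to the shock front scales with the gas compression ratio. The factor-of-5 post-shock cooling compression below is an assumed illustrative value, not a result from the paper; it simply shows how total amplification can reach the ~10-30x range quoted above.

```python
def amplified_parallel_field(b_upstream, compression_ratio):
    """Under ideal-MHD flux freezing, the field component parallel to the
    shock front is amplified by the gas compression ratio."""
    return b_upstream * compression_ratio

gamma = 5.0 / 3.0
# A strong adiabatic shock compresses the gas by (gamma + 1) / (gamma - 1) = 4.
strong_shock_ratio = (gamma + 1) / (gamma - 1)
b_shock = amplified_parallel_field(1.0, strong_shock_ratio)

# Hypothetical additional post-shock cooling compression of 5x, pushing the
# total amplification over the upstream value to ~20x.
b_cooled = amplified_parallel_field(b_shock, 5.0)
```

Whether a chondrule records this amplified field or the relaxed background value then depends on how quickly the magnetic diffusivity lets the field decay relative to where the chondrule cools through its Curie point, which is the comparison the paper carries out.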
NASA Astrophysics Data System (ADS)
Xu, K.; Sühring, M.; Metzger, S.; Desai, A. R.
2017-12-01
Most eddy covariance (EC) flux towers suffer from footprint bias. The footprint not only varies rapidly in time, but is also smaller than the resolution of most Earth system models, leading to a systematic scale mismatch in model-data comparison. Previous studies have suggested this problem can be mitigated (1) with multiple towers, (2) by building a taller tower with a larger flux footprint, and (3) by applying advanced scaling methods. Here we ask: (1) How many flux towers are needed to sufficiently sample the flux mean and variation across an Earth system model domain? (2) How tall is tall enough for a single tower to represent the Earth system model domain? (3) Can we reduce the requirements derived from the first two questions with advanced scaling methods? We test these questions with output from large eddy simulations (LES) and application of the environmental response function (ERF) upscaling method. PALM LES (Maronga et al. 2015) was set up over a domain of 12 km x 16 km x 1.8 km at 7 m spatial resolution and produced 5 hours of output at a time step of 0.3 s. The surface Bowen ratio alternated between 0.2 and 1 among a series of 3 km wide stripe-like surface patches, with the horizontal wind perpendicular to the surface heterogeneity. A total of 384 virtual towers were arranged on a regular grid across the LES domain, recording EC observations at 18 vertical levels. We use increasing height of a virtual flux tower and increasing numbers of virtual flux towers in the domain to compute energy fluxes. Initial results show a large (>25) number of towers is needed to sufficiently sample the mean domain energy flux. When the ERF upscaling method was applied to the virtual towers in the LES environment, we were able to map fluxes over the domain to within 20% precision with a significantly smaller number of towers. This was achieved by relating sub-hourly turbulent fluxes to meteorological forcings and surface properties.
These results demonstrate how advanced scaling techniques can decrease the number of towers, and thus the experimental expense, required for domain-scaling over a heterogeneous surface.
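The tower-sampling question can be illustrated with a toy version of the striped surface. This is a sketch only: the flux magnitudes, grid resolution, and noise are hypothetical stand-ins for the LES output, and towers are placed randomly rather than on the study's regular grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized surface sensible-heat-flux field over a 12 km x 16 km domain
# with 3 km wide stripes (flux values in W m^-2 are hypothetical).
nx, ny = 120, 160                       # 100 m grid spacing
x = np.arange(nx) * 0.1                 # km
stripe = np.where((x // 3).astype(int) % 2 == 0, 60.0, 220.0)
H = np.tile(stripe[:, None], (1, ny)) + rng.normal(0, 20, (nx, ny))

domain_mean = H.mean()

def tower_error(n_towers, n_trials=500):
    """Mean absolute error of the domain-mean flux estimated
    from n_towers randomly placed virtual towers."""
    errs = []
    for _ in range(n_trials):
        ix = rng.integers(0, nx, n_towers)
        iy = rng.integers(0, ny, n_towers)
        errs.append(abs(H[ix, iy].mean() - domain_mean))
    return float(np.mean(errs))

for n in (1, 5, 25, 100):
    print(f"{n:4d} towers: mean |error| = {tower_error(n):.1f} W m^-2")
```

The sampling error falls roughly as one over the square root of the tower count, which is why a naive (non-ERF) estimate of the domain mean needs many towers over a strongly heterogeneous surface.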
An Eulerian time filtering technique to study large-scale transient flow phenomena
NASA Astrophysics Data System (ADS)
Vanierschot, Maarten; Persoons, Tim; van den Bulck, Eric
2009-10-01
Unsteady fluctuating velocity fields can contain large-scale periodic motions with frequencies well separated from those of turbulence. Examples are the wake behind a cylinder or the precessing vortex core in a swirling jet. These turbulent flow fields contain large-scale, low-frequency oscillations that are obscured by turbulence, making them difficult to identify. In this paper, we present an Eulerian time filtering (ETF) technique to extract the large-scale motions from unsteady, statistically non-stationary velocity fields or flow fields with multiple phenomena that have sufficiently separated spectral content. The ETF method is based on non-causal time filtering of the velocity records in each point of the flow field. It is shown that the ETF technique gives good results, similar to the ones obtained by the phase-averaging method. In this paper, not only the influence of the temporal filter is checked, but also parameters such as the cut-off frequency and sampling frequency of the data are investigated. The technique is validated on a selected set of time-resolved stereoscopic particle image velocimetry measurements, such as the initial region of an annular jet and the transition between flow patterns in an annular jet. The major advantage of the ETF method in the extraction of large scales is that it is computationally less expensive and requires less measurement time compared to other extraction methods. The technique is therefore suitable in the startup phase of an experiment or in a measurement campaign where several experiments are needed, such as parametric studies.
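The core of the ETF idea, non-causal (zero-phase) low-pass filtering applied to each velocity record, can be sketched with SciPy on a synthetic signal. The frequencies, noise level, and cut-off below are illustrative assumptions, not values from the paper's PIV data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                        # sampling frequency [Hz]
t = np.arange(0, 10, 1 / fs)

# Synthetic velocity record at one point: a 2 Hz large-scale oscillation
# buried in broadband "turbulence" noise (illustrative only).
rng = np.random.default_rng(1)
large_scale = np.sin(2 * np.pi * 2.0 * t)
u = large_scale + rng.normal(0.0, 1.0, t.size)

# Non-causal low-pass filter: a forward-backward (zero-phase) Butterworth
# pass, applied pointwise to each velocity record in the field.
b, a = butter(4, 10.0 / (fs / 2), btype="low")   # 10 Hz cut-off
u_filtered = filtfilt(b, a, u)

corr = np.corrcoef(u_filtered, large_scale)[0, 1]
print(f"correlation with true large-scale motion: {corr:.3f}")
```

The forward-backward pass is what makes the filter non-causal and phase-preserving; a causal filter would shift the recovered oscillation in time, distorting the extracted large-scale motion.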
Solving large scale structure in ten easy steps with COLA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
NASA Astrophysics Data System (ADS)
Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.
2018-01-01
This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale, physically-based catchment models, use of such detailed models for the 1.8 million km² Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale, physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that E-HYPE with this upscaling methodology can simulate, to the correct order of magnitude, the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.
Self-interacting inelastic dark matter: a viable solution to the small scale structure problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blennow, Mattias; Clementz, Stefan; Herrero-Garcia, Juan, E-mail: emb@kth.se, E-mail: scl@kth.se, E-mail: juan.herrero-garcia@adelaide.edu.au
2017-03-01
Self-interacting dark matter has been proposed as a solution to the small-scale structure problems, such as the observed flat cores in dwarf and low surface brightness galaxies. If scattering takes place through light mediators, the scattering cross section relevant to solve these problems may fall into the non-perturbative regime, leading to a non-trivial velocity dependence, which allows compatibility with limits stemming from cluster-size objects. However, these models are strongly constrained by different observations, in particular from the requirements that the decay of the light mediator is sufficiently rapid (before Big Bang Nucleosynthesis) and from direct detection. A natural solution to reconcile both requirements are inelastic endothermic interactions, such that scatterings in direct detection experiments are suppressed or even kinematically forbidden if the mass splitting between the two states is sufficiently large. Using an exact solution when numerically solving the Schrödinger equation, we study such scenarios and find regions in the parameter space of dark matter and mediator masses, and the mass splitting of the states, where the small scale structure problems can be solved, the dark matter has the correct relic abundance and direct detection limits can be evaded.
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. These models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, and model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Such small sub-basins contain sufficient information, not only on climate and land surface but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well; those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied in both un-gauged basins and un-gauged periods, with uncertainty estimation.
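A minimal sketch of the scale-extrapolation idea: select the sub-basins whose climate most resembles the large basin and transfer their mean specific discharge (runoff per unit area) to the basin scale. All numbers below (areas, precipitation, the runoff relation) are hypothetical, not Baltic data.

```python
import numpy as np

# Hypothetical sub-basin table (illustrative values only):
# precipitation [mm/yr] and specific discharge (runoff) [mm/yr].
rng = np.random.default_rng(7)
n_sub = 200
precip = rng.uniform(500, 900, n_sub)
runoff = 0.45 * precip + rng.normal(0, 30, n_sub)   # toy runoff relation

basin_area = 1.6e6      # km^2, two orders of magnitude above the sub-basins
basin_precip = 700.0    # mm/yr, basin-average climate

def estimate_discharge(n_select=10):
    """Pick the n_select sub-basins climatically closest to the basin and
    scale their mean specific discharge [mm/yr] to the basin area.
    1 mm/yr over 1 km^2 equals 1e-6 km^3/yr."""
    idx = np.argsort(np.abs(precip - basin_precip))[:n_select]
    q_specific = runoff[idx].mean()
    return q_specific * basin_area * 1e-6            # km^3/yr

print(f"estimated annual discharge: {estimate_discharge():.0f} km^3/yr")
```

Repeating the selection with different, equally well-matching sub-basin sets would give the multiple predictions the abstract uses to bracket the climate and hydrology uncertainties.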
ASSESSING THE IMPORTANCE OF THERMAL REFUGE ...
Salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. The importance of cold water refuges for migrating adult salmon and steelhead may seem intuitive, and refuges are clearly used by fish during warm water episodes. But quantifying the value of both small and large scale thermal features to salmon populations has been challenging due to the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions and of integrating thermal regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We discuss the challenges and opportunities in simulating fish behaviors and linking exposures to migratory and reproductive fitness. In this talk and companion poster, we describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effects of warm waters include impacts to salmon and steelhead populations that may already be stressed by habitat alteration, disease, predation, and fishing pressures. Much effort is being expended to improve conditions for salmon and steelhead.
Evolution of the magnetorotational instability on initially tangled magnetic fields
NASA Astrophysics Data System (ADS)
Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.; Subramanian, Kandaswamy
2017-12-01
The initial magnetic field of previous magnetorotational instability (MRI) simulations has always included a significant system-scale component, even if stochastic. However, it is of conceptual and practical interest to assess whether the MRI can grow when the initial field is turbulent. The ubiquitous presence of turbulent or random flows in astrophysical plasmas generically leads to a small-scale dynamo (SSD), which would provide initial seed turbulent velocity and magnetic fields in the plasma that becomes an accretion disc. Can the MRI grow from these more realistic initial conditions? To address this, we supply a standard shearing box with isotropically forced SSD generated magnetic and velocity fields as initial conditions and remove the forcing. We find that if the initially supplied fields are too weak or too incoherent, they decay from the initial turbulent cascade faster than they can grow via the MRI. When the initially supplied fields are sufficient to allow MRI growth and sustenance, the saturated stresses, large-scale fields and power spectra match those of the standard zero net flux MRI simulation with an initial large-scale vertical field.
Small-scale behavior in distorted turbulent boundary layers at low Reynolds number
NASA Technical Reports Server (NTRS)
Saddoughi, Seyed G.
1994-01-01
During the last three years we have conducted high- and low-Reynolds-number experiments, including hot-wire measurements of the velocity fluctuations, in the test-section-ceiling boundary layer of the 80- by 120-foot Full-Scale Aerodynamics Facility at NASA Ames Research Center, to test the local-isotropy predictions of Kolmogorov's universal equilibrium theory. This hypothesis, which states that at sufficiently high Reynolds numbers the small-scale structures of turbulent motions are independent of large-scale structures and mean deformations, has been used in theoretical studies of turbulence and computational methods such as large-eddy simulation; however, its range of validity in shear flows has been a subject of controversy. The present experiments were planned to enhance our understanding of the local-isotropy hypothesis. Our experiments were divided into two sets. First, measurements were taken at different Reynolds numbers in a plane boundary layer, which is a 'simple' shear flow. Second, experiments were designed to address this question: will our criteria for the existence of local isotropy hold for 'complex' nonequilibrium flows in which extra rates of mean strain are added to the basic mean shear?
On the linearity of tracer bias around voids
NASA Astrophysics Data System (ADS)
Pollina, Giorgia; Hamaus, Nico; Dolag, Klaus; Weller, Jochen; Baldi, Marco; Moscardini, Lauro
2017-07-01
The large-scale structure of the Universe can be observed only via luminous tracers of the dark matter. However, the clustering statistics of tracers are biased and depend on various properties, such as their host-halo mass and assembly history. On very large scales, this tracer bias results in a constant offset in the clustering amplitude, known as linear bias. Towards smaller non-linear scales, this is no longer the case and tracer bias becomes a complicated function of scale and time. We focus on tracer bias centred on cosmic voids, i.e. depressions of the density field that spatially dominate the Universe. We consider three types of tracers: galaxies, galaxy clusters and active galactic nuclei, extracted from the hydrodynamical simulation Magneticum Pathfinder. In contrast to common clustering statistics that focus on auto-correlations of tracers, we find that void-tracer cross-correlations are successfully described by a linear bias relation. The tracer-density profile of voids can thus be related to their matter-density profile by a single number. We show that it coincides with the linear tracer bias extracted from the large-scale auto-correlation function and expectations from theory, if sufficiently large voids are considered. For smaller voids we observe a shift towards higher values. This has important consequences for cosmological parameter inference, as the problem of unknown tracer bias is alleviated up to a constant number. The smallest scales in existing data sets become accessible to simpler models, providing numerous modes of the density field that have been disregarded so far, but may help to further reduce statistical errors in constraining cosmology.
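Relating the tracer and matter profiles of voids by a single number amounts to fitting one slope through the origin, delta_t(r) = b * delta_m(r). The sketch below uses toy void profiles with an assumed input bias of 2.0, not measurements from the simulation.

```python
import numpy as np

# Toy void-centred density profiles (illustrative, not Magneticum output):
# delta_m(r) is the matter underdensity; the tracer profile follows a
# linear bias relation with b = 2.0 plus small scatter.
rng = np.random.default_rng(3)
r = np.linspace(1, 50, 40)                    # radius [Mpc/h]
delta_m = -np.exp(-r / 15.0)                  # toy matter profile
delta_t = 2.0 * delta_m + rng.normal(0, 0.02, r.size)

# Least-squares slope through the origin: the single number that maps
# the matter profile onto the tracer profile.
b_fit = np.sum(delta_t * delta_m) / np.sum(delta_m ** 2)
print(f"recovered linear bias: {b_fit:.2f}")
```

Recovering the input slope from the cross-profile is the sense in which void-tracer cross-correlations remain linear even on scales where tracer auto-correlations are not.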
Delensing CMB polarization with external datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kendrick M.; Hanson, Duncan; LoVerde, Marilena
2012-06-01
One of the primary scientific targets of current and future CMB polarization experiments is the search for a stochastic background of gravity waves in the early universe. As instrumental sensitivity improves, the limiting factor will eventually be B-mode power generated by gravitational lensing, which can be removed through use of so-called "delensing" algorithms. We forecast prospects for delensing using lensing maps which are obtained externally to CMB polarization: either from large-scale structure observations, or from high-resolution maps of CMB temperature. We conclude that the forecasts in either case are not encouraging, and that significantly delensing large-scale CMB polarization requires high-resolution polarization maps with sufficient sensitivity to measure the lensing B-mode. We also present a simple formalism for including delensing in CMB forecasts which is computationally fast and agrees well with Monte Carlos.
Drivers and barriers to e-invoicing adoption in Greek large scale manufacturing industries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marinagi, Catherine, E-mail: marinagi@teihal.gr, E-mail: ptrivel@yahoo.com, E-mail: preklitis@yahoo.com; Trivellas, Panagiotis, E-mail: marinagi@teihal.gr, E-mail: ptrivel@yahoo.com, E-mail: preklitis@yahoo.com; Reklitis, Panagiotis, E-mail: marinagi@teihal.gr, E-mail: ptrivel@yahoo.com, E-mail: preklitis@yahoo.com
2015-02-09
This paper attempts to investigate the drivers and barriers that large-scale Greek manufacturing industries experience in adopting electronic invoices (e-invoices), based on three case studies with organizations having international presence in many countries. The study focuses on the drivers that may affect the increase of the adoption and use of e-invoicing, including the customers' demand for e-invoices and sufficient know-how and adoption of e-invoicing in organizations. In addition, the study reveals important barriers that prevent the expansion of e-invoicing, such as suppliers' reluctance to implement e-invoicing and IT infrastructure incompatibilities. Other issues examined by this study include the observed benefits from e-invoicing implementation and the financial priorities of the organizations assumed to be supported by e-invoicing.
DataWarrior: an open-source program for chemistry aware data visualization and analysis.
Sander, Thomas; Freyss, Joel; von Korff, Modest; Rufener, Christian
2015-02-23
Drug discovery projects in the pharmaceutical industry accumulate thousands of chemical structures and tens of thousands of data points from a dozen or more biological and pharmacological assays. A sufficient interpretation of the data requires understanding which molecular families are present, which structural motifs correlate with measured properties, and which tiny structural changes cause large property changes. Data visualization and analysis software with sufficient chemical intelligence to support chemists in this task is rare. In an attempt to contribute to filling the gap, we released our in-house developed chemistry aware data analysis program DataWarrior for free public use. This paper gives an overview of DataWarrior's functionality and architecture. As an example, a new unsupervised, 2-dimensional scaling algorithm is presented, which employs vector-based or nonvector-based descriptors to visualize the chemical or pharmacophore space of even large data sets. DataWarrior uses this method to interactively explore chemical space, activity landscapes, and activity cliffs.
Digital Archiving of People Flow by Recycling Large-Scale Social Survey Data of Developing Cities
NASA Astrophysics Data System (ADS)
Sekimoto, Y.; Watanabe, A.; Nakamura, T.; Horanont, T.
2012-07-01
Data on people flow has become increasingly important in the field of business, including the areas of marketing and public services. Although mobile phones enable a person's position to be located to a certain degree, it is a challenge to acquire sufficient data from people with mobile phones. In order to grasp people flow in its entirety, it is important to establish a practical method of reconstructing people flow from various kinds of existing fragmentary spatio-temporal data, such as social survey data. For example, although typical Person Trip Survey data collected by the public sector show only fragmentary spatio-temporal positions, the data are attractive given a sample size sufficiently large to estimate the entire flow of people. In this study, we apply our proposed basic method to Japan International Cooperation Agency (JICA) PT data pertaining to developing cities around the world, and we propose some correction methods to resolve the difficulties in applying it stably across many cities and to infrastructure data.
Federal solar policies yield neither heat nor light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silverstein, M.
1978-02-06
Thirty years of Federal energy policies and bureaucracy are criticized for their limited success in promoting nuclear energy and their present involvement in solar technology. Mr. Silverstein feels that poor judgment was shown in pursuit of large-scale solar demonstrations between 1973 and 1976, when Federal agencies ignored existing solar companies and awarded contracts to the large corporations. A fetish for crash research programs, he also feels, led to the creation of the Solar Energy Research Institute (SERI), which concentrates on wasteful high-technology projects rather than building on what has already been developed in the field. He cites "even more destructive" policies adopted by the Housing and Urban Development Agency (HUD), which attacked many solar suppliers without sufficient evidence and then developed a solar-water-heater grant program that effectively distorted the market. The author feels that the solar technology market is sufficiently viable and that government participation is more appropriate in the form of tax credits and guaranteed loans.
High-Accuracy Near-Surface Large-Eddy Simulation with Planar Topography
2015-08-03
Navier-Stokes equation, in effect randomizing the subfilter-scale (SFS) stress divergence. In the intervening years it has been discovered that this...surface stress models do introduce spurious effects that force deviations from LOTW at the first couple grid levels adjacent to the surface. Fig. 10 shows...SFS stress is sufficiently overwhelming to produce the overshoot. When the LES is moved into the HAZ so that the viscous effects causing the
Drivers Behind the PRC’s Port Investments: Cases in Darwin and Sri Lanka
2017-12-01
Territory Government’s Port of Darwin in Australia and the Port of Hambantota in Sri Lanka. It examines whether security concerns or economic ...objectives are driving Chinese, Australian, and Sri Lankan behavior. Through a detailed analysis of available policy statements and economic data, the thesis...principally motivated by economic goals. They both lack sufficient domestic funds to accomplish their own large-scale port development goals, with Darwin
U.S. Nuclear Weapons Enterprise: A Strategic Past and Unknown Future
2012-04-25
are left to base their planning assumptions, weapons designs and capabilities on outdated models . The likelihood of a large-scale nuclear war has...conduct any testing on nuclear weapons and must rely on computer modeling . While this may provide sufficient confidence in the current nuclear...unlikely the world will be free of nuclear weapons. 24 APPENDIX A – Acronyms ACC – Air Combat Command ACM – Advanced cruise missle CSAF
Struniawski, R; Szpechcinski, A; Poplawska, B; Skronski, M; Chorostowska-Wynimko, J
2013-01-01
The dried blood spot (DBS) specimens have been successfully employed for the large-scale diagnostics of α1-antitrypsin (AAT) deficiency as an easy to collect and transport alternative to plasma/serum. In the present study we propose a fast, efficient, and cost effective protocol of DNA extraction from dried blood spot (DBS) samples that provides sufficient quantity and quality of DNA and effectively eliminates any natural PCR inhibitors, allowing for successful AAT genotyping by real-time PCR and direct sequencing. DNA extracted from 84 DBS samples from chronic obstructive pulmonary disease patients was genotyped for AAT deficiency variants by real-time PCR. The results of DBS AAT genotyping were validated by serum IEF phenotyping and AAT concentration measurement. The proposed protocol allowed successful DNA extraction from all analyzed DBS samples. Both quantity and quality of DNA were sufficient for further real-time PCR and, if necessary, for genetic sequence analysis. A 100% concordance between DBS AAT genotypes and serum phenotypes in positive detection of the two major deficiency S- and Z-alleles was achieved. Both assays, DBS AAT genotyping by real-time PCR and serum AAT phenotyping by IEF, positively identified the PI*S and PI*Z alleles in 8 out of the 84 (9.5%) and 16 out of 84 (19.0%) patients, respectively. In conclusion, the proposed protocol noticeably reduces the costs and the hands-on time of DBS sample preparation, providing genomic DNA of sufficient quantity and quality for further real-time PCR or genetic sequence analysis. Consequently, it is ideally suited for large-scale AAT deficiency screening programs and should be the method of choice.
NASA Astrophysics Data System (ADS)
Wosnik, Martin; Bachant, Peter
2016-11-01
Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference, or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines of diameter D ∼ O(1 m), using a turbine test bed in a large cross-section tow tank designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds-number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.
Pulsar recoil by large-scale anisotropies in supernova explosions.
Scheck, L; Plewa, T; Janka, H-Th; Kifonidis, K; Müller, E
2004-01-09
Assuming that the neutrino luminosity from the neutron star core is sufficiently high to drive supernova explosions by the neutrino-heating mechanism, we show that low-mode (l=1,2) convection can develop from random seed perturbations behind the shock. A slow onset of the explosion is crucial, requiring the core luminosity to vary slowly with time, in contrast to the burstlike exponential decay assumed in previous work. Gravitational and hydrodynamic forces by the globally asymmetric supernova ejecta were found to accelerate the remnant neutron star on a time scale of more than a second to velocities above 500 km s(-1), in agreement with observed pulsar proper motions.
NASA Astrophysics Data System (ADS)
Kumar, Narender; Singh, Ram Kishor; Sharma, Swati; Uma, R.; Sharma, R. P.
2018-01-01
This paper presents numerical simulations of laser beam (x-mode) coupling with a magnetosonic wave (MSW) in a collisionless plasma. The coupling arises through ponderomotive non-linearity. The pump beam has been perturbed by a periodic perturbation that leads to the nonlinear evolution of the laser beam. It is observed that the frequency spectra of the MSW have peaks at terahertz frequencies. The simulation results show quite complex localized structures that grow with time. The ensemble-averaged power spectrum has also been studied; it indicates that the spectral index follows an approximate scaling of ∼ k^(-2.1) at large scales and ∼ k^(-3.6) at smaller scales. The results indicate considerable randomness in the spatial structure of the magnetic field profile, which gives sufficient indication of turbulence.
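Extracting a spectral index such as the reported ∼ k^(-2.1) and ∼ k^(-3.6) scalings is a log-log slope fit over a chosen wavenumber range. The sketch below applies the fit to a synthetic broken power law built with those slopes; it is not the simulation's spectrum, and the break wavenumber is an arbitrary choice.

```python
import numpy as np

# Synthetic power spectrum with a spectral break (illustrative only):
# slope -2.1 below the break, -3.6 above, continuous at k_break.
k = np.logspace(-1, 2, 300)
k_break = 3.0
P = np.where(k < k_break,
             k ** -2.1,
             k_break ** (-2.1 + 3.6) * k ** -3.6)

def spectral_index(k, P, kmin, kmax):
    """Least-squares slope of log P versus log k over [kmin, kmax]."""
    m = (k >= kmin) & (k <= kmax)
    slope, _ = np.polyfit(np.log(k[m]), np.log(P[m]), 1)
    return slope

print(round(spectral_index(k, P, 0.1, 2.0), 2))    # large-scale range
print(round(spectral_index(k, P, 5.0, 100.0), 2))  # small-scale range
```

Fitting the two ranges separately, rather than the whole spectrum at once, is what resolves the two distinct scalings on either side of the break.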
Haase, Doreen; Puan, Kia Joo; Starke, Mireille; Lai, Tuck Siong; Soh, Melissa Yan Ling; Karunanithi, Iyswariya; San Luis, Boris; Poh, Tuang Yeow; Yusof, Nurhashikin; Yeap, Chun Hsien; Phang, Chew Yen; Chye, Willis Soon Yuan; Chan, Marieta; Koh, Mickey Boon Chai; Goh, Yeow Tee; Bertin-Maghit, Sebastien; Nardin, Alessandra; Ho, Liam Pock; Rotzschke, Olaf
2015-01-01
Adoptive cell therapy is an emerging treatment strategy for a number of serious diseases. Regulatory T (Treg) cells represent 1 cell type of particular interest for therapy of inflammatory conditions, as they are responsible for controlling unwanted immune responses. Initial clinical trials of adoptive transfer of Treg cells in patients with graft-versus-host disease were shown to be safe. However, obtaining sufficient numbers of highly pure and functional Treg cells with minimal contamination remains a challenge. We developed a novel approach to isolate "untouched" human Treg cells from healthy donors on the basis of negative selection using the surface markers CD49d and CD127. This procedure, which uses an antibody cocktail and magnetic beads for separation in an automated system (RoboSep), was scaled up and adapted to be compatible with good manufacturing practice conditions. With this setup we performed 9 Treg isolations from large-scale leukapheresis samples in a good manufacturing practice facility. These runs yielded sufficient numbers of "untouched" Treg cells for immediate use in clinical applications. The cell preparations consisted of viable highly pure FoxP3-positive Treg cells that were functional in suppressing the proliferation of effector T cells. Contamination with CD4 effector T cells was <10%. All other cell types did not exceed 2% in the final product. Remaining isolation reagents were reduced to levels that are considered safe. Treg cells isolated with this procedure will be used in a phase I clinical trial of adoptive transfer into leukemia patients developing graft-versus-host disease after stem cell transplantation.
Scale-dependent coupling of hysteretic capillary pressure, trapping, and fluid mobilities
NASA Astrophysics Data System (ADS)
Doster, F.; Celia, M. A.; Nordbotten, J. M.
2012-12-01
Many applications of multiphase flow in porous media, including CO2 storage and enhanced oil recovery, require mathematical models that span a large range of length scales. In the context of numerical simulations, practical grid sizes are often on the order of tens of meters, thereby de facto defining a coarse model scale. Under particular conditions, it is possible to approximate the sub-grid-scale distribution of the fluid saturation within a grid cell; that reconstructed saturation can then be used to compute effective properties at the coarse scale. If both the density difference between the fluids and the vertical extent of the grid cell are large, and buoyant segregation within the cell occurs on a sufficiently short time scale, then the phase pressure distributions are essentially hydrostatic and the saturation profile can be reconstructed from the inferred capillary pressures. However, the saturation reconstruction may not be unique, because the parameters and parameter functions of classical formulations of two-phase flow in porous media (the relative permeability functions, the capillary pressure-saturation relationship, and the residual saturations) show path dependence, i.e. their values depend not only on the state variables but also on their drainage and imbibition histories. In this study we focus on capillary pressure hysteresis and trapping and show that the contribution of hysteresis to effective quantities depends on the vertical length scale. By studying the transition between the two extreme cases, the homogeneous saturation distribution for small vertical extents and the completely segregated distribution for large extents, we identify how hysteretic capillary pressure at the local scale induces hysteresis in all coarse-scale quantities for medium vertical extents and finally vanishes for large vertical extents.
Our results allow for more accurate vertically integrated modeling while improving our understanding of the coupling of capillary pressure and relative permeabilities over larger length scales.
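As a rough illustration of the hydrostatic saturation reconstruction described above, the following sketch inverts an illustrative Brooks-Corey capillary pressure curve along the vertical extent of one coarse grid cell; all parameter values (entry pressure, pore-size index, residual saturation, density difference) are assumptions for demonstration, not taken from the study.

```python
import numpy as np

def brooks_corey_pc(s_w, pd=5e3, lam=2.0, s_wr=0.2):
    """Illustrative Brooks-Corey capillary pressure [Pa] vs. wetting saturation."""
    s_eff = (s_w - s_wr) / (1.0 - s_wr)
    return pd * s_eff ** (-1.0 / lam)

def reconstruct_saturation(z, pc_datum, drho=300.0, g=9.81,
                           pd=5e3, lam=2.0, s_wr=0.2):
    """Invert the hydrostatic capillary pressure pc(z) = pc_datum + drho*g*z
    through the Brooks-Corey curve to obtain a sub-grid saturation profile."""
    pc = pc_datum + drho * g * z                 # hydrostatic assumption
    s_eff = (pd / np.maximum(pc, pd)) ** lam     # capped at the entry pressure
    return s_wr + (1.0 - s_wr) * s_eff

z = np.linspace(0.0, 50.0, 101)       # vertical coordinate within one coarse cell [m]
s_w = reconstruct_saturation(z, pc_datum=6e3)
coarse_saturation = s_w.mean()        # effective coarse-scale saturation
```

The reconstructed wetting saturation decreases upward (the buoyant phase accumulates at the top), and its vertical average is the effective coarse-scale value used in vertically integrated models.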
Gadkar, Vijay J; Filion, Martin
2013-06-01
In various experimental systems, limiting available amounts of RNA may prevent a researcher from performing large-scale analyses of gene transcripts. One way to circumvent this is to 'pre-amplify' the starting RNA/cDNA, so that sufficient amounts are available for any downstream analysis. In the present study, we report the development of a novel protocol for constructing amplified cDNA libraries using the Phi29 DNA polymerase based multiple displacement amplification (MDA) system. Using as little as 200 ng of total RNA, we developed a linear concatenation strategy to make the single-stranded cDNA template amenable for MDA. The concatenation, made possible by the template switching property of the reverse transcriptase enzyme, resulted in an amplified cDNA library with intact 5' ends. MDA generated micrograms of template, allowing large-scale polymerase chain reaction analyses or other large-scale downstream applications. As the amplified cDNA library contains intact 5' ends, it is also compatible with 5' RACE analyses of specific gene transcripts. Empirical validation of this protocol is demonstrated on a highly characterized (tomato) and an uncharacterized (corn gromwell) experimental system.
Measuring Teaching Quality and Student Engagement in South Korea and The Netherlands
ERIC Educational Resources Information Center
van de Grift, Wim J. C. M.; Chun, Seyeoung; Maulana, Ridwan; Lee, Okhwa; Helms-Lorenz, Michelle
2017-01-01
Six observation scales for measuring the skills of teachers and 1 scale for measuring student engagement, assessed in South Korea and The Netherlands, are sufficiently reliable and offer sufficient predictive value for student engagement. A multigroup confirmatory factor analysis shows that the factor loadings and intercepts of the scales are the…
Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach
Bigdely-Shamlo, Nima; Makeig, Scott; Robbins, Kay A.
2016-01-01
Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain–computer interface models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty of moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a “containerized” approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining, and (meta-)analysis. The EEG Study Schema (ESS) comprises three data “Levels,” each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at www.eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org). PMID:27014048
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and node system/interaction complexity increase. It is therefore a challenging problem to find scalable computational methods for distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirement in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties.
One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving the LMIs may be shared among processors located at the networked nodes, increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
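The paper's LMI conditions are solved with MATLAB's toolbox; as a minimal numerical sketch of the underlying stability certificate only, the snippet below solves the Lyapunov equation A^T P + P A = -Q for a single hypothetical (already stabilised) node via the Kronecker-product formulation and checks P > 0. The matrices are toy assumptions, not the paper's network model.

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A^T P + P A = -Q for symmetric P.

    Uses the column-major vec identities vec(A^T P) = kron(I, A^T) vec(P)
    and vec(P A) = kron(A^T, I) vec(P) to reduce to a linear system."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(M, -Q.flatten(order="F")).reshape(n, n, order="F")
    return 0.5 * (P + P.T)   # symmetrise against round-off

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # a stabilised (Hurwitz) node, assumed
Q = np.eye(2)
P = solve_lyapunov(A, Q)
is_stable = bool(np.all(np.linalg.eigvalsh(P) > 0))   # P > 0 certifies stability
```

For this A the solution is P = [[1.25, 0.25], [0.25, 0.25]], which is positive definite, so the node dynamics are certified stable; an LMI solver generalises this feasibility check to the coupled network case.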
A numerical study of Coulomb interaction effects on 2D hopping transport.
Kinkhabwala, Yusuf A; Sverdlov, Viktor A; Likharev, Konstantin K
2006-02-15
We have extended our supercomputer-enabled Monte Carlo simulations of hopping transport in completely disordered 2D conductors to the case of substantial electron-electron Coulomb interaction. Such interaction may not only suppress the average value of the hopping current, but also affect its fluctuations rather substantially. In particular, the spectral density S_I(f) of current fluctuations exhibits, at sufficiently low frequencies, a 1/f-like increase which approximately follows the Hooge scaling, even at vanishing temperature. At higher f, there is a crossover to a broad range of frequencies in which S_I(f) is nearly constant, hence allowing characterization of the current noise by the effective Fano factor [Formula: see text]. For sufficiently large conductor samples and low temperatures, the Fano factor is suppressed below the Schottky value (F = 1), scaling with the length L of the conductor as F = (L_c/L)^α. The exponent α is significantly affected by the Coulomb interaction effects, changing from α = 0.76 ± 0.08 when such effects are negligible to virtually unity when they are substantial. The scaling parameter L_c, interpreted as the average percolation cluster length along the electric field direction, scales as [Formula: see text] when Coulomb interaction effects are negligible and [Formula: see text] when such effects are substantial, in good agreement with estimates based on the theory of directed percolation.
Kim, Hyoung Jun; Kim, Tae Oh; Shin, Bong Chul; Woo, Jae Gon; Seo, Eun Hee; Joo, Hee Rin; Heo, Nae-Yun; Park, Jongha; Park, Seung Ha; Yang, Sung Yeon; Moon, Young Soo; Shin, Jin-Yong; Lee, Nae Young
2012-01-01
Currently, a split dose of polyethylene glycol (PEG) is the mainstay of bowel preparation due to its tolerability, bowel-cleansing action, and safety. However, bowel preparation with PEG is suboptimal because residual fluid reduces the polyp detection rate and requires a more thorough colon inspection. The aim of our study was to demonstrate the efficacy of a sufficient dose of prokinetics on bowel cleansing together with split-dose PEG. A prospective endoscopist-blinded study was conducted. Patients were randomly allocated to two groups: prokinetic with split-dose PEG or split-dose PEG alone. A prokinetic [100 mg itopride (Itomed)] was administered twice, simultaneously with each split dose of PEG. Bowel-cleansing efficacy was measured by endoscopists using the Ottawa scale and the segmental fluidity scale score. Each participant completed a bowel preparation survey. Mean scores from the Ottawa scale, segmental fluid scale, and rate of poor preparation were compared between the two groups. Patients in the prokinetics with split-dose PEG group showed significantly lower total Ottawa and segmental fluid scores compared with patients in the split-dose PEG alone group. A sufficient dose of prokinetics with a split dose of PEG showed efficacy in bowel cleansing for morning colonoscopy, largely due to the reduction in colonic fluid. Copyright © 2012 S. Karger AG, Basel.
A three-term conjugate gradient method under the strong-Wolfe line search
NASA Astrophysics Data System (ADS)
Khadijah, Wan; Rivaie, Mohd; Mamat, Mustafa
2017-08-01
Recently, numerous studies have been concerned with conjugate gradient methods for solving large-scale unconstrained optimization problems. In this paper, a three-term conjugate gradient method is proposed for unconstrained optimization which always satisfies the sufficient descent condition, named Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL). Under standard conditions, the TTRMIL method is proved to be globally convergent under the strong-Wolfe line search. Finally, numerical results are provided for the purpose of comparison.
A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search
NASA Astrophysics Data System (ADS)
Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd
2017-08-01
The conjugate gradient (CG) method is an important technique in unconstrained optimization, due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization problems. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. Moreover, computational results show that our proposed method outperforms other existing CG methods.
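Neither TTRMIL nor the new PRP family above is specified in these abstracts, so the sketch below shows only the classical Polak-Ribiere-Polyak update that such methods build on, with an exact line search on a convex quadratic standing in for the strong Wolfe-Powell inexact search; it is an illustration of the CG framework, not either paper's method.

```python
import numpy as np

def prp_cg(A, b, x0, tol=1e-8, max_iter=200):
    """Minimize f(x) = 0.5 x^T A x - b^T x (A symmetric positive definite)
    with the classical Polak-Ribiere-Polyak conjugate gradient update."""
    x = x0.copy()
    g = A @ x - b                              # gradient of the quadratic
    d = -g                                     # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ (A @ d))       # exact line search step
        x = x + alpha * d
        g_new = A @ x - b
        beta = g_new @ (g_new - g) / (g @ g)   # PRP beta
        d = -g_new + beta * d                  # new search direction
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = prp_cg(A, b, np.zeros(2))   # converges to the solution of A x = b
```

On a quadratic with exact line search the PRP iteration reduces to linear CG and terminates in at most n steps; the papers' contribution lies in the modified directions and inexact line-search analysis for general nonlinear objectives.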
A Feasibility Experiment for a Soft X-Ray Laser
1976-09-01
has embarked on a large-scale laser fusion program initially aimed at achieving sufficient thermonuclear yield from a single pellet to initiate a...gold, aluminum). The report suggests that 10 to 20 percent of the incident laser energy can be converted to X rays below 1 keV. A Lawrence Livermore...Computations of the population inversion for the inner-shell electrons, as found in aluminum, indicate a favorable
Anomalous Fluctuations in Autoregressive Models with Long-Term Memory
NASA Astrophysics Data System (ADS)
Sakaguchi, Hidetsugu; Honjo, Haruo
2015-10-01
An autoregressive model with a power-law-type memory kernel is studied as a stochastic process that exhibits self-affine-fractal-like behavior on small time scales. We find numerically that the root-mean-square displacement Δ(m) for the time interval m increases with a power law as m^α with α < 1/2 for small m but saturates at sufficiently large m. The exponent α changes with the power exponent of the memory kernel.
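The paper's power-law kernel is not reproduced here; as a hedged illustration of the growth-then-saturation of the RMS displacement Δ(m) for a stationary process with memory, the sketch below uses a simple AR(1) surrogate (exponential rather than power-law memory), with assumed coefficient and sample size.

```python
import numpy as np

# Simulate a near-unit-root AR(1): x_t = phi * x_{t-1} + noise (assumed surrogate).
rng = np.random.default_rng(0)
n, phi = 100_000, 0.99
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

def rms_displacement(x, m):
    """Root-mean-square displacement Delta(m) over lag m."""
    d = x[m:] - x[:-m]
    return np.sqrt(np.mean(d * d))

small = rms_displacement(x, 5)        # growth regime at small lags
large = rms_displacement(x, 20_000)   # saturated regime at large lags
```

For a stationary process Δ(m) grows at small lags but saturates near sqrt(2) times the standard deviation once m exceeds the correlation time, mirroring (qualitatively, not in exponent) the α < 1/2 growth and saturation reported above.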
2013-07-01
Humanitarian Response Depot (in Malaysia) UNISDR United Nations International Strategy for Disaster Reduction UNOCHA United Nations Office for the...suffered two different types of disasters—(1) the earthquake and tsunami and (2) large-scale radioactive contamination, we focus our analysis on the...trends of the decline in energy, food and water sufficiency, and the increase in HIV transmission, drug addiction and people smuggling "will have
Secondary flow structures in large rivers
NASA Astrophysics Data System (ADS)
Chauvet, H.; Devauchelle, O.; Metivier, F.; Limare, A.; Lajeunesse, E.
2012-04-01
Measuring the velocity field in large rivers remains a challenge, even with recent measurement techniques such as Acoustic Doppler Current Profiler (ADCP). Indeed, due to the diverging angle between its ultrasonic beams, an ADCP cannot detect small-scale flow structures. However, when the measurements are limited to a single location for a sufficient period of time, averaging can reveal large, stationary flow structures. Here we present velocity measurements in a straight reach of the Seine river in Paris, France, where the cross-section is close to rectangular. The transverse modulation of the streamwise velocity indicates secondary flow cells, which seem to occupy the entire width of the river. This observation is reminiscent of the longitudinal vortices observed in laboratory experiments (e.g. Blanckaert et al., Advances in Water Resources, 2010, 33, 1062-1074). Although the physical origin of these secondary structures remains unclear, their measured velocity is sufficient to significantly impact the distribution of streamwise momentum. We propose a model for the transverse profile of the depth-averaged velocity based on a crude representation of the longitudinal vortices, with a single free parameter. Preliminary results are in good agreement with field measurements. This model also provides an estimate for the bank shear stress, which controls bank erosion.
Endocytic reawakening of motility in jammed epithelia
NASA Astrophysics Data System (ADS)
Malinverno, Chiara; Corallino, Salvatore; Giavazzi, Fabio; Bergert, Martin; Li, Qingsen; Leoni, Marco; Disanza, Andrea; Frittoli, Emanuela; Oldani, Amanda; Martini, Emanuele; Lendenmann, Tobias; Deflorian, Gianluca; Beznoussenko, Galina V.; Poulikakos, Dimos; Ong, Kok Haur; Uroz, Marina; Trepat, Xavier; Parazzoli, Dario; Maiuri, Paolo; Yu, Weimiao; Ferrari, Aldo; Cerbino, Roberto; Scita, Giorgio
2017-05-01
Dynamics of epithelial monolayers has recently been interpreted in terms of a jamming or rigidity transition. How cells control such phase transitions is, however, unknown. Here we show that RAB5A, a key endocytic protein, is sufficient to induce large-scale, coordinated motility over tens of cells, and ballistic motion in otherwise kinetically arrested monolayers. This is linked to increased traction forces and to the extension of cell protrusions, which align with local velocity. Molecularly, impairing endocytosis, macropinocytosis or increasing fluid efflux abrogates RAB5A-induced collective motility. A simple model based on mechanical junctional tension and an active cell reorientation mechanism for the velocity of self-propelled cells identifies regimes of monolayer dynamics that explain endocytic reawakening of locomotion in terms of a combination of large-scale directed migration and local unjamming. These changes in multicellular dynamics enable collectives to migrate under physical constraints and may be exploited by tumours for interstitial dissemination.
Willow bioenergy plantation research in the Northeast
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, E.H.; Abrahamson, L.P.; Kopp, R.F.
1993-12-31
Experiments were established in Central New York in the spring of 1987 to evaluate the potential of Salix for biomass production in bioenergy plantations. Emphasis of the research was on developing and refining establishment, tending, and maintenance techniques, with complementary study of breeding, coppice physiology, pests, nutrient use, and bioconversion to energy products. Current yields utilizing Salix clones developed in cooperation with the University of Toronto in short-rotation intensive-culture bioenergy plantations in the Northeast approximate 8 oven-dry tons per acre per year with annual harvesting. Successful clones have been identified and culture techniques refined. The results are now being integrated to establish a 100-acre Salix large-scale bioenergy farm to demonstrate current successful biomass production technology and to provide plantations of sufficient size to test harvesters, adequately assess the economics of the systems, and provide large quantities of uniform biomass for pilot-scale conversion facilities.
NASA Technical Reports Server (NTRS)
Engeln, J. F.; Stein, S.
1984-01-01
A new model for the Easter plate is presented in which rift propagation has resulted in the formation of a rigid plate between the propagating and dying ridges. The distribution of earthquakes, eleven new focal mechanisms, and existing bathymetric and magnetic data are used to describe the tectonics of this area. Both the Easter-Nazca and Easter-Pacific Euler poles are sufficiently close to the Easter plate to cause rapid changes in rates and directions of motion along the boundaries. The east and west boundaries are propagating and dying ridges; the southwest boundary is a slow-spreading ridge, and the northern boundary is a complex zone of convergent and transform motion. The Easter plate may reflect the tectonics of rift propagation on a large scale, where rigid plate tectonics requires boundary reorientation. Simple schematic models are used to illustrate the general features and processes that occur at plates resulting from large-scale rift propagation.
Thin Disks Gone MAD: Magnetically Arrested Accretion in the Thin Regime
NASA Astrophysics Data System (ADS)
Avara, Mark J.; McKinney, Jonathan C.; Reynolds, Christopher S.
2015-01-01
The collection and concentration of surrounding large-scale magnetic fields by black hole accretion disks may be required for the production of powerful, spin-driven jets. So far, accretion disks have not been shown to grow sufficient poloidal flux via the turbulent dynamo alone to produce such persistent jets. There have also been conflicting answers as to how, or even if, an accretion disk can collect enough magnetic flux from the ambient environment. Extending prior numerical studies of magnetically arrested disks (MAD) in the thick (angular height H/R ~ 1) and intermediate (H/R ~ 0.2-0.6) accretion regimes, we present our latest results from fully general relativistic MHD simulations of the thinnest black hole accretion disks to date (H/R ~ 0.1) exhibiting the MAD mode of accretion. We explore the significant deviations of this accretion mode from the standard picture of thin, MRI-driven accretion, and demonstrate the accumulation of large-scale magnetic flux.
Aerodynamic force measurement on a large-scale model in a short duration test facility
NASA Astrophysics Data System (ADS)
Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.
2005-03-01
A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
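A minimal sketch of the two-accelerometer idea on synthetic signals: a natural-vibration mode that appears with opposite sign at the two mounting points cancels in the combined signal, leaving the rigid-body acceleration from which drag follows as F = m a. The amplitudes, vibration frequency, and model mass below are assumed values, not HIEST data.

```python
import numpy as np

t = np.linspace(0.0, 1e-3, 2000)            # 1 ms test window [s]
a_rigid = 50.0 * np.ones_like(t)            # rigid-body deceleration [m/s^2], assumed
vib = 20.0 * np.sin(2 * np.pi * 5e3 * t)    # natural-vibration mode, assumed 5 kHz

a1 = a_rigid + vib                          # accelerometer at location 1
a2 = a_rigid - vib                          # accelerometer at location 2 (opposite phase)

a_est = 0.5 * (a1 + a2)                     # vibration mode cancels in the average
mass = 300.0                                # model mass [kg], assumed
drag = mass * a_est.mean()                  # aerodynamic force F = m * a
```

In practice the mode shape determines the weighting of the two signals; the equal-weight average here is the simplest case, where the mode has equal and opposite amplitude at the two mounting points.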
The Universe at Moderate Redshift
NASA Technical Reports Server (NTRS)
Cen, Renyue; Ostriker, Jeremiah P.
1997-01-01
The report covers the work done in the past year across a wide range of fields, including: properties of clusters of galaxies; topological properties of galaxy distributions in terms of galaxy types; patterns of the gravitational nonlinear clustering process; development of a ray-tracing algorithm to study the gravitational lensing phenomenon by galaxies, clusters, and large-scale structure, one application of which is the effect of weak gravitational lensing by large-scale structure on the determination of q_0; the origin of magnetic fields on the galactic and cluster scales; the topological properties of Ly-alpha clouds; the Ly-alpha optical depth distribution; clustering properties of Ly-alpha clouds; and a determination (lower bound) of Omega_b based on the observed Ly-alpha forest flux distribution. In the coming year, we plan to continue the investigation of Ly-alpha clouds using larger dynamic range (about a factor of two) and better simulations (with more input physics included) than we have now. We will study the properties of galaxies on 1-100 h^-1 Mpc scales using our state-of-the-art large-scale galaxy formation simulations of various cosmological models, which will have a resolution about a factor of 5 (in each dimension) better than our current best simulations. We also plan to study the properties of X-ray clusters using unprecedented, very high dynamic range (20,000) simulations, which will enable us to resolve the cores of clusters while keeping the simulation volume sufficiently large to ensure a statistically fair sample of the objects of interest. The details of the last year's work are described below.
NASA Astrophysics Data System (ADS)
Rasskazov, Andrey; Chertovskih, Roman; Zheligovsky, Vladislav
2018-04-01
We introduce six families of three-dimensional space-periodic steady solenoidal flows, whose kinetic helicity density is zero at any point. Four families are analytically defined. Flows in four families have zero helicity spectrum. Sample flows from five families are used to demonstrate numerically that neither zero kinetic helicity density nor zero helicity spectrum prohibit generation of large-scale magnetic field by the two most prominent dynamo mechanisms: the magnetic α -effect and negative eddy diffusivity. Our computations also attest that such flows often generate small-scale field for sufficiently small magnetic molecular diffusivity. These findings indicate that kinetic helicity and helicity spectrum are not the quantities controlling the dynamo properties of a flow regardless of whether scale separation is present or not.
Soteriades, Andreas Diomedes; Stott, Alistair William; Moreau, Sindy; Charroin, Thierry; Blanchard, Melanie; Liu, Jiayi; Faverdin, Philippe
2016-01-01
We aimed to quantify the extent to which agricultural management practices linked to animal production and land use affect environmental outcomes at a larger scale. Two practices closely linked to farm environmental performance at a larger scale are farming intensity, often resulting in greater off-farm environmental impacts (land, non-renewable energy use etc.) associated with the production of imported inputs (e.g. concentrates, fertilizer); and the degree of self-sufficiency, i.e. the farm's capacity to produce goods from its own resources, with higher control over nutrient recycling and thus minimization of losses to the environment, often resulting in greater on-farm impacts (eutrophication, acidification etc.). We explored the relationship of these practices with farm environmental performance for 185 French specialized dairy farms. We used Partial Least Squares Structural Equation Modelling to build, and relate, latent variables of environmental performance, intensification and self-sufficiency. Proxy indicators reflected the latent variables for intensification (milk yield/cow, use of maize silage etc.) and self-sufficiency (home-grown feed/total feed use, on-farm energy/total energy use etc.). Environmental performance was represented by an aggregate 'eco-efficiency' score per farm derived from a Data Envelopment Analysis model fed with LCA and farm output data. The dataset was split into two spatially heterogeneous (bio-physical conditions, production patterns) regions. For both regions, eco-efficiency was significantly negatively related with milk yield/cow and the use of maize silage and imported concentrates. However, these results might not necessarily hold for intensive yet more self-sufficient farms. This requires further investigation with latent variables for intensification and self-sufficiency that do not largely overlap, a modelling challenge that arose here.
We conclude that the environmental 'sustainability' of intensive dairy farming depends on particular farming systems and circumstances, although we note that more self-sufficient farms may be preferable when they may benefit from relatively low land prices and agri-environment schemes aimed at maintaining grasslands.
NASA Astrophysics Data System (ADS)
Tiselj, Iztok
2014-12-01
Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; the size was taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flow, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that are within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of the standard and large computational domains at Pr = 0.01, show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large scales.
These large thermal structures represent an echo of the large-scale velocity structures: the highest temperature-velocity correlations are observed not between the instantaneous temperatures and instantaneous streamwise velocities, but between the instantaneous temperatures and velocities averaged over a certain time interval.
Homogenization of a Directed Dispersal Model for Animal Movement in a Heterogeneous Environment.
Yurk, Brian P
2016-10-01
The dispersal patterns of animals moving through heterogeneous environments have important ecological and epidemiological consequences. In this work, we apply the method of homogenization to analyze an advection-diffusion (AD) model of directed movement in a one-dimensional environment in which the scale of the heterogeneity is small relative to the spatial scale of interest. We show that the large (slow) scale behavior is described by a constant-coefficient diffusion equation under certain assumptions about the fast-scale advection velocity, and we determine a formula for the slow-scale diffusion coefficient in terms of the fast-scale parameters. We extend the homogenization result to predict invasion speeds for an advection-diffusion-reaction (ADR) model with directed dispersal. For periodic environments, the homogenization approximation of the solution of the AD model compares favorably with numerical simulations. Invasion speed approximations for the ADR model also compare favorably with numerical simulations when the spatial period is sufficiently small.
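For the purely diffusive 1D case with a rapidly varying periodic coefficient, classical homogenization gives a constant effective coefficient equal to the harmonic mean of D over one period; the sketch below verifies this numerically for an assumed sinusoidal diffusivity. (The AD model above additionally involves fast-scale advection, which this minimal example omits.)

```python
import numpy as np

def harmonic_mean_D(D_vals):
    """Homogenized coefficient for 1D diffusion: the harmonic mean of D."""
    return 1.0 / np.mean(1.0 / D_vals)

# One spatial period on a uniform grid (endpoint dropped to avoid double-counting).
x = np.linspace(0.0, 1.0, 10_001)[:-1]
D = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # assumed periodic diffusivity, D > 0

D_eff = harmonic_mean_D(D)
# For D = 1 + a sin(2 pi x), the harmonic mean is sqrt(1 - a^2) = sqrt(0.75).
```

The harmonic mean is strictly below the arithmetic mean for any non-constant D, which is why heterogeneity slows effective dispersal even when the average diffusivity is unchanged.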
The second law of thermodynamics and quantum heat engines: Is the law strictly enforced?
NASA Astrophysics Data System (ADS)
Keefe, Peter D.
2010-01-01
A quantum heat engine is a construct having a working medium which is cyclically processed through a pair of control variables of state involving a Bose-Einstein condensation (BEC) in which a heat input is converted into a work output. Of interest is a first species of quantum heat engine in which the working medium is macroscopic in the sense the size scale is sufficiently large that the BEC is not volumetrically coherent. In this first species of quantum heat engine, near Carnot efficiencies may be possible. Of particular interest is a second species of quantum heat engine in which the working medium is mesoscopic in the sense that the size scale is sufficiently small that the BEC is volumetrically coherent. In this second species of quantum heat engine, the resulting in-process non-equilibrium condition affects the finally arrived at control variables of state such that Carnot efficiencies and beyond may be possible. A Type I superconductor is used to model the first and second species of quantum heat engine.
NASA Astrophysics Data System (ADS)
Cassanelli, James P.; Head, James W.
2018-05-01
The Reull Vallis outflow channel is a segmented system of fluvial valleys which originates from the volcanic plains of the Hesperia Planum region of Mars. Explanation of the formation of the Reull Vallis outflow channel by canonical catastrophic groundwater release models faces difficulties: generating sufficient hydraulic head requires unreasonably high aquifer permeability, and recharge sources are limited. Recent work has proposed that large-scale lava-ice interactions could serve as an alternative mechanism for outflow channel formation on the basis of predictions of regional ice sheet formation in areas that also underwent extensive contemporaneous volcanic resurfacing. Here we assess in detail the potential formation of outflow channels by large-scale lava-ice interactions through an applied case study of the Reull Vallis outflow channel system, selected for its close association with the effusive volcanic plains of the Hesperia Planum region. We first review the geomorphology of the Reull Vallis system to outline criteria that must be met by the proposed formation mechanism. We then assess local and regional lava heating and loading conditions and generate model predictions for the formation of Reull Vallis to test against the outlined geomorphic criteria. We find that successive events of large-scale lava-ice interactions that melt ice deposits, which then undergo re-deposition due to climatic mechanisms, best explain the observed geomorphic criteria, offering improvements over previously proposed formation models, particularly in the ability to supply adequate volumes of water.
NASA Technical Reports Server (NTRS)
Smith, Charlee C., Jr.; Lovell, Powell M., Jr.
1954-01-01
An investigation is being conducted to determine the dynamic stability and control characteristics of a 0.13-scale flying model of the Convair XFY-1 vertically rising airplane. This paper presents the results of flight and force tests to determine the stability and control characteristics of the model in vertical descent and landings in still air. The tests indicated that landings, including vertical descent from altitudes representing up to 400 feet for the full-scale airplane and at rates of descent up to 15 or 20 feet per second (full scale), can be performed satisfactorily. Sustained vertical descent in still air probably will be more difficult to perform because of large random trim changes that become greater as the descent velocity is increased. A slight steady head wind or cross wind might be sufficient to eliminate the random trim changes.
Tarras-Wahlberg, N H
2002-06-01
This paper considers technical measures and policy initiatives needed to improve environmental management in the Portovelo-Zaruma mining district of southern Ecuador. In this area, gold is mined by a large number of small-scale and artisanal operators, and discharges of cyanide and metal-laden tailings have had a severe impact on the shared Ecuadorian-Peruvian Puyango river system. It is shown to be technically possible to confine mining waste and tailings at a reasonable cost. However, the complex topography of the mining district forces tailings management to be communal, where all operators are connected to one central tailings impoundment. This, in turn, implies two things: (i) that a large number of operators must agree to pool resources to bring such a facility into reality; and (ii) that miners must move away from rudimentary operations that survive on a day-to-day basis, towards bigger, mechanized and longer-term sustainable operations that are based on proven ore reserves. It is deemed unlikely that existing environmental regulations and the provision of technical solutions will be sufficient to resolve the environmental problems. Important impediments relate to the limited financial resources available to each individual miner and the problems of pooling these resources, and to the fact that the main impacts of pollution are suffered downstream of the mining district and, hence, do not affect the miners themselves. Three policy measures are therefore suggested. First, the enforcement of existing regulations must be improved, and this may be achieved by the strengthening of the central authority charged with supervision and control of mining activities. Second, local government involvement and local public participation in environmental management needs to be promoted. 
Third, a clear policy should be defined which promotes the reorganisation of small operations into larger units that are strong enough to sustain rational exploration and environmental obligations. The case study suggests that mining policy in lesser-developed countries should develop to enable small-scale and artisanal miners to form entities that are of a sufficiently large scale to allow adequate and cost-effective environmental protection.
NASA Technical Reports Server (NTRS)
Friedson, James; Ingersoll, Andrew P.
1987-01-01
A model is presented for the thermodynamics of the seasonal meridional energy balance and thermal structure of the Uranian atmosphere. The model considers radiation and small-scale convection, and dynamical heat fluxes due to large-scale baroclinic eddies. Phase oscillations with a period of 0.5 Uranian year are discerned in the total internal power and global enthalpy storage. The variations in the identity of the main transport agent with the magnitude of the internal heat source are discussed. It is shown that meridional heat transport in the atmosphere is sufficient to lower seasonal horizontal temperature contrasts below those predicted with radiative-convective models.
Cruz-Motta, Juan José; Miloslavich, Patricia; Palomo, Gabriela; Iken, Katrin; Konar, Brenda; Pohle, Gerhard; Trott, Tom; Benedetti-Cecchi, Lisandro; Herrera, César; Hernández, Alejandra; Sardi, Adriana; Bueno, Andrea; Castillo, Julio; Klein, Eduardo; Guerra-Castro, Edlin; Gobin, Judith; Gómez, Diana Isabel; Riosmena-Rodríguez, Rafael; Mead, Angela; Bigatti, Gregorio; Knowlton, Ann; Shirayama, Yoshihisa
2010-01-01
Assemblages associated with intertidal rocky shores were examined for large-scale distribution patterns with specific emphasis on identifying latitudinal trends of species richness and taxonomic distinctiveness. Seventy-two sites distributed around the globe were evaluated following the standardized sampling protocol of the Census of Marine Life NaGISA project (www.nagisa.coml.org). There were no clear patterns of standardized estimators of species richness along latitudinal gradients or among Large Marine Ecosystems (LMEs); however, a strong latitudinal gradient in taxonomic composition (i.e., proportion of different taxonomic groups in a given sample) was observed. Environmental variables related to natural influences were strongly related to the distribution patterns of the assemblages on the LME scale, particularly photoperiod, sea surface temperature (SST) and rainfall. In contrast, no environmental variables directly associated with human influences (with the exception of the inorganic pollution index) were related to assemblage patterns among LMEs. Correlations of the natural assemblages with either latitudinal gradients or environmental variables were equally strong, suggesting that neither neutral models nor models based solely on environmental variables sufficiently explain spatial variation of these assemblages at a global scale. Despite the data shortcomings in this study (e.g., unbalanced sample distribution), we show the importance of generating global biological databases for use in large-scale diversity comparisons of rocky intertidal assemblages and to stimulate continued sampling and analyses. PMID:21179546
Reionization Models Classifier using 21cm Map Deep Learning
NASA Astrophysics Data System (ADS)
Hassan, Sultan; Liu, Adrian; Kohn, Saul; Aguirre, James E.; La Plante, Paul; Lidz, Adam
2018-05-01
Next-generation 21cm observations will enable imaging of reionization on very large scales. These images will contain more astrophysical and cosmological information than the power spectrum, providing an alternative way to constrain the contribution of different reionizing source populations to cosmic reionization. Using Convolutional Neural Networks, we present a simple network architecture that is sufficient to discriminate between galaxy-dominated and AGN-dominated models, even in the presence of simulated noise from different experiments such as HERA and SKA.
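A toy sketch of the kind of small convolutional classifier the abstract describes, written in plain NumPy. The layer sizes, kernel counts, and the conv → ReLU → global-average-pool → logistic pipeline are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

def conv2d(img, kernels):
    """Valid 2D convolution of a single-channel image with a stack of kernels."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    out = np.zeros((kernels.shape[0], H - kh + 1, W - kw + 1))
    for k in range(kernels.shape[0]):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def relu(x):
    return np.maximum(x, 0)

def classify(img, kernels, w, b):
    """Conv -> ReLU -> global average pool -> logistic output.
    Returns a probability in (0, 1), e.g. P(AGN-dominated)."""
    feats = relu(conv2d(img, kernels)).mean(axis=(1, 2))
    logit = feats @ w + b
    return 1.0 / (1.0 + np.exp(-logit))
```

In practice one would train many such kernels with a deep-learning framework on simulated 21cm maps; this sketch only shows why a map (rather than its power spectrum) is the natural input to such a discriminator.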
History of the Army Ground Forces. Study Number 24. History of the Mountain Training Center
1948-01-01
yaws, plus the knowledge of several experienced mountaineers and skiers in the Office of the Quartermaster General. On 20 May 1943 the Mountain and... skier? Or will a minimum of knowledge and proficiency be sufficient? These are the questions that had to be answered before the large-scale training of...the prerogatives of Army command were subordinated to the superior knowledge and skills of the mountaineering experts who had come into the Army
NASA Technical Reports Server (NTRS)
Baker, C. R.
1975-01-01
Liquid hydrogen is being considered as a substitute for conventional hydrocarbon-based fuels for future generations of commercial jet aircraft. Its acceptance will depend, in part, upon the technology and cost of liquefaction. The process and economic requirements for providing a sufficient quantity of liquid hydrogen to service a major airport are described. The design is supported by thermodynamic studies which determine the effect of process arrangement and operating parameters on the process efficiency and work of liquefaction.
Propagation of barn owls in captivity
Maestrelli, J.R.
1973-01-01
Some aspects of the biology and life history of native birds often are more readily obtained in captivity than in the field. This is particularly true in evaluating the effects of pesticides or other pollutants on birds, because establishing cause-and-effect relationships requires experimental studies. Few wild species have been bred in captivity with sufficient success to permit the large-scale studies that are needed. This paper reports successful efforts to breed Barn Owls (Tyto alba pratincola) in captivity and presents biological data concerning reproduction.
Composite annotations: requirements for mapping multiscale data and models to biomedical ontologies
Cook, Daniel L.; Mejino, Jose L. V.; Neal, Maxwell L.; Gennari, John H.
2009-01-01
Current methods for annotating biomedical data resources rely on simple mappings between data elements and the contents of a variety of biomedical ontologies and controlled vocabularies. Here we point out that such simple mappings are inadequate for large-scale multiscale, multidomain integrative “virtual human” projects. For such integrative challenges, we describe a “composite annotation” schema that is simple yet sufficiently extensible for mapping the biomedical content of a variety of data sources and biosimulation models to available biomedical ontologies. PMID:19964601
2013-07-01
Strategic Dialogue UN United Nations UNHRD United Nations Humanitarian Response Depot (in Malaysia) UNISDR United Nations International Strategy for...large-scale radioactive contamination, we focus our analysis on the former type of disaster, as it offers a better lens through which to assess...to Security, Alan Dupont predicts that a failure to reverse the trends of the decline in energy, food and water sufficiency, and the increase in
Renewable Energy Zone (REZ) Transmission Planning Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Nathan
A REZ is a geographical area that enables the development of profitable, cost-effective, grid-connected renewable energy (RE). The REZ Transmission Planning Process is a proactive approach to planning, approving, and building transmission infrastructure that connects REZs to the power system, helping to increase the share of solar, wind, and other RE resources while maintaining reliability and economics. It focuses on large-scale wind and solar resources that can be developed in sufficient quantities to warrant transmission system expansion and upgrades.
Bertoni, Bridget; Ipek, Seyda; McKeen, David; ...
2015-04-30
Here, cold dark matter explains a wide range of data on cosmological scales. However, there has been a steady accumulation of evidence for discrepancies between simulations and observations at scales smaller than galaxy clusters. One promising way to affect structure formation on small scales is a relatively strong coupling of dark matter to neutrinos. We construct an experimentally viable, simple, renormalizable model with new interactions between neutrinos and dark matter and provide the first discussion of how these new dark matter-neutrino interactions affect neutrino phenomenology. We show that addressing the small scale structure problems requires asymmetric dark matter with a mass that is tens of MeV. Generating a sufficiently large dark matter-neutrino coupling requires a new heavy neutrino with a mass around 100 MeV. The heavy neutrino is mostly sterile but has a substantial τ neutrino component, while the three nearly massless neutrinos are partly sterile. This model can be tested by future astrophysical, particle physics, and neutrino oscillation data. Promising signatures of this model include alterations to the neutrino energy spectrum and flavor content observed from a future nearby supernova, anomalous matter effects in neutrino oscillations, and a component of the τ neutrino with mass around 100 MeV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. G. Little
1999-03-01
The Idaho National Engineering and Environmental Laboratory (INEEL), through the US Department of Energy (DOE), has proposed that a large-scale wind test facility (LSWTF) be constructed to study, at full scale, the behavior of low-rise structures under simulated extreme wind conditions. To determine the need for, and potential benefits of, such a facility, the Idaho Operations Office of the DOE requested that the National Research Council (NRC) perform an independent assessment of the role and potential value of an LSWTF in the overall context of wind engineering research. The NRC established the Committee to Review the Need for a Large-scale Test Facility for Research on the Effects of Extreme Winds on Structures, under the auspices of the Board on Infrastructure and the Constructed Environment, to perform this assessment. This report conveys the results of the committee's deliberations as well as its findings and recommendations. Data developed at large scale would enhance the understanding of how structures, particularly light-frame structures, are affected by extreme winds (e.g., hurricanes, tornadoes, severe thunderstorms, and other events). With a large-scale wind test facility, full-sized structures, such as site-built or manufactured housing and small commercial or industrial buildings, could be tested under a range of wind conditions in a controlled, repeatable environment. At this time, the US has no facility specifically constructed for this purpose. During the course of this study, the committee was confronted by three difficult questions: (1) does the lack of a facility equate to a need for the facility? (2) is need alone sufficient justification for the construction of a facility? and (3) would the benefits derived from information produced in an LSWTF justify the costs of producing that information? The committee's evaluation of the need and justification for an LSWTF was shaped by these realities.
Learning, climate and the evolution of cultural capacity.
Whitehead, Hal
2007-03-21
Patterns of environmental variation influence the utility, and thus evolution, of different learning strategies. I use stochastic, individual-based evolutionary models to assess the relative advantages of 15 different learning strategies (genetic determination, individual learning, vertical social learning, horizontal/oblique social learning, and contingent combinations of these) when competing in variable environments described by 1/f noise. When environmental variation has little effect on fitness, then genetic determinism persists. When environmental variation is large and equal over all time-scales ("white noise") then individual learning is adaptive. Social learning is advantageous in "red noise" environments when variation over long time-scales is large. Climatic variability increases with time-scale, so that short-lived organisms should be able to rely largely on genetic determination. Thermal climates usually are insufficiently red for social learning to be advantageous for species whose fitness is strongly determined by temperature. In contrast, population trajectories of many species, especially large mammals and aquatic carnivores, are sufficiently red to promote social learning in their predators. The ocean environment is generally redder than that on land. Thus, while individual learning should be adaptive for many longer-lived organisms, social learning will often be found in those dependent on the populations of other species, especially if they are marine. This provides a potential explanation for the evolution of a prevalence of social learning, and culture, in humans and cetaceans.
Reisner, A E
2005-11-01
The building and expansion of large-scale swine facilities has created considerable controversy in many neighboring communities, but to date, no systematic analysis has been done of the types of claims made during these conflicts. This study examined how local newspapers in one state covered the transition from the dominance of smaller, diversified swine operations to large, single-purpose pig production facilities. To look at publicly made statements concerning large-scale swine facilities (LSSF), the study collected all articles related to LSSF from 22 daily Illinois newspapers over a 3-yr period (a total of 1,737 articles). The most frequent sets of claims used by proponents of LSSF were that the environment was not harmed, that state regulations were sufficiently strict, and that the state economically needed this type of agriculture. The most frequent claims made by opponents were that LSSF harmed the environment and neighboring communities and that stricter regulations were needed. Proponents' claims were primarily defensive and, to some degree, underplayed the advantages of LSSF. Pro- and anti-LSSF groups were talking at cross-purposes, to some degree. Even across similar themes, those in favor of LSSF and those opposed were addressing different sets of concerns. The newspaper claims did not indicate any effective alliances forming between local anti-LSSF groups and national environmental or animal rights groups.
Satellite orbit and data sampling requirements
NASA Technical Reports Server (NTRS)
Rossow, William
1993-01-01
Climate forcings and feedbacks vary over a wide range of time and space scales. The operation of non-linear feedbacks can couple variations at widely separated time and space scales and cause climatological phenomena to be intermittent. Consequently, monitoring of global, decadal changes in climate requires global observations that cover the whole range of space-time scales and are continuous over several decades. The sampling of smaller space-time scales must have sufficient statistical accuracy to measure the small changes in the forcings and feedbacks anticipated in the next few decades, while continuity of measurements is crucial for unambiguous interpretation of climate change. Shorter records of monthly and regional (500-1000 km) measurements with similar accuracies can also provide valuable information about climate processes, when 'natural experiments' such as large volcanic eruptions or El Ninos occur. In this section existing satellite datasets and climate model simulations are used to test the satellite orbits and sampling required to achieve accurate measurements of changes in forcings and feedbacks at monthly frequency and 1000 km (regional) scale.
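As an illustration of the statistical-accuracy requirement (this formula is an assumption, not taken from the text): if N effectively independent samples with natural variability σ are averaged, the standard error of the mean is σ/√N, so resolving a climate change of size δ at roughly the 2σ level requires

```latex
\mathrm{SE} \;=\; \frac{\sigma}{\sqrt{N}},
\qquad
N \;\gtrsim\; \left( \frac{2\sigma}{\delta} \right)^{2}
```

which is why small anticipated decadal changes drive the demand for dense, continuous space-time sampling.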
Inflationary magnetogenesis without the strong coupling problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira, Ricardo J.Z.; Jain, Rajeev Kumar; Sloth, Martin S., E-mail: ferreira@cp3.dias.sdu.dk, E-mail: jain@cp3.dias.sdu.dk, E-mail: sloth@cp3.dias.sdu.dk
2013-10-01
The simplest gauge invariant models of inflationary magnetogenesis are known to suffer from the problems of either large backreaction or strong coupling, which make it difficult to self-consistently achieve cosmic magnetic fields from inflation with a field strength larger than 10^−32 G today on the Mpc scale. Such a strength is insufficient to act as seed for the galactic dynamo effect, which requires a magnetic field larger than 10^−20 G. In this paper we analyze simple extensions of the minimal model, which avoid both the strong coupling and backreaction problems, in order to generate sufficiently large magnetic fields on the Mpc scale today. First we study the possibility that the coupling function which breaks the conformal invariance of electromagnetism is non-monotonic with sharp features. Subsequently, we consider the effect of lowering the energy scale of inflation jointly with a scenario of prolonged reheating where the universe is dominated by a stiff fluid for a short period after inflation. In the latter case, a systematic study shows upper bounds for the magnetic field strength today on the Mpc scale of 10^−13 G for low scale inflation and 10^−25 G for high scale inflation, thus improving on the previous result by 7-19 orders of magnitude. These results are consistent with the strong coupling and backreaction constraints.
Land grabbing: a preliminary quantification of economic impacts on rural livelihoods.
Davis, Kyle F; D'Odorico, Paolo; Rulli, Maria Cristina
2014-01-01
Global demands on agricultural land are increasing due to population growth, dietary changes and the use of biofuels. Their effect on food security is to reduce humans' ability to cope with the uncertainties of global climate change. In light of the 2008 food crisis, to secure reliable future access to sufficient agricultural land, many nations and corporations have begun purchasing large tracts of land in the global South, a phenomenon deemed "land grabbing" by popular media. Because land investors frequently export crops without providing adequate employment, this represents an effective income loss for local communities. We study 28 countries targeted by large-scale land acquisitions [comprising 87% of reported cases and 27 million hectares (ha)] and estimate the effects of such investments on local communities' incomes. We find that this phenomenon can potentially affect the incomes of ~12 million people globally with implications for food security, poverty levels and urbanization. While it is important to note that our study incorporates a number of assumptions and limitations, it provides a much needed initial quantification of the economic impacts of large-scale land acquisitions on rural livelihoods.
NASA Astrophysics Data System (ADS)
Okamoto, Ryuichi; Komura, Shigeyuki; Fournier, Jean-Baptiste
2017-07-01
We theoretically investigate the dynamics of a floating lipid bilayer membrane coupled with a two-dimensional cytoskeleton network, taking into account explicitly the intermonolayer friction, the discrete lattice structure of the cytoskeleton, and its prestress. The lattice structure breaks lateral continuous translational symmetry and couples Fourier modes with different wave vectors. It is shown that within a short time interval a long-wavelength deformation excites a collection of modes with wavelengths shorter than the lattice spacing. These modes relax slowly with a common renormalized rate originating from the long-wavelength mode. As a result, and because of the prestress, the slowest relaxation is governed by the intermonolayer friction. Conversely, and most interestingly, forces applied at the scale of the cytoskeleton for a sufficiently long time can cooperatively excite large-scale modes.
Star Formation: Answering Fundamental Questions During the Spitzer Warm Mission Phase
NASA Astrophysics Data System (ADS)
Strom, Steve; Allen, Lori; Carpenter, John; Hartmann, Lee; Megeath, S. Thomas; Rebull, Luisa; Stauffer, John R.; Liu, Michael
2007-10-01
Through existing studies of star-forming regions, Spitzer has created rich databases which have already profoundly influenced our ability to understand the star and planet formation process on micro and macro scales. However, it is essential to note that Spitzer observations to date have focused largely on deep observations of regions of recent star formation associated directly with well-known molecular clouds located within 500 pc. What has not been done is to explore to sufficient depth or breadth a representative sample of the much larger regions surrounding the more massive of these molecular clouds. Also, while there have been targeted studies of specific distant star forming regions, in general, there has been little attention devoted to mapping and characterizing the stellar populations and star-forming histories of the surrounding giant molecular clouds (GMCs). As a result, we have yet to develop an understanding of the major physical processes that control star formation on the scale of spiral arms. Doing so will allow much better comparison of star formation in our galaxy to the star-forming complexes that dominate the spiral arms of external galaxies. The power of Spitzer in the Warm Mission for studies of star formation is its ability to carry out large-scale surveys unbiased by prior knowledge of ongoing star formation or the presence of molecular clouds. The Spitzer Warm Mission will provide two uniquely powerful capabilities that promise equally profound advances: high sensitivity and efficient coverage of many hundreds of square degrees, and angular resolution sufficient to resolve dense groups and clusters of YSOs and to identify contaminating background galaxies whose colors mimic those of young stars. In this contribution, we describe two major programs: a survey of the outer regions of selected nearby OB associations, and a study of distant GMCs and star formation on the scale of a spiral arm.
Experience of public procurement of Open Compute servers
NASA Astrophysics Data System (ADS)
Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony
2015-12-01
The Open Compute Project (OCP, http://www.opencompute.org/) was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal to develop servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large scale installation. One objective is to evaluate if the OCP market is sufficiently mature and broad enough to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).
Effective algorithm for routing integral structures with twolayer switching
NASA Astrophysics Data System (ADS)
Nazarov, A. V.; Shakhnov, V. A.; Vlasov, A. I.; Novikov, A. N.
2018-05-01
The paper presents an algorithm for routing switching objects such as large-scale integrated circuits (LSICs) with two layers of metallization, embossed printed circuit boards, microboards with pairs of wiring layers on each side, and other similar constructs. The algorithm eliminates the mutual blocking of routes seen in the classical wave algorithm by implementing a special scheme of digital wave motion in two layers of metallization, allowing direct intersections of all circuit conductors in a combined layer. Information about which circuits the topology elements belong to is then sufficient for layer assignment and for minimizing the number of contact holes. In addition, the paper presents a specific example which shows that, in contrast to the known routing algorithms using a wave model, just one byte of memory per cell of the discrete work field is sufficient to implement the proposed algorithm.
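For reference, the classical single-layer wave (Lee) expansion that the paper's two-layer scheme builds on can be sketched as follows; the grid encoding and function name are illustrative assumptions, not the paper's implementation:

```python
from collections import deque

def lee_route(grid, src, dst):
    """Wave-expansion (Lee) routing on a grid: 0 = free cell, 1 = blocked.
    Returns a list of cells from src to dst, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[src[0]][src[1]] = 0
    q = deque([src])
    while q:                                  # forward wave (breadth-first)
        r, c = q.popleft()
        if (r, c) == dst:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    if dist[dst[0]][dst[1]] is None:
        return None
    path = [dst]                              # backtrace along decreasing distance
    while path[-1] != src:
        r, c = path[-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and dist[nr][nc] == dist[r][c] - 1):
                path.append((nr, nc))
                break
    return path[::-1]
```

In this classical form, previously routed nets become blocked cells and can wall off later nets; the paper's contribution is a two-layer wave scheme that avoids exactly that mutual blocking.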
Coronal mass ejections and their sheath regions in interplanetary space
NASA Astrophysics Data System (ADS)
Kilpua, Emilia; Koskinen, Hannu E. J.; Pulkkinen, Tuija I.
2017-11-01
Interplanetary coronal mass ejections (ICMEs) are large-scale heliospheric transients that originate from the Sun. When an ICME is sufficiently faster than the preceding solar wind, a shock wave develops ahead of the ICME. The turbulent region between the shock and the ICME is called the sheath region. ICMEs and their sheaths and shocks are all interesting structures from the fundamental plasma physics viewpoint. They are also key drivers of space weather disturbances in the heliosphere and planetary environments. ICME-driven shock waves can accelerate charged particles to high energies. Sheaths and ICMEs drive practically all intense geospace storms at the Earth, and they can also affect dramatically the planetary radiation environments and atmospheres. This review focuses on the current understanding of observational signatures and properties of ICMEs and the associated sheath regions based on five decades of studies. In addition, we discuss modelling of ICMEs and many fundamental outstanding questions on their origin, evolution and effects, largely due to the limitations of single spacecraft observations of these macro-scale structures. We also present current understanding of space weather consequences of these large-scale solar wind structures, including effects at the other Solar System planets and exoplanets. We specially emphasize the different origin, properties and consequences of the sheaths and ICMEs.
Van Landeghem, Sofie; De Bodt, Stefanie; Drebert, Zuzanna J; Inzé, Dirk; Van de Peer, Yves
2013-03-01
Despite the availability of various data repositories for plant research, a wealth of information currently remains hidden within the biomolecular literature. Text mining provides the necessary means to retrieve these data through automated processing of texts. However, only recently has advanced text mining methodology been implemented with sufficient computational power to process texts at a large scale. In this study, we assess the potential of large-scale text mining for plant biology research in general and for network biology in particular using a state-of-the-art text mining system applied to all PubMed abstracts and PubMed Central full texts. We present extensive evaluation of the textual data for Arabidopsis thaliana, assessing the overall accuracy of this new resource for usage in plant network analyses. Furthermore, we combine text mining information with both protein-protein and regulatory interactions from experimental databases. Clusters of tightly connected genes are delineated from the resulting network, illustrating how such an integrative approach is essential to grasp the current knowledge available for Arabidopsis and to uncover gene information through guilt by association. All large-scale data sets, as well as the manually curated textual data, are made publicly available, hereby stimulating the application of text mining data in future plant biology studies.
A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data
Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.
2017-01-01
The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the VISUS framework used in an interactive setting with the microscopy data. PMID:28638896
Fan, Jianping; Gao, Yuli; Luo, Hangzai
2008-03-01
In this paper, we have developed a new scheme for achieving multilevel annotations of large-scale images automatically. To achieve a more complete representation of the various visual properties of the images, both the global visual features and the local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intraconcept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users in selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypothesis assessment. Our experiments on large-scale image collections have also obtained very positive results.
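The kernel-combination idea, characterizing diverse visual similarity by mixing several base kernels, can be sketched as a convex combination of RBF kernels at different bandwidths. The weights below are fixed by hand; in a multiple kernel learning setting they would be learned jointly with the SVM.

```python
import numpy as np

def combined_kernel(X, Y, betas, gammas):
    # convex combination of RBF kernels with bandwidths gammas and weights betas;
    # any convex combination of valid kernels is itself a valid kernel
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return sum(b * np.exp(-g * d2) for b, g in zip(betas, gammas))
```

Because each term is positive semi-definite, the mixture can be dropped directly into any kernel SVM solver.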
Transposon facilitated DNA sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berg, D.E.; Berg, C.M.; Huang, H.V.
1990-01-01
The purpose of this research is to investigate and develop methods that exploit the power of bacterial transposable elements for large scale DNA sequencing. Our premise is that the use of transposons to put primer binding sites randomly in target DNAs should provide access to all portions of large DNA fragments, without the inefficiencies of methods involving random subcloning and attendant repetitive sequencing, or of sequential synthesis of many oligonucleotide primers that are used to match systematically along a DNA molecule. Two unrelated bacterial transposons, Tn5 and γδ, are being used because they have both proven useful for molecular analyses, and because they differ sufficiently in mechanism and specificity of transposition to merit parallel development.
Scaling analyses of the spectral dimension in 3-dimensional causal dynamical triangulations
NASA Astrophysics Data System (ADS)
Cooperman, Joshua H.
2018-05-01
The spectral dimension measures the dimensionality of a space as witnessed by a diffusing random walker. Within the causal dynamical triangulations approach to the quantization of gravity (Ambjørn et al 2000 Phys. Rev. Lett. 85 347, 2001 Nucl. Phys. B 610 347, 1998 Nucl. Phys. B 536 407), the spectral dimension exhibits novel scale-dependent dynamics: reducing towards a value near 2 on sufficiently small scales, matching closely the topological dimension on intermediate scales, and decaying in the presence of positive curvature on sufficiently large scales (Ambjørn et al 2005 Phys. Rev. Lett. 95 171301, Ambjørn et al 2005 Phys. Rev. D 72 064014, Benedetti and Henson 2009 Phys. Rev. D 80 124036, Cooperman 2014 Phys. Rev. D 90 124053, Cooperman et al 2017 Class. Quantum Grav. 34 115008, Coumbe and Jurkiewicz 2015 J. High Energy Phys. JHEP03(2015)151, Kommu 2012 Class. Quantum Grav. 29 105003). I report the first comprehensive scaling analysis of the small-to-intermediate scale spectral dimension for the test case of the causal dynamical triangulations of 3-dimensional Einstein gravity. I find that the spectral dimension scales trivially with the diffusion constant. I find that the spectral dimension is completely finite in the infinite volume limit, and I argue that its maximal value is exactly consistent with the topological dimension of 3 in this limit. I find that the spectral dimension reduces further towards a value near 2 as this case’s bare coupling approaches its phase transition, and I present evidence against the conjecture that the bare coupling simply sets the overall scale of the quantum geometry (Ambjørn et al 2001 Phys. Rev. D 64 044011). On the basis of these findings, I advance a tentative physical explanation for the dynamical reduction of the spectral dimension observed within causal dynamical triangulations: branched polymeric quantum geometry on sufficiently small scales. 
My analyses should facilitate attempts to employ the spectral dimension as a physical observable with which to delineate renormalization group trajectories in the hope of taking a continuum limit of causal dynamical triangulations at a nontrivial ultraviolet fixed point (Ambjørn et al 2016 Phys. Rev. D 93 104032, 2014 Class. Quantum Grav. 31 165003, Cooperman 2016 Gen. Relativ. Gravit. 48 1, Cooperman 2016 arXiv:1604.01798, Coumbe and Jurkiewicz 2015 J. High Energy Phys. JHEP03(2015)151).
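The diffusion probe behind the spectral dimension is easy to reproduce on a fixed flat geometry: evolve the heat kernel of a lazy nearest-neighbour walk on a periodic 3-lattice and estimate d_s(σ) = -2 d ln P(σ)/d ln σ from the return probability P(σ), which should sit near the topological dimension of 3 at intermediate diffusion times. This is a flat-space toy, not an average over causal dynamical triangulations.

```python
import numpy as np

def return_probability(L=33, steps=120):
    # heat kernel of a lazy nearest-neighbour walk on a periodic 3d lattice
    p = np.zeros((L, L, L))
    p[0, 0, 0] = 1.0
    ret = []
    for _ in range(steps):
        hop = sum(np.roll(p, d, axis=a) for a in range(3) for d in (-1, 1)) / 6.0
        p = 0.5 * p + 0.5 * hop       # laziness removes even/odd parity effects
        ret.append(p[0, 0, 0])
    return np.array(ret)

def spectral_dimension(ret, s1, s2):
    # d_s(sigma) = -2 dlnP/dlnsigma, estimated by a finite difference
    return -2.0 * (np.log(ret[s2 - 1]) - np.log(ret[s1 - 1])) / (np.log(s2) - np.log(s1))
```

On a CDT ensemble the same estimator, averaged over triangulations, yields the scale-dependent spectral dimension discussed above.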
Multi-scale structures of turbulent magnetic reconnection
NASA Astrophysics Data System (ADS)
Nakamura, T. K. M.; Nakamura, R.; Narita, Y.; Baumjohann, W.; Daughton, W.
2016-05-01
We have analyzed data from a series of 3D fully kinetic simulations of turbulent magnetic reconnection with a guide field. A new concept of the guide field reconnection process has recently been proposed, in which the secondary tearing instability and the resulting formation of oblique, small scale flux ropes largely disturb the structure of the primary reconnection layer and lead to 3D turbulent features [W. Daughton et al., Nat. Phys. 7, 539 (2011)]. In this paper, we further investigate the multi-scale physics in this turbulent, guide field reconnection process by introducing a wave number band-pass filter (k-BPF) technique in which modes for the small scale (less than ion scale) fluctuations and the background large scale (more than ion scale) variations are separately reconstructed from the wave number domain to the spatial domain in the inverse Fourier transform process. Combined with the Fourier-based analyses in the wave number domain, we successfully identify spatial and temporal development of the multi-scale structures in the turbulent reconnection process. When considering a strong guide field, the small scale tearing mode and the resulting flux ropes develop over a specific range of oblique angles mainly along the edge of the primary ion scale flux ropes and reconnection separatrix. The rapid merging of these small scale modes leads to a smooth energy spectrum connecting ion and electron scales. When the guide field is sufficiently weak, the background current sheet is strongly kinked and oblique angles for the small scale modes are widely scattered at the kinked regions. Similar approaches handling both the wave number and spatial domains will be applicable to the data from multipoint, high-resolution spacecraft observations such as the NASA magnetospheric multiscale (MMS) mission.
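Separating sub-ion-scale fluctuations from the large-scale background in the k-BPF technique amounts to masking Fourier modes by wavenumber magnitude and transforming back to the spatial domain. A minimal 2D sketch follows (the published technique is applied to 3D kinetic simulation fields; the cutoff here is an arbitrary illustrative value):

```python
import numpy as np

def k_band_pass(field, k_cut, keep="small"):
    """Keep only modes with |k| above (small scales) or below (large scales)
    k_cut, with k in cycles per grid spacing, then transform back to real space."""
    fk = np.fft.fft2(field)
    kx = np.fft.fftfreq(field.shape[0])[:, None]
    ky = np.fft.fftfreq(field.shape[1])[None, :]
    k = np.hypot(kx, ky)
    mask = (k > k_cut) if keep == "small" else (k <= k_cut)
    return np.fft.ifft2(fk * mask).real
```

Summing the two filtered fields recovers the original, so the decomposition is exact.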
Cumulative Damage in Strength-Dominated Collisions of Rocky Asteroids: Rubble Piles and Brick Piles
NASA Technical Reports Server (NTRS)
Housen, Kevin
2009-01-01
Laboratory impact experiments were performed to investigate the conditions that produce large-scale damage in rock targets. Aluminum cylinders (6.3 mm diameter) impacted basalt cylinders (69 mm diameter) at speeds ranging from 0.7 to 2.0 km/s. Diagnostics included measurements of the largest fragment mass, velocities of the largest remnant and large fragments ejected from the periphery of the target, and X-ray computed tomography imaging to inspect some of the impacted targets for internal damage. Significant damage to the target occurred when the kinetic energy per unit target mass exceeded roughly 1/4 of the energy required for catastrophic shattering (where the target is reduced to one-half its original mass). Scaling laws based on a rate-dependent strength were developed that provide a basis for extrapolating the results to larger strength-dominated collisions. The threshold specific energy for widespread damage was found to scale with event size in the same manner as that for catastrophic shattering. Therefore, the factor of four difference between the two thresholds observed in the lab also applies to larger collisions. The scaling laws showed that for a sequence of collisions that are similar in that they produce the same ratio of largest fragment mass to original target mass, the fragment velocities decrease with increasing event size. As a result, rocky asteroids a couple hundred meters in diameter should retain their large ejecta fragments in a jumbled rubble-pile state. For somewhat larger bodies, the ejection velocities are sufficiently low that large fragments are essentially retained in place, possibly forming ordered "brick-pile" structures.
Climate Drivers of Alaska Summer Stream Temperature
NASA Astrophysics Data System (ADS)
Bieniek, P.; Bhatt, U. S.; Plumb, E. W.; Thoman, R.; Trammell, E. J.
2016-12-01
The temperature of the water in lakes, rivers and streams has wide ranging impacts from local water quality and fish habitats to global climate change. Salmon fisheries in Alaska, a critical source of food in many subsistence communities, are sensitive to large-scale climate variability and river and stream temperatures have also been linked with salmon production in Alaska. Given current and projected climate change, understanding the mechanisms that link the large-scale climate and river and stream temperatures is essential to better understand the changes that may occur with aquatic life in Alaska's waterways on which subsistence users depend. An analysis of Alaska stream temperatures in the context of reanalysis, downscaled, station and other climate data is undertaken in this study to fill that need. Preliminary analysis identified eight stream observation sites with sufficiently long (>15 years) data available for climate-scale analysis in Alaska with one station, Terror Creek in Kodiak, having a 30-year record. Cross-correlation of summer (June-August) water temperatures between the stations are generally high even though they are spread over a large geographic region. Correlation analysis of the Terror Creek summer observations with seasonal sea surface temperatures (SSTs) in the North Pacific broadly resembles the SST anomaly fields typically associated with the Pacific Decadal Oscillation (PDO). A similar result was found for the remaining stations and in both cases PDO-like correlation patterns also occurred in the preceding spring. These preliminary results demonstrate that there is potential to diagnose the mechanisms that link the large-scale climate system and Alaska stream temperatures.
NASA Technical Reports Server (NTRS)
Majda, G.
1985-01-01
A large set of variable-coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale, but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable-coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
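Stability with large step sizes can be demonstrated with backward Euler, the simplest method that is simultaneously a one-leg and a linear multistep scheme: on a linear system with a slow and a fast scale it remains uniformly bounded even when the step h resolves only the slow scale. The constant-coefficient system below is an illustrative stand-in for the variable-coefficient systems treated in the paper.

```python
import numpy as np

def backward_euler(A, y0, h, n):
    # y_{k+1} = y_k + h A y_{k+1}  =>  (I - h A) y_{k+1} = y_k
    I = np.eye(len(y0))
    M = np.linalg.inv(I - h * A)
    ys = [np.array(y0, dtype=float)]
    for _ in range(n):
        ys.append(M @ ys[-1])
    return np.array(ys)

eps = 1e-6
A = np.array([[-1.0, 0.0],
              [0.0, -1.0 / eps]])   # slow (O(1)) and fast (O(1/eps)) time scales
```

With h = 0.1 the fast scale is wildly under-resolved (h/eps = 10^5), yet the iterates decay monotonically; an explicit method at the same step size would blow up.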
The evolution of a binary in a retrograde circular orbit embedded in an accretion disk
NASA Astrophysics Data System (ADS)
Ivanov, P. B.; Papaloizou, J. C. B.; Paardekooper, S.-J.; Polnarev, A. G.
2015-04-01
Aims: Supermassive black hole binaries may form as a consequence of galaxy mergers. Both prograde and retrograde orbits have been proposed. We study a binary with a small mass ratio, q, in a retrograde orbit immersed in and interacting with a gaseous accretion disk in order to estimate the time scales for inward migration that leads to coalescence and the accretion rate to the secondary component. Methods: We employed both semi-analytic methods and two-dimensional numerical simulations, focusing on the case where the binary mass ratio is small but large enough to significantly perturb the disk. Results: We develop the theory of type I migration in this case and go on to determine the conditions for gap formation. We find that when this happens inward migration occurs on a time scale equal to the time required for one half of the secondary mass to be accreted through the unperturbed accretion disk. The accretion rate onto the secondary itself is found to play only a minor role in the orbital evolution, as it is of order q^(1/3) of that onto the primary. We obtain good general agreement between the semi-analytic and fully numerical approaches and note that the former can be applied to disks with a wide dynamic range on long time scales. Conclusions: We conclude that migration induced by interaction with the disk can drive the binary inwards, alleviating the so-called final parsec problem. When q is sufficiently small, there is no well-pronounced cavity inside the binary orbit, unlike in the prograde case. The accretion rate to the secondary does not influence the binary orbital evolution much, but can lead to some interesting observational consequences, provided the accretion efficiency is sufficiently large. In this case the binary may be detected as, for example, two sources of radiation rotating around each other.
However, the study should be extended to consider orbits with significant eccentricity and the effects of gravitational radiation at small length scales. Also, torques acting between a circumbinary accretion disk, which has a non-zero inclination with respect to a retrograde binary orbit at large distances, may cause the inclination to increase on a time scale that can be similar to, or smaller than, the time scale of orbital evolution, depending on the disk parameters and binary mass ratio. This is also an aspect for future study. The movies are available in electronic form at http://www.aanda.org
Liquefaction of calcium-containing subbituminous coals and coals of lower rank
Brunson, Roy J.
1979-01-01
An improved process for the treatment of a calcium-containing subbituminous coal and coals of lower rank to form insoluble, thermally stable calcium salts which remain within the solids portions of the residue on liquefaction of the coal, thereby suppressing the formation of scale, made up largely of calcium carbonate which normally forms within the coal liquefaction reactor (i.e., coal liquefaction zone), e.g., on reactor surfaces, lines, auxiliary equipment and the like. An oxide of sulfur, in liquid phase, is contacted with a coal feed sufficient to impregnate the pores of the coal. The impregnated coal, in particulate form, can thereafter be liquefied in a coal liquefaction reactor (reaction zone) at coal liquefaction conditions without significant formation of scale.
Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr; ...
2017-06-07
We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.
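The trade-off between ergodic time averaging and true ensemble averaging can be sketched with a toy stochastic process standing in for a turbulence statistic (an Ornstein-Uhlenbeck process; nothing here touches Nek5000 itself). The many short runs are statistically independent, so on a parallel machine they would execute concurrently, which is the source of the time-to-solution gain beyond the strong scaling limit.

```python
import numpy as np

rng = np.random.default_rng(7)

def trajectory(n_steps, dt=0.01):
    # toy "flow statistic": an Ornstein-Uhlenbeck process with stationary mean 0
    xs = np.empty(n_steps)
    x = 1e-3 * rng.standard_normal()    # slightly perturbed initial condition
    for i in range(n_steps):
        x += -x * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
        xs[i] = x
    return xs

# ergodic estimate: one long run, averaged over time after a spin-up (serial)
time_avg = trajectory(60_000)[5_000:].mean()

# ensemble estimate: many short de-correlated runs sampled at their final time;
# the runs are independent and could execute in parallel
ens_avg = np.mean([trajectory(1_500)[-1] for _ in range(400)])
```

Both estimators converge to the same stationary mean; only the wall-clock path to a given accuracy differs.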
Material Characterization for the Analysis of Skin/Stiffener Separation
NASA Technical Reports Server (NTRS)
Davila, Carlos G.; Leone, Frank A.; Song, Kyongchan; Ratcliffe, James G.; Rose, Cheryl A.
2017-01-01
Test results show that separation failure in co-cured skin/stiffener interfaces is characterized by dense networks of interacting cracks and crack path migrations that are not present in standard characterization tests for delamination. These crack networks result in measurable large-scale and sub-ply-scale R curve toughening mechanisms, such as fiber bridging, crack migration, and crack delving. Consequently, a number of unknown issues exist regarding the level of analysis detail that is required for sufficient predictive fidelity. The objective of the present paper is to examine some of the difficulties associated with modeling separation failure in stiffened composite structures. A procedure to characterize the interfacial material properties is proposed and the use of simplified models based on empirical interface properties is evaluated.
Simulation of FRET dyes allows quantitative comparison against experimental data
NASA Astrophysics Data System (ADS)
Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander
2018-03-01
Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other leading to new insights into biomolecular dynamics and function.
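The observable compared against experiment here is the Förster transfer efficiency, which depends on the donor-acceptor distance r through the Förster radius R0 of the dye pair:

```python
def fret_efficiency(r, r0):
    """E = 1 / (1 + (r / r0)**6) for donor-acceptor distance r and Forster radius r0."""
    return 1.0 / (1.0 + (r / r0) ** 6)
```

In a simulation, E is obtained by averaging this expression over the sampled inter-dye distance distribution rather than evaluating it at the mean distance, because E is strongly nonlinear in r.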
Solution-Processed Metal Coating to Nonwoven Fabrics for Wearable Rechargeable Batteries.
Lee, Kyulin; Choi, Jin Hyeok; Lee, Hye Moon; Kim, Ki Jae; Choi, Jang Wook
2017-12-27
Wearable rechargeable batteries require electrode platforms that can withstand various physical motions, such as bending, folding, and twisting. To this end, conductive textiles and paper have been highlighted, as their porous structures can accommodate the stress built during various physical motions. However, fabrics with plain weaves or knit structures have been mostly adopted without exploration of nonwoven counterparts. Also, the integration of conductive materials, such as carbon or metal nanomaterials, to achieve sufficient conductivity as current collectors is not well-aligned with large-scale processing in terms of cost and quality control. Here, the superiority of nonwoven fabrics is reported in electrochemical performance and bending capability compared to currently dominant woven counterparts, due to smooth morphology near the fiber intersections and the homogeneous distribution of fibers. Moreover, solution-processed electroless deposition of aluminum and nickel-copper composite is adopted for cathodes and anodes, respectively, demonstrating the large-scale feasibility of conductive nonwoven platforms for wearable rechargeable batteries. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Dissociative recombination of the ground state of N2(+)
NASA Technical Reports Server (NTRS)
Guberman, Steven L.
1991-01-01
Large-scale calculations of the dissociative recombination cross sections and rates for the v = 0 level of the N2(+) ground state are reported, and the important role played by vibrationally excited Rydberg states lying both below and above the v = 0 level of the ion is demonstrated. The large-scale electronic wave function calculations were done using triple zeta plus polarization nuclear-centered-valence Gaussian basis sets. The electronic widths were obtained using smaller wave functions, and the cross sections were calculated on the basis of the multichannel quantum defect theory. The DR rate is calculated to be 1.6 x 10^-7 (Te/300)^-0.37 cm^3/s for Te in the range of 100 to 1000 K, and is found to be in excellent agreement with prior microwave afterglow experiments but in disagreement with recent merged beam results. It is inferred that the dominant mechanism for DR imparts sufficient energy to the product atoms to allow for escape from the Martian atmosphere.
Large-Scale Transient Transfection of Suspension Mammalian Cells for VLP Production.
Cervera, Laura; Kamen, Amine A
2018-01-01
Large-scale transient transfection of mammalian cell suspension cultures enables the production of biological products in sufficient quantity and under stringent quality attributes to perform accelerated in vitro evaluations, and has the potential to support preclinical or even clinical studies. Here we describe the methodology to produce VLPs in a 3 L bioreactor, using suspension HEK 293 cells and PEIPro as a transfection reagent. Cells are grown in the bioreactor to 1 × 10^6 cells/mL and transfected with a plasmid DNA-PEI complex at a ratio of 1:2. Dissolved oxygen and pH are controlled and monitored online during the production phase, and cell growth and viability can be measured offline by taking samples from the bioreactor. If the product is labeled with a fluorescent marker, transfection efficiency can also be assessed using flow cytometry analysis. Typically, the production phase lasts between 48 and 96 h until the product is harvested.
Sea-level-induced seismicity and submarine landslide occurrence
Brothers, Daniel S.; Luttrell, Karen M.; Chaytor, Jason D.
2013-01-01
The temporal coincidence between rapid late Pleistocene sea-level rise and large-scale slope failures is widely documented. Nevertheless, the physical mechanisms that link these phenomena are poorly understood, particularly along nonglaciated margins. Here we investigate the causal relationships between rapid sea-level rise, flexural stress loading, and increased seismicity rates along passive margins. We find that Coulomb failure stress across fault systems of passive continental margins may have increased more than 1 MPa during rapid late Pleistocene–early Holocene sea-level rise, an amount sufficient to trigger fault reactivation and rupture. These results suggest that sea-level–modulated seismicity may have contributed to a number of poorly understood but widely observed phenomena, including (1) increased frequency of large-scale submarine landslides during rapid, late Pleistocene sea-level rise; (2) emplacement of coarse-grained mass transport deposits on deep-sea fans during the early stages of marine transgression; and (3) the unroofing and release of methane gas sequestered in continental slope sediments.
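The order of magnitude quoted above follows from the water load added by late Pleistocene sea-level rise together with the usual Coulomb failure criterion. The friction coefficient and the assumption that the full hydrostatic load acts on the fault are illustrative choices, not parameters taken from the study:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu=0.6):
    # Delta CFS = d_tau - mu * d_sigma_n, with normal stress positive in
    # compression; positive Delta CFS moves a fault toward failure
    return d_tau - mu * d_sigma_n

rho_w, g, dh = 1025.0, 9.81, 120.0   # seawater density, gravity, ~120 m sea-level rise
d_load = rho_w * g * dh              # added vertical water load, in Pa
```

The load works out to about 1.2 MPa, consistent with the >1 MPa Coulomb stress changes reported for flexurally loaded passive-margin faults.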
Clockwork for neutrino masses and lepton flavor violation
NASA Astrophysics Data System (ADS)
Ibarra, Alejandro; Kushwaha, Ashwani; Vempati, Sudhir K.
2018-05-01
We investigate the generation of small neutrino masses in a clockwork framework which includes Dirac mass terms as well as Majorana mass terms for the new fermions. We derive analytic formulas for the masses of the new particles and for their Yukawa couplings to the lepton doublets, in the scenario where the clockwork parameters are universal. When the universal Majorana mass vanishes, the zero mode of the clockwork sector forms a Dirac pair with the active neutrino, with a mass which is in agreement with oscillations experiments for a sufficiently large number of clockwork gears. On the other hand, when it does not vanish, neutrino masses are generated via the seesaw mechanism. In this case, and due to the fact that the effective Yukawa couplings of the higher modes can be sizable, neutrino masses can only be suppressed by postulating a large Majorana mass scale. Finally, we discuss the constraints on the mass scale of the clockwork fermions from the non-observation of the rare leptonic decay μ → eγ.
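The gear counting implied by "sufficiently large number of clockwork gears" is quick to reproduce: the zero mode couples with a Yukawa suppressed by q^N, so a Dirac mass m ≈ y v / q^N reaches the oscillation scale for N of a few tens. The numbers below (q = 3, target mass 0.05 eV, O(1) Yukawa) are illustrative choices, not values fixed by the paper.

```python
import math

y, v = 1.0, 174.0e9          # O(1) Yukawa and the electroweak vev (~174 GeV) in eV
q, m_target = 3.0, 0.05      # clockwork charge and a target Dirac mass in eV

# smallest number of gears N with y * v / q**N <= m_target
n_gears = math.ceil(math.log(y * v / m_target) / math.log(q))
```

For these inputs roughly 27 gears suffice, illustrating how a modest clockwork chain spans the twelve orders of magnitude between the electroweak and neutrino mass scales.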
Dynamics of oxygen supply and consumption during mainstream large-scale composting in China.
Zeng, Jianfei; Shen, Xiuli; Han, Lujia; Huang, Guangqun
2016-11-01
This study characterized physicochemical and biological parameters to systematically evaluate the dynamics of oxygen supply and consumption during large-scale trough composting in China. The results showed that long active phases, low maximum temperatures, low organic matter losses and high pore methane concentrations were observed in different composting layers. Pore oxygen concentrations in the top, middle and bottom layers remained below 5 vol.% for 40, 42 and 45 days, respectively, which accounted for more than 89% of the whole period. After each mechanical turning, oxygen was consumed at a stable respiration rate to a concentration of 5 vol.% in no more than 99 min, and the material remained anaerobic in the subsequent static condition. The daily percentage of time under aerobic conditions was no more than 14% of a single day. Therefore, improving free air space (FAS), adjusting the aeration interval or combining turning with forced aeration was suggested to provide sufficient oxygen during composting. Copyright © 2016 Elsevier Ltd. All rights reserved.
Large Scale Flood Risk Analysis using a New Hyper-resolution Population Dataset
NASA Astrophysics Data System (ADS)
Smith, A.; Neal, J. C.; Bates, P. D.; Quinn, N.; Wing, O.
2017-12-01
Here we present the first national-scale flood risk analyses using high-resolution Facebook Connectivity Lab population data and data from a hyper-resolution flood hazard model. In recent years the field of large-scale hydraulic modelling has been transformed by new remotely sensed datasets, improved process representation, highly efficient flow algorithms and increases in computational power. These developments have allowed flood risk analysis to be undertaken in previously unmodelled territories and from continental to global scales. Flood risk analyses are typically conducted via the integration of modelled water depths with an exposure dataset. Over large scales and in data-poor areas, these exposure data typically take the form of a gridded population dataset, estimating population density using remotely sensed data and/or locally available census data. The local nature of flooding dictates that, for robust flood risk analysis to be undertaken, both hazard and exposure data should sufficiently resolve local-scale features. Global flood frameworks now enable flood hazard data to be produced at 90 m resolution, resulting in a mismatch with available population datasets, which are typically more coarsely resolved. Moreover, these exposure data are typically focused on urban areas and struggle to represent rural populations. In this study we integrate a new population dataset with a global flood hazard model. The population dataset, produced by the Connectivity Lab at Facebook, provides gridded population data at 5 m resolution, a resolution increase over previous countrywide datasets of multiple orders of magnitude. Flood risk analyses undertaken over a number of developing countries are presented, along with a comparison of flood risk analyses undertaken using pre-existing population datasets.
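The core integration step described above, overlaying a modelled water-depth grid with a gridded population dataset, can be sketched as follows. The grids, cell values and depth threshold are illustrative assumptions, not the study's actual data, and real analyses must first resample the two grids to a common resolution:

```python
import numpy as np

# Sketch of grid-based flood exposure: sum the population in cells
# where modelled water depth exceeds a threshold. Assumes hazard and
# exposure grids have already been aligned to the same resolution.

def exposed_population(depth, population, threshold=0.1):
    """Sum population in cells where water depth (m) exceeds threshold."""
    if depth.shape != population.shape:
        raise ValueError("hazard and exposure grids must align")
    return float(population[depth > threshold].sum())

depth = np.array([[0.0, 0.3], [1.2, 0.05]])     # metres of water per cell
pop = np.array([[120.0, 40.0], [10.0, 300.0]])  # people per cell
print(exposed_population(depth, pop))  # prints 50.0
```

The resolution mismatch the abstract highlights matters precisely here: if the population grid is coarser than the hazard grid, a whole coarse cell's population may be wrongly counted as exposed (or missed) based on a few flooded fine cells.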
Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul
2016-01-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high-dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (Supervised Principal Components, Regularization, and Boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods and as primary analytic tools in discovery-phase research. We conclude that despite their differences from the classic null-hypothesis testing approach, or perhaps because of them, SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
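The EPE-minimization-via-cross-validation idea can be made concrete with one of the three algorithm families the abstract names, regularization. A minimal sketch on synthetic data (not the article's personality-item application): ridge regression with the penalty strength chosen to minimize out-of-fold squared error rather than within-sample fit:

```python
import numpy as np

# Sketch of EPE minimization by cross-validation: ridge regression
# with the penalty lambda selected by k-fold out-of-fold error.
# Data are synthetic; the true model uses only 3 of 20 predictors.

def ridge_fit(X, y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def cv_error(X, y, lam, k=5):
    """Mean squared out-of-fold prediction error (the EPE estimate)."""
    n = len(y)
    folds = np.array_split(np.arange(n), k)
    err = 0.0
    for hold in folds:
        train = np.setdiff1d(np.arange(n), hold)
        beta = ridge_fit(X[train], y[train], lam)
        resid = y[hold] - X[hold] @ beta
        err += resid @ resid
    return err / n

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.0, 0.5]
y = X @ beta_true + rng.normal(scale=0.5, size=100)

lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(lams, key=lambda lam: cv_error(X, y, lam))
print(best)
```

The selected penalty balances the two failure modes the abstract describes: too little regularization overfits the training folds, while too much shrinks away real signal; both inflate the out-of-fold error.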
Planetesimal Formation through the Streaming Instability
NASA Astrophysics Data System (ADS)
Yang, Chao-Chin; Johansen, Anders; Schäfer, Urs
2015-12-01
The streaming instability is a promising mechanism to circumvent the barriers in direct dust growth and lead to the formation of planetesimals, as demonstrated by many previous studies. In order to resolve the thin layer of solids, however, most of these studies focused on a local region of a protoplanetary disk with a limited simulation domain. It remains uncertain how the streaming instability is affected by the disk gas on large scales, and models that have sufficient dynamical range to capture both the thin particle layer and the large-scale disk dynamics are required. We hereby systematically push the limits of the computational domain up to more than the gas scale height, and study the particle-gas interaction on large scales in the saturated state of the streaming instability, as well as the initial mass function of the resulting planetesimals. To overcome the numerical challenges posed by models of this kind, we have developed a new technique to simultaneously relieve the stringent time-step constraints due to small-sized particles and strong local solid concentrations. Using these models, we demonstrate that the streaming instability can drive multiple radial, filamentary concentrations of solids, implying that planetesimals are born in well-separated belt-like structures. We also find that the initial mass function of planetesimals via the streaming instability has a characteristic exponential form, which is robust against computational domain as well as resolution. These findings will help us further constrain the cosmochemical history of the Solar System as well as planet formation theory in general.
NASA Astrophysics Data System (ADS)
Subramanian, A. C.; Lavers, D.; Matsueda, M.; Shukla, S.; Cayan, D. R.; Ralph, M.
2017-12-01
Atmospheric rivers (ARs), elongated plumes of intense moisture transport, are a primary source of hydrological extremes, water resources and impactful weather along the west coasts of North America and Europe. There is strong demand in the water management, societal infrastructure and humanitarian sectors for reliable sub-seasonal forecasts, particularly of extreme events such as floods and droughts, so that actions to mitigate disastrous impacts can be taken with sufficient lead time. Many recent studies have shown that ARs in the Pacific and the Atlantic are modulated by large-scale modes of climate variability. Leveraging the improved understanding of how these large-scale climate modes modulate ARs in these two basins, we use state-of-the-art multi-model forecast systems such as the North American Multi-Model Ensemble (NMME) and the Subseasonal-to-Seasonal (S2S) database to help inform and assess the probabilistic prediction of ARs and related extreme weather events over the North American and European west coasts. We will present results from evaluating probabilistic forecasts of extreme precipitation and AR activity at the sub-seasonal scale. In particular, results from the comparison of two winters (2015-16 and 2016-17) will be shown, winters which defied canonical El Niño teleconnection patterns over North America and Europe. We further extend this study to analyze the probabilistic forecast skill for AR events in these two basins and the variability in forecast skill during certain regimes of large-scale climate modes.
Statistical correlations in an ideal gas of particles obeying fractional exclusion statistics.
Pellegrino, F M D; Angilella, G G N; March, N H; Pucci, R
2007-12-01
After a brief discussion of the concepts of fractional exchange and fractional exclusion statistics, we report partly analytical and partly numerical results on the thermodynamic properties of assemblies of particles obeying fractional exclusion statistics. The effect of dimensionality is one focal point, the ratio μ/k_B T of chemical potential to thermal energy being obtained numerically as a function of a scaled particle density. Pair correlation functions are also presented as a function of the statistical parameter, with Friedel oscillations developing close to the fermion limit for sufficiently large density.
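For readers unfamiliar with fractional exclusion statistics, the standard single-level occupation number interpolating between bosons and fermions is Wu's equation: n(ε) = 1/(w + g), where w > 0 solves w^g (1+w)^(1-g) = exp((ε-μ)/k_B T) and g is the statistics parameter (g = 0 gives Bose-Einstein, g = 1 Fermi-Dirac). A minimal numerical sketch of this formula, which underlies thermodynamics of the kind computed in the paper though it is not the paper's own code:

```python
import math

# Sketch of Wu's occupation number for fractional exclusion statistics:
# n = 1/(w + g), with w > 0 solving w^g (1+w)^(1-g) = exp(x),
# x = (eps - mu)/kT. Solved by bisection in log form for stability.
# g = 0 recovers Bose-Einstein; g = 1 recovers Fermi-Dirac.

def occupation(x, g):
    """Mean occupation for scaled energy x = (eps - mu)/kT and statistics g."""
    f = lambda w: g * math.log(w) + (1 - g) * math.log1p(w) - x
    lo, hi = 1e-300, 1.0
    while f(hi) < 0:          # grow the bracket until the root is enclosed
        hi *= 2
    for _ in range(200):      # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    w = 0.5 * (lo + hi)
    return 1.0 / (w + g)

# Fermion limit g = 1 at eps = mu gives the familiar value 1/2:
print(round(occupation(0.0, 1.0), 6))  # prints 0.5
```

Intermediate g values interpolate smoothly between the two familiar distributions, which is why Friedel oscillations in the pair correlations only develop as g approaches the fermion limit.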
NASA Astrophysics Data System (ADS)
Michalak, D. J.; Bruno, A.; Caudillo, R.; Elsherbini, A. A.; Falcon, J. A.; Nam, Y. S.; Poletto, S.; Roberts, J.; Thomas, N. K.; Yoscovits, Z. R.; Dicarlo, L.; Clarke, J. S.
Experimental quantum computing is rapidly approaching the integration of sufficient numbers of quantum bits for interesting applications, but many challenges remain. These include realization of an extensible design for scale-up to large arrays, sufficient material process control, and discovery of integration schemes compatible with industrial 300 mm fabrication. We present recent developments in extensible circuits with vertical delivery. Toward the goal of developing a high-volume manufacturing process, we will present recent results on a new Josephson junction process that is compatible with current tooling. We will then present the improvements in NbTiN material uniformity that typical 300 mm fabrication tooling can provide. While initial results on few-qubit systems are encouraging, advanced process control is expected to deliver the improvements in qubit uniformity, coherence time, and control required for larger systems. Research funded by Intel Corporation.
NASA Astrophysics Data System (ADS)
Huang, Y.; Liu, M.; Wada, Y.; He, X.; Sun, X.
2017-12-01
In recent decades, with rapid economic growth, industrial development and urbanization, pollution by polycyclic aromatic hydrocarbons (PAHs) has become a diversified and complicated phenomenon in China. However, monitoring of PAHs across multiple environmental compartments, and of the corresponding multi-interface migration processes, remains limited, especially over large geographic areas. In this study, we couple the Multimedia Fate Model (MFM) to the Community Multi-Scale Air Quality (CMAQ) model in order to account for fugacity and transient contamination processes. This coupled dynamic contaminant model can evaluate the detailed local variations and mass fluxes of PAHs in different environmental media (e.g., air, surface film, soil, sediment, water and vegetation) across different spatial (county to country) and temporal (days to years) scales. The model has been applied to a large geographical domain of China at a 36 km by 36 km grid resolution, and considers the response characteristics of typical environmental media to a complex underlying surface. Results suggest that direct emission is the main input pathway of PAHs entering the atmosphere, while advection is the main outward flow of pollutants from the environment. In addition, both soil and sediment act as the main sinks of PAHs and have the longest retention times. Importantly, the highest PAH loadings are found in urbanized and densely populated regions of China, such as the Yangtze River Delta and Pearl River Delta. This model can provide a sound scientific basis towards a better understanding of the large-scale dynamics of environmental pollutants for land conservation and sustainable development.
In a next step, the dynamic contaminant model will be integrated with the continental-scale hydrological and water resources model (i.e., Community Water Model, CWatM) to quantify a more accurate representation and feedbacks between the hydrological cycle and water quality at even larger geographical domains. Keywords: PAHs; Community multi-scale air quality model; Multimedia fate model; Land use
Helical bottleneck effect in 3D homogeneous isotropic turbulence
NASA Astrophysics Data System (ADS)
Stepanov, Rodion; Golbraikh, Ephim; Frick, Peter; Shestakov, Alexander
2018-02-01
We present the results of modelling the development of homogeneous and isotropic turbulence with a large-scale source of energy and a source of helicity distributed over scales. We use the shell model for numerical simulation of the turbulence at high Reynolds number. The results show that the helicity injection leads to a significant change in the behavior of the energy and helicity spectra in scales larger and smaller than the energy injection scale. We suggest the phenomenology for direct turbulent cascades with the helicity effect, which reduces the efficiency of the spectral energy transfer. Therefore the energy is accumulated and redistributed so that non-linear interactions will be sufficient to provide a constant energy flux. It can be interpreted as the ‘helical bottleneck effect’ which, depending on the parameters of the injection helicity, reminds one of the well-known bottleneck effect at the end of inertial range. Simulations which included the infrared part of the spectrum show that the inverse cascade hardly develops under distributed helicity forcing.
Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide
Tang, William; Wang, Bei; Ethier, Stephane; ...
2016-11-01
The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code, including (i) implementation of one-sided communication from the MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.
Drieschner, Klaus H; Boomsma, Anne
2008-06-01
The Treatment Motivation Scales for forensic outpatient treatment (TMS-F) is a Dutch 85-item self-report questionnaire measuring the motivation of forensic outpatients to engage in their treatment and six cognitive and affective determinants of this motivation. Following descriptions of its conceptual basis and construction, the psychometric properties of the TMS-F are evaluated in two studies. In Study 1 (N = 378), the factorial structure of the instrument and the dimensionality of its scales are evaluated by confirmatory factor analysis. In Study 2, with a new sample (N = 376), the results of Study 1 are largely confirmed. It is found that the factorial structure of the TMS-F is in accordance with expectations, that all scales are sufficiently homogeneous and reliable to interpret the sum scores, and that these results are stable across independent samples. The relative importance of the six determinants of the motivation to engage in treatment and the generalizability of the results are discussed.
Pietarila Graham, Jonathan; Holm, Darryl D; Mininni, Pablo D; Pouquet, Annick
2007-11-01
We compute solutions of the Lagrangian-averaged Navier-Stokes alpha (LANS-α) model for significantly higher Reynolds numbers (up to Re ≈ 8300) than have previously been accomplished. This allows sufficient separation of scales to observe a Navier-Stokes inertial range followed by a second inertial range specific to the LANS-α model. Both fully helical and nonhelical flows are examined, up to Reynolds numbers of approximately 1300. Analysis of the third-order structure function scaling supports the predicted l^3 scaling; it corresponds to a k^-1 scaling of the energy spectrum for scales smaller than α. The energy spectrum itself shows a different scaling, which goes as k^1. This latter spectrum is consistent with the absence of stretching in the subfilter scales due to the Taylor frozen-in hypothesis employed as a closure in the derivation of the LANS-α model. These two scalings are conjectured to coexist in different spatial portions of the flow. The l^3 [E(k) ~ k^-1] scaling is subdominant to k^1 in the energy spectrum, but the l^3 scaling is responsible for the direct energy cascade, as no cascade can result from motions with no internal degrees of freedom. We demonstrate verification of the prediction for the size of the LANS-α attractor resulting from this scaling. From this, we give a methodology either for arriving at grid-independent solutions for the LANS-α model, or for obtaining a formulation of the large eddy simulation optimal in the context of the alpha models. The fully converged grid-independent LANS-α model may not be the best approximation to a direct numerical simulation of the Navier-Stokes equations, since the minimum error is a balance between truncation errors and the approximation error due to using the LANS-α instead of the primitive equations.
Furthermore, the small-scale behavior of the LANS-α model contributes to a reduction of flux at constant energy, leading to a shallower energy spectrum for large α. These small-scale features, however, do not preclude the LANS-α model from reproducing correctly the intermittency properties of the high-Reynolds-number flow.
Radiative PQ breaking and the Higgs boson mass
NASA Astrophysics Data System (ADS)
D'Eramo, Francesco; Hall, Lawrence J.; Pappadopulo, Duccio
2015-06-01
The small and negative value of the Standard Model Higgs quartic coupling at high scales can be understood in terms of anthropic selection on a landscape where large and negative values are favored: most universes have a very short-lived electroweak vacuum and typical observers are in universes close to the corresponding metastability boundary. We provide a simple example of such a landscape with a Peccei-Quinn symmetry breaking scale generated through dimensional transmutation and supersymmetry softly broken at an intermediate scale. Large and negative contributions to the Higgs quartic are typically generated on integrating out the saxion field. Cancellations among these contributions are forced by the anthropic requirement of a sufficiently long-lived electroweak vacuum, determining the multiverse distribution for the Higgs quartic in a similar way to that of the cosmological constant. This leads to a statistical prediction of the Higgs boson mass that, for a wide range of parameters, yields the observed value within the 1σ statistical uncertainty of ~5 GeV originating from the multiverse distribution. The strong CP problem is solved and single-component axion dark matter is predicted, with an abundance that can be understood from environmental selection. A more general setting for the Higgs mass prediction is discussed.
Mutoh, Hiroki; Mishina, Yukiko; Gallero-Salas, Yasir; Knöpfel, Thomas
2015-01-01
Traditional small molecule voltage sensitive dye indicators have been a powerful tool for monitoring large scale dynamics of neuronal activities but have several limitations including the lack of cell class specific targeting, invasiveness and difficulties in conducting longitudinal studies. Recent advances in the development of genetically-encoded voltage indicators have successfully overcome these limitations. Genetically-encoded voltage indicators (GEVIs) provide sufficient sensitivity to map cortical representations of sensory information and spontaneous network activities across cortical areas and different brain states. In this study, we directly compared the performance of a prototypic GEVI, VSFP2.3, with that of a widely used small molecule voltage sensitive dye (VSD), RH1691, in terms of their ability to resolve mesoscopic scale cortical population responses. We used three synchronized CCD cameras to simultaneously record the dual emission ratiometric fluorescence signal from VSFP2.3 and RH1691 fluorescence. The results show that VSFP2.3 offers more stable and less invasive recording conditions, while the signal-to-noise level and the response dynamics to sensory inputs are comparable to RH1691 recordings. PMID:25964738
Collective synthesis of natural products by means of organocascade catalysis
Jones, Spencer B.; Simmons, Bryon; Mastracchio, Anthony; MacMillan, David W. C.
2012-01-01
Organic chemists are now able to synthesize small quantities of almost any known natural product, given sufficient time, resources and effort. However, translation of the academic successes in total synthesis to the large-scale construction of complex natural products and the development of large collections of biologically relevant molecules present significant challenges to synthetic chemists. Here we show that the application of two nature-inspired techniques, namely organocascade catalysis and collective natural product synthesis, can facilitate the preparation of useful quantities of a range of structurally diverse natural products from a common molecular scaffold. The power of this concept has been demonstrated through the expedient, asymmetric total syntheses of six well-known alkaloid natural products: strychnine, aspidospermidine, vincadifformine, akuammicine, kopsanone and kopsinine. PMID:21753848
Large-scale effects on the regulation of tropical sea surface temperature
NASA Technical Reports Server (NTRS)
Hartmann, Dennis L.; Michelsen, Marc L.
1993-01-01
The dominant terms in the surface energy budget of the tropical oceans are absorption of solar radiation and evaporative cooling. If it is assumed that relative humidity in the boundary layer remains constant, evaporative cooling will increase rapidly with sea surface temperature (SST) because of the strong temperature dependence of saturation water vapor pressure. The resulting stabilization of SST provided by evaporative cooling is sufficient to overcome the positive feedback contributed by the decrease of surface net longwave cooling with increasing SST. Evaporative cooling is sensitive to small changes in boundary-layer relative humidity. Large and negative shortwave cloud forcing in the regions of highest SST is supported by the moisture convergence associated with large-scale circulations. In the descending portions of these circulations the shortwave cloud forcing is suppressed. When the effect of these circulations is taken into account by spatial averaging, the area-averaged cloud forcing shows no sensitivity to area-averaged SST changes associated with the 1987 warming event in the tropical Pacific. While the shortwave cloud forcing is large and important in the convective regions, the importance of its role in regulating the average temperature of the tropics and in modulating temperature gradients within the tropics is less clear. A heuristic model of SST is used to illustrate the possible role of large-scale atmospheric circulations on SST in the tropics and the coupling between SST gradients and mean tropical SST. The intensity of large-scale circulations responds sensitively to SST gradients and affects the mean tropical SST by supplying dry air to the planetary boundary layer. Large SST gradients generate vigorous circulations that increase evaporation and reduce the mean SST.
A rapid mechanism to remobilize and homogenize highly crystalline magma bodies.
Burgisser, Alain; Bergantz, George W
2011-03-10
The largest products of magmatic activity on Earth, the great bodies of granite and their corresponding large eruptions, have a dual nature: homogeneity at the large scale and spatial and temporal heterogeneity at the small scale. This duality calls for a mechanism that selectively removes the large-scale heterogeneities associated with the incremental assembly of these magmatic systems and yet occurs rapidly despite crystal-rich, viscous conditions seemingly resistant to mixing. Here we show that a simple dynamic template can unify a wide range of apparently contradictory observations from both large plutonic bodies and volcanic systems by a mechanism of rapid remobilization (unzipping) of highly viscous crystal-rich mushes. We demonstrate that this remobilization can lead to rapid overturn and produce the observed juxtaposition of magmatic materials with very disparate ages and complex chemical zoning. What distinguishes our model is the recognition that the process has two stages. Initially, a stiff mushy magma is reheated from below, producing a reduction in crystallinity that leads to the growth of a subjacent buoyant mobile layer. When the thickening mobile layer becomes sufficiently buoyant, it penetrates the overlying viscous mushy magma. This second stage rapidly exports homogenized material from the lower mobile layer to the top of the system, and leads to partial overturn within the viscous mush itself as an additional mechanism of mixing. Model outputs illustrate that unzipping can rapidly produce large amounts of mobile magma available for eruption. The agreement between calculated and observed unzipping rates for historical eruptions at Pinatubo and at Montserrat demonstrates the general applicability of the model. This mechanism furthers our understanding of both the formation of periodically homogenized plutons (crust building) and of ignimbrites by large eruptions.
Large ejecta fragments from asteroids [Abstract only]
NASA Technical Reports Server (NTRS)
Asphaug, E.
1994-01-01
The asteroid 4 Vesta, with its unique basaltic crust, remains a key mystery of planetary evolution. A localized olivine feature suggests excavation of subcrustal material in a crater or impact basin comparable in size to the planetary radius (R_Vesta ≈ 280 km). Furthermore, a 'clan' of small asteroids associated with Vesta (by spectral and orbital similarities) may be ejecta from this impact and direct parents of the basaltic achondrites. To escape, these smaller (about 4-7 km) asteroids had to be ejected at speeds greater than the escape velocity, v_esc ≈ 350 m/s. This evidence that large fragments were ejected at high speed from Vesta has not been reconciled with the present understanding of impact physics. Analytical spallation models predict that an impactor capable of ejecting these 'chips off Vesta' would be almost the size of Vesta! Such an impact would lead to the catastrophic disruption of both bodies. A simpler analysis is outlined, based on comparison with cratering on Mars, and it is shown that Vesta could survive an impact capable of ejecting kilometer-scale fragments at sufficient speed. To what extent does Vesta survive the formation of such a large crater? This is best addressed using a hydrocode such as SALE 2D with centroidal gravity to predict velocities subsequent to impact. The fragmentation outcome and post-impact velocities are described to demonstrate that Vesta survives without large-scale disassembly or overturning of the crust. Vesta and its clan represent a valuable dataset for testing fragmentation hydrocodes such as SALE 2D and SPH 3D at planetary scales. Resolution sufficient to directly model spallation 'chips' on a body 100 times as large is now marginally possible on modern workstations. These boundaries are important in near-surface ejection processes and in large-scale disruption leading to asteroid families and stripped cores.
Crops and food security--experiences and perspectives from Taiwan.
Huang, Chen-Te; Fu, Tzu-Yu Richard; Chang, Su-San
2009-01-01
Food security is an important issue of concern for all countries around the world. Many factors may cause food insecurity, including increasing demand, shortage of supply, trade conditions, other countries' food policies, lack of money, high food and oil prices, decelerating productivity, and speculation. The food self-sufficiency ratio of Taiwan was only 30.6% in 2007, weighted by energy. Total agricultural imports, including cereals, have increased significantly due to the expansion of the livestock and fishery industries and improved living standards. The agricultural sector of Taiwan faces many challenges, such as a low level of food self-sufficiency, aging farmers, a large acreage of set-aside farmland, small-scale farming, soaring fertilizer prices, natural disasters accelerated by climate change, and rapid changes in the world food economy. To cope with these challenges, the present agricultural policy is based on three guidelines: "Healthfulness, Efficiency, and Sustainability." A program entitled "Turning Small Landlords into Large Tenants" was launched to make effective use of idle land. Facing globalization and the food crisis, Taiwan will secure a stable food supply through revitalization of its set-aside farmland and international markets, and will provide technical assistance to developing countries, in particular for staple food crops.
Deep learning with non-medical training used for chest pathology identification
NASA Astrophysics Data System (ADS)
Bar, Yaniv; Diamant, Idit; Wolf, Lior; Greenspan, Hayit
2015-03-01
In this work, we examine the strength of deep learning approaches for pathology detection in chest radiograph data. Convolutional neural network (CNN) deep-architecture classification approaches have gained popularity due to their ability to learn mid- and high-level image representations. We explore the ability of a CNN to identify different types of pathologies in chest x-ray images. Moreover, since very large training sets are generally not available in the medical domain, we explore the feasibility of using a deep learning approach based on non-medical learning. We tested our algorithm on a dataset of 93 images, using a CNN that was trained on ImageNet, a well-known large-scale non-medical image database. The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features. We obtained an area under the curve (AUC) of 0.93 for right pleural effusion detection, 0.89 for enlarged heart detection, and 0.79 for classification between healthy and abnormal chest x-rays, where all pathologies are combined into one large class. This first-of-its-kind experiment shows that deep learning with large-scale non-medical image databases may be sufficient for general medical image recognition tasks.
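The performance figures quoted here are areas under the ROC curve (AUC). A minimal sketch of AUC computed directly via its probabilistic interpretation, the chance that a randomly chosen positive case scores above a randomly chosen negative one (the Mann-Whitney formulation); the labels and scores below are invented, not the study's classifier outputs:

```python
# Sketch: AUC as the probability that a positive example outranks a
# negative one; ties count as half. O(P*N) pairwise form, fine for
# small evaluation sets like the 93-image dataset described above.

def auc(labels, scores):
    """AUC via pairwise comparison of positive and negative scores."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one example of each class")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]  # hypothetical classifier confidences
print(auc(labels, scores))
```

A value of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is why the 0.93 and 0.89 figures above indicate strong detectors.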
Thorstenson, Sten; Molin, Jesper; Lundström, Claes
2014-01-01
Recent technological advances have improved the whole slide imaging (WSI) scanner quality and reduced the cost of storage, thereby enabling the deployment of digital pathology for routine diagnostics. In this paper we present the experiences from two Swedish sites having deployed routine large-scale WSI for primary review. At Kalmar County Hospital, the digitization process started in 2006 to reduce the time spent at the microscope in order to improve the ergonomics. Since 2008, more than 500,000 glass slides have been scanned in the routine operations of Kalmar and the neighboring Linköping University Hospital. All glass slides are digitally scanned yet they are also physically delivered to the consulting pathologist who can choose to review the slides on screen, in the microscope, or both. The digital operations include regular remote case reporting by a few hospital pathologists, as well as around 150 cases per week where primary review is outsourced to a private clinic. To investigate how the pathologists choose to use the digital slides, a web-based questionnaire was designed and sent out to the pathologists in Kalmar and Linköping. The responses showed that almost all pathologists think that ergonomics have improved and that image quality was sufficient for most histopathologic diagnostic work. 38 ± 28% of the cases were diagnosed digitally, but the survey also revealed that the pathologists commonly switch back and forth between digital and conventional microscopy within the same case. The fact that two full-scale digital systems have been implemented and that a large portion of the primary reporting is voluntarily performed digitally shows that large-scale digitization is possible today. PMID:24843825
NASA Astrophysics Data System (ADS)
Steinhaus, Ben; Shen, Amy; Sureshkumar, Radhakrishna
2006-11-01
We investigate the effects of fluid elasticity and channel geometry on polymeric droplet pinch-off by performing systematic experiments using viscoelastic polymer solutions that possess practically shear-rate-independent viscosity (Boger fluids). Four geometric sizes (width and depth scaled up proportionally at ratios of 0.5, 1, 2, and 20) are used to study the effect of the length scale, which in turn influences the ratio of elastic to viscous forces as well as the Rayleigh time scale associated with the interfacial instability of a cylindrical column of liquid. We observe a power-law relationship between the dimensionless capillary pinch-off time T (scaled with respect to the Rayleigh time scale) and the elasticity number E, defined as the ratio of the fluid relaxation time to the time scale of viscous diffusion. In general, T increases dramatically with increasing E. Inhibition of ``bead-on-a-string'' formation is observed when the effective Deborah number De, defined as the ratio of the fluid relaxation time to the Rayleigh time scale, becomes greater than 10. For sufficiently large values of De, the Rayleigh instability may be modified substantially by fluid elasticity.
Van Landeghem, Sofie; De Bodt, Stefanie; Drebert, Zuzanna J.; Inzé, Dirk; Van de Peer, Yves
2013-01-01
Despite the availability of various data repositories for plant research, a wealth of information currently remains hidden within the biomolecular literature. Text mining provides the necessary means to retrieve these data through automated processing of texts. However, only recently has advanced text mining methodology been implemented with sufficient computational power to process texts at a large scale. In this study, we assess the potential of large-scale text mining for plant biology research in general and for network biology in particular, using a state-of-the-art text mining system applied to all PubMed abstracts and PubMed Central full texts. We present an extensive evaluation of the textual data for Arabidopsis thaliana, assessing the overall accuracy of this new resource for usage in plant network analyses. Furthermore, we combine text mining information with both protein–protein and regulatory interactions from experimental databases. Clusters of tightly connected genes are delineated from the resulting network, illustrating how such an integrative approach is essential to grasp the current knowledge available for Arabidopsis and to uncover gene information through guilt by association. All large-scale data sets, as well as the manually curated textual data, are made publicly available, thereby stimulating the application of text mining data in future plant biology studies. PMID:23532071
Ramakrishnan, Divakar; Curtis, Wayne R
2004-10-20
Trickle-bed root culture reactors are shown to achieve tissue concentrations as high as 36 g DW/L (752 g FW/L) at a scale of 14 L. Root growth rate in a 1.6-L reactor configuration with improved operational conditions is shown to be indistinguishable from the laboratory-scale benchmark, the shaker flask (mu=0.33 day(-1)). These results demonstrate that trickle-bed reactor systems can sustain tissue concentrations, growth rates and volumetric biomass productivities substantially higher than other reported bioreactor configurations. Mass transfer and fluid dynamics are characterized in trickle-bed root reactors to identify appropriate operating conditions and scale-up criteria. Root tissue respiration goes through a minimum with increasing liquid flow, which is qualitatively consistent with traditional trickle-bed performance. However, liquid hold-up is much higher than traditional trickle-beds and alternative correlations based on liquid hold-up per unit tissue mass are required to account for large changes in biomass volume fraction. Bioreactor characterization is sufficient to carry out preliminary design calculations that indicate scale-up feasibility to at least 10,000 liters.
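The reported specific growth rate (mu = 0.33 day^-1) implies a biomass doubling time of about two days, since exponential growth X(t) = X0*exp(mu*t) doubles when mu*t = ln 2. A quick check:

```python
import math

mu = 0.33                         # specific growth rate from the abstract, per day
t_double = math.log(2) / mu       # doubling time for exponential growth
print(f"doubling time = {t_double:.1f} days")   # -> 2.1 days

# Biomass multiplier over one week at this rate:
print(f"7-day growth factor = {math.exp(mu * 7):.1f}x")
```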
Large scale electromechanical transistor with application in mass sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Leisheng; Li, Lijie, E-mail: L.Li@swansea.ac.uk
The nanomechanical transistor (NMT) evolved from the single-electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and of its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large-scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration; an external force has to be used. In this paper, an LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them embedded in the transistor. The sensor is capable of detecting added mass via the eigenstate-shift method by reading the change of electrical current from the transistor, which has much higher sensitivity than the conventional eigenfrequency-shift approach used in classical cantilever-based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
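The mode shifts such a coupled-cantilever sensor reads out can be illustrated with the standard two-coupled-oscillator model; the spring constants and masses below are illustrative units, not device parameters from the paper:

```python
import math

def eigenfreqs(k, kc, m1, m2):
    """Angular eigenfrequencies of two cantilevers (stiffness k each)
    coupled by a spring kc:
        m1*x1'' = -(k + kc)*x1 + kc*x2
        m2*x2'' = -(k + kc)*x2 + kc*x1
    Solves the characteristic quadratic in w = omega^2."""
    a = m1 * m2
    b = -(k + kc) * (m1 + m2)
    c = (k + kc) ** 2 - kc ** 2
    disc = math.sqrt(b * b - 4 * a * c)
    w_lo, w_hi = (-b - disc) / (2 * a), (-b + disc) / (2 * a)
    return math.sqrt(w_lo), math.sqrt(w_hi)

k, kc, m = 1.0, 0.05, 1.0                 # illustrative, dimensionless values
base   = eigenfreqs(k, kc, m, m)
loaded = eigenfreqs(k, kc, m, m + 1e-3)   # tiny mass added to one cantilever
print([round(b - l, 6) for b, l in zip(base, loaded)])  # both modes shift down
```

Because the added mass also redistributes the mode shapes (the eigenstates), reading the current through the transistor-embedded cantilever can resolve smaller additions than tracking a frequency alone.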
Enhanced peculiar velocities in brane-induced gravity
NASA Astrophysics Data System (ADS)
Wyman, Mark; Khoury, Justin
2010-08-01
The mounting evidence for anomalously large peculiar velocities in our Universe presents a challenge for the ΛCDM paradigm. The recent estimates of the large-scale bulk flow by Watkins et al. are inconsistent at the nearly 3σ level with ΛCDM predictions. Meanwhile, Lee and Komatsu have recently estimated that the occurrence of high-velocity merging systems such as the bullet cluster (1E0657-57) is unlikely at a 6.5-5.8σ level, with an estimated probability between 3.3×10-11 and 3.6×10-9 in ΛCDM cosmology. We show that these anomalies are alleviated in a broad class of infrared-modified gravity theories, called brane-induced gravity, in which gravity becomes higher-dimensional at ultralarge distances. These theories include additional scalar forces that enhance gravitational attraction and therefore speed up structure formation at late times and on sufficiently large scales. The peculiar velocities are enhanced by 24-34% compared to standard gravity, with the maximal enhancement nearly consistent at the 2σ level with bulk flow observations. The occurrence of the bullet cluster in these theories is ≈104 times more probable than in ΛCDM cosmology.
NASA Astrophysics Data System (ADS)
Pan, Wen-hao; Liu, Shi-he; Huang, Li
2018-02-01
This study developed a three-layer velocity model for turbulent flow over large-scale roughness. Through theoretical analysis, this model couples both surface and subsurface flow. Flume experiments with a flat cobble bed were conducted to examine the theoretical model. Results show that both the turbulent flow field and the total flow characteristics are quite different from those in low-gradient flow over microscale roughness. The velocity profile in a shallow stream converges to the logarithmic law away from the bed, while inflecting over the roughness layer toward the non-zero subsurface flow. The velocity fluctuations close to a cobble bed differ from those over a sand bed, showing no sufficiently large peak velocity. The total flow energy loss deviates significantly from the 1/7 power-law equation when the relative flow depth is shallow. Both the coupled model and the experiments indicate non-negligible subsurface flow that accounts for a considerable proportion of the total flow. By including the subsurface flow, the coupled model is able to predict a wider range of velocity profiles and total flow energy loss coefficients than existing equations.
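The logarithmic law that the profile converges to away from the bed is u(z) = (u*/kappa) ln(z/z0); the shear velocity and roughness length below are assumed for illustration and are not the flume values:

```python
import math

def log_law_velocity(z, u_star, z0, kappa=0.41):
    """Log-law mean velocity u(z) = (u*/kappa) * ln(z/z0), valid away from
    the roughness layer (z > z0); kappa is the von Karman constant."""
    return (u_star / kappa) * math.log(z / z0)

# Assumed values: shear velocity 0.05 m/s, roughness length 2 mm.
for z in (0.01, 0.05, 0.2):
    print(f"u({z} m) = {log_law_velocity(z, 0.05, 0.002):.3f} m/s")
```

Near the bed this idealized profile keeps steepening, which is where a three-layer model with an inflection toward the subsurface flow departs from it.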
Atkinson, Jonathan A; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E; Griffiths, Marcus; Wells, Darren M
2017-10-01
Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. © The Authors 2017. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.
2009-10-01
A method is presented to parameterize the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources into large-scale models. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on the atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, and operates via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact on atmospheric ozone of aircraft NOx emissions are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the north Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization to transport emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity, and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during the plume dissipation.
Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be introduced in large-scale models, such as ship exhausts, provided that the plume life cycle, the type of emissions, and the major reactions involved in the nonlinear chemical systems can be determined with sufficient accuracy.
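The core bookkeeping of such a parameterization, emissions carried as a fuel tracer in plume form and handed over to the large-scale chemistry after a characteristic lifetime, behaves like first-order decay. A minimal sketch with hypothetical numbers (the 5 h lifetime is illustrative, not a value from the paper):

```python
import math

def plume_tracer(t, f0, tau):
    """Mass of emissions still in unresolved plume form after time t,
    assuming first-order decay with characteristic plume lifetime tau."""
    return f0 * math.exp(-t / tau)

f0, tau = 1.0, 5.0   # 1 kg of fuel tracer, assumed 5 h plume lifetime
for t in (0.0, 5.0, 15.0):
    in_plume = plume_tracer(t, f0, tau)
    released = f0 - in_plume   # fraction handed to large-scale chemistry
    print(f"t={t:4.1f} h  plume={in_plume:.3f} kg  large-scale={released:.3f} kg")
```

Because the tracer plus the released fraction always sum to f0, this accounting conserves mass by construction, mirroring the conservation property claimed for the parameterization.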
Coarse-grained incompressible magnetohydrodynamics: Analyzing the turbulent cascades
Aluie, Hussein
2017-02-21
Here, we formulate a coarse-graining approach to the dynamics of magnetohydrodynamic (MHD) fluids at a continuum of length-scales. In this methodology, effective equations are derived for the observable velocity and magnetic fields spatially-averaged at an arbitrary scale of resolution. The microscopic equations for the bare velocity and magnetic fields are renormalized by coarse-graining to yield macroscopic effective equations that contain both a subscale stress and a subscale electromotive force (EMF) generated by nonlinear interaction of eliminated fields and plasma motions. At large coarse-graining length-scales, the direct dissipation of invariants by microscopic mechanisms (such as molecular viscosity and Spitzer resistivity) is shown to be negligible. The balance at large scales is dominated instead by the subscale nonlinear terms, which can transfer invariants across scales, and are interpreted in terms of work concepts for energy and in terms of topological flux-linkage for the two helicities. An important application of this approach is to MHD turbulence, where the coarse-graining length ℓ lies in the inertial cascade range. We show that in the case of sufficiently rough velocity and/or magnetic fields, the nonlinear inter-scale transfer need not vanish and can persist to arbitrarily small scales. Although closed expressions are not available for subscale stress and subscale EMF, we derive rigorous upper bounds on the effective dissipation they produce in terms of scaling exponents of the velocity and magnetic fields. These bounds provide exact constraints on phenomenological theories of MHD turbulence in order to allow the nonlinear cascade of energy and cross-helicity. On the other hand, we show that the forward cascade of magnetic helicity to asymptotically small scales is impossible unless 3rd-order moments of either velocity or magnetic field become infinite.
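The subscale stress that appears in such coarse-grained equations has the generic form tau_l = bar(u*u) - bar(u)*bar(u), where bar(.) is the spatial filter at scale l. It can be computed directly for a 1D toy field; the top-hat filter width and two-mode signal below are illustrative, not taken from the paper:

```python
import math

def coarse_grain(u, half_width):
    """Top-hat filter: average over a window of 2*half_width+1 points with
    periodic boundaries, a 1D stand-in for spatial averaging at scale l."""
    n = len(u)
    w = 2 * half_width + 1
    return [sum(u[(i + j) % n] for j in range(-half_width, half_width + 1)) / w
            for i in range(n)]

n, hw = 64, 4
u = [math.sin(2 * math.pi * i / n) + 0.3 * math.sin(2 * math.pi * 8 * i / n)
     for i in range(n)]                       # large-scale + small-scale mode
u_bar  = coarse_grain(u, hw)
uu_bar = coarse_grain([x * x for x in u], hw)
# Subscale stress tau_l = bar(u*u) - bar(u)*bar(u): nonzero wherever the
# filter eliminates small-scale structure.
tau = [ab - b * b for ab, b in zip(uu_bar, u_bar)]
print(round(max(tau), 3))
```

For a nonnegative filter kernel, tau is pointwise nonnegative in 1D (Jensen's inequality), consistent with its interpretation as energy held by the eliminated scales.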
NASA Astrophysics Data System (ADS)
Boella, Elisabetta; Herrero-Gonzalez, Diego; Innocenti, Maria Elena; Bemporad, Alessandro; Lapenta, Giovanni
2017-04-01
Fully kinetic simulations of magnetic reconnection events in the solar environment are especially challenging due to the extreme range of spatial and temporal scales that characterises them. As one moves from the photosphere to the chromosphere and the corona, the temperature increases from sub eV to 10-100 eV, while the mass density decreases from 10-4 to 10-12 kg/m3 and further. The intrinsic scales of kinetic reconnection (inertial length and gyroradius) are tremendously smaller than the maximum resolution available in observations. Furthermore, no direct information is available on the size of reconnection regions, plasmoids and reconnection fronts, while observations suggest that the process can cascade down to very small scales [Bemporad]. Resolving the electron and ion scales while simulating a sufficiently large domain is a great challenge facing solar modelling. An especially challenging aspect is the need to consider the Debye length. The very low temperature of the electrons and the large spatial and temporal scales make these simulations hard to implement within existing Particle in Cell (PIC) methods. The limit is the ratio of the grid spacing to the Debye length. PIC methods show good stability and energy conservation when the grid does not exceed the Debye length too much. Semi-implicit methods [Brackbill; Langdon] improve on this point. Only the recently developed fully energy-conserving implicit methods have solved the problem [Markidis; Chen], but at a high computational cost. Very recently, we have developed an efficient new semi-implicit algorithm, which has been proven to conserve energy exactly to machine precision [Lapenta]. In this work, we illustrate the main steps that enabled this breakthrough and report the implementation in a new massively parallel three-dimensional PIC code, called ECsim [Lapenta et al.]. The new approach is applied to the problem of reconnection in the solar environment.
We compare results of a simple 2D configuration similar to the so-called GEM challenge for different ranges of electron temperature, density and magnetic field, relative to different distances from the photosphere, demonstrating the capability of the new code. Finally, we report on the first results (to the authors' knowledge) of realistic magnetic 3D reconnection simulations in the solar environment, considering a large domain sufficient to describe the interaction of large scale dynamics with the reconnection process. A. Bemporad, ApJ 689, 572 (2008). J.U. Brackbill and D.W. Forslund, J. Comput. Phys. 46, 271 (1982). A. Langdon et al., J. Comput. Phys. 51, 107 (1983). S. Markidis and G. Lapenta, J. Comput. Phys. 230, 7037 (2011). G. Chen et al., J. Comput. Phys. 230, 7018 (2011). G. Lapenta, arXiv preprint arXiv:1602.06326 (2016). G. Lapenta et al., arXiv preprint arXiv:1612.08289 (2016).
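The stability constraint discussed above ties the PIC grid spacing to the electron Debye length, lambda_D = sqrt(eps0*kB*T / (n*e^2)). A quick evaluation with CODATA constants; the two plasma states are assumed for illustration, roughly bracketing the abstract's temperature and density range:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E    = 1.602176634e-19    # elementary charge, C

def debye_length(T_eV, n_m3):
    """Electron Debye length sqrt(eps0 * kB*T / (n * e^2)); with the
    temperature given in eV, kB*T in joules is simply T_eV * e."""
    return math.sqrt(EPS0 * (T_eV * E) / (n_m3 * E * E))

# Assumed states: a cool, dense low-atmosphere plasma vs. a hot,
# tenuous coronal plasma.
for T_eV, n in ((0.5, 1e20), (100.0, 1e14)):
    print(f"T={T_eV} eV, n={n:.0e} m^-3 -> lambda_D = {debye_length(T_eV, n):.2e} m")
```

The cold dense case gives a Debye length of well under a micron, which is why explicit PIC grids for the lower solar atmosphere become prohibitively fine and semi-implicit schemes are attractive.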
A Thermal Technique of Fault Nucleation, Growth, and Slip
NASA Astrophysics Data System (ADS)
Garagash, D.; Germanovich, L. N.; Murdoch, L. C.; Martel, S. J.; Reches, Z.; Elsworth, D.; Onstott, T. C.
2009-12-01
Fractures and fluids influence virtually all mechanical processes in the crust, but many aspects of these processes remain poorly understood, largely because of a lack of controlled field experiments at appropriate scale. We have developed an in-situ experimental approach to create carefully controlled faults at a scale of ~10 meters, using thermal techniques to modify in situ stresses to the point where the rock fails in shear. This approach extends experiments on fault nucleation and growth to length scales 2-3 orders of magnitude greater than are currently possible in the laboratory. The experiments could be done at depths where the modified in situ stresses are sufficient to drive faulting, obviating the need for unrealistically large loading frames. Such experiments require access to large rock volumes in the deep subsurface in a controlled setting. The Deep Underground Science and Engineering Laboratory (DUSEL), which is a research facility planned to occupy the workings of the former Homestake gold mine in the northern Black Hills, South Dakota, presents an opportunity for accessing locations with vertical stresses as large as 60 MPa (down to 2400 m depth), which is sufficient to create faults. One of the most promising methods for manipulating stresses to create faults that we have evaluated involves drilling two parallel planar arrays of boreholes and circulating cold fluid (e.g., liquid nitrogen) to chill the region in the vicinity of the boreholes. Cooling a relatively small region around each borehole causes the rock to contract, reducing the normal compressive stress throughout the much larger region between the arrays of boreholes. This scheme was evaluated using both scaling analysis and a finite element code. Our results show that if the boreholes are spaced by ~1 m, in several days to weeks, the normal compressive stress can be reduced by 10 MPa or more, and it is even possible to create net tension between the borehole arrays.
According to the Mohr-Coulomb strength criterion with standard Byerlee parameters, a fault will initiate before the net tension occurs. After a new fault is created, hot fluid can be injected into the boreholes to increase the temperature and reverse the direction of fault slip. This process can be repeated to study the formation of gouge, and how the properties of gouge control fault slip and associated seismicity. Instrumenting the site with arrays of geophones, tiltmeters, strain gauges, and displacement transducers as well as back mining - an opportunity provided by the DUSEL project - can reveal details of the fault geometry and gouge. We also expect to find small faults (with cm-scale displacement) during construction of DUSEL drifts. The same thermal technique can be used to induce slip on one of them and compare the “man-made” and natural gouges. The thermal technique appears to be a relatively simple way to rapidly change the stress field and either create slip on existing fractures or create new faults at scales up to 10 m or more.
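The quoted stress reduction of 10 MPa or more can be sanity-checked with the standard constrained thermoelastic stress formula sigma = E*alpha*dT/(1 - nu). The material values below are generic granite-like assumptions, not parameters from the study:

```python
def thermal_stress_mpa(E_gpa, alpha_per_k, dT_k, nu):
    """Biaxially constrained thermoelastic stress change,
    sigma = E * alpha * dT / (1 - nu), returned in MPa."""
    return E_gpa * 1e3 * alpha_per_k * dT_k / (1.0 - nu)

# Assumed granite-like values: E = 50 GPa, alpha = 8e-6 /K,
# Poisson's ratio 0.25, cooling by 50 K.
ds = thermal_stress_mpa(50.0, 8e-6, 50.0, 0.25)
print(f"stress reduction = {ds:.0f} MPa")   # ~27 MPa with these values
```

Even modest cooling thus produces stress changes of the same order as the 10 MPa figure, consistent with the feasibility argument.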
Astrophysical constraints on Planck scale dissipative phenomena.
Liberati, Stefano; Maccione, Luca
2014-04-18
The emergence of a classical spacetime from any quantum gravity model is still a subtle and only partially understood issue. If indeed spacetime is arising as some sort of large scale condensate of more fundamental objects, then it is natural to expect that matter, being a collective excitation of the spacetime constituents, will present modified kinematics at sufficiently high energies. We consider here the phenomenology of the dissipative effects necessarily arising in such a picture. Adopting dissipative hydrodynamics as a general framework for the description of the energy exchange between collective excitations and the spacetime fundamental degrees of freedom, we discuss how rates of energy loss for elementary particles can be derived from dispersion relations and used to provide strong constraints on the basis of current astrophysical observations of high-energy particles.
Higgs boson gluon-fusion production in QCD at three loops.
Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; Herzog, Franz; Mistlberger, Bernhard
2015-05-29
We present the cross section for the production of a Higgs boson at hadron colliders at next-to-next-to-next-to-leading order (N^{3}LO) in perturbative QCD. The calculation is based on a method to perform a series expansion of the partonic cross section around the threshold limit to an arbitrary order. We perform this expansion to sufficiently high order to obtain the value of the hadronic cross section at N^{3}LO in the large top-mass limit. For renormalization and factorization scales equal to half the Higgs boson mass, the N^{3}LO corrections are of the order of +2.2%. The total scale variation at N^{3}LO is 3%, reducing the uncertainty due to missing higher order QCD corrections by a factor of 3.
Baryon asymmetry from primordial black holes
NASA Astrophysics Data System (ADS)
Hamada, Yuta; Iso, Satoshi
2017-03-01
We propose a new scenario of baryogenesis from primordial black holes (PBH). Assuming the presence of microscopic baryon (or lepton) number violation, and the presence of an effective CP-violating operator such as ∂_α F(R, …) J^α, where F(R, …) is a scalar function of the Riemann tensor and J^α is a baryonic (leptonic) current, the time evolution of an evaporating black hole generates a baryonic (leptonic) chemical potential at the horizon; consequently the PBH emanates asymmetric Hawking radiation between baryons (leptons) and antibaryons (antileptons). Though the operator is higher-dimensional and largely suppressed by a high mass scale M*, we show that a sufficient amount of asymmetry can be generated for a wide range of parameters of the PBH mass MPBH, its abundance ΩPBH, and the scale M*.
Heat shield characterization: Outer planet atmospheric entry probe
NASA Technical Reports Server (NTRS)
Mezines, S. A.; Rusert, E. L.; Disser, E. F.
1976-01-01
A full scale carbon phenolic heat shield was fabricated for the Outer Planet Probe in order to demonstrate the feasibility of molding large carbon phenolic parts with a new fabrication processing method (multistep). The sphere-cone heat shield was molded as an integral unit with the nose cap plies configured into a double inverse chevron shape to achieve the desired ply orientation. The fabrication activity was successful and the feasibility of the multistep processing technology was established. Delaminations or unbonded plies were visible on the heat shield and resulted from excessive loss of resin and lack of sufficient pressure applied on the part during the curing cycle. A comprehensive heat shield characterization test program was conducted, including nondestructive tests with the full scale heat shield and thermal and mechanical property tests with small test specimens.
Withdrawal of ground water and pond water on Long Island from 1904 to 1949
Lusczynski, Norbert J.
1950-01-01
For more than 50 years the highly productive and readily replenishable water-bearing sands and gravels of Long Island -- capable of yielding an average of at least 1,000 million gallons a day -- and also some surface streams and ponds have been utilized on a large scale for public water supply and for industrial, agricultural, and domestic uses. During the drought months of 1949, when many surface- and ground-water supplies were being depleted at an alarming rate in many localities in the Northeast, the abundant water resources of Long Island provided sufficient water for public water supply for a large number of private companies and municipalities, as well as for large emergency drafts by the City of New York. In addition they kept industrial concerns from curtailing production, saved millions of dollars of potato, cauliflower, and other Long Island crops, and even furnished, during the summer heat, comfort cooling for theatergoers.
Large patternable metal nanoparticle sheets by photo/e-beam lithography
NASA Astrophysics Data System (ADS)
Saito, Noboru; Wang, Pangpang; Okamoto, Koichi; Ryuzaki, Sou; Tamada, Kaoru
2017-10-01
Techniques for micro/nano-scale patterning of large metal nanoparticle sheets can potentially be used to realize high-performance photoelectronic devices because the sheets provide greatly enhanced electrical fields around the nanoparticles due to localized surface plasmon resonances. However, no single metal nanoparticle sheet currently exists with sufficient durability for conventional lithographical processes. Here, we report large metal nanoparticle sheets patternable by photo and/or e-beam lithography, with improved durability achieved by incorporating molecular cross-linked structures between nanoparticles. The cross-linked structures were easily formed by a one-step chemical reaction: immersing a single nanoparticle sheet consisting of core metals, to which capping molecules ionically bond, in a dithiol ethanol solution. The ligand exchange reaction processes are discussed in detail, and we demonstrate 20 μm wide line-and-space patterns, and a 170 nm wide line, of the silver nanoparticle sheets.
Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R
2016-12-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms-supervised principal components, regularization, and boosting-can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach-or perhaps because of them-SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
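The EPE-minimization logic described above can be sketched with a toy criterion: choose a ridge penalty for a one-parameter regression by k-fold cross-validation, picking the complexity that predicts held-out data best rather than the one that fits the sample best. The data and penalty grid below are made up for illustration:

```python
import random

def ridge_slope(xs, ys, lam):
    """Ridge estimate of a no-intercept slope: b = sum(x*y) / (sum(x^2) + lam).
    Larger lam shrinks the slope toward zero (more bias, less variance)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def cv_error(xs, ys, lam, k=5):
    """k-fold cross-validation estimate of expected prediction error (EPE)."""
    n, err = len(xs), 0.0
    for fold in range(k):
        test = set(range(fold, n, k))
        tr_x = [x for i, x in enumerate(xs) if i not in test]
        tr_y = [y for i, y in enumerate(ys) if i not in test]
        b = ridge_slope(tr_x, tr_y, lam)
        err += sum((ys[i] - b * xs[i]) ** 2 for i in test)
    return err / n

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(40)]
ys = [0.5 * x + random.gauss(0, 1) for x in xs]   # true slope 0.5, noisy
best = min([0.0, 1.0, 5.0, 20.0], key=lambda lam: cv_error(xs, ys, lam))
print("penalty chosen by CV:", best)
```

The key point mirrors the abstract: the penalty is selected by out-of-sample error, not by within-sample likelihood, which is what guards against overfitting a large item pool.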
Scaling in two-fluid pinch-off
NASA Astrophysics Data System (ADS)
Pommer, Chris; Harris, Michael; Basaran, Osman
2010-11-01
The physics of two-fluid pinch-off, which arises whenever drops, bubbles, or jets of one fluid are ejected from a nozzle into another fluid, is scientifically important and technologically relevant. While the breakup of a drop in a passive environment is well understood, the physics of pinch-off when both the inner and outer fluids are dynamically active remains inadequately understood. Here, the breakup of a compound jet whose core and shell are incompressible Newtonian fluids is analyzed computationally when the interior is a "bubble" and the exterior is a liquid. The numerical method employed is an implicit method of lines ALE algorithm which uses finite elements with elliptic mesh generation and adaptive finite differences for time integration. Thus, the new approach neither starts with a priori idealizations, as has been the case with previous computations, nor is limited to length scales above that set by the wavelength of visible light as in any experimental study. In particular, three distinct responses are identified as the ratio m of the outer fluid's viscosity to the inner fluid's viscosity is varied. For small m, simulations show that the minimum neck radius r initially scales with time τ before breakup as r ∼ τ^0.58 (in accord with previous experiments and inviscid fluid models) but that r ∼ τ once r becomes sufficiently small. For intermediate and large values of m, r ∼ τ^α, where the exponent α may not equal one, once again as r becomes sufficiently small.
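Scaling exponents of this kind are typically extracted as the slope of log r versus log τ. A least-squares sketch on synthetic neck-radius data with a known exponent (the data are generated, not simulation output):

```python
import math

def loglog_slope(taus, rs):
    """Least-squares slope of log r vs. log tau, i.e. the power-law
    exponent alpha in r ~ tau**alpha."""
    lx = [math.log(t) for t in taus]
    ly = [math.log(r) for r in rs]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((x - mx) * (y - my) for x, y in zip(lx, ly))
    den = sum((x - mx) ** 2 for x in lx)
    return num / den

# Synthetic data over three decades of tau with exponent 0.58:
taus = [10 ** (-k / 4) for k in range(1, 13)]
rs = [0.2 * t ** 0.58 for t in taus]
print(round(loglog_slope(taus, rs), 2))   # recovers 0.58
```

In practice a crossover between regimes (e.g. from exponent 0.58 to 1) appears as a change of slope on the log-log plot, so the fit window must sit inside a single regime.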
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elghozi, Thomas; Mavromatos, Nick E.; Sakellariadou, Mairi
In a previous publication by some of the authors (N.E.M., M.S. and M.F.Y.), we have argued that the ''D-material universe'', that is a model of a brane world propagating in a higher-dimensional bulk populated by collections of D-particle stringy defects, provides a model for the growth of large-scale structure in the universe via the vector field in its spectrum. The latter corresponds to D-particle recoil velocity excitations as a result of the interactions of the defects with stringy matter and radiation on the brane world. In this article, we first elaborate further on the results of the previous study on the galactic growth era and analyse the circumstances under which the D-particle recoil velocity fluid may ''mimic'' dark matter in galaxies. A lensing phenomenology is also presented for some samples of galaxies, which previously were known to provide tension for modified gravity (TeVeS) models. The current model is found in agreement with these lensing data. Then we discuss a cosmic evolution for the D-material universe by analysing the conditions under which the late eras of this universe associated with large-scale structure are connected to early epochs, where inflation takes place. It is shown that inflation is induced by dense populations of D-particles in the early universe, with the rôle of the inflaton field played by the condensate of the D-particle recoil-velocity fields under their interaction with relativistic stringy matter, only for sufficiently large brane tensions and low string mass scales compared to the Hubble scale. On the other hand, for large string scales, where the recoil-velocity condensate fields are weak, inflation cannot be driven by the D-particle defects alone. In such cases inflation may be driven by dilaton (or other moduli) fields in the underlying string theory.
Growns, Ivor; Astles, Karen; Gehrke, Peter
2006-03-01
We studied the multiscale (sites, river reaches and rivers) and short-term temporal (monthly) variability in a freshwater fish assemblage. We found that small-scale spatial variation and short-term temporal variability significantly influenced fish community structure in the Macquarie and Namoi Rivers. However, larger scale spatial differences between rivers were the largest source of variation in the data. The interaction between temporal change and spatial variation in fish community structure, whilst statistically significant, was smaller than the variation between rivers. This suggests that although the fish communities within each river changed between sampling occasions, the underlying differences between rivers were maintained. In contrast, the strongest interaction between temporal and spatial effects occurred at the smallest spatial scale, at the level of individual sites. This means whilst the composition of the fish assemblage at a given site may fluctuate, the magnitude of these changes is unlikely to affect larger scale differences between reaches within rivers or between rivers. These results suggest that sampling at any time within a single season will be sufficient to show spatial differences that occur over large spatial scales, such as comparisons between rivers or between biogeographical regions.
Supersonic turbulent boundary layers with periodic mechanical non-equilibrium
NASA Astrophysics Data System (ADS)
Ekoto, Isaac Wesley
Previous studies have shown that favorable pressure gradients reduce the turbulence levels and length scales in supersonic flow. Wall roughness has been shown to reduce the large-scales in wall bounded flow. Based on these previous observations new questions have been raised. The fundamental questions this dissertation addressed are: (1) What are the effects of wall topology with sharp versus blunt leading edges? and (2) Is it possible that a further reduction of turbulent scales can occur if surface roughness and favorable pressure gradients are combined? To answer these questions and to enhance the current experimental database, an experimental analysis was performed to provide high fidelity documentation of the mean and turbulent flow properties along with surface and flow visualizations of a high-speed (M = 2.86), high Reynolds number (Re_θ ≈ 60,000) supersonic turbulent boundary layer distorted by curvature-induced favorable pressure gradients and large-scale (k_s^+ ≈ 300) uniform surface roughness. Nine models were tested at three separate locations. Three pressure gradient strengths (a nominally zero, a weak, and a strong favorable pressure gradient) and three roughness topologies (aerodynamically smooth, square, and diamond shaped roughness elements) were used. Highly resolved planar measurements of mean and fluctuating velocity components were accomplished using particle image velocimetry. Stagnation pressure profiles were acquired with a traversing Pitot probe. Surface pressure distributions were characterized using pressure sensitive paint. Finally, flow visualization was accomplished using schlieren photographs. Roughness topology had a significant effect on the boundary layer mean and turbulent properties due to shock boundary layer interactions.
Favorable pressure gradients had the expected stabilizing effect on turbulent properties, but the improvements were less significant for models with surface roughness near the wall due to increased tendency towards flow separation. It was documented that proper roughness selection coupled with a sufficiently strong favorable pressure gradient produced regions of "negative" production in the transport of turbulent stress. This led to localized areas of significant turbulence stress reduction. With proper roughness selection and sufficient favorable pressure gradient strength, it is believed that localized relaminarization of the boundary layer is possible.
Concurrent Spectral and Separation-space Views of Small-scale Anisotropy in Rotating Turbulence
NASA Astrophysics Data System (ADS)
Vallefuoco, D.; Godeferd, F. S.; Naso, A.
2017-12-01
Rotating turbulence is central in astrophysical, geophysical and industrial flows. A background rotation about a fixed axis introduces significant anisotropy in the turbulent dynamics through both linear and nonlinear mechanisms. The flow regime can be characterized by two independent non-dimensional parameters, e.g. the Reynolds and Rossby numbers or, equivalently, the ratio of the integral scale to the Kolmogorov scale L/η, and the ratio r_Z/L, where r_Z = √(ε/Ω³) is the Zeman scale, ε is the mean dissipation and Ω is the rotation rate. r_Z is the scale at which the inertial timescale equals the rotation timescale. According to classical dimensional arguments (Zeman 1994), if the Reynolds number is large, scales much larger than r_Z are mainly affected by rotation while scales much smaller than r_Z are dominated by the nonlinear dynamics and are expected to recover isotropy. In this work, we characterize incompressible rotating turbulence scale- and direction-dependent anisotropy through high Reynolds number pseudo-spectral forced DNS. We first focus on energy direction-dependent spectra in Fourier space: we show that a high anisotropy small wavenumber range and a low anisotropy large wavenumber range arise. Importantly, anisotropy arises even at scales much smaller than r_Z and no small-scale isotropy is observed in our DNS, in contrast with previous numerical results (Delache et al. 2014, Mininni et al. 2012) but in agreement with experiments (Lamriben et al. 2011). Then, we estimate the value of the threshold wavenumber k_T between these two anisotropic ranges for a large number of runs, and show that it corresponds to the scale at which dissipative effects are of the same order as those of rotation. Therefore, in the asymptotic inviscid limit, k_T tends to infinity and only the low-wavenumber anisotropic range should persist. In this range anisotropy decreases with wavenumber, which is consistent with the classical Zeman argument.
In addition, anisotropy at scales much smaller than r_Z can be detected in physical space too, in particular for the third-order two-point vector moment F = ⟨δu² δu⟩, where δu is the velocity increment. We find the expected inertial trends for F (Galtier 2009) at scales sufficiently larger than the dissipative scale, while smaller scales exhibit qualitatively opposite anisotropic features.
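The Zeman scale that organizes this discussion is a one-line computation. A minimal sketch with illustrative numbers (not values from the study):

```python
import numpy as np

def zeman_scale(eps, omega):
    # r_Z = sqrt(eps / omega^3): the scale at which the inertial timescale
    # equals the rotation timescale (eps: mean dissipation, omega: rotation rate).
    return np.sqrt(eps / omega**3)

# Illustrative values: eps = 1e-4 m^2/s^3, omega = 1.0 rad/s gives r_Z = 0.01 m.
r_z = zeman_scale(1e-4, 1.0)
# Faster rotation pushes r_Z to smaller scales, enlarging the rotation-
# dominated range of the spectrum.
```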
Spatially Resolved Spectroscopy of Narrow-line Seyfert 1 Host Galaxies
NASA Astrophysics Data System (ADS)
Scharwächter, J.; Husemann, B.; Busch, G.; Komossa, S.; Dopita, M. A.
2017-10-01
We present optical integral field spectroscopy for five z < 0.062 narrow-line Seyfert 1 (NLS1) galaxies, probing their host galaxies at ≳2-3 kpc scales. Emission lines from the active galactic nucleus (AGN) and the large-scale host galaxy are analyzed separately, based on an AGN-host decomposition technique. The host galaxy gas kinematics indicates large-scale gas rotation in all five sources. At the probed scales of ≳2-3 kpc, the host galaxy gas is found to be predominantly ionized by star formation without any evidence of a strong AGN contribution. None of the five objects shows specific star formation rates (SFRs) exceeding the main sequence of low-redshift star-forming galaxies. The specific SFRs for MCG-05-01-013 and WPVS 007 are roughly consistent with the main sequence, while ESO 399-IG20, MS 22549-3712, and TON S180 show lower specific SFRs, intermediate to the main sequence and the red quiescent galaxies. The host galaxy metallicities, derived for the two sources with sufficient data quality (ESO 399-IG20 and MCG-05-01-013), indicate central oxygen abundances just below the low-redshift mass-metallicity relation. Based on this initial case study, we outline a comparison of AGN and host galaxy parameters as a starting point for future extended NLS1 studies with similar methods.
NASA Astrophysics Data System (ADS)
Kirkil, Gokhan; Constantinescu, George
2009-06-01
Detailed knowledge of the dynamics of large-scale turbulence structures is needed to understand the geomorphodynamic processes around in-stream obstacles present in rivers. Detached Eddy Simulation is used to study the flow past a high-aspect-ratio rectangular cylinder (plate) mounted on the flat bed of a relatively shallow channel at a channel Reynolds number of 2.4 × 10^5. Similar to other flows past surface-mounted bluff bodies, the large amplification of the turbulence inside the horseshoe vortex system occurs because the core of the main necklace vortex is subject to large-scale bimodal oscillations. The presence of a sharp edge at the flanks of the obstruction fixes the position of the flow separation at all depths and induces the formation and shedding of very strong wake rollers over the whole channel depth. Compared with the case of a circular cylinder where the intensity of the rollers decays significantly in the near-bed region because the incoming flow velocity is not sufficient to force the wake to transition from subcritical to supercritical regime, in the case of a high-aspect-ratio rectangular cylinder the passage of the rollers was found to induce high bed-shear stresses at large distances (6-8 D) behind the obstruction. Also, the nondimensional values of the pressure root-mean-square fluctuations at the bed were found to be about 1 order of magnitude higher than the ones predicted for circular cylinders. Overall, this shows that the shape of the in-stream obstruction can greatly modify the dynamics of the large-scale coherent structures, the nature of their interactions, and ultimately, their capability to entrain and transport sediment particles and the speed at which the scour process evolves during its initial stages.
NASA Technical Reports Server (NTRS)
1976-01-01
Results of studies performed on the magnetospheric and plasma portion of the AMPS are presented. Magnetospheric and plasma in space experiments and instruments are described along with packaging (palletization) concepts. The described magnetospheric and plasma experiments were considered as separate entities. Instrumentation requirements and operations were formulated to provide sufficient data for unambiguous interpretation of results without relying upon other experiments of the series. Where ground observations are specified, an assumption was made that large-scale additions or modifications to existing facilities were not required.
Summary appraisals of the Nation's ground-water resources; Great Basin region
Eakin, Thomas E.; Price, Don; Harrill, J.R.
1976-01-01
Only a few areas of the Great Basin Region have been studied in sufficient detail to enable adequate design of an areawide groundwater development. These areas already have been developed. As of 1973, data for broadly outlining the ground-water resources of the region had been obtained. However, if large-scale planned development is to become a reality, a program for obtaining adequate hydrologic and related data would be a prerequisite. Ideally, the data should be obtained in time to be available for the successively more intensive levels of planning required to implement developments.
Reinforcing loose foundation stones in trait-based plant ecology.
Shipley, Bill; De Bello, Francesco; Cornelissen, J Hans C; Laliberté, Etienne; Laughlin, Daniel C; Reich, Peter B
2016-04-01
The promise of "trait-based" plant ecology is one of generalized prediction across organizational and spatial scales, independent of taxonomy. This promise is a major reason for the increased popularity of this approach. Here, we argue that some important foundational assumptions of trait-based ecology have not received sufficient empirical evaluation. We identify three such assumptions and, where possible, suggest methods of improvement: (i) traits are functional to the degree that they determine individual fitness, (ii) intraspecific variation in functional traits can be largely ignored, and (iii) functional traits show general predictive relationships to measurable environmental gradients.
Non-gaussian statistics of pencil beam surveys
NASA Technical Reports Server (NTRS)
Amendola, Luca
1994-01-01
We study the effect of the non-Gaussian clustering of galaxies on the statistics of pencil beam surveys. We derive the probability from the power spectrum peaks by means of an Edgeworth expansion and find that the higher order moments of the galaxy distribution play a dominant role. The probability of obtaining the 128 Mpc/h periodicity found in pencil beam surveys is raised by more than one order of magnitude, up to 1%. Further data are needed to decide if a non-Gaussian distribution alone is sufficient to explain the 128 Mpc/h periodicity, or if extra large-scale power is necessary.
COBE DMR-normalized open inflation cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.; Ratra, Bharat; Sugiyama, Naoshi; Banday, Anthony J.
1995-01-01
A cut-sky orthogonal mode analysis of the 2 year COBE DMR 53 and 90 GHz sky maps (in Galactic coordinates) is used to determine the normalization of an open inflation model based on the cold dark matter (CDM) scenario. The normalized model is compared to measures of large-scale structure in the universe. Although the DMR data alone does not provide sufficient discriminative power to prefer a particular value of the mass density parameter, the open model appears to be reasonably consistent with observations when Ω_0 is approximately 0.3-0.4 and merits further study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aburjania, G. D.; Machabeli, G. Z.; Kharshiladze, O. A.
2006-07-15
The modulational instability in a plasma in a strong constant external magnetic field is considered. The plasmon condensate is modulated not by conventional low-frequency ion sound but by the beatings of two high-frequency transverse electromagnetic waves propagating along the magnetic field. The instability reduces the spatial scales of Langmuir turbulence along the external magnetic field and generates electromagnetic fields. It is shown that, for a pump wave with a sufficiently large amplitude, the effect described in the present paper can be a dominant nonlinear process.
Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang
2014-01-01
We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.
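The abstract does not spell out the SPTA-based algorithms, so the following is only a hedged sketch of one shortest-total-processing-time dispatch rule for a two-machine no-wait flow shop with release dates; the timing recursion and the tiny instance are illustrative assumptions, not the authors' algorithms.

```python
def spt_no_wait(jobs):
    """SPT-style dispatch for a 2-machine no-wait flow shop with release dates.

    jobs: list of (release, p1, p2). Returns (sequence, total completion time).
    """
    pending = sorted(jobs)          # order by release date for availability checks
    m1_free = m2_free = 0.0
    total, seq = 0.0, []
    while pending:
        t = max(m1_free, pending[0][0])              # earliest feasible decision time
        avail = [j for j in pending if j[0] <= t]
        job = min(avail, key=lambda j: j[1] + j[2])  # shortest total work first
        pending.remove(job)
        r, p1, p2 = job
        # No-wait constraint: the job must move to machine 2 the instant it
        # finishes machine 1, so delay its start until M2 will be free then.
        s = max(r, m1_free, m2_free - p1)
        m1_free = s + p1
        m2_free = s + p1 + p2
        total += m2_free            # this job completes on machine 2 at m2_free
        seq.append(job)
    return seq, total

seq, total = spt_no_wait([(0.0, 2.0, 2.0), (0.0, 1.0, 1.0)])
# The shorter job is sequenced first; completions are 2 and 5, so total is 7.
```

The asymptotic-optimality claim in the abstract is about rules of this flavor: as the number of jobs grows, the gap between such a dispatch order and the optimum vanishes relative to the objective value.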
NASA Astrophysics Data System (ADS)
Kuwahara, Takuya; Moras, Gianpietro; Moseler, Michael
2017-09-01
Large-scale quantum molecular dynamics of water-lubricated diamond (111) surfaces in sliding contact reveals multiple friction regimes. While water starvation causes amorphization of the tribological interface, small H2O traces are sufficient to preserve crystallinity. This can result in high friction due to cold welding via ether groups or in ultralow friction due to aromatic surface passivation triggered by tribo-induced Pandey reconstruction. At higher water coverage, Grotthuss-type diffusion and H2O dissociation yield dense H/OH surface passivation leading to another ultralow friction regime.
Rapid step-gradient purification of mitochondrial DNA.
Welter, C; Meese, E; Blin, N
1988-01-01
A convenient modification of the step gradient (CsCl/ethidium bromide) procedure is described. This rapid method allows isolation of covalently closed circular DNA separated from contaminating proteins, RNA and chromosomal DNA in ca. 5 h. Large scale preparations can be performed for circular DNA from eukaryotic organelles (mitochondria). The protocol uses organelle pelleting/NaCl-sarcosyl incubation steps for mitochondria followed by a CsCl step gradient and exhibits yields equal to the conventional procedures. It results in DNA sufficiently pure to be used for restriction endonuclease analysis, subcloning, 5'-end labeling, gel retention assays, and various types of hybridization.
Evaluating multi-level models to test occupancy state responses of Plethodontid salamanders
Kroll, Andrew J.; Garcia, Tiffany S.; Jones, Jay E.; Dugger, Catherine; Murden, Blake; Johnson, Josh; Peerman, Summer; Brintz, Ben; Rochelle, Michael
2015-01-01
Plethodontid salamanders are diverse and widely distributed taxa and play critical roles in ecosystem processes. Due to salamander use of structurally complex habitats, and because only a portion of a population is available for sampling, evaluation of sampling designs and estimators is critical to provide strong inference about Plethodontid ecology and responses to conservation and management activities. We conducted a simulation study to evaluate the effectiveness of multi-scale and hierarchical single-scale occupancy models in the context of a Before-After Control-Impact (BACI) experimental design with multiple levels of sampling. Also, we fit the hierarchical single-scale model to empirical data collected for Oregon slender and Ensatina salamanders across two years on 66 forest stands in the Cascade Range, Oregon, USA. All models were fit within a Bayesian framework. Estimator precision in both models improved with increasing numbers of primary and secondary sampling units, underscoring the potential gains accrued when adding secondary sampling units. Both models showed evidence of estimator bias at low detection probabilities and low sample sizes; this problem was particularly acute for the multi-scale model. Our results suggested that sufficient sample sizes at both the primary and secondary sampling levels could ameliorate this issue. Empirical data indicated Oregon slender salamander occupancy was associated strongly with the amount of coarse woody debris (posterior mean = 0.74; SD = 0.24); Ensatina occupancy was not associated with amount of coarse woody debris (posterior mean = -0.01; SD = 0.29). Our simulation results indicate that either model is suitable for use in an experimental study of Plethodontid salamanders provided that sample sizes are sufficiently large. However, hierarchical single-scale and multi-scale models describe different processes and estimate different parameters. 
As a result, we recommend careful consideration of study questions and objectives prior to sampling data and fitting models.
Initial Low-Reynolds Number Iced Aerodynamic Performance for CRM Wing
NASA Technical Reports Server (NTRS)
Woodard, Brian; Diebold, Jeff; Broeren, Andy; Potapczuk, Mark; Lee, Sam; Bragg, Michael
2015-01-01
NASA, FAA, ONERA, and other partner organizations have embarked on a significant, collaborative research effort to address the technical challenges associated with icing on large scale, three-dimensional swept wings. These are extremely complex phenomena important to the design, certification and safe operation of small and large transport aircraft. There is increasing demand to balance trade-offs in aircraft efficiency, cost and noise that tend to compete directly with allowable performance degradations over an increasing range of icing conditions. Computational fluid dynamics codes have reached a level of maturity that they are being proposed by manufacturers for use in certification of aircraft for flight in icing. However, sufficient high-quality data to evaluate their performance on iced swept wings are not currently available in the public domain and significant knowledge gaps remain.
Collective synthesis of natural products by means of organocascade catalysis.
Jones, Spencer B; Simmons, Bryon; Mastracchio, Anthony; MacMillan, David W C
2011-07-13
Organic chemists are now able to synthesize small quantities of almost any known natural product, given sufficient time, resources and effort. However, translation of the academic successes in total synthesis to the large-scale construction of complex natural products and the development of large collections of biologically relevant molecules present significant challenges to synthetic chemists. Here we show that the application of two nature-inspired techniques, namely organocascade catalysis and collective natural product synthesis, can facilitate the preparation of useful quantities of a range of structurally diverse natural products from a common molecular scaffold. The power of this concept has been demonstrated through the expedient, asymmetric total syntheses of six well-known alkaloid natural products: strychnine, aspidospermidine, vincadifformine, akuammicine, kopsanone and kopsinine. ©2011 Macmillan Publishers Limited. All rights reserved
Holography and the Coleman-Mermin-Wagner theorem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anninos, Dionysios; Hartnoll, Sean A.; Iqbal, Nabil
2010-09-15
In 2+1 dimensions at finite temperature, spontaneous symmetry breaking of global symmetries is precluded by large thermal fluctuations of the order parameter. The holographic correspondence implies that analogous effects must also occur in 3+1 dimensional theories with gauged symmetries in certain curved spacetimes with horizon. By performing a one loop computation in the background of a holographic superconductor, we show that bulk quantum fluctuations wash out the classical order parameter at sufficiently large distance scales. The low temperature phase is seen to exhibit algebraic long-range order. Beyond the specific example we study, holography suggests that IR singular quantum fluctuations of the fields and geometry will play an interesting role for many 3+1 dimensional asymptotically anti-de Sitter spacetimes with planar horizon.
Scalable loading of a two-dimensional trapped-ion array
Bruzewicz, Colin D.; McConnell, Robert; Chiaverini, John; Sage, Jeremy M.
2016-01-01
Two-dimensional arrays of trapped-ion qubits are attractive platforms for scalable quantum information processing. Sufficiently rapid reloading capable of sustaining a large array, however, remains a significant challenge. Here with the use of a continuous flux of pre-cooled neutral atoms from a remotely located source, we achieve fast loading of a single ion per site while maintaining long trap lifetimes and without disturbing the coherence of an ion quantum bit in an adjacent site. This demonstration satisfies all major criteria necessary for loading and reloading extensive two-dimensional arrays, as will be required for large-scale quantum information processing. Moreover, the already high loading rate can be increased by loading ions in parallel with only a concomitant increase in photo-ionization laser power and no need for additional atomic flux. PMID:27677357
Study of Evaporation Rate of Water in Hydrophobic Confinement using Forward Flux Sampling
NASA Astrophysics Data System (ADS)
Sharma, Sumit; Debenedetti, Pablo G.
2012-02-01
Drying of hydrophobic cavities is of interest in understanding biological self-assembly, protein stability and the opening and closing of ion channels. Liquid-to-vapor transition of water in confinement is associated with large kinetic barriers which preclude its study using conventional simulation techniques. Using forward flux sampling to study the kinetics of the transition between two hydrophobic surfaces, we show that a) the free energy barriers to evaporation scale linearly with the distance between the two surfaces, d; b) the evaporation rates increase as the lateral size of the surfaces, L, increases; and c) the transition state to evaporation for sufficiently large L is a cylindrical vapor cavity connecting the two hydrophobic surfaces. Finally, we decouple the effects of confinement geometry and surface chemistry on the evaporation rates.
NASA Astrophysics Data System (ADS)
Shafii, Mahyar; Basu, Nandita; Schiff, Sherry; Van Cappellen, Philippe
2017-04-01
Dramatic increase in nitrogen circulating in the biosphere due to anthropogenic activities has resulted in impairment of water quality in groundwater and surface water causing eutrophication in coastal regions. Understanding the fate and transport of nitrogen from landscape to coastal areas requires exploring the drivers of nitrogen processes in both time and space, as well as the identification of appropriate flow pathways. Conceptual models can be used as diagnostic tools to provide insights into such controls. However, diagnostic evaluation of coupled hydrological-biogeochemical models is challenging. This research proposes a top-down methodology utilizing hydrochemical signatures to develop conceptual models for simulating the integrated streamflow and nitrate responses while taking into account dominant controls on nitrate variability (e.g., climate, soil water content, etc.). Our main objective is to seek appropriate model complexity that sufficiently reproduces multiple hydrological and nitrate signatures. Having developed a suitable conceptual model for a given watershed, we employ it in sensitivity studies to demonstrate the dominant process controls that contribute to the nitrate response at scales of interest. We apply the proposed approach to nitrate simulation in a range of small to large sub-watersheds in the Grand River Watershed (GRW) located in Ontario. Such a multi-basin modeling experiment will enable us to address process scaling and investigate the consequences of lumping processes in terms of models' predictive capability. The proposed methodology can be applied to the development of large-scale models that can help decision-making associated with nutrient management at regional scale.
NASA Astrophysics Data System (ADS)
Lintner, B. R.; Loikith, P. C.; Pike, M.; Aragon, C.
2017-12-01
Climate change information is increasingly required at impact-relevant scales. However, most state-of-the-art climate models are not of sufficiently high spatial resolution to resolve features explicitly at such scales. This challenge is particularly acute in regions of complex topography, such as the Pacific Northwest of the United States. To address this scale mismatch problem, we consider large-scale meteorological patterns (LSMPs), which can be resolved by climate models and associated with the occurrence of local scale climate and climate extremes. In prior work, using self-organizing maps (SOMs), we computed LSMPs over the northwestern United States (NWUS) from daily reanalysis circulation fields and further related these to the occurrence of observed extreme temperatures and precipitation: SOMs were used to group LSMPs into 12 nodes or clusters spanning the continuum of synoptic variability over the regions. Here this observational foundation is utilized as an evaluation target for a suite of global climate models from the Fifth Phase of the Coupled Model Intercomparison Project (CMIP5). Evaluation is performed in two primary ways. First, daily model circulation fields are assigned to one of the 12 reanalysis nodes based on minimization of the mean square error. From this, a bulk model skill score is computed measuring the similarity between the model and reanalysis nodes. Next, SOMs are applied directly to the model output and compared to the nodes obtained from reanalysis. Results reveal that many of the models have LSMPs analogous to the reanalysis, suggesting that the models reasonably capture observed daily synoptic states.
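The first evaluation step above (assigning each model day to the reanalysis SOM node with minimum mean square error) can be sketched as follows; the grid size and fields are random placeholders, not CMIP5 or reanalysis data.

```python
import numpy as np

rng = np.random.default_rng(1)

# 12 SOM nodes of gridded circulation anomalies (placeholder 20x30 grids).
nodes = rng.standard_normal((12, 20, 30))
# A "model day" constructed to lie near node 4, plus small noise.
day = nodes[4] + 0.1 * rng.standard_normal((20, 30))

def assign_node(field, nodes):
    # Mean square error between the daily field and every node pattern;
    # the day is assigned to the node minimizing that error.
    mse = np.mean((nodes - field) ** 2, axis=(1, 2))
    return int(np.argmin(mse))

best_node = assign_node(day, nodes)   # -> 4
```

Counting how model days distribute over the 12 nodes, versus the reanalysis distribution, is then the basis for a bulk skill score.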
Jung, Yousung; Shao, Yihan; Head-Gordon, Martin
2007-09-01
The scaled opposite spin Møller-Plesset method (SOS-MP2) is an economical way of obtaining correlation energies that are computationally cheaper, and yet, in a statistical sense, of higher quality than standard MP2 theory, by introducing one empirical parameter. But SOS-MP2 still has a fourth-order scaling step that makes the method inapplicable to very large molecular systems. We reduce the scaling of SOS-MP2 by exploiting the sparsity of expansion coefficients and local integral matrices, by performing local auxiliary basis expansions for the occupied-virtual product distributions. To exploit sparsity of 3-index local quantities, we use a blocking scheme in which entire zero-rows and columns, for a given third global index, are deleted by comparison against a numerical threshold. This approach minimizes sparse matrix book-keeping overhead, and also provides sufficiently large submatrices after blocking, to allow efficient matrix-matrix multiplies. The resulting algorithm is formally cubic scaling, and requires only moderate computational resources (quadratic memory and disk space) and, in favorable cases, is shown to yield effective quadratic scaling behavior in the size regime we can apply it to. Errors associated with local fitting using the attenuated Coulomb metric and numerical thresholds in the blocking procedure are found to be insignificant in terms of the predicted relative energies. A diverse set of test calculations shows that the size of system where significant computational savings can be achieved depends strongly on the dimensionality of the system, and the extent of localizability of the molecular orbitals. Copyright 2007 Wiley Periodicals, Inc.
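The row/column blocking idea described above can be illustrated on a small dense matrix; this is a generic sketch of threshold-based blocking, not the actual SOS-MP2 implementation.

```python
import numpy as np

def block_compress(M, tau=1e-10):
    # Keep only rows and columns whose largest-magnitude entry exceeds the
    # numerical threshold tau; everything else is treated as an all-zero
    # row/column and deleted, leaving a dense submatrix suitable for
    # efficient matrix-matrix multiplies with minimal book-keeping.
    rows = np.max(np.abs(M), axis=1) > tau
    cols = np.max(np.abs(M), axis=0) > tau
    return M[np.ix_(rows, cols)], rows, cols

M = np.zeros((4, 4))
M[1, 2] = 0.5      # the only numerically significant entry
M[3, 2] = 1e-12    # below threshold: its row is dropped entirely
sub, kept_rows, kept_cols = block_compress(M)
# sub is the 1x1 dense block [[0.5]]
```

In the method itself this pruning is applied per third global index of the 3-index local quantities, so each surviving block stays large enough for fast dense multiplication.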
[Development of a Japanese version of the TALE scale].
Ochiai, Tsutomu; Oguchi, Takashi
2013-12-01
The Thinking About Life Experiences (TALE) Scale (Bluck & Alea, 2011) has three subscales that assess the self, social, and directive functions of autobiographical memory. This study constructs a Japanese version of the TALE Scale and examines its reliability and validity. Fifteen items that assess the three functions of autobiographical memory were translated into Japanese. We conducted an online investigation with 600 men and women between 20-59 years of age. In Study 1, exploratory and confirmatory factor analysis identified that the three-factor structure of the Japanese version of the TALE Scale was the same as the original TALE Scale. Sufficient internal consistency of the scale was found, and the construct validity of the scale was supported by correlation analysis. Study 2 confirmed that the test-retest reliabilities of the three subscales were sufficient. Thus, this Japanese version of the TALE Scale is useful to assess autobiographical memory functions in Japan.
Power-law expansion of the Universe from the bosonic Lorentzian type IIB matrix model
NASA Astrophysics Data System (ADS)
Ito, Yuta; Nishimura, Jun; Tsuchiya, Asato
2015-11-01
Recent studies on the Lorentzian version of the type IIB matrix model show that a (3+1)D expanding universe emerges dynamically from the (9+1)D space-time predicted by superstring theory. Here we study a bosonic matrix model obtained by omitting the fermionic matrices. With the adopted simplification and the use of a large-scale parallel computer, we are able to perform Monte Carlo calculations with matrix size up to N = 512, which is twenty times larger than that used previously for studies of the original model. When the matrix size is larger than some critical value N_c ≃ 110, we find that a (3+1)D expanding universe emerges dynamically with a clear large-N scaling property. Furthermore, the observed increase of the spatial extent with time t at sufficiently late times is consistent with a power-law behavior t^(1/2), which is reminiscent of the expanding behavior of the Friedmann-Robertson-Walker universe in the radiation-dominated era. We discuss possible implications of this result for the original supersymmetric model including fermionic matrices.
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini
2017-04-01
Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial and error, by solving the groundwater flow for a properly selected set of alternative but physically plausible geologic structures. In this work, we use: a) dimensional analysis, and b) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.
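The core quantity, the absolute head-estimation error as a function of distance from the nearest measuring location, can be sketched on a toy 1-D transect. Everything here is illustrative: the synthetic head profile, the well spacing, and linear interpolation stand in for the paper's stochastic aquifer structures and full groundwater-flow solutions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 501)                   # standardized transect
head = np.cumsum(rng.normal(0.0, 0.01, x.size))  # toy correlated head profile
obs_idx = np.arange(0, x.size, 125)              # observation wells every 0.25

# Estimate the head between wells by linear interpolation, then relate
# the absolute error to the distance from the nearest measuring location.
est = np.interp(x, x[obs_idx], head[obs_idx])
abs_err = np.abs(est - head)
dist = np.min(np.abs(x[:, None] - x[obs_idx][None, :]), axis=1)

# Mean absolute error close to a well vs. near mid-segment
near_err = abs_err[dist < 0.02].mean()
far_err = abs_err[dist > 0.10].mean()
```

Binning `abs_err` by `dist` over an ensemble of synthetic structures would give the error distributions whose bounds and multi-modal features the paper analyzes.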
Characterizing unknown systematics in large scale structure surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Nishant; Ho, Shirley; Myers, Adam D.
Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and as a result in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study.
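The tolerance test can be sketched as follows: disjoint redshift slices should be nearly uncorrelated absent shared systematics, so bins with large normalized cross-power get flagged. The spectra and the tolerance value below are invented, and this simple normalized ratio is only a stand-in for the contamination parameter defined in the paper.

```python
import numpy as np

def flag_contaminated_bins(cl_cross, cl_auto_i, cl_auto_j, tol=0.05):
    """Flag angular power spectrum bins whose normalized cross-power
    between two disjoint redshift slices exceeds a tolerance level."""
    r = cl_cross / np.sqrt(cl_auto_i * cl_auto_j)  # normalized cross-corr
    return np.abs(r) > tol

cl_i = np.array([1.0, 0.8, 0.5, 0.3])       # auto-spectrum, slice i (toy)
cl_j = np.array([0.9, 0.7, 0.6, 0.2])       # auto-spectrum, slice j (toy)
cl_x = np.array([0.01, 0.30, 0.02, 0.001])  # cross-spectrum; bin 1 is bad
bad = flag_contaminated_bins(cl_x, cl_i, cl_j)
```

Bins flagged this way would then be discarded from the angular power spectrum before estimating the bias.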
NASA Astrophysics Data System (ADS)
Egerer, Sabine; Claussen, Martin; Reick, Christian; Stanelle, Tanja
2017-09-01
The abrupt change in North Atlantic dust deposition found in sediment records has been associated with a rapid large-scale transition of the Holocene Saharan landscape. We hypothesize that gradual changes in the landscape may have caused this abrupt shift in dust deposition either because of the non-linearity in dust activation or because of the heterogeneous distribution of major dust sources. To test this hypothesis, we investigate the response of North Atlantic dust deposition to a prescribed 1) gradual and spatially homogeneous decrease and 2) gradual southward retreat of North African vegetation and lakes during the Holocene, using the aerosol-climate model ECHAM-HAM. In our simulations, we do not find evidence of an abrupt increase in dust deposition as observed in marine sediment records along the Northwest African margin. We conclude that such gradual changes in landscape are not sufficient to explain the observed abrupt changes in dust accumulation in marine sediment records. Instead, our results point to a rapid large-scale retreat of vegetation and lakes in the area of significant dust sources.
NASA Astrophysics Data System (ADS)
Sakata, Yasuyo
Interviews, collection of documentary materials, photographic surveys, and a questionnaire were carried out in the “n” Community in the “y” District of Hakusan City, Ishikawa Prefecture, to investigate the actual condition of paddy-field levee maintenance in an area where the land-rental market was developing, large-scale farming was dominant, and farmland was geographically scattered. In the study zone, 1) an agricultural production legal person rent-cultivated some of the paddy fields and maintained the levees, and 2) another agricultural production legal person rent-cultivated some of the soybean fields for crop changeover while the landowners maintained the levees. The results indicated that sufficient maintenance was executed on the levees of the paddy fields cultivated by the agricultural production legal persons, the soybean fields for crop changeover, and the paddy fields cultivated by the landowners. The reasons are considered to be managerial strategy, economic incentives, and mutual monitoring and cross-regulatory mechanisms.
In-Flight Measurement of the Absolute Energy Scale of the Fermi Large Area Telescope
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Barbielini, G; Bastieri, D.; Bechtol, K.; Bellazzini, R.;
2012-01-01
The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to survey the gamma-ray sky from 20 MeV to several hundred GeV. In this energy band there are no astronomical sources with sufficiently well known and sharp spectral features to allow an absolute calibration of the LAT energy scale. However, the geomagnetic cutoff in the cosmic-ray electron-plus-positron (CRE) spectrum in low Earth orbit does provide such a spectral feature. The energy and spectral shape of this cutoff can be calculated with the aid of a numerical code tracing charged particles in the Earth's magnetic field. By comparing the cutoff value with that measured by the LAT in different geomagnetic positions, we have obtained several calibration points between approx. 6 and approx. 13 GeV with an estimated uncertainty of approx. 2%. An energy calibration with such high accuracy reduces the systematic uncertainty in LAT measurements of, for example, the spectral cutoff in the emission from gamma-ray pulsars.
Optimization of Industrial Ozone Generation with Pulsed Power
NASA Astrophysics Data System (ADS)
Lopez, Jose; Guerrero, Daniel; Freilich, Alfred; Ramoino, Luca; Seton Hall University Team; Degremont Technologies-Ozonia Team
2013-09-01
Ozone (O3) is widely used for applications ranging from various industrial chemical synthesis processes to large-scale water treatment. The consequent surge in worldwide demand has brought about the requirement for ozone generation at the rate of several hundred grams per kilowatt hour (g/kWh). For many years, ozone has been generated by means of dielectric barrier discharges (DBD), in which a high-energy electric field between two electrodes, separated by a dielectric and a gap containing pure oxygen or air, produces various microplasmas. The resultant microplasmas provide sufficient energy to dissociate the oxygen molecules while allowing the proper energetic channels for the formation of ozone. This presentation will review the current power schemes used for large-scale ozone generation and explore the use of high-voltage nanosecond pulses with reduced electric fields. The microplasmas created in a high reduced electric field are expected to be more efficient for ozone generation. This is confirmed by the present results, which show that the efficiency of ozone generation increases by over eight times when the rise time and pulse duration are shortened. Department of Physics, South Orange, NJ, USA.
In-Flight Measurement of the Absolute Energy Scale of the Fermi Large Area Telescope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ackermann, M.; /Stanford U., HEPL /SLAC /KIPAC, Menlo Park; Ajello, M.
The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to survey the gamma-ray sky from 20 MeV to several hundred GeV. In this energy band there are no astronomical sources with sufficiently well known and sharp spectral features to allow an absolute calibration of the LAT energy scale. However, the geomagnetic cutoff in the cosmic ray electron-plus-positron (CRE) spectrum in low Earth orbit does provide such a spectral feature. The energy and spectral shape of this cutoff can be calculated with the aid of a numerical code tracing charged particles in the Earth's magnetic field. By comparing the cutoff value with that measured by the LAT in different geomagnetic positions, we have obtained several calibration points between ~6 and ~13 GeV with an estimated uncertainty of ~2%. An energy calibration with such high accuracy reduces the systematic uncertainty in LAT measurements of, for example, the spectral cutoff in the emission from gamma-ray pulsars.
TURBULENCE IN THE SOLAR WIND MEASURED WITH COMET TAIL TEST PARTICLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeForest, C. E.; Howard, T. A.; Matthaeus, W. H.
2015-10-20
By analyzing the motions of test particles observed remotely in the tail of Comet Encke, we demonstrate that the solar wind undergoes turbulent processing en route from the Sun to the Earth and that the kinetic energy entrained in the large-scale turbulence is sufficient to explain the well-known anomalous heating of the solar wind. Using the heliospheric imaging (HI-1) camera on board NASA's STEREO-A spacecraft, we have observed an ensemble of compact features in the comet tail as they became entrained in the solar wind near 0.4 AU. We find that the features are useful as test particles, via mean-motion analysis and a forward model of pickup dynamics. Using population analysis of the ensemble's relative motion, we find a regime of random-walk diffusion in the solar wind, followed, on larger scales, by a surprising regime of semiconfinement that we attribute to turbulent eddies in the solar wind. The entrained kinetic energy of the turbulent motions represents a sufficient energy reservoir to heat the solar wind to observed temperatures at 1 AU. We determine the Lagrangian-frame diffusion coefficient in the diffusive regime, derive upper limits for the small-scale coherence length of solar wind turbulence, compare our results to existing Eulerian-frame measurements, and compare the turbulent velocity with the size of the observed eddies extrapolated to 1 AU. We conclude that the slow solar wind is fully mixed by turbulence on scales corresponding to a 1–2 hr crossing time at Earth, and that solar wind variability on timescales shorter than 1–2 hr is therefore dominated by turbulent processing rather than by direct solar effects.
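The Lagrangian diffusion-coefficient estimate in the diffusive regime can be sketched with a toy ensemble of 1-D random walks standing in for the test particles. The step statistics below are invented; the paper's analysis uses the observed relative feature motions, not a simulation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_particles, n_steps, dt = 2000, 400, 1.0
sigma = 0.5  # step standard deviation per unit time (toy value)

# Ensemble of 1-D random walks standing in for the relative motion of
# comet-tail test particles in the random-walk diffusion regime.
steps = rng.normal(0.0, sigma, size=(n_particles, n_steps))
paths = np.cumsum(steps, axis=1)
t = dt * np.arange(1, n_steps + 1)
msd = np.mean(paths**2, axis=0)  # ensemble mean-square displacement

# In the diffusive regime MSD = 2 D t, so D follows from a linear fit.
D_fit = np.polyfit(t, msd, 1)[0] / 2.0
D_true = sigma**2 / (2.0 * dt)
```

A departure of the measured MSD from this linear growth at larger scales is what the paper interprets as the semiconfinement regime.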
NASA Technical Reports Server (NTRS)
Starr, David O'C.; Benedetti, Angela; Boehm, Matt; Brown, Philip R. A.; Gierens, Klaus M.; Girard, Eric; Giraud, Vincent; Jakob, Christian; Jensen, Eric
2000-01-01
The GEWEX Cloud System Study (GCSS, GEWEX is the Global Energy and Water Cycle Experiment) is a community activity aiming to promote development of improved cloud parameterizations for application in the large-scale general circulation models (GCMs) used for climate research and for numerical weather prediction. The GCSS strategy is founded upon the use of cloud-system models (CSMs). These are "process" models with sufficient spatial and temporal resolution to represent individual cloud elements, but spanning a wide range of space and time scales to enable statistical analysis of simulated cloud systems. GCSS also employs single-column versions of the parametric cloud models (SCMs) used in GCMs. GCSS has working groups on boundary-layer clouds, cirrus clouds, extratropical layer cloud systems, precipitating deep convective cloud systems, and polar clouds.
Dark matter and cosmological nucleosynthesis
NASA Technical Reports Server (NTRS)
Schramm, D. N.
1986-01-01
Existing dark matter problems, i.e., dynamics, galaxy formation and inflation, are considered, along with a model which proposes dark baryons as the bulk of missing matter in a fractal universe. It is shown that no combination of dark, nonbaryonic matter can either provide a cosmological density parameter value near unity or, as in the case of high energy neutrinos, allow formation of condensed matter at epochs when quasars already existed. The possibility that correlations among galactic clusters are scale-free is discussed. Such a distribution of matter would yield a fractal dimension of 1.2, close to a one-dimensional universe. Biasing, cosmic superstrings, and percolated explosions and hot dark matter are theoretical approaches that would satisfy the D = 1.2 fractal model of the large-scale structure of the universe and which would also allow sufficient dark matter in halos to close the universe.
Dias, W S; Bertrand, D; Lyra, M L
2017-06-01
Recent experimental progress on the realization of quantum systems with highly controllable long-range interactions has impelled the study of quantum phase transitions in low-dimensional systems with power-law couplings. Long-range couplings mimic higher-dimensional effects in several physical contexts. Here, we provide the exact relation between the spectral dimension d at the band bottom and the exponent α that tunes the range of power-law hoppings of a one-dimensional ideal lattice Bose gas. We also develop a finite-size scaling analysis to obtain some relevant critical exponents and the critical temperature of the BEC transition. In particular, an irrelevant dangerous scaling field has to be taken into account when the hopping range is sufficiently large to make the effective dimensionality d>4.
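A numerical sketch of the band-bottom scaling behind the effective-dimensionality argument. It assumes the standard result that for 1-D power-law hopping t(r) = 1/r^α the dispersion behaves as E(k) − E(0) ∝ k^(α−1) for 1 < α < 3; the paper's exact spectral-dimension relation is not reproduced here.

```python
import numpy as np

# Band bottom of a 1-D lattice with power-law hopping 1/r^alpha:
# E(k) = -2 * sum_r cos(k r) / r^alpha, so
# E(k) - E(0) = 2 * sum_r (1 - cos(k r)) / r^alpha.
alpha = 2.0
r = np.arange(1, 200001)            # truncated hopping-range sum
ks = np.logspace(-3, -2, 10)        # small k near the band bottom
dE = np.array([2.0 * np.sum((1.0 - np.cos(k * r)) / r**alpha) for k in ks])

# Low-k exponent from a log-log fit; for alpha = 2 it should be ~1,
# i.e. linear dispersion, mimicking a higher effective dimension than
# the quadratic dispersion of short-range hopping would give.
exponent = np.polyfit(np.log(ks), np.log(dE), 1)[0]
```

The departure of this exponent from 2 as α decreases is what makes the effective dimensionality at the band bottom exceed one, and eventually exceed four for sufficiently long-range hopping.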
NASA Astrophysics Data System (ADS)
Dias, W. S.; Bertrand, D.; Lyra, M. L.
2017-06-01
Recent experimental progress on the realization of quantum systems with highly controllable long-range interactions has impelled the study of quantum phase transitions in low-dimensional systems with power-law couplings. Long-range couplings mimic higher-dimensional effects in several physical contexts. Here, we provide the exact relation between the spectral dimension d at the band bottom and the exponent α that tunes the range of power-law hoppings of a one-dimensional ideal lattice Bose gas. We also develop a finite-size scaling analysis to obtain some relevant critical exponents and the critical temperature of the BEC transition. In particular, an irrelevant dangerous scaling field has to be taken into account when the hopping range is sufficiently large to make the effective dimensionality d > 4.
Metabolic engineering of biosynthetic pathway for production of renewable biofuels.
Singh, Vijai; Mani, Indra; Chaudhary, Dharmendra Kumar; Dhar, Pawan Kumar
2014-02-01
Metabolic engineering is an important area of research that involves editing genetic networks to overproduce a certain substance by the cells. Using a combination of genetic, metabolic, and modeling methods, useful substances have been synthesized in the past at industrial scale and in a cost-effective manner. Currently, metabolic engineering is being used to produce sufficient, economical, and eco-friendly biofuels. In the recent past, a number of efforts have been made towards engineering biosynthetic pathways for large scale and efficient production of biofuels from biomass. Given the adoption of metabolic engineering approaches by the biofuel industry, this paper reviews various approaches towards the production and enhancement of renewable biofuels such as ethanol, butanol, isopropanol, hydrogen, and biodiesel. We have also identified specific areas where more work needs to be done in the future.
Assessing field-scale biogeophysical signatures of bioremediation over a mature crude oil spill
Slater, Lee; Ntarlagiannis, Dimitrios; Atekwana, Estella; Mewafy, Farag; Revil, Andre; Skold, Magnus; Gorby, Yuri; Day-Lewis, Frederick D.; Lane, John W.; Trost, Jared J.; Werkema, Dale D.; Delin, Geoffrey N.; Herkelrath, William N.; Rectanus, H.V.; Sirabian, R.
2011-01-01
We conducted electrical geophysical measurements at the National Crude Oil Spill Fate and Natural Attenuation Research Site (Bemidji, MN). Borehole and surface self-potential measurements do not show evidence for the existence of a biogeobattery mechanism in response to the redox gradient resulting from biodegradation of oil. The relatively small self potentials recorded are instead consistent with an electrodiffusion mechanism driven by differences in the mobility of charge carriers associated with biodegradation byproducts. Complex resistivity measurements reveal elevated electrical conductivity and interfacial polarization at the water table where oil contamination is present, extending into the unsaturated zone. This finding implies that the effect of microbial cell growth/attachment, biofilm formation, and mineral weathering accompanying hydrocarbon biodegradation on complex interfacial conductivity imparts a sufficiently large electrical signal to be measured using field-scale geophysical techniques.
Materials identification using a small-scale pixellated x-ray diffraction system
NASA Astrophysics Data System (ADS)
O'Flynn, D.; Crews, C.; Drakos, I.; Christodoulou, C.; Wilson, M. D.; Veale, M. C.; Seller, P.; Speller, R. D.
2016-05-01
A transmission x-ray diffraction system has been developed using a pixellated, energy-resolving detector (HEXITEC) and a small-scale, mains operated x-ray source (Amptek Mini-X). HEXITEC enables diffraction to be measured without the requirement of incident spectrum filtration, or collimation of the scatter from the sample, preserving a large proportion of the useful signal compared with other diffraction techniques. Due to this efficiency, sufficient molecular information for material identification can be obtained within 5 s despite the relatively low x-ray source power. Diffraction data are presented from caffeine, hexamine, paracetamol, plastic explosives and narcotics. The capability to determine molecular information from aspirin tablets inside their packaging is demonstrated. Material selectivity and the potential for a sample classification model is shown with principal component analysis, through which each different material can be clearly resolved.
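The principal component analysis step can be sketched on toy diffraction spectra. Peak positions, noise level, and class sizes below are invented; real HEXITEC data would replace the synthetic matrix `X`.

```python
import numpy as np

rng = np.random.default_rng(7)

def spectrum(peak_channel, n_channels=128):
    """Toy diffraction pattern: a single Gaussian peak (illustrative)."""
    x = np.arange(n_channels)
    return np.exp(-0.5 * ((x - peak_channel) / 3.0) ** 2)

# Two "materials" with peaks at different channels, plus detector noise.
A = np.stack([spectrum(40) + 0.05 * rng.normal(size=128) for _ in range(20)])
B = np.stack([spectrum(80) + 0.05 * rng.normal(size=128) for _ in range(20)])
X = np.vstack([A, B])

# PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # projections onto the first two components

# The two materials separate cleanly along the first principal component.
pc1_A, pc1_B = scores[:20, 0], scores[20:, 0]
```

Plotting the first two score columns is the usual way to visualize the material clusters and would form the basis of the classification model mentioned above.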
Voltage Imaging of Waking Mouse Cortex Reveals Emergence of Critical Neuronal Dynamics
Scott, Gregory; Fagerholm, Erik D.; Mutoh, Hiroki; Leech, Robert; Sharp, David J.; Shew, Woodrow L.
2014-01-01
Complex cognitive processes require neuronal activity to be coordinated across multiple scales, ranging from local microcircuits to cortex-wide networks. However, multiscale cortical dynamics are not well understood because few experimental approaches have provided sufficient support for hypotheses involving multiscale interactions. To address these limitations, we used, in experiments involving mice, genetically encoded voltage indicator imaging, which measures cortex-wide electrical activity at high spatiotemporal resolution. Here we show that, as mice recovered from anesthesia, scale-invariant spatiotemporal patterns of neuronal activity gradually emerge. We show for the first time that this scale-invariant activity spans four orders of magnitude in awake mice. In contrast, we found that the cortical dynamics of anesthetized mice were not scale invariant. Our results bridge empirical evidence from disparate scales and support theoretical predictions that the awake cortex operates in a dynamical regime known as criticality. The criticality hypothesis predicts that small-scale cortical dynamics are governed by the same principles as those governing larger-scale dynamics. Importantly, these scale-invariant principles also optimize certain aspects of information processing. Our results suggest that during the emergence from anesthesia, criticality arises as information processing demands increase. We expect that, as measurement tools advance toward larger scales and greater resolution, the multiscale framework offered by criticality will continue to provide quantitative predictions and insight on how neurons, microcircuits, and large-scale networks are dynamically coordinated in the brain. PMID:25505314
Standardization of fluorine-18 manufacturing processes: new scientific challenges for PET.
Hjelstuen, Ole K; Svadberg, Anders; Olberg, Dag E; Rosser, Mark
2011-08-01
In [(18)F]fluoride chemistry, the minute amounts of radioactivity taking part in a radiolabeling reaction are easily outnumbered by other reactants. Surface areas become comparably larger and more influential than in standard fluorine chemistry, while leachables, extractables, and other components that normally are considered small impurities can have a considerable influence on the efficiency of the reaction. A number of techniques exist to give sufficient (18)F-tracer for a study in a pre-clinical or clinical system, but the chemical and pharmaceutical understanding has significant gaps when it comes to scaling up or making the reaction more efficient. Automation and standardization of [(18)F]fluoride PET tracers is a prerequisite for reproducible manufacturing across multiple PET centers. So far, large-scale, multi-site manufacture has been established only for [(18)F]FDG, but several new tracers are emerging. In general terms, this transition from small- to large-scale production has disclosed several scientific challenges that need to be addressed. There are still areas of limited knowledge in the fundamental [(18)F]fluoride chemistry. The role of pharmaceutical factors that could influence the (18)F-radiosynthesis and the gaps in precise chemistry knowledge are discussed in this review based on a normal synthesis pattern. Copyright © 2011 Elsevier B.V. All rights reserved.
Zorick, Todd; Mandelkern, Mark A
2015-01-01
Electroencephalography (EEG) is typically viewed through the lens of spectral analysis. Recently, multiple lines of evidence have demonstrated that the underlying neuronal dynamics are characterized by scale-free avalanches. These results suggest that techniques from statistical physics may be used to analyze EEG signals. We utilized a publicly available database of fourteen subjects, each with waking and sleep stage 2 EEG tracings, and observe that power-law dynamics of critical-state neuronal avalanches are not sufficient to fully describe essential features of EEG signals. We hypothesized that this could reflect discrete scale invariance (DSI) in EEG large voltage deflections (LVDs) being more prominent in waking consciousness. We isolated LVDs and analyzed logarithmically transformed LVD size probability density functions (PDF) to assess for DSI. We find evidence of increased DSI in waking, as opposed to sleep stage 2, consciousness. We also show that the signatures of DSI are specific to EEG LVDs, and not a general feature of fractal simulations with statistical properties similar to EEG. Removing only LVDs from waking EEG produces a reduction in power in the alpha and beta frequency bands. These findings may represent a new insight into the understanding of the cortical dynamics underlying consciousness.
Pfeil, Thomas; Potjans, Tobias C; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2012-01-01
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists.
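The 4-bit discretization studied here can be sketched as uniform rounding to 2^4 = 16 levels. This is a simplification for illustration; the FACETS hardware's actual weight mapping and range may differ.

```python
import numpy as np

def discretize(weights, bits=4, w_max=1.0):
    """Round weights to the nearest of 2**bits evenly spaced levels in
    [0, w_max], mimicking limited synaptic weight resolution."""
    levels = 2**bits - 1
    q = np.round(np.clip(weights, 0.0, w_max) / w_max * levels)
    return q / levels * w_max

rng = np.random.default_rng(3)
w = rng.uniform(0.0, 1.0, 10000)  # toy floating-point synaptic weights
w4 = discretize(w, bits=4)
max_err = np.max(np.abs(w4 - w))  # bounded by half a quantization step
```

Comparing network simulations run with `w` against those run with `w4` is the kind of discretization-impact estimate the study performs at the network level.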
Pfeil, Thomas; Potjans, Tobias C.; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2012-01-01
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists. PMID:22822388
First results from the IllustrisTNG simulations: matter and galaxy clustering
NASA Astrophysics Data System (ADS)
Springel, Volker; Pakmor, Rüdiger; Pillepich, Annalisa; Weinberger, Rainer; Nelson, Dylan; Hernquist, Lars; Vogelsberger, Mark; Genel, Shy; Torrey, Paul; Marinacci, Federico; Naiman, Jill
2018-03-01
Hydrodynamical simulations of galaxy formation have now reached sufficient volume to make precision predictions for clustering on cosmologically relevant scales. Here, we use our new IllustrisTNG simulations to study the non-linear correlation functions and power spectra of baryons, dark matter, galaxies, and haloes over an exceptionally large range of scales. We find that baryonic effects increase the clustering of dark matter on small scales and damp the total matter power spectrum on scales up to k ~ 10 h Mpc^-1 by 20 per cent. The non-linear two-point correlation function of the stellar mass is close to a power law over a wide range of scales and approximately invariant in time from very high redshift to the present. The two-point correlation function of the simulated galaxies agrees well with the Sloan Digital Sky Survey at its mean redshift z ≃ 0.1, both as a function of stellar mass and when split according to galaxy colour, apart from a mild excess in the clustering of red galaxies in the stellar mass range of 10^9-10^10 h^-2 M⊙. Given this agreement, the TNG simulations can make valuable theoretical predictions for the clustering bias of different galaxy samples. We find that the clustering length of the galaxy autocorrelation function depends strongly on stellar mass and redshift. Its power-law slope γ is nearly invariant with stellar mass, but declines from γ ~ 1.8 at redshift z = 0 to γ ~ 1.6 at redshift z ~ 1, beyond which the slope steepens again. We detect significant scale dependences in the bias of different observational tracers of large-scale structure, extending well into the range of the baryonic acoustic oscillations and causing nominal (yet fortunately correctable) shifts of the acoustic peaks of around ~5 per cent.
Boomerang RG flows in M-theory with intermediate scaling
NASA Astrophysics Data System (ADS)
Donos, Aristomenis; Gauntlett, Jerome P.; Rosen, Christopher; Sosa-Rodriguez, Omar
2017-07-01
We construct novel RG flows of D = 11 supergravity that asymptotically approach AdS_4 × S^7 in the UV with deformations that break spatial translations in the dual field theory. In the IR the solutions return to exactly the same AdS_4 × S^7 vacuum, with a renormalisation of relative length scales, and hence we refer to the flows as `boomerang RG flows'. For sufficiently large deformations, on the way to the IR the solutions also approach two distinct intermediate scaling regimes, each with hyperscaling violation. The first regime is Lorentz invariant with dynamical exponent z = 1 while the second has z = 5/2. Neither of the two intermediate scaling regimes is associated with exact hyperscaling-violating solutions of D = 11 supergravity. The RG flow solutions are constructed using the four-dimensional N = 2 STU gauged supergravity theory with vanishing gauge fields, but non-vanishing scalar and pseudoscalar fields. In the ABJM dual field theory the flows are driven by spatially modulated deformation parameters for scalar and fermion bilinear operators.
Fluctuations of healthy and unhealthy heartbeat intervals
NASA Astrophysics Data System (ADS)
Lan, Boon Leong; Toda, Mikito
2013-04-01
We show that the RR-interval fluctuations, defined as the differences between the natural logarithms of successive RR intervals, for healthy, congestive-heart-failure (CHF) and atrial-fibrillation (AF) subjects are well modeled by non-Gaussian stable distributions. Our results suggest that healthy or unhealthy RR-interval fluctuation can generally be modeled as a sum of a large number of independent physiological effects which are identically distributed with infinite variance. Furthermore, we show for the first time that one indicator, the scale parameter of the stable distribution, is sufficient to robustly distinguish the three groups of subjects. The scale parameters for healthy subjects are smaller than those for AF subjects but larger than those for CHF subjects; this ordering suggests that the scale parameter could be used to objectively quantify the severity of CHF and AF over time and also serve as an early warning signal for a healthy person when it approaches either boundary of the healthy range.
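The fluctuation series defined above, differences of successive log RR intervals, is straightforward to compute. A minimal sketch with synthetic RR data; the interquartile-range spread used here is only a simple, heavy-tail-tolerant stand-in for the fitted stable-distribution scale parameter (which would normally come from something like `scipy.stats.levy_stable.fit`):

```python
import numpy as np

def rr_fluctuations(rr):
    """Fluctuation series: differences of successive natural-log RR intervals."""
    return np.diff(np.log(np.asarray(rr, dtype=float)))

# Illustrative synthetic RR intervals in seconds (not real ECG data)
rng = np.random.default_rng(0)
rr = 0.8 * np.exp(np.cumsum(0.01 * rng.standard_normal(1000)))

x = rr_fluctuations(rr)
# Robust spread proxy; a stable-distribution fit is the paper's actual method
scale_proxy = (np.percentile(x, 75) - np.percentile(x, 25)) / 2
```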
Gallicchio, Emilio; Deng, Nanjie; He, Peng; Wickstrom, Lauren; Perryman, Alexander L.; Santiago, Daniel N.; Forli, Stefano; Olson, Arthur J.; Levy, Ronald M.
2014-01-01
As part of the SAMPL4 blind challenge, filtered AutoDock Vina ligand docking predictions and large-scale binding energy distribution analysis method (BEDAM) binding free energy calculations have been applied to the virtual screening of a focused library of candidate binders to the LEDGF site of the HIV integrase protein. The computational protocol leveraged docking and high-level atomistic models to improve enrichment. The enrichment factor of our blind predictions ranked best among all of the computational submissions, and second best overall. This work represents to our knowledge the first example of the application of an all-atom physics-based binding free energy model to large-scale virtual screening. A total of 285 parallel Hamiltonian replica exchange molecular dynamics absolute protein-ligand binding free energy simulations were conducted starting from docked poses. The setup of the simulations was fully automated, and the calculations were distributed on multiple computing resources and completed in a 6-week period. The accuracy of the docked poses and the inclusion of intramolecular strain and entropic losses in the binding free energy estimates were the major factors behind the success of the method. Lack of sufficient time and computing resources to investigate additional protonation states of the ligands was a major cause of mispredictions. The experiment demonstrated the applicability of binding free energy modeling to improve hit rates in challenging virtual screening of focused ligand libraries during lead optimization. PMID:24504704
The Large-scale Magnetic Fields of Thin Accretion Disks
NASA Astrophysics Data System (ADS)
Cao, Xinwu; Spruit, Hendrik C.
2013-03-01
A large-scale magnetic field threading an accretion disk is a key ingredient in the jet formation model. The most attractive scenario for the origin of such a large-scale field is the advection of the field by the gas in the accretion disk from the interstellar medium or a companion star. However, it is realized that outward diffusion of the accreted field is fast compared with the inward accretion velocity in a geometrically thin accretion disk if the value of the Prandtl number Pm is around unity. In this work, we revisit this problem considering the angular momentum of the disk to be removed predominantly by magnetically driven outflows. The radial velocity of the disk is significantly increased due to the presence of the outflows. Using a simplified model for the vertical disk structure, we find that even moderately weak fields can cause sufficient angular momentum loss via a magnetic wind to balance outward diffusion. There are two equilibrium points, one at low field strengths corresponding to a plasma β at the midplane of order several hundred, and one for strong accreted fields, β ~ 1. We surmise that the first is relevant for the accretion of weak, possibly external, fields through the outer parts of the disk, while the latter could explain the tendency, observed in full three-dimensional numerical simulations, of strong flux bundles at the centers of disks to stay confined in spite of the strong magnetorotational instability turbulence surrounding them.
Impacts devalue the potential of large-scale terrestrial CO2 removal through biomass plantations
NASA Astrophysics Data System (ADS)
Boysen, L. R.; Lucht, W.; Gerten, D.; Heck, V.
2016-09-01
Large-scale biomass plantations (BPs) are often considered a feasible and safe climate engineering proposal for extracting carbon from the atmosphere and, thereby, reducing global mean temperatures. However, the capacity of such terrestrial carbon dioxide removal (tCDR) strategies and their larger Earth system impacts remain to be comprehensively studied—even more so under higher carbon emissions and progressing climate change. Here, we use a spatially explicit process-based biosphere model to systematically quantify the potentials and trade-offs of a range of BP scenarios dedicated to tCDR, representing different assumptions about which areas are convertible. Based on a moderate CO2 concentration pathway resulting in a global mean warming of 2.5 °C above preindustrial level by the end of this century—similar to the Representative Concentration Pathway (RCP) 4.5—we assume tCDR to be implemented when a warming of 1.5 °C is reached in year 2038. Our results show that BPs can sufficiently slow the accumulation of carbon in the atmosphere only if emissions are simultaneously reduced, as in the underlying RCP4.5 trajectory. The potential of tCDR to balance additional, unabated emissions leading towards a business-as-usual pathway akin to RCP8.5 is therefore very limited. Furthermore, in the required large-scale applications, these plantations would induce significant trade-offs with food production and biodiversity and exert impacts on forest extent, biogeochemical cycles and biogeophysical properties.
NASA Astrophysics Data System (ADS)
Zakirov, Andrey; Belousov, Sergei; Valuev, Ilya; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari
2017-10-01
We demonstrate an efficient approach to numerical modeling of optical properties of large-scale structures with typical dimensions much greater than the wavelength of light. For this purpose, we use the finite-difference time-domain (FDTD) method enhanced with a memory efficient Locally Recursive non-Locally Asynchronous (LRnLA) algorithm called DiamondTorre and implemented for General Purpose Graphical Processing Units (GPGPU) architecture. We apply our approach to simulation of optical properties of organic light emitting diodes (OLEDs), which is an essential step in the process of designing OLEDs with improved efficiency. Specifically, we consider a problem of excitation and propagation of surface plasmon polaritons (SPPs) in a typical OLED, which is a challenging task given that SPP decay length can be about two orders of magnitude greater than the wavelength of excitation. We show that with our approach it is possible to extend the simulated volume size sufficiently so that SPP decay dynamics is accounted for. We further consider an OLED with periodically corrugated metallic cathode and show how the SPP decay length can be greatly reduced due to scattering off the corrugation. Ultimately, we compare the performance of our algorithm to the conventional FDTD and demonstrate that our approach can efficiently be used for large-scale FDTD simulations with the use of only a single GPGPU-powered workstation, which is not practically feasible with the conventional FDTD.
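Stripped of the LRnLA/GPGPU machinery the abstract describes, the FDTD core reduces to the standard Yee leapfrog update. A minimal 1-D sketch in dimensionless units (illustrative grid sizes and source, not the OLED setup):

```python
import numpy as np

def fdtd_1d(nx=200, nt=300, src=50):
    """Minimal 1-D FDTD (Yee) leapfrog for Ez/Hy with a soft Gaussian source.
    Dimensionless units with Courant number S = 1, PEC boundaries."""
    ez = np.zeros(nx)       # electric field on integer grid points
    hy = np.zeros(nx - 1)   # magnetic field on half-integer grid points
    for n in range(nt):
        hy += np.diff(ez)                            # H update from curl E
        ez[1:-1] += np.diff(hy)                      # E update from curl H
        ez[src] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source
    return ez

ez = fdtd_1d()
```

The paper's contribution lies in reorganizing exactly this kind of update into the cache-friendly DiamondTorre traversal so that very large 3-D volumes fit a single GPGPU workstation.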
Engineering-Scale Demonstration of DuraLith and Ceramicrete Waste Forms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Josephson, Gary B.; Westsik, Joseph H.; Pires, Richard P.
2011-09-23
To support the selection of a waste form for the liquid secondary wastes from the Hanford Waste Immobilization and Treatment Plant, Washington River Protection Solutions (WRPS) has initiated secondary waste form testing on four candidate waste forms. Two of the candidate waste forms have not been developed at scale to the same degree as the more mature waste forms. This work describes engineering-scale demonstrations conducted on the Ceramicrete and DuraLith candidate waste forms. Both candidate waste forms were successfully demonstrated at an engineering scale. A preliminary conceptual design could be prepared for full-scale production of the candidate waste forms. However, both waste forms are still too immature to support a detailed design. Formulations for each candidate waste form need to be developed so that the material has a longer working time after mixing the liquid and solid constituents together. Formulations optimized based on previous lab studies did not have sufficient working time to support large-scale testing. The engineering-scale testing was successfully completed using modified formulations. Further lab development and parametric studies are needed to optimize formulations with adequate working time and assess the effects of changes in raw materials and process parameters on the final product performance. Studies on the effects of mixing intensity on the initial set time of the waste forms are also needed.
Cross-flow turbines: physical and numerical model studies towards improved array simulations
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2015-12-01
Cross-flow, or vertical-axis turbines, show potential in marine hydrokinetic (MHK) and wind energy applications. As turbine designs mature, the research focus is shifting from individual devices towards improving turbine array layouts for maximizing overall power output, i.e., minimizing wake interference for axial-flow turbines, or taking advantage of constructive wake interaction for cross-flow turbines. Numerical simulations are generally better suited to explore the turbine array design parameter space, as physical model studies of large arrays at large model scale would be expensive. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries, the turbines' interaction with the energy resource needs to be parameterized, or modeled. Most models in use today, e.g. actuator disk, are not able to predict the unique wake structure generated by cross-flow turbines. Experiments were carried out using a high-resolution turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to its standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free surface effects on submerged MHK devices.
An additional sub-model is considered for injecting turbulence model scalar quantities based on actuator line element loading. Results are presented for the simulation of performance and wake dynamics of axial- and cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET grant 1150797.
Soviet military strategy towards 2010. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
McConnell, J.M.
1989-11-01
This paper tries to identify significant current trends that may continue into the 21st century and shape Soviet military strategy. An arms control trend, stemming from the Soviet concept of reasonable sufficiency, seems slated to handicap the USSR severely in options for fighting and winning large-scale conventional and theater-nuclear wars. Moscow evidently feels the strategic nuclear sphere will be the key arena of military competition in the future. First, the USSR now shows a greater commitment to offensive counterforce than was true of the period before reasonable sufficiency. Second, Moscow's interest in the strategic nuclear sphere will be reinforced by a long-term trend toward space warfare. However, it may be possible to soften the competition in this sphere through arms control. Prominent Soviets have already begun to suggest that, if the U.S. will limit its SDI ambitions to a thin defense, Moscow might actually prefer mutual comprehensive ABM deployments to continued adherence to the 1972 ABM Treaty.
Scalar field dark matter with spontaneous symmetry breaking and the 3.5 keV line
NASA Astrophysics Data System (ADS)
Cosme, Catarina; Rosa, João G.; Bertolami, O.
2018-06-01
We show that the present dark matter abundance can be accounted for by an oscillating scalar field that acquires both mass and a non-zero expectation value from interactions with the Higgs field. The dark matter scalar field can be sufficiently heavy during inflation, due to a non-minimal coupling to gravity, so as to avoid the generation of large isocurvature modes in the CMB anisotropy spectrum. The field begins oscillating after reheating, behaving as radiation until the electroweak phase transition and afterwards as non-relativistic matter. The scalar field becomes unstable, although sufficiently long-lived to account for dark matter, due to mass mixing with the Higgs boson, decaying mainly into photon pairs for masses below the MeV scale. In particular, for a mass of ∼7 keV, which is effectively the only free parameter, the model predicts a dark matter lifetime compatible with the recent galactic and extragalactic observations of a 3.5 keV X-ray line.
Large-scale energy budget of impulsive magnetic reconnection: Theory and simulation.
Kiehas, S A; Volkonskaya, N N; Semenov, V S; Erkaev, N V; Kubyshkin, I V; Zaitsev, I V
2017-03-01
We evaluate the large-scale energy budget of magnetic reconnection utilizing an analytical time-dependent impulsive reconnection model and a numerical 2-D MHD simulation. With the generalization to compressible plasma, we can investigate changes in the thermal, kinetic, and magnetic energies. We study these changes in three different regions: (a) the region defined by the outflowing plasma (outflow region, OR), (b) the region of compressed magnetic fields above/below the OR (traveling compression region, TCR), and (c) the region trailing the OR and TCR (wake). For incompressible plasma, we find that the decrease in magnetic energy inside the OR is compensated by the increase in kinetic energy. However, for the general compressible case, the decrease in magnetic energy inside the OR is not sufficient to explain the increase in thermal and kinetic energy. Hence, energy from other regions needs to be considered. We find that the decrease in thermal and magnetic energy in the wake, together with the decrease in magnetic energy inside the OR, is sufficient to feed the increase in kinetic and thermal energies in the OR and the increase in magnetic and thermal energies inside the TCR. That way, the energy budget is balanced, but consequently, not all magnetic energy is converted into kinetic and thermal energies of the OR. Instead, a certain fraction is transferred into the TCR. As an upper limit of the efficiency of reconnection (magnetic energy → kinetic energy) we find η_eff = 1/2. A numerical simulation is used to include a finite thickness of the current sheet, which shows the importance of the pressure gradient inside the OR for the conversion of kinetic energy into thermal energy.
Design of a pulse-type strain gauge balance for a long-test-duration hypersonic shock tunnel
NASA Astrophysics Data System (ADS)
Wang, Y.; Liu, Y.; Jiang, Z.
2016-11-01
When the measurement of aerodynamic forces is conducted in a hypersonic shock tunnel, the inertial forces lead to low-frequency vibrations of the model, and its motion cannot be addressed through digital filtering because a sufficient number of cycles cannot be obtained during a tunnel run. This implies restrictions on the model size and mass, as the natural frequencies are inversely proportional to the length scale of the model. Therefore, force measurement still presents many problems, particularly for large and heavy models. Different structures of a strain gauge balance (SGB) are proposed and designed, and the measurement element is further optimized to overcome the difficulties encountered during the measurement of aerodynamic forces in a shock tunnel. The motivation for this study is to assess the structural performance of the SGB used in the long-test-duration JF12 hypersonic shock tunnel, which has more than 100 ms of test time. Force tests were conducted for a large-scale cone with a 10° semivertex angle and a length of 0.75 m in the JF12 long-test-duration shock tunnel. The finite element method was used for the analysis of the vibrational characteristics of the Model-Balance-Sting System (MBSS) to ensure a sufficient number of cycles, particularly for the axial force signal during a shock tunnel run. The higher-stiffness SGB used in the test shows good performance, wherein the frequency of the MBSS increases because of the stiff construction of the balance. The experimental results are compared with the data obtained in another wind tunnel and exhibit good agreement at M = 7 and α = 5°.
2007-02-19
This report summarizes the discussions and recommendations from a consultation held in New York City, USA (31 January-2 February 2006) organized by the joint World Health Organization-United Nations Programme on HIV/AIDS HIV Vaccine Initiative and the International AIDS Vaccine Initiative. The consultation discussed issues related to the design and implementation of phase IIB 'test of concept' trials (phase IIB-TOC), also referred to as 'proof of concept' trials, in evaluating candidate HIV vaccines and their implications for future approval and licensure. The results of a single phase IIB-TOC trial would not be expected to provide sufficient evidence of safety or efficacy required for licensure. In many instances, phase IIB-TOC trials may be undertaken relatively early in development, before manufacturing processes and capacity are developed sufficiently to distribute the vaccine on a large scale. However, experts at this meeting considered the pressure that could arise, particularly in regions hardest hit by AIDS, if a phase IIB-TOC trial showed high levels of efficacy. The group largely agreed that full-scale phase III trials would still be necessary to demonstrate that the vaccine candidate was safe and effective, but emphasized that governments and organizations conducting trials should consider these issues in advance. The recommendations from this meeting should be helpful for all organizations involved in HIV vaccine trials, in particular for the national regulatory authorities in assessing the utility of phase IIB-TOC trials in the overall HIV vaccine research and development process.
Compressible turbulent mixing: Effects of Schmidt number.
Ni, Qionglin
2015-05-01
We investigated by numerical simulations the effects of Schmidt number on passive scalar transport in forced compressible turbulence. The range of Schmidt number (Sc) was 1/25 to 25. In the inertial-convective range the scalar spectrum seemed to obey the k^(-5/3) power law. For Sc≫1, there appeared a k^(-1) power law in the viscous-convective range, while for Sc≪1, a k^(-17/3) power law was identified in the inertial-diffusive range. The scaling constant computed from the mixed third-order structure function of the velocity-scalar increment grew with Sc, and the effect of compressibility made it smaller than the 4/3 value from incompressible turbulence. At small amplitudes, the probability distribution function (PDF) of scalar fluctuations collapsed to the Gaussian distribution whereas, at large amplitudes, it decayed more quickly than Gaussian. At large scales, the PDF of the scalar increment behaved similarly to that of the scalar fluctuation. In contrast, at small scales it resembled the PDF of the scalar gradient. Furthermore, the scalar dissipation occurring at large magnitudes was found to grow with Sc. Due to low molecular diffusivity, in the Sc≫1 flow the scalar field rolled up and mixed sufficiently. However, in the Sc≪1 flow the scalar field lost its small-scale structures to high molecular diffusivity and retained only the large-scale, cloudlike structures. The spectral analysis found that the spectral densities of scalar advection and dissipation in both Sc≫1 and Sc≪1 flows probably followed the k^(-5/3) scaling. This indicated that in compressible turbulence the processes of advection and dissipation, except that of scalar-dilatation coupling, might conform to the Kolmogorov picture. We further showed that at high wave numbers, the magnitudes of spectral coherency in both Sc≫1 and Sc≪1 flows decayed faster than the theoretical prediction of k^(-2/3) for incompressible flows.
Finally, the comparison with incompressible results showed that the scalar in compressible turbulence with Sc=1 lacked a conspicuous bump structure in its spectrum, but was more intermittent in the dissipative range.
Challenges of microtome‐based serial block‐face scanning electron microscopy in neuroscience
WANNER, A. A.; KIRSCHMANN, M. A.
2015-01-01
Summary Serial block‐face scanning electron microscopy (SBEM) is becoming increasingly popular for a wide range of applications in many disciplines from biology to material sciences. This review focuses on applications for circuit reconstruction in neuroscience, which is one of the major driving forces advancing SBEM. Neuronal circuit reconstruction poses exceptional challenges to volume EM in terms of resolution, field of view, acquisition time and sample preparation. Mapping the connections between neurons in the brain is crucial for understanding information flow and information processing in the brain. However, information on the connectivity between hundreds or even thousands of neurons densely packed in neuronal microcircuits is still largely missing. Volume EM techniques such as serial section TEM, automated tape‐collecting ultramicrotome, focused ion‐beam scanning electron microscopy and microtome‐based SBEM provide sufficient resolution to resolve ultrastructural details such as synapses, and a sufficient field of view for dense reconstruction of neuronal circuits. While volume EM techniques are advancing, they are generating large data sets on the terabyte scale that require new image processing workflows and analysis tools. In this review, we present the recent advances in SBEM for circuit reconstruction in neuroscience and an overview of existing image processing and analysis pipelines. PMID:25907464
NASA Technical Reports Server (NTRS)
Contreras, Michael T.; Peng, Chia-Yen; Wang, Dongdong; Chen, Jiun-Shyan
2012-01-01
A wheel experiencing sinkage and slippage events poses a high risk to rover missions, as evidenced by recent mobility challenges on the Mars Exploration Rover (MER) project. Because several factors contribute to wheel sinkage and slippage conditions, such as soil composition, large-deformation soil behavior, wheel geometry, nonlinear contact forces, and terrain irregularity, there are significant benefits to modeling these events to a sufficient degree of complexity. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study demonstrates some of the large-deformation modeling capability of meshfree methods and the realistic solutions obtained by accounting for the soil material properties. A benchmark wheel-soil interaction problem is developed and analyzed using a specific class of meshfree methods called the Reproducing Kernel Particle Method (RKPM). The benchmark problem is also analyzed using a commercially available finite element approach with Lagrangian meshing for comparison. RKPM results are comparable to classical pressure-sinkage terramechanics relationships proposed by Bekker-Wong. Pending experimental calibration by future work, the meshfree modeling technique will be a viable simulation tool for trade studies assisting rover wheel design.
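The classical Bekker-Wong pressure-sinkage relationship mentioned above takes the form p = (k_c/b + k_φ) z^n. A minimal sketch with purely illustrative soil constants (assumed for demonstration, not values from the study):

```python
def bekker_pressure(z, b, k_c, k_phi, n):
    """Bekker pressure-sinkage relation: p = (k_c/b + k_phi) * z**n,
    with contact-patch width b [m], sinkage z [m], and empirical soil
    constants k_c, k_phi, n. Returns pressure in Pa for SI inputs."""
    return (k_c / b + k_phi) * z ** n

# Illustrative dry-sand-like parameters (assumed, not from the paper)
p = bekker_pressure(z=0.05, b=0.15, k_c=1000.0, k_phi=150000.0, n=1.1)
```

Relations of this form give the semi-empirical baseline against which the RKPM simulation results are compared.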
Low-Dose CT of the Paranasal Sinuses: Minimizing X-Ray Exposure with Spectral Shaping.
Wuest, Wolfgang; May, Matthias; Saake, Marc; Brand, Michael; Uder, Michael; Lell, Michael
2016-11-01
Shaping the energy spectrum of the X-ray beam has been shown to be beneficial in low-dose CT. This study's aim was to investigate dose and image quality of tin filtration at 100 kV for pre-operative planning in low-dose paranasal CT imaging in a large patient cohort. In a prospective trial, 129 patients were included. 64 patients were randomly assigned to the study protocol (100 kV with additional tin filtration, 150 mAs, 192 × 0.6-mm slice collimation) and 65 patients to the standard low-dose protocol (100 kV, 50 mAs, 128 × 0.6-mm slice collimation). To assess the image quality, subjective parameters were evaluated using a five-point scale. This scale was applied to overall image quality and contour delineation of critical anatomical structures. All scans were of diagnostic image quality. Bony structures were of good diagnostic image quality in both groups; soft tissues were of sufficient diagnostic image quality in the study group despite a high level of noise. Radiation exposure was very low in both groups, but significantly lower in the study group (CTDIvol 1.2 mGy vs. 4.4 mGy, p < 0.001). Spectral optimization (tin filtration at 100 kV) allows for visualization of the paranasal sinus with sufficient image quality at a very low radiation exposure. • Spectral optimization (tin filtration) is beneficial to low-dose parasinus CT • Tin filtration at 100 kV yields sufficient image quality for pre-operative planning • Diagnostic parasinus CT can be performed with an effective dose <0.05 mSv.
Minimal microwave anisotropy from perturbations induced at late times
NASA Technical Reports Server (NTRS)
Jaffe, Andrew H.; Stebbins, Albert; Frieman, Joshua A.
1994-01-01
Aside from primordial gravitational instability of the cosmological fluid, various mechanisms have been proposed to generate large-scale structure at relatively late times, including, e.g., 'late-time' cosmological phase transitions. In these scenarios, it is envisioned that the universe is nearly homogeneous at the time of last scattering and that perturbations grow rapidly sometime after the primordial plasma recombines. On this basis, it was suggested that large inhomogeneities could be generated while leaving relatively little imprint on the cosmic microwave background (MBR) anisotropy. In this paper, we calculate the minimal anisotropies possible in any 'late-time' scenario for structure formation, given the level of inhomogeneity observed at present. Since the growth of the inhomogeneity involves time-varying gravitational fields, these scenarios inevitably generate significant MBR anisotropy via the Sachs-Wolfe effect. Moreover, we show that the large-angle MBR anisotropy produced by the rapid post-recombination growth of inhomogeneity is generally greater than that produced by the same inhomogeneity growth via gravitational instability. In 'realistic' scenarios one can decrease the anisotropy compared to models with primordial adiabatic fluctuations, but only on very small angular scales. The value of any particular measure of the anisotropy can be made small in late-time models, but only by making the time-dependence of the gravitational field sufficiently 'pathological'.
Drought in the Horn of Africa: attribution of a damaging and repeating extreme event
NASA Astrophysics Data System (ADS)
Marthews, Toby; Otto, Friederike; Mitchell, Daniel; Dadson, Simon; Jones, Richard
2015-04-01
We have applied detection and attribution techniques to the severe drought that hit the Horn of Africa in 2014. The short rains failed in late 2013 in Kenya, South Sudan, Somalia and southern Ethiopia, leading to a very dry growing season from January to March 2014, and subsequently to the current drought in many agricultural areas of the sub-region. We have made use of the weather@home project, which uses publicly-volunteered distributed computing to provide a large ensemble of simulations sufficient to sample regional climate uncertainty. Based on this, we have estimated the occurrence rates of the kinds of rare and extreme events implicated in this large-scale drought. From land surface model runs based on these ensemble simulations, we have estimated the impacts of climate anomalies during this period, and therefore we can reliably identify some factors of the ongoing drought as attributable to human-induced climate change. The UNFCCC's Adaptation Fund is attempting to support projects that bring about an adaptation to "the adverse effects of climate change", but in order to formulate such projects we need a much clearer way to assess how much climate change is human-induced and how much is a consequence of climate anomalies and large-scale teleconnections, which can only be provided by robust attribution techniques.
Nanoscale Dewetting Transition in Protein Complex Folding
Hua, Lan; Huang, Xuhui; Liu, Pu; Zhou, Ruhong; Berne, Bruce J.
2011-01-01
In a previous study, a surprising drying transition was observed to take place inside the nanoscale hydrophobic channel in the tetramer of the protein melittin. The goal of this paper is to determine if there are other protein complexes capable of displaying a dewetting transition during their final stage of folding. We searched the entire protein data bank (PDB) for all possible candidates, including protein tetramers, dimers, and two-domain proteins, and then performed molecular dynamics (MD) simulations on the top candidates identified by a simple hydrophobic scoring function based on aligned hydrophobic surface areas. Our large-scale MD simulations found several more proteins, including three tetramers, six dimers, and two two-domain proteins, which display a nanoscale dewetting transition in their final stage of folding. Even though the scoring function alone is not sufficient (i.e., a high score is necessary but not sufficient) in identifying the dewetting candidates, it does provide useful insights into the features of complex interfaces needed for dewetting. All top candidates have two features in common: (1) large aligned (matched) hydrophobic areas between two corresponding surfaces, and (2) large connected hydrophobic areas on the same surface. We have also studied the effect on dewetting of different water models and different treatments of the long-range electrostatic interactions (cutoff vs PME), and found the dewetting phenomenon to be fairly robust. This work presents a few proteins other than the melittin tetramer for further experimental studies of the role of dewetting in the end stages of protein folding. PMID:17608515
Performance of ceramic superconductors in magnetic bearings
NASA Technical Reports Server (NTRS)
Kirtley, James L., Jr.; Downer, James R.
1993-01-01
Magnetic bearings are large-scale applications of magnet technology, quite similar in certain ways to synchronous machinery. They require substantial flux density over relatively large volumes of space. Large flux density is required to have satisfactory force density. Satisfactory dynamic response requires that magnetic circuit permeances not be too large, implying large air gaps. Superconductors, which offer large magnetomotive forces and high flux density in low permeance circuits, appear to be desirable in these situations. Flux densities substantially in excess of those possible with iron can be produced, and no ferromagnetic material is required. Thus the inductance of active coils can be made low, indicating good dynamic response of the bearing system. The principal difficulty in using superconductors is, of course, the deep cryogenic temperatures at which they must operate. Because of the difficulties in working with liquid helium, the possibility of superconductors which can be operated in liquid nitrogen is thought to extend the number and range of applications of superconductivity. Critical temperatures of about 98 degrees Kelvin were demonstrated in a class of materials which are, in fact, ceramics. Quite a bit of public attention was attracted to these new materials. There is a difficulty with the ceramic superconducting materials which were developed to date. Current densities sufficient for use in large-scale applications have not been demonstrated. In order to be useful, superconductors must be capable of carrying substantial currents in the presence of large magnetic fields. The possible use of ceramic superconductors in magnetic bearings is investigated and discussed and requirements that must be achieved by superconductors operating at liquid nitrogen temperatures to make their use comparable with niobium-titanium superconductors operating at liquid helium temperatures are identified.
Primordial black hole production in Critical Higgs Inflation
NASA Astrophysics Data System (ADS)
Ezquiaga, Jose María; García-Bellido, Juan; Ruiz Morales, Ester
2018-01-01
Primordial Black Holes (PBH) arise naturally from high peaks in the curvature power spectrum of near-inflection-point single-field inflation, and could constitute today the dominant component of the dark matter in the universe. In this letter we explore the possibility that a broad spectrum of PBH is formed in models of Critical Higgs Inflation (CHI), where the near-inflection point is related to the critical value of the RGE running of both the Higgs self-coupling λ(μ) and its non-minimal coupling to gravity ξ(μ). We show that, for a wide range of model parameters, a half-dome-shaped peak in the matter spectrum arises at sufficiently small scales that it passes all the constraints from large scale structure observations. The predicted cosmic microwave background spectrum at large scales is in agreement with Planck 2015 data, and has a relatively large tensor-to-scalar ratio that may soon be detected by B-mode polarization experiments. Moreover, the wide peak in the power spectrum gives an approximately lognormal PBH distribution in the range of masses 0.01-100 M⊙, which could explain the LIGO merger events, while passing all present PBH observational constraints. The stochastic background of gravitational waves coming from the unresolved black-hole-binary mergers could also be detected by LISA or PTA. Furthermore, the parameters of the CHI model are consistent, within 2σ, with the measured Higgs parameters at the LHC and their running. Future measurements of the PBH mass spectrum could allow us to obtain complementary information about the Higgs couplings at energies well above the EW scale, and thus constrain new physics beyond the Standard Model.
NASA Astrophysics Data System (ADS)
Katamzi, Zama; Bosco Habarulema, John
2017-04-01
Large scale traveling ionospheric disturbances (LSTIDs) are a key dynamic ionospheric process that transports energy and momentum vertically and horizontally during storms. These disturbances are observed as electron density irregularities in total electron content (TEC) and other ionospheric parameters. This study reports on various explorations of LSTID characteristics, in particular horizontal and vertical propagation, during some major/severe storms of solar cycles 23-24. We have employed GNSS TEC to estimate horizontal propagation, and radio occultation data from the COSMIC/FORMOSAT-3 and SWARM satellites to estimate vertical motion. The work presented here traces how LSTID characterisation has improved, from sparsely distributed stations with limited spatial resolution and rudimentary analysis to more densely populated GNSS networks that allow more accurate temporal and spatial determinations. For example, early observations of LSTIDs largely revealed unidirectional propagation, whereas later studies have shown that one storm can induce multi-directional propagation: the Halloween 2003 storm induced equatorward LSTIDs on a local scale, whereas the 9 March 2012 storm induced simultaneous equatorward and poleward LSTIDs on a global scale. The latter study, of the 9 March 2012 storm, revealed for the first time that ionospheric electrodynamics, specifically variations in E×B drift, is also an efficient generator of LSTIDs. Results from these studies also revealed constructive and destructive interference patterns of storm-induced LSTIDs. Constellations of LEO satellites such as COSMIC/FORMOSAT-3 and SWARM have given sufficient spatial and temporal resolution to study vertical propagation of LSTIDs in addition to the meridional propagation given by GNSS TEC; the vertical velocities were found to fall below 100 m/s.
Housing first on a large scale: Fidelity strengths and challenges in the VA's HUD-VASH program.
Kertesz, Stefan G; Austin, Erika L; Holmes, Sally K; DeRussy, Aerin J; Van Deusen Lukas, Carol; Pollio, David E
2017-05-01
Housing First (HF) combines permanent supportive housing and supportive services for homeless individuals and removes traditional treatment-related preconditions for housing entry. There has been little research describing strengths and shortfalls of HF implementation outside of research demonstration projects. The U.S. Department of Veterans Affairs (VA) has transitioned to an HF approach in a supportive housing program serving over 85,000 persons. This offers a naturalistic window to study fidelity when HF is adopted on a large scale. We operationalized HF into 20 criteria grouped into 5 domains. We assessed 8 VA medical centers twice (1 year apart), scoring each criterion on a scale ranging from 1 (low fidelity) to 4 (high fidelity). There were 2 HF domains (no preconditions and rapidly offering permanent housing) for which high fidelity was readily attained. There was uneven progress in prioritizing the most vulnerable clients for housing support. Two HF domains (sufficient supportive services and a modern recovery philosophy) had considerably lower fidelity. Interviews suggested that operational issues such as shortfalls in staffing and training likely hindered performance in these 2 domains. In this ambitious national HF program, the largest to date, we found substantial fidelity in focusing on permanent housing and removal of preconditions to housing entry. Areas of concern included the adequacy of supportive services and of the deployment of a modern recovery philosophy. Under real-world conditions, large-scale implementation of HF is likely to require significant additional investment in client service supports to assure that results are concordant with those found in research studies.
NASA,FAA,ONERA Swept-Wing Icing and Aerodynamics: Summary of Research and Current Status
NASA Technical Reports Server (NTRS)
Broeren, Andy
2015-01-01
NASA, FAA, ONERA, and other partner organizations have embarked on a significant, collaborative research effort to address the technical challenges associated with icing on large-scale, three-dimensional swept wings. These are extremely complex phenomena important to the design, certification and safe operation of small and large transport aircraft. There is increasing demand to balance trade-offs in aircraft efficiency, cost and noise that tend to compete directly with allowable performance degradations over an increasing range of icing conditions. Computational fluid dynamics codes have reached a level of maturity such that manufacturers are proposing them for use in certification of aircraft for flight in icing. However, sufficient high-quality data to evaluate their performance on iced swept wings are not currently available in the public domain, and significant knowledge gaps remain.
Human Finger-Prick Induced Pluripotent Stem Cells Facilitate the Development of Stem Cell Banking
Tan, Hong-Kee; Toh, Cheng-Xu Delon; Ma, Dongrui; Yang, Binxia; Liu, Tong Ming; Lu, Jun; Wong, Chee-Wai; Tan, Tze-Kai; Li, Hu; Syn, Christopher; Tan, Eng-Lee; Lim, Bing; Lim, Yoon-Pin; Cook, Stuart A.
2014-01-01
Induced pluripotent stem cells (iPSCs) derived from somatic cells of patients can be a good model for studying human diseases and for future therapeutic regenerative medicine. Current initiatives to establish human iPSC (hiPSC) banking face challenges in recruiting large numbers of donors with diverse diseased, genetic, and phenotypic representations. In this study, we describe the efficient derivation of transgene-free hiPSCs from human finger-prick blood. Finger-prick sample collection can be performed on a “do-it-yourself” basis by donors and sent to the hiPSC facility for reprogramming. We show that single-drop volumes of finger-prick samples are sufficient for performing cellular reprogramming, DNA sequencing, and blood serotyping in parallel. Our novel strategy has the potential to facilitate the development of large-scale hiPSC banking worldwide. PMID:24646489
Statistical significance test for transition matrices of atmospheric Markov chains
NASA Technical Reports Server (NTRS)
Vautard, Robert; Mo, Kingtse C.; Ghil, Michael
1990-01-01
Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
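The Monte Carlo test described above can be sketched in a few lines. The following is an illustrative implementation (not the authors' code): the null distribution of each transition count is built by shuffling the regime-label sequence, which preserves the cluster sizes while destroying temporal order, so unusually frequent and unusually rare transitions can both be flagged at a prescribed significance level.

```python
import numpy as np

def transition_counts(seq, k):
    """Count transitions i -> j in a sequence of regime labels 0..k-1."""
    counts = np.zeros((k, k), dtype=int)
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts

def mc_significance(seq, k, n_sim=1000, rng=None):
    """Monte Carlo p-values for each element of the transition matrix.

    The null ensemble shuffles the labels, preserving cluster sizes
    but destroying temporal order.
    """
    rng = np.random.default_rng(rng)
    seq = np.asarray(seq)
    observed = transition_counts(seq, k)
    null = np.zeros((n_sim, k, k), dtype=int)
    for s in range(n_sim):
        null[s] = transition_counts(rng.permutation(seq), k)
    p_high = (null >= observed).mean(axis=0)  # unusually frequent transitions
    p_low = (null <= observed).mean(axis=0)   # unusually rare transitions
    return observed, p_high, p_low

# toy example: a persistent two-regime sequence with long dwell times
seq = np.repeat([0, 1, 0, 1, 0, 1], 30)
obs, p_high, p_low = mc_significance(seq, 2, n_sim=500, rng=1)
# self-transitions should be significantly more frequent than chance,
# and regime changes significantly rarer
```

The same machinery extends directly to small clusters, where, as the abstract notes, empirical formulae become unreliable: the null distribution is simulated rather than approximated.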
Castillo-Cagigal, Manuel; Matallanas, Eduardo; Gutiérrez, Alvaro; Monasterio-Huelin, Félix; Caamaño-Martín, Estefanía; Masa-Bote, Daniel; Jiménez-Leube, Javier
2011-01-01
In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the "Smart Grid" which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called "MagicBox" equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. Therefore, there is a large number of energy variables to be monitored that allow us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, performed on a real house, demonstrate the feasibility of the proposed collaborative system to reduce the consumption of electrical power and to increase energy efficiency.
Park, Hyeon Jin; Yang, Hyung Kook; Shin, Dong Wook; Kim, Yoon Yi; Kim, Young Ae; Yun, Young Ho; Nam, Byung Ho; Bhatia, Smita; Park, Byung Kiu; Ghim, Thad T; Kang, Hyoung Jin; Park, Kyung Duk; Shin, Hee Young; Ahn, Hyo Seop
2013-12-01
We verified the reliability and validity of the Korean version of the Minneapolis-Manchester Quality of Life Instrument-Adolescent Form (KMMQL-AF) among Korean childhood cancer survivors. A total of 107 childhood cancer patients undergoing cancer treatment and 98 childhood cancer survivors who had completed cancer treatment were recruited. To assess the internal structure of the KMMQL-AF, we performed multi-trait scaling analyses and exploratory factor analysis. Additionally, we compared each domain of the KMMQL-AF with those of the Karnofsky Performance Status Scale and the Revised Children's Manifest Anxiety Scale (RCMAS). Internal consistency of the KMMQL-AF was sufficient (Cronbach's alpha: 0.78-0.92). In multi-trait scaling analyses, the KMMQL-AF showed sufficient construct validity. The "physical functioning" domain showed moderate correlation with Karnofsky scores and the "psychological functioning" domain showed moderate-to-high correlation with the RCMAS. The KMMQL-AF discriminated between subgroups of adolescent cancer survivors depending on treatment completion. The KMMQL-AF is a sufficiently reliable and valid instrument for measuring quality of life among Korean childhood cancer survivors.
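Cronbach's alpha, the internal-consistency measure quoted above, is computed from the item variances and the variance of the total score. Below is a minimal sketch with hypothetical Likert-scale responses (the KMMQL-AF items themselves are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # sample variance per item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# hypothetical 5-point Likert responses: 6 respondents, 4 items
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
])
alpha = cronbach_alpha(scores)  # strongly correlated items -> alpha near 1
```

Values in the 0.78-0.92 range reported above indicate that each KMMQL-AF domain's items covary strongly enough to be summed into a single scale score.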
Performance and scaling of a novel locomotor structure: adhesive capacity of climbing gobiid fishes.
Maie, Takashi; Schoenfuss, Heiko L; Blob, Richard W
2012-11-15
Many species of gobiid fishes adhere to surfaces using a sucker formed from fusion of the pelvic fins. Juveniles of many amphidromous species use this pelvic sucker to scale waterfalls during migrations to upstream habitats after an oceanic larval phase. However, adults may still use suckers to re-scale waterfalls if displaced. If attachment force is proportional to sucker area and if growth of the sucker is isometric, then increases in the forces that climbing fish must resist might outpace adhesive capacity, causing climbing performance to decline through ontogeny. To test for such trends, we measured pressure differentials and adhesive suction forces generated by the pelvic sucker across wide size ranges in six goby species, including climbing and non-climbing taxa. Suction was achieved via two distinct growth strategies: (1) small suckers with isometric (or negatively allometric) scaling among climbing gobies and (2) large suckers with positively allometric growth in non-climbing gobies. Species using the first strategy show a high baseline of adhesive capacity that may aid climbing performance throughout ontogeny, with pressure differentials and suction forces much greater than expected if adhesion were a passive function of sucker area. In contrast, the large suckers possessed by non-climbing species may help compensate for reduced pressure differentials, thereby producing suction sufficient to support body weight. Climbing Sicyopterus species also use oral suckers while climbing waterfalls, and these exhibited scaling patterns similar to those of the pelvic suckers. However, oral suction force was considerably lower than pelvic suction force, limiting the ability of these fish to attach to substrates by the oral sucker alone.
Mayne, Darren J; Morgan, Geoffrey G; Jalaludin, Bin B; Bauman, Adrian E
2017-10-03
Individual-level studies support a positive relation between walkable built environments and participation in moderate-intensity walking. However, the utility of this evidence for population-level planning is less clear, as it is derived at much finer spatial scales than those used for regional programming. The aims of this study were to: evaluate whether individual-level relations between walkability and walking to improve health manifest at population-level spatial scales; assess the specificity of area-level walkability for walking relative to other moderate and vigorous physical activity (MVPA); describe geographic variation in walking and other MVPA; and quantify the contribution of walkability to this variation. Data on sufficient walking, sufficient MVPA, and high MVPA to improve health were analyzed for 95,837 Sydney respondents to the baseline survey of the 45 and Up Study between January 2006 and April 2010. We used conditional autoregressive models to create smoothed MVPA "disease maps" and to assess relations between sufficient MVPA to improve health and area-level walkability, adjusted for individual-level demographic, socioeconomic, and health factors, and area-level relative socioeconomic disadvantage. Within-cohort prevalences of meeting recommendations for sufficient walking, sufficient MVPA, and high MVPA were 31.7 (95% CI 31.4-32.0), 69.4 (95% CI 69.1-69.7), and 56.1 (95% CI 55.8-56.4) percent. Prevalence of sufficient walking was higher by factors of 1.20 (95% CrI 1.12-1.29) and 1.07 (95% CrI 1.01-1.13) in high and medium-high versus low walkability postal areas, and prevalence of sufficient MVPA was higher by a factor of 1.05 (95% CrI 1.01-1.08) in high versus low walkability postal areas. Walkability was not related to high MVPA. Postal area walkability explained 65.8 and 47.4 percent of the residual geographic variation in sufficient walking and sufficient MVPA not attributable to individual-level factors. 
Walkability is associated with area-level prevalence and geographic variation in sufficient walking and sufficient MVPA to improve health in Sydney, Australia. Our study supports the use of walkability indexes at multiple spatial scales for informing population-level action to increase physical activity and the utility of spatial analysis for walkability research and planning.
Sauer, Jeremy A.; Munoz-Esparza, Domingo; Canfield, Jesse M.; ...
2016-06-24
In this study, the impact of atmospheric boundary layer (ABL) interactions with large-scale stably stratified flow over an isolated, two-dimensional hill is investigated using turbulence-resolving large-eddy simulations. The onset of internal gravity wave breaking and the leeside flow response regimes of trapped lee waves and nonlinear breakdown (or hydraulic-jump-like state), as they depend on the classical inverse Froude number, Fr^{-1} = Nh/U_g, are explored in detail. Here, N is the Brunt-Väisälä frequency, h is the hill height, and U_g is the geostrophic wind. The results demonstrate that the presence of a turbulent ABL influences mountain wave (MW) development in critical aspects, such as dissipation of trapped lee waves and amplified stagnation-zone turbulence through Kelvin-Helmholtz instability. It is shown that the nature of interactions between the large-scale flow and the ABL is better characterized by a proposed inverse compensated Froude number, Fr_c^{-1} = N(h - z_i)/U_g, where z_i is the ABL height. In addition, it is found that the onset of the nonlinear-breakdown regime, Fr_c^{-1} ≈ 1.0, is initiated when the vertical wavelength becomes comparable to the sufficiently energetic scales of turbulence in the stagnation zone and ABL, yielding an abrupt change in leeside flow response. Lastly, energy spectra are presented in the context of MW flows, supporting the existence of a clear transition in leeside flow response and illustrating two distinct energy-distribution states for the trapped-lee-wave and nonlinear-breakdown regimes.
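The two dimensionless numbers above are simple ratios, and a tiny sketch makes the compensation by the ABL height z_i concrete. The numerical values below are illustrative only, not taken from the study:

```python
def inverse_froude(N, h, U_g):
    """Classical inverse Froude number, Fr^-1 = N*h / U_g."""
    return N * h / U_g

def inverse_compensated_froude(N, h, z_i, U_g):
    """Proposed inverse compensated Froude number, Fr_c^-1 = N*(h - z_i) / U_g.

    Only the part of the hill protruding above the ABL (depth z_i)
    contributes to the stratified-flow response.
    """
    return N * (h - z_i) / U_g

# illustrative values (assumptions, not from the paper):
# N = 0.01 1/s, 1500 m hill, 500 m deep ABL, 10 m/s geostrophic wind
fr_inv = inverse_froude(0.01, 1500.0, 10.0)
frc_inv = inverse_compensated_froude(0.01, 1500.0, 500.0, 10.0)
# fr_inv = 1.5, while frc_inv = 1.0, i.e. near the reported onset of
# the nonlinear-breakdown regime
```

The point of the compensated form is visible here: the same hill and wind give a markedly different regime diagnosis once the boundary-layer depth is subtracted from the obstacle height.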
Relay discovery and selection for large-scale P2P streaming
Zhang, Chengwei; Wang, Angela Yunxian; Hei, Xiaojun
2017-01-01
In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers’ network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used “best-out-of-K” selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs. PMID:28410384
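The two-phase approach described above can be sketched abstractly. The toy example below (not the authors' implementation, and omitting the DHT machinery) shows why probing only a shortlist beats selection on indirect estimates alone: the final pick can never have a worse true RTT than the pure-indirect pick, because the indirect winner is itself in the probed shortlist.

```python
import random

def two_phase_select(peers, coarse_rtt, probe_rtt, k=5):
    """Two-phase relay selection sketch.

    Phase 1: use cheap but noisy indirect estimates (e.g. ICS
    coordinates) to shortlist k candidate relays.
    Phase 2: directly probe only the shortlist and pick the true
    minimum, avoiding the error amplification of 'best-out-of-K'
    selection on indirect measurements alone.
    """
    shortlist = sorted(peers, key=lambda p: coarse_rtt[p])[:k]
    return min(shortlist, key=lambda p: probe_rtt[p])

# synthetic example: true RTTs (ms) plus coarse-estimation noise
random.seed(7)
true_rtt = {p: random.uniform(10, 200) for p in range(100)}
noisy_rtt = {p: r + random.gauss(0, 30) for p, r in true_rtt.items()}
best = two_phase_select(list(true_rtt), noisy_rtt, true_rtt, k=10)
```

In the real system, phase 1 would be a DHT-based search over location-aware node keys and phase 2 a handful of direct probes, so the cost stays far below probing every candidate.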
Terrestrial record of the solar system's oscillation about the galactic plane
NASA Technical Reports Server (NTRS)
Stothers, R. B.
1985-01-01
A new study is presented of the observational evidence pertaining to the theory which attributes the episodic component of the earth's impact cratering record over the past 600 Myr to gravitational encounters between the solar system and interstellar clouds, which cause comets to fall into the solar system and impact the earth. Contrary to a claim by Thaddeus and Chanan (1985), the vertical scale height of the clouds seems to be sufficiently small, and the sun's vertical trajectory sufficiently large, for the modulating effect of the sun's galactovertical motion to be detectable in the terrestrial record of impact cratering with at least a 50 percent a priori probability.
NASA Applications of Molecular Nanotechnology
NASA Technical Reports Server (NTRS)
Globus, Al; Bailey, David; Han, Jie; Jaffe, Richard; Levit, Creon; Merkle, Ralph; Srivastava, Deepak
1998-01-01
Laboratories throughout the world are rapidly gaining atomically precise control over matter. As this control extends to an ever wider variety of materials, processes and devices, opportunities for applications relevant to NASA's missions will be created. This document surveys a number of future molecular nanotechnology capabilities of aerospace interest. Computer applications, launch vehicle improvements, and active materials appear to be of particular interest. We also list a number of applications for each of NASA's enterprises. If advanced molecular nanotechnology can be developed, almost all of NASA's endeavors will be radically improved. In particular, a sufficiently advanced molecular nanotechnology can arguably bring large scale space colonization within our grasp.
Statistical machine translation for biomedical text: are we there yet?
Wu, Cuijun; Xia, Fei; Deleger, Louise; Solti, Imre
2011-01-01
In our paper we addressed the research question: "Has machine translation achieved sufficiently high quality to translate PubMed titles for patients?". We analyzed statistical machine translation output for six foreign language-English translation pairs (bidirectionally). We built a high-performing in-house system and evaluated its output for each translation pair at large scale, using both automated BLEU scores and human judgment. In addition to the in-house system, we also evaluated Google Translate's performance specifically within the biomedical domain. We report high performance for the German, French and Spanish to English bidirectional translation pairs for both Google Translate and our system.
Quinine (Cinchona) and the incurable malaria: India c. 1900-1930s.
Muraleedharan, V R
2000-06-01
The early decades of the twentieth century witnessed significant developments in approaches to the control of malaria in British India. These included both large-scale preventive measures and curative treatment methods (often referred to as the "cinchona" or "quinine" policy). This paper identifies a number of factors that constrained the colonial government's capacity to control malaria through an effective cinchona policy. The ideal of achieving "self-sufficiency" and placing an efficient form of treatment and distribution within the reach of the masses in India (as originally intended in the late 1850s) was far from being achieved. Government policy and the medical profession seem to have contributed equally to this failure.
Generation of dynamo magnetic fields in the primordial solar nebula
NASA Technical Reports Server (NTRS)
Stepinski, Tomasz F.
1992-01-01
The present treatment of dynamo-generated magnetic fields in the primordial solar nebula proceeds from the ability of the combined action of Keplerian rotation and helical convection to generate, via an alpha-omega dynamo, large-scale magnetic fields in those parts of the nebula where the electrical conductivity is high enough to couple the gas and the magnetic field. The nebular gas electrical conductivity and the radial distribution of the local dynamo number are calculated for both a viscous-accretion disk model and a quiescent, minimum-mass nebula. It is found that magnetic fields can be easily generated and maintained by alpha-omega dynamos occupying the inner and outer parts of the nebula.
Mesoscale Effective Property Simulations Incorporating Conductive Binder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trembacki, Bradley L.; Noble, David R.; Brunini, Victor E.
2017-07-26
Lithium-ion battery electrodes are composed of active material particles, binder, and conductive additives that form an electrolyte-filled porous particle composite. The mesoscale (particle-scale) interplay of electrochemistry, mechanical deformation, and transport through this tortuous multi-component network dictates the performance of a battery at the cell level. Effective electrode properties connect mesoscale phenomena with computationally feasible battery-scale simulations. We utilize published tomography data to reconstruct a large subsection (1000+ particles) of an NMC333 cathode into a computational mesh and extract electrode-scale effective properties from finite element continuum-scale simulations. We present a novel method to preferentially place a composite binder phase throughout the mesostructure, a necessary approach due to the difficulty of distinguishing between non-active phases in tomographic data. We compare stress generation and effective thermal, electrical, and ionic conductivities across several binder placement approaches. Isotropic lithiation-dependent mechanical swelling of the NMC particles and the consideration of strain-dependent composite binder conductivity significantly impact the resulting effective property trends and stresses generated. Lastly, our results suggest that composite binder location significantly affects mesoscale behavior, indicating that a binder coating on active particles is not sufficient and that more accurate approaches should be used when calculating effective properties that will inform battery-scale models in this inherently multi-scale battery simulation challenge.
A non-perturbative exploration of the high energy regime in Nf=3 QCD. ALPHA Collaboration
NASA Astrophysics Data System (ADS)
Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer
2018-05-01
Using continuum-extrapolated lattice data we trace a family of running couplings in three-flavour QCD over a large range of scales, from about 4 to 128 GeV. The scale is set by the finite space-time volume, so that recursive finite-size techniques can be applied, and Schrödinger functional (SF) boundary conditions enable direct simulations in the chiral limit. Compared to earlier studies we have improved on both statistical and systematic errors. Using the SF coupling to implicitly define a reference scale 1/L_0 ≈ 4 GeV through \bar{g}^2(L_0) = 2.012, we quote L_0 Λ^{N_f=3}_{\overline{MS}} = 0.0791(21). This error is dominated by statistics; in particular, the remnant perturbative uncertainty is negligible and very well controlled, by connecting to the infinite renormalization scale from different scales 2^n/L_0 for n = 0, 1, …, 5. An intermediate step in this connection may involve any member of a one-parameter family of SF couplings. This provides an excellent opportunity for tests of perturbation theory, some of which have been published in a letter (ALPHA collaboration, M. Dalla Brida et al., Phys. Rev. Lett. 117(18):182001, 2016). The results indicate that for our target precision of 3 per cent in L_0 Λ^{N_f=3}_{\overline{MS}}, a reliable estimate of the truncation error requires non-perturbative data for a sufficiently large range of values of α_s = \bar{g}^2/(4π). In the present work we reach this precision by studying scales that vary by a factor 2^5 = 32, reaching down to α_s ≈ 0.1. We here provide the details of our analysis and an extended discussion.
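The recursive finite-size technique referred to above is the standard step-scaling method; in conventional notation (a textbook definition, not a result specific to this work), the step-scaling function relates the coupling at box sizes L and 2L:

```latex
\sigma(u) \;=\; \left.\bar g^{\,2}(2L)\right|_{\bar g^{2}(L)=u}
```

Starting from u_0 = \bar{g}^2(L_0) = 2.012 and solving \sigma(u_{n+1}) = u_n recursively gives the couplings at L_n = 2^{-n} L_0, i.e. at the scales 2^n/L_0 quoted in the abstract, so that widely separated energies are connected without any single simulation having to span them all.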
NASA Astrophysics Data System (ADS)
Eftekharzadeh, S.; Myers, A. D.; Hennawi, J. F.; Djorgovski, S. G.; Richards, G. T.; Mahabal, A. A.; Graham, M. J.
2017-06-01
We present the most precise estimate to date of the clustering of quasars on very small scales, based on a sample of 47 binary quasars with magnitudes of g < 20.85 and proper transverse separations of ~25 h^{-1} kpc. Our sample of binary quasars, which is about six times larger than any previous spectroscopically confirmed sample on these scales, is targeted using a kernel density estimation (KDE) technique applied to Sloan Digital Sky Survey (SDSS) imaging over most of the SDSS area. Our sample is 'complete' in that all of the KDE target pairs with 17.0 ≲ R ≲ 36.2 h^{-1} kpc in our area of interest have been spectroscopically confirmed from a combination of previous surveys and our own long-slit observational campaign. We catalogue 230 candidate quasar pairs with angular separations of <8 arcsec, from which our binary quasars were identified. We determine the projected correlation function of quasars (\bar{W}_p) in four bins of proper transverse scale over the range 17.0 ≲ R ≲ 36.2 h^{-1} kpc. The implied small-scale quasar clustering amplitude from the projected correlation function, integrated across our entire redshift range, is A = 24.1 ± 3.6 at ~26.6 h^{-1} kpc. Our sample is the first spectroscopically confirmed sample of quasar pairs that is sufficiently large to study how quasar clustering evolves with redshift at ~25 h^{-1} kpc. We find that empirical descriptions of how quasar clustering evolves with redshift at ~25 h^{-1} Mpc also adequately describe the evolution of quasar clustering at ~25 h^{-1} kpc.
Modeling emergent large-scale structures of barchan dune fields
NASA Astrophysics Data System (ADS)
Worman, S. L.; Murray, A.; Littlewood, R. C.; Andreotti, B.; Claudin, P.
2013-12-01
In nature, barchan dunes typically exist as members of larger fields that display striking, enigmatic structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to patterns in external forcing. To explore the possibility that observed structures emerge spontaneously as a collective result of many dunes interacting with each other, we built a numerical model that treats barchans as discrete entities that interact with one another according to simplified rules derived from theoretical and numerical work, and from field observations: Dunes exchange sand through the fluxes that leak from the downwind side of each dune and are captured on their upstream sides; when dunes become sufficiently large, small dunes are born on their downwind sides ('calving'); and when dunes collide directly enough, they merge. Results show that these relatively simple interactions provide potential explanations for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements of similarly sized dunes in denser fields. The results also suggest that (1) dune field characteristics depend on the sand flux fed into the upwind boundary, although (2) moving downwind, the system approaches a common attracting state in which the memory of the upwind conditions vanishes. This work supports the hypothesis that calving exerts a first order control on field-scale phenomena; it prevents individual dunes from growing without bound, as single-dune analyses suggest, and allows the formation of roughly realistic, persistent dune field patterns.
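The interaction rules summarized in this abstract (sand-flux exchange, calving above a size threshold, merging on collision) lend themselves to a minimal agent-based sketch. The code below is a toy one-dimensional reduction with invented parameters and rule forms, not the authors' model; it only illustrates the discrete-entity approach.

```python
def evolve(dunes, influx=2.0, steps=100, calve_at=80.0, merge_dist=1.0):
    """Toy 1-D barchan field. Each dune is a (position, volume) pair; wind
    blows in +x. Rules (all illustrative): the upwind boundary feeds sand to
    the most upwind dune; small dunes migrate faster than large ones; an
    oversized dune calves a small dune on its downwind side; dunes that come
    within merge_dist of each other coalesce."""
    for _ in range(steps):
        dunes.sort(key=lambda d: d[0])
        if dunes:                                  # upwind sand supply
            x0, v0 = dunes[0]
            dunes[0] = (x0, v0 + influx)
        # migration speed ~ 1/volume (a standard barchan assumption)
        dunes = [(x + 10.0 / v, v) for x, v in dunes]
        shed = []                                  # calving
        for x, v in dunes:
            if v > calve_at:
                calf = calve_at * 0.25
                shed.append((x + 2.0, calf))       # calf on downwind side
                v -= calf
            shed.append((x, v))
        dunes = sorted(shed, key=lambda d: d[0])
        merged = []                                # merging on near-collision
        for x, v in dunes:
            if merged and x - merged[-1][0] < merge_dist:
                px, pv = merged.pop()
                merged.append((px, pv + v))
            else:
                merged.append((x, v))
        dunes = merged
    return dunes
```

Because calving, merging, and migration all conserve sand, the total volume after `steps` iterations equals the initial volume plus `influx * steps`, and a single seeded dune spawns a downwind train of calves, echoing the field-scale patterns discussed above.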
Flexible Organic Electronics for Use in Neural Sensing
Bink, Hank; Lai, Yuming; Saudari, Sangameshwar R.; Helfer, Brian; Viventi, Jonathan; Van der Spiegel, Jan; Litt, Brian; Kagan, Cherie
2016-01-01
Recent research in brain-machine interfaces and devices to treat neurological disease indicate that important network activity exists at temporal and spatial scales beyond the resolution of existing implantable devices. High density, active electrode arrays hold great promise in enabling high-resolution interface with the brain to access and influence this network activity. Integrating flexible electronic devices directly at the neural interface can enable thousands of multiplexed electrodes to be connected using many fewer wires. Active electrode arrays have been demonstrated using flexible, inorganic silicon transistors. However, these approaches may be limited in their ability to be cost-effectively scaled to large array sizes (8×8 cm). Here we show amplifiers built using flexible organic transistors with sufficient performance for neural signal recording. We also demonstrate a pathway for a fully integrated, amplified and multiplexed electrode array built from these devices. PMID:22255558
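The wire-count saving from on-site multiplexing mentioned above is simple arithmetic: an N×M electrode grid needs only N+M shared row/column lines instead of N·M dedicated wires. The 64×64 grid below is an assumed example size, not a figure from the paper.

```python
# Hypothetical array geometry: the paper's 8x8 cm target area does not fix
# the electrode count, so 64x64 is an illustrative assumption.
rows, cols = 64, 64
passive_wires = rows * cols        # one dedicated wire per electrode
multiplexed_wires = rows + cols    # shared row-select + column-readout lines
reduction = passive_wires / multiplexed_wires   # fold reduction in wiring
```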
Relativistic magnetised perturbations: magnetic pressure versus magnetic tension
NASA Astrophysics Data System (ADS)
Tseneklidou, Dimitra; Tsagas, Christos G.; Barrow, John D.
2018-06-01
We study the linear evolution of magnetised cosmological perturbations in the post-recombination epoch. Using full general relativity and adopting the ideal magnetohydrodynamic approximation, we refine and extend the previous treatments. More specifically, this is the first relativistic study that accounts for the effects of the magnetic tension, in addition to those of the field’s pressure. Our solutions show that on sufficiently large scales, larger than the (purely magnetic) Jeans length, the perturbations evolve essentially unaffected by the magnetic presence. The magnetic pressure dominates on small scales, where it forces the perturbations to oscillate and decay. Close to the Jeans length, however, the field’s tension takes over and leads to a weak growth of the inhomogeneities. These solutions clearly demonstrate the opposing action of the aforementioned two magnetic agents, namely of the field’s pressure and tension, on the linear evolution of cosmological density perturbations.
Otjes, Simon
2018-01-01
In political science the economic left-right dimension plays a central role. A growing body of evidence shows that the economic policy preferences of a large segment of citizens do not scale sufficiently. Using Mokken scale analysis, this study determines the causes of this phenomenon. Differences in the extent to which the economic policy preferences of citizens fit the left-right dimension can be explained in terms of the interaction between individual-level and political system-level variables: citizens who pay more attention to politicians whose views conform to the left-right dimension themselves hold views that conform to it. There is also a role for the legacy of communist dictatorship: citizens who were socialised in democratic countries have views that fit the left-right dimension better than those socialised under communism.
NASA Technical Reports Server (NTRS)
Starr, David OC.; Benedetti, Angela; Boehm, Matt; Brown, Philip R. A.; Gierens, Klaus M.; Girard, Eric; Giraud, Vincent; Jakob, Christian; Jensen, Eric; Khvorostyanov, Vitaly;
2000-01-01
The GEWEX Cloud System Study (GCSS, GEWEX is the Global Energy and Water Cycle Experiment) is a community activity aiming to promote development of improved cloud parameterizations for application in the large-scale general circulation models (GCMs) used for climate research and for numerical weather prediction (Browning et al, 1994). The GCSS strategy is founded upon the use of cloud-system models (CSMs). These are "process" models with sufficient spatial and temporal resolution to represent individual cloud elements, but spanning a wide range of space and time scales to enable statistical analysis of simulated cloud systems. GCSS also employs single-column versions of the parametric cloud models (SCMs) used in GCMs. GCSS has working groups on boundary-layer clouds, cirrus clouds, extratropical layer cloud systems, precipitating deep convective cloud systems, and polar clouds.
This meeting: A biased observer's view
NASA Astrophysics Data System (ADS)
Heiles, Carl
1992-06-01
Letting yourself be nominated for a conference summary talk is considered by some to be a big mistake. It eliminates the possibility of making up the sleep lost at night, while partying, during the day, while sitting in the talks. It even forces you to look at all the poster papers. But at a meeting like this, with the wealth of observational data, it is definitely not a mistake: it was even worth missing some of the parties! My problem was to devise a way to be sufficiently selective so as to provide a reasonably coherent summary. I chose to emphasize the multitude of large-scale maps presented at the meeting. Many are relevant to the ``worm paradigm'' (Sec. 2), and the recent γ-ray and ROSAT results are relevant to the Hot Ionized Medium (Sec. 3). And finally, I was impressed by a number of well-crafted smaller-scale observations, which elucidate particular aspects of the interstellar medium (Sec. 4).
Numerical simulation of filling a magnetic flux tube with a cold plasma: Anomalous plasma effects
NASA Technical Reports Server (NTRS)
Singh, Nagendra; Leung, W. C.
1995-01-01
Large-scale models of plasmaspheric refilling have revealed that during the early stage of the refilling, counterstreaming ion beams are a common feature. However, the instability of such ion beams and its effect on refilling remain unexplored. In order to learn the basic effects of ion beam instabilities on refilling, we have performed numerical simulations of the refilling of an artificial magnetic flux tube. (The shape and size of the tube are chosen so that the essential features of the refilling problem are kept in the simulation and, at the same time, the small-scale processes driven by the ion beams are sufficiently resolved.) We have also studied the effect of commonly found equatorially trapped warm and/or hot plasma on the filling of a flux tube with a cold plasma. Three types of simulation runs have been performed.
Supersonic Retropropulsion Technology Development in NASA's Entry, Descent, and Landing Project
NASA Technical Reports Server (NTRS)
Edquist, Karl T.; Berry, Scott A.; Rhode, Matthew N.; Kelb, Bil; Korzun, Ashley; Dyakonov, Artem A.; Zarchi, Kerry A.; Schauerhamer, Daniel G.; Post, Ethan A.
2012-01-01
NASA's Entry, Descent, and Landing (EDL) space technology roadmap calls for new technologies to achieve human exploration of Mars in the coming decades [1]. One of those technologies, termed Supersonic Retropropulsion (SRP), involves initiation of propulsive deceleration at supersonic Mach numbers. The potential benefits afforded by SRP to improve payload mass and landing precision make the technology attractive for future EDL missions. NASA's EDL project spent two years advancing the technological maturity of SRP for Mars exploration [2-15]. This paper summarizes the technical accomplishments from the project and highlights challenges and recommendations for future SRP technology development programs. These challenges include: developing sufficiently large SRP engines for use on human-scale entry systems; testing and computationally modelling complex and unsteady SRP fluid dynamics; understanding the effects of SRP on entry vehicle stability and controllability; and demonstrating sub-scale SRP entry systems in Earth's atmosphere.
Molecular-scale properties of MoO3-doped pentacene
NASA Astrophysics Data System (ADS)
Ha, Sieu D.; Meyer, Jens; Kahn, Antoine
2010-10-01
The mechanisms of molecular doping in organic electronic materials are explored through investigation of pentacene p-doped with molybdenum trioxide (MoO3). Doping is confirmed with ultraviolet photoelectron spectroscopy. Isolated dopants are imaged at the molecular scale using scanning tunneling microscopy (STM) and effects due to localized holes are observed. The results demonstrate that donated charges are localized by the counterpotential of ionized dopants in MoO3-doped pentacene, generalizing similar effects previously observed for pentacene doped with tetrafluoro-tetracyanoquinodimethane. Such localized hole effects are only observed for low molecular weight MoO3 species. It is shown that for larger MoO3 polymers and clusters, the ionized dopant potential is sufficiently large as to mask the effect of the localized hole in STM images. Current-voltage measurements recorded using scanning tunneling spectroscopy reveal that electron conductivity decreases in MoO3-doped films, as expected for electron capture and p-doping.
High-resolution simulation of deep pencil beam surveys - analysis of quasi-periodicity
NASA Astrophysics Data System (ADS)
Weiss, A. G.; Buchert, T.
1993-07-01
We carry out pencil beam constructions in a high-resolution simulation of the large-scale structure of galaxies. The initial density fluctuations are taken to have a truncated power spectrum. All the models have {OMEGA} = 1. As an example we present the results for the case of "Hot-Dark-Matter" (HDM) initial conditions with scale-free n = 1 power index on large scales as a representative of models with sufficient large-scale power. We use an analytic approximation for particle trajectories of a self-gravitating dust continuum and apply a local dynamical biasing of volume elements to identify luminous matter in the model. Using this method, we are able to resolve formally a simulation box of 1200h^-1^ Mpc (e.g. for HDM initial conditions) down to the scale of galactic halos using 2160^3^ particles. We consider this as the minimal resolution necessary for a sensible simulation of deep pencil beam data. Pencil beam probes are taken for a given epoch using the parameters of observed beams. In particular, our analysis concentrates on the detection of a quasi-periodicity in the beam probes using several different methods. The resulting beam ensembles are analyzed statistically using number distributions, pair-count histograms, unnormalized pair-counts, power spectrum analysis and trial-period folding. Periodicities are classified according to their significance level in the power spectrum of the beams. The simulation is designed for application to parameter studies which prepare future observational projects. We find that a large percentage of the beams show quasi-periodicities with periods which cluster at a certain length scale. The periods found range between one and eight times the cutoff length in the initial fluctuation spectrum. At significance levels similar to those of the data of Broadhurst et al. (1990), we find about 15% of the pencil beams to show periodicities, about 30% of which are around the mean separation of rich clusters, while the distribution of scales reaches values of more than 200h^-1^ Mpc. The detection of periodicities larger than the typical void size need not be due to missing "walls" (like the so-called "Great Wall" seen in the CfA catalogue of galaxies), but can be due to different clustering properties of galaxies along the beams.
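The power-spectrum step of the periodicity analysis can be illustrated with a classical periodogram evaluated at trial periods. This is a toy version with assumed binning (the bin size and units below are made up); it merely shows how a quasi-period in a binned galaxy-count series is flagged.

```python
import math

def periodogram(counts, bin_size, trial_periods):
    """Classical (Schuster) periodogram of a binned count series: spectral
    power at each trial period. counts[i] is the galaxy count in bin i of
    width bin_size along the beam."""
    n = len(counts)
    mean = sum(counts) / n
    powers = []
    for period in trial_periods:
        w = 2.0 * math.pi * bin_size / period
        c = sum((x - mean) * math.cos(w * i) for i, x in enumerate(counts))
        s = sum((x - mean) * math.sin(w * i) for i, x in enumerate(counts))
        powers.append((c * c + s * s) / n)
    return powers
```

A series with a built-in period produces a sharply dominant power at that trial period, which is the quantity tested against a significance level in the analysis above.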
Tuberculosis and the role of war in the modern era.
Drobniewski, F A; Verlander, N Q
2000-12-01
Tuberculosis (TB) remains a major global health problem; historically, major wars have increased TB notifications. This study evaluated whether modern conflicts worldwide affected TB notifications between 1975 and 1995. Dates of conflicts were obtained and matched with national TB notification data reported to the World Health Organization. Overall notification rates were calculated pre and post conflict. Poisson regression analysis was applied to all conflicts with sufficient data for detailed trend analysis. Thirty-six conflicts were identified, for which 3-year population and notification data were obtained. Overall crude TB notification rates were 81.9 and 105.1/100,000 pre and post start of conflict in these countries. Sufficient data existed in 16 countries to apply Poisson regression analysis to model 5-year pre and post start of conflict trends. This analysis indicated that the risk of presenting with TB in any country 2.5 years after the outbreak of conflict relative to 2.5 years before the outbreak was 1.016 (95%CI 0.9435-1.095). The modelling suggested that in the modern era war may not significantly damage efforts to control TB in the long term. This might be due to the limited scale of most of these conflicts compared to the large-scale civilian disruption associated with 'world wars'. The management of TB should be considered in planning post-conflict refugee and reconstruction programmes.
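The crude pre/post comparison in this abstract amounts to a ratio of two Poisson rates. A minimal sketch of that calculation with a Wald confidence interval on the log scale follows; the counts in the usage example are invented, not the study's data.

```python
import math

def rate_ratio_ci(cases_post, py_post, cases_pre, py_pre, z=1.96):
    """Crude notification-rate ratio (post/pre conflict) with a Wald 95% CI.
    cases_* are notification counts, py_* the corresponding person-years."""
    rr = (cases_post / py_post) / (cases_pre / py_pre)
    se = math.sqrt(1.0 / cases_post + 1.0 / cases_pre)  # SE of log rate ratio
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```

With hypothetical counts of 210 cases in 200,000 person-years post conflict against 164 in 200,000 pre, the ratio is about 1.28 with a CI straddling it; a CI containing 1.0, as in the study's pooled estimate of 1.016 (95%CI 0.9435-1.095), indicates no significant change.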
Physics of Core-Collapse Supernovae in Three Dimensions: A Sneak Preview
NASA Astrophysics Data System (ADS)
Janka, Hans-Thomas; Melson, Tobias; Summa, Alexander
2016-10-01
Nonspherical mass motions are a generic feature of core-collapse supernovae, and hydrodynamic instabilities play a crucial role in the explosion mechanism. The first successful neutrino-driven explosions could be obtained with self-consistent, first-principles simulations in three spatial dimensions. But three-dimensional (3D) models tend to be less prone to explosion than the corresponding axisymmetric two-dimensional (2D) ones. The reason is that 3D turbulence leads to energy cascading from large to small spatial scales, the inverse of the 2D case, thus disfavoring the growth of buoyant plumes on the largest scales. Unless the inertia to explode simply reflects a lack of sufficient resolution in relevant regions, some important component of robust and sufficiently energetic neutrino-powered explosions may still be missing. Such a deficit could be associated with progenitor properties such as rotation, magnetic fields, or precollapse perturbations, or with microphysics that could cause enhancement of neutrino heating behind the shock. 3D simulations have also revealed new phenomena that are not present in 2D ones, such as spiral modes of the standing accretion shock instability (SASI) and a stunning dipolar lepton-number emission self-sustained asymmetry (LESA). Both impose time- and direction-dependent variations on the detectable neutrino signal. The understanding of these effects and of their consequences is still in its infancy.
A Unified Theory of Impact Crises and Mass Extinctions: Quantitative Tests
NASA Technical Reports Server (NTRS)
Rampino, Michael R.; Haggerty, Bruce M.; Pagano, Thomas C.
1997-01-01
Several quantitative tests of a general hypothesis linking impacts of large asteroids and comets with mass extinctions of life are possible based on astronomical data, impact dynamics, and geological information. The waiting times of large-body impacts on the Earth, derived from the flux of Earth-crossing asteroids and comets, and the estimated sizes of impacts capable of causing large-scale environmental disasters, predict that impacts of objects greater than or equal to 5 km in diameter (greater than or equal to 10(exp 7) Mt TNT equivalent) could be sufficient to explain the record of approximately 25 extinction pulses in the last 540 Myr, with the 5 recorded major mass extinctions related to impacts of the largest objects of greater than or equal to 10 km in diameter (greater than or equal to 10(exp 8) Mt events). Smaller impacts (approximately 10(exp 6) Mt), with significant regional environmental effects, could be responsible for the lesser boundaries in the geologic record.
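The waiting-time argument above is simple arithmetic: roughly 25 extinction pulses in 540 Myr imply a mean interval near 22 Myr between qualifying impacts, and a Poisson model then gives the chance of at least one such impact in any interval. A one-line check (illustrative only):

```python
import math

pulses, span_myr = 25, 540.0
mean_wait = span_myr / pulses   # mean interval between >=5 km impacts, ~21.6 Myr

def prob_at_least_one(interval_myr, mean_wait_myr):
    """Poisson probability of at least one qualifying impact in an interval."""
    return 1.0 - math.exp(-interval_myr / mean_wait_myr)
```

Over one mean interval the probability is 1 - e^-1 ≈ 0.63, consistent with the irregular spacing of extinction pulses in the geologic record.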
The plasma separation process as a pre-cursor for large scale radioisotope production
NASA Astrophysics Data System (ADS)
Stevenson, Nigel R.
2001-07-01
Radioisotope production generally employs either accelerators or reactors to convert stable (usually enriched) isotopes into the desired product species. Radioisotopes have applications in industry, environmental sciences, and most significantly in medicine. The production of many potentially useful radioisotopes is significantly hindered by the lack of availability or by the high cost of key enriched stable isotopes. To try and meet this demand, certain niche enrichment processes have been developed and commercialized. Calutrons, centrifuges, and laser separation processes are some of the devices and techniques being employed to produce large quantities of selective enriched stable isotopes. Nevertheless, the list of enriched stable isotopes in sufficient quantities remains rather limited and this continues to restrict the availability of many radioisotopes that otherwise could have a significant impact on society. The Plasma Separation Process is a newly available commercial technique for producing large quantities of a wide range of enriched isotopes and thereby holds promise of being able to open the door to producing new and exciting applications of radioisotopes in the future.
A new tool called DISSECT for analysing large genomic data sets using a Big Data approach
Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert
2015-01-01
Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is a new and freely available software that is able to exploit the distributed-memory parallel computational architectures of compute clusters, to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010
Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe
A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed, and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. In conclusion, the chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted to date.
Taylor, R T; Solis, M; Weathers, D B; Taylor, J W
1975-03-01
In a large-scale study in the Miragoane Valley of Haiti, designed to test the effects of aerial ultralow volume (ULV) malathion on epidemic Plasmodium falciparum malaria, spray operations resulted in an immediate and sharp decline in numbers of the vector, Anopheles albimanus. The adult population of this mosquito remained at less than 1% of previous levels until several weeks after a 50-day spray period (27 October-16 December 1972) during which six cycles were completed. The study area offered ideal conditions of wind, temperature, humidity, and mountain barriers. Mosquitoes in the area were highly susceptible to malathion. Results indicated that aerial ULV treatment with malathion can reduce A. albimanus populations rapidly and effectively when applications are made over an area as large as 20,000 acres. Preliminary results showed that effective control was not achieved in areas one-quarter that size; these areas were not sufficiently large, and infiltration of mosquitoes from adjacent untreated areas was possible.
NASA Astrophysics Data System (ADS)
Song, Yan; Wang, Xiaocha; Mi, Wenbo
2017-12-01
Exploring magnetic anisotropy (MA) in single-atom-doped two-dimensional materials provides a viable ground for realizing information storage and processing at ultimate length scales. Herein, the MA of 5d transition-metal-doped monolayer WSe2 is investigated by first-principles calculations. Large MA energy (MAE) is achieved in several doping systems. The direction of MA is determined by the dopant in-plane d states in the vicinity of the Fermi level, in line with previous studies. An occupation rule is found in this 5d-doped system: the parity of the occupation number of the in-plane d orbitals of the dopant determines the preference between in-plane and out-of-plane anisotropy. Furthermore, this rule is understood by second-order perturbation theory and confirmed by charge-doping analysis. Considering the relatively scarce research on two-dimensional MA and the insufficiently large MAE values reported so far, suitable contact-medium and dopant pairs with large MAE and tunable MA pave the way to novel data-storage paradigms.
Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations
NASA Astrophysics Data System (ADS)
Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe; Gagliardi, Laura; de Jong, Wibe A.
2017-11-01
A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. The chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted to date.
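The batching idea in this abstract, partitioning a huge CI vector into near-equal blocks for distribution over cores, can be illustrated generically. The sketch below is not NWChem's actual data layout; it only shows the index arithmetic for splitting a determinant list into contiguous batches.

```python
def batches(n_dets, n_batches):
    """Split determinant indices 0..n_dets-1 into n_batches nearly equal
    contiguous half-open blocks (start, end), distributing the remainder
    one extra element at a time to the first batches."""
    base, extra = divmod(n_dets, n_batches)
    out, start = [], 0
    for b in range(n_batches):
        size = base + (1 if b < extra else 0)
        out.append((start, start + size))
        start += size
    return out
```

For a trillion-determinant expansion this arithmetic is what lets each of thousands of cores own one block of the CI vector without gaps or overlaps.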
NASA Astrophysics Data System (ADS)
Bronstert, Axel; Heistermann, Maik; Francke, Till
2017-04-01
Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the question which is the appropriate scale to be applied depends on the overall question under study. Therefore, it is not advisable to give a general applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth to have a look on what are the advantages and the shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since increasingly (very) large / global scale approaches and models are under operation and therefore the question arises how far and for what purposes such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies process scale, measurement scale, and modelling scale differ from each other. In some cases, the differences between theses scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models of different scales: Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research, based on controlled experiments (e.g. infiltration; root water uptake; chemical matter transport); (+) data of state conditions (e.g. soil parameter, vegetation properties) and boundary fluxes (e.g. 
rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on first principals, partly pde-type, are available for several processes (but not for all), because measurement and modelling scale are compatible (-) the spatial model domain are hardly representative for larger spatial entities, including regions for which water resources management decisions are to be taken; straightforward upsizing is also limited by data availability and computational requirements. Meso scale (e.g. extent of a small to large catchment or region): (+) the spatial extent of the model domain has approximately the same extent as the regions for which water resources management decisions are to be taken. I.e., such models enable water resources quantification at the scale of most water management decisions; (+) data of some state conditions (e.g. vegetation cover, topography, river network and cross sections) are available; (+) data of some boundary fluxes (in particular surface runoff / channel flow) are directly measurable with mostly sufficient certainty; (+) equations, partly based on simple water budgeting, partly variants of pde-type equations, are available for most hydrological processes. This enables the construction of meso-scale distributed models reflecting the spatial heterogeneity of regions/landscapes; (-) process scale, measurement scale, and modelling scale differ from each other for a number of processes, e.g., such as runoff generation; (-) the process formulation (usually derived from micro-scale studies) cannot directly be transferred to the modelling domain. Upscaling procedures for this purpose are not readily and generally available. Macro scale (e.g. extent of a continent up to global): (+) the spatial extent of the model may cover the whole Earth. 
This enables an attractive global display of model results; (+) model results might be technically interchangeable or at least comparable with results from other global models, such as global climate models; (-) process scale, measurement scale, and modelling scale differ heavily from each other for all hydrological and associated processes; (-) the model domain and its results are not representative regions for which water resources management decisions are to be taken. (-) both state condition and boundary flux data are hardly available for the whole model domain. Water management data and discharge data from remote regions are particular incomplete / unavailable for this scale. This undermines the model's verifiability; (-) since process formulation and resulting modelling reliability at this scale is very limited, such models can hardly show any explanatory skills or prognostic power; (-) since both the entire model domain and the spatial sub-units cover large areas, model results represent values averaged over at least the spatial sub-unit's extent. In many cases, the applied time scale implies a long-term averaging in time, too. We emphasize the importance to be aware of the above mentioned strengths and weaknesses of those scale-specific models. (Many of the) results of the current global model studies do not reflect such limitations. In particular, we consider the averaging over large model entities in space and/or time inadequate. Many hydrological processes are of a non-linear nature, including threshold-type behaviour. Such features cannot be reflected by such large scale entities. The model results therefore can be of little or no use for water resources decisions and/or even misleading for public debates or decision making. Some rather newly developed sustainability concepts, e.g. "Planetary Boundaries" in which humanity may "continue to develop and thrive for generations to come" are based on such global-scale approaches and models. 
However, many of the major sustainability problems on Earth, e.g. water scarcity, manifest themselves not on a global but on a regional scale. While on a global scale water might appear to be available in sufficient quantity and quality, there are many regions where water problems already have very harmful or even devastating effects. The challenge is therefore to derive models and observation programmes for regional scales. If a global display is desired, future efforts should be directed towards composing a global picture from a mosaic of regionally sound assessments, rather than "zooming into" the results of large-scale simulations. Still, a key question remains to be discussed, namely for which purposes models at this (global) scale can be used.
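The point above about non-linear, threshold-type processes being lost under large-scale averaging can be illustrated with a toy infiltration-excess runoff model (all numbers hypothetical; `runoff = max(0, rain - capacity)` is a deliberately minimal threshold process, not any specific model cited here):

```python
# Toy illustration (hypothetical values): threshold-type runoff generation.
# Because runoff = max(0, rainfall - infiltration_capacity) is non-linear,
# averaging the forcing over a large model cell and then applying the
# process equation underestimates the runoff actually generated.

def runoff(rain, capacity=20.0):
    """Infiltration-excess runoff (mm) for a storm delivering `rain` mm."""
    return max(0.0, rain - capacity)

# Four sub-areas of one macro-scale grid cell with heterogeneous rainfall (mm)
rain = [5.0, 10.0, 30.0, 55.0]

# Distributed: resolve the heterogeneity, then aggregate the responses
runoff_distributed = sum(runoff(r) for r in rain) / len(rain)

# Macro-scale shortcut: average the forcing first, then apply the equation
rain_mean = sum(rain) / len(rain)
runoff_lumped = runoff(rain_mean)

print(runoff_distributed, runoff_lumped)  # the lumped estimate is lower
```

The two estimates disagree even though both use the same process equation and the same total rainfall, which is exactly the aggregation bias the authors warn about.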
Voids and constraints on nonlinear clustering of galaxies
NASA Technical Reports Server (NTRS)
Vogeley, Michael S.; Geller, Margaret J.; Park, Changbom; Huchra, John P.
1994-01-01
Void statistics of the galaxy distribution in the Center for Astrophysics Redshift Survey provide strong constraints on galaxy clustering in the nonlinear regime, i.e., on scales R ≤ 10 h^-1 Mpc. Computation of high-order moments of the galaxy distribution requires a sample that (1) densely traces the large-scale structure and (2) covers sufficient volume to obtain good statistics. The CfA redshift survey densely samples structure on scales ≤ 10 h^-1 Mpc and has sufficient depth and angular coverage to approach a fair sample on these scales. In the nonlinear regime, the void probability function (VPF) for CfA samples exhibits apparent agreement with hierarchical scaling (such scaling implies that the N-point correlation functions for N > 2 depend only on pairwise products of the two-point function xi(r)). However, simulations of cosmological models show that this scaling in redshift space does not necessarily imply such scaling in real space, even in the nonlinear regime; peculiar velocities cause distortions which can yield erroneous agreement with hierarchical scaling. The underdensity probability measures the frequency of 'voids' with density rho < 0.2 rho-bar, i.e., below 20% of the mean density. This statistic reveals a paucity of very bright galaxies (L > L*) in the 'voids.' Underdensities are ≥ 2 sigma more frequent in bright galaxy samples than in samples that include fainter galaxies. Comparison of void statistics of CfA samples with simulations of a range of cosmological models favors models with Gaussian primordial fluctuations and Cold Dark Matter (CDM)-like initial power spectra. Biased models tend to produce voids that are too empty. We also compare these data with three specific models of the Cold Dark Matter cosmogony: an unbiased, open universe CDM model (omega = 0.4, h = 0.5) provides a good match to the VPF of the CfA samples.
Biasing of the galaxy distribution in the 'standard' CDM model (omega = 1, b = 1.5; see below for definitions) and in a nonzero cosmological constant CDM model (omega = 0.4, h = 0.6, lambda_0 = 0.6, b = 1.3) produces voids that are too empty. All three simulations match the observed VPF and underdensity probability for samples of very bright (M < M* = -19.2) galaxies, but produce voids that are too empty when compared with samples that include fainter galaxies.
Analysis of Radar and Optical Space Borne Data for Large Scale Topographical Mapping
NASA Astrophysics Data System (ADS)
Tampubolon, W.; Reinhardt, W.
2015-03-01
Normally, in order to provide high resolution three-dimensional (3D) geospatial data, large scale topographical mapping needs input from conventional airborne campaigns, which in Indonesia are bureaucratically complicated, especially during legal administration procedures, i.e. security clearance from the military/defense ministry. This often causes additional time delays, on top of technical constraints such as weather and limited aircraft availability for airborne campaigns. Geospatial data quality is, of course, an important issue for many applications. The increasing demand for geospatial data nowadays consequently requires high resolution datasets as well as a sufficient level of accuracy. Therefore an integration of different technologies is required in many cases to achieve the expected result, especially in the context of disaster preparedness and emergency response. Another important issue in this context is the fast delivery of relevant data, which is expressed by the term "Rapid Mapping". In this paper we present first results of on-going research to integrate different data sources like space borne radar and optical platforms. Initially, the orthorectification of Very High Resolution Satellite (VHRS) imagery, i.e. SPOT-6, has been done as a continuous process with the DEM generation using TerraSAR-X/TanDEM-X data. The use of Ground Control Points (GCPs) from GNSS surveys is mandatory in order to fulfil geometrical accuracy requirements. In addition, this research aims at providing a suitable processing algorithm for space borne data for large scale topographical mapping, as described in section 3.2. Recently, radar space borne data have been used for medium scale topographical mapping, e.g. for the 1:50.000 map scale in Indonesian territories. The goal of this on-going research is to increase the accuracy of remote sensing data through different activities, e.g.
the integration of different data sources (optical and radar) or the usage of GCPs in both the optical and the radar satellite data processing. Finally, these results will serve in the future as a reference for further geospatial data acquisitions to support topographical mapping at even larger scales, up to the 1:10.000 map scale.
Extended-Range High-Resolution Dynamical Downscaling over a Continental-Scale Domain
NASA Astrophysics Data System (ADS)
Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.
2014-12-01
High-resolution mesoscale simulations, when applied for downscaling meteorological fields over large spatial domains and for extended time periods, can provide valuable information for many practical application scenarios, including the weather-dependent renewable energy industry. In the present study, a strategy has been proposed to dynamically downscale coarse-resolution meteorological fields from Environment Canada's regional analyses for a period of multiple years over the entire Canadian territory. The study demonstrates that a continuous mesoscale simulation over the entire domain is the most suitable approach in this regard. Large-scale deviations in the different meteorological fields pose the biggest challenge for extended-range simulations over continental-scale domains, and the enforcement of the lateral boundary conditions is not sufficient to restrict such deviations. A scheme has therefore been developed to spectrally nudge the simulated high-resolution meteorological fields at the different model vertical levels towards those embedded in the coarse-resolution driving fields derived from the regional analyses. A series of experiments was carried out to determine the optimal nudging strategy, including the appropriate nudging length scales, nudging vertical profile and temporal relaxation. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, towards their expected values obtained from a high-resolution offline surface scheme was also devised to limit any considerable deviation in the evolving surface fields due to extended-range temporal integrations. The study shows that ensuring large-scale atmospheric similarity helps to deliver near-surface statistical scores for temperature, dew point temperature and horizontal wind speed that are better than or comparable to the operational regional forecasts issued by Environment Canada.
Furthermore, the meteorological fields resulting from the proposed downscaling strategy have significantly improved spatiotemporal variance compared to those from the operational forecasts, and any time series generated from the downscaled fields do not suffer from discontinuities due to switching between the consecutive forecasts.
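The spectral nudging described above relaxes only the large-scale (low-wavenumber) part of the simulated field towards the driving field, leaving the small scales free to evolve. A minimal 1-D periodic sketch (illustrative field names, cutoff and relaxation coefficient; the operational multi-level implementation differs):

```python
import numpy as np

def spectral_nudge(field, driving, k_cut, alpha):
    """Relax wavenumbers |k| <= k_cut of `field` towards `driving`.

    field, driving : 1-D periodic arrays on the same grid
    k_cut          : integer wavenumber cutoff (the nudging length scale)
    alpha          : relaxation strength per call, in [0, 1]
    """
    fh = np.fft.rfft(field)
    dh = np.fft.rfft(driving)
    k = np.arange(fh.size)
    low = k <= k_cut
    # nudge only the large scales; small scales are left untouched
    fh[low] += alpha * (dh[low] - fh[low])
    return np.fft.irfft(fh, n=field.size)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
driving = np.cos(x)                              # large-scale "analysis" field
field = 0.2 * np.cos(x) + 0.3 * np.sin(20 * x)   # model drifted at large scales
nudged = spectral_nudge(field, driving, k_cut=4, alpha=1.0)
```

With `alpha = 1` the large-scale drift is removed entirely while the small-scale wave at wavenumber 20 is preserved; in practice a weaker `alpha` applies the same correction gradually at each time step.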
Ferrari, Renata; Marzinelli, Ezequiel M; Ayroza, Camila Rezende; Jordan, Alan; Figueira, Will F; Byrne, Maria; Malcolm, Hamish A; Williams, Stefan B; Steinberg, Peter D
2018-01-01
Marine protected areas (MPAs) are designed to reduce threats to biodiversity and ecosystem functioning from anthropogenic activities. Assessment of MPA effectiveness requires synchronous sampling of protected and non-protected areas at multiple spatial and temporal scales. We used an autonomous underwater vehicle to map benthic communities in replicate 'no-take' and 'general-use' (fishing allowed) zones within three MPAs along 7° of latitude. We recorded 92 taxa and 38 morpho-groups across three large MPAs. We found that important habitat-forming biota (e.g. massive sponges) were more prevalent and abundant in no-take zones, while short ephemeral algae were more abundant in general-use zones, suggesting potential short-term effects of zoning (5-10 years). Yet, short-term effects of zoning were not detected at the community level (community structure or composition), while community structure varied significantly among MPAs. We conclude that by allowing rapid, simultaneous assessments at multiple spatial scales, autonomous underwater vehicles are useful to document changes in marine communities and identify adequate scales to manage them. This study advanced knowledge of marine benthic communities and their conservation in three ways. First, we quantified benthic biodiversity and abundance, generating the first baseline of these benthic communities against which the effectiveness of three large MPAs can be assessed. Second, we identified the taxonomic resolution necessary to assess both short- and long-term effects of MPAs, concluding that coarse taxonomic resolution is sufficient given that analyses of community structure at different taxonomic levels were generally consistent. Yet, observed differences were taxa-specific and may not have been evident using our broader taxonomic classifications; a classification of mid to high taxonomic resolution may be necessary to determine zoning effects on key taxa.
Third, we provide an example of statistical analyses and sampling design that, once temporal sampling is incorporated, will be useful to detect changes of marine benthic communities across multiple spatial and temporal scales.
NASA Astrophysics Data System (ADS)
Granger, Victoria; Fromentin, Jean-Marc; Bez, Nicolas; Relini, Giulio; Meynard, Christine N.; Gaertner, Jean-Claude; Maiorano, Porzia; Garcia Ruiz, Cristina; Follesa, Cristina; Gristina, Michele; Peristeraki, Panagiota; Brind'Amour, Anik; Carbonara, Pierluigi; Charilaou, Charis; Esteban, Antonio; Jadaud, Angélique; Joksimovic, Aleksandar; Kallianiotis, Argyris; Kolitari, Jerina; Manfredi, Chiara; Massuti, Enric; Mifsud, Roberta; Quetglas, Antoni; Refes, Wahid; Sbrana, Mario; Vrgoc, Nedo; Spedicato, Maria Teresa; Mérigot, Bastien
2015-01-01
Increasing human pressures and global environmental change may severely affect the diversity of species assemblages and associated ecosystem services. Despite the recent interest in phylogenetic and functional diversity, our knowledge of large spatio-temporal patterns of demersal fish diversity sampled by trawling remains incomplete, notably in the Mediterranean Sea, one of the most threatened marine regions of the world. We investigated large spatio-temporal diversity patterns by analysing a dataset of 19,886 hauls from 10 to 800 m depth performed annually during the last two decades by standardised scientific bottom trawl field surveys across the Mediterranean Sea, within the MEDITS program. A multi-component (eight diversity indices) and multi-scale (local assemblages, biogeographic regions to basins) approach indicates that only the two most traditional components (species richness and evenness) were sufficient to reflect patterns in taxonomic, phylogenetic or functional richness and divergence. We also question the use of widely computed indices that allow direct comparison of taxonomic, phylogenetic and functional diversity within a unique mathematical framework. In addition, demersal fish assemblages sampled by trawl do not follow continuous decreasing longitudinal/latitudinal diversity gradients (spatial effects explained up to 70.6% of deviance in regression tree and generalised linear models), for any of the indices and spatial scales analysed. Indeed, at both local and regional scales species richness was relatively high in the Iberian region, Malta, the Eastern Ionian and Aegean seas, while the Adriatic Sea and Cyprus showed relatively low levels. In contrast, evenness as well as taxonomic, phylogenetic and functional divergences did not show regional hotspots. All studied diversity components remained stable over the last two decades.
Overall, our results highlight the need to use complementary diversity indices through different spatial scales when developing conservation strategies and defining delimitations for protected areas.
NASA Astrophysics Data System (ADS)
Couderc, F.; Duran, A.; Vila, J.-P.
2017-08-01
We present an explicit scheme for a two-dimensional multilayer shallow water model with density stratification, for general meshes and collocated variables. The proposed strategy is based on a regularized model where the transport velocity in the advective fluxes is shifted proportionally to the pressure potential gradient. Using a similar strategy for the potential forces, we show the stability of the method in the sense of a discrete dissipation of the mechanical energy, in general multilayer and non-linear settings. These results are obtained at first order in space and time and extended using a second-order MUSCL reconstruction in space and Heun's method in time. With the objective of minimizing diffusive losses in realistic contexts, sufficient conditions on the regularizing terms are exhibited to ensure the scheme's linear stability at first and second order in time and space. The other main result concerns consistency with the asymptotics reached at small and large time scales in low Froude regimes, which govern large-scale oceanic circulation. Additionally, robustness and well-balanced results for motionless steady states are also ensured. These stability properties make for a very robust and efficient approach, easy to implement and particularly well suited for large-scale simulations. Some numerical experiments are proposed to highlight the scheme's efficiency: an experiment with fast gravitational modes, a smooth surface wave propagation, an initial propagating surface water elevation jump over non-trivial topography, and a last experiment of slow Rossby modes simulating the displacement of a baroclinic vortex subject to the Coriolis force.
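The central idea, shifting the transport velocity in the advective fluxes proportionally to the pressure-potential gradient, can be sketched on the one-layer 1-D shallow-water equations. This is a toy first-order periodic version with an illustrative shift coefficient `gamma`, not the authors' multilayer scheme; without the shift, this centered explicit discretization would be unstable:

```python
import numpy as np

def step(h, q, dx, dt, g=9.81, gamma=2.0):
    """One explicit first-order step of the 1-D shallow-water equations
    (h_t + (hu)_x = 0, q_t + (hu^2 + g h^2/2)_x = 0, q = hu) with the
    transport velocity in the advective fluxes shifted proportionally to
    the gradient of the pressure potential g*h.  Toy sketch: `gamma` is
    an illustrative coefficient chosen large enough to damp gravity waves
    under forward-Euler time stepping."""
    u = q / h
    hL, hR = h, np.roll(h, -1)          # left/right cell values at interfaces
    h_f = 0.5 * (hL + hR)               # centered interface averages
    u_f = 0.5 * (u + np.roll(u, -1))
    # regularization: shift the transport velocity against the
    # pressure-potential gradient
    u_star = u_f - gamma * dt * g * (hR - hL) / dx
    # conservative interface fluxes
    F_h = h_f * u_star
    F_q = F_h * u_f + 0.5 * g * h_f ** 2
    h_new = h - dt / dx * (F_h - np.roll(F_h, 1))
    q_new = q - dt / dx * (F_q - np.roll(F_q, 1))
    return h_new, q_new

n, L = 200, 1.0
dx = L / n
x = (np.arange(n) + 0.5) * dx
h = 1.0 + 0.01 * np.exp(-200 * (x - 0.5) ** 2)   # small surface bump
q = np.zeros(n)
mass0 = h.sum() * dx
for _ in range(200):
    h, q = step(h, q, dx, dt=2e-4)
print(h.sum() * dx - mass0)   # mass is conserved by the flux form
```

The shift acts as a diffusive flux on h of size proportional to `gamma * dt * g * h`, which is exactly the kind of regularizing term on which the paper places sufficient conditions for linear stability.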
Liao, Chi-Cheng; Chang, Chi-Ru; Hsu, Meng-Ting; Poo, Wak-Kim
2014-08-01
Sustainable harvest of natural products that meets the needs of local people has been viewed by many as an important means for sustaining conservation projects. Although plants often respond to tissue damage through compensatory growth, it may not secure long-term sustainability of the populations because many plants enhance individual well-being at the expense of propagation. Sustainability may further be threatened by infrequent, large-scale events, especially ill-documented ones. We studied the impacts of sprout harvesting on sprout growth in a dwarf bamboo (Pseudosasa usawai) population that has seemingly recovered from an infrequent, large-scale masting event. Experimental results suggest that although a single sprout harvest did not significantly alter the subsequent abundance and structure of sprouts, culm damage that accompanied sprout harvesting resulted in shorter, thinner, and fewer sprouts. Weaker recovery was found in windward, continually harvested, and more severely damaged sites. These findings suggest that sprout growth of damaged dwarf bamboos is likely non-compensatory, but is instead supported through physiological integration whose strength is determined by the well-being of the supplying ramets. Healthy culms closer to the damage also provided more resources than those farther away. Sustainable harvesting of sprouts could benefit from organized community efforts to limit the magnitude of culm damage, provide adequate spacing between harvested sites, and ensure sufficient time interval between harvests. Vegetation boundaries relatively resilient to infrequent, large-scale events are likely maintained by climatic factors and may be sensitive to climate change. Continual monitoring is, therefore, integral to the sustainability of harvesting projects.
Variability of the Magnetic Field Power Spectrum in the Solar Wind at Electron Scales
NASA Astrophysics Data System (ADS)
Roberts, Owen Wyn; Alexandrova, O.; Kajdič, P.; Turc, L.; Perrone, D.; Escoubet, C. P.; Walsh, A.
2017-12-01
At electron scales, the power spectrum of solar-wind magnetic fluctuations can be highly variable, and the dissipation mechanisms of the magnetic energy into the various particle species are under debate. In this paper, we investigate data from the Cluster mission's STAFF Search Coil magnetometer when the level of turbulence is sufficiently high that the morphology of the power spectrum at electron scales can be investigated. The Cluster spacecraft sample a disturbed interval of plasma where two streams of solar wind interact. Several discontinuities (coherent structures) are seen in the large-scale magnetic field, while at small scales several intermittent bursts of wave activity (whistler waves) are present. Several different morphologies of the power spectrum can be identified: (1) two power laws separated by a break, (2) an exponential cutoff near the Taylor-shifted electron scales, and (3) strong spectral knees at the Taylor-shifted electron scales. These different morphologies are investigated using wavelet coherence, showing that, in this interval, a clear break and strong spectral knees are features associated with sporadic quasi-parallel propagating whistler waves, even for short times. On the other hand, when no signatures of whistler waves at ~0.1-0.2 f_ce are present, a clear break is difficult to find and the spectrum is often more characteristic of a power law with an exponential cutoff.
Clinical Neuropathy Scales in Neuropathy Associated with Impaired Glucose Tolerance
Zilliox, Lindsay A.; Ruby, Sandra K.; Singh, Sujal; Zhan, Min; Russell, James W.
2015-01-01
AIMS Disagreement exists on effective and sensitive outcome measures in neuropathy associated with impaired glucose tolerance (IGT). Nerve conduction studies and skin biopsies are costly and invasive, and may have problems with reproducibility and clinical applicability. A clinical measure of neuropathy that has sufficient sensitivity and correlates with invasive measures would enable significant future research. METHODS Data were collected prospectively on patients with IGT and symptomatic early neuropathy (neuropathy symptoms < 2 years) and normal controls. The seven scales examined were the Neuropathy Impairment Score of the Lower Limb (NIS-LL), Michigan Diabetic Neuropathy Score (MNDS), modified Toronto Clinical Neuropathy Scale (mTCNS), Total Neuropathy Score (Clinical) (TNSc), the Utah Early Neuropathy Scale (UENS), the Early Neuropathy Score (ENS), and the Neuropathy Disability Score (NDS). RESULTS All seven clinical scales were excellent in discriminating patients with neuropathy from controls without neuropathy. The strongest discrimination was seen with the mTCNS. The best sensitivity and specificity over the range of scores obtained, as determined using receiver operating characteristic curves, was seen for the mTCNS followed by the TNSc. Most scales show a stronger correlation with measures of large- than small-fiber neuropathy. CONCLUSIONS All seven scales identify patients with neuropathy. For the purpose of screening potential patients for a clinical study, the mTCNS followed by the TNSc would be most helpful to select patients with neuropathy. PMID:25690405
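The receiver-operating-characteristic comparison behind these results reduces to a simple quantity: the area under the ROC curve equals the probability that a randomly chosen patient scores higher than a randomly chosen control. A self-contained sketch with made-up scale scores (not the study's data):

```python
def roc_auc(scores_cases, scores_controls):
    """Area under the ROC curve: the probability that a randomly chosen
    case scores higher than a randomly chosen control (ties count 1/2).
    Equivalent to the Mann-Whitney U statistic rescaled to [0, 1]."""
    n_pairs = len(scores_cases) * len(scores_controls)
    wins = 0.0
    for c in scores_cases:
        for k in scores_controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / n_pairs

# Hypothetical mTCNS-like scores: patients tend to score higher than controls
patients = [9, 12, 7, 15, 11, 8]
controls = [2, 5, 1, 7, 3, 4]
auc = roc_auc(patients, controls)
print(auc)  # close to 1 => strong discrimination between groups
```

An AUC of 0.5 means a scale discriminates no better than chance; the abstract's ranking of the mTCNS and TNSc corresponds to those scales having the largest areas of this kind.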
Thermal runaway of metal nano-tips during intense electron emission
NASA Astrophysics Data System (ADS)
Kyritsakis, A.; Veske, M.; Eimre, K.; Zadin, V.; Djurabekova, F.
2018-06-01
When an electron emitting tip is subjected to very high electric fields, plasma forms even under ultra-high vacuum conditions. This phenomenon, known as a vacuum arc, causes catastrophic surface modifications and constitutes a major limiting factor not only for modern electron sources, but also for many large-scale applications such as particle accelerators, fusion reactors, etc. Although vacuum arcs have been studied thoroughly, the physical mechanisms that lead from intense electron emission to plasma ignition are still unclear. In this article, we give insights into the atomic-scale processes taking place in metal nanotips under intense field emission conditions. We use multi-scale atomistic simulations that concurrently include field-induced forces, electron emission with finite-size and space-charge effects, and Nottingham and Joule heating. We find that when a sufficiently high electric field is applied to the tip, the emission-generated heat partially melts it and the field-induced force elongates and sharpens it. This initiates a positive feedback thermal runaway process, which eventually causes evaporation of large fractions of the tip. The reported mechanism can explain the origin of the neutral atoms necessary to initiate plasma, a missing key process required to explain the ignition of a vacuum arc. Our simulations provide a quantitative description of the conditions leading to runaway, which shall be valuable for both field emission applications and vacuum arc studies.
Decision dynamics of departure times: Experiments and modeling
NASA Astrophysics Data System (ADS)
Sun, Xiaoyan; Han, Xiao; Bao, Jian-Zhang; Jiang, Rui; Jia, Bin; Yan, Xiaoyong; Zhang, Boyu; Wang, Wen-Xu; Gao, Zi-You
2017-10-01
A fundamental problem in traffic science is to understand user-choice behaviors that account for the emergence of complex traffic phenomena. Despite much effort devoted to theoretically exploring departure time choice behaviors, relatively large-scale and systematic experimental tests of theoretical predictions are still lacking. In this paper, we aim to offer a more comprehensive understanding of departure time choice behaviors through a series of laboratory experiments under different traffic conditions and feedback information provided to commuters. In the experiments, the number of recruited players is much larger than the number of choices, to better mimic the real scenario in which a large number of commuters depart simultaneously within a relatively small time window. Sufficient numbers of rounds are conducted to ensure the convergence of collective behavior. Experimental results demonstrate that collective behavior is close to the user equilibrium, regardless of scale and traffic conditions. Moreover, the amount of feedback information has a negligible influence on collective behavior but a relatively stronger effect on individual choice behaviors. Reinforcement learning and Fermi learning models are built to reproduce the experimental results and uncover the underlying mechanism. Simulation results are in good agreement with the experimentally observed collective behaviors.
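The Fermi learning rule mentioned above updates a commuter's choice by comparing payoffs against an alternative and switching with a probability given by a logistic (Fermi) function of the payoff difference. A minimal sketch with an illustrative noise parameter `kappa` (the specific payoff structure of the experiments is not reproduced here):

```python
import math
import random

def fermi_switch_probability(payoff_current, payoff_alternative, kappa=0.1):
    """Fermi rule: probability of adopting the alternative strategy, a
    logistic function of the payoff difference; `kappa` sets the noise."""
    return 1.0 / (1.0 + math.exp(-(payoff_alternative - payoff_current) / kappa))

def update_choice(choice, payoffs, kappa=0.1, rng=random):
    """One Fermi-learning update: compare the current departure-time slot
    against a randomly sampled alternative slot and switch stochastically."""
    alternative = rng.randrange(len(payoffs))
    p = fermi_switch_probability(payoffs[choice], payoffs[alternative], kappa)
    return alternative if rng.random() < p else choice

# Equal payoffs give indifference; large gaps give near-deterministic switching
print(fermi_switch_probability(1.0, 1.0))   # 0.5
print(fermi_switch_probability(0.0, 1.0))   # close to 1
print(fermi_switch_probability(1.0, 0.0))   # close to 0
```

Iterating such updates across many agents tends to equalize realized payoffs across chosen departure times, which is why simulated collective behavior approaches the user equilibrium observed in the experiments.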
Baumann, Zofia; Mason, Robert P.; Conover, David O.; Balcom, Prentiss; Chen, Celia Y.; Buckman, Kate L.; Fisher, Nicholas S.; Baumann, Hannes
2016-01-01
Human exposure to the neurotoxic methylmercury (MeHg) occurs primarily via the consumption of marine fish, but the processes underlying large-scale spatial variations in fish MeHg concentrations [MeHg], which influence human exposure, are not sufficiently understood. We used the Atlantic silverside (Menidia menidia), an extensively studied model species and important forage fish, to examine latitudinal patterns in total Hg [Hg] and [MeHg]. Both [Hg] and [MeHg] significantly increased with latitude (0.014 and 0.048 μg MeHg g^-1 dw per degree of latitude in juveniles and adults, respectively). Four known latitudinal trends in silverside traits help explain these patterns: a latitudinal increase in MeHg assimilation efficiency, a latitudinal decrease in MeHg efflux, a latitudinal increase in weight loss due to longer and more severe winters, and a latitudinal increase in food consumption as an adaptation to the decreasing length of the growing season. Given the absence of a latitudinal pattern in particulate MeHg, a diet proxy for zooplanktivorous fish, we conclude that large-scale spatial variation in growth is the primary control of Hg bioaccumulation in this and potentially other fish species. PMID:28701819
Tang, Yuye; Chen, Xi; Yoo, Jejoong; Yethiraj, Arun; Cui, Qiang
2010-01-01
A hierarchical simulation framework that integrates information from all-atom simulations into a finite element model at the continuum level is established to study the mechanical response of a mechanosensitive channel of large conductance (MscL) of the bacterium Escherichia coli (E. coli), embedded in a vesicle formed by a dipalmitoylphosphatidylcholine (DPPC) lipid bilayer. Sufficient structural details of the protein are built into the continuum model, with key parameters and material properties derived from molecular mechanics simulations. The multi-scale framework is used to analyze the gating of MscL when the lipid vesicle is subjected to nanoindentation and patch clamp experiments, and the detailed structural transitions of the protein are obtained explicitly as a function of the external load; it is currently impossible to derive such information based solely on all-atom simulations. The gating pathways of E. coli MscL qualitatively agree with results from previous patch clamp experiments. The gating mechanisms under complex indentation-induced deformation are also predicted. This versatile hierarchical multi-scale framework may be further extended to study the mechanical behaviors of cells and biomolecules, as well as to guide and stimulate biomechanics experiments. PMID:21874098
Interaction of monopoles, dipoles, and turbulence with a shear flow
NASA Astrophysics Data System (ADS)
Marques Rosas Fernandes, V. H.; Kamp, L. P. J.; van Heijst, G. J. F.; Clercx, H. J. H.
2016-09-01
Direct numerical simulations have been conducted to examine the evolution of eddies in the presence of large-scale shear flows. The numerical experiments consist of initial-value problems in which monopolar and dipolar vortices as well as driven turbulence are superposed on a plane Couette or Poiseuille flow in a periodic two-dimensional channel. The evolution of the flow has been examined for different shear rates of the background flow and different widths of the channel. Results found for retrograde and prograde monopolar vortices are consistent with those found in the literature. Boundary layer vorticity, however, can significantly modify the straining and erosion of monopolar vortices normally seen in unbounded domains. Dipolar vortices are shown to be much more robust coherent structures in a large-scale shear flow than monopolar eddies. An analytical model for their trajectories, which are determined by self-advection together with advection and rotation by the shear flow, is presented. Turbulent kinetic energy is effectively suppressed by the shearing action of the background flow provided that the shear is linear (Couette flow) and of sufficient strength. Nonlinear shear, as present in the Poiseuille flow, seems instead to increase the turbulence strength, especially at high shear rates.
Regularization method for large eddy simulations of shock-turbulence interactions
NASA Astrophysics Data System (ADS)
Braun, N. O.; Pullin, D. I.; Meiron, D. I.
2018-05-01
The rapid change in scales across a shock has the potential to introduce unique difficulties in Large Eddy Simulations (LES) of compressible shock-turbulence flows if the governing model does not sufficiently capture the spectral distribution of energy in the upstream turbulence. A method for the regularization of LES of shock-turbulence interactions is presented which is constructed to enforce that the energy content in the highest resolved wavenumbers decays as k^(-5/3), and is computed locally in physical space at low computational cost. The application of the regularization to an existing subgrid scale model is shown to remove high-wavenumber errors while maintaining agreement with Direct Numerical Simulations (DNS) of forced and decaying isotropic turbulence. Linear interaction analysis is implemented to model the interaction of a shock with isotropic turbulence from LES. Comparisons to analytical models suggest that the regularization significantly improves the ability of the LES to predict amplifications in subgrid terms over the modeled shockwave. LES and DNS of decaying, modeled post-shock turbulence are also considered, and inclusion of the regularization in shock-turbulence LES is shown to improve agreement with lower Reynolds number DNS.
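The regularization target, that energy in the highest resolved wavenumbers decays as k^(-5/3), can be imposed on a 1-D periodic field by rescaling Fourier magnitudes above a cutoff. This is a spectral-space sketch for illustration only; the paper's operator achieves the same goal locally in physical space:

```python
import numpy as np

def enforce_kolmogorov_tail(u, k_cut):
    """Rescale Fourier magnitudes of a 1-D periodic field so that the
    energy spectrum E(k) = |u_hat(k)|^2 decays as k**(-5/3) for k > k_cut,
    matched continuously at k_cut.  Phases are preserved."""
    uh = np.fft.rfft(u)
    k = np.arange(uh.size)
    tail = k > k_cut
    # magnitude ~ k^(-5/6) so that energy |u_hat|^2 ~ k^(-5/3)
    target = np.abs(uh[k_cut]) * (k[tail] / k_cut) ** (-5.0 / 6.0)
    mags = np.abs(uh[tail])
    phases = uh[tail] / np.where(mags == 0, 1, mags)  # unit-modulus phases
    uh[tail] = target * phases
    return np.fft.irfft(uh, n=u.size)

rng = np.random.default_rng(0)
u = rng.standard_normal(256)        # white noise: flat spectrum, no decay
v = enforce_kolmogorov_tail(u, k_cut=16)
vh = np.abs(np.fft.rfft(v))
# the tail now satisfies E(k2)/E(k1) = (k2/k1)^(-5/3)
print((vh[64] ** 2) / (vh[32] ** 2), 2.0 ** (-5.0 / 3.0))
```

Starting from white noise (a flat spectrum with far too much high-wavenumber energy, analogous to the high-wavenumber errors the regularization removes), the filtered field carries the prescribed inertial-range decay in its resolved tail.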
Disturbances to Air-Layer Skin-Friction Drag Reduction at High Reynolds Numbers
NASA Astrophysics Data System (ADS)
Dowling, David; Elbing, Brian; Makiharju, Simo; Wiggins, Andrew; Perlin, Marc; Ceccio, Steven
2009-11-01
Skin friction drag on a flat surface may be reduced by more than 80% when a layer of air separates the surface from a flowing liquid compared to when such an air layer is absent. Past large-scale experiments utilizing the US Navy's Large Cavitation Channel and a flat-plate test model 3 m wide and 12.9 m long have demonstrated air layer drag reduction (ALDR) on both smooth and rough surfaces at water flow speeds sufficient to reach downstream-distance-based Reynolds numbers exceeding 100 million. For these experiments, the incoming flow conditions, surface orientation, air injection geometry, and buoyancy forces all favored air layer formation. The results presented here extend this prior work to include the effects that vortex generators and free stream flow unsteadiness have on ALDR to assess its robustness for application to ocean-going ships. Measurements include skin friction, static pressure, airflow rate, video of the flow field downstream of the injector, and profiles of the flowing air-water mixture when the injected air forms bubbles, when it is in transition to an air layer, and when the air layer is fully formed. From these, and the prior measurements, ALDR's viability for full-scale applications is assessed.
Air-Induced Drag Reduction at High Reynolds Numbers: Velocity and Void Fraction Profiles
NASA Astrophysics Data System (ADS)
Elbing, Brian; Mäkiharju, Simo; Wiggins, Andrew; Dowling, David; Perlin, Marc; Ceccio, Steven
2010-11-01
The injection of air into a turbulent boundary layer forming over a flat plate can reduce the skin friction. With sufficient volumetric fluxes an air layer can separate the solid surface from the flowing liquid, which can produce drag reduction in excess of 80%. Several large-scale experiments have been conducted at the US Navy's Large Cavitation Channel on a 12.9 m long flat plate model investigating bubble drag reduction (BDR), air layer drag reduction (ALDR) and the transition between BDR and ALDR. The most recent experiment acquired phase velocities and void fraction profiles at three downstream locations (3.6, 5.9 and 10.6 m downstream from the model leading edge) for a single flow speed (~6.4 m/s). The profiles were acquired with a combination of electrode point probes, time-of-flight sensors, Pitot tubes and an LDV system. Additional diagnostics included skin-friction sensors and flow-field image visualization. During this experiment the inlet flow was perturbed with vortex generators immediately upstream of the injection location to assess the robustness of the air layer. From these, and prior measurements, computational models can be refined to help assess the viability of ALDR for full-scale ship applications.
TESTING HOMOGENEITY WITH GALAXY STAR FORMATION HISTORIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoyle, Ben; Jimenez, Raul; Tojeiro, Rita
2013-01-01
Observationally confirming spatial homogeneity on sufficiently large cosmological scales is of importance to test one of the underpinning assumptions of cosmology, and is also imperative for correctly interpreting dark energy. A challenging aspect of this is that homogeneity must be probed inside our past light cone, while observations take place on the light cone. The star formation history (SFH) in the galaxy fossil record provides a novel way to do this. We calculate the SFH of stacked luminous red galaxy (LRG) spectra obtained from the Sloan Digital Sky Survey. We divide the LRG sample into 12 equal-area contiguous sky patches and 10 redshift slices (0.2 < z < 0.5), which correspond to 120 blocks of volume ≈0.04 Gpc³. Using the SFH in a time period that samples the history of the universe between look-back times 11.5 and 13.4 Gyr as a proxy for homogeneity, we calculate the posterior distribution for the excess large-scale variance due to inhomogeneity, and find that the most likely solution is no extra variance at all. At 95% credibility, there is no evidence of deviations larger than 5.8%.
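The excess-variance logic can be illustrated with a toy point estimate (the function and numbers are hypothetical, and this is a moment-based stand-in for the paper's Bayesian posterior): if each sky block yields a noisy measurement of the same underlying quantity, scatter beyond what the measurement errors explain signals inhomogeneity.

```python
import statistics

# Toy sketch of the excess-variance idea (not the paper's posterior analysis):
# subtract the average measurement variance from the observed block-to-block
# variance; anything left over is attributed to large-scale inhomogeneity.
def excess_variance(values, measurement_sigmas):
    sample_var = statistics.pvariance(values)
    noise_var = sum(s * s for s in measurement_sigmas) / len(measurement_sigmas)
    return max(0.0, sample_var - noise_var)

blocks = [1.00, 1.02, 0.98, 1.01, 0.99]   # SFH proxy per sky block (made up)
sigmas = [0.02] * len(blocks)             # per-block measurement error (made up)
ev = excess_variance(blocks, sigmas)
# Here the scatter is fully explained by noise, so ev = 0.0: no extra
# variance, mirroring the paper's most likely solution.
```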
Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter
2014-01-13
Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. It allows on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this particular property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
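Once a (mean, sd) pair has been interpolated for a candidate's nucleotide composition, the P-value assignment is a one-line normal-tail computation. A minimal sketch, assuming the normality result holds (function names and the example numbers are hypothetical):

```python
import math

# Hedged sketch (not the authors' code): the approach assumes MFEs of
# randomized sequences with a given composition are normally distributed,
# so a pre-calculated, interpolated (mean, sd) pair yields a P-value
# without folding thousands of shuffled sequences per candidate.
def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mfe_p_value(observed_mfe, random_mean, random_sd):
    """One-sided P-value: chance a random sequence folds at least this stably."""
    z = (observed_mfe - random_mean) / random_sd
    return normal_cdf(z)

# A pre-miRNA-like candidate folding far more stably than random expectation:
p = mfe_p_value(observed_mfe=-45.0, random_mean=-25.0, random_sd=6.0)
# z is about -3.3, so p falls well below 0.001 and the candidate is flagged.
```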
NASA Astrophysics Data System (ADS)
Drake, J. E.; Tjoelker, M. G.; Reich, P. B.
2016-12-01
Respiration drives the metabolism and growth of trees and represents a large and uncertain component of land surface feedbacks to climate change. A fixed scaling relationship between body mass and respiration has been described as a fundamental law across plants and animals, but this has been controversial. There is now ample evidence that trees adjust their respiration rates in response to temperature variation in their growth environment through physiological acclimation. Is acclimation sufficiently large to alter the scaling relationship between respiration and mass? Here, we make continuous measurements of in-situ respiration rates complemented with repeated measurements at a defined set temperature of 15°C for leaves and the entire aboveground component of Eucalyptus parramattensis and E. tereticornis trees growing in the field in warming experiments (ambient vs. +3°C) using 12 whole-tree chambers in Australia. We report thousands of repeated measurements as trees grew from 1 to 9 m tall, allowing a concurrent evaluation of physiological acclimation and metabolic scaling. Trees adjusted the respiration rates of leaves and whole crowns in relation to the air temperature of the preceding three days, such that: (1) respiration rate per unit mass was reduced by warming when measured at a common temperature, and (2) in-situ whole-crown respiration rates per unit mass were equivalent across ambient and warmed trees (i.e., homeostatic respiration). Acclimation appeared to modify the scaling between respiration and mass, as the slope and intercept of this relationship were affected by recent air temperature. This suggests that metabolic scaling is not fixed, although the overall allometric scaling slope was consistent with the theoretical value of 0.75 (95% CI of 0.5 to 0.78). We suggest that considering acclimation and tree mass together provides new insight into a dynamic scaling of tree respiration, with implications for land surface feedbacks under climate warming.
Anderson, Eric C
2012-11-08
Advances in genotyping that allow tens of thousands of individuals to be genotyped at a moderate number of single nucleotide polymorphisms (SNPs) permit parentage inference to be pursued on a very large scale. The intergenerational tagging this capacity allows is revolutionizing the management of cultured organisms (cows, salmon, etc.) and is poised to do the same for scientific studies of natural populations. Currently, however, there are no likelihood-based methods of parentage inference which are implemented in a manner that allows them to quickly handle a very large number of potential parents or parent pairs. Here we introduce an efficient likelihood-based method applicable to the specialized case of cultured organisms in which both parents can be reliably sampled. We develop a Markov chain representation for the cumulative number of Mendelian incompatibilities between an offspring and its putative parents and we exploit it to develop a fast algorithm for simulation-based estimates of statistical confidence in SNP-based assignments of offspring to pairs of parents. The method is implemented in the freely available software SNPPIT. We describe the method in detail, then assess its performance in a large simulation study using known allele frequencies at 96 SNPs from ten hatchery salmon populations. The simulations verify that the method is fast and accurate and that 96 well-chosen SNPs can provide sufficient power to identify the correct pair of parents from amongst millions of candidate pairs.
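The core bookkeeping behind the Markov chain representation is a per-locus test for Mendelian incompatibility. A minimal sketch of that test (not SNPPIT itself; genotype coding and example data are assumptions):

```python
# Minimal sketch (not SNPPIT): at biallelic SNPs, code genotypes as 0/1/2
# copies of the reference allele. A trio is Mendelian-incompatible at a
# locus when no choice of one transmitted allele per parent can produce
# the offspring genotype; the cumulative count over loci is the statistic
# whose distribution SNPPIT models as a Markov chain.
def mendelian_incompatibilities(offspring, mother, father):
    def gametes(genotype):  # alleles a parent with this genotype can transmit
        return {0: {0}, 1: {0, 1}, 2: {1}}[genotype]
    count = 0
    for o, m, f in zip(offspring, mother, father):
        possible = {a + b for a in gametes(m) for b in gametes(f)}
        if o not in possible:
            count += 1
    return count

# The last locus is impossible (offspring genotype 2, father genotype 0),
# so this putative trio is flagged with one incompatibility:
n = mendelian_incompatibilities([1, 0, 2, 2], [1, 0, 2, 2], [2, 1, 1, 0])
```

Genotyping error makes a small nonzero count plausible for true trios, which is why the method estimates statistical confidence rather than applying a hard zero-mismatch rule.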
Lobe-cleft instability in the buoyant gravity current generated by estuarine outflow
NASA Astrophysics Data System (ADS)
Horner-Devine, Alexander R.; Chickadel, C. Chris
2017-05-01
Gravity currents represent a broad class of geophysical flows including turbidity currents, powder avalanches, pyroclastic flows, sea breeze fronts, haboobs, and river plumes. A defining feature in many gravity currents is the formation of three-dimensional lobes and clefts along the front and researchers have sought to understand these ubiquitous geophysical structures for decades. The prevailing explanation is based largely on early laboratory and numerical model experiments at much smaller scales, which concluded that lobes and clefts are generated due to hydrostatic instability exclusively in currents propagating over a no-slip boundary. Recent studies suggest that frontal dynamics change as the flow scale increases, but no measurements have been made that sufficiently resolve the flow structure in full-scale geophysical flows. Here we use thermal infrared and acoustic imaging of a river plume to reveal the three-dimensional structure of lobes and clefts formed in a geophysical gravity current front. The observed lobes and clefts are generated at the front in the absence of a no-slip boundary, contradicting the prevailing explanation. The observed flow structure is consistent with an alternative formation mechanism, which predicts that the lobe scale is inherited from subsurface vortex structures.
Size effects under homogeneous deformation of single crystals: A discrete dislocation analysis
NASA Astrophysics Data System (ADS)
Guruprasad, P. J.; Benzerga, A. A.
Mechanism-based discrete dislocation plasticity is used to investigate the effect of size on micron scale crystal plasticity under conditions of macroscopically homogeneous deformation. Long-range interactions among dislocations are naturally incorporated through elasticity. Constitutive rules are used that account for key short-range dislocation interactions. These include junction formation and dynamic source and obstacle creation. Two-dimensional calculations are carried out which can handle high dislocation densities and large strains up to 0.1. The focus is on the effect of dimensional constraints on plastic flow and hardening processes. Specimen dimensions ranging from hundreds of nanometers to tens of microns are considered. Our findings show a strong size-dependence of flow strength and work-hardening rate at the micron scale. Taylor-like hardening is shown to be insufficient as a rationale for the flow stress scaling with specimen dimensions. The predicted size effect is associated with the emergence, at sufficient resolution, of a signed dislocation density. Heuristic correlations between macroscopic flow stress and macroscopic measures of dislocation density are sought. Most accurate among those is a correlation based on two state variables: the total dislocation density and an effective, scale-dependent measure of signed density.
Scholten, Saskia; Margraf, Jürgen
2018-01-01
The Sexual Excitation/Sexual Inhibition Inventory for Women and Men (SESII-W/M) and the Sexual Excitation Scales/Sexual Inhibition Scales short form (SIS/SES-SF) are two self-report questionnaires for assessing sexual excitation (SE) and sexual inhibition (SI). According to the dual control model of sexual response, SE and SI differ between individuals and influence the occurrence of sexual arousal in given situations. Extreme levels of SE and SI are postulated to be associated with sexual difficulties or risky sexual behaviors. The present study was designed to assess the psychometric properties of the German versions of both questionnaires utilizing a large population-based sample of 2,708 participants (M_age = 51.19, SD = 14.03). Overall, psychometric evaluation of the two instruments yielded good convergent and discriminant validity and mediocre to good internal consistency. The original 30-item version of the SESII-W/M did not show a sufficient model fit. For a 24-item version of the SESII-W/M, partial strong measurement invariance across gender and strong measurement invariance across relationship status, age, and educational level were established. The original structure (14 items, 3 factors) of the SIS/SES-SF was not replicated. However, a 4-factor model including 13 items showed a good model fit and strong measurement invariance across the before-mentioned participant groups. For both questionnaires, partial strong measurement invariance with the original American versions of the scales was found. As some factors showed unsatisfactory internal consistency and the factor structure of the original scales could not be replicated, scores on several SE and SI factors should be interpreted with caution. However, most analyses indicated sufficient psychometric quality of the German SESII-W/M and SIS/SES-SF and their use can be recommended in German-speaking samples.
More research with diverse samples (i.e., different sexual orientations, individuals with sexual difficulties) is needed to ensure the replicability of the factor solutions presented in this study. PMID:29529045
Kong, Kyoungchul; Lee, Hye -Sung; Park, Myeonghun
2014-04-01
We suggest top quark decays as a venue to search for light dark force carriers. The top quark is the heaviest particle in the standard model, and its decays are relatively poorly measured, allowing sufficient room for exotic decay modes from new physics. A very light (GeV-scale) dark gauge boson (Z') is a recently highlighted hypothetical particle that can address some astrophysical anomalies as well as the 3.6 σ deviation in the muon g-2 measurement. We present and study a possible scenario in which the top quark decays as t → b W + Z's. This is the same as the dominant top quark decay (t → b W) accompanied by one or multiple dark force carriers. The Z' can be easily boosted, and it can decay into highly collimated leptons (a lepton-jet) with a large branching ratio. In addition, we discuss the implications for the Large Hadron Collider experiments, including the analysis based on lepton-jets.
Advances in the Biology and Chemistry of Sialic Acids
Chen, Xi; Varki, Ajit
2010-01-01
Sialic acids are a subset of nonulosonic acids, which are nine-carbon alpha-keto aldonic acids. Natural existing sialic acid-containing structures are presented in different sialic acid forms, various sialyl linkages, and on diverse underlying glycans. They play important roles in biological, pathological, and immunological processes. Sialobiology has been a challenging and yet attractive research area. Recent advances in chemical and chemoenzymatic synthesis as well as large-scale E. coli cell-based production have provided a large library of sialoside standards and derivatives in amounts sufficient for structure-activity relationship studies. Sialoglycan microarrays provide an efficient platform for quick identification of preferred ligands for sialic acid-binding proteins. Future research on sialic acid will continue to be at the interface of chemistry and biology. Research efforts will not only lead to a better understanding of the biological and pathological importance of sialic acids and their diversity, but could also lead to the development of therapeutics. PMID:20020717
Single-copy entanglement in critical quantum spin chains
NASA Astrophysics Data System (ADS)
Eisert, J.; Cramer, M.
2005-10-01
We consider the single-copy entanglement as a quantity to assess quantum correlations in the ground state in quantum many-body systems. We show for a large class of models that already on the level of single specimens of spin chains, criticality is accompanied with the possibility of distilling a maximally entangled state of arbitrary dimension from a sufficiently large block deterministically, with local operations and classical communication. These analytical results—which refine previous results on the divergence of block entropy as the rate at which maximally entangled pairs can be distilled from many identically prepared chains—are made quantitative for general isotropic translationally invariant spin chains that can be mapped onto a quasifree fermionic system, and for the anisotropic XY model. For the XX model, we provide the asymptotic scaling of ~(1/6)log2(L), and contrast it with the block entropy.
Castillo-Cagigal, Manuel; Matallanas, Eduardo; Gutiérrez, Álvaro; Monasterio-Huelin, Félix; Caamaño-Martín, Estefaná; Masa-Bote, Daniel; Jiménez-Leube, Javier
2011-01-01
In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the “Smart Grid” which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called “MagicBox” equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. Therefore, there is a large number of energy variables to be monitored that allow us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, performed on a real house, demonstrate the feasibility of the proposed collaborative system to reduce the consumption of electrical power and to increase energy efficiency. PMID:22247680
Adiabatic quantum-flux-parametron cell library adopting minimalist design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeuchi, Naoki, E-mail: takeuchi-naoki-kx@ynu.jp; Yamanashi, Yuki; Yoshikawa, Nobuyuki
We herein build an adiabatic quantum-flux-parametron (AQFP) cell library adopting minimalist design and a symmetric layout. In the proposed minimalist design, every logic cell is designed by arraying four types of building block cells: buffer, NOT, constant, and branch cells. Therefore, minimalist design enables us to effectively build and customize an AQFP cell library. The symmetric layout reduces unwanted parasitic magnetic coupling and ensures a large mutual inductance in an output transformer, which enables very long wiring between logic cells. We design and fabricate several logic circuits using the minimal AQFP cell library so as to test logic cells in the library. Moreover, we experimentally investigate the maximum wiring length between logic cells. Finally, we present an experimental demonstration of an 8-bit carry look-ahead adder designed using the minimal AQFP cell library and demonstrate that the proposed cell library is sufficiently robust to realize large-scale digital circuits.
Likelihood inference of non-constant diversification rates with incomplete taxon sampling.
Höhna, Sebastian
2014-01-01
Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from those of alternative models (e.g. the birth-death model is recovered over a pure-birth model if the extinction rate is sufficiently large). Finally, I applied six different diversification rate models, ranging from a constant-rate pure-birth process to a birth-death process with decreasing speciation rate (excluding any rate-shift models), to three large-scale empirical phylogenies (ants, mammals and snakes, with 149, 164 and 41 sampled species, respectively). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However, only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data.
The model assumptions, such as homogeneous rates across species or no rate shifts, appear to be violated.
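The AIC-based model comparison described above can be sketched in a few lines (a hedged illustration, not the study's implementation; the log-likelihood values are made up):

```python
# Hedged sketch of AIC model selection (the numbers are invented):
# AIC = 2k - 2 ln L, so each extra free parameter must buy at least one
# unit of log-likelihood to be worthwhile. The model with the smallest
# AIC is favored.
def aic(log_likelihood, n_params):
    return 2.0 * n_params - 2.0 * log_likelihood

# A pure-birth model has one free parameter (speciation rate); a
# birth-death model has two (speciation and extinction rates).
models = {"pure-birth": aic(-120.4, 1), "birth-death": aic(-115.1, 2)}
best = min(models, key=models.get)
# Here birth-death wins: its likelihood gain outweighs the parameter
# penalty, as happens when the extinction rate is sufficiently large.
```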
Fast online generalized multiscale finite element method using constraint energy minimization
NASA Astrophysics Data System (ADS)
Chung, Eric T.; Efendiev, Yalchin; Leung, Wing Tat
2018-02-01
Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters, such as source terms, boundary conditions, and so on. These basis functions are then used in the online stage with a specific input parameter to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced, where multiscale basis functions are adaptively constructed in some regions to reduce the error significantly. In multiscale methods, it is desired to have only 1-2 iterations to reduce the error to a desired threshold. Using the Generalized Multiscale Finite Element Framework [10], it was shown that, by choosing a sufficient number of offline basis functions, the error reduction can be made independent of physical parameters, such as scales and contrast. In this paper, our goal is to improve on this. Using our recently proposed approach [4] and special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting oversampling regions. Our numerical results show that one can achieve a three-order-of-magnitude error reduction, which is better than our previous methods. We also develop an adaptive algorithm and enrich in selected regions with large residuals. In our adaptive method, we show that the convergence rate can be determined by a user-defined parameter and we confirm this by numerical simulations. The analysis of the method is presented.
Evidence for Tropopause Layer Moistening by Convection During CRYSTAL-FACE
NASA Technical Reports Server (NTRS)
Ackerman, A.; Fridlind, A.; Jensen, E.; Miloshevich, L.; Heymsfield, G.; McGill, M.
2003-01-01
Measurements and analysis of the impact of deep convection on tropopause layer moisture are easily confounded by difficulties making precise observations with sufficient spatial coverage before and after convective events and difficulties distinguishing between changes due to local convection versus large-scale advection. The interactions between cloud microphysics and dynamics in the convective transport of moisture into the tropopause layer also result in a sufficiently complex and poorly characterized system to allow for considerable freedom in theoretical models of stratosphere-troposphere exchange. In this work we perform detailed large-eddy simulations with an explicit cloud microphysics model to study the impact of deep convection on tropopause layer moisture profiles observed over southern Florida during CRYSTAL-FACE. For four days during the campaign (July 11, 16, 28, and 29) we initialize a 100-km square domain with temperature and moisture profiles measured prior to convection at the PARSL ground site, and initiate convection with a warm bubble that produces an anvil at peak elevations in agreement with lidar and radar observations on that day. Comparing the moisture field after the anvils decay with the initial state, we find that convection predominantly moistens the tropopause layer (as defined by minimum temperature and minimum potential temperature lapse rate), although some drying is also predicted in localized layers. We will also present results of sensitivity tests designed to separate the roles of cloud microphysics and dynamics.
NASA Astrophysics Data System (ADS)
Gat, Amir; Friedman, Yonathan
2017-11-01
The characteristic time of low-Reynolds number fluid-structure interaction scales linearly with the ratio of fluid viscosity to solid Young's modulus. For sufficiently large values of Young's modulus, both time- and length-scales of the viscous-elastic dynamics may be similar to acoustic time- and length-scales. However, the requirement of dominant viscous effects limits the validity of such regimes to micro-configurations. We here study the dynamics of an acoustic plane wave impinging on the surface of a layered sphere, immersed within an inviscid fluid, and composed of an inner elastic sphere, a creeping fluid layer and an external elastic shell. We focus on configurations with similar viscous-elastic and acoustic time- and length-scales, where the viscous-elastic speed of interaction between the creeping layer and the elastic regions is similar to the speed of sound. By expanding the linearized spherical Reynolds equation into the relevant spectral series solution for the hyperbolic elastic regions, a global stiffness matrix of the layered elastic sphere was obtained. This work relates viscous-elastic dynamics to acoustic scattering and may pave the way to the design of novel meta-materials with unique acoustic properties. ISF 818/13.
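The regime of interest, where viscous-elastic and acoustic timescales coincide, can be checked with order-of-magnitude arithmetic (all numbers below are assumed for illustration, not taken from the paper):

```python
# Order-of-magnitude illustration (assumed values): the viscous-elastic
# interaction time scales as t_ve ~ mu / E, while the acoustic time over a
# length L is t_ac ~ L / c. The studied regime requires these to be
# comparable, which pushes the problem toward micro-configurations.
def viscous_elastic_time(viscosity_pa_s, youngs_modulus_pa):
    return viscosity_pa_s / youngs_modulus_pa

def acoustic_time(length_m, sound_speed_m_s):
    return length_m / sound_speed_m_s

t_ve = viscous_elastic_time(1.0e3, 1.0e9)  # very viscous fluid, stiff solid
t_ac = acoustic_time(1.5e-3, 1.5e3)        # millimetre scale, water-like sound speed
# Both come out near one microsecond, so the viscous-elastic response of the
# creeping layer and the acoustic field evolve on comparable timescales.
```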
Runaway electrons as a source of impurity and reduced fusion yield in the dense plasma focus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lerner, Eric J.; Yousefi, Hamid R.
2014-10-15
Impurities produced by the vaporization of metals in the electrodes may be a major cause of reduced fusion yields in high-current dense plasma focus devices. We propose here that a major, but hitherto-overlooked, cause of such impurities is vaporization by runaway electrons during the breakdown process at the beginning of the current pulse. This process is sufficient to account for the large amount of erosion observed in many dense plasma focus devices on the anode very near to the insulator. The erosion is expected to become worse with lower pressures, typical of machines with large electrode radii, and would explain the plateauing of fusion yield observed in such machines at higher peak currents. Such runaway electron vaporization can be eliminated by the proper choice of electrode material, by reducing electrode radii and thus increasing fill gas pressure, or by using pre-ionization to eliminate the large fields that create runaway electrons. If these steps are combined with monolithic electrodes to eliminate arcing erosion, large reductions in impurities and large increases in fusion yield may be obtained, as the I⁴ scaling is extended to higher currents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazil, Jan; Feingold, Graham; Yamaguchi, Takanobu
Observed and projected trends in large-scale wind speed over the oceans prompt the question: how do marine stratocumulus clouds and their radiative properties respond to changes in large-scale wind speed? Wind speed drives the surface fluxes of sensible heat, moisture, and momentum and thereby acts on cloud liquid water path (LWP) and cloud radiative properties. We present an investigation of the dynamical response of non-precipitating, overcast marine stratocumulus clouds to different wind speeds over the course of a diurnal cycle, all else equal. In cloud-system resolving simulations, we find that higher wind speed leads to faster boundary layer growth and stronger entrainment. The dynamical driver is enhanced buoyant production of turbulence kinetic energy (TKE) from latent heat release in cloud updrafts. LWP is enhanced during the night and in the morning at higher wind speed, and more strongly suppressed later in the day. Wind speed hence accentuates the diurnal LWP cycle by expanding the morning–afternoon contrast. The higher LWP at higher wind speed does not, however, enhance cloud top cooling because in clouds with LWP ≳ 50 g m⁻², longwave emissions are insensitive to LWP. This leads to the general conclusion that in sufficiently thick stratocumulus clouds, additional boundary layer growth and entrainment due to a boundary layer moistening arises by stronger production of TKE from latent heat release in cloud updrafts, rather than from enhanced longwave cooling. Here, we find that large-scale wind modulates boundary layer decoupling. At nighttime and at low wind speed during daytime, it enhances decoupling in part by faster boundary layer growth and stronger entrainment and in part because shear from large-scale wind in the sub-cloud layer hinders vertical moisture transport between the surface and cloud base.
With increasing wind speed, however, in decoupled daytime conditions, shear-driven circulation due to large-scale wind takes over from buoyancy-driven circulation in transporting moisture from the surface to cloud base and thereby reduces decoupling and helps maintain LWP. Furthermore, the total (shortwave + longwave) cloud radiative effect (CRE) responds to changes in LWP and cloud fraction, and higher wind speed translates to a stronger diurnally averaged total CRE. However, the sensitivity of the diurnally averaged total CRE to wind speed decreases with increasing wind speed.
Kazil, Jan; Feingold, Graham; Yamaguchi, Takanobu
2016-05-12
Observed and projected trends in large-scale wind speed over the oceans prompt the question: how do marine stratocumulus clouds and their radiative properties respond to changes in large-scale wind speed? Wind speed drives the surface fluxes of sensible heat, moisture, and momentum and thereby acts on cloud liquid water path (LWP) and cloud radiative properties. We present an investigation of the dynamical response of non-precipitating, overcast marine stratocumulus clouds to different wind speeds over the course of a diurnal cycle, all else equal. In cloud-system resolving simulations, we find that higher wind speed leads to faster boundary layer growth and strongermore » entrainment. The dynamical driver is enhanced buoyant production of turbulence kinetic energy (TKE) from latent heat release in cloud updrafts. LWP is enhanced during the night and in the morning at higher wind speed, and more strongly suppressed later in the day. Wind speed hence accentuates the diurnal LWP cycle by expanding the morning–afternoon contrast. The higher LWP at higher wind speed does not, however, enhance cloud top cooling because in clouds with LWP ≳50 gm –2, longwave emissions are insensitive to LWP. This leads to the general conclusion that in sufficiently thick stratocumulus clouds, additional boundary layer growth and entrainment due to a boundary layer moistening arises by stronger production of TKE from latent heat release in cloud updrafts, rather than from enhanced longwave cooling. Here, we find that large-scale wind modulates boundary layer decoupling. At nighttime and at low wind speed during daytime, it enhances decoupling in part by faster boundary layer growth and stronger entrainment and in part because shear from large-scale wind in the sub-cloud layer hinders vertical moisture transport between the surface and cloud base. 
With increasing wind speed, however, in decoupled daytime conditions, shear-driven circulation due to large-scale wind takes over from buoyancy-driven circulation in transporting moisture from the surface to cloud base and thereby reduces decoupling and helps maintain LWP. Furthermore, the total (shortwave + longwave) cloud radiative effect (CRE) responds to changes in LWP and cloud fraction, and higher wind speed translates to a stronger diurnally averaged total CRE. However, the sensitivity of the diurnally averaged total CRE to wind speed decreases with increasing wind speed.
Spectroscopic Measurement Techniques for Aerospace Flows
NASA Technical Reports Server (NTRS)
Danehy, Paul M.; Bathel, Brett F.; Johansen, Craig T.; Cutler, Andrew D.; Hurley, Samantha
2014-01-01
The conditions that characterize aerospace flows are so varied that no single diagnostic technique is sufficient to measure them all. Fluid dynamicists use knowledge of similarity to help categorize and focus on different flow conditions. For example, the Reynolds number represents the ratio of inertial to viscous forces in a flow. When the velocity scales, length scales, and gas density are large and the magnitude of the molecular viscosity is low, the Reynolds number becomes large. This corresponds to large-scale vehicles (e.g. Airbus A380), fast-moving objects (e.g. artillery projectiles), vehicles in dense fluids (e.g. submarine in water), or flows with low dynamic viscosity (e.g. skydiver in air). In each of these cases, the inertial forces dominate viscous forces, and unsteady turbulent fluctuations in the flow variables are observed. In contrast, flows with small length scales (e.g. dispersion of micro-particles in a solid rocket nozzle), slow-moving objects (e.g. micro aerial vehicles), flows with low-density gases (e.g. atmospheric re-entry), or fluids with a large magnitude of viscosity (e.g. engine coolant flow) all have low Reynolds numbers. In these cases, viscous forces become very important and often the flows can be steady and laminar. The Mach number, which is the ratio of the velocity to the speed of sound in the medium, also helps to differentiate types of flows. At very low Mach numbers, acoustic waves travel much faster than the object, and the flow can be assumed to be incompressible (e.g. Cessna 172 aircraft). As the object speed approaches the speed of sound, the gas density can become variable (e.g. flow over the wing of a Learjet 85). When the object speed is higher than the speed of sound (Ma > 1), the presence of shock waves and other gas dynamic features can become important to the vehicle performance (e.g. SR-71 Blackbird).
In the hypersonic flow regime (Ma > 5), large changes in temperature begin to affect flow properties, causing real-gas effects to occur (e.g. X-43 scramjet). At even higher Mach numbers, chemistry and nonequilibrium effects come into play (e.g. Stardust re-entry capsule), further complicating the measurement. These limits can be predicted by calculating the ratio of chemical and thermal relaxation times to the flow time scales. Other non-dimensional numbers can be used to further differentiate types of aerospace flows.
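The similarity-parameter reasoning above can be sketched numerically. The values below (a small UAV with a 1 m chord in sea-level air) are illustrative assumptions, not figures from the abstract.

```python
# Classify a flow by Reynolds and Mach number, per the discussion above.

def reynolds(rho, v, L, mu):
    """Re = inertial / viscous forces = rho * v * L / mu."""
    return rho * v * L / mu

def mach(v, a=340.0):
    """Ma = flow speed / speed of sound (a ~ 340 m/s in sea-level air)."""
    return v / a

def regime(ma):
    """Coarse flow-regime labels following the thresholds in the text."""
    if ma < 0.3:
        return "incompressible"
    elif ma < 1.0:
        return "subsonic-compressible"
    elif ma < 5.0:
        return "supersonic"
    return "hypersonic"

# Assumed example: 1 m chord, 20 m/s, sea-level air (rho=1.225 kg/m^3, mu=1.8e-5 Pa*s)
re = reynolds(1.225, 20.0, 1.0, 1.8e-5)
ma = mach(20.0)
print(f"Re = {re:.3g}, Ma = {ma:.3f}, regime: {regime(ma)}")
```

With these numbers Re is on the order of 10^6 (inertia-dominated, turbulent) while Ma is far below 0.3, so the flow may be treated as incompressible.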
Local dispersal promotes biodiversity in a real-life game of rock-paper-scissors
NASA Astrophysics Data System (ADS)
Kerr, Benjamin; Riley, Margaret A.; Feldman, Marcus W.; Bohannan, Brendan J. M.
2002-07-01
One of the central aims of ecology is to identify mechanisms that maintain biodiversity. Numerous theoretical models have shown that competing species can coexist if ecological processes such as dispersal, movement, and interaction occur over small spatial scales. In particular, this may be the case for non-transitive communities, that is, those without strict competitive hierarchies. The classic non-transitive system involves a community of three competing species satisfying a relationship similar to the children's game rock-paper-scissors, where rock crushes scissors, scissors cuts paper, and paper covers rock. Such relationships have been demonstrated in several natural systems. Some models predict that local interaction and dispersal are sufficient to ensure coexistence of all three species in such a community, whereas diversity is lost when ecological processes occur over larger scales. Here, we test these predictions empirically using a non-transitive model community containing three populations of Escherichia coli. We find that diversity is rapidly lost in our experimental community when dispersal and interaction occur over relatively large spatial scales, whereas all populations coexist when ecological processes are localized.
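A minimal stochastic lattice sketch of the rock-paper-scissors dynamics described above (not the authors' E. coli experiment): species 0 beats 1, 1 beats 2, and 2 beats 0, and each interaction is either with a lattice neighbour (local) or with a random cell (well-mixed). Grid size, step count, and seed are arbitrary assumptions.

```python
import random

# Non-transitive competition on a grid: the loser of each pairwise contest
# is replaced by the winner. Local interactions tend to preserve all three
# species; well-mixed (global) interactions tend to lose diversity.

def step(grid, n, local=True):
    i, j = random.randrange(n), random.randrange(n)
    if local:
        di, dj = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        k, l = (i + di) % n, (j + dj) % n   # periodic boundary
    else:
        k, l = random.randrange(n), random.randrange(n)
    a, b = grid[i][j], grid[k][l]
    if (a - b) % 3 == 2:       # a beats b (0>1, 1>2, 2>0)
        grid[k][l] = a
    elif (b - a) % 3 == 2:     # b beats a
        grid[i][j] = b

def diversity(grid):
    return len({s for row in grid for s in row})

random.seed(1)
n = 30
grid = [[random.randrange(3) for _ in range(n)] for _ in range(n)]
for _ in range(200_000):
    step(grid, n, local=True)
print("species remaining (local interactions):", diversity(grid))
```

Rerunning with `local=False` mimics the large-scale mixing treatment, under which one species typically sweeps the community.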
NASA Technical Reports Server (NTRS)
Jones, William H.
1985-01-01
The Combined Aerodynamic and Structural Dynamic Problem Emulating Routines (CASPER) is a collection of data-base modification computer routines that can be used to simulate Navier-Stokes flow through realistic, time-varying internal flow fields. The Navier-Stokes equation used involves calculations in all three dimensions and retains all viscous terms. The only term neglected in the current implementation is gravitation. The solution approach is of an iterative, time-marching nature. Calculations are based on Lagrangian aerodynamic elements (aeroelements). It is assumed that the relationships between a particular aeroelement and its five nearest neighbor aeroelements are sufficient to make a valid simulation of Navier-Stokes flow on a small scale and that the collection of all small-scale simulations makes a valid simulation of a large-scale flow. In keeping with these assumptions, it must be noted that CASPER produces an imitation or simulation of Navier-Stokes flow rather than a strict numerical solution of the Navier-Stokes equation. CASPER is written to operate under the Parallel, Asynchronous Executive (PAX), which is described in a separate report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Iver; Siemon, John
The initial three atomization attempts resulted in “freeze-outs” within the pour tubes in the pilot-scale system and yielded no powder. Re-evaluation of the alloy liquidus temperatures and melting characteristics, in collaboration with Alcoa, showed further superheat to be necessary to allow the liquid metal to flow through the pour tube to the atomization nozzle. A subsequent smaller run on the experimental atomization system verified these parameters and was successful, as were all successive runs on the larger pilot-scale system. One alloy composition froze out partway through the atomization on both pilot-scale runs. SEM images showed needle formation and phase segregations within the microstructure. Analysis of the pour tube freeze-out microstructures showed that large needles formed within the pour tube during the atomization experiment, which eventually blocked the melt stream. Alcoa verified the needle formation in this alloy using theoretical modeling of phase solidification. Sufficient powder of this composition was still generated to allow powder characterization and additive manufacturing trials at Alcoa.
NASA Astrophysics Data System (ADS)
Siebenmorgen, R.; Voshchinnikov, N. V.; Bagnulo, S.; Cox, N. L. J.; Cami, J.; Peest, C.
2018-03-01
It is well known that the dust properties of the diffuse interstellar medium exhibit variations towards different sight-lines on a large scale. We have investigated the variability of the dust characteristics on a small scale, and from cloud-to-cloud. We use low-resolution spectro-polarimetric data obtained in the context of the Large Interstellar Polarisation Survey (LIPS) towards 59 sight-lines in the Southern Hemisphere, and we fit these data using a dust model composed of silicate and carbon particles with sizes from the molecular to the sub-micrometre domain. Large (≥6 nm) silicates of prolate shape account for the observed polarisation. For 32 sight-lines we complement our data set with UVES archive high-resolution spectra, which enable us to establish whether a single cloud or multiple clouds are present towards individual sight-lines. We find that the majority of these 35 sight-lines intersect two or more clouds, while eight of them are dominated by a single absorbing cloud. We confirm several correlations between extinction and parameters of the Serkowski law with dust parameters, but we also find previously undetected correlations between these parameters that are valid only in single-cloud sight-lines. We find that interstellar polarisation from multiple-cloud sight-lines is smaller than from single-cloud sight-lines, showing that the presence of a second or more clouds depolarises the incoming radiation. We find large variations of the dust characteristics from cloud-to-cloud. However, when we average a sufficiently large number of clouds in single-cloud or multiple-cloud sight-lines, we always retrieve similar mean dust parameters. The typical dust abundances of the single-cloud cases are [C]/[H] = 92 ppm and [Si]/[H] = 20 ppm.
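The Serkowski-law parameters mentioned above are obtained by least-squares fitting of P(λ) = Pmax exp(−K ln²(λmax/λ)) to polarisation spectra. A hedged sketch of that fit; the wavelength grid and parameter values below are synthetic assumptions, not LIPS data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Serkowski law: P(lambda) = Pmax * exp(-K * ln^2(lambda_max / lambda))
def serkowski(lam, p_max, lam_max, K):
    return p_max * np.exp(-K * np.log(lam_max / lam) ** 2)

lam = np.linspace(0.35, 0.9, 20)            # wavelength grid in micron (assumed)
p_obs = serkowski(lam, 2.1, 0.55, 1.15)     # noiseless synthetic "data"

# Recover (Pmax, lambda_max, K) from the synthetic spectrum
popt, _ = curve_fit(serkowski, lam, p_obs, p0=(2.0, 0.5, 1.0))
print("Pmax=%.2f  lambda_max=%.3f  K=%.2f" % tuple(popt))
# → Pmax=2.10  lambda_max=0.550  K=1.15
```

On real spectro-polarimetry one would add measurement uncertainties via the `sigma` argument of `curve_fit`.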
Knispel, Alexis L; McLachlan, Stéphane M
2010-01-01
Genetically modified herbicide-tolerant (GMHT) oilseed rape (OSR; Brassica napus L.) was approved for commercial cultivation in Canada in 1995 and currently represents over 95% of the OSR grown in western Canada. After a decade of widespread cultivation, GMHT volunteers represent an increasing management problem in cultivated fields and are ubiquitous in adjacent ruderal habitats, where they contribute to the spread of transgenes. However, few studies have considered escaped GMHT OSR populations in North America, and even fewer have been conducted at large spatial scales (i.e. landscape scales). In particular, the contribution of landscape structure and large-scale anthropogenic dispersal processes to the persistence and spread of escaped GMHT OSR remains poorly understood. We conducted a multi-year survey of the landscape-scale distribution of escaped OSR plants adjacent to roads and cultivated fields. Our objective was to examine the long-term dynamics of escaped OSR at large spatial scales and to assess the relative importance of landscape and localised factors to the persistence and spread of these plants outside of cultivation. From 2005 to 2007, we surveyed escaped OSR plants along roadsides and field edges at 12 locations in three agricultural landscapes in southern Manitoba where GMHT OSR is widely grown. Data were analysed to examine temporal changes at large spatial scales and to determine factors affecting the distribution of escaped OSR plants in roadside and field edge habitats within agricultural landscapes. Additionally, we assessed the potential for seed dispersal between escaped populations by comparing the relative spatial distribution of roadside and field edge OSR. Densities of escaped OSR fluctuated over space and time in both roadside and field edge habitats, though the proportion of GMHT plants was high (93-100%). 
Escaped OSR was positively affected by agricultural landscape (indicative of cropping intensity) and by the presence of an adjacent field planted to OSR. Within roadside habitats, escaped OSR was also strongly associated with large-scale variables, including road surface (indicative of traffic intensity) and distance to the nearest grain elevator. Conversely, within field edges, OSR density was affected by localised crop management practices such as mowing, soil disturbance and herbicide application. Despite the proximity of roadsides and field edges, there was little evidence of spatial aggregation among escaped OSR populations in these two habitats, especially at very fine spatial scales (i.e. <100 m), suggesting that natural propagule exchange is infrequent. Escaped OSR populations were persistent at large spatial and temporal scales, and low density in a given landscape or year was not indicative of overall extinction. As a result of ongoing cultivation and transport of OSR crops, escaped GMHT traits will likely remain predominant in agricultural landscapes. While escaped OSR in field edge habitats generally results from local seeding and management activities occurring at the field-scale, distribution patterns within roadside habitats are determined in large part by seed transport occurring at the landscape scale and at even larger regional scales. Our findings suggest that these large-scale anthropogenic dispersal processes are sufficient to enable persistence despite limited natural seed dispersal. This widespread dispersal is likely to undermine field-scale management practices aimed at eliminating escaped and in-field GMHT OSR populations. Agricultural transport and landscape-scale cropping patterns are important determinants of the distribution of escaped GM crops. At the regional level, these factors ensure ongoing establishment and spread of escaped GMHT OSR despite limited local seed dispersal. 
Escaped populations thus play an important role in the spread of transgenes and have substantial implications for the coexistence of GM and non-GM production systems. Given the large-scale factors driving the spread of escaped transgenes, localised co-existence measures may be impracticable where they are not commensurate with regional dispersal mechanisms. To be effective, strategies aimed at reducing contamination from GM crops should be multi-scale in approach and be developed and implemented at both farm and landscape levels of organisation. Multiple stakeholders should thus be consulted, including both GM and non-GM farmers, as well as seed developers, processors, transporters and suppliers. Decisions to adopt GM crops require thoughtful and inclusive consideration of the risks and responsibilities inherent in this new technology.
A large scale laboratory cage trial of Aedes densonucleosis virus (AeDNV).
Wise de Valdez, Megan R; Suchman, Erica L; Carlson, Jonathan O; Black, William C
2010-05-01
Aedes aegypti (L.) (Diptera: Culicidae), the primary vector of dengue viruses (DENV1-4), oviposits in and around human dwellings, including sites difficult to locate, making control of this mosquito challenging. We explored the efficacy and sustainability of Aedes densonucleosis virus (AeDNV) as a biocontrol agent for Ae. aegypti in and among oviposition sites in large laboratory cages (> 92 m3) as a prelude to field trials. Select cages were seeded with AeDNV in a single oviposition site (OPS) with unseeded OPSs established at varied distances. Quantitative real-time polymerase chain reaction was used to track dispersal and accumulation of AeDNV among OPSs. All eggs were collected weekly from each cage and counted. We asked: (1) Is AeDNV dispersed over varying distances and can it accumulate and persist in novel OPSs? (2) Are egg densities reduced in AeDNV-treated populations? AeDNV was dispersed to and sustained in novel OPSs. Virus accumulation in OPSs was positively correlated with egg densities, and proximity to the initial infection source affected the timing of dispersal and maintenance of viral titers. AeDNV did not significantly reduce Ae. aegypti egg densities. The current study documents that adult female Ae. aegypti oviposition behavior leads to successful viral dispersal from treated to novel containers in large-scale cages; however, the AeDNV titers reached were not sufficient to reduce egg densities.
Large-scale production and isolation of Candida biofilm extracellular matrix.
Zarnowski, Robert; Sanchez, Hiram; Andes, David R
2016-12-01
The extracellular matrix of biofilm is unique to the biofilm lifestyle, and it has key roles in community survival. A complete understanding of the biochemical nature of the matrix is integral to the understanding of the roles of matrix components. This knowledge is a first step toward the development of novel therapeutics and diagnostics to address persistent biofilm infections. Many of the assay methods needed for refined matrix composition analysis require milligram amounts of material that is separated from the cellular components of these complex communities. The protocol described here explains the large-scale production and isolation of the Candida biofilm extracellular matrix. To our knowledge, the proposed procedure is the only currently available approach in the field that yields milligram amounts of biofilm matrix. This procedure first requires biofilms to be seeded in large-surface-area roller bottles, followed by cell adhesion and biofilm maturation during continuous movement of the medium across the surface of the rotating bottle. The formed matrix is then separated from the entire biomass using sonication, which efficiently removes the matrix without perturbing the fungal cell wall. Subsequent filtration, dialysis and lyophilization steps result in a purified matrix product sufficient for biochemical, structural and functional assays. The overall protocol takes ∼11 d to complete. This protocol has been used for Candida species, but, using the troubleshooting guide provided, it could be adapted for other fungi or bacteria.
Accurate prediction of personalized olfactory perception from large-scale chemoinformatic features.
Li, Hongyang; Panwar, Bharat; Omenn, Gilbert S; Guan, Yuanfang
2018-02-01
The olfactory stimulus-percept problem has been studied for more than a century, yet it is still hard to precisely predict the odor given the large-scale chemoinformatic features of an odorant molecule. A major challenge is that the perceived qualities vary greatly among individuals due to different genetic and cultural backgrounds. Moreover, the combinatorial interactions between multiple odorant receptors and diverse molecules significantly complicate the olfaction prediction. Many attempts have been made to establish structure-odor relationships for intensity and pleasantness, but no models are available to predict the personalized multi-odor attributes of molecules. In this study, we describe our winning algorithm for predicting individual and population perceptual responses to various odorants in the DREAM Olfaction Prediction Challenge. We find that a random forest model consisting of multiple decision trees is well suited to this prediction problem, given the large feature spaces and high variability of perceptual ratings among individuals. Integrating both population and individual perceptions into our model effectively reduces the influence of noise and outliers. By analyzing the importance of each chemical feature, we find that a small set of low- and nondegenerative features is sufficient for accurate prediction. Our random forest model successfully predicts personalized odor attributes of structurally diverse molecules. This model together with the top discriminative features has the potential to extend our understanding of olfactory perception mechanisms and provide an alternative for rational odorant design.
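A hedged sketch of the kind of model the abstract describes: a random forest regressor mapping chemoinformatic features to a perceptual rating, with feature importances used to identify a small discriminative subset. The features and ratings here are random placeholders, not DREAM challenge data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in data: 200 "molecules" with 50 descriptor features,
# where the rating is driven mainly by feature 0 (an assumption for
# illustration, so that the importance ranking has a known answer).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Feature importances reveal the small discriminative subset, as in the paper.
top = np.argsort(model.feature_importances_)[::-1][:3]
print("most important features:", top)
```

On real data one would cross-validate per subject, since the paper's key point is the high inter-individual variability of ratings.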
Can AIDS prevention move to sufficient scale?
Slutkin, G
1993-05-01
Much has been learned about which AIDS prevention interventions are effective and what an AIDS prevention program should look like. It is also clear that important program issues must be worked out at the country level if effective interventions are to be achieved. Programs with successful interventions and approaches in most countries, however, have yet to be implemented on a sufficiently large scale. While some national programs are beginning to use proven interventions and are moving toward implementing full-scale national AIDS programs, most AIDS prevention programs do not incorporate condom marketing, are not using mass media and advertising in a well-programmed way, do not have peer projects to reach most at-risk populations, and do not have systems in place to diagnose and treat persons with sexually transmitted diseases (STD). Far more planning and resources for AIDS prevention are needed from national and international public and private sectors. International efforts by the World Health Organization (WHO), UNICEF, UNDP, UNESCO, UNFPA, and the World Bank have increased markedly over the past few years. Bilaterally, the US, Sweden, United Kingdom, Canada, Netherlands, Norway, Denmark, Japan, Germany, France, and other countries are contributing to WHO/GPA and to direct bilateral AIDS prevention activities. USAID is the largest single contributor to WHO/GPA and also runs the largest bilateral program, with its $168 million AIDSCAP project funded over 5 years. AIDSCAP integrates condom distribution and marketing, STD prevention and control, behavioral change and communication strategies through person-to-person and mass media approaches, and strong evaluation components. AIDSCAP can help fulfill the need to demonstrate that programs can be developed on a country-wide level by showing how behavior can be changed in a broad geographical area.
A modified three-term PRP conjugate gradient algorithm for optimization models.
Wu, Yanlin
2017-01-01
The nonlinear conjugate gradient (CG) algorithm is a very effective method for optimization, especially for large-scale problems, because of its low memory requirement and simplicity. Zhang et al. (IMA J. Numer. Anal. 26:629-649, 2006) first proposed a three-term CG algorithm based on the well-known Polak-Ribière-Polyak (PRP) formula for unconstrained optimization, where their method has the sufficient descent property without any line search technique. They proved global convergence under the Armijo line search, but the proof fails for the Wolfe line search technique. Inspired by their method, we make a further study and give a modified three-term PRP CG algorithm. The presented method possesses the following features: (1) the sufficient descent property also holds without any line search technique; (2) the trust region property of the search direction is automatically satisfied; (3) the steplength is bounded from below; (4) global convergence is established under the Wolfe line search. Numerical results show that the new algorithm is more effective than the normal method.
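For orientation, a minimal sketch of a three-term PRP direction of the Zhang et al. form that the abstract builds on: d+ = −g+ + βd − θy with β = g+ᵀy/‖g‖² and θ = g+ᵀd/‖g‖², which satisfies d+ᵀg+ = −‖g+‖² (sufficient descent) independent of the line search. The Armijo backtracking and the quadratic test problem below are assumptions for illustration, not the paper's modified method or its Wolfe-based analysis.

```python
import numpy as np

def three_term_prp(f, grad, x, iters=2000, tol=1e-8):
    """Three-term PRP CG with simple Armijo backtracking (illustrative)."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search; d is a descent direction since
        # d.g = -||g||^2 < 0, so the loop terminates.
        t, c = 1.0, 1e-4
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        gk2 = g @ g
        beta = (g_new @ y) / gk2
        theta = (g_new @ d) / gk2
        # Three-term direction: d+ = -g+ + beta*d - theta*y,
        # giving d+ . g+ = -||g+||^2 exactly.
        d = -g_new + beta * d - theta * y
        x, g = x_new, g_new
    return x

# Illustrative ill-conditioned quadratic with minimizer at the origin
A = np.diag([1.0, 10.0, 100.0])
quad = lambda x: 0.5 * x @ A @ x
quad_grad = lambda x: A @ x
x_star = three_term_prp(quad, quad_grad, np.array([1.0, 1.0, 1.0]))
print(np.round(x_star, 6))  # converges to the minimizer at the origin
```

The paper's contribution concerns a modified direction with the trust-region and bounded-steplength properties and Wolfe-based convergence, which this sketch does not reproduce.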
NASA Astrophysics Data System (ADS)
Di Vittorio, Alan V.; Negrón-Juárez, Robinson I.; Higuchi, Niro; Chambers, Jeffrey Q.
2014-03-01
Debate continues over the adequacy of existing field plots to sufficiently capture Amazon forest dynamics to estimate regional forest carbon balance. Tree mortality dynamics are particularly uncertain due to the difficulty of observing large, infrequent disturbances. A recent paper (Chambers et al 2013 Proc. Natl Acad. Sci. 110 3949-54) reported that Central Amazon plots missed 9-17% of tree mortality, and here we address ‘why’ by elucidating two distinct mortality components: (1) variation in annual landscape-scale average mortality and (2) the frequency distribution of the size of clustered mortality events. Using a stochastic-empirical tree growth model we show that a power law distribution of event size (based on merged plot and satellite data) is required to generate spatial clustering of mortality that is consistent with forest gap observations. We conclude that existing plots do not sufficiently capture losses because their placement, size, and longevity assume spatially random mortality, while mortality is actually distributed among differently sized events (clusters of dead trees) that determine the spatial structure of forest canopies.
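The role of the event-size distribution can be illustrated with a toy draw from a power law: if clustered mortality sizes follow P(s) ~ s^(−α), a substantial share of total tree death sits in events too large and rare for small plots to sample. The exponent, minimum size, and size threshold below are assumed for illustration and are not the paper's fitted values.

```python
import random

# Inverse-transform sampling from a continuous power law with density
# p(s) ~ s^(-alpha) for s >= s_min.
def powerlaw_sample(alpha, s_min, rng):
    u = rng.random()
    return s_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

rng = random.Random(42)
events = [powerlaw_sample(2.3, 1.0, rng) for _ in range(100_000)]

total = sum(events)
big = sum(s for s in events if s > 100)   # mortality in clusters of >100 trees
print(f"share of mortality in >100-tree events: {big / total:.1%}")
```

Plot networks sized and placed under an assumption of spatially random mortality effectively never observe the `> 100`-tree tail, which is one way to picture the 9-17% undercount reported above.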
Best practices for germ-free derivation and gnotobiotic zebrafish husbandry
Melancon, E.; De La Torre Canny, S. Gomez; Sichel, S.; Kelly, M.; Wiles, T.J.; Rawls, J.F.; Eisen, J.S.; Guillemin, K.
2017-01-01
All animals are ecosystems with resident microbial communities, referred to as microbiota, which play profound roles in host development, physiology, and evolution. Enabled by new DNA sequencing technologies, there is a burgeoning interest in animal–microbiota interactions, but dissecting the specific impacts of microbes on their hosts is experimentally challenging. Gnotobiology, the study of biological systems in which all members are known, enables precise experimental analysis of the necessity and sufficiency of microbes in animal biology by deriving animals germ-free (GF) and inoculating them with defined microbial lineages. Mammalian host models have long dominated gnotobiology, but we have recently adapted gnotobiotic approaches to the zebrafish (Danio rerio), an important aquatic model. Zebrafish offer several experimental attributes that enable rapid, large-scale gnotobiotic experimentation with high replication rates and exquisite optical resolution. Here we describe detailed protocols for three procedures that form the foundation of zebrafish gnotobiology: derivation of GF embryos, microbial association of GF animals, and long-term, GF husbandry. Our aim is to provide sufficient guidance in zebrafish gnotobiotic methodology to expand and enrich this exciting field of research. PMID:28129860
Phylogenomic Reconstruction of the Oomycete Phylogeny Derived from 37 Genomes
McCarthy, Charley G. P.
2017-01-01
ABSTRACT The oomycetes are a class of microscopic, filamentous eukaryotes within the Stramenopiles-Alveolata-Rhizaria (SAR) supergroup which includes ecologically significant animal and plant pathogens, most infamously the causative agent of potato blight Phytophthora infestans. Single-gene and concatenated phylogenetic studies both of individual oomycete genera and of members of the larger class have resulted in conflicting conclusions concerning species phylogenies within the oomycetes, particularly for the large Phytophthora genus. Genome-scale phylogenetic studies have successfully resolved many eukaryotic relationships by using supertree methods, which combine large numbers of potentially disparate trees to determine evolutionary relationships that cannot be inferred from individual phylogenies alone. With a sufficient amount of genomic data now available, we have undertaken the first whole-genome phylogenetic analysis of the oomycetes using data from 37 oomycete species and 6 SAR species. In our analysis, we used established supertree methods to generate phylogenies from 8,355 homologous oomycete and SAR gene families and have complemented those analyses with both phylogenomic network and concatenated supermatrix analyses. Our results show that a genome-scale approach to oomycete phylogeny resolves oomycete classes and individual clades within the problematic Phytophthora genus. Support for the resolution of the inferred relationships between individual Phytophthora clades varies depending on the methodology used. Our analysis represents an important first step in large-scale phylogenomic analysis of the oomycetes. IMPORTANCE The oomycetes are a class of eukaryotes and include ecologically significant animal and plant pathogens. 
Single-gene and multigene phylogenetic studies of individual oomycete genera and of members of the larger classes have resulted in conflicting conclusions concerning interspecies relationships among these species, particularly for the Phytophthora genus. The onset of next-generation sequencing techniques now means that a wealth of oomycete genomic data is available. For the first time, we have used genome-scale phylogenetic methods to resolve oomycete phylogenetic relationships. We used supertree methods to generate single-gene and multigene species phylogenies. Overall, our supertree analyses utilized phylogenetic data from 8,355 oomycete gene families. We have also complemented our analyses with superalignment phylogenies derived from 131 single-copy ubiquitous gene families. Our results show that a genome-scale approach to oomycete phylogeny resolves oomycete classes and clades. Our analysis represents an important first step in large-scale phylogenomic analysis of the oomycetes. PMID:28435885
Local short-duration precipitation extremes in Sweden: observations, forecasts and projections
NASA Astrophysics Data System (ADS)
Olsson, Jonas; Berg, Peter; Simonsson, Lennart
2015-04-01
Local short-duration precipitation extremes (LSPEs) are a key driver of hydrological hazards, notably in steep catchments with thin soils and in urban environments. The floods, landslides, etc., that they trigger have large consequences for society in terms of both economy and health. Accurate estimation of LSPEs on both climatological time-scales (past, present, future) and in real time is thus of great importance for improved hydrological predictions as well as design of constructions and infrastructure affected by hydrological fluxes. Analysis of LSPEs is, however, associated with various limitations and uncertainties. These are to a large degree associated with the small-scale nature of the meteorological processes behind LSPEs and the associated requirements on observation sensors as well as model descriptions. Some examples of causes for the limitations involved are given in the following. - Observations: High-resolution data sets available for LSPE analyses are often limited to either relatively long series from one or a few stations or relatively short series from larger station networks. Radar data have excellent resolutions in both time and space but the estimated local precipitation intensity is still highly uncertain. New and promising techniques (e.g. microwave links) are still in their infancy. - Weather forecasts (short-range): Although forecasts with the required spatial resolution for potential generation of LSPEs (around 2-4 km) are becoming operationally available, the actual forecast precision of LSPEs is largely unknown. Forecasted LSPEs may be displaced in time or, more critically, in space, which strongly affects the possibility to assess hydrological risk. - Climate projections: The spatial resolution of the current RCM generation (around 25 km) is not sufficient for proper description of LSPEs. Statistical post-processing (i.e. downscaling) is required, which adds substantial uncertainty to the final result.
Ensemble generation of sufficiently high-resolution RCM projections is not yet computationally feasible. In this presentation, examples of recent research in Sweden related to these aspects will be given with some main findings shown and discussed. Finally, some ongoing and future research directions will be outlined (the former hopefully accompanied by some brand-new results).
NASA Astrophysics Data System (ADS)
Druzhinin, O.; Troitskaya, Yu; Zilitinkevich, S.
2018-01-01
Detailed knowledge of the turbulent exchange processes occurring in the atmospheric marine boundary layer is of primary importance for their correct parameterization in large-scale prognostic models. These processes are complicated, especially at sufficiently strong wind forcing conditions, by the presence of sea-spray drops which are torn off the crests of sufficiently steep surface waves by the wind gusts. Natural observations indicate that the mass fraction of sea-spray drops increases with wind speed and their impact on the dynamics of the air in the vicinity of the sea surface can become quite significant. Field experiments, however, are limited by insufficient accuracy of the acquired data and are in general costly and difficult. Laboratory modeling presents another route to investigate the spray-mediated exchange processes in much more detail as compared to the natural experiments. However, laboratory measurements, contact as well as Particle Image Velocimetry (PIV) methods, also suffer from an inability to resolve the dynamics of the near-surface air-flow, especially in the surface wave troughs. In this report, we present a first attempt to use Direct Numerical Simulation (DNS) as a tool for investigation of the drop-mediated momentum, heat and moisture transfer in a turbulent, droplet-laden air flow over a wavy water surface. DNS is capable of resolving the details of the transfer processes and does not involve any closure assumptions typical of Large-Eddy and Reynolds Averaged Navier-Stokes (LES and RANS) simulations. Thus DNS provides a basis for improving parameterizations in LES and RANS closure models and further development of large-scale prognostic models. In particular, we discuss numerical results showing the details of the modification of the air flow velocity, temperature and relative humidity fields by multidisperse, evaporating drops.
We use an Eulerian-Lagrangian approach in which the equations for the air-flow fields are solved in a Eulerian frame, whereas the drop dynamics equations are solved in a Lagrangian frame. The effects of the air flow and drops on the water surface wave are neglected. A point-force approximation is employed to model the feedback contributions of the drops to the air momentum, heat and moisture transfer.
NASA Technical Reports Server (NTRS)
King, I. R.; Fassett, C. I.; Thomson, B. J.; Minton, D. A.; Watters, W. A.
2017-01-01
When sufficiently large impact craters form on the Moon, rocks and unweathered materials are excavated from beneath the regolith and deposited into their blocky ejecta. This enhances the rockiness and roughness of the proximal ejecta surrounding fresh impact craters. The interiors of fresh craters are typically also rough, due to blocks, breccia, and impact melt. Thus, both the interior and proximal ejecta of fresh craters are usually radar bright and have high circular polarization ratios (CPR). Beyond the proximal ejecta, radar-dark halos are observed around some fresh craters, suggesting that distal ejecta is finer grained than the background regolith. The radar signatures of craters fade with time as the regolith grows.
Load Balancing Unstructured Adaptive Grids for CFD Problems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid
1996-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
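The repartitioning decision described above can be sketched as follows; the imbalance threshold and remapping cost model here are illustrative assumptions, not the paper's actual criteria.

```python
# Sketch of the load-balancing decision: repartition only when the adapted
# mesh is sufficiently unbalanced, and accept the remap only when the gain
# from improved balance outweighs the one-time redistribution cost.

def imbalance(loads):
    """Ratio of max processor load to mean load; 1.0 = perfectly balanced."""
    mean = sum(loads) / len(loads)
    return max(loads) / mean

def should_repartition(loads, threshold=1.2):
    """Repartition only if the adapted mesh is sufficiently unbalanced
    (threshold is a hypothetical value)."""
    return imbalance(loads) > threshold

def accept_remap(old_loads, new_loads, remap_cost, steps_until_next_adaption):
    """Accept the new partitioning only if the projected saving from the
    improved balance over the remaining solver steps compensates the cost
    of migrating mesh data between processors."""
    saving_per_step = max(old_loads) - max(new_loads)
    return saving_per_step * steps_until_next_adaption > remap_cost

loads = [90, 100, 140, 70]        # element counts per processor after adaption
print(should_repartition(loads))  # imbalance = 140/100 = 1.4 -> True
balanced = [100, 100, 100, 100]
print(accept_remap(loads, balanced, remap_cost=500, steps_until_next_adaption=20))
# saving of 40 per step over 20 steps = 800 > 500 -> True
```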
Constructing a Watts-Strogatz network from a small-world network with symmetric degree distribution.
Menezes, Mozart B C; Kim, Seokjin; Huang, Rongbing
2017-01-01
Though the small-world phenomenon is widespread in many real networks, it is still challenging to replicate a large network at the full scale for further study on its structure and dynamics when sufficient data are not readily available. We propose a method to construct a Watts-Strogatz network using a sample from a small-world network with symmetric degree distribution. Our method yields an estimated degree distribution which fits closely with that of a Watts-Strogatz network and leads into accurate estimates of network metrics such as clustering coefficient and degree of separation. We observe that the accuracy of our method increases as network size increases.
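A minimal sketch of the construction step, assuming the parameters n (nodes), k (mean degree) and p (rewiring probability) have already been estimated from the sampled degree distribution; the authors' estimation procedure itself is not reproduced here.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Construct a Watts-Strogatz network: a ring lattice in which each node
    links to its k nearest neighbours (k even), followed by rewiring each
    lattice edge with probability p. In the method above, n, k and p would
    be chosen so that the degree distribution matches the one estimated
    from the sample."""
    rng = random.Random(seed)
    g = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            g.add((i, (i + j) % n))
    for (u, v) in sorted(g):            # snapshot of the original lattice edges
        if rng.random() < p:
            w = rng.randrange(n)
            while w == u or (u, w) in g or (w, u) in g:
                w = rng.randrange(n)    # avoid self-loops and duplicate edges
            g.discard((u, v))
            g.add((u, w))
    return g

g = watts_strogatz(n=20, k=4, p=0.1)
print(len(g))  # rewiring preserves the edge count: n * k / 2 = 40
```

Clustering coefficient and degree of separation can then be measured on the constructed graph and compared against the sampled network's estimates.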
NASA Technical Reports Server (NTRS)
1975-01-01
The retention of granular catalyst in a metal foam matrix was demonstrated to greatly increase the life capability of hydrazine monopropellant reactors. Since the nickel foam used in previous tests was found to degrade after long-term exposure, the cause of degradation was examined and metal foams of improved durability were developed. The most durable foam developed was a rhodium-coated nickel foam. An all-platinum foam was found to be incompatible with a hot ammonia (hydrazine) environment. It is recommended that the manufacturing process for the improved foam be scaled up to produce samples sufficiently large for space shuttle APU gas generator testing.
Lifting primordial non-Gaussianity above the noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welling, Yvette; Woude, Drian van der; Pajer, Enrico, E-mail: welling@strw.leidenuniv.nl, E-mail: D.C.vanderWoude@uu.nl, E-mail: enrico.pajer@gmail.com
2016-08-01
Primordial non-Gaussianity (PNG) in Large Scale Structure is obfuscated by many additional sources of non-linearity. Within the Effective Field Theory approach to Standard Perturbation Theory, we show that matter non-linearities in the bispectrum can be modeled sufficiently well to strengthen current bounds with near-future surveys, such as Euclid. We find that the EFT corrections are crucial to this improvement in sensitivity. Yet our understanding of non-linearities is still insufficient to reach important theoretical benchmarks for equilateral PNG, while for local PNG our forecast is more optimistic. We consistently account for the theoretical error intrinsic to the perturbative approach and discuss the details of its implementation in Fisher forecasts.
Flaw tolerance promoted by dissipative deformation mechanisms between material building blocks
NASA Astrophysics Data System (ADS)
Verho, Tuukka; Buehler, Markus J.
2014-09-01
Novel high-performance composite materials often draw inspiration from natural materials such as bone or mollusc shells. A prime feature of such composites is that they are, like their natural counterparts, quasibrittle. They are tolerant to material flaws up to a certain characteristic flaw-tolerant size scale, exhibiting high strength and toughness, but start to behave in a brittle manner when sufficiently large flaws are present. Here, we establish that better flaw tolerance can be achieved by maximizing fracture toughness relative to the maximum elastic energy available in the material, and we demonstrate this concept with simple two-dimensional coarse-grained simulations where the transition from brittle to quasibrittle behaviour is examined.
A new nonlinear conjugate gradient coefficient under strong Wolfe-Powell line search
NASA Astrophysics Data System (ADS)
Mohamed, Nur Syarafina; Mamat, Mustafa; Rivaie, Mohd
2017-08-01
A nonlinear conjugate gradient (CG) method plays an important role in solving large-scale unconstrained optimization problems. The method is widely used due to its simplicity, and it is known to possess the sufficient descent condition and global convergence properties. In this paper, a new nonlinear CG coefficient βk is presented, obtained by employing the strong Wolfe-Powell inexact line search. The performance of the new βk is tested in terms of number of iterations and central processing unit (CPU) time using MATLAB software with an Intel Core i7-3470 CPU. Numerical results show that the new βk converges rapidly compared to other classical CG methods.
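Since the abstract does not give the new βk formula, the sketch below uses the classical Fletcher-Reeves coefficient, with a simple backtracking (Armijo) line search standing in for the full strong Wolfe-Powell search:

```python
def grad(f, x, h=1e-6):
    """Central-difference gradient (stand-in for an analytic gradient)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def cg_fletcher_reeves(f, x, iters=200):
    """Nonlinear CG with the classical Fletcher-Reeves beta_k (not the
    paper's new coefficient) and an Armijo backtracking line search."""
    g = grad(f, x)
    d = [-gi for gi in g]
    for _ in range(iters):
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0:                       # safeguard: restart along steepest descent
            d = [-gi for gi in g]
            slope = -sum(gi * gi for gi in g)
        alpha, fx = 1.0, f(x)
        while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(f, x)
        if sum(gi * gi for gi in g_new) < 1e-18:
            break
        beta = sum(gi * gi for gi in g_new) / sum(gi * gi for gi in g)  # Fletcher-Reeves
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

f = lambda x: x[0] ** 2 + 10 * x[1] ** 2   # simple convex test problem
sol = cg_fletcher_reeves(f, [3.0, -2.0])
print(sol)  # converges toward the minimizer [0, 0]
```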
An HTRF® Assay for the Protein Kinase ATM.
Adams, Phillip; Clark, Jonathan; Hawdon, Simon; Hill, Jennifer; Plater, Andrew
2017-01-01
Ataxia telangiectasia mutated (ATM) is a serine/threonine kinase that plays a key role in the regulation of DNA damage pathways and checkpoint arrest. In recent years, there has been growing interest in ATM as a therapeutic target due to its association with cancer cell survival following genotoxic stress such as radio- and chemotherapy. Large-scale targeted drug screening campaigns have been hampered, however, by technical issues associated with the production of sufficient quantities of purified ATM and the availability of a suitable high-throughput assay. Using a purified, functionally active recombinant ATM and one of its physiological substrates, p53, we have developed an in vitro FRET-based activity assay that is suitable for high-throughput drug screening.
Hale, Lauren; Hale, Benjamin
2010-06-01
Based on theoretical and empirical work, we argue that autonomy is likely an important underlying source of healthy sleep. The implication is that 'treatment' for sleep problems cannot be understood as an individual-level behavioral problem but must instead be addressed in concert with larger scale social factors that may be inhibiting high-quality sufficient sleep in large segments of the population. When sleep is understood as a proxy for health, the implications extend even further. Policies and interventions that facilitate the autonomy of individuals therefore may not only help reduce individual sleep problems but also have broader consequences for ameliorating social disparities in health.
Observed Differences between North American Snow Extent and Snow Depth Variability
NASA Astrophysics Data System (ADS)
Ge, Y.; Gong, G.
2006-12-01
Snow extent and snow depth are two related characteristics of a snowpack, but they need not be mutually consistent. Differences between these two variables at local scales are readily apparent. However, at the larger scales which interact with atmospheric circulation and climate, snow extent is typically the variable used, while snow depth is often assumed to be of minor importance or mutually consistent with snow extent, though this is rarely verified. In this study, a new regional/continental-scale gridded dataset derived from field observations is utilized to quantitatively evaluate the relationship between snow extent and snow depth over North America. Various statistical methods are applied to assess the mutual consistency of monthly snow depth vs. snow extent, including correlations, composites and principal components. Results indicate that snow depth variations are significant in their own right, and that depth and extent anomalies are largely unrelated, especially over broad high-latitude regions north of the snowline. In the vicinity of the snowline, where precipitation and ablation can affect both snow extent and snow depth, the two variables vary concurrently, especially in autumn and spring. It is also found that deeper winter snow translates into larger snow-covered area in the subsequent spring/summer season, which suggests a possible influence of winter snow depth on summer climate. The observed lack of mutual consistency at continental/regional scales suggests that snowpack depth variations may be of sufficiently large magnitude, spatial scope and temporal duration to influence regional-to-hemispheric climate, in a manner unrelated to the more extensively studied snow extent variations.
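The mutual-consistency check can be illustrated with a plain Pearson correlation of monthly anomalies; the anomaly values below are made up for illustration, not taken from the study's dataset.

```python
def pearson(x, y):
    """Pearson correlation, as one of the statistical methods used to assess
    mutual consistency of snow-depth vs snow-extent anomalies."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

extent_anom = [1.2, -0.5, 0.3, 0.8, -1.1, 0.2]   # hypothetical monthly anomalies
depth_anom = [0.1, 0.9, -0.4, 0.2, 0.5, -1.3]
r = pearson(extent_anom, depth_anom)
print(abs(r) < 0.5)  # weak correlation: anomalies largely unrelated
```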
Motion-based prediction is sufficient to solve the aperture problem
Perrinet, Laurent U; Masson, Guillaume S
2012-01-01
In low-level sensory systems, it is still unclear how the noisy information collected locally by neurons may give rise to a coherent global percept. This is well demonstrated for the detection of motion in the aperture problem: as the luminance of an elongated line is symmetrical along its axis, tangential velocity is ambiguous when measured locally. Here, we develop the hypothesis that motion-based predictive coding is sufficient to infer global motion. Our implementation is based on a context-dependent diffusion of a probabilistic representation of motion. We observe in simulations a progressive solution to the aperture problem similar to physiology and behavior. We demonstrate that this solution is the result of two underlying mechanisms. First, we demonstrate the formation of a tracking behavior favoring temporally coherent features independently of their texture. Second, we observe that incoherent features are explained away, while coherent information diffuses progressively to the global scale. Most previous models included ad hoc mechanisms, such as end-stopped cells or a selection layer, to track specific luminance-based features as necessary conditions to solve the aperture problem. Here, we have shown that motion-based predictive coding, as implemented in this functional model, is sufficient to solve the aperture problem. This solution may give insight into the role of prediction underlying a large class of sensory computations. PMID:22734489
Movement reveals scale dependence in habitat selection of a large ungulate
Northrup, Joseph; Anderson, Charles R.; Hooten, Mevin B.; Wittemyer, George
2016-01-01
Ecological processes operate across temporal and spatial scales. Anthropogenic disturbances impact these processes, but examinations of scale dependence in impacts are infrequent. Such examinations can provide important insight to wildlife–human interactions and guide management efforts to reduce impacts. We assessed spatiotemporal scale dependence in habitat selection of mule deer (Odocoileus hemionus) in the Piceance Basin of Colorado, USA, an area of ongoing natural gas development. We employed a newly developed animal movement method to assess habitat selection across scales defined using animal-centric spatiotemporal definitions ranging from the local (defined from five hour movements) to the broad (defined from weekly movements). We extended our analysis to examine variation in scale dependence between night and day and assess functional responses in habitat selection patterns relative to the density of anthropogenic features. Mule deer displayed scale invariance in the direction of their response to energy development features, avoiding well pads and the areas closest to roads at all scales, though with increasing strength of avoidance at coarser scales. Deer displayed scale-dependent responses to most other habitat features, including land cover type and habitat edges. Selection differed between night and day at the finest scales, but homogenized as scale increased. Deer displayed functional responses to development, with deer inhabiting the least developed ranges more strongly avoiding development relative to those with more development in their ranges. Energy development was a primary driver of habitat selection patterns in mule deer, structuring their behaviors across all scales examined. Stronger avoidance at coarser scales suggests that deer behaviorally mediated their interaction with development, but only to a degree. 
At higher development densities than seen in this area, such mediation may not be possible and thus maintenance of sufficient habitat with lower development densities will be a critical best management practice as development expands globally.
Identifying, characterizing and predicting spatial patterns of lacustrine groundwater discharge
NASA Astrophysics Data System (ADS)
Tecklenburg, Christina; Blume, Theresa
2017-10-01
Lacustrine groundwater discharge (LGD) can significantly affect lake water balances and lake water quality. However, quantifying LGD and its spatial patterns is challenging because of the large spatial extent of the aquifer-lake interface and pronounced spatial variability. This is the first experimental study to specifically examine these larger-scale patterns with sufficient spatial resolution to systematically investigate how landscape and local characteristics affect the spatial variability in LGD. We measured vertical temperature profiles around a 0.49 km2 lake in northeastern Germany with a needle thermistor, which has the advantage of allowing for rapid (manual) measurements and thus, when used in a survey, high spatial coverage and resolution. Groundwater inflow rates were then estimated using the heat transport equation. These near-shore temperature profiles were complemented with sediment temperature measurements with a fibre-optic cable along six transects from shoreline to shoreline and radon measurements of lake water samples to qualitatively identify LGD patterns in the offshore part of the lake. As the hydrogeology of the catchment is sufficiently homogeneous (sandy sediments of a glacial outwash plain; no bedrock control) to avoid patterns being dominated by geological discontinuities, we were able to test the common assumptions that spatial patterns of LGD are mainly controlled by sediment characteristics and the groundwater flow field. We also tested the assumption that topographic gradients can be used as a proxy for gradients of the groundwater flow field. Thanks to the extensive data set, these tests could be carried out in a nested design, considering both small- and large-scale variability in LGD. We found that LGD was concentrated in the near-shore area, but alongshore variability was high, with specific regions of higher rates and higher spatial variability.
Median inflow rates were 44 L m-2 d-1 with maximum rates in certain locations going up to 169 L m-2 d-1. Offshore LGD was negligible except for two local hotspots on steep steps in the lake bed topography. Large-scale groundwater inflow patterns were correlated with topography and the groundwater flow field, whereas small-scale patterns correlated with grain size distributions of the lake sediment. These findings confirm results and assumptions of theoretical and modelling studies more systematically than was previously possible with coarser sampling designs. However, we also found that a significant fraction of the variance in LGD could not be explained by these controls alone and that additional processes need to be considered. While regression models using these controls as explanatory variables had limited power to predict LGD rates, the results nevertheless encourage the use of topographic indices and sediment heterogeneity as an aid for targeted campaigns in future studies of groundwater discharge to lakes.
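As a sketch of the kind of heat-transport inversion applied to the needle-thermistor profiles, the classical Bredehoeft-Papadopulos (1965) steady-state solution can be fitted for the Peclet number and converted to a Darcy flux. The profile geometry, thermal parameters and sign convention below are assumptions; the study's exact formulation may differ.

```python
import math

def bp_profile(z, L, beta):
    """Dimensionless steady-state temperature at depth z over a profile of
    length L (Bredehoeft & Papadopulos 1965); beta is the thermal Peclet
    number encoding the vertical Darcy flux."""
    return (math.exp(beta * z / L) - 1.0) / (math.exp(beta) - 1.0)

def fit_beta(depths, theta, L):
    """Recover beta by a simple grid search over measured dimensionless
    temperatures (sketch only; a real inversion would use least squares)."""
    best, best_err = None, float("inf")
    b = -10.0
    while b <= 10.0:
        if abs(b) > 1e-6:  # skip the singular beta = 0 point
            err = sum((bp_profile(z, L, b) - t) ** 2 for z, t in zip(depths, theta))
            if err < best_err:
                best, best_err = b, err
        b += 0.01
    return best

L = 0.5                                    # profile length in m (hypothetical)
depths = [0.1, 0.2, 0.3, 0.4]
theta = [bp_profile(z, L, 2.0) for z in depths]   # synthetic data with beta = 2
beta = fit_beta(depths, theta, L)
# Darcy flux from beta: q = beta * kappa / (rho_f * c_f * L)
kappa, rho_c = 1.4, 4.18e6                 # W/m/K and J/m^3/K (assumed values)
q = beta * kappa / (rho_c * L)             # m/s; sign convention assumed
print(round(beta, 2))
```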
40 CFR 86.1338-84 - Emission measurement accuracy.
Code of Federal Regulations, 2010 CFR
2010-07-01
... engineering practice dictates that exhaust emission sample analyzer readings below 15 percent of full scale... computers, data loggers, etc., can provide sufficient accuracy and resolution below 15 percent of full scale... spaced points, using good engineering judgement, below 15 percent of full scale are made to ensure the...
40 CFR 86.1338-84 - Emission measurement accuracy.
Code of Federal Regulations, 2013 CFR
2013-07-01
... engineering practice dictates that exhaust emission sample analyzer readings below 15 percent of full scale... computers, data loggers, etc., can provide sufficient accuracy and resolution below 15 percent of full scale... spaced points, using good engineering judgement, below 15 percent of full scale are made to ensure the...
40 CFR 86.1338-84 - Emission measurement accuracy.
Code of Federal Regulations, 2012 CFR
2012-07-01
... engineering practice dictates that exhaust emission sample analyzer readings below 15 percent of full scale... computers, data loggers, etc., can provide sufficient accuracy and resolution below 15 percent of full scale... spaced points, using good engineering judgement, below 15 percent of full scale are made to ensure the...
40 CFR 86.1338-84 - Emission measurement accuracy.
Code of Federal Regulations, 2011 CFR
2011-07-01
... engineering practice dictates that exhaust emission sample analyzer readings below 15 percent of full scale... computers, data loggers, etc., can provide sufficient accuracy and resolution below 15 percent of full scale... spaced points, using good engineering judgement, below 15 percent of full scale are made to ensure the...
Using Magnitude Estimation Scaling in Business Communication Research.
ERIC Educational Resources Information Center
Sturges, David L.
1990-01-01
Critically analyzes magnitude estimation scaling for its potential use in business communication research. Finds that the 12-15 percent increase in explained variance by magnitude estimation over categorical scaling methods may be useful in theory building but may not be sufficient to justify its added expense in applied business communication…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hongyi; Sivapalan, Murugesu; Tian, Fuqiang
Inspired by the Dunne diagram, the climatic and landscape controls on the partitioning of annual runoff into its various components (Hortonian and Dunne overland flow and subsurface stormflow) are assessed quantitatively, from a purely theoretical perspective. A simple distributed hydrologic model has been built, sufficient to simulate the effects of different combinations of climate, soil, and topography on the runoff generation processes. The model is driven by a sequence of simple hypothetical precipitation events, for a large combination of climate and landscape properties, and hydrologic responses at the catchment scale are obtained through aggregation of grid-scale responses. It is found, first, that the water balance responses, including relative contributions of different runoff generation mechanisms, can be related to a small set of dimensionless similarity parameters. These capture the competition between the wetting, drying, storage, and drainage functions underlying the catchment responses, and in this way provide a quantitative approximation of the conceptual Dunne diagram. Second, only a subset of all hypothetical catchment/climate combinations is found to be "behavioral," in terms of falling sufficiently close to the Budyko curve, which describes mean annual runoff as a function of climate aridity. Furthermore, these behavioral combinations are mostly consistent with the qualitative picture presented in the Dunne diagram, indicating clearly the commonality between the Budyko curve and the Dunne diagram. These analyses also suggest clear interrelationships among the "behavioral" climate, soil, and topography parameter combinations, implying that these catchment properties may be constrained to be codependent in order to satisfy the Budyko curve.
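The "behavioral" screening against the Budyko curve can be sketched as follows, using the common Budyko (1974) form of the curve and an assumed closeness tolerance (the paper's actual criterion is not given in the abstract):

```python
import math

def budyko_runoff_ratio(aridity):
    """Budyko (1974) curve: evaporation ratio E/P as a function of the
    aridity index PET/P; the runoff ratio Q/P is its complement."""
    e_over_p = math.sqrt(aridity * math.tanh(1.0 / aridity) * (1.0 - math.exp(-aridity)))
    return 1.0 - e_over_p

def is_behavioral(simulated_runoff_ratio, aridity, tol=0.1):
    """A catchment/climate combination counts as 'behavioral' if its
    simulated annual runoff ratio falls sufficiently close to the Budyko
    curve (the tolerance here is an assumed value)."""
    return abs(simulated_runoff_ratio - budyko_runoff_ratio(aridity)) < tol

print(round(budyko_runoff_ratio(1.0), 3))  # runoff ratio at PET/P = 1
print(is_behavioral(0.5, 0.5))             # close to the curve -> behavioral
print(is_behavioral(0.5, 2.0))             # far from the curve -> not behavioral
```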
On the Grain-modified Magnetic Diffusivities in Protoplanetary Disks
NASA Astrophysics Data System (ADS)
Xu, Rui; Bai, Xue-Ning
2016-03-01
Weakly ionized protoplanetary disks (PPDs) are subject to nonideal magnetohydrodynamic (MHD) effects, including ohmic resistivity, the Hall effect, and ambipolar diffusion (AD), and the resulting magnetic diffusivities (η_O, η_H, and η_A) largely control the disk gas dynamics. The presence of grains not only strongly reduces the disk ionization fraction, but also modifies the scalings of η_H and η_A with magnetic field strength. We analytically derive asymptotic expressions for η_H and η_A in both the strong- and weak-field limits and show that toward a strong field, η_H can change sign (at a threshold field strength B_th), mimicking a flip of field polarity, and AD is substantially reduced. Applied to PPDs, we find that when small ∼0.1 (0.01) μm grains are sufficiently abundant (mass ratio ∼0.01 (10^-4)), η_H can change sign up to ∼2-3 scale heights above the midplane at a modest field strength (plasma β ∼ 100) over a wide range of disk radii. The reduction of AD is also substantial toward the AD-dominated outer disk and may activate the magnetorotational instability. We further perform local nonideal MHD simulations of the inner disk (within 10 au) and show that, with sufficiently abundant small grains, the magnetic field amplification due to the Hall-shear instability saturates at a very low level near the threshold field strength B_th. Together with previous studies, we conclude by discussing the grain-abundance-dependent phenomenology of PPD gas dynamics.
Anisotropic extinction distortion of the galaxy correlation function
NASA Astrophysics Data System (ADS)
Fang, Wenjuan; Hui, Lam; Ménard, Brice; May, Morgan; Scranton, Ryan
2011-09-01
Similar to the magnification of galaxies' fluxes by gravitational lensing, the extinction of the fluxes by cosmic dust, whose existence was recently detected by B. Ménard, R. Scranton, M. Fukugita, and G. Richards, Mon. Not. R. Astron. Soc. 405, 1025 (2010), DOI: 10.1111/j.1365-2966.2010.16486.x, also modifies the distribution of a flux-selected galaxy sample. We study the anisotropic distortion of the 3D galaxy correlation function by dust extinction, including magnification bias and redshift distortion at the same time. We find the extinction distortion is most significant along the line of sight and at large separations, similar to that from magnification bias. The correction from dust extinction is negative except at sufficiently large transverse separations, which is almost always opposite to that from magnification bias (we consider a number count slope s > 0.4). Hence, the distortions from these two effects tend to reduce each other. At low z (≲1), the distortion by extinction is stronger than that by magnification bias, but at high z the reverse holds. We also study how dust extinction affects real-space probes of the baryon acoustic oscillations (BAO) and the linear redshift distortion parameter β. We find its effect on BAO is negligible. However, it introduces a positive scale-dependent correction to β that can be as large as a few percent. At the same time, we also find a negative scale-dependent correction from magnification bias, which is up to the percent level at low z, but up to ∼40% at high z. These corrections are non-negligible for precision cosmology, and should be considered when testing General Relativity through the scale dependence of β.
Progress towards Continental River Dynamics modeling
NASA Astrophysics Data System (ADS)
Yu, Cheng-Wei; Zheng, Xing; Liu, Frank; Maidment, David; Hodges, Ben
2017-04-01
The high-resolution National Water Model (NWM), launched by U.S. National Oceanic and Atmospheric Administration (NOAA) in August 2016, has shown it is possible to provide real-time flow prediction in rivers and streams across the entire continental United States. The next step for continental-scale modeling is moving from reduced physics (e.g. Muskingum-Cunge) to full dynamic modeling with the Saint-Venant equations. The Simulation Program for River Networks (SPRNT) provides a computational approach for the Saint-Venant equations, but obtaining sufficient channel bathymetric data and hydraulic roughness is seen as a critical challenge. However, recent work has shown the Height Above Nearest Drainage (HAND) method can be applied with the National Elevation Dataset (NED) to provide automated estimation of effective channel bathymetry suitable for large-scale hydraulic simulations. The present work examines the use of SPRNT with the National Hydrography Dataset (NHD) and HAND-derived bathymetry for automated generation of rating curves that can be compared to existing data. The approach can, in theory, be applied to every stream reach in the NHD and thus provide flood guidance where none is available. To test this idea we generated 2000+ rating curves in two catchments in Texas and Alabama (USA). Field data from the USGS and flood records from an Austin, Texas flood in May 2015 were used as validation. Large-scale implementation of this idea requires addressing several critical difficulties associated with numerical instabilities, including ill-posed boundary conditions generated in automated model linkages and inconsistencies in the river geometry. A key to future progress is identifying efficient approaches to isolate numerical instability contributors in a large time-space varying solution. This research was supported in part by the National Science Foundation under grant number CCF-1331610.
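A rating curve of the kind described can be sketched with Manning's equation applied to a HAND-style stage-to-geometry table. The channel widths, roughness n and slope below are hypothetical, and SPRNT's actual formulation is the full Saint-Venant system, not this steady uniform-flow shortcut.

```python
def manning_discharge(stage, widths, n=0.035, slope=0.001):
    """Rating-curve point from Manning's equation, Q = (1/n) A R^(2/3) S^(1/2).
    widths[i] is the channel top width (m) at i metres above the nearest
    drainage, as would be derived from a HAND raster (hypothetical data)."""
    area = 0.0
    perimeter = widths[0]               # channel bottom
    for i in range(1, stage + 1):
        area += 0.5 * (widths[i - 1] + widths[i])   # trapezoidal 1 m slices
        dw = (widths[i] - widths[i - 1]) / 2.0
        perimeter += 2.0 * (1.0 + dw ** 2) ** 0.5   # wetted length of both banks
    radius = area / perimeter           # hydraulic radius R = A / P
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * slope ** 0.5

widths = [10.0, 12.0, 14.0, 18.0, 24.0]   # m, from a HAND raster (assumed)
curve = [manning_discharge(h, widths) for h in range(1, 5)]
print([round(q, 1) for q in curve])       # discharge grows monotonically with stage
```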
Gao, Chunsheng; Xin, Pengfei; Cheng, Chaohua; Tang, Qing; Chen, Ping; Wang, Changbiao; Zang, Gonggu; Zhao, Lining
2014-01-01
Cannabis sativa L. is an important economic plant for the production of food, fiber, oils, and intoxicants. However, lack of sufficient simple sequence repeat (SSR) markers has limited the development of cannabis genetic research. Here, large-scale development of expressed sequence tag simple sequence repeat (EST-SSR) markers was performed to obtain more informative genetic markers, and to assess genetic diversity in cannabis (Cannabis sativa L.). Based on the cannabis transcriptome, 4,577 SSRs were identified from 3,624 ESTs. From there, a total of 3,442 complementary primer pairs were designed as SSR markers. Among these markers, trinucleotide repeat motifs (50.99%) were the most abundant, followed by hexanucleotide (25.13%), dinucleotide (16.34%), tetranucloetide (3.8%), and pentanucleotide (3.74%) repeat motifs, respectively. The AAG/CTT trinucleotide repeat (17.96%) was the most abundant motif detected in the SSRs. One hundred and seventeen EST-SSR markers were randomly selected to evaluate primer quality in 24 cannabis varieties. Among these 117 markers, 108 (92.31%) were successfully amplified and 87 (74.36%) were polymorphic. Forty-five polymorphic primer pairs were selected to evaluate genetic diversity and relatedness among the 115 cannabis genotypes. The results showed that 115 varieties could be divided into 4 groups primarily based on geography: Northern China, Europe, Central China, and Southern China. Moreover, the coefficient of similarity when comparing cannabis from Northern China with the European group cannabis was higher than that when comparing with cannabis from the other two groups, owing to a similar climate. This study outlines the first large-scale development of SSR markers for cannabis. These data may serve as a foundation for the development of genetic linkage, quantitative trait loci mapping, and marker-assisted breeding of cannabis.
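SSR mining of the kind described can be sketched with a regular-expression scan over EST sequences; the motif lengths and minimum repeat counts below are assumed thresholds, not necessarily those used in the study.

```python
import re

# Minimum repeat counts per motif length (di- to hexanucleotide); these are
# assumed thresholds for illustration.
MIN_REPEATS = {2: 6, 3: 5, 4: 4, 5: 4, 6: 4}

def find_ssrs(seq):
    """Return (motif, repeat_count, start) for each SSR found in seq.
    Note: period-2 SSRs also surface as tetranucleotide matches (e.g. CT
    also appears as CTCT); real pipelines deduplicate such redundancy."""
    hits = []
    for unit, min_rep in MIN_REPEATS.items():
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, min_rep - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            if len(set(motif)) > 1:          # skip mononucleotide runs like AAAAAA
                hits.append((motif, len(m.group(0)) // unit, m.start()))
    return hits

est = "GGCC" + "AAG" * 7 + "TTACG" + "CT" * 8 + "GA"
for motif, repeats, pos in find_ssrs(est):
    print(motif, repeats, pos)
```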
NASA Astrophysics Data System (ADS)
Martini, Ivan; Ambrosetti, Elisa; Sandrelli, Fabio
2017-04-01
Aggradation, progradation and retrogradation are the main patterns that define the large-scale architecture of Gilbert-type deltas. These patterns are governed by the ratio between the variation in accommodation space and sediment supply experienced during delta growth. Sediment supply variations are difficult to estimate in ancient settings; hence, it is rarely possible to assess their significance in the large-scale stratigraphic architecture of Gilbert-type deltas. This paper presents a stratigraphic analysis of a Pliocene deltaic complex composed of two coeval and narrowly spaced deltaic branches. The two branches recorded the same tectonic- and climate-induced accommodation space variations. As a result, this deltaic complex represents a natural laboratory for testing the effects of sediment supply variations on the stratigraphic architecture of Gilbert-type deltas. The field data suggest that a sediment supply able to counteract the accommodation generated over time promotes the aggradational/progradational attitude of Gilbert-type deltas, as well as the development of thick foreset deposits. By contrast, if the sediment supply is not sufficient to counterbalance the generated accommodation, an aggradational/retrogradational stratigraphic architecture is promoted. In this case, the deltaic system is forced to withdraw during the different phases of generation of accommodation, with the subsequent flooding of previously deposited sub-horizontal topset deposits (i.e., the delta plain). The subsequent deltaic progradation occurs above these deposits and, consequently, the available space for foreset growth is limited to the water depth between the base level and the older delta plain. This leads to the vertical stacking of relatively thin deltaic deposits with an overall aggradational/retrogradational attitude.
NASA Astrophysics Data System (ADS)
de Assis, Thiago A.; Dall’Agnol, Fernando F.
2018-05-01
Numerical simulations are important when assessing the many characteristics of field emission related phenomena. In small simulation domains, the electrostatic effect from the boundaries is known to influence the calculated apex field enhancement factor (FEF) of the emitter, but no established dependence has been reported at present. In this work, we report the dependence of the apex-FEF of a single conducting ellipsoidal emitter on the lateral size, L, and the height, H, of the simulation domain. Firstly, we analyze the error, ε, in the calculation of the apex-FEF as a function of H and L. Importantly, our results show that the effects of H and L on ε are scale invariant, allowing one to predict ε for given ratios L/h and H/h, where h is the height of the emitter. Next, we analyze the fractional change of the apex-FEF, δ, between a single emitter and a pair of emitters. We show that small relative errors in the calculated apex-FEF, due to the finite domain size, are sufficient to alter the functional dependence of δ on c, where c is the distance between the emitters in the pair. We show that δ obeys a recently proposed power law decay (Forbes 2016 J. Appl. Phys. 120 054302) at sufficiently large distances in the limit of infinite domain size, which is not observed when using the long-established exponential decay (Bonard et al 2001 Adv. Mater. 13 184) or a more sophisticated fitting formula proposed recently by Harris et al (2015 AIP Adv. 5 087182). We show that the inverse-third power law functional dependence is respected for various systems, such as infinite arrays and small clusters of emitters with different shapes. Thus, a power law decay with exponent m = 3 is suggested to be a universal signature of the charge-blunting effect in small clusters or arrays, at sufficiently large distances between emitters of any shape. These results improve the physical understanding of field electron emission theory, allowing emitters in small clusters or arrays to be characterized accurately.
The ability to understand and manage ecological changes caused by anthropogenic stressors is often impeded by a lack of sufficient information to resolve pattern and change with sufficient resolution and extent. Increasingly, different types of environmental data are available t...
Investigation of Professional Self Sufficiency Levels of Physical Education and Sports Teachers
ERIC Educational Resources Information Center
Saracaoglu, Asuman Seda; Ozsaker, Murat; Varol, Rana
2012-01-01
The present research aimed to detect the professional self-sufficiency levels of physical education and sports teachers working in Izmir Province and to investigate them in terms of some variables. For data collection, the Teacher's Sense of Efficacy Scale, developed by Moran and Woolfolk-Hoy (2001), and Turkish validity and reliability studies…
A comparison of refuse attenuation in laboratory and field scale lysimeters.
Youcai, Zhao; Luochun, Wang; Renhua, Hua; Dimin, Xu; Guowei, Gu
2002-01-01
For this study, small and middle scale laboratory lysimeters, and a large scale field lysimeter in situ at Shanghai Refuse Landfill, with refuse weights of 187, 600, and 10,800,000 kg, respectively, were created. These lysimeters are compared in terms of leachate quality (pH, concentrations of COD, BOD and NH3-N), refuse composition (biodegradable matter and volatile solid) and surface settlement for a monitoring period of 0-300 days. The objectives of this study were to explore both the similarities and disparities between laboratory and field scale lysimeters, and to compare degradation behaviors of refuse at the intensive reaction phase in the different scale lysimeters. Quantitative relationships of leachate quality and refuse composition with placement time show that degradation behaviors of refuse seem to depend heavily on the scales of the lysimeters and the parameters of concern, especially in the starting period of 0-6 months. However, some similarities exist between laboratory and field lysimeters after 4-6 months of placement because COD and BOD concentrations in leachate in the field lysimeter decrease regularly in a parallel pattern with those in the laboratory lysimeters. NH3-N, volatile solid (VS) and biodegradable matter (BDM) also gradually decrease in parallel in this intensive reaction phase for all scale lysimeters as refuse ages. Although the specific values differ among the different scale lysimeters, laboratory lysimeters of sufficient scale appear basically applicable for a rough simulation of a real landfill, especially for illustrating the degradation pattern and mechanism. Settlement of the refuse surface is roughly proportional to the initial refuse height.
Menke, S.B.; Holway, D.A.; Fisher, R.N.; Jetz, W.
2009-01-01
Aim: Species distribution models (SDMs) or, more specifically, ecological niche models (ENMs) are a useful and rapidly proliferating tool in ecology and global change biology. ENMs attempt to capture associations between a species and its environment and are often used to draw biological inferences, to predict potential occurrences in unoccupied regions and to forecast future distributions under environmental change. The accuracy of ENMs, however, hinges critically on the quality of occurrence data. ENMs often use haphazardly collected data rather than data collected across the full spectrum of existing environmental conditions. Moreover, it remains unclear how processes affecting ENM predictions operate at different spatial scales. The scale (i.e. grain size) of analysis may be dictated more by the sampling regime than by biologically meaningful processes. The aim of our study is to jointly quantify how issues relating to region and scale affect ENM predictions using an economically important and ecologically damaging invasive species, the Argentine ant (Linepithema humile). Location: California, USA. Methods: We analysed the relationship between sampling sufficiency, regional differences in environmental parameter space and cell size of analysis and resampling environmental layers using two independently collected sets of presence/absence data. Differences in variable importance were determined using model averaging and logistic regression. Model accuracy was measured with area under the curve (AUC) and Cohen's kappa. Results: We first demonstrate that insufficient sampling of environmental parameter space can cause large errors in predicted distributions and biological interpretation. Models performed best when they were parametrized with data that sufficiently sampled environmental parameter space. Second, we show that altering the spatial grain of analysis changes the relative importance of different environmental variables. 
These changes apparently result from how environmental constraints and the sampling distributions of environmental variables change with spatial grain. Conclusions: These findings have clear relevance for biological inference. Taken together, our results illustrate potentially general limitations for ENMs, especially when such models are used to predict species occurrences in novel environments. We offer basic methodological and conceptual guidelines for appropriate sampling and scale matching. © 2009 The Authors. Journal compilation © 2009 Blackwell Publishing.
NASA Astrophysics Data System (ADS)
Sheffield, J.; He, X.; Wada, Y.; Burek, P.; Kahil, M.; Wood, E. F.; Oppenheimer, M.
2017-12-01
California has endured record-breaking drought since winter 2011 and will likely experience more severe and persistent droughts in the coming decades under a changing climate. At the same time, human water management practices can also affect drought frequency and intensity, which underscores the importance of human behavior in effective drought adaptation and mitigation. Currently, although a few large-scale hydrological and water resources models (e.g., PCR-GLOBWB) consider human water use and management practices (e.g., irrigation, reservoir operation, groundwater pumping), none of them includes the dynamic feedback between local human behaviors/decisions and the natural hydrological system. It is, therefore, vital to integrate social and behavioral dimensions into current hydrological modeling frameworks. This study applies the agent-based modeling (ABM) approach and couples it with a large-scale hydrological model (the Community Water Model, CWatM) in order to achieve a balanced representation of social, environmental and economic factors and a more realistic representation of the bi-directional interactions and feedbacks in coupled human and natural systems. In this study, we focus on drought management in California and consider two types of agents, (groups of) farmers and state management authorities, whose corresponding objectives are assumed to be maximizing net crop profit and maintaining sufficient water supply, respectively. Farmers' behaviors are linked with local agricultural practices such as cropping patterns and deficit irrigation. More precisely, farmers' decisions are incorporated into CWatM across different time scales, in terms of daily irrigation amounts, seasonal/annual decisions on crop types and irrigated area, as well as long-term investment in irrigation infrastructure.
This simulation-based optimization framework is further applied by performing different sets of scenarios to investigate and evaluate the effectiveness of different water management strategies and how policy interventions will facilitate drought adaptation in California.
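The two-agent coupling described above can be sketched in miniature. This is a hypothetical illustration only: the function names, the concave yield curve, the prices, and the reservoir reserve rule are invented for the sketch and are not taken from CWatM or the study's actual agent design.

```python
# Hypothetical sketch of one coupled time step between a farmer agent and a
# state-authority agent. All numbers and functional forms are illustrative.

def crop_profit(irrigation, price=200.0, water_cost=50.0, max_yield=10.0):
    """Net seasonal profit: diminishing-returns yield minus water cost."""
    yield_t = max_yield * (1.0 - (1.0 - min(irrigation, 1.0)) ** 2)
    return price * yield_t - water_cost * irrigation

def farmer_decision(allocation):
    """Farmer agent: choose irrigation in [0, allocation] maximizing profit,
    searched on a coarse grid (a stand-in for a real optimizer)."""
    candidates = [allocation * i / 100.0 for i in range(101)]
    return max(candidates, key=crop_profit)

def authority_allocation(reservoir, demand, reserve=0.3):
    """Authority agent: release water only above a reserve fraction,
    to maintain sufficient supply for later in the drought."""
    available = max(0.0, reservoir - reserve)
    return min(demand, available)

# One coupled step: the authority caps supply, the farmer optimizes within it.
alloc = authority_allocation(reservoir=0.8, demand=1.0)
print(farmer_decision(alloc))  # -> 0.5
```

In a full ABM, the chosen irrigation would feed back into the hydrological model's daily water balance, closing the bi-directional loop the abstract describes.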
The one-loop matter bispectrum in the Effective Field Theory of Large Scale Structures
Angulo, Raul E.; Foreman, Simon; Schmittfull, Marcel; ...
2015-10-14
Given the importance of future large scale structure surveys for delivering new cosmological information, it is crucial to reliably predict their observables. The Effective Field Theory of Large Scale Structures (EFTofLSS) provides a manifestly convergent perturbative scheme to compute the clustering of dark matter in the weakly nonlinear regime in an expansion in k/k_NL, where k is the wavenumber of interest and k_NL is the wavenumber associated with the nonlinear scale. It has been recently shown that the EFTofLSS matches the dark matter power spectrum at redshift zero to the 1% level up to k ≃ 0.3 h Mpc⁻¹ and k ≃ 0.6 h Mpc⁻¹ at one and two loops, respectively, using only one counterterm that is fit to data. Similar results have been obtained for the momentum power spectrum at one loop. This is a remarkable improvement with respect to former analytical techniques. Here we study the prediction for the equal-time dark matter bispectrum at one loop. We find that at this order it is sufficient to consider the same counterterm that was measured in the power spectrum. Without any remaining free parameter, and in a cosmology for which k_NL is smaller than in the previously considered cases (σ8 = 0.9), we find that the prediction from the EFTofLSS agrees very well with N-body simulations up to k ≃ 0.25 h Mpc⁻¹, given the accuracy of the measurements, which is of order a few percent at the highest k's of interest. While the fit is very good on average up to k ≃ 0.25 h Mpc⁻¹, it performs slightly worse on equilateral configurations, in agreement with expectations that, for a given maximum k, equilateral triangles are the most nonlinear.
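Schematically, the single-counterterm structure referred to above takes the following form at the power-spectrum level (this is the standard speed-of-sound counterterm of the EFTofLSS literature; normalization conventions for the coefficient vary between papers, so treat the prefactor as illustrative):

```latex
P_{\rm EFT}(k) \;=\; P_{11}(k) \;+\; P_{\rm 1loop}(k)
\;-\; 2\,(2\pi)\, c_{s(1)}^{2}\, \frac{k^{2}}{k_{\rm NL}^{2}}\, P_{11}(k),
\qquad k \ll k_{\rm NL},
```

where \(P_{11}\) is the linear spectrum and \(c_{s(1)}^{2}\) is the one fitted coefficient; the abstract's result is that this same coefficient, measured from the power spectrum, suffices for the one-loop bispectrum with no additional free parameter.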
Running Out of Time: Why Elephants Don't Gallop
NASA Astrophysics Data System (ADS)
Noble, Julian V.
2001-11-01
The physics of high speed running implies that galloping becomes impossible for sufficiently large animals. Some authors have suggested that this is because the strength/weight ratio decreases with size, eventually rendering large animals excessively liable to injury when they attempt to gallop. This paper suggests instead that large animals cannot move their limbs sufficiently rapidly to take advantage of leaving the ground, and hence are restricted to walking gaits. From this point of view, the relatively low strength/weight ratio of elephants follows from their inability to gallop, rather than causing it.
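The scaling argument can be made concrete with the dimensionless Froude number commonly used in gait analysis. The sketch below is not from the paper; the leg lengths are illustrative, and the gait-transition thresholds are the approximate empirical values reported in the comparative-locomotion literature.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude(speed, leg_length):
    """Froude number v^2 / (g * L). Empirically, quadruped gait changes
    cluster near Fr ~ 1 (walk -> trot) and Fr ~ 2-3 (trot -> gallop)."""
    return speed ** 2 / (G * leg_length)

def speed_for_froude(fr, leg_length):
    """Speed needed to reach a given Froude number with leg length L:
    v = sqrt(Fr * g * L). Required speed grows as sqrt(L), while
    pendulum-like limb swing time also grows as sqrt(L), so longer,
    heavier limbs cannot be cycled fast enough to sustain a gallop."""
    return math.sqrt(fr * G * leg_length)

# Illustrative leg lengths: dog-sized (0.5 m) vs. elephant-sized (2.0 m)
print(speed_for_froude(2.5, 0.5))  # ~3.5 m/s
print(speed_for_froude(2.5, 2.0))  # ~7.0 m/s
```

The point of the comparison: the gallop-threshold speed doubles from the short leg to the long one, but the limb-cycling rate available to drive it falls with size, which is the mismatch the abstract invokes.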
Precise stellar surface gravities from the time scales of convectively driven brightness variations
Kallinger, Thomas; Hekker, Saskia; García, Rafael A.; Huber, Daniel; Matthews, Jaymie M.
2016-01-01
A significant part of the intrinsic brightness variations in cool stars of low and intermediate mass arises from surface convection (seen as granulation) and acoustic oscillations (p-mode pulsations). The characteristics of these phenomena are largely determined by the stars’ surface gravity (g). Detailed photometric measurements of either signal can yield an accurate value of g. However, even with ultraprecise photometry from NASA’s Kepler mission, many stars are too faint for current methods or only moderate accuracy can be achieved in a limited range of stellar evolutionary stages. This means that many of the stars in the Kepler sample, including exoplanet hosts, are not sufficiently characterized to fully describe the sample and exoplanet properties. We present a novel way to measure surface gravities with accuracies of about 4%. Our technique exploits the tight relation between g and the characteristic time scale of the combined granulation and p-mode oscillation signal. It is applicable to all stars with a convective envelope, including active stars. It can measure g in stars for which no other analysis is now possible. Because it depends on the time scale (and no other properties) of the signal, our technique is largely independent of the type of measurement (for example, photometry or radial velocity measurements) and the calibration of the instrumentation used. However, the oscillation signal must be temporally resolved; thus, it cannot be applied to dwarf stars observed by Kepler in its long-cadence mode. PMID:26767193
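The timescale-to-gravity link can be illustrated with the standard asteroseismic scaling relations. This sketch is not the paper's calibrated method: it assumes the characteristic timescale scales as 1/ν_max and uses the textbook ν_max ∝ g/√Teff relation with solar reference values; the unit timescale normalization is purely illustrative.

```python
import math

# Assumed solar reference values (cgs for log g, muHz for nu_max)
NU_MAX_SUN = 3090.0
TEFF_SUN = 5777.0
LOG_G_SUN = 4.438

def logg_from_numax(nu_max, teff):
    """log g from the seismic scaling relation nu_max ∝ g / sqrt(Teff)."""
    g_ratio = (nu_max / NU_MAX_SUN) * math.sqrt(teff / TEFF_SUN)
    return LOG_G_SUN + math.log10(g_ratio)

def logg_from_timescale(tau, tau_sun, teff):
    """If the measured granulation/oscillation timescale tau scales as
    1/nu_max, a timescale ratio to the Sun yields nu_max and hence log g."""
    nu_max = NU_MAX_SUN * tau_sun / tau
    return logg_from_numax(nu_max, teff)

# Sanity check: solar inputs recover the solar surface gravity.
print(logg_from_timescale(1.0, 1.0, TEFF_SUN))  # -> 4.438
```

A star with a timescale ten times the Sun's (at similar Teff) would come out roughly 1 dex lower in log g, consistent with the tight τ-g relation the abstract exploits.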
A quasi-static approach to structure formation in black hole universes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durk, Jessie; Clifton, Timothy, E-mail: j.durk@qmul.ac.uk, E-mail: t.clifton@qmul.ac.uk
Motivated by the existence of hierarchies of structure in the Universe, we present four new families of exact initial data for inhomogeneous cosmological models at their maximum of expansion. These data generalise existing black hole lattice models to situations that contain clusters of masses, and hence allow the consequences of cosmological structures to be considered in a well-defined and non-perturbative fashion. The degree of clustering is controlled by a parameter λ, in such a way that for λ ∼ 0 or 1 we have very tightly clustered masses, whilst for λ ∼ 0.5 all masses are separated by cosmological distance scales. We study the consequences of structure formation on the total net mass in each of our clusters, as well as calculating the cosmological consequences of the interaction energies both within and between clusters. The locations of the shared horizons that appear around groups of black holes, when they are brought sufficiently close together, are also identified and studied. We find that clustering can have surprisingly large effects on the scale of the cosmology, with models that contain thousands of black holes sometimes being as little as 30% of the size of comparable Friedmann models with the same total proper mass. This deficit is comparable to what might be expected to occur from neglecting gravitational interaction energies in Friedmann cosmology, and suggests that these quantities may have a significant influence on the properties of the large-scale cosmology.
Schutyser, M A I; Briels, W J; Boom, R M; Rinzema, A
2004-05-20
The development of mathematical models facilitates industrial (large-scale) application of solid-state fermentation (SSF). In this study, a two-phase model of a drum fermentor is developed that consists of a discrete particle model (solid phase) and a continuum model (gas phase). The continuum model describes the distribution of air in the bed injected via an aeration pipe. The discrete particle model describes the solid phase. In previous work, mixing during SSF was predicted with the discrete particle model, although mixing simulations were not carried out in the current work. Heat and mass transfer between the two phases and biomass growth were implemented in the two-phase model. Validation experiments were conducted in a 28-dm3 drum fermentor. In this fermentor, sufficient aeration was provided to control the temperatures near the optimum value for growth during the first 45-50 hours. Several simulations were also conducted for different fermentor scales. Forced aeration via a single pipe in the drum fermentors did not provide homogeneous cooling in the substrate bed. Due to large temperature gradients, biomass yield decreased severely with increasing size of the fermentor. Improvement of air distribution would be required to avoid the need for frequent mixing events, during which growth is hampered. From these results, it was concluded that the two-phase model developed is a powerful tool to investigate design and scale-up of aerated (mixed) SSF fermentors. Copyright 2004 Wiley Periodicals, Inc.
Park, Y; Subramanian, K; Verfaillie, C M; Hu, W S
2010-10-01
Many potential applications of stem cells require large quantities of cells, especially those involving large organs such as the liver. For such applications, a scalable reactor system is desirable to ensure a reliable supply of sufficient quantities of differentiation-competent or differentiated cells. We employed a microcarrier culture system for the expansion of undifferentiated rat multipotent adult progenitor cells (rMAPC) as well as for directed differentiation of these cells to hepatocyte-like cells. During the 4-day expansion culture, cell concentration increased 85-fold while expression levels of pluripotency markers were maintained, as was the MAPC differentiation potential. Directed differentiation into hepatocyte-like cells on the microcarriers themselves gave results comparable to those observed with cells in static cultures. The cells expressed several mature hepatocyte-lineage genes and asialoglycoprotein receptor-1 (ASGPR-1) surface protein, and secreted albumin and urea. Microcarrier culture thus offers the potential of large-scale expansion and differentiation of stem cells in a more controlled bioreactor environment. Copyright © 2010 Elsevier B.V. All rights reserved.
Wetzler, Nadav; Lay, Thorne; Brodsky, Emily E.; Kanamori, Hiroo
2018-01-01
Fault slip during plate boundary earthquakes releases a portion of the shear stress accumulated due to frictional resistance to relative plate motions. Investigation of 101 large [moment magnitude (Mw) ≥ 7] subduction zone plate boundary mainshocks with consistently determined coseismic slip distributions establishes that 15 to 55% of all master event–relocated aftershocks with Mw ≥ 5.2 are located within the slip regions of the mainshock ruptures and few are located in peak slip regions, allowing for uncertainty in the slip models. For the preferred models, cumulative deficiency of aftershocks within the central three-quarters of the scaled slip regions ranges from 15 to 45%, increasing with the total number of observed aftershocks. The spatial gradients of the mainshock coseismic slip concentrate residual shear stress near the slip zone margins and increase stress outside the slip zone, driving both interplate and intraplate aftershock occurrence near the periphery of the mainshock slip. The shear stress reduction in large-slip regions during the mainshock is generally sufficient to preclude further significant rupture during the aftershock sequence, consistent with large-slip areas relocking and not rupturing again for a substantial time. PMID:29487902
Commissioning and initial experience with the ALICE on-line
NASA Astrophysics Data System (ADS)
Altini, V.; Anticic, T.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Dénes, E.; Divià, R.; Fuchs, U.; Kiss, T.; Makhlyueva, I.; Roukoutakis, F.; Schossmaier, K.; Soós, C.; Vande Vyvre, P.; von Haller, B.; ALICE Collaboration
2010-04-01
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A large bandwidth and flexible Data Acquisition System (DAQ) has been designed and deployed to collect sufficient statistics in the short running time available per year for heavy ions and to accommodate very different requirements originating from the 18 sub-detectors. This paper will present the large scale tests conducted to assess the standalone DAQ performances, the interfaces with the other online systems and the extensive commissioning performed in order to be fully prepared for physics data taking. It will review the experience accumulated since May 2007 during the standalone commissioning of the main detectors and the global cosmic runs, and the lessons learned from this exposure on the "battle field". It will also discuss the test protocol followed to integrate and validate each sub-detector with the online systems, and it will conclude with the first results of the LHC injection tests and startup in September 2008. Several papers at the same conference present in more detail some elements of the ALICE DAQ system.
Collaborative visual analytics of radio surveys in the Big Data era
NASA Astrophysics Data System (ADS)
Vohl, Dany; Fluke, Christopher J.; Hassan, Amr H.; Barnes, David G.; Kilborn, Virginia A.
2017-06-01
Radio survey datasets comprise an increasing number of individual observations stored as sets of multidimensional data. In large survey projects, astronomers commonly face limitations regarding: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. To support collaborative data inquiry, we present encube, a large-scale comparative visual analytics framework. encube can utilise advanced visualization environments such as the CAVE2 (a hybrid 2D and 3D virtual reality environment powered with a 100 Tflop/s GPU-based supercomputer and 84 million pixels) for collaborative analysis of large subsets of data from radio surveys. It can also run on standard desktops, providing a capable visual analytics experience across the display ecology. encube is composed of four primary units enabling compute-intensive processing, advanced visualisation, dynamic interaction, parallel data query, along with data management. Its modularity will make it simple to incorporate astronomical analysis packages and Virtual Observatory capabilities developed within our community. We discuss how encube builds a bridge between high-end display systems (such as CAVE2) and the classical desktop, preserving all traces of the work completed on either platform - allowing the research process to continue wherever you are.
Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations
Fierce, Laura; McGraw, Robert L.
2017-07-26
Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
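The paper's framework solves for quadrature representations by linear programming with an entropy-inspired cost; as a much simpler illustration of the underlying idea (recovering quadrature nodes and weights from moment constraints), here is a classical two-node moment-inversion sketch. It is not the paper's method, and the moment values in the example are fabricated for demonstration.

```python
import math

def two_node_quadrature(m0, m1, m2, m3):
    """Recover a 2-node quadrature {(x1, w1), (x2, w2)} matching the first
    four moments. Nodes satisfying x^2 = p*x - q give the moment recurrence
    m_{k+2} = p*m_{k+1} - q*m_k, which fixes p and q from m0..m3."""
    mu1, mu2, mu3 = m1 / m0, m2 / m0, m3 / m0
    denom = mu2 - mu1 ** 2           # central variance; > 0 for 2 real nodes
    p = (mu3 - mu2 * mu1) / denom
    q = (mu3 * mu1 - mu2 ** 2) / denom
    disc = math.sqrt(p * p - 4.0 * q)
    x1, x2 = (p - disc) / 2.0, (p + disc) / 2.0
    w1 = (m1 - m0 * x2) / (x1 - x2)  # from w1 + w2 = m0, w1*x1 + w2*x2 = m1
    return [(x1, w1), (x2, m0 - w1)]

# Moments of the two-point distribution {(x=1, w=0.5), (x=3, w=0.5)}:
print(two_node_quadrature(1.0, 2.0, 5.0, 14.0))  # -> [(1.0, 0.5), (3.0, 0.5)]
```

The LP formulation in the paper generalizes this: with more moments than unknowns, one searches the feasible set of nonnegative weights for the entropy-optimal representation rather than solving a square system.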
Language, culture, and task shifting--an emerging challenge for global mental health.
Swartz, Leslie; Kilian, Sanja; Twesigye, Justus; Attah, Dzifa; Chiliza, Bonginkosi
2014-01-01
Language is at the heart of mental health care. Many high-income countries have sophisticated interpreter services, but in low- and middle-income countries there are not sufficient professional services, let alone interpreter services, and task shifting is used. In this article, we discuss this neglected issue in the context of low- and middle-income countries, where task shifting has been suggested as a solution to the problem of scarce mental health resources. The large diversity of languages in low- and middle-income countries, exacerbated by wide-scale migration, has implications for the scale-up of services. We suggest that it would be useful for those who are working innovatively to develop locally delivered mental health programmes in low- and middle-income countries to explore and report on issues of language and how these have been addressed. We need to know more about local challenges, but also about local solutions which seem to work, and for this we need more information from the field than is currently available.
Optical properties of electrohydrodynamic convection patterns: rigorous and approximate methods.
Bohley, Christian; Heuer, Jana; Stannarius, Ralf
2005-12-01
We analyze the optical behavior of two-dimensionally periodic structures that occur in electrohydrodynamic convection (EHC) patterns in nematic sandwich cells. These structures are anisotropic, locally uniaxial, and periodic on the scale of micrometers. For the first time, the optics of these structures is investigated with a rigorous method. The method used for the description of the electromagnetic waves interacting with EHC director patterns is a numerical approach that discretizes directly the Maxwell equations. It works as a space-grid-time-domain method and computes electric and magnetic fields in time steps. This so-called finite-difference-time-domain (FDTD) method is able to generate the fields with arbitrary accuracy. We compare this rigorous method with earlier attempts based on ray-tracing and analytical approximations. Results of optical studies of EHC structures made earlier based on ray-tracing methods are confirmed for thin cells, when the spatial periods of the pattern are sufficiently large. For the treatment of small-scale convection structures, the FDTD method is without alternatives.
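The space-grid-time-domain idea behind FDTD can be shown in its simplest form. The sketch below is a minimal 1D vacuum Yee-style leapfrog (normalized units with Courant number 1 and a soft Gaussian source); it is only an illustration of the update pattern, not the anisotropic nematic-cell solver the study actually uses.

```python
# Minimal 1D FDTD sketch: E and H live on staggered grids and are
# advanced in alternating half-steps (normalized units, c = dx/dt = 1).
import math

N = 200        # spatial grid cells
STEPS = 60     # time steps
SRC = 100      # soft-source location (grid center)

ez = [0.0] * N  # electric field at integer grid points
hy = [0.0] * N  # magnetic field at half-integer grid points

for t in range(STEPS):
    for i in range(N - 1):           # update H from the spatial change of E
        hy[i] += ez[i + 1] - ez[i]
    for i in range(1, N):            # update E from the spatial change of H
        ez[i] += hy[i] - hy[i - 1]
    ez[SRC] += math.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source

# After the source turns off, two half-amplitude pulses travel outward,
# one grid cell per time step; the field at the source returns toward zero.
print(max(abs(e) for e in ez))
```

A rigorous EHC solver follows the same leapfrog structure but in 2D/3D with a position-dependent anisotropic permittivity tensor set by the director field, which is what makes arbitrary accuracy achievable where ray tracing breaks down.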
Local and regional factors affecting atmospheric mercury speciation at a remote location
Manolopoulos, H.; Schauer, J.J.; Purcell, M.D.; Rudolph, T.M.; Olson, M.L.; Rodger, B.; Krabbenhoft, D.P.
2007-01-01
Atmospheric concentrations of elemental (Hg0), reactive gaseous (RGM), and particulate (PHg) mercury were measured at two remote sites in the midwestern United States. Concurrent measurements of Hg0, PHg, and RGM obtained at Devil's Lake and Mt. Horeb, located approximately 65 km apart, showed that Hg0 and PHg concentrations were affected by regional as well as local sources, while RGM was mainly impacted by local sources. Plumes reaching the Devil's Lake site from a nearby coal-fired power plant significantly impacted SO2 and RGM concentrations at Devil's Lake, but had little impact on Hg0. Our findings suggest that traditional modeling approaches for assessing sources of deposited mercury, which rely on source emissions and large-scale grids, may not be sufficient to predict mercury deposition at sensitive locations, owing to the importance of small-scale sources and processes. We suggest the use of receptor-based monitoring to better understand mercury source-receptor relationships. © 2007 NRC Canada.
Liu, Yijin; Meirer, Florian; Krest, Courtney M.; ...
2016-08-30
To understand how hierarchically structured functional materials operate, analytical tools are needed that can reveal small structural and chemical details in large sample volumes. Often, a single method alone is not sufficient to get a complete picture of processes happening at multiple length scales. Here we present a correlative approach combining three-dimensional X-ray imaging techniques at different length scales for the analysis of metal poisoning of an individual catalyst particle. The correlative nature of the data allowed establishing a macro-pore network model that interprets metal accumulations as a resistance to mass transport and can, by tuning the effect of metal deposition, simulate the response of the network to a virtual ageing of the catalyst particle. In conclusion, the developed approach is generally applicable and provides an unprecedented view on dynamic changes in a material’s pore space, which is an essential factor in the rational design of functional porous materials.
NASA Technical Reports Server (NTRS)
Bernstein, W.
1981-01-01
The possible use of Chamber A for the replication or simulation of space plasma physics processes which occur in the geosynchronous Earth orbit (GEO) environment is considered. It is shown that replication is not possible and that scaling of the environmental conditions is required for study of the important instability processes. Rules for such experimental scaling are given. At the present time, it does not appear technologically feasible to satisfy these requirements in Chamber A. It is, however, possible to study and qualitatively evaluate the problem of vehicle charging at GEO. In particular, Chamber A is sufficiently large that a complete operational spacecraft could be irradiated by beams and charged to high potentials. Such testing would contribute to the assessment of the operational malfunctions expected at GEO and their possible correction. However, because of the many tabulated limitations in such a testing program, its direct relevance to conditions expected in the GEO environment remains questionable.
Pushing CHARA to its Limit: A Pathway Toward 80X80 Pixel Images of Stellar Surfaces
NASA Astrophysics Data System (ADS)
Norris, Ryan
2018-04-01
Imagine a future with 80x80 pixel images of stellar surfaces. With a maximum baseline of 330 m, the CHARA Array is already capable of achieving 0.5 mas resolution, sufficient for imaging the red supergiant Betelgeuse (d = 42.3 mas) at such a scale. However, several issues have hampered attempts to image the largest stars at CHARA, including a lack of baselines shorter than 34 m and instrument sensitivities unable to measure the faintest fringes. Here we discuss what is needed to achieve imaging of large stars at CHARA. We will present suggestions for future telescope placement, describing the advantages of a short baseline, while also considering the needs of other imaging targets that might benefit from additional baselines. We will also present developments in image reconstruction methods that can improve the resolution of images today, albeit of smaller targets and at a lesser scale. Of course, there will be example images, created using simulated OIFITS data and state-of-the-art reconstruction techniques!
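The headline numbers here follow from basic interferometry arithmetic: the finest fringe spacing of a two-element interferometer is λ/(2B) for wavelength λ and baseline B. A quick check of the quoted figures, assuming H-band observations (λ ≈ 1.65 µm, a plausible choice for CHARA's near-infrared beam combiners, not a value stated in the abstract):

```python
import math

# Milliarcseconds per radian
MAS_PER_RAD = 180 / math.pi * 3600 * 1000

wavelength = 1.65e-6   # m, H band (assumed)
baseline = 330.0       # m, maximum CHARA baseline

# Finest resolvable angular scale of a two-telescope interferometer: lambda / (2B)
resolution_mas = wavelength / (2 * baseline) * MAS_PER_RAD

# Resolution elements across Betelgeuse (angular diameter 42.3 mas)
pixels_across = 42.3 / resolution_mas

print(f"resolution ~ {resolution_mas:.2f} mas")        # ~0.5 mas
print(f"pixels across Betelgeuse ~ {pixels_across:.0f}")  # ~80
```

At this assumed wavelength the array resolves roughly 0.5 mas, giving on the order of 80 resolution elements across the stellar disk, consistent with the 80x80 pixel goal.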
Chaotic nature of the spin-glass phase
NASA Technical Reports Server (NTRS)
Bray, A. J.; Moore, M. A.
1987-01-01
The microscopic structure of the ordered phase of spin glasses is investigated theoretically in the framework of the T = 0 fixed-point model (McMillan, 1984; Fisher and Huse, 1986; and Bray and Moore, 1986). The sensitivity of the ground state to changes in the interaction strengths at T = 0 is explored, and it is found that for sufficiently large length scales the ground state is unstable against arbitrarily weak perturbations to the bonds. Explicit results are derived for d = 1, and the implications for d = 2 and d = 3 are considered in detail. It is concluded that there is no hidden order pattern for spin glasses at any T < T_C, the ordered-phase spin correlations being chaotic functions of spin separation at fixed temperature, or of temperature (for a given pair of spins), at scale lengths L > (T δT)^(-1/ζ), where ζ = d_s/2 - y, d_s is the interfacial fractal dimension, and -y is the thermal eigenvalue at T = 0.
Wang, Dongxing; Zhu, Wenqi; Best, Michael D.; Camden, Jon P.; Crozier, Kenneth B.
2013-01-01
The ability to detect molecules at low concentrations is highly desired for applications that range from basic science to healthcare. Considerable interest also exists for ultrathin materials with high optical absorption, e.g. for microbolometers and thermal emitters. Metal nanostructures present opportunities to achieve both purposes. Metal nanoparticles can generate gigantic field enhancements, sufficient for the Raman spectroscopy of single molecules. Thin layers containing metal nanostructures (“metasurfaces”) can achieve near-total power absorption at visible and near-infrared wavelengths. Thus far, however, both aims (i.e. single molecule Raman and total power absorption) have only been achieved using metal nanostructures produced by techniques (high resolution lithography or colloidal synthesis) that are complex and/or difficult to implement over large areas. Here, we demonstrate a metasurface that achieves the near-perfect absorption of visible-wavelength light and enables the Raman spectroscopy of single molecules. Our metasurface is fabricated using thin film depositions, and is of unprecedented (wafer-scale) extent. PMID:24091825
Large-scale production and properties of human plasma-derived activated Factor VII concentrate.
Tomokiyo, K; Yano, H; Imamura, M; Nakano, Y; Nakagaki, T; Ogata, Y; Terano, T; Miyamoto, S; Funatsu, A
2003-01-01
An activated Factor VII (FVIIa) concentrate, prepared from human plasma on a large scale, has to date not been available for clinical use for haemophiliacs with antibodies against FVIII and FIX. In the present study, we attempted to establish a large-scale manufacturing process to obtain plasma-derived FVIIa concentrate with high recovery and safety, and to characterize its biochemical and biological properties. FVII was purified from human cryoprecipitate-poor plasma, by a combination of anion exchange and immunoaffinity chromatography, using Ca2+-dependent anti-FVII monoclonal antibody. To activate FVII, a FVII preparation that was nanofiltered using a Bemberg Microporous Membrane-15 nm was partially converted to FVIIa by autoactivation on an anion-exchange resin. The residual FVII in the FVII and FVIIa mixture was completely activated by further incubating the mixture in the presence of Ca2+ for 18 h at 10 degrees C, without any additional activators. For preparation of the FVIIa concentrate, after dialysis of FVIIa against 20 mM citrate, pH 6.9, containing 13 mM glycine and 240 mM NaCl, the FVIIa preparation was supplemented with 2.5% human albumin (which was first pasteurized at 60 degrees C for 10 h) and lyophilized in vials. To inactivate viruses contaminating the FVIIa concentrate, the lyophilized product was further heated at 65 degrees C for 96 h in a water bath. Total recovery of FVII from 15 000 L of plasma was approximately 40%, and the FVII preparation was fully converted to FVIIa with trace amounts of degraded products (FVIIabeta and FVIIagamma). The specific activity of the FVIIa was approximately 40 U/µg. Furthermore, virus-spiking tests demonstrated that immunoaffinity chromatography, nanofiltration and dry-heating effectively removed and inactivated the spiked viruses in the FVIIa. These results indicated that the FVIIa concentrate had both high specific activity and safety.
We established a large-scale manufacturing process of human plasma-derived FVIIa concentrate with a high yield, making it possible to provide sufficient FVIIa concentrate for use in haemophiliacs with inhibitory antibodies.
NASA Astrophysics Data System (ADS)
Konrad, C. P.; Olden, J.
2013-12-01
Dams impose a host of impacts on freshwater and estuary ecosystems. In recent decades, dam releases for ecological outcomes have been increasingly implemented to mitigate these impacts and are gaining global scope. Many are designed and conducted using an experimental framework. A recent review of large-scale flow experiments (FEs) evaluates their effectiveness and identifies ways to enhance their scientific and management value. At least 113 large-scale flow experiments affecting 98 river systems globally have been documented over the last 50 years. These experiments span a range of flow manipulations from single pulse events to comprehensive changes in flow regime across all seasons and different water year types. Clear articulation of experimental objectives, while not universally practiced, was crucial for achieving management outcomes and changing dam operating policies. We found a strong disparity between the recognized ecological importance of multi-faceted flow regimes and the discrete flow events that characterized 80% of FEs. Over three quarters of FEs documented both abiotic and biotic outcomes, but only one third examined multiple trophic groups, thus limiting how this information informs future dam management. Large-scale flow experiments represent a unique opportunity for integrated biophysical investigations for advancing ecosystem science. Nonetheless, they must remain responsive to site-specific issues regarding water management, evolving societal values and changing environmental conditions and, in particular, can characterize the incremental benefits from and necessary conditions for changing dam operations to improve ecological outcomes. This type of information is essential for understanding the full context of value-based trade-offs in benefits and costs from different dam operations that can serve as an empirical basis for societal decisions regarding water and ecosystem management.
FEs may be the best approach available to managers for resolving critical uncertainties that impede decision making in adaptive settings, for example, when we lack sufficient understanding to model biophysical responses to alternative operations. Integrated long-term monitoring of biotic and abiotic responses and defining clear management-based objectives highlight ways for improving the efficiency and value of FEs.
Results of Large-Scale Spacecraft Flammability Tests
NASA Technical Reports Server (NTRS)
Ferkul, Paul; Olson, Sandra; Urban, David L.; Ruff, Gary A.; Easton, John; T'ien, James S.; Liao, Ta-Ting T.; Fernandez-Pello, A. Carlos; Torero, Jose L.; Eigenbrand, Christian;
2017-01-01
For the first time, a large-scale fire was intentionally set inside a spacecraft while in orbit. Testing in low gravity aboard spacecraft had been limited to samples of modest size: for thin fuels the longest samples burned were around 15 cm in length and thick fuel samples have been even smaller. This is despite the fact that fire is a catastrophic hazard for spaceflight, and the spread and growth of a fire, combined with its interactions with the vehicle, cannot be expected to scale linearly. While every type of occupied structure on earth has been the subject of full-scale fire testing, this had never been attempted in space owing to the complexity, cost, risk and absence of a safe location. Thus, there is a gap in knowledge of fire behavior in spacecraft. The recent utilization of large, unmanned, resupply craft has provided the needed capability: a habitable but unoccupied spacecraft in low earth orbit. One such vehicle was used to study the flame spread over a 94 x 40.6 cm thin charring solid (fiberglass-cotton fabric). The sample was an order of magnitude larger than anything studied to date in microgravity and was of sufficient scale that it consumed 1.5% of the available oxygen. The experiment, called Saffire, consisted of two tests: forward or concurrent flame spread (with the direction of flow) and opposed flame spread (against the direction of flow). The average forced air speed was 20 cm/s. For the concurrent flame spread test, the flame size remained constrained after the ignition transient, which is not the case in 1-g. These results were qualitatively different from those on earth, where an upward-spreading flame on a sample of this size accelerates and grows. In addition, a curious effect of the chamber size is noted. Compared to previous microgravity work in smaller tunnels, the flame in the larger tunnel spread more slowly, even for a wider sample.
This is attributed to the effect of flow acceleration in the smaller tunnels as a result of hot gas expansion. These results clearly demonstrate the unique features of purely forced flow in microgravity on flame spread, the dependence of flame behavior on the scale of the experiment, and the importance of full-scale testing for spacecraft fire safety.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boxx, I.; Stoehr, M.; Meier, W.
This paper presents observations and analysis of the time-dependent behavior of a 10 kW partially pre-mixed, swirl-stabilized methane-air flame exhibiting self-excited thermo-acoustic oscillations. This analysis is based on a series of measurements wherein particle image velocimetry (PIV) and planar laser-induced fluorescence (PLIF) of the OH radical were performed simultaneously at 5 kHz repetition rate over durations of 0.8 s. Chemiluminescence imaging of the OH* radical was performed separately, also at 5 kHz over 0.8 s acquisition runs. These measurements were of sufficient sampling frequency and duration to extract usable spatial and temporal frequency information on the medium- to large-scale flow-field and heat-release characteristics of the flame. This analysis is used to more fully characterize the interaction between the self-excited thermo-acoustic oscillations and the dominant flow-field structure of this flame, a precessing vortex core (PVC) present in the inner recirculation zone. Interpretation of individual measurement sequences yielded insight into various physical phenomena and the underlying mechanisms driving flame dynamics. It is observed for this flame that the location of the reaction zone tracks large-scale fluctuations in axial velocity and also conforms to the passage of large-scale vortical structures through the flow-field. Local extinction of the reaction zone in regions of persistently high principal compressive strain is observed. Such extinctions, however, are seen to be self-healing and thus do not induce blowout. Indications of auto-ignition in regions of unburned gas near the exit are also observed. Probable auto-ignition events are frequently observed coincident with the centers of large-scale vortical structures, suggesting the phenomenon is linked to the enhanced mixing and longer residence times associated with fluid at the core of the PVC as it moves through the flame.
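The claim that 5 kHz over 0.8 s gives "sufficient sampling frequency and duration" can be made concrete with plain Nyquist/DFT arithmetic (this is a generic check, not code from the study):

```python
fs = 5000.0      # Hz, repetition rate of the simultaneous PIV/OH-PLIF system
duration = 0.8   # s, length of one acquisition run

n_samples = int(fs * duration)   # frames captured per run
nyquist = fs / 2                 # highest resolvable frequency, Hz
freq_resolution = 1 / duration   # spacing of DFT frequency bins, Hz

print(n_samples, nyquist, freq_resolution)  # 4000 2500.0 1.25
```

Each run therefore yields 4000 frames, resolving oscillations up to 2.5 kHz with 1.25 Hz spectral bins, comfortably bracketing typical thermo-acoustic and PVC frequencies of a few hundred Hz.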
Louys, Julien; Corlett, Richard T; Price, Gilbert J; Hawkins, Stuart; Piper, Philip J
2014-01-01
Alarm over the prospects for survival of species in a rapidly changing world has encouraged discussion of translocation conservation strategies that move beyond the focus of ‘at-risk’ species. These approaches consider larger spatial and temporal scales than customary, with the aim of recreating functioning ecosystems through a combination of large-scale ecological restoration and species introductions. The term ‘rewilding’ has come to apply to this large-scale ecosystem restoration program. While reintroductions of species within their historical ranges have become standard conservation tools, introductions within known paleontological ranges—but outside historical ranges—are more controversial, as is the use of taxon substitutions for extinct species. Here, we consider possible conservation translocations for nine large-bodied taxa in tropical Asia-Pacific. We consider the entire spectrum of conservation translocation strategies as defined by the IUCN in addition to rewilding. The taxa considered are spread across diverse taxonomic and ecological spectra and all are listed as ‘endangered’ or ‘critically endangered’ by the IUCN in our region of study. They all have a written and fossil record that is sufficient to assess past changes in range, as well as ecological and environmental preferences, and the reasons for their decline, and they have all suffered massive range restrictions since the late Pleistocene. General principles, problems, and benefits of translocation strategies are reviewed as case studies. These allowed us to develop a conservation translocation matrix, with taxa scored for risk, benefit, and feasibility. Comparisons between taxa across this matrix indicated that orangutans, tapirs, Tasmanian devils, and perhaps tortoises are the most viable taxa for translocations. However, overall the case studies revealed a need for more data and research for all taxa, and their ecological and environmental needs. 
Rewilding the Asian-Pacific tropics remains a controversial conservation strategy, and would be difficult in what is largely a highly fragmented area geographically. PMID:25540698
NASA Astrophysics Data System (ADS)
Shi, X.
2015-12-01
As NSF indicated: "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and geosciences. With the exponential growth of geodata, the challenge of scalable and high-performance computing for big data analytics becomes urgent, because many research activities are constrained by software and tools that cannot complete the computation at all. Heterogeneous geodata integration and analytics obviously magnify the complexity and operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions to employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in a massively parallel computing environment to achieve the capability for scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise to achieve scalability and high performance by exploiting task and data levels of parallelism that are not supported by the conventional computing systems.
Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as demonstrated by our prior work, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.
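The data-parallel pattern described above (partition a large dataset into independent tiles, dispatch the tiles across many cores or accelerators) can be sketched in a few lines. The tile contents and the per-tile kernel below are hypothetical placeholders, not part of the UARK work:

```python
from concurrent.futures import ProcessPoolExecutor

def process_tile(tile):
    # Placeholder per-tile kernel (here, a mean over cell values); in a real
    # geocomputation workload this would be the expensive spatial operation.
    return sum(tile) / len(tile)

def parallel_map(tiles, workers=4):
    # Fan the independent tiles out across CPU cores; GPU/MIC backends follow
    # the same decomposition with a different execution engine.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_tile, tiles))

if __name__ == "__main__":
    # Ten synthetic "tiles" of 100 cells each
    tiles = [list(range(i, i + 100)) for i in range(0, 1000, 100)]
    print(parallel_map(tiles))
```

The key property that makes this decomposition scale is that tiles share no state, so throughput grows roughly with worker count until I/O or memory bandwidth dominates.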
Design of sEMG assembly to detect external anal sphincter activity: a proof of concept.
Shiraz, Arsam; Leaker, Brian; Mosse, Charles Alexander; Solomon, Eskinder; Craggs, Michael; Demosthenous, Andreas
2017-10-31
Conditional trans-rectal stimulation of the pudendal nerve could provide a viable solution to treat hyperreflexive bladder in spinal cord injury. A set threshold of the amplitude estimate of the external anal sphincter surface electromyography (sEMG) may be used as the trigger signal. The efficacy of such a device should be tested in a large-scale clinical trial. As such, a probe should remain in situ for several hours while patients attend to their daily routine; the recording electrodes should be designed to be large enough to maintain good contact while observing design constraints. The objective of this study was to arrive at a design for intra-anal sEMG recording electrodes for the subsequent clinical trials while deriving the possible recording and processing parameters. Having in mind existing solutions and based on theoretical and anatomical considerations, a set of four multi-electrode probes were designed and developed. These were tested in a healthy subject and the measured sEMG traces were recorded and appropriately processed. It was shown that while comparatively large electrodes record sEMG traces that are not sufficiently correlated with the external anal sphincter contractions, smaller electrodes may not maintain a stable electrode-tissue contact. It was shown that 3 mm wide and 1 cm long electrodes with 5 mm inter-electrode spacing, in agreement with Nyquist sampling, placed 1 cm from the orifice may intra-anally record a sEMG trace sufficiently correlated with external anal sphincter activity. The outcome of this study can be used in any biofeedback, treatment or diagnostic application where the activity of the external anal sphincter sEMG should be detected for an extended period of time.
Bhaskar, Anand; Song, Yun S
2014-01-01
The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.
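The closing proof technique, Descartes' rule of signs, bounds the number of positive real roots of a polynomial by the number of sign changes in its coefficient sequence; the paper generalizes this to Laplace transforms of piecewise-continuous functions. The classical rule itself is easy to state in code:

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zeros.

    By Descartes' rule of signs, the number of positive real roots of the
    polynomial with these coefficients (highest degree first) is at most
    this count, and differs from it by an even number.
    """
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

# p(x) = x^3 - 3x + 2 = (x - 1)^2 (x + 2): two sign changes, and indeed
# two positive roots counting multiplicity (x = 1, twice).
print(sign_changes([1, 0, -3, 2]))  # 2
```

The identifiability bounds in the abstract arise from the same mechanism: limiting the family of population size functions per piece limits the possible "sign changes", so a sufficiently large sample pins down the demography uniquely.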
Bhaskar, Anand; Song, Yun S.
2016-01-01
The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the “folded” SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes’ rule of signs for polynomials to the Laplace transform of piecewise continuous functions. PMID:28018011
NASA Astrophysics Data System (ADS)
Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.
2018-07-01
Large-scale free-piston driven expansion tubes have uniquely high total pressure capabilities which make them an important resource for development of access-to-space scramjet engine technology. However, many aspects of their operation are complex, and their test flows are fundamentally unsteady and difficult to measure. While computational fluid dynamics methods provide an important tool for quantifying these flows, these calculations become very expensive with increasing facility size and therefore have to be carefully constructed to ensure sufficient accuracy is achieved within feasible computational times. This study examines modelling strategies for a Mach 10 scramjet test condition developed for The University of Queensland's X3 facility. The present paper outlines the challenges associated with test flow reconstruction, describes the experimental set-up for the X3 experiments, and then details the development of an experimentally tuned quasi-one-dimensional CFD model of the full facility. The 1-D model, which accurately captures longitudinal wave processes, is used to calculate the transient flow history in the shock tube. This becomes the inflow to a higher-fidelity 2-D axisymmetric simulation of the downstream facility, detailed in the Part 2 companion paper, leading to a validated, fully defined nozzle exit test flow.
Theory and phenomenology of Planckian interacting massive particles as dark matter
NASA Astrophysics Data System (ADS)
Garny, Mathias; Palessandro, Andrea; Sandora, McCullen; Sloth, Martin S.
2018-02-01
Planckian Interacting Dark Matter (PIDM) is a minimal scenario of dark matter assuming only gravitational interactions with the standard model and with only one free parameter, the PIDM mass. PIDM can be successfully produced by gravitational scattering in the thermal plasma of the Standard Model sector after inflation in the PIDM mass range from TeV up to the GUT scale, if the reheating temperature is sufficiently high. The minimal assumption of a GUT scale PIDM mass can be tested in the future by measurements of the primordial tensor-to-scalar ratio. While large primordial tensor modes would be in tension with the QCD axion as dark matter in a large mass range, it would favour the PIDM as a minimal alternative to WIMPs. Here we generalise the previously studied scalar PIDM scenario to the case of fermion, vector and tensor PIDM scenarios, and show that the phenomenology is nearly identical, independent of the spin of the PIDM. We also consider the specific realisation of the PIDM as the Kaluza-Klein excitation of the graviton in orbifold compactifications of string theory, as well as in models of monodromy inflation and in Higgs inflation. Finally we discuss the possibility of indirect detection of PIDM through non-perturbative decay.
Powering up with indirect reciprocity in a large-scale field experiment.
Yoeli, Erez; Hoffman, Moshe; Rand, David G; Nowak, Martin A
2013-06-18
A defining aspect of human cooperation is the use of sophisticated indirect reciprocity. We observe others, talk about others, and act accordingly. We help those who help others, and we cooperate expecting that others will cooperate in return. Indirect reciprocity is based on reputation, which spreads by communication. A crucial aspect of indirect reciprocity is observability: reputation effects can support cooperation as long as people's actions can be observed by others. In evolutionary models of indirect reciprocity, natural selection favors cooperation when observability is sufficiently high. Complementing this theoretical work are experiments where observability promotes cooperation among small groups playing games in the laboratory. Until now, however, there has been little evidence of observability's power to promote large-scale cooperation in real-world settings. Here we provide such evidence using a field study involving 2413 subjects. We collaborated with a utility company to study participation in a program designed to prevent blackouts. We show that observability triples participation in this public goods game. The effect is over four times larger than offering a $25 monetary incentive, the company's previous policy. Furthermore, as predicted by indirect reciprocity, we provide evidence that reputational concerns are driving our observability effect. In sum, we show how indirect reciprocity can be harnessed to increase cooperation in a relevant, real-world public goods game.
NASA Astrophysics Data System (ADS)
Hamelin, Elizabeth I.; Blake, Thomas A.; Perez, Jonas W.; Crow, Brian S.; Shaner, Rebecca L.; Coleman, Rebecca M.; Johnson, Rudolph C.
2016-05-01
Public health response to large scale chemical emergencies presents logistical challenges for sample collection, transport, and analysis. Diagnostic methods used to identify and determine exposure to chemical warfare agents, toxins, and poisons traditionally involve blood collection by phlebotomists, cold transport of biomedical samples, and costly sample preparation techniques. Use of dried blood spots, which consist of dried blood on an FDA-approved substrate, can increase analyte stability, decrease infection hazard for those handling samples, greatly reduce the cost of shipping/storing samples by removing the need for refrigeration and cold chain transportation, and be self-prepared by potentially exposed individuals using a simple finger prick and blood spot compatible paper. Our laboratory has developed clinical assays to detect human exposures to nerve agents through the analysis of specific protein adducts and metabolites, for which a simple extraction from a dried blood spot is sufficient for removing matrix interferents and attaining sensitivities on par with traditional sampling methods. The use of dried blood spots can bridge the gap between the laboratory and the field allowing for large scale sample collection with minimal impact on hospital resources while maintaining sensitivity, specificity, traceability, and quality requirements for both clinical and forensic applications.
Lewison, Rebecca L.; Crowder, Larry B.; Wallace, Bryan P.; Moore, Jeffrey E.; Cox, Tara; Zydelis, Ramunas; McDonald, Sara; DiMatteo, Andrew; Dunn, Daniel C.; Kot, Connie Y.; Bjorkland, Rhema; Kelez, Shaleyla; Soykan, Candan; Stewart, Kelly R.; Sims, Michelle; Boustany, Andre; Read, Andrew J.; Halpin, Patrick; Nichols, W. J.; Safina, Carl
2014-01-01
Recent research on ocean health has found large predator abundance to be a key element of ocean condition. Fisheries can impact large predator abundance directly through targeted capture and indirectly through incidental capture of nontarget species or bycatch. However, measures of the global nature of bycatch are lacking for air-breathing megafauna. We fill this knowledge gap and present a synoptic global assessment of the distribution and intensity of bycatch of seabirds, marine mammals, and sea turtles based on empirical data from the three most commonly used types of fishing gears worldwide. We identify taxa-specific hotspots of bycatch intensity and find evidence of cumulative impacts across fishing fleets and gears. This global map of bycatch illustrates where data are particularly scarce—in coastal and small-scale fisheries and ocean regions that support developed industrial fisheries and millions of small-scale fishers—and identifies fishing areas where, given the evidence of cumulative hotspots across gear and taxa, traditional species or gear-specific bycatch management and mitigation efforts may be necessary but not sufficient. Given the global distribution of bycatch and the mitigation success achieved by some fleets, the reduction of air-breathing megafauna bycatch is both an urgent and achievable conservation priority. PMID:24639512
NASA Astrophysics Data System (ADS)
Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.
2017-11-01
Large-scale free-piston driven expansion tubes have uniquely high total pressure capabilities which make them an important resource for development of access-to-space scramjet engine technology. However, many aspects of their operation are complex, and their test flows are fundamentally unsteady and difficult to measure. While computational fluid dynamics methods provide an important tool for quantifying these flows, these calculations become very expensive with increasing facility size and therefore have to be carefully constructed to ensure sufficient accuracy is achieved within feasible computational times. This study examines modelling strategies for a Mach 10 scramjet test condition developed for The University of Queensland's X3 facility. The present paper outlines the challenges associated with test flow reconstruction, describes the experimental set-up for the X3 experiments, and then details the development of an experimentally tuned quasi-one-dimensional CFD model of the full facility. The 1-D model, which accurately captures longitudinal wave processes, is used to calculate the transient flow history in the shock tube. This becomes the inflow to a higher-fidelity 2-D axisymmetric simulation of the downstream facility, detailed in the Part 2 companion paper, leading to a validated, fully defined nozzle exit test flow.
2014-01-01
Background Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Results Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. Conclusion The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification. PMID:24418292
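The interpolation step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the `MFE_TABLE` grid (here keyed only by GC fraction, with two invented entries) stands in for the large pre-calculated composition grid.

```python
import math

# Hypothetical precomputed table: GC fraction -> (mean MFE, std) of the MFE
# distribution of randomized sequences. In the study this is a large grid over
# full nucleotide compositions; two points suffice to show the interpolation.
MFE_TABLE = {0.4: (-25.0, 4.0), 0.6: (-35.0, 5.0)}

def interpolate_params(gc):
    """Linearly interpolate (mean, std) of the random-sequence MFE distribution."""
    (g0, (m0, s0)), (g1, (m1, s1)) = sorted(MFE_TABLE.items())
    t = (gc - g0) / (g1 - g0)
    return m0 + t * (m1 - m0), s0 + t * (s1 - s0)

def mfe_pvalue(mfe, gc):
    """P(a randomized sequence folds at least this stably), normal model."""
    mu, sigma = interpolate_params(gc)
    z = (mfe - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
```

A candidate whose MFE lies well below the random-sequence mean for its composition then receives a small P-value and is retained for experimental verification.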
Potassium iodide as a thyroid blocker--Three Mile Island to today.
Halperin, J A
1989-05-01
The Three Mile Island (TMI) nuclear emergency in the U.S. in March 1979 marked the first occasion when use of potassium iodide (KI) was considered for thyroid blocking of the population in the vicinity of a potentially serious release of fission products from a nuclear power reactor. In the face of a demand that could not be satisfied by commercial supplies of low-dose KI drug products from the U.S. pharmaceutical industry, the Food and Drug Administration directed the manufacture and stockpiling of sufficient quantities of saturated solution of potassium iodide (SSKI) to provide protection for one million people in the event of a large-scale release of radioiodines. Although the drug was not used, the experience of producing, stockpiling, and making ready for use a large quantity of the drug resulted in significant public policy, regulatory, and logistical issues. A number of these issues have been resolved through scientific debate and consensus, development of official guidance regarding the proper role of KI in nuclear emergencies, and the approval of New Drug Applications for KI products specifically intended for thyroid blocking in nuclear emergencies. Other issues regarding broad-scale implementation of the guidelines remain today. This paper traces the history of the development and implementation of the use of KI from pre-TMI to the present.
Seasonal dependence of large-scale Birkeland currents
NASA Technical Reports Server (NTRS)
Fujii, R.; Iijima, T.; Potemra, T. A.; Sugiura, M.
1981-01-01
Seasonal variations of large-scale Birkeland currents are examined in a study of the source mechanisms and the closure of the three-dimensional current systems in the ionosphere. Vector magnetic field data acquired by the TRIAD satellite in the Northern Hemisphere were analyzed for the statistics of single sheet and double sheet Birkeland currents during 555 passes during the summer and 408 passes during the winter. The single sheet currents are observed more frequently in the dayside of the auroral zone, and more often in summer than in winter. The intensities of both the single and double dayside currents are found to be greater in the summer than in the winter by a factor of two, while the intensities of the double sheet Birkeland currents on the nightside do not show a significant difference from summer to winter. Both the single and double sheet currents are found at higher latitudes in the summer than in the winter on the dayside. Results suggest that the Birkeland current intensities are controlled by the ionospheric conductivity in the polar region, and that the currents close via the polar cap when the conductivity there is sufficiently high. It is also concluded that an important source of these currents must be a voltage generator in the magnetosphere.
A Materials Approach to Collective Behavior
NASA Astrophysics Data System (ADS)
Ouellette, Nicholas
Aggregations of social animals, such as flocks of birds, schools of fish, or swarms of insects, are beautiful, natural examples of self-organized behavior far from equilibrium. Understanding these systems, however, has proved to be quite challenging. Determining the rules of interaction from empirical measurements of animals is a difficult inverse problem. Thus, researchers tend to focus on the macroscopic behavior of the group instead. Because so many of these systems display large-scale ordered patterns, it has become the norm in modeling animal aggregations to focus on this order. Large-scale patterns alone, however, are not sufficient information to characterize all the dynamics of animal aggregations, and do not provide stringent enough conditions to benchmark models. Instead, I will argue that we should borrow ideas from materials characterization to describe the macroscopic state of an animal group in terms of its response to external stimuli. I will illustrate these ideas with recent experiments on mating swarms of the non-biting midge Chironomus riparius, where we have developed methods to apply controlled perturbations and measure the detailed swarm response. Our results allow us to begin to describe swarms in terms of state variables and response functions, bringing them into the purview of theories of active matter. These results also point towards new, more detailed ways of characterizing and hopefully comparing collective behavior in animal groups.
Canosa, Joel
2018-01-01
The aim of this study is the application of a software tool to the design of stripping columns to calculate the removal of trihalomethanes (THMs) from drinking water. The tool also allows calculating the rough capital cost of the column and the decrease in carcinogenic risk indices associated with the elimination of THMs and, thus, the investment to save a human life. The design of stripping columns includes the determination, among other factors, of the height (HOG), the theoretical number of plates (NOG), and the section (S) of the columns based on the study of pressure drop. These results have been compared with THM stripping literature values, showing that the simulation is sufficiently conservative. Three case studies were chosen to apply the developed software. The first case study was representative of small-scale application to a community in Córdoba (Spain) where chloroform is predominant and has a low concentration. The second case study was of an intermediate scale in a region in Venezuela, and the third case study was representative of large-scale treatment of water in the Barcelona metropolitan region (Spain). Results showed that case studies with larger scale and higher initial risk offer the best capital investment to decrease the risk. PMID:29562670
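The core of such a design calculation is the standard countercurrent packed-tower equation relating the removal ratio and the stripping factor to the number of transfer units (NOG), with the packed height given by Z = HTU x NOG. The sketch below uses that textbook equation, not necessarily the authors' exact implementation, and all numbers are hypothetical.

```python
import math

def ntu(c_in, c_out, R):
    """Number of transfer units (NOG) for a countercurrent stripping column.
    R is the stripping factor (dimensionless Henry constant times the
    air-to-water ratio). Standard design equation; assumes R != 1."""
    return (R / (R - 1.0)) * math.log(((c_in / c_out) * (R - 1.0) + 1.0) / R)

def column_height(c_in, c_out, R, htu):
    """Packed height Z = HTU * NOG, with HTU in the same length units as Z."""
    return htu * ntu(c_in, c_out, R)

# Hypothetical case: 90% chloroform removal, stripping factor 3, HTU of 0.8 m.
Z = column_height(100.0, 10.0, 3.0, 0.8)
```

Deeper removal (lower outlet concentration) at the same stripping factor requires more transfer units and hence a taller column, which is where the pressure-drop and capital-cost trade-off enters.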
NASA Astrophysics Data System (ADS)
Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John
The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive time-scales while maintaining atomic-scale resolution. However, the governing equation of the XPFC model is an integro-partial differential equation (IPDE), which poses challenges in implementation on high performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed-memory HPC solver for the XPFC model, which combines parallel multigrid and P3DFFT. Performance benchmarking on the Stampede supercomputer indicates near-linear strong and weak scaling for both the multigrid solver and the transfer time between the multigrid and FFT modules up to 1024 cores. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to achieve 4096 cores and beyond. Ongoing work involves optimization of MPI/OpenMP-based codes for the Intel KNL Many-Core Architecture. This optimizes the code for upcoming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.
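Scaling results of this kind are conventionally reported as speedup and parallel efficiency relative to a baseline core count. A minimal sketch (the timings below are invented for illustration, not the paper's measured data):

```python
def strong_scaling(times):
    """Speedup and parallel efficiency from wall times for a fixed problem size.
    times: {core_count: seconds}; the smallest core count is the baseline.
    Returns {core_count: (speedup, efficiency)}."""
    base_p = min(times)
    base_t = times[base_p]
    out = {}
    for p, t in sorted(times.items()):
        speedup = base_t / t
        out[p] = (speedup, speedup / (p / base_p))
    return out

# Illustrative (not measured) timings: near-ideal scaling up to 1024 cores.
report = strong_scaling({16: 160.0, 128: 21.0, 1024: 2.9})
```

Near-linear strong scaling corresponds to efficiencies staying close to 1.0 as the core count grows; the FFT module's decline past 128 cores would show up as a falling efficiency column.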
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, X. L.; Xue, Z. K.; Wang, J. C.
Solar flares and coronal mass ejections are the most powerful explosions on the Sun. They are major sources of potentially destructive space weather conditions. However, the possible causes of their initiation remain controversial. Using high-resolution data observed by the New Solar Telescope of Big Bear Solar Observatory, supplemented by Solar Dynamics Observatory observations, we present unusual observations of a small-scale emerging flux rope near a large sunspot, whose eruption produced an M-class flare and a coronal mass ejection. The presence of the small-scale flux rope was indicated by static nonlinear force-free field extrapolation as well as data-driven magnetohydrodynamics modeling of the dynamic evolution of the coronal three-dimensional magnetic field. During the emergence of the flux rope, rotation of satellite sunspots at the footpoints of the flux rope was observed. Meanwhile, the Lorentz force, magnetic energy, vertical current, and transverse fields were increasing during this phase. The free energy from the magnetic flux emergence and twisting magnetic fields is sufficient to power the M-class flare. These observations present, for the first time, the complete process, from the emergence of the small-scale flux rope, to the production of solar eruptions.
NASA Astrophysics Data System (ADS)
Adare, A.; Afanasiev, S.; Aidala, C.; Ajitanand, N. N.; Akiba, Y.; Akimoto, R.; Al-Bataineh, H.; Alexander, J.; Alfred, M.; Al-Ta'Ani, H.; Angerami, A.; Aoki, K.; Apadula, N.; Aphecetche, L.; Aramaki, Y.; Armendariz, R.; Aronson, S. H.; Asai, J.; Asano, H.; Aschenauer, E. C.; Atomssa, E. T.; Averbeck, R.; Awes, T. C.; Azmoun, B.; Babintsev, V.; Bai, M.; Baksay, G.; Baksay, L.; Baldisseri, A.; Bandara, N. S.; Bannier, B.; Barish, K. N.; Barnes, P. D.; Bassalleck, B.; Basye, A. T.; Bathe, S.; Batsouli, S.; Baublis, V.; Baumann, C.; Baumgart, S.; Bazilevsky, A.; Beaumier, M.; Beckman, S.; Belikov, S.; Belmont, R.; Bennett, R.; Berdnikov, A.; Berdnikov, Y.; Bickley, A. A.; Blau, D. S.; Boissevain, J. G.; Bok, J. S.; Borel, H.; Boyle, K.; Brooks, M. L.; Bryslawskyj, J.; Buesching, H.; Bumazhnov, V.; Bunce, G.; Butsyk, S.; Camacho, C. M.; Campbell, S.; Castera, P.; Chang, B. S.; Charvet, J.-L.; Chen, C.-H.; Chernichenko, S.; Chi, C. Y.; Chiba, J.; Chiu, M.; Choi, I. J.; Choi, J. B.; Choi, S.; Choudhury, R. K.; Christiansen, P.; Chujo, T.; Chung, P.; Churyn, A.; Chvala, O.; Cianciolo, V.; Citron, Z.; Cleven, C. R.; Cole, B. A.; Comets, M. P.; Connors, M.; Constantin, P.; Csanád, M.; Csörgő, T.; Dahms, T.; Dairaku, S.; Danchev, I.; Danley, T. W.; Das, K.; Datta, A.; Daugherity, M. S.; David, G.; Deaton, M. B.; Deblasio, K.; Dehmelt, K.; Delagrange, H.; Denisov, A.; D'Enterria, D.; Deshpande, A.; Desmond, E. J.; Dharmawardane, K. V.; Dietzsch, O.; Ding, L.; Dion, A.; Diss, P. B.; Do, J. H.; Donadelli, M.; D'Orazio, L.; Drapier, O.; Drees, A.; Drees, K. A.; Dubey, A. K.; Durham, J. M.; Durum, A.; Dutta, D.; Dzhordzhadze, V.; Edwards, S.; Efremenko, Y. V.; Egdemir, J.; Ellinghaus, F.; Emam, W. S.; Engelmore, T.; Enokizono, A.; En'yo, H.; Esumi, S.; Eyser, K. O.; Fadem, B.; Feege, N.; Fields, D. E.; Finger, M.; Finger, M.; Fleuret, F.; Fokin, S. L.; Fraenkel, Z.; Frantz, J. E.; Franz, A.; Frawley, A. 
D.; Fujiwara, K.; Fukao, Y.; Fusayasu, T.; Gadrat, S.; Gainey, K.; Gal, C.; Gallus, P.; Garg, P.; Garishvili, A.; Garishvili, I.; Ge, H.; Giordano, F.; Glenn, A.; Gong, H.; Gong, X.; Gonin, M.; Gosset, J.; Goto, Y.; Granier de Cassagnac, R.; Grau, N.; Greene, S. V.; Grosse Perdekamp, M.; Gunji, T.; Guo, L.; Gustafsson, H.-Å.; Hachiya, T.; Hadj Henni, A.; Haegemann, C.; Haggerty, J. S.; Hahn, K. I.; Hamagaki, H.; Hamblen, J.; Hamilton, H. F.; Han, R.; Han, S. Y.; Hanks, J.; Harada, H.; Hartouni, E. P.; Haruna, K.; Hasegawa, S.; Haseler, T. O. S.; Hashimoto, K.; Haslum, E.; Hayano, R.; He, X.; Heffner, M.; Hemmick, T. K.; Hester, T.; Hiejima, H.; Hill, J. C.; Hobbs, R.; Hohlmann, M.; Hollis, R. S.; Holzmann, W.; Homma, K.; Hong, B.; Horaguchi, T.; Hori, Y.; Hornback, D.; Hoshino, T.; Hotvedt, N.; Huang, J.; Huang, S.; Ichihara, T.; Ichimiya, R.; Ide, J.; Iinuma, H.; Ikeda, Y.; Imai, K.; Imrek, J.; Inaba, M.; Inoue, Y.; Iordanova, A.; Isenhower, D.; Isenhower, L.; Ishihara, M.; Isobe, T.; Issah, M.; Isupov, A.; Ivanishchev, D.; Jacak, B. V.; Javani, M.; Jezghani, M.; Jia, J.; Jiang, X.; Jin, J.; Jinnouchi, O.; Johnson, B. M.; Joo, K. S.; Jouan, D.; Jumper, D. S.; Kajihara, F.; Kametani, S.; Kamihara, N.; Kamin, J.; Kanda, S.; Kaneta, M.; Kaneti, S.; Kang, B. H.; Kang, J. H.; Kang, J. S.; Kanou, H.; Kapustinsky, J.; Karatsu, K.; Kasai, M.; Kawall, D.; Kawashima, M.; Kazantsev, A. V.; Kempel, T.; Key, J. A.; Khachatryan, V.; Khanzadeev, A.; Kijima, K. M.; Kikuchi, J.; Kim, B. I.; Kim, C.; Kim, D. H.; Kim, D. J.; Kim, E.; Kim, E.-J.; Kim, G. W.; Kim, H. J.; Kim, K.-B.; Kim, M.; Kim, S. H.; Kim, Y.-J.; Kim, Y. K.; Kimelman, B.; Kinney, E.; Kiriluk, K.; Kiss, Á.; Kistenev, E.; Kitamura, R.; Kiyomichi, A.; Klatsky, J.; Klay, J.; Klein-Boesing, C.; Kleinjan, D.; Kline, P.; Koblesky, T.; Kochenda, L.; Kochetkov, V.; Komatsu, Y.; Komkov, B.; Konno, M.; Koster, J.; Kotchetkov, D.; Kotov, D.; Kozlov, A.; Král, A.; Kravitz, A.; Krizek, F.; Kubart, J.; Kunde, G. 
J.; Kurihara, N.; Kurita, K.; Kurosawa, M.; Kweon, M. J.; Kwon, Y.; Kyle, G. S.; Lacey, R.; Lai, Y. S.; Lajoie, J. G.; Lebedev, A.; Lee, B.; Lee, D. M.; Lee, J.; Lee, K.; Lee, K. B.; Lee, K. S.; Lee, M. K.; Lee, S.; Lee, S. H.; Lee, S. R.; Lee, T.; Leitch, M. J.; Leite, M. A. L.; Leitgab, M.; Leitner, E.; Lenzi, B.; Lewis, B.; Li, X.; Liebing, P.; Lim, S. H.; Linden Levy, L. A.; Liška, T.; Litvinenko, A.; Liu, H.; Liu, M. X.; Love, B.; Luechtenborg, R.; Lynch, D.; Maguire, C. F.; Makdisi, Y. I.; Makek, M.; Malakhov, A.; Malik, M. D.; Manion, A.; Manko, V. I.; Mannel, E.; Mao, Y.; Mašek, L.; Masui, H.; Masumoto, S.; Matathias, F.; McCumber, M.; McGaughey, P. L.; McGlinchey, D.; McKinney, C.; Means, N.; Meles, A.; Mendoza, M.; Meredith, B.; Miake, Y.; Mibe, T.; Mignerey, A. C.; Mikeš, P.; Miki, K.; Miller, T. E.; Milov, A.; Mioduszewski, S.; Mishra, D. K.; Mishra, M.; Mitchell, J. T.; Mitrovski, M.; Miyachi, Y.; Miyasaka, S.; Mizuno, S.; Mohanty, A. K.; Mohapatra, S.; Montuenga, P.; Moon, H. J.; Moon, T.; Morino, Y.; Morreale, A.; Morrison, D. P.; Motschwiller, S.; Moukhanova, T. V.; Mukhopadhyay, D.; Murakami, T.; Murata, J.; Mwai, A.; Nagae, T.; Nagamiya, S.; Nagashima, K.; Nagata, Y.; Nagle, J. L.; Naglis, M.; Nagy, M. I.; Nakagawa, I.; Nakagomi, H.; Nakamiya, Y.; Nakamura, K. R.; Nakamura, T.; Nakano, K.; Nattrass, C.; Nederlof, A.; Netrakanti, P. K.; Newby, J.; Nguyen, M.; Nihashi, M.; Niida, T.; Nishimura, S.; Norman, B. E.; Nouicer, R.; Novák, T.; Novitzky, N.; Nyanin, A. S.; O'Brien, E.; Oda, S. X.; Ogilvie, C. A.; Ohnishi, H.; Oka, M.; Okada, K.; Omiwade, O. O.; Onuki, Y.; Orjuela Koop, J. D.; Osborn, J. D.; Oskarsson, A.; Ouchida, M.; Ozawa, K.; Pak, R.; Pal, D.; Palounek, A. P. T.; Pantuev, V.; Papavassiliou, V.; Park, B. H.; Park, I. H.; Park, J.; Park, J. S.; Park, S.; Park, S. K.; Park, W. J.; Pate, S. F.; Patel, L.; Patel, M.; Pei, H.; Peng, J.-C.; Pereira, H.; Perepelitsa, D. V.; Perera, G. D. N.; Peresedov, V.; Peressounko, D. 
Yu.; Perry, J.; Petti, R.; Pinkenburg, C.; Pinson, R.; Pisani, R. P.; Proissl, M.; Purschke, M. L.; Purwar, A. K.; Qu, H.; Rak, J.; Rakotozafindrabe, A.; Ramson, B. J.; Ravinovich, I.; Read, K. F.; Rembeczki, S.; Reuter, M.; Reygers, K.; Reynolds, D.; Riabov, V.; Riabov, Y.; Richardson, E.; Rinn, T.; Roach, D.; Roche, G.; Rolnick, S. D.; Romana, A.; Rosati, M.; Rosen, C. A.; Rosendahl, S. S. E.; Rosnet, P.; Rowan, Z.; Rubin, J. G.; Rukoyatkin, P.; Ružička, P.; Rykov, V. L.; Sahlmueller, B.; Saito, N.; Sakaguchi, T.; Sakai, S.; Sakashita, K.; Sakata, H.; Sako, H.; Samsonov, V.; Sano, M.; Sano, S.; Sarsour, M.; Sato, S.; Sato, T.; Sawada, S.; Schaefer, B.; Schmoll, B. K.; Sedgwick, K.; Seele, J.; Seidl, R.; Semenov, A. Yu.; Semenov, V.; Sen, A.; Seto, R.; Sett, P.; Sexton, A.; Sharma, D.; Shein, I.; Shevel, A.; Shibata, T.-A.; Shigaki, K.; Shimomura, M.; Shoji, K.; Shukla, P.; Sickles, A.; Silva, C. L.; Silvermyr, D.; Silvestre, C.; Sim, K. S.; Singh, B. K.; Singh, C. P.; Singh, V.; Skutnik, S.; Slunečka, M.; Snowball, M.; Soldatov, A.; Soltz, R. A.; Sondheim, W. E.; Sorensen, S. P.; Sourikova, I. V.; Sparks, N. A.; Staley, F.; Stankus, P. W.; Stenlund, E.; Stepanov, M.; Ster, A.; Stoll, S. P.; Sugitate, T.; Suire, C.; Sukhanov, A.; Sumita, T.; Sun, J.; Sziklai, J.; Tabaru, T.; Takagi, S.; Takagui, E. M.; Takahara, A.; Taketani, A.; Tanabe, R.; Tanaka, Y.; Taneja, S.; Tanida, K.; Tannenbaum, M. J.; Tarafdar, S.; Taranenko, A.; Tarján, P.; Tennant, E.; Themann, H.; Thomas, T. L.; Tieulent, R.; Timilsina, A.; Todoroki, T.; Togawa, M.; Toia, A.; Tojo, J.; Tomášek, L.; Tomášek, M.; Torii, H.; Towell, C. L.; Towell, R.; Towell, R. S.; Tram, V.-N.; Tserruya, I.; Tsuchimoto, Y.; Tsuji, T.; Vale, C.; Valle, H.; van Hecke, H. W.; Vargyas, M.; Vazquez-Zambrano, E.; Veicht, A.; Velkovska, J.; Vértesi, R.; Vinogradov, A. A.; Virius, M.; Vossen, A.; Vrba, V.; Vznuzdaev, E.; Wagner, M.; Walker, D.; Wang, X. R.; Watanabe, D.; Watanabe, K.; Watanabe, Y.; Watanabe, Y. 
S.; Wei, F.; Wei, R.; Wessels, J.; White, A. S.; White, S. N.; Winter, D.; Wolin, S.; Wood, J. P.; Woody, C. L.; Wright, R. M.; Wysocki, M.; Xia, B.; Xie, W.; Xue, L.; Yalcin, S.; Yamaguchi, Y. L.; Yamaura, K.; Yang, R.; Yanovich, A.; Yasin, Z.; Ying, J.; Yokkaichi, S.; Yoo, J. H.; Yoon, I.; You, Z.; Young, G. R.; Younus, I.; Yu, H.; Yushmanov, I. E.; Zajc, W. A.; Zaudtke, O.; Zelenski, A.; Zhang, C.; Zhou, S.; Zimamyi, J.; Zolin, L.; Zou, L.; Phenix Collaboration
2016-02-01
Measurements of the fractional momentum loss (Sloss ≡ δpT/pT) of high-transverse-momentum identified hadrons in heavy-ion collisions are presented. Using π0 in Au+Au and Cu+Cu collisions at √sNN = 62.4 and 200 GeV measured by the PHENIX experiment at the Relativistic Heavy Ion Collider and charged hadrons in Pb+Pb collisions measured by the ALICE experiment at the Large Hadron Collider, we studied the scaling properties of Sloss as a function of a number of variables: the number of participants, Npart; the number of quark participants, Nqp; the charged-particle density, dNch/dη; and the Bjorken energy density times the equilibration time, εBjτ0. We find that the pT where Sloss has its maximum varies both with centrality and collision energy. Above the maximum, Sloss tends to follow a power-law function of all four scaling variables. The data at √sNN = 200 GeV and 2.76 TeV, for sufficiently high particle densities, show a common scaling of Sloss with dNch/dη and εBjτ0, lending insight into the physics of parton energy loss.
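To make the Sloss definition concrete: at fixed yield, one finds the reference pT carrying the same yield as the measured heavy-ion point and takes the relative shift. The sketch below uses a pure power-law reference spectrum as an illustrative stand-in; the experiments derive Sloss from the measured, scaled p+p reference spectrum, so the function and its parameters are assumptions for illustration only.

```python
def sloss_powerlaw(pt_aa, yield_aa, A, n):
    """Fractional momentum loss Sloss = delta_pT / pT against a power-law
    reference spectrum yield(pT) = A * pT**(-n): find the reference pT with
    the same yield as the measured A+A point, then take the relative shift."""
    pt_ref = (A / yield_aa) ** (1.0 / n)  # invert the reference spectrum
    return (pt_ref - pt_aa) / pt_ref
```

For a steeply falling spectrum even a modest downward shift in pT produces a large suppression in yield, which is why Sloss is a useful complement to suppression ratios such as RAA.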
Self-Heating Dark Matter via Semiannihilation
NASA Astrophysics Data System (ADS)
Kamada, Ayuki; Kim, Hee Jung; Kim, Hyungjin; Sekiguchi, Toyokazu
2018-03-01
The freeze-out of dark matter (DM) depends on the evolution of the DM temperature. The DM temperature does not have to follow the standard model one when the elastic scattering is not sufficient to maintain kinetic equilibrium. We study the temperature evolution of semiannihilating DM, where a pair of DM particles annihilate into one DM particle and another particle coupled to the standard model sector. We find that the kinetic equilibrium is maintained solely via semiannihilation until the last stage of the freeze-out. After the freeze-out, semiannihilation converts the mass deficit to the kinetic energy of DM, which leads to nontrivial evolution of the DM temperature. We argue that the DM temperature redshifts like radiation as long as the DM self-interaction is efficient. We dub this novel temperature evolution self-heating. Notably, if the self-heating lasts roughly until matter-radiation equality, structure formation is suppressed at subgalactic scales, as with keV-scale warm DM, but with GeV-scale self-heating DM. The long duration of the self-heating requires a large self-scattering cross section, which in turn flattens the DM density profile in inner halos. Consequently, self-heating DM can be a unified solution to apparent failures of cold DM to reproduce the observed subgalactic-scale structure of the Universe.
Thermodynamic scaling of the shear viscosity of Mie n-6 fluids and their binary mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delage-Santacreu, Stephanie; Galliero, Guillaume, E-mail: guillaume.galliero@univ-pau.fr; Hoang, Hai
2015-05-07
In this work, we have evaluated the applicability of the so-called thermodynamic scaling and the isomorph frame to describe the shear viscosity of Mie n-6 fluids of varying repulsive exponents (n = 8, 12, 18, 24, and 36). Furthermore, the effectiveness of the thermodynamic scaling to deal with binary mixtures of Mie n-6 fluids has been explored as well. To generate the viscosity database of these fluids, extensive non-equilibrium molecular dynamics simulations have been performed for various thermodynamic conditions. Then, a systematic approach has been used to determine the gamma exponent value (γ) characteristic of the thermodynamic scaling approach for each system. In addition, the applicability of the isomorph theory with a density dependent gamma has been confirmed in pure fluids. In both pure fluids and mixtures, it has been found that the thermodynamic scaling with a constant gamma is sufficient to correlate the viscosity data on a large range of thermodynamic conditions covering liquid and supercritical states as long as the density is not too high. Interestingly, it has been obtained that, in pure fluids, the value of γ is directly proportional to the repulsive exponent of the Mie potential. Finally, it has been found that the value of γ in mixtures can be deduced from those of the pure components using a simple logarithmic mixing rule.
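A minimal sketch of the two relations described above: the scaling variable ρ^γ/T, under which state points with equal values are predicted to share the same reduced viscosity, and a logarithmic mixing rule for the mixture exponent. The abstract does not spell out the exact form of the mixing rule, so the mole-fraction-weighted geometric mean below is an assumed form for illustration.

```python
import math

def scaling_variable(rho, T, gamma):
    """Thermodynamic-scaling variable rho**gamma / T: state points with equal
    values are predicted to have the same reduced shear viscosity."""
    return rho ** gamma / T

def gamma_mixture(x, gammas):
    """Mixture exponent from pure-component values via a logarithmic mixing
    rule (assumed form: ln(gamma_mix) = sum_i x_i * ln(gamma_i))."""
    return math.exp(sum(xi * math.log(g) for xi, g in zip(x, gammas)))
```

With this construction, viscosity data for a mixture at many (ρ, T) state points would be plotted against scaling_variable(ρ, T, gamma_mixture(...)) to test for collapse onto a single curve.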
Validation of a common data model for active safety surveillance research
Ryan, Patrick B; Reich, Christian G; Hartzema, Abraham G; Stang, Paul E
2011-01-01
Objective Systematic analysis of observational medical databases for active safety surveillance is hindered by the variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill suited to support their analytic approaches. Further, some models do not facilitate the computations required for systematic analysis across many interventions and outcomes for large datasets. Translating the data from these idiosyncratic data models to a common data model (CDM) could facilitate both the analysts' understanding and the suitability for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM has to faithfully represent the source observational database. Before beginning to use the Observational Medical Outcomes Partnership (OMOP) CDM and a related dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example. Validation by example To validate the OMOP CDM, the model was instantiated into a relational database, data from 10 different observational healthcare databases were loaded into separate instances, a comprehensive array of analytic methods that operate on the data model was created, and these methods were executed against the databases to measure performance. Conclusion There was acceptable representation of the data from 10 observational databases in the OMOP CDM using the standardized terminologies selected, and a range of analytic methods was developed and executed with sufficient performance to be useful for active safety surveillance. PMID:22037893
3-D flow and scour near a submerged wing dike: ADCP measurements on the Missouri River
Jamieson, E.C.; Rennie, C.D.; Jacobson, R.B.; Townsend, R.D.
2011-01-01
Detailed mapping of bathymetry and three-dimensional water velocities using a boat-mounted single-beam sonar and acoustic Doppler current profiler (ADCP) was carried out in the vicinity of two submerged wing dikes located in the Lower Missouri River near Columbia, Missouri. During high spring flows the wing dikes become submerged, creating a unique combination of vertical flow separation and overtopping (plunging) flow conditions, causing large-scale three-dimensional turbulent flow structures to form. On three different days and for a range of discharges, sampling transects at 5 and 20 m spacing were completed, covering the area adjacent to and upstream and downstream from two different wing dikes. The objectives of this research are to evaluate whether an ADCP can identify and measure large-scale flow features such as recirculating flow and vortex shedding that develop in the vicinity of a submerged wing dike; and whether or not moving-boat (single-transect) data are sufficient for resolving complex three-dimensional flow fields. Results indicate that spatial averaging from multiple nearby single transects may be more representative of an inherently complex (temporally and spatially variable) three-dimensional flow field than repeated single transects. Results also indicate a correspondence between the location of calculated vortex cores (resolved from the interpolated three-dimensional flow field) and the nearby scour holes, providing new insight into the connections between vertically oriented coherent structures and local scour, with the unique perspective of flow and morphology in a large river.
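The spatial-averaging idea, combining samples from multiple nearby single transects onto a common grid so that the average is more representative than any one pass, can be sketched simply. This is a schematic illustration of the approach, not the authors' processing code, and the cell size and sample format are assumptions.

```python
def grid_average(transects, cell):
    """Average velocity samples from multiple single transects onto a regular
    grid. Each transect is a list of (x, y, v) samples; cell is the grid
    spacing in the same units as x and y. Returns {(ix, iy): mean v}."""
    sums, counts = {}, {}
    for samples in transects:
        for x, y, v in samples:
            key = (int(x // cell), int(y // cell))
            sums[key] = sums.get(key, 0.0) + v
            counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}
```

Averaging over repeated passes suppresses the temporal variability (e.g. vortex shedding) that a single moving-boat transect aliases, at the cost of smoothing genuinely transient structures.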
The Panchromatic Comparative Exoplanetary Treasury Program
NASA Astrophysics Data System (ADS)
Sing, David
2016-10-01
HST has played the definitive role in the characterization of exoplanets, and from the first planets available we have learned that their atmospheres are incredibly diverse. The large number of transiting planets now available has prompted a new era of atmospheric studies, where wide-scale comparative planetology is now possible. The atmospheric chemistry of cloud/haze formation and atmospheric mass loss are major outstanding issues in the field of exoplanets, and we seek to gain insight into their underlying physical processes through comparative studies. Here we propose to use Hubble's full spectroscopic capabilities to produce the first large-scale, simultaneous UVOIR comparative study of exoplanets. With full wavelength coverage, an entire planet's atmosphere can be probed simultaneously, and with sufficient numbers of planets we can statistically compare their features with physical parameters for the first time. This panchromatic program will build a lasting HST legacy, providing the UV and blue-optical spectra unavailable to JWST. From these observations, chemistry over a wide range of physical environments will be probed, from the hottest condensates to much cooler planets where photochemical hazes could be present. Constraints on aerosol size and composition will help unlock our understanding of clouds and how they are suspended at such high altitudes. Notably, there have been no large transiting UV HST programs, and this panchromatic program will provide a fundamental legacy contribution to the study of atmospheric escape of small exoplanets, where the mass loss can be significant and have a major impact on the evolution of the planet itself.
Portelli, Geoffrey; Barrett, John M; Hilgen, Gerrit; Masquelier, Timothée; Maccione, Alessandro; Di Marco, Stefano; Berdondini, Luca; Kornprobst, Pierre; Sernagor, Evelyne
2016-01-01
How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in salamander suggest that the relative latencies of a RGC pair encode spatial information. Thus, a population code based on this concerted spiking could be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in mouse by recording simultaneous light-evoked responses from hundreds of RGCs, at pan-retinal level, using a new-generation large-scale, high-density multielectrode array consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike-count- or latency-based codes. In particular, we report for the first time that when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that already at the level of the retina, concerted spiking provides a reliable and fast strategy to rapidly transmit new visual scenes.
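The rank-based population code can be illustrated in a few lines: each cell's first-spike latency is converted into its rank within the population's wave of first stimulus-evoked spikes. This is a schematic reading of the coding scheme, not the authors' analysis code.

```python
def rank_code(latencies):
    """Rank-order population code: each cell's rank in the wave of first
    stimulus-evoked spikes (0 = earliest). latencies is a list of first-spike
    times per cell, with None for cells that never spiked (kept as None)."""
    order = sorted((t, i) for i, t in enumerate(latencies) if t is not None)
    ranks = [None] * len(latencies)
    for r, (_, i) in enumerate(order):
        ranks[i] = r
    return ranks
```

Because only the ordering matters, this code is unchanged by a uniform shift or stretch of all latencies, which is one reason relative activities can outperform absolute latency- or count-based readouts.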
An Integrative Account of Constraints on Cross-Situational Learning
Yurovsky, Daniel; Frank, Michael C.
2015-01-01
Word-object co-occurrence statistics are a powerful information source for vocabulary learning, but there is considerable debate about how learners actually use them. While some theories hold that learners accumulate graded, statistical evidence about multiple referents for each word, others suggest that they track only a single candidate referent. In two large-scale experiments, we show that neither account is sufficient: Cross-situational learning involves elements of both. Further, the empirical data are captured by a computational model that formalizes how memory and attention interact with co-occurrence tracking. Together, the data and model unify opposing positions in a complex debate and underscore the value of understanding the interaction between computational and algorithmic levels of explanation. PMID:26302052
Self-assembled ordered structures in thin films of HAT5 discotic liquid crystal.
Morales, Piero; Lagerwall, Jan; Vacca, Paolo; Laschat, Sabine; Scalia, Giusy
2010-05-20
Thin films of the discotic liquid crystal hexapentyloxytriphenylene (HAT5), prepared from solution via casting or spin-coating, were investigated by atomic force microscopy and polarizing optical microscopy, revealing large-scale ordered structures substantially different from those typically observed in standard samples of the same material. Thin and very long fibrils of planar-aligned liquid crystal were found, possibly formed as a result of an intermediate lyotropic nematic state arising during the solvent evaporation process. Moreover, in sufficiently thin films the crystallization seems to be suppressed, extending the uniform order of the liquid crystal phase down to room temperature. This should be compared to the bulk situation, where the same material crystallizes into a polymorphic structure at 68 °C.
Function and evolution of sex determination mechanisms, genes and pathways in insects
Gempe, Tanja; Beye, Martin
2011-01-01
Animals have evolved a bewildering diversity of mechanisms to determine the two sexes. Studies of sex determination genes – their history and function – in non-model insects and Drosophila have allowed us to begin to understand the generation of sex determination diversity. One common theme from these studies is that evolved mechanisms produce activities in either males or females to control a shared gene switch that regulates sexual development. Only a few small-scale changes in existing and duplicated genes are sufficient to generate large differences in sex determination systems. This review summarises recent findings in insects, surveys evidence of how and why sex determination mechanisms can change rapidly and suggests fruitful areas of future research. PMID:21110346
Discovery, innovation and the cyclical nature of the pharmaceutical business.
Schmid, Esther F; Smith, Dennis A
2002-05-15
Unlike many recent articles, which paint the future of the pharmaceutical industry in gloomy colours, this article provides an optimistic outlook. It explores the foundations on which the pharmaceutical industry has based its outstanding successes. Case studies of important drug classes underpin the arguments made and provide the basis for the authors' argument that recent technological breakthroughs and the unravelling of the human genome will provide a new wave of high quality targets (substrate) on which the industry can build. The article suggests that in a conducive environment that understands the benefits that pharmaceuticals provide to healthcare, those players who can base their innovation on a sufficient scale and from a large capital base will reshape the industry.
Autophoretic locomotion from geometric asymmetry.
Michelin, Sébastien; Lauga, Eric
2015-02-01
Among the few methods which have been proposed to create small-scale swimmers, those relying on self-phoretic mechanisms present an interesting design challenge in that chemical gradients are required to generate net propulsion. Building on recent work, we propose that asymmetries in geometry are sufficient to induce chemical gradients and swimming. We illustrate this idea using two different calculations. We first calculate exactly the self-propulsion speed of a system composed of two spheres of unequal sizes but identically chemically homogeneous. We then consider arbitrary, small-amplitude, shape deformations of a chemically homogeneous sphere, and calculate asymptotically the self-propulsion velocity induced by the shape asymmetries. Our results demonstrate how geometric asymmetries can be tuned to induce large locomotion speeds without the need of chemical patterning.
NASA Astrophysics Data System (ADS)
Samadi, R.; Belkacem, K.; Ludwig, H.-G.; Caffau, E.; Campante, T. L.; Davies, G. R.; Kallinger, T.; Lund, M. N.; Mosser, B.; Baglin, A.; Mathur, S.; Garcia, R. A.
2013-11-01
Context. A large set of stars observed by CoRoT and Kepler shows clear evidence for the presence of a stellar background, which is interpreted to arise from surface convection, i.e., granulation. These observations show that the characteristic time-scale (τeff) and the root-mean-square (rms) brightness fluctuations (σ) associated with the granulation scale as a function of the peak frequency (νmax) of the solar-like oscillations. Aims: We aim at providing a theoretical background to the observed scaling relations based on a model developed in Paper I. Methods: For each 3D hydrodynamical model of surface convection in our grid, we computed the theoretical power density spectrum (PDS) associated with the granulation as seen in disk-integrated intensity, on the basis of the theoretical model published in Paper I. From each PDS we derived the associated characteristic time (τeff) and the rms brightness fluctuations (σ), and compared these values with the scaling relations derived from the theoretical model and with the measurements made on a large set of Kepler targets. Results: We derive theoretical scaling relations for τeff and σ, which show the same dependence on νmax as the observed scaling relations. In addition, we show that these quantities also scale as a function of the turbulent Mach number (ℳa) estimated at the photosphere. The theoretical scaling relations for τeff and σ match the observations well on a global scale. Quantitatively, the remaining discrepancies with the observations are found to be much smaller than in previous theoretical calculations made for red giants. Conclusions: Our modelling provides additional theoretical support for the observed variations of σ and τeff with νmax. It also highlights the important role of ℳa in controlling the properties of the stellar granulation. However, the observations made with Kepler on a wide variety of stars cannot confirm the dependence of our scaling relations on ℳa. 
Measurements of the granulation background and detections of solar-like oscillations in a statistically sufficient number of cool dwarf stars will be required for confirming the dependence of the theoretical scaling relations with ℳa. Appendices are available in electronic form at http://www.aanda.org
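The power-law form of scaling relation described in this abstract can be sketched as follows; the exponent and prefactor below are placeholder values for illustration only, not the calibrated values from the paper:

```python
# Schematic of a granulation scaling relation of the form discussed above:
# tau_eff (and similarly sigma) scaling as a power law in nu_max.
# Exponent and prefactor are placeholders, NOT the paper's calibrated values.

def granulation_scaling(nu_max_uhz: float, exponent: float, prefactor: float) -> float:
    """Generic power law: quantity = prefactor * nu_max**exponent."""
    return prefactor * nu_max_uhz ** exponent

# With a placeholder exponent of -0.9, a red giant with nu_max 100x smaller
# than the Sun's has a granulation timescale longer by a factor of 100**0.9.
tau_sunlike = granulation_scaling(3100.0, -0.9, 1.0e6)   # nu_max ~ 3100 uHz (Sun-like)
tau_giant = granulation_scaling(31.0, -0.9, 1.0e6)       # nu_max ~ 31 uHz (red giant)
print(tau_giant / tau_sunlike)
```

The point of the sketch is only the functional form: a single exponent links stars across the νmax range, which is what makes the observed relations useful diagnostics.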
Bullock, Robin J; Aggarwal, Srijan; Perkins, Robert A; Schnabel, William
2017-04-01
In the event of a marine oil spill in the Arctic, government agencies, industry, and the public have a stake in the successful implementation of oil spill response. Because large spills are rare events, oil spill response techniques are often evaluated with laboratory and meso-scale experiments. The experiments must yield scalable information sufficient to understand the operability and effectiveness of a response technique under actual field conditions. Since in-situ burning augmented with surface collecting agents ("herders") is one of the few viable response options in ice-infested waters, a series of oil spill response experiments were conducted in Fairbanks, Alaska, in 2014 and 2015 to evaluate the use of herders to assist in-situ burning and the role of experimental scale. This study compares burn efficiency and herder application for three experimental designs for in-situ burning of Alaska North Slope crude oil in cold, fresh waters with ∼10% ice cover. The experiments were conducted in three purpose-built venues of varying scale (surface areas of approximately 0.09 square meters, 9 square meters, and 8100 square meters). The results from the herder-assisted in-situ burn experiments performed at these three scales showed good correlation across experimental scales and no negative impact of the ice cover on burn efficiency. The experimental conclusions concern primarily the application of the herder material and the usability of a given experimental scale for making response decisions.
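A hedged sketch of the burn-efficiency metric being compared across the three scales: one common definition uses the oil mass applied and the residue mass recovered after the burn. The masses below are hypothetical values for illustration, not data from the study.

```python
# Minimal sketch: burn efficiency as the fraction of spilled oil removed by
# burning, estimated from initial oil mass and recovered residue mass.
# All masses below are hypothetical, not measurements from the study.

def burn_efficiency(initial_oil_kg: float, residue_kg: float) -> float:
    """Percent of oil removed by burning: 100 * (initial - residue) / initial."""
    if initial_oil_kg <= 0:
        raise ValueError("initial oil mass must be positive")
    return 100.0 * (initial_oil_kg - residue_kg) / initial_oil_kg

# Hypothetical results at three experimental scales (surface area in m^2)
for scale_m2, m0_kg, residue_kg in [(0.09, 0.05, 0.005), (9.0, 5.0, 0.45), (8100.0, 4000.0, 380.0)]:
    print(f"{scale_m2:>8} m^2: {burn_efficiency(m0_kg, residue_kg):.1f}% burned")
```

Comparing this single metric across venues is what "good correlation across experimental scales" refers to: similar efficiencies at 0.09, 9, and 8100 square meters.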
The Great Observatories Origins Deep Survey
NASA Astrophysics Data System (ADS)
Dickinson, Mark
2008-05-01
Observing the formation and evolution of ordinary galaxies at early cosmic times requires data at many wavelengths in order to recognize, separate and analyze the many physical processes which shape galaxies' history, including the growth of large scale structure, gravitational interactions, star formation, and active nuclei. Extremely deep data, covering an adequately large volume, are needed to detect ordinary galaxies in sufficient numbers at such great distances. The Great Observatories Origins Deep Survey (GOODS) was designed for this purpose as an anthology of deep field observing programs that span the electromagnetic spectrum. GOODS targets two fields, one in each hemisphere. Some of the deepest and most extensive imaging and spectroscopic surveys have been carried out in the GOODS fields, using nearly every major space- and ground-based observatory. Many of these data have been taken as part of large, public surveys (including several Hubble Treasury, Spitzer Legacy, and ESO Large Programs), which have produced large data sets that are widely used by the astronomical community. I will review the history of the GOODS program, highlighting results on the formation and early growth of galaxies and their active nuclei. I will also describe new and upcoming observations, such as the GOODS Herschel Key Program, which will continue to fill out our portrait of galaxies in the young universe.
Experimental Investigation of Very Large Model Wind Turbine Arrays
NASA Astrophysics Data System (ADS)
Charmanski, Kyle; Wosnik, Martin
2013-11-01
The decrease in energy yield in large wind farms (array losses) and associated revenue losses can be significant. When arrays are sufficiently large they can reach what is known as a fully developed wind turbine array boundary layer, or fully developed wind farm condition. This occurs when the turbulence statistics and the structure of the turbulence, within and above a wind farm, as well as the performance of the turbines, remain the same from one row to the next. The study of this condition and how it is affected by parameters such as turbine spacing, power extraction, tip speed ratio, etc., is important for the optimization of large wind farms. An experimental investigation of the fully developed wind farm condition was conducted using a large array of porous disks (upstream) and realistically scaled 3-bladed wind turbines with a diameter of 0.25 m. The turbines and porous disks were placed inside a naturally grown turbulent boundary layer in the 6 m × 2.5 m × 72 m test section of the UNH Flow Physics Facility, which can achieve test section velocities of up to 14 m/s and friction Reynolds numbers δ+ = δuτ/ν ≈ 20,000. Power, rate of rotation and rotor thrust were measured for select turbines, and hot-wire anemometry was used for flow measurements.
NASA Astrophysics Data System (ADS)
Day, Danny
2006-04-01
Although 'negative emissions' of carbon dioxide need not, in principle, involve use of biological processes to draw carbon out of the atmosphere, such 'agricultural sequestration' is the only known way to remove carbon from the atmosphere on time scales comparable to the time scale for anthropogenic increases in carbon emissions. In order to maintain the 'negative emissions', the biomass must be used in such a way that the resulting carbon dioxide is separated and permanently sequestered. Two options for sequestration are in the topsoil and via geologic carbon sequestration. The former has multiple benefits, but the latter also is needed. Thus, although geologic carbon sequestration is viewed skeptically by some environmentalists as simply a way to keep using fossil fuels, it may be a key part of reversing accelerating climate forcing if rapid climate change is beginning to occur. I will first review the general approach of agricultural sequestration combined with use of the resulting biofuels in a way that permits carbon separation, and then geologic sequestration as a negative emissions technology. Then I discuss the process that is the focus of my company: the EPRIDA cycle. If deployed at a sufficiently large scale, it could reverse the increase in CO2 concentrations. I also estimate the benefits, carbon and other, of large-scale deployment of negative emissions technologies. For example, using the EPRIDA cycle by planting and soil-sequestering carbon in an area about three times the size of Texas would remove the amount of carbon that is being accumulated worldwide each year. In addition to the atmospheric carbon removal, the EPRIDA approach also counters the depletion of carbon in the soil, increasing topsoil and its fertility; reduces the excess nitrogen in the water by eliminating the need for ammonium nitrate fertilizer; and reduces fossil fuel reliance by providing biofuel and avoiding natural gas-based fertilizer production.
Variability in vegetation effects on density and nesting success of grassland birds
Winter, Maiken; Johnson, Douglas H.; Shaffer, Jill A.
2005-01-01
The structure of vegetation in grassland systems, unlike that in forest systems, varies dramatically among years on the same sites, and among regions with similar vegetation. The role of this variation in vegetation structure on bird density and nesting success of grassland birds is poorly understood, primarily because few studies have included sufficiently large temporal and spatial scales to capture the variation in vegetation structure, bird density, or nesting success. To date, no large-scale study on grassland birds has been conducted to investigate whether grassland bird density and nesting success respond similarly to changes in vegetation structure. However, reliable management recommendations require investigations into the distribution and nesting success of grassland birds over larger temporal and spatial scales. In addition, studies need to examine whether bird density and nesting success respond similarly to changing environmental conditions. We investigated the effect of vegetation structure on the density and nesting success of 3 grassland-nesting birds: clay-colored sparrow (Spizella pallida), Savannah sparrow (Passerculus sandwichensis), and bobolink (Dolichonyx oryzivorus) in 3 regions of the northern tallgrass prairie in 1998-2001. Few vegetation features influenced the densities of our study species, and each species responded differently to those vegetation variables. We could identify only 1 variable that clearly influenced nesting success of 1 species: clay-colored sparrow nesting success increased with increasing percentage of nest cover from the surrounding vegetation. Because responses of avian density and nesting success to vegetation measures varied among regions, years, and species, land managers at all times need to provide grasslands with different types of vegetation structure. Management guidelines developed from small-scale, short-term studies may lead to misrepresentations of the needs of grassland-nesting birds.
Manufacturing process scale-up of optical grade transparent spinel ceramic at ArmorLine Corporation
NASA Astrophysics Data System (ADS)
Spilman, Joseph; Voyles, John; Nick, Joseph; Shaffer, Lawrence
2013-06-01
While transparent Spinel ceramic's mechanical and optical characteristics are ideal for many Ultraviolet (UV), visible, Short-Wave Infrared (SWIR), Mid-Wave Infrared (MWIR), and multispectral sensor window applications, commercial adoption of the material has been hampered because the material has historically been available in relatively small sizes (one square foot per window or less), low volumes, unreliable supply, and with unreliable quality. Recent efforts, most notably by Technology Assessment and Transfer (TA and T), have scaled-up manufacturing processes and demonstrated the capability to produce larger windows on the order of two square feet, but with limited output not suitable for production type programs. ArmorLine Corporation licensed the hot-pressed Spinel manufacturing know-how of TA and T in 2009 with the goal of building the world's first dedicated full-scale Spinel production facility, enabling the supply of a reliable and sufficient volume of large Transparent Armor and Optical Grade Spinel plates. With over $20 million of private investment by J.F. Lehman and Company, ArmorLine has installed and commissioned the largest vacuum hot press in the world, the largest high-temperature/high-pressure hot isostatic press in the world, and supporting manufacturing processes within 75,000 square feet of manufacturing space. ArmorLine's equipment is capable of producing window blanks as large as 50" x 30" and the facility is capable of producing substantial volumes of material with its Lean configuration and 24/7 operation. Initial production capability was achieved in 2012. ArmorLine will discuss the challenges that were encountered during scale-up of the manufacturing processes, ArmorLine Optical Grade Spinel optical performance, and provide an overview of the facility and its capabilities.
Bohnhoff, Marco; Dresen, Georg; Ellsworth, William L.; Ito, Hisao; Cloetingh, Sierd; Negendank, Jörg
2010-01-01
An important discovery in crustal mechanics has been that the Earth’s crust is commonly stressed close to failure, even in tectonically quiet areas. As a result, small natural or man-made perturbations to the local stress field may trigger earthquakes. To understand these processes, Passive Seismic Monitoring (PSM) with seismometer arrays is a widely used technique that has been successfully applied to study seismicity at different magnitude levels ranging from acoustic emissions generated in the laboratory under controlled conditions, to seismicity induced by hydraulic stimulations in geological reservoirs, and up to great earthquakes occurring along plate boundaries. In all these environments the appropriate deployment of seismic sensors, i.e., directly on the rock sample, at the earth’s surface or in boreholes close to the seismic sources allows for the detection and location of brittle failure processes at sufficiently low magnitude-detection threshold and with adequate spatial resolution for further analysis. One principal aim is to develop an improved understanding of the physical processes occurring at the seismic source and their relationship to the host geologic environment. In this paper we review selected case studies and future directions of PSM efforts across a wide range of scales and environments. These include induced failure within small rock samples, hydrocarbon reservoirs, and natural seismicity at convergent and transform plate boundaries. Each example represents a milestone with regard to bridging the gap between laboratory-scale experiments under controlled boundary conditions and large-scale field studies. The common motivation for all studies is to refine the understanding of how earthquakes nucleate, how they proceed and how they interact in space and time. This is of special relevance at the larger end of the magnitude scale, i.e., for large devastating earthquakes due to their severe socio-economic impact.