Large Eddy Simulation in the Computation of Jet Noise
NASA Technical Reports Server (NTRS)
Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.
1999-01-01
Noise can in principle be predicted by solving the full time-dependent, compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field. The fluctuating near field of the jet produces propagating pressure waves that generate far-field sound, so the fluctuating flow field as a function of time is needed in order to calculate sound from first principles. Extending the computational domain to the far field, however, is not feasible. At the high Reynolds numbers of technological interest, turbulence has a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales of turbulence. Since the large scales are more efficient than the small scales in radiating sound, the emphasis is thus on calculating the sound radiated by the large scales.
NASA Astrophysics Data System (ADS)
Tang, Zhanqi; Jiang, Nan
2018-05-01
This study reports the modifications of scale interaction and arrangement in a turbulent boundary layer perturbed by a wall-mounted circular cylinder. Hot-wire measurements were executed at multiple streamwise and wall-normal locations downstream of the cylindrical element. The streamwise fluctuating signals were decomposed into large-, small-, and dissipative-scale signatures by corresponding cutoff filters. The scale interaction under the cylindrical perturbation was elaborated by comparing the small- and dissipative-scale amplitude/frequency modulation effects downstream of the cylinder element with the results observed in the unperturbed case. It was found that the large-scale fluctuations perform a stronger amplitude modulation on both the small and dissipative scales in the near-wall region. At wall-normal positions near the cylinder height, the small-scale amplitude modulation coefficients are redistributed by the cylinder wake. A similar observation was noted for small-scale frequency modulation; however, the dissipative-scale frequency modulation seems to be independent of the cylindrical perturbation. The phase-relationship observation indicated that the cylindrical perturbation shortens the time shifts between both the small- and dissipative-scale variations (amplitude and frequency) and the large-scale fluctuations. The integral-time-scale dependence of the phase relationship between the small/dissipative scales and the large scales is also discussed. Furthermore, the discrepancy between the small- and dissipative-scale time shifts relative to the large-scale motions was examined, which indicates that the small-scale amplitude/frequency leads the dissipative scales.
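The scale decomposition and amplitude-modulation diagnostic described above can be sketched as follows. This is a minimal illustration in the spirit of standard amplitude-modulation analyses, applied to a synthetic signal; the cutoff frequency, filter order, and signal parameters are assumptions, not the authors' exact processing.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def am_coefficient(u, fs, f_cut):
    """Amplitude-modulation coefficient: correlation between the large-scale
    velocity fluctuations and the (low-pass filtered) envelope of the
    small-scale fluctuations."""
    u = u - u.mean()
    b, a = butter(4, f_cut / (fs / 2), btype="low")
    u_L = filtfilt(b, a, u)                    # large-scale component
    u_S = u - u_L                              # small-scale component
    env = np.abs(hilbert(u_S))                 # small-scale amplitude envelope
    env_L = filtfilt(b, a, env - env.mean())   # large-scale part of the envelope
    return np.corrcoef(u_L, env_L)[0, 1]

# Synthetic modulated signal: small scales whose amplitude follows the large scale
fs = 10_000
t = np.arange(0, 10, 1e-4)
large = np.sin(2 * np.pi * 2 * t)
small = (1 + 0.5 * large) * np.sin(2 * np.pi * 400 * t)
R = am_coefficient(large + small, fs, f_cut=50.0)
print(round(R, 2))  # close to 1 for a positively modulated signal
```

A negative coefficient would instead indicate that large-scale excursions suppress small-scale activity, as observed away from the wall in unperturbed boundary layers.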
Field-aligned currents' scale analysis performed with the Swarm constellation
NASA Astrophysics Data System (ADS)
Lühr, Hermann; Park, Jaeheung; Gjerloev, Jesper W.; Rauberg, Jan; Michaelis, Ingo; Merayo, Jose M. G.; Brauer, Peter
2015-01-01
We present a statistical study of the temporal- and spatial-scale characteristics of different field-aligned current (FAC) types derived with the Swarm satellite formation. We divide FACs into two classes: small-scale FACs, up to some 10 km, which are carried predominantly by kinetic Alfvén waves, and large-scale FACs with sizes of more than 150 km. For determining temporal variability we consider measurements at the same point, the orbital crossovers near the poles, but at different times. From correlation analysis we obtain a persistence period of small-scale FACs of order 10 s, while large-scale FACs can be regarded as stationary for more than 60 s. For the first time we investigate the longitudinal scales. Large-scale FACs are different on the dayside and nightside. On the nightside the longitudinal extension is on average 4 times the latitudinal width, while on the dayside, particularly in the cusp region, latitudinal and longitudinal scales are comparable.
Downscaling ocean conditions: Experiments with a quasi-geostrophic model
NASA Astrophysics Data System (ADS)
Katavouta, A.; Thompson, K. R.
2013-12-01
The predictability of small-scale ocean variability, given the time history of the associated large scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large scales in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited-area atmospheric models is discussed. Results from spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach significantly improved the recovery of the small scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large scales have been set correctly. The application of hybrid and spectral nudging to practical ocean forecasting, and to projecting changes in ocean conditions on climate time scales, is discussed briefly.
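The large-scale assimilation idea can be illustrated with a minimal spectral-nudging step: only Fourier modes below a cutoff wavenumber are relaxed toward the driving field, leaving the small scales free to evolve. This is an illustrative sketch on toy fields; the cutoff, relaxation time, and field shapes are assumptions, not the paper's quasi-geostrophic setup.

```python
import numpy as np

def spectral_nudge(model, driver, k_max, dt, tau):
    """Relax only the large-scale (low-wavenumber) Fourier modes of a 2-D
    model field toward the driving field; smaller scales are untouched."""
    fm, fd = np.fft.fft2(model), np.fft.fft2(driver)
    kx = np.fft.fftfreq(model.shape[0]) * model.shape[0]
    ky = np.fft.fftfreq(model.shape[1]) * model.shape[1]
    K = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    mask = K <= k_max                        # large-scale modes only
    fm[mask] += (dt / tau) * (fd[mask] - fm[mask])
    return np.real(np.fft.ifft2(fm))

# Toy example: the model has a biased large scale plus its own small scale;
# nudging corrects the large scale while preserving the small scale.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
driver = np.sin(X)                               # large-scale "truth" (k = 1)
model = 2.0 * np.sin(X) + 0.1 * np.sin(8 * X)    # biased k = 1, free k = 8
for _ in range(200):
    model = spectral_nudge(model, driver, k_max=4, dt=1.0, tau=10.0)
err = np.abs(model - (np.sin(X) + 0.1 * np.sin(8 * X))).max()
print(err < 1e-3)
```

In a real downscaling application the nudging term is added to the model tendencies each time step rather than applied as a post-hoc filter, but the scale selectivity is the same.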
Large-Scale Coronal Heating from the Solar Magnetic Network
NASA Technical Reports Server (NTRS)
Falconer, David A.; Moore, Ronald L.; Porter, Jason G.; Hathaway, David H.
1999-01-01
In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular. In Falconer et al. 1998 (ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. The emission of the coronal network and bright points contributes only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the large-scale corona, the supergranular and larger-scale structure that we had previously treated as a background, and that emits 95% of the total Fe XII emission. We compare the dim and bright halves of the large-scale corona and find that the bright half is 1.5 times brighter than the dim half, has an order of magnitude greater area of bright point coverage, has three times brighter coronal network, and has about 1.5 times more magnetic flux than the dim half. These results suggest that the brightness of the large-scale corona is more closely related to the large-scale total magnetic flux than to bright point activity. We conclude that in the quiet Sun: (1) Magnetic flux is modulated (concentrated/diluted) on size scales larger than supergranules. (2) The large-scale enhanced magnetic flux gives an enhanced, more active magnetic network and an increased incidence of network bright point formation. (3) The heating of the large-scale corona is dominated by more widespread, but weaker, network activity than that which heats the bright points. This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.
Generalization of Turbulent Pair Dispersion to Large Initial Separations
NASA Astrophysics Data System (ADS)
Shnapp, Ron; Liberzon, Alex; International Collaboration for Turbulence Research
2018-06-01
We present a generalization of turbulent pair dispersion to large initial separations (η
NASA Technical Reports Server (NTRS)
Le, Guan; Wang, Yongli; Slavin, James A.; Strangeway, Robert J.
2007-01-01
Space Technology 5 (ST5) is a three micro-satellite constellation deployed into a 300 x 4500 km, dawn-dusk, sun-synchronous polar orbit from March 22 to June 21, 2006, for technology validations. In this paper, we present a study of the temporal variability of field-aligned currents using multi-point magnetic field measurements from ST5. The data demonstrate that meso-scale current structures are commonly embedded within large-scale field-aligned current sheets. The meso-scale current structures are very dynamic, with highly variable current density and/or polarity on time scales of approximately 10 min. They exhibit large temporal variations during both quiet and disturbed times on such time scales. On the other hand, the data also show that the time scales for the currents to be relatively stable are approximately 1 min for meso-scale currents and approximately 10 min for large-scale current sheets. These temporal features are evidently associated with dynamic variations of their particle carriers (mainly electrons) as they respond to variations of the parallel electric field in the auroral acceleration region. The characteristic time scales for the temporal variability of meso-scale field-aligned currents are found to be consistent with those of the auroral parallel electric field.
Large-Scale Coronal Heating from "Cool" Activity in the Solar Magnetic Network
NASA Technical Reports Server (NTRS)
Falconer, D. A.; Moore, R. L.; Porter, J. G.; Hathaway, D. H.
1999-01-01
In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular (large-scale corona). In Falconer et al. 1998 (ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. Taken together, the coronal network emission and bright point emission are only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the relationship between the large-scale corona and the network as seen in three different EIT filters (He II, Fe IX-X, and Fe XII). Using the median-brightness contour, we divide the large-scale Fe XII corona into dim and bright halves, and find that the bright-half/dim-half brightness ratio is about 1.5. We also find that the bright half relative to the dim half has 10 times greater total bright point Fe XII emission, 3 times greater Fe XII network emission, 2 times greater Fe IX-X network emission, 1.3 times greater He II network emission, and 1.5 times more magnetic flux. Also, the cooler network (He II) radiates an order of magnitude more energy than the hotter coronal network (Fe IX-X, and Fe XII). From these results we infer that: 1) The heating of the network and the heating of the large-scale corona each increase roughly linearly with the underlying magnetic flux. 2) The production of network coronal bright points and heating of the coronal network each increase nonlinearly with the magnetic flux. 3) The heating of the large-scale corona is driven by widespread cooler network activity rather than by the exceptional network activity that produces the network coronal bright points and the coronal network. 4) The large-scale corona is heated by a nonthermal process since the driver of its heating is cooler than it is.
This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.
NASA Astrophysics Data System (ADS)
Choi, N.; Lee, M. I.; Lim, Y. K.; Kim, K. M.
2017-12-01
A heatwave is an extreme hot-weather event that can cause fatal damage to human health. Heatwave occurrence has a strong relationship with large-scale atmospheric teleconnection patterns. In this study, we examine the spatial pattern of heatwaves in East Asia by using EOF analysis, together with the relationship between heatwave frequency and large-scale atmospheric teleconnection patterns. We also separate the variability of heatwave frequency into a time scale longer than a decade and the interannual time scale. The long-term variation of heatwave frequency in East Asia shows a linkage with sea surface temperature (SST) variability over the North Atlantic on a decadal time scale (a.k.a. the Atlantic Multidecadal Oscillation; AMO). On the other hand, the interannual variation of heatwave frequency is linked with two dominant spatial patterns associated with large-scale teleconnection patterns mimicking the Scandinavian teleconnection (SCAND-like) pattern and the circumglobal teleconnection (CGT-like) pattern, respectively. It is highlighted that the interannual variation of heatwave frequency in East Asia shows a remarkable change after the mid-1990s. While heatwave frequency was mainly associated with the CGT-like pattern before the mid-1990s, the SCAND-like pattern becomes the most dominant one after the mid-1990s, with the CGT-like pattern second. This study implies that large-scale atmospheric teleconnection patterns play a key role in developing heatwave events in East Asia. This study further discusses possible mechanisms for the decadal change in the linkage between heatwave frequency and large-scale teleconnection patterns in East Asia, such as early melting of snow cover and/or weakening of the East Asian jet stream due to global warming.
Hieu, Nguyen Trong; Brochier, Timothée; Tri, Nguyen-Huu; Auger, Pierre; Brehmer, Patrice
2014-09-01
We consider a fishery model with two sites: (1) a marine protected area (MPA) where fishing is prohibited and (2) an area where the fish population is harvested. We assume that fish can migrate from the MPA to the fishing area at a very fast time scale, and that the fish spatial organisation can change from small to large school clusters at a fast time scale. The growth of the fish population and the catch are assumed to occur at a slow time scale. The complete model is a system of five ordinary differential equations with three time scales. We take advantage of the time scales, using aggregation-of-variables methods, to derive a reduced model governing the total fish density and fishing effort at the slow time scale. We analyze this aggregated model and show that, under some conditions, there exists an equilibrium corresponding to a sustainable fishery. Our results suggest that in small pelagic fisheries the yield is maximum when the fish population is distributed among both small and large school clusters.
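The kind of slow-time-scale dynamics such an aggregated model governs can be illustrated with a classic Gordon-Schaefer-type bioeconomic sketch for total fish density and fishing effort. This is NOT the paper's five-ODE system or its reduction; the accessible-fraction parameter and all numerical values below are assumptions for illustration only.

```python
# Illustrative Gordon-Schaefer-type aggregated model: total fish density n and
# fishing effort E at the slow time scale, with a fraction `a` of the stock
# accessible to fishing outside the MPA. All parameters are placeholders.
r, K, q, p, c, a = 1.0, 1.0, 1.0, 2.0, 0.5, 0.6

def rhs(n, E):
    dn = r * n * (1 - n / K) - q * a * E * n   # logistic growth minus catch
    dE = E * (p * q * a * n - c)               # effort follows profitability
    return dn, dE

# Forward-Euler integration toward the sustainable-fishery equilibrium
n, E, dt = 0.8, 0.2, 0.01
for _ in range(200_000):
    dn, dE = rhs(n, E)
    n, E = n + dt * dn, E + dt * dE

# Analytic interior equilibrium: n* = c/(p q a), E* = r (1 - n*/K)/(q a)
n_star = c / (p * q * a)
E_star = r * (1 - n_star / K) / (q * a)
print(abs(n - n_star) < 1e-3 and abs(E - E_star) < 1e-3)
```

The interior equilibrium here is the analogue of the sustainable-fishery equilibrium whose existence the paper establishes under conditions on its (richer) aggregated model.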
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong
2016-07-01
In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aerial Vehicles (UAVs), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with the conventional algorithm is extremely time- and memory-consuming owing to the very large normal matrix arising from large-scale data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system in order to deal with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, eight real datasets are used to test the proposed method. Preliminary results have shown that the BSMC method can efficiently decrease the time and memory requirements of large-scale data.
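The PCG iteration at the core of such a solver can be sketched as follows. The block-Jacobi preconditioner on a small dense SPD matrix stands in for the per-camera/per-point block structure of a real bundle-adjustment normal matrix; it is not the paper's BSMC storage format.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for an SPD system A x = b;
    M_inv applies the preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD "normal matrix" with 3x3 diagonal blocks, standing in for the
# per-unknown blocks of a bundle-adjustment normal equation.
rng = np.random.default_rng(0)
n, bs = 30, 3
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)            # well-conditioned SPD matrix
blocks = [np.linalg.inv(A[i:i + bs, i:i + bs]) for i in range(0, n, bs)]

def block_jacobi(r):
    """Apply the inverse of the block diagonal of A."""
    return np.concatenate([blocks[i // bs] @ r[i:i + bs] for i in range(0, n, bs)])

b = rng.standard_normal(n)
x = pcg(A, b, block_jacobi)
print(np.allclose(A @ x, b, atol=1e-6))
```

The memory advantage in the paper comes from never forming the full normal matrix: only the nonzero blocks are stored, and PCG needs A only through matrix-vector products.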
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability, and an adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared with those of MIKE21 show the strong performance of the proposed model.
Space Technology 5 Multi-Point Observations of Temporal Variability of Field-Aligned Currents
NASA Technical Reports Server (NTRS)
Le, Guan; Wang, Yongli; Slavin, James A.; Strangeway, Robert J.
2008-01-01
Space Technology 5 (ST5) is a three micro-satellite constellation deployed into a 300 x 4500 km, dawn-dusk, sun-synchronous polar orbit from March 22 to June 21, 2006, for technology validations. In this paper, we present a study of the temporal variability of field-aligned currents using multi-point magnetic field measurements from ST5. The data demonstrate that meso-scale current structures are commonly embedded within large-scale field-aligned current sheets. The meso-scale current structures are very dynamic, with highly variable current density and/or polarity on time scales of approximately 10 min. They exhibit large temporal variations during both quiet and disturbed times on such time scales. On the other hand, the data also show that the time scales for the currents to be relatively stable are approximately 1 min for meso-scale currents and approximately 10 min for large-scale current sheets. These temporal features are evidently associated with dynamic variations of their particle carriers (mainly electrons) as they respond to variations of the parallel electric field in the auroral acceleration region. The characteristic time scales for the temporal variability of meso-scale field-aligned currents are found to be consistent with those of the auroral parallel electric field.
Effect of helicity on the correlation time of large scales in turbulent flows
NASA Astrophysics Data System (ADS)
Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne
2017-11-01
Solutions of the forced Navier-Stokes equation have been conjectured to thermalize at scales larger than the forcing scale, similar to the absolute equilibrium obtained for the spectrally truncated Euler equation. Using direct numerical simulations of Taylor-Green flows and general-periodic helical flows, we present results on the probability density function, energy spectrum, autocorrelation function, and correlation time that compare the two systems. In the case of highly helical flows, we derive an analytic expression describing the correlation time for the absolute equilibrium of helical flows that is different from the E^(-1/2) k^(-1) scaling law of weakly helical flows. This model predicts a new helicity-based scaling law for the correlation time, τ(k) ~ H^(-1/2) k^(-1/2). This scaling law is verified in simulations of the truncated Euler equation. In simulations of the Navier-Stokes equations, the large-scale modes of forced Taylor-Green symmetric flows (with zero total helicity and large separation of scales) follow the same properties as absolute equilibrium, including a τ(k) ~ E^(-1/2) k^(-1) scaling for the correlation time. General-periodic helical flows also show similarities between the two systems; however, the largest scales of the forced flows deviate from the absolute equilibrium solutions.
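Both correlation-time scalings quoted above are dimensionally consistent, which can be checked directly using energy and helicity per unit mass:

```latex
% Dimensional check of the two correlation-time scalings.
% Per unit mass: [E] = L^2 T^{-2}, [H] = [u \cdot \omega] = L\,T^{-2},
% and wavenumber [k] = L^{-1}:
\[
  \left[ E^{-1/2} k^{-1} \right]
    = \left( L^{2} T^{-2} \right)^{-1/2} L = T ,
  \qquad
  \left[ H^{-1/2} k^{-1/2} \right]
    = \left( L\, T^{-2} \right)^{-1/2} L^{1/2} = T .
\]
```

Dimensional analysis alone cannot choose between the two laws; the paper's point is that the helicity-based form takes over when the flow is strongly helical.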
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grest, Gary S.
2017-09-01
Coupled length and time scales determine the dynamic behavior of polymers and polymer nanocomposites and underlie their unique properties. To resolve these properties over large time and length scales, it is imperative to develop coarse-grained models which retain atomistic specificity. Here we probe the degree of coarse graining required to simultaneously retain significant atomistic detail and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics with different numbers of methylene groups per coarse-grained bead. Using these models, we simulate polyethylene melts for times over 500 ms to study the viscoelastic properties of well-entangled polymer melts and large nanoparticle assembly as the nanoparticles are driven close enough to form nanostructures.
Transition from large-scale to small-scale dynamo.
Ponty, Y; Plunian, F
2011-04-15
The dynamo equations are solved numerically with a helical forcing corresponding to the Roberts flow. In the fully turbulent regime the flow behaves as a Roberts flow on long time scales, plus turbulent fluctuations at short time scales. The dynamo onset is controlled by the long time scales of the flow, in agreement with the former Karlsruhe experimental results. The dynamo mechanism is governed by a generalized α effect, which includes both the usual α effect and turbulent diffusion, plus all higher-order effects. Beyond the onset we find that this generalized α effect scales as O(Rm^(-1)), suggesting the takeover of small-scale dynamo action. This is confirmed by simulations in which dynamo action occurs even if the large-scale field is artificially suppressed.
Space Technology 5 (ST-5) Observations of Field-Aligned Currents: Temporal Variability
NASA Technical Reports Server (NTRS)
Le, Guan
2010-01-01
Space Technology 5 (ST-5) is a three micro-satellite constellation deployed into a 300 x 4500 km, dawn-dusk, sun-synchronous polar orbit from March 22 to June 21, 2006, for technology validations. In this paper, we present a study of the temporal variability of field-aligned currents using multi-point magnetic field measurements from ST-5. The data demonstrate that meso-scale current structures are commonly embedded within large-scale field-aligned current sheets. The meso-scale current structures are very dynamic, with highly variable current density and/or polarity on time scales of about 10 min. They exhibit large temporal variations during both quiet and disturbed times on such time scales. On the other hand, the data also show that the time scales for the currents to be relatively stable are about 1 min for meso-scale currents and about 10 min for large-scale current sheets. These temporal features are evidently associated with dynamic variations of their particle carriers (mainly electrons) as they respond to variations of the parallel electric field in the auroral acceleration region. The characteristic time scales for the temporal variability of meso-scale field-aligned currents are found to be consistent with those of the auroral parallel electric field.
NASA Astrophysics Data System (ADS)
Wohlmuth, Johannes; Andersen, Jørgen Vitting
2006-05-01
We use agent-based models to study the competition among investors who use trading strategies with different amounts of information and different time scales. We find that mixing agents that trade on the same time scale but with different amounts of information has a stabilizing impact on the large and extreme fluctuations of the market. Traders with the most information are found to be more likely to arbitrage traders who use less information in their decision making. On the other hand, introducing investors who act on two different time scales has a destabilizing effect on the large and extreme price movements, increasing the volatility of the market. Closeness of the time scales used in decision making is found to facilitate the creation of local trends. The larger the overlap in commonly shared information, the more the traders in a mixed system with different time scales profit from the presence of traders acting on another time scale than their own.
NASA Astrophysics Data System (ADS)
Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas
2010-05-01
In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should in principle tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields. However, because the driving large-scale fields are generally available at a much lower frequency than the model time step (e.g., 6-hourly analyses), with basic interpolation between fields, the optimum nudging time differs from zero while remaining smaller than the predictability time.
Large-Scale Coherent Vortex Formation in Two-Dimensional Turbulence
NASA Astrophysics Data System (ADS)
Orlov, A. V.; Brazhnikov, M. Yu.; Levchenko, A. A.
2018-04-01
The evolution of a vortex flow excited by an electromagnetic technique in a thin layer of a conducting liquid was studied experimentally. Small-scale vortices, excited at the pumping scale, merge with time due to the nonlinear interaction and produce large-scale structures: the inverse energy cascade is formed. The energy spectrum in the developed inverse cascade is well described by the Kraichnan law k^(-5/3). At large scales, the inverse cascade is limited by the cell size, and a large-scale coherent vortex flow is formed which occupies almost the entire area of the experimental cell. The radial profile of the azimuthal velocity of the coherent vortex immediately after the pumping was switched off has been established for the first time. Inside the vortex core, the azimuthal velocity grows linearly with radius and reaches a constant value outside the core, which agrees well with the theoretical prediction.
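The reported profile shape (solid-body rotation inside the core, constant azimuthal velocity outside) can be written as a simple piecewise function. The core radius and peak velocity below are placeholders, not the measured values.

```python
import numpy as np

def v_theta(r, r_core, v_max):
    """Azimuthal velocity profile as described above: linear growth
    (solid-body rotation) inside the core, constant outside."""
    return np.where(r < r_core, v_max * r / r_core, v_max)

r = np.linspace(0.0, 2.0, 201)
v = v_theta(r, r_core=0.5, v_max=1.0)
print(v[0], v[-1])  # zero at the centre, v_max beyond the core edge
```

Note that a constant v_theta outside the core differs from the 1/r decay of a point vortex, which is what makes the measured profile a nontrivial check of coherent-vortex theory.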
The observation of possible reconnection events in the boundary changes of solar coronal holes
NASA Technical Reports Server (NTRS)
Kahler, S. W.; Moses, J. Daniel
1989-01-01
Coronal holes are large scale regions of magnetically open fields which are easily observed in solar soft X-ray images. The boundaries of coronal holes are separatrices between large scale regions of open and closed magnetic fields where one might expect to observe evidence of solar magnetic reconnection. Previous studies by Nolte and colleagues using Skylab X-ray images established that large scale (greater than or equal to 9 × 10^4 km) changes in coronal hole boundaries were due to coronal processes, i.e., magnetic reconnection, rather than to photospheric motions. Those studies were limited to time scales of about one day, and no conclusion could be drawn about the size and time scales of the reconnection process at hole boundaries. Sequences of appropriate Skylab X-ray images were used with a time resolution of about 90 min during times of the central meridian passages of the coronal hole labelled Coronal Hole 1 to search for hole boundary changes which can yield the spatial and temporal scales of coronal magnetic reconnection. It was found that 29 of 32 observed boundary changes could be associated with bright points. The appearance of the bright point may be the signature of reconnection between small scale and large scale magnetic fields. The observed boundary changes contributed to the quasi-rigid rotation of Coronal Hole 1.
Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement
ERIC Educational Resources Information Center
Zheng, Xiaohui
2009-01-01
The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…
NASA Astrophysics Data System (ADS)
Matsuzaki, F.; Yoshikawa, N.; Tanaka, M.; Fujimaki, A.; Takai, Y.
2003-10-01
Recently, many single flux quantum (SFQ) logic circuits containing several thousand Josephson junctions have been designed successfully by using digital-domain simulation based on a hardware description language (HDL). In the present HDL-based design of SFQ circuits, a structure-level HDL description has been used, where circuits are made up of basic gate cells. However, in order to analyze large-scale SFQ digital systems, such as a microprocessor, a higher level of circuit abstraction is necessary to reduce the circuit simulation time. In this paper we have investigated a way to describe the functionality of large-scale SFQ digital circuits with a behavior-level HDL description. In this method, the functionality and the timing of a circuit block are defined directly by describing their behavior in the HDL. Using this method, we can dramatically reduce the simulation time of large-scale SFQ digital circuits.
Reynolds number trend of hierarchies and scale interactions in turbulent boundary layers.
Baars, W J; Hutchins, N; Marusic, I
2017-03-13
Small-scale velocity fluctuations in turbulent boundary layers are often coupled with the larger-scale motions. Studying the nature and extent of this scale interaction allows for a statistically representative description of the small scales over a time scale of the larger, coherent scales. In this study, we consider temporal data from hot-wire anemometry at Reynolds numbers ranging from Re τ ≈2800 to 22 800, in order to reveal how the scale interaction varies with Reynolds number. Large-scale conditional views of the representative amplitude and frequency of the small-scale turbulence, relative to the large-scale features, complement the existing consensus on large-scale modulation of the small-scale dynamics in the near-wall region. Modulation is a type of scale interaction, where the amplitude of the small-scale fluctuations is continuously proportional to the near-wall footprint of the large-scale velocity fluctuations. Aside from this amplitude modulation phenomenon, we reveal the influence of the large-scale motions on the characteristic frequency of the small scales, known as frequency modulation. From the wall-normal trends in the conditional averages of the small-scale properties, it is revealed how the near-wall modulation transitions to an intermittent-type scale arrangement in the log-region. On average, the amplitude of the small-scale velocity fluctuations only deviates from its mean value in a confined temporal domain, the duration of which is fixed in terms of the local Taylor time scale. These concentrated temporal regions are centred on the internal shear layers of the large-scale uniform momentum zones, which exhibit regions of positive and negative streamwise velocity fluctuations. 
With an increasing scale separation at high Reynolds numbers, this interaction pattern encompasses the features found in studies on internal shear layers and concentrated vorticity fluctuations in high-Reynolds-number wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
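The decomposition-plus-envelope diagnostic described above can be sketched compactly. The following is a minimal illustration, not the authors' processing chain: the sharp spectral cutoff, the FFT-based Hilbert envelope, and the synthetic test signal are all assumptions made for the example.

```python
import numpy as np

def fft_filter(u, dt, f_cut, keep="low"):
    """Split a time series at cutoff frequency f_cut with a sharp spectral filter."""
    U = np.fft.rfft(u)
    f = np.fft.rfftfreq(len(u), d=dt)
    mask = (f <= f_cut) if keep == "low" else (f > f_cut)
    return np.fft.irfft(U * mask, n=len(u))

def envelope(u):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(u)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(np.fft.fft(u) * h))

def amplitude_modulation_coeff(u, dt, f_cut):
    """Correlation between the large-scale signal and the low-passed envelope
    of the small-scale signal: one common amplitude-modulation diagnostic."""
    uL = fft_filter(u, dt, f_cut, "low")
    uS = fft_filter(u, dt, f_cut, "high")
    envL = fft_filter(envelope(uS), dt, f_cut, "low")
    uL = uL - uL.mean()
    envL = envL - envL.mean()
    return float(uL @ envL / (np.linalg.norm(uL) * np.linalg.norm(envL)))
```

For a synthetic signal whose small-scale carrier is amplitude-modulated by the large-scale component, the coefficient approaches one; an unmodulated superposition gives a much smaller value.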
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosa, B.; Parishani, H.
2015-01-15
In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 and less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with the region of high energy dissipation rate.
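In the Eswaran-Pope scheme, the forcing time scale enters as the correlation time of an Ornstein-Uhlenbeck process that drives the low-wavenumber Fourier modes. A minimal sketch of that update rule follows; the Euler discretization, parameter names, and values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ou_forcing(n_modes, dt, t_f, sigma2, steps, seed=0):
    """Ornstein-Uhlenbeck update for complex forcing coefficients b_k with
    correlation time t_f and stationary variance E|b_k|^2 ~ sigma2, in the
    spirit of the Eswaran-Pope large-scale forcing scheme."""
    rng = np.random.default_rng(seed)
    b = np.zeros(n_modes, complex)
    out = np.empty((steps, n_modes), complex)
    for i in range(steps):
        noise = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)
        # relax toward zero over t_f, replenish with white noise
        b = b * (1.0 - dt / t_f) + np.sqrt(sigma2 * dt / t_f) * noise
        out[i] = b
    return out
```

The stationary variance is set by sigma2 regardless of t_f, while t_f alone controls how slowly the forcing decorrelates, which is precisely the knob whose effect the study isolates.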
Scaling properties of the Arctic sea ice Deformation from Buoy Dispersion Analysis
NASA Astrophysics Data System (ADS)
Weiss, J.; Rampal, P.; Marsan, D.; Lindsay, R.; Stern, H.
2007-12-01
A temporal and spatial scaling analysis of Arctic sea ice deformation is performed over time scales from 3 hours to 3 months and over spatial scales from 300 m to 300 km. The deformation is derived from the dispersion of pairs of drifting buoys, using the IABP (International Arctic Buoy Program) buoy data sets. This study characterizes the deformation of a very large solid plate (the Arctic sea ice cover) stressed by heterogeneous forcing terms such as winds and ocean currents. It shows that the sea ice deformation rate depends on the scales of observation following specific space and time scaling laws. These scaling properties share similarities with those observed for turbulent fluids, especially for the ocean and the atmosphere. However, in our case, the time scaling exponent depends on the spatial scale, and the spatial exponent on the temporal scale, which implies a time/space coupling. An analysis of the exponent values shows that Arctic sea ice deformation is very heterogeneous and intermittent whatever the scales, i.e. it cannot be considered as viscous-like, even at very large time and/or spatial scales. Instead, it suggests a deformation accommodated by multi-scale fracturing/faulting processes.
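The space scaling law quoted above corresponds to a power-law fit of the mean deformation rate against the spatial scale of observation. A minimal sketch of extracting the exponent (synthetic data, with an assumed exponent value, for illustration only):

```python
import numpy as np

def scaling_exponent(scales, rates):
    """Fit rate ~ scale**(-beta) by least squares in log-log space and
    return the scaling exponent beta."""
    slope, _ = np.polyfit(np.log(scales), np.log(rates), 1)
    return -slope
```

The same fit applied along the temporal axis, one spatial scale at a time, is what reveals the time/space coupling reported in the abstract.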
NASA Astrophysics Data System (ADS)
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool for studying them, by simulating a large number of copies of the system that are subjected to a selection rule favoring the rare trajectories of interest. Such algorithms are, however, plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings to propose a numerical approach that allows one to extract the infinite-time and infinite-size limit of these estimators.
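The population-dynamics idea can be illustrated on a toy model. The sketch below is a simplified discrete-time variant, not the continuous-time cloning algorithm analyzed in the paper; the two-state Markov chain and all parameter values are assumptions. It estimates a scaled cumulant generating function by repeatedly propagating, weighting, and resampling a population of clones.

```python
import numpy as np

def cloning_scgf(s, P, x, T=3000, N=2000, seed=1):
    """Population-dynamics (cloning) estimate of the scaled cumulant generating
    function psi(s) = lim_T (1/T) ln E[exp(s * sum_t x[state_t])] for a
    discrete-time Markov chain with transition matrix P and observable x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    P = np.asarray(P, float)
    states = rng.integers(0, len(x), size=N)
    log_growth = 0.0
    for _ in range(T):
        # advance every clone one step of the chain
        u = rng.random(N)
        cum = np.cumsum(P[states], axis=1)
        states = (u[:, None] < cum).argmax(axis=1)
        # exponential bias, then resample clones in proportion to their weights
        w = np.exp(s * x[states])
        log_growth += np.log(w.mean())
        states = states[rng.choice(N, size=N, p=w / w.sum())]
    return log_growth / T
```

The finite-T and finite-N errors of exactly this kind of estimator are what the paper's scaling analysis quantifies and extrapolates away.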
An interactive display system for large-scale 3D models
NASA Astrophysics Data System (ADS)
Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman
2018-04-01
With the improvement of 3D reconstruction theory and the rapid development of computer hardware, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing-power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal/external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
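View-dependent refinement in LOD rendering typically compares a node's geometric error, projected to screen space, against a pixel tolerance. A minimal sketch of that test follows; the projection formula and the default parameters are common conventions assumed for illustration, not taken from the paper.

```python
import math

def screen_space_error(geometric_error, distance, fov_deg, viewport_height_px):
    """Project a node's geometric error (world units) to pixels at a given
    viewing distance, assuming a symmetric perspective frustum."""
    return (geometric_error * viewport_height_px
            / (2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)))

def should_refine(geometric_error, distance, fov_deg=60.0,
                  viewport_height_px=1080, tolerance_px=1.0):
    """Refine (load a finer LOD) when the projected error exceeds the pixel
    tolerance; coarser levels suffice for distant nodes."""
    return screen_space_error(geometric_error, distance,
                              fov_deg, viewport_height_px) > tolerance_px
```

Because the projected error shrinks with distance, distant parts of the scene stay at coarse levels and can remain out of core, which is what keeps memory use bounded while roaming.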
Large-scale machine learning and evaluation platform for real-time traffic surveillance
NASA Astrophysics Data System (ADS)
Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel
2016-09-01
In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale, high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for automatic traffic measurement and classification. The proposed positive and negative mining process addresses the quality of crowdsourced ground-truth data through machine-learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud-computing framework to handle the data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for 1/2 and about 78% for 19/20 of the time when tested on ˜7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.
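Haar-like features can be evaluated in constant time from an integral image, which is what makes extracting a million features per detector window tractable. A minimal sketch of the mechanism (not the paper's implementation):

```python
import numpy as np

def integral_image(img):
    """Summed-area table, padded with a zero row/column so window sums
    need no boundary checks."""
    ii = np.asarray(img, float).cumsum(axis=0).cumsum(axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the padded integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_two_rect(ii, r0, c0, h, w):
    """Two-rectangle (left minus right) Haar-like feature of an h x 2w
    window with top-left corner (r0, c0)."""
    left = box_sum(ii, r0, c0, r0 + h, c0 + w)
    right = box_sum(ii, r0, c0 + w, r0 + h, c0 + 2 * w)
    return left - right
```

AdaBoost then selects, one weak learner at a time, the handful of such features whose thresholded responses best separate vehicles from background.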
Time-sliced perturbation theory for large scale structure I: general formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blas, Diego; Garny, Mathias; Sibiryakov, Sergey
2016-07-01
We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.
Effects of large-scale wind driven turbulence on sound propagation
NASA Technical Reports Server (NTRS)
Noble, John M.; Bass, Henry E.; Raspet, Richard
1990-01-01
Acoustic measurements made in the atmosphere show significant fluctuations in amplitude and phase resulting from interaction with time-varying meteorological conditions. The observed variations include short-term and long-term (1 to 5 minute) components, at least in the phase of the acoustic signal. One possible way to account for the long-term variation is a large-scale wind-driven turbulence model. From a Fourier analysis of the phase variations, the outer scales of the large-scale turbulence are 200 meters and greater, which corresponds to turbulence in the energy-containing subrange. The large-scale turbulence is assumed to consist of elongated longitudinal vortex pairs roughly aligned with the mean wind. Given the size of a vortex pair relative to the scale of the present experiment, its effect on the acoustic field can be modeled as a sound speed that varies with time. The model produces the same trends and variations in phase as observed experimentally.
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data, without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability for the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. A constant Froude number was applied as the scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on the large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even though there was some discrepancy in pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a large commercial manufacturing scale.
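The bootstrap-resampling step used to attach reliability to a response surface can be sketched generically. The quadratic surface, the percentile interval, and all data below are illustrative assumptions; the paper itself uses multivariate spline interpolation.

```python
import numpy as np

def bootstrap_prediction_interval(X, y, x_new, n_boot=300, alpha=0.1, seed=0):
    """Percentile-bootstrap interval for the prediction of a quadratic
    response surface y ~ 1 + x1 + x2 + x1*x2 + x1^2 + x2^2 at x_new."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    y = np.asarray(y, float)

    def design(Z):
        x1, x2 = Z[:, 0], Z[:, 1]
        return np.column_stack([np.ones(len(Z)), x1, x2, x1 * x2, x1**2, x2**2])

    a_new = design(np.atleast_2d(np.asarray(x_new, float)))
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))  # resample rows with replacement
        beta, *_ = np.linalg.lstsq(design(X[idx]), y[idx], rcond=None)
        preds.append((a_new @ beta).item())
    lo, hi = np.quantile(preds, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```

Points whose interval lies entirely inside the acceptable quality range would be admitted to the design space; the Bayesian correction in the paper then shifts these surfaces using the few large-scale runs.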
Quantifying Stock Return Distributions in Financial Markets
Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias
2015-01-01
Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power-law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593
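The tail analysis rests on computing logarithmic returns at a chosen time scale and estimating a tail exponent; the Hill estimator shown here is one standard choice, used for illustration rather than being necessarily the authors' estimator.

```python
import numpy as np

def log_returns(prices, lag=1):
    """Logarithmic returns at a given lag (the time scale, in samples)."""
    p = np.asarray(prices, float)
    return np.log(p[lag:]) - np.log(p[:-lag])

def hill_exponent(data, k):
    """Hill estimator of the tail index alpha from the k largest magnitudes."""
    x = np.sort(np.abs(np.asarray(data, float)))[::-1]
    return k / np.sum(np.log(x[:k] / x[k]))
```

A power-law tail with exponent alpha appears as a straight line of slope -alpha on a log-log survival plot; an exponential tail, as found at the longest time scales, bends steadily downward instead.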
Multiscale recurrence quantification analysis of order recurrence plots
NASA Astrophysics Data System (ADS)
Xu, Mengjia; Shang, Pengjian; Lin, Aijing
2017-03-01
In this paper, we propose a new method, multiscale recurrence quantification analysis (MSRQA), to analyze the structure of order recurrence plots. MSRQA is based on order patterns over a range of time scales. Compared with conventional recurrence quantification analysis (RQA), MSRQA reveals richer and more recognizable information on the local characteristics of diverse systems and successfully describes their recurrence properties. Both synthetic series and stock market indexes exhibit recurrence properties at large time scales that differ markedly from those at a single time scale. Some systems present more accurate recurrence patterns at large time scales. This demonstrates that the new approach is effective for distinguishing three similar stock market systems and for revealing some of their inherent differences.
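An order recurrence plot marks pairs of times whose length-m windows share the same ordinal pattern, and a multiscale variant repeats the analysis on coarse-grained copies of the series. A minimal sketch follows; coarse-graining by non-overlapping window means is an assumed convention, and the recurrence rate is only one of several RQA measures.

```python
import numpy as np

def ordinal_patterns(x, m):
    """Encode each length-m window by the permutation that sorts it."""
    win = np.lib.stride_tricks.sliding_window_view(np.asarray(x, float), m)
    return np.argsort(win, axis=1)

def order_recurrence_rate(x, m=3, scale=1):
    """Recurrence rate of the order recurrence plot after coarse-graining
    the series by non-overlapping window means."""
    x = np.asarray(x, float)
    n = len(x) // scale
    xs = x[:n * scale].reshape(n, scale).mean(axis=1)
    pats = ordinal_patterns(xs, m)
    # recurrence: two times share the same ordinal pattern
    eq = (pats[:, None, :] == pats[None, :, :]).all(axis=-1)
    return float(eq.mean())
```

Deterministic, strongly structured signals keep a high recurrence rate across scales, while white noise stays near the uniform-pattern baseline of 1/m!, which is the kind of contrast the method exploits.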
NASA Astrophysics Data System (ADS)
Brasseur, James G.; Juneja, Anurag
1996-11-01
Previous DNS studies indicate that small-scale structure can be directly altered through "distant" dynamical interactions by energetic forcing of the large scales. To remove the possibility of stimulating energy transfer between the large- and small-scale motions in these long-range interactions, we here perturb the large-scale structure without altering its energy content by suddenly altering only the phases of large-scale Fourier modes. Scale-dependent changes in turbulence structure appear as a nonzero difference field between two simulations from identical initial conditions of isotropic decaying turbulence, one perturbed and one unperturbed. We find that the large-scale phase perturbations leave the evolution of the energy spectrum virtually unchanged relative to the unperturbed turbulence. The difference field, on the other hand, is strongly affected by the perturbation. Most importantly, the time scale τ characterizing the change in turbulence structure at spatial scale r shortly after initiating a change in large-scale structure decreases with decreasing turbulence scale r. Thus, structural information is transferred directly from the large- to the smallest-scale motions in the absence of direct energy transfer, a long-range effect which cannot be explained by a linear mechanism such as rapid distortion theory. * Supported by ARO grant DAAL03-92-G-0117
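The perturbation used here, changing the phases of large-scale Fourier modes while leaving every amplitude (and hence the energy spectrum) untouched, can be sketched in one dimension. The study works with 3D velocity fields; this 1D version only illustrates the idea, and the cutoff wavenumber is an assumption.

```python
import numpy as np

def perturb_large_scale_phases(u, k_cut, seed=0):
    """Randomize the phases of Fourier modes with 0 < k <= k_cut while keeping
    every mode's amplitude unchanged (assumes k_cut well below the Nyquist
    index so the DC and Nyquist modes stay real)."""
    rng = np.random.default_rng(seed)
    U = np.fft.rfft(u)
    k = np.arange(len(U))
    sel = (k > 0) & (k <= k_cut)
    phase = rng.uniform(0.0, 2.0 * np.pi, len(U))
    U[sel] = np.abs(U[sel]) * np.exp(1j * phase[sel])
    return np.fft.irfft(U, n=len(u))
```

Because only phases move, the perturbed and unperturbed fields share an identical spectrum, so any subsequent divergence of the difference field must come from structural, not energetic, information.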
Resolving Dynamic Properties of Polymers through Coarse-Grained Computational Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salerno, K. Michael; Agrawal, Anupriya; Perahia, Dvora
2016-02-05
Coupled length and time scales determine the dynamic behavior of polymers and underlie their unique viscoelastic properties. To resolve the long-time dynamics it is imperative to determine which time and length scales must be correctly modeled. In this paper, we probe the degree of coarse graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using linear polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics. Iterative Boltzmann inversion is used to derive coarse-grained potentials with 2–6 methylene groups per coarse-grained bead from a fully atomistic melt simulation. We show that atomistic detail is critical to capturing large-scale dynamics. Finally, using these models we simulate polyethylene melts for times over 500 μs to study the viscoelastic properties of well-entangled polymer melts.
Using First Differences to Reduce Inhomogeneity in Radiosonde Temperature Datasets.
NASA Astrophysics Data System (ADS)
Free, Melissa; Angell, James K.; Durre, Imke; Lanzante, John; Peterson, Thomas C.; Seidel, Dian J.
2004-11-01
The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, and mean temperature time series are constructed from the first difference series. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade⁻¹ for 1960–97 at 500 mb. For a 50-station dataset, errors in trends in annual global means introduced by the first differencing procedure may be as large as 0.06 K decade⁻¹ (for six breaks per series), which is greater than the standard error of the trend. Although the first difference method offers significant resource and labor advantages over methods that attempt to adjust the data, it introduces an error in large-scale mean time series that may be unacceptable in some cases.
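The first-difference construction itself is short: difference each station series year-to-year, average the differences across stations, then integrate. A minimal sketch (one value per year per station; NaN marks the data dropped around suspected discontinuities; the two-station example is an assumption):

```python
import numpy as np

def first_difference_mean(series):
    """Combine station series (rows; NaN marks dropped data) into a large-scale
    mean anomaly: difference each station year-to-year, average the differences
    across stations, then integrate. The additive offset is arbitrary."""
    d = np.diff(np.asarray(series, float), axis=1)   # year-to-year changes
    mean_d = np.nanmean(d, axis=0)                   # large-scale mean difference
    return np.concatenate([[0.0], np.cumsum(mean_d)])
```

Each NaN removes two differences from one station, which is exactly the mechanism by which temporal gaps inject the random errors the study quantifies.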
NASA Astrophysics Data System (ADS)
Kenward, D. R.; Lessard, M.; Lynch, K. A.; Hysell, D. L.; Hampton, D. L.; Michell, R.; Samara, M.; Varney, R. H.; Oksavik, K.; Clausen, L. B. N.; Hecht, J. H.; Clemmons, J. H.; Fritz, B.
2017-12-01
The RENU2 sounding rocket (launched from the Andoya rocket range on December 13, 2015) observed Poleward Moving Auroral Forms within the dayside cusp. The ISINGLASS rockets (launched from the Poker Flat rocket range on February 22, 2017 and March 2, 2017) both observed aurora during a substorm event. Despite observing very different events, both campaigns witnessed a high degree of small-scale structuring within the larger auroral boundary, including Alfvenic signatures. These observations suggest a method of coupling large-scale energy input to fine-scale structures within aurorae. During RENU2, small (sub-km) scale drivers persisted for long (tens of minutes) time scales and resulted in a large-scale ionospheric (thermal electron) and thermospheric (neutral upwelling) response. ISINGLASS observations show small-scale drivers, but with short (minute) time scales, with the ionospheric response characterized by the flight's thermal electron instrument (ERPA). The comparison of the two flights provides an excellent opportunity to examine ionospheric and thermospheric responses to small-scale drivers over different integration times.
Large Scale Traffic Simulations
DOT National Transportation Integrated Search
1997-01-01
Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computation speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between t...
Viscous decay of nonlinear oscillations of a spherical bubble at large Reynolds number
NASA Astrophysics Data System (ADS)
Smith, W. R.; Wang, Q. X.
2017-08-01
The long-time viscous decay of large-amplitude bubble oscillations is considered in an incompressible Newtonian fluid, based on the Rayleigh-Plesset equation. At large Reynolds numbers, this is a multi-scaled problem with a short time scale associated with inertial oscillation and a long time scale associated with viscous damping. A multi-scaled perturbation method is thus employed to solve the problem. The leading-order analytical solution of the bubble radius history is obtained to the Rayleigh-Plesset equation in a closed form including both viscous and surface tension effects. Some important formulae are derived including the following: the average energy loss rate of the bubble system during each cycle of oscillation, an explicit formula for the dependence of the oscillation frequency on the energy, and an implicit formula for the amplitude envelope of the bubble radius as a function of the energy. Our theory shows that the energy of the bubble system and the frequency of oscillation do not change on the inertial time scale at leading order, the energy loss rate on the long viscous time scale being inversely proportional to the Reynolds number. These asymptotic predictions remain valid during each cycle of oscillation whether or not compressibility effects are significant. A systematic parametric analysis is carried out using the above formula for the energy of the bubble system, frequency of oscillation, and minimum/maximum bubble radii in terms of the Reynolds number, the dimensionless initial pressure of the bubble gases, and the Weber number. Our results show that the frequency and the decay rate have substantial variations over the lifetime of a decaying oscillation. The results also reveal that large-amplitude bubble oscillations are very sensitive to small changes in the initial conditions through large changes in the phase shift.
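The two-time-scale behavior analyzed above, fast inertial oscillation under a slow viscous decay envelope, can be seen by integrating the Rayleigh-Plesset equation directly. The sketch below uses classical fixed-step RK4 and water-like parameter values that are assumptions for illustration, not the paper's cases.

```python
import numpy as np

# Illustrative water-like parameters (assumed, not from the paper).
RHO, MU, SIGMA, P_INF, KAPPA = 998.0, 1.0e-3, 0.0725, 101325.0, 1.4

def rp_rhs(y, R0, pg0):
    """Rayleigh-Plesset right-hand side for the state y = (R, dR/dt)."""
    R, Rdot = y
    pg = pg0 * (R0 / R) ** (3 * KAPPA)            # polytropic gas pressure
    Rddot = ((pg - P_INF - 2 * SIGMA / R - 4 * MU * Rdot / R) / RHO
             - 1.5 * Rdot**2) / R
    return np.array([Rdot, Rddot])

def simulate_bubble(R0=10e-6, amp=1.2, t_end=30e-6, steps=20000):
    """Integrate an initially stretched bubble (R = amp*R0, at rest) with RK4
    and return the radius history."""
    pg0 = P_INF + 2 * SIGMA / R0                  # equilibrium gas pressure at R0
    y = np.array([amp * R0, 0.0])
    dt = t_end / steps
    Rs = np.empty(steps + 1)
    Rs[0] = y[0]
    for i in range(steps):
        k1 = rp_rhs(y, R0, pg0)
        k2 = rp_rhs(y + 0.5 * dt * k1, R0, pg0)
        k3 = rp_rhs(y + 0.5 * dt * k2, R0, pg0)
        k4 = rp_rhs(y + dt * k3, R0, pg0)
        y = y + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        Rs[i + 1] = y[0]
    return Rs
```

Over tens of microseconds the oscillation period stays nearly fixed while the envelope of the radius maxima shrinks, consistent with the paper's leading-order result that energy decays only on the slow viscous time scale.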
Flaxman, Abraham D; Stewart, Andrea; Joseph, Jonathan C; Alam, Nurul; Alam, Sayed Saidul; Chowdhury, Hafizur; Mooney, Meghan D; Rampatige, Rasika; Remolador, Hazel; Sanvictores, Diozele; Serina, Peter T; Streatfield, Peter Kim; Tallo, Veronica; Murray, Christopher J L; Hernandez, Bernardo; Lopez, Alan D; Riley, Ian Douglas
2018-02-01
There is increasing interest in using verbal autopsy to produce nationally representative population-level estimates of causes of death. However, the burden of processing a large quantity of surveys collected with paper and pencil has been a barrier to scaling up verbal autopsy surveillance. Direct electronic data capture has been used in other large-scale surveys and can be used in verbal autopsy as well, to reduce time and cost of going from collected data to actionable information. We collected verbal autopsy interviews using paper and pencil and using electronic tablets at two sites, and measured the cost and time required to process the surveys for analysis. From these cost and time data, we extrapolated costs associated with conducting large-scale surveillance with verbal autopsy. We found that the median time between data collection and data entry for surveys collected on paper and pencil was approximately 3 months. For surveys collected on electronic tablets, this was less than 2 days. For small-scale surveys, we found that the upfront costs of purchasing electronic tablets was the primary cost and resulted in a higher total cost. For large-scale surveys, the costs associated with data entry exceeded the cost of the tablets, so electronic data capture provides both a quicker and cheaper method of data collection. As countries increase verbal autopsy surveillance, it is important to consider the best way to design sustainable systems for data collection. Electronic data capture has the potential to greatly reduce the time and costs associated with data collection. For long-term, large-scale surveillance required by national vital statistical systems, electronic data capture reduces costs and allows data to be available sooner.
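The cost crossover described above reduces to comparing a fixed up-front tablet cost against a per-survey saving in data entry. A sketch of that break-even arithmetic, with purely hypothetical numbers:

```python
import math

def breakeven_surveys(fixed_tablet_cost, paper_cost_per_survey, tablet_cost_per_survey):
    """Smallest number of surveys n at which electronic capture is no more
    expensive than paper: fixed + n * c_tablet <= n * c_paper."""
    saving = paper_cost_per_survey - tablet_cost_per_survey
    if saving <= 0:
        return None  # per-survey costs do not favor tablets; no crossover
    return math.ceil(fixed_tablet_cost / saving)
```

Below the break-even count the tablets dominate total cost (the small-scale case in the study); above it the per-survey data-entry savings dominate, matching the finding that large-scale surveillance is both quicker and cheaper electronically.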
Stability of Rasch Scales over Time
ERIC Educational Resources Information Center
Taylor, Catherine S.; Lee, Yoonsun
2010-01-01
Item response theory (IRT) methods are generally used to create score scales for large-scale tests. Research has shown that IRT scales are stable across groups and over time. Most studies have focused on items that are dichotomously scored. Now Rasch and other IRT models are used to create scales for tests that include polytomously scored items.…
Liquidity crises on different time scales
NASA Astrophysics Data System (ADS)
Corradi, Francesco; Zaccaria, Andrea; Pietronero, Luciano
2015-12-01
We present an empirical analysis of the microstructure of financial markets and, in particular, of the static and dynamic properties of liquidity. We find that on relatively large time scales (15 min) large price fluctuations are connected to the failure of the subtle mechanism of compensation between the flows of market and limit orders: in other words, the missed revelation of the latent order book breaks the dynamical equilibrium between the flows, triggering the large price jumps. On smaller time scales (30 s), instead, the static depletion of the limit order book is an indicator of an intrinsic fragility of the system, which is related to a strongly nonlinear enhancement of the response. In order to quantify this phenomenon we introduce a measure of the liquidity imbalance present in the book and we show that it is correlated to both the sign and the magnitude of the next price movement. These findings provide a quantitative definition of the effective liquidity, which proves to be strongly dependent on the considered time scales.
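A liquidity-imbalance measure of the kind introduced above can be illustrated as a signed ratio of resting volumes near the best quotes. This is a common convention assumed for the example; the paper's exact measure may differ in detail.

```python
def book_imbalance(bids, asks, depth=5):
    """Signed liquidity imbalance over the top `depth` levels of a limit order
    book, each side given as (price, volume) pairs, best price first:
    +1 means all resting volume sits on the bid side, -1 on the ask side."""
    bid_vol = sum(v for _, v in bids[:depth])
    ask_vol = sum(v for _, v in asks[:depth])
    return (bid_vol - ask_vol) / (bid_vol + ask_vol)
```

A strongly negative value flags a depleted bid side, the static fragility discussed above: a modest market sell order can then walk deep into the book and trigger a disproportionate price move.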
NASA Astrophysics Data System (ADS)
Tiselj, Iztok
2014-12-01
Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that are within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large-scales. 
These large thermal structures represent a kind of echo of the large-scale velocity structures: the highest temperature-velocity correlations are observed not between the instantaneous temperatures and instantaneous streamwise velocities, but between the instantaneous temperatures and velocities averaged over a certain time interval.
Ultrafast carrier dynamics in the large-magnetoresistance material WTe2
Dai, Y. M.; Bowlan, J.; Li, H.; ...
2015-10-07
In this study, ultrafast optical pump-probe spectroscopy is used to track carrier dynamics in the large-magnetoresistance material WTe2. Our experiments reveal a fast relaxation process occurring on a subpicosecond time scale that is caused by electron-phonon thermalization, allowing us to extract the electron-phonon coupling constant. An additional slower relaxation process, occurring on a time scale of ~5–15 ps, is attributed to phonon-assisted electron-hole recombination. As the temperature decreases from 300 K, the time scale governing this process increases due to the reduction of the phonon population. However, below ~50 K, an unusual decrease of the recombination time sets in, most likely due to a change in the electronic structure that has been linked to the large magnetoresistance observed in this material.
Scaling properties of sea ice deformation from buoy dispersion analysis
NASA Astrophysics Data System (ADS)
Rampal, P.; Weiss, J.; Marsan, D.; Lindsay, R.; Stern, H.
2008-03-01
A temporal and spatial scaling analysis of Arctic sea ice deformation is performed over timescales from 3 h to 3 months and over spatial scales from 300 m to 300 km. The deformation is derived from the dispersion of pairs of drifting buoys, using the IABP (International Arctic Buoy Program) buoy data sets. This study characterizes the deformation of a very large solid plate (the Arctic sea ice cover) stressed by heterogeneous forcing terms like winds and ocean currents. It shows that the sea ice deformation rate depends on the scales of observation following specific space and time scaling laws. These scaling properties share similarities with those observed for turbulent fluids, especially for the ocean and the atmosphere. However, in our case, the time scaling exponent depends on the spatial scale, and the spatial exponent on the temporal scale, which implies a time/space coupling. An analysis of the exponent values shows that Arctic sea ice deformation is very heterogeneous and intermittent whatever the scales, i.e., it cannot be considered as viscous-like, even at very large time and/or spatial scales. Instead, it suggests a deformation accommodated by multiscale fracturing/faulting processes.
Black holes from large N singlet models
NASA Astrophysics Data System (ADS)
Amado, Irene; Sundborg, Bo; Thorlacius, Larus; Wintergerst, Nico
2018-03-01
The emergent nature of spacetime geometry and black holes can be directly probed in simple holographic duals of higher spin gravity and tensionless string theory. To this end, we study time dependent thermal correlation functions of gauge invariant observables in suitably chosen free large N gauge theories. At low temperature and on short time scales the correlation functions encode propagation through an approximate AdS spacetime while interesting departures emerge at high temperature and on longer time scales. This includes the existence of evanescent modes and the exponential decay of time dependent boundary correlations, both of which are well known indicators of bulk black holes in AdS/CFT. In addition, a new time scale emerges after which the correlation functions return to a bulk thermal AdS form up to an overall temperature dependent normalization. A corresponding length scale was seen in equal time correlation functions in the same models in our earlier work.
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that require only matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the recent decade for dealing with large-scale inverse problems. This study tailors the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
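A minimal sketch of the GPSR idea, assuming the standard l1-regularized least-squares formulation and using only products with A and its transpose. The toy problem, step size, and all parameter values are invented for illustration and are not taken from the EIT study.

```python
import numpy as np

def gpsr(A, y, tau, iters=1000):
    """Gradient Projection for Sparse Reconstruction (basic variant, fixed step):
    minimise 0.5*||y - A x||^2 + tau*||x||_1 via the split x = u - v with
    u, v >= 0.  Only matrix-vector products with A and A.T are needed."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of gradient
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (y - A @ (u - v))          # one forward, one adjoint product
        u = np.maximum(0.0, u + step * (g - tau))    # projected gradient steps
        v = np.maximum(0.0, v + step * (-g - tau))
    return u - v

# Toy recovery of a sparse vector from underdetermined measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80.0)
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
x_hat = gpsr(A, A @ x_true, tau=0.01)
```

In the EIT setting A would be the linearized sensitivity (Jacobian) of boundary voltages with respect to voxel conductivity changes, which is exactly where avoiding explicit storage of A pays off.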
Chen, Wei; Deng, Da
2014-11-11
We report a new, low-cost and simple top-down approach, "sodium-cutting", to cut and open nanostructures deposited on a nonplanar surface on a large scale. The feasibility of sodium-cutting was demonstrated by successfully cutting open ~100% of carbon nanospheres into nanobowls on a large scale, starting from Sn@C nanospheres, for the first time.
An Eulerian time filtering technique to study large-scale transient flow phenomena
NASA Astrophysics Data System (ADS)
Vanierschot, Maarten; Persoons, Tim; van den Bulck, Eric
2009-10-01
Unsteady fluctuating velocity fields can contain large-scale periodic motions with frequencies well separated from those of turbulence. Examples are the wake behind a cylinder or the precessing vortex core in a swirling jet. These turbulent flow fields contain large-scale, low-frequency oscillations, which are obscured by turbulence, making them difficult to identify. In this paper, we present an Eulerian time filtering (ETF) technique to extract the large-scale motions from unsteady, statistically non-stationary velocity fields or flow fields with multiple phenomena that have sufficiently separated spectral content. The ETF method is based on non-causal time filtering of the velocity records in each point of the flow field. It is shown that the ETF technique gives good results, similar to the ones obtained by the phase-averaging method. In this paper, we examine not only the influence of the temporal filter but also parameters such as the cut-off frequency and the sampling frequency of the data. The technique is validated on a selected set of time-resolved stereoscopic particle image velocimetry measurements, such as the initial region of an annular jet and the transition between flow patterns in an annular jet. The major advantage of the ETF method in the extraction of large scales is that it is computationally less expensive and requires less measurement time than other extraction methods. Therefore, the technique is suitable in the startup phase of an experiment or in a measurement campaign where several experiments are needed, such as parametric studies.
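The core of the ETF method, non-causal low-pass filtering of the velocity record at each point, can be sketched as below. The hard spectral cut-off and all signal parameters are assumptions for illustration; the paper's actual filter choice may differ.

```python
import numpy as np

def etf_lowpass(u, fs, f_cut):
    """Zero-phase (non-causal) low-pass filter of a time record u along its
    last axis; in the ETF method this is applied pointwise over the field."""
    nt = u.shape[-1]
    spec = np.fft.rfft(u, axis=-1)
    freqs = np.fft.rfftfreq(nt, d=1.0 / fs)
    spec[..., freqs > f_cut] = 0.0       # hard spectral cut-off, no phase shift
    return np.fft.irfft(spec, n=nt, axis=-1)

# A 2 Hz "large-scale" oscillation buried under 40 Hz "turbulence"
fs, nt = 1024.0, 4096
t = np.arange(nt) / fs
u = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
u_large = etf_lowpass(u, fs, f_cut=10.0)
```

Because the filter is applied to the whole record at once (non-causal), it introduces no phase lag, which is what allows the recovered large-scale motion to be compared directly against phase-averaged results.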
Mouse Activity across Time Scales: Fractal Scenarios
Lima, G. Z. dos Santos; Lobão-Soares, B.; do Nascimento, G. C.; França, Arthur S. C.; Muratori, L.; Ribeiro, S.; Corso, G.
2014-01-01
In this work we devise a classification of mouse activity patterns based on accelerometer data using Detrended Fluctuation Analysis. We use two characteristic mouse behavioural states as benchmarks in this study: waking in free activity and slow-wave sleep (SWS). In both situations we find roughly the same pattern: for short time intervals we observe high correlation in activity, a typical 1/f complex pattern, while for large time intervals there is anti-correlation. High correlation over short intervals, in both the waking state and SWS, is related to highly coordinated muscle activity. In the waking state we associate high correlation both with muscle activity and with stereotyped mouse movements (grooming, walking, etc.). In contrast, the anti-correlation observed over large time scales during SWS appears related to a feedback autonomic response. The transition from a correlated regime at short scales to an anti-correlated regime at large scales during SWS is given by the respiratory cycle interval, while during the waking state this transition occurs at the time scale corresponding to the duration of the stereotyped mouse movements. Furthermore, we find that the waking state is characterized by longer time scales than SWS and by a softer transition from correlation to anti-correlation. Moreover, this soft transition in the waking state encompasses a behavioural time-scale window that gives rise to a multifractal pattern. We believe that the observed multifractality in mouse activity is formed by the integration of several stereotyped movements, each one with a characteristic time correlation. Finally, we compare scaling properties of body acceleration fluctuation time series during sleep and wake periods for healthy mice. Interestingly, differences between sleep and wake in the scaling exponents are comparable to those reported in previous works on the human heartbeat. 
Complementarily, the nature of these sleep-wake dynamics could lead to a better understanding of neuroautonomic regulation mechanisms. PMID:25275515
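A minimal version of Detrended Fluctuation Analysis, the method named in the abstract, might look as follows; the window sizes and the white-noise input are illustrative assumptions. White noise should yield alpha near 0.5, the boundary between the correlated and anti-correlated regimes discussed above.

```python
import numpy as np

def dfa_exponent(x, win_sizes):
    """Detrended Fluctuation Analysis: returns alpha in F(n) ~ n**alpha,
    removing a linear trend from the integrated profile in each window."""
    y = np.cumsum(x - np.mean(x))               # integrated (profile) series
    fluct = []
    for n in win_sizes:
        nwin = len(y) // n
        segs = y[: nwin * n].reshape(nwin, n)
        t = np.arange(n)
        ms = []
        for s in segs:
            coef = np.polyfit(t, s, 1)          # local linear trend
            ms.append(np.mean((s - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(ms)))
    alpha, _ = np.polyfit(np.log(win_sizes), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(1)
alpha = dfa_exponent(rng.standard_normal(2**14), [16, 32, 64, 128, 256])
# alpha near 0.5: uncorrelated; > 0.5: persistent; < 0.5: anti-correlated
```

A crossover in alpha between short and long windows, rather than a single slope, is the signature the study uses to locate the transition scale (respiratory cycle in SWS, movement duration in waking).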
An experimental study of large-scale vortices over a blunt-faced flat plate in pulsating flow
NASA Astrophysics Data System (ADS)
Hwang, K. S.; Sung, H. J.; Hyun, J. M.
Laboratory measurements are made of flow over a blunt flat plate of finite thickness, which is placed in a pulsating free stream, U = Uo(1 + Ao cos 2πfp t). Low turbulence-intensity wind tunnel experiments are conducted in the ranges Stp<=1.23 and Ao<=0.118 at ReH=560. Pulsation is generated by means of a woofer speaker. Variations of the time-mean reattachment length xR as functions of Stp and Ao are scrutinized by using the forward-time fraction and surface pressure distributions (Cp). The shedding frequency of large-scale vortices due to pulsation is measured. Flow visualizations depict the behavior of large-scale vortices. The results for non-pulsating flows (Ao=0) are consistent with the published data. In the lower range of Ao, as Stp increases, xR attains a minimum value at a particular pulsation frequency. For large Ao, the results show complicated behaviors of xR. For Stp>=0.80, changes in xR are insignificant as Ao increases. The shedding frequency of large-scale vortices is locked in to the pulsation frequency. A vortex-pairing process takes place between two neighboring large-scale vortices in the separated shear layer.
ERIC Educational Resources Information Center
Steiner-Khamsi, Gita; Appleton, Margaret; Vellani, Shezleen
2018-01-01
The media analysis is situated in the larger body of studies that explore the varied reasons why different policy actors advocate for international large-scale student assessments (ILSAs) and adds to the research on the fast advance of the global education industry. The analysis of "The Economist," "Financial Times," and…
Relaxation in two dimensions and the 'sinh-Poisson' equation
NASA Technical Reports Server (NTRS)
Montgomery, D.; Matthaeus, W. H.; Stribling, W. T.; Martinez, D.; Oughton, S.
1992-01-01
Long-time states of a turbulent, decaying, two-dimensional, Navier-Stokes flow are shown numerically to relax toward maximum-entropy configurations, as defined by the "sinh-Poisson" equation. The large-scale Reynolds number is about 14,000, the spatial resolution is 512 x 512, the boundary conditions are spatially periodic, and the evolution takes place over nearly 400 large-scale eddy-turnover times.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schanen, Michel; Marin, Oana; Zhang, Hong
Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
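The storage/recomputation trade-off behind checkpointing can be illustrated with a single-level scheme on a scalar time stepper. This is a didactic sketch only, not the paper's two-level binomial/disk scheme, and the model step f is invented.

```python
def f(x):
    """One explicit time step; a logistic-growth update stands in for a solver."""
    return x + 0.1 * x * (1.0 - x)

def df(x):
    """Derivative of f, needed by the adjoint (reverse) sweep."""
    return 1.0 + 0.1 * (1.0 - 2.0 * x)

def adjoint_store_all(x0, nsteps):
    """Reference adjoint: store every forward state (maximal memory)."""
    xs = [x0]
    for _ in range(nsteps):
        xs.append(f(xs[-1]))
    lam = 1.0                              # seed: d x_N / d x_N = 1
    for k in range(nsteps - 1, -1, -1):
        lam *= df(xs[k])
    return lam                             # d x_N / d x_0

def adjoint_checkpointed(x0, nsteps, stride):
    """Single-level checkpointing: keep one state every `stride` steps and
    recompute the rest during the reverse sweep (trade memory for flops)."""
    cps = {0: x0}
    x = x0
    for k in range(nsteps):
        x = f(x)
        if (k + 1) % stride == 0:
            cps[k + 1] = x
    lam = 1.0
    for k in range(nsteps - 1, -1, -1):
        base = (k // stride) * stride      # nearest checkpoint at or before k
        x = cps[base]
        for _ in range(k - base):          # recompute forward to state x_k
            x = f(x)
        lam *= df(x)
    return lam

g_ref = adjoint_store_all(0.2, 100)
g_cp = adjoint_checkpointed(0.2, 100, stride=10)
```

Both sweeps yield the same derivative; the checkpointed version stores 11 states instead of 101 at the cost of at most one extra forward pass, which is the balance the two-level scheme in the abstract optimizes across memory and disk.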
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large-scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node-localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large-scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from their randomly set initial positions to the original positions, i.e., the node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network scale. The time consumption has also been proven to remain almost constant, since the calculation steps are almost unrelated to the network scale.
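The spring relaxation at the heart of LASM can be sketched for a single blind node in 2D. The spring constant, damping step, iteration count, and geometry below are invented for illustration; the actual algorithm also includes the three patches mentioned in the abstract.

```python
import numpy as np

def lasm_locate(guess, anchors, distances, k=0.5, step=0.1, iters=3000):
    """Relax a blind node toward the position consistent with measured
    distances to its neighbour nodes, via virtual spring forces."""
    p = np.asarray(guess, dtype=float)
    for _ in range(iters):
        force = np.zeros(2)
        for a, d in zip(anchors, distances):
            delta = p - a
            r = max(np.linalg.norm(delta), 1e-12)
            force += -k * (r - d) * delta / r   # stretched spring pulls inward
        p = p + step * force                    # damped explicit update
    return p

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_p = np.array([4.0, 3.0])
distances = np.linalg.norm(anchors - true_p, axis=1)  # noise-free "measurements"
est = lasm_locate([8.0, 8.0], anchors, distances)
```

Each node only needs its own neighbours' positions and distances, which is why the per-node cost stays O(1) as the network grows.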
Bakken, Tor Haakon; Aase, Anne Guri; Hagen, Dagmar; Sundt, Håkon; Barton, David N; Lujala, Päivi
2014-07-01
Climate change and the needed reductions in the use of fossil fuels call for the development of renewable energy sources. However, renewable energy production, such as hydropower (both small- and large-scale) and wind power, has adverse impacts on the local environment by causing reductions in biodiversity and loss of habitats and species. This paper compares the environmental impacts of many small-scale hydropower plants with a few large-scale hydropower projects and one wind power farm, based on the same set of environmental parameters: land occupation, reduction in wilderness areas (INON), visibility, and impacts on red-listed species. Our basis for comparison was similar energy volumes produced, without considering the quality of the energy services provided. The results show that small-scale hydropower performs less favourably on all parameters except land occupation. The land occupation of large hydropower and wind power is in the range of 45-50 m(2)/MWh, which is more than two times larger than that of small-scale hydropower, where the large land occupation for large hydropower is explained by the extent of the reservoirs. On all three other parameters small-scale hydropower performs more than two times worse than both large hydropower and wind power. Wind power compares similarly to large-scale hydropower regarding land occupation, much better on the reduction in INON areas, and in the same range regarding red-listed species. Our results demonstrate that the selected four parameters provide a basis for further development of a fair and consistent comparison of impacts between the analysed renewable technologies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Parallel Simulation of Unsteady Turbulent Flames
NASA Technical Reports Server (NTRS)
Menon, Suresh
1996-01-01
Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, the high cost and their limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render the unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used. 
Recently, a new model for turbulent combustion was developed, in which the combustion is modeled, within the subgrid (small-scales) using a methodology that simulates the mixing and the molecular transport and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion and chemical kinetics and, therefore, within each grid cell, a significant amount of computations must be carried out before the large-scale (LES resolved) effects are incorporated. Therefore, this approach is uniquely suited for parallel processing and has been implemented on various systems such as: Intel Paragon, IBM SP-2, Cray T3D and SGI Power Challenge (PC) using the system independent Message Passing Interface (MPI) compiler. In this paper, timing data on these machines is reported along with some characteristic results.
Computing the universe: how large-scale simulations illuminate galaxies and dark energy
NASA Astrophysics Data System (ADS)
O'Shea, Brian
2015-04-01
High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these structures operate on physical, temporal, and energy scales that cannot reasonably be approximated in the laboratory, and their complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.
Lee, Kang Hyuck; Shin, Hyeon-Jin; Lee, Jinyeong; Lee, In-yeal; Kim, Gil-Ho; Choi, Jae-Young; Kim, Sang-Woo
2012-02-08
Hexagonal boron nitride (h-BN) has received a great deal of attention as a substrate material for high-performance graphene electronics because it has an atomically smooth surface, a lattice constant similar to that of graphene, large optical phonon modes, and a large electrical band gap. Herein, we report the large-scale synthesis of high-quality h-BN nanosheets in a chemical vapor deposition (CVD) process by controlling the surface morphologies of the copper (Cu) catalysts. It was found that morphology control of the Cu foil is critical for the formation of pure h-BN nanosheets as well as for the improvement of their crystallinity. For the first time, we demonstrate the performance enhancement of CVD-based graphene devices with large-scale h-BN nanosheets. The mobility of the graphene device on the h-BN nanosheets was increased 3 times compared to that without the h-BN nanosheets. The on-off ratio of the drain current is 2 times higher than that of the graphene device without h-BN. This work suggests that high-quality h-BN nanosheets based on CVD are very promising for high-performance large-area graphene electronics. © 2012 American Chemical Society
NASA Astrophysics Data System (ADS)
Moore, Keegan J.; Bunyan, Jonathan; Tawfick, Sameh; Gendelman, Oleg V.; Li, Shuangbao; Leamy, Michael; Vakakis, Alexander F.
2018-01-01
In linear time-invariant dynamical and acoustical systems, reciprocity holds by the Onsager-Casimir principle of microscopic reversibility, and this can be broken only by odd external biases, nonlinearities, or time-dependent properties. A concept is proposed in this work for breaking dynamic reciprocity based on irreversible nonlinear energy transfers from large to small scales in a system with nonlinear hierarchical internal structure, asymmetry, and intentional strong stiffness nonlinearity. The resulting nonreciprocal large-to-small scale energy transfers mimic analogous nonlinear energy transfer cascades that occur in nature (e.g., in turbulent flows), and are caused by the strong frequency-energy dependence of the essentially nonlinear small-scale components of the system considered. The theoretical part of this work is mainly based on action-angle transformations, followed by direct numerical simulations of the resulting system of nonlinear coupled oscillators. The experimental part considers a system with two scales—a linear large-scale oscillator coupled to a small scale by a nonlinear spring—and validates the theoretical findings demonstrating nonreciprocal large-to-small scale energy transfer. The proposed study promotes a paradigm for designing nonreciprocal acoustic materials harnessing strong nonlinearity, which in a future application will be implemented in designing lattices incorporating nonlinear hierarchical internal structures, asymmetry, and scale mixing.
Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Massie, Michael J.; Morris, A. Terry
2010-01-01
Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.
NASA Astrophysics Data System (ADS)
Wang, S.; Sobel, A. H.; Nie, J.
2015-12-01
Two Madden-Julian Oscillation (MJO) events were observed during October and November 2011 in the equatorial Indian Ocean during the DYNAMO field campaign. Precipitation rates and large-scale vertical motion profiles derived from the DYNAMO northern sounding array are simulated in a small-domain cloud-resolving model using parameterized large-scale dynamics. Three parameterizations of large-scale dynamics are employed: the conventional weak temperature gradient (WTG) approximation, vertical-mode-based spectral WTG (SWTG), and damped gravity wave coupling (DGW). The target temperature profiles and radiative heating rates are taken from a control simulation in which the large-scale vertical motion is imposed (rather than directly from observations), and the model itself is significantly modified from that used in previous work. These methodological changes lead to significant improvement in the results. Simulations using all three methods, with imposed time-dependent radiation and horizontal moisture advection, capture the time variations in precipitation associated with the two MJO events well. The three methods produce significant differences in the large-scale vertical motion profile, however. WTG produces the most top-heavy and noisy profiles, while DGW's is smoother with a peak in midlevels. SWTG produces a smooth profile, somewhere between WTG and DGW, and in better agreement with observations than either of the others. Numerical experiments without horizontal advection of moisture suggest that this process significantly reduces the precipitation and suppresses the top-heaviness of large-scale vertical motion during the MJO active phases, while experiments in which the effects of clouds on radiation are disabled indicate that cloud-radiative interaction significantly amplifies the MJO. Experiments in which interactive radiation is used produce poorer agreement with observations than those with imposed time-varying radiative heating. 
Our results highlight the importance of both horizontal advection of moisture and cloud-radiative feedback to the dynamics of the MJO, as well as to accurate simulation and prediction of it in models.
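The conventional WTG approximation named above diagnoses the large-scale vertical velocity from the temperature anomaly of the simulated column. A minimal sketch of that diagnostic, with an assumed relaxation time and stratification (both values here are illustrative, not taken from the study):

```python
import numpy as np

def wtg_w(theta_anom, dtheta_dz, tau=3.0 * 3600.0):
    """Weak temperature gradient diagnostic: the large-scale vertical velocity
    w that removes a horizontal-mean temperature anomaly over time tau,
    from the balance  w * dtheta/dz = theta_anom / tau."""
    return theta_anom / (tau * dtheta_dz)

z = np.linspace(0.0, 15e3, 31)                    # height (m)
dtheta_dz = np.full_like(z, 4.0e-3)               # stratification (K/m), assumed
theta_anom = np.exp(-((z - 7.5e3) / 2.5e3) ** 2)  # 1 K mid-level warm anomaly
w = wtg_w(theta_anom, dtheta_dz)                  # rising motion where it is warm
```

A warm anomaly yields ascent (adiabatic cooling), which is the feedback that couples the cloud-resolving column to the implied large-scale circulation; SWTG and DGW modify how this profile is smoothed in the vertical.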
A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size.
Salali, Gul Deniz; Whitehouse, Harvey; Hochberg, Michael E
2015-01-01
One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics.
Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories
NASA Astrophysics Data System (ADS)
Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy
2013-05-01
Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity, with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.
NASA Astrophysics Data System (ADS)
Camassa, Roberto; McLaughlin, Richard M.; Viotti, Claudio
2010-11-01
The time evolution of a passive scalar advected by parallel shear flows is studied for a class of rapidly varying initial data. Such situations are of practical importance in a wide range of applications from microfluidics to geophysics. In these contexts, it is well known that the long-time evolution of the tracer concentration is governed by Taylor's asymptotic theory of dispersion. In contrast, we focus here on the evolution of the tracer at intermediate time scales. We show how intermediate regimes can be identified before Taylor's, and in particular, how the Taylor regime can be delayed indefinitely by properly manufactured initial data. A complete characterization of the sorting of these time scales and their associated spatial structures is presented. These analytical predictions are compared with highly resolved numerical simulations. Specifically, this comparison is carried out for the case of periodic variations in the streamwise direction on the short scale with envelope modulations on the long scales, showing how this structure can lead to "anomalously" diffusive transients in the evolution of the scalar onto the ultimate regime governed by Taylor dispersion. Mathematically, the occurrence of these transients can be viewed as a competition in the asymptotic dominance between large Péclet (Pe) numbers and the long/short scale aspect ratios (L_Vel/L_Tracer ≡ k), two independent nondimensional parameters of the problem. We provide analytical predictions of the associated time scales by a modal analysis of the eigenvalue problem arising in the separation of variables of the governing advection-diffusion equation. The anomalous time scale in the asymptotic limit of large k Pe is derived for the short-scale periodic structure of the scalar's initial data, both for exactly solvable cases and in general with WKBJ analysis. 
In particular, the exactly solvable sawtooth flow is especially important in that it provides a short cut to the exact solution to the eigenvalue problem for the physically relevant vanishing Neumann boundary conditions in linear-shear channel flow. We show that the life of the corresponding modes at large Pe for this case is shorter than the ones arising from shear free zones in the fluid's interior. A WKBJ study of the latter modes provides a longer intermediate time evolution. This part of the analysis is technical, as the corresponding spectrum is dominated by asymptotically coalescing turning points in the limit of large Pe numbers. When large scale initial data components are present, the transient regime of the WKBJ (anomalous) modes evolves into one governed by Taylor dispersion. This is studied by a regular perturbation expansion of the spectrum in the small wavenumber regimes.
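For orientation, the competing clocks can be put into numbers with Taylor's classical pipe-flow result; all parameter values below are made-up microfluidic-scale figures, not taken from the paper.

```python
# Back-of-the-envelope time scales for shear dispersion (illustrative
# values; Pe**2/48 is Taylor's classical coefficient for a circular pipe).
kappa = 1e-9       # molecular diffusivity [m^2/s]
a = 1e-4           # pipe radius [m]
U = 1e-3           # mean flow speed [m/s]
L_tracer = 1e-5    # short scale of the initial tracer variation [m]

Pe = U * a / kappa                  # Peclet number
D_eff = kappa * (1 + Pe ** 2 / 48)  # Taylor effective diffusivity
t_taylor = a ** 2 / kappa           # cross-stream mixing time: Taylor regime onset
t_shear = L_tracer / U              # time to shear the short-scale structure
# t_shear << t_taylor: rapidly varying initial data pass through an
# intermediate (possibly "anomalous") regime long before Taylor dispersion.
```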
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term and high spatial resolution simulation is a common challenge in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, also faces this challenge: its long runtimes limit applications of very-high-resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km2 watershed with one CPU in a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
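As a sanity check on the reported speedup (our back-of-the-envelope reading, not the paper's analysis), a nine-fold gain on 15 threads is consistent with roughly 95% of the runtime sitting in the parallelized HRU loop, per Amdahl's law:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup of a workload whose fraction p is
    parallelizable, run on n threads."""
    return 1.0 / ((1.0 - p) + p / n)

# a ~9x speedup on 15 threads implies roughly 95% of the runtime
# is in the parallelized per-HRU loop (inferred, not stated in the paper)
speedup = amdahl_speedup(0.952, 15)
```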
Rolling up of Large-scale Laminar Vortex Ring from Synthetic Jet Impinging onto a Wall
NASA Astrophysics Data System (ADS)
Xu, Yang; Pan, Chong; Wang, Jinjun; Flow Control Lab Team
2015-11-01
A vortex ring impinging onto a wall exhibits a wide range of interesting behaviors. The present work is devoted to an experimental investigation of a series of small-scale vortex rings impinging onto a wall. These laminar vortex rings were generated by a piston-cylinder driven synthetic jet in a water tank. Laser-Induced Fluorescence (LIF) and Particle Image Velocimetry (PIV) were used for flow visualization/quantification. A special scenario of vortical dynamics was found for the first time: a large-scale laminar vortex ring is formed above the wall, on the outboard side of the jet. This large-scale structure has a stable topology and continuously grows in strength and size over time, thus dominating the dynamics of the near-wall flow. To quantify its spatial/temporal characteristics, Finite-Time Lyapunov Exponent (FTLE) fields were calculated from the PIV velocity fields. It is shown that the flow pattern revealed by the FTLE fields is similar to the visualization. The size of this large-scale vortex ring can be up to one order of magnitude larger than the jet vortices, and its rolling-up speed and entrainment strength were correlated with the constant vorticity flux issued from the jet. This work was supported by the National Natural Science Foundation of China (Grants No. 11202015 and 11327202).
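The FTLE computation itself is easy to reproduce. The sketch below applies it to an analytic saddle flow u = (x, −y) rather than PIV data, since for that flow the exponent is known to equal the strain rate (1.0) everywhere, which makes a handy check:

```python
import numpy as np

# FTLE sketch: advect a grid of particles, differentiate the flow map,
# take the largest eigenvalue of the Cauchy-Green tensor C = F^T F.
def velocity(p):
    # steady saddle flow u = (x, -y); analytic FTLE = 1.0 everywhere
    return np.stack([p[..., 0], -p[..., 1]], axis=-1)

def flow_map(p0, T=2.0, nsteps=200):
    """Advect particle positions with classical RK4."""
    dt = T / nsteps
    p = p0.copy()
    for _ in range(nsteps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        p = p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

def ftle(x, y, T=2.0):
    X, Y = np.meshgrid(x, y, indexing="ij")
    P = flow_map(np.stack([X, Y], axis=-1), T)
    dx, dy = x[1] - x[0], y[1] - y[0]
    # flow-map gradient by central differences (interior points only)
    F11 = (P[2:, 1:-1, 0] - P[:-2, 1:-1, 0]) / (2 * dx)
    F12 = (P[1:-1, 2:, 0] - P[1:-1, :-2, 0]) / (2 * dy)
    F21 = (P[2:, 1:-1, 1] - P[:-2, 1:-1, 1]) / (2 * dx)
    F22 = (P[1:-1, 2:, 1] - P[1:-1, :-2, 1]) / (2 * dy)
    # largest eigenvalue of the 2x2 symmetric Cauchy-Green tensor
    a = F11**2 + F21**2
    b = F11 * F12 + F21 * F22
    d = F12**2 + F22**2
    lam_max = 0.5 * (a + d) + np.sqrt(0.25 * (a - d) ** 2 + b**2)
    return np.log(lam_max) / (2 * T)

x = np.linspace(-1, 1, 21)
field = ftle(x, x)  # ~1.0 everywhere for this flow
```

In the experiment, `velocity` would be replaced by interpolation of the PIV velocity fields.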
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.
2016-07-06
Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high-wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field, and we find that a feedback from horizontal low-wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
Large Scale Water Vapor Sources Relative to the October 2000 Piedmont Flood
NASA Technical Reports Server (NTRS)
Turato, Barbara; Reale, Oreste; Siccardi, Franco
2003-01-01
Very intense mesoscale or synoptic-scale rainfall events can occasionally be observed in the Mediterranean region without any deep cyclone developing over the areas affected by precipitation. In these perplexing cases the synoptic situation can superficially look similar to cases in which very little precipitation occurs. These situations could possibly baffle the operational weather forecasters. In this article, the major precipitation event that affected Piedmont (Italy) between 13 and 16 October 2000 is investigated. This is one of the cases in which no intense cyclone was observed within the Mediterranean region at any time, only a moderate system was present, and yet exceptional rainfall and flooding occurred. The emphasis of this study is on the moisture origin and transport. Moisture and energy balances are computed on different space- and time-scales, revealing that precipitation exceeds evaporation over an area inclusive of Piedmont and the northwestern Mediterranean region, on a time-scale encompassing the event and about two weeks preceding it. This is suggestive of an important moisture contribution originating from outside the region. A synoptic and dynamic analysis is then performed to outline the potential mechanisms that could have contributed to the large-scale moisture transport. The central part of the work uses a quasi-isentropic water-vapor back trajectory technique. The moisture sources obtained by this technique are compared with the results of the balances and with the synoptic situation, to unveil possible dynamic mechanisms and physical processes involved. It is found that moisture sources on a variety of atmospheric scales contribute to this event. First, an important contribution is caused by the extratropical remnants of former tropical storm Leslie. The large-scale environment related to this system allows a significant amount of moisture to be carried towards Europe. 
This happens on a time-scale of about 5-15 days preceding the Piedmont event. Second, water-vapor intrusions from the African Inter-Tropical Convergence Zone and evaporation from the eastern Atlantic contribute on the 2-5 day time-scale. The large-scale moist dynamics appears therefore to be one important factor enabling a moderate Mediterranean cyclone to produce heavy precipitation. Finally, local evaporation from the Mediterranean, water-vapor recycling, and orographically-induced low-level convergence enhance and concentrate the moisture over the area where heavy precipitation occurs. This happens on a 12-72 hour time-scale.
NASA Astrophysics Data System (ADS)
Velten, Andreas
2017-05-01
Light scattering is a primary obstacle to optical imaging in a variety of different environments and across many size and time scales. Scattering complicates imaging on large scales when viewing through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including scattering and absorption length and phase function. We attempt a study of scattering and methods of imaging through scattering across different scales and media, particularly with respect to the use of time-of-flight information. We show that using time of flight, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid lab research. We can also transfer knowledge and methodology between different fields.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
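The extrapolation idea can be illustrated generically; the pure 1/T correction assumed below is a simplification (the paper derives the appropriate finite-time and finite-size scaling forms for the population-dynamics estimators), and the numbers are synthetic:

```python
import numpy as np

# Synthetic finite-time estimates psi(T) = psi_inf + a/T; fitting a line
# in 1/T and reading off the intercept recovers the infinite-time limit.
psi_inf, amp = -0.30, 1.7        # "true" limit and finite-time amplitude
T = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
psi_est = psi_inf + amp / T      # stand-in for measured estimators

# least-squares fit psi_est = c0 + c1*(1/T); c0 estimates psi_inf
A = np.vstack([np.ones_like(T), 1.0 / T]).T
c0, c1 = np.linalg.lstsq(A, psi_est, rcond=None)[0]
```

In practice `psi_est` would carry statistical noise, and the same construction in the population size N gives the infinite-size limit.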
Hierarchical Engine for Large-scale Infrastructure Co-Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-04-24
HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
USDA-ARS?s Scientific Manuscript database
Soil hydraulic properties can be retrieved from physical sampling of soil, via surveys, but this is time-consuming and only as accurate as the scale of the sample. Remote sensing provides an opportunity to get pertinent soil properties at large scales, which is very useful for large-scale modeling....
Huang, Yi-Shao; Liu, Wel-Ping; Wu, Min; Wang, Zheng-Wu
2014-09-01
This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. Then, the design of the hybrid adaptive fuzzy controller is extended to address a general large-scale uncertain nonlinear system. It is shown that the resultant closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The advantages of our scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.
van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.
2018-01-01
The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption.
The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...
2017-02-16
Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.
Spatiotemporal property and predictability of large-scale human mobility
NASA Astrophysics Data System (ADS)
Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin
2018-04-01
Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high possibility for prediction. Furthermore, a scale-free featured mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
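The model's two ingredients can be sketched as an exploration-and-preferential-return walk; the parameters rho and 0.21 below are illustrative values borrowed from the well-known EPR literature (Song et al.), not the paper's Gaussian-parameter fit:

```python
import collections
import random

def epr_walk(steps=5000, rho=0.6, seed=1):
    """Exploration-and-preferential-return sketch (parameter values are
    illustrative). With probability ~rho*S**(-0.21), where S is the number
    of locations seen so far, the walker explores a new location;
    otherwise it returns to a known one with probability proportional
    to its past visit count."""
    rng = random.Random(seed)
    visits = collections.Counter({0: 1})
    next_loc = 1
    for _ in range(steps):
        S = len(visits)
        if rng.random() < rho * S ** -0.21:
            loc, next_loc = next_loc, next_loc + 1      # explore a new place
        else:                                           # preferential return
            locs, counts = zip(*visits.items())
            loc = rng.choices(locs, weights=counts)[0]
        visits[loc] += 1
    return visits

v = epr_walk()
```

The number of distinct locations grows sublinearly in the number of steps, one of the hallmarks the abstract's model builds on.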
NASA Astrophysics Data System (ADS)
Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan
2018-03-01
Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered as a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the paper divides the large-scale oceansat remote sensing image into small sub-blocks, and 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and an obvious difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on 2-D STFT spectrum modeling is proposed. The experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low missing rate.
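A minimal stand-in for the block-wise 2-D STFT stage can be sketched with numpy alone; the Hann window, block size, overlap, and the toy "ship" image are all our assumptions, not the paper's settings:

```python
import numpy as np

def stft2d(image, block=32, step=16):
    """Sliding-window 2-D FFT magnitude spectra (a minimal stand-in for
    the paper's 2-D STFT). Returns shape (ny, nx, block, block)."""
    win = np.hanning(block)
    win2d = np.outer(win, win)                    # separable 2-D window
    H, W = image.shape
    ys = range(0, H - block + 1, step)
    xs = range(0, W - block + 1, step)
    spec = np.empty((len(ys), len(xs), block, block))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            tile = image[y:y + block, x:x + block] * win2d
            spec[i, j] = np.abs(np.fft.fft2(tile))
    return spec

# smooth "sea" noise vs. a bright compact "ship": the ship's block
# carries far more spectral energy, the kind of contrast a statistical
# background model can threshold
sea = np.random.default_rng(0).normal(0.0, 0.1, (64, 64))
sea[40:44, 40:44] += 5.0                          # toy ship target
S = stft2d(sea)
```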
Dynamic Time Expansion and Compression Using Nonlinear Waveguides
Findikoglu, Alp T.; Hahn, Sangkoo F.; Jia, Quanxi
2004-06-22
Dynamic time expansion or compression of a small-amplitude input signal generated with an initial time scale is performed using a nonlinear waveguide. A nonlinear waveguide having a variable refractive index is connected to a bias voltage source having a bias signal amplitude that is large relative to the input signal, to vary the refractive index and concomitant speed of propagation of the nonlinear waveguide, and to an electrical circuit for applying the small-amplitude signal and the large-amplitude bias signal simultaneously to the nonlinear waveguide. The large-amplitude bias signal applied with the input signal alters the speed of propagation of the small-amplitude signal with time in the nonlinear waveguide to expand or contract the initial time scale of the small-amplitude input signal.
Robbins, Blaine
2013-01-01
Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation.
Aydmer, A.A.; Chew, W.C.; Cui, T.J.; Wright, D.L.; Smith, D.V.; Abraham, J.D.
2001-01-01
A simple and efficient method for large scale three-dimensional (3-D) subsurface imaging of inhomogeneous background is presented. One-dimensional (1-D) multifrequency distorted Born iterative method (DBIM) is employed in the inversion. Simulation results utilizing synthetic scattering data are given. Calibration of the very early time electromagnetic (VETEM) experimental waveforms is detailed along with major problems encountered in practice and their solutions. This discussion is followed by the results of a large scale application of the method to the experimental data provided by the VETEM system of the U.S. Geological Survey. The method is shown to have a computational complexity that is promising for on-site inversion.
NASA Astrophysics Data System (ADS)
Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David
2015-04-01
In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of a better understanding of hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step.
This approach basically consisted in (1) decomposing both signals (SLP field and precipitation or streamflow) using discrete wavelet multiresolution analysis and synthesis, (2) generating one statistical downscaling model per time-scale, and (3) summing up all scale-dependent models in order to obtain a final reconstruction of the predictand. The results obtained revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD; in addition, the scale-dependent spatial patterns associated with the model matched quite well those obtained from scale-dependent composite analysis. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, which had been characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with flood and extremely low-flow/drought periods (e.g., winter 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. Further investigations would be required to address the issue of the stationarity of the large-scale/local-scale relationships and to test the capability of the multiresolution ESD model for interannual-to-interdecadal forecasting. In terms of methodological approach, further investigations may concern a fully comprehensive sensitivity analysis of the modeling with respect to the parameters of the multiresolution approach (different families of scaling and wavelet functions used, number of coefficients/degree of smoothness, etc.).
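The scheme (decompose both signals, fit one model per scale, sum the per-scale predictions) can be sketched with a crude additive multiresolution split; the smoothing-based split and the synthetic sine-plus-noise series below are our stand-ins for the paper's discrete wavelet MRA and the SLP/streamflow data:

```python
import numpy as np

def mra(x, levels=3):
    """Crude additive multiresolution split via successive smoothing
    (a stand-in for a discrete wavelet MRA): the returned components
    sum exactly to x, detail scales first, final smooth last."""
    comps, smooth = [], x.astype(float)
    for j in range(levels):
        k = 2 ** (j + 1) + 1                      # growing smoothing window
        kernel = np.ones(k) / k
        smoother = np.convolve(smooth, kernel, mode="same")
        comps.append(smooth - smoother)           # detail at scale j
        smooth = smoother
    comps.append(smooth)                          # final approximation
    return comps

# synthetic predictor/predictand pair whose scale-wise relationship
# differs between the slow and fast components
rng = np.random.default_rng(2)
t = np.arange(512)
predictor = (np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 8)
             + 0.1 * rng.normal(size=512))
predictand = 2.0 * np.sin(2 * np.pi * t / 64) - 1.0 * np.sin(2 * np.pi * t / 8)

# one linear model per scale, then sum the per-scale predictions
recon = np.zeros_like(predictand)
for xp, yp in zip(mra(predictor), mra(predictand)):
    slope = np.dot(xp, yp) / np.dot(xp, xp)       # per-scale least squares
    recon += slope * xp
```

A single global regression cannot capture the opposite-signed couplings at the two scales; the scale-by-scale fit can, which is the point of the multiresolution ESD construction.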
Caldwell, Robert R
2011-12-28
The challenge to understand the physical origin of the cosmic acceleration is framed as a problem of gravitation: specifically, does the relationship between stress-energy and space-time curvature differ on large scales from the predictions of general relativity? In this article, we describe efforts to model and test a generalized relationship between the matter and the metric using cosmological observations. Late-time tracers of large-scale structure, including the cosmic microwave background, weak gravitational lensing, and clustering, are shown to provide good tests of the proposed solution. Current data come very close to providing a critical test, leaving only a small window in parameter space in the case that the generalized relationship is scale-free above galactic scales.
DOT National Transportation Integrated Search
2006-12-01
Over the last several years, researchers at the University of Arizona's ATLAS Center have developed an adaptive ramp metering system referred to as MILOS (Multi-Objective, Integrated, Large-Scale, Optimized System). The goal of this project is ...
Where to put things? Spatial land management to sustain biodiversity and economic returns
Expanding human population and economic growth have led to large-scale conversion of natural habitat to human-dominated landscapes with consequent large-scale declines in biodiversity. Conserving biodiversity, while at the same time meeting expanding human needs, is an issue of u...
Fast, large-scale hologram calculation in wavelet domain
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi
2018-04-01
We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.
Numerical methods for large-scale, time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Turkel, E.
1979-01-01
A survey of numerical methods for time-dependent partial differential equations is presented. The emphasis is on practical applications to large-scale problems. A discussion of new developments in high-order methods and moving grids is given. The importance of boundary conditions is stressed for both internal and external flows. A description of implicit methods is presented, including generalizations to multiple dimensions. Applications to shocks, aerodynamics, meteorology, plasma physics, and combustion are also briefly described.
Measurement of Thunderstorm Cloud-Top Parameters Using High-Frequency Satellite Imagery
1978-01-01
short wave was present well to the south of this system, approximately 2000 km west of Baja California. Two distinct flow patterns were present, one...view can be observed in near real time whereas radar observations, although excellent for local purposes, involve substantial errors when composited...on a large scale. The time delay in such large-scale compositing is critical when attempting to monitor convective cloud systems for a potential
Evidence for Large Decadal Variability in the Tropical Mean Radiative Energy Budget
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A.; Wong, Takmeng; Allan, Richard; Slingo, Anthony; Kiehl, Jeffrey T.; Soden, Brian J.; Gordon, C. T.; Miller, Alvin J.; Yang, Shi-Keng; Randall, David R.;
2001-01-01
It is widely assumed that variations in the radiative energy budget at large time and space scales are very small. We present new evidence from a compilation of over two decades of accurate satellite data that the top-of-atmosphere (TOA) tropical radiative energy budget is much more dynamic and variable than previously thought. We demonstrate that the radiation budget changes are caused by changes in tropical mean cloudiness. The results of several current climate model simulations fail to predict this large observed variation in the tropical energy budget. The missing variability in the models highlights the critical need to improve cloud modeling in the tropics to support improved prediction of tropical climate on inter-annual and decadal time scales. We believe that these data are the first rigorous demonstration of decadal time-scale changes in the Earth's tropical cloudiness, and that they represent a new and necessary test of climate models.
Large-scale structure in superfluid Chaplygin gas cosmology
NASA Astrophysics Data System (ADS)
Yang, Rongjia
2014-03-01
We investigate the growth of large-scale structure in the superfluid Chaplygin gas (SCG) model. Both linear and nonlinear growth, such as σ8 and the skewness S3, are discussed. We find that the growth factor of SCG reduces to the Einstein-de Sitter case at early times, while it differs from the cosmological-constant (ΛCDM) case in the large-a limit. We also find that there will be more structure growth on large scales in the SCG scenario than in ΛCDM, and that the variations of σ8 and S3 between SCG and ΛCDM cannot be discriminated.
Exact-Differential Large-Scale Traffic Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios
2015-01-01
Analyzing large-scale traffic by simulation requires repeated execution with various patterns of scenarios or parameters. Such repeated execution involves substantial redundancy, because the change from a prior scenario to a later one is very minor in most cases, for example, blocking only one road or changing the speed limit of several roads. In this paper, we propose a new redundancy-reduction technique, called exact-differential simulation, which simulates only the changed portions of later scenarios while producing exactly the same results as a whole simulation. The paper consists of two main efforts: (i) a key idea and algorithm for exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments with a Tokyo traffic simulation, exact-differential simulation improves elapsed time by a factor of 7.26 on average, and by 2.26 even in the worst case, compared with whole simulation.
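A toy sketch of the exact-differential idea (this is not the authors' implementation; the cellular traffic update rule and all names are illustrative assumptions): cache the state after every step of a base run, then for a modified scenario resume simulation only from the first step whose inputs differ, which reproduces the full re-run exactly.

```python
import numpy as np

def step(state, closed):
    """One toy update: cars hop to the next cell unless it is closed or occupied."""
    out = state.copy()
    for i in np.flatnonzero(state):              # occupied cells
        j = (i + 1) % state.size
        if not closed[j] and not out[j]:
            out[i], out[j] = 0, 1
    return out

def run_full(init, closures):
    """Plain simulation: apply every step, recording every intermediate state."""
    states = [init]
    for closed in closures:
        states.append(step(states[-1], closed))
    return states

def run_differential(base_states, base_closures, new_closures):
    """Reuse cached states up to the first step whose scenario differs,
    then simulate only the remainder."""
    k = 0
    while k < len(base_closures) and np.array_equal(base_closures[k], new_closures[k]):
        k += 1
    states = list(base_states[:k + 1])
    for closed in new_closures[k:]:
        states.append(step(states[-1], closed))
    return states, k

rng = np.random.default_rng(0)
init = (rng.random(20) < 0.3).astype(int)
base = [np.zeros(20, bool) for _ in range(50)]
scen = [c.copy() for c in base]
scen[40][5] = True                               # change: close one road at step 40 only

base_states = run_full(init, base)
diff_states, resumed = run_differential(base_states, base, scen)
full_states = run_full(init, scen)

print("resumed at step", resumed)
print("identical to full rerun:",
      all(np.array_equal(a, b) for a, b in zip(diff_states, full_states)))
```

Because the first 40 steps are reused from the cache, only 10 of the 50 steps are re-executed, while the results remain bit-identical to the whole simulation.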
Decoupling processes and scales of shoreline morphodynamics
Hapke, Cheryl J.; Plant, Nathaniel G.; Henderson, Rachel E.; Schwab, William C.; Nelson, Timothy R.
2016-01-01
Behavior of coastal systems on time scales ranging from single storm events to years and decades is controlled by both small-scale sediment transport processes and large-scale geologic, oceanographic, and morphologic processes. Improved understanding of coastal behavior at multiple time scales is required for refining models that predict potential erosion hazards and for coastal management planning and decision-making. Here we investigate the primary controls on shoreline response along a geologically variable barrier island on time scales resolving extreme storms and decadal variations over a period of nearly one century. An empirical orthogonal function analysis is applied to a time series of shoreline positions at Fire Island, NY to identify patterns of shoreline variance along the length of the island. We establish that there are separable patterns of shoreline behavior that represent response to oceanographic forcing, as well as patterns that are not explained by this forcing. The dominant shoreline behavior occurs over large length scales in the form of alternating episodes of shoreline retreat and advance, presumably in response to storm cycles. Two secondary responses include a long-term response that is correlated with known geologic variations of the island and another that reflects geomorphic patterns at medium length scales. Our study also includes the response to Hurricane Sandy and a period of post-storm recovery. It was expected that the impacts from Hurricane Sandy would disrupt long-term trends and spatial patterns. We found that the response to Sandy at Fire Island is not notable or distinguishable from several other large storms of the prior decade.
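The empirical orthogonal function (EOF) analysis used above amounts to a singular-value decomposition of the demeaned shoreline-position matrix. A minimal numpy sketch on synthetic data follows; the modes and amplitudes are illustrative assumptions, not the Fire Island observations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_along = 90, 64                  # surveys x alongshore positions
x = np.linspace(0, 1, n_along)

# Synthetic shorelines: an island-wide retreat/advance mode plus a
# medium-length-scale mode and noise (stand-ins for the paper's patterns).
mode1 = np.ones(n_along)                  # large-scale retreat/advance
mode2 = np.sin(4 * np.pi * x)             # medium-scale alternation
amp1 = rng.normal(size=n_time) * 3.0
amp2 = rng.normal(size=n_time) * 1.0
shore = (np.outer(amp1, mode1) + np.outer(amp2, mode2)
         + rng.normal(scale=0.3, size=(n_time, n_along)))

# EOF analysis = SVD of the time-anomaly matrix.
anom = shore - shore.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
var_frac = s**2 / np.sum(s**2)            # variance explained per EOF

print("variance explained by first two EOFs:", np.round(var_frac[:2], 2))
```

The rows of `vt` are the spatial EOFs and the columns of `u * s` the corresponding time amplitudes; in this construction the first EOF recovers the dominant island-wide mode.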
NASA Astrophysics Data System (ADS)
McGranaghan, Ryan M.; Mannucci, Anthony J.; Forsyth, Colin
2017-12-01
We explore the characteristics, controlling parameters, and relationships of multiscale field-aligned currents (FACs) using a rigorous, comprehensive, and cross-platform analysis. Our unique approach combines FAC data from the Swarm satellites and the Advanced Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) to create a database of small-scale (˜10-150 km, <1° latitudinal width), mesoscale (˜150-250 km, 1-2° latitudinal width), and large-scale (>250 km) FACs. We examine these data for the repeatable behavior of FACs across scales (i.e., the characteristics), the dependence on the interplanetary magnetic field orientation, and the degree to which each scale "departs" from nominal large-scale specification. We retrieve new information by utilizing magnetic latitude and local time dependence, correlation analyses, and quantification of the departure of smaller from larger scales. We find that (1) FAC characteristics and dependence on controlling parameters do not map between scales in a straightforward manner, (2) relationships between FAC scales exhibit local time dependence, and (3) the dayside high-latitude region is characterized by remarkably distinct FAC behavior when analyzed at different scales, and the locations of distinction correspond to "anomalous" ionosphere-thermosphere behavior. Comparing with nominal large-scale FACs, we find that differences are characterized by a horseshoe shape, maximizing across dayside local times, and that difference magnitudes increase when smaller-scale observed FACs are considered. We suggest that both new physics and increased resolution of models are required to address the multiscale complexities. We include a summary table of our findings to provide a quick reference for differences between multiscale FACs.
Stochastic dynamics of genetic broadcasting networks
NASA Astrophysics Data System (ADS)
Potoyan, Davit A.; Wolynes, Peter G.
2017-11-01
The complex genetic programs of eukaryotic cells are often regulated by key transcription factors occupying or clearing out of a large number of genomic locations. Orchestrating the residence times of these factors is therefore important for the well organized functioning of a large network. The classic models of genetic switches sidestep this timing issue by assuming the binding of transcription factors to be governed entirely by thermodynamic protein-DNA affinities. Here we show that relying on passive thermodynamics and random release times can lead to a "time-scale crisis" for master genes that broadcast their signals to a large number of binding sites. We demonstrate that this time-scale crisis for clearance in a large broadcasting network can be resolved by actively regulating residence times through molecular stripping. We illustrate these ideas by studying a model of the stochastic dynamics of the genetic network of the central eukaryotic master regulator NFκB which broadcasts its signals to many downstream genes that regulate immune response, apoptosis, etc.
Evolution of scaling emergence in large-scale spatial epidemic spreading.
Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan
2011-01-01
Zipf's law and Heaps' law are two representatives of the scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has hardly been clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before a stable state is reached in which Heaps' law still holds while strict Zipf's law disappears. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help in understanding the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early stage of a pandemic disease.
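The two scalings can be measured as follows: Zipf's law is the power-law decay of the rank-frequency distribution, and Heaps' law is the sublinear growth of the number of distinct items with the number of events. A minimal numpy sketch on synthetic site-visit data (standing in for epidemic case locations; all parameters are illustrative assumptions):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# Synthetic "infection locations": draw sites with Zipf-like probabilities,
# a stand-in for cases arriving in locations of heterogeneous size.
n_sites, n_events = 5000, 200_000
p = 1.0 / np.arange(1, n_sites + 1)
p /= p.sum()
events = rng.choice(n_sites, size=n_events, p=p)

# Zipf: slope of the log-log rank-frequency curve over the top ranks.
freq = np.sort(np.array(list(Counter(events).values())))[::-1]
ranks = np.arange(1, freq.size + 1)
zipf_slope = np.polyfit(np.log(ranks[:100]), np.log(freq[:100]), 1)[0]

# Heaps: number of distinct sites versus number of events.
seen, distinct = set(), []
for e in events:
    seen.add(e)
    distinct.append(len(seen))
heaps_slope = np.polyfit(np.log(np.arange(1, n_events + 1)[1000:]),
                         np.log(np.array(distinct)[1000:]), 1)[0]

print(f"Zipf exponent ~ {-zipf_slope:.2f}, Heaps exponent ~ {heaps_slope:.2f}")
```

With a finite number of sites, the Heaps curve bends over as the site pool saturates, which is the kind of departure from a strict power law that the paper tracks over the course of an epidemic.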
Multiscale modeling and general theory of non-equilibrium plasma-assisted ignition and combustion
NASA Astrophysics Data System (ADS)
Yang, Suo; Nagaraja, Sharath; Sun, Wenting; Yang, Vigor
2017-11-01
A self-consistent framework for modeling and simulation of plasma-assisted ignition and combustion is established. In this framework, a 'frozen electric field' modeling approach is applied to take advantage of the quasi-periodic behavior of the electrical characteristics, avoiding re-calculation of the electric field for each pulse. The correlated dynamic adaptive chemistry (CO-DAC) method is employed to accelerate the calculation of large and stiff chemical mechanisms. The time step is dynamically updated during the simulation through a three-stage multi-time-scale modeling strategy, which exploits the large separation of time scales in nanosecond pulsed plasma discharges. A general theory of plasma-assisted ignition and combustion is then proposed. Nanosecond pulsed plasma discharges for ignition and combustion can be divided into four stages. Stage I is the discharge pulse, with time scales of O(1-10 ns). In this stage, input energy is coupled into electron-impact excitation and dissociation reactions to generate charged/excited species and radicals. Stage II is the afterglow during the gap between two adjacent pulses, with time scales of O(100 ns). In this stage, quenching of excited species dissociates O2 and fuel molecules and provides fast gas heating. Stage III is the remainder of the gap between pulses, with time scales of O(1-100 µs). The radicals generated during Stages I and II significantly enhance exothermic reactions in this stage. The cumulative effects of multiple pulses are seen in Stage IV, with time scales of O(1-1000 ms), which include preheated gas temperatures and a large pool of radicals and fuel fragments to trigger ignition. For flames, plasma can significantly enhance radical generation and gas heating in the preheat zone, thereby enhancing flame establishment.
Transition from lognormal to χ2-superstatistics for financial time series
NASA Astrophysics Data System (ADS)
Xu, Dan; Beck, Christian
2016-07-01
Share price returns on different time scales can be well modelled by a superstatistical dynamics. Here we provide an investigation of which type of superstatistics is most suitable to properly describe share price dynamics on various time scales. It is shown that while χ2-superstatistics works well on a time scale of days, on a much smaller time scale of minutes the price changes are better described by lognormal superstatistics. The system dynamics thus exhibits a transition from lognormal to χ2-superstatistics as a function of time scale. We discuss a more general model interpolating between both statistics which fits the observed data very well. We also present results on correlation functions of the extracted superstatistical volatility parameter, which exhibit exponential decay for returns on large time scales, whereas for returns on small time scales there are long-range correlations and power-law decay.
Synchrony between reanalysis-driven RCM simulations and observations: variation with time scale
NASA Astrophysics Data System (ADS)
de Elía, Ramón; Laprise, René; Biner, Sébastien; Merleau, James
2017-04-01
Unlike coupled global climate models (CGCMs) that run in a stand-alone mode, nested regional climate models (RCMs) are driven by either a CGCM or a reanalysis dataset. This feature makes high correlations between the RCM simulation and its driver possible. When the driving dataset is a reanalysis, time correlations between RCM output and observations are also common and to be expected. In certain situations the time correlation between driver and driven RCM is of particular interest, and techniques have been developed to increase it (e.g. large-scale spectral nudging). For such cases, a question that remains open is whether aggregating in time increases the correlation between RCM output and observations. That is, although the RCM may be unable to reproduce a given daily event, it may still be able to satisfactorily simulate an anomaly on a monthly or annual basis. This is a preconception that the authors of this work and others in the community have held, perhaps as a natural extension of the properties of upscaling or aggregating other statistics such as the mean squared error. Here we explore analytically four particular cases that help us partially answer this question. In addition, we use observational datasets and RCM-simulated data to illustrate our findings. Results indicate that time upscaling does not necessarily increase time correlations, and that those interested in achieving high monthly or annual time correlations between RCM output and observations may have to do so by increasing correlation as much as possible at the shortest time scale. This may indicate that even when one is concerned only with time correlations at large temporal scales, large-scale spectral nudging acting at the time-step level may have to be used.
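The central analytic point, that aggregating in time does not necessarily increase correlation, can be illustrated with a toy construction (entirely synthetic; not one of the paper's four analytic cases) in which model and observations share the fast variability but disagree on the sign of a slow anomaly:

```python
import numpy as np

rng = np.random.default_rng(3)
n_days = 3000
t = np.arange(n_days)

# Shared fast (daily) variability, opposing slow (seasonal-scale) anomalies.
fast = rng.normal(size=n_days) * 2.0
slow = np.sin(2 * np.pi * t / 300)
obs = fast + slow + rng.normal(scale=0.2, size=n_days)
rcm = fast - slow + rng.normal(scale=0.2, size=n_days)  # misses the slow anomaly sign

def monthly(x):
    """Aggregate a daily series into 30-day means."""
    return x[: n_days // 30 * 30].reshape(-1, 30).mean(axis=1)

r_daily = np.corrcoef(obs, rcm)[0, 1]
r_monthly = np.corrcoef(monthly(obs), monthly(rcm))[0, 1]
print(f"daily r = {r_daily:.2f}, monthly r = {r_monthly:.2f}")
```

Averaging to monthly values suppresses the shared fast component, so the disagreement on the slow component dominates and the correlation drops rather than rises, exactly the possibility the paper warns about.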
NASA Technical Reports Server (NTRS)
Mjolsness, Eric; Castano, Rebecca; Mann, Tobias; Wold, Barbara
2000-01-01
We provide preliminary evidence that existing algorithms for inferring small-scale gene regulation networks from gene expression data can be adapted to large-scale gene expression data coming from hybridization microarrays. The essential steps are (1) clustering many genes by their expression time-course data into a minimal set of clusters of co-expressed genes, (2) theoretically modeling the various conditions under which the time courses are measured using a continuous-time analog recurrent neural network for the cluster mean time courses, (3) fitting such a regulatory model to the cluster mean time courses by simulated annealing with weight decay, and (4) analysing several such fits for commonalities in the circuit parameter sets, including the connection matrices. This procedure can be used to assess the adequacy of existing and future gene expression time-course data sets for determining transcriptional regulatory relationships such as coregulation.
Waszczuk, M A; Zavos, H M S; Gregory, A M; Eley, T C
2016-01-01
Depression and anxiety persist within and across diagnostic boundaries. The manner in which common v. disorder-specific genetic and environmental influences operate across development to maintain internalizing disorders and their co-morbidity is unclear. This paper investigates the stability and change of etiological influences on depression, panic, generalized, separation and social anxiety symptoms, and their co-occurrence, across adolescence and young adulthood. A total of 2619 twins/siblings prospectively reported symptoms of depression and anxiety at mean ages 15, 17 and 20 years. Each symptom scale showed a similar pattern of moderate continuity across development, largely underpinned by genetic stability. New genetic influences contributing to change in the developmental course of the symptoms emerged at each time point. All symptom scales correlated moderately with one another over time. Genetic influences, both stable and time-specific, overlapped considerably between the scales. Non-shared environmental influences were largely time- and symptom-specific, but some contributed moderately to the stability of depression and anxiety symptom scales. These stable, longitudinal environmental influences were highly correlated between the symptoms. The results highlight both stable and dynamic etiology of depression and anxiety symptom scales. They provide preliminary evidence that stable as well as newly emerging genes contribute to the co-morbidity between depression and anxiety across adolescence and young adulthood. Conversely, environmental influences are largely time-specific and contribute to change in symptoms over time. The results inform molecular genetics research and transdiagnostic treatment and prevention approaches.
Potential Impacts of Offshore Wind Farms on North Sea Stratification
Carpenter, Jeffrey R.; Merckelbach, Lucas; Callies, Ulrich; Clark, Suzanna; Gaslikova, Lidia; Baschek, Burkard
2016-01-01
Advances in offshore wind farm (OWF) technology have recently led to their construction in coastal waters that are deep enough to be seasonally stratified. As tidal currents move past the OWF foundation structures they generate a turbulent wake that will contribute to a mixing of the stratified water column. In this study we show that the mixing generated in this way may have a significant impact on the large-scale stratification of the German Bight region of the North Sea. This region is chosen as the focus of this study since the planning of OWFs is particularly widespread. Using a combination of idealised modelling and in situ measurements, we provide order-of-magnitude estimates of two important time scales that are key to understanding the impacts of OWFs: (i) a mixing time scale, describing how long a complete mixing of the stratification takes, and (ii) an advective time scale, quantifying for how long a water parcel is expected to undergo enhanced wind farm mixing. The results are especially sensitive to both the drag coefficient and type of foundation structure, as well as the evolution of the pycnocline under enhanced mixing conditions—both of which are not well known. With these limitations in mind, the results show that OWFs could impact the large-scale stratification, but only when they occupy extensive shelf regions. They are expected to have very little impact on large-scale stratification at the current capacity in the North Sea, but the impact could be significant in future large-scale development scenarios.
NASA Astrophysics Data System (ADS)
Lu, Shikun; Zhang, Hao; Li, Xihai; Li, Yihong; Niu, Chao; Yang, Xiaoyun; Liu, Daizhi
2018-03-01
Combining analyses of spatial and temporal characteristics of the ionosphere is of great significance for scientific research and engineering applications. Tensor decomposition is performed to explore the temporal-longitudinal-latitudinal characteristics in the ionosphere. Three-dimensional tensors are established based on the time series of ionospheric vertical total electron content maps obtained from the Centre for Orbit Determination in Europe. To obtain large-scale characteristics of the ionosphere, rank-1 decomposition is used to obtain U^{(1)}, U^{(2)}, and U^{(3)}, which are the resulting vectors for the time, longitude, and latitude modes, respectively. Our initial finding is that the correspondence between the frequency spectrum of U^{(1)} and solar variation indicates that rank-1 decomposition primarily describes large-scale temporal variations in the global ionosphere caused by the Sun. Furthermore, the time lags between the maxima of the ionospheric U^{(2)} and solar irradiation range from 1 to 3.7 h without seasonal dependence. The differences in time lags may indicate different interactions between processes in the magnetosphere-ionosphere-thermosphere system. Based on the dataset displayed in the geomagnetic coordinates, the position of the barycenter of U^{(3)} provides evidence for north-south asymmetry (NSA) in the large-scale ionospheric variations. The daily variation in such asymmetry indicates the influences of solar ionization. The diurnal geomagnetic coordinate variations in U^{(3)} show that the large-scale EIA (equatorial ionization anomaly) variations during the day and night have similar characteristics. Considering the influences of geomagnetic disturbance on ionospheric behavior, we select the geomagnetic quiet GIMs to construct the ionospheric tensor. The results indicate that the geomagnetic disturbances have little effect on large-scale ionospheric characteristics.
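The rank-1 decomposition used above can be sketched as an alternating-least-squares loop in numpy; the best rank-1 approximation of a 3-way tensor factors it into one vector per mode (time, longitude, latitude). The synthetic "TEC" tensor below is an illustrative stand-in for the CODE VTEC map series, not real data.

```python
import numpy as np

def rank1_cp(T, n_iter=200, seed=0):
    """Best rank-1 approximation T ~ lam * u1 (x) u2 (x) u3 of a 3-way
    tensor, by alternating least squares with unit-norm factor vectors."""
    rng = np.random.default_rng(seed)
    u2 = rng.normal(size=T.shape[1]); u2 /= np.linalg.norm(u2)
    u3 = rng.normal(size=T.shape[2]); u3 /= np.linalg.norm(u3)
    lam = 0.0
    for _ in range(n_iter):
        u1 = np.einsum('ijk,j,k->i', T, u2, u3); u1 /= np.linalg.norm(u1)
        u2 = np.einsum('ijk,i,k->j', T, u1, u3); u2 /= np.linalg.norm(u2)
        u3 = np.einsum('ijk,i,j->k', T, u1, u2)
        lam = np.linalg.norm(u3); u3 /= lam
    return lam, u1, u2, u3

# Synthetic tensor: a diurnal time mode x longitude mode x latitude mode,
# plus small noise (a stand-in for a vertical-TEC map time series).
t = np.linspace(0, 2 * np.pi, 48)
lon = np.linspace(-np.pi, np.pi, 72)
lat = np.linspace(-np.pi / 2, np.pi / 2, 36)
T = np.einsum('i,j,k->ijk', 2 + np.sin(t), 1 + 0.3 * np.cos(lon), np.cos(lat))
T += np.random.default_rng(4).normal(scale=0.01, size=T.shape)

lam, u1, u2, u3 = rank1_cp(T)
approx = lam * np.einsum('i,j,k->ijk', u1, u2, u3)
rel_err = np.linalg.norm(approx - T) / np.linalg.norm(T)
print(f"rank-1 reconstruction relative error: {rel_err:.3f}")
```

The recovered `u1`, `u2`, and `u3` play the roles of the paper's time, longitude, and latitude mode vectors U^{(1)}, U^{(2)}, and U^{(3)}.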
Participation in International Large-Scale Assessments from a US Perspective
ERIC Educational Resources Information Center
Plisko, Valena White
2013-01-01
International large-scale assessments (ILSAs) play a distinct role in the United States' decentralized federal education system. Separate from national and state assessments, they offer an external, objective measure for the United States to assess student performance comparatively with other countries and over time. The US engagement in ILSAs…
New probes of Cosmic Microwave Background large-scale anomalies
NASA Astrophysics Data System (ADS)
Aiola, Simone
Fifty years of Cosmic Microwave Background (CMB) data played a crucial role in constraining the parameters of the ΛCDM model, where Dark Energy, Dark Matter, and Inflation are the three most important pillars not yet understood. Inflation prescribes an isotropic universe on large scales, and it generates spatially-correlated density fluctuations over the whole Hubble volume. CMB temperature fluctuations on scales bigger than a degree in the sky, affected by modes on super-horizon scale at the time of recombination, are a clean snapshot of the universe after inflation. In addition, the accelerated expansion of the universe, driven by Dark Energy, leaves a hardly detectable imprint in the large-scale temperature sky at late times. Such fundamental predictions have been tested with current CMB data and found to be in tension with what we expect from our simple ΛCDM model. Is this tension just a random fluke or a fundamental issue with the present model? In this thesis, we present a new framework to probe the lack of large-scale correlations in the temperature sky using CMB polarization data. Our analysis shows that if a suppression in the CMB polarization correlations is detected, it will provide compelling evidence for new physics on super-horizon scale. To further analyze the statistical properties of the CMB temperature sky, we constrain the degree of statistical anisotropy of the CMB in the context of the observed large-scale dipole power asymmetry. We find evidence for a scale-dependent dipolar modulation at 2.5σ. To isolate late-time signals from the primordial ones, we test the anomalously high Integrated Sachs-Wolfe effect signal generated by superstructures in the universe. We find that the detected signal is in tension with the expectations from ΛCDM at the 2.5σ level, which is somewhat smaller than what has been previously argued.
To conclude, we describe the current status of CMB observations on small scales, highlighting the tensions between Planck, WMAP, and SPT temperature data and how the upcoming data release of the ACTpol experiment will contribute to this matter. We provide a description of the current status of the data-analysis pipeline and discuss its ability to recover large-scale modes.
Role of optometry school in single day large scale school vision testing
Anuradha, N; Ramani, Krishnakumar
2015-01-01
Background: School vision testing aims at identification and management of refractive errors. Large-scale school vision testing using conventional methods is time-consuming and demands a lot of chair time from eye care professionals. A new strategy involving a school of optometry in single-day large-scale school vision testing is discussed. Aim: The aim was to describe a new approach to performing vision testing of school children on a large scale in a single day. Materials and Methods: A single-day vision testing strategy was implemented wherein 123 members (20 teams comprising optometry students, each headed by an optometrist) conducted vision testing for children in 51 schools. School vision testing included basic vision screening, refraction, frame measurements, frame choice, and referrals for other ocular problems. Results: A total of 12448 children were screened, among whom 420 (3.37%) were identified to have refractive errors. Of these, 28 (1.26%) children belonged to the primary, 163 (9.80%) to the middle, 129 (4.67%) to the secondary, and 100 (1.73%) to the higher secondary levels of education, respectively. 265 (2.12%) children were referred for further evaluation. Conclusion: Single-day large-scale school vision testing can be adopted by schools of optometry to reach a higher number of children within a short span.
Time scales of supercooled water and implications for reversible polyamorphism
NASA Astrophysics Data System (ADS)
Limmer, David T.; Chandler, David
2015-09-01
Deeply supercooled water exhibits complex dynamics with large density fluctuations, ice coarsening and characteristic time scales extending from picoseconds to milliseconds. Here, we discuss implications of these time scales as they pertain to two-phase coexistence and to molecular simulations of supercooled water. Specifically, we argue that it is possible to discount liquid-liquid criticality because the time scales imply that correlation lengths for such behaviour would be bounded by no more than a few nanometres. Similarly, it is possible to discount two-liquid coexistence because the time scales imply a bounded interfacial free energy that cannot grow in proportion to a macroscopic surface area. From time scales alone, therefore, we see that coexisting domains of differing density in supercooled water can be no more than nanoscale transient fluctuations.
NASA Astrophysics Data System (ADS)
Massei, N.; Dieppois, B.; Hannah, D. M.; Lavers, D. A.; Fossa, M.; Laignel, B.; Debret, M.
2017-03-01
In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, or provide future scenarios of water resources. For a better understanding of hydrological changes, it is of crucial importance to determine how and to what extent the trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating correlation between large and local scales, empirical statistical downscaling, and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insight into the atmospheric patterns associated with the regional hydrology. We hypothesized that: (i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and (ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the links between large and local scales were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach, which integrated discrete wavelet multiresolution analysis for reconstructing monthly regional hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector).
This approach consisted of three steps: (1) decomposing the large-scale climate and hydrological signals (SLP field, precipitation, or streamflow) using discrete wavelet multiresolution analysis; (2) generating a statistical downscaling model per time-scale; and (3) summing all scale-dependent models to obtain a final reconstruction of the predictand. The results revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in both precipitation and streamflow. For instance, the post-1980 period, characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with alternating flood and extremely low-flow/drought periods (e.g., winter/spring 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. In accordance with previous studies, the wavelet components detected in SLP, precipitation, and streamflow on interannual to interdecadal time-scales could be interpreted in terms of the influence of the Gulf Stream oceanic front on atmospheric circulation.
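The three-step scheme (decompose, model per scale, sum) can be sketched compactly. The code below is a minimal illustration, not the authors' model: it substitutes a simple Haar-style block-average decomposition for the discrete wavelet transform they used, and fits one least-squares linear model per time-scale before summing the scale-wise predictions. All function names are ours.

```python
import numpy as np

def haar_mra(x, levels):
    """Additive Haar-style multiresolution analysis.

    Returns `levels` detail components plus one smooth (trend) component
    that sum exactly to x. len(x) must be divisible by 2**levels.
    """
    comps, approx, block = [], np.asarray(x, dtype=float), 1
    for _ in range(levels):
        block *= 2
        smooth = np.repeat(approx.reshape(-1, block).mean(axis=1), block)
        comps.append(approx - smooth)   # detail at this time-scale
        approx = smooth
    comps.append(approx)                # final smooth component
    return comps

def multires_esd(predictor, predictand, levels):
    """Step 1: decompose both series; step 2: fit one linear model per
    time-scale; step 3: sum the scale-dependent predictions."""
    px, py = haar_mra(predictor, levels), haar_mra(predictand, levels)
    recon = np.zeros(len(predictand))
    for cx, cy in zip(px, py):
        A = np.column_stack([cx, np.ones_like(cx)])
        coef, *_ = np.linalg.lstsq(A, cy, rcond=None)
        recon += A @ coef
    return recon
```

Because the decomposition is additive and the per-scale models are fitted independently, a predictand whose coupling to the predictor differs across time-scales is reconstructed scale by scale, which is exactly what a single global regression cannot do.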
Stochastic dynamics of genetic broadcasting networks
NASA Astrophysics Data System (ADS)
Potoyan, Davit; Wolynes, Peter
The complex genetic programs of eukaryotic cells are often regulated by key transcription factors occupying or clearing out of a large number of genomic locations. Orchestrating the residence times of these factors is therefore important for the well-organized functioning of a large network. The classic models of genetic switches sidestep this timing issue by assuming the binding of transcription factors to be governed entirely by thermodynamic protein-DNA affinities. Here we show that relying on passive thermodynamics and random release times can lead to a ''time-scale crisis'' for master genes that broadcast their signals to a large number of binding sites. We demonstrate that this ''time-scale crisis'' can be resolved by actively regulating residence times through molecular stripping. We illustrate these ideas by studying the stochastic dynamics of the genetic network of the central eukaryotic master regulator NFκB, which broadcasts its signals to many downstream genes that regulate immune response, apoptosis, etc.
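The ''time-scale crisis'' admits a compact numerical illustration. With purely passive release, the time for a master factor to clear all of its N genomic sites is the maximum of N independent exponential waiting times, which grows like ln(N)/k_off; an active stripping rate adds to every site's release rate and collapses this time. The snippet below is an illustrative toy model with invented rate values, not the authors' NFκB network simulation.

```python
import numpy as np

def clearance_time(n_sites, k_off, k_strip, rng):
    """Time until the LAST of n_sites releases its bound factor.

    Each site releases after an exponential waiting time whose total rate
    is k_off (passive unbinding) + k_strip (active molecular stripping).
    """
    waits = rng.exponential(1.0 / (k_off + k_strip), size=n_sites)
    return waits.max()

def mean_clearance(n_sites, k_off, k_strip, trials, seed=0):
    """Monte-Carlo mean of the broadcast clearance time."""
    rng = np.random.default_rng(seed)
    return np.mean([clearance_time(n_sites, k_off, k_strip, rng)
                    for _ in range(trials)])
```

With passive release alone (k_strip = 0), the mean clearance time for 1000 sites is roughly ln(1000) ≈ 7 in units of 1/k_off, so broadcasting to more sites slows switching logarithmically; adding stripping removes this penalty.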
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masada, Youhei; Sano, Takayoshi, E-mail: ymasada@auecc.aichi-edu.ac.jp, E-mail: sano@ile.osaka-u.ac.jp
We report the first successful simulation of spontaneous formation of surface magnetic structures from a large-scale dynamo by strongly stratified thermal convection in Cartesian geometry. The large-scale dynamo observed in our strongly stratified model has physical properties similar to those in earlier weakly stratified convective dynamo simulations, indicating that the α²-type mechanism is responsible for the dynamo. In addition to the large-scale dynamo, we find that large-scale structures of the vertical magnetic field are spontaneously formed at the surface of the convection zone (CZ) only in cases with a strongly stratified atmosphere. The organization of the vertical magnetic field proceeds in the upper CZ within tens of convective turnover times, and band-like bipolar structures recurrently appear in the dynamo-saturated stage. We consider several candidates that could possibly be the origin of the surface magnetic structure formation, and then suggest the existence of an as-yet-unknown mechanism for the self-organization of the large-scale magnetic structure, which should be inherent in the strongly stratified convective atmosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Machicoane, Nathanaël; Volk, Romain
We investigate the response of large inertial particles to turbulent fluctuations in an inhomogeneous and anisotropic flow. We conduct a Lagrangian study using particles both heavier and lighter than the surrounding fluid, whose diameters are comparable to the flow integral scale. Both velocity and acceleration correlation functions are analyzed to compute the Lagrangian integral time and the acceleration time scale of such particles. Knowledge of how size and density affect these time scales is crucial in understanding particle dynamics and may permit stochastic process modeling using two-time models (for instance, Sawford's). As particles are tracked over long times in the quasi-totality of a closed flow, the mean flow influences their behaviour and also biases the velocity time statistics, in particular the velocity correlation functions. By using a method that allows for the computation of turbulent velocity trajectories, we can obtain unbiased Lagrangian integral times. This is particularly useful in accessing the scale separation for such particles and in comparing it to the case of fluid particles in a similar configuration.
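In practice, the Lagrangian integral time discussed above is the time integral of the normalized velocity autocorrelation function, commonly truncated at its first zero crossing to suppress the noisy tail. A minimal estimator (assuming a statistically stationary, uniformly sampled velocity signal; function names are ours) might look like:

```python
import numpy as np

def autocorrelation(u, max_lag):
    """Normalized autocorrelation rho(k) of a stationary signal, lags 0..max_lag-1."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    var = np.mean(u * u)
    return np.array([np.mean(u[:len(u) - k] * u[k:])
                     for k in range(max_lag)]) / var

def integral_time(u, dt, max_lag):
    """Integral time: integrate rho up to its first zero crossing (trapezoidal rule)."""
    rho = autocorrelation(u, max_lag)
    crossing = np.argmax(rho <= 0.0) if np.any(rho <= 0.0) else len(rho)
    r = rho[:crossing]
    return dt * (r.sum() - 0.5 * (r[0] + r[-1]))
```

For an exponentially correlated signal with correlation time tau, the estimator should return approximately tau, which makes a synthetic Ornstein-Uhlenbeck process a convenient sanity check.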
Robbins, Blaine
2013-01-01
Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation. PMID:23527211
Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition.
Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti
2017-05-01
Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of controller software based on a technique called the queued state machine for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for a wide range of applications. This controller, running in a LabVIEW environment, interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with real-time MEG analysis software via transmission control protocol/internet protocol, to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.
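The queued-state-machine technique referenced above decouples command submission from execution: commands accumulate in a FIFO queue, and a single loop dequeues each one and updates the controller state accordingly. The sketch below is a language-neutral toy in Python, not the LabVIEW MEG code; all state and command names are invented for illustration.

```python
from queue import Queue

class QueuedStateMachine:
    """Toy DAQ controller driven by a FIFO command queue."""

    def __init__(self):
        self.commands = Queue()
        self.state = "idle"
        self.samples = []   # stand-in for acquired channel data
        self.history = []   # (command, resulting state) audit trail

    def submit(self, command, payload=None):
        self.commands.put((command, payload))

    def run(self):
        """Drain the queue, executing one state transition per command."""
        while not self.commands.empty():
            command, payload = self.commands.get()
            if command == "configure":
                self.state = "configured"
            elif command == "start" and self.state == "configured":
                self.state = "acquiring"
            elif command == "read" and self.state == "acquiring":
                self.samples.append(payload)  # a real controller reads hardware here
            elif command == "stop":
                self.state = "idle"
            self.history.append((command, self.state))
        return self.state
```

In a real controller the loop would run in its own thread, with the queue fed concurrently by the user interface and by TCP/IP messages from the analysis workstation; draining it synchronously here keeps the sketch deterministic and testable.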
Environmental status of livestock and poultry sectors in China under current transformation stage.
Qian, Yi; Song, Kaihui; Hu, Tao; Ying, Tianyu
2018-05-01
Intensive animal husbandry has raised great environmental concern in many developed countries, yet some developing countries are still suffering environmental pollution from the livestock and poultry sectors. Driven by large demand, China has experienced a remarkable increase in dairy and meat production, especially during the transformation from conventional household breeding to large-scale industrial breeding. At the same time, a large amount of manure from the livestock and poultry sector is released into waterbodies and soil, causing eutrophication and soil degradation. If manure is not treated or utilized properly, this condition is reinforced in large-scale operations where the amount of manure exceeds the soil's nutrient capacity. Our research aims to analyze whether the transformation of raising scale is beneficial to the environment and to present the latest status of the livestock and poultry sectors in China. The estimation of the pollutants generated and discharged from the livestock and poultry sector in China will facilitate the legislation of manure management. This paper analyzes the pollutants generated from the manure of the five principal commercial animals under different farming practices. The results show that fattening pigs contribute almost half of the pollutants released from manure. Moreover, beef cattle exert the largest environmental impact per unit of production, about 2-3 times that of pork and 5-20 times that of chicken. Animals raised in large-scale feedlots generate fewer pollutants than those raised in households. The shift towards industrial production of livestock and poultry is easier to manage from the environmental perspective, and appropriately scaled large-scale operations are encouraged. Regulatory control, manure treatment, and financial subsidies for manure treatment and utilization are recommended to achieve ecological agriculture in China. Copyright © 2017 Elsevier B.V. All rights reserved.
The effects of magnetic fields on the growth of thermal instabilities in cooling flows
NASA Technical Reports Server (NTRS)
David, Laurence P.; Bregman, Joel N.
1989-01-01
The effects of heat conduction and magnetic fields on the growth of thermal instabilities in cooling flows are examined using a time-dependent hydrodynamics code. It is found that, for magnetic field strengths of roughly 1 micro-Gauss, magnetic pressure forces can completely suppress shocks from forming in thermally unstable entropy perturbations with initial length scales as large as 20 kpc, even for initial amplitudes as great as 60 percent. Perturbations with initial amplitudes of 50 percent and initial magnetic field strengths of 1 micro-Gauss cool to 10,000 K on a time scale which is only 22 percent of the initial instantaneous cooling time. Nonlinear perturbations can thus condense out of cooling flows on a time scale substantially less than the time required for linear perturbations and produce significant mass deposition of cold gas while the accreting intracluster gas is still at large radii.
Anderson, R.N.; Boulanger, A.; Bagdonas, E.P.; Xu, L.; He, W.
1996-12-17
The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells. 22 figs.
Large-scale dynamos in rapidly rotating plane layer convection
NASA Astrophysics Data System (ADS)
Bushby, P. J.; Käpylä, P. J.; Masada, Y.; Brandenburg, A.; Favier, B.; Guervilly, C.; Käpylä, M. J.
2018-05-01
Context. Convectively driven flows play a crucial role in the dynamo processes that are responsible for producing magnetic activity in stars and planets. It is still not fully understood why many astrophysical magnetic fields have a significant large-scale component. Aims: Our aim is to investigate the dynamo properties of compressible convection in a rapidly rotating Cartesian domain, focusing upon a parameter regime in which the underlying hydrodynamic flow is known to be unstable to a large-scale vortex instability. Methods: The governing equations of three-dimensional non-linear magnetohydrodynamics (MHD) are solved numerically. Different numerical schemes are compared and we propose a possible benchmark case for other similar codes. Results: In keeping with previous related studies, we find that convection in this parameter regime can drive a large-scale dynamo. The components of the mean horizontal magnetic field oscillate, leading to a continuous overall rotation of the mean field. Whilst the large-scale vortex instability dominates the early evolution of the system, the large-scale vortex is suppressed by the magnetic field and makes a negligible contribution to the mean electromotive force that is responsible for driving the large-scale dynamo. The cycle period of the dynamo is comparable to the ohmic decay time, with longer cycles for dynamos in convective systems that are closer to onset. In these particular simulations, large-scale dynamo action is found only when vertical magnetic field boundary conditions are adopted at the upper and lower boundaries. Strongly modulated large-scale dynamos are found at higher Rayleigh numbers, with periods of reduced activity (grand minima-like events) occurring during transient phases in which the large-scale vortex temporarily re-establishes itself, before being suppressed again by the magnetic field.
ERIC Educational Resources Information Center
Wendt, Heike; Bos, Wilfried; Goy, Martin
2011-01-01
Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models" to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…
Upper Washita River experimental watersheds: Multiyear stability of soil water content profiles
USDA-ARS?s Scientific Manuscript database
Scaling in situ soil water content time series data to a large spatial domain is a key element of watershed environmental monitoring and modeling. The primary method of estimating and monitoring large-scale soil water content distributions is via in situ networks. It is critical to establish the s...
The International Conference on Vector and Parallel Computing (2nd)
1989-01-17
[Garbled proceedings extract; recoverable contents include the papers "Computation of the SVD of Bidiagonal Matrices" and "Lattice QCD - As a Large Scale Scientific Computation", the latter vectorized for the IBM 3090 Vector Facility with reduced elapsed times, and Lattice QCD benchmarked on a large number of computers including the Cray X-MP and Cray 2, with the wavefront solver routine dominating the cost.]
A new method of presenting the large-scale magnetic field structure on the Sun and in the solar corona
NASA Technical Reports Server (NTRS)
Ponyavin, D. I.
1995-01-01
The large-scale photospheric magnetic field, measured at Stanford, has been analyzed in terms of surface harmonics. Changes of the photospheric field which occur within a whole solar rotation period can be resolved by this analysis. For this reason we used daily magnetograms of the line-of-sight magnetic field component observed from Earth over the solar disc. We have estimated the period during which day-to-day full-disc magnetograms must be collected. An original algorithm was applied to resolve time variations of spherical harmonics that reflect the time evolution of the large-scale magnetic field within a solar rotation period. This method of magnetic field presentation can be quite useful when direct magnetograph observations are lacking, for example because of bad weather conditions. We have used the calculated surface harmonics to reconstruct the large-scale magnetic field structure on the source surface near the Sun - the origin of the heliospheric current sheet and solar wind streams. The obtained results have been compared with in situ spacecraft observations and geomagnetic activity. We tried to show that the proposed technique can trace short-time variations of the heliospheric current sheet and short-lived solar wind streams. We have also compared our results with those obtained traditionally from the potential field approximation and extrapolation using synoptic charts as initial boundary conditions.
Perspectives on integrated modeling of transport processes in semiconductor crystal growth
NASA Technical Reports Server (NTRS)
Brown, Robert A.
1992-01-01
The wide range of length and time scales involved in industrial scale solidification processes is demonstrated here by considering the Czochralski process for the growth of large diameter silicon crystals that become the substrate material for modern microelectronic devices. The scales range in time from microseconds to thousands of seconds and in space from microns to meters. The physics and chemistry needed to model processes on these different length scales are reviewed.
Size dependent fragmentation of argon clusters in the soft x-ray ionization regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gisselbrecht, Mathieu; Lindgren, Andreas; Burmeister, Florian
Photofragmentation of argon clusters of average size ranging from 10 up to 1000 atoms is studied using soft x-ray radiation below the 2p threshold and multicoincidence mass spectroscopy technique. For small clusters (
Sound production due to large-scale coherent structures
NASA Technical Reports Server (NTRS)
Gatski, T. B.
1979-01-01
The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales: the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.
Evolution of Scaling Emergence in Large-Scale Spatial Epidemic Spreading
Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan
2011-01-01
Background Zipf's law and Heaps' law are two representatives of the scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has still hardly been clarified. Methodology/Principal Findings In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before a stable state is reached in which Heaps' law still exists while strict Zipf's law has disappeared. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results for pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. Conclusions/Significance The analyses of large-scale spatial epidemic spreading help us understand the temporal evolution of the scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of the epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies early in a pandemic. PMID:21747932
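Both scalings discussed above can be estimated directly from data: Zipf's law is the power-law decay of the rank-frequency curve, and Heaps' law is the sublinear power-law growth of the number of distinct items with stream length; each exponent falls out of a log-log least-squares fit. The snippet below demonstrates this on synthetic Zipf-distributed samples (our own illustration, unrelated to the epidemic data of the paper).

```python
import numpy as np

def zipf_exponent(samples, top=50):
    """Slope of log-frequency vs. log-rank over the `top` most frequent items."""
    _, counts = np.unique(samples, return_counts=True)
    freqs = np.sort(counts)[::-1][:top].astype(float)
    ranks = np.arange(1, len(freqs) + 1)
    return np.polyfit(np.log(ranks), np.log(freqs), 1)[0]

def heaps_exponent(samples, n_checkpoints=30):
    """Slope of log-vocabulary-size vs. log-stream-length."""
    ns = np.unique(np.logspace(2, np.log10(len(samples)),
                               n_checkpoints).astype(int))
    vocab = [len(set(samples[:n].tolist())) for n in ns]
    return np.polyfit(np.log(ns), np.log(vocab), 1)[0]
```

For samples drawn from a Zipf distribution with pmf proportional to k^(-2), the rank-frequency slope is near -2 while the vocabulary grows roughly as n^(1/2), so the two exponents probe genuinely different aspects of the same stream.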
NASA Astrophysics Data System (ADS)
Senthilkumar, K.; Ruchika Mehra Vijayan, E.
2017-11-01
This paper aims to illustrate real-time analysis of large-scale data. As a practical implementation, we perform sentiment analysis on live Twitter feeds, scoring each individual tweet. To analyze sentiment, we train our data model on SentiWordNet, a polarity-annotated WordNet sample from Princeton University. Our main objective is to efficiently analyze large-scale data on the fly using distributed computation. The Apache Spark and Apache Hadoop ecosystem is used as the distributed computation platform, with Java as the development language.
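At the heart of a pipeline like the one described above is a per-tweet polarity score: each token is looked up in a polarity lexicon and the hits are averaged. The sketch below is a plain-Python stand-in: the mini-lexicon is invented for illustration (the paper trains on SentiWordNet), and the Spark/Hadoop distribution layer is omitted; in the real system a function like this would be mapped over each tweet of a stream.

```python
# Invented mini-lexicon standing in for SentiWordNet polarity scores.
LEXICON = {
    "good": 0.8, "great": 0.9, "happy": 0.7, "love": 0.8,
    "bad": -0.7, "awful": -0.9, "sad": -0.6, "hate": -0.8,
}

def tweet_polarity(text, lexicon=LEXICON):
    """Average polarity of the lexicon words found in one tweet."""
    tokens = [t.strip(".,!?#@").lower() for t in text.split()]
    scores = [lexicon[t] for t in tokens if t in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

def label(text, threshold=0.05):
    """Map a polarity score to a three-way sentiment label."""
    score = tweet_polarity(text)
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"
```

Because the scoring of one tweet is independent of every other tweet, the computation parallelizes trivially, which is what makes the Spark map-style distribution effective for live feeds.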
Large-scale anisotropy in stably stratified rotating flows
Marino, R.; Mininni, P. D.; Rosenberg, D. L.; ...
2014-08-28
We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.
NASA Astrophysics Data System (ADS)
Roudier, Th.; Švanda, M.; Ballot, J.; Malherbe, J. M.; Rieutord, M.
2018-04-01
Context. Large-scale flows in the Sun play an important role in the dynamo process linked to the solar cycle. The important large-scale flows are the differential rotation and the meridional circulation, with amplitudes on the order of km s-1 and of a few m s-1, respectively. These flows also have cycle-related components, namely the torsional oscillations. Aims: We aim to determine large-scale plasma flows on the solar surface by deriving horizontal flow velocities using the techniques of solar granule tracking, dopplergrams, and time-distance helioseismology. Methods: Coherent structure tracking (CST) and time-distance helioseismology were used to investigate the solar differential rotation and meridional circulation at the solar surface on a 30-day HMI/SDO sequence. The influence of a large sunspot on these large-scale flows was also studied with a specific 7-day HMI/SDO sequence. Results: The large-scale flows measured by the CST on the solar surface and the same flows determined from the same data with helioseismology in the first 1 Mm below the surface are in good agreement in amplitude and direction. The torsional waves are also located at the same latitudes, with amplitudes of the same order. We are able to measure the meridional circulation correctly using the CST method with only 3 days of data, after averaging between ± 15° in longitude. Conclusions: We conclude that the combination of CST and Doppler velocities allows us to properly detect the solar differential rotation and also smaller-amplitude flows such as the meridional circulation and torsional waves. The results of our methods are in good agreement with helioseismic measurements.
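Granule-tracking methods such as CST ultimately reduce to estimating the displacement of small intensity patterns between consecutive frames. A standard building block for this (our illustration, not the CST implementation itself) is phase correlation, which recovers an integer pixel shift from the normalized cross-power spectrum of two frames:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer displacement (dy, dx) such that a == np.roll(b, (dy, dx), axis=(0, 1))."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak indices into the signed range [-N/2, N/2)
    dy = iy - corr.shape[0] if iy > corr.shape[0] // 2 else iy
    dx = ix - corr.shape[1] if ix > corr.shape[1] // 2 else ix
    return dy, dx
```

Applying such an estimator window by window over consecutive frames yields a horizontal flow field, whose longitudinal and latitudinal averages then expose the differential rotation and the much weaker meridional component.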
NASA Astrophysics Data System (ADS)
Dorrestijn, Jesse; Kahn, Brian H.; Teixeira, João; Irion, Fredrick W.
2018-05-01
Satellite observations are used to obtain vertical profiles of variance scaling of temperature (T) and specific humidity (q) in the atmosphere. A higher spatial resolution nadir retrieval at 13.5 km complements previous Atmospheric Infrared Sounder (AIRS) investigations with 45 km resolution retrievals and enables the derivation of power law scaling exponents to length scales as small as 55 km. We introduce a variable-sized circular-area Monte Carlo methodology to compute exponents instantaneously within the swath of AIRS that yields additional insight into scaling behavior. While this method is approximate and some biases are likely to exist within non-Gaussian portions of the satellite observational swaths of T and q, this method enables the estimation of scale-dependent behavior within instantaneous swaths for individual tropical and extratropical systems of interest. Scaling exponents are shown to fluctuate between β = -1 and -3 at scales ≥ 500 km, while at scales ≤ 500 km they are typically near β ≈ -2, with q slightly lower than T at the smallest scales observed. In the extratropics, the large-scale β is near -3. Within the tropics, however, the large-scale β for T is closer to -1 as small-scale moist convective processes dominate. In the tropics, q exhibits large-scale β between -2 and -3. The values of β are generally consistent with previous works of either time-averaged spatial variance estimates, or aircraft observations that require averaging over numerous flight observational segments. The instantaneous variance scaling methodology is relevant for cloud parameterization development and the assessment of time variability of scaling exponents.
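The scaling exponents β quoted above come from log-log fits of variance (spectral) density against wavenumber. The snippet below illustrates the estimation step on a synthetic 1-D transect with a prescribed power-law spectrum; the synthesis recipe and fitting range are our choices for illustration, not the AIRS Monte Carlo methodology.

```python
import numpy as np

def synthesize_powerlaw(n, beta, rng):
    """Real 1-D signal whose power spectrum follows k**beta (random phases)."""
    k = np.arange(n // 2 + 1, dtype=float)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (beta / 2.0)            # |F(k)|**2 ~ k**beta
    spectrum = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=k.size))
    return np.fft.irfft(spectrum, n=n)

def spectral_slope(x, kmin=2, kmax=None):
    """Log-log least-squares estimate of the spectral scaling exponent beta."""
    power = np.abs(np.fft.rfft(x)) ** 2
    kmax = kmax or len(x) // 4
    k = np.arange(len(power), dtype=float)
    window = slice(kmin, kmax)
    return np.polyfit(np.log(k[window]), np.log(power[window]), 1)[0]
```

The same fit applied over different wavenumber sub-ranges reveals scale-dependent behavior of the kind reported above, where β differs between scales larger and smaller than roughly 500 km.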
Lee, Yi-Hsuan; von Davier, Alina A
2013-07-01
Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
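A concrete instance of the quality-control-chart component mentioned above is an EWMA (exponentially weighted moving average) chart, which smooths the score series and flags administrations whose smoothed value leaves the control limits; the limits widen toward an asymptote as the weighting accumulates. The parameter values below are conventional chart defaults, not those of the operational language assessment.

```python
import numpy as np

def ewma_chart(scores, target, sigma, lam=0.2, L=3.0):
    """EWMA control chart: smoothed values and out-of-control flags.

    lam is the smoothing weight, L the control-limit width in sigmas;
    the variance term is the exact time-varying EWMA variance.
    """
    z = target
    zvals, flags = [], []
    for i, x in enumerate(scores, start=1):
        z = lam * x + (1.0 - lam) * z
        var = (lam / (2.0 - lam)) * (1.0 - (1.0 - lam) ** (2 * i)) * sigma ** 2
        zvals.append(z)
        flags.append(abs(z - target) > L * np.sqrt(var))
    return np.array(zvals), np.array(flags)
```

Because the EWMA accumulates evidence across administrations, a sustained shift in the score scale is flagged within a few administrations even when each individual deviation is within customary variation, which addresses the short reporting window described above.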
Universal scaling function in discrete time asymmetric exclusion processes
NASA Astrophysics Data System (ADS)
Chia, Nicholas; Bundschuh, Ralf
2005-03-01
In the universality class of one-dimensional Kardar-Parisi-Zhang (KPZ) surface growth, Derrida and Lebowitz conjectured the universality not only of the scaling exponents, but of an entire scaling function. Since Derrida and Lebowitz's original publication, this universality has been verified for a variety of continuous-time systems in the KPZ universality class. We study the Derrida-Lebowitz scaling function for multi-particle versions of the discrete-time Asymmetric Exclusion Process. We find that in this discrete-time system the Derrida-Lebowitz scaling function not only properly characterizes the large-system-size limit, but even accurately describes surprisingly small systems. These results have immediate applications in searching biological sequence databases.
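A crude simulation of a discrete-time exclusion process with fully parallel update on a ring (a sketch only, not the multi-particle variants analyzed in the paper), measuring the steady-state particle current:

```python
import random

random.seed(42)

n_sites, n_part, p_hop, steps = 50, 25, 0.5, 2000

# Ring occupation: 1 = particle, 0 = hole (density 1/2).
occ = [1] * n_part + [0] * (n_sites - n_part)
random.shuffle(occ)

hops = 0
for _ in range(steps):
    # Parallel (discrete-time) update: every particle with an empty right
    # neighbour attempts a hop simultaneously, each with probability p_hop.
    movable = [i for i in range(n_sites)
               if occ[i] == 1 and occ[(i + 1) % n_sites] == 0]
    for i in movable:
        if random.random() < p_hop:
            occ[i], occ[(i + 1) % n_sites] = 0, 1
            hops += 1

current = hops / (steps * n_sites)
print(f"measured current per site = {current:.3f}")
```

For the fully parallel TASEP on a ring, the steady-state current at density rho with hop probability p is known to be (1 - sqrt(1 - 4*p*rho*(1 - rho)))/2, roughly 0.15 for the values above, so the measured current should settle near that value.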
Effects of Eddy Viscosity on Time Correlations in Large Eddy Simulation
NASA Technical Reports Server (NTRS)
He, Guowei; Rubinstein, R.; Wang, Lian-Ping; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
Subgrid-scale (SGS) models for large eddy simulation (LES) have generally been evaluated by their ability to predict single-time statistics of turbulent flows such as kinetic energy and Reynolds stresses. Recent applications of large eddy simulation to the evaluation of sound sources in turbulent flows, a problem in which time correlations determine the frequency distribution of acoustic radiation, suggest that subgrid models should also be evaluated by their ability to predict time correlations in turbulent flows. This paper compares the two-point, two-time Eulerian velocity correlation evaluated from direct numerical simulation (DNS) with that evaluated from LES, using a spectral eddy viscosity, for isotropic homogeneous turbulence. It is found that the LES fields are too coherent, in the sense that their time correlations decay more slowly than the corresponding time correlations in the DNS fields. This observation is confirmed by theoretical estimates of time correlations using the Taylor expansion technique. The reason for the slower decay is that the eddy viscosity does not include the random backscatter, which decorrelates fluid motion at large scales. An effective eddy viscosity associated with time correlations is formulated, to which the eddy viscosity associated with energy transfer is a leading-order approximation.
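The diagnostic at issue, the normalized two-time correlation C(tau) = ⟨u(t)u(t+tau)⟩/⟨u²⟩, can be sketched on a synthetic stand-in for a resolved-velocity signal (an Ornstein-Uhlenbeck process with a prescribed decorrelation time; no actual DNS or LES data are used here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a resolved velocity signal: an Ornstein-Uhlenbeck
# process whose two-time correlation decays over a time scale tau_c.
n, dt, tau_c = 20000, 0.01, 0.5
u = np.empty(n)
u[0] = 0.0
noise = rng.normal(size=n - 1)
for t in range(n - 1):
    u[t + 1] = u[t] - (u[t] / tau_c) * dt + np.sqrt(2.0 * dt / tau_c) * noise[t]

def time_correlation(u, lag):
    """Normalized two-time correlation C(tau) = <u(t) u(t+tau)> / <u^2>."""
    if lag == 0:
        return 1.0
    return np.mean(u[:-lag] * u[lag:]) / np.mean(u * u)

lags = [0, 10, 50, 100]
corr = [time_correlation(u, m) for m in lags]
print({round(m * dt, 2): round(c, 3) for m, c in zip(lags, corr)})
```

Comparing how fast these curves fall off for DNS versus LES fields is precisely the "too coherent" test described in the abstract.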
Micro-Macro Coupling in Plasma Self-Organization Processes during Island Coalescence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan Weigang; Lapenta, Giovanni; Centrum voor Plasma-Astrofysica, Departement Wiskunde, Katholieke Universiteit Leuven, Celestijnenlaan 200B, 3001 Leuven
The collisionless island coalescence process is studied with particle-in-cell simulations, as an internally driven magnetic self-organization scenario. The macroscopic relaxation time, corresponding to the total time required for the coalescence to complete, is found to depend crucially on the scale of the system. For small-scale systems, where the macroscopic scales and the dissipation scales are more tightly coupled, the relaxation time is independent of the strength of the internal driving force: the small-scale processes of magnetic reconnection adjust to the amount of the initial magnetic flux to be reconnected, indicating that at the microscopic scales reconnection is enslaved by the macroscopic drive. However, for large-scale systems, where the micro-macro scale separation is larger, the relaxation time becomes dependent on the driving force.
Soft X-ray Emission from Large-Scale Galactic Outflows in Seyfert Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E. J. M.; Baum, S.; O'Dea, C.; Veilleux, S.
1998-01-01
Kiloparsec-scale soft X-ray nebulae extend along the galaxy minor axes in several Seyfert galaxies, including NGC 2992, NGC 4388 and NGC 5506. In these three galaxies, the extended X-ray emission observed in ROSAT HRI images has 0.2-2.4 keV X-ray luminosities of 0.4-3.5 x 10^40 erg s^-1. The X-ray nebulae are roughly co-spatial with the large-scale radio emission, suggesting that both are produced by large-scale galactic outflows. Assuming pressure balance between the radio and X-ray plasmas, the X-ray filling factor is >~ 10^4 times as large as the radio plasma filling factor, suggesting that large-scale outflows in Seyfert galaxies are predominantly winds of thermal X-ray emitting gas. We favor an interpretation in which large-scale outflows originate as AGN-driven jets that entrain and heat gas on kpc scales as they make their way out of the galaxy. AGN- and starburst-driven winds are also possible explanations if the winds are oriented along the rotation axis of the galaxy disk. Since large-scale outflows are present in at least 50 percent of Seyfert galaxies, the soft X-ray emission from the outflowing gas may, in many cases, explain the "soft excess" X-ray feature observed below 2 keV in X-ray spectra of many Seyfert 2 galaxies.
Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng
2017-04-10
This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
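The core data transformation, converting spatiotemporal speed records into a time-space matrix "image", can be sketched as follows (segment IDs and speeds are invented; the CNN that the paper trains on this input is omitted, with a simple mean pooling standing in for the feature-extraction step):

```python
import numpy as np

# Hypothetical probe records: (time_bin, segment_id, speed_km_h).
records = [
    (0, 0, 55.0), (0, 1, 60.0), (0, 2, 30.0),
    (1, 0, 50.0), (1, 1, 58.0), (1, 2, 28.0),
    (2, 0, 45.0), (2, 1, 52.0), (2, 2, 25.0),
]

# Fill the two-dimensional time-space matrix: rows = time bins, cols = segments.
n_time, n_space = 3, 3
image = np.zeros((n_time, n_space))
for t, s, v in records:
    image[t, s] = v

# Normalize to [0, 1] so the matrix can be fed to a CNN as a one-channel image.
image_norm = (image - image.min()) / (image.max() - image.min())

# Crude stand-in for the convolutional feature-extraction step: 2x2 mean pooling.
pooled = np.array([[image_norm[i:i + 2, j:j + 2].mean() for j in range(2)]
                   for i in range(2)])
print(image_norm.shape, pooled.shape)   # (3, 3) (2, 2)
```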
The influence of cosmic rays on the stability and large-scale dynamics of the interstellar medium
NASA Astrophysics Data System (ADS)
Kuznetsov, V. D.
1986-06-01
The diffusion-convection formulation is used to study the influence of galactic cosmic rays on the stability and dynamics of the interstellar medium which is supposedly kept in equilibrium by the gravitational field of stars. It is shown that the influence of cosmic rays on the growth rate of MHD instability depends largely on a dimensionless parameter expressing the ratio of the characteristic acoustic time scale to the cosmic-ray diffusion time. If this parameter is small, the cosmic rays will decelerate the build-up of instabilities, thereby stabilizing the system; in contrast, if the parameter is large, the system will be destabilized.
NASA Astrophysics Data System (ADS)
Massei, Nicolas; Labat, David; Jourde, Hervé; Lecoq, Nicolas; Mazzilli, Naomi
2017-04-01
The French karst observatory network SNO KARST is a national initiative of the National Institute for Earth Sciences and Astronomy (INSU) of the National Center for Scientific Research (CNRS). It is also part of OZCAR, the new French research infrastructure for observation of the critical zone. SNO KARST comprises several karst sites distributed over conterminous France, located in different physiographic and climatic contexts (Mediterranean, Pyrenean, Jura mountain, western and northwestern shore near the Atlantic or the English Channel). This allows the scientific community to develop advanced research and experiments dedicated to improving understanding of the hydrological functioning of karst catchments. Here we used several sites of SNO KARST to assess the hydrological response of karst catchments to long-term variation of large-scale atmospheric circulation. Using NCEP reanalysis products and karst discharge, we analyzed the links between large-scale circulation and karst water resources variability. As karst hydrosystems are highly heterogeneous media, they behave differently across different time scales: we explore the large-scale/local-scale relationships according to time scale using a wavelet multiresolution approach applied to both karst hydrological variables and large-scale climate fields such as sea level pressure (SLP). The different wavelet components of karst discharge in response to the corresponding wavelet components of climate fields are either 1) compared to physico-chemical/geochemical responses at karst springs, or 2) interpreted in terms of hydrological functioning by comparing discharge wavelet components to internal components obtained from precipitation/discharge models using the KARSTMOD conceptual modeling platform of SNO KARST.
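The wavelet multiresolution idea, splitting a discharge series into additive components at dyadic scales, can be sketched with a Haar-type (block-average) decomposition. The "discharge" series below is invented; an operational analysis would use a smoother wavelet family, e.g. via PyWavelets:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented daily spring-discharge series (m^3/s): base flow + slow cycle + noise.
n = 256
t = np.arange(n)
q = 2.0 + 0.5 * np.sin(2.0 * np.pi * t / 64.0) + 0.1 * rng.normal(size=n)

def haar_mra(x, levels):
    """Additive Haar-type multiresolution: x == sum(details) + smooth."""
    details, approx, block = [], x.astype(float), 1
    for _ in range(levels):
        block *= 2
        smooth = approx.reshape(-1, block).mean(axis=1).repeat(block)
        details.append(approx - smooth)   # detail component at this dyadic scale
        approx = smooth
    return details, approx

details, smooth = haar_mra(q, 4)
recon = smooth + sum(details)
print("levels:", len(details), "max reconstruction error:", np.abs(recon - q).max())
```

Each detail component plays the role of one "wavelet component of karst discharge" in the abstract, to be compared against the matching component of the climate field.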
Controllability of multiplex, multi-time-scale networks
NASA Astrophysics Data System (ADS)
Pósfai, Márton; Gao, Jianxi; Cornelius, Sean P.; Barabási, Albert-László; D'Souza, Raissa M.
2016-09-01
The paradigm of layered networks is used to describe many real-world systems, from biological networks to social organizations and transportation systems. While recently there has been much progress in understanding the general properties of multilayer networks, our understanding of how to control such systems remains limited. One fundamental aspect that makes this endeavor challenging is that each layer can operate at a different time scale; thus, we cannot directly apply standard ideas from structural control theory of individual networks. Here we address the problem of controlling multilayer and multi-time-scale networks focusing on two-layer multiplex networks with one-to-one interlayer coupling. We investigate the practically relevant case when the control signal is applied to the nodes of one layer. We develop a theory based on disjoint path covers to determine the minimum number of inputs (Ni) necessary for full control. We show that if both layers operate on the same time scale, then the network structure of both layers equally affect controllability. In the presence of time-scale separation, controllability is enhanced if the controller interacts with the faster layer: Ni decreases as the time-scale difference increases up to a critical time-scale difference, above which Ni remains constant and is completely determined by the faster layer. We show that the critical time-scale difference is large if layer I is easy and layer II is hard to control in isolation. In contrast, control becomes increasingly difficult if the controller interacts with the layer operating on the slower time scale and increasing time-scale separation leads to increased Ni, again up to a critical value, above which Ni still depends on the structure of both layers. This critical value is largely determined by the longest path in the faster layer that does not involve cycles. 
By identifying the underlying mechanisms that connect time-scale difference and controllability for a simplified model, we provide crucial insight into disentangling how our ability to control real interacting complex systems is affected by a variety of sources of complexity.
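The single-network baseline behind these results is the maximum-matching formula of structural control theory (N_i = max(1, N - |M|), with M a maximum matching of the bipartite out-copy/in-copy graph); the paper's disjoint-path-cover theory for two coupled layers generalizes it. A minimal sketch of the baseline:

```python
# Minimum inputs for a single directed network from structural control theory:
# N_i = max(1, N - |M|), where M is a maximum matching of the bipartite graph
# linking each node's "out" copy to its successors' "in" copies.
def max_bipartite_matching(n, edges):
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    match_in = {}                          # in-copy -> matched out-copy

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match_in or augment(match_in[v], seen):
                    match_in[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(n))

def min_inputs(n, edges):
    return max(1, n - max_bipartite_matching(n, edges))

# A directed path 0 -> 1 -> 2 -> 3 is controllable from a single input ...
print(min_inputs(4, [(0, 1), (1, 2), (2, 3)]))   # 1
# ... while a star 0 -> {1, 2, 3} needs three.
print(min_inputs(4, [(0, 1), (0, 2), (0, 3)]))   # 3
```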
NASA Astrophysics Data System (ADS)
Rowlands, G.; Kiyani, K. H.; Chapman, S. C.; Watkins, N. W.
2009-12-01
Quantitative analysis of solar wind fluctuations is often performed in the context of intermittent turbulence and centers on methods to quantify statistical scaling, such as power spectra and structure functions, which assume a stationary process. The solar wind exhibits large-scale secular changes, and so the question arises as to whether the time series of the fluctuations is non-stationary. One approach is to seek local stationarity by parsing the time interval over which statistical analysis is performed. Hence, natural systems such as the solar wind unavoidably provide observations over restricted intervals. Consequently, due to a reduction of sample size leading to poorer estimates, a stationary stochastic process (time series) can yield anomalous time variation in the scaling exponents, suggestive of nonstationarity. The variance in the estimates of scaling exponents computed from an interval of N observations is known for finite-variance processes to vary as ~1/N as N becomes large for certain statistical estimators; however, the convergence to this behavior depends on the details of the process, and may be slow. We study the variation in the scaling of second-order moments of the time-series increments with N for a variety of synthetic and "real world" time series, and we find that, in particular for heavy-tailed processes, for realizable N one is far from this ~1/N limiting behavior. We propose a semiempirical estimate for the minimum N needed to make a meaningful estimate of the scaling exponents for model stochastic processes and compare these with some "real world" time series from the solar wind. With fewer data points the stationary time series becomes indistinguishable from a nonstationary process, and we illustrate this with nonstationary synthetic datasets. Reference article: K. H. Kiyani, S. C. Chapman and N. W. Watkins, Phys. Rev. E 79, 036109 (2009).
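The ~1/N shrinkage of the estimator variance can be checked directly for a finite-variance process. The sketch below estimates the second-order exponent ζ(2) for an ensemble of Brownian-like walks (true ζ(2) = 1) at two sample sizes; all data are synthetic, and heavy-tailed processes, as the abstract notes, converge far more slowly:

```python
import numpy as np

rng = np.random.default_rng(4)

def scaling_exponent(x, taus=(1, 2, 4, 8)):
    """zeta(2) from second-order moments of increments: S2(tau) ~ tau^zeta."""
    s2 = [np.mean((x[tau:] - x[:-tau]) ** 2) for tau in taus]
    slope, _ = np.polyfit(np.log(taus), np.log(s2), 1)
    return slope

def estimator_variance(n_obs, n_real=200):
    """Variance of the zeta(2) estimate over an ensemble of Brownian-like walks."""
    ests = [scaling_exponent(np.cumsum(rng.normal(size=n_obs)))
            for _ in range(n_real)]
    return np.var(ests)

v_small, v_large = estimator_variance(500), estimator_variance(8000)
print(f"var at N=500: {v_small:.2e}   var at N=8000: {v_large:.2e}")
```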
Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.
Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E
2017-07-01
We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly applications to polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, going beyond the two scales of conventional coarse-graining strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
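The key property claimed above, gradient cost effectively independent of the number of parameters, comes from computing dJ/dθ with one backward (adjoint) sweep instead of one forward sensitivity per parameter. A minimal discrete-adjoint sketch for a one-parameter toy ODE (not the ErbB model or any specific toolbox):

```python
import numpy as np

# Discrete adjoint for dx/dt = -k*x (forward Euler) with loss
# J = 0.5*(x(T) - x_obs)^2: one forward sweep, one backward sweep.
k, x0, x_obs, dt, n = 0.7, 1.0, 0.3, 1e-3, 1000

# Forward sweep, storing the trajectory.
x = np.empty(n + 1)
x[0] = x0
for i in range(n):
    x[i + 1] = x[i] * (1.0 - k * dt)

# Backward (adjoint) sweep: lam tracks dJ/dx_i.
lam = x[n] - x_obs
grad = 0.0
for i in range(n - 1, -1, -1):
    grad += lam * (-dt * x[i])          # d x_{i+1} / dk   = -dt * x_i
    lam = lam * (1.0 - k * dt)          # d x_{i+1} / dx_i = 1 - k*dt

# Finite-difference check of dJ/dk.
def final_loss(kk):
    xx = x0
    for _ in range(n):
        xx *= (1.0 - kk * dt)
    return 0.5 * (xx - x_obs) ** 2

eps = 1e-6
fd = (final_loss(k + eps) - final_loss(k - eps)) / (2 * eps)
print(f"adjoint grad = {grad:.6f}, finite-difference = {fd:.6f}")
```

With many parameters the backward sweep is unchanged; only the cheap inner products against the parameter Jacobians multiply, which is the scalability advantage the abstract reports.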
Corridors Increase Plant Species Richness at Large Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damschen, Ellen I.; Haddad, Nick M.; Orrock, John L.
2006-09-01
Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.
Effects of large-scale irregularities of the ionosphere in the propagation of decametric radio waves
NASA Astrophysics Data System (ADS)
Kerblai, T. S.; Kovalevskaia, E. M.
1985-12-01
A numerical experiment is used to study the simultaneous influence of regular space-time gradients and large-scale traveling ionospheric disturbances (TIDs) as manifested in the angular and Doppler characteristics of decametric-wave propagation. Conditions typical for middle latitudes are chosen as the ionospheric models: conditions under which large-scale TIDs in the F2-layer evolve on the background of winter or equinox structures of the ionosphere. Certain conclusions on the character of TID effects for various states of the background ionosphere are drawn which can be used to interpret experimental results.
ERIC Educational Resources Information Center
Hooper, Martin
2017-01-01
TIMSS and PIRLS assess representative samples of students at regular intervals, measuring trends in student achievement and student contexts for learning. Because individual students are not tracked over time, analysis of international large-scale assessment data is usually conducted cross-sectionally. Gustafsson (2007) proposed examining the data…
Counseling Psychology and Large-Scale Disasters: Moving on to Action, Practice, and Research
ERIC Educational Resources Information Center
Jacobs, Sue C.; Hoffman, Mary Ann; Leach, Mark M.; Gerstein, Lawrence H.
2011-01-01
Juntunen and Parham each reacted positively with important personal reflections and/or calls to action in response to "Counseling Psychology and Large-Scale Disasters, Catastrophes, and Traumas: Opportunities for Growth." We comment on the primary themes and suggestions they raised. Since the time we were stimulated by Katrina and its aftermath…
Time dependent turbulence modeling and analytical theories of turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, R.
1993-01-01
By simplifying the direct interaction approximation (DIA) for turbulent shear flow, time dependent formulas are derived for the Reynolds stresses which can be included in two equation models. The Green's function is treated phenomenologically, however, following Smith and Yakhot, we insist on the short and long time limits required by DIA. For small strain rates, perturbative evaluation of the correlation function yields a time dependent theory which includes normal stress effects in simple shear flows. From this standpoint, the phenomenological Launder-Reece-Rodi model is obtained by replacing the Green's function by its long time limit. Eddy damping corrections to short time behavior initiate too quickly in this model; in contrast, the present theory exhibits strong suppression of eddy damping at short times. A time dependent theory for large strain rates is proposed in which large scales are governed by rapid distortion theory while small scales are governed by Kolmogorov inertial range dynamics. At short times and large strain rates, the theory closely matches rapid distortion theory, but at long times it relaxes to an eddy damping model.
Identifying the scale-dependent motifs in atmospheric surface layer by ordinal pattern analysis
NASA Astrophysics Data System (ADS)
Li, Qinglei; Fu, Zuntao
2018-07-01
Ramp-like structures in various atmospheric surface layer time series have long been studied, but the presence of finer-scale motifs embedded within larger-scale ramp-like structures has largely been overlooked in the reported literature. Here a novel, objective and well-adapted methodology, ordinal pattern analysis, is adopted to study the finer-scale motifs in atmospheric boundary-layer (ABL) time series. The studies show that the motifs represented by different ordinal patterns exhibit clustering properties, and 6 dominant motifs out of the 24 possible motifs account for about 45% of the time series under particular scales, which indicates the higher contribution of finer-scale motifs to the series. Further studies indicate that motif statistics are similar for stable and unstable conditions at larger scales, but large discrepancies are found at smaller scales, and the frequencies of motifs "1234" and/or "4321" are somewhat higher under stable conditions than under unstable conditions. Under stable conditions, the occurrence frequencies of motifs "1234" and "4321" change greatly: the occurrence frequency of motif "1234" decreases from nearly 24% to 4.5% as the scale factor increases, and the occurrence frequency of motif "4321" changes nonlinearly with increasing scale. These pronounced changes of the dominant motifs with scale can be taken as an indicator to quantify flow-structure changes under different stability conditions, and a motif entropy can be defined from only the 6 dominant motifs to quantify this time-scale-independent property of the motifs. All these results suggest that the defining scale of the finer-scale motifs should be carefully taken into consideration in the interpretation of turbulent coherent structures.
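Ordinal pattern analysis itself is compact: each length-4 window is mapped to the permutation that sorts it (argsort convention assumed here), giving 4! = 24 possible motifs; the paper's "1234" is the monotonically increasing motif. A sketch on a trivial ramp series:

```python
import math
from collections import Counter

def ordinal_patterns(series, order=4):
    """Count, for each window of length `order`, the permutation that sorts it."""
    counts = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        counts[tuple(sorted(range(order), key=lambda j: window[j]))] += 1
    return counts

# A monotonically increasing series contains only the ramp motif ("1234").
counts = ordinal_patterns(list(range(100)))
print(len(counts), counts[(0, 1, 2, 3)])   # 1 97

# In general, order 4 admits 4! = 24 distinct motifs.
print(math.factorial(4))                   # 24
```

Motif frequencies computed this way over filtered (scale-selected) series give the scale-dependent statistics discussed above.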
Analyzing large scale genomic data on the cloud with Sparkhit
Huang, Liren; Krüger, Jan
2018-01-01
Motivation: The increasing amount of next-generation sequencing data poses a fundamental challenge on large scale genomic analytics. Existing tools use different distributed computational platforms to scale-out bioinformatics workloads. However, the scalability of these tools is not efficient. Moreover, they have heavy run time overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results: Sparkhit integrates a variety of analytical methods. It is implemented in the Spark extended MapReduce model. It runs 92–157 times faster than MetaSpark on metagenomic fragment recruitment and 18–32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, which includes the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 h, presenting an approach to easily associate large amounts of public datasets with reference data. Availability and implementation: Sparkhit is freely available at: https://rhinempi.github.io/sparkhit/. Contact: asczyrba@cebitec.uni-bielefeld.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29253074
NASA Astrophysics Data System (ADS)
Ghosh, Sayantan; Manimaran, P.; Panigrahi, Prasanta K.
2011-11-01
We make use of the wavelet transform to study the multi-scale, self-similar behavior, and deviations thereof, in the stock prices of large companies belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that wavelets belonging to the Daubechies (Db) basis enable one to isolate local polynomial trends of different degrees plays the key role in isolating fluctuations at different scales. One of the primary motivations of this work is to study the emergence of the k^-3 behavior [X. Gabaix, P. Gopikrishnan, V. Plerou, H. Stanley, A theory of power law distributions in financial market fluctuations, Nature 423 (2003) 267-270] of the fluctuations starting with high-frequency fluctuations. We make use of the Db4 and Db6 basis sets to respectively isolate local linear and quadratic trends at different scales, in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat-tailed non-Gaussian behavior and unstable periodic modulations at finer scales, from which the characteristic k^-3 power-law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
NASA Astrophysics Data System (ADS)
Scheifinger, Helfried; Menzel, Annette; Koch, Elisabeth; Peter, Christian; Ahas, Rein
2002-11-01
A data set of 17 phenological phases from Germany, Austria, Switzerland and Slovenia spanning the time period from 1951 to 1998 has been made available for analysis together with a gridded temperature data set (1° × 1° grid) and the North Atlantic Oscillation (NAO) index time series. The disturbances of the westerlies constitute the main atmospheric source for the temporal variability of phenological events in Europe. The trend, the standard deviation and the discontinuity of the phenological time series at the end of the 1980s can, to a great extent, be explained by the NAO. A number of factors modulate the influence of the NAO in time and space. The seasonal northward shift of the westerlies overlaps with the sequence of phenological spring phases, thereby gradually reducing its influence on the temporal variability of phenological events with progression of spring (temporal loss of influence). This temporal process is reflected by a pronounced decrease in trend and standard deviation values and common variability with the NAO with increasing year-day. The reduced influence of the NAO with increasing distance from the Atlantic coast is not only apparent in studies based on the data set of the International Phenological Gardens, but also in the data set of this study with a smaller spatial extent (large-scale loss of influence). The common variance between phenological and NAO time series displays a discontinuous drop from the European Atlantic coast towards the Alps. On a local and regional scale, mountainous terrain reduces the influence of the large-scale atmospheric flow from the Atlantic via a proposed decoupling mechanism. Valleys in mountainous terrain have the inclination to harbour temperature inversions over extended periods of time during the cold season, which isolate the valley climate from the large-scale atmospheric flow at higher altitudes. 
Most phenological stations reside at valley bottoms and are thus largely decoupled in their temporal variability from the influence of the westerly flow regime (local-scale loss of influence). This study corroborates an increasing number of similar investigations that find that vegetation does react in a sensitive way to variations of its atmospheric environment across various temporal and spatial scales.
Mountain erosion over 10 yr, 10 k.y., and 10 m.y. time scales
James W. Kirchner; Robert C. Finkel; Clifford S. Riebe; Darryl E. Granger; James L. Clayton; John G. King; Walter F. Megahan
2001-01-01
We used cosmogenic 10Be to measure erosion rates over 10 k.y. time scales at 32 Idaho mountain catchments, ranging from small experimental watersheds (0.2 km^2) to large river basins (35,000 km^2). These long-term sediment yields are, on average, 17 times higher than stream sediment fluxes measured over...
SIGN SINGULARITY AND FLARES IN SOLAR ACTIVE REGION NOAA 11158
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorriso-Valvo, L.; De Vita, G.; Kazachenko, M. D.
Solar Active Region NOAA 11158 has hosted a number of strong flares, including one X2.2 event. The complexity of the current density and current helicity is studied through cancellation analysis of their sign-singular measure, which features power-law scaling. Spectral analysis is also performed, revealing the presence of two separate scaling ranges with different spectral indices. The time evolution of the parameters is discussed. Sudden changes of the cancellation exponents at the time of large flares and the presence of correlation with extreme-ultraviolet and X-ray flux suggest that the eruption of large flares can be linked to the small-scale properties of the current structures.
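Cancellation analysis can be sketched on a 1D signed field: the partition function chi(l) sums the unsigned box integrals of the normalized signed measure, and its power-law decay chi(l) ~ l^(-kappa) defines the cancellation exponent kappa. The fields below are synthetic illustrations only (no magnetogram data):

```python
import numpy as np

rng = np.random.default_rng(5)

def chi(field, box):
    """Cancellation function: summed |box integrals| of the normalized signed measure."""
    n = (field.size // box) * box
    sums = field[:n].reshape(-1, box).sum(axis=1)
    return np.abs(sums).sum() / np.abs(field[:n]).sum()

# A single-signed field shows no cancellation: chi(l) = 1 at every scale.
positive = np.abs(rng.normal(size=1024))
chi_pos = [chi(positive, b) for b in (2, 8, 32)]

# A zero-mean random field cancels within boxes, so chi(l) decays with box size.
signed = rng.normal(size=1024)
chi_signed = [chi(signed, b) for b in (2, 8, 32)]

print([round(c, 3) for c in chi_pos], [round(c, 3) for c in chi_signed])
```

Tracking the fitted kappa in time, and watching for sudden changes around flares, is the analysis the abstract describes, applied there to current density and helicity maps.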
Yang, C L; Wei, H Y; Adler, A; Soleimani, M
2013-06-01
Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix; this can cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which results in a saving of memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with a block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
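The two ingredients, Jacobian thresholding and CG on the regularized normal equations, can be sketched as below. This is a serial, dense-numpy stand-in for illustration; the paper's method is block-wise and parallel, and the sizes, threshold, and regularization here are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Dense stand-in "Jacobian" whose entries are mostly tiny, as in 3D EIT
# sensitivity maps.
m, p = 300, 100
J = rng.normal(size=(m, p)) * (rng.random((m, p)) < 0.1)   # ~10% significant entries
J += 1e-8 * rng.normal(size=(m, p))                        # tiny values everywhere
x_true = rng.normal(size=p)
b = J @ x_true

# Step 1: sparsify by thresholding near-zero |J| entries to exactly zero.
J_sparse = np.where(np.abs(J) > 1e-6, J, 0.0)

# Step 2: conjugate gradient on the regularized normal equations
# (J^T J + lam I) x = J^T b, using only matrix-vector products.
def cg(A_mul, rhs, iters=400, tol=1e-12):
    x = np.zeros_like(rhs)
    r = rhs - A_mul(x)
    d = r.copy()
    for _ in range(iters):
        Ad = A_mul(d)
        alpha = (r @ r) / (d @ Ad)
        x = x + alpha * d
        r_new = r - alpha * Ad
        if np.linalg.norm(r_new) < tol:
            break
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return x

lam = 1e-8
x_hat = cg(lambda v: J_sparse.T @ (J_sparse @ v) + lam * v, J_sparse.T @ b)
rel_res = np.linalg.norm(J_sparse @ x_hat - b) / np.linalg.norm(b)
print(f"relative residual after sparse CG: {rel_res:.2e}")
```

In a real implementation the thresholded Jacobian would be stored in a compressed sparse format and the matrix-vector products distributed across workers, which is where the memory and time savings reported above come from.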
Experimental quantification of nonlinear time scales in inertial wave rotating turbulence
NASA Astrophysics Data System (ADS)
Yarom, Ehud; Salhov, Alon; Sharon, Eran
2017-12-01
We study nonlinearities of inertial waves in rotating turbulence. At small Rossby numbers the kinetic energy in the system is contained in helical inertial waves with time-dependent amplitudes. In this regime the time scales of amplitude variation are slow compared to the wave periods, and the spectrum is concentrated along the dispersion relation of the waves. A nonlinear time scale was extracted from the width of the spectrum, which reflects the intensity of nonlinear wave interactions. This nonlinear time scale is found to be proportional to (U·k)^-1, where k is the wave vector and U is the root-mean-square horizontal velocity, which is dominated by large scales. This correlation, which indicates the existence of turbulence in which inertial waves undergo weak nonlinear interactions, persists only for small Rossby numbers.
Multiscale structure of time series revealed by the monotony spectrum.
Vamoş, Călin
2017-03-01
Observation of complex systems produces time series with specific dynamics at different time scales. Most existing numerical methods for multiscale analysis first decompose the time series into several simpler components, and the multiscale structure is given by the properties of those components. We present a numerical method which describes the multiscale structure of arbitrary time series without decomposing them. It is based on the monotony spectrum, defined as the variation of the mean amplitude of the monotonic segments with respect to the mean local time scale during successive averagings of the time series, the local time scales being the durations of the monotonic segments. The maxima of the monotony spectrum indicate the time scales which dominate the variations of the time series. We show that the monotony spectrum can correctly analyze a diversity of artificial time series and can discriminate deterministic variations at large time scales from random fluctuations. As an application we analyze the multifractal structure of some hydrological time series.
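The construction can be sketched numerically. The code below is a minimal sketch, not the author's implementation; the test signal and the smoothing windows are arbitrary choices. It splits a series into maximal monotonic segments and tracks the mean segment amplitude against the mean local time scale under successive averagings.

```python
import numpy as np

def monotonic_segments(x):
    """Mean absolute amplitude and mean duration (local time scale)
    of the maximal monotonic runs of a series."""
    d = np.sign(np.diff(x))
    # a sign change of the increment marks a turning point
    breaks = np.where(d[1:] * d[:-1] < 0)[0] + 1
    bounds = np.concatenate(([0], breaks, [len(x) - 1]))
    amps = [abs(x[b] - x[a]) for a, b in zip(bounds[:-1], bounds[1:])]
    return np.mean(amps), np.mean(np.diff(bounds))

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=4000))  # random-walk test signal

# Successive averagings: each smoothing pass removes the fastest
# oscillations, so both the mean amplitude and the mean local time
# scale of the surviving monotonic segments grow.
spectrum = [monotonic_segments(np.convolve(series, np.ones(w) / w, mode="valid"))
            for w in (1, 2, 4, 8, 16, 32)]
```

Plotting mean amplitude against mean local time scale over the averagings gives the monotony spectrum; its maxima flag dominant time scales.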
NASA Astrophysics Data System (ADS)
Tan, Zhihong; Kaul, Colleen M.; Pressel, Kyle G.; Cohen, Yair; Schneider, Tapio; Teixeira, João.
2018-03-01
Large-scale weather forecasting and climate models are beginning to reach horizontal resolutions of kilometers, at which common assumptions made in existing parameterization schemes of subgrid-scale turbulence and convection—such as that they adjust instantaneously to changes in resolved-scale dynamics—cease to be justifiable. Additionally, the common practice of representing boundary-layer turbulence, shallow convection, and deep convection by discontinuously different parameterization schemes, each with its own set of parameters, has contributed to the proliferation of adjustable parameters in large-scale models. Here we lay the theoretical foundations for an extended eddy-diffusivity mass-flux (EDMF) scheme that has explicit time-dependence and memory of subgrid-scale variables and is designed to represent all subgrid-scale turbulence and convection, from boundary layer dynamics to deep convection, in a unified manner. Coherent updrafts and downdrafts in the scheme are represented as prognostic plumes that interact with their environment and potentially with each other through entrainment and detrainment. The more isotropic turbulence in their environment is represented through diffusive fluxes, with diffusivities obtained from a turbulence kinetic energy budget that consistently partitions turbulence kinetic energy between plumes and environment. The cross-sectional area of updrafts and downdrafts satisfies a prognostic continuity equation, which allows the plumes to cover variable and arbitrarily large fractions of a large-scale grid box and to have life cycles governed by their own internal dynamics. Relatively simple preliminary proposals for closure parameters are presented and are shown to lead to a successful simulation of shallow convection, including a time-dependent life cycle.
Late-time cosmological phase transitions
NASA Technical Reports Server (NTRS)
Schramm, David N.
1991-01-01
It is shown that the potential galaxy formation and large-scale structure problems of objects existing at high redshifts (z ≳ 5), structures existing on scales of 100 Mpc as well as velocity flows on such scales, and minimal microwave anisotropies (ΔT/T ≲ 10^-5) can be solved if the seeds needed to generate structure form in a vacuum phase transition after decoupling. It is argued that the basic physics of such a phase transition is no more exotic than that utilized in the more traditional GUT-scale phase transitions, and that, just as in the GUT case, significant random Gaussian fluctuations and/or topological defects can form. Scale lengths of approximately 100 Mpc for large-scale structure as well as approximately 1 Mpc for galaxy formation occur naturally. Possible support for new physics that might be associated with such a late-time transition comes from the preliminary results of the SAGE solar neutrino experiment, implying neutrino flavor mixing with values similar to those required for a late-time transition. It is also noted that a see-saw model for the neutrino masses might also imply a tau neutrino mass that is an ideal hot dark matter candidate. However, in general either hot or cold dark matter can be consistent with a late-time transition.
Large-scale circulation departures related to wet episodes in north-east Brazil
NASA Technical Reports Server (NTRS)
Sikdar, Dhirendra N.; Elsner, James B.
1987-01-01
Large-scale circulation features are presented as related to wet spells over northeast Brazil (Nordeste) during the rainy season (March and April) of 1979. The rainy season is divided into dry and wet periods; the FGGE and geostationary satellite data were averaged; and mean and departure fields of basic variables and cloudiness were studied. Analysis of seasonal mean circulation features shows: low-level easterlies beneath upper-level westerlies; weak meridional winds; high relative humidity over the Amazon basin and relatively dry conditions over the South Atlantic Ocean. A fluctuation was found in the large-scale circulation features on time scales of a few weeks or so over Nordeste and the South Atlantic sector. Even the subtropical high SLPs have large departures during wet episodes, implying a short-period oscillation in the Southern Hemisphere Hadley circulation.
Large-scale circulation departures related to wet episodes in northeast Brazil
NASA Technical Reports Server (NTRS)
Sikdar, D. N.; Elsner, J. B.
1985-01-01
Large-scale circulation features are presented as related to wet spells over northeast Brazil (Nordeste) during the rainy season (March and April) of 1979. The rainy season is divided into dry and wet periods; the FGGE and geostationary satellite data were averaged; and mean and departure fields of basic variables and cloudiness were studied. Analysis of seasonal mean circulation features shows: low-level easterlies beneath upper-level westerlies; weak meridional winds; high relative humidity over the Amazon basin and relatively dry conditions over the South Atlantic Ocean. A fluctuation was found in the large-scale circulation features on time scales of a few weeks or so over Nordeste and the South Atlantic sector. Even the subtropical high SLPs have large departures during wet episodes, implying a short-period oscillation in the Southern Hemisphere Hadley circulation.
A first large-scale flood inundation forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie
2013-11-04
At present, continental- to global-scale flood forecasting focusses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2.
However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead-time performance, notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in precipitation data.
Thick strings, the liquid crystal blue phase, and cosmological large-scale structure
NASA Technical Reports Server (NTRS)
Luo, Xiaochun; Schramm, David N.
1992-01-01
A phenomenological model based on the liquid crystal blue phase is proposed as a model for a late-time cosmological phase transition. Topological defects, in particular thick strings and/or domain walls, are presented as seeds for structure formation. It is shown that the observed large-scale structure, including quasi-periodic wall structure, can be well fitted in the model without violating the microwave background isotropy bound or the limits from induced gravitational waves and the millisecond pulsar timing. Furthermore, such late-time transitions can produce objects such as quasars at high redshifts. The model appears to work with either cold or hot dark matter.
The build up of the correlation between halo spin and the large-scale structure
NASA Astrophysics Data System (ADS)
Wang, Peng; Kang, Xi
2018-01-01
Both simulations and observations have confirmed that the spin of haloes/galaxies is correlated with the large-scale structure (LSS), with a mass dependence such that the spin of low-mass haloes/galaxies tends to be parallel to the LSS, while that of massive haloes/galaxies tends to be perpendicular to the LSS. It is still unclear how this mass dependence is built up over time. We use N-body simulations to trace the evolution of the halo spin-LSS correlation and find that at early times the spin of all halo progenitors is parallel to the LSS. As time goes on, mass collapse around massive haloes becomes more isotropic; in particular, recent mass accretion along the slowest collapsing direction is significant and turns the halo spin perpendicular to the LSS. Adopting the fractional anisotropy (FA) parameter to describe the degree of anisotropy of the large-scale environment, we find that the spin-LSS correlation is a strong function of the environment, such that a higher FA (more anisotropic environment) leads to an aligned signal and a lower anisotropy leads to a misaligned signal. In general, our results show that the spin-LSS correlation is a combined consequence of mass flow and halo growth within the cosmic web. Our predicted environmental dependence between spin and large-scale structure can be further tested using galaxy surveys.
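For reference, fractional anisotropy in its common definition (originally from diffusion-tensor imaging and adopted in large-scale-structure studies) is computed from the three eigenvalues of the local symmetric tensor; the eigenvalues below are made-up examples, not values from the paper.

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA from the three eigenvalues of a symmetric 3x3 tensor:
    0 for a fully isotropic environment, 1 in the maximally
    anisotropic (single dominant eigenvalue) limit."""
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = 2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2)
    return np.sqrt(num / den)

print(fractional_anisotropy(1.0, 1.0, 1.0))   # isotropic -> 0.0
print(fractional_anisotropy(1.0, 0.0, 0.0))   # one dominant direction -> 1.0
print(fractional_anisotropy(1.0, 0.5, 0.25))  # intermediate environment
```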
NASA Technical Reports Server (NTRS)
Over, Thomas M.; Gupta, Vijay K.
1994-01-01
Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
Lifetime evaluation of large format CMOS mixed signal infrared devices
NASA Astrophysics Data System (ADS)
Linder, A.; Glines, Eddie
2015-09-01
New large-scale foundry processes continue to produce reliable products, and these new large-scale devices continue to use industry best practice to screen for failure mechanisms and validate their long lifetime. Failure-in-Time (FIT) analysis, in conjunction with foundry qualification information, can be used to evaluate large-format device lifetimes. This analysis is a helpful tool when zero-failure life tests are typical. The reliability of the device is estimated by applying the failure rate to the use conditions. JEDEC publications continue to be the industry-accepted methods.
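A FIT estimate from a zero-failure life test is typically quoted as an upper confidence bound via the standard chi-squared method. A minimal sketch follows; the sample size, test duration, acceleration factor, and confidence level are hypothetical, not values from this paper.

```python
from scipy.stats import chi2

def fit_upper_bound(device_hours, failures=0, confidence=0.60):
    """Upper confidence bound on the failure rate, in FIT
    (failures per 1e9 device-hours), via the chi-squared estimator:
    lambda <= chi2_ppf(confidence, 2*failures + 2) / (2 * device_hours)."""
    lam = chi2.ppf(confidence, 2 * failures + 2) / (2.0 * device_hours)
    return lam * 1e9

# Hypothetical life test: 77 devices for 1000 h at accelerated
# conditions with an acceleration factor of 80, zero failures.
equivalent_hours = 77 * 1000 * 80
print(fit_upper_bound(equivalent_hours))  # roughly 150 FIT at 60% confidence
```

The bound tightens as equivalent device-hours accumulate, which is why acceleration factors from foundry qualification data matter so much when no failures are observed.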
Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.
Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S
2016-09-26
In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication.
NASA Astrophysics Data System (ADS)
Lenderink, Geert; Barbero, Renaud; Loriaux, Jessica; Fowler, Hayley
2017-04-01
Present-day precipitation-temperature scaling relations indicate that hourly precipitation extremes may have a response to warming exceeding the Clausius-Clapeyron (CC) relation; for The Netherlands, the dependency on surface dew point temperature follows two times the CC relation, corresponding to 14% per degree. Our hypothesis, supported by a simple physical argument presented here, is that this 2CC behaviour arises from the physics of convective clouds: the response is due to local feedbacks related to the convective activity, while other large-scale atmospheric forcing conditions remain similar except for the higher temperature (approximately uniform warming with height) and absolute humidity (corresponding to the assumption of unchanged relative humidity). To test this hypothesis, we analysed the large-scale atmospheric conditions accompanying summertime afternoon precipitation events using surface observations combined with a regional re-analysis for The Netherlands. Events are precipitation measurements clustered in time and space, derived from approximately 30 automatic weather stations. The hourly peak intensities of these events again reveal a 2CC scaling with the surface dew point temperature. The temperature excess of moist updrafts initialized at the surface and the maximum cloud depth are clear functions of surface dew point temperature, confirming the key role of surface humidity in convective activity. Almost no differences in relative humidity or the dry temperature lapse rate were found across the dew point temperature range, supporting our theory that 2CC scaling is mainly due to the response of convection to increases in near-surface humidity, while other atmospheric conditions remain similar. Additionally, hourly precipitation extremes are on average accompanied by substantial large-scale upward motions and therefore large-scale moisture convergence, which appears to accelerate with surface dew point.
This increase in large-scale moisture convergence appears to be a consequence of latent heat release due to the convective activity, as estimated from the quasi-geostrophic omega equation. Consequently, most hourly extremes occur in precipitation events with considerable spatial extent. Importantly, this event size appears to increase rapidly at the highest dew point temperatures, suggesting potentially strong impacts of climatic warming.
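The quoted rates translate directly into multiplicative intensity factors: CC scaling is roughly 7% per kelvin of dew point warming, 2CC roughly 14%. A small sketch (the reference intensity is an arbitrary illustrative value, not an observation from the study):

```python
# Clausius-Clapeyron (CC) scaling: saturation humidity, and with it
# precipitation-extreme intensity, rises ~7%/K; the observed 2CC
# response is ~14%/K.
CC, TWO_CC = 0.07, 0.14

def intensified(p0, d_t, rate):
    """Intensity after d_t kelvin of dew point warming at the given
    fractional rate per kelvin (compound growth)."""
    return p0 * (1.0 + rate) ** d_t

p0 = 20.0  # mm/h, an illustrative hourly extreme
for d_t in (1, 2, 3):
    print(d_t, round(intensified(p0, d_t, CC), 2),
          round(intensified(p0, d_t, TWO_CC), 2))
```

Three degrees of dew point warming thus yields roughly 23% more intense extremes under CC scaling but roughly 48% under 2CC, which is why the distinction matters for climate impacts.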
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-09-25
The Megatux platform enables the emulation of large-scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes, or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware but run actual software, enabling large scale without sacrificing fidelity.
Ensemble Kalman filters for dynamical systems with unresolved turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.
Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called 'representation' or 'representativeness' error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small-scale turbulence: a shallow energy spectrum proportional to k^(-5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics.
Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
NASA Astrophysics Data System (ADS)
Tremmel, M.; Governato, F.; Volonteri, M.; Quinn, T. R.; Pontzen, A.
2018-04-01
We present the first self-consistent prediction for the distribution of formation time-scales for close supermassive black hole (SMBH) pairs following galaxy mergers. Using ROMULUS25, the first large-scale cosmological simulation to accurately track the orbital evolution of SMBHs within their host galaxies down to sub-kpc scales, we predict an average formation rate density of close SMBH pairs of 0.013 cMpc-3 Gyr-1. We find that it is relatively rare for galaxy mergers to result in the formation of close SMBH pairs with sub-kpc separation, and those that do form are often the result of gigayears of orbital evolution following the galaxy merger. The likelihood and time-scale to form a close SMBH pair depend strongly on the mass ratio of the merging galaxies, as well as the presence of dense stellar cores. Low stellar mass ratio mergers with galaxies that lack a dense stellar core are more likely to become tidally disrupted and deposit their SMBH at large radii without any stellar core to aid in their orbital decay, resulting in a population of long-lived 'wandering' SMBHs. Conversely, SMBHs in galaxies that remain embedded within a stellar core form close pairs on much shorter time-scales on average. This time-scale is a crucial, though often ignored or oversimplified, ingredient in models predicting SMBH merger rates and the connection between SMBH and star formation activity.
M. Lorenz; G. Becher; V. Mues; E. Ulrich
2006-01-01
Forest condition in Europe has been monitored over 19 years jointly by the United Nations Economic Commission for Europe (UNECE) and the European Union (EU). Large-scale variations of forest condition over space and time in relation to natural and anthropogenic factors are assessed on about 6,000 plots systematically spread across Europe. This large-scale monitoring...
Nurture Groups: A Large-Scale, Controlled Study of Effects on Development and Academic Attainment
ERIC Educational Resources Information Center
Reynolds, Sue; MacKay, Tommy; Kearney, Maura
2009-01-01
Nurture groups have contributed to inclusive practices in primary schools in the UK for some time now and have frequently been the subject of articles in this journal. This large-scale, controlled study of nurture groups across 32 schools in the City of Glasgow provides further evidence for their effectiveness in addressing the emotional…
Spatio-temporal hierarchy in the dynamics of a minimalist protein model
NASA Astrophysics Data System (ADS)
Matsunaga, Yasuhiro; Baba, Akinori; Li, Chun-Biu; Straub, John E.; Toda, Mikito; Komatsuzaki, Tamiki; Berry, R. Stephen
2013-12-01
A method for time series analysis of molecular dynamics simulation of a protein is presented. In this approach, wavelet analysis and principal component analysis are combined to decompose the spatio-temporal protein dynamics into contributions from a hierarchy of different time and space scales. Unlike the conventional Fourier-based approaches, the time-localized wavelet basis captures the vibrational energy transfers among the collective motions of proteins. As an illustrative vehicle, we have applied our method to a coarse-grained minimalist protein model. During the folding and unfolding transitions of the protein, vibrational energy transfers between the fast and slow time scales were observed among the large-amplitude collective coordinates while the other small-amplitude motions are regarded as thermal noise. Analysis employing a Gaussian-based measure revealed that the time scales of the energy redistribution in the subspace spanned by such large-amplitude collective coordinates are slow compared to the other small-amplitude coordinates. Future prospects of the method are discussed in detail.
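The pipeline, projecting the trajectory onto collective (PCA) coordinates and then examining how energy distributes across time scales, can be mimicked in a toy setting. The sketch below is not the authors' method: it replaces the wavelet transform with a crude Haar-like band (the variance removed when a smoothing window is doubled), and the three-coordinate "trajectory" is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(4096)
# toy "trajectory": three coordinates sharing one slow collective
# oscillation (period 512) plus fast thermal noise
traj = np.stack([np.sin(2 * np.pi * t / 512) + 0.2 * rng.normal(size=t.size)
                 for _ in range(3)], axis=1)

# PCA: the dominant collective coordinate from the trajectory covariance
c = traj - traj.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(c.T))
pc1 = c @ evecs[:, -1]  # largest-variance mode

def scale_energy(x, w):
    """Energy in the band around time scale w: the variance removed
    when the smoothing window is doubled (a crude Haar-like band)."""
    fine = np.convolve(x, np.ones(w) / w, mode="valid")
    coarse = np.convolve(x, np.ones(2 * w) / (2 * w), mode="valid")
    n = min(len(fine), len(coarse))
    return np.var(fine[:n] - coarse[:n])

energies = {w: scale_energy(pc1, w) for w in (2, 8, 32, 128)}
# the slow collective mode concentrates energy at the large time scales
```

In the paper the band decomposition is done with a time-localized wavelet basis, which, unlike this stationary stand-in, can track energy transfer between scales during folding and unfolding events.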
NASA Astrophysics Data System (ADS)
Habarulema, John Bosco; Yizengaw, Endawoke; Katamzi-Joseph, Zama T.; Moldwin, Mark B.; Buchert, Stephan
2018-01-01
This paper discusses the ionosphere's response to the largest storm of solar cycle 24, during 16-18 March 2015. We have used Global Navigation Satellite Systems (GNSS) total electron content data to study large-scale traveling ionospheric disturbances (TIDs) over the American, African, and Asian regions. Equatorward large-scale TIDs propagated and crossed the equator to the other hemisphere, especially over the American and Asian sectors. Poleward TIDs with velocities in the range ≈400-700 m/s were observed during local daytime over the American and African sectors, originating from around the geomagnetic equator. Our investigation over the American sector shows that the poleward TIDs may have been launched by increased Lorentz coupling as a result of a penetrating electric field during the southward turning of the interplanetary magnetic field Bz. We have observed an increase in Swarm satellite electron density (Ne) at the same time that equatorward large-scale TIDs are visible over the European-African sector. The altitude Ne profiles from ionosonde observations show a possible link: storm-induced TIDs may have influenced the plasma distribution in the topside ionosphere at Swarm satellite altitude.
A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.
Halloran, John T; Rocke, David M
2018-05-04
Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, l2-SVM-MFN, large-scale SVM learning requires only about a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of l2-SVM-MFN, large-scale Percolator SVM learning is reduced to only about a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both l2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under the Apache license at bitbucket.org/jthalloran/percolator_upgrade.
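Both l2-SVM-MFN and TRON minimize the same L2-regularized squared-hinge objective; only the optimizer differs. A minimal sketch of that objective on synthetic target/decoy-style data, using a generic L-BFGS solver from SciPy rather than either of Percolator's actual solvers (the data and C value are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def squared_hinge(w, X, y, C):
    """L2-regularized squared-hinge loss and its gradient:
    0.5*||w||^2 + C * sum(max(0, 1 - y_i * x_i.w)^2)."""
    active = np.maximum(1.0 - y * (X @ w), 0.0)
    loss = 0.5 * w @ w + C * np.sum(active ** 2)
    grad = w - 2.0 * C * X.T @ (y * active)
    return loss, grad

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                         # toy PSM feature vectors
w_true = rng.normal(size=10)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=500))   # +1 target / -1 decoy

res = minimize(squared_hinge, np.zeros(10), args=(X, y, 1.0),
               jac=True, method="L-BFGS-B")
accuracy = np.mean(np.sign(X @ res.x) == y)
print(res.success, accuracy)
```

Because the objective is convex, any of these optimizers reaches the same global solution; as the abstract notes, the speedups change only how fast that solution is reached, not the learned decision boundary.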
Highly Efficient Large-Scale Lentiviral Vector Concentration by Tandem Tangential Flow Filtration
Cooper, Aaron R.; Patel, Sanjeet; Senadheera, Shantha; Plath, Kathrin; Kohn, Donald B.; Hollis, Roger P.
2014-01-01
Large-scale lentiviral vector (LV) concentration can be inefficient and time consuming, often involving multiple rounds of filtration and centrifugation. This report describes a simpler method using two tangential flow filtration (TFF) steps to concentrate liter-scale volumes of LV supernatant, achieving in excess of 2000-fold concentration in less than 3 hours with very high recovery (>97%). Large volumes of LV supernatant can be produced easily through the use of multi-layer flasks, each having 1720 cm2 surface area and producing ~560 mL of supernatant per flask. Combining the use of such flasks and TFF greatly simplifies large-scale production of LV. As a demonstration, the method is used to produce a very high titer LV (>10^10 TU/mL) and transduce primary human CD34+ hematopoietic stem/progenitor cells at high final vector concentrations with no overt toxicity. A complex LV (STEMCCA) for induced pluripotent stem cell generation is also concentrated from low initial titer and used to transduce and reprogram primary human fibroblasts with no overt toxicity. Additionally, a generalized and simple multiplexed real-time PCR assay is described for lentiviral vector titer and copy number determination. PMID:21784103
The Triggering of Large-Scale Waves by CME Initiation
NASA Astrophysics Data System (ADS)
Forbes, Terry
Studies of the large-scale waves generated at the onset of a coronal mass ejection (CME) can provide important information about the processes in the corona that trigger and drive CMEs. The size of the region where the waves originate can indicate the location of the magnetic forces that drive the CME outward, and the rate at which compressive waves steepen into shocks can provide a measure of how the driving forces develop in time. However, in practice it is difficult to separate the effects of wave formation from wave propagation. The problem is particularly acute for the corona because of the multiplicity of wave modes (e.g. slow versus fast MHD waves) and the highly nonuniform structure of the solar atmosphere. At the present time large-scale numerical simulations provide the best hope for deconvolving wave propagation and formation effects from one another.
Command and Control for Large-Scale Hybrid Warfare Systems
2014-06-05
...in C2 architectures was proposed using Petri nets (PNs) [10]. Liao [11] reported an architecture for...arises from the challenging and often-conflicting user requirements, scale, scope, inter-connectivity with different large-scale networked teams and...resources can be easily modelled and reconfigured by the notion of block matrix. At any time, the various missions of the networked team can be added
NASA Astrophysics Data System (ADS)
Verma, Aman; Mahesh, Krishnan
2012-08-01
The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The time scale used in the standard Lagrangian model contains an adjustable parameter θ; here, the Lagrangian time scale is computed dynamically from the solution, based on a "surrogate correlation" of the Germano-identity error (GIE), and requires no adjustable parameter. Also, a simple material-derivative relation is used to approximate the GIE at different events along a pathline, instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity; the present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers, and good agreement with previous computations and experiments is obtained. Noticeable improvement over the standard Lagrangian model is obtained, attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
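The Lagrangian-averaging idea (exponential relaxation of the Germano-identity products along a pathline, with the model coefficient C_s² = I_LM/I_MM) can be sketched in a few lines. This toy uses synthetic LM/MM inputs and a fixed relaxation time T; the paper's contribution is precisely that T is instead computed dynamically from a surrogate correlation of the Germano-identity error.

```python
import numpy as np

# Toy sketch of Lagrangian averaging for the dynamic Smagorinsky model
# (Meneveau-style exponential relaxation along a pathline). The LM/MM
# histories and the time scale T below are synthetic placeholders.

def lagrangian_average(lm_series, mm_series, dt, T, i_lm0=0.0, i_mm0=1e-8):
    """Relax I_LM and I_MM toward the instantaneous LM, MM products."""
    i_lm, i_mm = i_lm0, i_mm0
    eps = (dt / T) / (1.0 + dt / T)        # relaxation weight per step
    cs2 = []
    for lm, mm in zip(lm_series, mm_series):
        i_lm = eps * lm + (1.0 - eps) * i_lm
        i_mm = eps * mm + (1.0 - eps) * i_mm
        cs2.append(max(i_lm / i_mm, 0.0))  # clip negative eddy viscosity
    return cs2

rng = np.random.default_rng(0)
lm = 0.02 + 0.01 * rng.standard_normal(200)  # noisy Germano numerator
mm = 1.0 + 0.1 * rng.standard_normal(200)    # noisy denominator
cs2 = lagrangian_average(lm, mm, dt=1e-3, T=1e-2)
print(cs2[-1])  # smoothed coefficient, typically near the 0.02 mean
```

The averaging damps the pointwise noise that makes the unaveraged dynamic coefficient ill-behaved, which is why the choice of T (fixed versus dynamic) matters.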
Drought forecasting in Luanhe River basin involving climatic indices
NASA Astrophysics Data System (ADS)
Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.
2017-11-01
Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model that forecasts the transition probability from a current Standardized Precipitation Index (SPI) value to a future SPI class, based on the conditional distribution of a multivariate normal distribution so as to incorporate two large-scale climatic indices at the same time, and apply the model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI rest on the hypothesis that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select an appropriate SPI time scale and large-scale climatic indices. Findings indicate that longer-term aggregated monthly precipitation is, in general, more likely to be normally distributed, and that forecasting models should be applied to each gauge individually rather than to the whole basin. Taking Liying Gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. The controlling climatic indices for each gauge are then selected by Pearson correlation test, and the multivariate normality of the current-month SPI, the corresponding climatic indices, and the SPI 1, 2, and 3 months later is verified using the Shapiro-Wilk test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models that forecast transition probabilities from a current to a future SPI class.
The results show that the three proposed models outperform the two traditional models, and that involving large-scale climatic indices can improve forecasting accuracy.
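The core computation, conditioning a multivariate normal on the observed SPI and a climatic index and then integrating over a future SPI class interval, can be sketched as follows. The covariance matrix here is illustrative, not fitted to Luanhe data, and a single index is shown for brevity.

```python
import numpy as np
from scipy import stats

# Hedged sketch of the conditional-distribution idea: assume the vector
# (SPI_future, SPI_now, climatic index) is trivariate normal; condition
# the future SPI on the observed predictors, then integrate the resulting
# univariate normal over an SPI class interval. Covariance is illustrative.

cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.2],
                [0.3, 0.2, 1.0]])   # order: future SPI, current SPI, index
mu = np.zeros(3)

def transition_prob(x_obs, lo, hi):
    """P(lo < SPI_future <= hi | current SPI and climatic index = x_obs)."""
    s12 = cov[0, 1:]
    s22 = cov[1:, 1:]
    w = np.linalg.solve(s22, s12)                     # regression weights
    m = mu[0] + w @ (np.asarray(x_obs, float) - mu[1:])
    v = cov[0, 0] - s12 @ w                           # conditional variance
    sd = np.sqrt(v)
    return stats.norm.cdf(hi, m, sd) - stats.norm.cdf(lo, m, sd)

# probability of "moderate drought" (-1.5 < SPI <= -1.0) next step,
# given current SPI = -1.2 and an index anomaly of +0.5
p = transition_prob([-1.2, 0.5], -1.5, -1.0)
print(round(p, 3))
```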
An Implicit Solver on A Parallel Block-Structured Adaptive Mesh Grid for FLASH
NASA Astrophysics Data System (ADS)
Lee, D.; Gopal, S.; Mohapatra, P.
2012-07-01
We introduce a fully implicit solver for FLASH based on a Jacobian-Free Newton-Krylov (JFNK) approach with an appropriate preconditioner. The main goal of developing this JFNK-type implicit solver is to provide efficient high-order numerical algorithms and methodology for simulating stiff systems of differential equations on large-scale parallel computer architectures. A large number of natural problems in nonlinear physics involve a wide range of spatial and time scales of interest. A system that encompasses such a wide magnitude of scales is described as "stiff." A stiff system can arise in many different fields of physics, including fluid dynamics/aerodynamics, laboratory/space plasma physics, low Mach number flows, reactive flows, radiation hydrodynamics, and geophysical flows. One of the big challenges in solving such a stiff system using current-day computational resources lies in resolving time and length scales varying by several orders of magnitude. We introduce FLASH's preliminary implementation of a time-accurate JFNK-based implicit solver in the framework of FLASH's unsplit hydro solver.
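A minimal JFNK loop can be sketched with SciPy: the Jacobian is never formed, its action on a vector is approximated by a finite difference of the residual, and each Newton correction is solved with GMRES. This sketch is unpreconditioned (unlike FLASH's solver) and the toy nonlinear system is purely illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Minimal Jacobian-Free Newton-Krylov sketch: J·v is approximated by a
# one-sided finite difference of the residual; each Newton step solves
# J du = -f with GMRES. No preconditioner is used here, for brevity.

def jfnk(residual, u0, tol=1e-10, max_newton=20):
    u = u0.astype(float).copy()
    for _ in range(max_newton):
        f = residual(u)
        if np.linalg.norm(f) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(u))
        def jv(v, u=u, f=f, eps=eps):          # matrix-free J·v
            return (residual(u + eps * v) - f) / eps
        J = LinearOperator((u.size, u.size), matvec=jv, dtype=float)
        du, info = gmres(J, -f)                # inexact Newton step
        u = u + du
    return u

# Toy nonlinear system: u1^2 + u2 = 3, u1 + u2^3 = 5
res = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**3 - 5.0])
u = jfnk(res, np.array([1.0, 1.0]))
print(np.linalg.norm(res(u)))   # residual driven to near zero
```

For stiff PDE systems the idea is identical; the preconditioner mentioned in the abstract is what keeps the GMRES iteration count manageable when scales differ by orders of magnitude.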
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton, M.J.; Bourke, W.; Browning, G.L.
The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features, and also for large scales after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.
Dynamic structural disorder in supported nanoscale catalysts
NASA Astrophysics Data System (ADS)
Rehr, J. J.; Vila, F. D.
2014-04-01
We investigate the origin and physical effects of "dynamic structural disorder" (DSD) in supported nano-scale catalysts. DSD refers to the intrinsic fluctuating, inhomogeneous structure of such nano-scale systems. In contrast to bulk materials, nano-scale systems exhibit substantial fluctuations in structure, charge, temperature, and other quantities, as well as large surface effects. The DSD is driven largely by the stochastic librational motion of the center of mass and fluxional bonding at the nanoparticle surface due to thermal coupling with the substrate. Our approach for calculating and understanding DSD is based on a combination of real-time density functional theory/molecular dynamics simulations, transient coupled-oscillator models, and statistical mechanics. This approach treats thermal and dynamic effects over multiple time-scales, and includes bond-stretching and -bending vibrations, and transient tethering to the substrate at longer ps time-scales. Potential effects on the catalytic properties of these clusters are briefly explored. Model calculations of molecule-cluster interactions and molecular dissociation reaction paths are presented in which the reactant molecules are adsorbed on the surface of dynamically sampled clusters. This model suggests that DSD can affect both the prefactors and distribution of energy barriers in reaction rates, and thus can significantly affect catalytic activity at the nano-scale.
Interactive, graphics processing unit-based evaluation of evacuation scenarios at the state scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B
2011-01-01
In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome the severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of the transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.
NASA Astrophysics Data System (ADS)
Song, Z. N.; Sui, H. G.
2018-04-01
High-resolution remote sensing images carry important strategic information, in particular for quickly finding time-sensitive targets such as airplanes, ships, and cars. Often the first problem we face is how to rapidly judge whether a particular target is present anywhere in a large remote sensing image, rather than detecting it in a given image. Finding time-sensitive targets in a huge image is a great challenge: 1) complex backgrounds lead to high miss and false-alarm rates for tiny objects in large-scale images; 2) unlike traditional image retrieval, the task is not merely to compare the similarity of image blocks, but to quickly find specific targets in a huge image. In this paper, taking airplanes as an example, we present an effective method for searching for aircraft targets in large-scale optical remote sensing images. First, an improved visual attention model combining saliency detection with a line segment detector quickly locates suspect regions in a large, complicated remote sensing image. Then, for each region, a single neural network, without a region proposal step, predicts bounding boxes and class probabilities directly from the full image in one evaluation to find small airplane objects. Unlike sliding-window and region-proposal-based techniques, the network sees the entire image (region) during training and test time, so it implicitly encodes contextual information about classes as well as their appearance. Experimental results show that the proposed method quickly identifies airplanes in large-scale images.
Space-Time Controls on Carbon Sequestration Over Large-Scale Amazon Basin
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Cooper, Harry J.; Gu, Jiujing; Grose, Andrew; Norman, John; daRocha, Humberto R.; Starr, David O. (Technical Monitor)
2002-01-01
A major research focus of the LBA Ecology Program is an assessment of the carbon budget and the carbon-sequestering capacity of the large-scale forest-pasture system that dominates the Amazonia landscape, and of its time-space heterogeneity manifest in carbon fluxes across the large-scale Amazon basin ecosystem. Quantification of these processes requires a combination of in situ measurements, remotely sensed measurements from space, and a realistically forced hydrometeorological model coupled to a carbon assimilation model, capable of simulating details within the surface energy and water budgets along with the principal modes of photosynthesis and respiration. Here we describe the results of an investigation concerning the space-time controls of carbon sources and sinks distributed over the large-scale Amazon basin. The results are derived from a carbon-water-energy budget retrieval system for the basin, which uses a coupled carbon assimilation-hydrometeorological model as an integrating system, forced by both in situ meteorological measurements and remotely sensed radiation fluxes and precipitation retrieved from a combination of GOES, SSM/I, TOMS, and TRMM satellite measurements. We briefly discuss validation of (a) retrieved surface radiation fluxes and precipitation, based on 30-min averaged surface measurements taken at Ji-Parana in Rondonia and Manaus in Amazonas, and (b) modeled carbon fluxes, based on tower CO2 flux measurements taken at Reserva Jaru, Manaus, and Fazenda Nossa Senhora. The space-time controls on carbon sequestration are partitioned into sets of factors classified by: (1) above-canopy meteorology, (2) incoming surface radiation, (3) precipitation interception, and (4) indigenous stomatal processes, varied over the different land covers of pristine rainforest, partially and fully logged rainforests, and pasture lands.
These are the principal meteorological, thermodynamical, hydrological, and biophysical control paths which perturb net carbon fluxes and sequestration, produce time-space switching of carbon sources and sinks, undergo modulation through atmospheric boundary layer feedbacks, and respond to any discontinuous intervention on the landscape itself, such as human conversion of rainforest to pasture or selective/clearcut logging operations.
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problems but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Gingrich, Mark
Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth at the mesoscales remain largely unknown. Here, 100-member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalent, total liquid water, and 850 hPa temperatures representing mesoscale features, and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may be more important in limiting predictability than errors in the unresolved, small-scale initial conditions.
NASA Astrophysics Data System (ADS)
Duroure, Christophe; Sy, Abdoulaye; Baray, Jean luc; Van baelen, Joel; Diop, Bouya
2017-04-01
Precipitation plays a key role in the management of sustainable water resources and in flood risk analysis. Changes in rainfall will be a critical factor determining the overall impact of climate change. We analyse long series (10 years) of daily precipitation in different regions. We present the Fourier energy density spectra and the morphological spectra (i.e. the probability distribution functions of duration and horizontal scale) of large precipitating systems. Satellite data from the Global Precipitation Climatology Project (GPCP) and long time series from local pluviometers in Senegal and France are used and compared in this work. For mid-latitude and Sahelian regions (north of 12°N), the morphological spectra are close to an exponentially decreasing distribution. This allows us to define two characteristic scales (duration and spatial extension) for the precipitating regions embedded in large mesoscale convective systems (MCS). For tropical and equatorial regions (south of 12°N), the morphological spectra are close to a Levy-stable distribution (power-law decrease), which does not allow a characteristic scale to be defined (scaling range). When the time and space characteristic scales are defined, a "statistical velocity" of precipitating MCS can be defined and compared to the observed zonal advection. Maps of the characteristic scales and the Levy-stable exponent over West Africa and southern Europe are presented. The 12° latitude transition between exponential and Levy-stable behaviour of precipitating MCS is compared with results from the ECMWF ERA-Interim reanalysis for the same period. This sharp morphological transition could be used to test different parameterizations of deep convection in forecast models.
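The exponential-versus-power-law distinction can be made concrete by fitting both tails to event durations by maximum likelihood and comparing log-likelihoods. The data below are synthetic stand-ins for the GPCP/gauge event-duration series, and the Pareto form is used as a simple proxy for the heavy-tailed (Levy-stable) case.

```python
import numpy as np

# Sketch: fit an exponential and a (Pareto) power-law tail to event
# durations and compare log-likelihoods. Synthetic exponential data
# stand in for a mid-latitude/Sahelian-type duration record.

def fit_exponential(x):
    """MLE rate and log-likelihood for x ~ Exp(lam)."""
    lam = 1.0 / np.mean(x)
    return lam, np.sum(np.log(lam) - lam * x)

def fit_power_law(x, xmin):
    """Continuous Pareto MLE exponent above xmin, with log-likelihood."""
    x = x[x >= xmin]
    alpha = 1.0 + x.size / np.sum(np.log(x / xmin))
    ll = np.sum(np.log((alpha - 1.0) / xmin) - alpha * np.log(x / xmin))
    return alpha, ll

rng = np.random.default_rng(1)
durations = rng.exponential(scale=6.0, size=5000)   # hours, synthetic
lam, ll_exp = fit_exponential(durations)
alpha, ll_pl = fit_power_law(durations, xmin=durations.min())
print(1.0 / lam)        # characteristic duration, ≈ 6 h here
print(ll_exp > ll_pl)   # exponential wins on exponentially distributed data
```

When the exponential fit wins, its rate parameter directly yields the characteristic scale; when the power law wins, no such scale exists, mirroring the 12°N transition described above.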
Si, Wenjie; Dong, Xunde; Yang, Feifei
2018-03-01
This paper is concerned with the problem of decentralized adaptive backstepping state-feedback control for uncertain high-order large-scale stochastic nonlinear time-delay systems. For the control design of high-order large-scale nonlinear systems, only one adaptive parameter is constructed to overcome over-parameterization, and neural networks are employed to cope with the difficulties raised by completely unknown system dynamics and stochastic disturbances. An appropriate Lyapunov-Krasovskii functional and the property of hyperbolic tangent functions are then used to deal, for the first time, with the unknown unmatched time-delay interactions of high-order large-scale systems. Finally, on the basis of Lyapunov stability theory, a decentralized adaptive neural controller is developed which decreases the number of learning parameters. The controller can be designed so as to ensure that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB) and that the tracking error converges to a small neighborhood of zero. A simulation example further shows the validity of the design method. Copyright © 2018 Elsevier Ltd. All rights reserved.
Imprint of non-linear effects on HI intensity mapping on large scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umeh, Obinna, E-mail: umeobinna@gmail.com
Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structure of the Universe up to the largest possible scales. This is achieved by using low-angular-resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature both in real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift-space distortion terms, modulates the power spectrum on large scales. The large-scale modulation may be understood as arising from an effective bias parameter and an effective shot noise.
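The stated conclusion can be illustrated schematically: on large scales the nonlinear corrections act like a renormalized bias plus an effective shot noise, P_HI(k) ≈ b_eff² P_m(k) + N_eff. The matter-spectrum shape and the coefficient values below are crude placeholders, not values from the paper.

```python
import numpy as np

# Toy illustration of effective-bias/effective-shot-noise modulation of
# a tracer power spectrum. All functional forms and numbers are schematic.

def p_matter(k, amp=2e4, k0=0.02, n=0.96):
    """Crude broken-power-law stand-in for the linear matter spectrum."""
    return amp * (k / k0) ** n / (1.0 + (k / k0) ** 2.5)

def p_hi(k, b1=0.85, b_eff=0.9, n_eff=120.0):
    linear = b1**2 * p_matter(k)              # tree-level biased spectrum
    renorm = b_eff**2 * p_matter(k) + n_eff   # with loop-level corrections
    return linear, renorm

k = np.logspace(-3, -1, 5)   # wavenumbers (schematic units)
lin, ren = p_hi(k)
print(np.all(ren > lin))     # corrections add power on these scales
```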
Anomalously strong observations of PKiKP/PcP amplitude ratios on a global scale
NASA Astrophysics Data System (ADS)
Waszek, Lauren; Deuss, Arwen
2015-07-01
The inner core boundary marks the phase transition between the solid inner core and the fluid outer core. As the site of inner core solidification, the boundary provides insight into the processes generating the seismic structures of the inner core. In particular, it may hold the key to understanding the previously observed hemispherical asymmetry in inner core seismic velocity, anisotropy, and attenuation. Here we use a large PKiKP-PcP amplitude ratio and travel time residual data set to investigate velocity and density contrast properties near the inner core boundary. Although hemispherical structure at the boundary has been proposed by previous inner core studies, we find no evidence for hemispheres in the amplitude ratios or travel time residuals. In addition, we find that the amplitude ratios are much larger than can be explained by variations in density contrast at the inner core boundary or core-mantle boundary. This indicates that PKiKP is primarily observed when it is anomalously large, due to focusing along its raypath. Using data in which PKiKP is not detected above the noise level, we calculate an upper estimate for the inner core boundary (ICB) density contrast of 1.2 kg m⁻³. The travel time residuals display large regional variations, which differ on long and short length scales. These regions may be explained by large-scale velocity variations in the F layer just above the inner core boundary, and/or small-scale topography of varying magnitude on the ICB, which also causes the large amplitudes. Such differences could arise from localized freezing and melting of the inner core.
The stability properties of cylindrical force-free fields - Effect of an external potential field
NASA Technical Reports Server (NTRS)
Chiuderi, C.; Einaudi, G.; Ma, S. S.; Van Hoven, G.
1980-01-01
A large-scale potential field with an embedded smaller-scale force-free structure, ∇ × B = αB, is studied in cylindrical geometry. Cases in which α goes continuously from a constant value α₀ on the axis to zero at large r are considered. Such a choice of α(r) produces fields which are realistic (few field reversals) but not completely stable. The MHD-unstable wavenumber regime is found. Since the considered equilibrium field exhibits a certain amount of magnetic shear, resistive instabilities can arise. The growth rates of the tearing mode in the limited MHD-stable region of k space are calculated, showing time scales much shorter than the resistive decay time.
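In the constant-α limit of such a field, the classic cylindrical (Lundquist) solution B_z = B₀J₀(αr), B_θ = B₀J₁(αr) satisfies ∇ × B = αB exactly; the paper's variable-α(r) equilibrium generalizes this. A quick numerical check of the θ-component of the curl, as a sketch:

```python
import numpy as np
from scipy.special import j0, j1

# Verify the constant-alpha force-free relation curl B = alpha * B for the
# Lundquist field in cylindrical coordinates. The theta-component of the
# curl for B = (0, B_theta(r), B_z(r)) is simply -dB_z/dr.

alpha, B0 = 2.0, 1.0
r = np.linspace(0.05, 3.0, 2000)
Bz = B0 * j0(alpha * r)
Btheta = B0 * j1(alpha * r)

curl_theta = -np.gradient(Bz, r)   # -dBz/dr, second-order finite difference
err = np.max(np.abs(curl_theta[5:-5] - alpha * Btheta[5:-5]))
print(err)   # small: matches alpha * B_theta to grid accuracy
```

The identity J₀'(x) = -J₁(x) is what makes the θ-component work; the z-component follows from (1/x) d(xJ₁)/dx = J₀.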
Time drawings: Spatial representation of temporal concepts.
Leone, María Juliana; Salles, Alejo; Pulver, Alejandro; Golombek, Diego Andrés; Sigman, Mariano
2018-03-01
Time representation is a fundamental property of human cognition. Ample evidence shows that time (and numbers) are represented in space. However, how this conceptual mapping varies across individuals, scales, and temporal structures remains largely unknown. To investigate this issue, we conducted a large online study consisting of five experiments that addressed different time scales and topologies: Zones of time, Seasons, Days of the week, Parts of the day, and Timeline. Participants were asked to map different kinds of time events to a location in space and to determine their size and color. Results showed that time is organized in space in a hierarchical progression: some features appear to be universal (i.e. selection order), others are shaped by how time is organized in distinct cultures (i.e. location order), and some aspects vary depending on individual features such as age, gender, and chronotype (i.e. size and color). Copyright © 2018 Elsevier Inc. All rights reserved.
Modelling and mitigating refractive propagation effects in precision pulsar timing observations
NASA Astrophysics Data System (ADS)
Shannon, R. M.; Cordes, J. M.
2017-01-01
To obtain the most accurate pulse arrival times from radio pulsars, it is necessary to correct or mitigate the effects of the propagation of radio waves through the warm and ionized interstellar medium. We examine both the strength of propagation effects associated with large-scale electron-density variations and the methodology used to estimate infinite frequency arrival times. Using simulations of two-dimensional phase-varying screens, we assess the strength and non-stationarity of timing perturbations associated with large-scale density variations. We identify additional contributions to arrival times that are stochastic in both radio frequency and time and therefore not amenable to correction solely using times of arrival. We attribute this to the frequency dependence of the trajectories of the propagating radio waves. We find that this limits the efficacy of low-frequency (metre-wavelength) observations. Incorporating low-frequency pulsar observations into precision timing campaigns is increasingly problematic for pulsars with larger dispersion measures.
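For context, the lowest-order propagation effect that timing pipelines already correct is the deterministic cold-plasma dispersive delay, t = K·DM/f², with K ≈ 4.149×10³ s when DM is in pc cm⁻³ and f in MHz. The stochastic refractive corrections the abstract targets sit on top of this term and scale differently with frequency; the sketch below only shows the deterministic part.

```python
# Cold-plasma dispersive delay relative to infinite frequency:
# t = K * DM / f^2, K ≈ 4.149e3 s MHz^2 pc^-1 cm^3. This is the
# deterministic term; the refractive perturbations discussed in the
# abstract are stochastic corrections beyond it.

K_DM = 4.149e3  # s MHz^2 pc^-1 cm^3, standard dispersion constant

def dispersive_delay_s(dm_pc_cm3: float, freq_mhz: float) -> float:
    """Arrival-time delay relative to infinite frequency."""
    return K_DM * dm_pc_cm3 / freq_mhz**2

dm = 30.0  # pc cm^-3, a typical millisecond-pulsar value
for f in (150.0, 400.0, 1400.0):   # metre-wave, UHF, L-band (MHz)
    print(f"{f:6.0f} MHz: {dispersive_delay_s(dm, f):8.4f} s")
```

The steep f⁻² scaling is why low-frequency (metre-wavelength) data are so sensitive to the interstellar medium, and hence why the residual refractive terms matter most there.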
Rosenberg, D; Marino, R; Herbert, C; Pouquet, A
2016-01-01
We study rotating stratified turbulence (RST) making use of numerical data stemming from a large parametric study varying the Reynolds, Froude and Rossby numbers, Re, Fr and Ro, over a broad range of values. The computations are performed using periodic boundary conditions on grids of 1024³ points, with no modeling of the small scales, no forcing, and with large-scale random initial conditions for the velocity field only; altogether 65 runs are analyzed in this paper. The buoyancy Reynolds number, defined as R_B = Re Fr², varies from negligible values to ≈ 10⁵, approaching atmospheric or oceanic regimes. This preliminary analysis deals with the variation of characteristic time scales of RST with dimensionless parameters, focusing on the role played by the partition of energy between the kinetic and potential modes as a key ingredient for modeling the dynamics of such flows. We find that neither rotation nor the ratio of the Brunt-Väisälä frequency to the inertial frequency seems to play a major role, in the absence of forcing, in the global dynamics of the small-scale kinetic and potential modes. Specifically, in these computations, mostly in regimes of wave turbulence, characteristic times based on the ratio of energy to dissipation of the velocity and temperature fluctuations, T_V and T_P, vary substantially with parameters. Their ratio γ = T_V/T_P follows roughly a bell-shaped curve in terms of the Richardson number Ri. It reaches a plateau, on which the time scales become comparable (γ ≈ 0.6), when the turbulence has significantly strengthened, leading to numerous destabilization events together with a tendency towards an isotropization of the flow.
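The dimensionless bookkeeping used in the study reduces to a one-line formula; a small helper makes the regime classification explicit. The threshold values below are illustrative labels, not boundaries defined by the paper.

```python
# Buoyancy Reynolds number R_B = Re * Fr^2 and rough regime labels.
# The paper's runs span R_B from negligible values to ~1e5; the cutoffs
# used here are illustrative only.

def buoyancy_reynolds(re: float, fr: float) -> float:
    return re * fr**2

def regime(rb: float) -> str:
    if rb < 1.0:
        return "viscosity-affected stratified flow"
    if rb < 1e2:
        return "intermediate / wave turbulence"
    return "strongly turbulent (geophysical-like)"

print(regime(buoyancy_reynolds(re=4000, fr=0.1)))   # R_B = 40
```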
Postinflationary Higgs relaxation and the origin of matter-antimatter asymmetry.
Kusenko, Alexander; Pearce, Lauren; Yang, Louis
2015-02-13
The recent measurement of the Higgs boson mass implies a relatively slow rise of the standard model Higgs potential at large scales, and a possible second minimum at even larger scales. Consequently, the Higgs field may develop a large vacuum expectation value during inflation. The relaxation of the Higgs field from its large postinflationary value to the minimum of the effective potential represents an important stage in the evolution of the Universe. During this epoch, the time-dependent Higgs condensate can create an effective chemical potential for the lepton number, leading to a generation of the lepton asymmetry in the presence of some large right-handed Majorana neutrino masses. The electroweak sphalerons redistribute this asymmetry between leptons and baryons. This Higgs relaxation leptogenesis can explain the observed matter-antimatter asymmetry of the Universe even if the standard model is valid up to the scale of inflation, and any new physics is suppressed by that high scale.
Derivation of large-scale cellular regulatory networks from biological time series data.
de Bivort, Benjamin L
2010-01-01
Pharmacological agents and other perturbants of cellular homeostasis appear to nearly universally affect the activity of many genes, proteins, and signaling pathways. While this is due in part to nonspecificity of action of the drug or cellular stress, the large-scale self-regulatory behavior of the cell may also be responsible, as this typically means that when a cell switches states, dozens or hundreds of genes will respond in concert. If many genes act collectively in the cell during state transitions, rather than every gene acting independently, models of the cell can be created that are comprehensive of the action of all genes, using existing data, provided that the functional units in the model are collections of genes. Techniques to develop these large-scale cellular-level models are provided in detail, along with methods of analyzing them, and a brief summary of major conclusions about large-scale cellular networks to date.
MHD Modeling of the Solar Wind with Turbulence Transport and Heating
NASA Technical Reports Server (NTRS)
Goldstein, M. L.; Usmanov, A. V.; Matthaeus, W. H.; Breech, B.
2009-01-01
We have developed a magnetohydrodynamic model that describes the global axisymmetric steady-state structure of the solar wind near solar minimum, accounting for the transport of small-scale turbulence and the associated heating. The Reynolds-averaged mass, momentum, induction, and energy equations for the large-scale solar wind flow are solved simultaneously with the turbulence transport equations in the region from 0.3 to 100 AU. The large-scale equations include subgrid-scale terms due to turbulence, and the turbulence (small-scale) equations describe the effects of transport and (phenomenologically) dissipation of the MHD turbulence based on a few statistical parameters (turbulence energy, normalized cross-helicity, and correlation scale). The coupled set of equations is integrated numerically for a source dipole field on the Sun by a time-relaxation method in the corotating frame of reference. We present results on the plasma, magnetic field, and turbulence distributions throughout the heliosphere and on the role of the turbulence in the large-scale structure and temperature distribution in the solar wind.
Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger
2017-01-01
Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.
A. Townsend Peterson; Daniel A. Kluza
2005-01-01
Large-scale assessments of the distribution and diversity of birds have been challenged by the need for a robust methodology for summarizing or predicting species' geographic distributions (e.g. Beard et al. 1999, Manel et al. 1999, Saveraid et al. 2001). Methodologies used in such studies have at times been inappropriate or, even more frequently, limited in their…
ERIC Educational Resources Information Center
Frey, Andreas; Hartig, Johannes; Rupp, Andre A.
2009-01-01
In most large-scale assessments of student achievement, several broad content domains are tested. Because more items are needed to cover the content domains than can be presented in the limited testing time to each individual student, multiple test forms or booklets are utilized to distribute the items to the students. The construction of an…
Enhancing ecosystem restoration efficiency through spatial and temporal coordination.
Neeson, Thomas M; Ferris, Michael C; Diebel, Matthew W; Doran, Patrick J; O'Hanley, Jesse R; McIntyre, Peter B
2015-05-12
In many large ecosystems, conservation projects are selected by a diverse set of actors operating independently at spatial scales ranging from local to international. Although small-scale decision making can leverage local expert knowledge, it also may be an inefficient means of achieving large-scale objectives if piecemeal efforts are poorly coordinated. Here, we assess the value of coordinating efforts in both space and time to maximize the restoration of aquatic ecosystem connectivity. Habitat fragmentation is a leading driver of declining biodiversity and ecosystem services in rivers worldwide, and we simultaneously evaluate optimal barrier removal strategies for 661 tributary rivers of the Laurentian Great Lakes, which are fragmented by at least 6,692 dams and 232,068 road crossings. We find that coordinating barrier removals across the entire basin is nine times more efficient at reconnecting fish to headwater breeding grounds than optimizing independently for each watershed. Similarly, a one-time pulse of restoration investment is up to 10 times more efficient than annual allocations totaling the same amount. Despite widespread emphasis on dams as key barriers in river networks, improving road culvert passability is also essential for efficiently restoring connectivity to the Great Lakes. Our results highlight the dramatic economic and ecological advantages of coordinating efforts in both space and time during restoration of large ecosystems.
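The basin-wide coordination result can be illustrated with a toy greedy allocation. This is a sketch only: the study solves a full optimization over 661 tributaries, while the numbers and names here are invented:

```python
# Toy comparison: pooled (basin-wide) vs split (per-watershed) budgets.
# Projects are (habitat_gain, cost) pairs; all numbers are invented.

def greedy(projects, budget):
    """Pick projects by best gain-per-cost until the budget runs out."""
    gain, spent = 0.0, 0.0
    for g, c in sorted(projects, key=lambda p: p[1] / p[0]):
        if spent + c <= budget:
            gain += g
            spent += c
    return gain

watershed_a = [(10.0, 1.0), (1.0, 1.0)]   # one very valuable barrier removal
watershed_b = [(0.5, 1.0), (0.4, 1.0)]    # only marginal projects

pooled = greedy(watershed_a + watershed_b, budget=2.0)       # coordinate basin-wide
split = greedy(watershed_a, 1.0) + greedy(watershed_b, 1.0)  # independent budgets
print(pooled, split)   # 11.0 10.5 -> coordination wins
```

A fixed per-watershed budget forces spending on marginal projects, which is the inefficiency the study quantifies at basin scale.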
Using Relational Reasoning to Learn about Scientific Phenomena at Unfamiliar Scales
ERIC Educational Resources Information Center
Resnick, Ilyse; Davatzes, Alexandra; Newcombe, Nora S.; Shipley, Thomas F.
2016-01-01
Many scientific theories and discoveries involve reasoning about extreme scales, removed from human experience, such as time in geology, size in nanoscience. Thus, understanding scale is central to science, technology, engineering, and mathematics. Unfortunately, novices have trouble understanding and comparing sizes of unfamiliar large and small…
Using Relational Reasoning to Learn about Scientific Phenomena at Unfamiliar Scales
ERIC Educational Resources Information Center
Resnick, Ilyse; Davatzes, Alexandra; Newcombe, Nora S.; Shipley, Thomas F.
2017-01-01
Many scientific theories and discoveries involve reasoning about extreme scales, removed from human experience, such as time in geology and size in nanoscience. Thus, understanding scale is central to science, technology, engineering, and mathematics. Unfortunately, novices have trouble understanding and comparing sizes of unfamiliar large and…
Quality of life in small-scaled homelike nursing homes: an 8-month controlled trial.
Kok, Jeroen S; Nielen, Marjan M A; Scherder, Erik J A
2018-02-27
Quality of life is a highly clinically relevant outcome for residents with dementia. The question arises whether small-scale homelike facilities are associated with better quality of life than regular larger-scale nursing homes. A sample of 145 residents living in a large-scale care facility was followed over 8 months. Half of the sample (N = 77) subsequently moved to a small-scale facility. Quality-of-life aspects were measured with the QUALIDEM and GIP before and after relocation. We found a significant Group × Time interaction on measures of anxiety, indicating that residents who moved to small-scale units became less anxious than residents who stayed on the regular large-scale care units. No significant differences were found on other aspects of quality of life. This study demonstrates that residents who move from a large-scale facility to a small-scale environment can improve an aspect of quality of life, showing a reduction in anxiety. Current Controlled Trials ISRCTN11151241. Registration date: 21-06-2017. Retrospectively registered.
Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana
2016-01-01
With the increasing power of computers, the amount of data that can be processed in short periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results in classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes a simple-to-implement approach based on evolutionary algorithms and the Kernel-Adatron algorithm for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
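As a rough illustration of the Kernel-Adatron component the paper builds on (the evolutionary-algorithm part is omitted here), a minimal sketch on a two-point toy problem; all names and data are illustrative:

```python
import math

# Hedged sketch of the classic Kernel-Adatron update rule:
#   alpha_i <- max(0, alpha_i + eta * (1 - y_i * sum_j alpha_j y_j K(x_i, x_j)))
# on invented two-point data with an RBF kernel.

def rbf(x, z, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def kernel_adatron(X, y, eta=0.1, epochs=200):
    n = len(X)
    alpha = [0.0] * n
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * sum(alpha[j] * y[j] * rbf(X[i], X[j]) for j in range(n))
            alpha[i] = max(0.0, alpha[i] + eta * (1.0 - margin))  # keep alpha >= 0
    return alpha

def predict(X, y, alpha, x):
    s = sum(alpha[j] * y[j] * rbf(x, X[j]) for j in range(len(X)))
    return 1 if s >= 0 else -1

X = [(0.0, 0.0), (2.0, 2.0)]
y = [-1, 1]
alpha = kernel_adatron(X, y)
print(predict(X, y, alpha, (1.9, 1.9)))  # 1 (near the positive example)
```

The paper's contribution is replacing gradient-style updates like this with evolutionary search; this sketch only shows the baseline update being replaced.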
Large-scale neuromorphic computing systems
NASA Astrophysics Data System (ADS)
Furber, Steve
2016-10-01
Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.
NASA Astrophysics Data System (ADS)
Jenkins, David R.; Basden, Alastair; Myers, Richard M.
2018-05-01
We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter, and so the next-generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low-power x86 CPU cores and high-bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0 kHz with less than 20 μs RMS jitter. We have also shown that with a wavefront sensor camera attached, the KNL can process the real-time control loop at up to 966 Hz, the maximum frame rate of the camera, with jitter remaining below 20 μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real-time control.
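The quoted fourth-power scaling of RTC compute with telescope diameter can be made concrete with a one-liner; the 8 m reference diameter is an assumption for illustration, not from the abstract:

```python
# Assumes the ~D^4 scaling quoted above; the 8 m baseline is illustrative.

def relative_rtc_load(d_metres, d_ref=8.0):
    """Compute demand relative to a d_ref-class telescope, assuming ~D^4."""
    return (d_metres / d_ref) ** 4

print(relative_rtc_load(39.0))   # ~565x an 8 m class system for a 39 m ELT
```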
Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
The predictability of consumer visitation patterns
NASA Astrophysics Data System (ADS)
Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban
2013-04-01
We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population.
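A minimal version of the kind of first-order Markov predictor the abstract evaluates might look like this; the visit data and names are invented:

```python
from collections import Counter, defaultdict

# First-order Markov predictor for next merchant location.
# The visit sequence is invented; the paper uses real transaction data.

def fit_transitions(visits):
    """Count observed transitions between consecutive locations."""
    counts = defaultdict(Counter)
    for here, nxt in zip(visits, visits[1:]):
        counts[here][nxt] += 1
    return counts

def predict_next(counts, current):
    """Most frequently observed successor of `current`, or None if unseen."""
    if current not in counts:
        return None
    return counts[current].most_common(1)[0][0]

visits = ["cafe", "grocery", "cafe", "grocery", "cafe", "gym"]
model = fit_transitions(visits)
print(predict_next(model, "cafe"))   # grocery (2 of 3 observed transitions)
```

Incorporating population-level transition probabilities, as the abstract describes, would amount to smoothing each individual's counts with aggregate counts.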
NASA Astrophysics Data System (ADS)
Moritz, R. E.
2005-12-01
The properties, distribution and temporal variation of sea-ice are reviewed for application to problems of ice-atmosphere chemical processes. Typical vertical structure of sea-ice is presented for different ice types, including young ice, first-year ice and multi-year ice, emphasizing factors relevant to surface chemistry and gas exchange. Time-averaged annual cycles of large-scale variables are presented, including ice concentration, ice extent, ice thickness and ice age. Spatial and temporal variability of these large-scale quantities is considered on time scales of 1-50 years, emphasizing recent and projected changes in the Arctic pack ice. The amount and time evolution of open water and thin ice are important factors that influence ocean-ice-atmosphere chemical processes. Observations and modeling of the sea-ice thickness distribution function are presented to characterize the range of variability in open water and thin ice.
A real-time interferometer technique for compressible flow research
NASA Technical Reports Server (NTRS)
Bachalo, W. D.; Houser, M. J.
1984-01-01
Strengths and shortcomings in the application of interferometric techniques to transonic flow fields are examined and an improved method is elaborated. Such applications have demonstrated the value of interferometry in obtaining data for compressible flow research. With holographic techniques, interferometry may be applied in large-scale facilities without the use of expensive optics or elaborate vibration isolation equipment. Results obtained using holographic interferometry and other methods demonstrate that reliable qualitative and quantitative data can be acquired. Nevertheless, the conventional method can be difficult to set up and apply, and it cannot produce real-time data. A new interferometry technique is investigated that promises to be easier to apply and can provide real-time information. This single-beam technique has the necessary insensitivity to vibration for large-scale wind tunnel operations. Capabilities of the method and preliminary tests on some laboratory-scale flow fields are described.
The Timing of Teacher Hires and Teacher Qualifications: Is There an Association?
ERIC Educational Resources Information Center
Engel, Mimi
2012-01-01
Background: Case studies suggest that late hiring timelines are common in large urban school districts and result in the loss of qualified teachers to surrounding suburbs. To date, however, there has been no large-scale quantitative investigation of the relationship between the timing of teacher hires and teacher qualifications. Purpose: This…
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2017-08-05
Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. A gridded large-scale forcing data set during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
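The underlying point, that a nonlinear response to domain-mean forcing differs from the mean of per-subcolumn responses, can be shown with a toy threshold model (all numbers invented, not SCAM5 physics):

```python
# Toy threshold response: precipitation forms only where ascent w exceeds 1.
# Units and values are invented; this only illustrates the averaging issue.

def precip(w):
    return max(0.0, w - 1.0)

subcolumn_w = [0.0, 0.5, 3.5]     # strong ascent confined to one subcolumn
mean_w = sum(subcolumn_w) / len(subcolumn_w)

per_subcolumn = sum(precip(w) for w in subcolumn_w) / len(subcolumn_w)
domain_mean = precip(mean_w)
print(per_subcolumn, domain_mean)  # ~0.83 vs ~0.33: mean forcing hides the front
```

Running each subcolumn and then averaging (the gridded-forcing approach) preserves the localized convective response that the domain mean washes out.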
Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.
2015-01-01
Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.
Coalescence computations for large samples drawn from populations of time-varying sizes
Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek
2017-01-01
We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for coalescent with large sample size. The obtained results are based on computational methodologies, which involve combining coalescence time scale changes with techniques of integral transformations and using analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluation of accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for analysis of large human mitochondrial DNA dataset. PMID:28170404
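As a baseline for the time-varying-size results above, the standard constant-size coalescent gives closed forms for the expected times in the tree; a short sketch using textbook formulas, not the paper's method:

```python
# Constant-size coalescent baseline (times in units of 2N generations):
#   E[T_k] = 2 / (k * (k - 1)) while k ancestral lineages remain,
#   E[T_MRCA] = sum over k of E[T_k] = 2 * (1 - 1/n).

def expected_coalescence_times(n):
    """Expected duration of each stage, from n lineages down to 2."""
    return {k: 2.0 / (k * (k - 1)) for k in range(n, 1, -1)}

def expected_tmrca(n):
    """Expected time to the most recent common ancestor of an n-sample."""
    return sum(expected_coalescence_times(n).values())

print(expected_tmrca(10))   # ~1.8, approaching 2 for large samples
```

The paper's contribution is handling large n with time-varying population sizes, where these constant-size expressions no longer apply directly.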
Wave models for turbulent free shear flows
NASA Technical Reports Server (NTRS)
Liou, W. W.; Morris, P. J.
1991-01-01
New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large-scale structures in these flows using a quasi-linear theory. Three models were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large-scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time-dependent motion of the large-scale structure of the mixing region are made. The predictions show good agreement with experimental observations.
NASA Astrophysics Data System (ADS)
van der Molen, Johan
2015-04-01
Tidal power generation through submerged turbine-type devices is in an advanced stage of testing, and large-scale applications are being planned in areas with high tidal current speeds. The potential impact of such large-scale applications on the hydrography can be investigated using hydrodynamical models. In addition, aspects of the potential impact on the marine ecosystem can be studied using biogeochemical models. In this study, the coupled hydrodynamics-biogeochemistry model GETM-ERSEM is used in a shelf-wide application to investigate the potential impact of large-scale tidal power generation in the Pentland Firth. A scenario representing the currently licensed power extraction suggested i) an average reduction in M2 tidal current velocities of several cm/s within the Pentland Firth, ii) changes in the residual circulation of several mm/s in the vicinity of the Pentland Firth, iii) an increase in M2 tidal amplitude of up to 1 cm to the west of the Pentland Firth, and iv) a reduction of several mm in M2 tidal amplitude along the east coast of the UK. A second scenario representing 10 times the currently licensed power extraction resulted in changes that were approximately 10 times as large. Simulations including the biogeochemistry model for these scenarios are currently in preparation, and first results will be presented at the conference, focusing on impacts on primary production and benthic production.
The atmospheric implications of radiation belt remediation
NASA Astrophysics Data System (ADS)
Rodger, C. J.; Clilverd, M. A.; Ulich, Th.; Verronen, P. T.; Turunen, E.; Thomson, N. R.
2006-08-01
High altitude nuclear explosions (HANEs) and geomagnetic storms can produce large scale injections of relativistic particles into the inner radiation belts. It is recognised that these large increases in >1 MeV trapped electron fluxes can shorten the operational lifetime of low Earth orbiting satellites, threatening a large, valuable population. Therefore, studies are being undertaken to bring about practical human control of the radiation belts, termed "Radiation Belt Remediation" (RBR). Here we consider the upper atmospheric consequences of an RBR system operating over either 1 or 10 days. The RBR-forced neutral chemistry changes, leading to NOx enhancements and Ox depletions, are significant during the timescale of the precipitation but are generally not long-lasting. The magnitudes, time-scales, and altitudes of these changes are no more significant than those observed during large solar proton events. In contrast, RBR-operation will lead to unusually intense HF blackouts for about the first half of the operation time, producing large scale disruptions to radio communication and navigation systems. While the neutral atmosphere changes are not particularly important, HF disruptions could be an important area for policy makers to consider, particularly for the remediation of natural injections.
NASA Astrophysics Data System (ADS)
Pfister, Olivier
2017-05-01
When it comes to practical quantum computing, the two main challenges are circumventing decoherence (devastating quantum errors due to interactions with the environmental bath) and achieving scalability (as many qubits as needed for a real-life, game-changing computation). We show that using, in lieu of qubits, the "qumodes" represented by the resonant fields of the quantum optical frequency comb of an optical parametric oscillator allows one to create bona fide, large scale quantum computing processors, pre-entangled in a cluster state. We detail our recent demonstration of 60-qumode entanglement (out of an estimated 3000) and present an extension to combining this frequency-tagged with time-tagged entanglement, in order to generate an arbitrarily large, universal quantum computing processor.
NASA Astrophysics Data System (ADS)
Dednam, W.; Botha, A. E.
2015-01-01
Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell.
In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
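The two routes compared above can be sketched in a few lines. The following is a minimal, self-contained illustration on synthetic data (an ideal-gas-like system, for which both Kirkwood-Buff integrals should vanish); the function names and the single-species fluctuation formula G = V(⟨N²⟩ − ⟨N⟩² − ⟨N⟩)/⟨N⟩² are standard textbook forms, not taken from the paper's code:

```python
import numpy as np

def kbi_from_rdf(r, g):
    # Traditional route: running Kirkwood-Buff integral over the RDF,
    # G(R) = 4*pi * integral_0^R (g(r) - 1) r^2 dr
    return 4.0 * np.pi * np.trapz((g - 1.0) * r**2, r)

def kbi_from_fluctuations(counts, subvolume):
    # Fluctuation route: count particles in small open sub-volumes and use
    # the single-species form G = V*(<N^2> - <N>^2 - <N>) / <N>^2.
    n = np.asarray(counts, dtype=float)
    return subvolume * (n.var() - n.mean()) / n.mean()**2

# Ideal-gas check: g(r) = 1 and Poisson particle counts, so G ~ 0 both ways.
rng = np.random.default_rng(0)
r = np.linspace(0.01, 5.0, 500)
G_rdf = kbi_from_rdf(r, np.ones_like(r))
G_fluct = kbi_from_fluctuations(rng.poisson(100.0, size=20_000), subvolume=100.0)
```

For a real mixture the counts would come from open sub-volumes embedded in a larger reservoir, and the RDF route would need a tail-corrected g(r) from a large NVT or NpT run.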
Assessment of Disturbance at Three Spatial Scales in Two Large Tropical Reservoirs
Large reservoirs vary from lentic to lotic systems in time and space. Therefore, our objective was to assess disturbance gradients for two large tropical reservoirs and their influences on benthic macroinvertebrates. We tested three hypotheses: 1) a disturbance gradient of environ...
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
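The replicated-reconstruction-object idea can be illustrated schematically: each worker accumulates its share of the sinogram into a private copy of the reconstruction grid, and the copies are merged in a reduction step. A minimal numpy sketch (the accumulation kernel is a trivial placeholder, not Trace's actual backprojection operator):

```python
import numpy as np

def accumulate_partial(sinogram_rows, shape):
    # Each worker writes into its own replica of the reconstruction
    # object, so no locks are needed during accumulation. The update
    # below is a placeholder for a real backprojection kernel.
    replica = np.zeros(shape)
    for row in sinogram_rows:
        replica += row.sum()
    return replica

def reduce_replicas(replicas):
    # Reduction step: merge the private replicas into one image.
    return np.sum(replicas, axis=0)

sinogram = np.ones((8, 16))            # 8 projection rows x 16 detector bins
chunks = np.array_split(sinogram, 4)   # rows partitioned across 4 "workers"
image = reduce_replicas([accumulate_partial(c, (4, 4)) for c in chunks])
serial = accumulate_partial(sinogram, (4, 4))
```

Because the placeholder update is additive, the partitioned result matches the serial one exactly; the same replicate-then-reduce structure is what lets thread-level and process-level parallelism compose.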
Fine-scale characteristics of interplanetary sector boundaries
NASA Technical Reports Server (NTRS)
Behannon, K. W.; Neubauer, F. M.; Barnstoff, H.
1980-01-01
The structure of the interplanetary sector boundaries observed by Helios 1 within sector transition regions was studied. Such regions consist of intermediate (nonspiral) average field orientations in some cases, as well as a number of large-angle directional discontinuities (DD's) on the fine scale (time scales of about 1 hour). Such DD's are found to be more similar to tangential than rotational discontinuities, to be oriented on average more nearly perpendicular than parallel to the ecliptic plane, to be accompanied usually by a large dip (about 80%) in B and, with a most probable thickness of 3 × 10⁴ km, to be significantly thicker than those previously studied. It is hypothesized that the observed structures represent multiple traversals of the global heliospheric current sheet due to local fluctuations in the position of the sheet. There is evidence that such fluctuations are sometimes produced by wavelike motions or surface corrugations of scale length 0.05 - 0.1 AU superimposed on the large-scale structure.
Homogenization techniques for population dynamics in strongly heterogeneous landscapes.
Yurk, Brian P; Cobbold, Christina A
2018-12-01
An important problem in spatial ecology is to understand how population-scale patterns emerge from individual-level birth, death, and movement processes. These processes, which depend on local landscape characteristics, vary spatially and may exhibit sharp transitions through behavioural responses to habitat edges, leading to discontinuous population densities. Such systems can be modelled using reaction-diffusion equations with interface conditions that capture local behaviour at patch boundaries. In this work we develop a novel homogenization technique to approximate the large-scale dynamics of the system. We illustrate our approach, which also generalizes to multiple species, with an example of logistic growth within a periodic environment. We find that population persistence and the large-scale population carrying capacity are influenced by patch residence times that depend on patch preference, as well as movement rates in adjacent patches. The forms of the homogenized coefficients yield key theoretical insights into how large-scale dynamics arise from the small-scale features.
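As a point of reference for what homogenization produces, the classical 1-D result for a periodic medium with piecewise-constant diffusivity is the length-weighted harmonic mean. A minimal sketch of that textbook continuous-density case (the paper's interface conditions further weight the coefficients by patch preference, which this sketch omits):

```python
import numpy as np

def effective_diffusivity(lengths, diffusivities):
    # Classical 1-D homogenization of a periodic medium:
    # D_eff = L / sum_i(l_i / D_i), the length-weighted harmonic mean.
    lengths = np.asarray(lengths, dtype=float)
    D = np.asarray(diffusivities, dtype=float)
    return lengths.sum() / np.sum(lengths / D)

# Two equal-length patches per period: slow movement in one, fast in the
# other (values are purely illustrative).
D_eff = effective_diffusivity([1.0, 1.0], [0.5, 2.0])
```

The harmonic mean is dominated by the slow patch (here D_eff = 0.8, below the arithmetic mean of 1.25), which is one way to see why residence times in preferred patches control the large-scale dynamics.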
Advances in Parallelization for Large Scale Oct-Tree Mesh Generation
NASA Technical Reports Server (NTRS)
O'Connell, Matthew; Karman, Steve L.
2015-01-01
Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
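A minimal sketch of "top down" oct-tree refinement: a cubic cell splits into eight children whenever it intersects the body, down to a size floor. All geometry and parameters below are illustrative; the paper's method adds the parallel decomposition and the "bottom up" phase on top of this basic recursion:

```python
class OctCell:
    """A cubic cell that splits into eight children when refined."""
    def __init__(self, origin, size):
        self.origin, self.size, self.children = origin, size, []

    def refine(self, intersects_body, min_size):
        # Top-down refinement: split only cells that touch the body,
        # stopping once the cell size reaches the floor.
        if self.size <= min_size or not intersects_body(self.origin, self.size):
            return
        half = self.size / 2.0
        x0, y0, z0 = self.origin
        for dx in (0.0, half):
            for dy in (0.0, half):
                for dz in (0.0, half):
                    child = OctCell((x0 + dx, y0 + dy, z0 + dz), half)
                    child.refine(intersects_body, min_size)
                    self.children.append(child)

def count_leaves(cell):
    return 1 if not cell.children else sum(count_leaves(c) for c in cell.children)

def hits_sphere(origin, size):
    # Cell-sphere overlap test via the closest point to the sphere center.
    center, radius = (0.5, 0.5, 0.5), 0.3
    d2 = sum((min(max(center[i], origin[i]), origin[i] + size) - center[i])**2
             for i in range(3))
    return d2 <= radius**2

# Refine a unit cube around a sphere of radius 0.3 at its center.
root = OctCell((0.0, 0.0, 0.0), 1.0)
root.refine(hits_sphere, min_size=0.25)
```

Because every octant of the unit cube touches the central sphere, two levels of refinement yield 64 leaf cells here; real off-body meshes refine only near the geometry, which is what keeps billion-cell grids tractable.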
The statistical overlap theory of chromatography using power law (fractal) statistics.
Schure, Mark R; Davis, Joe M
2011-12-30
The chromatographic dimensionality was recently proposed as a measure of retention time spacing based on a power law (fractal) distribution. Using this model, a statistical overlap theory (SOT) for chromatographic peaks is developed that estimates the number of peak maxima as a function of the chromatographic dimension, saturation and scale. Power law models exhibit a threshold region whereby below a critical saturation value no loss of peak maxima due to peak fusion occurs as saturation increases. At moderate saturation, behavior is similar to the random (Poisson) peak model. At still higher saturation, the power law model shows loss of peaks nearly independent of the scale and dimension of the model. The physicochemical meaning of the power law scale parameter is discussed and shown to be equal to the Boltzmann-weighted free energy of transfer over the scale limits. The role of the scale range is also discussed: a small scale range (small β) is shown to generate more uniform chromatograms, whereas a large scale range (large β) gives occasional large excursions of retention times; this is a property of power laws, where "wild" behavior occasionally occurs. Both cases are shown to be useful depending on the chromatographic saturation. A scale-invariant model of the SOT shows very simple relationships between the fraction of peak maxima and the saturation, peak width and number of theoretical plates. These equations provide much insight into separations which follow power law statistics. Copyright © 2011 Elsevier B.V. All rights reserved.
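The loss of peak maxima with saturation is easy to reproduce in a toy Monte-Carlo version of SOT: drop component peaks at random (the Poisson case above), fuse any neighbors closer than one peak width, and count what survives. A hedged sketch, with all parameters illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def count_maxima(times, width):
    # Successive components closer than `width` fuse into one observed
    # maximum, so a chain of overlaps contributes a single count.
    gaps = np.diff(np.sort(times))
    return 1 + int(np.sum(gaps > width))

def observed_fraction(m, span, width, trials=200):
    # Fraction of the m underlying components seen as distinct maxima.
    return float(np.mean([count_maxima(rng.uniform(0.0, span, m), width) / m
                          for _ in range(trials)]))

# Poisson (random retention time) model at low vs. high saturation.
low = observed_fraction(m=20, span=100.0, width=1.0)   # saturation 0.2
high = observed_fraction(m=80, span=100.0, width=1.0)  # saturation 0.8
```

A power-law (fractal) spacing model would replace the uniform draw here; the threshold behavior described above arises because small-β power laws space peaks more evenly than the Poisson model does.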
Asymptotic theory of time varying networks with burstiness and heterogeneous activation patterns
NASA Astrophysics Data System (ADS)
Burioni, Raffaella; Ubaldi, Enrico; Vezzani, Alessandro
2017-05-01
The recent availability of large-scale, time-resolved and high quality digital datasets has allowed for a deeper understanding of the structure and properties of many real-world networks. The empirical evidence of a temporal dimension prompted the switch of paradigm from a static representation of networks to a time varying one. In this work we briefly review the framework of time-varying networks in real-world social systems, especially focusing on the activity-driven paradigm. We develop a framework that allows for the encoding of three generative mechanisms that seem to play a central role in the social networks' evolution: the individual's propensity to engage in social interactions, its strategy in allocating these interactions among its alters, and the burstiness of interactions amongst social actors. The functional forms and probability distributions encoding these mechanisms are typically data driven. A natural question is whether different classes of strategies and burstiness distributions, with different local scale behavior but analogous asymptotics, can lead to the same long time and large scale structure of the evolving networks. We consider the problem in its full generality, by investigating and solving the system dynamics in the asymptotic limit, for general classes of ties allocation mechanisms and waiting time probability distributions. We show that the asymptotic network evolution is driven by a few characteristics of these functional forms, which can be extracted from direct measurements on large datasets.
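The baseline activity-driven mechanism that such frameworks build on can be sketched in a few lines: each node fires with probability given by its activity and wires to m random alters. The generalizations discussed above replace the uniform alter choice with a memory-driven allocation strategy and the per-step firing with bursty waiting times; all parameter values below are illustrative:

```python
import random

random.seed(42)

def activity_driven_network(n, steps, m=2):
    # Minimal activity-driven model: node i activates with probability
    # a_i per time step and creates m links to uniformly chosen alters.
    activities = [random.uniform(0.01, 0.1) for _ in range(n)]
    edges = set()
    for _ in range(steps):
        for i in range(n):
            if random.random() < activities[i]:
                others = [k for k in range(n) if k != i]
                for j in random.sample(others, m):
                    edges.add((min(i, j), max(i, j)))
    return edges

net = activity_driven_network(n=50, steps=200)
```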
Extreme reaction times determine fluctuation scaling in human color vision
NASA Astrophysics Data System (ADS)
Medina, José M.; Díaz, José A.
2016-11-01
In modern mental chronometry, human reaction time defines the time elapsed from stimulus presentation until a response occurs and represents a reference paradigm for investigating stochastic latency mechanisms in color vision. Here we examine the statistical properties of extreme reaction times and whether they support fluctuation scaling in the skewness-kurtosis plane. Reaction times were measured for visual stimuli across the cardinal directions of the color space. For all subjects, the results show that very large reaction times deviate from the right tail of reaction time distributions, suggesting the existence of dragon-king events. The results also indicate that extreme reaction times are correlated and shape fluctuation scaling over a wide range of stimulus conditions. The scaling exponent was higher for achromatic than isoluminant stimuli, suggesting distinct generative mechanisms. Our findings open a new perspective for studying failure modes in sensory-motor communications and in complex networks.
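The skewness-kurtosis plane mentioned above is easy to populate from sample moments. The sketch below uses right-skewed lognormal stand-ins for reaction-time distributions (the paper's data are measured RTs) and relies on the general constraint kurtosis ≥ skewness² + 1 that every distribution obeys:

```python
import numpy as np

rng = np.random.default_rng(1)

def skew_kurt(x):
    # Sample skewness g1 = m3/m2^1.5 and (non-excess) kurtosis
    # g2 = m4/m2^2 from central moments.
    c = np.asarray(x, dtype=float) - np.mean(x)
    m2, m3, m4 = (np.mean(c**k) for k in (2, 3, 4))
    return m3 / m2**1.5, m4 / m2**2

# Heavier upper tails (larger sigma) push points farther out along the
# skewness axis, tracing a curve in the skewness-kurtosis plane.
points = [skew_kurt(rng.lognormal(mean=6.0, sigma=s, size=50_000))
          for s in (0.2, 0.3, 0.4)]
```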
A successful trap design for capturing large terrestrial snakes
Shirley J. Burgdorf; D. Craig Rudolph; Richard N. Conner; Daniel Saenz; Richard R. Schaefer
2005-01-01
Large scale trapping protocols for snakes can be expensive and require large investments of personnel and time. Typical methods, such as pitfall and small funnel traps, are not useful or suitable for capturing large snakes. A method was needed to survey multiple blocks of habitat for the Louisiana Pine Snake (Pituophis ruthveni), throughout its...
NASA Technical Reports Server (NTRS)
Avissar, Roni; Chen, Fei
1993-01-01
Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), whose grid-scale resolution is too coarse to resolve them. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as Ẽ = 0.5⟨u′ᵢ²⟩ (summation over i implied), where u′ᵢ represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for Ẽ, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of Ẽ. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. 
This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
NASA Astrophysics Data System (ADS)
Lamb, Derek A.
2016-10-01
While sunspots follow a well-defined pattern of emergence in space and time, small-scale flux emergence is assumed to occur randomly at all times in the quiet Sun. HMI's full-disk coverage, high cadence, spatial resolution, and duty cycle allow us to probe that basic assumption. Some case studies of emergence suggest that temporal clustering on spatial scales of 50-150 Mm may occur. If clustering is present, it could serve as a diagnostic of large-scale subsurface magnetic field structures. We present the results of a manual survey of small-scale flux emergence events over a short time period, and a statistical analysis addressing the question of whether these events show spatio-temporal behavior that is anything other than random.
Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems
NASA Astrophysics Data System (ADS)
Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo
With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of calculation cells should be small and the calculation should be transient with small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that the simulation in practice is difficult. This study investigates the requirements of cell size and the time step size for accurate simulations, and the filtering effects caused by coarser mesh and longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.
Constructing Optimal Coarse-Grained Sites of Huge Biomolecules by Fluctuation Maximization.
Li, Min; Zhang, John Zenghui; Xia, Fei
2016-04-12
Coarse-grained (CG) models are valuable tools for the study of functions of large biomolecules on large length and time scales. The definition of CG representations for huge biomolecules is always a formidable challenge. In this work, we propose a new method called fluctuation maximization coarse-graining (FM-CG) to construct the CG sites of biomolecules. The defined residual in FM-CG converges to a maximal value as the number of CG sites increases, allowing an optimal CG model to be rigorously defined on the basis of the maximum. More importantly, we developed a robust algorithm called stepwise local iterative optimization (SLIO) to accelerate the process of coarse-graining large biomolecules. By means of the efficient SLIO algorithm, the computational cost of coarse-graining large biomolecules is reduced to within the time scale of seconds, which is far lower than that of conventional simulated annealing. The coarse-graining of two huge systems, chaperonin GroEL and lengsin, indicates that our new methods can coarse-grain huge biomolecular systems with up to 10,000 residues within the time scale of minutes. The further parametrization of CG sites derived from FM-CG allows us to construct the corresponding CG models for studies of the functions of huge biomolecular systems.
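The flavor of a stepwise site-selection procedure can be conveyed with a deliberately simplified stand-in: given per-residue fluctuation magnitudes (e.g., from an elastic network model), add sites one at a time so that the captured fluctuation is maximal at each step. This is only a loose illustration; the actual FM-CG residual and the SLIO local iteration are more involved:

```python
import numpy as np

rng = np.random.default_rng(3)

def stepwise_cg_sites(fluct, n_sites):
    # Greedy stand-in for stepwise optimization: at each step add the
    # residue whose fluctuation magnitude raises the captured total most
    # (for independent residues this is just the largest remaining one).
    chosen, remaining = [], set(range(len(fluct)))
    for _ in range(n_sites):
        best = max(remaining, key=lambda i: fluct[i])
        chosen.append(best)
        remaining.discard(best)
    return np.sort(np.array(chosen))

fluct = rng.random(100)               # synthetic per-residue fluctuations
sites = stepwise_cg_sites(fluct, 10)
captured = float(fluct[sites].sum())
```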
Linear static structural and vibration analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.
1993-01-01
Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively-parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e. models for High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
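The three computational stages listed above (matrix generation and assembly, linear solution, eigenanalysis) can be shown end to end on a toy structure: a clamped chain of unit springs, with numpy standing in for the parallel SHPC kernels:

```python
import numpy as np

def assemble_stiffness(n_elems, k=1.0):
    # Generation and assembly: sum each element's 2x2 stiffness into the
    # global matrix, then clamp the first node (toy boundary condition).
    K = np.zeros((n_elems + 1, n_elems + 1))
    ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += ke
    return K[1:, 1:]

K = assemble_stiffness(4)
f = np.zeros(4)
f[-1] = 1.0                      # unit load at the free tip
u = np.linalg.solve(K, f)        # static solution: u = [1, 2, 3, 4]
omega2 = np.linalg.eigvalsh(K)   # vibration eigenvalues (unit masses assumed)
```

In each spring the tension equals the tip load, so the displacements grow linearly along the chain; the positive eigenvalues confirm the clamped stiffness matrix is positive definite.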
Active Self-Testing Noise Measurement Sensors for Large-Scale Environmental Sensor Networks
Domínguez, Federico; Cuong, Nguyen The; Reinoso, Felipe; Touhafi, Abdellah; Steenhaut, Kris
2013-01-01
Large-scale noise pollution sensor networks consist of hundreds of spatially distributed microphones that measure environmental noise. These networks provide historical and real-time environmental data to citizens and decision makers and are therefore a key technology to steer environmental policy. However, the high cost of certified environmental microphone sensors render large-scale environmental networks prohibitively expensive. Several environmental network projects have started using off-the-shelf low-cost microphone sensors to reduce their costs, but these sensors have higher failure rates and produce lower quality data. To offset this disadvantage, we developed a low-cost noise sensor that actively checks its condition and indirectly the integrity of the data it produces. The main design concept is to embed a 13 mm speaker in the noise sensor casing and, by regularly scheduling a frequency sweep, estimate the evolution of the microphone's frequency response over time. This paper presents our noise sensor's hardware and software design together with the results of a test deployment in a large-scale environmental network in Belgium. Our middle-range-value sensor (around €50) effectively detected all experienced malfunctions, in laboratory tests and outdoor deployments, with a few false positives. Future improvements could further lower the cost of our sensor below €10. PMID:24351634
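The self-test concept (play a known sweep through the embedded speaker, estimate the microphone's frequency response, and flag drift from a stored baseline) can be sketched as follows. The signal values and the 3 dB threshold are illustrative, not the sensor's actual firmware logic:

```python
import numpy as np

def freq_response(excitation, recorded):
    # Magnitude response estimate |FFT(recorded)| / |FFT(excitation)|.
    E = np.abs(np.fft.rfft(excitation))
    R = np.abs(np.fft.rfft(recorded))
    return R / np.maximum(E, 1e-12)

def self_test(excitation, recorded, baseline, tol_db=3.0):
    # Flag the microphone if its response drifts more than tol_db from
    # the stored baseline at any sweep frequency.
    h = freq_response(excitation, recorded)
    drift_db = 20.0 * np.log10(np.maximum(h, 1e-12) / np.maximum(baseline, 1e-12))
    return bool(np.all(np.abs(drift_db) < tol_db))

fs, dur = 8000, 1.0
t = np.arange(int(fs * dur)) / fs
sweep = np.sin(2 * np.pi * (200.0 + 400.0 * t) * t)   # linear chirp, 200 -> 1000 Hz
baseline = freq_response(sweep, 0.9 * sweep)          # healthy unit: flat gain
ok = self_test(sweep, 0.9 * sweep, baseline)          # unchanged response passes
degraded = self_test(sweep, 0.05 * sweep, baseline)   # ~25 dB sensitivity loss fails
```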
Mandák, Bohumil; Hadincová, Věroslava; Mahelka, Václav; Wildová, Radka
2013-01-01
Background North American Pinus strobus is a highly invasive tree species in Central Europe. Using ten polymorphic microsatellite loci we compared various aspects of the large-scale genetic diversity of individuals from 30 sites in the native distribution range with those from 30 sites in the European adventive distribution range. To investigate the ascertained pattern of genetic diversity of this intercontinental comparison further, we surveyed fine-scale genetic diversity patterns and changes over time within four highly invasive populations in the adventive range. Results Our data show that at the large scale the genetic diversity found within the relatively small adventive range in Central Europe, surprisingly, equals the diversity found within the sampled area in the native range, which is about thirty times larger. Bayesian assignment grouped individuals into two genetic clusters separating North American native populations from the European, non-native populations, without any strong genetic structure shown over either range. In the case of the fine scale, our comparison of genetic diversity parameters among the localities and age classes yielded no evidence of genetic diversity increase over time. We found that spatial genetic structure (SGS) differed across age classes within the populations under study. Old trees in general completely lacked any SGS, which increased over time and reached its maximum in the sapling stage. Conclusions Based on (1) the absence of difference in genetic diversity between the native and adventive ranges, together with the lack of structure in the native range, and (2) the lack of any evidence of any temporal increase in genetic diversity at four highly invasive populations in the adventive range, we conclude that population amalgamation probably first happened in the native range, prior to introduction. 
In such a case, there would have been no need for multiple introductions from previously isolated populations, but only several introductions from genetically diverse populations. PMID:23874648
Trend Switching Processes in Financial Markets
NASA Astrophysics Data System (ADS)
Preis, Tobias; Stanley, H. Eugene
For an intriguing variety of switching processes in nature, the underlying complex system abruptly changes at a specific point from one state to another in a highly discontinuous fashion. Financial market fluctuations are characterized by many abrupt switchings creating increasing trends ("bubble formation") and decreasing trends ("bubble collapse"), on time scales ranging from macroscopic bubbles persisting for hundreds of days to microscopic bubbles persisting only for very short time scales. Our analysis is based on a German DAX Future database containing 13,991,275 transactions recorded with a time resolution of 10⁻² s. For a parallel analysis, we use a database of all S&P500 stocks providing 2,592,531 daily closing prices. We ask whether these ubiquitous switching processes have quantifiable features independent of the time horizon studied. We find striking scale-free behavior of the volatility after each switching occurs. We interpret our findings as being consistent with time-dependent collective behavior of financial market participants. We test the possible universality of our result by performing a parallel analysis of fluctuations in transaction volume and time intervals between trades. We show that these financial market switching processes have features similar to those present in phase transitions. We find that the well-known catastrophic bubbles that occur on large time scales - such as the most recent financial crisis - are no outliers but in fact single dramatic representatives caused by the formation of upward and downward trends on time scales varying over nine orders of magnitude from the very large down to the very small.
NASA Astrophysics Data System (ADS)
Palus, Milan; Jajcay, Nikola; Hlinka, Jaroslav; Kravtsov, Sergey; Tsonis, Anastasios
2016-04-01
Complexity of the climate system stems not only from the fact that it is variable over a huge range of spatial and temporal scales, but also from the nonlinear character of the climate system that leads to interactions of dynamics across scales. The dynamical processes on large time scales influence variability on shorter time scales. This nonlinear phenomenon of cross-scale causal interactions can be observed due to the recently introduced methodology [1] which starts with a wavelet decomposition of a multi-scale signal into quasi-oscillatory modes of a limited bandwidth, described using their instantaneous phases and amplitudes. Then their statistical associations are tested in order to search for interactions across time scales. An information-theoretic formulation of the generalized, nonlinear Granger causality [2] uncovers causal influence and information transfer from large-scale modes of climate variability with characteristic time scales from years to almost a decade to regional temperature variability on short time scales. In analyses of air temperature records from various European locations, a quasioscillatory phenomenon with the period around 7-8 years has been identified as the factor influencing variability of surface air temperature (SAT) on shorter time scales. Its influence on the amplitude of the SAT annual cycle was estimated in the range 0.7-1.4 °C and the effect on the overall variability of the SAT anomalies (SATA) leads to the changes 1.5-1.7 °C in the annual SATA means. The strongest effect of the 7-8 year cycle was observed in the winter SATA means where it reaches 4-5 °C in central European station and reanalysis data [3]. This study is supported by the Ministry of Education, Youth and Sports of the Czech Republic within the Program KONTAKT II, Project No. LH14001. [1] M. Palus, Phys. Rev. Lett. 112 078702 (2014) [2] M. Palus, M. Vejmelka, Phys. Rev. E 75, 056211 (2007) [3] N. Jajcay, J. Hlinka, S. Kravtsov, A. A. Tsonis, M. 
Palus, Time-scales of the European surface air temperature variability: The role of the 7-8 year cycle. Geophys. Res. Lett., in press, DOI: 10.1002/2015GL067325
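The first step of the methodology above (isolating a quasi-oscillatory mode and describing it by instantaneous phase and amplitude) can be sketched with an FFT band-pass and an analytic signal. The paper uses a wavelet decomposition, so this numpy-only version is only a stand-in, and the test signal is synthetic:

```python
import numpy as np

def mode_phase_amplitude(x, fs, f_lo, f_hi):
    # Keep only the positive frequencies in [f_lo, f_hi] and double them:
    # the inverse FFT is then the analytic signal of that band-limited
    # mode, whose angle/magnitude give instantaneous phase and amplitude.
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x), d=1.0 / fs)
    Xa = np.zeros_like(X)
    band = (f >= f_lo) & (f <= f_hi)
    Xa[band] = 2.0 * X[band]
    z = np.fft.ifft(Xa)
    return np.angle(z), np.abs(z)

fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
# A slow "large-scale" mode at 1 Hz plus a weaker "fast" mode at 7 Hz.
x = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 7.0 * t)
phase, amp = mode_phase_amplitude(x, fs, 5.0, 9.0)
```

With the phases and amplitudes of two such modes in hand, the cross-scale step tests their statistical association, e.g. with the conditional-mutual-information causality estimator of [2].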
Temporal evolution of continental lithospheric strength in actively deforming regions
Thatcher, W.; Pollitz, F.F.
2008-01-01
It has been agreed for nearly a century that a strong, load-bearing outer layer of Earth is required to support mountain ranges, transmit stresses to deform active regions and store elastic strain to generate earthquakes. However, the depth and extent of this strong layer remain controversial. Here we use a variety of observations to infer the distribution of lithospheric strength in the active western United States from seismic to steady-state time scales. We use evidence from post-seismic transient and earthquake cycle deformation, reservoir loading, glacio-isostatic adjustment, and lithosphere isostatic adjustment to large surface and subsurface loads. The nearly perfectly elastic behavior of Earth's crust and mantle at the time scale of seismic wave propagation evolves to that of a strong, elastic crust and weak, ductile upper mantle lithosphere at both earthquake cycle (EC, ~10⁰ to 10³ yr) and glacio-isostatic adjustment (GIA, ~10³ to 10⁴ yr) time scales. Topography and gravity field correlations indicate that lithosphere isostatic adjustment (LIA) on ~10⁶-10⁷ yr time scales occurs with most lithospheric stress supported by an upper crust overlying a much weaker ductile substrate. These comparisons suggest that the upper mantle lithosphere is weaker than the crust at all time scales longer than seismic. In contrast, the lower crust has a chameleon-like behavior, strong at EC and GIA time scales and weak for LIA and steady-state deformation processes. The lower crust might even take on a third identity in regions of rapid crustal extension or continental collision, where anomalously high temperatures may lead to large-scale ductile flow in a lower crustal layer that is locally weaker than the upper mantle. 
Modeling of lithospheric processes in active regions thus cannot use a one-size-fits-all prescription of rheological layering (relation between applied stress and deformation as a function of depth) but must be tailored to the time scale and tectonic setting of the process being investigated.
Scale-dependent temporal variations in stream water geochemistry.
Nagorski, Sonia A; Moore, Iohnnie N; McKinnon, Temple E; Smith, David B
2003-03-01
A year-long study of four western Montana streams (two impacted by mining and two "pristine") evaluated surface water geochemical dynamics on various time scales (monthly, daily, and bi-hourly). Monthly changes were dominated by snowmelt and precipitation dynamics. On the daily scale, post-rain surges in some solute and particulate concentrations were similar to those of early spring runoff flushing characteristics on the monthly scale. On the bi-hourly scale, we observed diel (diurnal-nocturnal) cycling for pH, dissolved oxygen, water temperature, dissolved inorganic carbon, total suspended sediment, and some total recoverable metals at some or all sites. A comparison of the cumulative geochemical variability within each of the temporal groups reveals that for many water quality parameters there were large overlaps of concentration ranges among groups. We found that short-term (daily and bi-hourly) variations of some geochemical parameters covered large proportions of the variations found on a much longer term (monthly) time scale. These results show the importance of nesting short-term studies within long-term geochemical study designs to separate signals of environmental change from natural variability.
NASA Astrophysics Data System (ADS)
De Michelis, Paola; Federica Marcucci, Maria; Consolini, Giuseppe
2015-04-01
Recently we have investigated the spatial distribution of the scaling features of short-time scale magnetic field fluctuations using measurements from several ground-based geomagnetic observatories distributed in the northern hemisphere. We have found that the scaling features of fluctuations of the horizontal magnetic field component at time scales below 100 minutes are correlated with the geomagnetic activity level and with changes in the currents flowing in the ionosphere. Here, we present a detailed analysis of the dynamical changes of the magnetic field scaling features as a function of the geomagnetic activity level during the well-known large geomagnetic storm that occurred on July 15, 2000 (the Bastille event). The observed dynamical changes are discussed in relationship with the changes of the overall ionospheric polar convection and potential structure as reconstructed using SuperDARN data. This work is supported by the Italian National Program for Antarctic Research (PNRA) - Research Project 2013/AC3.08 and by the European Community's Seventh Framework Programme ([FP7/2007-2013]) under Grant no. 313038/STORM and
Universal scaling and nonlinearity of aggregate price impact in financial markets.
Patzelt, Felix; Bouchaud, Jean-Philippe
2018-01-01
How and why stock prices move is a centuries-old question still not answered conclusively. More recently, attention shifted to higher frequencies, where trades are processed piecewise across different time scales. Here we reveal that price impact has a universal nonlinear shape for trades aggregated on any intraday scale. Its shape varies little across instruments, but drastically different master curves are obtained for order-volume and -sign impact. The scaling is largely determined by the relevant Hurst exponents. We further show that extreme order-flow imbalance is not associated with large returns. To the contrary, it is observed when the price is pinned to a particular level. Prices move only when there is sufficient balance in the local order flow. In fact, the probability that a trade changes the midprice falls to zero with increasing (absolute) order-sign bias along an arc-shaped curve for all intraday scales. Our findings challenge the widespread assumption of linear aggregate impact. They imply that market dynamics on all intraday time scales are shaped by correlations and bilateral adaptation in the flows of liquidity provision and taking.
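The basic object behind the master curves (the average return conditioned on the aggregate order-sign imbalance of an intraday window) is straightforward to compute. A sketch on synthetic windows with a built-in concave square-root impact; nothing here reproduces the paper's data:

```python
import numpy as np

rng = np.random.default_rng(7)

def aggregate_impact(imbalance, returns, n_bins=10):
    # Average return conditioned on order-flow imbalance, binned by
    # imbalance quantiles so every bin holds the same number of windows.
    edges = np.quantile(imbalance, np.linspace(0.0, 1.0, n_bins + 1))
    mids, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (imbalance >= lo) & (imbalance <= hi)
        mids.append(imbalance[sel].mean())
        means.append(returns[sel].mean())
    return np.array(mids), np.array(means)

# Synthetic intraday windows: concave (square-root) impact plus noise.
imbalance = rng.uniform(-1.0, 1.0, size=100_000)
ret = np.sign(imbalance) * np.sqrt(np.abs(imbalance)) \
      + 0.1 * rng.normal(size=imbalance.size)
mids, curve = aggregate_impact(imbalance, ret)
```

The recovered curve is monotone but flattens at large |imbalance|, the kind of nonlinear aggregate shape the paper reports across intraday scales.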
NASA Technical Reports Server (NTRS)
Le, G.; Wang, Y.; Slavin, J. A.; Strangeway, R. L.
2009-01-01
Space Technology 5 (ST5) is a constellation mission consisting of three microsatellites. It provides the first multipoint magnetic field measurements in low Earth orbit, which enable us to separate spatial and temporal variations. In this paper, we present a study of the temporal variability of field-aligned currents using the ST5 data. We examine the field-aligned current observations during and after a geomagnetic storm and compare the magnetic field profiles at the three spacecraft. The multipoint data demonstrate that mesoscale current structures, commonly embedded within large-scale current sheets, are very dynamic, with highly variable current density and/or polarity on approx. 10 min time scales. On the other hand, the data also show that the time scales over which the currents remain relatively stable are approx. 1 min for mesoscale currents and approx. 10 min for large-scale currents. These temporal features are very likely associated with dynamic variations of their charge carriers (mainly electrons) as they respond to variations of the parallel electric field in the auroral acceleration region. The characteristic time scales for the temporal variability of mesoscale field-aligned currents are found to be consistent with those of the auroral parallel electric field.
The scale-dependent market trend: Empirical evidences using the lagged DFA method
NASA Astrophysics Data System (ADS)
Li, Daye; Kou, Zhun; Sun, Qiankun
2015-09-01
In this paper we conduct an empirical study and test the efficiency of 44 important market indexes across multiple time scales. A modified method based on lagged detrended fluctuation analysis is utilized to maximize the information on long-term correlations from the non-zero lags while keeping the margin of error small when measuring the local Hurst exponent. Our empirical results illustrate that a common pattern can be found in the majority of the measured market indexes, which tend to be persistent (local Hurst exponent > 0.5) at small time scales, whereas they display significant anti-persistent characteristics at large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with economic cycles, it can be concluded that economic cycles can cause anti-persistence at large time scales but that other factors are also at work. The empirical results support the view that financial markets are multi-fractal, and they indicate that deviations from efficiency and the type of model needed to describe the trend of market prices depend on the forecasting horizon.
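For reference, a minimal sketch of plain DFA-1 estimation of a (global) Hurst exponent is given below; the paper's lagged variant and its local-exponent estimation differ in detail and are not reproduced here.

```python
import numpy as np

def dfa_hurst(x, scales):
    """Estimate the Hurst exponent by detrended fluctuation analysis
    (DFA-1): integrate the series, remove a linear trend in windows of
    size s, and fit the power law F(s) ~ s^H."""
    profile = np.cumsum(x - np.mean(x))            # integrated profile
    fluct = []
    for s in scales:
        n = (len(profile) // s) * s
        t = np.arange(s)
        sq = []
        for seg in profile[:n].reshape(-1, s):
            coef = np.polyfit(t, seg, 1)           # local linear trend
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(sq)))
    hurst, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return hurst
```

On uncorrelated noise this returns H near 0.5 (no persistence); on an integrated random walk it returns a value near 1.5, bracketing the persistent/anti-persistent distinction the abstract discusses.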
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, computational efficiency remains the main problem in producing 3D, high-resolution images from real large-scale seismic data. In this paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared-memory optimization, register optimization and special function units (SFUs), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao
2017-01-01
The optimization of a large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to address the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules with an aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensionality reduction, and (3) reducing computational cost and speeding up the search process with WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm II (NSGAII), WNSGAII and WMO-ASMO is conducted for the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and the median of ecological index, optimized by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance, and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation (530.032 billion kW h) and ecological index (1.675)) with 1000 simulations and computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method proves more efficient and can provide a better Pareto frontier.
Large-scale weather dynamics during the 2015 haze event in Singapore
NASA Astrophysics Data System (ADS)
Djamil, Yudha; Lee, Wen-Chien; Tien Dat, Pham; Kuwata, Mikinori
2017-04-01
The 2015 haze event in Southeast Asia is widely considered the period of the worst air quality in the region in more than a decade. The source of the haze was forest and peatland fires on the Sumatra and Kalimantan Islands, Indonesia. The fires mostly resulted from the forest-clearance practice known as slash-and-burn, used to convert land to palm oil plantations. Although such clearance occurs seasonally, in 2015 it was made worse by the impact of a strong El Niño. The long period of drier atmosphere over the region due to El Niño made the fires easier to ignite and spread and more difficult to stop. The biomass-burning emissions from the forest and peatland fires caused a large-scale haze pollution problem on both islands and spread further into neighboring countries such as Singapore and Malaysia. In Singapore, for about two months (September-October 2015) the air quality was at an unhealthy level. This unfortunate condition caused socioeconomic losses such as school closures, cancellation of outdoor events, health issues and many more, with total losses estimated at S$700 million. The unhealthy level of Singapore's air quality is defined by an elevated pollutant standards index (PSI > 120) due to the haze arrival; it even reached a hazardous level (PSI ~ 300) for several days. The PSI is a metric of air quality in Singapore that aggregates six pollutants (SO2, PM10, PM2.5, NO2, CO and O3). In this study, we focused on PSI variability at weekly-biweekly time scales (periodicity < 30 days), since these are the least understood compared with the diurnal and seasonal scales. We identified three dominant time scales of PSI (~5, ~10 and ~20 days) using the wavelet method and investigated their large-scale atmospheric structures. The PSI-associated large-scale column-moisture horizontal structures over the Indo-Pacific basin are dominated by easterly propagating gyres on the synoptic (macro) scale for the ~5 day (~10 and ~20 day) time scales.
The propagating gyres manifest as a cyclical column-moisture flux trajectory around the Singapore region. Some of their phases are identified as responsible for transporting the haze from its source to Singapore. The haze source was identified by compositing the number of hotspots in grid space based on the three time scales of PSI. Further discussion of equatorial waves during the haze event will also be presented.
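The extraction of dominant PSI time scales can be illustrated with a plain periodogram applied to a synthetic series containing 5-, 10- and 20-day cycles; the study itself used a wavelet analysis, so this is only a simplified stand-in with illustrative amplitudes.

```python
import numpy as np

def dominant_periods(x, dt=1.0, k=3):
    """Return the k strongest periodicities (in units of dt) from a
    plain periodogram of a demeaned series."""
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    order = np.argsort(power[1:])[::-1] + 1   # skip the zero-frequency bin
    return sorted(1.0 / freqs[order[:k]])

# Synthetic daily "PSI-like" series with 5-, 10- and 20-day cycles plus noise.
days = np.arange(360)
psi = (np.sin(2 * np.pi * days / 5)
       + 0.8 * np.sin(2 * np.pi * days / 10)
       + 0.6 * np.sin(2 * np.pi * days / 20)
       + 0.1 * np.random.default_rng(1).normal(size=days.size))
periods = dominant_periods(psi)
```

A wavelet analysis additionally localizes when each time scale is active, which matters for episodic events like haze arrivals; the periodogram only gives the time-averaged picture.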
Mesoscale Dynamical Regimes in the Midlatitudes
NASA Astrophysics Data System (ADS)
Craig, G. C.; Selz, T.
2018-01-01
The atmospheric mesoscales are characterized by a complex variety of meteorological phenomena that defy simple classification. Here a full space-time spectral analysis is carried out, based on a 7 day convection-permitting simulation of springtime midlatitude weather on a large domain. The kinetic energy is largest at synoptic scales, and on the mesoscale it is largely confined to an "advective band" where space and time scales are related by a constant of proportionality which corresponds to a velocity scale of about 10 m s-1. Computing the relative magnitude of different terms in the governing equations allows the identification of five dynamical regimes. These are tentatively identified as quasi-geostrophic flow, propagating gravity waves, stationary gravity waves related to orography, acoustic modes, and a weak temperature gradient regime, where vertical motions are forced by diabatic heating.
Validation of Satellite Retrieved Land Surface Variables
NASA Technical Reports Server (NTRS)
Lakshmi, Venkataraman; Susskind, Joel
1999-01-01
The effective use of satellite observations of the land surface is limited by the lack of high-spatial-resolution ground data sets for validation of satellite products. Recent large-scale field experiments, including FIFE, HAPEX-Sahel and BOREAS, provide data sets with large spatial coverage and long time coverage. The objective of this paper is to characterize the difference between the satellite estimates and the ground observations. This study, and others along similar lines, will help us utilize satellite-retrieved data in large-scale modeling studies.
Scalable Performance Measurement and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, Todd
2009-01-01
Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
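The wavelet-compression idea can be illustrated with a one-level Haar transform that keeps only the largest coefficients. Libra's actual scheme is multi-level and applied to systemwide, time-varying load-balance data, so treat this as a toy sketch; the `keep` fraction is an illustrative knob.

```python
import numpy as np

def haar_compress(x, keep=0.25):
    """One level of the Haar wavelet transform, zeroing all but the
    largest `keep` fraction of coefficients (lossy compression)."""
    assert len(x) % 2 == 0
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation coefficients
    dif = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail coefficients
    coeffs = np.concatenate([avg, dif])
    cutoff = np.quantile(np.abs(coeffs), 1 - keep)
    return np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)

def haar_reconstruct(coeffs):
    """Invert the one-level Haar transform."""
    half = len(coeffs) // 2
    avg, dif = coeffs[:half], coeffs[half:]
    x = np.empty(2 * half)
    x[0::2] = (avg + dif) / np.sqrt(2)
    x[1::2] = (avg - dif) / np.sqrt(2)
    return x
```

For smooth load-balance profiles most of the energy sits in the approximation coefficients, so aggressive truncation of the details loses little: exactly the property the dissertation exploits.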
Herault, J; Rincon, F; Cossu, C; Lesur, G; Ogilvie, G I; Longaretti, P-Y
2011-09-01
The nature of dynamo action in shear flows prone to magnetohydrodynamic instabilities is investigated using the magnetorotational dynamo in Keplerian shear flow as a prototype problem. Using direct numerical simulations and Newton's method, we compute an exact time-periodic magnetorotational dynamo solution of the three-dimensional dissipative incompressible magnetohydrodynamic equations with rotation and shear. We discuss the physical mechanism behind the cycle and show that it results from a combination of linear and nonlinear interactions between a large-scale axisymmetric toroidal magnetic field and nonaxisymmetric perturbations amplified by the magnetorotational instability. We demonstrate that this large-scale dynamo mechanism is overall intrinsically nonlinear and not reducible to the standard mean-field dynamo formalism. Our results therefore provide clear evidence for a generic nonlinear generation mechanism of time-dependent coherent large-scale magnetic fields in shear flows and call for new theoretical dynamo models. These findings may offer important clues to understanding the transitional and statistical properties of subcritical magnetorotational turbulence.
Evidence for the timing of sea-level events during MIS 3
NASA Astrophysics Data System (ADS)
Siddall, M.
2005-12-01
Four large sea-level peaks of millennial-scale duration occur during MIS 3. In addition, smaller peaks may exist close to the sensitivity limit of existing methods for deriving sea level during these periods. Millennial-scale changes in temperature during MIS 3 are well documented across much of the planet and are linked in some unknown yet fundamental way to changes in ice volume / sea level. It is therefore highly likely that the timing of the sea-level events during MIS 3 will prove to be a `Rosetta Stone' for understanding millennial-scale climate variability. I will review observational and mechanistic arguments for the variation of sea level on Antarctic, Greenland and absolute time scales.
NASA Astrophysics Data System (ADS)
Bunyan, Jonathan; Moore, Keegan J.; Mojahed, Alireza; Fronk, Matthew D.; Leamy, Michael; Tawfick, Sameh; Vakakis, Alexander F.
2018-05-01
In linear time-invariant systems acoustic reciprocity holds by the Onsager-Casimir principle of microscopic reversibility, and it can be broken only by odd external biases, nonlinearities, or time-dependent properties. Recently it was shown that one-dimensional lattices composed of a finite number of identical nonlinear cells with internal scale hierarchy and asymmetry exhibit nonreciprocity both locally and globally. Considering a single cell composed of a large scale nonlinearly coupled to a small scale, local dynamic nonreciprocity corresponds to vibration energy transfer from the large to the small scale, but absence of energy transfer (and localization) from the small to the large scale. This has been recently proven both theoretically and experimentally. Then, considering the entire lattice, global acoustic nonreciprocity has been recently proven theoretically, corresponding to preferential energy transfer within the lattice under transient excitation applied at one of its boundaries, and absence of similar energy transfer (and localization) when the excitation is applied at its other boundary. This work provides experimental validation of the global acoustic nonreciprocity with a one-dimensional asymmetric lattice composed of three cells, with each cell incorporating nonlinearly coupled large and small scales. Due to the intentional asymmetry of the lattice, low impulsive excitations applied to one of its boundaries result in wave transmission through the lattice, whereas the same excitations applied to the other end lead to energy localization at the boundary and absence of wave transmission. This global nonreciprocity depends critically on energy (i.e., the intensity of the applied impulses), and reduced-order models recover the nonreciprocal acoustics and clarify the nonlinear mechanism generating nonreciprocity in this system.
Scaling behavior of an airplane-boarding model.
Brics, Martins; Kaupužs, Jevgenijs; Mahnke, Reinhard
2013-04-01
An airplane-boarding model, introduced earlier by Frette and Hemmer [Phys. Rev. E 85, 011130 (2012)], is studied with the aim of determining precisely its asymptotic power-law scaling behavior for a large number of passengers N. Based on Monte Carlo simulation data for very large system sizes up to N = 2^16 = 65536, we have analyzed numerically the scaling behavior of the mean boarding time
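A boarding simulation in the spirit of this model can be sketched as below: one aisle cell per row, one seat per row, synchronous updates from a snapshot, and a one-step delay while a passenger stows at their own row. The sequential entry queue used here is a simplification, so exact boarding times and the measured scaling exponent will differ from Frette and Hemmer's precise rules.

```python
import random

def boarding_time(order):
    """Single aisle, one seat per row; the passenger bound for row r
    blocks the aisle for one time step while stowing at row r.
    `order` lists row numbers in queue order (front of queue first)."""
    n = len(order)
    aisle = [None] * (n + 2)           # aisle[r]: passenger standing at row r
    queue = list(order)
    seated, t = 0, 0
    while seated < n:
        t += 1
        new_aisle = [None] * (n + 2)
        for r in range(1, n + 1):      # synchronous update from a snapshot
            p = aisle[r]
            if p is None:
                continue
            if p == r:
                seated += 1            # stows during this step, then sits
            elif aisle[r + 1] is None:
                new_aisle[r + 1] = p   # step forward into a free cell
            else:
                new_aisle[r] = p       # blocked: wait in place
        aisle = new_aisle
        if queue and aisle[1] is None:
            aisle[1] = queue.pop(0)    # next passenger steps aboard
    return t
```

Averaging `boarding_time` over random permutations of the rows for increasing N reproduces the qualitative growth of the mean boarding time that the paper quantifies as a power law.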
Topologically associating domains are stable units of replication-timing regulation.
Pope, Benjamin D; Ryba, Tyrone; Dileep, Vishnu; Yue, Feng; Wu, Weisheng; Denas, Olgert; Vera, Daniel L; Wang, Yanli; Hansen, R Scott; Canfield, Theresa K; Thurman, Robert E; Cheng, Yong; Gülsoy, Günhan; Dennis, Jonathan H; Snyder, Michael P; Stamatoyannopoulos, John A; Taylor, James; Hardison, Ross C; Kahveci, Tamer; Ren, Bing; Gilbert, David M
2014-11-20
Eukaryotic chromosomes replicate in a temporal order known as the replication-timing program. In mammals, replication timing is cell-type-specific with at least half the genome switching replication timing during development, primarily in units of 400-800 kilobases ('replication domains'), whose positions are preserved in different cell types, conserved between species, and appear to confine long-range effects of chromosome rearrangements. Early and late replication correlate, respectively, with open and closed three-dimensional chromatin compartments identified by high-resolution chromosome conformation capture (Hi-C), and, to a lesser extent, late replication correlates with lamina-associated domains (LADs). Recent Hi-C mapping has unveiled substructure within chromatin compartments called topologically associating domains (TADs) that are largely conserved in their positions between cell types and are similar in size to replication domains. However, TADs can be further sub-stratified into smaller domains, challenging the significance of structures at any particular scale. Moreover, attempts to reconcile TADs and LADs to replication-timing data have not revealed a common, underlying domain structure. Here we localize boundaries of replication domains to the early-replicating border of replication-timing transitions and map their positions in 18 human and 13 mouse cell types. We demonstrate that, collectively, replication domain boundaries share a near one-to-one correlation with TAD boundaries, whereas within a cell type, adjacent TADs that replicate at similar times obscure replication domain boundaries, largely accounting for the previously reported lack of alignment. 
Moreover, cell-type-specific replication timing of TADs partitions the genome into two large-scale sub-nuclear compartments revealing that replication-timing transitions are indistinguishable from late-replicating regions in chromatin composition and lamina association and accounting for the reduced correlation of replication timing to LADs and heterochromatin. Our results reconcile cell-type-specific sub-nuclear compartmentalization and replication timing with developmentally stable structural domains and offer a unified model for large-scale chromosome structure and function.
Downscaling Ocean Conditions: Initial Results using a Quasigeostrophic and Realistic Ocean Model
NASA Astrophysics Data System (ADS)
Katavouta, Anna; Thompson, Keith
2014-05-01
Previous theoretical work (Henshaw et al, 2003) has shown that the small-scale modes of variability of solutions of the unforced, incompressible Navier-Stokes equation, and Burgers' equation, can be reconstructed with surprisingly high accuracy from the time history of a few of the large-scale modes. Motivated by this theoretical work we first describe a straightforward method for assimilating information on the large scales in order to recover the small-scale oceanic variability. The method is based on nudging in specific wavebands and frequencies and is similar to the so-called spectral nudging method that has been used successfully for atmospheric downscaling with limited-area models (e.g. von Storch et al., 2000). The validity of the method is tested using a quasigeostrophic model configured to simulate a double ocean gyre separated by an unstable mid-ocean jet. It is shown that important features of the ocean circulation, including the position of the meandering mid-ocean jet and associated pinch-off eddies, can indeed be recovered from the time history of a small number of large-scale modes. The benefit of assimilating additional time series of observations from a limited number of locations, that alone are too sparse to significantly improve the recovery of the small scales using traditional assimilation techniques, is also demonstrated using several twin experiments. The final part of the study outlines the application of the approach using a realistic high-resolution (1/36 degree) model, based on the NEMO (Nucleus for European Modelling of the Ocean) modeling framework, configured for the Scotian Shelf off the east coast of Canada. The large-scale conditions used in this application are obtained from the HYCOM (HYbrid Coordinate Ocean Model) + NCODA (Navy Coupled Ocean Data Assimilation) global 1/12 degree analysis product. Henshaw, W., Kreiss, H.-O., Ystrom, J., 2003.
Numerical experiments on the interaction between the larger- and the small-scale motion of the Navier-Stokes equations. Multiscale Modeling and Simulation 1, 119-149. von Storch, H., Langenberg, H., Feser, F., 2000. A spectral nudging technique for dynamical downscaling purposes. Monthly Weather Review 128, 3664-3673.
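The core operation of spectral nudging described above (relax only the large-scale modes toward the driving solution and leave the small scales free) reduces, in a 1-D periodic toy setting, to the following sketch; the wavenumber cutoff and relaxation coefficient are illustrative choices, not values from the study.

```python
import numpy as np

def spectral_nudge(field, reference, k_cut, alpha=0.1):
    """Relax only the large-scale (k <= k_cut) Fourier modes of `field`
    toward `reference`, leaving smaller scales untouched."""
    fk = np.fft.rfft(field)
    rk = np.fft.rfft(reference)
    nudged = fk.copy()
    nudged[:k_cut + 1] += alpha * (rk[:k_cut + 1] - fk[:k_cut + 1])
    return np.fft.irfft(nudged, n=len(field))
```

Applied inside a model time loop, this steers the resolved large scales toward the driving analysis while the model's own dynamics generate the small scales, which is the downscaling behavior the abstract tests with the quasigeostrophic twin experiments.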
Relationship of D'' structure with the velocity variations near the inner-core boundary
NASA Astrophysics Data System (ADS)
Luo, Sheng-Nian; Ni, Sidao; Helmberger, Don
2002-06-01
Variations in regional differential times between PKiKP (i) and PKIKP (I) have been attributed to hemispheric P-velocity variations of about 1% in the upper 100 km of the inner core (referred to as HIC). The top of the inner core appears relatively fast beneath Asia, where D'' is also fast. An alternative interpretation could be lateral variation in P velocity in the lowermost outer core (HOC) producing the same differential times. To resolve this issue, we introduce the diffracted PKP phase near the B caustic (Bdiff) in the range of 139-145° epicentral distance, and the corresponding differential times between Bdiff and PKiKP and PKIKP as observed on broadband arrays. Owing to the long-wavelength nature of Bdiff, we scaled the S-wave tomography model with k values (k ≡ dlnVs/dlnVp) to obtain large-scale P-wave velocity structure in the lower mantle, as proposed by earlier studies. Waveform synthetics of Bdiff constructed with small k's predict complex waveforms not commonly observed, confirming the validity of a large scaling factor k. With the P velocity in the lower mantle constrained at large scale, the extra travel-time constraint imposed by Bdiff helps to resolve the HOC-HIC issue. Our preliminary results suggest k > 2 for the lowermost mantle and support the HIC hypothesis. An important implication is that there appears to be a relationship between D'' velocity structures and the structures near the inner-core boundary via core dynamics.
Rotation and magnetism in intermediate-mass stars
NASA Astrophysics Data System (ADS)
Quentin, Léo G.; Tout, Christopher A.
2018-06-01
Rotation and magnetism are increasingly recognized as important phenomena in stellar evolution. Surface magnetic fields from a few to 20 000 G have been observed, and models have suggested that magnetohydrodynamic transport of angular momentum and chemical composition could explain the peculiar composition of some stars. Stellar remnants such as white dwarfs have been observed with fields from a few to more than 109 G. We investigate the origin and evolution, on thermal and nuclear rather than dynamical time-scales, of an averaged large-scale magnetic field throughout a star's life, and its coupling to stellar rotation. Large-scale magnetic fields sustained until late stages of stellar evolution with conservation of magnetic flux could explain the very high fields observed in white dwarfs. We include these effects in the Cambridge stellar evolution code using three time-dependent advection-diffusion equations coupled to the structural and composition equations of stars to model the evolution of angular momentum and the two components of the magnetic field. We present the evolution in various cases for a 3 M_{⊙} star from the beginning to the late stages of its life. Our particular model assumes that turbulent motions, including convection, favour small-scale field at the expense of large-scale field. As a result, the large-scale field concentrates in radiative zones of the star and so is exchanged between the core and the envelope of the star as it evolves. The field is sustained until the end of the asymptotic giant branch, when it concentrates in the degenerate core.
A novel computational approach towards the certification of large-scale boson sampling
NASA Astrophysics Data System (ADS)
Huh, Joonsuk
Recent proposals of boson sampling and the corresponding experiments point to a possible disproof of the extended Church-Turing Thesis. Furthermore, the application of boson sampling to molecular computation has been suggested theoretically. Until now, however, only small-scale experiments with a few photons have been performed successfully. Boson sampling experiments with 20-30 photons are expected to reveal the computational superiority of the quantum device. A novel theoretical proposal for large-scale boson sampling using microwave photons is highly promising due to its deterministic photon sources and scalability. Therefore, a certification protocol for large-scale boson sampling experiments is needed to complete the exciting story. We propose, in this presentation, a computational protocol towards the certification of large-scale boson sampling. The correlations of paired photon modes and the time-dependent characteristic functional with its Fourier components can show the fingerprint of large-scale boson sampling. This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2015R1A6A3A04059773), the ICT R&D program of MSIP/IITP [2015-019, Fundamental Research Toward Secure Quantum Communication] and a Mueunjae Institute for Chemistry (MIC) postdoctoral fellowship.
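Boson-sampling output probabilities are proportional to squared permanents of submatrices of the interferometer unitary, which is the source of the conjectured classical hardness. A standard way to evaluate small permanents exactly is Ryser's O(2^n n) inclusion-exclusion formula, sketched below as background for the certification problem; this is not the proposed correlation-based protocol itself.

```python
from itertools import combinations
import numpy as np

def permanent(A):
    """Matrix permanent via Ryser's formula:
    perm(A) = sum over nonempty column subsets S of
              (-1)^(n-|S|) * prod_i sum_{j in S} A[i, j]."""
    n = len(A)
    total = 0.0
    for size in range(1, n + 1):
        sign = (-1) ** (n - size)
        for cols in combinations(range(n), size):
            colsum = A[:, list(cols)].sum(axis=1)  # row sums over subset S
            total += sign * np.prod(colsum)
    return total
```

The same routine works for complex submatrices of a unitary, which is what a brute-force verifier would evaluate; the exponential cost for 20-30 photons is precisely why indirect certification protocols like the one proposed here are needed.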
Activity-Based Introductory Physics Reform *
NASA Astrophysics Data System (ADS)
Thornton, Ronald
2004-05-01
Physics education research has shown that learning environments that engage students and allow them to take an active part in their learning can lead to large conceptual gains compared to those of good traditional instruction. Examples of successful curricula and methods include Peer Instruction, Just in Time Teaching, RealTime Physics, Workshop Physics, Scale-Up, and Interactive Lecture Demonstrations (ILDs). RealTime Physics promotes interaction among students in a laboratory setting and makes use of powerful real-time data-logging tools to teach concepts as well as quantitative relationships. An active learning environment is often difficult to achieve in large lecture sessions, and Workshop Physics and Scale-Up largely eliminate lectures in favor of collaborative student activities. Peer Instruction, Just in Time Teaching, and Interactive Lecture Demonstrations (ILDs) make lectures more interactive in complementary ways. This presentation will introduce these reforms and use Interactive Lecture Demonstrations (ILDs) with the audience to illustrate the types of curricula and tools used in the curricula above. ILDs make use of real experiments, real-time data-logging tools and student interaction to create an active learning environment in large lecture classes. A short video of students involved in interactive lecture demonstrations will be shown. The results of research studies at various institutions to measure the effectiveness of these methods will be presented.
Design and implementation of a distributed large-scale spatial database system based on J2EE
NASA Astrophysics Data System (ADS)
Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia
2003-03-01
With the increasing maturity of distributed object technology, CORBA, .NET and EJB are universally used in the traditional IT field. However, theories and practices of distributed spatial databases need further improvement owing to the tensions between large-scale spatial data and limited network bandwidth, and between short sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are provided, comprising a GIS client application, web server, GIS application server and spatial data server. Moreover, the design and implementation of the components of the GIS client application based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (containing session beans and entity beans) are explained. Besides, experiments on the relation between spatial data volume and response time under different conditions were conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.
Automatic location of L/H transition times for physical studies with a large statistical basis
NASA Astrophysics Data System (ADS)
González, S.; Vega, J.; Murari, A.; Pereira, A.; Dormido-Canto, S.; Ramírez, J. M.; contributors, JET-EFDA
2012-06-01
Completely automatic techniques to estimate and validate L/H transition times can be essential in L/H transition analyses. The generation of databases with hundreds of transition times and without human intervention is an important step towards (a) L/H transition physics analysis, (b) validation of L/H theoretical models and (c) creation of L/H scaling laws. An entirely unattended methodology is presented in this paper to build large databases of transition times in JET using time series. The proposed technique has been applied to a dataset of 551 JET discharges between campaigns C21 and C26. For discharges that show a clear signature in the time series, a prediction is made through the locating properties of the wavelet transform; the prediction is accurate, with an uncertainty interval of ±3.2 ms. Discharges without a clear pattern in the time series are handled by an L/H mode classifier trained on the discharges with a clear signature. In this case, the estimation error shows a distribution with mean and standard deviation of 27.9 ms and 37.62 ms, respectively. Two different regression methods have been applied to the measurements acquired at the transition times identified by the automatic system. The obtained scaling laws for the threshold power are not significantly different from those obtained using the data at the transition times determined manually by experts. The automatic methods make it possible to perform physical studies with a large number of discharges, showing, for example, that there are statistically different types of transitions characterized by different scaling laws.
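The wavelet-based locating step can be illustrated with a toy change-point detector. The abstract does not specify the signals or the wavelet used, so the Haar-like kernel, window width, and synthetic trace below are illustrative assumptions only, not the JET implementation.

```python
import numpy as np

def haar_transition_time(signal, width):
    """Locate an abrupt level shift by sliding a Haar-like step wavelet
    over the signal; the response peaks where the mean level jumps."""
    # Haar step wavelet: -1 over the first half-window, +1 over the second.
    kernel = np.concatenate([-np.ones(width), np.ones(width)]) / (2 * width)
    # Cross-correlation of the signal with the kernel.
    response = np.convolve(signal, kernel[::-1], mode="valid")
    # Window position with the strongest step response marks the transition.
    return int(np.argmax(np.abs(response))) + width

# Synthetic trace: one quiet level, then a sharp drop at sample 300.
rng = np.random.default_rng(0)
trace = np.concatenate([np.ones(300), 0.2 * np.ones(200)])
trace += 0.02 * rng.standard_normal(trace.size)

print(haar_transition_time(trace, width=20))  # close to 300
```

The same idea generalizes to a multi-scale wavelet transform, where agreement of the response peak across scales distinguishes a genuine transition from noise.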
Hopping Diffusion of Nanoparticles Subjected to Topological Constraints
NASA Astrophysics Data System (ADS)
Cai, Li-Heng; Panyukov, Sergey; Rubinstein, Michael
2013-03-01
We describe a novel hopping mechanism for diffusion of large non-sticky nanoparticles subjected to topological constraints in polymer solids (networks and gels) and entangled polymer liquids (melts and solutions). Probe particles with size larger than the mesh size of unentangled polymer networks (tube diameter of entangled polymer liquids) are trapped by the network (entanglement) cages at time scales longer than the relaxation time of the network (entanglement) strand. At long time scales, however, these particles can move further by hopping between neighboring confinement cages. This hopping is controlled by fluctuations of surrounding confinement cages, which could be large enough to allow particles to slip through. The terminal particle diffusion coefficient dominated by this hopping diffusion is appreciable for particles with size slightly larger than the network mesh size (tube diameter). Very large particles in polymer solids will be permanently trapped by local network cages, whereas they can still move in polymer liquids by waiting for entanglement cages to rearrange on the relaxation time scale of the liquids. We would like to acknowledge the financial support of NSF CHE-0911588, DMR-0907515, DMR-1121107, DMR-1122483, and CBET-0609087, NIH R01HL077546 and P50HL107168, and Cystic Fibrosis Foundation under grant RUBIN09XX0.
NASA Astrophysics Data System (ADS)
de Beurs, K.; Henebry, G. M.; Owsley, B.; Sokolik, I. N.
2016-12-01
Land surface phenology metrics allow for the summarization of long image time series into a set of annual observations that describe the vegetated growing season. These metrics have been shown to respond to both large scale climatic and anthropogenic impacts. In this study we assemble a time series (2001-2014) of Moderate Resolution Imaging Spectroradiometer (MODIS) Nadir BRDF-Adjusted Reflectance data and land surface temperature data at 0.05° spatial resolution. We then derive land surface phenology metrics focusing on the peak of the growing season by fitting quadratic regression models using NDVI and Accumulated Growing Degree-Days (AGDD) derived from land surface temperature. We link the annual information on the peak timing, the thermal time to peak and the maximum of the growing season with five of the most important large scale climate oscillations: NAO, AO, PDO, PNA and ENSO. We demonstrate several significant correlations between the climate oscillations and the land surface phenology peak metrics for a range of different bioclimatic regions in both dryland Central Asia and the northern Polar Regions. We will then link the correlation results with trends derived by the seasonal Mann-Kendall trend detection method applied to several satellite derived vegetation and albedo datasets.
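For a single pixel-year, the quadratic NDVI-AGDD fit described above reduces to fitting a parabola and reading off its vertex. A minimal sketch (function and variable names are mine, and the seasonal curve is synthetic):

```python
import numpy as np

def peak_metrics(agdd, ndvi):
    """Fit NDVI = a + b*AGDD + c*AGDD^2 and return the thermal time to
    peak (-b / 2c) and the fitted peak NDVI."""
    c, b, a = np.polyfit(agdd, ndvi, deg=2)   # highest power first
    agdd_peak = -b / (2.0 * c)                # vertex of the parabola
    ndvi_peak = a + b * agdd_peak + c * agdd_peak ** 2
    return agdd_peak, ndvi_peak

# Synthetic season peaking at AGDD = 1200 with maximum NDVI = 0.8.
agdd = np.linspace(0, 2400, 50)
ndvi = 0.8 - 2.5e-7 * (agdd - 1200.0) ** 2

tt_peak, peak = peak_metrics(agdd, ndvi)
print(round(tt_peak), round(peak, 2))  # 1200 0.8
```

In practice the fit would be applied per pixel and per year to the MODIS-derived NDVI and AGDD series, yielding the annual peak-timing and peak-greenness metrics used in the correlation analysis.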
Large Scale Landslide Database System Established for the Reservoirs in Southern Taiwan
NASA Astrophysics Data System (ADS)
Tsai, Tsai-Tsung; Tsai, Kuang-Jung; Shieh, Chjeng-Lun
2017-04-01
Typhoon Morakot's severe impact on southern Taiwan awakened public awareness of large scale landslide disasters. Such disasters produce large quantities of sediment, which negatively affect the operating functions of reservoirs. In order to reduce the risk of these disasters within the study area, the establishment of a database for hazard mitigation and disaster prevention is necessary. Real-time data and numerous archives of engineering data, environmental information, photos, and video will not only help people make appropriate decisions, but are also of great value for further processing and analysis. This study defined basic data formats and standards from the various types of data collected about these reservoirs and then provided a management platform based on them. Meanwhile, to ensure practicality and convenience, the large scale landslide disaster database system was built with the ability both to provide and to receive information, so that users can work with it on different types of devices. IT technology progresses extremely quickly, and even the most modern system may become outdated at any time; in order to provide long-term service, the system therefore reserves the possibility of user-defined data formats/standards and a user-defined system structure. The system established by this study is based on the HTML5 standard and uses responsive web design technology, which makes the large scale landslide disaster database system easy for users to operate and develop further.
NASA Astrophysics Data System (ADS)
Steinhaus, Ben; Shen, Amy; Sureshkumar, Radhakrishna
2006-11-01
We investigate the effects of fluid elasticity and channel geometry on polymeric droplet pinch-off by performing systematic experiments using viscoelastic polymer solutions that possess practically shear-rate-independent viscosity (Boger fluids). Four different geometric sizes (width and depth are scaled up proportionally at ratios of 0.5, 1, 2, 20) are used to study the effect of the length scale, which in turn influences the ratio of elastic to viscous forces as well as the Rayleigh time scale associated with the interfacial instability of a cylindrical column of liquid. We observe a power law relationship between the dimensionless (scaled with respect to the Rayleigh time scale) capillary pinch-off time, T, and the elasticity number, E, defined as the ratio of the fluid relaxation time to the time scale of viscous diffusion. In general, T increases dramatically with increasing E. The inhibition of "bead-on-a-string" formation is observed for flows in which the effective Deborah number, De, defined as the ratio of the fluid relaxation time to the Rayleigh time scale, becomes greater than 10. For sufficiently large values of De, the Rayleigh instability may be modified substantially by fluid elasticity.
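A power-law relationship such as the reported T-E scaling is typically recovered by linear regression in log-log space, since T = k E^n implies log T = log k + n log E. The sketch below uses synthetic data with an assumed exponent, not the paper's measurements:

```python
import numpy as np

def power_law_fit(E, T):
    """Fit T = k * E**n by linear regression of log(T) on log(E);
    returns the exponent n and prefactor k."""
    n, log_k = np.polyfit(np.log(E), np.log(T), deg=1)
    return n, np.exp(log_k)

# Synthetic (E, T) pairs generated with exponent 0.6 and prefactor 3.0.
E = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
T = 3.0 * E ** 0.6

n, k = power_law_fit(E, T)
print(round(n, 3), round(k, 3))  # 0.6 3.0
```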
Renz, Adina J.; Meyer, Axel; Kuraku, Shigehiro
2013-01-01
Cartilaginous fishes, divided into Holocephali (chimaeras) and Elasmobranchii (sharks, rays and skates), occupy a key phylogenetic position among extant vertebrates in reconstructing their evolutionary processes. Their accurate evolutionary time scale is indispensable for better understanding of the relationship between phenotypic and molecular evolution of cartilaginous fishes. However, our current knowledge on the time scale of cartilaginous fish evolution largely relies on estimates using mitochondrial DNA sequences. In this study, making the best use of the still partial, but large-scale sequencing data of cartilaginous fish species, we estimate the divergence times between the major cartilaginous fish lineages employing nuclear genes. By rigorous orthology assessment based on available genomic and transcriptomic sequence resources for cartilaginous fishes, we selected 20 protein-coding genes in the nuclear genome, spanning 2973 amino acid residues. Our analysis based on the Bayesian inference resulted in the mean divergence time of 421 Ma, the late Silurian, for the Holocephali-Elasmobranchii split, and 306 Ma, the late Carboniferous, for the split between sharks and rays/skates. By applying these results and other documented divergence times, we measured the relative evolutionary rate of the Hox A cluster sequences in the cartilaginous fish lineages, which resulted in a lower substitution rate with a factor of at least 2.4 in comparison to tetrapod lineages. The obtained time scale enables mapping phenotypic and molecular changes in a quantitative framework. It is of great interest to corroborate the less derived nature of cartilaginous fish at the molecular level as a genome-wide phenomenon. PMID:23825540
The scientific targets of the SCOPE mission
NASA Astrophysics Data System (ADS)
Fujimoto, M.; Saito, Y.; Tsuda, Y.; Shinohara, I.; Kojima, H.
The future Japanese magnetospheric mission "SCOPE" is now under study (planned for launch in 2012). The main purpose of this mission is to investigate the dynamic behavior of plasmas in the Earth's magnetosphere from the viewpoint of cross-scale coupling. Dynamical collisionless space plasma phenomena, even those that are large scale as a whole, are characterized by coupling over various time and spatial scales. The best example is the magnetic reconnection process, which is a large-scale energy conversion process but has a small key region at the heart of its engine. Inside the key region, electron-scale dynamics plays the key role in breaking the frozen-in constraint, by which reconnection is allowed to proceed. The SCOPE mission is composed of one large mother satellite and four small daughter satellites. The mother spacecraft will be equipped with an electron detector with 10 ms time resolution, so that scales down to the electron scale will be resolved. Three of the four daughter satellites surround the mother satellite three-dimensionally, with mutual distances between several km and several thousand km that are varied during the mission. Plasma measurements on these spacecraft will have 1 s resolution and will provide information on meso-scale plasma structure. The fourth daughter satellite stays near the mother satellite at a distance of less than 100 km. Through correlation between the plasma wave instruments on the daughter and mother spacecraft, the propagation of waves and information on electron-scale dynamics will be obtained. With this strategy, both meso- and micro-scale information on the dynamics is obtained, which will enable us to investigate the physics of space plasma from the cross-scale coupling point of view.
Synthesis of underreported small-scale fisheries catch in Pacific island waters
NASA Astrophysics Data System (ADS)
Zeller, D.; Harper, S.; Zylich, K.; Pauly, D.
2015-03-01
We synthesize fisheries catch reconstruction studies for 25 Pacific island countries, states and territories, which compare estimates of total domestic catches with officially reported catch data. We exclude data for the large-scale tuna fleets, which have largely foreign beneficial ownership, even when flying Pacific flags. However, we recognize the considerable financial contributions derived from foreign access or charter fees for Pacific host countries. The reconstructions for the 25 entities from 1950 to 2010 suggested that total domestic catches were 2.5 times the data reported to FAO. This discrepancy was largest in early periods (1950: 6.4 times), while for 2010, total catches were 1.7 times the reported data. There was a significant difference in trend between reported and reconstructed catches since 2000, with reconstructed catches declining strongly since their peak in 2000. Total catches increased from 110,000 t yr⁻¹ in 1950 (of which 17,400 t were reported) to a peak of over 250,000 t yr⁻¹ in 2000, before declining to around 200,000 t yr⁻¹ by 2010. This decrease is driven by a declining artisanal (small-scale commercial) catch, which was not compensated for by increasing domestic industrial (large-scale commercial) catches. The artisanal fisheries appear to be declining from a peak of 97,000 t yr⁻¹ in 1992 to less than 50,000 t yr⁻¹ by 2010. However, total catches were dominated by subsistence (small-scale, non-commercial) fisheries, which accounted for 69 % of total catches, with the majority missing from the reported data. Artisanal catches accounted for 22 %, while truly domestic industrial fisheries accounted for only 6 % of total catches. The smallest component is the recreational (small-scale, non-commercial and largely for leisure) sector (2 %), which, although small in catch, is likely of economic importance in some areas due to its direct link to tourism income.
NASA Astrophysics Data System (ADS)
Hua, H.; Owen, S. E.; Yun, S. H.; Agram, P. S.; Manipon, G.; Starch, M.; Sacco, G. F.; Bue, B. D.; Dang, L. B.; Linick, J. P.; Malarout, N.; Rosen, P. A.; Fielding, E. J.; Lundgren, P.; Moore, A. W.; Liu, Z.; Farr, T.; Webb, F.; Simons, M.; Gurrola, E. M.
2017-12-01
With the increased availability of open SAR data (e.g. Sentinel-1 A/B), new challenges are being faced in processing and analyzing the voluminous SAR datasets to make geodetic measurements. Upcoming SAR missions such as NISAR are expected to generate close to 100 TB per day. The Advanced Rapid Imaging and Analysis (ARIA) project can now generate geocoded unwrapped phase and coherence products from Sentinel-1 TOPS mode data in an automated fashion, using the ISCE software. This capability is currently being exercised on various study sites across the United States and around the globe, including Hawaii, Central California, Iceland and South America. The automated, large-scale SAR data processing and analysis capabilities use cloud computing techniques to speed the computations and provide scalable processing power and storage. Questions being explored include how to process these voluminous SLCs and interferograms at global scales and how to keep up with the large daily SAR data volumes. Scene-partitioning approaches in the processing pipeline help in handling global-scale processing up to unwrapped interferograms, with stitching done at a late stage. We have built an advanced science data system with rapid search functions to enable access to the derived data products. Rapid image processing of Sentinel-1 data to interferograms and time series is already being applied to natural hazards including earthquakes, floods, volcanic eruptions, and land subsidence due to fluid withdrawal. We will present the status of the ARIA science data system for generating science-ready data products, along with challenges that arise in processing SAR datasets to derived time series data products at large scales. For example, how do we perform large-scale data quality screening on interferograms? What approaches can be used to minimize compute, storage, and data movement costs for time series analysis in the cloud?
We will also present findings from applying machine learning and data analytics to the processed SAR data streams, as well as lessons learned on easing the SAR community onto interfacing with these cloud-based SAR science data systems.
Horiguchi, Hiromasa; Yasunaga, Hideo; Hashimoto, Hideki; Ohe, Kazuhiko
2012-12-22
Secondary use of large scale administrative data is increasingly popular in health services and clinical research, where a user-friendly tool for data management is in great demand. MapReduce technology such as Hadoop is a promising tool for this purpose, though its use has been limited by the lack of user-friendly functions for transforming large scale data into wide table format, where each subject is represented by one row, for use in health services and clinical research. Since the original specification of Pig provides very few functions for column field management, we have developed a novel system called GroupFilterFormat to handle the definition of field and data content based on a Pig Latin script. We have also developed, as an open-source project, several user-defined functions to transform the table format using GroupFilterFormat and to deal with processing that considers date conditions. Having prepared dummy discharge summary data for 2.3 million inpatients and medical activity log data for 950 million events, we used the Elastic Compute Cloud environment provided by Amazon Inc. to execute processing speed and scaling benchmarks. In the speed benchmark test, the response time was significantly reduced and a linear relationship was observed between the quantity of data and processing time in both a small and a very large dataset. The scaling benchmark test showed clear scalability. In our system, doubling the number of nodes resulted in a 47% decrease in processing time. Our newly developed system is widely accessible as an open resource. This system is very simple and easy to use for researchers who are accustomed to using declarative command syntax for commercial statistical software and Structured Query Language. 
Although our system needs further sophistication to allow more flexibility in scripts and to improve efficiency in data processing, it shows promise in facilitating the application of MapReduce technology to efficient data processing with large scale administrative data in health services and clinical research.
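The long-to-wide transformation that GroupFilterFormat performs on Hadoop can be sketched single-node in plain Python: long-format (subject, field, value) event rows are grouped into one record per subject. The record and field names below are illustrative, not from the paper's datasets.

```python
from collections import defaultdict

# Long-format event log: one (subject, field, value) row per observation.
events = [
    (1, "age", 63), (1, "drug_days", 14),
    (2, "age", 58), (2, "drug_days", 7), (2, "lab_count", 3),
]

def to_wide(records):
    """Group long-format rows into one dict per subject: the
    'one row per patient' wide table used in clinical analysis."""
    wide = defaultdict(dict)
    for subject, field, value in records:
        wide[subject][field] = value
    return dict(wide)

table = to_wide(events)
print(table[1])  # {'age': 63, 'drug_days': 14}
```

On Hadoop the same grouping is expressed declaratively in Pig Latin and distributed across nodes, which is what makes it scale to the hundreds of millions of event rows used in the benchmarks.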
Resistivity scaling and electron relaxation times in metallic nanowires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moors, Kristof; Sorée, Bart
2014-08-14
We study the resistivity scaling in nanometer-sized metallic wires due to surface roughness and grain boundaries, currently the main causes of electron scattering in nanoscaled interconnects. The resistivity has been obtained with the Boltzmann transport equation, adopting the relaxation time approximation of the distribution function and the effective mass approximation for the conducting electrons. The relaxation times are calculated exactly, using Fermi's golden rule, resulting in a correct relaxation time for every sub-band state contributing to the transport. In general, the relaxation time strongly depends on the sub-band state, something that remained unclear with the methods of previous work. The resistivity scaling is obtained for different roughness and grain-boundary properties, showing large differences in scaling behavior and relaxation times. Our model clearly indicates that the resistivity is dominated by grain-boundary scattering, easily surpassing the surface roughness contribution by a factor of 10.
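The statement that grain-boundary scattering dominates can be illustrated with Matthiessen's rule, which combines independent scattering mechanisms by adding their rates. The relaxation times below are illustrative numbers chosen to reflect the reported factor-of-10 disparity, not values from the paper:

```python
def total_relaxation_time(tau_gb, tau_sr):
    """Matthiessen's rule: independent mechanisms add in rate,
    1/tau_total = 1/tau_gb + 1/tau_sr."""
    return 1.0 / (1.0 / tau_gb + 1.0 / tau_sr)

tau_gb = 1.0e-14   # grain-boundary-limited relaxation time (s), assumed
tau_sr = 1.0e-13   # surface-roughness-limited: 10x weaker scattering
tau = total_relaxation_time(tau_gb, tau_sr)
print(f"{tau:.2e}")  # dominated by the grain-boundary term
```

With a 10x weaker roughness channel, the total relaxation time sits within about 10% of the grain-boundary value, consistent with grain boundaries controlling the resistivity.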
ROADNET: A Real-time Data Aware System for Earth, Oceanographic, and Environmental Applications
NASA Astrophysics Data System (ADS)
Vernon, F.; Hansen, T.; Lindquist, K.; Ludascher, B.; Orcutt, J.; Rajasekar, A.
2003-12-01
The Real-time Observatories, Application, and Data management Network (ROADNet) Program aims to develop an integrated, seamless, and transparent environmental information network that will deliver geophysical, oceanographic, hydrological, ecological, and physical data to a variety of users in real time. ROADNet is a multidisciplinary, multinational partnership of researchers, policymakers, natural resource managers, educators, and students who aim to use the data to advance our understanding and management of coastal, ocean, riparian, and terrestrial Earth systems in Southern California, Mexico, and well offshore. To date, project activity and funding have focused on the design and deployment of network linkages and on the exploratory development of the real-time data management system. We are currently adapting powerful "Data Grid" technologies to the unique challenges associated with the management and manipulation of real-time data. Current "Grid" projects deal with static data files, and significant technical innovation is required to address fundamental problems of real-time data processing, integration, and distribution. The technologies developed through this research will create a system that dynamically adapts downstream processing, cataloging, and data access interfaces when sensors are added or removed from the system; provides for real-time processing and monitoring of data streams, detecting events and triggering computations, sensor and logger modifications, and other actions; integrates heterogeneous data from multiple (signal) domains; and provides for large-scale archival and querying of "consolidated" data. The software tools that must be developed do not exist, although limited prototype systems are available.
This research has implications for the success of large-scale NSF initiatives in the Earth sciences (EarthScope), ocean sciences (OOI- Ocean Observatories Initiative), biological sciences (NEON - National Ecological Observatory Network) and civil engineering (NEES - Network for Earthquake Engineering Simulation). Each of these large scale initiatives aims to collect real-time data from thousands of sensors, and each will require new technologies to process, manage, and communicate real-time multidisciplinary environmental data on regional, national, and global scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Gang
Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed both to the lack of spatial resolution in the models and to the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections in a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), aqua-planet configurations of the Model for Prediction Across Scales and High Order Method Modeling Environments (resolutions ranging from 240 km to 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large-scale circulation will be quantified both on the basis of the well-tested preferred circulation regime approach and with very recently developed measures, the finite amplitude Wave Activity (FAWA) and its spectrum. The fine-scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large-scale measures as indicators of the probability of occurrence of the finer-scale structures, and hence extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.
Three-dimensional time dependent computation of turbulent flow
NASA Technical Reports Server (NTRS)
Kwak, D.; Reynolds, W. C.; Ferziger, J. H.
1975-01-01
The three-dimensional, primitive equations of motion are solved numerically for the cases of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to the governing equations to define the large-scale field. This gives rise to additional second-order computed scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth-order differencing scheme in space and a second-order Adams-Bashforth predictor for explicit time stepping. The results are compared with experiments, and statistical information is extracted from the computer-generated data.
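The second-order Adams-Bashforth predictor mentioned above can be sketched on a scalar model problem du/dt = -u (the actual solver applies it to the filtered Navier-Stokes equations on a 3-D grid):

```python
import math

def ab2_integrate(u0, rhs, dt, nsteps):
    """Explicit second-order Adams-Bashforth time stepping:
    u_{n+1} = u_n + dt * (1.5*f_n - 0.5*f_{n-1})."""
    u = u0
    f_prev = rhs(u)
    u = u + dt * f_prev            # bootstrap the first step with forward Euler
    for _ in range(nsteps - 1):
        f = rhs(u)
        u = u + dt * (1.5 * f - 0.5 * f_prev)
        f_prev = f
    return u

# Model problem du/dt = -u: integrate to t = 1, exact answer exp(-1).
u_end = ab2_integrate(1.0, lambda u: -u, dt=0.01, nsteps=100)
print(round(u_end, 4))  # close to exp(-1) ≈ 0.3679
```

Being explicit and requiring only one right-hand-side evaluation per step (the previous one is reused), AB2 is cheap per step, which is why it was attractive for early large-eddy simulations; its drawback is a conditional stability limit on the time step.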
Strain localisation in the continental lithosphere, a scale-dependent process
NASA Astrophysics Data System (ADS)
Jolivet, Laurent; Burov, Evguenii
2013-04-01
Strain localisation in continents is a general question tackled by specialists of various disciplines in the Earth Sciences. Field geologists working at regional scale are able to describe the succession of events leading to the formation of large strain zones that accommodate large displacements within plate boundaries. On the other end of the spectrum, laboratory experiments provide numbers that quantitatively describe the rheology of rock material at the scale of a few mm and at deformation rates up to 8-10 orders of magnitude faster than in nature. Extrapolating from the scale of the experiment to the scale of the continental lithosphere is a considerable leap across 8-10 orders of magnitude in both space and time. It is, however, quite obvious that different processes are at work at each scale considered. At the scale of a grain aggregate, diffusion within individual grains, dislocation creep or grain boundary sliding, depending on temperature and fluid conditions, are of primary importance. But at the scale of a mountain belt, a major detachment or a strike-slip shear zone that has accommodated tens or hundreds of kilometres of relative displacement, other parameters take over, such as structural softening and the heterogeneity of the crust inherited from past tectonic events that have juxtaposed rock units of very different compositions and induced a strong orientation of rocks. Once deformation is localised along major shear zones, grain size reduction, interaction between rocks and fluids, metamorphic reactions and other small-scale processes tend to further localise the strain. Because the crust is colder and more lithologically complex, this heterogeneity is likely much more prominent in the crust than in the mantle, and so the relative importance of "small-scale" and "large-scale" parameters will be very different in the crust and in the mantle.
Thus, depending upon the relative thickness of the crust and mantle in the deforming lithosphere, each mechanism will have more or less important consequences for strain localisation. This complexity sometimes leads modellers to disregard experimental parameters in large-scale thermo-mechanical models and to use instead ad hoc "large-scale" numbers that better fit the observed geological history. The goal of the ERC RHEOLITH project is to associate with each tectonic process the relevant rheological parameters at the scale considered, in an attempt to elaborate a generalized "Preliminary Rheology Model Set for Lithosphere" (PReMSL), which will cover the entire time and spatial scale range of deformation.
Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo
2014-04-21
Software defined networking (SDN) has become a focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployment, has not yet been evaluated. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof of concept for future network deployment.
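Blocking probability in a loss network of this kind is classically modeled by the Erlang B formula. The sketch below is a generic illustration with assumed parameters (wavelength-slot count and offered load), not the testbed's measured values:

```python
def erlang_b(servers, erlangs):
    """Erlang B blocking probability for `servers` circuits (e.g.
    wavelength slots) under offered load `erlangs`, computed with
    the numerically stable recursion B_n = A*B_{n-1} / (n + A*B_{n-1})."""
    b = 1.0
    for n in range(1, servers + 1):
        b = erlangs * b / (n + erlangs * b)
    return b

# 10 slots, 5 Erlangs of offered traffic.
print(round(erlang_b(10, 5.0), 4))  # ≈ 0.0184
```

Comparing such an analytic baseline against the measured blocking probability of the testbed is one way to judge how much overhead the SDN control plane adds under increasing traffic load.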
Wood, Fiona; Kowalczuk, Jenny; Elwyn, Glyn; Mitchell, Clive; Gallacher, John
2011-08-01
Population based genetics studies are dependent on large numbers of individuals in the pursuit of small effect sizes. Recruiting and consenting a large number of participants is both costly and time consuming. We explored whether an online consent process for large-scale genetics studies is acceptable for prospective participants using an example online genetics study. We conducted semi-structured interviews with 42 members of the public stratified by age group, gender and newspaper readership (a measure of social status). Respondents were asked to use a website designed to recruit for a large-scale genetic study. After using the website a semi-structured interview was conducted to explore opinions and any issues they would have. Responses were analysed using thematic content analysis. The majority of respondents said they would take part in the research (32/42). Those who said they would decline to participate saw fewer benefits from the research, wanted more information and expressed a greater number of concerns about the study. Younger respondents had concerns over time commitment. Middle aged respondents were concerned about privacy and security. Older respondents were more altruistic in their motivation to participate. Common themes included trust in the authenticity of the website, security of personal data, curiosity about their own genetic profile, operational concerns and a desire for more information about the research. Online consent to large-scale genetic studies is likely to be acceptable to the public. The online consent process must establish trust quickly and effectively by asserting authenticity and credentials, and provide access to a range of information to suit different information preferences.
Scale problems in reporting landscape pattern at the regional scale
R.V. O' Neill; C.T. Hunsaker; S.P. Timmins; B.L. Jackson; K.B. Jones; Kurt H. Riitters; James D. Wickham
1996-01-01
Remotely sensed data for the Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the grain or resolution of the data. Grain should be 2 to 5 times smaller than the...
How Large-Scale Flows May Influence Solar Activity
NASA Technical Reports Server (NTRS)
Hathaway, D. H.
2004-01-01
Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle and play important roles in shaping the Sun's magnetic field. Differential rotation amplifies the magnetic field through its shearing action and converts poloidal field into toroidal field. Poleward meridional flow near the surface carries magnetic flux that reverses the magnetic poles at about the time of solar maximum. The deeper, equatorward meridional flow can carry magnetic flux back toward the lower latitudes where it erupts through the surface to form tilted active regions that convert toroidal fields into oppositely directed poloidal fields. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain both the differential rotation and the meridional circulation. These convective motions can also influence solar activity directly by shaping the magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new coarsening kinetics is found in the region of ultrahigh volume fraction. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for predicting runtime is developed, which shows good agreement with actual run times from numerical tests.
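The runtime-prediction model itself is not reproduced in the abstract; a minimal sketch of how such a model could be built is a log-log power-law fit of wall-clock time against grid size. The timings below are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical wall-clock times (s) for N^3 grids; the paper's actual
# model and data are not reproduced here -- this is an illustrative fit.
N = np.array([64, 128, 256, 512])
T = np.array([1.9, 16.0, 131.0, 1050.0])

# Fit log T = log a + b log N  (power law T = a * N**b).
b, log_a = np.polyfit(np.log(N), np.log(T), 1)
predict = lambda n: np.exp(log_a) * n**b

print(f"fitted exponent b = {b:.2f}")   # close to 3 for O(N^3) work per step
print(f"extrapolated runtime at N=1024: {predict(1024):.0f} s")
```

A fitted exponent near the theoretical complexity gives a quick sanity check on scalability before committing to larger runs.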
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
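The abstract describes CB-FS as combining supervised machine learning and feature selection with Markov-state-model labels. A hedged sketch of the core idea, with synthetic data standing in for MD features (the authors' implementation is not reproduced here), is to train a classifier on state labels and rank input features by importance:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "trajectory": 1000 frames, 5 degrees of freedom; only feature 0
# actually separates the two conformational states (an assumption for
# illustration -- in CB-FS the state labels come from MSM clustering).
states = rng.integers(0, 2, size=1000)
X = rng.normal(size=(1000, 5))
X[:, 0] += 3.0 * states          # feature 0 shifts between states

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, states)
ranked = np.argsort(clf.feature_importances_)[::-1]
print("most state-separating degree of freedom:", ranked[0])
```

Ranking importances of a classifier trained on state labels is one standard way to surface candidate order parameters from a high-dimensional conformational space.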
NASA Astrophysics Data System (ADS)
Yearsley, J. R.
2017-12-01
The semi-Lagrangian numerical scheme employed by RBM, a model for simulating time-dependent, one-dimensional water quality constituents in advection-dominated rivers, is highly scalable both in time and space. Although the model has been used at length scales of 150 meters and time scales of three hours, the majority of applications have been at length scales of 1/16th degree latitude/longitude (about 5 km) or greater and time scales of one day. Applications of the method at these scales have proven successful for characterizing the impacts of climate change on water temperatures in global rivers and on the vulnerability of thermoelectric power plants to changes in cooling water temperatures in large river systems. However, local effects can be very important in terms of ecosystem impacts, particularly in the case of developing mixing zones for wastewater discharges with pollutant loadings limited by regulations imposed by the Federal Water Pollution Control Act (FWPCA). Mixing zone analyses have usually been decoupled from large-scale watershed influences by developing scenarios that represent critical conditions for external processes associated with streamflow and weather. By taking advantage of the particle-tracking characteristics of the numerical scheme, RBM can provide results at any point in time within the model domain. We develop a proof of concept for locations in the river network where local impacts such as mixing zones may be important. Simulated results from the semi-Lagrangian numerical scheme are treated as input to a finite difference model of the two-dimensional diffusion equation for water quality constituents such as water temperature or toxic substances. Simulations will provide time-dependent, two-dimensional constituent concentrations in the near field in response to long-term basin-wide processes. These results could provide decision support to water quality managers for evaluating mixing zone characteristics.
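The near-field component described above is a two-dimensional diffusion model solved by finite differences. A minimal explicit (FTCS) sketch for a passive constituent follows; the grid, diffusivity, and boundary treatment are assumptions for illustration, not the RBM coupling itself:

```python
import numpy as np

# Explicit (FTCS) solver for the 2-D diffusion equation
#   dC/dt = D * (d2C/dx2 + d2C/dy2)
D, dx, dt = 1.0e-2, 0.1, 0.1        # dt <= dx**2 / (4*D) for stability
nx = ny = 41
C = np.zeros((ny, nx))
C[ny // 2, nx // 2] = 100.0         # point release at mid-channel

for _ in range(200):
    # 5-point Laplacian with periodic wrap via np.roll (mass-conserving).
    lap = (np.roll(C, 1, 0) + np.roll(C, -1, 0) +
           np.roll(C, 1, 1) + np.roll(C, -1, 1) - 4 * C) / dx**2
    C = C + dt * D * lap

print(f"peak concentration after 200 steps: {C.max():.2f}")
```

In an operational setup the boundary and upstream conditions would instead come from the semi-Lagrangian large-scale simulation, tying the near-field mixing zone to basin-wide processes.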
Large-Scale Hybrid Motor Testing. Chapter 10
NASA Technical Reports Server (NTRS)
Story, George
2006-01-01
Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. Many suitcase-sized portable test stands have been assembled for demonstrations of hybrids, showing audiences the safety of hybrid rockets. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has it been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small-scale hybrid data to that of larger-scale data indicates that the fuel burn rate goes down with increasing port size, even at the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper, study, or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, the larger port sizes reduce the interaction (radiation, mixing, and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability, and scaling concepts that went into the development of those large motors.
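The scaling question above is usually framed through the empirical regression-rate correlation ṙ = a·G_ox^n. A hedged sketch of fitting it to hypothetical small-motor data (invented for illustration, not measurements from the AMROC or HPDP test programs) follows:

```python
import numpy as np

# Hybrid fuel regression-rate correlation  rdot = a * Gox**n.
# Hypothetical small-motor data points, for illustration only.
Gox  = np.array([50.0, 100.0, 200.0, 400.0])   # oxidizer mass flux, kg/m^2/s
rdot = np.array([0.55, 0.83, 1.25, 1.88])      # regression rate, mm/s

# Linear fit in log space: log rdot = log a + n log Gox.
n, log_a = np.polyfit(np.log(Gox), np.log(rdot), 1)
a = np.exp(log_a)
print(f"fitted coefficient a = {a:.4f}, flux exponent n = {n:.2f}")
```

The flux exponent n is central to the scaling debate: because the correlation is built on flux rather than port diameter, a burn-rate decrease with port size at constant flux, as reported above, means small-motor fits alone cannot be safely extrapolated to full scale.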
Cognitive Performance Decrement in U.S. Army Aircrews.
1985-08-31
through his technical insight, patience and understanding of the challenges associated with large-scale data collection. Inputs from members of... SCALES FOR HELICOPTER TASK TAXONOMY... LITERATURE REVIEW ON TIME ESTIMATION... The Glickman study indicates that the time estimation methodology employed by them did a minimal job of discriminating tasks. However, the current
DOE Office of Scientific and Technical Information (OSTI.GOV)
Libin, A., E-mail: a_libin@netvision.net.il
2012-12-15
A linear combination of a pair of dual anisotropic decaying Beltrami flows with spatially constant amplitudes (the Trkal solutions), sharing the same eigenvalue of the curl operator, and of a constant velocity vector orthogonal to the Beltrami pair yields a triplet solution of the force-free Navier-Stokes equation. Amplitudes slightly variable in space (large-scale perturbations) yield the emergence of a time-dependent phase between the dual Beltrami flows and of an upward velocity, both of which are unstable at large values of the Reynolds number. They also lead to the formation of large-scale curved prisms of streamlines with edges being the strings of singular vorticity.
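The decaying Beltrami (Trkal) solutions invoked above have a standard compact form; as a sketch (sign and normalization conventions assumed):

```latex
% For a Beltrami field the vorticity is parallel to the velocity, so the
% nonlinear term u x omega vanishes and the Navier-Stokes equations are
% satisfied exactly by an exponentially decaying flow:
\nabla \times \mathbf{u}_0 = \lambda \mathbf{u}_0, \qquad
\mathbf{u}(\mathbf{x}, t) = e^{-\nu \lambda^{2} t}\, \mathbf{u}_0(\mathbf{x}).
```

Superposing two such fields with the same eigenvalue λ, plus a constant orthogonal velocity, gives the triplet solution discussed in the abstract.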
Large Composite Structures Processing Technologies for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Clinton, R. G., Jr.; Vickers, J. H.; McMahon, W. M.; Hulcher, A. B.; Johnston, N. J.; Cano, R. J.; Belvin, H. L.; McIver, K.; Franklin, W.; Sidwell, D.
2001-01-01
Significant efforts have been devoted to establishing the technology foundation to enable the progression to large-scale composite structures fabrication. We are not capable today of fabricating many of the composite structures envisioned for the second generation reusable launch vehicle (RLV). Conventional 'aerospace' manufacturing and processing methodologies (fiber placement, autoclave, tooling) will require substantial investment and lead time to scale up. Out-of-autoclave process techniques will require aggressive efforts to mature the selected technologies and to scale up. Focused composite processing technology development and demonstration programs utilizing the building block approach are required to enable the envisioned second generation RLV large composite structures applications. Government/industry partnerships have demonstrated success in this area and represent the best combination of skills and capabilities to achieve this goal.
Large-scale production of lentiviral vector in a closed system hollow fiber bioreactor
Sheu, Jonathan; Beltzer, Jim; Fury, Brian; Wilczek, Katarzyna; Tobin, Steve; Falconer, Danny; Nolta, Jan; Bauer, Gerhard
2015-01-01
Lentiviral vectors are widely used in the field of gene therapy as an effective method for permanent gene delivery. While current methods of producing small scale vector batches for research purposes depend largely on culture flasks, the emergence and popularity of lentiviral vectors in translational, preclinical and clinical research has demanded their production on a much larger scale, a task that can be difficult to manage with the numbers of producer cell culture flasks required for large volumes of vector. To generate a large scale, partially closed system method for the manufacturing of clinical grade lentiviral vector suitable for the generation of induced pluripotent stem cells (iPSCs), we developed a method employing a hollow fiber bioreactor traditionally used for cell expansion. We have demonstrated the growth, transfection, and vector-producing capability of 293T producer cells in this system. Vector particle RNA titers after subsequent vector concentration yielded values comparable to lentiviral iPSC induction vector batches produced using traditional culture methods in 225 cm² flasks (T225s) and in 10-layer cell factories (CF10s), while yielding a volume nearly 145 times larger than the yield from a T225 flask and nearly three times larger than the yield from a CF10. Employing a closed system hollow fiber bioreactor for vector production offers the possibility of manufacturing large quantities of gene therapy vector while minimizing reagent usage, equipment footprint, and open system manipulation. PMID:26151065
Real-time evolution of a large-scale relativistic jet
NASA Astrophysics Data System (ADS)
Martí, Josep; Luque-Escamilla, Pedro L.; Romero, Gustavo E.; Sánchez-Sutil, Juan R.; Muñoz-Arjonilla, Álvaro J.
2015-06-01
Context. Astrophysical jets are ubiquitous in the Universe on all scales, but their large-scale dynamics and evolution in time are hard to observe since they usually develop at a very slow pace. Aims: We aim to obtain the first observational proof of the expected large-scale evolution and interaction with the environment in an astrophysical jet. Only jets from microquasars offer a chance to witness the real-time, full-jet evolution within a human lifetime, since they combine a "short", few-parsec length with relativistic velocities. Methods: The methodology of this work is based on a systematic recalibration of interferometric radio observations of microquasars available in public archives. In particular, radio observations of the microquasar GRS 1758-258 over less than two decades have provided the most striking results. Results: Significant morphological variations in the extended jet structure of GRS 1758-258 are reported here that were previously missed. Its northern radio lobe underwent a major morphological variation that rendered the hotspot undetectable in 2001 and reappeared again in the following years. The reported changes confirm the Galactic nature of the source. We tentatively interpret them in terms of the growth of instabilities in the jet flow. There is also evidence of a surrounding cocoon. These results can provide a testbed for models accounting for the evolution of jets and their interaction with the environment.
Climate and wildfires in the North American boreal forest.
Macias Fauria, Marc; Johnson, E A
2008-07-12
The area burned in the North American boreal forest is controlled by the frequency of mid-tropospheric blocking highs that cause rapid fuel drying. Climate controls the area burned through changing the dynamics of large-scale teleconnection patterns (Pacific Decadal Oscillation/El Niño Southern Oscillation and Arctic Oscillation, PDO/ENSO and AO) that control the frequency of blocking highs over the continent at different time scales. Changes in these teleconnections may be caused by the current global warming. Thus, an increase in temperature alone need not be associated with an increase in area burned in the North American boreal forest. Since the end of the Little Ice Age, the climate has been unusually moist and variable: large fire years have occurred in unusual years, fire frequency has decreased and fire-climate relationships have occurred at interannual to decadal time scales. Prolonged and severe droughts were common in the past and were partly associated with changes in the PDO/ENSO system. Under these conditions, large fire years become common, fire frequency increases and fire-climate relationships occur at decadal to centennial time scales. A suggested return to the drier climate regimes of the past would imply major changes in the temporal dynamics of fire-climate relationships and in area burned, a reduction in the mean age of the forest, and changes in species composition of the North American boreal forest.
Understanding metropolitan patterns of daily encounters.
Sun, Lijun; Axhausen, Kay W; Lee, Der-Horng; Huang, Xianfeng
2013-08-20
Understanding of the mechanisms driving our daily face-to-face encounters is still limited; the field lacks large-scale datasets describing both individual behaviors and their collective interactions. However, here, with the help of travel smart card data, we uncover such encounter mechanisms and structures by constructing a time-resolved in-vehicle social encounter network on public buses in a city (about 5 million residents). Using a population scale dataset, we find physical encounters display reproducible temporal patterns, indicating that repeated encounters are regular and identical. On an individual scale, we find that collective regularities dominate distinct encounters' bounded nature. An individual's encounter capability is rooted in his/her daily behavioral regularity, explaining the emergence of "familiar strangers" in daily life. Strikingly, we find individuals with repeated encounters are not grouped into small communities, but become strongly connected over time, resulting in a large, but imperceptible, small-world contact network or "structure of co-presence" across the whole metropolitan area. Revealing the encounter pattern and identifying this large-scale contact network are crucial to understanding the dynamics in patterns of social acquaintances, collective human behaviors, and--particularly--disclosing the impact of human behavior on various diffusion/spreading processes.
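The time-resolved encounter network described above can be illustrated with a toy co-presence computation. The records below are invented stand-ins for the travel smart card data; real trips would additionally carry boarding and alighting times:

```python
from collections import defaultdict
from itertools import combinations

# Toy smart-card records: (rider, bus trip id).
trips = [
    ("a", "t1"), ("b", "t1"), ("c", "t1"),
    ("a", "t2"), ("b", "t2"),
    ("a", "t3"), ("b", "t3"), ("d", "t3"),
]

# Group riders sharing a vehicle trip.
riders_per_trip = defaultdict(set)
for rider, trip in trips:
    riders_per_trip[trip].add(rider)

# Count repeated in-vehicle encounters between rider pairs.
encounters = defaultdict(int)
for riders in riders_per_trip.values():
    for pair in combinations(sorted(riders), 2):
        encounters[pair] += 1

# "Familiar strangers": pairs that co-ride more than once.
familiar = {p: n for p, n in encounters.items() if n > 1}
print(familiar)   # {('a', 'b'): 3}
```

Edges weighted by repeat counts are the starting point for the kind of large-scale "structure of co-presence" analyzed in the study.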
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
NASA Astrophysics Data System (ADS)
Guthoff, Rudolf F.; Zhivov, Andrey; Stachs, Oliver
2010-02-01
The aim of the study was to produce two-dimensional reconstruction maps of the living corneal sub-basal nerve plexus by in vivo laser scanning confocal microscopy in real time. CLSM source data (frame rate 30 Hz, 384 × 384 pixels) were used to create large-scale maps of the scanned area by selecting the Automatic Real Time (ART) composite mode. The mapping algorithm is based on an affine transformation. Microscopy of the sub-basal nerve plexus was performed on normal and LASIK eyes as well as on rabbit eyes. Real-time mapping of the sub-basal nerve plexus was performed at large scale, up to a size of 3.2 mm × 3.2 mm. The developed method enables real-time in vivo mapping of the sub-basal nerve plexus, which is strictly necessary for statistically firm conclusions about morphometric plexus alterations.
Camera, Stefano; Santos, Mário G; Ferreira, Pedro G; Ferramacho, Luís
2013-10-25
The large-scale structure of the Universe supplies crucial information about the physical processes at play at early times. Unresolved maps of the intensity of 21 cm emission from neutral hydrogen (HI) at redshifts z ≈ 1-5 are the best hope of accessing the ultralarge-scale information, directly related to the early Universe. A purpose-built HI intensity experiment may be used to detect the large-scale effects of primordial non-Gaussianity, placing stringent bounds on different models of inflation. We argue that it may be possible to place tight constraints on the non-Gaussianity parameter f_NL, with an error close to σ(f_NL) ≈ 1.
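The large-scale effect being targeted is the scale-dependent halo bias induced by local-type non-Gaussianity; in a commonly used convention (a sketch, not the paper's exact expression):

```latex
% Correction to the HI/halo bias from local-type f_NL; it grows as 1/k^2,
% so it is largest on the ultra-large scales probed by unresolved 21 cm
% intensity maps. Here delta_c ~ 1.686 is the spherical-collapse threshold,
% T(k) the transfer function, and D(z) the linear growth factor.
\Delta b(k, z) = 3 f_{\mathrm{NL}} \,(b - 1)\,
\frac{\delta_c \, \Omega_m H_0^2}{c^2 k^2 \, T(k) \, D(z)}
```

The 1/k² growth explains why an intensity-mapping survey with very large sky and redshift coverage can drive σ(f_NL) toward unity.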
2004-10-01
Defense Advanced Research Projects Agency, AFRL/IFTC, 3701 North Fairfax Drive... "Scalable Parallel Libraries for Large-Scale Concurrent Applications," Technical Report UCRL-JC-109251, Lawrence Livermore National Laboratory
Teaching Real Science with a Microcomputer.
ERIC Educational Resources Information Center
Naiman, Adeline
1983-01-01
Discusses various ways science can be taught using microcomputers, including simulations/games which allow large-scale or historic experiments to be replicated on a manageable scale in a brief time. Examples of several computer programs are also presented, including "Experiments in Human Physiology,""Health Awareness…
Memory: Ironing Out a Wrinkle in Time.
Miller, Adam M P; Frankland, Paul W; Josselyn, Sheena A
2018-05-21
Individual hippocampal neurons encode time over seconds, whereas large-scale changes in population activity of hippocampal neurons encode time over minutes and days. New research shows how the hippocampus represents these multiple timescales simultaneously.
Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis
2015-01-01
...algorithms we proposed improve the time efficiency significantly for large-scale datasets. In the last chapter, we also propose an incremental reseeding...plume detection in hyper-spectral video data. These graph-based clustering algorithms we proposed improve the time efficiency significantly for large
NASA Astrophysics Data System (ADS)
Austin, Kemen G.; González-Roglich, Mariano; Schaffer-Smith, Danica; Schwantes, Amanda M.; Swenson, Jennifer J.
2017-05-01
Deforestation continues across the tropics at alarming rates, with repercussions for ecosystem processes, carbon storage and long term sustainability. Taking advantage of recent fine-scale measurement of deforestation, this analysis aims to improve our understanding of the scale of deforestation drivers in the tropics. We examined trends in forest clearings of different sizes from 2000-2012 by country, region and development level. As tropical deforestation increased from approximately 6900 kha yr⁻¹ in the first half of the study period, to >7900 kha yr⁻¹ in the second half of the study period, >50% of this increase was attributable to the proliferation of medium and large clearings (>10 ha). This trend was most pronounced in Southeast Asia and in South America. Outside of Brazil >60% of the observed increase in deforestation in South America was due to an upsurge in medium- and large-scale clearings; Brazil had a divergent trend of decreasing deforestation, >90% of which was attributable to a reduction in medium and large clearings. The emerging prominence of large-scale drivers of forest loss in many regions and countries suggests the growing need for policy interventions which target industrial-scale agricultural commodity producers. The experience in Brazil suggests that there are promising policy solutions to mitigate large-scale deforestation, but that these policy initiatives do not adequately address small-scale drivers. By providing up-to-date and spatially explicit information on the scale of deforestation, and the trends in these patterns over time, this study contributes valuable information for monitoring, and designing effective interventions to address deforestation.
Flagellum synchronization inhibits large-scale hydrodynamic instabilities in sperm suspensions
NASA Astrophysics Data System (ADS)
Schöller, Simon F.; Keaveny, Eric E.
2016-11-01
Sperm in suspension can exhibit large-scale collective motion and form coherent structures. Our picture of such coherent motion is largely based on reduced models that treat the swimmers as self-locomoting rigid bodies that interact via steady dipolar flow fields. Swimming sperm, however, have many more degrees of freedom due to elasticity, have a more exotic shape, and generate spatially-complex, time-dependent flow fields. While these complexities are known to lead to phenomena such as flagellum synchronization and attraction, how these effects impact the overall suspension behaviour and coherent structure formation is largely unknown. Using a computational model that captures both flagellum beating and elasticity, we simulate suspensions on the order of 10³ individual swimming sperm cells whose motion is coupled through the surrounding Stokesian fluid. We find that the tendency for flagella to synchronize and sperm to aggregate inhibits the emergence of the large-scale hydrodynamic instabilities often associated with active suspensions. However, when synchronization is repressed by adding noise in the flagellum actuation mechanism, the picture changes and structures that resemble large-scale vortices appear to re-emerge. Supported by an Imperial College PhD scholarship.
Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction
NASA Astrophysics Data System (ADS)
Keith, Theo G., Jr.; Hixon, Duane R.
2002-07-01
Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from the least to the most computationally expensive, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive, and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research on isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme.
With the addition of a two-equation turbulence model and the use of a relatively coarse grid, the numerical solution is effectively filtered into a directly calculated mean flow with the small-scale turbulence being modeled, and an unsteady large-scale component that is also being directly calculated. In this way, the unsteady disturbances are calculated in a nonlinear way, with a direct effect on the mean flow. This method is not as fast as the LEE approach, but does have many advantages to recommend it; however, like the LEE approach, only the effect of the largest unsteady structures will be captured. An initial calculation was performed on a supersonic jet exhaust plume, with promising results, but the calculation was hampered by the explicit time marching scheme that was employed. This explicit scheme required a very small time step to resolve the nozzle boundary layer, which caused a long run time. Current work is focused on testing a lower-order implicit time marching method to combat this problem.
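The LEE approach described above starts from the Euler equations linearized about a steady mean flow; schematically, in quasi-linear form (a sketch of the standard formulation, not the specific discretization used in this work):

```latex
% Disturbance vector q' about a steady mean flow (bar quantities);
% A-bar and B-bar are flux Jacobians evaluated at the mean state,
% and viscous terms are neglected:
\frac{\partial \mathbf{q}'}{\partial t}
  + \bar{A}\,\frac{\partial \mathbf{q}'}{\partial x}
  + \bar{B}\,\frac{\partial \mathbf{q}'}{\partial y} = 0,
\qquad
\mathbf{q}' = \left(\rho',\, u',\, v',\, p'\right)^{\mathsf{T}}.
```

Because the system is linear and the Jacobians are frozen at the mean state, the disturbance amplitude is only relative and cannot feed back on the mean flow, which is exactly the limitation the VLES approach removes.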
Topologically-associating domains are stable units of replication-timing regulation
Pope, Benjamin D.; Ryba, Tyrone; Dileep, Vishnu; Yue, Feng; Wu, Weisheng; Denas, Olgert; Vera, Daniel L.; Wang, Yanli; Hansen, R. Scott; Canfield, Theresa K.; Thurman, Robert E.; Cheng, Yong; Gülsoy, Günhan; Dennis, Jonathan H.; Snyder, Michael P.; Stamatoyannopoulos, John A.; Taylor, James; Hardison, Ross C.; Kahveci, Tamer; Ren, Bing; Gilbert, David M.
2014-01-01
Eukaryotic chromosomes replicate in a temporal order known as the replication-timing program [1]. During mammalian development, at least half the genome changes replication timing, primarily in units of 400-800 kb ("replication domains"; RDs), whose positions are preserved in different cell types, conserved between species, and appear to confine long-range effects of chromosome rearrangements [2-7]. Early and late replication correlate strongly with open and closed chromatin compartments identified by high-resolution chromosome conformation capture (Hi-C), and, to a lesser extent, lamina-associated domains (LADs) [4,5,8,9]. Recent Hi-C mapping has unveiled a substructure of topologically-associating domains (TADs) that are largely conserved in their positions between cell types and are similar in size to RDs [8,10]. However, TADs can be further sub-stratified into smaller domains, challenging the significance of structures at any particular scale [11,12]. Moreover, attempts to reconcile TADs and LADs to replication-timing data have not revealed a common, underlying domain structure [8,9,13]. Here, we localize boundaries of RDs to the early-replicating border of replication-timing transitions and map their positions in 18 human and 13 mouse cell types. We demonstrate that, collectively, RD boundaries share a near one-to-one correlation with TAD boundaries, whereas within a cell type, adjacent TADs that replicate at similar times obscure RD boundaries, largely accounting for the previously reported lack of alignment. Moreover, cell-type specific replication timing of TADs partitions the genome into two large-scale sub-nuclear compartments, revealing that replication-timing transitions are indistinguishable from late-replicating regions in chromatin composition and lamina association, and accounting for the reduced correlation of replication timing to LADs and heterochromatin.
Our results reconcile cell type specific sub-nuclear compartmentalization with developmentally stable chromosome domains and offer a unified model for large-scale chromosome structure and function. PMID:25409831
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falceta-Gonçalves, D.; Kowal, G.
2015-07-20
In this work we report on a numerical study of the cosmic magnetic field amplification due to collisionless plasma instabilities. The collisionless magnetohydrodynamic equations derived account for the pressure anisotropy that leads, in specific conditions, to the firehose and mirror instabilities. We study the time evolution of seed fields in turbulence under the influence of such instabilities. An approximate analytical time evolution of the magnetic field is provided. The numerical simulations and the analytical predictions are compared. We found that (i) amplification of the magnetic field was efficient in firehose-unstable turbulent regimes, but not in the mirror-unstable models; (ii) the growth rate of the magnetic energy density is much faster than the turbulent dynamo; and (iii) the efficient amplification occurs at small scales. The analytical prediction for the correlation between the growth timescales and pressure anisotropy is confirmed by the numerical simulations. These results reinforce the idea that pressure anisotropies—driven naturally in a turbulent collisionless medium, e.g., the intergalactic medium, could efficiently amplify the magnetic field in the early universe (post-recombination era), previous to the collapse of the first large-scale gravitational structures. This mechanism, though fast for the small-scale fields (∼kpc scales), is unable to provide relatively strong magnetic fields at large scales. Other mechanisms that were not accounted for here (e.g., collisional turbulence once instabilities are quenched, velocity shear, or gravitationally induced inflows of gas into galaxies and clusters) could operate afterward to build up large-scale coherent field structures in the long time evolution.
Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.
Rangan, Aaditya V; Cai, David
2007-02-01
We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models-for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. 
For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
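The integrating-factor idea in point (i) can be sketched for a single leaky integrate-and-fire neuron: the linear part of the voltage equation is advanced with its exact exponential solution, so the update remains stable even when the equation is stiff. A minimal illustration, with parameter names and values that are ours rather than the authors':

```python
import numpy as np

def lif_integrating_factor(I, dt=0.1, tau=10.0, v_rest=-65.0,
                           v_thresh=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire neuron advanced with an exponential
    integrating factor; each step uses the exact solution of
    dv/dt = -(v - v_rest)/tau + I, treating I as constant over the step,
    so the scheme stays stable even for large dt or small tau."""
    v = v_rest
    spikes = []
    vs = np.empty(len(I))
    decay = np.exp(-dt / tau)
    for k, i_k in enumerate(I):
        # exact exponential relaxation toward v_rest + i_k * tau
        v = v_rest + (v - v_rest) * decay + i_k * tau * (1.0 - decay)
        if v >= v_thresh:
            spikes.append(k * dt)   # record spike time, then reset
            v = v_reset
        vs[k] = v
    return vs, spikes
```

For suprathreshold constant input the neuron fires periodically; for zero input it simply rests at v_rest, with no accuracy loss however large the time-step.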
On the large eddy simulation of turbulent flows in complex geometry
NASA Technical Reports Server (NTRS)
Ghosal, Sandip
1993-01-01
Application of the method of Large Eddy Simulation (LES) to a turbulent flow consists of three separate steps. First, a filtering operation is performed on the Navier-Stokes equations to remove the small spatial scales. The resulting equations that describe the space time evolution of the 'large eddies' contain the subgrid-scale (sgs) stress tensor that describes the effect of the unresolved small scales on the resolved scales. The second step is the replacement of the sgs stress tensor by some expression involving the large scales - this is the problem of 'subgrid-scale modeling'. The final step is the numerical simulation of the resulting 'closed' equations for the large scale fields on a grid small enough to resolve the smallest of the large eddies, but still much larger than the fine scale structures at the Kolmogorov length. In dividing a turbulent flow field into 'large' and 'small' eddies, one presumes that a cut-off length delta can be sensibly chosen such that all fluctuations on a scale larger than delta are 'large eddies' and the remainder constitute the 'small scale' fluctuations. Typically, delta would be a length scale characterizing the smallest structures of interest in the flow. In an inhomogeneous flow, the 'sensible choice' for delta may vary significantly over the flow domain. For example, in a wall bounded turbulent flow, most statistical averages of interest vary much more rapidly with position near the wall than far away from it. Further, there are dynamically important organized structures near the wall on a scale much smaller than the boundary layer thickness. Therefore, the minimum size of eddies that need to be resolved is smaller near the wall. In general, for the LES of inhomogeneous flows, the width of the filtering kernel delta must be considered to be a function of position. 
If a filtering operation with a nonuniform filter width is performed on the Navier-Stokes equations, one does not in general get the standard large eddy equations. The complication is caused by the fact that a filtering operation with a nonuniform filter width in general does not commute with the operation of differentiation. This is one of the issues that we have looked at in detail as it is basic to any attempt at applying LES to complex geometry flows. Our principal findings are summarized.
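The commutation issue can be checked numerically. The sketch below, which assumes a Gaussian filter whose width varies with position (our toy setup, not the paper's formulation), compares filter-then-differentiate against differentiate-then-filter:

```python
import numpy as np

def filter_nonuniform(f, x, delta):
    """Apply a Gaussian filter whose width delta(x) varies with the
    evaluation point -- a simple model of a nonuniform LES filter."""
    fbar = np.empty_like(f)
    for i in range(len(x)):
        w = np.exp(-0.5 * ((x - x[i]) / delta[i]) ** 2)
        fbar[i] = (w * f).sum() / w.sum()   # normalized discrete kernel
    return fbar

x = np.linspace(0.0, 2.0 * np.pi, 400)
f = np.sin(3.0 * x)
delta = 0.05 + 0.15 * x / x[-1]             # filter width grows with x

# d/dx of the filtered field vs. the filtered derivative
d_of_bar = np.gradient(filter_nonuniform(f, x, delta), x)
bar_of_d = filter_nonuniform(np.gradient(f, x), x, delta)
commutation_error = np.max(np.abs(d_of_bar - bar_of_d)[60:-60])
```

With a uniform width the two orderings agree in the interior (convolutions commute with differentiation); with the varying width above they visibly do not, which is exactly the extra term the standard large eddy equations omit.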
NASA Astrophysics Data System (ADS)
Massei, Nicolas; Dieppois, Bastien; Hannah, David; Lavers, David; Fossa, Manuel; Laignel, Benoit; Debret, Maxime
2017-04-01
Geophysical signals oscillate over several time-scales that explain different amounts of their overall variability and may be related to different physical processes. Characterizing and understanding such variabilities in hydrological variations and investigating their determinism is one important issue in a context of climate change, as these variabilities can occasionally be superimposed on a long-term trend possibly due to climate change. It is also important to refine our understanding of time-scale dependent linkages between large-scale climatic variations and hydrological responses at the regional or local scale. Here we investigate such links by conducting a wavelet multiresolution statistical downscaling approach of precipitation in northwestern France (Seine river catchment) over 1950-2016 using sea level pressure (SLP) and sea surface temperature (SST) as indicators of atmospheric and oceanic circulations, respectively. Previous results demonstrated that including multiresolution decomposition in a statistical downscaling model (within a so-called multiresolution ESD model) using SLP as large-scale predictor greatly improved simulation of low-frequency, i.e. interannual to interdecadal, fluctuations observed in precipitation. Building on these results, continuous wavelet transform of precipitation simulated using multiresolution ESD confirmed the good performance of the model to better explain variability at all time-scales. A sensitivity analysis of the model to the choice of the scale and wavelet function used was also tested. It appeared that whatever the wavelet used, the model performed similarly. The spatial patterns of SLP found as the best predictors for all time-scales, which resulted from the wavelet decomposition, revealed different structures according to time-scale, showing possible different determinisms. More particularly, some low-frequency components (3.2-yr and 19.3-yr) showed a much more widespread spatial extension across the Atlantic. 
Moreover, in accordance with other previous studies, the wavelet components detected in SLP and precipitation on interannual to interdecadal time-scales could be interpreted in terms of influence of the Gulf-Stream oceanic front on atmospheric circulation. Current works are now conducted including SST over the Atlantic in order to get further insights into this mechanism.
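A toy version of the multiresolution idea: decompose a series additively into components at dyadic time scales using Haar block averages. The study used proper wavelet transforms; this sketch only illustrates the scale-splitting step.

```python
import numpy as np

def haar_mra(x, levels):
    """Additive multiresolution decomposition with Haar block averages:
    detail components at time scales of 2, 4, ..., 2**levels samples,
    plus a final smooth component; all components sum back to x."""
    smooth = np.asarray(x, dtype=float).copy()
    details = []
    for j in range(1, levels + 1):
        block = 2 ** j
        n = (len(smooth) // block) * block
        coarse = smooth.copy()
        means = smooth[:n].reshape(-1, block).mean(axis=1)
        coarse[:n] = np.repeat(means, block)
        details.append(smooth - coarse)   # variability at scale ~2**j
        smooth = coarse
    return details, smooth
```

Each detail component isolates the variability at one time scale; in a multiresolution ESD setting, every component can then be regressed on its own large-scale predictor field.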
A Large number of fast cosmological simulations
NASA Astrophysics Data System (ADS)
Koda, Jun; Kazin, E.; Blake, C.
2014-01-01
Mock galaxy catalogs are essential tools to analyze large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement from the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
Plague and Climate: Scales Matter
Ben Ari, Tamara; Neerinckx, Simon; Gage, Kenneth L.; Kreppel, Katharina; Laudisoit, Anne; Leirs, Herwig; Stenseth, Nils Chr.
2011-01-01
Plague is enzootic in wildlife populations of small mammals in central and eastern Asia, Africa, South and North America, and has been recognized recently as a reemerging threat to humans. Its causative agent Yersinia pestis relies on wild rodent hosts and flea vectors for its maintenance in nature. Climate influences all three components (i.e., bacteria, vectors, and hosts) of the plague system and is a likely factor to explain some of plague's variability from small and regional to large scales. Here, we review effects of climate variables on plague hosts and vectors from individual or population scales to studies on the whole plague system at a large scale. Upscaled versions of small-scale processes are often invoked to explain plague variability in time and space at larger scales, presumably because similar scale-independent mechanisms underlie these relationships. This linearity assumption is discussed in the light of recent research that suggests some of its limitations. PMID:21949648
NASA Technical Reports Server (NTRS)
Jeong, Su-Jong; Schimel, David; Frankenberg, Christian; Drewry, Darren T.; Fisher, Joshua B.; Verma, Manish; Berry, Joseph A.; Lee, Jung-Eun; Joiner, Joanna
2016-01-01
This study evaluates the large-scale seasonal phenology and physiology of vegetation over northern high latitude forests (40 deg - 55 deg N) during spring and fall by using remote sensing of solar-induced chlorophyll fluorescence (SIF), normalized difference vegetation index (NDVI) and an observation-based estimate of gross primary productivity (GPP) from 2009 to 2011. Based on phenology estimated from GPP, the growing season determined by the SIF time series is shorter than the growing season determined solely using NDVI. This is mainly due to the extended period of high NDVI values, as compared to SIF, by about 46 days (+/-11 days), indicating a large-scale seasonal decoupling of physiological activity and changes in greenness in the fall. In addition to phenological timing, mean seasonal NDVI and SIF have different responses to temperature changes throughout the growing season. We observed that both NDVI and SIF linearly increased with temperature increases throughout the spring. However, in the fall, although NDVI linearly responded to temperature increases, SIF and GPP did not linearly increase with temperature increases, implying a seasonal hysteresis of SIF and GPP in response to temperature changes across boreal ecosystems throughout their growing season. Seasonal hysteresis of vegetation at large scales is consistent with the known phenomenon that light limits boreal forest ecosystem productivity in the fall. Our results suggest that continuing measurements from satellite remote sensing of both SIF and NDVI can help to understand the differences between, and information carried by, seasonal variations in vegetation structure and greenness, and in physiology, at large scales across the critical boreal regions.
HFSB-seeding for large-scale tomographic PIV in wind tunnels
NASA Astrophysics Data System (ADS)
Caridi, Giuseppe Carlo Alp; Ragni, Daniele; Sciacchitano, Andrea; Scarano, Fulvio
2016-12-01
A new system for large-scale tomographic particle image velocimetry in low-speed wind tunnels is presented. The system relies upon the use of sub-millimetre helium-filled soap bubbles as flow tracers, which scatter light with intensity several orders of magnitude higher than micron-sized droplets. With respect to a single bubble generator, the system increases the rate of bubbles emission by means of transient accumulation and rapid release. The governing parameters of the system are identified and discussed, namely the bubbles production rate, the accumulation and release times, the size of the bubble injector and its location with respect to the wind tunnel contraction. The relations between the above parameters, the resulting spatial concentration of tracers and measurement of dynamic spatial range are obtained and discussed. Large-scale experiments are carried out in a large low-speed wind tunnel with 2.85 × 2.85 m2 test section, where a vertical axis wind turbine of 1 m diameter is operated. Time-resolved tomographic PIV measurements are taken over a measurement volume of 40 × 20 × 15 cm3, allowing the quantitative analysis of the tip-vortex structure and dynamical evolution.
Learning, climate and the evolution of cultural capacity.
Whitehead, Hal
2007-03-21
Patterns of environmental variation influence the utility, and thus evolution, of different learning strategies. I use stochastic, individual-based evolutionary models to assess the relative advantages of 15 different learning strategies (genetic determination, individual learning, vertical social learning, horizontal/oblique social learning, and contingent combinations of these) when competing in variable environments described by 1/f noise. When environmental variation has little effect on fitness, then genetic determinism persists. When environmental variation is large and equal over all time-scales ("white noise") then individual learning is adaptive. Social learning is advantageous in "red noise" environments when variation over long time-scales is large. Climatic variability increases with time-scale, so that short-lived organisms should be able to rely largely on genetic determination. Thermal climates usually are insufficiently red for social learning to be advantageous for species whose fitness is strongly determined by temperature. In contrast, population trajectories of many species, especially large mammals and aquatic carnivores, are sufficiently red to promote social learning in their predators. The ocean environment is generally redder than that on land. Thus, while individual learning should be adaptive for many longer-lived organisms, social learning will often be found in those dependent on the populations of other species, especially if they are marine. This provides a potential explanation for the evolution of a prevalence of social learning, and culture, in humans and cetaceans.
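The "white" and "red" environments referred to above can be generated spectrally. A minimal sketch, assuming a power-law spectrum S(f) ∝ 1/f^β with β = 0 for white and β = 2 for red noise:

```python
import numpy as np

def power_law_noise(n, beta, rng=None):
    """Generate a time series with spectral density S(f) ~ 1/f**beta
    (beta = 0: white noise; beta = 2: 'red' noise, whose variance is
    concentrated on long time-scales). Built by assigning power-law
    amplitudes and random phases in the frequency domain."""
    rng = np.random.default_rng(rng)
    freqs = np.fft.rfftfreq(n, d=1.0)
    amps = np.zeros_like(freqs)
    amps[1:] = freqs[1:] ** (-beta / 2.0)        # amplitude ~ sqrt(S(f))
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs))
    x = np.fft.irfft(amps * np.exp(1j * phases), n=n)
    return x / x.std()                           # unit variance
```

Sweeping β between 0 and 2 reproduces the family of environments in which the competing learning strategies are evaluated.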
NASA Astrophysics Data System (ADS)
Bassam, S.; Ren, J.
2015-12-01
Runoff generated during heavy rainfall imposes quick, but often intense, changes in the flow of streams, which increase the chance of flash floods in the vicinity of the streams. Understanding the temporal response of streams to heavy rainfall requires a hydrological model that considers meteorological, hydrological, and geological components of the streams and their watersheds. SWAT is a physically-based, semi-distributed model that is capable of simulating water flow within watersheds at both long-term (annual and monthly) and short-term (daily and sub-daily) time scales. However, the capability of SWAT in sub-daily water flow modeling within large watersheds has not been studied much, compared to long-term and daily time scales. In this study we investigate the water flow in a large, semi-arid watershed, the Nueces River Basin (NRB), with a drainage area of 16950 mi2 located in South Texas, at daily and sub-daily time scales. The objectives of this study are: (1) simulating the response of streams to heavy, and often quick, rainfall; (2) evaluating SWAT performance in sub-daily modeling of water flow within a large watershed; and (3) examining means for model performance improvement during model calibration and verification based on results of sensitivity and uncertainty analysis. The results of this study can provide important information for water resources planning during flood seasons.
Lectures on algebraic system theory: Linear systems over rings
NASA Technical Reports Server (NTRS)
Kamen, E. W.
1978-01-01
The presentation centers on four classes of systems that can be treated as linear systems over a ring. These are: (1) discrete-time systems over a ring of scalars such as the integers; (2) continuous-time systems containing time delays; (3) large-scale discrete-time systems; and (4) time-varying discrete-time systems.
Heavy nuclei as thermal insulation for protoneutron stars
NASA Astrophysics Data System (ADS)
Nakazato, Ken'ichiro; Suzuki, Hideyuki; Togashi, Hajime
2018-03-01
A protoneutron star (PNS) is a newly formed compact object in a core collapse supernova. In this paper, the neutrino emission from the cooling process of a PNS is investigated using two types of nuclear equation of state (EOS). It is found that the neutrino signal is mainly determined by the high-density EOS. The neutrino luminosity and mean energy are higher and the cooling time scale is longer for the softer EOS. Meanwhile, the neutrino mean energy and the cooling time scale are also affected by the low-density EOS because of the difference in the population of heavy nuclei. Heavy nuclei have a large scattering cross section with neutrinos owing to the coherent effects and act as thermal insulation near the surface of a PNS. The neutrino mean energy is higher and the cooling time scale is longer for an EOS with a large symmetry energy at low densities, namely a small density derivative coefficient of the symmetry energy, L.
Communication: Polymer entanglement dynamics: Role of attractive interactions
Grest, Gary S.
2016-10-10
The coupled dynamics of entangled polymers, which span broad time and length scales, govern their unique viscoelastic properties. To follow chain mobility by numerical simulations from the intermediate Rouse and reptation regimes to the late time diffusive regime, highly coarse grained models with purely repulsive interactions between monomers are widely used since they are computationally the most efficient. In this paper, using large scale molecular dynamics simulations, the effect of including the attractive interaction between monomers on the dynamics of entangled polymer melts is explored for the first time over a wide temperature range. Attractive interactions have little effect on the local packing for all temperatures T and on the chain mobility for T higher than about twice the glass transition T g. Finally, these results, across a broad range of molecular weight, show that to study the dynamics of entangled polymer melts, the interactions can be treated as pure repulsive, confirming a posteriori the validity of previous studies and opening the way to new large scale numerical simulations.
An efficient and reliable predictive method for fluidized bed simulation
Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen
2017-06-13
In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
Downscaling large-scale circulation to local winter climate using neural network techniques
NASA Astrophysics Data System (ADS)
Cavazos Perez, Maria Tereza
1998-12-01
The severe impacts of climate variability on society reveal the increasing need for improving regional-scale climate diagnosis. A new downscaling approach for climate diagnosis is developed here. It is based on neural network techniques that derive transfer functions from the large-scale atmospheric controls to the local winter climate in northeastern Mexico and southeastern Texas during the 1985-93 period. A first neural network (NN) model employs time-lagged component scores from a rotated principal component analysis of SLP, 500-hPa heights, and 1000-500 hPa thickness as predictors of daily precipitation. The model is able to reproduce the phase and, to some degree, the amplitude of large rainfall events, reflecting the influence of the large-scale circulation. Large errors are found over the Sierra Madre, over the Gulf of Mexico, and during El Nino events, suggesting an increase in the importance of meso-scale rainfall processes. However, errors are also due to the lack of randomization of the input data and the absence of local atmospheric predictors such as moisture. Thus, a second NN model uses time-lagged specific humidity at the Earth's surface and at the 700 hPa level, SLP tendency, and 700-500 hPa thickness as input to a self-organizing map (SOM) that pre-classifies the atmospheric fields into different patterns. The results from the SOM classification document that negative (positive) anomalies of winter precipitation over the region are associated with: (1) weaker (stronger) Aleutian low; (2) stronger (weaker) North Pacific high; (3) negative (positive) phase of the Pacific North American pattern; and (4) La Nina (El Nino) events. The SOM atmospheric patterns are then used as input to a feed-forward NN that captures over 60% of the daily rainfall variance and 94% of the daily minimum temperature variance over the region. This demonstrates the ability of artificial neural network models to simulate realistic relationships on daily time scales. 
The results of this research also reveal that the SOM pre-classification of days with similar atmospheric conditions succeeded in emphasizing the differences of the atmospheric variance conducive to extreme events. This resulted in a downscaling NN model that is highly sensitive to local-scale weather anomalies associated with El Nino and extreme cold events.
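The pre-classification step above can be sketched with a minimal 1-D self-organizing map. The toy data and parameters are ours; the study's actual SOM configuration is not reproduced here.

```python
import numpy as np

def train_som(data, n_nodes=4, epochs=40, lr0=0.5, sigma0=2.0, rng=0):
    """1-D self-organizing map: each sample pulls its best-matching
    node (and, more weakly, that node's neighbors) toward itself, so
    similar input patterns end up classified onto nearby nodes."""
    rng = np.random.default_rng(rng)
    # initialize nodes at spread-out quantiles of the data
    weights = np.quantile(data, np.linspace(0.05, 0.95, n_nodes), axis=0)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1.0 - epoch / epochs) + 0.5  # shrinking neighborhood
        for i in rng.permutation(len(data)):
            x = data[i]
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist = np.abs(np.arange(n_nodes) - bmu)
            h = np.exp(-dist ** 2 / (2.0 * sigma ** 2))  # neighborhood kernel
            weights += lr * h[:, None] * (x - weights)
    return weights

def classify(data, weights):
    """Assign each sample to the index of its best-matching node."""
    return np.array([np.argmin(np.linalg.norm(weights - x, axis=1))
                     for x in data])
```

In the downscaling context, each sample would be a vector of atmospheric predictors for one day, and the resulting node index (the classified pattern) feeds the downstream feed-forward network.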
Large-scale and Long-duration Simulation of a Multi-stage Eruptive Solar Event
NASA Astrophysics Data System (ADS)
Jiang, Chaowei; Hu, Qiang; Wu, S. T.
2015-04-01
We employ a data-driven 3D MHD active region evolution model by using the Conservation Element and Solution Element (CESE) numerical method. This newly developed model retains the full MHD effects, allowing time-dependent boundary conditions and time evolution studies. The time-dependent simulation is driven by measured vector magnetograms and the method of MHD characteristics on the bottom boundary. We have applied the model to investigate the coronal magnetic field evolution of AR11283 which was characterized by a pre-existing sigmoid structure in the core region and multiple eruptions, both in relatively small and large scales. We have succeeded in producing the core magnetic field structure and the subsequent eruptions of flux-rope structures (see https://dl.dropboxusercontent.com/u/96898685/large.mp4 for an animation) as the measured vector magnetograms on the bottom boundary evolve in time with constant flux emergence. The whole process, lasting for about an hour in real time, compares well with the corresponding SDO/AIA and coronagraph imaging observations. From these results, we show the capability of the model, largely data-driven, that is able to simulate complex, topological, and highly dynamic active region evolutions. (We acknowledge partial support of NSF grants AGS 1153323 and AGS 1062050, and data support from SDO/HMI and AIA teams).
Identification of varying time scales in sediment transport using the Hilbert-Huang Transform method
NASA Astrophysics Data System (ADS)
Kuai, Ken Z.; Tsai, Christina W.
2012-02-01
Sediment transport processes vary at a variety of time scales - from seconds, hours, days to months and years. Multiple time scales exist in the system of flow, sediment transport and bed elevation change processes. As such, identification and selection of appropriate time scales for flow and sediment processes can assist in formulating a system of flow and sediment governing equations representative of the dynamic interaction of flow and particles at the desired details. Recognizing the importance of different varying time scales in the fluvial processes of sediment transport, we introduce the Hilbert-Huang Transform method (HHT) to the field of sediment transport for the time scale analysis. The HHT uses the Empirical Mode Decomposition (EMD) method to decompose a time series into a collection of the Intrinsic Mode Functions (IMFs), and uses the Hilbert Spectral Analysis (HSA) to obtain instantaneous frequency data. The EMD extracts the variability of data with different time scales, and improves the analysis of data series. The HSA can display the succession of time-varying time scales, which cannot be captured by the often-used Fast Fourier Transform (FFT) method. This study is one of the earlier attempts to introduce this state-of-the-art technique for the multiple time scale analysis of sediment transport processes. Three practical applications of the HHT method for data analysis of both suspended sediment and bedload transport time series are presented. The analysis results show the strong impact of flood waves on the variations of flow and sediment time scales at a large sampling time scale, as well as the impact of flow turbulence on those time scales at a smaller sampling time scale. Our analysis reveals that the existence of multiple time scales in sediment transport processes may be attributed to the fractal nature in sediment transport. 
It can be demonstrated by the HHT analysis that the bedload motion time scale is better represented by the ratio of the water depth to the settling velocity, h/w. In the final part, HHT results are compared with an available time scale formula in the literature.
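The HSA step rests on the analytic signal. A numpy-only sketch of the FFT construction (the same one scipy.signal.hilbert uses) that recovers the instantaneous frequency of a test tone:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero the negative frequencies and
    double the positive ones, keeping DC and Nyquist unscaled."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the unwrapped phase of the
    analytic signal -- the quantity the Hilbert spectrum displays."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)
```

In an HHT pipeline this is applied to each IMF produced by the EMD sifting, giving a time-frequency picture in which the dominant time scale is free to vary from moment to moment.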
Mohapatra, Pratyasha; Mendivelso-Perez, Deyny; Bobbitt, Jonathan M; Shaw, Santosh; Yuan, Bin; Tian, Xinchun; Smith, Emily A; Cademartiri, Ludovico
2018-05-30
This paper describes a simple approach to the large scale synthesis of colloidal Si nanocrystals and their processing by He plasma into spin-on carbon-free nanocrystalline Si films. We further show that the RIE etching rate in these films is 1.87 times faster than for single crystalline Si, consistent with a simple geometric argument that accounts for the nanoscale roughness caused by the nanoparticle shape.
A Systematic Multi-Time Scale Solution for Regional Power Grid Operation
NASA Astrophysics Data System (ADS)
Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.
2017-10-01
Many aspects need to be taken into consideration in a regional grid while making schedule plans. In this paper, a systematic multi-time scale solution for regional power grid operation considering large scale renewable energy integration and Ultra High Voltage (UHV) power transmission is proposed. In the time scale aspect, we discuss the problem from month, week, day-ahead, within-day to day-behind, and the system also contains multiple generator types including thermal units, hydro-plants, wind turbines and pumped storage stations. The 9 subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been constructed in a provincial power grid in Central China, and the operation results further verified the effectiveness of the system.
paraGSEA: a scalable approach for large-scale gene expression profiling
Peng, Shaoliang; Yang, Shunyun
2017-01-01
More studies have been conducted using gene expression similarity to identify functional connections among genes, diseases and drugs. Gene Set Enrichment Analysis (GSEA) is a powerful analytical method for interpreting gene expression data. However, due to its enormous computational overhead in the estimation of significance level step and the multiple hypothesis testing step, its computational scalability and efficiency are poor on large-scale datasets. We propose paraGSEA for efficient large-scale transcriptome data analysis. By optimization, the overall time complexity of paraGSEA is reduced from O(mn) to O(m+n), where m is the length of the gene sets and n is the length of the gene expression profiles, which contributes a more than 100-fold increase in performance compared with other popular GSEA implementations such as GSEA-P, SAM-GS and GSEA2. By further parallelization, a near-linear speed-up is gained on both workstations and clusters in an efficient manner with high scalability and performance on large-scale datasets. The analysis time of the whole LINCS phase I dataset (GSE92742) was reduced to nearly half an hour on a 1000-node cluster on Tianhe-2, or within 120 hours on a 96-core workstation. The source code of paraGSEA is licensed under the GPLv3 and available at http://github.com/ysycloud/paraGSEA. PMID:28973463
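For orientation, the core GSEA statistic that dominates this cost is a Kolmogorov-Smirnov-like running sum over a ranked gene list. A minimal sketch of an unweighted variant, with illustrative gene names (paraGSEA's optimized implementation differs):

```python
def enrichment_score(ranked_genes, gene_set):
    """GSEA-style running-sum statistic: walk down the ranked list,
    stepping up on gene-set members and down otherwise; the score is
    the maximum deviation of the running sum from zero."""
    gene_set = set(gene_set)
    n = len(ranked_genes)
    n_hit = len(gene_set & set(ranked_genes))
    if n_hit == 0 or n_hit == n:
        return 0.0
    up = 1.0 / n_hit             # increment on a hit
    down = 1.0 / (n - n_hit)     # decrement on a miss
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += up if g in gene_set else -down
        if abs(running) > abs(best):
            best = running
    return best
```

A gene set concentrated at the top of the ranking scores near +1, one concentrated at the bottom near -1; the expensive part of GSEA is repeating this walk over thousands of permutations to estimate significance, which is the step paraGSEA optimizes and parallelizes.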
NASA Technical Reports Server (NTRS)
Miller, N. J.; Chuss, D. T.; Marriage, T. A.; Wollack, E. J.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Fixsen, D. J.; Harrington, K.;
2016-01-01
Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r less than 0.01 is achievable with commensurately improved characterizations and controls.
The Origin of Clusters and Large-Scale Structures: Panoramic View of the High-z Universe
NASA Astrophysics Data System (ADS)
Ouchi, Masami
We will report results of our ongoing survey for proto-clusters and large-scale structures at z=3-6. We carried out very wide and deep optical imaging down to i=27 over a 1 deg^2 field of the Subaru/XMM Deep Field with the 8.2m Subaru Telescope. We obtain maps of the Universe traced by ~1,000 Ly-a galaxies at z=3, 4, and 6 and by ~10,000 Lyman break galaxies at z=3-6. These cosmic maps have a transverse dimension of ~150 Mpc x 150 Mpc in comoving units at these redshifts, and provide, for the first time, a panoramic view of the high-z Universe from the scales of galaxies and clusters up to large-scale structures. Major results and implications will be presented in our talk. (Part of this work is subject to press embargo.)
Fire extinguishing tests -80 with methyl alcohol gasoline
NASA Astrophysics Data System (ADS)
Holmstedt, G.; Ryderman, A.; Carlsson, B.; Lennmalm, B.
1980-10-01
Large-scale tests and laboratory experiments were carried out to estimate the extinguishing effectiveness of three alcohol-resistant aqueous film-forming foams (AFFF), two alcohol-resistant fluoroprotein foams, and two detergent foams in various pool fires: gasoline, isopropyl alcohol, acetone, methyl ethyl ketone, methyl alcohol, and M15 (a gasoline, methyl alcohol, isobutene mixture). Particular attention was paid to scaling down the large-scale tests in order to develop a reliable laboratory method. The tests were performed with semidirect foam application in pools of 50, 11, 4, 0.6, and 0.25 sq m. Burning time, temperature distribution in the liquid, and thermal radiation were determined. An M15 fire can be extinguished with a detergent foam, but it is impossible to extinguish fires in polar solvents, such as methyl alcohol, acetone, and isopropyl alcohol, with detergent foams; AFFF give the best results, and performance in small pools can hardly be correlated with results from large-scale fires.
Large- to small-scale dynamo in domains of large aspect ratio: kinematic regime
NASA Astrophysics Data System (ADS)
Shumaylova, Valeria; Teed, Robert J.; Proctor, Michael R. E.
2017-04-01
The Sun's magnetic field exhibits coherence in space and time on much larger scales than the turbulent convection that ultimately powers the dynamo. In this work, we look for numerical evidence of a large-scale magnetic field as the magnetic Reynolds number, Rm, is increased. The investigation is based on simulations of the induction equation in elongated periodic boxes. The imposed flows considered are the standard ABC flow (named after Arnold, Beltrami & Childress) with wavenumber ku = 1 (small-scale) and a modulated ABC flow with wavenumbers ku = m, 1, 1 ± m, where m is the wavenumber corresponding to the long-wavelength perturbation on the scale of the box. The critical magnetic Reynolds number R_m^{crit} decreases as the permitted scale separation in the system increases, such that R_m^{crit} ∝ [L_x/L_z]^{-1/2}. The results show that the α-effect derived from the mean-field-theory ansatz is valid over a small range of Rm, after which small-scale dynamo instability occurs and the mean-field approximation is no longer valid. The transition from large- to small-scale dynamo is smooth and takes place in two stages: a fast transition into a predominantly small-scale magnetic energy state and a slower transition into even smaller scales. Over the range of Rm considered, the most energetic Fourier component corresponding to the structure in the long x-direction has twice the length-scale of the forcing scale. The long-wavelength perturbation imposed on the ABC flow in the modulated case is not preserved in the eigenmodes of the magnetic field.
Optimization and large scale computation of an entropy-based moment closure
NASA Astrophysics Data System (ADS)
Kristopher Garrett, C.; Hauck, Cory; Hill, Judith
2015-12-01
We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication-bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.
NASA Technical Reports Server (NTRS)
Gradwohl, Ben-Ami
1991-01-01
The universe may have undergone a superfluid-like phase during its evolution, resulting from the injection of nontopological charge into the spontaneously broken vacuum. In the presence of vortices this charge is identified with angular momentum, leading to turbulent domains on the scale of the correlation length. When the symmetry is restored at low temperatures, the vortices dissociate and push the charges to the boundaries of these domains. The model can be scaled (phenomenologically) to very low energies; it can be incorporated in a late-time phase transition and form large-scale structure in the boundary layers of the correlation volumes. The novel feature of the model is that the dark matter is endowed with coherent motion. The possibility of identifying this flow around superfluid vortices with the observed large-scale bulk motion is discussed. If this identification is possible, then the definite prediction can be made that a more extended map of peculiar velocities would reveal large-scale circulations in the flow pattern.
NASA Astrophysics Data System (ADS)
Omrani, H.; Drobinski, P.; Dubos, T.
2009-09-01
In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited-area model (LAM) simulation. The limited-area model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. In contrast to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model, which allows us to represent the chaotic nature of the atmosphere: even given a perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (the Lyapunov exponent) from a set of simulations initiated with perturbations of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. The effect of large-scale nudging is then studied using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In both sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
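The trajectory-divergence diagnostic described above can be sketched generically. The following is a minimal Benettin-style estimator of the largest Lyapunov exponent for an arbitrary discrete-time map, not the authors' quasi-geostrophic setup; the function names and the renormalization interval are our own choices:

```python
import numpy as np

def largest_lyapunov(step, x0, eps=1e-8, n_steps=2000, renorm_every=10):
    """Track the separation between a reference and a perturbed
    trajectory, periodically rescaling it back to eps (Benettin's
    method); the mean log-stretching rate estimates the exponent."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    y = x + eps
    log_sum, renorms = 0.0, 0
    for i in range(1, n_steps + 1):
        x, y = step(x), step(y)
        if i % renorm_every == 0:
            d = np.linalg.norm(y - x)
            log_sum += np.log(d / eps)
            renorms += 1
            y = x + (y - x) * (eps / d)   # rescale separation to eps
    return log_sum / (renorms * renorm_every)
```

For the fully chaotic logistic map x → 4x(1-x), this estimator should recover a value close to the known exponent ln 2 ≈ 0.69.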
NASA Astrophysics Data System (ADS)
Darema, F.
2016-12-01
InfoSymbiotics/DDDAS embodies the power of Dynamic Data Driven Applications Systems (DDDAS), a concept whereby an executing application model is dynamically integrated, in a feedback loop, with the real-time data-acquisition and control components, as well as other data sources of the application system. Such new computational approaches to modeling, simulation, and instrumentation enable advanced capabilities: enhancing the accuracy of the application model; speeding up the computation to allow faster and more comprehensive models of a system and to create decision-support systems with the accuracy of full-scale simulations; and, through the notion of controlling instrumentation processes by the executing application, more efficient management of application data. The latter addresses the challenge of how to architect and dynamically manage large sets of heterogeneous sensors and controllers, an advance over the static and ad hoc approaches of today: with DDDAS, these sets of resources can be managed adaptively and in optimized ways. Large-Scale-Dynamic-Data encompasses the next wave of Big Data, namely dynamic data arising from ubiquitous sensing and control in engineered, natural, and societal systems, through multitudes of heterogeneous sensors and controllers instrumenting these systems. The opportunities and challenges at these "large scales" relate not only to data size but also to heterogeneity in the data, data-collection modalities, fidelities, and timescales, ranging from real-time to archival data. In tandem with this important dimension of dynamic data, there is an extended view of Big Computing, which includes the collective computing performed by networked assemblies of multitudes of sensors and controllers, ranging from the high-end to the real-time, seamlessly integrated and unified, and comprising Large-Scale-Big-Computing.
InfoSymbiotics/DDDAS engenders transformative impact in many application domains, ranging from the nano-scale to the terra-scale and to the extra-terra-scale. The talk will address opportunities for new capabilities together with corresponding research challenges, with illustrative examples from several application areas including environmental sciences, geosciences, and space sciences.
bigSCale: an analytical framework for big-scale single-cell data.
Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger
2018-06-01
Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin (Reln)-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets.
© 2018 Iacono et al.; Published by Cold Spring Harbor Laboratory Press.
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Huffman, Allan W.; Lux, Kevin M.; Charney, Joseph J.; Riordan, Allan J.; Lin, Yuh-Lang; Proctor, Fred H. (Technical Monitor)
2002-01-01
An analysis of 44 case studies of the large-scale atmospheric structure associated with the development of accident-producing aircraft turbulence is described. The cases are categorized by accident location, altitude, time of year, time of day, and turbulence category, which classifies the disturbances. National Centers for Environmental Prediction reanalysis data sets and satellite imagery are employed to diagnose synoptic-scale predictor fields associated with the large-scale environment preceding severe turbulence. These analyses indicate a predominance of severe accident-producing turbulence within the entrance region of a jet stream at the synoptic scale. Typically, a flow-curvature region lies just upstream within the jet entrance region, convection is within 100 km of the accident, vertical motion is upward, absolute vorticity is low, vertical wind shear is increasing, and horizontal cold advection is substantial. The most consistent predictor is upstream flow curvature; nearby convection is the second most frequent.
Bartlein, Patrick J.; Hostetler, Steven W.; Alder, Jay R.; Ohring, G.
2014-01-01
As host to one of the major continental-scale ice sheets, and with considerable spatial variability of climate related to its physiography and location, North America has experienced a wide range of climates over time. The aim of this chapter is to review the history of those climate variations, focusing in particular on the continental-scale climatic variations between the Last Glacial Maximum (LGM, ca. 21,000 years ago or 21 ka) and the present, which were as large in amplitude as any experienced over a similar time span during the past several million years. As background to that discussion, the climatic variations over the Cenozoic (the past 65.5 Myr, or 65.5 Ma to present) that led ultimately to the onset of Northern Hemisphere glaciation at 2.59 Ma will also be discussed. Superimposed on the large-amplitude, broad-scale variations from the LGM to present, are climatic variations on millennial-to-decadal scales, and these will be reviewed in particular for the Holocene (11.7 ka to present) and the past millennium.
Chatterjee, Gourab; Singh, Prashant Kumar; Robinson, A P L; Blackman, D; Booth, N; Culfa, O; Dance, R J; Gizzi, L A; Gray, R J; Green, J S; Koester, P; Kumar, G Ravindra; Labate, L; Lad, Amit D; Lancaster, K L; Pasley, J; Woolsey, N C; Rajeev, P P
2017-08-21
The transport of hot, relativistic electrons produced by the interaction of an intense petawatt laser pulse with a solid has garnered interest due to its potential application in the development of innovative x-ray sources and ion-acceleration schemes. We report on spatially and temporally resolved measurements of megagauss magnetic fields at the rear of a 50-μm-thick plastic target, irradiated by a multi-picosecond petawatt laser pulse at an incident intensity of ~10^20 W/cm^2. The pump-probe polarimetric measurements with micron-scale spatial resolution reveal the dynamics of the magnetic fields generated by the hot electron distribution at the target rear. An annular magnetic field profile was observed ~5 ps after the interaction, indicating a relatively smooth hot electron distribution at the rear side of the plastic target. This is contrary to previous time-integrated measurements, which infer that such targets will produce highly structured hot electron transport. We measured large-scale filamentation of the hot electron distribution at the target rear only at later time-scales of ~10 ps, resulting in a commensurate large-scale filamentation of the magnetic field profile. Three-dimensional hybrid simulations corroborate our experimental observations and demonstrate a beam-like hot electron transport at initial time-scales that may be attributed to the local resistivity profile at the target rear.
A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, K; Seymour, R; Wang, W
2009-02-17
A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data-locality principles, and it is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high-complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster via a hybrid implementation combining message passing and critical-section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation per day on a sustained petaflops computer (i.e. petaflops·day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microsecond).
NASA Astrophysics Data System (ADS)
Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham
2018-06-01
This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high-quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast to most previously published experiments, the present experiments involve numerous replicates, which allows the statistical variability of the experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, and multiple-region advection-dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit ballistic behaviour at small times, while tending to Fickian behaviour at large time scales. Model performance is assessed using a novel objective function that accounts for the statistical variability of the experimental data set while putting equal emphasis on both small- and large-time-scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute-front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
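As a reference point for the Fickian-based model in that benchmark, the leading term of the classical Ogata-Banks solution of the 1-D advection-dispersion equation for a continuous step injection can be written down directly. This is an illustrative sketch, not the authors' model; the symbol names are ours:

```python
import math

def breakthrough_fickian(x, t, v, D, c0=1.0):
    """Leading Ogata-Banks term for a step input of concentration c0:
    relative concentration at distance x and time t, for pore velocity v
    and dispersion coefficient D. Note the nonzero value at any t > 0,
    i.e. the infinite front-propagation speed of the Fickian operator."""
    return 0.5 * c0 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))
```

At the mean arrival time t = x/v the relative concentration is exactly c0/2, and the breakthrough curve tends to c0 as t grows.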
Track-based event recognition in a realistic crowded environment
NASA Astrophysics Data System (ADS)
van Huis, Jasper R.; Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Dijk, Judith; van Rest, Jeroen H.
2014-10-01
Automatic detection of abnormal behavior in CCTV cameras is important for improving security in crowded environments, such as shopping malls, airports and railway stations. This behavior can be characterized at different time scales, e.g., by small-scale subtle and obvious actions or by large-scale walking patterns and interactions between people. For example, pickpocketing can be recognized by the actual snatch (small scale), by the thief following the victim, or by the thief interacting with an accomplice before and after the incident (longer time scale). This paper focuses on event recognition by detecting large-scale track-based patterns. Our event recognition method consists of several steps: pedestrian detection, object tracking, track-based feature computation and rule-based event classification. In the experiment, we focused on single-track actions (walk, run, loiter, stop, turn) and track interactions (pass, meet, merge, split). The experiment includes a controlled setup in which 10 actors perform these actions. The method is also applied to all tracks generated in a crowded shopping mall in a selected time frame. The results show that most of the actions can be detected reliably (on average 90%) at a low false positive rate (1.1%), and that the interactions obtain lower detection rates (70% at 0.3% FP). This method may become one of the components that assist operators in finding threatening behavior and enriching the selection of videos to be observed.
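The rule-based classification step of such a pipeline can be pictured with a toy sketch. The features and thresholds below are invented for illustration and are not those of the paper's method:

```python
def classify_track(mean_speed, net_displacement, duration):
    """Toy rule-based classifier for single-track actions, operating on
    features computed from a pedestrian track (speeds in m/s,
    displacement in m, duration in s; thresholds are hypothetical)."""
    if mean_speed < 0.2:
        # Barely moving: short pauses read as "stop", long ones "loiter".
        return "stop" if duration < 30.0 else "loiter"
    if net_displacement < 0.3 * mean_speed * duration:
        # Path much longer than net displacement: wandering in place.
        return "loiter"
    return "run" if mean_speed > 2.5 else "walk"
```

In a real system these rules would be tuned on labeled tracks and combined with the interaction rules (pass, meet, merge, split) over pairs of tracks.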
The EX-SHADWELL-Full Scale Fire Research and Test Ship
1988-01-20
If shipboard testing is necessary after the large-scale land tests at China Lake, the EX-SHADWELL has a helo pad and well deck available. The test procedure is as follows: the data acquisition system is started, the fire is started, and data are recorded until all fire activity has ceased. Timing clocks are started at the instant the fuel is lighted; that instant is time zero, and the time at which the cables become involved is recorded.
Riverbed Hydrologic Exchange Dynamics in a Large Regulated River Reach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Tian; Bao, Jie; Huang, Maoyi
Hydrologic exchange flux (HEF) is an important hydrologic component in river corridors that includes both bidirectional (hyporheic) and unidirectional (gaining/losing) surface water-groundwater exchanges. Quantifying HEF rates in a large regulated river is difficult due to the large spatial domain, the complexity of geomorphologic features and subsurface properties, and the great stage variations created by dam operations at multiple time scales. In this study, we developed a method combining numerical modeling and field measurements to estimate HEF rates across the river bed in a 7-km-long reach of the highly regulated Columbia River. A high-resolution computational fluid dynamics (CFD) modeling framework was developed and validated against field measurements and other modeling results to characterize the HEF dynamics across the river bed. We found that about 85% of the time from 2008-2014 the river was losing water, with an annual average net HEF rate across the river bed (Qz) of -2.3 m3 s−1 (negative indicating downwelling). June was the only month in which the river gained water, with a monthly averaged Qz of 0.8 m3 s−1. We also found that daily dam operations increased the hourly gross gaining and losing rates, averaged over a year, by 8% and 2%, respectively. By investigating HEF feedbacks at various time scales, we suggest that dam operations can reduce HEF at the seasonal time scale by decreasing seasonal flow variations, while enhancing HEF at the sub-daily time scale by generating high-frequency discharge variations. These changes could have significant impacts on biogeochemical processes in the hyporheic zone.
Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, M D; Cole, S; Frenk, C S
2011-02-14
We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator of Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires ~8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth, through the variation in the effective local matter density, and the spatial frequency of small-scale perturbations, through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, for rescaling summary statistics of N-body simulations for new cosmological parameter values, and for any application where the influence of Fourier modes larger than the simulation size must be accounted for.
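The basic move of redrawing the large-scale Fourier modes of a periodic Gaussian field, before any mode-coupling correction, can be sketched as follows. This is an illustration of phase resampling on a 2-D real field, not the authors' full algorithm, and the function name is ours:

```python
import numpy as np

def resample_large_modes(field, k_cut, rng):
    """Redraw the phases of all Fourier modes with 0 < |k| < k_cut while
    keeping each mode's amplitude (hence the power spectrum) fixed.
    Drawing the new phases from the FFT of a real white-noise field
    preserves the Hermitian symmetry needed for a real-valued result."""
    n = field.shape[0]
    fk = np.fft.fftn(field)
    gk = np.fft.fftn(rng.standard_normal(field.shape))
    k = np.fft.fftfreq(n) * n               # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kmag = np.hypot(kx, ky)
    mask = (kmag > 0) & (kmag < k_cut)
    # Keep |fk|, take the (unit-modulus) phases of the fresh field.
    fk[mask] = np.abs(fk[mask]) * gk[mask] / np.abs(gk[mask])
    return np.fft.ifftn(fk).real
```

Because the amplitudes are untouched, every realization shares the input field's power spectrum; only the large-scale phases differ between realizations.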
Phillips, Edward Geoffrey; Shadid, John N.; Cyr, Eric C.
2018-05-01
Here, we report that multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure-preserving (also termed physics-compatible) discretizations of the Maxwell equations in first-order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell-type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves and compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Lastly, results demonstrate parallel scalability at large electromagnetic-wave CFL numbers on a variety of test problems.
NASA Astrophysics Data System (ADS)
Orr, Matthew; Hopkins, Philip F.
2018-06-01
I will present a simple model of non-equilibrium star formation and its relation to the scatter in the Kennicutt-Schmidt relation and large-scale star formation efficiencies in galaxies. I will highlight the importance of a hierarchy of timescales, between the galaxy dynamical time, local free-fall time, the delay time of stellar feedback, and temporal overlap in observables, in setting the scatter of the observed star formation rates for a given gas mass. Further, I will talk about how these timescales (and their associated duty-cycles of star formation) influence interpretations of the large-scale star formation efficiency in reasonably star-forming galaxies. Lastly, the connection with galactic centers and out-of-equilibrium feedback conditions will be mentioned.
NASA Technical Reports Server (NTRS)
Smith, P. H.; Bewtra, N. K.; Hoffman, R. A.
1979-01-01
The motions of charged particles under the influence of the geomagnetic and electric fields are quite complex in the region of the inner magnetosphere. The Volland-Stern type large scale convection electric field was used successfully to predict both the plasmapause location and particle enhancements determined from Explorer 45 measurements. A time dependence in this electric field was introduced based on the variation in Kp for actual magnetic storm conditions. The particle trajectories were computed as they evolved in this time-varying electric field. Several storm fronts of particles of different magnetic moments were injected into the inner magnetosphere from L = 10 in the equatorial plane. The motions of these fronts are presented in a movie format.
Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES
NASA Astrophysics Data System (ADS)
Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu
2016-11-01
Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slope, and large scale eddies. Capturing the wide range of length and time scales involved during the life-cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.
Diffuse pollution of soil and water: Long term trends at large scales?
NASA Astrophysics Data System (ADS)
Grathwohl, P.
2012-04-01
Industrialization and urbanization have increased pressure on the environment for more than a century, causing degradation of soil and water quality that is still ongoing. The number of potential environmental contaminants detected in surface and groundwater is continuously increasing; from classical industrial and agricultural chemicals, to flame retardants, pharmaceuticals, and personal care products. While point sources of pollution can be managed in principle, diffuse pollution is reversible only at very long time scales, if at all. Compounds which were phased out many decades ago such as PCBs or DDT are still abundant in soils, sediments and biota. How diffuse pollution is processed at large scales in space (e.g. catchments) and time (centuries) is unknown. The field-scale relevance of processes well investigated at the laboratory scale (e.g. sorption/desorption and (bio)degradation kinetics) is not clear. Transport of compounds is often coupled to the water cycle, and in order to assess trends in diffuse pollution, detailed knowledge about the hydrology and the solute fluxes at the catchment scale is required (e.g. input/output fluxes, transformation rates at the field scale). This is also a prerequisite for assessing management options for reversal of adverse trends.
On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data
NASA Astrophysics Data System (ADS)
Hua, H.
2016-12-01
Next-generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are an order of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require fast turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.
NASA Astrophysics Data System (ADS)
Verdon-Kidd, D.; Kiem, A. S.
2008-10-01
In this paper regional (synoptic) and large-scale climate drivers of rainfall are investigated for Victoria, Australia. A non-linear classification methodology known as self-organizing maps (SOM) is used to identify 20 key regional synoptic patterns, which are shown to capture a range of significant synoptic features known to influence the climate of the region. Rainfall distributions are assigned to each of the 20 patterns for nine rainfall stations located across Victoria, resulting in a clear distinction between wet and dry synoptic types at each station. The influence of large-scale climate modes on the frequency and timing of the regional synoptic patterns is also investigated. This analysis revealed that phase changes in the El Niño Southern Oscillation (ENSO), the Southern Annular Mode (SAM) and/or Indian Ocean Dipole (IOD) are associated with a shift in the relative frequency of wet and dry synoptic types. Importantly, these results highlight the potential to utilise the link between the regional synoptic patterns derived in this study and large-scale climate modes to improve rainfall forecasting for Victoria, both in the short- (i.e. seasonal) and long-term (i.e. decadal/multi-decadal scale). In addition, the regional and large-scale climate drivers identified in this study provide a benchmark by which the performance of Global Climate Models (GCMs) may be assessed.
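The SOM classification step can be sketched on toy data. This is a minimal one-dimensional map on synthetic two-dimensional samples; the map size, learning-rate and neighborhood schedules are illustrative assumptions, not the study's 20-node configuration trained on synoptic pressure fields:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the synoptic fields: two "weather regimes" as clusters
# in a 2-D feature space (the real study classified MSLP-type patterns).
X = np.vstack([rng.normal(-1, 0.2, (100, 2)), rng.normal(1, 0.2, (100, 2))])

n_nodes = 4
W = 0.1 * rng.standard_normal((n_nodes, 2))          # map node weights

for t in range(2000):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))      # best-matching unit
    frac = 1.0 - t / 2000.0
    lr = 0.5 * frac                                   # decaying learning rate
    sigma = 2.0 * frac + 0.5                          # shrinking neighborhood
    h = np.exp(-(np.arange(n_nodes) - bmu) ** 2 / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)                    # pull nodes toward x

# Assigning each sample to its best-matching node yields the classification
# into "synoptic types"; the two regimes land on different nodes.
labels = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
```

Rainfall distributions conditioned on these labels would then distinguish wet and dry types, as the paper does per station.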
Self-Organized Evolution of Sandy Coastline Shapes: Connections with Shoreline Erosion Problems
NASA Astrophysics Data System (ADS)
Murray, A. B.; Ashton, A.
2002-12-01
Landward movement of the shoreline severely impacts property owners and communities where structures and infrastructure are built near the coast. While sea level rise will increase the average rate of coastal erosion, even a slight gradient in wave-driven alongshore sediment flux will locally overwhelm that effect, causing either shoreline accretion or enhanced erosion. Recent analysis shows that because of the nonlinear relationship between alongshore sediment flux and the angle between deep water wave crests and local shoreline orientation, in some wave climates a straight coastline is unstable (Ashton et al., Nature, 2001). When deep-water waves approach from angles greater than the one that maximizes alongshore flux, in concave-seaward shoreline segments sediment flux will diverge, causing erosion. Similarly, convex regions such as the crests of perturbations on an otherwise straight shoreline will experience accretion; perturbations will grow. When waves approach from smaller angles, the sign of the relationship between shoreline curvature and shoreline change is reversed, but any deviation from a perfectly straight coastline will still result in alongshore-inhomogeneous shoreline change. A numerical model designed to explore the long-term effects of this instability operating over a spatially extended alongshore domain has shown that as perturbations grow to finite amplitude and interact with each other, large-scale coastline structures can emerge. The character of the local and non-local interactions, and the resulting emergent structures, depends on the wave climate. The 100-km scale capes and cuspate forelands that form much of the coast of the Carolinas, USA, provide one possible natural example. Our modeling suggests that on such a shoreline, continued interactions between large-scale structures will cause continued large-scale change in coastline shape. Consequently, some coastline segments will tend to experience accentuated erosion.
Communities established in these areas face discouraging future prospects. Attempts can be made to arrest the shoreline retreat on large scales, for example through large beach nourishment projects or policies that allow pervasive hard stabilization (e.g. seawalls, jetties) along a coastline segment. However, even if such attempts are successful for a significant period of time, the pinning in place of some parts of an otherwise dynamic system will change the large-scale evolution of the coastline, altering the future erosion/accretion experienced at other, perhaps distant, locations. Simple properties of alongshore sediment transport could also be relevant to alongshore-inhomogeneous shoreline change (including erosion 'hot spots') on shorter time scales and smaller spatial scales. We are comparing predictions arising from the modeling, and from analysis of alongshore transport as a function of shoreline orientation, to recent observations of shoreline change ranging across spatial scales from 100s of meters to 10s of kilometers, and time scales from days to decades (List and Farris, Coastal Sediments, 1999; Tebbens et al., PNAS, 2002). Considering that many other processes and factors can also influence shoreline change, initial results show a surprising degree of correlation between observations and predictions.
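The instability argument above can be made concrete with a linearized toy flux law. The form below is a stand-in chosen only to share the real law's qualitative shape, peaking near a 45 degree deep-water wave angle (the actual maximizing angle in Ashton et al. is closer to 43 degrees):

```python
import numpy as np

def growth_factor(phi_deg, k=1.0, T=1.0):
    """Amplitude ratio of a sinusoidal shoreline bump after time T.

    Toy flux law Q = sin(2*(phi - theta)), where phi is the deep-water wave
    angle and theta the local shoreline orientation. Linearizing
    theta ~ dy/dx gives a diffusion equation dy/dt = D * d2y/dx2 with
    D = 2*cos(2*phi), so a bump y ~ sin(k*x) evolves as exp(-D * k**2 * T):
    decay for D > 0 (low-angle waves), growth for D < 0 (high-angle waves).
    """
    D = 2.0 * np.cos(2.0 * np.radians(phi_deg))
    return np.exp(-D * k ** 2 * T)

print(growth_factor(30.0) < 1.0 < growth_factor(60.0))  # → True
```

The sign flip of the effective diffusivity at the flux-maximizing angle is exactly the curvature-versus-change reversal described in the abstract.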
Minimal microwave anisotropy from perturbations induced at late times
NASA Technical Reports Server (NTRS)
Jaffe, Andrew H.; Stebbins, Albert; Frieman, Joshua A.
1994-01-01
Aside from primordial gravitational instability of the cosmological fluid, various mechanisms have been proposed to generate large-scale structure at relatively late times, including, e.g., 'late-time' cosmological phase transitions. In these scenarios, it is envisioned that the universe is nearly homogeneous at the time of last scattering and that perturbations grow rapidly sometime after the primordial plasma recombines. On this basis, it was suggested that large inhomogeneities could be generated while leaving relatively little imprint on the cosmic microwave background (MBR) anisotropy. In this paper, we calculate the minimal anisotropies possible in any 'late-time' scenario for structure formation, given the level of inhomogeneity observed at present. Since the growth of the inhomogeneity involves time-varying gravitational fields, these scenarios inevitably generate significant MBR anisotropy via the Sachs-Wolfe effect. Moreover, we show that the large-angle MBR anisotropy produced by the rapid post-recombination growth of inhomogeneity is generally greater than that produced by the same inhomogeneity growth via gravitational instability. In 'realistic' scenarios one can decrease the anisotropy compared to models with primordial adiabatic fluctuations, but only on very small angular scales. The value of any particular measure of the anisotropy can be made small in late-time models, but only by making the time-dependence of the gravitational field sufficiently 'pathological'.
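The Sachs-Wolfe argument can be made explicit with the standard schematic decomposition (factors and gauge conventions vary between treatments; $\Phi$ is the Newtonian potential and $\tau$ conformal time):

```latex
\frac{\Delta T}{T}(\hat{n}) \;\supset\;
\underbrace{\tfrac{1}{3}\,\Phi(\tau_{\mathrm{rec}})}_{\text{ordinary SW}}
\;+\;
\underbrace{2\int_{\tau_{\mathrm{rec}}}^{\tau_{0}}\!\mathrm{d}\tau\;
\dot{\Phi}\bigl(\tau,\;\hat{n}\,(\tau_{0}-\tau)\bigr)}_{\text{integrated SW}}
```

Rapid post-recombination growth of structure means large $\dot{\Phi}$ along the line of sight, so the integrated term cannot be suppressed without the 'pathological' time-dependence the abstract refers to.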
SLIDE - a web-based tool for interactive visualization of large-scale -omics data.
Ghosh, Soumita; Datta, Abhik; Tan, Kaisen; Choi, Hyungwon
2018-06-28
Data visualization is often regarded as a post hoc step for verifying statistically significant results in the analysis of high-throughput data sets. This common practice leaves a large amount of raw data behind, from which more information can be extracted. However, existing solutions do not provide capabilities to explore large-scale raw datasets using biologically sensible queries, nor do they allow user interaction based real-time customization of graphics. To address these drawbacks, we have designed an open-source, web-based tool called Systems-Level Interactive Data Exploration, or SLIDE to visualize large-scale -omics data interactively. SLIDE's interface makes it easier for scientists to explore quantitative expression data in multiple resolutions in a single screen. SLIDE is publicly available under BSD license both as an online version as well as a stand-alone version at https://github.com/soumitag/SLIDE. Supplementary Information are available at Bioinformatics online.
Statistical Ensemble of Large Eddy Simulations
NASA Technical Reports Server (NTRS)
Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.
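The ensemble-averaged dynamic procedure can be sketched with synthetic stand-ins for the Germano-identity tensors; the fields, ensemble size, and noise level below are illustrative assumptions, since in a real LES the L and M tensors come from test-filtering the resolved velocity of each realization:

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, nx, ny = 16, 32, 32   # 16 LES realizations on a 2-D slice

# Stand-ins for the Germano-identity tensors of each realization: L (the
# resolved Leonard stress) and M (the model tensor); three tensor
# components are kept for brevity. L is a noisy multiple of M so that the
# recovered coefficient should be close to C_true.
C_true = 0.02
M = rng.standard_normal((n_ens, 3, nx, ny))
L = C_true * M + 0.05 * rng.standard_normal((n_ens, 3, nx, ny))

# Ensemble-averaged dynamic procedure: contract over tensor components and
# average over realizations only. No spatial averaging is needed, so the
# coefficient stays local and the method applies to inhomogeneous flows.
num = (L * M).sum(axis=1).mean(axis=0)    # <L_ij M_ij>_ensemble, per point
den = (M * M).sum(axis=1).mean(axis=0)    # <M_ij M_ij>_ensemble, per point
C = num / den                             # local model coefficient C(x, y)
print(C.shape)  # → (32, 32)
```

Averaging over realizations instead of over space is what makes the resulting model parameter depend only on the statistical properties of the flow, as the abstract emphasizes.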
NASA Technical Reports Server (NTRS)
Hussain, A. K. M. F.
1980-01-01
Comparisons of the distributions of large scale structures in turbulent flow with distributions based on time dependent signals from stationary probes and the Taylor hypothesis are presented. The study investigated a region in the near field of a 7.62 cm circular air jet at a Re of 32,000, in which coherent structures were induced through small-amplitude controlled excitation, with stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase-averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulent intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties, with results indicating that the use of the local time-average velocity or streamwise velocity produces large distortions.
Extreme-scale motions in turbulent plane Couette flows
NASA Astrophysics Data System (ADS)
Lee, Myoungkyu; Moser, Robert D.
2018-05-01
We study the size of large-scale motions in turbulent plane Couette flows at moderate Reynolds number up to $Re_\tau$ = 500. Direct numerical simulation domains were as large as $100\pi\delta\times2\delta\times5\pi\delta$, where $\delta$ is half the distance between the walls. The results indicate that there are structures with streamwise extent, as measured by the wavelength, as long as 78$\delta$ and at least 310$\delta$ at $Re_\tau$ = 220 and 500, respectively. The presence of these very long structures is apparent in the spectra of all three velocity components and the Reynolds stress. In DNS using a smaller domain, the large structures are constrained, eliminating the streamwise variations present in the larger domain. Effects of a smaller domain are also present in the mean velocity and the streamwise velocity variance in the outer flow.
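A sketch of how such long structures appear in a one-dimensional energy spectrum, using a synthetic signal in place of DNS data (the domain length and wavelength below are illustrative, chosen to mimic a 25$\pi\delta$ structure in a $100\pi\delta$ box):

```python
import numpy as np

rng = np.random.default_rng(3)
n, Lx = 4096, 100 * np.pi          # grid points; domain length in delta units

# Synthetic "streamwise velocity" along x: one very long wave (wavelength
# Lx/4 = 25*pi*delta) buried in broadband noise, standing in for DNS data.
x = np.linspace(0.0, Lx, n, endpoint=False)
u = np.sin(2 * np.pi * 4 * x / Lx) + 0.5 * rng.standard_normal(n)
u -= u.mean()

# One-dimensional energy spectrum, normalized so sum(E) = var(u) (Parseval).
uhat = np.fft.rfft(u) / n
E = 2.0 * np.abs(uhat) ** 2
E[0] /= 2.0                        # the mean mode is not doubled
E[-1] /= 2.0                       # nor is the Nyquist mode (n is even)
k = 2.0 * np.pi * np.fft.rfftfreq(n, d=Lx / n)

# The long structure appears as the spectral peak; its wavelength is 2*pi/k.
k_peak = k[1:][np.argmax(E[1:])]
print(round(2.0 * np.pi / k_peak / np.pi))  # wavelength in pi*delta units → 25
```

A structure longer than the box would alias into the k = 0 mode instead, which is the domain-size constraint the abstract describes.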
NASA Astrophysics Data System (ADS)
Federico, Ivan; Pinardi, Nadia; Coppini, Giovanni; Oddo, Paolo; Lecci, Rita; Mossa, Michele
2017-01-01
SANIFS (Southern Adriatic Northern Ionian coastal Forecasting System) is a coastal-ocean operational system based on the unstructured grid finite-element three-dimensional hydrodynamic SHYFEM model, providing short-term forecasts. The operational chain is based on a downscaling approach starting from the large-scale system for the entire Mediterranean Basin (MFS, Mediterranean Forecasting System), which provides initial and boundary condition fields to the nested system. The model is configured to provide hydrodynamics and active tracer forecasts both in open ocean and coastal waters of southeastern Italy using a variable horizontal resolution from the open sea (3-4 km) to coastal areas (50-500 m). Given that the coastal fields are driven by a combination of both local (also known as coastal) and deep-ocean forcings propagating along the shelf, the performance of SANIFS was verified both in forecast and simulation mode, first (i) on the large and shelf-coastal scales by comparing with a large-scale survey CTD (conductivity-temperature-depth) in the Gulf of Taranto and then (ii) on the coastal-harbour scale (Mar Grande of Taranto) by comparison with CTD, ADCP (acoustic Doppler current profiler) and tide gauge data. Sensitivity tests were performed on initialization conditions (mainly focused on spin-up procedures) and on surface boundary conditions by assessing the reliability of two alternative datasets at different horizontal resolution (12.5 and 6.5 km). The SANIFS forecasts at a lead time of 1 day were compared with the MFS forecasts, highlighting that SANIFS is able to retain the large-scale dynamics of MFS. The large-scale dynamics of MFS are correctly propagated to the shelf-coastal scale, improving the forecast accuracy (+17 % for temperature and +6 % for salinity compared to MFS).
Moreover, the added value of SANIFS was assessed on the coastal-harbour scale, which is not covered by the coarse resolution of MFS, where the fields forecasted by SANIFS reproduced the observations well (temperature RMSE equal to 0.11 °C). Furthermore, SANIFS simulations were compared with hourly time series of temperature, sea level and velocity measured on the coastal-harbour scale, showing a good agreement. Simulations in the Gulf of Taranto described a circulation mainly characterized by an anticyclonic gyre with the presence of cyclonic vortexes in shelf-coastal areas. A surface water inflow from the open sea to Mar Grande characterizes the coastal-harbour scale.
Neurobehavioral studies pose unique challenges for dose-response modeling, including small sample size and relatively large intra-subject variation, repeated measurements over time, multiple endpoints with both continuous and ordinal scales, and time dependence of risk characteri...
Heidari, Zahra; Roe, Daniel R; Galindo-Murillo, Rodrigo; Ghasemi, Jahan B; Cheatham, Thomas E
2016-07-25
Long time scale molecular dynamics (MD) simulations of biological systems are becoming increasingly commonplace due to the availability of both large-scale computational resources and significant advances in the underlying simulation methodologies. Therefore, it is useful to investigate and develop data mining and analysis techniques to quickly and efficiently extract the biologically relevant information from the incredible amount of generated data. Wavelet analysis (WA) is a technique that can quickly reveal significant motions during an MD simulation. Here, the application of WA on well-converged long time scale (tens of μs) simulations of a DNA helix is described. We show how WA combined with a simple clustering method can be used to identify both the physical and temporal locations of events with significant motion in MD trajectories. We also show that WA can not only distinguish and quantify the locations and time scales of significant motions, but by changing the maximum time scale of WA a more complete characterization of these motions can be obtained. This allows motions of different time scales to be identified or ignored as desired.
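A minimal stand-in for the wavelet-plus-clustering idea, using a hand-rolled Haar transform on a synthetic signal; the signal, threshold, and level count are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1024

# Synthetic per-frame "motion" observable: background noise plus one burst
# of large-amplitude oscillation around frame 600, standing in for an MD
# time series such as a per-residue displacement.
sig = 0.1 * rng.standard_normal(n)
sig[580:620] += np.sin(2 * np.pi * np.arange(40) / 10.0)

def haar_step(x):
    """One level of the orthonormal Haar transform: (approx, detail)."""
    pairs = x.reshape(-1, 2)
    return ((pairs[:, 0] + pairs[:, 1]) / np.sqrt(2),
            (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))

# Detail coefficients at level j describe motion on a 2**j-frame scale, so
# thresholding them localizes both *when* and on *what time scale* large
# motions occur; raising the maximum level widens the scales examined,
# echoing the abstract's point about varying the maximum time scale.
approx, events = sig.copy(), []
for level in range(1, 6):
    approx, detail = haar_step(approx)
    hot = np.flatnonzero(np.abs(detail) > 8 * np.median(np.abs(detail)))
    events.extend(hot * 2 ** level)           # map back to frame indices

events = np.array(sorted(set(events)))
print(len(events) > 0)  # the burst frames are flagged  → True
```

Clustering the flagged (time, scale) pairs would then group coefficients belonging to the same physical event, as in the paper's combined WA-plus-clustering analysis.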
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
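The scaling step can be sketched as follows; the reflectance curve, layer fractions, and optical coefficients below are invented placeholders, since the real inputs come from the zero-absorption MC run and the closed-form average classical path:

```python
import numpy as np

# Placeholder zero-absorption reflectance curve; in the actual method this
# comes from a Monte Carlo run of the layered model with absorption off.
t = np.linspace(0.1e-9, 3e-9, 200)            # detection times [s]
R0 = t ** -1.5 * np.exp(-t / 1e-9)            # curve shape is illustrative

# Assumed per-layer path fractions from the closed-form average classical
# path, and assumed absorption coefficients for a two-layer model.
f = np.array([0.7, 0.3])                      # fraction of path in each layer
mua = np.array([50.0, 200.0])                 # absorption coefficients [1/m]
v = 3e8 / 1.4                                 # photon speed in tissue [m/s]

# Weighted Beer-Lambert scaling: a photon detected at time t traveled a
# total path of length v*t, apportioned to the layers by the fractions f,
# so the attenuation factor is exp(-sum_i mua_i * f_i * v * t).
R = R0 * np.exp(-(mua * f).sum() * v * t)
```

Because only the fractions f are needed, nothing per-photon (path length or collision count) has to be stored during the baseline simulation, which is the advantage the abstract highlights.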
NASA Astrophysics Data System (ADS)
Kube, R.; Garcia, O. E.; Theodorsen, A.; Brunner, D.; Kuang, A. Q.; LaBombard, B.; Terry, J. L.
2018-06-01
The Alcator C-Mod mirror Langmuir probe system has been used to sample data time series of fluctuating plasma parameters in the outboard mid-plane far scrape-off layer. We present a statistical analysis of one second long time series of electron density, temperature, radial electric drift velocity and the corresponding particle and electron heat fluxes. These are sampled during stationary plasma conditions in an ohmically heated, lower single null diverted discharge. The electron density and temperature are strongly correlated and feature fluctuation statistics similar to the ion saturation current. Both electron density and temperature time series are dominated by intermittent, large-amplitude bursts with an exponential distribution of both burst amplitudes and waiting times between them. The characteristic time scale of the large-amplitude bursts is approximately 15 μs. Large-amplitude velocity fluctuations feature a slightly faster characteristic time scale and appear at a faster rate than electron density and temperature fluctuations. Describing these time series as a superposition of uncorrelated exponential pulses, we find that probability distribution functions, power spectral densities as well as auto-correlation functions of the data time series agree well with predictions from the stochastic model. The electron particle and heat fluxes present large-amplitude fluctuations. For this low-density plasma, the radial electron heat flux is dominated by convection, that is, correlations of fluctuations in the electron density and radial velocity. Hot and dense blobs contribute only a minute fraction of the total fluctuation driven heat flux.
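A generic sketch of such a superposition of uncorrelated exponential pulses (a filtered Poisson process); the pulse shape, duration, and rate below are illustrative choices, not the fitted C-Mod values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Filtered Poisson process: one-sided exponential pulses with exponentially
# distributed amplitudes and waiting times, as in the stochastic model.
dt, n = 0.5e-6, 400_000          # sampling interval [s], series length
tau_d = 15e-6                    # pulse duration, cf. the burst time scale
tau_w = 100e-6                   # mean waiting time between pulses

L = int(10 * tau_d / dt)         # truncate each pulse after 10 durations
shape = np.exp(-np.arange(L) * dt / tau_d)

sig = np.zeros(n)
t_arrival = rng.exponential(tau_w)
while t_arrival / dt < n:
    k = int(t_arrival / dt)
    m = min(L, n - k)
    sig[k:k + m] += rng.exponential(1.0) * shape[:m]   # add one pulse
    t_arrival += rng.exponential(tau_w)

# The resulting series is intermittent: non-negative, bursty, and
# positively skewed, like the measured density and temperature signals.
skew = ((sig - sig.mean()) ** 3).mean() / sig.std() ** 3
print(sig.min() >= 0.0, skew > 0.0)  # → True True
```

Comparing the empirical amplitude distribution, power spectrum, and autocorrelation of such synthetic series against measurements is exactly the kind of check the abstract reports.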
Parallel Clustering Algorithm for Large-Scale Biological Data Sets
Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang
2014-01-01
Background: Recent explosion of biological data brings a great challenge for the traditional clustering algorithms. With increasing scale of data sets, much larger memory and longer runtime are required for the cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied into the biological researches. However, the time and space complexity become a great bottleneck when handling the large-scale data sets. Moreover, the similarity matrix, whose constructing procedure takes long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods: Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix constructing procedure and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate way of data partition and reduction is designed in our method, in order to minimize the global communication cost among processes. Results: A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies. PMID:24705246
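The memory-shared similarity-matrix stage can be sketched with threads sharing one output array; this is a toy in-process analogue of the paper's architecture, with illustrative sizes and chunking, not their actual implementation:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(6)
X = rng.standard_normal((500, 16))        # 500 items, 16 features (toy sizes)

# Affinity propagation needs the pairwise similarity matrix up front; here
# s(i, k) = -||x_i - x_k||^2. Worker threads share the output array (a
# stand-in for the memory-shared stage), each filling its own row block --
# a simple data partition with no inter-chunk communication.
S = np.empty((len(X), len(X)))

def fill_rows(bounds):
    lo, hi = bounds
    d = X[lo:hi, None, :] - X[None, :, :]     # broadcasted differences
    S[lo:hi] = -(d ** 2).sum(axis=2)

chunks = [(i, min(i + 125, len(X))) for i in range(0, len(X), 125)]
with ThreadPoolExecutor(max_workers=4) as ex:
    list(ex.map(fill_rows, chunks))           # four independent row blocks
```

The same row-block partition carries over naturally to a distributed setting, where each rank owns a block of S for the subsequent message-passing updates of affinity propagation.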
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
NASA Astrophysics Data System (ADS)
Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.
2016-12-01
Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional Global Climate Models while at the same time remaining viable in terms of computational resources for climate-length time scales. The MMF represents a shift away from large horizontal grid spacing in traditional GCMs that leads to overabundant light precipitation and lack of heavy events, toward a model where precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously only obtained by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but are outstanding in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study; the standard CESM, the multi-scale `Super-Parameterized' CESM where large-scale parameterizations have been replaced with a 2D cloud-permitting model, and a multi-instance land version of the SP-CESM where each column of the 2D CRM is allowed to interact with an individual land unit. These simulations were carried out using prescribed Sea Surface Temperatures for the period from 1979-2006 with daily precipitation saved for all 28 years. Comparisons of the statistical properties of precipitation between model architectures and against observations from rain gauges were made, with specific focus on detection and evaluation of extreme precipitation events.
Highly multiplexed targeted proteomics using precise control of peptide retention time.
Gallien, Sebastien; Peterman, Scott; Kiyonami, Reiko; Souady, Jamal; Duriez, Elodie; Schoen, Alan; Domon, Bruno
2012-04-01
Large-scale proteomics applications using SRM analysis on triple quadrupole mass spectrometers present new challenges to LC-MS/MS experimental design. Despite the automation of building large-scale LC-SRM methods, the increased numbers of targeted peptides can compromise the balance between sensitivity and selectivity. To facilitate large target numbers, time-scheduled SRM transition acquisition is performed. Previously published results have demonstrated that incorporation of a well-characterized set of synthetic peptides enables chromatographic characterization of the elution profile for most endogenous peptides. We have extended this application of peptide trainer kits to not only build SRM methods but to facilitate real-time elution profile characterization that enables automated adjustment of the scheduled detection windows. Incorporation of dynamic retention time adjustments facilitates targeted assays lasting several days without the need for constant supervision. This paper provides an overview of how the dynamic retention correction approach identifies and corrects for commonly observed LC variations. This adjustment dramatically improves robustness in targeted discovery experiments as well as routine quantification experiments. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
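The core of the retention-time correction reduces to fitting the observed drift of the reference peptides and re-centering each scheduled window. All retention times and the linear drift model below are invented for illustration; real implementations may use more robust fits:

```python
import numpy as np

# Hypothetical reference ("trainer") peptides: library retention times vs.
# times actually observed in the current run, which has drifted by a small
# offset and a slope change.
lib_rt = np.array([10.0, 18.0, 25.0, 33.0, 41.0, 52.0])      # minutes
obs_rt = 1.05 * lib_rt + 1.8                                 # drifted run

# Fit the drift on the fly; the same mapping then re-centers the scheduled
# SRM detection window of every target peptide in real time.
slope, intercept = np.polyfit(lib_rt, obs_rt, 1)

def adjusted_window(target_lib_rt, half_width=2.0):
    center = slope * target_lib_rt + intercept
    return center - half_width, center + half_width

lo, hi = adjusted_window(30.0)
print(lo < 1.05 * 30.0 + 1.8 < hi)  # true elution stays inside the window → True
```

Without the correction, a 2-minute window centered on the library time (30.0 min) would miss the drifted elution at 33.3 min, which is the failure mode scheduled SRM runs face over multi-day acquisitions.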
A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling
NASA Technical Reports Server (NTRS)
Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne
2003-01-01
Ideal cloud-resolving models accumulate little error. When their domain is so large that synoptic large-scale circulations are accommodated, they can be used to simulate the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models accumulate no errors in thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Alternatively speaking, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in both entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.
Insufficiency of avoided crossings for witnessing large-scale quantum coherence in flux qubits
NASA Astrophysics Data System (ADS)
Fröwis, Florian; Yadin, Benjamin; Gisin, Nicolas
2018-04-01
Do experiments based on superconducting loops segmented with Josephson junctions (e.g., flux qubits) show macroscopic quantum behavior in the sense of Schrödinger's cat example? Various arguments based on microscopic and phenomenological models were recently adduced in this debate. We approach this problem by adapting (to flux qubits) the framework of large-scale quantum coherence, which was already successfully applied to spin ensembles and photonic systems. We show that contemporary experiments might show quantum coherence more than 100 times larger than experiments in the classical regime. However, we argue that the often-used demonstration of an avoided crossing in the energy spectrum is not sufficient to make a conclusion about the presence of large-scale quantum coherence. Alternative, rigorous witnesses are proposed.
Lagrangian space consistency relation for large scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horn, Bart; Hui, Lam; Xiao, Xiao
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. Furthermore, the simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.
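Schematically, and in notation assumed here rather than taken from the paper, the Lagrangian-space statement is a vanishing normalized squeezed limit:

```latex
% Schematic form; \mathcal{O}_L denotes Lagrangian-space observables at
% (possibly unequal) times t_i, P(q) the long mode's power spectrum, and
% \langle\cdot\rangle_c the connected correlator. Notation is ours.
\lim_{\vec q \to 0} \frac{1}{P(q)}
\left\langle \delta_{\vec q}\,
\mathcal{O}_L(\vec k_1, t_1)\cdots\mathcal{O}_L(\vec k_N, t_N)
\right\rangle_c = 0
```

Per the abstract, the nontrivial content is that this holds non-perturbatively, for unequal times, and even in the presence of multiple streaming.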
Isotope Mass Scaling of Turbulence and Transport
NASA Astrophysics Data System (ADS)
McKee, George; Yan, Zheng; Gohil, Punit; Luce, Tim; Rhodes, Terry
2017-10-01
The dependence of turbulence characteristics and transport scaling on the fuel ion mass has been investigated in a set of hydrogen (A = 1) and deuterium (A = 2) plasmas on DIII-D. The normalized energy confinement time (B·τE) is two times lower in hydrogen (H) plasmas compared to similar deuterium (D) plasmas. Dimensionless parameters other than ion mass (A), including ρ*, q95, Te/Ti, βN, ν*, and Mach number, were maintained nearly fixed. Profiles of electron density, electron and ion temperature, and toroidal rotation were well matched. The normalized turbulence amplitude (ñ/n) is approximately twice as large in H as in D, which may partially explain the increased transport and reduced energy confinement time. Radial correlation lengths of low-wavenumber density turbulence in hydrogen are similar to or slightly larger than correlation lengths in the deuterium plasmas and generally scale with the ion gyroradius, which was maintained nearly fixed in this dimensionless scan. Predicting energy confinement in D-T burning plasmas requires an understanding of the large and beneficial isotope scaling of transport. Supported by USDOE under DE-FG02-08ER54999 and DE-FC02-04ER54698.
Large-eddy simulation of a turbulent mixing layer
NASA Technical Reports Server (NTRS)
Mansour, N. N.; Ferziger, J. H.; Reynolds, W. C.
1978-01-01
The three dimensional, time dependent (incompressible) vorticity equations were used to simulate numerically the decay of isotropic box turbulence and time developing mixing layers. The vorticity equations were spatially filtered to define the large scale turbulence field, and the subgrid scale turbulence was modeled. A general method was developed to show numerical conservation of momentum, vorticity, and energy. The terms that arise from filtering the equations were treated (for both periodic boundary conditions and no stress boundary conditions) in a fast and accurate way by using fast Fourier transforms. Use of vorticity as the principal variable is shown to produce results equivalent to those obtained by use of the primitive variable equations.
Universality of accelerating change
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Shlesinger, Michael F.
2018-03-01
On large time scales the progress of human technology follows an exponential growth trend that is termed accelerating change. The exponential growth trend is commonly considered to be the amalgamated effect of consecutive technology revolutions - where the progress brought in by each technology revolution follows an S-curve, and where the aging of each technology revolution drives humanity to push for the next technology revolution. Thus, as a collective, mankind is the 'intelligent designer' of accelerating change. In this paper we establish that the exponential growth trend - and only this trend - emerges universally, on large time scales, from systems that combine two elements: randomness and amalgamation. Hence, the universal generation of accelerating change can be attained by systems with no 'intelligent designer'.
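As a toy illustration of the amalgamation picture (this is not the authors' stochastic model; onset spacings and growth factors are invented), one can sum randomized S-curves whose onsets arrive successively and whose contributions grow, producing a monotonically rising aggregate progress curve:

```python
# Toy amalgamation of technology revolutions: each revolution contributes a
# logistic S-curve; later revolutions start later and carry larger
# contributions. All rates and spacings are illustrative assumptions.
import math
import random

def scurve(t, t0, rate, height):
    """A single revolution's progress: logistic S-curve of height `height`."""
    return height / (1.0 + math.exp(-rate * (t - t0)))

def progress(t, revolutions):
    """Amalgamated progress: sum of all revolutions' S-curves at time t."""
    return sum(scurve(t, t0, r, h) for t0, r, h in revolutions)

random.seed(1)
revolutions = []
t0, h = 0.0, 1.0
for _ in range(12):
    revolutions.append((t0, random.uniform(0.5, 1.5), h))
    t0 += random.uniform(4.0, 8.0)   # next revolution starts later
    h *= random.uniform(1.5, 2.5)    # ...and carries a larger contribution

trend = [progress(t, revolutions) for t in range(0, 80, 10)]
```

Since each logistic term is strictly increasing, the amalgamated trend is strictly increasing; the paper's point is the stronger claim that, with randomness, the large-time envelope is specifically exponential.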
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently, even with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large-scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
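The cost argument behind LTS can be made concrete with a back-of-envelope sketch (this is a toy accounting exercise, not the paper's LTS-Newmark scheme): with a single global explicit step, the smallest element dictates the step for every element, whereas LTS lets each element advance at a power-of-two fraction of its own CFL limit.

```python
# Toy update-count comparison: global time stepping vs. multilevel LTS on a
# mesh with one locally refined element. CFL constant and sizes are invented.

def cfl_dt(h, c, cfl=0.5):
    """Largest stable explicit step for element size h and wave speed c."""
    return cfl * h / c

def update_counts(element_sizes, c, t_end):
    dts = [cfl_dt(h, c) for h in element_sizes]
    global_dt = min(dts)
    global_updates = sum(t_end / global_dt for _ in element_sizes)
    # LTS: each element steps at the largest power-of-two multiple of
    # global_dt that still satisfies its own CFL limit.
    lts_updates = 0.0
    for dt in dts:
        p = 1
        while global_dt * 2 * p <= dt:
            p *= 2
        lts_updates += t_end / (global_dt * p)
    return global_updates, lts_updates

# A mesh with one locally refined element (100x size contrast, as in the
# abstract): LTS cuts total element-updates by over an order of magnitude.
sizes = [1.0] * 63 + [0.01]
g, l = update_counts(sizes, c=1.0, t_end=1.0)
```

The actual scheme must also couple neighboring levels consistently at the shared interface nodes, which is where the continuous-finite-element adaptations in the paper come in.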
Space-time dependence between energy sources and climate related energy production
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Borga, Marco; Creutin, Jean-Dominique; Ramos, Maria-Helena; Tøfte, Lena; Warland, Geir
2014-05-01
The European Renewable Energy Directive adopted in 2009 focuses on achieving a 20% share of renewable energy in the EU overall energy mix by 2020. A major part of renewable energy production is related to climate, called "climate related energy" (CRE) production. CRE production systems (wind, solar, and hydropower) are characterized by a large degree of intermittency and variability on both short and long time scales due to the natural variability of climate variables. The main strategies to handle the variability of CRE production include energy-storage, -transport, -diversity and -information (smart grids). The first three strategies aim to smooth out the intermittency and variability of CRE production in time and space, whereas the last strategy aims to provide a more optimal interaction between energy production and demand, i.e. to smooth out the residual load (the difference between demand and production). In order to increase the CRE share in the electricity system, it is essential to understand the space-time co-variability between the weather variables and CRE production under both current and future climates. This study presents a review of the literature that seeks to tackle these problems. It reveals that the majority of studies deal with either a single CRE source or with the combination of two CREs, mostly wind and solar. This may be due to the fact that the countries most advanced in terms of wind equipment also have very little hydropower potential (Denmark, Ireland, or the UK, for instance). Hydropower is characterized by both a large storage capacity and flexibility in electricity production, and therefore has a large potential for both balancing and storing energy from wind and solar power. Several studies look at how to better connect regions with a large share of hydropower (e.g., Scandinavia and the Alps) to regions with high shares of wind and solar power (e.g., the green battery North Sea net).
Considering time scales, various studies consider wind and solar power production and their co-fluctuation at small time scales. The multi-scale nature of the variability is less studied, i.e., the potential adverse or favorable co-fluctuation at intermediate time scales, involving water scarcity or abundance, is less present in the literature. Our review points out that it could be especially interesting to promote research on how the pronounced large-scale fluctuations in inflow to hydropower (intra-annual run-off) and smaller-scale fluctuations in wind and solar power interact in an energy system. There is a need to better represent the profound difference between wind, solar, and hydro energy sources. On the one hand, they are all directly linked to the 2-D horizontal dynamics of meteorology. On the other hand, the branching structure of hydrological systems transforms this variability and governs the complex combination of natural inflows and reservoir storage. Finally, we note that CRE production is, in addition to weather, also influenced by the energy system and market, i.e., the energy transport and demand across scales as well as changes of market regulation. The CRE production system thus lies in the nexus between climate, energy systems, and market regulations. The work presented is part of the FP7 project COMPLEX (Knowledge based climate mitigation systems for a low carbon economy; http://www.complex.ac.uk)
NASA Technical Reports Server (NTRS)
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1994-01-01
We calculate reduced moments ξ̄_q of the matter density fluctuations, up to order q = 5, from counts in cells produced by particle-mesh numerical simulations with scale-free Gaussian initial conditions. We use power-law spectra P(k) ∝ k^n with indices n = -3, -2, -1, 0, 1. Due to the supposed absence of characteristic times or scales in our models, all quantities are expected to depend on a single scaling variable. For each model, the moments at all times can be expressed in terms of the variance ξ̄_2 alone. We look for agreement with the hierarchical scaling ansatz, according to which ξ̄_q ∝ ξ̄_2^(q-1). For n ≤ -2 models, we find strong deviations from the hierarchy, which are mostly due to the presence of boundary problems in the simulations. A small, residual signal of deviation from the hierarchical scaling is however also found in n ≥ -1 models. The wide range of spectra considered and the large dynamic range, with careful checks of scaling and shot-noise effects, allows us to reliably detect evolution away from the perturbation theory result.
Optimal Control Modification Adaptive Law for Time-Scale Separated Systems
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. A model matching condition in the transformed time coordinate results in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.
Optimal Control Modification for Time-Scale Separated Systems
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. A model matching condition in the transformed time coordinate results in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.
NASA Astrophysics Data System (ADS)
Qu, T.; Lu, P.; Liu, C.; Wan, H.
2016-06-01
Western China is very susceptible to landslide hazards. As a result, landslide detection and early warning are of great importance. This work employs the SBAS (Small Baseline Subset) InSAR technique for detection and monitoring of large-scale landslides that occurred in Li County, Sichuan Province, Western China. The time-series InSAR analysis is performed using descending scenes acquired in TerraSAR-X StripMap mode since 2014 to obtain the spatial distribution of surface displacements of this giant landslide. The time-series results identify a distinct deformation zone on the landslide body with a rate of up to 150 mm/yr. The deformation acquired by the SBAS technique is validated by inclinometers in diverse boreholes of in-situ monitoring. The integration of InSAR time-series displacements and ground-based monitoring data helps to provide reliable data support for the forecasting and monitoring of large-scale landslides.
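At its core, SBAS inverts a redundant network of small-baseline interferograms, each measuring the displacement difference between two acquisition dates, into a displacement time series via least squares. The sketch below illustrates only that inversion step with invented numbers; it is not the authors' processing chain (no phase unwrapping, atmospheric filtering, or SVD rank handling).

```python
# Minimal sketch of the SBAS inversion: each interferogram constrains the
# displacement difference between its two dates; stacking interferograms
# gives a linear system solved by least squares. Date 0 is the reference.
import numpy as np

def sbas_timeseries(pairs, dphi, n_dates):
    """pairs: list of (i, j) date indices with i < j; dphi: displacement of
    date j relative to date i (same units as the output)."""
    A = np.zeros((len(pairs), n_dates - 1))
    for row, (i, j) in enumerate(pairs):
        if j > 0:
            A[row, j - 1] = 1.0
        if i > 0:
            A[row, i - 1] = -1.0
    d, *_ = np.linalg.lstsq(A, np.asarray(dphi), rcond=None)
    return np.concatenate(([0.0], d))

# Four dates, true displacements [0, 10, 25, 45] mm, redundant network:
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
dphi = [10.0, 15.0, 20.0, 25.0, 35.0]
ts = sbas_timeseries(pairs, dphi, 4)
```

The redundancy of the pair network (here five interferograms for three unknown displacements) is what lets the least-squares solution suppress noise in individual interferograms.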
NASA Astrophysics Data System (ADS)
Tarpin, Malo; Canet, Léonie; Wschebor, Nicolás
2018-05-01
In this paper, we present theoretical results on the statistical properties of stationary, homogeneous, and isotropic turbulence in incompressible flows in three dimensions. Within the framework of the non-perturbative renormalization group, we derive a closed renormalization flow equation for a generic n-point correlation (and response) function for large wave-numbers with respect to the inverse integral scale. The closure is obtained from a controlled expansion and relies on extended symmetries of the Navier-Stokes field theory. It yields the exact leading behavior of the flow equation at large wave-numbers |p_i| and for arbitrary time differences t_i in the stationary state. Furthermore, we obtain the form of the general solution of the corresponding fixed point equation, which yields the analytical form of the leading wave-number and time dependence of n-point correlation functions, for large wave-numbers and both for small t_i and in the limit t_i → ∞. At small t_i, the leading contribution at large wave-numbers is logarithmically equivalent to -α (εL)^(2/3) |∑_i t_i p_i|^2, where α is a non-universal constant, L is the integral scale, and ε is the mean energy injection rate. For the 2-point function, the (tp)^2 dependence is known to originate from the sweeping effect. The derived formula embodies the generalization of the effect of sweeping to n-point correlation functions. At large wave-numbers and large t_i, we show that the t_i^2 dependence in the leading-order contribution crosses over to a |t_i| dependence. The expression of the correlation functions in this regime was not derived before, even for the 2-point function. Both predictions can be tested in direct numerical simulations and in experiments.
NASA Astrophysics Data System (ADS)
Folsom, C. P.; Bouvier, J.; Petit, P.; Lèbre, A.; Amard, L.; Palacios, A.; Morin, J.; Donati, J.-F.; Vidotto, A. A.
2018-03-01
There is a large change in surface rotation rates of sun-like stars on the pre-main sequence and early main sequence. Since these stars have dynamo-driven magnetic fields, this implies a strong evolution of their magnetic properties over this time period. The spin-down of these stars is controlled by interactions between stellar winds and magnetic fields, thus magnetic evolution in turn plays an important role in rotational evolution. We present here the second part of a study investigating the evolution of large-scale surface magnetic fields in this critical time period. We observed stars in open clusters and stellar associations with known ages between 120 and 650 Myr, and used spectropolarimetry and Zeeman Doppler Imaging to characterize their large-scale magnetic field strength and geometry. We report 15 stars with magnetic detections here. These stars have masses from 0.8 to 0.95 M⊙, rotation periods from 0.326 to 10.6 d, and we find large-scale magnetic field strengths from 8.5 to 195 G with a wide range of geometries. We find a clear trend towards decreasing magnetic field strength with age, and a power law decrease in magnetic field strength with Rossby number. There is some tentative evidence for saturation of the large-scale magnetic field strength at Rossby numbers below 0.1, although the saturation point is not yet well defined. Comparing to younger classical T Tauri stars, we support the hypothesis that differences in internal structure produce large differences in observed magnetic fields; however, for weak-lined T Tauri stars this is less clear.
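The saturated-plus-power-law scaling described above can be written as a simple piecewise model. The numbers below (saturation field, saturation Rossby number, slope) are illustrative placeholders, not fits to the paper's sample:

```python
# Hedged sketch of an activity-rotation scaling: below a saturation Rossby
# number the large-scale field is roughly constant; above it, the field
# falls off as a power law. All parameter values are assumptions.
def b_large_scale(rossby, b_sat=180.0, ro_sat=0.1, slope=-1.4):
    """Large-scale field strength in gauss for a given Rossby number."""
    if rossby <= ro_sat:
        return b_sat                              # saturated regime
    return b_sat * (rossby / ro_sat) ** slope     # unsaturated power law

fields = [b_large_scale(ro) for ro in (0.05, 0.1, 0.5, 1.0)]
```

Fitting such a broken power law to detections like those reported here is one way the saturation point (tentatively near Ro ≈ 0.1 in the abstract) could be pinned down.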
Experimental Investigation of the Turbulent Large Scale Temporal Flow in the Wing-Body Junction.
1984-03-01
Spectral densities, coherence, and relative phase were experimentally obtained and used to determine the space-time extent of the temporal flow. (Catholic Univ. of America, Washington, DC.)
Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.
2002-01-01
Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.
Large-Scale medical image analytics: Recent methodologies, applications and Future directions.
Zhang, Shaoting; Metaxas, Dimitris
2016-10-01
Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems should be increased significantly, to a point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.
Radial variations of large-scale magnetohydrodynamic fluctuations in the solar wind
NASA Technical Reports Server (NTRS)
Burlaga, L. F.; Goldstein, M. L.
1983-01-01
Two time periods are studied for which comprehensive data coverage is available at both 1 AU using IMP-8 and ISEE-3 and beyond using Voyager 1. One of these periods is characterized by the predominance of corotating stream interactions. Relatively small scale transient flows characterize the second period. The evolution of these flows with heliocentric distance is studied using power spectral techniques. The evolution of the transient dominated period is consistent with the hypothesis of turbulent evolution including an inverse cascade of large scales. The evolution of the corotating period is consistent with the entrainment of slow streams by faster streams in a deterministic model.
Using MHD Models for Context for Multispacecraft Missions
NASA Astrophysics Data System (ADS)
Reiff, P. H.; Sazykin, S. Y.; Webster, J.; Daou, A.; Welling, D. T.; Giles, B. L.; Pollock, C.
2016-12-01
The use of global MHD models such as BATS-R-US to provide context to data from widely spaced multispacecraft mission platforms is gaining in popularity and in effectiveness. Examples are shown, primarily from the Magnetospheric Multiscale Mission (MMS) program compared to BATS-R-US. We present several examples of large-scale magnetospheric configuration changes such as tail dipolarization events and reconfigurations after a sector boundary crossing which are made much more easily understood by placing the spacecraft in the model fields. In general, the models can reproduce the large-scale changes observed by the various spacecraft but sometimes miss small-scale or rapid time changes.
Large-scale horizontal flows from SOUP observations of solar granulation
NASA Technical Reports Server (NTRS)
November, L. J.; Simon, G. W.; Tarbell, T. D.; Title, A. M.; Ferguson, S. H.
1987-01-01
Using high resolution time sequence photographs of solar granulation from the SOUP experiment on Spacelab 2, large-scale horizontal flows were observed at the solar surface. The measurement method is based upon a local spatial cross-correlation analysis. The horizontal motions have amplitudes in the range 300 to 1000 m/s. Radial outflow of granulation from a sunspot penumbra into the surrounding photosphere is a striking new discovery. Both the supergranulation pattern and cellular structures having the scale of mesogranulation are seen. The vertical flows that are inferred by continuity of mass from these observed horizontal flows have larger upflow amplitudes in cell centers than downflow amplitudes at cell boundaries.
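The local spatial cross-correlation idea behind this measurement can be sketched with a minimal integer-pixel example (an illustration in the spirit of local correlation tracking, not the SOUP analysis code): the shift that maximizes the cross-correlation between corresponding windows of two frames estimates the local horizontal displacement.

```python
# Toy local cross-correlation displacement estimate between two frames.
# Real flow mapping would use apodized windows, subpixel interpolation,
# and many tiles; here a single window and integer shifts suffice.

def best_shift(win0, win1, max_shift):
    """win0, win1: 2-D lists of equal size. Returns the (dy, dx) that
    maximises the overlap correlation of win1 against win0."""
    ny, nx = len(win0), len(win0[0])
    best, best_s = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            c = 0.0
            for y in range(max(0, -dy), min(ny, ny - dy)):
                for x in range(max(0, -dx), min(nx, nx - dx)):
                    c += win0[y][x] * win1[y + dy][x + dx]
            if best is None or c > best:
                best, best_s = c, (dy, dx)
    return best_s

# A bright feature moves one pixel to the right between frames:
f0 = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
f1 = [[0, 0, 0, 0], [0, 0, 9, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```

Dividing the recovered displacement by the frame cadence converts it into a horizontal velocity, which is how granule motions translate into the 300-1000 m/s amplitudes quoted above.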
NASA Technical Reports Server (NTRS)
Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.
1998-01-01
We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, demonstrating the linear-scaling memory and CPU requirements of the CEM. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.
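The core of the CEM is expanding a matrix function of the Hamiltonian, such as the Fermi function giving the density matrix, in Chebyshev polynomials built by a two-term recursion, which requires only matrix-vector or matrix-matrix products and no diagonalization. The dense-matrix demo below is a hedged sketch of that idea (the linear scaling in the paper comes from sparse tight-binding Hamiltonians and truncation, which this toy omits):

```python
# Toy Chebyshev expansion of a matrix function: f(H) ~ sum_k c_k T_k(Hs),
# where Hs is H rescaled to [-1, 1] and T_k obey T_{k+1} = 2 Hs T_k - T_{k-1}.
import numpy as np

def cheb_coeffs(f, n):
    """Chebyshev coefficients of f on [-1, 1] from the Chebyshev nodes."""
    k = np.arange(n)
    x = np.cos(np.pi * (k + 0.5) / n)
    fx = f(x)
    c = np.array([2.0 / n * np.sum(fx * np.cos(np.pi * j * (k + 0.5) / n))
                  for j in range(n)])
    c[0] *= 0.5
    return c

def cheb_matrix_function(H, f, n=60):
    emin, emax = np.linalg.eigvalsh(H)[[0, -1]]
    a, b = 0.5 * (emax - emin) * 1.01, 0.5 * (emax + emin)
    Hs = (H - b * np.eye(len(H))) / a        # spectrum mapped into [-1, 1]
    c = cheb_coeffs(lambda x: f(a * x + b), n)
    T0, T1 = np.eye(len(H)), Hs
    out = c[0] * T0 + c[1] * T1
    for ck in c[2:]:
        T0, T1 = T1, 2.0 * Hs @ T1 - T0      # Chebyshev recursion
        out += ck * T1
    return out

# Fermi function at mu = 0 on a tiny 4-site tight-binding chain:
H = np.diag([1.0] * 3, 1) + np.diag([1.0] * 3, -1)
fermi = lambda e: 1.0 / (1.0 + np.exp(e / 0.2))
rho = cheb_matrix_function(H, fermi, n=80)
```

For sparse H, each recursion step costs one sparse product, which is the source of the linear-scaling memory and CPU behavior benchmarked in the paper.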
Turbulent Superstructures in Rayleigh-Bénard convection at different Prandtl number
NASA Astrophysics Data System (ADS)
Schumacher, Jörg; Pandey, Ambrish; Ender, Martin; Westermann, Rüdiger; Scheel, Janet D.
2017-11-01
Large-scale patterns of the temperature and velocity field in horizontally extended cells can be considered as turbulent superstructures in Rayleigh-Bénard convection (RBC). These structures are obtained once the turbulent fluctuations are removed by a finite-time average. Their existence has been reported, for example, in Bailon-Cuba et al. This large-scale order bears a strong similarity with the well-studied patterns from the weakly nonlinear regime at lower Rayleigh number in RBC. In the present work we analyze the superstructures of RBC at Prandtl numbers between Pr = 0.005 (liquid sodium) and Pr = 7 (water). The characteristic evolution time scales, the typical spatial extension of the rolls, and the properties of the defects of the resulting superstructure patterns are analyzed. Data are obtained from well-resolved spectral element direct numerical simulations. The work is supported by the Priority Programme SPP 1881 of the Deutsche Forschungsgemeinschaft.
The Case for Modular Redundancy in Large-Scale High Performance Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L
2009-01-01
Recent investigations into resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is being used in many mission critical systems today to provide for resilience, such as aerospace and command & control systems. The primary argument against modular redundancy for resilience in HPC has always been that the capability of a HPC system, and the respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
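The availability argument can be illustrated with standard steady-state availability arithmetic (the MTTF/MTTR figures below are hypothetical, and the dual-redundancy formula assumes independent failures and repairs, a simplification of the paper's analysis):

```python
# Back-of-envelope availability arithmetic: A = MTTF / (MTTF + MTTR).
# A dual-modular-redundant (DMR) node pair is down only when both replicas
# are down simultaneously, so its unavailability is (1 - A) squared under
# an independence assumption.

def availability(mttf_h, mttr_h):
    """Steady-state availability from mean time to failure/recover (hours)."""
    return mttf_h / (mttf_h + mttr_h)

def dmr_availability(mttf_h, mttr_h):
    """Availability of a dual-redundant pair (independent failures/repairs)."""
    u = 1.0 - availability(mttf_h, mttr_h)
    return 1.0 - u * u

# A cheap, less reliable node can still be highly available under DMR:
single = availability(mttf_h=1000.0, mttr_h=10.0)
dual = dmr_availability(mttf_h=1000.0, mttr_h=10.0)
```

The squaring of the unavailability is why redundancy decouples node-level availability from the per-node MTTR, which is the core of the argument above.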
Yu, Wenya; Lv, Yipeng; Hu, Chaoqun; Liu, Xu; Chen, Haiping; Xue, Chen; Zhang, Lulu
2018-01-01
Emergency medical systems for mass casualty incidents (EMS-MCIs) are a global issue; however, such studies are extremely scarce in China, which cannot meet the requirements of a rapid decision-support system. This study aims to model EMS-MCIs in Shanghai, to improve mass casualty incident (MCI) rescue efficiency in China, and to provide a possible method for making rapid rescue decisions during MCIs. This study established a system dynamics (SD) model of EMS-MCIs using the Vensim DSS program. Intervention scenarios were designed by adjusting the scale of MCIs, the allocation of ambulances, the allocation of emergency medical staff, and the efficiency of organization and command. Mortality increased with the increasing scale of MCIs; the medical rescue capability of hospitals was relatively good, but the efficiency of organization and command was poor, and the prehospital time was too long. Mortality declined significantly when ambulances were increased and the efficiency of organization and command improved; triage and on-site first-aid time were shortened by increasing the availability of emergency medical staff. The effect was most evident when 2,000 people were involved in MCIs; however, the influence was very small at the scale of 5,000 people. The keys to decreasing the mortality of MCIs were shortening the prehospital time and improving the efficiency of organization and command. For small-scale MCIs, improving the utilization rate of health resources was important in decreasing mortality. For large-scale MCIs, increasing the number of ambulances and emergency medical professionals was the core means of decreasing prehospital time and mortality. For super-large-scale MCIs, increasing health resources was the premise.
NASA Astrophysics Data System (ADS)
Harrington, Kathleen; CLASS Collaboration
2018-01-01
The search for inflationary primordial gravitational waves and the optical depth to reionization, both through their imprint on the large angular scale correlations in the polarization of the cosmic microwave background (CMB), has created the need for high-sensitivity measurements of polarization across large fractions of the sky at millimeter wavelengths. These measurements are subject to instrumental and atmospheric 1/f noise, which has motivated the development of polarization modulators to facilitate the rejection of these large systematic effects. Variable-delay polarization modulators (VPMs) are used in the Cosmology Large Angular Scale Surveyor (CLASS) telescopes as the first element in the optical chain to rapidly modulate the incoming polarization. VPMs consist of a linearly polarizing wire grid in front of a moveable flat mirror; varying the distance between the grid and the mirror produces a changing phase shift between polarization states parallel and perpendicular to the grid, which modulates Stokes U (linear polarization at 45°) and Stokes V (circular polarization). The reflective and scalable nature of the VPM enables its placement as the first optical element in a reflecting telescope. This simultaneously allows a lock-in style polarization measurement and the separation of sky polarization from any instrumental polarization farther along in the optical chain. The Q-band CLASS VPM was the first VPM to begin observing the CMB full time, in 2016. I will present its design and characterization as well as demonstrate how modulating polarization significantly rejects atmospheric and instrumental long-time-scale noise.
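The U/V modulation produced by a varying grid-mirror distance can be sketched with an idealized phase model (a normal-incidence approximation with phase delay 4πd/λ between the two grid-aligned polarization states; this is an illustration, not the CLASS instrument model):

```python
# Hedged sketch of VPM-style modulation: a phase delay phi between
# polarizations parallel and perpendicular to the wire grid rotates power
# between Stokes U and Stokes V. Idealized normal-incidence phase model.
import math

def modulated_uv(U, V, d, wavelength):
    """Stokes (U, V) after a grid-mirror separation d (same units as
    wavelength), using the idealized delay phi = 4*pi*d/lambda."""
    phi = 4.0 * math.pi * d / wavelength
    U_out = U * math.cos(phi) - V * math.sin(phi)
    V_out = U * math.sin(phi) + V * math.cos(phi)
    return U_out, V_out

# A quarter-wave displacement (phi = pi/2) swaps U into V:
U_out, V_out = modulated_uv(1.0, 0.0, d=0.125, wavelength=1.0)
```

Sweeping d rapidly and demodulating the detected signal at the sweep frequency is what gives the lock-in style measurement the abstract describes, moving the sky polarization signal above the 1/f knee.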
NASA Astrophysics Data System (ADS)
Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank
2016-01-01
Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. 
The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
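The event-based idea mentioned above, updating synapse state only when spikes arrive by decaying it analytically over the elapsed interval, can be sketched for a single exponential trace. The time constant and spike times are illustrative; the actual BCPNN rule tracks several such coupled traces per synapse.

```python
import math

# Event-driven update of an exponentially decaying synaptic trace:
# instead of decaying the state on every simulation time-step, decay it
# analytically over the interval since the last event, then apply the
# spike increment. Tau and spike times here are illustrative.
def update_trace(z, last_t, t, tau, increment=1.0):
    """Decay trace z from last_t to t, then add a spike increment."""
    z = z * math.exp(-(t - last_t) / tau)
    return z + increment, t

z, last_t = 0.0, 0.0
for t in [10.0, 12.0, 50.0]:      # spike times in ms (assumed)
    z, last_t = update_trace(z, last_t, t, tau=20.0)
```

Because the decay has a closed form, the per-synapse cost scales with the spike rate rather than the number of time-steps, which is the saving the event-based implementation exploits.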
A cloud-based framework for large-scale traditional Chinese medical record retrieval.
Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin
2018-01-01
Electronic medical records are increasingly common in medical practice, and their secondary use has become increasingly important. Secondary use relies on the ability to retrieve complete information about desired patient populations, and effectively and accurately retrieving relevant records from large-scale medical big data is becoming a major challenge. We therefore propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Record (TCMR) retrieval. We propose a parallel index-building method and build a distributed search cluster: the former improves the performance of index building, and the latter provides highly concurrent online TCMR retrieval. A real-time multi-indexing model ensures that the latest relevant TCMRs are indexed and retrieved in real time, while a semantics-based query expansion method and a multi-factor ranking model improve retrieval quality. Finally, we implement a template-based visualization method that displays medical reports via a friendly web interface, enhancing availability and universality. In conclusion, compared with current medical record retrieval systems, our system offers advantages for improving the secondary use of large-scale traditional Chinese medical records in a cloud environment, and it is more easily integrated with existing clinical systems and usable in various scenarios. Copyright © 2017. Published by Elsevier Inc.
Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.
2002-01-01
Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
A Structure of Experienced Time
NASA Astrophysics Data System (ADS)
Havel, Ivan M.
2005-10-01
The subjective experience of time will be taken as a primary motivation for an alternative, essentially discontinuous conception of time. Two types of such experience will be discussed, one based on personal episodic memory, the other on the theoretical fine texture of experienced time below the threshold of phenomenal awareness. The former case implies a discrete structure of temporal episodes on a large scale, while the latter case suggests endowing psychological time with a granular structure on a small scale, i.e. interpreting it as a semi-ordered flow of smeared (not point-like) subliminal time grains. Only on an intermediate temporal scale would the subjectively felt continuity and fluency of time emerge. Consequently, there is no locally smooth mapping of phenomenal time onto the real number continuum. Such a model has certain advantages; for instance, it avoids counterintuitive interpretations of some neuropsychological experiments (e.g. Libet's measurement) in which the temporal order of events is crucial.
Crater size estimates for large-body terrestrial impact
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.; Housen, Kevin R.
1988-01-01
Calculating the effects of impacts leading to global catastrophes requires knowledge of the impact process at very large size scales. This information cannot be obtained directly but must be inferred from subscale physical simulations, numerical simulations, and scaling laws. Schmidt and Holsapple presented scaling laws based upon laboratory-scale impact experiments performed on a centrifuge (Schmidt, 1980; Schmidt and Holsapple, 1980). These experiments were used to develop scaling laws that were among the first to include the gravity dependence associated with increasing event size. At that time, using results of experiments in dry sand and in water to bound crater size, they recognized that more precise bounds on large-body impact crater formation could be obtained with additional centrifuge experiments in other geological media. In that previous work, simple power-law formulae were developed to relate final crater diameter to impactor size and velocity. In addition, Schmidt (1980) and Holsapple and Schmidt (1982) recognized that the energy-scaling exponent is not a universal constant but depends upon the target medium. More recently, Holsapple and Schmidt (1987) included results for non-porous materials, providing a basis for estimating crater formation kinematics and final crater size. A revised set of scaling relationships for all crater parameters of interest is presented, including results for various target media and the kinematics of formation. Particular attention is given to possible limits brought about by very large impactors.
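The power-law, pi-group form of such gravity-scaling relationships can be sketched as follows. The coefficient and exponent below are illustrative placeholders, not the fitted Schmidt-Holsapple values for any particular target material.

```python
# Pi-group gravity-scaling sketch for final crater size. C and beta are
# illustrative placeholders, not fitted values for any target material.
def crater_diameter(a, U, g=9.8, rho_i=3000.0, rho_t=2700.0,
                    C=1.6, beta=0.22):
    """Final crater diameter from impactor radius a (m) and speed U (m/s)."""
    m = rho_i * (4.0 / 3.0) * 3.14159265 * a ** 3  # impactor mass
    pi2 = g * a / U ** 2                           # gravity-scaled size
    pi_D = C * pi2 ** (-beta)                      # scaled crater diameter
    return pi_D * (m / rho_t) ** (1.0 / 3.0)

small = crater_diameter(a=50.0, U=20e3)
large = crater_diameter(a=5000.0, U=20e3)
```

Because pi2 grows with impactor size, the effective diameter exponent is below unity (here D ∝ a^(1-beta)), which is exactly the departure from simple energy scaling that the centrifuge experiments quantified.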
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tonnesen, Stephanie; Cen, Renyue, E-mail: stonnes@gmail.com, E-mail: cen@astro.princeton.edu
2015-10-20
The connection between dark matter halos and galactic baryons is often neither well constrained nor well resolved in cosmological hydrodynamical simulations. Thus, halo occupation distribution models that assign galaxies to halos based on halo mass are frequently used to interpret clustering observations, even though it is well known that the assembly history of dark matter halos is related to their clustering. In this paper we use high-resolution hydrodynamical cosmological simulations to compare the halo and stellar mass growth of galaxies in a large-scale overdensity to those in a large-scale underdensity (on scales of about 20 Mpc). The simulation reproduces assembly bias, in which halos have earlier formation times in overdense environments than in underdense regions. We find that the ratio of stellar mass to halo mass is larger in overdense regions in central galaxies residing in halos with masses between 10^11 and 10^12.9 M_⊙. When we force the local density (within 2 Mpc) at z = 0 to be the same for galaxies in the large-scale over- and underdensities, we find the same results. We posit that this difference can be explained by a combination of earlier formation times, more interactions at early times with neighbors, and more filaments feeding galaxies in overdense regions. This result puts the standard practice of assigning stellar mass to halos based only on their mass, rather than considering their larger environment, into question.
A mesostructured Y zeolite as a superior FCC catalyst--lab to refinery.
García-Martínez, Javier; Li, Kunhao; Krishnaiah, Gautham
2012-12-18
A mesostructured Y zeolite was prepared by a surfactant-templated process at the commercial scale and tested in a refinery, showing superior hydrothermal stability and catalytic cracking selectivity, which demonstrates, for the first time, the promising future of mesoporous zeolites in large scale industrial applications.
Drug Use Disorder (DUD) Questionnaire: Scale Development and Validation
ERIC Educational Resources Information Center
Scherer, Michael; Furr-Holden, C. Debra; Voas, Robert B.
2013-01-01
Background: Despite the ample interest in the measurement of substance abuse and dependence, obtaining biological samples from participants as a means to validate a scale is considered time and cost intensive and is, subsequently, largely overlooked. Objectives: To report the psychometric properties of the drug use disorder (DUD) questionnaire…
Improving crop condition monitoring at field scale by using optimal Landsat and MODIS images
USDA-ARS?s Scientific Manuscript database
Satellite remote sensing data at coarse resolution (kilometers) have been widely used in monitoring crop condition for decades. However, crop condition monitoring at field scale requires high resolution data in both time and space. Although a large number of remote sensing instruments with different...
Exclusively Visual Analysis of Classroom Group Interactions
ERIC Educational Resources Information Center
Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric
2016-01-01
Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data…
Identifying large scale structures at 1 AU using fluctuations and wavelets
NASA Astrophysics Data System (ADS)
Niembro, T.; Lara, A.
2016-12-01
The solar wind (SW) is inhomogeneous and dominated by two types of flows: one quasi-stationary and one related to large-scale transients (such as coronal mass ejections and co-rotating interaction regions). SW inhomogeneities can be studied as fluctuations characterized by a wide range of length and time scales. We are interested in the characteristic fluctuations caused by large-scale transient events. To do so, we define the vector space F with the normalized moving monthly/annual deviations as the orthogonal basis. We then compute the norm in this space of the fluctuations of the solar wind parameters (velocity, magnetic field, density, and temperature) using WIND data from August 1992 to August 2015. This norm carries important information about the presence of a large-scale disturbance in the solar wind, and by applying a wavelet transform to it we can determine, without subjectivity, the duration of the compression regions of these large transient structures and, moreover, identify whether a structure corresponds to a single or a complex (merged) event. With this method we have automatically detected most of the events identified and published by other authors.
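The norm-of-normalized-deviations idea can be sketched in a few lines. The window length, the synthetic signals, and the injected disturbance below are all illustrative assumptions, not WIND data or the authors' exact basis construction.

```python
import numpy as np

# Sketch of the fluctuation-norm idea: for each parameter, take
# deviations from a running mean, normalize by the running standard
# deviation, and combine the normalized deviations into one norm.
def running_stats(x, w):
    kernel = np.ones(w) / w
    mean = np.convolve(x, kernel, mode="same")
    var = np.convolve((x - mean) ** 2, kernel, mode="same")
    return mean, np.sqrt(var)

def fluctuation_norm(params, w=1000):
    devs = []
    for x in params:
        m, s = running_stats(x, w)
        devs.append((x - m) / np.where(s > 0, s, 1.0))
    return np.sqrt(np.sum(np.array(devs) ** 2, axis=0))

rng = np.random.default_rng(0)
quiet = rng.normal(0, 1, 3000)                 # stand-in for a SW parameter
v = quiet.copy()
v[1500:1600] += 8.0                            # injected large-scale event
norm = fluctuation_norm([v, quiet, quiet, quiet])
```

The norm stays near its background level in quiet intervals and rises over the injected disturbance; in the paper, a wavelet transform of this norm is then used to bound the disturbance duration objectively.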
NASA Astrophysics Data System (ADS)
Yue, X.; Wang, W.; Schreiner, W. S.; Kuo, Y. H.; Lei, J.; Liu, J.; Burns, A. G.; Zhang, Y.; Zhang, S.
2015-12-01
Based on slant total electron content (TEC) observations made by ~10 satellites and ~450 ground IGS GNSS stations, we constructed a 4-D ionospheric electron density reanalysis of the March 17, 2013 geomagnetic storm. Four main large-scale ionospheric disturbances are identified from the reanalysis: (1) the positive storm during the initial phase; (2) the SED (storm enhanced density) structure in both the northern and southern hemispheres; (3) the large positive storm in the main phase; and (4) the significant negative storm at middle and low latitudes during the recovery phase. We then ran the NCAR-TIEGCM model with the Heelis electric potential empirical model as polar input. The TIEGCM reproduces three of the four large-scale structures (all except the SED) very well. We further analyzed the altitudinal variations of these large-scale disturbances and found several interesting features, such as the altitude variation of the SED and the rotation of the positive/negative storm phase with local time. These structures could not be identified clearly with traditionally used data sources, which have either no global coverage or no vertical resolution. Drivers such as the neutral wind/density and electric field from the TIEGCM simulations are also analyzed to self-consistently explain the identified disturbance features.
Phase transitions triggered by quantum fluctuations in the inflationary universe
NASA Technical Reports Server (NTRS)
Nagasawa, Michiyasu; Yokoyama, Junichi
1991-01-01
The dynamics of a second-order phase transition during inflation, which is induced by time-variation of spacetime curvature, is studied as a natural mechanism to produce topological defects of typical grand unification scales such as cosmic strings or global textures. It is shown that their distribution is almost scale-invariant with small- and large-scale cutoffs. Also discussed is how these cutoffs are given.
Validity of Scores for a Developmental Writing Scale Based on Automated Scoring
ERIC Educational Resources Information Center
Attali, Yigal; Powers, Donald
2009-01-01
A developmental writing scale for timed essay-writing performance was created on the basis of automatically computed indicators of writing fluency, word choice, and conventions of standard written English. In a large-scale data collection effort that involved a national sample of more than 12,000 students from 4th, 6th, 8th, 10th, and 12th grade,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Duane L; Pouquet, Dr. Annick; Mininni, Dr. Pablo D.
2015-01-01
We report results on rotating stratified turbulence in the absence of forcing, with large-scale isotropic initial conditions, using direct numerical simulations computed on grids of up to $4096^3$ points. The Reynolds and Froude numbers are respectively equal to $Re = 5.4\times 10^4$ and $Fr = 0.0242$. The ratio of the Brunt-Väisälä to the inertial wave frequency, $N/f$, is taken to be equal to 5, a choice appropriate to model the dynamics of the southern abyssal ocean at mid latitudes. This gives a global buoyancy Reynolds number $R_B = Re\,Fr^2 = 32$, a value sufficient for some isotropy to be recovered in the small scales beyond the Ozmidov scale, but still moderate enough that the intermediate scales where waves are prevalent are well resolved. We concentrate on the large-scale dynamics and confirm that the Froude number based on a typical vertical length scale is of order unity, with strong gradients in the vertical. Two characteristic scales emerge from this computation and are identified from sharp variations in the spectral distribution of either total energy or helicity. A spectral break is also observed at a scale at which the partition of energy between the kinetic and potential modes changes abruptly, and beyond which a Kolmogorov-like spectrum recovers. Large slanted layers are ubiquitous in the flow in the velocity and temperature fields, and a large-scale enhancement of energy is also observed, directly attributable to the effect of rotation.
NASA Astrophysics Data System (ADS)
Tsai, Kuang-Jung; Chiang, Jie-Lun; Lee, Ming-Hsi; Chen, Yie-Ruey
2017-04-01
Analysis of the Critical Rainfall Value for Predicting Large-Scale Landslides Caused by Heavy Rainfall in Taiwan. More than 2,900 mm of accumulated rainfall was recorded within three consecutive days during Typhoon Morakot in August 2009. This heavy rainfall event induced very serious landslides and sediment-related disasters. A satellite image analysis project conducted by the Soil and Water Conservation Bureau after Morakot identified more than 10,904 landslide sites with a total sliding area of 18,113 ha. All severe sediment-related disaster areas were also characterized by disaster type, scale, topography, major bedrock formations, and geologic structures during the extremely heavy rainfall events that occurred in southern Taiwan. Characteristics and mechanisms of large-scale landslides were compiled through field investigation integrated with GPS/GIS/RS techniques. To decrease the risk of large-scale landslides on slope land, a slope-land conservation strategy and a critical rainfall database should be established and implemented as soon as possible. Meanwhile, establishing a critical rainfall value for predicting large-scale landslides induced by heavy rainfall has become an important issue of serious concern to the government and the people of Taiwan.
This research addresses the mechanisms of large-scale landslides, rainfall frequency analysis, sediment budget estimation, and river hydraulic analysis under the extreme climate conditions of the past 10 years. The results are intended to serve as a warning system for predicting large-scale landslides in southern Taiwan. Keywords: heavy rainfall, large-scale landslides, critical rainfall value
Impact phenomena as factors in the evolution of the Earth
NASA Technical Reports Server (NTRS)
Grieve, R. A. F.; Parmentier, E. M.
1984-01-01
It is estimated that 30 to 200 large impact basins could have been formed on the early Earth. These large impacts may have resulted in extensive volcanism and enhanced endogenic geologic activity over large areas. Initial modelling of the thermal and subsidence history of large terrestrial basins indicates that they created geologic and thermal anomalies which lasted for geologically significant times. The role of large-scale impact in the biological evolution of the Earth has been highlighted by the discovery of siderophile anomalies at the Cretaceous-Tertiary boundary and associated with North American microtektites. Although in neither case has an associated crater been identified, the observations are consistent with the deposition of projectile-contaminated high-speed ejecta from major impact events. Consideration of impact processes reveals a number of mechanisms by which large-scale impact may induce extinctions.
Large Eddy Simulation of a Turbulent Jet
NASA Technical Reports Server (NTRS)
Webb, A. T.; Mansour, Nagi N.
2001-01-01
Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.
Kyriacou, Demetrios N; Dobrez, Debra; Parada, Jorge P; Steinberg, Justin M; Kahn, Adam; Bennett, Charles L; Schmitt, Brian P
2012-09-01
Rapid public health response to a large-scale anthrax attack would reduce overall morbidity and mortality. However, there is uncertainty about the optimal cost-effective response strategy based on timing of intervention, public health resources, and critical care facilities. We conducted a decision analytic study to compare response strategies to a theoretical large-scale anthrax attack on the Chicago metropolitan area beginning either Day 2 or Day 5 after the attack. These strategies correspond to the policy options set forth by the Anthrax Modeling Working Group for population-wide responses to a large-scale anthrax attack: (1) postattack antibiotic prophylaxis, (2) postattack antibiotic prophylaxis and vaccination, (3) preattack vaccination with postattack antibiotic prophylaxis, and (4) preattack vaccination with postattack antibiotic prophylaxis and vaccination. Outcomes were measured in costs, lives saved, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios (ICERs). We estimated that postattack antibiotic prophylaxis of all 1,390,000 anthrax-exposed people beginning on Day 2 after attack would result in 205,835 infected victims, 35,049 fulminant victims, and 28,612 deaths. Only 6,437 (18.5%) of the fulminant victims could be saved with the existing critical care facilities in the Chicago metropolitan area. Mortality would increase to 69,136 if the response strategy began on Day 5. Including postattack vaccination with antibiotic prophylaxis of all exposed people reduces mortality and is cost-effective for both Day 2 (ICER=$182/QALY) and Day 5 (ICER=$1,088/QALY) response strategies. Increasing ICU bed availability significantly reduces mortality for all response strategies. We conclude that postattack antibiotic prophylaxis and vaccination of all exposed people is the optimal cost-effective response strategy for a large-scale anthrax attack. 
Our findings support the US government's plan to provide antibiotic prophylaxis and vaccination for all exposed people within 48 hours of the recognition of a large-scale anthrax attack. Future policies should consider expanding critical care capacity to allow for the rescue of more victims.
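The cost-effectiveness comparisons above rest on the incremental cost-effectiveness ratio. A minimal sketch with hypothetical numbers (not the study's actual cost and QALY totals):

```python
# Incremental cost-effectiveness ratio: extra cost per QALY gained.
# The figures below are hypothetical, not the study's totals.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """(Delta cost) / (Delta QALYs) for the new strategy vs. the old."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# hypothetical: adding vaccination costs $40M more and gains 10,000 QALYs
r = icer(240e6, 1_010_000, 200e6, 1_000_000)
```

A strategy with an ICER of a few hundred to a few thousand dollars per QALY, as reported above, is well below common willingness-to-pay thresholds, which is why the combined prophylaxis-plus-vaccination strategy is judged cost-effective.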
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate the emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, their main limitation is the high computational cost of large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal-code subregion in turn; the second processed the entire population simultaneously. The parallelizable SRA saved computational time while producing comparable results in a province-wide simulation, and the same method can be generalized to a country-wide simulation. This parallel algorithm thus makes large-scale ABM simulation feasible with limited computational resources.
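The core of the SRA's parallelizability, processing each subregion's agents independently so regions can be dispatched to separate workers, can be sketched as follows. The transmission rule, parameters, and populations are illustrative, not the paper's calibrated hepatitis C model.

```python
import random

# Region-decomposition sketch: agents are grouped by subregion and each
# subregion is stepped independently, so a Pool.map could parallelize
# the per-region calls. Illustrative transmission rule only.
def step_region(agents, beta=0.3, rng=None):
    """One within-region transmission step."""
    rng = rng or random.Random(0)
    infected = sum(a["infected"] for a in agents)
    p = beta * infected / max(len(agents), 1)
    for a in agents:
        if not a["infected"] and rng.random() < p:
            a["infected"] = True
    return agents

regions = {
    "R1": [{"infected": i == 0} for i in range(100)],  # one index case
    "R2": [{"infected": False} for _ in range(100)],
}
# independent per-region processing (a process pool would parallelize this)
regions = {name: step_region(ag, rng=random.Random(42))
           for name, ag in regions.items()}
counts = {name: sum(a["infected"] for a in ag)
          for name, ag in regions.items()}
```

In a full SRA, cross-region contacts would be handled at region boundaries as the processing window slides; the sketch shows only the independent interior step that makes parallel execution possible.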
NASA Astrophysics Data System (ADS)
Pan, Zhenying; Yu, Ye Feng; Valuckas, Vytautas; Yap, Sherry L. K.; Vienne, Guillaume G.; Kuznetsov, Arseniy I.
2018-05-01
Cheap large-scale fabrication of ordered nanostructures is important for multiple applications in photonics and biomedicine including optical filters, solar cells, plasmonic biosensors, and DNA sequencing. Existing methods are either expensive or have strict limitations on the feature size and fabrication complexity. Here, we present a laser-based technique, plasmonic nanoparticle lithography, which is capable of rapid fabrication of large-scale arrays of sub-50 nm holes on various substrates. It is based on near-field enhancement and melting induced under ordered arrays of plasmonic nanoparticles, which are brought into contact or in close proximity to a desired material and acting as optical near-field lenses. The nanoparticles are arranged in ordered patterns on a flexible substrate and can be attached and removed from the patterned sample surface. At optimized laser fluence, the nanohole patterning process does not create any observable changes to the nanoparticles and they have been applied multiple times as reusable near-field masks. This resist-free nanolithography technique provides a simple and cheap solution for large-scale nanofabrication.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...
2017-01-28
Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time on a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared-memory and (process-level) distributed-memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have applied to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
The importance of antipersistence for traffic jams
NASA Astrophysics Data System (ADS)
Krause, Sebastian M.; Habel, Lars; Guhr, Thomas; Schreckenberg, Michael
2017-05-01
Universal characteristics of road networks and traffic patterns can help to forecast and control traffic congestion. The antipersistence of traffic flow time series has been found for many data sets, but its relevance for congestion has been overlooked. Based on empirical data from motorways in Germany, we study how the antipersistence of traffic flow time series impacts the duration of traffic congestion on a wide range of time scales. We find a large number of short-lasting traffic jams, which implies a large risk of rear-end collisions.
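Antipersistence means successive changes tend to reverse each other, which is why excursions (jams) tend to be short-lived. A minimal numerical illustration of the signature, on synthetic data rather than the German motorway measurements: the increments of an uncorrelated level series have a lag-1 autocorrelation near -0.5, the textbook antipersistent value.

```python
import numpy as np

def lag1_increment_autocorr(x):
    """Lag-1 autocorrelation of the increments of series x.
    Negative values indicate antipersistence: an up-move tends to be
    followed by a down-move."""
    d = np.diff(x)
    d = d - d.mean()
    return float(np.dot(d[:-1], d[1:]) / np.dot(d, d))

rng = np.random.default_rng(0)
flow = rng.standard_normal(10_000)   # synthetic uncorrelated "flow" levels
rho = lag1_increment_autocorr(flow)
# Increments of an uncorrelated series are antipersistent: rho is near -0.5.
```

A persistent (trending) series would instead give a value near zero or positive; empirical traffic-flow series sit in between, and the abstract's point is that the negative side of this statistic matters for jam duration.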
Advanced computer architecture for large-scale real-time applications.
DOT National Transportation Integrated Search
1973-04-01
Air traffic control automation is identified as a crucial problem which provides a complex, real-time computer application environment. A novel computer architecture in the form of a pipeline associative processor is conceived to achieve greater perf...
Microfilament-Eruption Mechanism for Solar Spicules
NASA Technical Reports Server (NTRS)
Sterling, Alphonse C.; Moore, Ronald L.
2017-01-01
Recent studies indicate that solar coronal jets result from eruption of small-scale filaments, or "minifilaments" (Sterling et al. 2015, Nature, 523, 437; Panesar et al. ApJL, 832L, 7). In many respects, these coronal jets appear to be small-scale versions of long-recognized large-scale solar eruptions that are often accompanied by eruption of a large-scale filament and that produce solar flares and coronal mass ejections (CMEs). In coronal jets, the jet-base bright point (JBP), often observed to accompany the jet and sitting on the magnetic neutral line from which the minifilament erupts, corresponds to the solar flare of larger-scale eruptions that occurs at the neutral line from which the large-scale filament erupts. Large-scale eruptions are relatively uncommon (approximately 1 per day) and involve relatively large erupting filaments (approximately 10^5 kilometers long). Coronal jets are more common (approximately hundreds per day), but occur from erupting minifilaments of smaller size (approximately 10^4 kilometers long). It is known that solar spicules are much more frequent (many millions per day) than coronal jets. Just as coronal jets are small-scale versions of large-scale eruptions, here we suggest that solar spicules might in turn be small-scale versions of coronal jets; we postulate that the spicules are produced by eruptions of "microfilaments" of length comparable to the width of observed spicules (approximately 300 kilometers). A plot of the estimated number of the three respective phenomena (flares/CMEs, coronal jets, and spicules) occurring on the Sun at a given time, against the average sizes of erupting filaments, minifilaments, and the putative microfilaments, results in a size distribution that can be fitted with a power law within the estimated uncertainties.
The counterparts of the flares of large-scale eruptions and the JBPs of jets might be weak, pervasive, transient brightenings observed in Hinode/CaII images, and the production of spicules by microfilament eruptions might explain why spicules spin, as do coronal jets. The expected small-scale neutral lines from which the microfilaments would be expected to erupt would be difficult to detect reliably with current instrumentation, but might be apparent with instrumentation of the near future. A full report on this work appears in Sterling and Moore 2016, ApJL, 829, L9.
NASA Astrophysics Data System (ADS)
Kirkil, Gokhan; Constantinescu, George
2014-11-01
Large Eddy Simulation is used to investigate the structure of the laminar horseshoe vortex (HV) system and the dynamics of the necklace vortices as they fold around the base of a circular cylinder mounted on the flat bed of an open channel, for Reynolds numbers defined with the cylinder diameter, D, smaller than 4,460. The study concentrates on the analysis of the structure of the HV system in the periodic breakaway sub-regime, which is characterized by the formation of three main necklace vortices. For the relatively shallow flow conditions considered in this study (H/D ≈ 1, where H is the channel depth), at times the disturbances induced by the legs of the necklace vortices do not allow the separated shear layers (SSLs) on the two sides of the cylinder to interact in a way that lets the vorticity redistribution mechanism form a new wake roller. As a result, the shedding of large-scale rollers in the turbulent wake is suppressed for relatively long periods of time. Simulation results show that the wake structure changes randomly between time intervals when large-scale rollers are forming and are convected in the wake (von Karman regime) and time intervals when the rollers do not form.
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola
1992-01-01
A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Omega_0 of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low-order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.
Scaling laws of strategic behavior and size heterogeneity in agent dynamics
NASA Astrophysics Data System (ADS)
Vaglica, Gabriella; Lillo, Fabrizio; Moro, Esteban; Mantegna, Rosario N.
2008-03-01
We consider the financial market as a model system and study empirically how agents strategically adjust the properties of large orders in order to meet their preference and minimize their impact. We quantify this strategic behavior by detecting scaling relations between the variables characterizing the trading activity of different institutions. We also observe power-law distributions in the investment time horizon, in the number of transactions needed to execute a large order, and in the traded value exchanged by large institutions, and we show that heterogeneity of agents is a key ingredient for the emergence of some aggregate properties characterizing this complex system.
Friction Stir Welding of Large Scale Cryogenic Tanks for Aerospace Applications
NASA Technical Reports Server (NTRS)
Russell, Carolyn; Ding, R. Jeffrey
1998-01-01
The Marshall Space Flight Center (MSFC) has established a facility for the joining of large-scale aluminum cryogenic propellant tanks using the friction stir welding process. Longitudinal welds, approximately five meters in length, have been made by retrofitting an existing vertical fusion weld system, designed to fabricate tank barrel sections ranging from two to ten meters in diameter. The structural design requirements of the tooling, clamping and travel system will be described in this presentation along with process controls and real-time data acquisition developed for this application. The approach to retrofitting other large welding tools at MSFC with the friction stir welding process will also be discussed.
Satellite orbit and data sampling requirements
NASA Technical Reports Server (NTRS)
Rossow, William
1993-01-01
Climate forcings and feedbacks vary over a wide range of time and space scales. The operation of non-linear feedbacks can couple variations at widely separated time and space scales and cause climatological phenomena to be intermittent. Consequently, monitoring of global, decadal changes in climate requires global observations that cover the whole range of space-time scales and are continuous over several decades. The sampling of smaller space-time scales must have sufficient statistical accuracy to measure the small changes in the forcings and feedbacks anticipated in the next few decades, while continuity of measurements is crucial for unambiguous interpretation of climate change. Shorter records of monthly and regional (500-1000 km) measurements with similar accuracies can also provide valuable information about climate processes, when 'natural experiments' such as large volcanic eruptions or El Ninos occur. In this section existing satellite datasets and climate model simulations are used to test the satellite orbits and sampling required to achieve accurate measurements of changes in forcings and feedbacks at monthly frequency and 1000 km (regional) scale.
Scaling properties of marathon races
NASA Astrophysics Data System (ADS)
Alvarez-Ramirez, Jose; Rodriguez, Eduardo
2006-06-01
Some regularities in popular marathon races are identified in this paper. It is found that for high-performance participants (i.e., racing times in the range [2:15,3:15] h), the average velocity as a function of the marathoner's ranking behaves as a power law, which may suggest the presence of critical phenomena. Elite marathoners with racing times below 2:15 h can be considered as outliers with respect to this behavior. For the main marathon pack (i.e., racing times in the range [3:00,6:00] h), the average velocity as a function of the marathoner's ranking behaves linearly. For these racing times, the interpersonal velocity, defined as the difference of velocities between consecutive runners, displays a continuum of scaling behavior ranging from uncorrelated noise at small scales to correlated 1/f-noise at large scales. Indeed, 1/f-noise is characterized by correlations extended over a wide range of scales, a clear indication of some sort of cooperative effect.
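The rank-velocity power law described above can be estimated by a log-log regression. The sketch below uses synthetic data with an assumed exponent, not race results; the constants 5.6 m/s and 0.07 are invented for illustration.

```python
import numpy as np

# Hypothetical illustration: if average velocity falls off with finishing
# rank as v(r) = c * r**(-beta), the exponent beta is minus the slope of
# a log-log regression of velocity on rank.
rank = np.arange(1, 2001)
beta_true = 0.07                  # assumed exponent, for illustration only
v = 5.6 * rank ** (-beta_true)    # synthetic average velocities (m/s)

slope, intercept = np.polyfit(np.log(rank), np.log(v), 1)
beta_est = -slope
print(f"estimated exponent: {beta_est:.3f}")   # → estimated exponent: 0.070
```

With real race data the points would scatter around the line, and the elite outliers below 2:15 h mentioned in the abstract would show up as systematic deviations at the smallest ranks.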
Unification of small and large time scales for biological evolution: deviations from power law.
Chowdhury, Debashish; Stauffer, Dietrich; Kunwar, Ambarish
2003-02-14
We develop a unified model that describes both "micro" and "macro" evolutions within a single theoretical framework. The ecosystem is described as a dynamic network; the population dynamics at each node of this network describes the "microevolution" over ecological time scales (i.e., birth, ageing, and natural death of individual organisms), while the appearance of new nodes, the slow changes of the links, and the disappearance of existing nodes accounts for the "macroevolution" over geological time scales (i.e., the origination, evolution, and extinction of species). In contrast to several earlier claims in the literature, we observe strong deviations from power law in the regime of long lifetimes.
Predicting Regional Drought on Sub-Seasonal to Decadal Time Scales
NASA Technical Reports Server (NTRS)
Schubert, Siegfried; Wang, Hailan; Suarez, Max; Koster, Randal
2011-01-01
Drought occurs on a wide range of time scales and within a variety of different types of regional climates. It is driven foremost by an extended period of reduced precipitation, but it is the impacts on such quantities as soil moisture, streamflow, and crop yields that are often most important from a user's perspective. While recognizing that different users have different needs for drought information, it is nevertheless important to understand that progress in predicting drought and satisfying such user needs largely hinges on our ability to improve predictions of precipitation. This talk reviews our current understanding of the physical mechanisms that drive precipitation variations on subseasonal to decadal time scales, and the implications for predictability and prediction skill. Examples are given highlighting the phenomena and mechanisms controlling precipitation on monthly (e.g., stationary Rossby waves, soil moisture), seasonal (ENSO), and decadal time scales (PDO and AMO).
Pattern formation in individual-based systems with time-varying parameters
NASA Astrophysics Data System (ADS)
Ashcroft, Peter; Galla, Tobias
2013-12-01
We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.
Effects on aquatic and human health due to large scale bioenergy crop expansion.
Love, Bradley J; Einheuser, Matthew D; Nejadhashemi, A Pouyan
2011-08-01
In this study, the environmental impacts of large-scale bioenergy crops were evaluated using the Soil and Water Assessment Tool (SWAT). Daily pesticide concentration data for a study area consisting of four large watersheds located in Michigan (totaling 53,358 km²) were estimated over a six-year period (2000-2005). Model outputs for atrazine, bromoxynil, glyphosate, metolachlor, pendimethalin, sethoxydim, trifluralin, and 2,4-D were used to predict the possible long-term implications that large-scale bioenergy crop expansion may have on the bluegill (Lepomis macrochirus) and humans. Threshold toxicity levels were obtained for the bluegill and for human consumption for all pesticides being evaluated through an extensive literature review. Model output was compared to each toxicity level for the suggested exposure time (96-hour for bluegill and 24-hour for humans). The results suggest that traditional intensive row crops such as canola, corn, and sorghum may negatively impact aquatic life and, in most cases, affect safe drinking water availability. The continuous corn rotation, the rotation most representative of current agricultural practices for a starch-based ethanol economy, delivers the highest concentrations of glyphosate to the stream. In addition, continuous canola contributed a concentration of 1.11 ppm of trifluralin, a highly toxic herbicide, which is 8.7 times the 96-hour ecotoxicity threshold for bluegills and 21 times the safe drinking water level. Also during the period of study, continuous corn resulted in the impairment of 541,152 km of stream. However, there is promise with second-generation lignocellulosic bioenergy crops such as switchgrass, which resulted in a 171,667 km reduction in the total stream length exceeding the human threshold criteria, as compared to the base scenario.
Results of this study may be useful in determining the suitability of bioenergy crop rotations and aid in decision making regarding the adoption of large-scale bioenergy cropping systems.
Ip, Ryan H L; Li, W K; Leung, Kenneth M Y
2013-09-15
Large-scale environmental remediation projects applied to sea water always involve large amounts of capital investment. Rigorous effectiveness evaluations of such projects are therefore necessary and essential for policy review and future planning. This study aims at investigating the effectiveness of environmental remediation using three different Seemingly Unrelated Regression (SUR) time series models with intervention effects: Model (1) assuming no correlation within or across variables, Model (2) assuming no correlation across variables but allowing correlations within a variable across different sites, and Model (3) allowing all possible correlations among variables (i.e., an unrestricted model). The results suggested that the unrestricted SUR model is the most reliable one, consistently having the smallest variations of the estimated model parameters. We discuss our results with reference to marine water quality management in Hong Kong while bringing managerial issues into consideration.
Towards the computation of time-periodic inertial range dynamics
NASA Astrophysics Data System (ADS)
van Veen, L.; Vela-Martín, A.; Kawahara, G.
2018-04-01
We explore the possibility of computing simple invariant solutions, like travelling waves or periodic orbits, in Large Eddy Simulation (LES) on a periodic domain with constant external forcing. The absence of material boundaries and the simple forcing mechanism make this system a comparatively simple target for the study of turbulent dynamics through invariant solutions. We show that, in spite of the application of eddy viscosity, the computations are still rather challenging and must be performed on GPUs rather than conventional CPUs. We investigate the onset of turbulence in this system by means of bifurcation analysis, and present a long-period, large-amplitude unstable periodic orbit that is filtered from a turbulent time series. Although this orbit is computed on a coarse grid, with only a small separation between the integral scale and the LES filter length, the periodic dynamics seem to capture a regeneration process of the large-scale vortices.
NASA Astrophysics Data System (ADS)
Takasaki, Koichi
This paper presents a program for multidisciplinary optimization and identification of nonlinear models of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix, similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System), which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).
Time Discounting and Credit Market Access in a Large-Scale Cash Transfer Programme.
Handa, Sudhanshu; Martorano, Bruno; Halpern, Carolyn; Pettifor, Audrey; Thirumurthy, Harsha
2016-06-01
Time discounting is thought to influence decision-making in almost every sphere of life, including personal finances, diet, exercise and sexual behavior. In this article we provide evidence on whether a national poverty alleviation program in Kenya can affect inter-temporal decisions. We administered a preferences module as part of a large-scale impact evaluation of the Kenyan Government's Cash Transfer for Orphans and Vulnerable Children. Four years into the program we find that individuals in the treatment group are only marginally more likely to wait for future money, due in part to the erosion of the value of the transfer by inflation. However among the poorest households for whom the value of transfer is still relatively large we find significant program effects on the propensity to wait. We also find strong program effects among those who have access to credit markets though the program itself does not improve access to credit.
Option pricing from wavelet-filtered financial series
NASA Astrophysics Data System (ADS)
de Almeida, V. T. X.; Moriconi, L.
2012-10-01
We perform wavelet decomposition of high frequency financial time series into large and small time scale components. Taking the FTSE100 index as a case study, and working with the Haar basis, it turns out that the small scale component defined by most (≃99.6%) of the wavelet coefficients can be neglected for the purpose of option premium evaluation. The relevance of the hugely compressed information provided by low-pass wavelet-filtering is related to the fact that the non-gaussian statistical structure of the original financial time series is essentially preserved for expiration times which are larger than just one trading day.
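The low-pass filtering described in this abstract would normally be done with a wavelet library (e.g. PyWavelets' `wavedec`/`waverec`); the minimal hand-rolled sketch below, on a toy dyadic-length series rather than FTSE100 data, shows the essential operation of keeping only the Haar approximation (large-time-scale) component and discarding the detail coefficients.

```python
import numpy as np

def haar_lowpass(x, levels):
    """Keep only the large-scale (approximation) part of a Haar
    decomposition: 'levels' rounds of pairwise averaging, then
    piecewise-constant upsampling back to the original length.
    Equivalent to zeroing all Haar detail coefficients.
    Requires len(x) divisible by 2**levels."""
    a = np.asarray(x, float)
    for _ in range(levels):
        a = 0.5 * (a[0::2] + a[1::2])   # drop the small-scale details
    for _ in range(levels):
        a = np.repeat(a, 2)             # reconstruct at original length
    return a

x = np.arange(8, dtype=float)           # toy "price" series, length 2**3
smooth = haar_lowpass(x, 2)             # block means over windows of 4
print(smooth)
```

The compression ratio in the paper (keeping ~0.4% of coefficients) corresponds to many more decomposition levels on a much longer high-frequency series; the mechanics are the same.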
A space-time multifractal analysis on radar rainfall sequences from central Poland
NASA Astrophysics Data System (ADS)
Licznar, Paweł; Deidda, Roberto
2014-05-01
Rainfall downscaling is among the most important tasks of modern hydrology. Especially from the perspective of urban hydrology, there is a real need for the development of practical tools for generating possible rainfall scenarios. Rainfall scenarios at fine temporal scales, down to single minutes, are indispensable as inputs for hydrological models. The adoption of a probabilistic philosophy of drainage system design and functioning leads to widespread application of hydrodynamic models in engineering practice. However, models like these, covering large areas, cannot be supplied with only uncorrelated point-rainfall time series. They should rather be supplied with space-time rainfall scenarios displaying the statistical properties of local natural rainfall fields. Implementation of a Space-Time Rainfall (STRAIN) model for hydrometeorological applications in Polish conditions, such as rainfall downscaling from the large scales of meteorological models to the scale of interest for rainfall-runoff processes, is the long-term aim of our research. As an introductory part of our study, we verify the veracity of the following STRAIN model assumptions: rainfall fields are isotropic and statistically homogeneous in space; self-similarity holds (so that, after having rescaled the time by the advection velocity, rainfall is a fully homogeneous and isotropic process in the space-time domain); statistical properties of rainfall are characterized by an "a priori" known multifractal behavior. We conduct a space-time multifractal analysis on radar rainfall sequences selected from the Polish national radar system POLRAD. Radar rainfall sequences covering an area of 256 km x 256 km, with an original spatial resolution of 2 km x 2 km and a temporal resolution of 15 minutes, are used as study material. Attention is mainly focused on the most severe summer convective rainfalls. It is shown that space-time rainfall can be considered, to a good approximation, to be a self-similar multifractal process.
Multifractal analysis is carried out assuming Taylor's hypothesis to hold, and the advection velocity needed to rescale the time dimension is assumed to be about 16 km/h. This assumption is verified by the analysis of autocorrelation functions along the x and y directions of "rainfall cubes" and along the time axis rescaled with the assumed advection velocity. In general, for the analyzed rainfall sequences, scaling is observed for spatial scales ranging from 4 to 256 km and for time scales from 15 min to 16 hours. However, in most cases a scaling break is identified for spatial scales between 4 and 8, corresponding to spatial dimensions of 16 km to 32 km. It is assumed that the scaling break occurrence at these particular scales in central Poland conditions could be at least partly explained by the rainfall mesoscale gap (on the edge of the meso-gamma, storm-scale, and meso-beta scale).
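The moment-scaling analysis behind a multifractal diagnosis can be sketched on a synthetic field. The example below builds a deterministic binomial cascade (an illustrative stand-in for a rainfall series, with an arbitrary weight p = 0.7, not a model fitted to POLRAD data) and recovers its second-moment scaling exponent K(2) from the slope of log-moments against log-scale.

```python
import numpy as np

def binomial_cascade(levels, p=0.7):
    """Deterministic binomial cascade density; each cell splits its mass
    into fractions p and 1-p, so the mean is conserved at every level."""
    x = np.array([1.0])
    for _ in range(levels):
        x = np.ravel(np.outer(x, [2 * p, 2 * (1 - p)]))
    return x

def moment_slope(field, q, exponents):
    """Slope of log E[(field aggregated over 2**k cells)**q] vs log scale."""
    n = len(field)
    log_s, log_m = [], []
    for k in exponents:
        s = 2 ** k
        blocks = field[: n - n % s].reshape(-1, s).mean(axis=1)
        log_s.append(np.log(s))
        log_m.append(np.log(np.mean(blocks ** q)))
    slope, _ = np.polyfit(log_s, log_m, 1)
    return slope

rain = binomial_cascade(10)                 # 1024 synthetic "rainfall" cells
K2 = -moment_slope(rain, 2, range(0, 8))
# For this cascade, theory gives K(2) = log2(((2p)**2 + (2-2p)**2)/2) ≈ 0.214.
print(round(K2, 3))
```

A scaling break like the one reported in the abstract would appear as a kink in the log-log plot, i.e. different slopes below and above the 16-32 km range, rather than the single clean line this toy cascade produces.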
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, N. J.; Marriage, T. A.; Appel, J. W.
2016-02-20
Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guide experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls described here, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limits the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.
NASA Astrophysics Data System (ADS)
Uritskaya, Olga Y.
2005-05-01
Results of fractal stability analysis of daily exchange rate fluctuations of more than 30 floating currencies for a 10-year period are presented. It is shown for the first time that small- and large-scale dynamical instabilities of national monetary systems correlate with deviations of the detrended fluctuation analysis (DFA) exponent from the value 1.5 predicted by the efficient market hypothesis. The observed dependence is used for classification of long-term stability of floating exchange rates as well as for revealing various forms of distortion of stable currency dynamics prior to large-scale crises. A normal range of DFA exponents consistent with crisis-free long-term exchange rate fluctuations is determined, and several typical scenarios of unstable currency dynamics with DFA exponents fluctuating beyond the normal range are identified. It is shown that monetary crashes are usually preceded by prolonged periods of abnormal (decreased or increased) DFA exponent, with the after-crash exponent tending to the value 1.5 indicating a more reliable exchange rate dynamics. Statistically significant regression relations (R=0.99, p<0.01) between duration and magnitude of currency crises and the degree of distortion of monofractal patterns of exchange rate dynamics are found. It is demonstrated that the parameters of these relations characterizing small- and large-scale crises are nearly equal, which implies a common instability mechanism underlying these events. The obtained dependences have been used as a basic ingredient of a forecasting technique which provided correct in-sample predictions of monetary crisis magnitude and duration over various time scales. The developed technique can be recommended for real-time monitoring of dynamical stability of floating exchange rate systems and creating advanced early-warning-system models for currency crisis prevention.
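A compact DFA-1 sketch of the exponent estimation described above, applied to a synthetic random-walk series standing in for an exchange rate (not real currency data). Under the convention used in the abstract, a crisis-free efficient-market series should score near 1.5.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis (DFA-1): integrate the series,
    linearly detrend it in windows of each scale, and fit the scaling
    of the RMS fluctuation against window size."""
    y = np.cumsum(x - np.mean(x))              # profile of the series
    log_s, log_f = [], []
    for s in scales:
        n_seg = len(y) // s
        t = np.arange(s)
        msq = 0.0
        for i in range(n_seg):                 # detrend each segment
            seg = y[i * s:(i + 1) * s]
            c = np.polyfit(t, seg, 1)
            msq += np.mean((seg - np.polyval(c, t)) ** 2)
        log_s.append(np.log(s))
        log_f.append(0.5 * np.log(msq / n_seg))
    alpha, _ = np.polyfit(log_s, log_f, 1)
    return alpha

rng = np.random.default_rng(1)
rate = np.cumsum(rng.standard_normal(4096))    # random-walk "exchange rate"
alpha = dfa_exponent(rate, [16, 32, 64, 128, 256])
# alpha should come out close to the efficient-market benchmark of 1.5.
```

The paper's diagnostic is the deviation of this exponent from 1.5 over a moving window; persistent values outside the normal range flag distorted, crisis-prone dynamics.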
Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2013-01-01
Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.
Turbulent Statistics from Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2012-01-01
Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.
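The second-order two-point space-time correlation from which phase (convection) velocities are extracted can be sketched on synthetic data: a frozen random pattern passing two probes separated by dx at speed Uc makes the cross-correlation peak at time lag dx/Uc. The values of dt, dx, and Uc below are invented for illustration, not PIV parameters from these measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, dx, Uc = 0.001, 0.02, 10.0          # s, m, m/s (assumed values)
shift = int(round(dx / Uc / dt))        # true lag = 2 samples
u = rng.standard_normal(5000)
u1, u2 = u[shift:], u[:-shift]          # probe 2 sees the pattern later

def peak_lag(a, b, max_lag):
    """Sample lag k maximizing the two-point correlation <a(t) b(t+k)>."""
    a = a - a.mean()
    b = b - b.mean()
    best_k, best_r = 0, -np.inf
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            r = np.dot(a[: len(a) - k], b[k:])
        else:
            r = np.dot(a[-k:], b[: len(b) + k])
        if r > best_r:
            best_k, best_r = k, r
    return best_k

k = peak_lag(u1, u2, 10)
Uc_est = dx / (k * dt)                  # recovered convection velocity
print(k, Uc_est)
```

In the papers' setting the same correlation is computed separately for each band-passed (mode-decomposed) component, giving scale-dependent phase velocities and length/time scales.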
Asynchronous adaptive time step in quantitative cellular automata modeling
Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan
2004-01-01
Background: The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, upon it, how to address the heavy time consumption of simulation. Results: Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing an asynchronous adaptive time step in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4-5 is achieved in the given example. Conclusions: Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed and adaptive time step is a practical solution in a cellular automata environment. PMID:15222901
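The adaptive-time-step idea can be sketched on a single "cell" ODE, dy/dt = -k*y, with step-doubling error control. This is an illustrative forward-Euler scheme with arbitrary constants, not the paper's integrator: the point is only that a quiescent cell takes few, large steps while an active cell is forced to take many small ones, which is where the speedup comes from when most cells are inactive.

```python
import math

def adaptive_euler(y0, k, t_end, tol=1e-4):
    """Integrate dy/dt = -k*y with adaptive forward Euler.
    Step-doubling: compare one full step against two half steps and
    accept the step only if they agree to within tol."""
    t, y, dt, steps = 0.0, y0, 0.01, 0
    while t < t_end:
        dt = min(dt, t_end - t)
        full = y + dt * (-k * y)             # one step of size dt
        h = y + 0.5 * dt * (-k * y)          # two half steps
        half = h + 0.5 * dt * (-k * h)
        if abs(full - half) <= tol:          # accept and grow the step
            t, y, steps = t + dt, half, steps + 1
            dt *= 1.5
        else:                                # reject and retry smaller
            dt *= 0.5
    return y, steps

slow, n_slow = adaptive_euler(1.0, 0.01, 10.0)   # quiescent cell
fast, n_fast = adaptive_euler(1.0, 50.0, 10.0)   # rapidly changing cell
# n_fast is far larger than n_slow: effort concentrates on active cells.
```

The asynchronous part of the paper's scheme is letting each cell carry its own clock like this, synchronizing only when cells interact.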
NASA Technical Reports Server (NTRS)
Squires, Kyle D.; Eaton, John K.
1991-01-01
Direct numerical simulation is used to study dispersion in decaying isotropic turbulence and homogeneous shear flow. Both Lagrangian and Eulerian data are presented allowing direct comparison, but at fairly low Reynolds number. The quantities presented include properties of the dispersion tensor, isoprobability contours of particle displacement, Lagrangian and Eulerian velocity autocorrelations and time scale ratios, and the eddy diffusivity tensor. The Lagrangian time microscale is found to be consistently larger than the Eulerian microscale, presumably due to the advection of the small scales by the large scales in the Eulerian reference frame.
Reverse engineering and analysis of large genome-scale gene networks
Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas
2013-01-01
Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large numbers of genes and gene expression datasets, more accurate models are compute intensive, limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) a B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software tool, Gene Network Analyzer (GeNA), for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
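A minimal stand-in for the MI-plus-permutation-testing step can be sketched as follows. The paper's estimator is B-spline based; a plain histogram estimator is used here instead, and all expression data are synthetic.

```python
# Histogram MI estimator plus a permutation test for edge significance.
import math, random

def mutual_information(x, y, bins=5):
    """Histogram estimate of the mutual information (in nats) of x and y."""
    n = len(x)
    def discretize(v):
        lo, hi = min(v), max(v)
        return [min(int((u - lo) / (hi - lo) * bins), bins - 1) for u in v]
    bx, by = discretize(x), discretize(y)
    pxy, px, py = {}, [0] * bins, [0] * bins
    for a, b in zip(bx, by):
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
        px[a] += 1
        py[b] += 1
    return sum(c / n * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def significant(x, y, n_perm=100, alpha=0.05, seed=1):
    """Is MI(x, y) larger than MI under randomly shuffled pairings?"""
    rng = random.Random(seed)
    observed = mutual_information(x, y)
    yy = list(y)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(yy)
        exceed += mutual_information(x, yy) >= observed
    return exceed / n_perm < alpha

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(300)]       # expression of "gene" x
y_dep = [v + rng.gauss(0, 0.3) for v in x]      # strongly dependent gene
y_ind = [rng.gauss(0, 1) for _ in range(300)]   # independent gene
```

TINGe applies this test to every gene pair in parallel; the B-spline formulation replaces the hard histogram bins with smooth, overlapping ones.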
The influence of the Atlantic Warm Pool on the Florida panhandle sea breeze
Misra, Vasubandhu; Moeller, Lauren; Stefanova, Lydia; Chan, Steven; O'Brien, James J.; Smith, Thomas J.; Plant, Nathaniel
2011-01-01
In this paper we examine the variations of the boreal summer season sea breeze circulation along the Florida panhandle coast from relatively high resolution (10 km) regional climate model integrations. The 23-year climatology (1979–2001) of the multidecadal dynamically downscaled simulations forced by the National Centers for Environmental Prediction–Department of Energy (NCEP-DOE) Reanalysis II at the lateral boundaries verifies quite well against the observed climatology. The variations at diurnal and interannual time scales are also well simulated with respect to the observations. We show from composite analyses made from these downscaled simulations that sea breezes in northwestern Florida are associated with changes in the size of the Atlantic Warm Pool (AWP) on interannual time scales. In large AWP years when the North Atlantic Subtropical High becomes weaker and moves further eastward relative to the small AWP years, a large part of the southeast U.S. including Florida comes under the influence of relatively strong anomalous low-level northerly flow and large-scale subsidence consistent with the theory of the Sverdrup balance. This tends to suppress the diurnal convection over the Florida panhandle coast in large AWP years. This study is also an illustration of the benefit of dynamic downscaling in understanding the low-frequency variations of the sea breeze.
Comparison of Fault Detection Algorithms for Real-time Diagnosis in Large-Scale System. Appendix E
NASA Technical Reports Server (NTRS)
Kirubarajan, Thiagalingam; Malepati, Venkat; Deb, Somnath; Ying, Jie
2001-01-01
In this paper, we present a review of different real-time capable algorithms to detect and isolate component failures in large-scale systems in the presence of inaccurate test results. A sequence of imperfect test results (as a row vector of 1's and 0's) is available to the algorithms. In this case, the problem is to recover the uncorrupted test result vector and match it to one of the rows in the test dictionary, which in turn will isolate the faults. In order to recover the uncorrupted test result vector, one needs the accuracy of each test. That is, its detection and false alarm probabilities are required. In this problem, their true values are not known and, therefore, have to be estimated online. Other major aspects of this problem are its large-scale nature and the real-time capability requirement. Test dictionaries of sizes up to 1000 x 1000 are to be handled. That is, results from 1000 tests measuring the state of 1000 components are available. However, at any time, only 10-20% of the test results are available. Then, the objective becomes real-time fault diagnosis using incomplete and inaccurate test results with online estimation of test accuracies. It should also be noted that the test accuracies can vary with time, so one needs a mechanism to update them after processing each test result vector. Using Qualtech's TEAMS-RT (system simulation and real-time diagnosis tool), we test the performances of 1) TEAMS-RT's built-in diagnosis algorithm, 2) Hamming distance based diagnosis, 3) Maximum Likelihood based diagnosis, and 4) Hidden Markov Model based diagnosis.
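The Hamming distance based diagnosis of item 2 can be sketched as follows, with missing test results simply ignored in the distance. The dictionary and fault signatures are toy values, and the test-accuracy weighting discussed in the text is omitted.

```python
# Hamming-distance diagnosis: pick the dictionary row closest to the observed
# test results, skipping tests whose results are unavailable (None).

def diagnose(observed, dictionary):
    """Index of the fault signature nearest to the observed result vector."""
    def distance(row):
        return sum(1 for o, r in zip(observed, row)
                   if o is not None and o != r)
    return min(range(len(dictionary)), key=lambda k: distance(dictionary[k]))

# Toy dictionary: 4 faults x 6 tests (1 = the test fails under that fault).
dictionary = [
    [1, 1, 0, 0, 0, 0],   # fault 0
    [0, 1, 1, 0, 0, 1],   # fault 1
    [1, 0, 0, 1, 1, 1],   # fault 2
    [0, 0, 1, 1, 0, 0],   # fault 3
]
# Fault 2 is present, test 1 never ran, and test 5 reported incorrectly:
observed = [1, None, 0, 1, 1, 0]
fault = diagnose(observed, dictionary)   # still isolates fault 2
```

The Maximum Likelihood variant weights each mismatch by the test's estimated detection and false-alarm probabilities instead of counting mismatches equally.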
NASA Astrophysics Data System (ADS)
Akanda, A. S.; Jutla, A. S.; Islam, S.
2009-12-01
Despite ravaging the continents through seven global pandemics in past centuries, the seasonal and interannual variability of cholera outbreaks remains a mystery. Previous studies have focused on the role of various environmental and climatic factors, but provided little or no predictive capability. Recent findings suggest a more prominent role of large scale hydroclimatic extremes - droughts and floods - and attempt to explain the seasonality and the unique dual cholera peaks in the Bengal Delta region of South Asia. We investigate the seasonal and interannual nature of cholera epidemiology in three geographically distinct locations within the region to identify the larger scale hydroclimatic controls that can set the ecological and environmental ‘stage’ for outbreaks and have significant memory on a seasonal scale. Here we show that two distinctly different, pre and post monsoon, cholera transmission mechanisms related to large scale climatic controls prevail in the region. An implication of our findings is that extreme climatic events such as prolonged droughts, record floods, and major cyclones may cause major disruption in the ecosystem and trigger large epidemics. We postulate that a quantitative understanding of the large-scale hydroclimatic controls and dominant processes with significant system memory will form the basis for forecasting such epidemic outbreaks. A multivariate regression method using these predictor variables to develop probabilistic forecasts of cholera outbreaks will be explored. Forecasts from such a system with a seasonal lead-time are likely to have measurable impact on early cholera detection and prevention efforts in endemic regions.
The Causal Connection Between Disc and Power-Law Variability in Hard State Black Hole X-Ray Binaries
NASA Technical Reports Server (NTRS)
Uttley, P.; Wilkinson, T.; Cassatella, P.; Wilms, J.; Pottschmidt, K.; Hanke, M.; Boeck, M.
2010-01-01
We use the XMM-Newton EPIC-pn instrument in timing mode to extend spectral time-lag studies of hard state black hole X-ray binaries into the soft X-ray band. We show that variations of the disc blackbody emission substantially lead variations in the power-law emission, by tenths of a second on variability time-scales of seconds or longer. The large lags cannot be explained by Compton scattering but are consistent with time-delays due to viscous propagation of mass accretion fluctuations in the disc. However, on time-scales less than a second the disc lags the power-law variations by a few ms, consistent with the disc variations being dominated by X-ray heating by the power-law, with the short lag corresponding to the light-travel time between the power-law emitting region and the disc. Our results indicate that instabilities in the accretion disc are responsible for continuum variability on time-scales of seconds or longer and probably also on shorter time-scales.
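The frequency-resolved lag measurement underlying these results can be sketched with a cross-spectrum: the phase of the cross-spectrum at a Fourier bin, divided by 2*pi*f, is the time lag at that frequency. The light curves below are synthetic single tones, not X-ray data, and the sign convention (positive = second series lags the first) is an assumption of this sketch.

```python
# Cross-spectral lag: the phase of conj(X_k) * Y_k, divided by 2*pi*f, gives
# the delay of y relative to x at that frequency (positive = y lags x here).
import cmath, math

def dft_bin(x, k):
    """Single bin of the discrete Fourier transform of x."""
    n = len(x)
    return sum(v * cmath.exp(-2j * math.pi * k * i / n) for i, v in enumerate(x))

def time_lag(x, y, dt, k):
    cross = dft_bin(x, k).conjugate() * dft_bin(y, k)
    freq = k / (len(x) * dt)
    return -cmath.phase(cross) / (2 * math.pi * freq)

dt, n = 0.1, 256
f = 8 / (n * dt)                                     # frequency on bin k = 8
disc = [math.sin(2 * math.pi * f * i * dt) for i in range(n)]
power_law = [math.sin(2 * math.pi * f * (i * dt - 0.4)) for i in range(n)]
lag = time_lag(disc, power_law, dt, 8)               # ~0.4 s: power law lags disc
```

Real analyses average the cross-spectrum over many light-curve segments and frequency bins before taking the phase, to beat down noise.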
Sigehuzi, Tomoo; Tanaka, Hajime
2004-11-01
We study phase-separation behavior of an off-symmetric fluid mixture induced by a "double temperature quench." We first quench a system into the unstable region. After a large phase-separated structure is formed, we again quench the system more deeply and follow the pattern-evolution process. The second quench makes the domains formed by the first quench unstable and leads to double phase separation; that is, small droplets are formed inside the large domains created by the first quench. The complex coarsening behavior of this hierarchic structure having two characteristic length scales is studied in detail by using the digital image analysis. We find three distinct time regimes in the time evolution of the structure factor of the system. In the first regime, small droplets coarsen with time inside large domains. There a large domain containing small droplets in it can be regarded as an isolated system. Later, however, the coarsening of small droplets stops when they start to interact via diffusion with the large domain containing them. Finally, small droplets disappear due to the Lifshitz-Slyozov mechanism. Thus the observed behavior can be explained by the crossover of the nature of a large domain from the isolated to the open system; this is a direct consequence of the existence of the two characteristic length scales.
A Hybrid, Large-Scale Wireless Sensor Network for Real-Time Acquisition and Tracking
2007-06-01
A multicolor, Quantum Well Infrared Photodetector (QWIP), step-stare, large-format Focal Plane Array (FPA) is proposed and evaluated through performance analysis.
ERIC Educational Resources Information Center
Veaner, Allen B.
Project BALLOTS is a large-scale library automation development project of the Stanford University Libraries which has demonstrated the feasibility of conducting on-line interactive searches of complex bibliographic files, with a large number of users working simultaneously in the same or different files. This report documents the continuing…
Wan, Shixiang; Zou, Quan
2017-01-01
Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing has resulted in a shortage of efficient ultra-large biological sequence alignment approaches capable of coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g. files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets (files of more than 1 GB) showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences. HAlign-II shows extremely high memory efficiency and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II with open-source codes and datasets was established at http://lab.malab.cn/soft/halign.
Lagrangian space consistency relation for large scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horn, Bart; Hui, Lam; Xiao, Xiao, E-mail: bh2478@columbia.edu, E-mail: lh399@columbia.edu, E-mail: xx2146@columbia.edu
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.
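For orientation, the Eulerian-space relation being recast has the schematic squeezed-limit form below. This is the standard form from the literature, not quoted from this abstract, and the normalization and sign conventions are assumptions of the sketch.

```latex
% Squeezed-limit consistency relation (schematic; sign conventions vary):
% the long-mode correlation is fixed entirely by displacement effects,
\lim_{\vec q \to 0}
  \frac{\langle \delta_{\vec q}(\tau)\,\delta_{\vec k_1}(\tau_1)\cdots
        \delta_{\vec k_N}(\tau_N)\rangle'}{P_\delta(q,\tau)}
  = -\sum_{a=1}^{N} \frac{D(\tau_a)}{D(\tau)}\,
    \frac{\vec k_a \cdot \vec q}{q^2}\,
    \langle \delta_{\vec k_1}(\tau_1)\cdots\delta_{\vec k_N}(\tau_N)\rangle'
% where D is the linear growth factor. At equal times the right-hand side
% vanishes by momentum conservation; the abstract's Lagrangian-space result
% is that the suitably normalized squeezed correlation vanishes in general.
```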
Seidl, Rupert; Müller, Jörg; Hothorn, Torsten; Bässler, Claus; Heurich, Marco; Kautz, Markus
2016-01-01
Summary 1. Unprecedented bark beetle outbreaks have been observed for a variety of forest ecosystems recently, and damage is expected to further intensify as a consequence of climate change. In Central Europe, the response of ecosystem management to increasing infestation risk has hitherto focused largely on the stand level, while the contingency of outbreak dynamics on large-scale drivers remains poorly understood. 2. To investigate how factors beyond the local scale contribute to the infestation risk from Ips typographus (Col., Scol.), we analysed drivers across seven orders of magnitude in scale (from 103 to 1010 m2) over a 23-year period, focusing on the Bavarian Forest National Park. Time-discrete hazard modelling was used to account for local factors and temporal dependencies. Subsequently, beta regression was applied to determine the influence of regional and landscape factors, the latter characterized by means of graph theory. 3. We found that in addition to stand variables, large-scale drivers also strongly influenced bark beetle infestation risk. Outbreak waves were closely related to landscape-scale connectedness of both host and beetle populations as well as to regional bark beetle infestation levels. Furthermore, regional summer drought was identified as an important trigger for infestation pulses. Large-scale synchrony and connectivity are thus key drivers of the recently observed bark beetle outbreak in the area. 4. Synthesis and applications. Our multiscale analysis provides evidence that the risk for biotic disturbances is highly dependent on drivers beyond the control of traditional stand-scale management. This finding highlights the importance of fostering the ability to cope with and recover from disturbance. It furthermore suggests that a stronger consideration of landscape and regional processes is needed to address changing disturbance regimes in ecosystem management. PMID:27041769
NASA Astrophysics Data System (ADS)
Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; Pask, John E.
2018-03-01
We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.
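The spatially localized quadrature at the heart of the method builds on the classical Clenshaw-Curtis rule, which can be sketched directly. The closed-form cosine-sum weight expression below is the standard one for even n; this is a generic quadrature sketch, not code from SQDFT.

```python
# Classical (n+1)-point Clenshaw-Curtis rule on [-1, 1], even n: Chebyshev
# nodes with the standard closed-form cosine-sum weights.
import math

def clenshaw_curtis(n):
    """Nodes and weights for (n+1)-point Clenshaw-Curtis quadrature (even n)."""
    nodes = [math.cos(k * math.pi / n) for k in range(n + 1)]
    weights = []
    for k in range(n + 1):
        c = 1.0 if k in (0, n) else 2.0
        s = sum((1.0 if j == n // 2 else 2.0) / (4 * j * j - 1)
                * math.cos(2 * j * k * math.pi / n)
                for j in range(1, n // 2 + 1))
        weights.append(c / n * (1.0 - s))
    return nodes, weights

nodes, weights = clenshaw_curtis(8)
integral = sum(w * math.exp(x) for x, w in zip(nodes, weights))
exact = math.e - 1.0 / math.e      # integral of exp over [-1, 1]
```

In the SQ method the integrand is a spectral quantity of the Hamiltonian rather than a scalar function, but the node-and-weight structure is the same.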
Micron-scale coherence in interphase chromatin dynamics
Zidovska, Alexandra; Weitz, David A.; Mitchison, Timothy J.
2013-01-01
Chromatin structure and dynamics control all aspects of DNA biology yet are poorly understood, especially at large length scales. We developed an approach, displacement correlation spectroscopy based on time-resolved image correlation analysis, to map chromatin dynamics simultaneously across the whole nucleus in cultured human cells. This method revealed that chromatin movement was coherent across large regions (4–5 µm) for several seconds. Regions of coherent motion extended beyond the boundaries of single-chromosome territories, suggesting elastic coupling of motion over length scales much larger than those of genes. These large-scale, coupled motions were ATP dependent and unidirectional for several seconds, perhaps accounting for ATP-dependent directed movement of single genes. Perturbation of major nuclear ATPases such as DNA polymerase, RNA polymerase II, and topoisomerase II eliminated micron-scale coherence, while causing rapid, local movement to increase; i.e., local motions accelerated but became uncoupled from their neighbors. We observe similar trends in chromatin dynamics upon inducing direct DNA damage; thus we hypothesize that this may be due to DNA damage responses that physically relax chromatin and block long-distance communication of forces. PMID:24019504
Implementation Strategies for Large-Scale Transport Simulations Using Time Domain Particle Tracking
NASA Astrophysics Data System (ADS)
Painter, S.; Cvetkovic, V.; Mancillas, J.; Selroos, J.
2008-12-01
Time domain particle tracking is an emerging alternative to the conventional random walk particle tracking algorithm. With time domain particle tracking, particles are moved from node to node on one-dimensional pathways defined by streamlines of the groundwater flow field or by discrete subsurface features. The time to complete each deterministic segment is sampled from residence time distributions that include the effects of advection, longitudinal dispersion, a variety of kinetically controlled retention (sorption) processes, linear transformation, and temporal changes in groundwater velocities and sorption parameters. The simulation results in a set of arrival times at a monitoring location that can be post-processed with a kernel method to construct mass discharge (breakthrough) versus time. Implementation strategies differ for discrete flow (fractured media) systems and continuous porous media systems. The implementation strategy also depends on the scale at which hydraulic property heterogeneity is represented in the supporting flow model. For flow models that explicitly represent discrete features (e.g., discrete fracture networks), the sampling of residence times along segments is conceptually straightforward. For continuous porous media, such sampling needs to be related to the Lagrangian velocity field. Analytical or semi-analytical methods may be used to approximate the Lagrangian segment velocity distributions in aquifers with low-to-moderate variability, thereby capturing transport effects of subgrid velocity variability. If variability in hydraulic properties is large, however, Lagrangian velocity distributions are difficult to characterize and numerical simulations are required; in particular, numerical simulations are likely to be required for estimating the velocity integral scale as a basis for advective segment distributions. Aquifers with evolving heterogeneity scales present additional challenges. 
Large-scale simulations of radionuclide transport at two potential repository sites for high-level radioactive waste will be used to demonstrate the potential of the method. The simulations considered approximately 1000 source locations, multiple radionuclides with contrasting sorption properties, and abrupt changes in groundwater velocity associated with future glacial scenarios. Transport pathways linking the source locations to the accessible environment were extracted from discrete feature flow models that include detailed representations of the repository construction (tunnels, shafts, and emplacement boreholes) embedded in stochastically generated fracture networks. Acknowledgment The authors are grateful to SwRI Advisory Committee for Research, the Swedish Nuclear Fuel and Waste Management Company, and Posiva Oy for financial support.
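The segment-sampling idea described above can be sketched as follows; the shifted-exponential residence-time distribution and the kernel bandwidth are illustrative assumptions, not the advective-dispersive distributions used in the actual simulations.

```python
# Each particle accumulates one residence-time draw per pathway segment
# (advective delay plus an assumed exponential dispersive spread); arrival
# times are then kernel-smoothed into a breakthrough curve.
import math, random

def sample_arrival(segments, rng):
    """One particle's arrival time: a residence-time draw per segment."""
    return sum(t_adv + rng.expovariate(1.0 / spread)
               for t_adv, spread in segments)

def breakthrough(arrivals, t, bandwidth):
    """Gaussian-kernel density estimate of the arrival times at time t."""
    n = len(arrivals)
    return sum(math.exp(-0.5 * ((t - a) / bandwidth) ** 2)
               for a in arrivals) / (n * bandwidth * math.sqrt(2 * math.pi))

rng = random.Random(42)
segments = [(2.0, 0.5), (5.0, 1.0), (1.0, 0.2)]   # (advective time, spread)
arrivals = [sample_arrival(segments, rng) for _ in range(5000)]
mean_arrival = sum(arrivals) / len(arrivals)       # about 8.0 + 1.7 = 9.7
```

Retention, transformation, and time-varying velocities enter by changing the per-segment distribution, not the overall particle-summing structure.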
Selecting a proper design period for heliostat field layout optimization using Campo code
NASA Astrophysics Data System (ADS)
Saghafifar, Mohammad; Gadalla, Mohamed
2016-09-01
In this paper, different approaches are considered to calculate the cosine factor which is utilized in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Furthermore, three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined by considering instantaneous and time-averaged approaches. For the instantaneous method, different design days and design hours are selected. For the time-averaged method, daily, monthly, seasonal, and yearly time-averaged cosine factor determinations are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization. Consequently, it is proposed to consider the design period as the second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important. Therefore, it is more reliable to select one of the recommended time-averaged methods to optimize the field layout. The optimum annual weighted efficiencies for the heliostat fields (small, medium, and large) containing 350, 1460, and 3450 mirrors are 66.14%, 60.87%, and 54.04%, respectively.
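The cosine factor being averaged can be sketched from first principles: the mirror normal bisects the sun direction and the heliostat-to-receiver direction, so the factor is the cosine of half the angle between them; time averaging just replaces the single design-instant value with a mean over the chosen period. The geometry and sun positions below are arbitrary illustrative values.

```python
# The mirror normal bisects the sun direction and the heliostat-to-receiver
# direction, so the cosine loss factor is cos(theta_i), with 2*theta_i the
# angle between the two unit vectors.
import math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cosine_factor(sun, to_receiver):
    """cos(incidence angle); both arguments must be unit 3-vectors."""
    dot = sum(s * r for s, r in zip(sun, to_receiver))
    return math.cos(0.5 * math.acos(max(-1.0, min(1.0, dot))))

to_receiver = unit((0.0, 1.0, 1.0))     # heliostat looking toward the tower

# Sun directions over a few hours around the design instant (arbitrary values)
suns = [unit((math.sin(h), math.cos(h), 1.0)) for h in (-0.5, -0.25, 0.0, 0.25, 0.5)]
instantaneous = cosine_factor(suns[2], to_receiver)   # design instant only
time_averaged = sum(cosine_factor(s, to_receiver) for s in suns) / len(suns)
```

A layout optimized against the instantaneous value can overfit one sun position; the averaged factor trades a little peak performance for better annual output.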
A functional model for characterizing long-distance movement behaviour
Buderman, Frances E.; Hooten, Mevin B.; Ivan, Jacob S.; Shenk, Tanya M.
2016-01-01
Advancements in wildlife telemetry techniques have made it possible to collect large data sets of highly accurate animal locations at a fine temporal resolution. These data sets have prompted the development of a number of statistical methodologies for modelling animal movement. Telemetry data sets are often collected for purposes other than fine-scale movement analysis. These data sets may differ substantially from those that are collected with technologies suitable for fine-scale movement modelling and may consist of locations that are irregular in time, are temporally coarse or have large measurement error. These data sets are time-consuming and costly to collect but may still provide valuable information about movement behaviour. We developed a Bayesian movement model that accounts for error from multiple data sources as well as movement behaviour at different temporal scales. The Bayesian framework allows us to calculate derived quantities that describe temporally varying movement behaviour, such as residence time, speed and persistence in direction. The model is flexible, easy to implement and computationally efficient. We apply this model to data from Colorado Canada lynx (Lynx canadensis) and use derived quantities to identify changes in movement behaviour.
Time-Resolved Small-Angle X-ray Scattering Reveals Millisecond Transitions of a DNA Origami Switch.
Bruetzel, Linda K; Walker, Philipp U; Gerling, Thomas; Dietz, Hendrik; Lipfert, Jan
2018-04-11
Self-assembled DNA structures enable creation of specific shapes at the nanometer-micrometer scale with molecular resolution. The construction of functional DNA assemblies will likely require dynamic structures that can undergo controllable conformational changes. DNA devices based on shape complementary stacking interactions have been demonstrated to undergo reversible conformational changes triggered by changes in ionic environment or temperature. An experimentally unexplored aspect is how quickly conformational transitions of large synthetic DNA origami structures can actually occur. Here, we use time-resolved small-angle X-ray scattering to monitor large-scale conformational transitions of a two-state DNA origami switch in free solution. We show that the DNA device switches from its open to its closed conformation upon addition of MgCl2 in milliseconds, which is close to the theoretical diffusive speed limit. In contrast, measurements of the dimerization of DNA origami bricks reveal much slower and concentration-dependent assembly kinetics. DNA brick dimerization occurs on a time scale of minutes to hours suggesting that the kinetics depend on local concentration and molecular alignment.
Large scale structure formation of the normal branch in the DGP brane world model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Yong-Seon
2008-06-15
In this paper, we study the large scale structure formation of the normal branch in the DGP model (Dvali, Gabadadze, and Porrati brane world model) by applying the scaling method developed by Sawicki, Song, and Hu for solving the coupled perturbed equations of motion of on-brane and off-brane. There is a detectable departure of the perturbed gravitational potential from the cold dark matter model with vacuum energy even at the minimal deviation of the effective equation of state w_eff below -1. The modified perturbed gravitational potential weakens the integrated Sachs-Wolfe effect, which is strengthened in the self-accelerating branch DGP model. Additionally, we discuss the validity of the scaling solution in the de Sitter limit at late times.
Self-affinity in the dengue fever time series
NASA Astrophysics Data System (ADS)
Azevedo, S. M.; Saba, H.; Miranda, J. G. V.; Filho, A. S. Nascimento; Moret, M. A.
2016-06-01
Dengue is a complex public health problem that is common in tropical and subtropical regions. This disease has risen substantially in the last three decades, and the physical symptoms depict the self-affine behavior of the occurrences of reported dengue cases in Bahia, Brazil. This study uses detrended fluctuation analysis (DFA) to verify the scale behavior in a time series of dengue cases and to evaluate the long-range correlations that are characterized by the power law α exponent for different cities in Bahia, Brazil. The scaling exponent (α) presents different long-range correlations, i.e. uncorrelated, anti-persistent, persistent and diffusive behaviors. The long-range correlations highlight the complex behavior of the time series of this disease. The findings show that there are two distinct types of scale behavior. In the first behavior, the time series presents a persistent α exponent for a one-month period. For large periods, the time series signal approaches subdiffusive behavior. The hypothesis of the long-range correlations in the time series of the occurrences of reported dengue cases was validated. The observed self-affinity is useful as a forecasting tool for future periods through extrapolation of the α exponent behavior. This complex system has a higher predictability in a relatively short time (approximately one month), and it suggests a new tool in epidemiological control strategies. However, predictions for large periods using DFA are hidden by the subdiffusive behavior.
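A minimal sketch of first-order DFA, the procedure behind the alpha exponent discussed above: integrate the mean-subtracted series, linearly detrend it in windows of size s, and fit the slope of log F(s) versus log s. Window sizes and the test series are illustrative; no dengue data are used.

```python
# DFA of order 1: integrate the mean-subtracted series, linearly detrend it
# in non-overlapping windows of size s, and fit log F(s) against log s.
import math, random

def dfa_alpha(series, scales):
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for v in series:
        acc += v - mean
        profile.append(acc)
    logs, logf = [], []
    for s in scales:
        n_win = len(profile) // s
        xs = list(range(s))
        xm = (s - 1) / 2.0
        sxx = sum((x - xm) ** 2 for x in xs)
        sq = 0.0
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            ym = sum(seg) / s
            slope = sum((x - xm) * (y - ym) for x, y in zip(xs, seg)) / sxx
            sq += sum((y - (ym + slope * (x - xm))) ** 2
                      for x, y in zip(xs, seg))
        logs.append(math.log(s))
        logf.append(0.5 * math.log(sq / (n_win * s)))
    lm, fm = sum(logs) / len(logs), sum(logf) / len(logf)
    return (sum((l - lm) * (f - fm) for l, f in zip(logs, logf)) /
            sum((l - lm) ** 2 for l in logs))

rng = random.Random(7)
white = [rng.gauss(0, 1) for _ in range(4000)]
alpha = dfa_alpha(white, [8, 16, 32, 64, 128])   # near 0.5: uncorrelated noise
```

Persistent series give alpha above 0.5 and anti-persistent ones below, which is exactly the classification the study applies city by city.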
Performance Assessment of a Large Scale Pulsejet- Driven Ejector System
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Litke, Paul J.; Schauer, Frederick R.; Bradley, Royce P.; Hoke, John L.
2006-01-01
Unsteady thrust augmentation was measured on a large scale driver/ejector system. A 72 in. long, 6.5 in. diameter, 100 lb(sub f) pulsejet was tested with a series of straight, cylindrical ejectors of varying length, and diameter. A tapered ejector configuration of varying length was also tested. The objectives of the testing were to determine the dimensions of the ejectors which maximize thrust augmentation, and to compare the dimensions and augmentation levels so obtained with those of other, similarly maximized, but smaller scale systems on which much of the recent unsteady ejector thrust augmentation studies have been performed. An augmentation level of 1.71 was achieved with the cylindrical ejector configuration and 1.81 with the tapered ejector configuration. These levels are consistent with, but slightly lower than the highest levels achieved with the smaller systems. The ejector diameter yielding maximum augmentation was 2.46 times the diameter of the pulsejet. This ratio closely matches those of the small scale experiments. For the straight ejector, the length yielding maximum augmentation was 10 times the diameter of the pulsejet. This was also nearly the same as the small scale experiments. Testing procedures are described, as are the parametric variations in ejector geometry. Results are discussed in terms of their implications for general scaling of pulsed thrust ejector systems
NASA Astrophysics Data System (ADS)
Tenney, Andrew; Coleman, Thomas; Berry, Matthew; Magstadt, Andy; Gogineni, Sivaram; Kiel, Barry
2015-11-01
Shock cells and large scale structures present in a three-stream non-axisymmetric jet are studied both qualitatively and quantitatively. Large Eddy Simulation is utilized first to gain an understanding of the underlying physics of the flow and direct the focus of the physical experiment. The flow in the experiment is visualized using long exposure Schlieren photography, with time resolved Schlieren photography also a possibility. Velocity derivative diagnostics calculated from the grey-scale Schlieren images are analyzed using continuous wavelet transforms. Pressure signals are also captured in the near field of the jet to correlate with the velocity derivative diagnostics and assist in unraveling this complex flow. We acknowledge the support of AFRL through an SBIR grant.
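The continuous-wavelet-transform step can be sketched on a 1-D signal, using a Ricker (Mexican-hat) wavelet as an assumed analyzing wavelet: the scale with the largest response at a feature reveals its dominant size, much as the transform picks out shock-cell spacing in the image diagnostics. The signal here is synthetic, not Schlieren data.

```python
# 1-D continuous wavelet transform with a Ricker (Mexican-hat) wavelet: the
# scale with the largest response at a feature reveals its dominant size.
import math

def ricker(t, s):
    """Unit-energy Ricker wavelet of width s evaluated at offset t."""
    a = t / s
    return (2.0 / (math.sqrt(3.0 * s) * math.pi ** 0.25)
            * (1.0 - a * a) * math.exp(-0.5 * a * a))

def cwt_coeff(signal, scale, position):
    """One CWT coefficient: correlate the signal with a centred wavelet."""
    return sum(v * ricker(i - position, scale) for i, v in enumerate(signal))

signal = [math.sin(2 * math.pi * i / 20) for i in range(200)]   # 20-sample period
center = 105                      # a crest of the sinusoid
best = max(range(2, 30), key=lambda s: abs(cwt_coeff(signal, s, center)))
```

For a unit-energy Ricker wavelet the best-matching scale is roughly the signal period divided by four, so a 20-sample period peaks near scale 5.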
Large-scale semidefinite programming for many-electron quantum mechanics.
Mazziotti, David A
2011-02-25
The energy of a many-electron quantum system can be approximated by a constrained optimization of the two-electron reduced density matrix (2-RDM) that is solvable in polynomial time by semidefinite programming (SDP). Here we develop a SDP method for computing strongly correlated 2-RDMs that is 10-20 times faster than previous methods [D. A. Mazziotti, Phys. Rev. Lett. 93, 213001 (2004)]. We illustrate with (i) the dissociation of N2 and (ii) the metal-to-insulator transition of H50. For H50 the SDP problem has 9.4×10^6 variables. This advance also expands the feasibility of large-scale applications in quantum information, control, statistics, and economics. © 2011 American Physical Society
HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.
Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J
2016-06-03
Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .
Step scaling and the Yang-Mills gradient flow
NASA Astrophysics Data System (ADS)
Lüscher, Martin
2014-06-01
The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0, T] and all fields satisfy Dirichlet boundary conditions at times 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well-localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jang, Seogjoo; Hoyer, Stephan; Fleming, Graham
2014-10-31
A generalized master equation (GME) governing quantum evolution of modular exciton density (MED) is derived for large scale light harvesting systems composed of weakly interacting modules of multiple chromophores. The GME-MED offers a practical framework to incorporate real time coherent quantum dynamics calculations of small length scales into dynamics over large length scales, and also provides a non-Markovian generalization and rigorous derivation of the Pauli master equation employing multichromophoric Förster resonance energy transfer rates. A test of the GME-MED for four sites of the Fenna-Matthews-Olson complex demonstrates how coherent dynamics of excitonic populations over coupled chromophores can be accurately described by transitions between subgroups (modules) of delocalized excitons. Application of the GME-MED to the exciton dynamics between a pair of light harvesting complexes in purple bacteria demonstrates its promise as a computationally efficient tool to investigate large scale exciton dynamics in complex environments.
A 100,000 Scale Factor Radar Range.
Blanche, Pierre-Alexandre; Neifeld, Mark; Peyghambarian, Nasser
2017-12-19
The radar cross section of an object is an important electromagnetic property that is often measured in anechoic chambers. However, for very large and complex structures such as ships or sea and land clutters, this common approach is not practical. The use of computer simulations is also not viable, since it would take many years of computational time to model and predict the radar characteristics of such large objects. We have now devised a new scaling technique to overcome these difficulties and make accurate measurements of the radar cross section of large items. In this article we demonstrate that by reducing the scale of the model by a factor of 100,000, and using a near-infrared wavelength, the radar cross section can be determined in a tabletop setup. The accuracy of the method is compared to simulations, and an example of measurement is provided on a 1 mm highly detailed model of a ship. The advantages of this scaling approach are its versatility and the possibility of performing fast, convenient, and inexpensive measurements.
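The scaling arithmetic behind the tabletop range can be sketched as follows (the specific radar band chosen here is an illustrative assumption, not the paper's exact parameters): shrinking the target by a factor s requires shrinking the wavelength by the same factor to preserve the electrical size of the scattering problem.

```python
# Illustrative wavelength-scaling sketch for a 100,000:1 radar range.
C = 299_792_458.0  # speed of light, m/s

def scaled_wavelength(full_scale_freq_hz, scale_factor):
    # Full-scale wavelength divided by the geometric scale factor.
    return C / full_scale_freq_hz / scale_factor

# Example: a 2 GHz (L-band) full-scale radar, scaled down 100,000x,
# maps to a wavelength of about 1.5 micrometers -- the near infrared.
lam = scaled_wavelength(2e9, 100_000)
```

A 10 m ship feature at this scale becomes a 0.1 mm detail on the 1 mm class model, which is why highly detailed miniature models and optical-grade instrumentation are required.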
Fire extinguishing tests -80 with methyl alcohol gasoline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holmstedt, G.; Ryderman, A.; Carlsson, B.
1980-01-01
Large scale tests and laboratory experiments were carried out to estimate the extinguishing effectiveness of three alcohol-resistant aqueous film forming foams (AFFF), two alcohol-resistant fluoroprotein foams, and two detergent foams in various pool fires: gasoline, isopropyl alcohol, acetone, methyl-ethyl ketone, methyl alcohol, and M15 (a gasoline, methyl alcohol, isobutene mixture). The scaling down of large scale tests for developing a reliable laboratory method was especially examined. The tests were performed with semidirect foam application, in pools of 50, 11, 4, 0.6, and 0.25 sq m. Burning time, temperature distribution in the liquid, and thermal radiation were determined. An M15 fire can be extinguished with a detergent foam, but it is impossible to extinguish fires in polar solvents, such as methyl alcohol, acetone, and isopropyl alcohol, with detergent foams. AFFF give the best results, and performance with small pools can hardly be correlated with results from large scale fires.
Ubiquitous and Continuous Propagating Disturbances in the Solar Corona
NASA Astrophysics Data System (ADS)
Morgan, Huw; Hutton, Joseph
2018-02-01
A new processing method applied to Atmospheric Imaging Assembly/Solar Dynamics Observatory observations reveals continuous propagating faint motions throughout the corona. The amplitudes are small, typically 2% of the background intensity. An hour's data are processed from four AIA channels for a region near disk center, and the motions are characterized using an optical flow method. The motions trace the underlying large-scale magnetic field. The motion vector field describes large-scale coherent regions that tend to converge at narrow corridors. Large-scale vortices can also be seen. The hotter channels have larger-scale regions of coherent motion compared to the cooler channels, interpreted as the typical length of magnetic loops at different heights. Regions of low mean and high time variance in velocity are where the dominant motion component is along the line of sight as a result of a largely vertical magnetic field. The mean apparent magnitude of the optical velocities is a few tens of km s^-1, with different distributions in different channels. Over time, the velocities vary smoothly from a few km s^-1 to 100 km s^-1 or higher, on timescales of minutes. A clear bias of a few km s^-1 toward positive x-velocities is due to solar rotation and may be used as calibration in future work. All regions of the low corona thus experience a continuous stream of propagating disturbances at the limit of both spatial resolution and signal level. The method provides a powerful new diagnostic tool for tracing the magnetic field, and for probing motions at sub-pixel scales, with important implications for models of heating and of the magnetic field.
Scale and time dependence of serial correlations in word-length time series of written texts
NASA Astrophysics Data System (ADS)
Rodriguez, E.; Aguilar-Cornejo, M.; Femat, R.; Alvarez-Ramirez, J.
2014-11-01
This work considered the quantitative analysis of large written texts. To this end, the text was converted into a time series by taking the sequence of word lengths. Detrended fluctuation analysis (DFA) was used to characterize long-range serial correlations of the time series. The DFA was implemented within a rolling-window framework to estimate the variations of correlation strength, quantified in terms of the scaling exponent, along the text. Also, a filtering derivative was used to compute the dependence of the scaling exponent on the scale. The analysis was applied to three famous English-written literary narratives; namely, Alice in Wonderland (by Lewis Carroll), Dracula (by Bram Stoker) and Sense and Sensibility (by Jane Austen). The results showed that high correlations appear at scales of about 50-200 words, suggesting that at these scales the text exhibits its strongest coherence. The scaling exponent was not constant along the text, showing important variations with apparently cyclical behavior. An interesting coincidence between the scaling exponent variations and changes in narrative units (e.g., chapters) was found. This suggests that the scaling exponent obtained from the DFA is able to detect changes in narration structure as expressed by the usage of words of different lengths.
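The core of the pipeline described above can be sketched in a few lines (a minimal first-order DFA; the rolling window and filtering derivative are omitted, and function names are illustrative, not the authors'):

```python
import numpy as np

def word_length_series(text):
    # Convert a text into the sequence of its word lengths.
    return np.array([len(w) for w in text.split()], dtype=float)

def dfa_exponent(x, scales):
    # First-order detrended fluctuation analysis.
    y = np.cumsum(x - x.mean())                 # integrated profile
    fluct = []
    for s in scales:
        n = len(y) // s
        segments = y[: n * s].reshape(n, s)
        t = np.arange(s)
        # Detrend each window with a linear fit; collect residuals.
        res = [seg - np.polyval(np.polyfit(t, seg, 1), t)
               for seg in segments]
        fluct.append(np.sqrt(np.mean(np.square(res))))
    # Scaling exponent = slope of log F(s) versus log s.
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]
```

For an uncorrelated series the exponent is near 0.5; values above 0.5 indicate the persistent long-range correlations reported for the narrative texts.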
Xu, Yinlin; Ma, Qianli D Y; Schmitt, Daniel T; Bernaola-Galván, Pedro; Ivanov, Plamen Ch
2011-11-01
We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences.
Xu, Yinlin; Ma, Qianli D.Y.; Schmitt, Daniel T.; Bernaola-Galván, Pedro; Ivanov, Plamen Ch.
2014-01-01
We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences. PMID:25392599
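Two of the coarse-graining schemes compared above can be sketched as follows (a minimal illustration of the Floor method in magnitude and of averaging in time; function names are illustrative):

```python
import numpy as np

def floor_coarse_grain(x, delta):
    # Floor method: quantize each amplitude to the lower edge of its
    # partition interval of width delta.
    return delta * np.floor(x / delta)

def time_coarse_grain(x, w):
    # Coarse-graining in time: average non-overlapping windows of length w,
    # discarding any trailing remainder.
    n = len(x) // w
    return x[: n * w].reshape(n, w).mean(axis=1)
```

Feeding the output of either function back into a DFA computation is how one measures, as in the study above, whether the quantization has introduced a crossover to random behavior at some range of scales.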
Large-scale phenomena, chapter 3, part D
NASA Technical Reports Server (NTRS)
1975-01-01
Oceanic phenomena with horizontal scales from approximately 100 km up to the widths of the oceans themselves are examined. Data include: shape of geoid, quasi-stationary anomalies due to spatial variations in sea density and steady current systems, and the time dependent variations due to tidal and meteorological forces and to varying currents.
ERIC Educational Resources Information Center
Guth, Douglas J.
2017-01-01
A community college's success hinges in large part on the effectiveness of its teaching faculty, no more so than in times of major organizational change. However, any large-scale foundational shift requires institutional buy-in, with the onus on leadership to create an environment where everyone is working together toward the same endpoint.…
Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan C.; van Schaik, André
2015-01-01
We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform. PMID:26041985
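The pairwise STDP rule that such an adaptor evaluates from pre-/post-spike arrival times can be sketched as follows (a generic textbook form with illustrative parameter values, not the paper's hardware implementation):

```python
import math

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    # Pairwise STDP weight change as a function of spike-timing
    # difference dt = t_post - t_pre (in ms):
    # pre-before-post (dt > 0) potentiates, post-before-pre depresses,
    # with exponentially decaying magnitude in |dt|.
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0
```

Because the rule depends only on the two arrival times, it is a natural fit for the time-multiplexed adaptor array described above: each adaptor needs only the spike timestamps assigned to it, not the full network state.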
As a Matter of Force—Systematic Biases in Idealized Turbulence Simulations
NASA Astrophysics Data System (ADS)
Grete, Philipp; O’Shea, Brian W.; Beckwith, Kris
2018-05-01
Many astrophysical systems encompass very large dynamical ranges in space and time, which are not accessible by direct numerical simulations. Thus, idealized subvolumes are often used to study small-scale effects including the dynamics of turbulence. These turbulent boxes require an artificial driving in order to mimic energy injection from large-scale processes. In this Letter, we show and quantify how the autocorrelation time of the driving and its normalization systematically change the properties of an isothermal compressible magnetohydrodynamic flow in the sub- and supersonic regime and affect astrophysical observations such as Faraday rotation. For example, we find that δ-in-time forcing with a constant energy injection leads to a steeper slope in the kinetic energy spectrum and less-efficient small-scale dynamo action. In general, we show that shorter autocorrelation times require more power in the acceleration field, which results in more power in compressive modes that weaken the anticorrelation between density and magnetic field strength. Thus, derived observables, such as the line-of-sight (LOS) magnetic field from rotation measures, are systematically biased by the driving mechanism. We argue that δ-in-time forcing is unrealistic and numerically unresolved, and conclude that special care needs to be taken in interpreting observational results based on the use of idealized simulations.
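The role of the forcing autocorrelation time can be made concrete with the standard Ornstein-Uhlenbeck update often used to drive turbulent boxes (a generic sketch under the usual discretization, not the authors' exact scheme; δ-in-time forcing corresponds to the limit t_corr → 0):

```python
import numpy as np

def ou_step(accel, dt, t_corr, sigma, rng):
    # Exact one-step update of an Ornstein-Uhlenbeck process with
    # correlation time t_corr and stationary standard deviation sigma:
    # successive acceleration fields decorrelate as exp(-dt / t_corr).
    f = np.exp(-dt / t_corr)
    noise = rng.standard_normal(accel.shape)
    return f * accel + sigma * np.sqrt(1.0 - f * f) * noise
```

Shorter t_corr makes successive acceleration fields less correlated, so sustaining a fixed energy injection rate requires more power in the acceleration field, which is the source of the compressive-mode bias quantified in the Letter.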
Dissipative structures in magnetorotational turbulence
NASA Astrophysics Data System (ADS)
Ross, Johnathan; Latter, Henrik N.
2018-07-01
Via the process of accretion, magnetorotational turbulence removes energy from a disc's orbital motion and transforms it into heat. Turbulent heating is far from uniform and is usually concentrated in small regions of intense dissipation, characterized by abrupt magnetic reconnection and higher temperatures. These regions are of interest because they might generate non-thermal emission, in the form of flares and energetic particles, or thermally process solids in protoplanetary discs. Moreover, the nature of the dissipation bears on the fundamental dynamics of the magnetorotational instability (MRI) itself: local simulations indicate that the large-scale properties of the turbulence (e.g. saturation levels and the stress-pressure relationship) depend on the short dissipative scales. In this paper we undertake a numerical study of how the MRI dissipates and the small-scale dissipative structures it employs to do so. We use the Godunov code RAMSES and unstratified compressible shearing boxes. Our simulations reveal that dissipation is concentrated in ribbons of strong magnetic reconnection that are significantly elongated in azimuth, up to a scale height. Dissipative structures are hence meso-scale objects, and potentially provide a route by which large scales and small scales interact. We go on to show how these ribbons evolve over time - forming, merging, breaking apart, and disappearing. Finally, we reveal important couplings between the large-scale density waves generated by the MRI and the small-scale structures, which may illuminate the stress-pressure relationship in MRI turbulence.
NASA Astrophysics Data System (ADS)
Bryant, Gerald
2015-04-01
Large-scale soft-sediment deformation features in the Navajo Sandstone have been a topic of interest for nearly 40 years, ever since they were first explored as a criterion for discriminating between marine and continental processes in the depositional environment. For much of this time, evidence for large-scale sediment displacements was commonly attributed to processes of mass wasting, that is, gravity-driven movements of surficial sand. These slope failures were attributed to the inherent susceptibility of dune sand responding to environmental triggers such as earthquakes, floods, impacts, and the differential loading associated with dune topography. During the last decade, a new wave of research has focused on the event significance of deformation features in more detail, revealing a broad diversity of large-scale deformation morphologies. This research has led to a better appreciation of subsurface dynamics in the early Jurassic deformation events recorded in the Navajo Sandstone, including the important role of intrastratal sediment flow. This report documents two illustrative examples of large-scale sediment displacements represented in extensive outcrops of the Navajo Sandstone along the Utah/Arizona border. Architectural relationships in these outcrops provide definitive constraints that enable the recognition of a large-scale sediment outflow at one location and an equally large-scale subsurface flow at the other. At both sites, evidence for associated processes of liquefaction appears at depths of at least 40 m below the original depositional surface, which is nearly an order of magnitude greater than has commonly been reported from modern settings.
The surficial, mass flow feature displays attributes that are consistent with much smaller-scale sediment eruptions (sand volcanoes) that are often documented from modern earthquake zones, including the development of hydraulic pressure from localized, subsurface liquefaction and the subsequent escape of fluidized sand toward the unconfined conditions of the surface. The origin of the forces that produced the lateral, subsurface movement of a large body of sand at the other site is not readily apparent. The various constraints on modeling the generation of the lateral force required to produce the observed displacement are considered here, along with photodocumentation of key outcrop relationships.